
AI is no longer a SaaS feature. It's the foundation. Here's what that actually requires.


Two years ago, SaaS companies added AI as a feature. A "Summarize" button here, a "Generate" button there, maybe a chatbot in the support widget. These were additive capabilities. The product worked fine without them, and they worked fine as isolated features.

That era is ending. The SaaS products gaining traction in 2026 aren't bolting AI features onto existing workflows. They're rebuilding workflows around AI capabilities. Bain's 2025 technology report frames this as the shift from "AI-enabled SaaS" to "AI-native SaaS," and the distinction matters because it creates infrastructure problems that the feature era never had to deal with.

The feature era was easy

Adding a summarization button to a dashboard is straightforward. Take the text on the screen, send it to an LLM API, display the result. The AI capability is self-contained. It doesn't need to understand the rest of your system. It doesn't need to call other APIs. It doesn't need graceful failure handling because a bad summary is annoying, not catastrophic.

Feature-level AI has three nice properties: it's stateless (each call is independent), it's read-only (it consumes data but doesn't change system state), and it's fault-tolerant (if the LLM returns garbage, the user just ignores it).
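The whole feature-era pattern fits in a few lines. This is a sketch, with `call_llm` standing in for whichever completion API you use; note how all three properties fall out of the shape of the code:

```python
# Feature-level AI: stateless, read-only, fault-tolerant.
# `call_llm` is a placeholder for any LLM completion API.

def summarize(text: str, call_llm) -> str:
    """One self-contained call: no state carried, no system writes."""
    try:
        return call_llm(f"Summarize: {text}")
    except Exception:
        # A bad or missing summary is annoying, not catastrophic:
        # degrade to an empty result instead of breaking the page.
        return ""
```

Each call is independent, nothing in your system changes, and failure is absorbed at the call site.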

Foundation-level AI has none of these.

What foundation-level AI demands

When AI becomes the foundation, meaning your product's core value depends on agents executing multi-step workflows across your system, you hit requirements that feature-level integrations never had to think about.

Stateful workflow execution. An AI agent processing a customer refund needs to maintain state across seven API calls: verify the order, check eligibility, calculate the amount, reverse the payment, update inventory, send confirmation, log for compliance. Each step depends on the previous step's output. Lose state between steps and you get a payment reversal without an inventory update. Or a confirmation email for a refund that actually failed. Neither is acceptable.

This is a different world from "send text to LLM, display result." It requires orchestration infrastructure: workflow dependency resolution, parameter passing between steps, state management across API boundaries. As we covered in agent memory as a first-class primitive, this structural knowledge about how APIs connect is exactly what most agent systems lack.
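The core mechanic is threading state through an ordered step list so each step can read earlier outputs. Here is a minimal sketch; the refund steps and their field names are illustrative assumptions, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """State carried across steps; each step reads prior steps' outputs."""
    data: dict = field(default_factory=dict)

def run_workflow(steps, state: WorkflowState) -> WorkflowState:
    """Execute (name, fn) steps in order, recording each result under
    its name so later steps can depend on it."""
    for name, fn in steps:
        state.data[name] = fn(state.data)
    return state

# Hypothetical first three refund steps: each depends on earlier outputs.
refund_steps = [
    ("order",       lambda d: {"id": "ord_1", "total": 50.0}),
    ("eligibility", lambda d: d["order"]["total"] <= 100.0),
    ("amount",      lambda d: d["order"]["total"] if d["eligibility"] else 0.0),
]
```

Drop the shared state and the later steps have nothing to read, which is exactly how you end up with a payment reversal and no matching inventory update.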

Write operations with rollback. Feature-level AI reads data and presents it. Foundation-level AI modifies data. It creates records, updates statuses, processes payments, triggers notifications. When a multi-step write operation fails midway, you need compensation actions: reverse the payment, cancel the notification, restore the inventory count. Without transactional guarantees, partial failures leave your system in a corrupted state that's painful to untangle.

The distributed systems community solved this decades ago with the Saga pattern: pair every forward action with a compensating action, execute compensations in reverse order on failure. AWS now documents Saga orchestration patterns specifically for agentic AI, and platforms like Temporal treat sagas as durable long-running workflows with built-in retry and versioning. But this pattern hasn't made it into most AI agent frameworks. LangChain and CrewAI handle prompt management and tool registration, not distributed transactions.
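The Saga pattern itself is small enough to sketch. This is the orchestration skeleton only, assuming each step supplies its own compensating action; real implementations (Temporal-style) add durability, retries, and versioning on top:

```python
def run_saga(steps):
    """Saga orchestration: each step is (forward, compensate).
    On any failure, run the compensations of already-committed steps
    in reverse order, then re-raise so the caller sees the failure."""
    done = []
    try:
        for forward, compensate in steps:
            forward()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()  # e.g. undo payment reversal, restore inventory
        raise
```

The invariant this buys you: after a mid-workflow failure, the system is back in a state equivalent to never having started, instead of half-refunded.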

Deterministic reliability. A feature that fails 5% of the time is annoying. A foundation that fails 5% of the time is unusable. If your product's core workflow depends on an AI agent executing a multi-step process, that process needs to succeed with traditional-software reliability: 99.5%+ for business-critical operations.

Current benchmarks tell a sobering story. On τ-bench, the best GPT-4o agent achieved less than 50% average success rate across two domains. A 2025 survey of 306 AI agent practitioners found that reliability is the biggest barrier to enterprise adoption, and teams are actively avoiding open-ended, long-running tasks in favor of shorter workflows.

Getting there means moving from probabilistic execution (the agent reasons about each step on the fly) to deterministic execution (the workflow follows a pre-validated, known path). The agent's intelligence gets used at query time to understand what the user wants. The execution itself follows a tested path that doesn't depend on the model getting each individual step right in the moment.
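One way to picture the split: the model's only job at query time is to name an intent, and that intent selects a pre-validated step list. A sketch, with hypothetical workflow names and a pluggable `classify` callable standing in for the LLM:

```python
# Pre-validated paths: the probabilistic part (intent classification)
# can only choose among these; it never invents execution steps.
VALIDATED_WORKFLOWS = {
    "refund": ["verify_order", "check_eligibility", "calculate_amount",
               "reverse_payment", "update_inventory", "send_confirmation",
               "log_compliance"],
    "address_change": ["verify_order", "update_shipping_address"],
}

def plan(user_request: str, classify) -> list[str]:
    """`classify` is any LLM-backed intent classifier returning a label.
    Unknown intents fail loudly rather than improvising a path."""
    intent = classify(user_request)
    if intent not in VALIDATED_WORKFLOWS:
        raise ValueError(f"no validated workflow for intent {intent!r}")
    return VALIDATED_WORKFLOWS[intent]
```

A wrong classification gives you the wrong tested workflow or an explicit error, never a novel untested sequence of writes.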

The infrastructure gap

SaaS has mature infrastructure for almost everything. You can deploy a web application in minutes. CI/CD, monitoring, alerting, all off-the-shelf. Payment processing, email delivery, analytics, all solved problems.

But there's no off-the-shelf solution for "make my AI agent reliably execute multi-step workflows across my API surface." I keep seeing teams hit the same three missing pieces:

Workflow knowledge extraction. The knowledge of how your APIs connect, which endpoints to call in what order with what parameters and what to do when something fails, currently lives in engineers' heads and scattered docs. Nothing automatically extracts this from your OpenAPI specs, test suites, and internal documentation and makes it available to AI agents.
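To make the gap concrete, here is a toy sketch of one extraction heuristic: infer call ordering by matching one endpoint's response fields to another's path parameters. The spec shape and field names are simplifying assumptions; real OpenAPI specs need far richer analysis than this:

```python
def extract_dependencies(spec: dict) -> dict:
    """Return {endpoint: [endpoints whose responses supply its inputs]}.
    An endpoint depends on another if a path parameter it needs appears
    among the other's response fields."""
    produces = {ep: set(info.get("response_fields", []))
                for ep, info in spec.items()}
    deps = {}
    for ep, info in spec.items():
        needed = set(info.get("path_params", []))
        deps[ep] = sorted(other for other, fields in produces.items()
                          if other != ep and needed & fields)
    return deps

# Hypothetical two-endpoint spec fragment.
spec = {
    "GET /orders": {"response_fields": ["order_id", "total"]},
    "POST /orders/{order_id}/refunds": {"path_params": ["order_id"]},
}
```

Even this crude matching recovers "list orders before refunding one," which is the kind of ordering knowledge that today lives only in engineers' heads.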

Workflow validation. Even if you manually encode workflow knowledge, how do you verify it's correct? Running extracted workflows against a staging environment, confirming each step produces the expected output, validating that compensation actions work when you inject failures. This validation pipeline doesn't exist in current AI tooling. It's the same human-in-the-loop validation gap we've written about before, applied to infrastructure rather than individual agent decisions.
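A validation harness for this could look roughly like the following sketch: run each step against a staging-like state, check its output, and optionally inject a failure to confirm the compensations restore the starting state. The step shape and state fields are assumptions for illustration:

```python
def validate_workflow(steps, inject_failure_at=None):
    """Each step is (run, compensate, check): callables over a state dict.
    With `inject_failure_at=i`, simulate a failure before step i and
    verify compensations return the state to its baseline."""
    state = {"inventory": 10, "refunded": 0.0}  # staging-like fixture
    baseline = dict(state)
    done = []
    for i, (run, compensate, check) in enumerate(steps):
        if i == inject_failure_at:
            for comp in reversed(done):
                comp(state)
            assert state == baseline, "compensation left residue"
            return "compensations OK"
        run(state)
        assert check(state), f"step {i} produced unexpected output"
        done.append(compensate)
    return "all steps OK"

# Illustrative steps: issue a refund, then restock the item.
def do_refund(s): s["refunded"] = 50.0
def undo_refund(s): s["refunded"] = 0.0
def do_restock(s): s["inventory"] += 1
def undo_restock(s): s["inventory"] -= 1

steps = [
    (do_refund,  undo_refund,  lambda s: s["refunded"] == 50.0),
    (do_restock, undo_restock, lambda s: s["inventory"] == 11),
]
```

Running this once per failure point gives you evidence that every compensation actually undoes its forward action, before an agent ever touches production.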

Managed execution infrastructure. Validated workflows need somewhere to run. That somewhere needs authentication (agents need verified identity), isolation (a failing workflow shouldn't take down other workflows), audit logging (every action needs a trail), and the transactional guarantees we just discussed. And the security of the MCP servers exposing those workflows matters just as much as the workflows themselves.
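The authentication and audit-logging requirements can be sketched as a thin execution wrapper. Everything here is illustrative: `allowed_tokens` stands in for a real identity provider and `audit_log` for a durable log store:

```python
import time

def execute(workflow, agent_token, run, *, allowed_tokens, audit_log):
    """Managed execution sketch: verify agent identity, run the
    workflow, and append an audit record for every attempt,
    whether it was denied, succeeded, or failed."""
    entry = {"workflow": workflow, "agent": agent_token, "ts": time.time()}
    if agent_token not in allowed_tokens:
        entry["result"] = "denied"
        audit_log.append(entry)
        raise PermissionError("unverified agent identity")
    try:
        result = run()
        entry["result"] = "ok"
        return result
    except Exception:
        entry["result"] = "failed"
        raise
    finally:
        audit_log.append(entry)
```

The point of the `finally` clause is that the audit trail is unconditional: no code path exits the wrapper without leaving a record.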

What this means for SaaS builders

If you're building a SaaS product and your roadmap includes "AI-powered workflows" (and let's be honest, everyone's does), you're going to hit this infrastructure gap. Gartner predicts 40% of enterprise apps will feature task-specific AI agents by 2026, up from less than 5% in 2025. The question is how you deal with it.

Build it yourself. Viable if your engineering team has distributed systems experience and you have a limited number of workflows. But the maintenance burden compounds: every API change means updating workflow definitions, every new workflow needs extraction and validation, and the execution infrastructure needs ongoing operational investment.

Wait for the ecosystem. You could wait for MCP server generators like Speakeasy and Stainless to move beyond endpoint-level wrapping and add workflow intelligence. Possible, but not guaranteed. Their core competency is code generation from API specs, not workflow knowledge extraction. Those are different problems.

Use purpose-built infrastructure. Something that handles extraction, validation, and execution, exposed through a standard interface (MCP) that any agent can consume. This scales better because the workflow knowledge lives as a persistent, evolving asset rather than getting reimplemented for each new AI feature.

So what does this actually mean

AI-as-feature was a product decision. AI-as-foundation is an infrastructure decision. The infrastructure requirements — stateful orchestration, transactional integrity, deterministic reliability — are the difference between a demo and a product.

The SaaS companies that figure out this infrastructure layer will build products where AI actually does the work, not just summarizes it. Everyone else will keep shipping "Summarize" buttons.


Hintas provides the workflow infrastructure layer for when AI becomes the foundation: automated knowledge extraction, validated execution paths, and managed MCP deployment with transactional guarantees. If you're hitting this problem, take a look at hintas.ai.

