Executives keep asking whether agentic AI is ready for their organization. The better question is whether their organization is ready for agentic AI.

The problem is not just that many projects fail. Gartner predicts that over 40% of agentic AI initiatives will be canceled by 2027, driven by escalating costs, unclear business value, and inadequate risk controls.¹ The subtler risk is more dangerous: the projects that "succeed" (the ones that run, automate work, and show positive near-term ROI) may be quietly encoding your worst processes into autonomous systems.

That failure mode is harder to see, and it is already emerging in the data.

Agents learn what you actually do, not what your process map says

Most organizations are deploying agentic AI the way they deployed RPA: define the inputs, automate the steps, and measure the time saved.

Agentic AI behaves differently. It operates over sequences of actions and decisions, observing how work is really done, including the handoffs, exceptions, and workarounds, and optimizing around that reality. The more autonomy you give it, the more it learns from behaviors that often do not appear in official process documentation.

That would be an advantage if your workflows were well-designed. In practice, they are usually layered with accumulated process and organizational debt: legacy rules, local hacks, and undocumented dependencies that build up over years of incremental change. A 2023 global survey of 3,000 decision-makers, published in California Management Review, found that most companies implementing AI focus on technology and data issues, including usability, access, and quality, rather than rethinking the underlying workflows in which AI operates.²

Research on cognitive automation directly reinforces this. These systems perform best when state spaces are constrained and decision criteria are explicit. Complexity and hidden dependencies, precisely the conditions created by years of accumulated process workarounds, sharply increase failure risk.³ Where organizations have not done the simplification work first, the failure pattern is not just "AI underperforms." It is "AI becomes the most consistent executor of a bad process you have ever had."

This is the central reversal: agents do not fail because your processes are broken. They succeed at reproducing them.

The Agentic Echo Effect: how dysfunction scales

A useful way to think about this is the Agentic Echo Effect: whatever behavioral patterns exist in your current workflows, good or bad, will be amplified and normalized once agents are trained on them and embedded in daily operations.

Three mechanisms drive that effect.

First, process complexity and variance are primary risk drivers. Cognitive automation research finds that systems deployed into high-variance, exception-heavy environments are significantly more likely to produce errors and require unplanned human intervention than those operating on standardized, well-defined workflows.³ In practice, a process with fifteen documented variants and a half-dozen informal workarounds does not become more manageable when an agent runs it. It becomes consistently wrong in more places, and it produces errors faster than any human team could.

Second, agents learn workarounds as if they were policy. When people repeatedly bypass steps (skipping a control, reclassifying an exception to avoid a queue, routing work to a preferred colleague), those patterns exist in the interaction data an agent learns from. Agents do not distinguish between official policy and informal workaround. They observe what happens, optimize for what works, and treat the gap between the two as irrelevant. If your team has been quietly routing around a broken approval step for two years, your agent will route around it too, reliably, at volume, without ever flagging that the step exists.

Third, human oversight is a weak backstop at scale. A March 2026 BCG study published in Harvard Business Review found that workers in high AI-oversight roles (those responsible for monitoring and correcting AI outputs) report 14% more mental effort, 12% greater mental fatigue, and 19% greater information overload than workers who use AI more collaboratively.⁴ That cognitive burden has downstream consequences: 34% of workers experiencing what the researchers call "AI brain fry" intend to leave their jobs, versus 25% of those who do not.⁴ When the same people are doing the work and supervising agents, the chance that they will catch subtle, pattern-level dysfunction rather than isolated mistakes drops sharply. The oversight model that makes executives feel in control is often the one least capable of catching what matters most.

Taken together, the Agentic Echo Effect is no mere hypothesis. It is the predictable outcome of deploying agents into messy workflows and relying on overloaded humans to notice when the wrong things are being done.

The Agent Surface: where you actually have leverage

If the risk is that agents learn and reproduce what they see, the leverage point is what they are allowed to see.

Think of this as your Agent Surface: the subset of workflows, decisions, and data that any given agent is permitted to touch. Before you ask which agent to deploy, ask three questions:

  • How much process variance exists in this workflow today? How many documented variants, exceptions, and informal workarounds?

  • If an agent executed this workflow 10,000 times, exactly as it is performed today, would the output be acceptable?

  • Who in the organization owns process quality in this area, and are they part of the deployment decision?

If you cannot answer the first two questions with confidence, or if the answer to the second is no, your Agent Surface is not ready. Deploying into that condition does not accelerate the business. It accelerates whatever is already wrong with it.
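The first question does not have to be answered by intuition. If your workflow systems emit event logs, variant counts are directly computable, which is the basic move behind process mining. The sketch below is a minimal illustration, not a production tool; the case_id, activity, and timestamp field names, the file name, and the top-5 cutoff are all assumptions for the example.

```python
# Minimal process-variance check from an event log (illustrative sketch).
# Assumes a CSV export with case_id, activity, and timestamp columns;
# these field names are placeholders for whatever your systems actually emit.
import pandas as pd

def variant_summary(log: pd.DataFrame) -> pd.Series:
    """Collapse each case into its ordered activity sequence (a 'variant')
    and report how concentrated the process is around its common paths."""
    traces = (
        log.sort_values(["case_id", "timestamp"])
           .groupby("case_id")["activity"]
           .agg(tuple)                      # one ordered trace per case
    )
    counts = traces.value_counts()          # cases per distinct variant
    top5_share = counts.head(5).sum() / counts.sum()
    return pd.Series({
        "cases": len(traces),
        "distinct_variants": len(counts),
        "top_5_variant_coverage": round(top5_share, 3),
    })

log = pd.read_csv("approvals_event_log.csv", parse_dates=["timestamp"])
print(variant_summary(log))
```

A workflow where the top five variants cover 95% of cases is a very different deployment target from one where they cover 40%; the long tail of rare variants is exactly the hidden process debt an agent would learn and reproduce at volume.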

Two patterns are emerging in practice. Organizations taking a maximalist approach expose large swaths of messy, high-variance workflows to agents and rely on generic guardrails and human monitoring to catch problems. This approach promises big savings quickly, but it matches the profile Gartner associates with canceled projects: costs escalate, governance overhead mounts, and business value remains unclear.¹

Organizations taking a minimalist, redesigned approach first reduce process variance through process mining, standardization, and explicit decision rules, and then allow agents to operate within those narrower, cleaner surfaces. A 2026 MIT Technology Review Insights report on agent-first process redesign makes the requirement explicit: safe, effective agentic deployment requires machine-readable processes, explicit policy constraints, and organized data flows.⁵ Agents deployed without these conditions require constant human translation between messy reality and the structured inputs they need to function, which negates the efficiency gain and multiplies the oversight burden.
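To make "explicit policy constraints" concrete: below is a minimal sketch of an Agent Surface declared as data rather than convention, with an allowlist of workflows and actions checked before any agent step runs. The class, workflow, and action names are all hypothetical, and a real deployment would enforce this in the orchestration layer rather than in application code.

```python
# A machine-readable Agent Surface: allowed workflows and actions declared
# as data and checked before every agent step. All names are hypothetical;
# a real deployment would enforce this in the orchestration layer.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentSurface:
    allowed_workflows: frozenset[str]
    allowed_actions: frozenset[str]
    requires_human_approval: frozenset[str] = field(default_factory=frozenset)

    def check(self, workflow: str, action: str) -> str:
        """Gate an agent step: refuse anything outside the declared surface."""
        if workflow not in self.allowed_workflows:
            raise PermissionError(f"workflow {workflow!r} is outside the agent surface")
        if action not in self.allowed_actions:
            raise PermissionError(f"action {action!r} is outside the agent surface")
        return "needs_approval" if action in self.requires_human_approval else "allowed"

# Example: an invoice-matching agent that may flag exceptions only with a
# human in the loop, and may touch nothing else.
surface = AgentSurface(
    allowed_workflows=frozenset({"invoice_matching"}),
    allowed_actions=frozenset({"match_invoice", "flag_exception"}),
    requires_human_approval=frozenset({"flag_exception"}),
)
print(surface.check("invoice_matching", "match_invoice"))   # -> allowed
print(surface.check("invoice_matching", "flag_exception"))  # -> needs_approval
```

The design point is that anything outside the declared surface fails loudly instead of being silently learned around, which is the opposite of how informal workarounds behave.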

The financial stakes of getting this right are significant. BCG's analysis of AI value creation found that only 5% of organizations generate substantial financial gains from AI at scale, achieving revenue increases of up to 5x and cost reductions of up to 3x relative to peers.⁶ The distinguishing characteristic of that group is not which AI tools they use. It is that they redesign workflows and processes alongside AI deployment rather than layering AI on top of existing operations.

The strategic move for an executive is not "deploy more agents" or "slow down on AI." It is to shrink and clean the Agent Surface before scaling autonomy.

That is an organizational task, not a tooling decision. It requires process ownership, variant reduction, and a governance model that treats process debt as a strategic liability rather than a backlog item to be addressed after the next deployment.

The uncomfortable question for your next AI review

Most AI steering committees are still asking technical and financial questions: Which model? What guardrails? What is the projected ROI?

Those are necessary. But if you stop there, you are missing the real decision.

The better question for your next review is this:

If our agents get very good at doing what our people are doing today, are we comfortable with that being how this part of the organization runs for the next five years?

If the honest answer is no, your problem is not whether agentic AI is ready for your company.

Your problem is that your company is not ready to have its current behavior learned, normalized, and scaled by something that will not complain and will not tell you where the bodies are buried.

Sources

  1. Reuters. (2025). "Over 40% of agentic AI projects will be scrapped by 2027, Gartner says." June 25, 2025. https://www.reuters.com/business/over-40-agentic-ai-projects-will-be-scrapped-by-2027-gartner-says-2025-06-25/

  2. Ångström, R. C., et al. (2023). "Getting AI Implementation Right: Insights from a Global Survey." California Management Review. https://cmr.berkeley.edu/assets/documents/sample-articles/angstrom-et-al-2023-getting-ai-implementation-right-insights-from-a-global-survey.pdf

  3. Satzger, B. et al. (2024). "A model for assessing cognitive automation use cases." Journal of Information Technology, March 10, 2024. https://journals.sagepub.com/doi/10.1177/02683962231185599

  4. Bedard, J., Kropp, M., Hsu, M., et al. (2026). "When Using AI Leads to 'Brain Fry.'" Harvard Business Review, March 4, 2026. https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry

  5. MIT Technology Review Insights. (2026). "Enabling agent-first process redesign." April 7, 2026. https://www.technologyreview.com/2026/04/07/1134966/enabling-agent-first-process-redesign/

  6. BCG. (2025). "Are You Generating Value from AI? The Widening Gap." September 16, 2025. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap
