Your AI Investment Is Making Everyone Busier and Nothing Faster

There's a quiet assumption running through boardrooms right now: if we just push harder on AI, the productivity will show up eventually. More pilots, more vendors, more "AI-enabled" in the deck. The working belief is that the technology curve will outrun the organizational mess.

The data tells a different story. A 2025 McKinsey global survey finds that while most organizations have deployed generative AI in some form, only a minority report any meaningful financial impact from it [2]. At the same time, AI project failure rates are climbing sharply. By late 2024, 42% of companies had abandoned most of their AI initiatives—up from just 17% the year before [3]. MIT research finds that 95% of enterprise generative AI pilots produce no measurable return [6].

This isn't a tooling problem. It's an execution problem. If Issues 1 and 2 of this series were about the human bottleneck (manager burnout) and the structural bottleneck (clarity collapse), Issue 3 is about the false solution: trying to automate your way out of a system that's already overloaded and misaligned. AI, layered onto that system without redesign, doesn't remove drag. It multiplies it.

AI Is Exposing Execution Weaknesses You Already Had

From a distance, AI adoption looks like progress. Dashboards, copilots, internal hackathons, vendor roadmaps. Inside the machine, the story is more complicated.

Recent surveys show high AI deployment paired with stubbornly flat or declining productivity outcomes [2][13]. The pattern shows up in predictable forms:

  • One enterprise runs dozens of generative AI pilots in marketing and customer service, but can't point to a single campaign or support process where cycle time or conversion has structurally improved.
  • A financial firm deploys AI copilots to "save time" on documentation, then watches review cycles, legal scrutiny, and rework actually increase as people try to clean up machine-generated drafts.
  • A manufacturer invests heavily in AI-driven forecasting, only to keep human override rates so high that the net effect is more meetings and second-guessing, not faster, better decisions [5].

On the surface, AI activity is easy to see: demos, prototypes, enthusiastic internal champions. Actual productivity gains are harder to trace. Many organizations have not done the unglamorous work of defining baseline metrics, counterfactuals, and explicit kill criteria. As a result, AI becomes another place where effort is mistaken for impact.

The uncomfortable throughline: AI is exposing execution weaknesses that were already there. It's just doing it faster and more expensively.

The Real Bottleneck Is Capacity, Not Capability

Most AI conversations at the top are still about capability: model quality, vendor selection, use case libraries. The real constraint inside most organizations is capacity.

Deloitte's 2025 work on organizational capacity makes a straightforward point: many companies are operating at or beyond their true capacity, disguising it with heroic effort, longer hours, and quiet scope creep [7]. In that environment, every new priority—even if it's "just a pilot"—has an opportunity cost that rarely gets priced in.

McKinsey's research on "rewiring" organizations for AI makes a similar argument: meaningful value comes when companies redesign core workflows, roles, and decision rights around AI—not when they bolt it onto existing processes [8][2]. That redesign requires real energy from exactly the people who are already over-committed: senior operators, architects, and the best frontline managers.

At the same time, trust and skill gaps mean most employees are not ready to absorb AI into their real work. Surveys in 2025 show that while executives report high confidence in AI's strategic importance, a majority of employees feel undertrained and uncertain about how to apply it safely and effectively day to day [4][12]. EY's research estimates that companies are leaving up to 40% of potential AI productivity gains on the table because of underinvestment in human capabilities and change management [4].

Meanwhile, AI is introducing new types of friction on top of old ones. Research on "coordination taxes" shows that collaboration overhead—meetings, status updates, cross-functional approvals—already consumes a large share of knowledge worker time [9][10]. AI adds another layer:

  • Prompting and wrangling the tools: learning each system's quirks, rewriting prompts, chasing better outputs.
  • Reviewing and editing machine output: checking for hallucinations, inconsistencies, and tone issues, often under legal or compliance scrutiny [6].
  • Reconciling conflicting outputs: when different teams or tools generate different "answers" to similar questions.

Harvard Business Review has already named one flavor of this: AI-generated "workslop"—a flood of low-quality drafts, decks, and summaries that look like progress but require heavy human cleanup [6]. Asana's 2025 analysis of "hidden productivity taxes" puts this kind of rework and context-switching alongside meetings and notification overload as a primary sink of capacity [9].

Put bluntly: The people who could redesign work for AI have no slack. The people who are being asked to use AI don't feel equipped or protected. The systems they all operate in were never built for this level of speed and complexity. Capability is not the binding constraint. Capacity is.

Why Most AI Programs Stall in the Middle

When you look across the recent AI failure and abandonment data, a familiar pattern emerges [1][3][14].

AI initiatives rarely fail at the start. The early stages—strategy decks, vendor demos, proof-of-concept builds—are relatively contained and exciting, and they sit close to executive attention. Nor do they usually fail at the technical level: most modern AI platforms can, in fact, generate content, summarize data, or propose code that is at least directionally useful in the abstract [2][5].

They stall or die in the middle:

No one owns the workflow change. The AI pilot works in a lab, but no leader is explicitly accountable for redesigning roles, approvals, and policies around it. The use case remains "interesting" but never mandatory.

Risk and compliance quietly throttle adoption. Legal, risk, and security teams layer on guardrails so strict that the tool becomes unusable in real scenarios, or approval times balloon.

Frontline teams don't trust the outputs. A few early mistakes or hallucinations create lasting skepticism. People revert to old habits while still going through the motions of "using the tool."

Metrics don't change. No one agreed on a clear before/after metric (cycle time, error rates, cases handled per FTE), so six months in, no one can make a credible case that the AI has improved anything that matters.

MIT Sloan's work on the AI productivity paradox in manufacturing captures this dynamic: even when algorithms outperform humans in narrow tasks, organizational inertia, skills gaps, and misaligned incentives prevent those gains from translating into firm-level productivity [5]. McKinsey's repeated surveys say the same thing in more corporate language: the companies that see outsized gains are the ones that commit to structural change, not just tool deployment [2][8].

Executives often interpret this stall as a signal that they chose the wrong technology or vendor. So they switch platforms, start new pilots, or rebrand the program. In reality, they've avoided the harder work: deciding what they are willing to stop doing, which roles they are willing to redefine, and how they will protect the humans in the system as the work changes.

What to Do Monday Morning

If the AI productivity paradox is the tension, the practical question is: how do you avoid being the company with great demos and flat results?

Design from decisions backward, not tools forward. Identify the 10–15 recurring decisions where better speed or quality would actually move the needle—forecasting, pricing changes, case routing, quality checks. Then ask: where, precisely, could AI support these decisions, and what would we have to change in the workflow, roles, and data to make that real? [2][5][8]

Price the capacity cost up front. Treat every AI initiative like any other strategic project: what will we stop or slow to make room for the redesign work? If the answer is "nothing," you're planning on invisible overtime and heroics again [7][9].

Protect humans from the worst version of AI work. Set explicit norms: no one's performance review is based on "prompt volume"; AI outputs always require human judgment before going to customers; people can flag bad patterns without being labeled resistant [4][6][12].

Kill as ruthlessly as you start. Define success metrics before you launch, with clear thresholds for expansion, iteration, or shutdown. When something doesn't clear the bar, shut it down and publish what you learned. AI tourism—wandering from pilot to pilot without consequences—is one of the fastest ways to burn trust and capacity [1][3][14].
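For teams that want to make "define thresholds before you launch" literal, the decision rule can be written down as plainly as a few lines of code. A minimal sketch follows; the metric names (cycle time, error-rate delta) and the threshold values are hypothetical placeholders, not recommendations:

```python
# Minimal pilot scorecard: encode the expand / iterate / shut-down thresholds
# BEFORE launch. All metric names and threshold values here are illustrative.
from dataclasses import dataclass

@dataclass
class PilotResult:
    name: str
    baseline_cycle_days: float   # measured before the pilot (the counterfactual)
    pilot_cycle_days: float      # measured during the pilot
    error_rate_delta: float      # change in error rate (negative = better)

def decide(result: PilotResult,
           expand_improvement: float = 0.20,   # e.g. 20% faster to justify scaling
           iterate_improvement: float = 0.05,  # e.g. 5% faster earns one more cycle
           max_error_increase: float = 0.0) -> str:
    """Return 'expand', 'iterate', or 'shutdown' from pre-agreed thresholds."""
    improvement = 1 - result.pilot_cycle_days / result.baseline_cycle_days
    if result.error_rate_delta > max_error_increase:
        return "shutdown"        # quality regressed: kill regardless of speed
    if improvement >= expand_improvement:
        return "expand"          # cleared the bar that was set before launch
    if improvement >= iterate_improvement:
        return "iterate"         # promising; one more bounded cycle
    return "shutdown"            # below the bar: stop, publish what you learned

print(decide(PilotResult("support-drafting", 10.0, 7.0, -0.01)))  # prints "expand"
```

The point isn't the code; it's that every branch was agreed on before launch, so six months in, the expand-or-kill conversation is a lookup, not a negotiation.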

The underlying move is simple and hard: treat AI as an organizational design problem, not a procurement problem.


The AI wave has revealed a lot about how work really happens inside large organizations. It has exposed brittle processes, overloaded roles, and shallow clarity about who decides what. Leaders have a choice: treat AI as another thing to bolt onto a strained system, or use it as a forcing function to finally fix the system. Underneath the tooling and the jargon, this is still about a very old question: can people inside this company do their best, clearest work together, at the speed reality now demands? Technology can help. It just can't answer that question for you.


Sources

[1] Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off. – The New York Times

[2] The State of AI: Global Survey 2025 – McKinsey & Company

[3] AI project failure rates are on the rise: report – CIO Dive (S&P Global survey data)

[4] EY survey reveals companies are missing out on up to 40% of AI productivity gains – EY

[5] The "productivity paradox" of AI adoption in manufacturing firms – MIT Sloan Management Review

[6] AI-Generated "Workslop" Is Destroying Productivity – Harvard Business Review

[7] Reclaiming organizational capacity – Deloitte Insights

[8] Seizing the agentic AI advantage – McKinsey & Company

[9] The Four Taxes That Bankrupt Your Workday [2025] – Asana

[10] Understanding and reducing the coordination tax – Deskbird

[11] Five Hybrid Work Trends to Watch in 2025 – MIT Sloan / Brian Elliott

[12] The American Trust in AI Paradox: Adoption Outpaces Governance – KPMG

[13] Global AI survey 2025: The paradox of AI adoption – Wavestone

[14] The AI Productivity Paradox: High Adoption, Low Transformation – Inference by Sequoia