Everyone wants the spotlight, but only few can stand the heat.

— Jimmy

Fast Code, Slow Truth: AI Didn't Remove the Bottleneck—It Moved It

My keyboard time is down, my output is up, but the overall system—delivery, reliability, alignment—often doesn't move the way the commit graph suggests it should.

Technology & Innovation · March 21, 2026
By Jimmy Nguyen
8 min read

Why I'm not celebrating raw speed anymore

Quick question: have you ever felt amazing after shipping something in a day… and then felt slightly sick two weeks later when you realise nobody wanted it, support tickets spiked, or the “tiny change” turned into a production incident?

That's the vibe I've been getting with AI-assisted development. My keyboard time is down, my output is up, and the “blank page” fear is basically gone. But the overall system—delivery, reliability, alignment, real customer impact—often doesn't move the way the commit graph suggests it should.

What's changed is not that engineering got easy. It's that the easy part (turning intent into code) got cheaper. The expensive part—deciding what's worth building, proving it works, and keeping it safe—didn't get cheaper at the same rate.

The data: AI can speed up coding, and sometimes slows experts down

If you want the most believable “AI makes developers faster” number, it's probably the controlled experiment behind a lot of early claims: developers building an HTTP server in JavaScript finished 55.8% faster when they had access to an AI pair-programmer.

That result is real—and it makes sense in that setting. The task is well-scoped, the requirements are clear, and the path to “done” is straightforward. In other words: you already know what you're trying to do; AI helps you do it faster. Now for the part people skip in the keynote slides:

A randomised controlled trial looked at experienced open-source developers working in codebases they already knew well. It randomised 246 real issues across AI-allowed vs AI-disallowed conditions. When AI was allowed, developers took 19% longer on average—even though they believed AI sped them up.

That belief gap is not a rounding error. Developers expected AI to cut time by about 24%, and even after living through the slowdown they still felt like they were ~20% faster. The best explanation I've seen is painfully human: AI often produces something “directionally correct,” but not quite right for the local context—so you pay the verification and correction tax.

To their credit, the study's authors published an update basically saying: “this is getting harder to measure because people refuse to work without AI now, and selection effects are messing with the experiment.” They suspect uplift is higher now, but their new data can't cleanly pin it down. So I don't read the evidence as “AI makes everyone slower” or “AI makes everyone faster.” I read it as: AI amplifies what you already have.

  • If you have clarity, it accelerates execution.
  • If you have uncertainty, it can create more branches, more code paths, and more things to validate.

Why output spikes but delivery metrics don't

Here's the part that made me stop obsessing over “code shipped per week.” In late 2024, an industry study on AI in the workplace reported that higher AI adoption was associated with improvements in documentation quality (+7.5%), code quality (+3.4%), and code review speed (+3.1%).

The paradox:

AI adoption was also associated with an estimated decrease in delivery throughput (−1.5%) and an estimated decrease in delivery stability (−7.2%). The parts engineers feel day-to-day get smoother, yet the system-level outcomes can stagnate or even degrade.

Two extra details are worth holding in your head at the same time:

  • Trust is not automatic. 39% of respondents reported little to no trust in AI-generated code.
  • Only 24% said they trust AI-generated code “a lot” or “a great deal.”

So we're in a situation where people use AI every day, feel faster, but still don't fully trust what it produces—and delivery stability can take a hit. That combination practically guarantees extra review friction and risk management overhead.

What AI doesn't speed up: coordination, validation, and the human tax

One reason “AI makes me code faster” doesn't translate into “our team ships value faster” is simple: coding is not most of the job.

A large study bluntly points out: developers spend “surprisingly little time” writing code, with prior studies estimating coding time anywhere from 9% to 61%. Developers constantly trade off between main coding tasks and collaborative activities. Even if AI makes the coding slice dramatically faster, it doesn't automatically speed up:

  • Alignment across product, design, security, compliance, and SRE.
  • Waiting for feedback from users and stakeholders.
  • De-risking changes so they don't break production.
  • The plain old coordination overhead of more humans touching the same thing.

On coordination: empirical software engineering research has been warning us for years that productivity doesn't scale linearly with team size because communication and coordination overhead grows as teams grow. There's also a mathematical way to internalise this: Amdahl's Law. It says that speeding up one part of a system has a capped impact on the whole system, because the parts you didn't speed up become the new limit.

Here's a concrete example I use when I'm tempted to high-five myself:

Imagine only 30% of my end-to-end delivery time is “writing code”. The other 70% is everything else. If AI makes my coding 2x faster, the overall speedup is: 1 / (0.7 + 0.3 / 2) = 1.18x. So the best-case system improvement is ~18%—even if the coding part is magically twice as fast.
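The arithmetic above is just Amdahl's Law, and it's easy to sanity-check in a few lines. This is a minimal sketch (the `overall_speedup` helper name is mine, not from any library) that also shows the ceiling: even infinitely fast coding can't beat the non-coding 70%.

```python
def overall_speedup(coding_fraction, coding_speedup):
    """Amdahl's Law: speedup of end-to-end delivery when only the
    coding slice of the work gets faster."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

# 30% of delivery time is coding, and AI makes that slice 2x faster:
print(round(overall_speedup(0.3, 2), 2))    # → 1.18

# Even a near-infinite coding speedup caps out at 1 / 0.7 ≈ 1.43x:
print(round(overall_speedup(0.3, 1e9), 2))  # → 1.43
```

The second line is the sobering one: the non-coding 70% sets a hard ceiling of ~1.43x, no matter how good the AI gets at the coding slice.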

And that's before we account for the “verification tax”: time spent reviewing, correcting, and sanity-checking AI suggestions.

When faster execution amplifies risk

Speed is not neutral. When you make something cheaper, people do more of it. That's usually great—unless the “more” includes more unreviewed security surface area, more brittle integrations, and more production blast radius.

On security specifically, the evidence is uncomfortable:

A user study found that participants with access to an AI assistant wrote significantly less secure code than those without it, and were also more likely to believe their code was secure. The risk isn't only bad output; it's misplaced confidence.

The same study also notes that reusing previous AI outputs as prompts can magnify or replicate security problems—basically, you can accidentally create an insecurity photocopier.

And it's not just “first draft” code. Another paper looked at iterative “improvements” to AI-generated code and found a 37.6% increase in critical vulnerabilities after just five iterations. Iteration without robust human validation can degrade security in a way that feels counterintuitive. This is the dark side of cheap execution: you can scale mistakes, and you can scale them fast.

How I'm trying to turn AI speed into better outcomes

When I step back, the fix isn't “use less AI.” It's “stop measuring the wrong thing, and tighten the loops that actually matter.” Productivity is multi-dimensional, and activity metrics (commits, lines changed) are famously easy to game and easy to misread.

Here's what I'm changing in my own workflow (and what I'd push as a team habit) so AI speed turns into actual value:

1. Force clarity first: If I can't write a one-paragraph problem statement and a success metric, I'm not “ready to code,” I'm just about to create expensive clutter. AI can explode the option space instead of shrinking it.

2. Keep batches small: A better development process doesn't automatically translate into better delivery. Small batch sizes and robust testing mechanisms are part of the difference.

3. Treat AI output like a smart new hire: Useful, sometimes brilliant, occasionally confident nonsense. That means the process must assume verification is required—especially for security-sensitive code.


The Practical Checklist

To turn speed into outcomes, follow these rules:

  • I don't accept AI-generated code without a test that would fail if the code is wrong.
  • I bias AI usage towards “acceleration mode” tasks, and slow down on “exploration mode” tasks.
  • I assume security needs explicit guardrails, because assistance can reduce security while increasing confidence.
  • I measure outcomes (stability, user impact, rework) instead of worshipping activity.
  • I keep AI adoption policies transparent to build back trust.
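The first rule in the checklist is concrete enough to sketch. Here, `parse_duration` stands in for a hypothetical AI-drafted helper (the function and its spec are mine, invented for illustration); the point is that the assertions are chosen so a plausible-but-wrong draft would actually fail them, not just exercise the happy path.

```python
def parse_duration(text):
    """Hypothetical AI-drafted helper: convert strings like '2h30m'
    to total minutes."""
    minutes = 0
    # Insert a space after each unit marker, then split into parts.
    for part in text.replace("h", "h ").replace("m", "m ").split():
        if part.endswith("h"):
            minutes += int(part[:-1]) * 60
        elif part.endswith("m"):
            minutes += int(part[:-1])
    return minutes

# Tests that would fail if the code is wrong—including the edge cases
# a confident first draft tends to miss:
assert parse_duration("2h30m") == 150
assert parse_duration("45m") == 45
assert parse_duration("1h") == 60
```

If the AI's draft only handled the `"2h30m"` shape, the `"1h"` and `"45m"` assertions catch it before review does.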

AI didn't remove the hard work; it just helped me reach it faster.
