
Content Teams Don't Have a Speed Problem

March 2026

I can take an essay from idea to published in a few hours. I talk through my thinking with Claude Code, it drafts, I refine, and it handles the entire production layer—HTML, design, formatting, publishing to my site. No CMS. No design handoff. No upload workflow. Just thinking and shipping.

So why does the editorial team I lead—using the same AI models—still take weeks on a single article?

That gap haunted me for months. We use AI. Everyone on the team uses it in some form. But production hasn't 10xed. It hasn't even 2xed. And for a while, I couldn't figure out why. I had this quiet "should" feeling—we should be further ahead, we should be publishing more, we should have figured this out by now. Every content leader I talk to carries a version of the same anxiety.

We don't have a speed problem. We have a creativity problem disguised as a speed problem. Speed matters enormously—publishing quality content at pace is how you build a content library that compounds. But the tools most of us are reaching for are accelerating the wrong layer of the work.

Why engineering teams shipped 10x faster

I have a friend who's a senior engineer at an AI platform. He works long days, and he'll freely tell you his output has increased by an order of magnitude. Hundreds of pull requests per month. His company measures developer productivity by tokens consumed—not as a vanity metric, but as a signal of how much implementation work is flowing through AI rather than being typed by hand.

His CEO mandated that no one propose a problem without first considering how to build an agentic workflow for it. The entire culture shifted around one question: what can be automated so humans focus on what can't?

When I hear stories like this, my reaction is envy. We have the same AI models. We have content guidelines, quality criteria, style documents. Why can't we ship at the same rate?

Here's what's easy to miss: most of what got faster was implementation. The code itself. The translation of a design decision into working software. An engineer still decides what to build, how to architect it, which tradeoffs to accept. That judgment hasn't been automated. What's been eliminated is the hours of typing, debugging, and boilerplate between "I know what this should do" and "it works."

AI collapsed the implementation layer. The thinking layer stayed human.

[Figure: two stacked bars. In software engineering, implementation dominates the work, with a thin slice of thinking; in content/editorial, the ratio inverts and thinking dominates. Implementation is compressible by AI; thinking is irreducibly human.]

Why the same AI gains don't transfer

In engineering, a huge percentage of the work was implementation. Compressing it created enormous gains.

Content has a different ratio.

Why the same playbook doesn't transfer

When you sit down to produce an article, what's actually hard? Not the typing. Not the formatting. Not even the research, most of the time.

What's hard is the through-line—the argument that organises everything and makes the piece worth reading. It's the expertise—the insight that only comes from interviewing someone who's done the work, or having done it yourself. It's the editorial judgment—knowing what to cut, what to expand, where the piece is actually saying something versus going through the motions.

These are thinking problems. And thinking doesn't compress the same way typing does.

A journalist mate told me his anxiety about AI eased when he realised something obvious: ChatGPT can't interview people. The core of his work—building relationships, asking the right follow-up, earning trust from sources—is irreducibly human. Everything around it can be accelerated. But the thing that makes journalism journalism?

Untouched.

This also explains why my solo workflow is so much faster than my team's. When I'm publishing on my own site, the entire production layer—design, formatting, CMS, publishing—is handled by AI. And the thinking layer lives in one brain. The moment you add people—more voices to align, more review steps, more legal and brand considerations—the coordination itself becomes a bottleneck that AI can't really solve. The challenge scales with team size.

Context            | AI speed gain       | Real bottleneck
-------------------|---------------------|--------------------------------------------------
Solo creator       | High                | Your own thinking
Small team         | Moderate            | Aligning on angle and quality
Mid-size editorial | Lower than expected | Coordination, review cycles, expertise sourcing
Enterprise         | Lowest              | Legal, brand, stakeholders, multi-team alignment

AI leverage decreases as team complexity increases

So why do we all feel behind? Because the promise of AI in content was framed as a production story. Faster drafts. More output. Scale without hiring more. And sure, you can generate a competent first draft in minutes. Many of us dream of one-shotting a publishable piece with the right prompt, or at least generating an AI-assisted draft that just needs "some" editing. That's getting better—but for teams with real quality standards, even the best AI draft needs all the front-loaded thinking done first. Prompt engineering alone doesn't substitute for editorial strategy.

Competent first drafts were never the constraint. The constraint was always: is this piece actually saying something worth remembering?

When you hold your work to that standard—and you should, because that's the standard that builds a library worth having—AI doesn't make it dramatically faster. It makes certain parts faster while revealing that the slow parts were always the important parts.

What's actually working

I don't have a clean solution. Anyone who tells you they do is likely selling something. But I can share what's emerging on our team, because the pattern is instructive even if it's messy.

We rebuilt our quality criteria around judgment, not checkboxes. We used to score content against a list: engaging introduction, visual break density, proper formatting. Writers could tick every box and still produce something helpful, but forgettable. The piece scored well without being transformative.

So we rebuilt the criteria around judgment: Does the piece have a sharp through-line? Does it carry insight you could only get first-hand? Does it change how the reader thinks or works?

This was counterintuitive. We raised the bar on human judgment at the same moment we were trying to speed things up. But it forced a useful clarification: the quality standard defines what humans need to contribute. Everything else becomes fair game for AI assistance.

The team built their own tools—and shared them organically. One writer created a custom GPT loaded with our quality criteria. She used it to evaluate drafts before submission, then shared it with the team. Another built a skill for generating custom graphics in our brand style—something that used to eat hours in design tools.
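The custom GPT itself is configured inside ChatGPT rather than written as code, but the idea translates directly. Here's a minimal sketch, assuming the official openai Python client and illustrative criteria paraphrased from our framework (the model name and wording are placeholders, not her actual setup):

```python
# Hypothetical draft evaluator: sends a draft to a chat model along with
# judgment-level quality criteria and returns the model's assessment.
# Assumes the official `openai` client; criteria wording is illustrative.
from openai import OpenAI

# Judgment-level criteria, not checkboxes (paraphrased for illustration).
CRITERIA = """\
1. Through-line: does one sharp argument organise the whole piece?
2. Expertise: does it carry insight you could only get first-hand?
3. Transformation: does it change how the reader thinks or works?
"""

def evaluate_draft(draft: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {"role": "system",
             "content": "You are an editorial reviewer. Assess the draft "
                        "against each criterion and flag the weakest one:\n"
                        + CRITERIA},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    import sys
    print(evaluate_draft(sys.stdin.read()))
```

The point isn't the specific model or API. It's that once the quality framework is written down, a tool can apply it before a human ever reads the draft.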

Nobody mandated these. They emerged because the team had a clear framework for what quality means, and individuals found their own ways to handle the operational work around it.

The work got more collaborative, not more automated. This surprised me most. As the criteria shifted toward packaging and expertise, the process started looking less like a production line and more like a writers' room. More calls between editors and writers. More workshopping of angles before drafting begins. More input from strategists on what the content library actually needs.

The front of the process—creative strategy, angle development, expert partnerships—is getting more investment. The back of the process—style conformance, formatting, research compilation—is where AI picks up the slack.

But this only works if you actually build collaboration into the workflow. Not as a nice-to-have. As a literal editorial step.

If your team is co-located, treat briefing like a writers' room. Get the strategist, writer, and editor in a room together before the outline exists. If you're distributed—and most content teams are—this takes more deliberate infrastructure: scheduled cross-timezone calls at the outlining phase, an automated Slack thread that pings every relevant person when a piece enters ideation, AI transcription on every collaboration call so insights don't evaporate after hanging up.
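That automated Slack thread is less exotic than it sounds. Here's a minimal sketch, assuming a standard Slack incoming webhook (the URL and user IDs below are placeholders):

```python
# Hypothetical ideation ping: posts to a Slack channel via an incoming
# webhook when a piece enters the ideation stage. Wire this to whatever
# marks a piece as "in ideation" in your CMS or tracker.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def announce_ideation(title: str, people: list[str]) -> None:
    mentions = " ".join(f"<@{p}>" for p in people)  # Slack user IDs
    requests.post(
        WEBHOOK_URL,
        json={"text": f"New piece entering ideation: *{title}*\n"
                      f"{mentions} - workshop the through-line before outlining."},
        timeout=10,
    ).raise_for_status()

announce_ideation("Content Teams Don't Have a Speed Problem",
                  ["U012AB3CD", "U045EF6GH"])
```

Once this fires automatically, the right people get pulled into the conversation without anyone having to remember to invite them.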

1. Ideation session: writer + editor + strategist. Workshop the through-line before anything gets outlined.
2. Expert outreach: interviews, partnerships, gathering first-hand insight. AI transcribes and summarises.
3. Outline + angle review: collaborative checkpoint. Is the through-line sharp? Does the structure serve the argument?
4. Draft + AI operational pass: the writer drafts; AI handles style conformance, research gaps, formatting.
5. Editorial review: the editor focuses on argument, credibility, transformation. Not fixing commas.
6. Production + publish: final QA, asset creation, formatting. AI handles the heavy lifting.

Collaboration as an editorial step, not an afterthought

The goal isn't to slow down. It's the opposite: front-load the creative collaboration so that drafts arrive with a stronger through-line, fewer revision cycles, and less back-and-forth later. Speed and quality aren't in tension when the thinking happens at the right stage.

The split that actually matters

A pattern runs through all of this. The work divides into two layers, and AI has dramatically different leverage on each.

Creative layer (expand this; invest human energy here): Through-line · Expert relationships · Editorial judgment · Frameworks · The "so what"

Operational layer (compress this): Style conformance · Research compilation · Format conversion · QA · Criteria checks

Where AI has leverage vs. where humans are irreplaceable

In engineering, the infrastructure that enabled 10x gains was technical: centralised APIs, unified tooling. The editorial equivalent isn't a shared AI account or a mandated workflow. It's shared intellectual infrastructure—a quality framework that defines what "good" means at the judgment level, plus a culture where creative collaboration grows as operational work gets automated.

The tools layer on top of that organically. The framework is the infrastructure.

Compress the operational layer ruthlessly. Expand the creative layer deliberately. That's the move. Not "how do we get the team to use AI more?" but "how do we restructure the workflow so our people spend the vast majority of their time on the work that only they can do?"

The real advantage

Here's what I keep coming back to. The anxiety is backwards.

We're not behind because we can't ship like engineering teams. We're ahead of a realisation that every field will eventually reach: the value of human work lives in the judgment, not the output. Engineers are discovering this too, as AI handles more implementation and the differentiator becomes architectural thinking. But content has always lived in this territory. The best editorial work has always been about the through-line, the expertise, the transformation.

AI didn't change that. It clarified it.

If your team hasn't 10xed output, that's not a failure. It probably means the work you're doing is genuinely hard—the kind of hard that doesn't compress. But it can get faster. Not by automating the thinking, but by building better environments for it: tighter collaboration loops, clearer quality frameworks, and AI handling everything that isn't judgment.

The teams that figure this out will be fast and good. They'll build content libraries that compound—not because they published the most, but because what they published was worth building on.

That's always been the game. The technology just made it clearer.

Further reading

The solo workflow I described in the opening comes from pairing voice dictation with AI. I wrote about how that works in Speaking Expands, Typing Compresses.
