AI for Editors: It’s a Workflow Problem

By Corey Vilhauer

AI is not a people problem. It’s a workflow problem. Hear me out.

For decades, the editorial process has been the same: a rambling path from brainstorming to research to drafting to editing. Every writer and every editor loathes some part of this process, not because it’s hard or bad, but because writers get into the “writing” business for their own selfish reasons. They enjoy learning and processing facts, or they love the craft and wordplay, or they love solving problems and language is their tool of choice. But I know very few writers who love every part of it.

Which is why the conversation about AI in editorial work should be … more interesting. Everyone hates different parts of the process, so theoretically everyone could benefit from help with the parts they hate.

But that’s not the conversation we’re seeing. Instead, the conversation is around people. No one will ever say it, but for scared writers and editors, AI looks like a people problem. A staff replacement problem. A solution for stakeholders, rather than a solution for the editors. Which, to be frank, is not new.

We’ve watched this pattern for twenty years in web development: new technology arrives, vendors promise transformation, companies buy platforms, and then… nothing changes. The problem was never whether the technology worked. The problem was that nobody stopped to ask how it would work within the actual, messy, complicated ways people get things done.

AI in editorial work is the same story. The real question isn’t “can AI write this?” It’s “where in our editorial process would AI actually create usable efficiencies for the people doing the work?”

The gap between demo and Tuesday afternoon.

You have already been accosted by an AI demo. Some of them are legitimately impressive. Some of them are highly questionable. It doesn’t matter which is which, because we have to understand the purpose of a product demo: to get people excited about the promise of a new technology.

We know how this works. Someone sees an impressive AI demo — maybe the tool generates a blog post in seconds, or it summarizes a research paper perfectly, or it rewrites copy in three different tones. Leadership gets excited. Budget gets allocated. The tool gets purchased.

And then it sits there.

Because on Tuesday afternoon, when your senior editor is trying to get three articles through legal review, coordinate with two stakeholders who haven’t responded to emails, maintain brand voice across four different content channels, and still make Thursday’s publication deadline — the AI tool that generates decent first drafts doesn’t solve any of those problems.

The gap between “this technology is impressive” and “this technology fits into how we actually work” is where most AI adoption dies. And it dies there because organizations focus on the technology instead of the workflow. That’s not necessarily the fault of the AI vendor — instead, it’s the fault of the organization for trying to bolt an AI onto the side of whatever they’ve already got going on, usually because the right people weren’t in the room to begin with.

Your editorial team already knows where the problems are.

When we talk about “the right people,” we’re of course talking about the editorial team. Before you evaluate a single AI tool, talk with your current editorial team and map out your workflow. The REAL workflow — not the idealized version from the process documentation nobody has updated since 2019, but the real, messy, sometimes frustrating one that nobody really wants to talk about.

Where does time actually go? Research and source-gathering. Rounds of revision. Reformatting content for different channels. Waiting for stakeholder feedback. Updating metadata. Checking style guide compliance. Version control. Coordination across teams. Your process includes dozens of steps that extend and complicate the timeline, and you might not even be aware of which steps are costing you the most time. Which leads to the biggest question in editorial AI adoption: which of your steps require editorial judgment, and which are mechanical tasks that eat hours without requiring creative thinking?

  • Editorial judgment looks like: Is this the right story to tell right now? Does this serve our audience’s needs? Is this on-brand? Does this argument actually hold together? What’s missing that we need to address?
  • Mechanical tasks look like: Converting this transcript into readable prose. Pulling all the statistics we’ve cited on this topic. Checking whether we’ve used these terms consistently. Reformatting this piece for email versus web.

AI doesn’t replace the first category. But it can accelerate the second — if you integrate it into your workflow instead of bolting it onto the side.

Where AI creates efficiency.

The valuable applications of AI in editorial work are, frankly, unglamorous. They don’t lead to compelling demos. But they save real time on real work. Here’s where AI works well:

  • Research synthesis — Not asking AI to write your article, but asking it to pull key points from a stack of source documents so your team can quickly assess what’s relevant. Your editorial team still decides what matters and how to frame it — but they spend less time hunting through PDFs.
  • Format adaptation — You’ve published a long-form feature. Now you need a social media summary, an email teaser, and a metadata description. AI can generate starting points for all three — which your editor reviews and adjusts, but doesn’t draft from scratch.
  • Consistency — When writing The Web Project Guide, we eventually had to take 24 separate chapters written by two different authors and smush them into a single, consistent whole. That’s a lot of inconsistency. But it’s not just for gigantic books. Have you used “healthcare” versus “health care” consistently? Are your statistics formatted the same way throughout? Does your article maintain the same tone from start to finish? AI can flag inconsistencies faster than a human reading through hundreds of pages.
  • Repeatable scale — If you need to assess a thousand existing articles for outdated information or reorganization, AI can help make content audits less painful, quickly categorizing and surfacing patterns so your editorial team can focus on decision-making, not spreadsheet work.
  • Repurposing content — You’ve written extensively about a topic across multiple articles. AI can help identify what you’ve already said and suggest how to consolidate or update it — but your editor still makes the strategic calls about what serves the current audience needs.

Notice what all of these have in common: AI handles the mechanical processing. Your editorial team handles the thinking.
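To make “mechanical processing” concrete, here’s a minimal sketch of the consistency check described above — a hypothetical script, not any vendor’s tool, assuming a made-up style-guide list of variant spellings. It flags text that mixes more than one variant of the same term:

```python
import re

# Hypothetical style-guide entries: each tuple lists variant spellings
# of one term. A document should pick one variant and stick with it.
TERM_VARIANTS = [
    ("healthcare", "health care"),
    ("email", "e-mail"),
]

def flag_inconsistencies(text):
    """Return per-variant counts for any term used in more than one form."""
    flags = []
    lowered = text.lower()
    for variants in TERM_VARIANTS:
        # Count whole-word (and whole-phrase) occurrences of each variant.
        counts = {
            v: len(re.findall(r"\b" + re.escape(v) + r"\b", lowered))
            for v in variants
        }
        # Flag only when two or more variants actually appear.
        if sum(1 for n in counts.values() if n > 0) > 1:
            flags.append(counts)
    return flags

sample = (
    "Our healthcare coverage is expanding. "
    "Patients told us health care costs are the top concern. "
    "Email us with questions."
)
print(flag_inconsistencies(sample))
# → [{'healthcare': 1, 'health care': 1}]
```

A rules-based script like this handles exact variants; an AI tool earns its keep on the fuzzier version of the same task — tone drift, inconsistent statistic formatting — but the division of labor is identical: the machine surfaces the inconsistencies, and the editor decides which form is correct.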

What should stay human.

Editorial judgment shouldn’t be automated away. By definition, it can’t be automated away in the first place — editorial judgment means making decisions with a human eye toward audience and intent. If someone’s selling you an AI tool that claims it can handle this, walk away from that conversation.

  • Strategic content decisions should remain human — Which stories matter right now? What does our audience need that they’re not getting? How do we balance competing stakeholder priorities? AI can’t answer these questions because they require institutional knowledge, political awareness, and understanding of context that exists outside any content management system.
  • Brand voice should remain human — Sure, AI can mimic tone — but maintaining authentic organizational voice across changing circumstances requires the kind of judgment that comes from actually being part of the organization and understanding what it stands for.
  • Quality assessment should remain human — As a historically awful speller, I am eternally thankful for AI tools that check grammar and flag inconsistencies. But what these AI tools cannot reliably do is answer whether content “actually makes sense,” whether it is “factually accurate,” or whether it will “land the way we intend.” These require editorial expertise that no tool can replace.
  • Stakeholder management should remain human — If your VP has strong opinions about how you’ve framed something, no AI tool is going to navigate that conversation for you.

And honestly, all of this is good news. These are the parts of editorial work that require actual expertise — the parts that make your team valuable. If AI handles more of the mechanical processing, your editors get to spend more time on editorial thinking. That’s not replacement. That’s freed capacity.

How to actually implement editorial AI tools.

First, start with process, not platform. Don’t begin by evaluating AI tools; instead begin by identifying a specific workflow pain point your team actually experiences.

  • Pick one thing. Maybe it’s a bad habit — not making time to add alternative text, or not giving enough attention to your page summaries or FAQ. Maybe it’s an extension of your original work — the time spent reformatting content for different channels, for example. Maybe it’s the research phase for weekly industry roundups. Maybe it’s keeping metadata current across a large content library. Choose something concrete and measurable. Then — and only then — look for AI tools that address that specific workflow problem. Pilot the tool with a small team, and then measure time saved on that specific task, rather than abstract “productivity gains.”
  • Build your process before you need it. If AI is drafting anything that will eventually be published, define clearly: Who reviews it? What are they checking for? What’s the approval chain? You need guardrails in place before content starts flowing through.
  • Train on your editorial standards. The out-of-the-box AI tool doesn’t know your style guide, your brand voice, your legal requirements, or your audience expectations. If you’re going to integrate AI into your workflow, you need to teach it how your organization works — which means your editorial team needs to be involved in implementation, not just handed a tool after the purchase.

Most importantly, be honest about what’s working and what’s not. If something isn’t saving time or creating usable output, stop using it. The goal is better workflows, not checking a box that says you’ve “adopted AI.”

Better workflows = better content.

It’s all so much more than “put an AI on it.” Successful integration means your team has more time for the work that requires creative thinking and editorial judgment. They spend less time on mechanical tasks. They can iterate faster. They can experiment more. They can pay attention to quality instead of scrambling to meet deadlines.

That’s actually pretty useful. That’s an extension of the promise we’ve already built through early workflow tools like Grammarly (grammar and spelling), Google Drive (collaboration and access), and content management vendors like Optimizely and Umbraco (personalization, data capture, fast and reliable publishing).

This isn’t a technology problem. The technology works. This is a workflow problem, and workflow problems get solved by understanding how work actually happens, identifying where efficiencies matter, and integrating tools thoughtfully into existing processes.

As an AI-pessimist, I can tell you that the editorial promise of AI-workflow tools is a bright one, if it’s handled correctly. That means listening to your editorial team, adjusting your workflow, and focusing on the parts of the process we each hate in the first place. Your editorial team’s expertise becomes more valuable in this equation, not less, because instead of competing with AI, they’re using it to handle the parts of the job that don’t require human judgment. There’s always pain in the process — that’s just how writing works — but that pain can be lessened considerably by understanding the “why” of tool adoption, rather than the “what” or “how.”

That’s the conversation worth having. The discussion shouldn’t focus on fewer editors; it should focus on better content — because your editorial team’s expertise is being applied where it matters most. Start there, and you’ll end up with AI adoption that actually works.