Stop Editing AI Output. Train It Instead.
Smart teams build 400-rule style guides and prune AI style guides like bonsai. A faster way exists: train AI with your writing upfront, not edit output after.
You ask ChatGPT to draft an email. It comes back competent, polished, and completely wrong. Not factually wrong. Stylistically wrong. The sentence structure is too uniform. The vocabulary is too safe. The rhythm is off in a way you can feel but can't name.
So you edit. You rewrite the opening. You break up a paragraph. You swap three words that are technically correct but not yours. Five minutes later, you've essentially written the email yourself, with AI serving as an expensive first draft.
This is the default workflow for millions of professionals right now. And some very smart people are trying to fix it.
The Manual Approach: What Smart Teams Are Building
At the media company Every, two practitioners have developed sophisticated systems for making AI output sound less like AI.
Kate Lee, an editor, built a 400-rule style guide that she feeds into Claude Projects. The guide captures everything: sentence length preferences, transition patterns, vocabulary choices, formatting conventions. When writers submit AI-assisted drafts, editors check them against this document. It works. The output is noticeably better than default AI writing.
Katie Parrott, a writer, takes a different approach. She feeds Claude samples of her own writing, then has it interview her about her preferences. Through this back-and-forth process, she builds a personalized style guide that she describes as a "bonsai garden," something she continuously prunes and shapes, adding new rules when she notices patterns she dislikes, removing ones that no longer apply.
Both approaches share the same insight: AI doesn't know how to write like you until you teach it. The question is how you do the teaching.
Why This Works (But Doesn't Scale)
Kate Lee's 400-rule guide is impressive because it does produce better output. But consider what it took to build:
Weeks of editorial expertise. Identifying 400 rules about your writing style requires deep knowledge of linguistics, rhetoric, and editing craft. Most professionals can articulate maybe 10 preferences ("I like short paragraphs," "Don't use passive voice"). The other 390 rules exist in your writing but operate below your conscious awareness. Extracting them requires the kind of trained eye that professional editors spend years developing.
Continuous maintenance. Katie Parrott's bonsai garden metaphor is revealing. A bonsai is beautiful, but it demands constant attention. Every new writing context introduces new patterns. Every shift in audience or platform requires new rules. The garden never finishes growing.
Deep writing expertise as a prerequisite. Both Kate Lee and Katie Parrott are professional writers and editors. They can articulate what makes good writing good because that's their job. A sales executive, a product manager, or an engineer who writes well probably can't build a 400-rule guide about their own style, not because they lack writing ability, but because they lack the metalinguistic vocabulary to describe it.
Eleanor Warnock, also at Every, captures the core problem well: AI-generated text feels "flimsy" because it lacks intentionality. It hasn't been "scrutinized by tough editors." The words are correct. The meaning is present. But the weight behind them is missing.
Editing fixes this one draft at a time. Training fixes it at the source.
The Symptom vs. The Cause
Every minute spent editing AI output is a minute spent treating a symptom.
The symptom: this draft doesn't sound like me. The cause: AI has no model of how you write.
Consider the math. A professional who writes 10 AI-assisted messages per day and spends 5 minutes editing each one loses 50 minutes daily to style correction. That's over 4 hours per week, 200+ hours per year, spent doing the same thing: forcing generic AI output to match a writing style the AI never learned in the first place.
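That arithmetic is easy to sanity-check. A quick sketch, assuming a 5-day work week and roughly 50 working weeks per year:

```python
# Back-of-envelope cost of style-editing AI output.
# Assumes a 5-day work week and ~50 working weeks per year.
messages_per_day = 10
minutes_per_edit = 5

minutes_per_day = messages_per_day * minutes_per_edit  # 50 minutes daily
hours_per_week = minutes_per_day * 5 / 60              # just over 4 hours
hours_per_year = hours_per_week * 50                   # 200+ hours

print(f"{minutes_per_day} min/day, {hours_per_week:.1f} h/week, "
      f"{hours_per_year:.0f} h/year")
```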
Kate Lee's approach moves the fix upstream. Instead of editing every draft, she gave AI a reference document. That's fundamentally smarter. But building that reference document is itself a massive project, one that requires weeks of editorial work, a deep understanding of linguistic patterns, and time that most professionals simply can't spare.
The real question isn't "how do I edit AI output better?" It's "how do I make AI output that doesn't need editing?"
Training AI With Your Writing, Not Fixing Its Output
The answer turns out to be the same insight Katie Parrott stumbled on: feed AI your actual writing, then extract the patterns.
Where her approach requires a manual interview process and ongoing maintenance, this extraction can be automated. The raw material already exists in your sent emails, your Slack messages, your reports, your presentations. Your writing style is encoded in thousands of documents you've already written. The challenge is decoding it systematically.
This is what computational stylometry does. It's an academic discipline (used in forensic linguistics to identify anonymous authors) that analyzes writing across 50+ dimensions: sentence length distribution, punctuation density, vocabulary sophistication, transition patterns, paragraph structure, hedging frequency, and dozens more.
When you apply this kind of analysis to a person's writing samples, you get something like a Writing DNA profile, a complete map of their stylistic patterns that can be translated into AI instructions.
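To make the idea concrete, here is a toy sketch of a few such dimensions. Real stylometric analysis measures far more, and far more carefully, but the mechanics are the same: split, count, normalize. The hedge-word list and regular expressions below are illustrative only:

```python
import re
from statistics import mean, stdev

def style_fingerprint(text):
    """Toy stylometric profile: a handful of the 50+ dimensions
    a full analysis would measure."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    hedges = {"maybe", "probably", "perhaps", "might", "somewhat", "arguably"}
    return {
        # Sentence length distribution: mean and spread
        "avg_sentence_len": mean(len(s.split()) for s in sentences),
        "sentence_len_spread": (stdev(len(s.split()) for s in sentences)
                                if len(sentences) > 1 else 0.0),
        # Punctuation density (commas, semicolons, colons, parens)
        "punct_per_100_words": 100 * len(re.findall(r"[,;:()]", text)) / len(words),
        # Hedging frequency
        "hedges_per_100_words": 100 * sum(w.lower() in hedges for w in words) / len(words),
        # Vocabulary variety (type-token ratio)
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
    }

sample = ("I think this draft works. Maybe we trim the intro; "
          "it runs long, and the close could probably land harder.")
profile = style_fingerprint(sample)
```

Run across hundreds of samples instead of two sentences, counts like these become a stable signature of how one person writes.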
The difference between this approach and a manual style guide:
| Manual Style Guide | Automated Style Analysis |
|---|---|
| Requires editorial expertise to build | Requires only writing samples |
| Captures 10-20 conscious preferences | Captures 50+ measurable dimensions |
| Needs ongoing maintenance | Updates when you add new samples |
| Takes weeks to develop | Takes minutes to generate |
| Reflects what you think your style is | Reflects what your style actually is |
That last point matters more than it seems. Most writers have a gap between their perceived style and their actual style. You might believe you write short sentences, but analysis of your samples reveals an average sentence length of 22 words. You might think you avoid jargon, but your vocabulary sophistication score tells a different story. Automated analysis closes that gap.
How MyWritingTwin Does This Automatically
MyWritingTwin takes the approach Kate Lee and Katie Parrott developed and removes the manual labor.
Step 1: Collect your writing samples. You upload emails, documents, messages, anything that represents how you actually write. The system needs a minimum of 3 samples, with 5+ recommended per communication context (formal emails, casual Slack messages, client presentations).
Step 2: Complete a style assessment. A short questionnaire captures the preferences you are aware of: your audience, your formality level, your communication goals. This provides context that pure text analysis can miss.
Step 3: Writing DNA extraction. The system analyzes your samples across 50+ linguistic dimensions, using the same computational stylometry techniques that forensic linguists use to identify authors. Sentence rhythm, punctuation patterns, vocabulary choices, structural preferences, tonal markers: all of it gets mapped.
Step 4: Style Profile generation. The extracted patterns get compiled into a Master Prompt, a structured set of instructions you paste into ChatGPT, Claude, Gemini, or any AI tool. The AI then writes in your style by default, no editing required.
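As a purely hypothetical illustration of that final step, extracted metrics can be rendered into a reusable prompt. The field names, thresholds, and wording below are invented for this sketch and are not MyWritingTwin's actual format:

```python
# Hypothetical sketch: turning extracted style metrics into a
# reusable style prompt for any chat AI. Field names and
# thresholds are illustrative, not the product's real schema.
def build_style_prompt(profile):
    lines = ["Write in my personal style:"]
    lines.append(f"- Average sentence length: about "
                 f"{profile['avg_sentence_len']:.0f} words.")
    if profile["hedges_per_100_words"] > 5:
        lines.append("- Hedge claims freely (maybe, probably, I think).")
    else:
        lines.append("- State claims directly; avoid hedging words.")
    lines.append(f"- Punctuation density: roughly "
                 f"{profile['punct_per_100_words']:.0f} marks per 100 words.")
    return "\n".join(lines)

profile = {"avg_sentence_len": 14.2,
           "hedges_per_100_words": 2.1,
           "punct_per_100_words": 8.0}
print(build_style_prompt(profile))
```

The output is plain text you could paste into any AI tool's system or custom instructions.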
The maintenance angle maps directly to Katie Parrott's bonsai garden concept. When your writing style evolves (new role, new audience, new platform), you add new samples. The system re-analyzes and updates your Style Profile. But instead of manually pruning rules, the update happens automatically.
What Changes When AI Already Knows Your Style
The shift from "edit after" to "train before" changes the entire workflow.
Instead of generating a draft and spending 5 minutes fixing the tone, you generate a draft that already matches your sentence rhythm, vocabulary level, and structural preferences. The editing that remains is purely substantive (facts, strategy, nuance), not stylistic.
Kate Lee's 400 rules become unnecessary because the Style Profile captures patterns at a resolution no human editor would manually specify. Rules like "use semicolons 2.3% of the time" or "average 1.7 hedging phrases per paragraph" aren't the kind of thing anyone writes down, but they're exactly the kind of thing that makes writing sound like you.
For teams, this scales differently from a shared style guide. Each person on a team can have their own Style Profile deployed to their own AI tools. The marketing director writes differently than the CEO, who writes differently than the support lead. One shared style guide forces everyone into the same voice. Individual Writing Twins preserve each person's authentic writing style while still benefiting from AI speed.
Ready to Stop Editing and Start Training?
MyWritingTwin analyzes your writing samples and builds a Style Profile you can deploy to ChatGPT, Claude, or any AI tool.
Build Your Writing Twin for Any AI →
Or learn more about how Style Analysis works.