Why AI Writing That Works in Email Falls Apart in Slack
Your writing style shifts measurably across platforms. Research shows stylometric models lose up to 55 percentage points of accuracy when the context changes, even with the same author.
By Emmanuel
You set up AI instructions based on your email writing. The results are good. Then you use the same profile for a Slack message. The AI writes something that sounds like a memo. You edit it down. It still feels stiff.
The problem isn't your instructions. It's what those instructions actually captured.
You Don't Have One Writing Style
You have several.
The way you write a client email is not the way you write a Slack reply to your team. Your LinkedIn posts use a different rhythm than your internal reports. Your quarterly reviews don't read like your quick status updates.
This is calibration. Every professional adjusts their writing based on the platform, the audience, and the stakes. The calibrations are subtle and systematic, and they accumulate over years of professional writing in each context.
When you ask AI to learn "your style," the question becomes: which one?
The Research Behind the Problem
Linguists call what you just read "register variation." Every writer maintains multiple registers, and the differences between them are not cosmetic. They show up in sentence length, punctuation density, hedging frequency, structural preferences, and the ratio of content words to function words.
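To make those features concrete rather than abstract, here is a minimal sketch of how a few of them could be computed. The hedge and function-word lists are illustrative assumptions, not a validated inventory:

```python
import re

# Illustrative word lists; a real analysis would use larger, validated inventories.
HEDGES = {"maybe", "perhaps", "possibly", "might", "could", "may", "somewhat"}
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "or", "but", "is", "it"}

def register_features(text: str) -> dict:
    """Surface features that shift between registers for the same author."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": sum(c in ",;:()" for c in text) / max(len(words), 1),
        "hedging_frequency": sum(w in HEDGES for w in words) / max(len(words), 1),
        "function_word_ratio": sum(w in FUNCTION_WORDS for w in words) / max(len(words), 1),
    }

print(register_features("Perhaps we could revisit the proposal; it may need another pass."))
print(register_features("Ship it. Looks good."))
```

Run those two calls and the numbers diverge sharply: same hypothetical author, two registers.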
A team of researchers at several European universities quantified exactly how much this matters for AI writing.
They trained authorship attribution models on text from a fixed set of authors in one domain, then tested those same models on text from a different domain written by the same authors. The models used character trigram frequencies, one of the most widely used feature types in stylometric research.
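Character trigrams are simply overlapping three-character windows of text. A minimal sketch of extracting their frequencies, the general technique rather than the study's exact pipeline:

```python
from collections import Counter

def char_trigrams(text: str) -> Counter:
    """Count overlapping three-character windows across the text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

print(char_trigrams("Thanks for the update. I'll review it tomorrow.").most_common(5))
```

Because trigrams span spaces and punctuation, they pick up genre habits (greeting formulas, punctuation style) alongside personal ones, which helps explain why they transfer poorly across domains.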
The results: accuracy dropped by up to 55.4 percentage points when the domain changed, even though the authors were identical (Bischoff et al., Corpus Linguistics and Linguistic Theory).
These models weren't learning "author style" as a pure, context-independent signal. They were learning a mixture of author style and domain style: the conventions of that genre of writing. When the domain changed, most of what the model had learned became actively misleading.
This is exactly the Slack problem. An AI trained primarily on your email writing learns a mixture of:
- Your personal patterns (sentence rhythm, vocabulary preferences, structural tendencies)
- Email-as-a-genre conventions (formal openings, request structures, paragraph length norms)
When you ask that AI to write a Slack message, it can't separate the two. It applies email conventions alongside your personal patterns. The result sounds like you, but reads as tonally wrong.
Why Surface Features Break Down Across Contexts
The issue goes deeper than genre conventions. The features that work well for capturing your style within one context actively mislead the model in another.
Research on neural authorship attribution demonstrates this distinction. Models that rely heavily on lexical features, the specific words you choose, are sensitive to topic and context. Your business vocabulary and your casual vocabulary are genuinely different. A model trained on your formal writing will consistently choose the more formal synonym even in contexts where you would not.
Syntactic features are more stable. How you construct sentences, your preference for fragments or subordinate clauses, your ratio of active to passive constructions, your use of sentence-initial connectives: these patterns persist across contexts more reliably than vocabulary does. They're the part of your writing that's most genuinely "you" rather than "you writing in this particular genre."
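As a loose illustration, and not the method used in the research, a dependency parser such as spaCy can approximate some of these syntactic rates. The heuristics below are rough assumptions:

```python
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def syntactic_profile(text: str) -> dict:
    """Rough per-sentence rates for a few parse-based habits."""
    sents = list(nlp(text).sents)
    n = max(len(sents), 1)
    return {
        # Sentences containing a passive subject or passive auxiliary
        "passive_ratio": sum(any(t.dep_ in ("nsubjpass", "auxpass") for t in s) for s in sents) / n,
        # Sentences whose root is not a verb: a crude fragment detector
        "fragment_ratio": sum(s.root.pos_ not in ("VERB", "AUX") for s in sents) / n,
        # Sentences opening with a coordinating or subordinating connective
        "initial_connective_ratio": sum(s[0].pos_ in ("CCONJ", "SCONJ") for s in sents) / n,
    }

print(syntactic_profile("The report was reviewed by legal. But we shipped anyway. Quick note."))
```

Metrics like these stay comparatively flat when the same writer moves between email and Slack, which is what makes them worth isolating.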
The practical implication: AI systems that capture primarily surface-level similarity to your provided samples will show exactly the accuracy collapse the research predicts when they encounter a different writing context.
How Many Writing Contexts Do You Actually Have?
Most knowledge workers operate across at least four distinct contexts:
Asynchronous internal (Slack, Teams, internal threads): Short sentences, direct asks, assumed shared context, low hedging, fragments accepted.
Formal external (client emails, proposals, executive communication): Structured paragraphs, appropriate hedging, formal openings, context provided, careful word choice.
Professional public (LinkedIn, published pieces, newsletters): Considered, readable by strangers, no assumed context, identity-conscious, edited voice.
Operational (status updates, meeting notes, documentation): Efficient, factual, structured, no personality required.
Each context has its own sentence-length baseline, formality level, hedging frequency, and structural defaults. Your instincts for each context developed over years of writing in that context, and they're largely unconscious.
A Style Profile built from formal email samples captures one of these contexts reliably. The other three require calibration against samples from those contexts.
The Linguist Who Measured This 30 Years Ago
Register variation across contexts is a documented linguistic phenomenon.
Douglas Biber, a corpus linguist at Northern Arizona University, spent years analyzing how writing varies across what he called "registers": academic prose, fiction, news, conversation, letters, and more. His 1988 work established that texts cluster into distinct types based on 67 co-occurring linguistic features, things like nominal vs. verbal density, passive construction rates, hedging frequency, and tense distribution.
His core finding: these features co-vary systematically. Academic writing uses high nominal density, hedging, and passive constructions together. Conversation uses high verbal density, fragment frequency, and active constructions together. The patterns aren't random. They form coherent register profiles, and the differences between registers are often larger than the differences between individual authors within the same register.
This is the foundation of the domain problem in AI writing. Your author style sits on top of a register baseline. Take you out of your email register and put you in your Slack register, and the baseline shifts under your feet. An AI that hasn't been trained to account for that shift will keep applying the old baseline.
Why Adding More Samples From the Same Context Doesn't Help
The intuitive fix is to add more examples. It doesn't work.
More emails don't teach the AI how you write in Slack. The model learns more about email conventions, gets better at email, and keeps failing just as consistently in every other context.
The research finding maps directly to this: domain shift is not a volume problem. You can't train your way out of a context mismatch by adding more data from the same context. The model needs samples from the target context, because what it needs to learn is how your patterns shift, not just what your patterns are.
This is also why users who build Style Profiles from a single document type, say a collection of LinkedIn posts, get excellent LinkedIn output and strange everything else. The profile is accurate for what it was trained on. It was never designed to generalize.
What Context-Aware Style Profiles Look Like
A context-aware Style Profile is one document with explicit calibration instructions.
Rather than "use an average sentence length of 18 words," it contains: "formal external: 20-22 words, moderate hedging, structured paragraphs. Internal async: 6-10 words, no hedging, fragments acceptable." The AI has one profile. It adjusts based on the context you're writing in.
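A hypothetical excerpt of what those calibration blocks could look like inside the profile (the sentence-length figures match the example above; everything else is illustrative, not a MyWritingTwin template):

```
CONTEXT: formal external (client email, proposals)
- Sentences: 20-22 words average; full paragraphs
- Hedging: moderate ("I'd suggest", "it may be worth")
- Opening: greeting plus one line of context

CONTEXT: internal async (Slack, Teams)
- Sentences: 6-10 words; fragments acceptable
- Hedging: none; make direct asks
- Opening: none; start with the point
```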
Building this requires samples from across your actual writing contexts. Not "three emails." Specifically: a client proposal, internal Slack threads, a LinkedIn post, a status update. The sample spread forces the model to recognize that you're the same person with different calibrations, not different people.
MyWritingTwin's sample collection step asks for exactly this distribution. The questionnaire captures context-specific preferences you may not have consciously articulated: how formally you open messages in different contexts, how much you hedge when writing to different audiences, and what structural conventions you follow in each setting.
The result is a Style Profile for ChatGPT, Claude, or any AI tool that contains explicit context-switching logic. Not just "how this person writes," but "how this person writes when the stakes and audience change."
A Quick Audit of Your Current Profile
If you're already using AI writing assistance, run this check:
Pull up the last five things you wrote in your most casual communication context (Slack, internal messages). Then pull up the last five things AI generated for you using your current instructions in the same context.
Compare sentence length, hedging frequency, and openings: how the AI starts its messages versus how you start yours when writing natively.
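If you'd rather measure than eyeball it, a minimal script along these lines can compute the first two metrics over both sets (the hedge list is an illustrative assumption):

```python
import re

HEDGES = {"maybe", "perhaps", "possibly", "might", "could", "may"}  # illustrative list

def audit(messages: list[str]) -> dict:
    """Average sentence length and hedging rate across a set of messages."""
    sentences = [s for m in messages for s in re.split(r"[.!?\n]+\s*", m) if s.strip()]
    words = [w for m in messages for w in re.findall(r"[a-zA-Z']+", m.lower())]
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "hedges_per_100_words": 100 * sum(w in HEDGES for w in words) / max(len(words), 1),
    }

print("you:", audit(["ship it", "looks good, one nit on line 40", "can you resend?"]))
print("AI :", audit(["I wanted to follow up regarding the deployment, which may need review."]))
```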
If the AI writes longer, more formally structured, and more hedged than you actually do in that context, your Style Profile was built primarily from formal samples. The domain shift is showing.
The fix is adding samples from the casual context specifically. Once the model has seen your Slack patterns explicitly, it can calibrate to them. Until then, it defaults to what it has the most evidence for: the formal register it was trained on.
Ready to Build a Style Profile That Knows Where You Are?
MyWritingTwin creates context-aware Style Profiles you deploy to ChatGPT, Claude, or any AI tool. The sample collection asks specifically for writing across your communication contexts, so the output isn't one register overfitted to your emails.
Build Your Context-Aware Writing Twin →
Or read more about how Style Analysis actually works.