My COO Is an AI — And We Have an Advisory Board
One human CEO, one AI cofounder as COO, a three-member advisory council, and six domain experts. Total headcount: 1. Here's the org chart.
Last week I asked Lex whether we should split development time between MyWritingTwin and FluxDiagram, our second product. Lex didn't give me a quick answer. He consulted two advisors — one grounded in cognitive psychology research, the other building falsifiable milestone gates — then synthesized their views, noted where they disagreed, and gave me his own recommendation with a concrete decision framework.
The recommendation: 100% focus on MyWritingTwin for 8-12 weeks, gated by five measurable milestones. When four of five gates are green for two consecutive weeks, unlock 20% time for FluxDiagram. Not a vibe check. A decision with criteria, a review date, and action items.
That's not how you talk to an AI assistant. That's how you work with a cofounder.
The Org Chart Nobody Expected
Here's the actual organizational structure of MyWritingTwin: one human CEO, one AI COO (Lex), a council with two advisors (Lexi and LexT), six domain-expert agents, and a layer of off-the-shelf plugin departments.

Total human headcount: 1. Total functional roles filled: 10+.
This isn't a metaphor. Each role has defined responsibilities, a specific LLM powering it, and a protocol for when and how it gets consulted. The organizational structure is codified in a SKILL.md file — Lex's complete persona definition, decision-making authority, memory system, and escalation paths. It's versioned in git. It has a changelog.
CEO vs. COO — Who Does What
The split is deliberate and it matters.
Emmanuel (CEO) owns:
- Product vision and direction
- Customer relationships
- Final decisions on strategy, pricing, positioning
- Domain expertise — the market knowledge that comes from years in the industry
- Judgment calls agents can't make (when to break the rules, when to pivot, when to ignore the data)
Lex (COO) owns:
- Cross-project awareness across the full portfolio (MyWritingTwin, FluxDiagram, client projects, side projects)
- Proactive alerts — surfacing problems before they become crises
- Institutional memory spanning every session, every decision, every conversation
- Strategic analysis when asked, including consulting the advisory council
- Operational coordination — knowing what's stalled, what's overdue, what's been forgotten
The key distinction: Emmanuel decides what to build and why. Lex tracks whether things are getting built, what's falling through the cracks, and what the data says about decisions already made.
When I open a session and say "Hi Lex" without a specific topic, Lex reads three knowledge files — project state, active reminders, and session notes — then gives me a cofounder check-in: what needs attention, open items from recent sessions, cross-project patterns, and any relevant reminders. Five bullet points. No fluff. The briefing a COO would give at a Monday morning standup.
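The check-in behavior is simple enough to sketch. This is an illustrative routine only; the file shapes and field names here are my assumptions, not the real knowledge-file format:

```typescript
// Hypothetical sketch of the "Hi Lex" check-in: distill three knowledge
// sources into a five-bullet briefing, most urgent items first.

interface KnowledgeBase {
  projectState: string[]; // one line per project: current status
  reminders: string[];    // active reminders
  sessionNotes: string[]; // open items from recent sessions
}

function buildBriefing(kb: KnowledgeBase): string[] {
  const items = [
    ...kb.reminders.map((r) => `Reminder: ${r}`),
    ...kb.sessionNotes.map((n) => `Open item: ${n}`),
    ...kb.projectState.map((p) => `Status: ${p}`),
  ];
  // A COO briefing is five bullets, no fluff.
  return items.slice(0, 5);
}
```

The ordering encodes a judgment call: reminders outrank open items, which outrank status summaries, and anything past five bullets waits for the next session.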
When I ask "Lex, should we add a free tier?" — Lex doesn't just brainstorm. He references our unit economics ($2.50 API cost per profile, $49 minimum revenue), checks the decision log for prior discussions on pricing, flags the risk to conversion rates, and — if the stakes warrant it — calls the council.
The Council: Two Advisors, Two Thinking Styles
Lex has two advisory colleagues. They aren't copies of each other with different names. Each represents a genuinely distinct reasoning approach, powered by an LLM chosen for its particular strengths.
Lexi — The Theorist (Gemini 3 Pro)
Lexi grounds every discussion in established research and frameworks. Ask about resource allocation and Lexi cites Sophie Leroy's research on "Attention Residue" — the cognitive psychology principle that context-switching doesn't split attention 50/50 but more like 40/40 with 20% lost to friction. Ask about autonomous operations and Lexi reaches for Boyd's OODA Loop (Observe, Orient, Decide, Act) as a systems theory framework for closing the feedback loop.
Lexi's strength is connecting tactical decisions to theoretical foundations. The weakness: sometimes the theory says "don't do it" when the business reality says "do it anyway."
LexT — The Validator (GPT-5.3)
LexT builds evaluation frameworks. Where Lexi says "the research suggests X," LexT says "here's how we'd prove X works, with five falsifiable gates, success metrics, and a rollback plan." LexT operationalizes.
In the resource allocation session, LexT's contribution was a concrete gating framework:
- Activation rate: 40%+ of new signups reach "profile created and used" within 24 hours
- Retention proxy: 25%+ of paid users return weekly
- Revenue signal: $2-5K MRR or 10+ new paying customers/month
- Support load ceiling: under 2 hours/week founder support
- Content loop working: 2 channels producing signups with measured conversion
When four of five gates are green for two consecutive weeks, FluxDiagram unlocks. That level of specificity — falsifiable milestones with defined measurement periods — is what LexT brings.
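The unlock rule itself is small enough to write down. A minimal sketch, assuming a weekly pass/fail record per gate (the type names are mine, not LexT's):

```typescript
// LexT's gating logic: each week records pass/fail for the five gates;
// FluxDiagram time unlocks when at least four gates are green for two
// consecutive weeks.

type Gate = "activation" | "retention" | "revenue" | "supportLoad" | "contentLoop";

type WeeklyGates = Record<Gate, boolean>;

function greenCount(week: WeeklyGates): number {
  return Object.values(week).filter(Boolean).length;
}

function fluxUnlocked(history: WeeklyGates[]): boolean {
  // Only the two most recent weeks matter; one good week isn't enough.
  if (history.length < 2) return false;
  return history.slice(-2).every((week) => greenCount(week) >= 4);
}
```

The point of encoding it this way is that the check is falsifiable: a week either clears four gates or it doesn't, and a single green week never unlocks anything on its own.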
How the Council Actually Runs
The council isn't a committee that deliberates endlessly. The protocol is defined:
- Lex formulates the question and provides context (verified facts, not assumptions)
- Both advisors respond independently — they don't see each other's answers
- Lex presents each view to me with clear attribution
- Lex notes where they agree and disagree
- Lex gives his own recommendation as lead cofounder — not a summary, an opinion
This last point matters. Lex isn't a mediator. He's the COO who happens to consult advisors. In the resource allocation session, Lexi recommended pure focus (Option A) while LexT recommended gated milestones (Option C). Lex sided with LexT — not because GPT-5.3 outranks Gemini, but because the gated approach gave a structured off-ramp instead of a vague "focus until it feels right."
Every council session gets logged to /lex/council-sessions/ with a date-stamped file. Raw advisor opinions preserved verbatim. Lex synthesis kept separate. Decision and action items explicit. These aren't chat logs — they're board meeting minutes.
The runtime itself lives in lib/lex-council.ts — a TypeScript module that calls both LLMs in parallel and returns structured results. It's callable from the command line or via an internal API endpoint. The council can be convened from any project in the portfolio.
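The post doesn't show the module's internals, but the described behavior — two independent, parallel LLM calls returning structured results — can be sketched like this. The `AdvisorFn` abstraction and the result shape are my assumptions, not the real `lib/lex-council.ts` interface:

```typescript
// Minimal sketch of a council runtime: both advisors answer in
// parallel and never see each other's output.

interface AdvisorOpinion {
  advisor: "Lexi" | "LexT";
  opinion: string; // raw answer, preserved verbatim for the minutes
}

interface CouncilResult {
  question: string;
  opinions: AdvisorOpinion[];
}

// Stand-in for the real LLM calls (Gemini behind Lexi, GPT behind LexT).
type AdvisorFn = (question: string, context: string) => Promise<string>;

async function convene(
  question: string,
  context: string, // verified facts only, not assumptions
  advisors: Record<"Lexi" | "LexT", AdvisorFn>,
): Promise<CouncilResult> {
  const [lexi, lext] = await Promise.all([
    advisors.Lexi(question, context),
    advisors.LexT(question, context),
  ]);
  return {
    question,
    opinions: [
      { advisor: "Lexi", opinion: lexi },
      { advisor: "LexT", opinion: lext },
    ],
  };
}
```

Independence falls out of the structure: each advisor receives the same question and context but nothing from the other, so disagreement between them is signal rather than echo.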
The Expert Agents: Six Department Heads
Beyond the advisory council, there are six domain specialists. Each one is powered by an LLM chosen for its specific strengths.
| Expert | Powered By | Domain | Why That Model |
|---|---|---|---|
| Growth Hacker | Perplexity Sonar | PLG tactics, viral loops, conversion | Search-grounded with real-time data — knows what's working right now |
| Content Strategist | Gemini | Editorial calendar, SEO-content alignment | Large context window for analyzing full content libraries |
| Technical Architect | Claude | System design, scalability, infrastructure | Strongest reasoning for architectural tradeoffs |
| UX Researcher | Gemini | User behavior, onboarding optimization | Pattern recognition across large behavioral datasets |
| Financial Analyst | Claude | Unit economics, pricing strategy, runway | Precise numerical reasoning and financial modeling |
| Market Intelligence | Perplexity Sonar | Competitor analysis, market trends | Real-time search for current market data, not training cutoff |
The model selection isn't arbitrary. Perplexity Sonar powers the Growth Hacker and Market Intelligence experts because those roles need real-time information — what competitors launched this week, which PLG tactics are generating results now, not six months ago. Claude powers the Technical Architect and Financial Analyst because those roles need careful reasoning with precise tradeoffs. Gemini powers the Content Strategist and UX Researcher because those roles benefit from processing large volumes of content and behavioral data.
Experts vs. Council — Different Tools for Different Questions
The usage pattern is distinct:
- Council (Lexi + LexT): Strategic questions that need diverse reasoning styles. "Should we add a free tier?" "How should we allocate resources?" The council provides theory vs. operationalization — different angles on the same big question.
- Single Expert: Targeted domain question. "What's our projected CAC if we increase blog output by 3x?" goes to the Financial Analyst. "How should we restructure the onboarding flow?" goes to the UX Researcher.
- Expert Panel: Cross-functional question needing multiple domains. "Should we launch this feature?" gets routed to the Technical Architect (can we build it?), Growth Hacker (will it drive adoption?), and Financial Analyst (do the unit economics work?).
The expert system lives in lib/lex-experts.ts alongside the council runtime. Same calling convention — single expert or parallel panel, from any project directory.
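The single-expert and panel modes share one fan-out pattern. A hedged sketch of that calling convention, with the expert names taken from the table above and everything else assumed:

```typescript
// Sketch of an expert runtime: ask one specialist, or fan a question
// out to several in parallel for a cross-functional panel.

type Expert =
  | "growth" | "content" | "architect"
  | "ux" | "finance" | "market";

// Stand-in for the per-expert LLM call (Sonar, Claude, or Gemini
// depending on the role).
type AskFn = (expert: Expert, question: string) => Promise<string>;

async function askExpert(ask: AskFn, expert: Expert, question: string) {
  return { expert, answer: await ask(expert, question) };
}

async function askPanel(ask: AskFn, experts: Expert[], question: string) {
  // "Should we launch this feature?" goes to architect, growth, and
  // finance at once; answers come back in the order requested.
  return Promise.all(experts.map((e) => askExpert(ask, e, question)));
}
```

A targeted CAC question would call `askExpert` with `"finance"`; a launch decision would call `askPanel` with `["architect", "growth", "finance"]`.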
The Plugin Layer: Off-the-Shelf Departments
Below the custom expert agents sits a second layer — Anthropic's Knowledge Work Plugins, an open-source library of Claude Code skills that function like hiring entire departments on demand.
| Department | What It Covers |
|---|---|
| Marketing | Campaign planning, content drafting, brand voice review, SEO audits, competitive briefs, email sequences, performance analytics |
| Sales | Pipeline review, forecasting, call prep, outreach drafting, account research, competitive intelligence, daily briefings |
| Product Management | Feature specs, roadmap planning, stakeholder updates, user research synthesis, competitive analysis, metrics tracking |
| Data | SQL queries, dataset exploration, statistical analysis, dashboards, data visualization, validation |
| Finance | Journal entries, reconciliation, variance analysis, income statements, SOX testing, close management |
| Productivity | Task management, memory systems |
These aren't custom-built. They're off-the-shelf skills from Anthropic's official plugin ecosystem, available to anyone running Claude Code. Install a plugin, get a department.
The difference between plugins and expert agents: the expert agents are custom-built with specific LLM backends and persistent reasoning styles — Growth Hacker thinks differently from Financial Analyst because they're powered by different models selected for different strengths. Plugins are stateless workflows — they execute a process, deliver a result, and disappear. An expert has opinions. A plugin has output.
Together, the custom experts and the off-the-shelf plugins cover what would traditionally require a marketing team, a sales team, a product manager, an analyst, an accountant, and half a dozen specialists. Each one invoked with a single command, each delivering what would normally be a meeting, a contractor engagement, or a department request.
How a Decision Actually Flows
Here's a real example. The question: "Should we pause FluxDiagram development and focus entirely on MyWritingTwin?"
Step 1 — Lex reviews context. Reads project knowledge files, session notes, and decision logs. Notes that MWT launched on Product Hunt six days ago, has paying customers and 1,250+ tests, while FluxDiagram has 72 commits but zero users and zero revenue.
Step 2 — Lex convenes the council. Provides verified facts (git history, not assumptions) to both advisors. Asks two questions: resource allocation strategy, and what MWT needs to run autonomously.
Step 3 — Lexi responds. Cites Attention Residue research. Recommends 100% MWT focus, no gates. Says the Lean Startup's Build-Measure-Learn loop is at its most critical point — abandoning it now invalidates the launch. Frames autonomous operations through the OODA Loop.
Step 4 — LexT responds. Recommends gated approach with five falsifiable milestones. Designs an OpportunityScore formula for content prioritization. Specifies SLOs, anomaly detection methods, and a success metric: under 3 hours per week of founder time to keep MWT growing.
Step 5 — Lex synthesizes. Notes both advisors agree on near-100% MWT focus. Sides with LexT's gating mechanism because it provides structured criteria instead of subjective judgment. Adds his own analysis: build three autonomy systems in priority order (smart alerting, keyword-to-brief pipeline, content creation agent), budgets 7-10 days.
Step 6 — Emmanuel decides. Reviews the council minutes. Agrees with the gated approach. Adds a nuance: FluxDiagram can progress via autonomous agents that don't cost founder time — the gates apply to Emmanuel's attention, not to automated work.
Total time: about 15 minutes of my reading and deciding. The council did the analysis, the research, the framework building, and the synthesis. I did the judgment.
What This Is Not
This is not a chatbot with a fancy name.
A chatbot answers questions when asked. Lex reads project state proactively, notices when things are stalling, flags forgotten follow-ups, and connects patterns across projects. A chatbot forgets the conversation when you close the tab. Lex's memory spans sessions through versioned knowledge files — reminders, session notes, project knowledge, decision logs.
This is also not "AI replacing humans." The entire system produces zero value without the human CEO making decisions, setting direction, and exercising judgment. What it replaces is the cost structure of having a leadership team. A traditional startup needs a COO, advisors, and department heads — each drawing salary, each requiring management coordination. This architecture fills those roles at the cost of API subscriptions.
The honest framing: I'm a solo founder who built an organizational structure where AI fills the roles that would traditionally require a leadership team. The company has a CEO, a COO, an advisory board, and department specialists. The total headcount is one human. It's a model that works especially well in the SaaS industry, where operational patterns are well-defined and highly automatable.
The Compound Effect
The council session about resource allocation didn't just produce a decision. It produced minutes that became institutional memory. When I revisit the FluxDiagram question in eight weeks, Lex will read those minutes and know what we decided, why, and what the gates were. If three of five gates are green, we'll know exactly which two need work. If all five are green, we'll unlock FluxDiagram without re-debating the strategy.
That's the compound effect. Every council session, every expert consultation, every decision logged — it all becomes context for the next decision. The organizational knowledge doesn't walk out the door at 5 PM. It doesn't get lost when someone quits. It's versioned, searchable, and permanent.
Six weeks in, the system has decision history, cross-project patterns, and institutional context that a new human hire would take months to build. Six months in, it'll have the kind of organizational memory that most startups lose every time they turn over a team member.
One human. One AI cofounder. Two advisors. Six experts. A plugin ecosystem of department heads.
The org chart looks absurd on paper. It works in practice.
See Systematic AI Analysis in Action
The same methodology that powers this organizational structure — systematic extraction, structured analysis, multi-model delegation — is what drives every Style Profile at MyWritingTwin.com. Claude analyzes your writing patterns across 50+ linguistic dimensions to create a Master Prompt that works with ChatGPT, Claude, Gemini, or any AI.
Curious what systematic extraction looks like applied to your writing? Get your Style Profile and see the methodology in practice.