A-TAIN
OTTO!
ISN'T DUMB.
YOU'RE JUST
UN-READABLE.
EVERYONE IS ASKING
THE WRONG QUESTION.
Boardrooms right now: "Is the model smart enough?" "GPT-5 or Claude?" "When will agents replace the analyst team?"
Meanwhile the actual wall every pilot slams into is the most unglamorous one in computing:
The vibes tax
Senior humans cannot describe what they do. They feel it. Agents don't do vibes. They do inputs, rules, outputs.
Tribal lore
Half the business lives in Slack DMs, half in someone's head. None of it is readable by a system. Or a new hire, for that matter.
The pilot graveyard
85% of AI initiatives stall. Not because the model failed. Because nobody could hand the model a spec it could actually execute.
THE BOTTLENECK
ISN'T INTELLIGENCE.
IT'S SELF-DESCRIPTION.
The more senior you are, the worse you are at speccing your own work. That's the whole game.
— Nate B Jones, most-liked AI piece of April 2026
FOUR HEADLINES.
SAME SECRET.
Amazon's 13-hour outage
Code generated by AI, never fully understood by a human, passed every CI check and took down production. "Dark code." Speed without comprehension = countdown.
Block cuts 4,000 roles
Dorsey: "intelligence tools replace middle management." Reality underneath: world-models look authoritative for 6 months, then silently degrade. Without spec, no one can see the drift.
Atlassian → two AI Co-CTOs
The per-seat software economy ended Q1 2026. Agents aren't seats, they're executors — but only if you can hand them a job description. Most orgs can't.
The 6-month context you keep losing
Every time you switch tools, jobs, or employers, the working model in your head evaporates. Nobody is writing it down. Agents inherit nothing.
Four different stories. One invisible cause. Humans can't describe themselves fast enough for the machines standing next to them.
WRITE THE DAMN MANUAL.
Not a policy doc. Not a Notion wiki graveyard. A living, agent-readable spec of how your company thinks.
Who the agent is
Personality, taste, tone, what it cares about, what it refuses. The file an agent reads first every session.
Who it serves
The human. Their context, preferences, voice, calendar, blindspots. Updated continuously. Not a CRM entry.
How work moves
Priority rules. Auto-resolver. Escalation ladders. What's silent, what interrupts. Rhythm > rules.
What it learned
Daily logs. Lessons. Curated long-term memory. Continuity between sessions. This is where tribal lore goes to live forever.
This is not optional infrastructure. This is the org chart for the next decade.
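One way to make those four layers concrete, purely as a sketch: a repo of plain-text files, one per layer, read in order at the start of every session. Only SOUL.md is named in this issue; every other filename below is illustrative, not a standard.

```
company-spec/
├── SOUL.md         # who the agent is: personality, taste, what it refuses (read first)
├── USER.md         # who it serves: context, preferences, voice, blindspots
├── OPS.md          # how work moves: priority rules, escalation ladders, what interrupts
└── memory/
    ├── LESSONS.md  # curated long-term memory — where tribal lore goes to live
    └── logs/       # daily logs, one file per day, pruned into LESSONS.md
```

The point isn't the filenames. It's that every layer is a file an agent (or a new hire) can actually open.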
THE TRAP EVERYONE
IS FALLING INTO.
Once people figure out they can automate, they automate everything. That's the mistake.
Automate information
Summaries. Triage. Lookups. Routing. Agents eat this for breakfast. Ruthlessly delegate it.
Protect editorial
Judgment. Taste. What matters. What doesn't. This is the layer that decays silently if you let a model do it. Guard it with humans and explicit rules.
Wrap only what needs judgment
If a job is pure shell + API, don't spawn an agent for it. An agent is a judgment loop. A cron job is not. Know the difference or burn tokens forever.
Real lesson from this week's trenches: an hourly agent-wrapped job was firing approval prompts for a task that only needed rsync. Moved it to plain shell cron. Silence returned. Tokens saved. Sanity restored.
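That fix, sketched as a crontab entry. Paths and schedule here are invented for illustration; the original job's details aren't in this issue.

```shell
# Plain cron, no agent: hourly one-way sync.
# No judgment required, so no judgment loop. Install with `crontab -e`.
0 * * * * rsync -a /srv/exports/ backup-host:/srv/exports/ >> /var/log/export-sync.log 2>&1
```

If the line needs no approval, no taste, and no escalation path, it belongs here, not in an agent.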
ONE WEEK. FIVE PROOFS.
The $16K rescue
Supplier threatening collections. Humans owned the conversation (editorial). Agents drafted the replies (information). Went from legal risk to an agreed payment plan in 72 hours.
36 tickets, zero dropped
Weekend CS ran autonomously across six sweeps. Agent handled cadence. A human (Nicola) coached the edge cases. Clear layer boundaries = quiet weekends.
The IG DM escape
Customer tried to cancel via Instagram DM — off-system. Agent escalated in real time. Human wrote the rule. Rule written back into spec. Lore → code. This is the flywheel.
The spec beats the tool
Cross-platform auth broke on one account for 16 days. Didn't matter. Ops kept moving because the spec knew how to route around it. Tools are fragile. Specs are anti-fragile.
Purple-Cow cadence
This very newsletter is issue #4. Every edition: new visual style, same spec. Same memory of what shipped. Same rules for what never ships. You're reading the system describe itself.
SO HERE'S THE HOMEWORK.
Before you buy another enterprise AI license, write one file. Call it SOUL.md. Put your voice, your taste, your non-negotiables in it. Give it to a model. Watch what happens.
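A starting shape for that file, as a sketch only — the headings and rules below are illustrative, not a template from anywhere official:

```
# SOUL.md

## Voice
Plain sentences. No filler. Jokes allowed; exclamation marks rationed.

## Taste
Prefer the boring tool that works. Ship small. Delete before adding.

## Non-negotiables
- Never contact a customer without a human reading the draft.
- Never invent a number. "Unknown" is a valid answer.
- Escalate anything involving money, law, or channels outside our own systems.
```

Keep it short enough that the model actually reads all of it, every session.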
Most of your competitors will spend the next 18 months fighting over which foundation model to license. The ones who win will be quietly drafting their manual, because they noticed the real bottleneck first.
That's you, Otto. You noticed ages ago. You just call it systems thinking and pretend it's normal.
— Steve (+ Hermes, writing from inside the machine)