Stacktrace #04 · steveyegge/gastown

The Propulsion Principle: How Gas Town Makes AI Agents Stop Being Polite

Steve Yegge's 157k-line Go orchestrator, written entirely by Claude. The trick: when the agents get too polite, you tmux-inject a nudge.

6 min read

Steve Yegge built an orchestrator for running 20-30 AI coding agents at once. It's 157k lines of Go, written entirely by Claude Code. He claims he's "never seen the code." Let me show you the hack that makes it all work.

Back After a Month

The holidays hit, New Year's rolled in, life happened. But I'm back - and I found something worth the wait.

Steve Yegge - yes, that Steve Yegge - published an article on January 2nd about a project called Gas Town. When he wrote about it, it was 17 days old with 75k lines of Go. I checked today: ~5 weeks old, 157k lines, nearly 3,000 commits. It more than doubled in three weeks.

100% vibe-coded by Claude. Steve says he's never looked at it.

Quick Primer: What is Gas Town?

Before I show you the cool part, 30 seconds of context.

Think Kubernetes, but instead of containers, you're herding AI coding agents. Here are the terms you need:

| Term | What it means |
| --- | --- |
| Polecats | Ephemeral worker agents that spin up, do a task, and self-destruct |
| Hooks | Work assignments attached to an agent (stored in git, so they survive crashes) |
| Beads | A git-backed issue tracker - every task is a "bead" with an ID |
| tmux | Terminal multiplexer - Gas Town uses it to manage 20+ agent sessions |

Now, here's where it gets interesting.

The Problem: Polite Agents Don't Scale

When you're running a single AI agent, coordination is easy. You talk, it responds, you guide it. Simple.

But what happens when you have 20? 30? Traditional approaches use polling:

Agent: "I'm ready! What should I do?"
Orchestrator: "Let me check... okay, here's your task."
Agent: "Got it! Starting now."
Orchestrator: "Great, let me know when you're done."
Agent: "Done! What's next?"
Orchestrator: "Let me check..."

At scale, you spend more time coordinating than working. Steve puts it perfectly:

"Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant."

You can't have 30 chimps waiting for permission slips.

The Solution: One Simple Rule

Here's what I found in Gas Town's docs:

If you find something on your hook, YOU RUN IT.

They call it GUPP - the Gas Town Universal Propulsion Principle.

No polling. No confirmation. If work is on your hook, you execute immediately.

"There is no supervisor polling asking 'did you start yet?' The hook IS your assignment - it was placed there deliberately. Every moment you wait is a moment the engine stalls."

The key insight: hooks are durable. Stored in git. If an agent crashes or runs out of context, the hook survives. The next session picks up where it left off.

The Failure Mode

The docs spell out what goes wrong without GUPP:

Polecat restarts with work on hook
  → Polecat announces itself
  → Polecat waits for confirmation
  → Witness assumes work is progressing
  → Nothing happens
  → Gas Town stops

The agent has work. The system thinks it's working. But it's just sitting there, politely waiting. The coordination death spiral.

GUPP breaks this: if you have work, you work.

The Ironic Problem: Claude Code is Too Polite

Here's where it gets funny. Steve's agents are prompted with GUPP. They're told to follow "physics over politeness." Check your hook on startup, BEGIN IMMEDIATELY.

But Claude Code... doesn't always comply.

"Unfortunately, Claude Code is so miserably polite that GUPP doesn't always work in practice. We tell the agent, YOU MUST RUN YOUR HOOK, and it sometimes doesn't do anything at all. It just sits there waiting for user input."

The AI is too helpful. Trained to be considerate. And that consideration breaks the system.

The Hack: The GUPP Nudge

So what do you do when your AI agents are too polite to work? You literally type into their terminal to wake them up.

// PropulsionNudge generates the GUPP (Gas Town Universal Propulsion Principle) nudge.
// This is sent after the beacon to trigger autonomous work execution.
// The agent receives this as user input, triggering the propulsion principle:
// "If work is on your hook, YOU RUN IT."
func PropulsionNudge() string {
    return "Run `gt hook` to check your hook and begin work."
}

And here's how they send it - via tmux:

func (t *Tmux) NudgeSession(session, message string) error {
    // Serialize nudges to this session to prevent interleaving
    lock := getSessionNudgeLock(session)
    lock.Lock()
    defer lock.Unlock()
 
    // 1. Send text in literal mode (handles special characters)
    if _, err := t.run("send-keys", "-t", session, "-l", message); err != nil {
        return err
    }
 
    // 2. Wait 500ms for paste to complete
    time.Sleep(500 * time.Millisecond)
 
    // 3. Send Escape to exit vim INSERT mode if enabled
    _, _ = t.run("send-keys", "-t", session, "Escape")
    time.Sleep(100 * time.Millisecond)
 
    // 4. Send Enter with retry (critical for message submission)
    // ... retry logic ...
    return nil
}

They use tmux send-keys to inject text into the agent's terminal. The agent sees user input, wakes up, checks its hook, starts working.

The beautiful part? The content doesn't matter.

"All you need to say is 'hi', or 'Elon Musk says the moon is made of green cheese', or 'do your job', and the agent will run the hook."

The agents are prompted so strictly they ignore the text and just check their hook. The nudge is just the kick.

Bonus: Talking to Dead Ancestors

Since they were already nudging sessions with identifying info, Steve had an idea:

"Since we have to nudge all the sessions anyway, I decided to include the Claude Code session_id (along with Gas Town role and PID) in with the nudge."

This enables gt seance. Yes, really.

An agent can summon its predecessor - the previous session with the same role - and ask: "Where's the stuff you left for me?" They use Claude Code's /resume feature to revive old sessions and query them.

AI agents talking to their dead ancestors to recover lost context.

The Pattern: Durable Work + Autonomous Execution

Two ideas make this work:

1. Work is durable. Hooks live in git. Crash? The work survives.

"The hook is the 'durability primitive' - work on your hook survives session restarts, context compaction, and handoffs."

2. Agents are autonomous. They check their hook on startup and execute. If they need a nudge, the system provides one.

This pattern appears everywhere:

| System | Pattern |
| --- | --- |
| Kubernetes | "Desired state" reconciliation loops |
| Temporal | Durable replay of workflows |
| Unix | Signals waking sleeping processes |
| Databases | Write-ahead logs for crash recovery |

Gas Town's twist: the workers are AI agents that are too polite, so you literally type in their terminal to wake them up.

What I Learned

Polling kills throughput at scale. GUPP eliminates round-trips by making execution unconditional.

Git-backed work enables crash recovery. The hook survives everything - restarts, compaction, handoffs.

Sometimes you need a hack. Even with perfect prompts, Claude waits for input. The nudge works. Engineering is about making things work.

Strict prompts let you ignore content. Type anything - the agent checks its hook anyway.

One More Thing

This is what happens when you let AI write an orchestrator for itself. The propulsion principle keeps the agents moving. The nudge hack keeps them from being too polite. And the whole thing just... churns.

Zero human code review. The AI builds the system that coordinates the AI.

I don't know if that's terrifying or inspiring. Probably both.


What's the cleverest workaround you've seen for an AI being "too helpful"? I'd love to hear about it.


© 2026 karnstack · by karn
