Salmon has fewer than five engineers building a real-time data engine that keeps CRMs accurate across millions of records. Proprietary AI, verified intelligence, integrations with every major CRM. We ship fast because we have to.
When background coding agents started showing up last year, the pitch was obvious: hand off a task, get back a pull request. Multiply your team without multiplying your headcount.
We evaluated what was out there. Then we built our own.
The gap
The problem with off-the-shelf coding agents isn't capability. The models are plenty good. The problem is workflow.
Our engineering process is opinionated. Issues live in Linear. Every meaningful change starts with a plan that gets reviewed before anyone writes code. We enforce codebase invariants — rules that must always hold — and maintain pattern files that dictate exactly which existing code to use as a template. We run acceptance criteria tests before opening a PR, not after.
None of the tools we evaluated could plug into that. They either wanted to own the whole workflow or operated in a vacuum — no awareness of our issue tracker, our planning process, or our codebase conventions. An agent that writes code fast but ignores your architecture just creates a different kind of work.
So we built Spawn.
What Spawn does
Spawn is a CLI that orchestrates the full lifecycle of a code change — from a rough idea to a merged pull request. Give it a Linear issue. It does the rest.
You start with an idea. A sentence is enough. "Add rate limiting to the public API." Spawn opens an interactive session, explores your codebase, asks the right questions, and produces a structured issue in Linear with a full PRD attached.
Then you tell it to solve. Spawn reads the issue, creates an isolated git worktree, and gets to work — but it doesn't start writing code. This is the part that matters.
It plans first. Spawn produces what we call a TIP — a Technical Implementation Plan. The TIP specifies exactly what will change: which files, what approach, what trade-offs were considered. Alongside the TIP, it generates a contract — the acceptance criteria the implementation must satisfy. Both get published back to Linear for review.
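The contract is easiest to picture as data. Here is a minimal sketch of one possible shape; the `Contract` and `Criterion` names, their fields, and the example criteria are all hypothetical, not Spawn's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    description: str    # human-readable acceptance criterion
    test_command: str   # command that must exit 0 for this criterion to pass

@dataclass
class Contract:
    issue_id: str
    criteria: list[Criterion] = field(default_factory=list)

    def is_satisfied(self, results: dict[str, bool]) -> bool:
        # The implementation counts as "done" only when every criterion passed.
        return all(results.get(c.description, False) for c in self.criteria)

# Illustrative contract for the rate-limiting example from earlier.
contract = Contract(
    issue_id="SAL-123",
    criteria=[
        Criterion("requests over the limit get HTTP 429",
                  "pytest tests/test_rate_limit.py"),
        Criterion("limit headers present on every response",
                  "pytest tests/test_headers.py"),
    ],
)
```

Publishing something this explicit to Linear is what makes the plan reviewable: a human can argue with a criterion before any code exists.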
For complex changes, the pipeline pauses here. A human reviews the plan, leaves comments, Spawn incorporates the feedback. For simpler changes, it flows straight through.
Then it implements. Following its own plan, Spawn writes the code and tests. It runs your linter, your type checker, your test suite. If something fails, it fixes it. Then it runs the acceptance criteria tests it wrote earlier to verify the implementation actually delivers what was promised.
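That check-fix loop can be sketched in a few lines of Python. Everything here is an assumption about the shape of the loop, not Spawn's code: `checks` is whatever commands your repo uses, and `fix_fn` stands in for the agent reading the failure output and editing files.

```python
import subprocess

def run_checks_with_fixes(checks, fix_fn, max_attempts=3):
    """Run each check command; on failure, invoke a repair step and retry.

    `checks` is a list of shell commands (linter, type checker, test suite);
    `fix_fn` is a stand-in for the agent's diagnose-and-edit step.
    """
    for _ in range(max_attempts):
        failures = [
            cmd for cmd in checks
            if subprocess.run(cmd, shell=True, capture_output=True).returncode != 0
        ]
        if not failures:
            return True   # all green: the acceptance criteria tests run next
        for cmd in failures:
            fix_fn(cmd)   # agent inspects the failing command's output and edits files
    return False          # still failing after max_attempts: surface to a human
```

The key design point is the bounded retry: the loop gives up and escalates rather than thrashing forever.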
Then it reviews its own work. A separate review pass reads the diff, checks it against codebase invariants and patterns, and makes fixes directly. Not suggestions — fixes.
Then it opens a PR. And watches CI. If a check fails, it diagnoses the failure, pushes a fix, and waits again. If a reviewer leaves comments, it addresses them. When everything is green and approved, it merges.
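The CI-watching behavior is a polling loop at heart. A rough sketch, where `get_status` and `diagnose_and_push` are hypothetical stand-ins for calls to the GitHub checks API and the agent's repair step:

```python
import time

def watch_ci(get_status, diagnose_and_push, poll_seconds=30, max_fixes=3):
    """Poll CI until it settles, pushing fixes on failure.

    `get_status` returns "pending", "success", or "failure";
    `diagnose_and_push` reads the failing check's logs and pushes a fix.
    Both callables are assumptions; the loop structure is the point.
    """
    fixes = 0
    while True:
        status = get_status()
        if status == "success":
            return True          # green: ready to merge once approved
        if status == "failure":
            if fixes >= max_fixes:
                return False     # stop self-repairing; escalate to a human
            diagnose_and_push()
            fixes += 1
        time.sleep(poll_seconds)
```

Reviewer comments would hook into the same loop: another event source, another bounded response.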
The whole thing runs in the background. You check in when you want.
Why plan-first changes everything
Most coding agents treat planning as an internal step — something the model does in its chain of thought before it starts editing files. Fine for small tasks. Falls apart for anything meaningful.
When the plan is an explicit artifact — written down, reviewable, publishable — three things change.
You catch bad ideas before they become bad code. A plan that says "I'm going to add a new database table with a full ORM model" for something that should be a config change is obvious in review. The same mistake buried in a 400-line diff is not.
The implementation gets dramatically better. An agent working from a detailed plan with explicit acceptance criteria produces more coherent code than one figuring it out as it goes. The plan constrains the solution space. The contract defines "done." No drift.
The human's role actually makes sense. You're not reviewing AI-generated code line by line hoping to spot subtle issues. You're reviewing a plan — something engineers already know how to do. By the time code shows up, the hard decisions have already been made and approved.
What a day looks like
spawn issue idea "migrate our webhook handlers to async processing"
Spawn brainstorms with you, creates a Linear issue with a PRD. You refine it, add context in the comments.
spawn issue solve SAL-456
Spawn picks it up. Twenty minutes later:
spawn status
The issue is in review, a PR is open, CI is passing. You glance at the plan in Linear, skim the diff, merge it. Move on.
On a good day, you do this three or four times before lunch. On a normal day, you're interleaving Spawn tasks with your own deep work — architecture decisions, customer conversations, problems that actually need a human.
spawn issue claude SAL-456
When you want to jump in — pair with the agent in its worktree, make a decision together, let it keep going — you do that too.
What we've learned
The bottleneck moved. Before Spawn, the bottleneck was implementation — more ideas than hands to build them. Now the bottleneck is planning and review. That's a better bottleneck. Planning scales with thinking, not typing.
Invariants and patterns matter more than prompts. We spent weeks tuning agent prompts before realizing the real leverage was in codebase artifacts — the invariants file that tells agents what rules to follow, the patterns file that tells them which code to imitate. Get those right and agents produce code that looks like your team wrote it.
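One way to picture that leverage: the artifacts are just files the orchestrator prepends to the agent's context before it plans or writes anything. The filenames `INVARIANTS.md` and `PATTERNS.md` below are hypothetical; the mechanism is a sketch of the idea, not Spawn's implementation.

```python
from pathlib import Path

def build_agent_context(repo_root: str) -> str:
    """Collect codebase artifacts into a context block for the agent.

    Assumes the repo keeps an INVARIANTS.md (rules that must always hold)
    and a PATTERNS.md (which existing code to use as a template) at its root.
    Missing files are simply skipped.
    """
    parts = []
    for name in ("INVARIANTS.md", "PATTERNS.md"):
        path = Path(repo_root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

Because the artifacts live in the repo, they version with the code and every agent run picks up the current rules, no prompt edits required.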
Small teams benefit the most. A 200-person engineering org can absorb a mediocre PR and fix it in review. We can't. Every PR Spawn opens needs to be genuinely good — correct, consistent, tested. The plan-first approach with acceptance criteria contracts is what makes that possible. We wouldn't trust a system that skipped those steps. Neither should you.
The agent should use your tools. Spawn doesn't reinvent CI, testing, or code review. It uses git, GitHub, Linear, your test runner, your linter — the same tools your team already uses. If a tool is good enough for a human engineer, it's good enough for an agent. Stripe's team arrived at the same conclusion. The point is worth repeating because the temptation to build custom everything is strong.
Where this is going
Spawn started as a script. Then a CLI. Now it handles project-level planning — decomposing a product spec into a dependency graph of issues and working through them in order.
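Working through a dependency graph of issues in order is, at heart, a topological sort. A sketch with made-up issue IDs, using Python's standard library; the decomposition itself is illustrative:

```python
from graphlib import TopologicalSorter

# Hypothetical decomposition of "migrate webhook handlers to async":
# each issue maps to the set of issues it depends on.
issue_deps = {
    "SAL-501": set(),                   # add async job queue
    "SAL-502": {"SAL-501"},             # port webhook handlers to the queue
    "SAL-503": {"SAL-501"},             # add retry and dead-letter handling
    "SAL-504": {"SAL-502", "SAL-503"},  # remove the legacy sync path
}

# A valid work order: every issue comes after all of its dependencies.
order = list(TopologicalSorter(issue_deps).static_order())
```

Independent issues (here, SAL-502 and SAL-503) have no ordering constraint between them, which is also where an orchestrator could run agents in parallel.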
We're a CRM data company, not a dev tools company. We built Spawn because we needed to ship faster than our team size should allow, and nothing else fit how we work. But the approach — plan-first, human-on-the-loop, deeply integrated with your existing workflow — isn't specific to us. It's how autonomous coding agents should work for any team that cares about code quality.
The era of handing an agent a vague prompt and hoping for the best is over. What replaces it looks a lot more like engineering.