Fred Brooks Is Dead. Rewrite the Damn Thing.

TL;DR
- The "never rewrite" rule from The Mythical Man-Month assumed rewrites take years. They now take days.
- Pre-launch rewrites are a legitimate product development strategy when you discover a core assumption was wrong.
- The willingness to discard and restart is a competitive advantage, not a failure mode.
Fred Brooks wrote The Mythical Man-Month in 1975. One of its most durable pieces of advice: never rewrite a software system from scratch. Always evolve it incrementally. The reasoning was sound. Rewrites take years. They destroy the implicit knowledge baked into the original. They fail more often than they succeed. The history of software is littered with companies that rewrote themselves into oblivion.
Brooks was right. For fifty years, he was right.
He's not right anymore.
What the rule was actually protecting
The "never rewrite" doctrine wasn't protecting software quality. It was protecting against a specific economic risk: the cost of discarding a large body of work and starting over. When a rewrite takes eighteen months, the opportunity cost is enormous. You've frozen new development, burned engineer-years, and bet the company on the second system being better than the first.
Brooks also identified second-system syndrome: engineers who successfully ship a v1 tend to overload v2 with every feature they compromised on the first time. The rewrite becomes a monument to ambition rather than a functional product. It ships late, does too much, and serves no one well.
Both of these concerns assumed a long cycle. They assumed the rewrite would be measured in months, that large teams would do it, and that the cost of being wrong was existential.
None of those assumptions hold when AI rewrites a codebase in a day.
What a rewrite costs now
When I was iterating on the architecture for OpenChair, I hit a point roughly two weeks in where the data model was structurally wrong for the multi-tenant approach I needed. The options were: layer fixes on top of the bad foundation, or tear it down and rebuild correctly.
The classic answer is to layer fixes, because tearing it down means losing two weeks of work. But the layered approach means compounding the debt until it becomes unpayable.
I tore it down. The rebuild took two days. The resulting architecture was cleaner, the codebase was smaller, and the features I'd already built mostly dropped in with minor adjustments. Two days to undo two weeks of wrong direction.
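The structural wrongness is easier to see in miniature. Here's a hedged Python sketch, with hypothetical entities (not OpenChair's actual schema), of the difference between tenancy bolted on after the fact and tenancy built into the foundation:

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not OpenChair's actual data model.

# v0: single-tenant assumptions baked into the core entity.
@dataclass
class Booking:
    id: int
    room_id: int      # rooms are global; no tenant boundary anywhere
    user_email: str   # identity is global too

# Every multi-tenant feature now needs a workaround, because no row
# records which customer owns it. Scoping becomes a special case
# threaded through every query.

# Rebuild: tenancy is a first-class key on every entity.
@dataclass
class Tenant:
    id: int
    name: str

@dataclass
class BookingV1:
    id: int
    tenant_id: int    # every row is owned; isolation is structural
    room_id: int
    user_email: str

def bookings_for_tenant(rows: list[BookingV1], tenant_id: int) -> list[BookingV1]:
    # With tenancy in the foundation, scoping is a one-line filter,
    # not a workaround.
    return [r for r in rows if r.tenant_id == tenant_id]
```

The point isn't the extra column; it's that the column has to exist on every entity and flow through every query, which is exactly the kind of change no refactor can layer on cleanly.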
That calculation doesn't work in a world where the rebuild takes six months. It works now because AI compresses the time cost of starting over: not to zero, but to something small enough that starting over can be a rational choice.
The break-even point has shifted. When a rebuild takes a day, the threshold for "this foundation is wrong enough to warrant starting over" is much lower than when a rebuild takes a year.

Pre-launch rewrites as strategy
There's a pattern that's emerging in AI-native product development: teams doing full pre-launch rewrites as a deliberate step after the initial prototype reveals a wrong assumption.
The sequence: build fast to a v0 that demonstrates the core concept, expose it to a small internal audience, discover the fundamental thing you got wrong, tear it down and rebuild correctly, then launch.
This used to be rare because the cost was prohibitive. Now it's a reasonable way to work. You spend less time polishing a wrong foundation and more time getting the right foundation before anyone sees it.
The key word is pre-launch. Post-launch rewrites carry the same user disruption they always did. Enterprise customers have built workflows around your product. Power users have keyboard shortcuts memorised. The rewrite that seemed clean internally is externally disruptive, regardless of how fast it was to execute.
The window for consequence-free rewrites is pre-launch, and AI has made that window much more useful. You can iterate through two or three architectural approaches in the time it used to take to finish one.
The second-system problem, reconsidered
Brooks' second-system syndrome — the tendency to overload v2 with everything you couldn't fit in v1 — is still real. AI doesn't cure it. If anything, it makes it worse, because the cost of adding features during a rewrite is also near zero.
The discipline that pre-launch rewrites require isn't just "be willing to start over." It's "start over with a more constrained scope, not a larger one." The rewrite is valuable because it incorporates what you learned from the first attempt, not because it adds everything you didn't have time for before.
When I rebuilt the OpenChair data model, I also removed things from the rebuild that I'd added to the original during a moment of optimism. The rewrite wasn't just a new foundation — it was a smaller, better-defined foundation. The scope discipline and the architectural reset happened together.
That combination is what makes pre-launch rewrites useful. The rebuild is the opportunity to fix what was wrong architecturally and cut what was wrong in scope at the same time. Knowing what to cut is the harder skill, and the rewrite creates a forcing function for it.
When to rewrite vs when to refactor
The heuristic I use: refactor when the core structure is sound and specific parts are wrong. Rewrite when the core structure is wrong and no amount of refactoring will fix it.
Symptoms of a structural problem: you find yourself working around the data model on every new feature. The code that should be simple keeps requiring complicated exceptions. Tests are hard to write because the system's behaviour is hard to describe. Onboarding someone new to the codebase takes longer than you'd expect because the architecture doesn't match the product's actual logic.
Building for the model that doesn't exist yet compounds this. If you've built heavy scaffolding to compensate for current model limitations, and then the model improves and the scaffolding is no longer needed, you often have a structural problem. The scaffolding isn't an isolated piece; it's threaded through the architecture. That's a rewrite, not a refactor.
The cost of knowing the difference used to be higher than the cost of just tolerating a wrong structure. That equation has changed. The wrong structure costs you every day in reduced velocity. The rewrite costs you a day or two. Do the maths.
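That maths can be made concrete with a toy break-even model. The drag percentages and durations below are illustrative assumptions, not measured figures:

```python
# A back-of-envelope model of the rewrite break-even point.
# All inputs are illustrative assumptions, not measured data.

def days_until_rewrite_pays_off(rewrite_days: float,
                                daily_velocity_drag: float) -> float:
    """Days of continued work after which rewriting beats tolerating
    a wrong structure.

    daily_velocity_drag: fraction of each day lost to working around
    the bad foundation (0.3 means 30% slower).
    """
    # Each day on the wrong structure wastes `daily_velocity_drag`
    # of a day; the rewrite costs `rewrite_days` up front and then
    # removes the drag. Break-even is where the two lines cross.
    return rewrite_days / daily_velocity_drag

# 1975 economics: an 18-month rewrite against a 10% drag.
print(days_until_rewrite_pays_off(rewrite_days=540, daily_velocity_drag=0.10))
# AI-era economics: a 2-day rewrite against a 30% drag.
print(days_until_rewrite_pays_off(rewrite_days=2, daily_velocity_drag=0.30))
```

Under those assumptions the old-world rewrite takes thousands of working days to pay off, which is why the rule said never. The new-world rewrite pays off within a week, which is why the rule no longer holds.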
Frequently Asked Questions
What about post-launch products with real users?
The economics change significantly. Post-launch rewrites carry user disruption costs that pre-launch rewrites don't. The calculus is: is the current architecture limiting our ability to serve users, and does a rewrite unlock enough capability to justify the disruption? That's a higher bar, but it's still a real option. The key is the transition plan — can you run both versions briefly and cut over cleanly?
How do you avoid second-system syndrome during a rewrite?
Start with a smaller scope target than you think you need. If the original had twenty features, the rewrite should probably launch with fifteen. Use the rewrite to fix the architecture and cut the features that turned out to be wrong, not to add the features you didn't have time for in v1. Treat scope expansion as a separate decision that happens after the new foundation is stable.
Does this apply to large enterprise codebases?
Less directly. Enterprise systems have the same structural problems but carry much higher rewrite costs because of integration dependencies, compliance requirements, and user expectations. The principle applies — wrong foundations are expensive to build on — but the threshold for "start over" is higher and the transition plan is more important. The pre-launch window doesn't exist for most enterprise systems, which is why architectural discipline before scaling matters more.
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Written from experience shipping AI products.


