AI Makes Overbuilding the Default. Discipline Is the Antidote.

TL;DR
- AI lets you go from zero to a feature-complete product in hours. That speed is a liability unless you have prior discipline about what belongs in v1.
- Products built without incremental user exposure look complete but aren't coherent. Scope grows because it's cheap, not because the product needs it.
- The skill that matters now isn't building fast. It's knowing what to cut.
The most seductive thing about building with AI is that you can keep going.
Any feature that occurs to you while you're in flow can be added. The cost is one more prompt. The time is fifteen minutes. So you add it. And then you add the thing that the new feature implies. And then you add the configuration option that the implied thing really ought to have. An hour later you have something that's technically impressive and practically overwhelming.
I've done this. Building OpenChair and OpenTradie solo, the pull toward overbuilding is constant. The marginal cost of each addition approaches zero, so the question "should this be in v1?" never gets the scrutiny it deserves. What would have required a week of engineering time six months ago is now a prompt and a coffee break. The friction that used to act as a natural filter is gone.
The result is products that feel like they were designed by someone who thought of everything, which is exactly the problem.
The scope growth trap
Traditional product development had a natural throttle. Every feature had a real cost: spec it, build it, test it, ship it. That cost forced prioritisation. Not because product teams were inherently disciplined, but because the economics demanded it. You couldn't add the third configuration option unless you had budget and engineering time for the third configuration option.
AI has removed that constraint without replacing it with anything. The economics now say yes to everything. The result is that discipline, which used to come partly from scarcity, now has to come entirely from judgment.
The problem compounds with consumer products. When you build something incrementally, adding one piece at a time and exposing it to users, you develop a mental model of what the product is trying to do. Users develop that same model. You discover friction points before they multiply. You understand which features earn their place and which create noise.
When you build end-to-end in a single session, no one develops that model. The first user to encounter your product gets the final episode without having watched any of the preceding ones. Everything is there. None of it has context. A user landing on a feature-complete product they've never seen before faces a completely different cognitive challenge than a user who discovered each piece as it arrived.
That's not a polish problem. It's a structure problem. The product wasn't grown into its shape through iteration. It was assembled all at once, and the seams show.
What lean startup principles actually meant
When the lean startup movement arrived, the discipline it was selling wasn't about building slower. It was about exposing assumptions to reality before they compound.
The minimum viable product wasn't valuable because it was small. It was valuable because each iteration produced signal that shaped the next one. You learned which bets were wrong before you'd staked everything on them. The process of incremental exposure built product intuition in the people doing the building.
AI hasn't invalidated that principle. It's made the principle harder to follow.
When you can add a feature in fifteen minutes, the argument for including it feels overwhelming. Why wait for v2? It's right here. And because adding the feature is effortless, the discipline required to say no to it is pure willpower. Nothing in the economics supports restraint.

The YAGNI problem
Software engineers have a principle called YAGNI: you ain't gonna need it. Don't build infrastructure for a use case that doesn't exist yet. Don't abstract until you have three concrete cases that justify the abstraction. Build for the problem you have, not the problem you imagine.
YAGNI was easy to enforce when features were expensive. Hard to build, easy to cut. Now it's inverted: features are cheap to build, impossible to cut once users have touched them. The pre-build decision is the only moment you have full control.
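The principle is easiest to see in code. Here is a minimal sketch (the export feature, class names, and registry are hypothetical, invented for illustration) contrasting the speculative version with the YAGNI version:

```python
# Speculative version: an abstraction layer, a registry, and a format
# parameter, all built for imagined future formats nobody has asked for.

class Exporter:
    """Abstract base for export formats that don't exist yet."""
    def export(self, rows):
        raise NotImplementedError


class CsvExporter(Exporter):
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)


EXPORTERS = {"csv": CsvExporter()}  # a registry with one entry


def export_speculative(rows, fmt="csv"):
    return EXPORTERS[fmt].export(rows)


# YAGNI version: solve the problem you actually have.
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)
```

Both produce identical output; the second has a fraction of the surface area to maintain, document, and eventually cut. The abstraction gets added when a second real format shows up, not before.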
I've learned to treat that decision as irreversible even when it technically isn't. Once a feature is in a product, someone will use it. Someone will build a workflow around it. Someone will write a help article about it. Removing it after launch costs multiples of what it cost to add it, and that cost calculation is mostly invisible when you're in the building phase.
The practical discipline is: before building anything, ask whether it earns its place against the simplest version of the product. Not against the most capable version, or the most complete version, or the version that covers every edge case. Against the simplest version that solves the core problem. If the answer is "it would be nice to have," that's a no.
The cut list
Every product I've shipped well has had a cut list. Features that were technically complete, sometimes halfway built, that didn't make it into launch. Not because they were bad ideas, but because their presence would have diluted the product's focus before users had a chance to understand what the product was for.
Discovery is fast now, which means you can test the cut features later. If they turn out to be latent demand — things users would have done if given the option — they'll surface in behaviour data and you can add them. If they don't surface, you'll be grateful you didn't ship them.
The cut list is where taste gets operationalised. Anyone can generate features. The skill is deciding which ones make the product great and which ones make it generic. AI can build everything on the list. Only judgment can tell you which items don't belong there.
The bottleneck has moved from building to editing. Products that break through aren't the ones built fastest. They're the ones edited most ruthlessly.
Frequently Asked Questions
How do you decide what belongs in v1 versus a later version?
Work backwards from the problem you're solving and ask: what is the minimum feature set that would make a user's life noticeably better? Everything else is a candidate for the cut list. If removing a feature doesn't meaningfully reduce the core value proposition, it probably doesn't belong in v1. If it does reduce core value, it belongs.
Isn't it easy to remove features if they turn out to be wrong?
Technically yes. Practically no. Once a feature is live, users find it, build workflows around it, and occasionally rely on it for something you didn't anticipate. Removing features from live products creates friction with existing users even when it improves the product for everyone else. The easiest time to cut is before launch, when the cost is only the build time rather than the disruption cost.
What do you do when a stakeholder insists on adding something to v1?
Ask them to articulate what would break if the feature wasn't in v1. Not what would be better — what would actually fail. If the answer is "nothing would break, it would just be more useful," that's a v2 feature. If the answer is "the core use case doesn't work without it," it belongs in v1. The discipline is making them answer the question specifically rather than generically.
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Written from experience shipping AI products.


