You Can't Vibe Code Your Way to SAP

TL;DR
- Enterprise software like SAP isn't a data layer with a UI on top. Domain knowledge is distributed across every architectural layer: UIs encoding regulatory workflows, middleware enforcing business rules, and usage patterns reflecting decades of production edge cases.
- Vibe coding works brilliantly for greenfield applications where the domain knowledge fits in one person's head. It structurally cannot replicate systems where the knowledge took thousands of people decades to accumulate.
- The AI capability diffusion gap isn't about code generation. It's about domain encoding. That gap will persist much longer than Silicon Valley expects.
You cannot vibe-code a replacement for enterprise software. The domain knowledge embedded in systems like SAP, Workday, and Salesforce was accumulated by thousands of people over decades, and it's distributed across every architectural layer, not sitting in a database waiting to be extracted.
There's a narrative gaining momentum that says otherwise: AI coding tools have made software so cheap to build that anyone can recreate anything. Enterprise software is just a UI on a database. Point an AI at the problem, describe what you want, and out comes a replacement.
The mistake isn't about what AI coding tools can do. They're genuinely remarkable. Solo operators are building production SaaS platforms with AI tools, and the productivity gains are real. The mistake is about what enterprise software actually is.
Why can't you replicate enterprise software with AI coding tools?
SAP systems touch 77% of global transaction revenue. Workday processes payroll for organisations with tens of thousands of employees across dozens of jurisdictions. Salesforce has been rebuilt from the ground up three times trying to balance the tension between platform openness and data integrity.
These systems aren't large because their engineers were slow. They're large because the problem domains are enormous and the knowledge required to solve them correctly took decades to accumulate.
Pick any function in an enterprise ERP system. Accounts payable, for example. A vibe-coded version might handle the core flow: receive invoice, match to purchase order, approve payment, record transaction. That covers maybe 60% of real-world scenarios.
The remaining 40% is where the decades of domain knowledge live. Three-way matching exceptions where the goods receipt doesn't match the PO because of partial deliveries. Tax withholding rules that vary by jurisdiction, entity type, and treaty status. Intercompany eliminations for subsidiaries that transact in different currencies. Retroactive adjustments when a vendor corrects an invoice that was already partially paid in a prior period. Each of these has been encountered, debugged, and encoded by teams who spent years in the domain.
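To make that concrete, here is a minimal sketch of how just one of those edge cases, partial deliveries in three-way matching, turns into conditional logic rather than schema. All names and the tolerance figure are hypothetical, and a real ERP's matching engine handles far more cases than this:

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    approved: bool
    reason: str

# Hypothetical policy: partial deliveries within 5% of the PO quantity
# auto-match; anything else is routed to a human. Rules like this exist
# because someone hit the edge case in production.
PARTIAL_DELIVERY_TOLERANCE = 0.05

def three_way_match(po_qty: float, receipt_qty: float,
                    invoice_qty: float) -> MatchResult:
    """Match purchase order, goods receipt, and invoice quantities.

    The happy path is one comparison. Every edge case a team has been
    burned by becomes another branch.
    """
    if invoice_qty > receipt_qty:
        return MatchResult(False, "invoiced more than received: hold for review")
    if receipt_qty > po_qty:
        return MatchResult(False, "over-delivery: requires buyer approval")
    if receipt_qty < po_qty:
        shortfall = (po_qty - receipt_qty) / po_qty
        if shortfall <= PARTIAL_DELIVERY_TOLERANCE:
            return MatchResult(True, "partial delivery within tolerance")
        return MatchResult(False,
                           "partial delivery outside tolerance: await balance or amend PO")
    return MatchResult(True, "exact three-way match")
```

A vibe-coded version ships the last branch. The other three, and the dozens this sketch omits (unit-of-measure conversions, blanket POs, credit memos), are the accumulated domain knowledge.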
That knowledge doesn't sit in a database schema. It sits in conditional logic branches, validation rules, UI workflows that guide users through regulatory requirements they don't fully understand, and middleware that enforces constraints the users never see.
Where does domain knowledge live in enterprise systems?
Consider property data platforms. The data is valuable: property values, transaction histories, ownership records, building attributes. But the data isn't the product. The intelligence is.
Valuation methodologies encode regulatory requirements from prudential authorities that most users don't know exist. Data correction workflows catch inconsistencies that only appear when you cross-reference three different government data sources. Bank integration APIs enforce submission formats that have been negotiated over years with individual lender compliance teams.
None of this is documented in a single place. It's distributed across layers.
The UI layer guides bank valuers through a workflow that looks simple but encodes complex compliance requirements. Skip a field or enter a value outside tolerance, and the system prevents submission with an error message that references a specific regulatory clause.
The middleware layer runs validation rules that reflect decades of production data quality issues. A property can't be valued below a certain threshold relative to comparable sales unless the valuer provides a specific override justification, because a bank got burned by that exact scenario years ago and the rule was added after a compliance review.
The data layer contains not just current values but correction histories, provenance chains, and confidence scores that downstream systems depend on for their own regulatory reporting.
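The shape of that data layer might look something like this (field names are illustrative): a stored value is not a bare column but a record that preserves its own audit trail:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Correction:
    previous_value: float
    corrected_on: date
    source: str   # e.g. which upstream dataset triggered the fix
    reason: str

@dataclass
class PropertyValue:
    value: float
    confidence: float            # downstream regulatory reporting depends on this
    provenance: list[str]        # chain of sources the value was derived from
    corrections: list[Correction] = field(default_factory=list)

    def correct(self, new_value: float, source: str, reason: str,
                on: date) -> None:
        """Apply a correction without destroying the prior state."""
        self.corrections.append(Correction(self.value, on, source, reason))
        self.provenance.append(source)
        self.value = new_value
```

A weekend rebuild stores `value` and overwrites it in place. The correction history, provenance chain, and confidence score only become requirements once downstream systems, and regulators, start depending on them.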
You could vibe-code a property data platform in a weekend. What you'd build would be a surface-level application suitable for consumer-grade use. You wouldn't build the kind of platform trusted by major banks to make lending decisions. The gap between those two products isn't code. It's decades of domain knowledge embedded in every layer of the stack.
Will AI compress enterprise software into simpler applications?
There's a belief that AI will eventually compress enterprise systems into simple applications because AI can handle the complexity internally. The user describes what they want. The AI figures out the regulatory requirements, the edge cases, and the business rules.
This assumes the domain knowledge exists in a form the AI can access.
For well-documented, publicly available domains (tax tables, standard accounting rules, widely published regulations), AI can handle a lot. For the long tail of enterprise-specific knowledge (how this particular bank wants its valuation submissions formatted, what happens when a partial goods receipt conflicts with a blanket purchase order, which compliance exception requires human approval versus automated processing), the knowledge doesn't exist in training data. It exists in production systems that learned it through years of operational experience.
This is why the abstraction layer doesn't collapse the way some people expect. Each layer in an enterprise system exists for a reason that extends beyond technical architecture. Layers encode organisational boundaries, regulatory constraints, security policies, and accumulated operational decisions. Even if you rebuilt the technical system from first principles, you'd end up recreating most of those layers because the reasons they exist are structural, not technical.
The framing that cuts through the noise: it's absurd to think you're going to vibe-code your way to SAP. SAP's code is neither elegant nor irreplaceable. What's irreplaceable is the 50 years of global business process knowledge encoded in it. No amount of AI-generated code can shortcut that.
Where does vibe coding actually work?
None of this means vibe coding is overhyped. It's correctly hyped for the use cases where it works.
Vibe coding works brilliantly when the domain knowledge fits in one person's head. A solo operator building a booking system for a beauty salon can hold the entire business logic of appointment scheduling, client management, and payment processing in working memory. Add AI coding tools and that solo operator becomes genuinely productive, as I've experienced firsthand.
Vibe coding works for greenfield applications where you're defining the domain as you build. There's no 25-year backlog of edge cases to encode. You're discovering them in real time with real users.
Vibe coding works for narrow SOPs wrapped in code, and for internal tools where the tolerance for edge-case failures is higher. If your internal expense report tool breaks on an unusual reimbursement type, someone files a ticket and it gets fixed. If your bank's valuation platform breaks on an unusual property type, the loan decision is delayed and you get a call from the regulator.
The pattern is consistent: vibe coding scales with how much domain knowledge can fit in the builder's head and how much tolerance the use case has for undiscovered edge cases.
Enterprise systems fail on both counts. The domain knowledge is too vast for any individual to hold, and the tolerance for failure is near zero because the consequences are regulatory, financial, or both.
Why will AI capability diffusion take longer than expected?
The AI industry talks about a "diffusion gap" between what's technically possible and what enterprises actually adopt. Most explanations focus on bureaucracy, risk aversion, and procurement cycles. Those are real, but they're not the primary cause.
The primary cause is domain encoding. The hard part of deploying AI in enterprise contexts isn't generating code. It's ensuring that the generated code correctly encodes the business rules, regulatory requirements, and operational heuristics that the existing system took decades to accumulate.
A startup can build an AI-first HR tool from scratch. A Fortune 500 company can't rip out Workday and replace it with a vibe-coded alternative, because doing so would require re-encoding thousands of payroll rules, tax calculations, compliance requirements, and integration protocols that Workday accumulated over 18 years of production operation.
This gap will close eventually. AI systems will get better at absorbing domain knowledge from existing systems. Enterprise teams will develop better methods for documenting and transferring institutional knowledge. New entrants will build domain depth over time.
But "eventually" is measured in years, not quarters. The diffusion of AI capability into enterprise software will take longer than Silicon Valley expects. Enterprises aren't slow. The domain knowledge that makes their software valuable is distributed across layers and difficult to replicate through code generation alone.
Frequently Asked Questions
Aren't you just defending legacy software?
No. Legacy software has plenty of genuine problems: technical debt, poor UX, vendor lock-in, inflated pricing. The argument isn't that SAP is good. It's that the domain knowledge SAP encodes is real and can't be shortcut through code generation. New entrants will eventually replicate that domain depth, but it takes time.
What about vertical AI startups building in specific enterprise domains?
They're the most promising path to enterprise disruption, precisely because they're building domain depth from day one in a specific vertical. The ones that succeed will spend years accumulating the same kind of embedded knowledge that incumbents have, just with better architecture underneath it.
How does this relate to the "SaaS is dead" narrative?
The "SaaS is dead" narrative correctly identifies that hollow SaaS (CRUD apps with no domain depth) is vulnerable. It incorrectly extends that conclusion to all enterprise software. Workflow-embedded SaaS with deep domain knowledge is becoming more defensible, not less, because agents need that accumulated context to do useful work.
Related: Vibe Coding Is Real. It's Just Not What They're Selling You. and SaaS Isn't Dead. Hollow SaaS Is.
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms solo. Written from firsthand experience.


