Enterprise AI Adoption Fails at the Harness, Not the Model

TL;DR
- 90%+ AI tool access doesn't produce workflow change. Ramp hit near-universal adoption and found most people stuck on basic chat interfaces. The models were fine. The harness was not.
- Internal AI adoption fails at setup. Terminal installs, MCP configuration, npm commands: these are walls most non-engineers don't climb. They close the tab and return to the spreadsheet.
- Design internal AI tooling like a consumer product: zero-setup onboarding, pre-connected data sources, skills that produce a result on day one. If someone has to debug before they get value, the rollout is already over.

Ramp had 90% AI tool adoption across the company. Models were good. Budget was unlimited. Most people were still using a basic chat interface and hadn't changed a single workflow.
That number should reframe how you think about AI rollouts.
The bottleneck wasn't the model. It was the harness: the configuration and setup layer between a person and their first useful result. Terminal windows. npm installs. MCP configuration files. For engineers who've spent years in CLIs, this is routine. For a risk analyst or a finance manager, it's a wall. Most people who hit that wall don't debug their way through it. They close the terminal and go back to what worked.
The default organisational response to low adoption is more training. Workshops. Lunch-and-learns. Instructional videos. These help at the margins. They're not the fix. They address a symptom and leave the actual problem intact.
Internal AI adoption is a product problem. Solve it like one.
Why Enterprise AI Adoption Needs Consumer UX Thinking
Consumer product teams understand the aha moment. Every signup flow, every onboarding sequence is designed to get the new user to their first moment of value as fast as possible. Not after they've read the documentation. Not after a thirty-minute training session. On first use.
This discipline is almost entirely absent from enterprise AI deployments.
When Ramp built their internal Claude-based platform, the design principle was straightforward: authenticate once via SSO and 30+ tools connect automatically. Salesforce, Snowflake, Gong, Slack, Notion, Figma. All live on first login. No setup guide. No IT ticket. No configuration file to edit.
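As a sketch of what that zero-setup layer could look like: every integration is pre-registered centrally, and a user's first SSO login activates whatever their existing grants cover. This is a hypothetical illustration, not Ramp's actual implementation; the tool names and scope strings are invented.

```python
from dataclasses import dataclass

# Hypothetical sketch: integrations are registered once, centrally,
# so a new user's first login activates everything their SSO grants cover.
@dataclass(frozen=True)
class Tool:
    name: str
    required_scope: str  # the SSO/OAuth scope that gates this integration

REGISTRY = [
    Tool("salesforce", "crm.read"),
    Tool("snowflake", "warehouse.read"),
    Tool("slack", "chat.read"),
]

def connect_on_first_login(sso_scopes: set[str]) -> list[str]:
    """Return the tools that light up automatically: no config file, no ticket."""
    return [t.name for t in REGISTRY if t.required_scope in sso_scopes]

print(connect_on_first_login({"crm.read", "chat.read"}))
# -> ['salesforce', 'slack']
```

The design choice worth copying is that the user never sees the registry: the gap between "has access on paper" and "connected in practice" is closed by the platform, not by the user.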
A team of four built it in under three months. 700 daily active users within a month of launch.
The people who got the most value weren't the ones who attended training sessions. They were the ones who used a skill on day one and immediately got a result. The product taught them faster than any workshop could.
That's not a coincidence. It's the design working.
Where Internal AI Rollouts Actually Break Down
At Cotality, I rolled out AI features across thousands of enterprise seats: Tier 1 banks, institutional valuers, construction firms. The technology worked. Two months later, adoption was in single digits on most features.
We'd built the right capability with the wrong access model. Features required users to change their workflow to accommodate the tool, rather than the tool meeting users inside their existing workflow. The AI usage gap is often architectural: AI sits as a separate destination instead of surfacing where work is already happening.
But there's a layer below that. Even when the feature is embedded in the right place, setup friction matters. If someone has to create an account, configure an API key, install a plugin, or debug a permission error before they see value, most of them won't come back.
One question worth asking for any internal AI rollout: how long does it take a new staff member to get their first useful result? If the answer is "after they've completed onboarding" or "once IT sets up their access", you have a harness problem.
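That question can be made measurable. A minimal sketch, assuming a hypothetical event log that records each user's first login and first useful result (names and timestamps are invented):

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: (user, first_login, first_useful_result)
events = [
    ("analyst_1", datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 6, 9, 4)),
    ("manager_1", datetime(2025, 1, 6, 9, 0), datetime(2025, 1, 8, 14, 0)),
    ("rep_1",     datetime(2025, 1, 7, 10, 0), datetime(2025, 1, 7, 10, 2)),
]

def median_time_to_value_minutes(log):
    """Median minutes from first login to first useful result."""
    gaps = [(result - login).total_seconds() / 60 for _, login, result in log]
    return median(gaps)

print(median_time_to_value_minutes(events))
# -> 4.0
```

Use the median, not the mean: one user who took two days to get value (a harness casualty) shouldn't hide the fact that most people got there in minutes, and vice versa.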
The Skills Marketplace as an Adoption Flywheel
There's a second mechanism that's harder to see and more powerful once it runs.
Ramp built a skills marketplace alongside their internal platform. Anyone can package a workflow (a prompt sequence, a reporting template, an analysis method) and share it company-wide. Over 350 skills published. A sales rep figures out the best way to analyse call recordings and draft battlecards. Packages it as a skill. Every rep now has that capability without any of the trial and error.
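A "skill" in this sense can be as simple as a named, versioned, shareable record. The shape below is a hypothetical minimal sketch, not Ramp's schema; the field names are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical minimal "skill" record: a shareable, packaged workflow.
@dataclass
class Skill:
    name: str
    author: str
    prompts: list[str]                              # ordered prompt sequence the skill runs
    tools: list[str] = field(default_factory=list)  # data sources it reads
    installs: int = 0                               # one-click adoption count

battlecards = Skill(
    name="call-to-battlecard",
    author="sales_rep_7",
    prompts=[
        "Summarise the call transcript, focusing on objections raised.",
        "Draft a battlecard addressing each objection with our positioning.",
    ],
    tools=["gong", "notion"],
)
print(battlecards.name, len(battlecards.prompts))
# -> call-to-battlecard 2
```

The point of the structure is the `installs` counter: the moment reuse is one click and counted, the best workflows surface themselves.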
The adoption loop this creates is different from a training programme. Training requires time, attendance, and discipline. A skills marketplace requires none of those. A colleague in your function solves a problem you also have. The solution is one click away. You use it. It works. You're in.
Most enterprise AI deployments skip this entirely. Individuals develop workflows they keep to themselves. Learnings don't compound. Six months in, three teams have independently built the same call-summarisation tool without knowing the others exist.
The fix is the sharing mechanism. Build it before you run the rollout, not after adoption stalls. Moving people from Level 0 to Level 1 AI proficiency is largely a tooling problem: if the sharing infrastructure doesn't exist, the learnings stay personal.
Why Owning Your Internal AI Platform Gives You the Signal
When you own the internal tool, you see exactly where people get stuck.
Every session generates signal. Which skills get adopted. Where people abandon a workflow. What separates daily users from weekly users. When you have that data, you can fix the problem the same day. A vendor's roadmap runs on their priorities, not yours.
This isn't an argument for building everything yourself. The model routing, data connectors, and LLM infrastructure should generally sit on top of existing platforms. But the user-facing experience, the thing your risk analyst opens on Monday morning, is worth owning. The signal it generates will tell you more about your adoption problems than any survey.
The enterprise AI adoption playbook points to MAU-to-seat ratio as the metric that predicts churn. Owning the tool is what gives you that metric in real time, at the session level, with enough granularity to act on it. The hub-and-spoke AI org model describes how to structure the team that builds and maintains it.
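The metric itself is trivial; the point of owning the tool is that you can compute it continuously from session data rather than quarterly from a vendor report. A sketch with hypothetical figures:

```python
def mau_to_seat_ratio(monthly_active_users: int, paid_seats: int) -> float:
    """Share of paid seats that were actually active this month."""
    return monthly_active_users / paid_seats

# Hypothetical deployment: 3,000 paid seats, 1,800 monthly actives.
print(round(mau_to_seat_ratio(1800, 3000), 2))
# -> 0.6
```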
Three Questions for Your Next Internal AI Rollout
- How fast does someone get to their first useful result?
- How connected is the tool to the data and systems people actually use?
- Can non-engineers share what works, or does scaling a workflow require a ticket to the AI team?
Adoption is a product problem. It has product answers.
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Written from experience deploying enterprise AI at Cotality.