
Capable Is the New Unacceptable: Zapier's AI Fluency Rubric

1 April 2026 · 6 min read

TL;DR

  • Zapier's updated AI Fluency Rubric defines "Capable" as barely above their minimum acceptable bar. Most companies treat this tier as their ambition.
  • The rubric's most important insight isn't the levels, it's the slope: trajectory of improvement matters more than where someone sits today.
  • Most orgs are structurally stuck at Capable because they grant permission to improve, not permission to redesign.

Zapier reached 100% AI adoption across every function. Then they raised the bar.

Their V2 AI Fluency Rubric, published in April 2026, defines four tiers (Unacceptable, Capable, Adoptive, Transformative) and resets the minimum standard for every new hire. Most commentary will explain what the rubric says. This piece is about what it reveals: specifically, the calibration problem most organisations don't know they have.

"Capable" is the tier that sounds like success. The definition: "I use AI to operate at a meaningfully higher level." Most AI adoption programs call this their target state. On Zapier's rubric, it sits one step above "Unacceptable."

Tier           | Definition                                                      | What it looks like
Unacceptable   | Minimal or no AI use; output does not reflect AI capability     | Absent from workflows
Capable        | Uses AI regularly; output is meaningfully better; can cite examples | Individual productivity
Adoptive       | Builds systems others depend on; always-on AI pipelines         | Shared team infrastructure
Transformative | Redesigns how the function works; role looks materially different | Org redesign

What "Capable" Actually Is

Capable means you use AI regularly, can cite specific examples, have refined your workflows, and your output quality reflects it. For a product manager, that means a structured approach to spec-writing and prototyping, AI-assisted user research, and faster turnaround on working solutions.

That's table stakes. It's not differentiation.

Adoptive is where the rubric gets discriminating: building systems others rely on. The product manager at Adoptive builds pipelines of agents that take customer feedback, write specs, prototype, and ship small features. They have always-on AI systems others depend on. This isn't personal productivity. It's team infrastructure.

Transformative is org redesign. The PM/Design role looks materially different from what it was six months ago. New ways of building product. PMs shipping code in production.

The tier jump isn't effort. It's authority.

Trajectory Beats Position

The most valuable insight in Zapier's full write-up isn't the tier descriptions. It's this: they assess the trendline, not just where someone sits today.

"Someone who plateaued eight months ago on the same three tools is a different candidate than someone actively experimenting and building on what they've learned."

Most companies evaluate AI fluency as a snapshot. Present tense, current tools. They ask "do you use AI?" or "what are you using it for?" Zapier asks how your approach has evolved. What did you try and abandon? What are you building toward?

This shifts the signal from capability to learning velocity. In a field where the floor moves every few months (Zapier's V1 rubric became insufficient in under a year), trajectory beats position. A candidate at Capable and accelerating is a better bet than someone settled comfortably at Adoptive.

Building and Accountability Are the Real Filters

The rubric evaluates four dimensions: mindset, strategy, building, and accountability. Mindset and strategy are gameable in interviews. Someone can talk fluently about AI-first product development without having built anything. Building and accountability are harder to fake.

The distinction shows up clearly across departments.

Marketing Capable: uses AI regularly across content and SEO, has a reusable prompt library, can explain what works and why. Good.

Marketing Transformative: built a personalisation engine serving AI-generated campaign variants at scale, tied directly to pipeline. Restructured how the marketing team works: what gets automated, what gets owned, how success gets measured.

People Transformative: stopped running a legacy programme entirely and rebuilt the function AI-first. Agents generate personalised outputs from role data. Team members now own agentic platforms rather than manual workflows.

Accountability is Zapier's new addition for V2. Their framing: "With AI, you can delegate the work, but not the accountability." This dimension asks whether someone defines success criteria before they start, evaluates outputs critically, and owns what ships. It's the difference between someone who uses AI as a drafting shortcut and someone who treats AI-generated work as their professional output and stands behind it.

The Manager Trap

There's a requirement in the V2 rubric most organisations haven't thought about: managers must demonstrate how they moved their teams to adopt AI, not just their own fluency.

A personally Adoptive manager with a Capable team fails the bar.

I watched this play out across four business units. Senior leaders were genuinely strong AI users: building workflows, integrating tools, getting faster and sharper on their own work. Their teams were still running the old playbook. Nobody had made it their explicit job to move the team's floor. The assumption was that fluency was contagious. It isn't.

Zapier's framework asks managers for team-level evidence: psychological safety for experimentation, explicit expectations, workflow redesign at the function level. Not "I use AI well" but "my team operates differently because of how I led them."

Why Orgs Get Stuck at Capable

There's a 61-point gap between AI capability and actual deployment. Anthropic's research finds AI capable of handling a substantial proportion of real work tasks, yet deployment lags dramatically behind. The bottleneck isn't the technology.

Capable is achievable in any organisation because it's individual. You can refine prompts, iterate on workflows, get faster and sharper, all without touching how your team operates. Adoptive requires building things others depend on, which means shipping internal tools, changing team processes, getting buy-in. Transformative requires redesigning how work gets done, which means authority over the function.

Most AI adoption programmes hand out tool licences and run training sessions. That playbook reliably fails. The approach that actually works: one person per project, unlimited tokens, the constraint to figure it out. That produces genuine fluency. Training produces compliance.

The deeper issue is that nobody's been given an explicit mandate to reach Transformative. It's not in their role description. It's not in their performance review criteria. The org has granted permission to improve, not permission to redesign. Transformative requires both.

That's not a personal failure. It's an org design choice, and it produces exactly the result you'd expect.

What the Rubric Is Actually For

The Zapier AI Fluency Rubric is useful well beyond hiring. Run your existing team against it. Not as a performance exercise: as a calibration one.

The question isn't whether your team uses AI. It's which tier they're operating at and what you're structurally permitting them to become. If the honest answer is that your strongest AI users are doing what Zapier considers barely above the minimum, the floor has moved.

Whether your bar moved with it is a choice, not a circumstance.


Related: The 61-Point Deployment Gap That Explains Everything · Why Constraint Is the AI Adoption Strategy · Why AI Product Leaders Must Also Build


Logan Lincoln

Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Writing from experience leading org transformation at Cotality.