Roles, Competencies & Organisation

The AI Fluency Spectrum

The AI fluency spectrum maps three scope stages: personal output, systems others depend on, and redesigning how work happens. Most frameworks only measure the first.

TL;DR

  • AI fluency has three meaningful stages: using AI to improve your own output, building systems others depend on, and redesigning how work happens at a team or org level. These aren't skill levels. They're different things with different requirements.
  • The transitions between stages are blocked by authority and permission, not just skill. Most people who are capable of stage 2 haven't done it because nobody gave them the mandate.
  • Your trajectory matters more than your current position. Someone actively building their fluency is more valuable than someone who plateaued six months ago.

The AI fluency spectrum is a framework for describing where someone's AI capability has impact: on their own output only, on systems others depend on, or on how a function is fundamentally structured. These aren't skill levels; they're scope stages. Most fluency frameworks only measure the first.

Most AI fluency frameworks treat fluency as a personal skill: you either use AI well or you don't. The proficiency tables in the product competency model do this deliberately: they describe what you can do at each level.

That dimension matters. But there's a second dimension that's at least as important, and most frameworks skip it: scope.

The most productive question isn't "how well do you use AI?" It's "who benefits from how you use AI?"

Three stages of AI fluency scope

Stage 1: Personal output

You use AI to work faster, produce higher-quality work, and handle tasks you couldn't previously do at scale. Your output improves. Your process changes. The team around you is largely unaffected.

This is where most people are. It's genuinely valuable — personal productivity gains are real and they compound. Someone who processes customer feedback with AI, generates prototypes faster, or writes cleaner documentation produces meaningfully better work than someone who doesn't.

But it's also the stage where progress is easiest to fake. Someone at Stage 1 can describe their AI usage without having much to show for it. The evidence is subtle: slightly faster output, slightly better quality. Hard to point to, easy to claim.

The tell for genuine Stage 1 fluency: can you describe what your process looked like before, what it looks like now, and exactly where the improvement shows up? Vague answers here suggest the usage is more superficial than it appears.

Stage 2: Systems others depend on

You build things others use. A prompt library the team pulls from. A workflow that automates recurring work. An internal tool that surfaces insights teammates previously couldn't access. Templates, processes, documentation that embed your AI-developed methods into how others work.
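A Stage 2 artefact can be as small as a shared, parameterised prompt template that the team imports instead of retyping. A minimal sketch, assuming a brief-writing use case; the field names and wording are illustrative, not a prescribed format:

```python
# A shared brief template as a parameterised prompt -- a minimal Stage 2
# artefact. All names and section headings here are illustrative.

BRIEF_TEMPLATE = """You are drafting a one-page product brief.
Feature: {feature}
Target user: {user}
Problem evidence: {evidence}
Write the brief with sections: Problem, Proposal, Risks, Open questions."""

def render_brief_prompt(feature: str, user: str, evidence: str) -> str:
    """Fill the shared template so every brief starts from the same structure."""
    return BRIEF_TEMPLATE.format(feature=feature, user=user, evidence=evidence)

prompt = render_brief_prompt(
    feature="CSV export",
    user="ops analysts",
    evidence="14 support tickets in Q1",
)
```

The point of the sketch is the before/after: "I use AI to write better briefs" keeps the template in one person's head; checking it in as a function makes it something the team pulls from.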

This is the meaningful shift. Stage 2 requires the same skills as Stage 1, plus something different: the discipline to make your methods repeatable and the willingness to share them. A lot of Stage 1 work stays personal because it's faster to do it yourself than to document and scale it. Stage 2 demands the opposite instinct.

The evidence for Stage 2 is concrete: name the thing you built, describe who uses it, and explain the before/after. "I built a brief template that the team uses for every feature" is a Stage 2 claim. "I use AI to write better briefs" is Stage 1.

What Stage 2 operators actually do

The conceptual definition of Stage 2 is clean. The daily practice is less obvious, and it's where most people stall even after they technically understand the stage.

Four behaviours show up consistently in Stage 2 operators:

  1. Deliberate tool rotation. They maintain working familiarity with at least two or three adjacent tools (e.g. Claude plus Cursor plus a specialised agent framework). Rotation is the point: each tool surfaces what the others hide about a given problem. Someone who's been using the same tool the same way for six months is not developing Stage 2 fluency; they've plateaued at Stage 1.

  2. Prompt refactoring as routine. They treat a prompt like code they'll return to. The prompt library has a changelog. Failed outputs get traced back to the prompt, diagnosed, and fixed. The prompts that power shared artefacts (the brief template, the synthesis tool, the research agent) are improved on a rolling basis, not written once and forgotten.

  3. Small experiments outside the scope of the current task. They run a lot of small tests that aren't connected to a shipping deadline. What happens when the output is asked for in a different format? What breaks when the prompt scales to 1,000 runs? The experiments aren't wasteful; they build the intuition that makes Stage 2 artefacts hold up under real use.

  4. Sharing what works with the specific people who'll use it. Not mass-broadcasting on Slack. Handing the new prompt to the two people who'll actually adopt it, watching them use it, and iterating on their feedback. Stage 2 adoption is earned in small circles, not announced at all-hands.
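The second behaviour above — a prompt library with a changelog, where failed outputs are traced back to a specific revision — can be sketched as a tiny versioned prompt store. This is an illustration of the habit, not a real tool; every name here is hypothetical:

```python
# A tiny versioned prompt store: each revision records why the prompt
# changed, so a bad output can be traced to a specific version.
# All names are hypothetical -- a sketch of the habit, not a library.

from dataclasses import dataclass, field

@dataclass
class PromptRecord:
    name: str
    versions: list = field(default_factory=list)  # list of (text, changelog_note)

    def revise(self, text: str, note: str) -> None:
        """Append a new revision with a one-line changelog entry."""
        self.versions.append((text, note))

    @property
    def current(self) -> str:
        """The text of the latest revision."""
        return self.versions[-1][0]

    def changelog(self) -> list:
        """All changelog notes, oldest first."""
        return [note for _, note in self.versions]

brief = PromptRecord("brief-summary")
brief.revise("Summarise the feature request in 5 bullets.", "initial version")
brief.revise(
    "Summarise the feature request in 5 bullets. Quote the user's own words.",
    "outputs were paraphrasing too loosely; force direct quotes",
)
```

Even this much structure changes the conversation: instead of "the prompt stopped working", the diagnosis becomes "revision 2 fixed loose paraphrasing but may have caused X", which is exactly the discipline that makes a shared artefact hold up under real use.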

These are not nights-and-weekends behaviours. They fit inside a normal working week if you commit to them. What they require is protecting a thin slice of time each week for non-delivery AI work. Teams that build Stage 2 fluency at scale do this deliberately; teams that expect it to emerge from existing delivery pressure wait a long time.

Stage 2 also introduces accountability beyond your own work. When you build something others depend on, quality failures are no longer contained. A flawed prompt library scales flawed outputs. This is why the accountability principle matters more at Stage 2 than Stage 1.

Stage 3: Redesigning how work happens

You change the operating model. Not just how you or your team does existing work, but what work gets done, who does it, and what the function produces. Entire categories of work get automated or eliminated. New capabilities emerge that weren't possible before.

Stage 3 looks like: a function that used to require five people for recurring operational work now runs with two because you rebuilt it around AI-native processes. Or: a workflow that used to take three days now takes forty minutes and produces more useful output. Or: a role that used to be defined by execution is now defined by judgment and oversight, because the execution layer is AI-handled.

The distinction from Stage 2 is that Stage 3 changes job descriptions, not just workflows. It requires permission to redesign, not just authority to build. That's why it's rare.

The transition blockers

Stage 1 to Stage 2

The skill gap is real but small. The bigger barriers are time pressure and the instinct to keep effective methods to yourself.

Time pressure: documenting and scaling a method takes longer than using it yourself, and most people can't justify that investment when they're already productive. The fix is treating scale as part of the work, not extra work on top of it. If a method is worth doing once, it's worth making repeatable.

The second barrier is less acknowledged: people hold on to working methods as personal competitive advantages. When an operator is getting measurably better results with AI than their peers, sharing that method feels like giving away an edge. This instinct makes sense individually and is damaging for teams. The builder-leader identity requires actively sharing what works, not hoarding it.

Stage 2 to Stage 3

This transition is primarily a permission problem, not a skill problem.

Redesigning how a function works requires authority over that function. It means changing processes other people depend on, which creates disruption even when the outcome is better. Most people who are capable of Stage 3 work haven't done it because the mandate was never explicit, the disruption risk felt too high, or there was nobody to authorise the change.

If your best AI practitioners are stuck at Stage 2, the question isn't what they need to learn. It's what they've been permitted to do.

Proficiency levels within each stage

The scope stages describe who benefits from how you use AI. Proficiency describes how deeply you've internalised the tools within whichever scope stage you're operating at — how automatic the reach for AI is, how reliably you get useful output, how well you prompt and evaluate results.

A useful way to think about proficiency:

Level 0: Sometimes uses AI tools. Hasn't changed any workflows. Treats AI as a fancier search engine or occasional draft generator.

Level 1: Has built something: a custom prompt, a Notion agent, a shared template. Starting to see what's possible. Hasn't compounded it yet.

Level 2: Has automated part of their job. Built something others depend on. This is where the productivity step-change happens. The output is concrete and pointable.

Level 3: Builds infrastructure that levels up others. Publishes skills. Creates platforms. Changes how their team or function works. These are Stage 3 scope people: force multipliers whose impact extends well beyond their own output.

The transitions between levels aren't primarily skill barriers. They're design problems.

Getting people from Level 0 to Level 1 requires low-friction tools. If someone has to configure a terminal, install dependencies, or request IT access before they see value, most won't reach Level 1. The tools need to meet people where they are: accessible without technical setup, connected to the data and systems they actually use, delivering a useful result on day one.

Getting people from Level 1 to Level 2 requires raised expectations. AI proficiency should appear in how you talk about performance, in hiring screens, and in onboarding — not as an end in itself, but as a clear signal that developing fluency is essential to doing the job well. The expectation pushes people who are dabbling to commit.

The critical constraint: match the mandate to the tooling. Raising expectations before the tools can reliably deliver on them burns credibility. Teams stop listening. Organisations that successfully moved people from Level 1 to Level 2 did it as the tools matured, not ahead of them. Push Level 2 behaviour while Level 1 is still painful and you get compliance theatre, not genuine adoption.

Getting people from Level 2 to Level 3 requires a stage. Systems builders need visibility, resources to build at team scale, and the permission to redesign rather than just improve. Spotlighting what they've built, giving them a forum to share it, pairing them with other builders: these are the mechanisms. The transition blocker from Stage 2 to Stage 3 (above) is usually permission. Level 2 to Level 3 has the same constraint, plus one more: the people at Level 3 need an audience for what they build, or the motivation to keep building at that level disappears.

Why trajectory matters more than where someone is now

Where someone sits on this spectrum at a single point in time is useful information. It's not the most useful information.

The more valuable signal is how fast they're moving and in what direction. Someone actively iterating their Stage 1 practices and starting to share methods with their team is doing something more valuable than someone who reached Stage 2 a year ago and hasn't touched their approach since.

The Zapier AI Fluency Rubric makes the same distinction: "Capable" sits only one step above the rubric's minimum, and trajectory is what separates "Adoptive" from "Transformative". Where someone sits today matters less than whether they're moving up the rubric.

This matters for hiring and for team development. A candidate who can describe an evolving arc — "I started here, I tried this, I learned that, I'm now doing this differently" — is demonstrating something about how they approach developing new capabilities. Someone who gives the same answer about AI usage they would have given six months ago isn't.

The same applies to teams. If the team's collective approach to AI hasn't changed in the past quarter, that's a signal. It doesn't mean nobody's using AI. It means nobody's developing fluency.

What this means for managers

A manager's personal AI fluency is a starting point, not the goal.

The manager who has a well-developed Stage 2 practice but whose team is still running Stage 1 workflows hasn't led anything. They've developed a skill personally and kept it there. The AI-native team design principle (that smaller, more capable teams outperform larger, less capable ones) only applies if the capability is team-wide.

Leadership fluency means moving your team along the spectrum, not just your own position on it. In practice, this requires:

Making the expectation explicit. "I expect everyone on the team to be actively developing their AI fluency" is a different statement than making AI tools available and hoping people use them. The first is a performance expectation. The second is an invitation.

Creating time for Stage 2 work. Building repeatable systems takes time. If the sprint is entirely full of product delivery, nobody will invest in the workflow improvements that compound. Protecting time for this isn't a nice-to-have; it's how you get teams off Stage 1.

Modelling the progression. The manager who shares what's working, documents their own methods, and openly iterates in front of the team is showing what progression looks like. The manager who uses AI quietly and never discusses it is not.

Removing blockers to Stage 3. If team members are capable of redesigning significant workflows and aren't doing it, the manager's job is to find out why. Usually it's permission, political risk, or time. Often it's all three.

Personal AI fluency is table stakes for a manager in 2026. Team AI fluency is the job. The AI-native team design chapter covers the org patterns that make this possible in practice. The complement to fluency is taste: the judgment layer that AI can inform but not replace. See Taste: The PM Skill AI Cannot Commoditise.

v2.1 · Updated Apr 2026