Roles, Competencies & Organisation

The AI Fluency Spectrum

AI fluency isn't just a skill level. It's also about scope: improving your own output, building systems others depend on, or redesigning how work happens.

TL;DR

  • AI fluency has three meaningful stages: using AI to improve your own output, building systems others depend on, and redesigning how work happens at a team or org level. These aren't skill levels. They're different things with different requirements.
  • The transitions between stages are blocked by authority and permission, not just skill. Most people who are capable of stage 2 haven't done it because nobody gave them the mandate.
  • Your trajectory matters more than your current position. Someone actively building their fluency is more valuable than someone who plateaued six months ago.

Most AI fluency frameworks treat it as a personal skill. You either use AI well or you don't. The proficiency tables in the product competency model do this deliberately: they describe what you can do at each level.

That dimension matters. But there's a second dimension that's at least as important, and most frameworks skip it: scope.

The most productive question isn't "how well do you use AI?" It's "who benefits from how you use AI?"

Three stages of scope

Stage 1: Personal output

You use AI to work faster, produce higher-quality work, and handle tasks you couldn't previously do at scale. Your output improves. Your process changes. The team around you is largely unaffected.

This is where most people are. It's genuinely valuable — personal productivity gains are real and they compound. Someone who processes customer feedback with AI, generates prototypes faster, or writes cleaner documentation produces meaningfully better work than someone who doesn't.

But it's also the stage where progress is easiest to fake. Someone at Stage 1 can describe their AI usage without having much to show for it. The evidence is subtle: slightly faster output, slightly better quality. Hard to point to, easy to claim.

The tell for genuine Stage 1 fluency: can you describe what your process looked like before, what it looks like now, and exactly where the improvement shows up? Vague answers here suggest the usage is more superficial than it appears.

Stage 2: Systems others depend on

You build things others use. A prompt library the team pulls from. A workflow that automates recurring work. An internal tool that surfaces insights teammates previously couldn't access. Templates, processes, documentation that embed your AI-developed methods into how others work.

This is the meaningful shift. Stage 2 requires the same skills as Stage 1, plus something Stage 1 doesn't test: the discipline to make your methods repeatable and the willingness to share them. A lot of Stage 1 work stays personal because it's faster to do it yourself than to document and scale it. Stage 2 demands the opposite instinct.

The evidence for Stage 2 is concrete: name the thing you built, describe who uses it, and explain the before/after. "I built a brief template that the team uses for every feature" is a Stage 2 claim. "I use AI to write better briefs" is Stage 1.

Stage 2 also introduces accountability beyond your own work. When you build something others depend on, quality failures are no longer contained. A flawed prompt library scales flawed outputs. This is why the accountability principle matters more at Stage 2 than Stage 1.

Stage 3: Redesigning how work happens

You change the operating model. Not just how you or your team does existing work, but what work gets done, who does it, and what the function produces. Entire categories of work get automated or eliminated. New capabilities emerge that weren't possible before.

Stage 3 looks like: a function that used to require five people for recurring operational work now runs with two because you rebuilt it around AI-native processes. Or: a workflow that used to take three days now takes forty minutes and produces more useful output. Or: a role that used to be defined by execution is now defined by judgment and oversight, because the execution layer is AI-handled.

The distinction from Stage 2 is that Stage 3 changes job descriptions, not just workflows. It requires permission to redesign, not just authority to build. That's why it's rare.

The transition blockers

Stage 1 to Stage 2

The skill gap is real but small. The bigger barriers are time pressure and the instinct to keep effective methods to yourself.

Time pressure: documenting and scaling a method takes longer than using it yourself, and most people can't justify that investment when they're already productive. The fix is treating scale as part of the work, not extra work on top of it. If a method is worth doing once, it's worth making repeatable.

The second barrier is less acknowledged: people hold on to working methods as personal competitive advantages. If I'm getting measurably better results with AI than my peers, sharing that method feels like giving away an edge. This instinct makes sense individually and is damaging for teams. The builder-leader identity requires actively sharing what works, not hoarding it.

Stage 2 to Stage 3

This transition is primarily a permission problem, not a skill problem.

Redesigning how a function works requires authority over that function. It means changing processes other people depend on, which creates disruption even when the outcome is better. Most people who are capable of Stage 3 work haven't done it because the mandate was never explicit, the disruption risk felt too high, or there was nobody to authorise the change.

If your best AI practitioners are stuck at Stage 2, the question isn't what they need to learn. It's what they've been permitted to do.

Trajectory over snapshot

Where someone sits on this spectrum at a single point in time is useful information. It's not the most useful information.

The more valuable signal is how fast they're moving and in what direction. Someone actively iterating their Stage 1 practices and starting to share methods with their team is doing something more valuable than someone who reached Stage 2 a year ago and hasn't touched their approach since.

This matters for hiring and for team development. A candidate who can describe an evolving arc — "I started here, I tried this, I learned that, I'm now doing this differently" — is demonstrating something about how they approach developing new capabilities. Someone who gives the same answer about AI usage they would have given six months ago isn't.

The same applies to teams. If the team's collective approach to AI hasn't changed in the past quarter, that's a signal. It doesn't mean nobody's using AI. It means nobody's developing fluency.

What this means for managers

A manager's personal AI fluency is a starting point, not the goal.

The manager who has a well-developed Stage 2 practice but whose team is still running Stage 1 workflows hasn't led anything. They've developed a skill personally and kept it there. The AI-native team design principle (that smaller, more capable teams outperform larger, less capable ones) only applies if the capability is team-wide.

Leadership fluency means moving your team along the spectrum, not just your own position on it. In practice, this requires:

Making the expectation explicit. "I expect everyone on the team to be actively developing their AI fluency" is a different statement than making AI tools available and hoping people use them. The first is a performance expectation. The second is an invitation.

Creating time for Stage 2 work. Building repeatable systems takes time. If the sprint is entirely full of product delivery, nobody will invest in the workflow improvements that compound. Protecting time for this isn't a nice-to-have; it's how you get teams off Stage 1.

Modelling the progression. The manager who shares what's working, documents their own methods, and openly iterates in front of the team is showing what progression looks like. The manager who uses AI quietly and never discusses it is not.

Removing blockers to Stage 3. If team members are capable of redesigning significant workflows and aren't doing it, the manager's job is to find out why. Usually it's permission, political risk, or time. Often it's all three.

Personal AI fluency is table stakes for a manager in 2026. Team AI fluency is the job. The AI-native team design chapter covers the org patterns that make this possible in practice.

v2.0 · Updated Apr 2026