The Translation Layer Is Dead. Here's What Replaces It.

TL;DR
- The PM role has historically been a translation layer between user needs and engineering execution, and agentic coding is compressing that layer to nothing
- Three skills now define the modern PM: problem shaping (articulating constraints precisely enough for agents to act), context curation (feeding agents the right inputs), and taste (distinguishing "technically correct" from "shippable")
- The PM who thrives isn't the one who writes the best spec. It's the one who understands the problem so deeply that the solution becomes obvious
For my entire career, the PM role has been a bridge. You had a hunch about a user problem. You spoke to customers. You synthesised what you learned into specs. You handed those specs to engineers and hoped that your intent survived the translation.
That translation layer, the thing that justified the PM's seat at the table, is compressing. Fast.
When agentic coding workflows can take a well-formed problem statement and produce working software, the distance between "what users need" and "what gets built" shrinks to nearly zero. The PM is no longer translating for engineers. They're forming intent clearly enough that agents can act on it directly.
The spec is becoming the product. And that changes everything about what it means to be good at this job.
The compression is real
Consider what the PM workflow used to look like. You'd write a detailed spec. Hand it off. Wait for questions. Clarify ambiguities. Wait for implementation. Review the build. Give feedback. Iterate. The cycle took weeks, sometimes months. Every handoff introduced drift between what you meant and what got built.
Now I write a clear problem statement with constraints, point an agent at it, and review working code in an hour. The gap between "I know what we should build" and "here it is" has collapsed.
But the work of knowing what to build didn't get easier. It got more important.
Every AI company, large and small, is shipping at an accelerating pace. The cycle times that used to define product development (quarterly planning, monthly sprints, weekly releases) are compressing into something closer to continuous deployment of ideas. When implementation is no longer the bottleneck, the scarce resource shifts upstream. It's no longer engineering capacity. It's knowing what's actually worth building.
This has implications for every PM who's built their career on the translation function. I wrote about the broader shift from PM to Product Builder when Meta and LinkedIn independently signalled the same conclusion. If your job was mostly converting customer needs into documents for engineers, that's a workflow. Workflows get automated. If your job was understanding problems so deeply that the right solution becomes obvious, you're more valuable than ever.
Skill 1: Problem shaping
The best PMs have always been good at this. But it used to be one skill among many, alongside stakeholder management, sprint planning, backlog grooming, and the dozen other rituals of product management. Now it's the skill. Everything else is secondary.
Problem shaping means taking an ambiguous customer pain point and articulating it with enough precision that an agent (or a team of agents) can act on it directly. Not "build me a dashboard." Not "improve the onboarding experience." Those are wishes, not problems.
A well-shaped problem has:
- Clear boundaries. What's in scope and what isn't. An agent given unbounded scope will produce unbounded mediocrity.
- Specific constraints. Not every constraint. Just the ones that will actually change what gets built. Performance requirements. Data limitations. Regulatory boundaries. The constraints that shape the solution space.
- A definition of success. Concrete, not fuzzy. Something you could actually measure or observe. "Users complete the flow in under 90 seconds" is a problem constraint an agent can optimise for. "Users feel delighted" is not.
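To make the shape concrete, a well-formed problem statement can be captured as structured data before it ever reaches an agent. Everything below is a hypothetical sketch, not a prescribed schema; the field names and example values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemStatement:
    """A hypothetical structure for a well-shaped problem statement."""
    problem: str
    in_scope: list[str] = field(default_factory=list)
    out_of_scope: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)

    def is_well_shaped(self) -> bool:
        # Actionable only with boundaries, at least one binding
        # constraint, and something measurable to optimise for.
        return bool(self.in_scope and self.constraints and self.success_criteria)

# An invented example: a wish ("improve onboarding") made into a problem.
onboarding = ProblemStatement(
    problem="New users abandon signup before reaching the dashboard",
    in_scope=["signup flow", "email verification"],
    out_of_scope=["pricing page", "SSO integration"],
    constraints=["must work on mobile", "no new third-party dependencies"],
    success_criteria=["users complete the flow in under 90 seconds"],
)
```

The point of the structure is the check: "build me a dashboard" with empty constraints and no success criteria fails `is_well_shaped`, which is exactly the discipline the prose above describes.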
The PM who can't shape problems precisely will get technically functional output that misses the point entirely. The agent will build exactly what you asked for, and that's the problem: you asked for the wrong thing because you hadn't thought it through clearly enough.
This is why the spec and the prototype are converging. When you describe the problem with sufficient precision, the description itself becomes executable. You don't hand it to an engineer for interpretation. You hand it to an agent for implementation. The clarity of your thinking is directly visible in the quality of what gets built.
Skill 2: Context curation
This is the skill nobody talks about, but every PM who's effective with agents has quietly developed it.
The quality of what an agent produces is directly proportional to the context you provide. Garbage in, garbage out has never been more literally true. An agent with a vague prompt produces vague output. An agent with rich, specific context about your users, your constraints, your definition of quality, and your history of failed approaches produces output that actually fits.
When I first started working with agents, I'd give broad prompts. "Build me a dashboard for customer feedback." I'd get something that technically functioned but fit nothing about our situation. It didn't understand our users, our constraints, or what "good" looked like for our specific context.
Now I maintain context documents that I feed to agents before starting any project. Over time, I've learned what actually matters in these documents:
The user, specifically. Not a persona slide. Real details. Who they are, what they care about, what makes them give up, what makes them pay attention. Direct quotes from calls, tickets, or sales notes. Their language, not your synthesis. This grounds the agent in real pain, not abstracted pain.
What good looks like. Examples your team considers well-designed. Your own past work, competitor implementations, adjacent products that handle similar problems well. Showing is exponentially more effective than describing.
What you've tried and why it failed. This is institutional knowledge that usually lives in people's heads and dies when they leave. The approaches you've already killed and the specific reasons why. Without this, agents will confidently reinvent your past mistakes.
How you'll know it worked. The measurable outcomes that separate "technically runs" from "actually solves the problem."
When I ask an agent to prototype something now, it's not starting from zero. It knows who we're building for, what they actually said, what good looks like, and what's already failed. The output fits because the input was specific.
Context curation is a continuous discipline, not a one-time exercise. Your context documents should be living artefacts, updated with every customer conversation, every failed experiment, every shift in strategy. The PM who maintains rich, current context documents has a compounding advantage over the one who starts every agent interaction from scratch.
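The four kinds of context above can be sketched as one structured document that gets flattened into an agent's preamble. This is an illustrative sketch, assuming a plain-text prompt pipeline; the sections, quotes, and helper name are all hypothetical.

```python
# A hypothetical context document. Each top-level key maps to one of
# the four kinds of context: the user, what good looks like, failed
# approaches, and success criteria.
context = {
    "user": {
        "who": "operations managers at mid-size logistics firms",
        "direct_quotes": [
            "I spend every Monday morning rebuilding this report by hand.",
        ],
    },
    "what_good_looks_like": [
        "competitor's export flow: one click, sensible defaults",
    ],
    "failed_approaches": [
        {"approach": "weekly email digest", "why_it_failed": "ignored after week two"},
    ],
    "success_criteria": [
        "report generated without manual steps",
    ],
}

def render_preamble(ctx: dict) -> str:
    """Flatten the context document into plain text to prepend to a prompt."""
    lines = []
    for section, content in ctx.items():
        lines.append(f"## {section}")
        lines.append(str(content))
    return "\n".join(lines)
```

Because the document is data rather than prose scattered across people's heads, updating it after every customer call or failed experiment is a one-line edit, which is what makes the "living artefact" discipline sustainable.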
Skill 3: Taste and evaluation
When code is abundant, curation becomes the scarce skill.
Agents will produce output quickly and in volume. Multiple approaches, multiple implementations, multiple variations, all technically functional. The PM's job is to look at all of it and know which version to ship. Not which version runs. Which version matters.
This is taste. And it's harder than it sounds.
Agents will confidently produce things that look correct but miss the point entirely. A feature that handles the happy path beautifully but falls apart on the edge case that 30% of your users will actually hit. A design that's technically accessible but feels hostile. An implementation that solves the stated problem while creating two new ones.
You need the intuition to distinguish between "technically correct" and "shippable" in seconds. That intuition doesn't come from reading about products. It comes from building them, evaluating them, and learning what "good enough to ship" actually feels like versus "technically works." This is why agent evals matter so much. They're the mechanism that turns taste from a feeling into a measurable standard.
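As a minimal sketch of what "taste as a measurable standard" can mean in practice: a tiny eval harness that runs named checks against an agent's output. The checks, the sample output, and the criteria here are all invented stand-ins for real product judgment.

```python
from typing import Callable

# Each eval is a named predicate over the agent's output.
# These three checks are illustrative, not a real eval suite.
evals: dict[str, Callable[[str], bool]] = {
    "handles_empty_input": lambda out: "no results" in out.lower(),
    "uses_customer_language": lambda out: "shipment" in out.lower(),
    "no_placeholder_text": lambda out: "lorem ipsum" not in out.lower(),
}

def run_evals(agent_output: str) -> dict[str, bool]:
    """Score one piece of agent output against every named check."""
    return {name: check(agent_output) for name, check in evals.items()}

sample_output = "No results found for this shipment yet."
scores = run_evals(sample_output)
# All three checks pass for this sample, so it clears the bar.
```

Real evals are richer than string matching, but the shape is the same: each time your gut says "this feels wrong", you try to capture why as a named, repeatable check.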
I regularly have agents build two or three completely different approaches to the same problem just to see which one feels right when I use it. That used to be prohibitively expensive, weeks of engineering time to explore multiple paths. Now it's a few hours with parallel agents. The ability to explore the solution space cheaply is a superpower, but only if you have the taste to evaluate what comes back.
There's no shortcut. You have to build things, evaluate the output, iterate, and develop the pattern recognition over hundreds of cycles. The PMs who've been building side projects, prototyping with no-code tools, and getting their hands dirty with the product have a head start. The ones who've been living in Jira and slide decks are starting from zero.

The new workflow
This is how the mental model shifts in practice:
Old model: PM figures out what to build. Writes spec. Engineers build it. PM reviews. Iterate over weeks.
New model: PM figures out what to build. PM prototypes it with agents. PM evaluates the prototype against real user context. Iterates rapidly. When it's right, engineers make it production-ready.
The PM isn't handing off requirements anymore. They're shaping the first iteration themselves and getting feedback on working software, not slide decks, not Figma mocks, not written descriptions of what a feature might do. Working software.
Engineers don't disappear in this model. They become collaborators on making the product robust, scalable, and production-grade rather than translators of the PM's intent. That's a better use of their expertise. And it's a better working relationship, because you're iterating on a thing that exists, not debating abstractions.
Two shifts in thinking make this work:
Think in iterations. Let the first version be wrong. Don't try to perfect the solution in your head before you start. Give the agent rich context about the problem, then let it take a rough first pass. React to what comes back. You'll learn more from "that's not quite right because..." than from trying to anticipate every edge case upfront.
Hold ambiguity longer. The old PM instinct was to resolve ambiguity into specs as quickly as possible. Collapse the options, pick a direction, write it down. The new instinct is to stay in the ambiguous zone while you explore. Let agents help you understand the solution space before you commit. Don't converge too early. The cost of exploring one more option is now trivial.
Getting started
If you haven't worked this way yet, here's the path in.
Pick a real problem you actually have. Not a hypothetical exercise. Something that's annoying you right now. A report you compile manually. A workflow that's tedious. A prototype you wish existed but never had the engineering bandwidth to build.
Spend thirty minutes writing context before you prompt. Your users. Their exact words. What good looks like. What you've tried. Your constraints. Your success criteria. This context document is more important than the prompt itself.
Point an agent at it. Don't expect perfection. Expect a starting point. React to it. Guide it. Iterate.
Do this ten times. With different problems. Different levels of complexity. Different tools. You'll develop intuition for what context matters, how to shape problems, and how to evaluate output. That intuition is the new PM skill, and it only comes from reps.
What's left when the translation layer disappears
If your job was mostly translating customer needs into documents for engineers, that function is being automated. Not partially. Fundamentally.
But if your job was understanding problems so deeply that the right solution becomes obvious (to you, to your team, and now to the agents you work with) you're more valuable than you've ever been. Agents amplify that understanding into shipped product faster than any team structure could before.
Understanding the problem. User empathy. Judgment. Taste. These were always part of the PM job. Now they're becoming the whole job; the product competency model in the handbook maps these skills across seniority levels. Everything else was scaffolding. The scaffolding is coming down, and what's left is the structure that actually holds weight.
Frequently Asked Questions
Does this mean PMs need to learn to code?
No, but they need to learn to work with agents that produce code. The distinction matters. You don't need to understand the implementation at the syntax level. You need to evaluate the output at the product level: does it solve the problem, handle the edge cases, and feel right to use? That's product judgment applied to working software rather than to specs and mocks.
How do you build context curation as a habit?
Start a living document for whatever you're working on right now. After every customer call, add the direct quotes that surprised you. After every failed experiment, document what you tried and why it didn't work. After every successful ship, capture what "good" looked like. In a month, you'll have a context document that makes every agent interaction dramatically better. The discipline is maintenance, not creation.
What tools should a PM start with for agent-based prototyping?
The tool matters less than the habit. Pick one that matches your current comfort level, whether that's an AI coding IDE, a browser-based prototyping tool, or a command-line agent. Use it daily on real problems. Switch tools as you get more comfortable. The muscle you're building is working with agents, not mastering any specific tool.
Logan Lincoln
Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms.