
30,000 Out. 8,000 In. The Great PM Cull Is Underway.

20 April 2026 · 10 min read

TL;DR

  • Big tech is shedding roles and hiring simultaneously at roughly a 3.75:1 ratio. Most analysts treat the two as separate events. They're the same event.
  • The filter on the re-hire isn't seniority, function, or cost. It's AI fluency. Ex-FAANG pedigree from 2018 loses to a two-year AI-native builder.
  • If your resume still sells what you shipped five years ago, you're being measured against the new 8,000. Rewrite it.

Meta let 20,000 people go across 2022–2025. It's also hiring thousands of AI researchers, product engineers, and infra specialists in 2026. Google, Amazon, Microsoft, and Salesforce are running the same playbook. Most commentary treats these as separate events: one a correction to ZIRP-era over-hiring, the other a hot-market bet on AI talent.

They're the same event.

Nikhyl Singhal (former CPO at Facebook Groups and Google, now running a CPO community of 125+ leaders) put a number on it recently: big tech will shed 30,000 roles and hire 8,000 over the next 12–24 months. The 30,000 represent the layer that got bloated during ZIRP (the zero-interest-rate period of 2020–2022), when headcount doubled without output doubling. The 8,000 are the replacements. Same companies. Different filter.

The ratio is 3.75 to 1. That ratio is the signal.

Why the 30,000 and the 8,000 are the same event

The 30,000 layoffs and the 8,000 hires are one event because the same companies are running both sides of the exchange, swapping information-movement roles for builder roles inside a single operating-model rewrite. The dominant narrative misses this because it reads the two on different clocks: the layoffs as a backward-looking ZIRP correction, the hiring as a forward-looking AI bet. Under that reading, the re-hiring is a separate wager layered on top of an organisation returning to baseline.

That framing is wrong, and it matters.

The companies doing this did not simply cut 10% of headcount. They cut a specific cohort: roles where the primary output was information movement. Programme managers routing status between leaders. Analysts compiling dashboards. Recruiters running inbound pipelines. Mid-level PMs translating exec intent into engineering specs. Support engineers escalating tickets. Functions where the job was to relay, aggregate, or translate.

Those roles were expensive because they sat at org-chart bottlenecks. They were also the roles LLMs got good at first. When the CFO asks which teams can absorb an AI productivity tailwind, these are the answers.

The 8,000 being hired back sit at the opposite end of the spectrum. They build. They ship. They run evals. They debug agents in production. They write code that runs in customer environments. They do not translate exec intent; they read it directly and generate the artefact. They don't aggregate dashboards; they wire up the pipelines that emit the numbers.

This is not a correction. It's a swap. The 30,000 going out and the 8,000 coming in are two halves of one operating-model rewrite.

The 3.75:1 ratio is the skill filter, not the cost filter

The 3.75:1 replacement ratio is filtering for skill density, not labour cost: median compensation on the re-hire side runs higher than on the layoff side, not lower. The companies are explicitly trying to do more with fewer people, and they're willing to pay up per seat to do it. This isn't cheap labour arbitrage; it's skill-density arbitrage.

Conventional layoff analysis misses this because it frames headcount reductions as pure cost-cutting. Which roles are expensive? Which can be outsourced? Which can be flattened? Those questions fit 2009 restructurings. They don't fit this one.

The filter is specific. I've watched it tighten over the last eighteen months:

  1. Recency of AI tool use. Not "have you used Copilot?" The question is whether you shipped something with it this quarter.
  2. Eval literacy. Can you design a regression suite for an agent? Have you?
  3. Unit economics fluency. Can you reason about inference cost, token volume, and margin in the same breath?
  4. Hands-on velocity. Have you actually prototyped, or do you still believe your role is to write specs someone else builds from?

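To make the second item concrete: a regression suite for an agent can start as a fixed list of prompts paired with properties every response must satisfy, re-run on each change. This is a minimal sketch; `run_agent` is a hypothetical stand-in for whatever invokes your agent, and the cases are illustrative, not from any real product:

```python
# Minimal regression eval for an agent: fixed cases, checked properties.
# `run_agent` is a hypothetical placeholder; swap in your real agent call.

def run_agent(prompt: str) -> str:
    # Canned response so the sketch runs end-to-end.
    return "Items are returnable within 30 days of delivery."

CASES = [
    # (prompt, predicate the response must satisfy)
    ("What is the refund window?", lambda r: "30 days" in r),
    ("What is the refund window?", lambda r: len(r) < 500),  # no rambling
]

def run_suite():
    """Run every case; return the list of failures (empty means green)."""
    failures = []
    for prompt, check in CASES:
        response = run_agent(prompt)
        if not check(response):
            failures.append((prompt, response[:80]))
    return failures

failures = run_suite()
print(f"{len(CASES) - len(failures)}/{len(CASES)} checks passed")
```

The point of the filter question is not sophistication; it's whether you've built even this much and wired it into your release process.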
None of those correlate with years of experience. Most correlate inversely with prestige pedigree. A senior PM at a big-tech company who last shipped code in 2019 scores worse on this filter than a two-year operator at an AI-native startup. The market noticed before most of the senior PMs did.

I've been in conversations with hiring managers who explicitly deprioritised candidates from the 2015–2020 FAANG cohort because the skill profile didn't match what they needed. Not because those candidates were bad. Because the game changed underneath them and most of them didn't feel it yet. Recency beats pedigree, and the gap is widening each quarter.

What this means for the 30,000

If you're in the cohort most at risk, the instinct is to read the discourse about "jobs being replaced by AI" and plan accordingly. That instinct leads in the wrong direction, because the jobs are not being replaced. The tasks are. I've written about this distinction between jobs and tasks and the point survives the swap: the 8,000 roles being created are also jobs, just with a different task bundle.

The question isn't whether your employer will need you in 18 months. It's whether your task bundle today looks more like the 30,000 or the 8,000.

Some diagnostics, honest ones:

  • How much of your week is spent aggregating information others produced, versus producing information yourself?
  • When was the last time you shipped something end-to-end that your engineering team didn't have to rebuild?
  • Could you run an eval on your own product this afternoon without asking anyone for access or help?
  • Do you know, in dollars, what your team's monthly AI spend is and how it moves?

If most of those are no, you're in the 30,000 regardless of what your title says. The swap is skill-profile, not org-chart. Someone in the 8,000 at a competitor is already doing the work you'd need to do to stay in the 8,000 at your own company.

The playbook for staying in the 8,000

The structural fix isn't subtle. The companies that got ahead of this did three things in sequence, and they did them without waiting for a formal programme to give them permission.

Ship something this quarter with AI tools you did not use last year. Not a proof of concept. Something that goes to real users or real internal customers. This is the single highest-signal thing you can put on a resume in 2026. It beats every certification, every internal training, every reorganised bullet point about past roles.

Learn the unit economics of the product you touch. If your product has AI in it, you should be able to say, without checking, what a typical request costs, where the margin leaks, and which features would die if model pricing halved. Token spend is the new headcount line; if you can't read it, you can't make decisions against it.

Stop being the translation layer. If your value in a meeting is to summarise what the engineer said for the VP, you've identified the task an agent will eat first. Close the loop: have the opinion, make the call, commit to an action. Opinion-having has always been the harder part of the job. AI just made that more visible.

None of this is a new job description. It's a recomposition of the one you already have. The 8,000 roles Meta and Google are hiring into aren't filled with people who stopped being PMs. They're filled with people whose task bundle quietly shifted two or three years ago, and who have the receipts to prove it.

The ratio is widening, not tightening

The most likely error reading this is to assume 3.75:1 is the steady-state ratio. It almost certainly isn't. The current swap is a first-order response to AI productivity gains that are themselves still compounding. The 2028 ratio is probably worse, not better, for the roles that didn't reinvent. Nikhyl's estimate is a snapshot; the trend line is harder to stomach.

The response that scales is personal, not organisational. Orgs are slow to reorganise around this, which is why the swap is happening inside the same companies rather than through new entrants eating the incumbents. But the individuals who quietly rebuilt their task bundle over the last eighteen months are the ones being hired at the top of the 8,000 pay bands, often by their own employers.

Plan accordingly. If you're stuck, the fix is to ship something, not to read another piece like this one.

Frequently Asked Questions

Is the 30,000 / 8,000 number precise or directional?

Directional. Nikhyl Singhal offered it as a rough estimate based on his visibility into a CPO community spanning 125+ leaders. The exact figure will differ by company and quarter. What's robust is the ratio pattern: major tech employers are cutting at roughly 3–4x the rate they're backfilling, and the backfills sit in a different skill profile. That pattern is visible in public announcements from Meta, Google, Amazon, Microsoft, and Salesforce.

Does this apply outside big tech?

Yes, with a lag. The swap started in hyperscalers because they felt the productivity shift first and had the most over-hired layer to cut. Mid-market SaaS, regulated enterprise, and non-tech industries are running 12–24 months behind. Banks and insurers I've worked with are in the early stages of the same pattern: quiet contraction of translation roles, quiet hiring of AI-literate builders. The ratio will look different but the direction is the same.

Is the filter really AI fluency, or is it just "younger and cheaper"?

The cheaper framing is wrong. Median compensation for the 8,000 tends to be higher, not lower, than for the 30,000 being cut. The filter is skill-density: the companies are willing to pay more per seat because they're buying a materially different capability. Age correlates loosely because the skill is recent, but ex-FAANG operators in their forties who rebuilt their stack are being hired into the 8,000; junior candidates who can't ship are not.

What if I work in a regulated industry where AI deployment is slower?

Regulated deployment is slower, but regulated hiring isn't. The banks, insurers, and healthcare companies I've talked to are hiring AI-literate operators faster than they're deploying AI products, because they know the deployment will accelerate and they want the people in place before it does. The filter is actually more pointed in regulated contexts, because the skill overlaps with risk-tiered governance fluency, which is scarce.


Related: AI Multiplied Your Engineers. Your PMs Are Drowning. and 94% Capable. 33% Deployed. The Gap That Explains Everything.


Logan Lincoln

Product executive and AI builder based in Brisbane, Australia. Nine years in regulated B2B SaaS, currently shipping production AI platforms. Written from experience leading org transformation at Cotality.