
Anthropic's AI Job Replacement Chart, Explained (And What It Misses)

Published on 2026-04-26 by RiskQuiz Research


Every few months a chart from Anthropic's Economic Index goes viral. The version in circulation in early 2026 is the same one most people are arguing about: bars showing AI usage by occupation, with software developers, writers, translators, and analysts at the top, and roofers, dishwashers, drivers, and home-health aides at the bottom. Captions range from "your job is safe" to "this is the displacement order" — same chart, sometimes the same week.

The chart is real, the data is unique, and the implication people pull from it is usually wrong. Anthropic itself is careful in the underlying papers; the social-media captions are not. This post walks through what the Index actually measures, the four numbers worth memorising, and the six things it does not capture — the gaps that decide whether a position on the chart actually predicts job loss.

If you want the personalised version of where your job sits across all the dimensions the chart misses, take the 4-minute AI career risk assessment. It blends the Anthropic Economic Index with BLS, OECD, ILO, and McKinsey data, and weights physical-presence, licensure, and liability moats the chart does not show — the result is a 0–100 score with a role-specific explanation of which dimensions are pulling your number up or down.

What the Anthropic Economic Index Actually Is

Anthropic launched the Economic Index in February 2025 and has updated it through 2025 and into 2026. Its methodology is what makes it different from every other AI-and-jobs study.

Most AI exposure research is theoretical. Researchers list the tasks that make up an occupation (via O*NET), score each on whether large language models could plausibly do it, and roll the scores up to occupation-level "AI exposure." The 2023 OpenAI/OpenResearch/UPenn paper "GPTs are GPTs" did this. So did Goldman Sachs' 2023 estimate that 300 million full-time-equivalent jobs were exposed to AI. So did the 2024 IMF global AI exposure paper. All are answers to "if AI worked perfectly, which jobs could it touch."

The Anthropic Economic Index asks a different question: which occupations and tasks are people actually using Claude for, right now, in millions of real conversations. The team analysed anonymised Claude.ai conversations, classified each against O*NET tasks, and produced a behavioural map of where AI is being deployed in the real economy.
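The rollup the Index describes can be sketched in a few lines. Everything below is illustrative: the conversation records, task statements, and occupation labels are hypothetical stand-ins, not Anthropic's actual classifier, code, or taxonomy.

```python
from collections import Counter

# Hypothetical conversation records, already classified against O*NET-style
# task statements (the Index papers describe an automated classifier doing
# this step at scale over anonymised conversations).
conversations = [
    {"task": "Write, analyze, and review programs", "occupation": "Software Developers"},
    {"task": "Translate messages or documents", "occupation": "Interpreters and Translators"},
    {"task": "Write, analyze, and review programs", "occupation": "Software Developers"},
]

# Roll conversation-level classifications up into an occupation-level usage map.
usage_by_occupation = Counter(c["occupation"] for c in conversations)
total = sum(usage_by_occupation.values())

for occupation, count in usage_by_occupation.most_common():
    print(f"{occupation}: {count / total:.0%} of conversations")
```

The real pipeline adds steps this sketch omits (privacy-preserving aggregation, an augmentation-versus-automation tag per conversation), but the shape of the output is the same: a behavioural usage share per occupation, which is what the bars on the chart encode.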

That is the chart you have seen. The bars at the top — software development, writing, translation, technical content — are not "the jobs AI will replace first." They are "the jobs whose practitioners are most aggressively using Claude right now." Treating those two statements as the same is the single biggest error in how the chart gets shared.

Pull-quote: The Anthropic Economic Index measures which occupations are using Claude — not which occupations are being replaced by AI. High usage is sometimes a sign of imminent replacement, sometimes a sign of competent augmentation, and sometimes just a sign that the occupation has access to a laptop.

The Four Numbers Worth Memorising

Across the various Economic Index releases through 2025, the same four headline numbers keep appearing in the underlying data. These are the ones worth carrying around.

1. Roughly 36% of occupations use AI for at least 25% of their tasks. This is the number that anchors most reasonable summaries of the index. About one in three occupations shows meaningful AI penetration into a quarter of the work — meaningful, but very far from full automation.

2. Roughly 4% of occupations use AI for more than half of their tasks. This is the high-intensity tail. Software development sits in this tail. So do certain writing-heavy occupations and translators. It is a small share of the labour market, even though it dominates the chart visually.

3. The augmentation-to-automation split is roughly 57:43. When Anthropic's team classified each conversation as either augmentation (a human directing the model and using its output as part of their own work) or automation (the model autonomously completing a task end-to-end), the split came in around 57% augmentation, 43% automation across the index. This is the most under-quoted number in the entire dataset. It says that even where AI usage is heavy, the dominant pattern is still humans-in-the-loop, not humans-out-of-the-loop.

4. Computer and mathematical occupations alone account for roughly 37% of all Claude conversations. Software developers are massively over-represented in the data — and in every other AI usage statistic, including OpenAI's, Google's, and GitHub's. This concentration is the structural fact that makes the chart look the way it does, and the fact most quote-tweets ignore.
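The first two headline numbers are simply threshold counts over an occupation-level distribution. A minimal sketch with made-up task shares (the real distribution lives in the Index data, not here):

```python
# Hypothetical: share of each occupation's O*NET tasks that show AI usage.
task_usage_share = {
    "Software Developers": 0.62,
    "Technical Writers": 0.55,
    "Market Analysts": 0.31,
    "Registered Nurses": 0.12,
    "Roofers": 0.01,
}

n = len(task_usage_share)
# Headline 1: share of occupations using AI on at least 25% of tasks.
quarter = sum(1 for s in task_usage_share.values() if s >= 0.25) / n
# Headline 2: share of occupations using AI on more than half of tasks.
half = sum(1 for s in task_usage_share.values() if s > 0.50) / n

print(f">=25% of tasks: {quarter:.0%} of occupations")  # 3/5 in this toy set; ~36% in the real data
print(f">50% of tasks:  {half:.0%} of occupations")     # 2/5 in this toy set; ~4% in the real data
```

The point of writing it out is that both numbers are sensitive to where you draw the threshold, which is why quoting them without the cutoffs ("36% of occupations use AI!") strips most of the meaning.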

For a fuller treatment of how these numbers connect to displacement timelines, see the 2030 AI job map, which uses the Index alongside BLS and McKinsey data to estimate task automation, role displacement, and reshuffling rates over the medium term.

What the Chart Gets Right

The Index unambiguously gets three things right, and no other public dataset measures them with anything like this resolution.

It is the first large-scale behavioural measurement. Every other major AI-and-jobs dataset is either a survey of intentions ("Are you using AI at work?") or a theoretical exposure score. The Economic Index is the first to ground the discussion in millions of actual conversations. That is an enormous methodological step forward, and the underlying papers should be read by anyone who wants to argue about AI and jobs seriously.

It correctly identifies the early-adopter cluster. The shape of the top of the chart — software development, writing, translation, technical content, marketing copy, certain research and analyst tasks — matches every other behavioural signal we have. GitHub Copilot's adoption curve, OpenAI's enterprise mix, Gemini's usage breakdowns, and the LinkedIn Workforce Report all paint the same top-of-funnel. The Economic Index is not an outlier here; it is consistent with what every honest dashboard at every major model lab is showing.

It correctly identifies the cold floor of the chart. Construction, transportation, food preparation, personal care, home-health work, and skilled trades show very little Claude usage. Not because their workers don't have phones, but because the work is not the kind of work that can be done by typing into a chat box. This part of the chart aligns precisely with what BLS, OECD, and ILO show from completely different angles — physical-presence work and frontline care are the categories with the slowest AI displacement timelines and, in many cases, the strongest projected employment growth. The jobs AI won't replace analysis breaks this down further by occupation.

Pull-quote: The Anthropic chart is right about the floor and right about the ceiling. It is the muddy middle — and the implied causal arrow from "uses AI a lot" to "will be replaced soon" — where the chart gets used badly.

What the Chart Misses — Six Gaps

Now the six things the chart does not measure, in roughly the order they bite hardest. These are the gaps that decide whether a particular bar on the chart is good news, bad news, or noise for any given person looking at it.

1. The chart conflates usage with displacement

The single biggest interpretive error. The chart shows how many Claude conversations map to an occupation. It does not show how many people in that occupation lost their jobs, had their hours cut, or saw their wages compressed.

A software developer using Cursor, Claude Code, and GitHub Copilot every day shows up as extreme AI usage. That same developer might be earning a wage premium, getting promoted faster, and shipping more product than ever — or might be on a layoff list because the team needs half as many engineers. The chart cannot tell you which.

High AI usage is a correlate of disruption, not a measure of it. To translate from usage to displacement you need three other inputs: how the employer is restructuring, what wage data shows, and where the hiring funnel is contracting. None are in the Index. McKinsey's 2025 financial services AI survey, LinkedIn's Workforce Report, and BLS occupation employment data are.

2. The chart is biased toward Claude users

This sounds obvious and gets ignored anyway. Anthropic measured Claude conversations. Claude in 2024–2025 had a particular customer base — disproportionately developers, AI-curious knowledge workers, and English-speaking early adopters in the US, UK, Western Europe, and a long Anglophone tail. ChatGPT's user base is broader. Gemini's is different again. Copilot's is heavily enterprise-Microsoft.

That means the Index over-represents the occupations that happened to be early-Claude-adopter clusters and under-represents the occupations whose AI usage is concentrated on other platforms. A teacher using ChatGPT every day for lesson planning would show up almost nowhere in Anthropic's Index even though their occupation is, in fact, a heavy AI user — the Cengage/RAND 2025 survey found roughly 60% of US K-12 teachers using AI tools, saving about six hours a week.

The honest reading: the shape of the chart is broadly right, but the absolute heights of specific bars depend on which model lab's user base you happened to sample. Cross-validate with parallel public data before assigning weight to any single bar's exact position.

3. The chart cannot see enterprise-deployed AI

The Economic Index measures conversations from individual Claude users. It cannot see enterprise deployments where AI is wired into a workflow — embedded in a help-desk product, a contact-centre routing system, an underwriting pipeline, or an ERP automation — and end users never type into a chat box.

This is enormous. Klarna's customer-service AI handled over two million conversations in its first year and reportedly did the work of ~700 agents — none of those conversations show up in the Anthropic Economic Index. Same for Shopify's, Zendesk's, Salesforce Service Cloud's, Intercom Fin's, and almost every large BPO's deployments. Same for the agentic underwriting and reconciliation pipelines being deployed at large banks. Same for Klaviyo, HubSpot, and the rest of the AI-marketing automation stack.

For a long list of occupations, the relevant displacement story is happening around the worker, not through the worker's own chat client. Tier-1 customer service is the textbook example. The chart will under-represent its disruption massively for as long as customer-service AI is deployed by employers, not invoked by workers.

4. The chart is a snapshot, not a trajectory

Each release of the Index is a slice of behaviour at one point in time. The chart you saw in early 2026 reflects the mix of work and adoption from late 2025; the next release will look different. Some occupations will move up the chart because adoption is exploding (legal, finance, mid-tier marketing). Some will move down because their early-adopter spike has saturated and the rest of the labour market is catching up.

Reading any single snapshot as a forecast is a category error. The right comparison is across releases — and even then, the underlying user-base shift between releases (newer Claude features, new tiers, geographic expansion) makes year-on-year comparison harder than it looks. The AI job market 2026 predictions post lays out which occupations the trajectory most plausibly favours and which it does not, using the Index alongside BLS and OECD projections so the trajectory is not held captive to any one snapshot.

5. The chart treats tasks as if they were jobs

Even where the Index correctly identifies that AI is doing a large share of an occupation's tasks, it cannot tell you whether the job is going away. A job is a bundle of tasks plus authority, accountability, judgement, relationship work, and physical or licensed presence. Automating 60% of the tasks in a role — even 80% — does not automate 60% or 80% of the role. It often automates the parts of the role that the job-holder least wanted to do, and leaves the parts that carry most of the wage premium.

This is the single most important reframing for anyone reading the chart anxiously. A high bar on the Anthropic Index for your occupation means a large share of your task list can plausibly be done by AI. It does not mean your role disappears. The role disappears only if (a) the remaining tasks no longer add up to a coherent unit of work an employer wants to buy as a single hire, or (b) the productivity gains are large enough that fewer hires can absorb the same demand. Both happen. Neither is automatic. The OECD's task-bundling research and Goldman Sachs' 2023 productivity-versus-displacement modelling both make this point in detail. The Anthropic chart, by itself, does not.
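The arithmetic behind that reframing is worth making explicit. This is a toy back-of-envelope model, not drawn from the OECD or Goldman Sachs papers: suppose AI automates 60% of a role's tasks, but those tasks only took 40% of the role's time, and demand for the output also grows.

```python
def headcount_ratio(time_share_automated: float, demand_growth: float) -> float:
    """Toy model: new headcount needed relative to old headcount.

    time_share_automated: fraction of the role's *time* (not its task count)
        that AI now covers.
    demand_growth: fractional growth in output the team must deliver.
    """
    remaining_time = 1.0 - time_share_automated
    return remaining_time * (1.0 + demand_growth)

# "60% of tasks automated" sounds like 60% of jobs gone. But if those tasks
# took only 40% of the time, and demand rises 50%, the team still needs
# roughly 90% of its old headcount.
print(round(headcount_ratio(0.40, 0.50), 2))
```

The model is deliberately crude, but it shows why the task-share number on the chart and the headcount outcome can diverge so widely: the conversion runs through time shares and demand, neither of which the chart contains.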

For the task-by-task version of this analysis, see which jobs can actually be replaced by AI, which breaks roles down to the task layer the Index measures and adds back the authority, judgement, and presence dimensions it doesn't.

6. The chart says nothing about the moats

Three structural moats decide whether high AI exposure becomes job displacement: physical presence, licensure, and personal liability. None are in the Index because none are in conversation data.

Physical presence. A surgeon's, plumber's, electrician's, or paramedic's hour cannot be done over a chat interface, and the Index correctly puts these at the floor. But it does not measure how much of an "exposed" role is actually presence-bound. A nursing job has high task overlap with administrative AI; the bedside hour is the wage. See will AI replace nurses for the worked example.

Licensure. A radiologist's signed read, a lawyer's billable hour, a CPA's audit opinion, a physician's diagnosis — these carry legal authority AI systems do not hold. AI may produce most of the output, but a licensed human has to sign. The Index does not measure how much of an occupation's defended margin sits behind that signature. See will AI replace lawyers and will AI replace accountants.

Personal liability. The professional personally on the hook when something goes wrong has a moat AI cannot cross until courts and insurers say it can. That conversation has not happened in any major jurisdiction and shows no sign of happening in 2026.

Two roles can sit at the same height on the chart and have completely different fates because one has all three moats and the other has none. The chart does not show this. A serious AI career risk assessment has to.
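To see how the same bar height can imply different fates, here is a deliberately simplified scoring sketch. The halving rule and the inputs are invented for illustration; this is not riskquiz.me's methodology and nothing like it appears in the Index.

```python
def displacement_risk(usage_bar: int, presence: bool, licensure: bool, liability: bool) -> float:
    """Illustrative only: discount raw AI-usage exposure by structural moats.

    usage_bar: the occupation's height on the usage chart as a 0-100 score.
    Each moat halves the effective exposure (an invented rule, for illustration).
    """
    risk = float(usage_bar)
    for moat in (presence, licensure, liability):
        if moat:
            risk *= 0.5
    return risk

# Same bar height, very different fates:
print(displacement_risk(80, False, False, False))  # no moats   -> 80.0
print(displacement_risk(80, True, True, True))     # all three  -> 10.0
```

Any real assessment would weight the moats differently per occupation, but the structure of the argument survives the simplification: the usage bar sets the starting point, and the moats do most of the work from there.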

Pull-quote: A high bar on Anthropic's chart and a low number of structural moats predicts displacement. A high bar with strong moats predicts augmentation, productivity, and often a wage premium. A low bar with strong moats predicts a hiring market that gets tighter, not looser. The same chart fits all three stories — only the moats decide which one is yours.

How to Read the Chart Correctly

Three habits separate honest reading from the viral version.

Read it alongside BLS, OECD, and ILO, not instead of them. The Anthropic Economic Index is the strongest behavioural signal in public data — and the weakest signal on labour-market outcomes (wages, openings, layoffs, employment growth). Treat it as one of four anchors. BLS Employment Projections 2023–2033 covers occupation-level employment growth. OECD's automation studies cover task-level exposure. ILO's Generative AI and Jobs paper is the cleanest read on global exposure differences. The Index slots in next to them, not above them. For the full breakdown of which AI job loss statistics in 2026 are reliable and which are widely misread, the companion explainer separates the exposure, adoption, and displacement numbers and tags each by source.

Distinguish "uses AI" from "augmented by AI" from "replaced by AI." Three different things. The chart shows the first. The augmentation-to-automation split is buried in the data and rarely quoted. The replacement question is not in the data at all — for that you need wage data, employer-side surveys, and hiring-funnel measurements (BLS, McKinsey).

Look for absence, not just presence. The bars at the floor — construction, transportation, personal care, food preparation — are arguably more useful than the bars at the top. They are the cleanest behavioural confirmation of where AI isn't going in 2026, and BLS growth projections plus WEF Future of Jobs 2025 for those same categories are some of the strongest in the economy. If you are choosing a career in your twenties, the floor is more decision-relevant than the ceiling.

What This Means for Your Career

If the chart shows your occupation at the top, the practical question is not "am I about to be replaced." It is "am I in the augmented-and-paid-more half of the bar, or in the automated-out half." Three signals tell you which: are you orchestrating the AI tools or producing output the tools now reproduce; does your role carry licensure, sign-off authority, or personal liability; is your employer expanding output or compressing headcount. The first you can change in 12–18 months. The second is structural. The third you can read off your hiring page.

If the chart shows your occupation at the floor, ask whether you are looking at long-term durability or temporary insulation. Surgeons, electricians, and paramedics have durability — presence, licensure, liability. Drivers and dishwashers have temporary insulation — presence today, exposure to a different wave (autonomous vehicles, kitchen robotics) on a longer horizon. The Index does not separate these; the hub guide on whether AI will take your job does.

If the chart shows your occupation in the muddy middle, you are in the most decision-relevant zone, and the chart alone gives you almost no signal. Read the will AI replace software developers breakdown if you are in tech, the role-specific posts if you are in finance, marketing, HR, education, or content, and the task-by-task replacement analysis. Then put a number on it.

Frequently Asked Questions

Q: What is the Anthropic Economic Index, and is it the same as the "AI job replacement chart"?

A: The Anthropic Economic Index is a research project Anthropic launched in February 2025 that maps anonymised Claude conversations to O*NET occupations and tasks. The "AI job replacement chart" most people share is one chart from the Index — bars showing the share of Claude conversations associated with each occupation. The chart shows AI usage by occupation, not AI replacement of occupations. The Index headline numbers are roughly 36% of occupations using AI for at least 25% of tasks, roughly 4% using it for over half, and a 57:43 augmentation-to-automation split.

Q: Does my occupation being high on Anthropic's chart mean my job is about to be replaced?

A: No. High on the chart means people in that occupation are heavy Claude users — sometimes because AI is replacing tasks, sometimes because AI is augmenting and the worker is more productive than ever. The chart cannot distinguish the two. To know whether your job is at risk, combine the Index with BLS occupation employment projections, employer-side hiring data, wage trends, and the structural moats in your role (licensure, physical presence, liability). A personalised AI risk score does this combination across nine dimensions.

Q: Why are software developers so high on the Anthropic chart?

A: Two reasons. First, software development is one of the most AI-exposed task bundles in the economy — code generation is the strongest LLM capability and the tools (Cursor, Claude Code, Copilot) are mature. Second, developers are heavily over-represented in Anthropic's user base; roughly 37% of all Claude conversations map to computer and mathematical occupations. Whether software development is also one of the most replaced occupations is a different question — see will AI replace software developers and the GitHub Copilot signup-freeze analysis for the unit-economics view.

Q: What does the Anthropic chart miss the most?

A: Six things, in rough order. (1) Usage is not displacement. (2) The data is biased to Claude's user base — ChatGPT, Gemini, and Copilot users are not in it. (3) Enterprise-deployed AI (customer-service automations, agentic pipelines) where users never chat with a model is invisible. (4) It is a snapshot, not a trajectory. (5) Tasks are not jobs — automating 60% of tasks does not automate 60% of the role. (6) It captures none of the structural moats — physical presence, licensure, and personal liability — that decide whether high task exposure becomes actual job loss.

Take the Quiz, Get the Score the Chart Can't Give You

The Anthropic Economic Index is the best single-source behavioural measure of where AI is being used in the economy. It is not a personalised verdict on your career. The verdict you actually want has to combine the Index with employment projections, wage data, structural moats, and your specific work pattern, industry, country, and seniority.

That is what riskquiz.me does. Four minutes. Nine dimensions — work type, industry, country, experience, seniority, task mix, AI fluency, physical-presence requirements, and licensure. One personalised 0–100 score with a role-specific explanation of which dimensions are pulling your number up or down, and which of the moats above (or their absence) is doing the most work.

Get your personalised AI career risk score →

Free. Built on the Anthropic Economic Index, BLS Employment Projections, OECD task exposure data, ILO global AI labour studies, and McKinsey financial-services AI surveys. See our methodology for the full source list and how the nine dimensions are weighted.

The chart is one input. Your number is the answer.
