Crashing Waves vs Rising Tides — AI Automation Findings from 17,000+ Worker Evaluations
Published 2026-04-10
Most AI labor market studies rely on expert panels or theoretical task analysis. This study takes a different approach: it asked the workers themselves. Published in April 2026, the research gathered over 17,000 evaluations from workers across hundreds of occupations, asking them to assess which of their daily tasks could be performed by current or near-term AI systems. The result is a granular, ground-level picture of AI automation potential — and the headline finding challenges the dominant "sudden disruption" narrative.
Key Findings
- AI automation is a rising tide, not a crashing wave. The study's central metaphor captures its core finding: rather than a small number of occupations facing sudden, total automation, the data shows broad, incremental automation potential spreading across nearly all occupations. The impact is wide but gradual, not narrow and sudden.
- 80-95% of work tasks could be AI-automatable by 2029. Workers assessed that the vast majority of their individual tasks have some degree of AI automation potential within a three-to-five year horizon. This does not mean 80-95% of jobs disappear — it means almost every job contains tasks that AI could handle.
- Workers identify automation potential that expert models miss. Because the evaluations come from people who actually do the work, they capture task-level nuances that top-down occupation models overlook. Some tasks that experts rated as "safe" were flagged by workers as already partially automated in practice.
- Automation potential varies more within occupations than between them. The spread of automation risk across tasks within a single job title is often larger than the difference between two different job titles. This means blanket statements like "accountants are safe" or "writers are doomed" miss the real picture.
- Self-assessment reveals a confidence gap. Workers who had already used AI tools in their roles rated automation potential higher than those who had not. Familiarity with AI increases — rather than decreases — the perceived scope of what AI can do.
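The within- versus between-occupation finding is a variance decomposition: compare the average spread of task ratings inside each occupation to the spread of the occupation averages. A minimal sketch using the standard library, with made-up ratings for two hypothetical occupations (the numbers and job titles are illustrative, not the study's data):

```python
from statistics import mean, pvariance

# Hypothetical task-level automation ratings (0 = not automatable, 1 = fully
# automatable). Occupations and values are invented for illustration only.
ratings = {
    "accountant": [0.9, 0.2, 0.7, 0.1, 0.8],
    "writer":     [0.8, 0.3, 0.6, 0.2, 0.9],
}

# Within-occupation variance: spread of task ratings inside each job,
# averaged across occupations.
within = mean(pvariance(tasks) for tasks in ratings.values())

# Between-occupation variance: spread of the per-occupation mean ratings.
between = pvariance([mean(tasks) for tasks in ratings.values()])

print(f"within-occupation variance:  {within:.4f}")   # 0.0904
print(f"between-occupation variance: {between:.4f}")  # 0.0001
```

In this toy example the two occupations have nearly identical average automation potential, yet each contains both highly automatable and hard-to-automate tasks, which is exactly the pattern the study reports: the interesting variation lives at the task level, not the job-title level.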
What This Means for Your Career
The "rising tide" framing is the most important takeaway. If you have been watching AI news and thinking "my job is safe because it requires creativity / empathy / physical presence," this study suggests you are likely underestimating how many of your individual tasks have automation potential. The risk is not that your entire role vanishes overnight. The risk is that 30-60% of what you do today gets absorbed into AI workflows over the next few years, and your role transforms into something you may not recognize.
The finding that workers who use AI see more automation potential — not less — is particularly telling. It suggests that the people most informed about AI's current capabilities are the ones who expect the most change. If you have not yet experimented with AI tools in your specific work context, your sense of safety may be based on ignorance rather than accurate assessment.
The practical implication is to stop thinking about your job as a single unit and start thinking about your tasks. Which of your daily tasks could a well-prompted AI system handle today? Which could it handle with another year of improvement? The answers will tell you where to focus your energy: on the tasks that remain distinctly human, and on learning to orchestrate AI for the rest.
Data Highlights
- 17,000+ individual worker evaluations collected across hundreds of occupations
- 80-95% of work tasks assessed as having some AI automation potential by 2029
- Higher AI familiarity correlates with higher perceived automation potential
- Within-occupation task variation exceeds between-occupation variation in automation risk
- 3-5 year horizon used for worker assessments of automation feasibility
Methodology
The study recruited workers across a broad range of occupations and asked them to evaluate their own job tasks against a structured rubric of AI capability. Each worker listed their core tasks and rated the feasibility of AI performing each task at an acceptable quality level, both with current technology and with anticipated near-term advances. The evaluation framework distinguished between full automation (AI performs the task independently), partial automation (AI handles significant components with human oversight), and augmentation (AI assists but human remains primary). Over 17,000 individual task evaluations were collected and aggregated by occupation, industry, and worker demographics. The research team validated a subset of worker assessments against expert evaluations and found strong alignment, with workers identifying additional automation vectors that experts had not considered.
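The three-way framework described above (full automation, partial automation, augmentation) amounts to binning each feasibility rating and tallying the result. A minimal sketch of that aggregation step; the 0.8 and 0.4 thresholds and the sample ratings are assumptions for illustration, since this summary does not publish the rubric's actual cutoffs:

```python
from collections import Counter

def classify(feasibility: float) -> str:
    """Map a worker's 0-1 feasibility rating to the study's three
    categories. Thresholds are hypothetical, chosen for illustration."""
    if feasibility >= 0.8:
        return "full automation"     # AI performs the task independently
    if feasibility >= 0.4:
        return "partial automation"  # AI handles significant components, human oversight
    return "augmentation"            # AI assists, human remains primary

# Illustrative feasibility ratings for one worker's core tasks.
evaluations = [0.9, 0.55, 0.3, 0.85, 0.45, 0.1]

counts = Counter(classify(r) for r in evaluations)
print(dict(counts))
```

Aggregating such counts across 17,000+ evaluations, grouped by occupation, industry, and demographics, yields the distributions the study reports.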