
'White-collar' is too blurry a category to measure AI's toll on workers

The term lumps together workers with vastly different AI exposure — and that makes the crisis harder to see in the data


The term "white-collar" is widely credited to the writer Upton Sinclair, who used it roughly a century ago to describe salaried workers who didn't get their hands dirty. For most of the 20th century, it served as a reasonable proxy for a class of workers defined by education, desk work, and relative job security.

Almost 100 years later, that category is doing too much work — and its imprecision may be one reason the economic disruption AI is inflicting on knowledge workers is so difficult to measure and so easy to underestimate.

One label, radically different exposure

Consider who gets grouped under "white-collar" today: a senior software engineer at a large tech firm, a first-year paralegal at a law firm, a junior equity analyst at a bank, a marketing copywriter at a startup, a mid-level HR manager at a manufacturer. All are college-educated, desk-based, and salaried. All would appear in the same labor market category in most government data.

Their exposure to AI displacement, however, is not remotely similar.

Research from the University of Pennsylvania and OpenAI found that roughly 80% of U.S. workers have at least 10% of their tasks exposed to AI — but that exposure is wildly uneven. Legal document review, financial analysis, and content generation rank among the highest-exposure tasks. Senior software architecture, strategic management, and complex negotiation rank among the lowest. The same "white-collar" umbrella covers both.

Why the blurry category distorts the data

The measurement problem this creates is significant. When economists report that white-collar payrolls have contracted for 29 consecutive months — a finding that Aaron Terrazas, former chief economist at Glassdoor, described to Quartz as "incredibly unusual, going back 70, 80 years" — that figure aggregates workers whose situations are fundamentally different. A contraction driven by layoffs of junior analysts and paralegals tells a very different economic story than one driven by senior manager departures or early retirements.

Standard government data doesn't make this distinction easy to see. The Bureau of Labor Statistics classifies occupations by industry and function, not by cognitive task profile or AI exposure. The result is that labor market signals arrive aggregated in ways that obscure where disruption is actually concentrated.

This matters for policy as much as measurement. Interventions designed to protect "white-collar workers" generically will miss those who need them most.

A more useful frame

Several economists and labor researchers have proposed reorienting the analysis around task exposure rather than occupational category. The key distinction is not whether a job is white-collar but whether the cognitive tasks it requires are routine or non-routine, and whether they involve the kind of pattern recognition and text synthesis that large language models perform well.

By that measure, workers at highest near-term risk are concentrated in specific roles: junior legal and financial services positions, content and communications work, entry-level data analysis, and administrative coordination. These jobs share a task profile more than a collar color.

The category of "white-collar" may have made sense when the threat was physical automation. For cognitive automation, it is the wrong unit of analysis — and using it may be part of why so many people feel the disruption before the data catches up.
