The Mother of all Disruptions: AI

March 2026

Why AI’s Impact on Society, Work, and Power Will Exceed Almost Every Expectation

By Georg Chmiel

Part I: The Expectation Gap

I have written and spoken extensively about AI over the past two years. That work draws on almost four decades of study and practice in computer science, data processing, and machine learning, from the early days of expert systems to the generative AI we see today.

The more I engage with senior leaders — at board level, in executive teams, and among policy-makers — the more I notice a striking and dangerous pattern. While almost everyone agrees that significant disruption lies ahead, very few are thinking through the consequences.

There are two camps, and neither is right.

The first camp is alarmed. These are thoughtful people who read widely, follow the research, and have concluded that AI poses civilisational risks if control is lost. They are not wrong about the scale of the challenge. But it is very difficult to predict the exact mechanisms, the timeline, and — critically — what can and should be done about it.

The second camp — and it is by far the larger one — is dismissive. These are the people who point to case studies of companies that tried AI and hit its limitations, who downplay the longer-term impact because the economy is still growing, or who observe that programmers still have jobs. They are using real but historic, and in parts unrelated, data to reach the wrong conclusions about the future. Their error is not one of fact. It is one of framing.

I want to challenge both camps. And I want to do so rigorously, by drawing on three distinct sources of evidence: what AI is demonstrably capable of doing right now; what history tells us about the pattern of large-scale disruptions; and what the science of thresholds and tipping points says about how systems actually change.

‘Disruption’ is a word so overused it has lost its meaning. What is coming is not disruption in the technology-industry sense. It is a structural reordering of the relationship between human effort, economic value, and social stability.


Part II: Why the Dismissers Are Wrong

The Case Study Trap

The most common form of AI dismissal goes like this: “Our legal team tried an AI contract tool and it made mistakes. Our customer service bot frustrated clients. The AI-generated report was generic.” All of these observations can be simultaneously true and deeply misleading.

Isolated failures prove nothing about aggregate trajectory. In the early 1990s, internet connectivity was slow, unreliable, and confusing. The case studies from that period were overwhelmingly negative. The structural direction was not.

The dismissers’ second error is more subtle: they confuse AI with the digital disruption they have already survived. They have been through ERP implementations, cloud migrations, and the ‘digital transformation’ decade of the 2010s. Each wave arrived with similar rhetoric and delivered meaningful but manageable change. AI, to them, is the next wave of the same.

It is not. The previous waves digitised and accelerated tasks that humans designed and supervised. AI is beginning to design, replace and supervise those tasks itself.

Not Long Ago, the Advice Was ‘Learn to Code’

Consider the speed of change in a single domain. Five or ten years ago, the most reliable career advice for a young person seeking economic security was to study computer science or software engineering. The demand was structural, the salaries were high, and the supply was constrained. It was reasonable advice grounded in solid data.

That advice is already out of date. Vibe coding — the practice of generating functional software through natural-language prompts — has compressed what once required a skilled developer into something a non-technical founder can produce in hours. I am not saying programmers are obsolete. I am saying the skills that justified premium salaries just three years ago are being commoditised faster than any university curriculum can adapt.

If the specialisation that seemed most future-proof in 2020 is already under structural pressure in 2026, what does that imply for accountancy, legal research, financial analysis, or medical diagnosis? The answer is uncomfortable.

Banks and Large Companies Are Not Safe Harbours

For a generation, ambitious graduates sought roles in financial institutions, consulting firms, and large multinationals precisely because those organisations seemed immune to the disruption that was reshaping retail, media, and logistics. The scale, regulation, and institutional complexity of these firms appeared to provide a natural moat.

That moat is eroding. Goldman Sachs and Morgan Stanley have materially reduced junior analyst recruitment. Salesforce reduced its customer support workforce from 9,000 to 5,000, citing AI directly; Atlassian has already shed 10% of its workforce following a substantial share-price drop; and Klarna cut 40% of its total headcount through an AI-driven hiring freeze. These are not struggling companies cutting costs. They are leading firms optimising a new cost structure.

The lesson: safety through institutional size was always partly illusory. The large organisation provided stability because it required human coordination at scale. When AI can coordinate at scale without humans, the protection dissolves.

The PropTech Illusion — and What It Actually Reveals

One of my favourite examples of the dismissal fallacy comes from property technology. The so-called ‘proptech revolution’ has been a feature of investment conversations for a decade. And yet, by most credible estimates, proptech solutions currently address only around 5–10% of the total transaction fees (commissions and advertising fees) spent in global real estate transactions.

Two conclusions can be drawn.

1) It is notable that even a 5–10% digitisation of a highly fragmented, deeply traditional industry can produce dozens of billion-dollar companies and restructure the careers of thousands of real estate professionals.

2) What happens when the equivalent of proptech, powered by AI, one day addresses 10%, 20%, 50% or 80% of the total transaction spend?

Part III: What AI Is Actually Capable Of

I want to be precise here, because precision is what separates analysis from anxiety.

AI in its current form is not general intelligence. It cannot plan long-horizon strategies unprompted, build physical infrastructure, or exercise the moral judgment required of a board director or a surgeon. But it is already performing at professional level in domains that employ tens of millions of people in advanced economies: contract review, financial modelling, code generation, medical imaging interpretation, regulatory compliance analysis, and customer interaction at scale.

More important than any single capability is the combinatorial effect. Previous technological waves were sector-specific. The steam engine disrupted manufacturing. The internet disrupted media, retail, and communication. AI is disrupting legal services, financial advisory, healthcare, engineering, education, and professional services simultaneously. There is no sector available to absorb the displaced, as there was after previous transitions.

A Thought Experiment: What If Anthropic Bought Factories?

Consider this: Alibaba, initially a pure internet commerce platform, bought physical retail outlets. Why? Because owning the end-to-end customer relationship — digital discovery through to physical fulfilment — produced economic outcomes that a purely digital model could not match alone.

Now ask: what would happen if Anthropic, or a company like it, acquired manufacturing capacity? The logic is not fanciful. AI does not merely improve the productivity of existing factory workers. It has the potential to redesign production processes, supply chains, quality systems, and management layers wholesale. The impact would not be incremental productivity improvement. It would be the elimination of entire organisational layers.

Alibaba buying retail was a horizontal extension. An AI company owning physical production would represent something deeper: the replacement of human organisational logic with AI organisational logic, from procurement to delivery. The economic implications — for employment, for corporate structure, for national income distribution — are of a different order of magnitude.

The Social Contract Is Already Under Strain

The economic model of advanced societies since 1945 has rested on a tacit contract: technological progress generates growth, growth generates employment, employment distributes the gains of progress broadly enough to sustain social cohesion and political stability. It was never a perfect contract. But it functioned well enough to produce roughly eighty years of relative stability in democratic societies.

AI breaks that transmission mechanism. It generates growth without generating proportionate employment. It concentrates the gains among those who own the compute, the data, and the platforms. The vast majority of workers — and particularly the white-collar workers who considered themselves above disruption — face a future in which their economic contribution is structurally less valuable than it was a decade ago.

This is not a crisis to be managed at the margins. It requires a new social contract: new frameworks for how the gains of AI productivity are distributed, how workers whose roles have been structurally obsoleted are supported, and how democratic institutions maintain legitimacy when economic anxiety is concentrated in the classes that historically formed the base of institutional trust.

Part IV: History as Our Best Guide

The dismissers are right about one thing: this has happened before. Periods of large-scale technological disruption are not historically unprecedented. But the lesson they draw — “we adapted before, we will adapt again” — collapses on contact with the specific features of this transition.

Precedent One: The Neolithic Revolution

The shift from hunter-gatherer to agricultural societies, which began approximately 10,000 BCE and spread across millennia, was perhaps the most fundamental restructuring of human economic life in history. It generated surpluses, which enabled specialisation, which enabled cities, which enabled civilisation in its recognisable form. It also generated something else: inequality, hierarchy, new disease strains and the permanent subordination of the many to the few who controlled the land.

The Neolithic Revolution improved aggregate human welfare by almost any long-run measure. It also created the conditions for slavery, famine, and class conflict that would define human societies for the next ten millennia. The lesson: transformative technology generates new social structures, and those structures distribute their benefits and costs in ways the previous generation could not predict and would not have chosen.

Precedent Two: The Arrival of the “Sea Peoples” and the Bronze Age Collapse

Around 1200 BCE, the Eastern Mediterranean experienced one of the most complete civilisational collapses in recorded history. Within fifty years, the great palace economies of the Bronze Age — Mycenae, Ugarit, Hatti, and others — ceased to exist. The causes are still debated, but the consensus points to a convergence of pressures: drought, population movement, disruption of the long-distance trade networks on which palace economies depended, and the military advantages that came with the spread of iron technology.

The structural lesson is not the specific cause. It is the speed and totality of the collapse. Systems that appeared to be in equilibrium, that had functioned for centuries, tipped within a single generation. The palace scribes, the merchant networks, the specialised craftsmen of the Bronze Age — their skills became worthless almost overnight. There was no transition period. There was no retraining programme.

Precedent Three: The Industrial Revolution

The closest and most frequently invoked analogy is the Industrial Revolution, beginning in the late 18th century. The standard narrative is a reassuring one: yes, handloom weavers were displaced, but their grandchildren became factory workers, and their great-grandchildren became managers, and eventually the transition was absorbed.

There are two things wrong with applying this narrative to AI. First, the timeline: the Industrial Revolution’s absorption played out over generations. We are watching AI compress equivalent structural change into years, not decades. This time, the displaced do not have a generation to find new livelihoods before the next wave arrives.

Second, and more fundamentally: the Industrial Revolution replaced muscle. Steam engines and mechanised looms made physical human effort less economically necessary. But they created enormous demand for cognitive labour — the clerks, managers, accountants, lawyers, and analysts who ran the increasingly complex organisations that industrial scale required.

AI replaces cognitive labour. There is no equivalent backstop. The roles created by previous automation — the entire white-collar economy — are exactly what AI targets most directly.

Part V: The Threshold That Will Change Everything

The most important analytical insight — and the one most consistently missing from public discourse — is this: you do not need mass unemployment to produce civilisational instability. You need to cross a threshold.

The Science of Tipping Points

The academic literature on this is rigorous and consistent. The foundational work is Everett Rogers’ Diffusion of Innovations (1962), which demonstrated that social systems change not when everyone has adopted a new behaviour, but when a specific threshold is crossed, after which change becomes self-sustaining. The system tips. The late majority and the laggards follow, not because they chose to, but because the system no longer supports their previous behaviour.

Critical mass — the precise threshold concept — was formalised in social science by Mark Granovetter and Thomas Schelling. The crucial insight: the threshold varies by context. For social norm shifts, it can be as low as 3–5% of a population. For entrenched institutional change, it may require 25–30%. But in all cases, the system-level change is locked in well before the majority participates.
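Granovetter’s mechanism is concrete enough to sketch. The minimal deterministic version of his model below is illustrative (the function name and the specific threshold lists are my own, not from his paper): each person adopts once the number of prior adopters meets a personal threshold, and a one-person change in the threshold distribution can be the difference between a full cascade and almost none.

```python
# A minimal sketch of Granovetter's (1978) threshold model: each agent
# adopts once the number of prior adopters reaches their personal threshold.
def cascade_size(thresholds):
    """Return how many agents end up adopting, given each agent's threshold."""
    adopted = 0
    while True:
        # Count agents whose threshold is satisfied at the current adoption level.
        new_total = sum(1 for t in thresholds if t <= adopted)
        if new_total == adopted:      # no further change: equilibrium reached
            return adopted
        adopted = new_total

# Uniform thresholds 0..99: each adoption triggers the next one -> full cascade.
print(cascade_size(list(range(100))))          # 100

# Nudge a single agent's threshold from 1 to 2: the chain breaks at once.
print(cascade_size([0, 2] + list(range(2, 100))))   # 1
```

The near-identical populations produce wildly different outcomes, which is exactly why aggregate statistics reveal so little about how close a system is to tipping.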

Damon Centola’s research at the University of Pennsylvania, published in Science in 2018, found that coordinated minorities representing roughly 25% of a population could reliably shift established social conventions. Critically, the composition and connectivity of that minority mattered more than the raw percentage. If those who change are the highly connected nodes — the opinion leaders, the brokers, the connectors — 5% can cascade through 95% of the system. If they are peripheral nodes, 40% may not tip anything.

The strategic implication: it is not about the number. It is about the position in the network.
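Centola’s point about position can be illustrated with a toy fractional-threshold contagion. Everything below is an illustrative assumption, not his experimental setup: a `spread` function, a hub-and-spoke graph, and a rule that a node adopts once half its neighbours have. Seeding the single hub tips the whole system; a peripheral seed tips nothing.

```python
# Fractional-threshold contagion: a node adopts once at least `theta`
# of its neighbours have adopted. Illustrative toy model, not Centola's design.
def spread(adj, seeds, theta=0.5):
    """Return the final number of adopters on graph `adj` from `seeds`."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in adopted:
                frac = sum(n in adopted for n in nbrs) / len(nbrs)
                if frac >= theta:
                    adopted.add(node)
                    changed = True
    return len(adopted)

# Star graph: node 0 is the hub, nodes 1..20 are peripheral.
star = {0: list(range(1, 21)), **{i: [0] for i in range(1, 21)}}

print(spread(star, seeds={0}))   # 21 -- seeding the hub tips the whole system
print(spread(star, seeds={1}))   # 1  -- a peripheral seed goes nowhere
```

One adopter out of twenty-one, placed at the hub, converts everyone; the same single adopter at the periphery converts no one. The raw percentage is identical in both runs.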

What the Unemployment Research Actually Says

The empirical literature on unemployment and social instability points to several anchors that boards and policy-makers should hold clearly in mind.

European Social Survey data across 28 countries found that political trust correlates negatively with unemployment until rates cross approximately 10%, after which trust stabilises — suggesting diminishing marginal political damage above that level, but also an extremely dangerous zone between 5% and 10%. Research indicates that areas with unemployment rates exceeding 15% become particularly vulnerable to recruitment by extremist political movements. For youth unemployment specifically, the Arab Spring was fuelled significantly by rates exceeding 30% in several affected countries.

But the more actionable finding is about the delta, not the level. The speed and perceived fairness of an unemployment increase matter more than the absolute number. The same three-percentage-point rise in unemployment that produces policy adjustment in a low-polarisation, high-trust society — Scandinavia, Singapore, post-war Germany — can collapse governments in a high-polarisation, low-trust, high-inequality environment.

The United States saw unemployment rise approximately five percentage points over 18 months during the Great Recession without political revolution. Several Southern European economies with similar or smaller rises — but weaker safety nets and higher prior inequality — experienced significant political rupture.

The number is not the variable. The institutional fabric is. And that fabric is already under strain in most advanced democracies.

White-Collar Workers Are Unprepared to Be the Disrupted Class

Here is the specific danger of this moment. The unemployment threat from AI is not primarily aimed at blue-collar workers, who have already adapted to previous waves of automation and who exist in a political and policy context that has, at least partially, built support structures around their vulnerability.

The primary target of AI disruption is the white-collar professional class: the lawyers, accountants, analysts, consultants, mid-level managers, and knowledge workers who form the backbone of institutional trust, political participation, and social stability in advanced economies. These are the people who vote, who run civic organisations, who form the PTA committees and the board-level professional networks.

They are not psychologically, financially, or politically prepared to be disrupted. Their entire identity is built on the assumption that cognitive skills are safe. When that assumption fails — not gradually, but at the speed AI is now demonstrating — the social and political consequences will be disproportionate to any unemployment percentage that a headline number might suggest.

Part VI: Everything, Simultaneously

Previous technological disruptions were, by and large, sector-specific in their immediate effects. The printing press disrupted the scribal class. The steam engine disrupted textile and agricultural labour. The internet disrupted media, retail, travel, and eventually financial services. In each case, there were adjacent sectors that could absorb the displaced, and time available for labour markets to adapt.

AI breaks this pattern in a structurally significant way: it is disrupting every knowledge-intensive sector simultaneously. Legal research and contract drafting. Financial modelling and investment analysis. Medical diagnosis and treatment planning. Engineering design and code generation. Management reporting and strategic synthesis. Marketing, customer service, compliance, audit.

There is no sector sitting safely aside, ready to absorb the professionals displaced from other fields. The disruption is broad-based and concurrent. This is not a feature that previous disruptions shared, and it is the feature that makes historical analogies about retraining and adaptation least applicable.

The New Role of Government: Control the Use of AI, But Do Not Use It to Control

Governments face an acute dilemma. They must use AI — to maintain competitive public administration, to manage the fiscal pressures that AI-driven productivity changes will impose, and to regulate an industry that will otherwise regulate itself in ways that reflect shareholders’ interests rather than citizens’ interests.

But the temptation to use AI as an instrument of control — surveillance, predictive policing, automated benefit adjudication, algorithmic censorship — is real and must be resisted. The failure mode of governments facing severe social disruption has historically been the hardening of control rather than the widening of participation. AI gives governments the tools to make that failure mode more effective than at any previous point in history.

The policy imperative is clear: governments must legislate and enforce a boundary between AI that serves citizens and AI that surveils them. That distinction is not technical. It is political, and it requires political will of a kind that is in short supply precisely when disruption makes populations anxious and institutions defensive.

Part VII: An Outlook — When and How Fast?

I am frequently asked: when will all of this actually manifest? The honest answer has two parts.

First: it is already manifesting. The hollowing of entry-level professional roles is happening now. The compression of junior analyst pipelines in finance, law, and consulting is documented and accelerating. The ‘vibe coding’ compression of software development timelines is real and measurable. The headline unemployment numbers do not yet reflect this, because the disruption is occurring primarily in hiring freezes and role eliminations, not mass layoffs that show up cleanly in labour statistics.

Second: the macro signal will arrive faster than most models predict, for a reason rooted directly in threshold theory. The current phase — from roughly 2024 to 2028 — is the period before critical mass. Change is rapid at the micro level but not yet visible as a macroeconomic shift. The threshold, when it is crossed, will not feel gradual. It will feel sudden, because that is what phase transitions do.

A Likely Sequence

Phase 1 (2024–2028): Entry-level professional roles hollow out. AI-native firms outcompete legacy firms on cost and speed in specific verticals. White-collar gig work expands rapidly as organisations reduce permanent headcount in favour of flexible AI-assisted specialists. The macro statistics look benign. The micro reality is structurally deteriorating.

Phase 2 (2028–2033): Structural unemployment becomes visible at the macroeconomic level, concentrated in knowledge-intensive roles in finance, law, healthcare administration, and middle management. Political polarisation — already elevated — lowers the threshold at which social instability takes hold. Several advanced economies experience significant political rupture driven not by absolute unemployment levels but by the speed, perceived unfairness, and white-collar concentration of the displacement.

Phase 3 (2033–2042): The new social contract either exists or it does not. In societies where governments have moved decisively — building new mechanisms for distributing AI productivity gains, investing in genuine retraining at scale, and maintaining institutional trust — the transition will be managed, painful but survivable. In societies that were slow, defensive, or captured by the interests of those who benefit most from AI concentration, the instability will be severe and potentially irreversible.

The Threshold Moment Will Not Be a Single Event

I want to be precise about one final point, because it is where I see the most analytical confusion. People ask: when is the tipping point? As if there will be a single, identifiable moment — a Black Friday of the AI labour market, a Lehman moment for white-collar employment.

That is not how phase transitions work. The threshold is crossed in different places, in different sectors, in different countries, at different times. It will look, from the inside, like a series of individually manageable events: a round of layoffs here, a hiring freeze there, a firm that restructured, a profession that quietly stopped growing. The cumulative effect is the phase transition. By the time it is recognisable as such, the system has already changed.

The organisations and governments that will navigate this well are those that understand the dynamic now, while the system still appears stable. That stability is the most dangerous thing about this moment. It encourages the comfortable conclusion that there is more time than there is.

A Final Word

I am, as those who have read my previous writing will know, a strong and genuine advocate for AI. The productivity gains, the scientific acceleration, the improvements in healthcare access and educational quality that AI will deliver are real, large, and important. I do not want to minimise them.

But advocacy without clarity about consequences and implications is irresponsibility. The most important service that those of us with board-level visibility, economic literacy, and access to the research can perform right now is to say, plainly: the scale and speed of what is coming exceeds the current conversation. The dismissers are wrong about the magnitude. The alarmists are often right about the scale but imprecise about the mechanism. And the comfortable majority — the people who are watching the first 5% of the disruption and concluding that the remaining 95% will be equally manageable — are making a threshold error that history will not forgive.

The time to think clearly about this is now, while the institutions are intact, while the safety nets still function, while the political systems are still responsive, and while there is still time to design rather than merely react.

That time is shorter than it appears.