The Company Building the Machines Told You the Machines Will Take Your Job. Then It Published a Plan.
Read alongside Dario Amodei’s warning essay, it forms the most important trilogy of documents in the AI era so far.
I want to be straightforward about what I think is happening right now. We are in the opening months of the largest transfer of productive capacity from human labor to machines in the history of civilization. That is not a prediction. It is a description of events that are already measurable, already documented, already producing consequences in the lives of real people this year.
What changed recently is that the people building the technology have started saying so explicitly and publicly, in writing, under their own names.
The Trilogy Nobody Is Reading Together
Three documents published over the past six months form what I believe is the most important sequence of AI writing produced to date, and almost nobody is reading them as a connected argument.
The first was Sam Altman’s essay The Gentle Singularity, published in late 2025. Altman argued that the singularity had already begun, that humanity had crossed a point of no return, and that the reason nobody had noticed was precisely that everything still felt normal. He predicted that 2026 would see AI systems capable of generating genuinely novel scientific ideas, not just recombining existing knowledge but producing original contributions. Six months later, that prediction is being verified in real time. OpenAI’s GPT-5.5, released just weeks ago, was described by co-founder Greg Brockman as “a new class of intelligence” with capabilities in what they call early-stage scientific research.
The second document arrived in January 2026. Dario Amodei, CEO of Anthropic, published a 20,000-word essay titled The Adolescence of Technology. Twenty thousand words is the length of a novella. Amodei opened with a scene from the film Contact, adapted from Carl Sagan’s novel: if you could ask an alien civilization one question, what would it be? His answer: How did you survive your technological adolescence without destroying yourselves?
That metaphor is the spine of the essay. Humanity, in Amodei’s framing, is a teenager who has just been given almost unimaginable physical power without the emotional and institutional maturity to use it safely. As Fortune reported, Amodei identified five categories of risk: autonomous AI systems acting without instruction, malicious actors using AI to amplify harm, authoritarian governments or corporations using AI to consolidate power, economic disruption at a scale never previously seen, and cascading second-order effects that move faster than society can adapt.
The crucial detail: Amodei doesn’t say disaster is inevitable. He says our odds are decent. But he also says we are considerably closer to genuine danger in 2026 than we were in 2023, and that the window for building safeguards is closing faster than most people understand.
Then, in April 2026, OpenAI published the third document: a 13-page official policy proposal titled Industrial Policy for the Intelligence Age: Ideas to Keep People First. Not a personal essay from Altman. A document signed by the company itself. And it is, by any reasonable reading, the moment the conversation shifted from prophecy to planning.
What OpenAI Actually Proposed
The document opens with a comparison that initially sounds grandiose but becomes defensible the further you read: it draws an explicit parallel to FDR’s New Deal, the economic restructuring Roosevelt implemented after the 1929 crash. Social protections, labor reform, safety nets, and wealth redistribution. OpenAI is saying, in a corporate document, that we need an equivalent restructuring now, before the equivalent crash.
And the first thing the document acknowledges, in plain language, is that the transition toward superintelligence will destroy jobs. Not might. Not could, under certain conditions. Will. Some jobs will disappear entirely. Others will transform. New forms of work will emerge. All of this at a speed and scale without precedent in human economic history.
That sentence carries particular weight when it comes from the company building the technology that will do the destroying.
Three proposals form the core of the document.
The first is a four-day workweek. OpenAI proposes that companies test 32-hour workweeks across four days with no reduction in salary, provided productivity remains stable. The logic is straightforward: if AI allows you to complete in four days what previously required five, the efficiency gain should return to the worker as time, not exclusively to shareholders as profit. As Engineering & Technology magazine noted, this proposal generated more headlines than any other element of the document.
There is a historical rhyme worth noting. In 1926, exactly one hundred years ago, Henry Ford moved all his factories from a six-day to a five-day work week. The industrial establishment predicted economic collapse, worker laziness, and the end of American productivity. What actually happened was the opposite: productivity rose because workers were less exhausted, and consumer spending surged because people finally had time to spend their wages. A rested worker who has a life outside the factory is also a consumer who keeps the economy moving. We are standing at the same crossroads, one century later, hearing the same objections from the same voices.
The second proposal is fiscal reform. As TechCrunch reported, OpenAI argues that AI will massively increase corporate profits and capital gains while simultaneously reducing employment and therefore payroll-based tax revenue. The problem is structural: Social Security and Medicare are funded primarily through payroll taxes, and safety-net programs like Medicaid, housing assistance, and nutrition support depend on a tax base that shrinks as incomes do. Fewer salaries mean less tax revenue, which means crumbling social infrastructure. OpenAI proposes rebalancing the tax base by increasing taxation on capital gains, corporate income, and specifically on profits derived from AI-driven automation. If a company replaces 500 employees with an AI system, it should contribute more to the collective, not less. This is the first time a company that directly benefits from that dynamic has said so publicly.
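The structural claim is easy to check with back-of-envelope arithmetic. Here is a minimal sketch in Python: the FICA rate is the real combined US employer-plus-employee figure, but the average salary is a hypothetical placeholder, so the outputs are illustrative only.

```python
# Sketch of the structural problem the document describes: payroll-tax
# revenue falls linearly with headcount, while the profit that replaces
# those salaries flows through a different, currently lighter, tax channel.

PAYROLL_TAX_RATE = 0.153   # combined US FICA rate (12.4% Social Security + 2.9% Medicare)
AVG_SALARY = 80_000        # hypothetical average salary in USD

def payroll_tax_lost(jobs_cut: int) -> float:
    """Annual payroll-tax revenue lost when `jobs_cut` salaried roles disappear."""
    return jobs_cut * AVG_SALARY * PAYROLL_TAX_RATE

# The document's example: one company replaces 500 employees with an AI system.
print(f"${payroll_tax_lost(500):,.0f} lost per year")

# Scaled to the roughly 100,000 tech jobs cut since January 2026.
print(f"${payroll_tax_lost(100_000):,.0f} lost per year across the sector")
```

Under these assumed numbers, a single 500-person replacement removes several million dollars a year from the payroll-tax base, and the sector-wide cuts remove over a billion, none of which is automatically recaptured from the profits that replaced those salaries.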
The third proposal is perhaps the most ambitious: a public wealth fund. The idea is to create a sovereign fund, seeded by AI companies and fueled by the growth AI generates, whose returns would be distributed directly to citizens. Every person, whether or not they own stock, would hold a stake in the economic growth produced by artificial intelligence. It is, in substance, a universal basic income funded not by traditional taxation but by the productivity that AI itself creates.
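The mechanics resemble Alaska's Permanent Fund dividend, scaled up to a national economy. A quick sketch with placeholder figures (the fund size, return rate, and even the idea of a flat per-person split are assumptions for illustration, not numbers from the document):

```python
# Illustrative mechanics of a public wealth fund: AI companies seed a
# fund, and each year's investment return is distributed evenly to
# citizens. All three constants below are hypothetical placeholders.

FUND_SIZE = 2_000_000_000_000   # assumed $2 trillion fund
ANNUAL_RETURN = 0.05            # assumed 5% yearly return
POPULATION = 340_000_000        # roughly the US population

def annual_dividend(fund: float, rate: float, people: int) -> float:
    """Per-person payout if the fund's yearly return is split evenly."""
    return fund * rate / people

print(f"${annual_dividend(FUND_SIZE, ANNUAL_RETURN, POPULATION):,.2f} per person per year")
```

The point of running the numbers is sobering: even a two-trillion-dollar fund yields only a few hundred dollars per person per year, which is why the proposal ties the fund's growth to AI-driven productivity rather than a one-time endowment.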
The Evidence Is Not Theoretical
The reason this document matters right now rather than in two years is that the job displacement it describes is already happening.
Since January 2026, over 100,000 tech jobs have been eliminated. Microsoft launched a voluntary departure program covering 7% of its American workforce, unprecedented in the company’s 51-year history. Block, Jack Dorsey’s company, cut 4,000 positions at once, representing 40% of its workforce, and explicitly cited AI capabilities as the primary reason. Nike eliminated 1,400 technology positions. Oracle has cut close to 30,000. Amazon, Google, Meta, and Microsoft are collectively spending $700 billion this year building data centers and training new models. The pattern is visible in a single sentence: companies are simultaneously firing humans and building machines.
This is exactly the scenario Amodei described in January. It is exactly why OpenAI published its policy document now.
The Horse Analogy That Explains Everything
The document includes one analogy that crystallizes the urgency better than any statistic.
At the beginning of the twentieth century, the United States had more than 20 million working horses. An entire economic ecosystem depended on them: farriers, stable hands, carriage manufacturers, feed suppliers, veterinarians. Hundreds of thousands of families earned their living from horse-related work. When the automobile arrived, those jobs were not transformed or adapted. They were erased. The people who held them were not, by and large, retrained as mechanics; most were simply displaced, and the economic consequences took decades to absorb.
OpenAI’s document is explicitly trying to avoid repeating that pattern. The proposal is to build the shock absorbers before the impact, not twenty years after.
Security, Confinement, and the End of Science Fiction
The document also contains a safety framework that takes Amodei’s warnings seriously. It proposes mandatory audits for the most advanced models, particularly those presenting risks in cybersecurity or biological applications. It describes an incident-reporting system modeled on aviation: companies would be required to report abnormal model behavior to a public authority. And it includes what it calls “containment playbooks” for genuinely dangerous scenarios:
What to do if a powerful model’s weights are leaked, if an autonomous system begins self-replicating, or if a model escapes the control of its operators.
A year ago, those sentences would have read like science fiction. Today, GPT-5.5 arrived six weeks after GPT-5.4.
Six weeks between model generations.
Boston Dynamics is producing its Atlas robot in series, with all 2026 units already reserved by Hyundai and Google DeepMind. Unitree, a Chinese manufacturer, shipped over 5,500 humanoid robots in 2025 and is targeting between 10,000 and 20,000 this year at under $5,000 per unit. Sony published in Nature in April 2026 the first autonomous robotic system capable of defeating professional table tennis players in competitive conditions.
The convergence is no longer approaching. It is here.
What the Adolescent Needs to Hear
I want to return to Amodei’s metaphor because it captures something that raw data cannot.
An adolescent, by definition, has the physical strength of an adult without the emotional and social maturity to match. AI in 2026 is exactly that: extraordinary capability wrapped in institutions that have not yet caught up, and a window of adaptation that shrinks with every new model release. Brockman put it more concretely last week:
“We are transitioning to a compute-powered economy.”
Computing power is becoming the foundational resource of the economy, in the same way petroleum defined the twentieth century and coal defined the nineteenth. Except this transition will not take fifty years. It may take five.
Altman wrote in The Gentle Singularity that what felt magical six months ago would become routine. That is precisely what has happened. AI-generated images, AI-generated video, AI-generated voice, tools like Claude Code that didn’t exist a year ago, Google’s Genie 3, a world model that generates explorable 3D environments in real time from text descriptions. GPT-5.5 launched last week, and the dominant reaction was not wonder but mild acknowledgment. We have already normalized things that were impossible twenty-four months ago. Our expectations are rising as fast as technology advances.
The Question That Actually Matters
When I read these three documents together, I see one fundamental agreement between the two most powerful people in artificial intelligence. Altman and Amodei disagree on the method. They disagree on emphasis, on tone, and on how much to trust the market versus regulation. But the diagnosis is identical: this is going to move fast, and if nothing is done, most people will suffer.
The 100,000 people who lost their jobs in tech this year were not all at Google or Meta. They were developers, writers, analysts, and customer support agents.
Normal positions.
The document states plainly that without adaptive policy, AI will widen inequality by amplifying the advantages of those already well-positioned while excluding communities with fewer resources from the new tools and opportunities.
But it also says something else. AI has the potential to lower the cost of healthcare, education, and food production. It can solve scientific problems that have eluded human researchers for decades. By 2030, a single individual may have the productive, creative capacity that an entire team had in 2020.
The future can be as bright as it can be devastating. The technology itself is neither good nor bad. It is an amplifier. It amplifies what you know how to do. And that is precisely why understanding it changes everything, because the world is not going to pause while you catch up. Models are released every six weeks. Robots are entering factories. Companies restructure continuously. The people who will navigate this transition successfully are not necessarily the ones who use AI the most. They are the ones who understand what it is, what it does, and where the line falls between a tool that serves you and a system that replaces you.
Which of the three proposals in this document do you find most plausible: the four-day workweek, the public wealth fund, or the tax reform? Tell me in the comments; I’m genuinely curious which one you think arrives first.