OpenAI's Economists Just Resigned Because the Company Refuses to Publish the Truth About AI and Jobs.
What happens when your own researchers call you a propaganda arm, why Anthropic's CEO admits 50% of jobs could vanish, and the data OpenAI desperately wants buried.

Major researchers just walked out of OpenAI. Not for better salaries. Not to join Google or Meta. They left because they refused to participate in what they considered a large-scale propaganda operation.
And what’s happening behind closed doors should concern all of us.
Tom Cunningham, an economist and data scientist at OpenAI, quit in September. In his internal farewell message, he didn’t mince words. According to him, the economic research team was drifting away from real research to become, in his words, “the propaganda arm of its employer.”
Let’s be clear about one thing. This isn’t some frustrated intern throwing shade. This is a researcher specifically recruited to analyze AI’s impact on the economy. He’s since joined METR, a nonprofit organization evaluating AI model safety. And he’s not alone. At least one other team member resigned shortly after.
Here’s where things take a bizarre turn: OpenAI’s response is even more revealing.
Jason Kwon, the company’s chief strategy officer, sent an internal memo right after Cunningham’s departure. His position is logical. OpenAI is no longer just a research lab; it’s now the world’s leading AI player. And as such, the company must, quote, “take responsibility for outcomes rather than simply publishing studies on difficult subjects.”
Basically, they’re telling us something like, “We prefer building solutions rather than airing our problems publicly.”
The problem? They’re not really building solutions either. They’re simply hiding studies that could damage their image.
What exactly is OpenAI refusing to publish?
According to four internal sources who spoke to Wired, the company highlights productivity gains while minimizing massive job losses. It presents economic disruptions as temporary and manageable when internal data suggests otherwise. And most importantly, it avoids publishing any research that could fuel regulation, provoke public backlash, or slow AI adoption.
This company built its reputation on transparency and safety. But as soon as research no longer aligns with its commercial interests? Radio silence.
To understand the scale of financial stakes, look at the numbers. OpenAI is at the heart of Project Stargate, a $500 billion investment plan over 4 years to build AI data centers in the United States. The company is targeting a $1 trillion valuation for its IPO. It has signed dizzying contracts: $300 billion with Oracle, $250 billion with Microsoft, $100 billion with Nvidia, and a few billion here and there.
With such sums at stake, OpenAI literally has hundreds of billions of reasons not to publish studies that could shake public confidence.
And these aren’t studies proving OpenAI is evil. They’re studies flagging warning signs about OpenAI and about AI more broadly, and it’s unclear what OpenAI actually does with them.
The Anthropic Contrast
I want to contrast this approach with Anthropic’s, one of OpenAI’s principal competitors and the company behind Claude. Dario Amodei, Anthropic’s CEO, has been completely transparent about a subject everyone prefers to avoid.
In a recent interview with Axios, he stated that 50% of entry-level office jobs could disappear in the next 5 years because of AI. He even mentioned a potential unemployment rate of 10 to 20% as a direct consequence.
His message to governments and AI companies is unequivocal: stop sugarcoating reality. He believes producers of this technology have a duty and obligation to be honest about what’s coming.
This level of frankness builds trust. Unfortunately, OpenAI’s approach erodes it.
The Systematic Talent Hemorrhage
This isn’t the first time OpenAI employees have left for ethical reasons. The talent drain has become systematic. In 2025 alone, at least 12 executives and researchers quit, with more than half going to Meta to join its superintelligence lab.
Sam Altman is now one of only two remaining active founding members out of the original 11.
Remember the Superalignment team, supposedly ensuring AI doesn’t destroy humanity? It was dissolved. Jan Leike, one of its leaders, resigned, declaring that “safety culture and processes have taken a backseat to shiny products.”
Ilya Sutskever, cofounder and chief scientist, left over the same concerns. Miles Brundage, former head of policy research, quit, explaining that it had become difficult for him to publish on the subjects that mattered to him. William Saunders, also from the Superalignment team, resigned after concluding that OpenAI prioritized new products over user safety.
The pattern is always the same. Researchers see something concerning, the company suppresses it, and people with integrity leave.
By cross-referencing available data, we can paint a picture of what these suppressed reports likely contain.
Massive displacement of entry-level office workers. Think first-year associates at law firms, consultants doing routine analysis, administrative coordinators, and financial analysts. Jobs are being lost faster than new positions are being created. Economic concentration is accelerating, with wealth flowing to an increasingly narrow group while many are left behind. The entry ladder into professional life is disappearing before new pathways emerge.
The 2025 numbers confirm this trend. According to Challenger, Gray & Christmas, over 54,000 layoffs were directly attributed to AI this year. Amazon cut 14,000 positions, Microsoft eliminated 15,000, and IBM replaced hundreds of HR positions with an internal chatbot. Salesforce also claims AI now accomplishes 30 to 50% of the company’s work.
We can’t really quantify AI’s exact impact, but in 2025, over 1.1 million layoffs were announced, the highest level since the pandemic. Certainly, not everything is because of AI, but let’s not kid ourselves. The trend is there.
The Entry Level Massacre
Here’s a detail that worries young graduates, a group I’ll soon belong to. Entry-level hiring at the world’s 15 largest tech companies has dropped 25%. These positions simply no longer exist.

A Burning Glass Institute study shows the share of jobs requiring 3 years of experience or less has drastically fallen in AI-exposed sectors. In software development, we went from 43% to 28%. In data analysis, from 35% to 22%. In consulting, from 41% to 26%.
Companies aren’t reducing their overall workforce. They’re simply skipping young graduates to directly hire experienced professionals.
But wait, there’s a delicious twist in this story.
According to a Forrester study, 55% of employers already regret layoffs made in AI’s name. Klarna replaced 700 jobs with AI, then rehired humans when quality collapsed. Amazon sold its Just Walk Out technology as AI-powered before it came out that the system actually relied on remote workers in India monitoring the cameras. That was a massive scandal.
Forrester predicts half of AI-attributed layoffs will be discreetly compensated by hiring, but offshore or with significantly lower salaries. Currently, AI isn’t replacing jobs; it’s displacing them to cheaper, less visible places.
And the public is starting to notice.
There’s growing animosity toward AI systems. Americans use ChatGPT almost daily in massive numbers, yet they’re increasingly frustrated with how big companies are managing the technology. This isn’t just online noise; it’s a fundamental trend starting to spread, and I’m beginning to see it everywhere.
Public perception of OpenAI is at its lowest. Between scandals, lack of transparency, leadership drama, and now these revelations, people are gradually losing confidence.
The company did publish a document titled “AI at Work: OpenAI’s Blueprint for the Workforce.” It talks about accelerating AI education, certifications for 100 million Americans by 2030, and an employment platform to help people retrain.
Sounds good on paper. But you know what? If the research is so problematic that your own economists resign in protest, a certification program simply won’t be enough. We need systemic change, government collaboration, new economic models, and real safety guardrails.
Why Demanding Truth Isn’t Anti-AI
AI doesn’t have to be bad for society. It can be implemented in a way that benefits everyone. But currently, we’re on an irresponsible trajectory. And when companies building this technology refuse to be honest about its impact, we’re moving forward blindly.
Workers deserve to know what’s coming. Governments should prepare populations for what’s arriving. And we all deserve the truth, even when it’s uncomfortable. That’s the only way we’ll prepare for the future.
I want to tell you something important.
Demanding transparency isn’t being anti-AI. It’s demanding that the world’s most powerful companies be accountable to us. It’s refusing to be treated as passive spectators to our own obsolescence. I want as many people as possible to see things clearly, to be lucid. That’s a word I like, because lucidity is what lets us act collectively and adapt to the world that’s coming.
Whether we want it or not, whether you want it or not, whether I want it or not, the transformation is underway. A revolution of this scale doesn’t stop once it’s in motion. Just look at history.
We’re right in the middle of it, and I’ll be 100% transparent with you: going against it would mean going against the direction of history. But understand what I’m saying, this doesn’t mean being fatalistic. It means that as a collective, we must take responsibility for steering this technology as best we can so it doesn’t cause harm.
The researchers who left OpenAI aren’t quitting technology. They’re quitting the lie. And maybe that’s exactly the integrity we need more of right now.
When the people building our future would rather walk away than participate in propaganda, that tells you everything you need to know about what they’ve seen.