X’s Head of Product Says iMessage, Gmail, and Phone Calls Will Collapse Within 90 Days
He’s the engineer who fights bots for a living. When he panics, we should pay attention.
I don’t scare easily about technology. I’ve watched enough AI hype cycles collapse into nothing to have developed a reasonably healthy skepticism reflex. So when I read the post that Nikita Bier, Head of Product at X, dropped a few months ago — eleven lines that have since been seen millions of times — my first instinct was to scroll past it.
Then I read it again.
“Prediction: In less than 90 days, all channels that we thought were safe from spam and automation will be so flooded that they will no longer be usable in any functional sense: iMessage, phone calls, Gmail. And we will have no way to stop it.”
This is not a tech YouTuber yelling into a camera from a darkened room. This is someone whose literal full-time job is stopping bots from destroying a platform — and he’s telling you, publicly, that he doesn’t know how to stop what’s coming. That distinction matters enormously.
The War X Is Already Losing
To understand why someone this technically capable would make that kind of public statement, you need to look at what’s been happening at X this year.
In October 2025, Bier’s team purged 1.7 million bot accounts that were flooding reply sections with spam. It felt like a victory. It was a victory for about 48 hours, and then the accounts respawned almost immediately. Think of it as mopping the floor while the pipe is still burst. By April 2026, his team had escalated to suspending 208 accounts per minute, roughly 300,000 a day, in a continuous automated sweep.

For context: when Elon Musk acquired Twitter in 2022, he promised to eradicate the bots or die trying. He didn’t die. Neither did the bots. When Bier joined X in 2025, he inherited a problem that had been compounding for three years, except that in the intervening time the bots had effectively earned a doctorate in natural language. That’s why his team had to build an anti-bot machine that runs at 208 suspensions per minute.
The machine has a side effect. It also catches humans: verified premium users who’ve been on the platform for years, secondary accounts used for private content curation, profiles whose behavioral patterns happen to look sufficiently bot-like. Bier has acknowledged publicly that the algorithm makes mistakes. At that rate of intervention, collateral damage is inevitable.
So we have to ask: what has made things so much worse in the last year and a half?
The answer has a name.
The Gutenberg Machine That Builds Itself
OpenClaw launched in November 2025 under a different name. Within a single week of going viral in January 2026, it had accumulated 145,000 GitHub stars. By early April it had crossed 300,000, making it the fastest-growing open-source repository in GitHub’s history, surpassing React, Vue, and TensorFlow, each of which took years to reach comparable numbers. Nvidia CEO Jensen Huang publicly called it the most important software release ever shipped. That is a very large compliment from a very careful man.
What does it actually do?
The description is simple; the implications are not. You install OpenClaw on your machine. You give it access to your browser, your email, and your messaging apps. It becomes an autonomous assistant that acts on your behalf, running silently in the background, checking every thirty minutes whether there is work it can complete. It can read and reply to emails.
It can fill in web forms.
It can send WhatsApp and Telegram messages.
It can execute commands on your local machine.
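To make that concrete, here is a minimal sketch of the loop such an agent runs. To be clear, this is not OpenClaw’s actual code: `check_for_work`, `decide_action`, and `execute` are hypothetical stand-ins for whatever connectors a real agent wires up to your inbox, browser, and messaging apps.

```python
import time

POLL_INTERVAL_SECONDS = 30 * 60  # the thirty-minute cadence described above

def check_for_work():
    """Hypothetical connector: return pending items (emails, messages, forms)."""
    return []  # a real agent would query Gmail, WhatsApp, Telegram, etc.

def decide_action(item):
    """Hypothetical model call: ask an LLM what to do about this item."""
    return {"kind": "reply", "to": item["sender"], "body": "..."}

def execute(action):
    """Hypothetical connector: send the reply, submit the form, run the command."""
    print("executing:", action)

# The whole trick is that nobody has to sit at the keyboard:
# the loop wakes up, acts, sleeps, and repeats indefinitely.
while True:
    for item in check_for_work():
        execute(decide_action(item))
    time.sleep(POLL_INTERVAL_SECONDS)
```

A real agent layers authentication, sandboxing, and model calls on top, but the skeleton really is this small.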
For a legitimate user, it is an extraordinary productivity engine. Microsoft confirmed this month that it is actively testing OpenClaw-like features inside Microsoft 365 Copilot. When Microsoft reverse-engineers an open-source project into a flagship enterprise product within weeks of its emergence, something real is happening.
But here is what the productivity story leaves out.
What constrained scammers until now was primarily a human time bottleneck. One person behind a laptop cannot write 500 personalized emails in an hour. They cannot make 200 phone calls in a day while tailoring their pitch to each specific target. OpenClaw removes that constraint entirely. You give it an objective. The machine works twenty-four hours a day, for free, without tiring on the third call of the afternoon.
No server configuration required.
No code to write.
No scripts to maintain.
The technology has left the developer niche and arrived on the desktop of anyone with a laptop and an internet connection.
This is our Gutenberg moment — but the adoption curve doesn’t take fifty years anymore. It takes six months.
After Gutenberg invented movable type in the 1450s, Europe spent fifty years drowning in religious pamphlets, forged prophecies, and fabricated political accusations — all because the means of mass distribution had suddenly become cheap and accessible. Historians call this period the Age of the Pamphlet. It took two centuries to build the institutional filters we now take for granted: editors, journalists, librarians, fact-checkers. OpenClaw is that same technological rupture, compressed into a sprint.
Why Your Brain Is Already Outgunned
The voice cloning piece is where this gets neurologically uncomfortable, and I want to be precise about why.
The FBI’s 2025 Internet Crime Report, published this April, recorded more than 22,000 AI-related fraud complaints resulting in $893 million in losses. Government impersonation scams nearly doubled, climbing from 17,300 complaints in 2024 to over 32,400 in 2025, with losses jumping from $405 million to $798 million. In 260 of those impersonation complaints, victims explicitly mentioned artificial intelligence, accounting for $7 million in verified losses. And these are only the numbers that get reported. The FBI notes openly that most victims don’t file complaints at all because they feel ashamed. They assume they should have known better. So they pay, and the operation moves on to the next target.
On voice cloning specifically: your brain is not equipped to be skeptical in time. Neuroscience research has established that humans recognize a familiar voice in under 200 milliseconds — faster than you become consciously aware that you’re hearing a sound. That 200-millisecond window exists because it was evolutionarily essential: recognizing a known voice in the dark was the difference between walking back into camp and walking toward a predator. Your threat-detection system evolved to trust a familiar voice before your logical cortex even enters the room.
When a scammer calls you using a cloned version of your daughter’s voice, the part of your brain designed to protect you has already been bypassed before you have had time to think. You are not gullible if you fall for it. You are simply running thirty-thousand-year-old hardware against twenty-first-century attack tools.
That’s what makes the old advice (watch for spelling errors, be skeptical of unsolicited emails) completely obsolete. The old scams were easy to identify: a multicolored email full of grammatical errors, from a sender claiming to be the director of a Nigerian bank, required effort to fall for. Today’s AI-generated scam content is trained using Reinforcement Learning from Human Feedback (RLHF), the same technique that makes large language models so fluent and compelling.
Thousands of human raters voted on which outputs were most engaging, and those versions were reinforced over millions of training examples. Every sentence that comes out of these systems has been mathematically optimized to hold your attention better than a sentence written by an average human writer. When a scammer uses their own creativity to deceive you, they’re limited by what their creativity can produce. When they use a model trained by RLHF, they’re drawing on the collective judgment of thousands of people who voted for what works best.
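For readers who want to see the mechanics, here is a minimal sketch of the preference step at the heart of RLHF, using toy numbers. The pairwise loss below is the standard Bradley-Terry formulation used to train reward models; the hard-coded scores stand in for a large neural reward model, an assumption made purely for illustration.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Bradley-Terry preference loss: pushes the reward model to score
    the human-preferred output higher than the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Toy example: human raters preferred output A over output B.
score_a, score_b = 2.1, 0.4  # stand-in scores from a (hypothetical) reward model

print(preference_loss(score_a, score_b))  # ~0.17: model already agrees with raters
print(preference_loss(score_b, score_a))  # ~1.87: model disagrees, gets corrected
```

Run across millions of such comparisons, that loss turns thousands of individual votes into a single scoring function, and the language model is then tuned to maximize it. That is what “mathematically optimized to hold your attention” means in practice.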
Pushpaganda: The Internet’s New Plumbing Problem
This week gave us a concrete example of what industrialized AI fraud looks like in the wild. Security researchers at HUMAN’s Satori Threat Intelligence Team exposed a campaign they named Pushpaganda. The operation used AI-generated content and search-engine poisoning techniques to flood Google Discover with fake news articles designed to trick users into enabling persistent browser notifications.
Once enabled, those notifications delivered fake legal threats and redirected victims into additional scam infrastructure. At its peak, the campaign generated roughly 240 million bid requests across 113 domains in a single week. Google deployed a fix. While Google was patching one operation, ten others were already running elsewhere.
This is a symptom of a deeper structural collapse, and it’s not primarily about spam.
The economic model that has held the internet together for thirty years is failing. The deal was straightforward:
Humans wrote articles, built blogs, published analyses, and Google indexed them. Readers clicked. Creators earned advertising revenue. Everyone got something useful.

But Google introduced AI Overviews, which synthesize answers directly at the top of the search results page. You get your answer without clicking. Publishers stop receiving traffic. Several major media properties have reported organic audience declines of more than 50 percent over three years. Entire editorial teams have been cut to a fraction of their former size. Blogs that ran for fifteen years have shut down.

And into the vacancy left by departing human publishers, SEO slop floods in: AI-generated content mills publishing hundreds of articles per day, optimized enough to rank, thin enough to be worthless, monetized by the same advertising ecosystem that used to reward quality. Merriam-Webster chose slop as its word of the year for 2025. The English language found the right word.
There Is No Single Fix, But There Are Reflexes
Nikita Bier is fighting. He added a dislike button to replies, at least in the United States for now. His team cut payouts to clickbait aggregator accounts by 60 percent. They are suspending Japanese spam networks in systematic waves. All of this is real, and none of it is sufficient. Against an ecosystem where anyone can deploy an AI agent in ten minutes that works around the clock, one team running suspensions at 208 per minute is a firefighter with a garden hose in front of a burning forest.
So what actually works?
The survival rules that do work demand new reflexes rather than new intelligence. The first: if a message arrives that seems perfectly tailored to your specific situation from someone you don’t already have a relationship with, assume it’s AI.
The second: if you get a call from an unrecognized number and the caller claims to be someone you know in an emergency, hang up and call that person back using the number saved in your contacts.
Not the number that just called.
The one you already had.
These reflexes seem obvious when you’re reading them in an article. They feel very different at 11 p.m. when you hear what sounds exactly like your daughter’s voice asking for help.
What holds genuine value in this new version of the internet is trust that was built before the flood: the colleagues you’ve met in person, the professional relationships established through direct interaction, the writers and analysts whose work you’ve been reading for years. The platforms don’t protect you. Their algorithms don’t protect you. The connections you built with humans, in real time, before the agents arrived: those are what protect you. That’s not pessimism. It’s a fairly accurate map of where we are.
The Other Reading
There is a second way to look at everything I’ve just described, and it’s worth being honest about it, because it changes the conclusion entirely.
OpenClaw is not only a tool for scammers. Autonomous agents, the latest generation of AI models, and the infrastructure being built around them are the most powerful, productive tools humans have ever had access to. Not approximately. Not with qualifications.
Think about how electricity entered daily life after it was domesticated at the end of the nineteenth century. Most people used it to replace candles with light bulbs and found that extraordinary. It took thirty years for someone to imagine the electrified assembly line. Forty years for the mass-market refrigerator. Fifty years for television. During that half-century, the people who grasped what the technology actually enabled, not just what it replaced, built the foundations of entire industries. Everyone else just got better lighting.
We’re at exactly that point with AI in 2026. Most people are using it to rephrase emails or summarize articles. But the people who understand how to build autonomous agents, automate entire workflows, and deploy virtual teams are already operating at a different level. That gap will not stay open for long: the window in which fluency with these tools represents a genuine professional advantage is measured in months to a few years, not decades. People who mastered Google Ads in 2004 had built entire agencies by 2009. Designers who moved to Figma in 2019 became the most sought-after professionals of the decade. In each case the window closed within a few years, and after that, the skill was simply a minimum requirement.
For AI, we are in the middle of that window right now. In a year, the people learning today will be the ones others come to for help. In three years, fluency will be the baseline expectation, not the advantage.
The difference between those two futures isn’t intelligence. It isn’t luck. It’s whether you chose to understand the tools before everyone else did, or waited until the moment they became impossible to ignore.
Thanks for reading. Your opinions in the comments are very welcome. Follow me and subscribe for more analysis, and consider supporting my newsletter for early access to new pieces.