Elon Musk Said Tesla Didn’t Need xAI. Then He Announced Their Biggest Project Together.
One post. Eighteen months apart. And a shareholder lawsuit that just got stronger.
In September 2024, a Wall Street Journal reporter revealed that Tesla was in discussions to share revenue with xAI, Elon Musk’s private AI startup, in exchange for access to its models. Musk’s response on X was categorical: “There is no need to license anything from xAI.” He went further, explaining that Tesla’s real-world AI system was “vastly larger” than any large language model, and that xAI’s models were too computationally heavy to run on Tesla’s vehicle inference hardware anyway. The two companies, he suggested, had nothing meaningful to offer each other.
That statement had a specific purpose. Tesla shareholders had just filed a lawsuit accusing Musk of breach of fiduciary duty — creating xAI as a private company at Tesla’s expense, diverting AI talent, Nvidia GPU deliveries, and strategic attention away from Tesla and toward a venture whose upside he would capture personally. By denying any actual connection between the two companies, Musk was dismantling the premise of the complaint.
Then, on March 11, 2026, he posted again. And everything changed.
What Macrohard Actually Is
The announcement introduced a joint project between Tesla and xAI called Macrohard — a deliberate jab at Microsoft — also referred to as Digital Optimus. In one post, Musk described a system capable, in principle, of emulating the function of entire companies. And the reasoning layer is powered by Grok, xAI’s large language model: the exact technology he had insisted, eighteen months earlier, that Tesla did not need.
The legal implication was immediate. Every public statement linking xAI and Tesla more tightly makes the plaintiffs’ case stronger. If xAI’s technology is essential to Tesla’s most ambitious product line, the question becomes unavoidable: why was it built at a private company where Musk captures the upside personally, rather than at Tesla, where it would belong to shareholders?
Set the legal complications aside, and the technology itself is genuinely interesting. The architecture borrows from Daniel Kahneman’s dual-process theory of cognition. There are two brains. The first, Digital Optimus, runs locally on Tesla’s AI4 chip — a $650 piece of hardware already installed in every recent Tesla vehicle. It processes the last five seconds of computer screen video in real time, along with every keyboard and mouse action. It clicks, types, scrolls, and navigates software exactly as a human office worker would. This is System 1: fast, instinctive, operating without latency because it never needs to leave the car.
The second brain is Grok, running in the cloud as System 2. When Digital Optimus hits something genuinely complex — an ambiguous decision, a multi-step process, a case that requires real-world understanding — it sends a query to Grok, which analyzes, reasons, and returns a directive in milliseconds. Digital Optimus executes, and the work continues.
Take expense report processing as a concrete example. Digital Optimus opens the accounting application, reads the line items, cross-references amounts, and fills the fields — all locally, with near-zero latency. When it hits an anomaly, such as an unfamiliar vendor or a duplicate invoice, it escalates the question to Grok, which diagnoses the problem and sends back the resolution. The entire exchange takes a fraction of a second. Musk compared the dynamic to a GPS: Grok sets the direction, Digital Optimus drives.
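None of this code is public, but the loop Musk describes maps onto a familiar escalation pattern. Here is a minimal sketch in Python; every name in it (local_model, cloud_client, the confidence threshold) is a hypothetical stand-in, not Tesla’s or xAI’s actual API:

```python
import time

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff: below this, defer to the cloud

def process_line_item(item, local_model, cloud_client):
    """System 1 / System 2 escalation for one expense line item.

    local_model stands in for the on-device agent (Digital Optimus);
    cloud_client stands in for the cloud reasoner (Grok). Both are
    illustrative interfaces, not real APIs.
    """
    # System 1: cheap, low-latency local inference. Routine items
    # (known vendor, amounts reconcile) never leave the device.
    action, confidence = local_model.propose_action(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action

    # System 2: an anomaly (unfamiliar vendor, duplicate invoice)
    # triggers a single round trip to the cloud reasoner.
    start = time.monotonic()
    directive = cloud_client.resolve(item, draft=action)
    print(f"escalated, round trip {(time.monotonic() - start) * 1e3:.0f} ms")
    return directive
```

The design choice worth noticing is that the threshold, not the cloud, decides who works: the expensive model is consulted only when the cheap one admits it is unsure.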
The Supercharger Network as AI Infrastructure
What makes the architecture more than a software demo is where it runs: not in a distant data center, but on a $650 chip already sitting in millions of parked cars. Every Tesla equipped with AI4 hardware — every recent Model 3, Model Y, Model S, Model X, and Cybertruck — could theoretically run digital office tasks overnight while its owner sleeps. Reconcile expense reports. Sort emails. Fill administrative forms. Navigate accounting software. All at near-zero marginal cost, since the processing is local.
Compare that to how current cloud AI agents work: a screenshot is captured, sent to a remote server, analyzed, a command is returned, the computer executes it, and a new screenshot is taken. Each action requires a round-trip to the cloud. That works, and the results are often impressive, but every click costs time and cloud compute dollars. Digital Optimus would only need to consult the cloud for the hardest problems. The rest runs on a few cents of electricity.
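Written as a loop, the difference is easy to see. The sketch below is illustrative Python, not any vendor’s agent API; the point is where each step runs and which ones pay for a network hop:

```python
def cloud_agent_step(screen, remote_model):
    """One action with a cloud-only agent: every click pays a round trip."""
    frame = screen.capture()             # 1. screenshot
    command = remote_model.infer(frame)  # 2. upload, remote inference, download
    screen.execute(command)              # 3. click / type / scroll locally
    # 4. repeat: capture again, pay the round trip again

def local_agent_step(screen, local_model, cloud_client):
    """One action with the split architecture: the round trip is the exception."""
    frame = screen.capture()
    action, confidence = local_model.infer(frame)  # on-device, near-zero latency
    if confidence < 0.9:                           # rare hard case
        action = cloud_client.resolve(frame)       # the only cloud hop
    screen.execute(action)
```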
Musk added a detail that caught observers off guard: Tesla plans to deploy millions of dedicated Digital Optimus units at its Supercharger network, where the company has approximately 7 gigawatts of power. Every charging station would become a potential mini AI inference cluster. The arithmetic on that is worth letting sit.
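Here is one way to let it sit. The 7 gigawatt figure is from Musk’s post; the per-unit power draw and the overhead factor are my assumptions, since Tesla has published no spec for a standalone Digital Optimus unit:

```python
# Back-of-envelope only. AI4-class inference boards are commonly
# reported in the ~100 W range; treat that, and the overhead factor,
# as assumptions rather than published figures.
NETWORK_POWER_W = 7e9   # ~7 GW across the Supercharger network (Musk's figure)
UNIT_DRAW_W = 100       # assumed draw per Digital Optimus unit
OVERHEAD = 1.3          # assumed cooling and power-conversion overhead

units = NETWORK_POWER_W / (UNIT_DRAW_W * OVERHEAD)
print(f"~{units / 1e6:.0f} million concurrent units")  # prints ~54 million
```

Even if charging vehicles kept 90 percent of that power, the remainder would still support millions of always-on agents under these assumptions.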
One Model, Three Bodies
Digital Optimus isn’t an isolated product. It’s the third pillar of a unified architecture — the same foundational technology running across three very different physical contexts.
Tesla’s original application of this core technology was real-time visual recognition of objects such as stop signs and pedestrians, even in severe weather like blizzards. Now it’s being adapted to detect buttons, menus, and interface elements on a computer display. And that same base underlies Optimus, Tesla’s humanoid robot — the one that walks, manipulates objects, and operates on factory floors. One model family, three deployments: autonomous driving, physical robotics, and now digital agents.
All data points generated by any of these three systems could, in principle, strengthen the others. A difficult road maneuver improves spatial reasoning, which helps the robot handle an irregular object, which sharpens the digital agent’s navigation of an unfamiliar interface. Musk is describing something close to a world model: perception, spatial reasoning, and real-time decision-making as unified capabilities across all three products.
The vision is coherent on paper. The timing of the announcement was quite peculiar.
A Company Rebuilding from the Foundations
Two days after unveiling Macrohard, Musk published a sentence that would be extraordinary coming from the CEO of any company: “xAI was not built right the first time, so it is being rebuilt from the foundations up.”
The context made this harder to dismiss as corporate posturing. Since January 2026, nine of the eleven original co-founders of xAI had left the company. Tony Wu, who led the reasoning team, departed on February 10. Jimmy Ba, a co-author of the highly influential Adam optimization paper and director of xAI’s research, departed the next day. Zihang Dai and Guodong Zhang followed in early March. The only founding members still at the company are Manuel Kroiss and Ross Nordeen.
According to multiple reports, the Macrohard project entered an informal pause following the departure of Toby Pohlen; the project had lost its leader just weeks after he was appointed to run it. Musk responded by recruiting two engineers from Cursor, the AI coding tool, and by sending executives from Tesla and SpaceX to audit xAI’s teams. He also acknowledged publicly, at the Abundance Summit, that Grok was behind Claude Code and OpenAI’s Codex on coding benchmarks — the exact domain that Macrohard depends on.
The picture that emerges is unusual: a company publicly announcing a system capable of replacing entire corporate structures, while the team tasked with building that system has largely dissolved.
The Orbital Layer
Zoom out further, and the scale of what Musk is attempting becomes both clearer and stranger.
On February 2, 2026, SpaceX acquired xAI in an all-stock merger valuing the combined entity at $1.25 trillion — measured by valuation, the largest corporate merger ever executed. The official rationale: build orbital data centers. SpaceX had already filed an FCC application on January 30 requesting authorization to launch up to one million satellites designed to function as solar-powered compute clusters in space.
The logic is economically clean: in a sun-synchronous orbit, solar energy is available nearly continuously, and waste heat can be radiated directly into space. Electricity and cooling are the two most expensive inputs for running AI at scale on the ground. In orbit, the argument goes, both come close to free.
All of this infrastructure — the satellite compute nodes, the Tesla AI4 chips in vehicles, the Supercharger network, the Optimus robots — would run on the same family of chips, connected through Starlink. One model. One chip architecture. Deployed from the asphalt to low Earth orbit.
Google has explored similar orbital compute concepts, and Jeff Bezos has discussed space-based data centers over longer horizons. But no other organization currently controls the rockets to build that infrastructure, the satellite communication network to connect it, and the in-house chip architecture to power it. That convergence is unique to the entity Musk controls.
The Lawsuit That Follows Every Announcement
None of this resolves the legal question; it deepens it.
The pension fund for Cleveland’s bakery workers and the Teamsters union have had a complaint pending in Delaware court since June 2024, accusing Musk of breach of fiduciary duty to Tesla shareholders. The core allegation: by founding xAI as a private company, Musk diverted AI talent, Nvidia GPU allocations, and strategic focus away from Tesla and into a venture where he captures the value. They have asked the court to order a transfer of Musk’s stake in xAI to Tesla.
Tesla invested $2 billion in xAI’s Series E round in January 2026. Tesla shareholders, the lawsuit argues, are now directly funding the very company that should never have existed outside Tesla.
The February SpaceX acquisition converted that Tesla investment into an indirect stake in SpaceX-xAI, further entangling the companies while keeping the AI technology itself outside Tesla’s ownership. And every announcement that makes Grok sound indispensable to Tesla’s product roadmap — Macrohard being the loudest example — objectively strengthens the argument that Musk had an obligation to build it at Tesla.
The Arithmetic Nobody Disputes
Whatever the legal outcome, the direction of travel is not in question.
A worker handling data entry, invoice processing, email management, and report generation costs somewhere between $35,000 and $60,000 per year, fully loaded. A Digital Optimus system running on a $650 chip, consuming a few cents of electricity per day, available around the clock, without sick days or vacation, brings the cost per task close to zero.
That describes tens of millions of jobs globally. This is not a catastrophic prediction — it’s straightforward arithmetic. And it doesn’t depend on Musk specifically. Whether it’s Tesla, Anthropic, OpenAI, or Google that delivers this capability first, the economic equation stays the same.
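Spelled out, with the worker figures from above and everything on the machine side flagged as an assumption, the arithmetic looks like this:

```python
# Worker figures are from the paragraph above; every machine-side
# number (lifespan, tasks per day, electricity) is an assumption.
worker_cost_per_year = (35_000 + 60_000) / 2        # midpoint, fully loaded
worker_tasks_per_year = 250 * 100                   # assumed: 100 tasks per workday

chip_cost = 650                                     # AI4 unit cost, from the article
chip_life_years = 5                                 # assumed amortization period
electricity_per_day = 0.05                          # "a few cents", per the article
machine_tasks_per_year = 365 * 2_000                # assumed: runs around the clock

worker_per_task = worker_cost_per_year / worker_tasks_per_year
machine_per_task = (chip_cost / chip_life_years + 365 * electricity_per_day) \
                   / machine_tasks_per_year

print(f"worker:  ${worker_per_task:.2f} per task")    # ~$1.90
print(f"machine: ${machine_per_task:.5f} per task")   # ~$0.0002
```

The exact inputs barely matter; vary any of them by an order of magnitude and the gap survives.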
Where does this actually stand? Musk has suggested a Digital Optimus launch around September 2026. His timelines are historically optimistic, and that qualifier matters. Macrohard is reported to be paused.
The company tasked with building it has lost nearly its entire founding team. The lawsuit continues. xAI is burning roughly $1 billion per month. The orbital data centers are an FCC application, not a deployed system.
The vision is coherent.
The technical architecture is real.
The gap between announcing a system that can replace Microsoft and actually shipping it at scale to millions of vehicles is large enough that even the world’s largest fortune doesn’t guarantee crossing it.
What is certain is the direction. Agents capable of autonomous computer control are coming, from this ecosystem or another. The question, as always, is not whether. It’s when, and who gets there first.
Thanks for reading. Follow me and subscribe for more content. You can also subscribe to my newsletter for early access.


