AI Is Eating the World’s Memory. Apple Is the Only Company That Doesn’t Care.
Apple responded by building its cheapest laptop ever. Here’s how.
There’s a component inside every device you own that you’ve probably never thought about. It’s called RAM: random access memory. Without it, your processor is a genius sitting in an empty room with no desk, no paper, and no way to work. RAM is the surface where your computer spreads out everything it’s doing at once: every open tab, every running application, every frame of video. And right now, there isn’t enough of it to go around.
DRAM prices surged approximately 90% in Q1 2026 alone compared to the previous quarter. Microsoft raised the Surface Pro from $1,000 to $1,500. Sony increased the PS5 price years after launch. Samsung hiked the price of the Galaxy S26 Ultra. Dell is planning increases of 15% to 20% across its PC lineup. Industry experts don’t expect prices to normalize until at least 2027, because the only real fix is new factories, and memory fabs take years to build.
In the middle of all this, Apple released a €699 MacBook. Not refurbished. Not stripped down. An aluminum laptop with a Retina display that opens with one hand and runs macOS without a fan. At the exact moment when every competitor was raising prices and cutting specs to survive, Apple went in another direction.
The explanation for how they did it is also the explanation for why nobody else can.
Why Your Laptop Suddenly Costs More
To grasp the crisis, you first need to understand how concentrated the memory industry is.
Three companies produce over 93% of the world’s DRAM: Samsung, SK Hynix, and Micron. For decades, their biggest customers were the makers of phones, laptops, and game consoles. But over the last two years, a new customer has arrived that dwarfs all of them combined: artificial intelligence.
The data centers that run ChatGPT, Claude, Gemini, and every other large language model need memory at a scale that didn’t exist three years ago. And not just regular memory. They need HBM, high-bandwidth memory: a specialized variant built by stacking DRAM chips in layers, connecting them vertically through microscopic pathways, and placing the stack right next to the processor so it can move enormous amounts of data at extreme speed. Producing a single gigabyte of HBM requires roughly four times the manufacturing capacity of a gigabyte of standard RAM. Every gigabyte the factories allocate to AI is four gigabytes they can’t make for your laptop.
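The trade-off above reduces to simple arithmetic. Here is a toy model of it: the 4× figure comes from the article, while the fab capacity and HBM demand numbers are purely illustrative assumptions, not industry data.

```python
# Toy model of the wafer-capacity trade-off between HBM and standard DRAM.
# The 4x cost factor is the article's estimate; the example numbers below
# are illustrative assumptions only.

HBM_CAPACITY_COST = 4  # capacity needed for 1 GB of HBM vs 1 GB of standard DRAM

def consumer_dram_left(total_capacity_gb: int, hbm_demand_gb: int) -> int:
    """GB of standard DRAM a fab can still make after serving HBM orders."""
    used_by_hbm = hbm_demand_gb * HBM_CAPACITY_COST
    return max(total_capacity_gb - used_by_hbm, 0)

# A fab with capacity for 1,000 GB of standard DRAM that commits
# just 150 GB of HBM to data centers loses 600 GB of consumer output.
print(consumer_dram_left(1_000, 150))  # -> 400
```

The point of the sketch: HBM demand doesn’t subtract from consumer supply one-for-one, it subtracts at a multiple, which is why a modest-looking shift toward AI memory empties the consumer market so quickly.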
The math created an impossible situation. Google, Amazon, Microsoft, and Meta went to the memory manufacturers and signed contracts to buy everything they could produce, at any price. SK Hynix announced that its entire 2026 production capacity was pre-sold before the year even started. Micron’s consumer brand, Crucial, was effectively wound down as the company redirected capacity toward data center contracts. TrendForce’s senior VP Avril Wu described the situation as the most extreme she has seen in twenty years of tracking the memory industry.
The consequences cascaded immediately. Up to 70% of all memory produced globally in 2026 will be consumed by data centers, according to analysis from Tom’s Hardware citing IDC. What’s left has to supply every phone, laptop, tablet, car, TV, and game console on the planet. With demand growing and supply shrinking, consumer RAM prices nearly doubled in a single quarter.
The relief won’t come soon. New memory factories take three to four years to build. Micron’s next major facility in Idaho won’t be operational until 2027 at the earliest. Samsung’s own analysis suggests the supply-demand gap will widen before it narrows.
How Every Laptop Gets Built (and Why That Model Is Breaking)
To understand why Apple can do what no one else can, you first need to understand how the rest of the industry works.
A typical Windows laptop is an assembly of parts from different specialists. Intel or AMD designs the processor. NVIDIA designs the GPU. Samsung or SK Hynix provides the RAM. TSMC or Samsung’s foundry fabricates the chips. And then a company like Dell, HP, Lenovo, or Asus buys all of those components on the open market, designs a chassis around them, installs Windows, and sells you a laptop.
This system worked beautifully for forty years. Intel spent decades making transistors smaller. NVIDIA spent thirty years optimizing graphics computation. Microsoft spent forty years making Windows run on an infinite variety of hardware configurations. The specialization drove costs down and performance up. It’s the reason a college student today has more computing power on their desk than a supercomputer from the 1990s.
But this model has a structural weakness:
Every company in the chain depends on the same suppliers for the same components at the same prices. When RAM doubles in cost, every laptop from every brand gets more expensive at approximately the same rate. Nobody has an escape route because nobody controls enough of the supply chain to build around the shortage.
Nobody except Apple.
The iPhone Subsidy We Don’t Talk About Enough
Apple sells approximately 250 million iPhones per year. It sells roughly 25 million Macs. The iPhone isn’t just Apple’s biggest product; it’s a financial engine so large that it subsidizes the development of everything else Apple makes.
For fifteen years, Apple has been building custom processors for the iPhone. Not buying them from a catalogue, the way Dell buys Intel chips, but designing them from transistor placement up. The iPhone’s constraints are brutal: it has to film, compute, recognize faces, run AI models, and display a fluid interface, all without a fan, with a tiny battery, and almost no room to dissipate heat. Every watt wasted becomes a degree of temperature. Every degree slows the processor.
Because Apple designs both the chip and the operating system, it can do something no PC maker can:
Dedicate specific blocks of silicon to specific tasks. Image processing gets its own engine. Machine learning gets its own neural cores. Face ID gets its own dedicated hardware. Nothing is general purpose. Everything is optimized.
And because Apple decides how every component is arranged inside the chip, it made a choice that changed the entire trajectory of the Mac:
It integrated the memory directly into the same package as the processor. In a traditional PC, RAM sits on a separate module, centimeters away from the CPU. Data moves back and forth on a bus, using energy and creating heat as it goes. On Apple’s chips, memory sits millimeters from the processing cores.
Less distance, less heat, less wasted energy, more performance per watt.
In 2020, Apple took everything it had learned from fifteen years of iPhone chip design and scaled it up for the Mac. The M1 MacBook Air was up to 2.5 times faster than the Intel MacBook Air it replaced. It outperformed premium Windows laptops that cost twice as much. It had no fan. The battery lasted all day. And it did it with 8GB of unified memory that, because of the architecture, performed like significantly more.
The MacBook Neo is a leather offcut from the iPhone.
Here’s the part that explains how Apple can sell a €699 MacBook while everyone else is raising prices.
When TSMC fabricates chips by the millions, some come off the production line with minor defects. A GPU core that doesn’t pass quality testing. A small section of the die that underperforms. In a conventional supply chain, those chips get discarded. But Apple, because it designs its own silicon, can do something else: disable the defective section and use the chip in a product that needs less power.
Apple has never publicly confirmed that this is what happened with the MacBook Neo. But the chip inside it tells its own story. It’s the A18 Pro, the same processor that powers the iPhone 16 Pro, with one GPU core fewer. The most likely explanation, and the one that aligns with standard industry practice, is chip binning: using iPhone production rejects in a lower-power product rather than throwing them away.
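Binning itself is standard industry practice and easy to sketch. The logic below is illustrative only: the grade labels follow the article’s description, and the specific GPU core count is an assumption made for the example, not a confirmed spec.

```python
# Minimal sketch of chip binning as described above: dies that fail
# full-spec GPU testing are reused in a product that needs one fewer
# core instead of being discarded. Core count and tiers are assumptions.

FULL_SPEC_GPU_CORES = 6  # assumed full-spec core count for illustration

def bin_die(working_gpu_cores: int) -> str:
    """Assign a fabricated die to a product tier based on test results."""
    if working_gpu_cores >= FULL_SPEC_GPU_CORES:
        return "iPhone 16 Pro"   # every core passes: full-spec part
    if working_gpu_cores == FULL_SPEC_GPU_CORES - 1:
        return "MacBook Neo"     # one bad core disabled, die salvaged
    return "scrap"               # too many defects to reuse

print(bin_die(6))  # -> iPhone 16 Pro
print(bin_die(5))  # -> MacBook Neo
print(bin_die(3))  # -> scrap
```

The economics follow directly: every die routed to the middle tier is revenue recovered from silicon that, in a conventional supply chain, would have been thrown away.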
The analogy that fits best:
Hermès takes its leather offcuts and makes keychains. Apple takes its iPhone chip offcuts and makes the cheapest MacBook in history.
That is why no competitor can respond. Dell can’t bin chips because Dell doesn’t design chips. Lenovo can’t use iPhone rejects because Lenovo doesn’t have an iPhone. HP can’t integrate memory into its processor because HP buys its processors from Intel, and Intel’s architecture doesn’t work that way. The MacBook Neo exists because the iPhone exists. Remove the iPhone from the equation, and the economics collapse.
The Machine Itself (and Its Real Limitation)
I want to be honest about the experience of using the MacBook Neo, because the marketing story and the daily reality are both true at the same time.
For students, for light professional work, for browsing, email, documents, and media consumption, this is genuinely the best laptop at this price point. The build quality is aluminum, not plastic. The display is Retina. The speakers are decent. The trackpad is good. The keyboard is solid. There is no fan, so there is no noise, ever. It opens with one hand, a detail that sounds trivial until you realize the engineering required to balance a €699 laptop precisely enough to make that possible.
But I had slowdowns. Noticeable ones. When I pushed the machine beyond its intended workload, which for me means a writing workflow with twenty tabs, multiple apps, and occasionally a design tool running simultaneously, the 8GB of unified memory ran out of headroom, and things started to lag. Most coverage of the Neo doesn’t mention this. I’m mentioning it because it happened to me repeatedly, and because honesty about limitations is worth more than a clean recommendation.
The 8GB ceiling is the Neo’s real constraint, and it’s a constraint that matters more over time than it does today. You don’t buy a laptop for what it can handle this week. You buy it for what it needs to handle in four or five years. Applications get heavier. Websites get more complex. macOS itself requires more memory with each generation. 8GB in 2026 is adequate. 8GB in 2030 might not be.
My advice:
Wait for the second generation of the Neo. The chip binning pipeline will only improve as Apple moves to newer process nodes. The architecture will get more efficient. And the price advantage over the rest of the market will only grow wider as the RAM crisis deepens.
Why Nobody Can Compete
The RAM shortage exposed a structural truth that was always present but never visible under normal market conditions: Apple doesn’t play the same game as the rest of the PC industry.
TSMC builds the most advanced chips on Earth. Every two or three years, it develops a new generation of fabrication technology: smaller transistors, more efficient, more powerful. At launch, the new process can only produce a limited volume, and the biggest clients fight for first access. Apple has historically reserved up to 90% of initial capacity on TSMC’s newest nodes because Apple’s iPhone volumes are so large that no PC manufacturer can match them. Lenovo, the world’s largest PC maker, sells only about a third as many computers as Apple sells iPhones. The volume disparity is insurmountable.
This means Apple gets the newest, most efficient silicon before anyone else. It means Apple can build products around chips that competitors literally cannot buy for another twelve months. And it means Apple can absorb the RAM crisis in ways no one else can, because its architecture was designed from the beginning to do more with less memory.
No competitor can buy this advantage. You can buy chips, you can buy RAM, you can buy factory capacity. But you cannot buy a fifteen-year head start in custom silicon design. You cannot buy an installed base of 250 million annual iPhone sales that finances the entire operation. You cannot buy the engineering culture that figured out how to turn a defective iPhone processor into the cheapest MacBook ever made.
The rest of the industry is fighting over scraps of a shrinking supply. Apple is making keychains from its leftovers.
Thanks for reading. Let me know your thoughts in the comments.