One Computer To Rule Them All
About Stargate, superintelligence, and the infinite slop machine.
AI companies are racing to secure a piece of the future — and part of that is securing as much compute as possible. To what end, one might ask.
This week, OpenAI CEO Sam Altman talked to reporters outside Building 2 at the Stargate data center site in Abilene, Texas. It was a highly choreographed press event with only one goal: to beat the drum for the company’s ambitious $500 billion AI infrastructure project.
That same morning, in an internal Slack note, Altman shared with employees OpenAI’s long-term ambition: at least five more sites the size of the one in Abilene, together consuming a staggering 250 gigawatts by 2033, roughly a quarter of the entire U.S. electrical generation capacity.
Sharon Goldman, a journalist for Fortune, wrote an entertaining piece about it, but I have to admit it ultimately left me unsatisfied.
A question that kept bugging me was: to what end? What will all this compute be used for? What future are these people trying to will into existence?
AGI is out of fashion; in Silicon Valley, everyone is obsessed with superintelligence
OpenAI’s slogan used to be “to ensure that artificial general intelligence benefits all of humanity”. Yet in an interview with CNBC less than a month ago, Altman said he found AGI “no longer a super useful term”.
In Silicon Valley, all the talk is about superintelligence.
Take, for example, this pretty cringey essay by Mark Zuckerberg (set in Times New Roman to make it appear more philosophical than it really is).
Similar blog posts have been published by Altman (1) and Anthropic’s Dario Amodei (2), with equally grandiose predictions.
(It seems we live in a time where tech CEOs consider themselves philosopher-kings.)
While they have different visions of the future, they all agree that AI changes everything. It’s going to reshape society as we know it, and their goal, whether that takes 5, 10, or 15 years, is to come out on top.
I present to you three main schools of thought:
Superintelligence as Infinite Slop Machine
This is the world Mark Zuckerberg lives in. He’s trying to build something he calls ‘personal superintelligence’: a universe of AI-augmented smart glasses, countless sycophantic AI companions, and a social media feed made up exclusively of AI-generated content, a move designed to cut the human out of the content creation loop. How convenient! As if social media algorithms weren’t addictive enough already. And the latest news is that OpenAI is dipping its toes in too: it just launched a new app where users can generate videos of themselves and their friends, powered by Sora 2, and share them on a TikTok-style algorithmic feed, directly challenging Meta.
Superintelligence as Worthy Successor
The ‘Worthy Successor’ movement is a fringe group that believes we’re on the cusp of creating a new species. An engineered species. A piece of technology that will come alive and surpass humanity’s unparalleled inventiveness, creativity, and ability to gather resources and turn them into cities full of skyscrapers, shoot rockets into space, or build machines that create chips by firing lasers at tiny tin droplets, producing EUV light that mirrors then guide to print billions of tiny transistors onto silicon wafers, which end up in the graphics cards filling the very data centers that power today’s AI. While it’s hard to take these views seriously, they can count on support from various AI labs and tie in with ideas a handful of researchers have about machine consciousness, AI welfare, and the possibility of granting “them” rights.
Superintelligence as Everything Engine
Last but not least, we have superintelligence as the ‘Everything Engine’: the idea that AI will become as abundant and useful as the electricity that powers it. Already, the Internet is being reinvented for a world where agents, rather than people, do the majority of the surfing, appointment booking, and online shopping. But it won’t stop there. Agentic AI will find its way into customer service, work management tools, CRM software, data analytics platforms, email clients, browsers, and more. Everything digital can be augmented or automated. And beyond the web, there’s the physical world. The ‘Everything Engine’ crowd also believes this technology will usher in a robot workforce: the first general-purpose humanoids that can navigate the world like we do. Soon we won’t be folding our own laundry anymore, or… is this still a distant fantasy?
These scenarios aren’t mutually exclusive. Scenario 1 is definitely happening, scenario 2 may be implausible but not impossible, especially over longer time horizons, and scenario 3 hinges on whether the technology we have today can deliver on its promise.
The levels of investment these companies are taking on suggest they’re betting heavily on scenarios 1 and 3, which offer a clear path to new revenue streams and, potentially, massive profits.
In scenario 2, however, it’s unclear who would benefit and why. Smarter-than-human AI may sound really cool, but what are the chances this alien entity would answer to us? Why expect it to do our bidding? In fact, it’s not even clear it would consider us inherently valuable. It’s not like we humans have a history of treating other species with much dignity and respect.
First we put fire in a bottle; now we’ve made stones talk, and it’s kind of weird
Now, before I proceed, I think it’s worth taking a moment to pause and zoom out.
Language models are a remarkable invention. The reality is that we’ve figured out a way to make stones talk. (Silicon, refined primarily from sand, is the raw material for computer processors.) It’s magical in the same way the invention of the lightbulb was magical: that was basically humanity catching fire in a bottle.
When it comes to computers, we went from punch cards to personal computers to smartphones in less than a century. AI is built on top of this. We now have AI agents that can use the technology for us. That means you no longer tell the computer what to do. Instead, you tell the computer what you want, and it will figure out how to do it.
The holy grail is coding agents. Since all software is made of code, the AI labs’ reasoning has been: “if we solve code, we solve everything else”. Imagine being able to run millions of coding agents in parallel that are as good as, or better than, some of the best human software engineers in the world.
At the same time, weird aftereffects are emerging in this newfound world of ours. Take the modern workforce as an example. As AI becomes more accessible to white-collar workers, it’s easier than ever to produce well-formatted slides; long, structured reports; seemingly articulate summaries of research articles; and thousands of lines of code. While that sounds amazing, it turns out some employees are already feeling the spillover, because colleagues are using it to generate low-quality ‘workslop’.
Harvard Business Review writes about this phenomenon:
When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues.
Who would’ve thought? People love shortcuts; we’re literally wired to take them. But we may be outsourcing more than is good for us or those around us.
Compute will be the future means of production — and that’s what they’re fighting for
So, why is OpenAI so adamant about building five data centers instead of one? According to recent reports, OpenAI is on track to hit its target of $13 billion in revenue this year, and demand is growing. Part of this growth is engineered demand: reasoning models have become the de facto standard, and on average more tokens are spent generating each answer. At the same time, I think it’s fair to say that we continue to use AI more every day.
While OpenAI’s revenue has doubled, it needs to triple or quadruple demand in the next 12-18 months if it wants to live up to its external commitments, like its mind-boggling $300B cloud deal with Oracle. Generating this much demand requires launching not one, but multiple products with millions or hundreds of millions of users that OpenAI can monetize, either directly through subscriptions or indirectly through ads.
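To make that gap concrete, here’s a rough back-of-envelope sketch. The only hard number from this piece is the ~$13 billion in revenue; the deal’s duration is an assumption on my part (the Oracle commitment has been reported as spread over roughly five years), so treat the output as an illustration, not an official figure.

```python
# Back-of-envelope: how big is the gap between revenue and commitments?
# Assumptions (mine, not OpenAI's): the $300B Oracle deal is spread over
# roughly five years, and this year's revenue lands at ~$13B.

oracle_deal_total = 300e9   # reported headline value of the Oracle deal, in USD
assumed_deal_years = 5      # assumption: spread over ~5 years
current_revenue = 13e9      # this year's reported revenue target, in USD

annual_compute_bill = oracle_deal_total / assumed_deal_years
growth_needed = annual_compute_bill / current_revenue

print(f"Implied annual Oracle bill: ~${annual_compute_bill / 1e9:.0f}B")
print(f"Revenue multiple needed just to cover it: ~{growth_needed:.1f}x")
# -> ~$60B per year against ~$13B of revenue: a 4-5x gap, which is why
#    "triple or quadruple" reads more like a floor than a ceiling.
```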
The irony of this endeavor isn’t lost on people on the Internet.
If superintelligence were really around the corner — the type of intelligence that could cure cancer or “solve” climate change — OpenAI wouldn’t feel the need to launch an AI-powered social media app or enter the e-commerce space, partnering with companies like Stripe, Etsy, and Shopify.
Scaling up and aggressively monetizing is the only way forward for the company at this point, and owning as much compute as possible is all part of the plan.
Why? Because owning compute means owning the future means of production.
These data centers are, if you’ll allow me one last analogy, the textile machines of the 21st century. Together, they form one big computer that powers all the other computers. Sure, that computer won’t solve the great problems of our time, but it will definitely enable a future where companies can create infinite amounts of content without content creators and write software without software engineers.
Whether that’s the world we want to live in is beside the point. It isn’t up to us. It’s up to those who build and own the machines. It always has been. That’s not superintelligence; that’s capitalism, my friends.
Rage against the machine,
— Jurgen