Elon Musk Imagines a Terawatt of Computing Capability: Comparable to 1.43 Billion GPUs and Twice the Average Electricity Output of the United States
Quick Read
- Elon Musk suggests making 1 terawatt (TW) of computational power available—comparable to over 1.43 billion GPUs.
- This degree of computational capacity could yield between 100 zettaFLOPS and 1 yottaFLOPS—10 to 1,000 times the existing global computational capability.
- The required power would exceed twice the average electrical output of the U.S. and amount to close to a third of current global electricity generation.
- Projected annual operational expenses: AU$11 trillion to AU$19.3 trillion.
- The hardware costs alone could surpass AU$35 trillion, necessitating over a billion high-performance GPUs.
- Musk imagines a future where solar and space-based energy facilitate this ambition, advancing humanity on the Kardashev energy scale.
- Currently, this accomplishment is economically and logistically unfeasible—but it provides insight into a potential AI-dominated future.
Expanding Compute: Musk’s Vision for a Terawatt Future
Few people push the limits of what is achievable quite like Elon Musk. In recent remarks, Musk envisioned an extraordinary surge in computing power: bringing online a full terawatt (TW) of computational capacity. That is roughly equal to the power draw of 1.43 billion GPUs and would require more than twice the United States' average electrical output. The ambition is as remarkable as it is challenging, and while it may not be achievable at present, it lays the groundwork for the future of artificial intelligence, energy infrastructure, and data centres on a planetary scale.
The Scale of a Terawatt of Compute
To provide context, today's global computing capacity is estimated at between 1 and 10 zettaFLOPS (10²¹ to 10²² FLOPS), predominantly sourced from data centres in the US, China, and Europe. A terawatt of compute would lift this to 100 zettaFLOPS or even 1 yottaFLOPS (10²³ to 10²⁴ FLOPS), a scale 10 to 1,000 times larger than current 2025 estimates.
This escalation isn't merely hypothetical. Running that infrastructure would consume 1 TW of power continuously: approximately 2.1 times the average electricity output of the United States, around 77% of its installed generating capacity, and close to a third of the world's current electricity generation, all dedicated solely to computing.
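As a quick sanity check, these headline figures can be reproduced with a few lines of arithmetic. The sketch below uses the 10¹¹ to 10¹² FLOPS-per-watt efficiency range quoted in the Q&A further down; the US and global grid figures are approximate public reference values rather than numbers from Musk.

```python
# Back-of-envelope check of the terawatt-of-compute figures above.
# The FLOPS-per-watt range is the 10^11 to 10^12 figure cited in the Q&A;
# the grid comparisons use approximate public reference values (assumptions).

POWER_W = 1e12                                     # 1 terawatt of compute

flops_per_watt_low, flops_per_watt_high = 1e11, 1e12
total_flops_low = POWER_W * flops_per_watt_low     # 1e23 FLOPS = 100 zettaFLOPS
total_flops_high = POWER_W * flops_per_watt_high   # 1e24 FLOPS = 1 yottaFLOPS

US_AVG_GENERATION_W = 0.48e12      # ~4,200 TWh/yr of US generation averages ~0.48 TW
US_INSTALLED_CAPACITY_W = 1.3e12   # ~1,300 GW of installed US generating capacity
WORLD_AVG_GENERATION_W = 3.4e12    # ~30,000 TWh/yr worldwide averages ~3.4 TW

print(f"Throughput: {total_flops_low:.0e} to {total_flops_high:.0e} FLOPS")
print(f"vs US average output:     {POWER_W / US_AVG_GENERATION_W:.1f}x")    # ~2.1x
print(f"vs US installed capacity: {POWER_W / US_INSTALLED_CAPACITY_W:.0%}") # ~77%
print(f"vs world average output:  {POWER_W / WORLD_AVG_GENERATION_W:.0%}")  # ~29%
```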
Hardware Needs: 1.43 Billion GPUs
Assuming NVIDIA H100 GPUs or similar hardware drawing roughly 700 watts each, reaching 1 TW would require over 1.43 billion GPUs. For perspective, even today's largest corporate GPU purchases are in the range of hundreds of thousands. This represents a roughly 1,000-fold increase in hardware deployment and a logistical undertaking of unmatched proportions.
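That headline GPU count is simply the power budget divided by per-device draw; a minimal sketch, assuming a flat 700 W per accelerator and no allowance for cooling or networking overhead:

```python
# Number of ~700 W accelerators needed to exhaust a 1 TW power budget.
# Assumes the entire terawatt is drawn by GPUs, ignoring cooling,
# networking and other facility overhead.
POWER_BUDGET_W = 1e12
GPU_POWER_W = 700            # roughly an NVIDIA H100-class card

gpus_needed = POWER_BUDGET_W / GPU_POWER_W
print(f"{gpus_needed:,.0f} GPUs")    # 1,428,571,429 -> about 1.43 billion
```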
Financial Aspects of a Terawatt Compute Infrastructure
The financial consequences are equally monumental. Annual operating expenditures may fall between AU$11 trillion and AU$19.3 trillion (US$7.3 trillion to US$12.9 trillion), averaging around AU$15 trillion. This comprises:
- Electricity: AU$1.07 trillion/year (predicated on US$0.08/kWh and PUE 1.3).
- Capital expenditure: AU$13.8 trillion/year for hardware, data centres, and upkeep (assuming a 4-year life cycle).
This equates to around 10% of global GDP, or 25 to 30 times today's worldwide expenditure on data centres. The energy consumed would also be roughly twice the annual electricity usage of the entire U.S.
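Those line items can be roughly reconstructed from the article's own inputs. The sketch below assumes an exchange rate of about AU$1.53 per US dollar and treats the full terawatt as facility draw; both are illustrative assumptions rather than sourced figures.

```python
# Rough reconstruction of the annual cost estimates above. The exchange
# rate and the split between hardware and facilities are assumptions
# chosen to match the article's figures, not sourced values.

AUD_PER_USD = 1.53                         # assumed exchange rate

# Electricity: 1 TW of facility draw running continuously for a year.
energy_kwh = 1e9 * 8760                    # 1 TW = 1e9 kW -> ~8,760 TWh
electricity_aud = energy_kwh * 0.08 * AUD_PER_USD   # at US$0.08/kWh
print(f"Electricity: AU${electricity_aud / 1e12:.2f} trillion/year")                 # ~AU$1.07T

# Hardware: the article's AU$35 trillion bill amortised over a 4-year life.
hardware_per_year_aud = 35e12 / 4
print(f"Hardware (amortised): AU${hardware_per_year_aud / 1e12:.2f} trillion/year")  # AU$8.75T

# The AU$13.8 trillion/year capex line therefore implies roughly another
# AU$5 trillion/year for data centres and upkeep on top of the hardware.
```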
The Kardashev Scale: Imagining Beyond Earth
Musk links this idea to the Kardashev Scale, a framework for gauging a civilisation's technological progress by its energy consumption. Humanity is approaching Type I (harnessing all planetary energy). Musk envisions advancing towards Type II (harnessing stellar energy) by capturing solar power through arrays both on Earth and in outer space.
He anticipates that energy captured could increase a billionfold with solar arrays in space, and potentially another billionfold if we achieve the Type III level, tapping into galactic energy resources. While these aspirations may seem distant, they could transform humanity’s position in the cosmic order.
Artificial Intelligence: The Engine Driving the Vision
What drives the pursuit of such vast computational capabilities? The primary catalyst is artificial intelligence. As AI models grow in complexity, their demand for computational resources escalates. Presently, AI performance continues to scale with computational power, meaning that better AI is inherently linked to greater energy and investment.
To enable future breakthroughs in AI—such as artificial general intelligence (AGI), real-time autonomous robotics, or worldwide predictive analytics—extensive computational resources will be crucial. At this scale, infrastructure could support highly intelligent systems that revolutionize industries, science, and everyday life.
Renewables as a Crucial Component
Musk emphasizes that realizing this vision would demand significantly more solar energy. Future data centres could be established in regions abundant with renewable resources—Australia, with its vast solar energy potential, could prove to be an ideal location. Furthermore, advancements in space-based solar technology may be essential for powering next-generation computing facilities.
Conclusion
Elon Musk’s vision of a terawatt-scale computing infrastructure is audacious, teetering on the brink of science fiction. The initiative would necessitate over 1.43 billion GPUs, consume over twice the U.S.’s average electricity output, and incur costs reaching AU$19.3 trillion annually. Nevertheless, it frames a future rooted in AI, powered by solar and space-derived energy, and aligned with long-term planetary and cosmic ambitions. While it is currently unachievable, it offers a peek into a potential future where computing power underpins the progress of civilization.
Q&A: Essential Information
Q: What does a terawatt of compute power mean?
A:
A terawatt (TW) of computing power refers to computing infrastructure that draws 1 trillion watts of electrical power. With contemporary GPUs delivering around 10¹¹ to 10¹² FLOPS per watt, a 1 TW system might achieve 10²³ to 10²⁴ FLOPS, equivalent to 100 zettaFLOPS up to 1 yottaFLOPS.
Q: How many GPUs are necessary to reach 1 TW of compute?
A:
If each GPU draws 700 watts (as an NVIDIA H100 does), approximately 1.43 billion GPUs would be needed to make full use of a 1 TW power budget.
Q: Is building such a system feasible today?
A:
Not at this moment. The infrastructure, energy requirements, and costs vastly exceed what is economically or logistically practicable. It would demand extensive global collaboration, advancements in renewable energy, and breakthroughs in hardware efficiency.
Q: What drives Musk’s desire for such extensive computational resources?
A:
Mainly to back the next generation of artificial intelligence. AI capabilities continue to scale with increased compute, and achieving AGI or advanced robotics will likely necessitate infrastructure of this size.
Q: How does this correlate with the Kardashev Scale?
A:
Musk envisions society advancing along the Kardashev Scale—from consuming all planetary energy (Type I) to capturing solar power via space installations (Type II), ultimately reaching Type III, where we harvest energy from galactic sources. This vision is in line with a future where computing and energy necessities expand exponentially.
Q: Could Australia contribute to this vision?
A:
Absolutely. With immense solar resources and increasing investment in renewable energy, Australia could emerge as a centre for green data centres and AI infrastructure, particularly as global projects seek low-carbon energy solutions.