Moonshot? Data Centers in Space
As terrestrial infrastructure strains under the weight of AI's insatiable demand, a bold new frontier is opening above the clouds, but can the physics support the promise?
In his latest research paper, Eric Benoist, Tech & Data Analyst, examines the fundamentals and possibilities of orbital data centers. We took the opportunity to ask him the most pressing questions on the topic.
Eric Benoist
Tech & Data Analyst
Can you set the scene – how extreme is the demand pressure on terrestrial data centers right now?
The numbers are quite extraordinary. The five largest US technology platforms collectively deployed around US$400 billion in capital expenditure in 2025 alone, and consensus is pointing toward US$650 billion in 2026. Forecasts point to roughly 100 gigawatts of new data center capacity coming online globally by 2030, which essentially doubles today's installed base.
But the real story is what's happening beneath that growth. The grid infrastructure across most developed markets is already under strain. Countries like Ireland and the Netherlands have introduced effective moratoriums on new hyperscale development. Water is becoming a flashpoint in regions that are already stressed. And there is a growing political problem: data centers consume enormous quantities of electricity while creating relatively limited direct employment, which means they are increasingly seen as competing with residential and industrial users for constrained resources. The social license to operate is weakening fast.
With that context, the concept of orbital data centers sounds a little less far-fetched. What is the basic proposition?
At its simplest: take the data center off the ground entirely. The appeal is conceptually strong. In orbit, you have no land constraints. You have access to near-continuous solar energy – the solar constant outside the atmosphere is about 1,361 watts per square metre, compared to a fraction of that on the ground after atmospheric losses and weather impact. And you can reject heat through radiation into deep space, with no water required. For an industry that is running into hard limits on all three of those fronts simultaneously, it is a genuinely compelling alternative operating paradigm.
What has changed recently is the economics of getting there. Launch costs to low Earth orbit have fallen sharply over the past decade, driven by reusable rocket technology and increasing competition among launch providers. That cost deflation is what has triggered serious engineering interest from companies like Google, Amazon, and of course SpaceX.
What are the biggest technical obstacles?
There are four interconnected challenges, none of which yields easily.
The first is power. The International Space Station, our most sophisticated orbital platform, produces about 215 kilowatts of usable power from eight solar wings spanning 2,500 square metres. A single Nvidia GB300-class GPU – the kind driving frontier AI workloads – draws around 1.4 kilowatts, and once you account for networking, storage, and cooling overhead, you need to allocate at least 2 kilowatts per chip. So the ISS could power roughly 100 of those GPUs and nothing else. Scaling to hyperscale clusters with hundreds of thousands of accelerators requires truly enormous solar arrays.
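The arithmetic above can be sketched in a few lines. The 20% end-to-end solar conversion efficiency and the 100,000-GPU cluster size are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope orbital power budget, using the figures quoted above.
SOLAR_CONSTANT_W_M2 = 1361   # irradiance outside the atmosphere, W/m^2
ISS_POWER_KW = 215           # usable ISS power from ~2,500 m^2 of wings
KW_PER_GPU = 2.0             # per-chip allocation incl. networking/cooling

# How many GB300-class GPUs could an ISS-sized platform power?
iss_gpus = ISS_POWER_KW / KW_PER_GPU
print(f"ISS-equivalent platform powers ~{iss_gpus:.0f} GPUs")

# Array area for a hypothetical 100,000-GPU cluster at an assumed
# 20% end-to-end solar conversion efficiency (illustrative only).
EFFICIENCY = 0.20
cluster_kw = 100_000 * KW_PER_GPU
area_m2 = cluster_kw * 1000 / (SOLAR_CONSTANT_W_M2 * EFFICIENCY)
print(f"Required array area: ~{area_m2 / 1e6:.2f} km^2")
```

Even under these generous assumptions, a single hyperscale cluster needs arrays roughly 300 times the area of the ISS wings.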
The second challenge is thermal management, and there is a widespread misconception here. People assume that because space is cold, cooling is easy. It is the opposite. The atmosphere is what allows terrestrial data centers to shed heat via air and water. In a vacuum, radiation is the only continuous heat rejection mechanism available. Dense GPU racks that convert essentially all their electrical input into heat would demand enormous radiator surfaces, with serious mass and cost penalties.
Third is radiation. Outside the atmosphere, electronic components face a persistent flux of energetic particles – galactic cosmic rays, solar energetic particles, trapped populations in the Van Allen belts. These can flip memory bits, corrupt data, or trigger destructive current surges in processors.
Modern AI accelerators, engineered for terrestrial environments with ever-smaller transistor geometries, are intrinsically more vulnerable, not less. And critically, you cannot service failed hardware in orbit the way you can in a ground-based facility.
Finally, communications. Terrestrial data centers operate on dense fiber networks with extraordinary bandwidth and near-zero latency. Satellite downlinks, even advanced laser-based free-space optical links, introduce constraints and complexity that don't exist on the ground – and coordinating distributed AI workloads across thousands of moving satellites introduces formidable networking challenges.
What would a 1GW orbital data center likely cost?
I explain in more detail in my research paper, but a 1GW constellation would likely require launch and manufacturing expenditures of approximately US$75 billion under today's assumptions. That is more than four times the cost of an equivalent terrestrial facility, which I estimate at US$16-18 billion.
There are some structural OPEX advantages – no electricity bill, no water bill, minimal on-site labor. And electricity can account for more than 50% of annual operating expenses in a modern hyperscale data center, so that is genuinely significant. But those savings are eroded by punishing depreciation cycles. LEO satellites are designed for five years, versus 15 to 20 years for conventional data center real estate. Every hardware refresh means deorbiting the existing satellite and launching a replacement. Meanwhile, radiation-induced GPU failures would require launching many more satellites each year just to maintain constant capacity.
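The depreciation argument can be made concrete with a straight-line amortisation sketch. The electricity tariff and the exact asset lives used here are illustrative assumptions layered on the cost figures quoted above:

```python
# Annualised cost comparison for a 1 GW facility: capex amortised
# straight-line over the asset life, plus electricity opex.
HOURS_PER_YEAR = 8760

def annual_cost_busd(capex_busd, life_years, elec_usd_per_kwh, load_gw):
    amortised = capex_busd / life_years
    electricity = load_gw * 1e6 * HOURS_PER_YEAR * elec_usd_per_kwh / 1e9
    return amortised + electricity

orbital = annual_cost_busd(capex_busd=75, life_years=5,
                           elec_usd_per_kwh=0.0, load_gw=1.0)   # free solar
ground = annual_cost_busd(capex_busd=17, life_years=17.5,
                          elec_usd_per_kwh=0.08, load_gw=1.0)   # assumed tariff

print(f"Orbital: ~US${orbital:.1f}bn/yr, terrestrial: ~US${ground:.1f}bn/yr")
```

Even with a zero electricity bill, the five-year refresh cycle pushes the orbital annualised cost an order of magnitude above the terrestrial one under these assumptions.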
Advocates often frame orbital compute as a “green” alternative to terrestrial data centers. Is that a fair characterization?
It is far more complicated than the advocates typically acknowledge, and I think it is important to say so clearly. Yes, moving data centers to space eliminates terrestrial water consumption and land use – both real benefits. But the launch process itself carries a significant carbon cost that rarely appears in these discussions.
For current partially reusable systems like SpaceX’s Falcon 9, launch and re-entry emissions amount to roughly 51.7 kilograms of CO₂ equivalent per kilogram of payload. Even with their next-generation Starship, which promises full reusability and cleaner methane propellant, the figure remains around 34 kg CO₂e per kilogram.
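Applying those per-kilogram figures to a hypothetical constellation shows the order of magnitude involved. The 10 kg-per-kilowatt specific mass is an assumption for illustration, not a figure from the paper:

```python
# Launch-emission arithmetic using the per-kilogram figures quoted above.
CO2E_PER_KG = {"Falcon 9": 51.7, "Starship": 34.0}  # kg CO2e per kg payload

ASSUMED_KG_PER_KW = 10.0    # hypothetical satellite specific mass
capacity_kw = 1_000_000     # a 1 GW constellation
payload_kg = capacity_kw * ASSUMED_KG_PER_KW

for vehicle, factor in CO2E_PER_KG.items():
    megatonnes = payload_kg * factor / 1e9
    print(f"{vehicle}: ~{megatonnes:.2f} Mt CO2e to orbit 1 GW of capacity")
```

Under these assumptions, orbiting a single gigawatt emits on the order of half a megatonne of CO₂e, before any replacement launches.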
If Elon Musk's stated ambition of deploying hundreds of gigawatts of orbital capacity annually were realized, the cumulative launch emissions would quickly become very significant.
Beyond carbon, you have the proliferation of space debris – there are already over 1.2 million fragments larger than 1 centimeter in orbit. You have the atmospheric chemistry impact from alumina particles released during re-entry. And you have the impact on astronomy: satellite trails contaminate telescope data, and radio-frequency emissions from large constellations interfere with extremely sensitive receivers. These are real costs that do not generally appear in current business cases.
All taken into account, should we dismiss the concept of orbital data centers?
Not necessarily. I believe there are three reasons that deserve serious weight.
The first is the urgency of terrestrial constraints. The demand for compute infrastructure is accelerating faster than the energy and water resources required to sustain it. Upgrading power grids at the required scale is a decade-long endeavor at minimum. Small modular reactors will not realistically reach meaningful deployment before the mid-2030s. In that context, the idea of relocating compute beyond Earth is not mere fantasy – it is a logical, if technically demanding, response to a resource ceiling that is approaching faster than the industry would like to admit.
The second reason is the pace of enabling science. Across multiple disciplines simultaneously – launch vehicle reusability, lightweight structural materials, radiation-tolerant semiconductor architectures, next-generation perovskite solar cells – the trajectory of progress is genuinely encouraging. Our modelling shows that combining an aggressive US$200-per-kilogram launch cost with lighter satellite designs could bring the cost of a 1GW constellation down to US$28-30 billion – roughly 1.5 times the cost of an equivalent terrestrial facility. At that level, orbital compute starts to look more competitive. No single breakthrough gets us there, but the cumulative effect across this technology stack could close the gap faster than a static reading of today's numbers suggests.
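A toy decomposition illustrates how launch price and satellite mass interact. Every parameter below is hypothetical, chosen only so the two scenarios bracket the US$75 billion baseline and the US$28-30 billion optimistic case quoted above:

```python
# Toy sensitivity of constellation capex to launch price and mass.
# All inputs are hypothetical calibration values, not modelled figures.
def constellation_cost_busd(mass_tonnes, launch_usd_kg, mfg_usd_kg):
    """Total cost in US$ billions: (launch + manufacturing) per kg of mass."""
    return mass_tonnes * 1000 * (launch_usd_kg + mfg_usd_kg) / 1e9

baseline = constellation_cost_busd(15_000, launch_usd_kg=1500, mfg_usd_kg=3500)
optimistic = constellation_cost_busd(10_000, launch_usd_kg=200, mfg_usd_kg=2700)

print(f"Baseline: ~US${baseline:.0f}bn, optimistic: ~US${optimistic:.0f}bn")
```

The point of the sketch is that cheaper launch alone is not enough; lighter satellites and cheaper manufacturing have to arrive together to close the gap.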
The third is that the AI models driving demand may themselves evolve. Algorithmic optimization and more efficient architectures could reduce raw power and cooling requirements significantly – making space a more attractive venue than today's energy-hungry large language models imply.
Private investment in space technology exceeded US$8.3 billion globally in 2025 and is accelerating. The imminent SpaceX IPO could prove a defining catalyst, legitimizing commercial spaceflight as an investable sector at institutional scale. Whether orbital data centers ultimately prove viable remains genuinely uncertain. That they are worth pursuing seriously is not.