As SpaceX CEO Elon Musk moves forward with large-scale orbital infrastructure plans, artificial intelligence (AI) data centers are increasingly being framed as candidates for moving off-planet.
But a single modern AI rack, like NVIDIA’s NVL72, consumes roughly 120 kilowatts — comparable to the International Space Station’s (ISS) total power generation.
The ISS spans roughly 74 by 110 meters, about the size of a football field.
“So you need something of that size just to power that rack,” said Brian Sierawski, director of the Institute for Space and Defense Electronics (ISDE) at Vanderbilt University.
That comparison reframes the idea of space compute.
A rack is not just compute — it is a tightly coupled system that depends on scalable power, efficient cooling, and routine maintenance. In orbit, those assumptions break down.
Build Better spoke with Sierawski and ISDE associate director Michael Alles about what would happen if a modern AI rack were deployed in space today. Their answer: it’s not one failure point — it’s a set of engineering constraints showing that moving AI infrastructure off-planet is not a launch problem, but a system design problem.
1. Power
AI infrastructure already strains power systems on Earth, but in orbit, a single ~120 kW rack must be powered entirely by the spacecraft itself.
Large solar arrays could supply that power. Musk suggested in his Cheeky Pint interview that higher-efficiency solar cells and lighter structures could make orbital generation viable.
But scaling power generation introduces trade-offs. Larger arrays add mass and require deployment mechanisms to unfold and operate in orbit. Higher power levels make energy delivery harder to manage and introduce failure modes that do not exist on Earth.
“If you start pushing higher, you start having reliability issues like electrical arcing in space because of the plasma environment,” Sierawski said. These can lead to destructive events in power devices “that you don’t see on the ground.”
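To get a feel for the scale involved, here is a back-of-the-envelope solar-array sizing sketch. The efficiency, eclipse fraction, and loss figures are illustrative assumptions, not numbers from ISDE or SpaceX:

```python
# Rough solar-array sizing for a continuous ~120 kW load in low Earth orbit.
# Assumed values (cell efficiency, sunlit fraction, packing/pointing losses)
# are illustrative guesses, not figures from the article.

SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight intensity above the atmosphere


def array_area(load_watts, efficiency=0.30, sunlit_fraction=0.60, losses=0.85):
    """Panel area needed so average sunlit output covers a continuous load.

    sunlit_fraction accounts for time spent in Earth's shadow each orbit;
    losses lumps together pointing error, packing, and degradation.
    """
    usable_flux = SOLAR_CONSTANT * efficiency * sunlit_fraction * losses  # W/m^2
    return load_watts / usable_flux


print(f"{array_area(120_000):.0f} m^2 of panels for one ~120 kW rack")
```

Even with optimistic cell efficiency, the sketch lands in the hundreds of square meters of panel per rack, before adding batteries to ride through eclipse.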
2. Radiative Cooling
If generating power is difficult, rejecting the resulting heat may be harder. Keeping racks cool is required for both performance and reliable operation.
On Earth, heat leaves systems through airflow and conduction. In orbit, those paths largely disappear. “That assumption that space is cold is a bad assumption,” Sierawski said. “You have no convection, so the only way to get rid of heat is radiatively.”
That changes how these systems must be designed.
Instead of relying on airflow or liquid cooling, heat must be emitted into space through radiators. Each square meter of radiator can shed only a few hundred watts at temperatures electronics can tolerate, a limit set by the Stefan-Boltzmann law: radiated power grows with the fourth power of a surface’s absolute temperature, so running radiators hotter helps a lot, but only up to what the hardware can withstand.
The numbers add up fast: a single ~120 kW rack would need roughly 300-400 square meters of radiator area just to stay cool. At that point, the cooling system can be as large as, or larger than, the compute hardware itself.
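That radiator estimate can be reproduced with the Stefan-Boltzmann law. The emissivity, radiator temperature, and effective sink temperature below are illustrative assumptions:

```python
# Back-of-the-envelope radiator sizing using the Stefan-Boltzmann law:
# radiated flux = emissivity * sigma * (T_radiator^4 - T_sink^4).
# Temperatures and emissivity are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)


def radiator_area(heat_watts, t_radiator_k=320.0, t_sink_k=250.0, emissivity=0.85):
    """One-sided radiator area needed to reject heat_watts to space."""
    flux = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)  # W per m^2
    return heat_watts / flux


print(f"{radiator_area(120_000):.0f} m^2 to reject ~120 kW")
```

With these assumptions the flux works out to roughly 300 W per square meter, which puts a 120 kW rack squarely in the 300-400 m² range the article cites.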
3. Solar Radiation
Solar radiation does not just damage hardware; it can also corrupt computation.
“DRAM is known to be susceptible to radiation,” Sierawski said.
AI systems depend on large memory pools, where each bit represents a small electrical charge. In a radiation environment, those bits can flip.
At a small scale, that is manageable. At the system scale, those errors accumulate.
The question becomes whether error correction can keep up, and what happens when it cannot: hardware faults turn into system-level behavior.
“If I have an error in my AI model, does it work? Does it predict something else?” Sierawski said.
Reliability becomes uncertain: the system may continue running, but outputs may no longer be correct.
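Why a single flipped bit matters depends on which bit it is. This small sketch (illustrative only; real DRAM and ECC behavior is more involved) flips one bit in the IEEE-754 float32 encoding of a hypothetical model weight:

```python
# Single-event-upset sketch: flip one bit in a float32 "model weight".
# A flip in a low mantissa bit is negligible; a flip in a high exponent
# bit changes the value by dozens of orders of magnitude.
import struct


def flip_bit(value: float, bit: int) -> float:
    """Return value with one bit of its float32 representation flipped."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped


weight = 0.75
print(weight, "->", flip_bit(weight, 0))   # low mantissa bit: tiny change
print(weight, "->", flip_bit(weight, 30))  # high exponent bit: huge change
```

This is the asymmetry behind Sierawski’s question: most upsets are silent and harmless, but a few can blow up a weight or activation and change what the model predicts.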
Radiation can also cause cumulative degradation and destructive events in power devices — permanently damaging components rather than causing transient faults.
4. Qualification for Operation in Space
Every modern electronic device goes through a qualification process during development and testing during production to ensure reliable operation in the field. When the “field” is orbit, that testing becomes far harder to perform in cheaper terrestrial environments.
Radiation, temperature swings, vibrations, shocks — there is a whole set of new physical and environmental requirements that have to be developed and then tested.
Those tests may be non-trivial. Radiation facilities are limited and expensive, “on the order of $1,000 an hour,” Sierawski said, and cannot fully replicate long-duration exposure. Mechanical tests could also require special equipment or be difficult to replicate on Earth.
One workaround is to use radiation-hardened components, but that introduces new tradeoffs.
“Radiation-hardened parts are usually generations behind in performance,” Alles said.
These components are typically larger, slower, and less power-efficient than commercial alternatives. For AI workloads — where performance per watt is critical — this creates a fundamental mismatch: hardware that survives space often cannot support modern, large-scale systems.
Qualification becomes a design constraint, forcing tradeoffs between performance, efficiency, and reliability while often locking space systems into older generations of hardware.
5. Reliability and Maintenance
Terrestrial data centers assume maintenance. In space, that assumption becomes harder to support.
Failures that would normally be resolved by swapping hardware must instead be handled autonomously. Even transient faults, such as data corruption, require systems to detect and recover without intervention.
In some cases, recovery may still rely on power cycling. On Earth, that’s inconvenient. In orbit, it cuts into uptime and must be built into the system from the start.
Reliability shifts from a maintenance task to a design requirement.
These challenges do not exist in isolation; they reinforce each other.
Designing for radiation can reduce performance and increase power consumption. More power creates more heat. More heat requires larger radiators. Larger power systems increase electrical complexity and failure risk in an environment where it’s unclear how much maintenance will be possible.
Every solution to one of these limiting factors upsets the others, combining into one gnarly engineering problem.
A modern AI rack is built on assumptions that do not hold in orbit: abundant power, efficient cooling, serviceability, and commercial-grade components.
In space, those assumptions collapse.
Running a data center in space requires designing a new system, where power, cooling, hardware, and reliability are deeply considered from the start.