
Starcloud
Data centers in space.
Thesis
- 01 · Earth is power-bound; orbit is not. The bottleneck on AI is no longer silicon; it's megawatts. Sunlight in orbit is ~24/7 and free; vacuum is the heat sink. Starcloud's architecture sidesteps the constraints that slow every terrestrial hyperscaler.
- 02 · This is no longer a thought experiment. In November 2025 Starcloud launched Starcloud-1 with an Nvidia H100 on board, becoming the first company to train an LLM in space and the first to run a model (Google's Gemma) in orbit.[7] The next satellite (Oct 2026) will carry multiple GPUs, including Nvidia Blackwell, plus an AWS server. The team is climbing the curve quickly.
- 03 · The team is the rare combination this needs: a repeat-founder CEO with policy and capital fluency, a deployable-structures PhD with mission heritage at Airbus and Oxford Space Systems, and a SpaceX/Microsoft engineer who has shipped both satellite networks and ML infrastructure. Few teams can build the satellite, the data center, and the software stack at once.
- 04 · Launch costs are collapsing into a window. Starship and competing heavy lift are pushing $/kg to LEO down an order of magnitude. The economics of orbital compute flip from absurd to plausible inside this decade, and being the operator with flight heritage and a software stack when that happens is the prize.[5]
Problem
Earth is running out of room to build the next training cluster.
Every frontier model release shifts the bottleneck further upstream. The constraint is no longer GPU supply — it is megawatts. Hyperscalers are sitting on chips they cannot plug in because the grid, the substations, and the water rights are still being negotiated.[13]
The terrestrial path forward asks AI to compete with cities, factories, and farms for the same kilowatt-hours and the same gallons of cooling water — under multi-year permitting timelines that did not anticipate AI.[4]
- 5–10 yr · Permitting timeline for new gigawatt-scale terrestrial sites
- $12–15M · Build cost per MW for US Earth-side data centers
- GW · Annual scale of new demand, accelerating with each model generation
Why Space, Why Now
Big tech is publicly aligned on the move to space.
On-record quotes from the people setting the largest AI capex budgets on Earth. The question has shifted from whether this happens to who operates it.
We still don't appreciate the energy needs of this technology. There's no way to get there without a breakthrough — we need fusion or radically cheaper solar plus storage.

Sam Altman[18]
CEO, OpenAI
The biggest issue we are now having is not a compute glut, but power — the ability to get builds done fast enough close to power. We have a bunch of chips sitting in inventory I can't plug in because I don't have warm shells to plug into.

Satya Nadella[13]
CEO, Microsoft
You're power constrained on Earth. Space has the advantage that it's always sunny. As soon as the cost to orbit drops to a low number, it immediately makes extremely compelling sense to put AI in space.

Elon Musk[10]
CEO, SpaceX & xAI
We're going to start building these giant gigawatt data centers in space. These giant training clusters — those will be better built in space, because we have solar power 24/7. There are no clouds, no rain, no weather.

Jeff Bezos[12]
Founder, Blue Origin
How do we one day have data centers in space so that we can better harness the energy from the sun, that is 100 trillion times more energy than what we produce on all of Earth today? A decade or so away, we'll be viewing it as a more normal way to build data centers.

Sundar Pichai[11]
CEO, Alphabet & Google
As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated. AI processing across space and ground systems enables real-time sensing, decision-making and autonomy — transforming orbital data centers into instruments of discovery.

Jensen Huang[15]
CEO, Nvidia
Three things changed in the last five years: solar yield, thermal design, and launch cost.
Power. In a dawn-dusk sun-synchronous orbit, a satellite stays in continuous sunlight year-round: a >95% capacity factor versus the 24% median for US terrestrial solar farms, plus roughly 40% higher peak irradiance because no atmosphere absorbs or scatters the light. Johnston puts the net effect at roughly 8× more energy per square meter of array, before accounting for the permitting, transmission, and storage required on Earth.[5] [18]
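As a directional sanity check on that claim, here is a minimal back-of-envelope in Python. The 1361 W/m² and 1000 W/m² irradiance figures are standard reference values supplied for illustration, not numbers from the whitepaper; the capacity factors are the ones quoted above.

```python
# Back-of-envelope: annual solar energy per square meter of panel, orbit vs. US ground.
# Irradiance values are standard reference numbers chosen for illustration;
# capacity factors are the ones quoted in the paragraph above.
SOLAR_CONSTANT = 1361      # W/m^2, irradiance above the atmosphere (AM0)
GROUND_PEAK = 1000         # W/m^2, typical peak irradiance at the surface (AM1.5)
CF_ORBIT = 0.95            # dawn-dusk sun-synchronous orbit: near-continuous sunlight
CF_GROUND = 0.24           # median US utility-scale solar capacity factor
HOURS_PER_YEAR = 8760

orbit_kwh = SOLAR_CONSTANT * CF_ORBIT * HOURS_PER_YEAR / 1000    # kWh per m^2 per year
ground_kwh = GROUND_PEAK * CF_GROUND * HOURS_PER_YEAR / 1000

print(f"orbit:  {orbit_kwh:,.0f} kWh/m^2/yr")
print(f"ground: {ground_kwh:,.0f} kWh/m^2/yr")
print(f"ratio:  {orbit_kwh / ground_kwh:.1f}x")
```

This naive cut lands closer to 5–6× than to 8×; the whitepaper's larger figure presumably folds in ground-side losses that the capacity-factor shortcut hides, so read the printed ratio as a floor rather than a verification of the 8×.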
Heat. Cooling is harder in vacuum, not easier: there is no convection or conduction, only radiation. This is the genuinely difficult part of the design, and the whitepaper budgets 70% of engineering hours on thermals. The math works: per Stefan-Boltzmann, a blackbody radiator panel at 20°C rejects roughly 420 W/m² per face to the 2.7 K cosmic background, with no chillers and no cooling water. And the architecture is already operational at scale: NASA's International Space Station has rejected up to ~70 kW of waste heat continuously for nearly two decades, using two ammonia coolant loops feeding deployable radiator panels that emit infrared into deep space. Starcloud's design is bigger; the building blocks are flight-proven.[5] [18] [19]
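A quick check of that Stefan-Boltzmann arithmetic, treating the panel as an ideal blackbody with a clear view of deep space (a simplification: real radiators have finite emissivity and also absorb some solar and Earth infrared flux):

```python
# Idealized radiative heat rejection per Stefan-Boltzmann (blackbody, deep-space view,
# ignoring solar and Earth infrared loading). Numbers are illustrative.
SIGMA = 5.670e-8          # W/m^2/K^4, Stefan-Boltzmann constant
T_RADIATOR = 293.15       # K, a 20 C panel
T_BACKGROUND = 2.7        # K, cosmic microwave background

flux_per_face = SIGMA * (T_RADIATOR**4 - T_BACKGROUND**4)    # W per m^2 of radiating face
print(f"net flux: {flux_per_face:.0f} W/m^2 per face")       # ~420 W/m^2

# Sanity check against the ISS figure quoted above: area needed to dump 70 kW
# if both faces of a flat panel see deep space.
iss_heat_kw = 70
area_needed = iss_heat_kw * 1000 / (2 * flux_per_face)
print(f"idealized two-sided panel area for {iss_heat_kw} kW: {area_needed:.0f} m^2")
```

Flight hardware needs meaningfully more area than this idealized floor, which is part of why the design work concentrates on thermals.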
Launch cost. Cost-to-LEO fell from ~$20,000/kg in 2010 to ~$1,500/kg today with Falcon Heavy. McKinsey's analysis sets ~$500/kg as the threshold at which orbital compute becomes cost-competitive with a terrestrial cluster of the same capacity; below that, it is cheaper outright. SpaceX's stated Starship target at flight rate is $100–200/kg. This is the curve that triggers everything else.[4]
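To make the threshold tangible, a small sensitivity sketch: the $/kg points are the ones quoted above, while the tonnes-per-MW values are placeholder assumptions of ours (the real figure depends on Starcloud's panel, radiator, and structure mass budget).

```python
# Sensitivity sketch: launch cost per MW of orbital capacity as a function of $/kg to LEO.
# The mass-per-MW figures are placeholder assumptions for illustration, not Starcloud's.
COST_PER_KG = {"2010 baseline": 20_000, "Falcon Heavy (today)": 1_500,
               "McKinsey breakeven": 500, "Starship target": 150}
TONNES_PER_MW = [5, 10, 20]   # hypothetical system mass per MW, incl. panels and radiators

for label, usd_per_kg in COST_PER_KG.items():
    costs = [f"{t} t/MW -> ${usd_per_kg * t * 1000 / 1e6:.1f}M" for t in TONNES_PER_MW]
    print(f"{label:>22} (${usd_per_kg:,}/kg): " + ", ".join(costs))
```

The point is the shape, not the absolute numbers: each step down in $/kg shrinks the launch line item proportionally, which is why the $500/kg threshold matters.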
Chart: cost-to-orbit crashes past the $500/kg breakeven by 2030
$500/kg is the McKinsey breakeven threshold versus a terrestrial data center of equivalent capacity. Falcon Heavy sits roughly 3× above that threshold today; Starship targets place the trajectory well below it inside this decade.
Source · McKinsey (May 2026) · Starcloud whitepaper v1.03 · industry data
We don't need permitted land; we don't need batteries and backup power; we need eight times less solar.
Proof in orbit
Starcloud put the first data-center-grade GPU in space — and it's still running.[3]
In November 2025, Starcloud-1 reached low Earth orbit carrying an Nvidia H100 — the first data-center-grade silicon ever to fly. Within a month, the team chalked up two more world firsts: first to run a large language model in orbit (Google's Gemma) and first to train an LLM in space (Andrej Karpathy's nanoGPT). Six months on, the H100 has not had a single restart failure — reframing radiation risk from "open question" to "manageable engineering problem".[4] [7] [10]
Solution & roadmap
A simple satellite, repeated thousands of times.
Solar panels, radiators, GPUs, a small battery, one reaction wheel, two lasers. That's the entire bill of materials — no phased-array antennas (comms are laser-based), no custom instrument bays. The bet: at this complexity, a manufacturing line outperforms any bespoke spacecraft on cost, cadence, and unit count.[5]
System architecture
Nov 2025
Shipped · Starcloud-1
- Power: ~1 kW
- Payload: 5 GPUs, incl. Nvidia H100
First data-center-grade GPUs in orbit. First LLM (Gemma) run in space. First in-orbit training (nanoGPT). Zero restart failures to date.
Oct 2026
Next · Starcloud-2
- Power: ~10 kW
- Payload: rack-scale system · Nvidia Blackwell · AWS server
First rack-scale system in orbit. Multiple advanced chips, persistent storage, continuous access, proprietary thermal and power systems.
GW-scale architecture
Strategic partnerships
Nvidia × Starcloud — Space-1 Vera Rubin Module
GTC · Mar 2026. At GTC 2026, Nvidia announced a space-rated computing module purpose-built for orbital data centers, delivering up to 25× more AI compute than the H100 for in-space inferencing, with availability expected in 2027. Nvidia's announcement names Starcloud as a launch partner.[15] [16]
"With Nvidia, we can bring true hyperscale-class AI computing to orbit — processing data at the source, reducing downlink dependency, and enabling customers to run training and inference workloads in space for the first time." — Philip Johnston, Nvidia announcement[15]
Unit economics
Orbital compute is cheaper across both capex and opex once you can launch the steel.
Total infrastructure cost is roughly $5M per megawatt in orbit versus $12–15M on Earth, with no batteries, chillers, cooling towers, or AC/DC conversion to pay for.[4] Starcloud is signing near-term LOIs at 3¢/kWh, headed toward sub-half-a-cent in the medium term and roughly $0.002/kWh over a 10-year amortization.[5] [18]
The trigger is launch cost. Below roughly $500/kg to LEO, orbital compute is cost-competitive with terrestrial; further down the curve it's outright cheaper. Starship's flight cadence puts that threshold inside this decade.[4]
Marginal cost dynamics also invert. On Earth, every additional gigawatt is harder to build, as substation capacity and transmission rights get bid up. In orbit, every additional satellite is cheaper, because manufacturing rate and launch cadence both improve with volume.
Chart: orbital cost as a percentage of the Earth-equivalent baseline
Build cost per megawatt and energy cost per kilowatt-hour, normalized so Earth = 100%. Lower bars are better.
Source · Sequoia podcast · Philip Johnston (2026)
- 8× · Solar yield per m², versus terrestrial panels
- 0.5¢ · Mid-term $/kWh target (Sequoia LOI horizon)
- $5M · Total build cost per MW, versus $12–15M on Earth
From the whitepaper (v1.03 · Sep 2024) · 40 MW cluster, 10-year TCO
Terrestrial: $167M
- $140M energy @ 4¢/kWh
- $20M backup power
- $7M chiller energy
- 1.7M tons of cooling water
Orbital: $8.2M
- $2M solar array
- $5M launch (single)
- $1.2M radiation shielding
- 0 water
A ~20× cost advantage over the asset's life. The whitepaper's energy-cost model lands at $0.002/kWh, roughly 22× cheaper than US wholesale power and 85× cheaper than Japan's.[18]
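The headline numbers reconcile directly from the line items above; a quick arithmetic check (the 4¢/kWh rate and the line items come from the table, everything else is multiplication):

```python
# Reconciling the whitepaper's 40 MW, 10-year comparison from the line items quoted above.
MW, YEARS, HOURS_PER_YEAR = 40, 10, 8760
kwh_lifetime = MW * 1000 * YEARS * HOURS_PER_YEAR         # ~3.5 billion kWh over the asset life

terrestrial_energy = kwh_lifetime * 0.04                  # energy bill at 4 cents/kWh
terrestrial_total = terrestrial_energy + 20e6 + 7e6       # + backup power + chiller energy
orbital_total = 2e6 + 5e6 + 1.2e6                         # solar array + launch + radiation shielding

print(f"terrestrial energy bill: ${terrestrial_energy / 1e6:.0f}M")     # ~$140M
print(f"terrestrial total:       ${terrestrial_total / 1e6:.0f}M")      # ~$167M
print(f"orbital total:           ${orbital_total / 1e6:.1f}M")          # $8.2M
print(f"lifetime advantage:      {terrestrial_total / orbital_total:.0f}x")
print(f"implied orbital $/kWh:   ${orbital_total / kwh_lifetime:.4f}")  # ~$0.002
```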
Market
A new layer of infrastructure roughly the size of today's entire data-center industry.
Johnston's working number is ~$1 trillion of annual capex moving into orbital compute within ten years, with most new compute capacity deploying to space inside the same window. The full constellation Starcloud has filed for — 88,000 satellites — would deliver roughly 20 gigawatts of compute capacity, primarily for inference workloads.[4] [5]
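One way to size that filing is to divide the two numbers; this is our arithmetic, not a figure Starcloud has published:

```python
# Implied average power per satellite from the filing above (our arithmetic, not a Starcloud spec).
SATELLITES = 88_000
TOTAL_CAPACITY_GW = 20

kw_per_satellite = TOTAL_CAPACITY_GW * 1e6 / SATELLITES   # 20 GW spread across the fleet, in kW
print(f"average per satellite: ~{kw_per_satellite:.0f} kW")  # ~227 kW
```

That implies satellites roughly 20× more powerful than the ~10 kW Starcloud-2 bus, consistent with the manufacturing-line bet described above.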
The near-term market is narrower and hotter. Defense and Earth-observation customers already pay roughly a thousand times more per GPU-hour than terrestrial inference customers, and they discard 90%+ of the data they collect because there is nowhere on the satellite to process it. Edge inference in orbit is a real product today, not a 2030 thesis.[5]
Two go-to-market models
Competitive landscape
The hyperscalers have arrived. Starcloud is two years ahead of them in orbit.
The category has gone from "fringe idea" to "FCC filing race" in twelve months. The lead, for now, is operational, not capital.
SpaceX has lower launch costs because they own the rocket — but they will run their own workloads. We are positioned more like Equinix, while SpaceX might be more like AWS. The hyperscalers will eventually need our infrastructure.
Founder deep dive
Johnston in his own words on why this works.
Founders
Risks & mitigations
What we're watching
References
- [1] Starcloud — YC Profile
- [2] Starcloud — Company Website
- [3] Starcloud-1 — Mission Page (Nov 2025 launch)
- [4] McKinsey & Company — The Case for Data Centers in Space
- [5] Sequoia Capital — Greetings Earthlings (podcast with Philip Johnston)
- [6] Sequoia AI Ascent 2026 — Inside the First Model Trained in Space
- [7] CNBC — Nvidia-backed Starcloud trains first AI model in space (Dec 10, 2025)
- [8] GeekWire — Starcloud hits $1.1B valuation to build orbital AI
- [9] TechCrunch — Starcloud raises $170M Series A (Mar 30, 2026)
- [10] NPR — Will data centers in space work? Elon Musk says yes
- [11] Fortune — Pichai: Data centers in space will be the new normal
- [12] The Register — Bezos dreams of orbital datacenters powered by the sun
- [13] Axios — Hassabis on AGI scaling and compute
- [14] TechCrunch — Microsoft already has the AI data centers Nadella keeps talking about
- [15] Nvidia Newsroom — Nvidia Launches Space Computing, Rocketing AI Into Orbit
- [16] SpaceNews — Nvidia unveils AI computing module for space-based data centers
- [17] The Economist — Could data centres ever be built in orbit? (Apr 2025)
- [18] Starcloud — Why we should train AI in space (whitepaper)
- [19] Wikipedia — External Active Thermal Control System (ISS)