Starcloud

Data centers in space.

Sequoia AI Ascent 2026 · Philip Johnston[6]

$170M Series A · Mar 2026

Latest round

Led by Benchmark

$1.1B

Post-money

Fastest YC unicorn

$200M

Total raised

Redmond, WA

Backed by

Benchmark · NFX · Nvidia · Y Combinator · In-Q-Tel · Orange Collective [8] [9]

Thesis

Earth-bound data centers are running into hard limits on power, water, and permitting just as AI demand goes vertical. Starcloud moves compute to where energy and cooling are abundant — orbit — and is already running real GPUs there.[4]
  1. Earth is power-bound; orbit is not. The bottleneck on AI is no longer silicon — it's megawatts. Sunlight in orbit is ~24/7 and free; vacuum is the heat sink. Starcloud's architecture sidesteps the constraints that slow every terrestrial hyperscaler.

  2. This is no longer a thought experiment. In November 2025 they launched Starcloud-1 with an Nvidia H100 on board, becoming the first to train an LLM in space and run Gemma in orbit.[7] The next satellite (Oct 2026) carries multiple GPUs including Nvidia Blackwell and an AWS server. The team is climbing the curve quickly.

  3. The team is the rare combination this needs. A repeat-founder CEO with policy and capital fluency, a deployable-structures PhD with mission heritage at Airbus and Oxford Space Systems, and a SpaceX/Microsoft engineer who has shipped both satellite networks and ML infrastructure. Few teams can build the satellite, the data center, and the software stack at once.

  4. Launch costs are collapsing into a window. Starship and competing heavy lift are pushing $/kg to LEO down an order of magnitude. The economics of orbital compute flip from absurd to plausible inside this decade — being the operator with flight heritage and a software stack when that happens is the prize.[5]

Problem

Earth is running out of room to build the next training cluster.

Every frontier model release shifts the bottleneck further upstream. The constraint is no longer GPU supply — it is megawatts. Hyperscalers are sitting on chips they cannot plug in because the grid, the substations, and the water rights are still being negotiated.[14]

The terrestrial path forward asks AI to compete with cities, factories, and farms for the same kilowatt-hours and the same gallons of cooling water — under multi-year permitting timelines that did not anticipate AI.[4]

5–10 yr

Permitting timeline

For new gigawatt-scale terrestrial sites

$12–15M

Build cost / MW

Earth-side data center capex (US)

GW

Annual scale of new demand

And accelerating with each model generation

Why Space, Why Now

Big tech is publicly aligned on the move to space.

On-record quotes from the people setting the largest AI capex budgets on Earth. The question has shifted from whether to who operates.

We still don't appreciate the energy needs of this technology. There's no way to get there without a breakthrough — we need fusion or radically cheaper solar plus storage.

Sam Altman[18]

CEO, OpenAI

The biggest issue we are now having is not a compute glut, but power — the ability to get builds done fast enough close to power. We have a bunch of chips sitting in inventory I can't plug in because I don't have warm shells to plug into.

Satya Nadella[14]

CEO, Microsoft

You're power constrained on Earth. Space has the advantage that it's always sunny. As soon as the cost to orbit drops to a low number, it immediately makes extremely compelling sense to put AI in space.

Elon Musk[10]

CEO, SpaceX & xAI

We're going to start building these giant gigawatt data centers in space. These giant training clusters — those will be better built in space, because we have solar power 24/7. There are no clouds, no rain, no weather.

Jeff Bezos[12]

Founder, Blue Origin

How do we one day have data centers in space so that we can better harness the energy from the sun, that is 100 trillion times more energy than what we produce on all of Earth today? A decade or so away, we'll be viewing it as a more normal way to build data centers.

Sundar Pichai[11]

CEO, Alphabet & Google

As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated. AI processing across space and ground systems enables real-time sensing, decision-making and autonomy — transforming orbital data centers into instruments of discovery.

Jensen Huang[15]

CEO, Nvidia

Three things changed in the last five years: power, heat, and launch cost.

Power. In a dawn-dusk sun-synchronous orbit, a satellite stays in continuous sunlight year-round — a >95% capacity factor versus the 24% median for US terrestrial solar farms, plus ~40% higher peak irradiance with no atmosphere to absorb the spectrum. Johnston puts the net effect at roughly 8× more energy per square meter of array — before you account for the permitting, transmission, and storage required on Earth.[5] [18]
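
The yield comparison is checkable with back-of-envelope arithmetic. A minimal sketch, using the capacity factors stated above: irradiance times capacity factor alone gives roughly 5.4×; Johnston's ~8× net figure also folds in terrestrial losses this sketch omits (panel temperature derating, soiling, inverter losses — our naming, not the whitepaper's).

```python
# Back-of-envelope orbital vs terrestrial solar yield (illustrative only).
SOLAR_CONSTANT = 1361   # W/m^2 above the atmosphere (AM0)
PEAK_TERRESTRIAL = 1000 # W/m^2 standard test condition at the surface

orbital_cf = 0.95       # dawn-dusk SSO capacity factor (from the text)
terrestrial_cf = 0.24   # median US solar-farm capacity factor (from the text)

orbital_yield = SOLAR_CONSTANT * orbital_cf          # time-averaged W/m^2 in orbit
terrestrial_yield = PEAK_TERRESTRIAL * terrestrial_cf  # time-averaged W/m^2 on Earth
ratio = orbital_yield / terrestrial_yield

print(f"orbit {orbital_yield:.0f} W/m^2 vs ground {terrestrial_yield:.0f} W/m^2: {ratio:.1f}x")
```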

Heat. Cooling is harder in vacuum, not easier — there is no convection or conduction, only radiation. This is the genuinely difficult part of the design, and the whitepaper budgets 70% of engineering hours on thermals. The math works: a 20°C radiator panel emits ~633 W/m² net to the 2.7 K cosmic background per Stefan-Boltzmann, with no chillers and no cooling water. And the architecture is already operational at scale — NASA's International Space Station has rejected ~70 kW of waste heat continuously since 1998, using two ammonia coolant loops feeding deployable radiator panels that emit infrared into deep space. Starcloud's design is bigger; the building blocks are flight-proven.[5] [18] [19]
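
The radiator figure follows directly from the Stefan-Boltzmann law. A sketch of the flux budget: a single ideal 20°C face nets ~419 W/m² against the 2.7 K background; the whitepaper's ~633 W/m² is consistent with, for example, both faces radiating at an effective emissivity near 0.76 (that decomposition is our assumption, not stated in the source).

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_BG = 2.7               # cosmic microwave background temperature, K

def net_flux(t_panel_c, emissivity=1.0, faces=1):
    """Net radiated power per m^2 of panel area at temperature t_panel_c (Celsius)."""
    t = t_panel_c + 273.15
    return faces * emissivity * SIGMA * (t**4 - T_BG**4)

print(f"one blackbody face at 20 C: {net_flux(20):.0f} W/m^2")
# Two faces at an effective emissivity ~0.76 lands near the whitepaper's figure
# (illustrative assumption):
print(f"two faces, eps=0.76: {net_flux(20, emissivity=0.76, faces=2):.0f} W/m^2")
```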

Launch cost. Cost-to-LEO fell from ~$20,000/kg in 2010 to ~$1,500/kg today with Falcon Heavy. McKinsey's analysis sets ~$500/kg as the threshold at which orbital compute becomes cost-competitive with a terrestrial cluster of the same capacity; below that, it is cheaper outright. SpaceX's stated Starship target at flight rate is $100–200/kg. This is the curve that triggers everything else.[4]

Cost-to-orbit crashes past the $500/kg breakeven by 2030

Chart

$500/kg is the McKinsey breakeven threshold versus a terrestrial data center of equivalent capacity. Falcon Heavy is at the threshold today on a per-launch basis; Starship targets place the trajectory well below it inside this decade.

Source · McKinsey (May 2026) · Starcloud whitepaper v1.03 · industry data

We don't need permitted land; we don't need batteries and backup power; we need eight times less solar.
Philip Johnston, CEO of Starcloud[5]

Proof in orbit

Starcloud put the first data-center-grade GPU in space — and it's still running.[3]

In November 2025, Starcloud-1 reached low Earth orbit carrying an Nvidia H100 — the first data-center-grade silicon ever to fly. Within a month, the team chalked up two more world firsts: first to run a large language model in orbit (Google's Gemma) and first to train an LLM in space (Andrej Karpathy's nanoGPT). Six months on, the H100 has not had a single restart failure — reframing radiation risk from "open question" to "manageable engineering problem".[4] [7] [10]

Solution & roadmap

Concept · Starcloud-3 rendering

A simple satellite, repeated thousands of times.

Solar panels, radiators, GPUs, a small battery, one reaction wheel, two lasers. That's the entire bill of materials — no phased-array antennas (comms are laser-based), no custom instrument bays. The bet: at this complexity, a manufacturing line outperforms any bespoke spacecraft on cost, cadence, and unit count.[5]

System architecture

Solar arrays. Thin-film silicon cells under 25 μm thick at 22% beginning-of-life efficiency, manufactured at roughly $0.03 per watt. The cells self-anneal radiation damage at operating temperature, so no cover-glass is needed. Demonstrated in-orbit degradation: ~0.15% per year. Deployment via Z-fold, roll-out, or picture-frame mechanisms depending on form factor — all design heritage from existing on-orbit demonstrations.[18]

Radiators. Lightweight deployable panels positioned in-line with the solar arrays — one side toward the Sun, the other emitting waste heat into the 2.7 K cosmic background. At GW scale they become the largest radiators ever flown, and the dominant non-solar structure on the satellite. Heat pumps optionally lift radiator inlet temperature; per Stefan-Boltzmann's T⁴ scaling, a small temperature lift produces a large emission gain — so a 20°C panel net-radiates ~633 W/m² with passive design and meaningfully more with heat-pumped configurations.[18]

Cooling loop. Direct-to-chip liquid cooling or two-phase immersion at the rack, feeding multi-loop coolant circuits that transport thermal load out to the deployable radiators. Compute modules either pressurize with an inert atmosphere or fully submerge in coolant — and immersion doubles as additional radiation shielding for the silicon. Two-phase systems reduce mass flow requirements, lowering pumping losses at scale.[18]
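
The T⁴ scaling mentioned above is what makes heat pumps attractive: lifting radiator temperature from 20°C to 60°C shrinks the required panel area by roughly 40%. A sizing sketch for a hypothetical 40 MW thermal load, with illustrative emissivity and two-sided panels (our assumptions):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(load_w, t_panel_c, emissivity=0.9, faces=2):
    """Panel area (m^2) needed to reject load_w by radiation alone.

    The ~2.7 K background is approximated as 0 K (its contribution is negligible).
    """
    t = t_panel_c + 273.15
    return load_w / (faces * emissivity * SIGMA * t**4)

load = 40e6  # 40 MW of waste heat: a hypothetical cluster-scale load
for t_c in (20, 60):
    print(f"{t_c} C radiators: {radiator_area(load, t_c):,.0f} m^2")
```

Because emission scales as T⁴, a 40°C lift yields a (333/293)⁴ ≈ 1.67× flux gain, which is why the architecture trades heat-pump power for radiator mass.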

  1. Nov 2025

    Shipped

    Starcloud-1

    Power
    ~1 kW
    Payload
    5 GPUs · incl. Nvidia H100

    First data-center-grade GPUs in orbit. First LLM (Gemma) run in space. First in-orbit training (nanoGPT). Zero restart failures to date.

  2. Oct 2026

    Next

    Starcloud-2

    Power
    ~10 kW
    Payload
    Rack-scale · Blackwell · AWS server

    First rack-scale system in orbit. Multiple advanced chips, persistent storage, continuous access, proprietary thermal + power systems.

  3. 2028 (target)

    Future

    Starcloud-3

    Power
    200 kW / unit
    Payload
    ~3 tons · sized to Starship

    Effectively a space-based data center for large-scale inference. ~50 per Starship = 10 MW / launch. Several GW/month, tens of GW/year at flight rate.
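
The roadmap's deployment rates are simple cadence arithmetic. A sketch under stated assumptions: per-launch payloads are the document's figures, and 3 flights/day is SpaceX's stated Starship flight-rate target, used here illustratively. At the 10 MW Starcloud-3 loadout the rate is roughly 1 GW/month; the several-GW-per-month figure follows at the heavier ~40 MW-per-Starship loadout described in the GW-scale section.

```python
def gw_per_year(mw_per_launch, launches_per_day):
    """Annual deployed capacity at a sustained launch cadence."""
    return mw_per_launch * launches_per_day * 365.25 / 1000

# ~50 Starcloud-3 units x 200 kW = 10 MW per launch:
print(round(gw_per_year(10, 3), 1))   # ~11 GW/year
# At GB200-class density (~40 MW of compute per Starship):
print(round(gw_per_year(40, 3), 1))   # ~44 GW/year
```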

GW-scale architecture

The Starcloud-1/2/3 SmallSat sequence above is the near-term path; the 88,000-slot constellation filing is the next layer. The whitepaper's terminal-state architecture is different: a modular GW-scale cluster of containers around a central hub, scaled in 3D for low intra-cluster latency. A 5 GW data center needs a roughly 4 km × 4 km solar array using thin-film cells (>1000 W/kg, self-annealing under radiation), with deployable radiators in-line. Each Starship launch lifts ~40 MW of compute in roughly 300 racks at GB200-class density — so a full 5 GW could deploy in fewer than 100 launches, equivalent to 2–3 months of a single Starship at 3 flights per day.[18]
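
The 4 km × 4 km array figure checks out against the stated 22% cell efficiency. A minimal sketch (assuming full AM0 irradiance on a flat array, no packing or pointing losses):

```python
SOLAR_CONSTANT = 1361   # W/m^2 above the atmosphere (AM0)
cell_efficiency = 0.22  # beginning-of-life thin-film efficiency (from the text)

target_gw = 5
area_m2 = target_gw * 1e9 / (cell_efficiency * SOLAR_CONSTANT)
side_km = area_m2 ** 0.5 / 1000

print(f"{area_m2 / 1e6:.1f} km^2 of array, ~{side_km:.1f} km on a side")
```

This lands at ~16.7 km², about a 4.1 km square, matching the whitepaper's geometry.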

The orbit is a dawn-dusk sun-synchronous orbit — the only LEO trajectory with continuous solar exposure year-round, which is why no battery storage is needed. Lasers handle inter-cluster networking with Starlink / Kuiper / Kepler. End-of-life: modules are designed to either be salvaged or fully demise on re-entry.[18]

Strategic partnerships

Nvidia × Starcloud — Space-1 Vera Rubin Module

GTC · Mar 2026

At GTC 2026, Nvidia announced a space-rated computing module purpose-built for orbital data centers — delivering up to 25× more AI compute than the H100 for in-space inferencing, expected 2027. Nvidia's announcement names Starcloud as a launch partner.[15] [16]

"With Nvidia, we can bring true hyperscale-class AI computing to orbit — processing data at the source, reducing downlink dependency, and enabling customers to run training and inference workloads in space for the first time."— Philip Johnston, Nvidia announcement[15]

Unit economics

Orbital compute is cheaper across both capex and opex once you can launch the steel.

Total infrastructure cost is roughly $5M per megawatt in orbit versus $12–15M on Earth, with no batteries, chillers, cooling towers, or AC/DC conversion to pay for.[4] Starcloud is signing near-term LOIs at 3¢/kWh, headed toward sub-half-a-cent in the medium term and roughly $0.002/kWh over a 10-year amortization.[5] [18]

The trigger is launch cost. Below roughly $500/kg to LEO, orbital compute is cost-competitive with terrestrial; further down the curve it's outright cheaper. Starship's flight cadence puts that threshold inside this decade.[4]

Marginal cost dynamics also invert. On Earth, every additional gigawatt is harder to build — substations and transmission rights bid up. In orbit, every additional satellite is cheaper, because manufacturing rate and launch cadence both improve with volume.

Orbital cost as a percentage of Earth-equivalent baseline

Chart

Build cost per megawatt and energy cost per kilowatt-hour, normalized so Earth = 100%. Lower bars are better. Hover for raw values.

Source · Sequoia podcast · Philip Johnston (2026)

8×

Solar yield / m²

Versus terrestrial panels

0.5¢

Mid-term $/kWh target

Sequoia LOI horizon

$5M

Total build / MW

Versus $12–15M on Earth

From the whitepaper · 40 MW cluster, 10-year TCO

v1.03 · Sep 2024

Terrestrial

$167M

  • $140M energy @ 4¢/kWh
  • $20M backup power
  • $7M chiller energy
  • 1.7M tons of cooling water

Orbital

$8.2M

  • $2M solar array
  • $5M launch (single)
  • $1.2M radiation shielding
  • 0 water

A ~20× cost advantage over the asset's life. The whitepaper's energy-cost model lands at $0.002 / kWh — roughly 22× cheaper than US wholesale and 85× cheaper than Japan's.[18]
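
The $0.002/kWh figure is reproducible from the numbers above: amortize the $8.2M orbital power-infrastructure TCO (solar array, launch, shielding) over ten years of 40 MW output, and compare against the $167M terrestrial total. A sketch, assuming continuous output over the window:

```python
def lifetime_kwh(megawatts, years, capacity_factor=1.0):
    """Total energy delivered over the amortization window (8766 h per average year)."""
    return megawatts * 1000 * 8766 * years * capacity_factor

kwh = lifetime_kwh(40, 10)        # 40 MW cluster, 10-year TCO window
orbital = 8.2e6 / kwh             # ~$0.0023/kWh
terrestrial = 167e6 / kwh         # ~$0.048/kWh

print(f"orbital ${orbital:.4f}/kWh vs terrestrial ${terrestrial:.3f}/kWh "
      f"({terrestrial / orbital:.0f}x)")
```

The ratio lands at ~20×, and the terrestrial figure decomposes cleanly: 3.5 billion kWh at the assumed 4¢/kWh is the $140M energy line item in the table above.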

Market

A new layer of infrastructure roughly the size of today's entire data-center industry.

Johnston's working number is ~$1 trillion of annual capex moving into orbital compute within ten years, with most new compute capacity deploying to space inside the same window. The full constellation Starcloud has filed for — 88,000 satellites — would deliver roughly 20 gigawatts of compute capacity, primarily for inference workloads.[4] [5]

The near-term market is narrower and hotter. Defense and Earth-observation customers already pay roughly a thousand times more per GPU-hour than terrestrial inference — and they discard 90%+ of the data they collect because there's nowhere on the satellite to process it. Edge inference at orbit is a real product today, not a 2030 thesis.[5]

Two go-to-market models

Cloud provider

Sell GPU-hours directly to hyperscalers and AI-focused neoclouds. Familiar pricing surface; Starcloud owns silicon choice and software stack.

Colocator

Provide power, cooling, and connectivity; customer brings their own chips and workloads, then resells to their customers. Bare-metal, BYO silicon.[4]

Competitive landscape

The hyperscalers have arrived. Starcloud is two years ahead of them in orbit.

The category has gone from "fringe idea" to "FCC filing race" in twelve months. The lead, for now, is operational, not capital.

Starcloud

Operating

H100 running in orbit. Starcloud-2 (multi-GPU + Blackwell + AWS server) launching Oct 2026. Filed for 88,000-slot constellation.[2]

SpaceX / xAI

Filed · ~1M sat

Largest filing, but admits in pre-IPO docs that orbital compute is 'early stages, involves significant technical complexity and unproven technologies.' Internal-first workloads (Grok, Tesla).[10]

Blue Origin · Project Sunrise

Filed · 50K+ sat

Bezos targets gigawatt-scale orbital training clusters in '10+, not more than 20' years. Filing in late 2025.[12]

Google · Project Suncatcher

Pilot · 2027

Two pilot satellites with Planet in early 2027 to test hardware. Pichai expects orbital DCs to be 'a more normal way' a decade out.[11]

SpaceX has lower launch costs because they own the rocket — but they will run their own workloads. We are positioned more like Equinix, while SpaceX might be more like AWS. The hyperscalers will eventually need our infrastructure.
Johnston on positioning[5]

Founder deep dive

Johnston in his own words on why this works.

Source · YouTube

On the founding moment. "I went to SpaceX's Starbase and watched them try to build three Starships per day. The thing that hit me wasn't the rocket — it was the manufacturing rate. Once you have that, the question stops being 'can we get to orbit' and becomes 'what should we put there?'"

On the marginal cost flip. "The marginal cost on every additional terrestrial data center goes up every time you add one — because you're competing for the same land, the same grid, the same water. In space, the marginal cost goes down for every additional unit, because you're now manufacturing at rate. That's the most important sentence I can say about this business."[5]

On the trillion-dollar bet. "If you had a trillion dollars in a bank account and had to build the compute backbone for AGI, how much goes into space? One hundred percent. There comes a crossover point where it makes zero sense to continue building things on Earth. Close to a trillion dollars per year of CapEx within ten years — by far the largest market opportunity ever."[5]

On the long arc. "Five hundred to a thousand years out, you're going to have a Dyson sphere and 99.9% of the physical economy will be space compute. Almost all of that will be inference. We're building the early operator."

On running the team. "Monthly reminder that I'm not going to be happy until every engineer is spending $10,000 a month on tokens. I know that's not the right metric to track, but it gets the point across. When they come to me and say, 'Can we spend $300 a month on Grok 4 Heavy?' I say yes. Every time."[5]

Founders

Philip Johnston

Repeat Founder

Co-founder & CEO

Second-time founder. Spent years at McKinsey advising national space agencies on satellite programs. MPA in National Security & Technology (Harvard), MBA (Wharton), MA in Applied Mathematics & Theoretical Physics (Columbia), CFA charter.

Ezra Feilden

Co-founder

A decade of satellite design experience specializing in deployable solar arrays and large deployable structures. Previously at Airbus Defense & Space (SSTL) and Oxford Space Systems, with mission work including NASA's Lunar Pathfinder. PhD in Materials Engineering, Imperial College London.

Adi Oltean

Co-founder

Software and hardware background. Delivered key features in satellite networks, operating systems, cloud, and ML infrastructure across SpaceX (Starlink) and Microsoft. Leads software, hardware, and engineering design across the satellite constellation.

Risks & mitigations

Risk

Single-event radiation upsets degrade GPU reliability over time.

Mitigation

Particle-accelerator testing at Brookhaven and Knoxville cyclotron — 24 hours simulates ~5 years of orbital radiation. Starcloud-1's H100 has not had a restart failure in flight. Workloads are stochastic, so bit flips don't materially affect output quality.

Risk

Heat dissipation in vacuum is the dominant engineering problem.

Mitigation

70% of engineering hours are spent on thermals. Heat pumps, deployable radiators, and phase-change materials in the design. Radiative cooling scales with the fourth power of temperature — a known regime, not new physics.

Risk

Orbital congestion and debris from a constellation of 88,000 slots.

Mitigation

5–6 year operational life matches GPU obsolescence; planned deorbit from Day 1. FCC filing assumes responsible deorbit cadence. Slot allocation is currently first-come, first-served — Starcloud filed early.

Risk

Hyperscalers (SpaceX, Blue Origin, Google) entering the same lane.

Mitigation

Operational lead today versus filings and pilots from the incumbents. Neutral-cloud positioning detailed in Competitive Landscape above.

What we're watching

  • First paying customer announcement — likely defense / Earth-observation edge inference.
  • Series B timing and lead — does it draw a strategic (Nvidia, AWS) versus pure financial?
  • FCC adjudication on the 88,000-slot constellation filing.

References

  1. [1] Starcloud — YC Profile
  2. [2] Starcloud — Company Website
  3. [3] Starcloud-1 — Mission Page (Nov 2025 launch)
  4. [4] McKinsey & Company — The Case for Data Centers in Space
  5. [5] Sequoia Capital — Greetings Earthlings (podcast with Philip Johnston)
  6. [6] Sequoia AI Ascent 2026 — Inside the First Model Trained in Space
  7. [7] CNBC — Nvidia-backed Starcloud trains first AI model in space (Dec 10, 2025)
  8. [8] GeekWire — Starcloud hits $1.1B valuation to build orbital AI
  9. [9] TechCrunch — Starcloud raises $170M Series A (Mar 30, 2026)
  10. [10] NPR — Will data centers in space work? Elon Musk says yes
  11. [11] Fortune — Pichai: Data centers in space will be the new normal
  12. [12] The Register — Bezos dreams of orbital datacenters powered by the sun
  13. [13] Axios — Hassabis on AGI scaling and compute
  14. [14] TechCrunch — Microsoft already has the AI data centers Nadella keeps talking about
  15. [15] Nvidia Newsroom — Nvidia Launches Space Computing, Rocketing AI Into Orbit
  16. [16] SpaceNews — Nvidia unveils AI computing module for space-based data centers
  17. [17] The Economist — Could data centres ever be built in orbit? (Apr 2025)
  18. [18] Starcloud — Why we should train AI in space (whitepaper)
  19. [19] Wikipedia — External Active Thermal Control System (ISS)