The $50 Billion Shadow Chipmaker: Inside Amazon's Hidden Semiconductor Empire — and Why the Market Is Missing It
AWS's custom silicon division just hit a $20 billion internal run rate, and CEO Andy Jassy says that figure would be roughly $50 billion if the unit sold chips externally. With Q1 earnings dropping tomorrow, here's why Amazon might be the most undervalued chip company on Earth.
Amazon just quietly revealed what might be the most undervalued asset in Big Tech.
In his 2025 Letter to Shareholders — released earlier this month — CEO Andy Jassy disclosed that AWS's custom silicon division has hit a $20 billion internal revenue run rate, growing at triple-digit percentages. More provocatively, he noted that if the unit sold chips externally, the way Nvidia and AMD do, the run rate "would be approximately $50 billion."
That's not a projection. That's a semiconductor giant hiding inside a cloud company's balance sheet — and the market hasn't priced it in.
The Silicon Strategy Nobody Saw Coming
When Amazon first launched Graviton — its custom ARM-based processor — in 2018, most of Wall Street dismissed it as a vanity project. A cloud company designing chips? Cute.
Eight years later, AWS operates what is arguably the most vertically integrated AI infrastructure stack on the planet:
- Graviton (now in its 4th generation): ARM-based CPUs powering general cloud compute at 30-40% better price-performance than x86 alternatives
- Inferentia: Purpose-built inference accelerators optimized for deploying trained AI models at scale
- Trainium (now Trainium3): AWS's answer to Nvidia's H100/B200 — a 3nm AI training chip delivering 2.52 petaflops of FP8 compute per chip
This isn't a side hustle. It's a full-stack semiconductor operation running on TSMC's most advanced process nodes, with a dedicated interconnect fabric (NeuronLink, NeuronSwitch), a custom software stack (the Neuron SDK), and demand so intense that Trainium3 capacity was nearly fully subscribed within months of launch.
Why Trainium3 Changes the Math
The third-generation Trainium chip, fabricated on TSMC's 3nm process, represents a genuine inflection point. The numbers tell the story:
| Metric | Trainium2 | Trainium3 | Improvement |
|---|---|---|---|
| Compute (FP8) | ~1.26 PFLOPs | 2.52 PFLOPs | 2x |
| Memory | 96 GB HBM3e | 144 GB HBM3e | 1.5x |
| Energy Efficiency | Baseline | 4x better | 4x |
| Max Chips/Server | 64 (UltraServer) | 144 (UltraServer) | 2.25x |
The EC2 Trn3 UltraServer — scaling up to 144 Trainium3 chips linked by NeuronSwitch-v1 all-to-all fabric — can handle training runs for models exceeding one trillion parameters. AWS claims 4.4x more compute performance and a 50% reduction in AI training costs versus prior-generation infrastructure.
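The table and the UltraServer claims can be sanity-checked by multiplying out the per-chip figures. A quick sketch in Python, using only the numbers quoted above (treat them as claimed peaks, not measured performance):

```python
# Back-of-envelope check of the UltraServer numbers in this section.
# Per-chip figures come from the spec table above.

trn2 = {"fp8_pflops": 1.26, "hbm_gb": 96, "chips": 64}    # Trn2 UltraServer
trn3 = {"fp8_pflops": 2.52, "hbm_gb": 144, "chips": 144}  # Trn3 UltraServer

def aggregate(server):
    """Total peak FP8 compute (PFLOPS) and HBM capacity (TB) per UltraServer."""
    pflops = server["chips"] * server["fp8_pflops"]
    hbm_tb = server["chips"] * server["hbm_gb"] / 1000
    return pflops, hbm_tb

trn2_pflops, _ = aggregate(trn2)
trn3_pflops, trn3_tb = aggregate(trn3)

print(f"Trn3 UltraServer: {trn3_pflops:.0f} PFLOPS FP8, {trn3_tb:.1f} TB HBM3e")
print(f"Generational gain: {trn3_pflops / trn2_pflops:.2f}x peak compute")
```

The peak-FLOPS ratio works out to roughly 4.5x, consistent with AWS's quoted 4.4x, which is a system-level figure and likely reflects delivered rather than theoretical peak performance.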
For context: Anthropic, which has committed over $100 billion in lifetime AWS spend, trains its most advanced Claude models on Trainium clusters. OpenAI signed a multi-year cloud deal with AWS covering 2 GW of Trainium capacity for frontier model deployment. Even Cerebras, a chip startup that competes with both Nvidia and AWS, partnered to serve inference through Bedrock at speeds of 3,000 tokens per second.
Trainium now powers over 50% of all Bedrock token usage. The chip is no longer experimental. It's production infrastructure.
The Hidden $50 Billion Business
Here's where it gets interesting for investors.
Amazon's custom chips currently serve one customer: AWS itself. Every dollar of compute that runs on Graviton or Trainium instead of Intel, AMD, or Nvidia hardware is margin that stays in-house. Jassy disclosed that this saves AWS "tens of billions" in capital expenditure annually and adds hundreds of basis points to operating margins versus third-party silicon.
But the $50 billion figure assumes something bigger: external sales. Jassy explicitly floated the possibility of selling "racks of them to third parties in the future," noting there is "so much demand" for the chips. If AWS opened Trainium to external buyers — the way Nvidia sells GPUs to anyone with a purchase order — it would instantly become one of the largest semiconductor businesses on Earth.
For comparison:
- Nvidia's data center revenue (FY2025): ~$115 billion
- AMD's data center revenue (2024): ~$12 billion
- AWS custom silicon (hypothetical external run rate): ~$50 billion
That would slot AWS between AMD and Nvidia — and it doesn't even exist as a standalone business yet.
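The comparison above reduces to simple arithmetic, and it carries one implication worth making explicit: the gap between the internal and external run rates implies market pricing roughly 2.5x what AWS "charges" itself. A minimal sketch, using only the figures quoted in this section (estimates from the shareholder letter, not reported financials):

```python
# The section's run-rate arithmetic, using the article's quoted figures.

internal_run_rate = 20.0   # $B: chips consumed by AWS at internal pricing
external_run_rate = 50.0   # $B: Jassy's hypothetical external-sales figure
peers = {"Nvidia data center": 115.0, "AMD data center": 12.0}  # $B

# Holding volume constant, the gap implies external (market) pricing
# roughly 2.5x AWS's internal transfer pricing.
uplift = external_run_rate / internal_run_rate
print(f"Implied pricing uplift: {uplift:.1f}x")

# Where a $50B silicon business would rank against the named peers.
for name, rev in peers.items():
    rel = "above" if external_run_rate > rev else "below"
    print(f"  {rel} {name} (${rev:.0f}B)")
```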
This is where the analysis gets actionable. AlphaBriefing members get the full investment framework — scenarios, positioning, and the bottom line.
Subscribe to AlphaBriefing — Free, Member, and Paid tiers available.