A weird thing happened to the AI boom: it stopped being mainly about models.

Yes, the breakthroughs are still software. But the moment of truth—what determines who can train, deploy, and monetize frontier systems—has moved down the stack into shipments, wattage, and capex. If you can’t get accelerators, can’t power them, or can’t cool them, your “AI strategy” is just vibes.

That’s why Epoch AI’s new, open AI Chip Sales database of accelerator shipments and compute estimates is such a big deal.
It’s a daily-updated attempt to answer the most politically loaded question in tech right now:

Where is the world’s AI compute coming from—and how fast is it accumulating?

Epoch’s hub estimates sales/shipments of major AI accelerators over time and translates them into compute capacity (H100-equivalents), cost, and power draw (TDP)—spanning NVIDIA, Google (TPUs), Amazon (Trainium/Inferentia), AMD (Instinct), and Huawei (Ascend).

This is hardware economics turned into public infrastructure.

What “AI Chip Sales” actually is (and why it’s different)

Let’s say it plainly: AI Chip Sales is a transparency engine built from scraps.

Chipmakers and hyperscalers rarely publish clean unit numbers. So Epoch reconstructs reality from what does exist—earnings disclosures, company commentary, analyst estimates, and triangulation with other public traces—then publishes the result as an open dataset you can explore and download.

Two design choices make it unusually valuable:

  • It’s integrated: units → compute → dollars → power.
  • It’s continuous: the dataset is explicitly updated daily and time-stamped (“Last updated January 9, 2026”).

And yes—Epoch is honest about uncertainty. Some segments are marked as incomplete or modeled with ranges because the upstream world is opaque by design.

The key translation: turning chips into “H100e” compute

If you want the lever that turns this from “interesting” into “world-changing,” it’s this:

Epoch translates heterogeneous chips into a common currency: H100-equivalent compute (H100e).

That matters because AI accelerators are not a single market; they’re a patchwork of architectures and ecosystems. So Epoch anchors comparisons to an NVIDIA H100 baseline and uses performance proxies (their methods reference peak dense 8-bit operations) to express “how much compute” a given fleet represents.

Is H100e perfect? No. Real workloads are constrained by memory bandwidth, interconnect, software stacks, and utilization. But H100e is good enough to make compute legible, and that’s the point: public accountability starts with a shared unit.
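The conversion itself is simple arithmetic. Here is a minimal sketch of how an H100e rollup works, anchored on peak dense 8-bit throughput; the fleet sizes and the non-NVIDIA TOPS figures are illustrative placeholders, not Epoch's numbers (the 1,979 TOPS baseline is the commonly quoted H100 SXM dense 8-bit spec).

```python
# Sketch: expressing a heterogeneous accelerator fleet as H100-equivalents
# (H100e) by anchoring on peak dense 8-bit performance, as Epoch's methods do.
# All fleet sizes and non-H100 TOPS figures below are illustrative, not Epoch's.

H100_DENSE_8BIT_TOPS = 1979  # assumed baseline: H100 SXM peak dense 8-bit

FLEET = {
    # chip: (units shipped, peak dense 8-bit TOPS) -- hypothetical values
    "H100":        (10_000, 1979),
    "TPU-like":    (5_000,  900),
    "Ascend-like": (8_000,  600),
}

def h100e(units: int, tops: float) -> float:
    """Express a chip population as H100-equivalent compute."""
    return units * (tops / H100_DENSE_8BIT_TOPS)

total = sum(h100e(units, tops) for units, tops in FLEET.values())
for name, (units, tops) in FLEET.items():
    print(f"{name:12s} -> {h100e(units, tops):8.0f} H100e")
print(f"{'total':12s} -> {total:8.0f} H100e")
```

The point of the shared unit is visible immediately: three incomparable product lines collapse into one number you can argue about.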

The headline: compute is being industrialized—fast

Epoch’s broader infrastructure tracking makes the scale hard to ignore. In their Frontier Data Centers work, they quantify compute buildouts and use the same H100-equivalent framing to describe what’s being constructed.

To understand why this is so explosive, pair the AI Chip Sales shipment-to-compute lens with the Frontier Data Centers Hub tracking large AI data centers via satellite and permits.
Together, they turn “AI growth” into something measurable: how many chips, how much compute, and where it’s being physically installed.

And once you see AI as industrial capacity—not app development—you start asking different questions:

  • Who gets to buy compute?
  • Who gets priced out?
  • Who gets to set the defaults for safety, governance, and culture?
  • Who absorbs the externalities (grid stress, water use, emissions)?

This is where AI stops being a product story and becomes a governance story.

Power isn’t a footnote—it’s the constraint

AI Chip Sales doesn’t just estimate shipments; it estimates power draw using TDP. That’s the right move because power is the real bottleneck.

But there’s a catch: chip TDP isn’t total facility demand. Real-world electricity use is higher once you include servers, networking, and cooling. That’s why independent research and public institutions are now sounding alarms about data center energy growth.
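The gap between chip TDP and facility demand can be sketched with two multipliers: a server-overhead factor (CPUs, networking, fans) and a PUE (power usage effectiveness) for cooling and facility losses. Both factors below are assumptions for illustration, not figures from Epoch or any agency.

```python
# Sketch: why summed chip TDP understates real facility demand.
# server_overhead and pue are assumed illustrative factors, not measured values.

def facility_power_mw(chips: int, tdp_w: float,
                      server_overhead: float = 1.4,  # host CPUs, NICs, fans (assumed)
                      pue: float = 1.2) -> float:    # cooling + facility losses (assumed)
    chip_mw = chips * tdp_w / 1e6  # chip TDP alone
    return chip_mw * server_overhead * pue

# 100k chips at 700 W TDP: chip TDP alone is 70 MW, but the
# facility-level figure lands well above that once overheads apply.
print(facility_power_mw(100_000, 700))
```

With these (assumed) factors, the facility draws roughly 1.7× the raw chip TDP, which is why TDP-based estimates are a floor, not a total.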

Put differently: AI compute isn’t limited by ideas. It’s limited by electrons.

The next “AI ceiling” won’t be model architecture. It’ll be grid interconnection queues.

Methodology: where the dataset is strongest (and where it’s fuzzier)

Epoch is explicit about building estimates from partial visibility. That’s a feature, not a flaw—because it lets you reason about confidence instead of pretending certainty.

Revenue-based inference (stronger signals)

Where earnings and segment revenue are clearer, estimates tend to be tighter—especially when paired with plausible average selling prices and product mix assumptions.
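The core of revenue-based inference is one division with honest error bars: segment revenue over a plausible average selling price (ASP) range yields a unit-shipment range. The revenue and ASP figures below are hypothetical, chosen only to show the shape of the calculation.

```python
# Sketch: backing out unit shipments from segment revenue and an ASP range.
# Revenue and ASP figures are hypothetical illustrations, not real disclosures.

def unit_range(revenue_usd: float, asp_low: float, asp_high: float) -> tuple[float, float]:
    """Higher assumed ASP implies fewer units, so asp_high sets the low bound."""
    return revenue_usd / asp_high, revenue_usd / asp_low

low, high = unit_range(10e9, asp_low=25_000, asp_high=35_000)
print(f"implied units: {low:,.0f} - {high:,.0f}")
```

Notice that the ASP assumption does all the work: a 40% spread in assumed price produces a 40% spread in implied units, which is exactly why Epoch publishes ranges rather than point estimates.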

Supply-chain and deployment inference (fuzzier but necessary)

Where unit numbers are hidden (TPUs, Trainium, Ascend), Epoch leans on supply-chain signals, deployment footprints, and cross-validation with physical infrastructure tracking.

This matters because it models the world as it is: an AI economy where the most consequential numbers are strategically undisclosed.

Who besides Epoch is tracking this—and what are they missing?

Plenty of serious institutions track pieces of the puzzle. Few connect them end-to-end the way AI Chip Sales attempts to.

The IEA, the Congressional Research Service, and Lawrence Berkeley National Laboratory track the energy side; market analysts estimate shipments; Epoch’s own Frontier Data Centers Hub tracks the physical buildout.

What Epoch adds is the connective tissue: shipments → compute equivalents → cost → power, refreshed continuously, in one public place.

Who is affected—and how?

This isn’t just a “tech industry” story. It’s a distribution story.

1) Builders and buyers: hyperscalers become “compute states”

Companies that can secure accelerators at scale and build the facilities to run them gain structural power—not just market share. Infrastructure becomes destiny.

2) Everyone downstream: startups, universities, and public-interest labs

When compute becomes scarce and expensive, it centralizes. Smaller actors rent access—or fall out of the frontier entirely. That shifts what gets researched and whose problems get solved.

3) Local communities: the externalities land somewhere

Electricity demand, water use, noise, land, and grid upgrades become local politics. If your region hosts a high-density AI facility, you’re negotiating the costs of someone else’s model ambitions.

The ethical and social pressure points hiding in the dataset

A dataset like AI Chip Sales doesn’t just measure growth—it forces choices into the open.

  • Opacity vs. accountability: If companies won’t disclose unit numbers while building city-scale loads, governments will fill the gap with regulation.
  • Energy justice: If data centers strain grids or raise local prices, “AI progress” becomes a redistribution problem.
  • Geopolitical escalation: Chip controls and domestic substitution pressures intensify; measurement becomes part of strategic competition (and propaganda).
  • Cultural power: The groups that control compute increasingly control what gets built and deployed—and whose values scale.

This is why public measurement is political: it changes what society can argue about.

Why this matters

Epoch’s AI Chip Sales hub is a rare act of public measurement in an industry that benefits from obscurity. By translating accelerator shipments into H100e compute and power draw, it turns “AI progress” into something you can interrogate: who holds capacity, who pays the costs, and who bears the externalities. It also aligns with the direction of institutional concern—from the IEA to CRS to LBNL—about the electricity realities of AI infrastructure.
This is the beginning of a world where compute is treated like critical infrastructure—because functionally, it already is.

Conclusion: the new arms race is a spreadsheet (and that’s good)

The most dangerous thing about the AI infrastructure boom is not that it’s growing. It’s that it’s growing without shared visibility.

Epoch’s AI Chip Sales database of shipments, H100e compute, costs, and TDP power is a direct counter-move: a public, living map of the hardware economy beneath the model economy.
And once you combine it with Frontier Data Centers Hub compute and power tracking, you get something close to an X-ray of the AI industrial buildout.

If you want to understand the next decade—who leads, who gets locked out, and where the bottlenecks will hit—you don’t start with prompts.

You start with shipments, watts, and timelines.

Subscribe to Vastkind for more infrastructure-first AI coverage—and share this piece with one person who still thinks AI is “just software.”