Accelerator Industry Model

The SemiAnalysis AI accelerator model gauges historical and future accelerator production by company and type. It can be used to gather upstream and downstream supply chain information, from equipment requirements to deployed capacity and FLOPS. Our data can also be used to estimate revenue for many firms across the upstream and downstream supply chain.

Our data covers 2023 through 2027 on a quarterly basis; see the two reports below for point-in-time explanatory reports and data.

The model includes much more detail than the above reports:

Shipment volumes and ASPs of AI accelerators by SKU. Covered product lines include:

  • Nvidia (A100, H100, H200, H20 (China), B100, B200, GB200 NVL72, B200 NVL72, X/R100, Y100, Z100),

  • Google TPU v4 Pufferfish, v4i Pufferlite, v5p Viperfish, v5e Viperlite, v6p Ghostfish, v6e Ghostlite, v7 Sunfish, v7e Sunlite,

  • Meta MTIA Gen 2, MTIA Gen 3, MTIA Gen 4,

  • AWS Inferentia2, Inferentia3, Inferentia4, Trainium1, Trainium2, and Trainium3,

  • Microsoft Maia Athena and Braga,

  • AMD MI300, MI350, MI400,

  • Intel Habana Gaudi2, Gaudi3, and Falcon Shores,

  • Bytedance

  • Huawei Ascend

The model provides accelerator revenue forecasts for merchant and semi-custom providers: Nvidia, AMD, Broadcom, Marvell, Intel, Alchip, and MediaTek.

Supply chain and capacity orders for these chips, which include:

  • Number of Foundry wafers

  • 2.5D packaging volumes and yields (TSMC, Samsung, Intel, Amkor, ASE)

  • Total number of die attach steps (BESI, ASMPT, etc.)

We include the above from both a supply perspective (total potential units based on capacity orders) and a demand perspective (actual shipments).
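As a hedged illustration of the distinction between the two perspectives, the sketch below derives a supply-side unit ceiling from capacity orders and compares it against demand-side shipments. All figures are invented for the example and are not taken from the model:

```python
# Illustrative sketch only: supply-side potential units from capacity
# orders vs. demand-side actual shipments. All numbers are assumptions.
wafers = 10_000          # assumed foundry wafer orders for a quarter
dies_per_wafer = 60      # assumed accelerator dies per wafer
packaging_yield = 0.90   # assumed 2.5D packaging yield

# Supply perspective: total potential units implied by capacity orders.
supply_units = int(wafers * dies_per_wafer * packaging_yield)

# Demand perspective: actual shipments (assumed figure).
demand_units = 500_000

print(supply_units)                 # 540000 potential units
print(supply_units - demand_units)  # 40000 units of slack capacity
```

The gap between the two series is what makes the dual view useful: capacity orders lead shipments, so persistent slack or shortfall signals over- or under-ordering.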

HBM details for each accelerator, including:

  • HBM type

  • Total capacity

  • Layer count

  • Total number of stacks

  • Total bits

  • Manufacturer

The above has implications for upstream fabrication, packaging, and equipment demand.
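As a minimal sketch of how these HBM attributes combine, the example below uses assumed figures loosely resembling an H100-class part (5 active HBM3 stacks, 8 layers per stack, 16 Gb DRAM dies); none of these numbers come from the model itself:

```python
# Illustrative sketch only: combining HBM attributes into totals.
# Assumed figures for an H100-class part, not data from the model.
stacks = 5            # total number of active HBM stacks
layers_per_stack = 8  # layer count (stack height)
gbit_per_layer = 16   # assumed Gb per DRAM die (HBM type dependent)

total_gbit = stacks * layers_per_stack * gbit_per_layer
capacity_gb = total_gbit / 8  # total capacity in GB

print(total_gbit)   # 640 total Gb across all stacks
print(capacity_gb)  # 80.0 GB total capacity
```

Multiplying these per-chip totals by shipment volumes is what links accelerator demand to upstream DRAM fabrication and packaging equipment demand.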

AI accelerator shipments and installed base by customer. Customers include:

  • Hyperscalers and other majors: Microsoft / OpenAI, Meta, Google, Amazon, Oracle, Tesla (xAI), Apple

  • Chinese hyperscalers: Baidu, Tencent, Alibaba, Bytedance

  • Emerging cloud service providers: Applied Digital, CoreWeave, Crusoe Cloud, Lambda Labs, Voltage Park, Omniva, SF Compute Company, Yotta, and other NexGen Clouds

  • Other startups / sovereigns: Poolside, YTL, Tata, Reliance, UAE, KAUST, Iliad, 6Estates

Total compute installed base for the above, including peak theoretical FLOPS and effective FLOPS based on training Model FLOPS Utilization (MFU) by chip.
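The peak-vs-effective relationship above can be sketched as effective FLOPS = peak theoretical FLOPS × MFU. The numbers below are illustrative assumptions (a roughly H100-class peak and a plausible training MFU), not values from the model:

```python
# Hedged sketch of the effective-FLOPS calculation described above.
# All figures are assumptions, not data from the model.
peak_tflops = 989.0  # assumed peak theoretical TFLOPS per chip
mfu = 0.40           # assumed Model FLOPS Utilization during training
chips = 1_000        # hypothetical installed base for one customer

# Effective throughput per chip, then aggregated across the fleet.
effective_tflops_per_chip = round(peak_tflops * mfu, 1)
fleet_effective_pflops = round(effective_tflops_per_chip * chips / 1_000, 1)

print(effective_tflops_per_chip)  # 395.6 effective TFLOPS per chip
print(fleet_effective_pflops)     # 395.6 effective PFLOPS fleet-wide
```

Because MFU varies by chip and workload, the same peak-FLOPS installed base can imply very different effective compute, which is why the model tracks both figures.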

Subscription to this model grants the user quarterly updates for one year. Contact us for more details.