A TechInsights-led teardown of Huawei’s Ascend 910C identified a TSMC-made compute die in the package alongside older-generation HBM2E memory, sourced from Samsung in some samples and SK hynix in others, confirming cross-border sourcing inside China’s flagship AI chip. The finding aligns with broader assessments that Huawei blended legacy foreign dies and HBM to push the 910C into volume while domestic alternatives remain immature at the required performance and yield.
How Huawei sourced parts
Reports indicate Huawei secured roughly 2.9–3.0 million TSMC dies via an intermediary before sanctions tightened, enabling the company to keep 910C production alive even as local foundry yields lagged at advanced nodes. Coverage also notes claims that TSMC faced penalties over the leaked dies, and highlights the company’s statement that the part identified matches an older die analyzed in 2024; TSMC affirmed that sales to Huawei have been halted since 2020 and that it complies with export controls.
Why HBM2E matters
HBM2E provides the high bandwidth needed to feed AI compute, and while it is not the latest HBM generation, it remains critical to training and inference throughput for accelerators like the Ascend 910C. Bloomberg’s reporting and follow-on analyses say the HBM found in samples traces to Samsung and SK hynix, both of which reiterated that direct sales to Huawei ceased after 2020 in adherence with U.S. rules, pointing to pre-sanctions inventory as the likely source.
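To make the bandwidth stakes concrete, a back-of-envelope calculation is useful. The 3.6 Gb/s per-pin rate and 1024-bit stack interface below are public HBM2E figures, not from the teardown, and the eight-stack package is a hypothetical assumption for illustration only:

```python
# Back-of-envelope HBM2E bandwidth estimate (public spec figures).
PIN_RATE_GBPS = 3.6      # gigabits per second per pin (top HBM2E speed bin)
BUS_WIDTH_BITS = 1024    # interface width per HBM stack

per_stack_gb_s = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # gigabytes/s per stack
print(f"Per stack: {per_stack_gb_s:.1f} GB/s")

# Hypothetical 8-stack accelerator package (assumption, for scale only):
stacks = 8
aggregate_tb_s = per_stack_gb_s * stacks / 1000
print(f"{stacks} stacks: {aggregate_tb_s:.2f} TB/s aggregate")
```

Even a generation-old stack delivers hundreds of gigabytes per second, which is why legacy HBM2E inventory remains valuable for keeping accelerator compute fed.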
Sanctions pressure and workarounds
As inventories dwindle, reports describe Chinese firms resorting to extreme measures, including de‑soldering HBM stacks from other products imported as a sanctions workaround to keep AI projects moving—an inefficient and costly stopgap. This dynamic highlights how memory, not just compute, has become a chokepoint for China’s AI ambitions under tightened export controls that now also target HBM and related manufacturing gear.
Performance and packaging realities
Analysts characterize the Ascend 910C as a domestic alternative intended to reduce reliance on Nvidia in China, but note that current performance targets sit at roughly half of Nvidia’s H100, with packaging and thermal drawbacks adding engineering headwinds. Even so, shipments reportedly began in early 2025, reflecting strong demand from Chinese cloud and enterprise buyers seeking sanctions-compliant compute within national borders.
Production and supply outlook
Given the finite die stockpiles and tightening HBM availability, estimates suggest Huawei could face a cap around the low‑million range of 910C units over the next year unless memory constraints ease, yields improve, or new domestic HBM sources scale. Notebookcheck’s synthesis indicates Huawei may produce hundreds of thousands of units near term, with the HBM bottleneck likely to dominate capacity planning until Chinese memory production matures or additional inventory is unlocked.
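The ceiling implied by a finite die stockpile can be sketched with simple arithmetic. The 2.9 million die figure comes from the reporting above; the two-dies-per-unit packaging (the 910C is widely described as a dual-die design) and the yield factor are assumptions used here only for illustration:

```python
# Rough ceiling on 910C output from a fixed compute-die stockpile.
DIE_STOCKPILE = 2_900_000   # TSMC dies reportedly secured (from coverage)
DIES_PER_UNIT = 2           # assumption: 910C widely described as dual-die
PACKAGING_YIELD = 0.75      # assumption: illustrative advanced-packaging yield

max_units = DIE_STOCKPILE // DIES_PER_UNIT
usable_units = int(max_units * PACKAGING_YIELD)
print(f"Upper bound (ideal yield): {max_units:,} units")
print(f"With {PACKAGING_YIELD:.0%} packaging yield: ~{usable_units:,} units")
```

Under these assumptions the stockpile caps output in the low-million range, consistent with the estimates above, and any realistic yield or HBM shortfall pushes the practical figure lower still.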
A broader supply-chain pattern
This is not the first time Samsung memory has surfaced in Huawei hardware; a recent TechInsights teardown of the Pura 70 Ultra smartphone identified Samsung LPDDR5X DRAM, illustrating that legacy Korean memory often persists in devices even as Huawei prioritizes domestic suppliers where feasible. Together, these findings show a transitional era in which Chinese OEMs still lean on pre‑sanctions caches of foreign parts while pushing to localize within DRAM, NAND, and advanced packaging ecosystems.
Compliance statements and risk
Samsung and SK hynix have repeatedly stated that they halted shipments to Huawei after 2020 and continue to comply with U.S. export controls, framing any HBM presence in 910C units as a legacy-stock phenomenon rather than ongoing supply. TSMC, for its part, emphasized that the die identified aligns with an older revision previously analyzed and that it has not supplied Huawei since mid‑September 2020, reinforcing the narrative of pre‑existing inventories and intermediated flows as the source.
What it means
Samsung’s HBM2E showing up inside Huawei’s 910C is a tangible marker of how stockpiled foreign memory remains pivotal in China’s near‑term AI rollout, even as policy aims to ring‑fence advanced compute. The bigger question is how quickly domestic HBM and advanced packaging can scale, because without a steady memory pipeline, compute die stockpiles will not translate into sustained AI capacity for China’s data centers.