Tech Today

SK Hynix Bets $13 Billion on AI Memory as Chip Shortage Looms

The world's top HBM supplier is building a massive new packaging plant in South Korea as AI demand outpaces production capacity across the industry.

By Morgan Wells · 5 min read

SK Hynix announced Tuesday it will invest 19 trillion won ($12.9 billion) to build a new advanced chip packaging facility in South Korea, its largest single investment ever. The scale of the commitment tells you something important: even the company that dominates the AI memory market believes it cannot build capacity fast enough to meet what is coming.

The investment is not about making memory chips. It is about packaging them, a distinction that reveals how fundamentally AI has rewritten the economics of semiconductors. The ability to stack and connect memory dies into high-bandwidth modules has become the most valuable and capacity-constrained capability in the industry, and SK Hynix is betting $13 billion that the bottleneck will persist for years.

What SK Hynix Is Building

The new facility, dubbed P&T7, will be located in Cheongju, South Korea, expanding SK Hynix's existing footprint in the city. Construction begins in April with completion targeted for late 2027. The plant focuses exclusively on advanced packaging, the process of stacking multiple memory dies into single high-density units that deliver dramatically higher bandwidth and energy efficiency than traditional memory modules.

SK Hynix projects the global HBM market will grow at a compound annual rate of 33% from 2025 to 2030, citing data from market research firm Yole Group. Kim Kyu-hyun, head of DRAM marketing at SK Hynix, confirmed the urgency in a recent briefing: "We've sold out our DRAM, NAND, and HBM capacity for next year." The company says the new facility is essential to meeting a demand curve that shows no signs of flattening.
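To put the 33% figure in perspective, compounding it over the 2025 to 2030 window implies the market more than quadruples. A quick sketch (the CAGR is from the article; no baseline market size is assumed):

```python
# What a 33% compound annual growth rate implies over five years.
# The 33% figure is the article's (via Yole Group); this computes
# only the growth multiple, not an absolute market size.
def market_multiple(cagr: float, years: int) -> float:
    """Total growth factor after compounding `cagr` for `years` years."""
    return (1 + cagr) ** years

growth = market_multiple(0.33, 5)  # 2025 -> 2030
print(f"A 33% CAGR over five years is roughly a {growth:.1f}x expansion")
```

A fivefold expansion in as many years is the kind of curve that makes a record-breaking capital commitment look conservative rather than reckless.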

The Cheongju plant joins SK Hynix's existing advanced packaging operations in Icheon, South Korea, and a facility in West Lafayette, Indiana. The geographic diversification matters as governments push for supply chain resilience, though the bulk of advanced memory production remains concentrated in Asia.

Advanced packaging facilities require clean room environments similar to chip fabrication plants

Why Memory Became the AI Bottleneck

The AI boom has inverted the traditional semiconductor hierarchy. For decades, processors grabbed headlines while memory was an afterthought, a cheap commodity that barely registered in system performance discussions. That dynamic has flipped entirely.

Modern AI training and inference require moving enormous amounts of data at speeds that conventional memory cannot deliver. A single Nvidia H100 GPU can process data faster than standard DDR5 memory can feed it, creating a bottleneck that throttles the entire system. HBM solves this by stacking memory chips vertically and connecting them with thousands of tiny through-silicon vias, delivering bandwidth an order of magnitude or more beyond traditional RAM.

The catch is that HBM production is far more complex and capacity-constrained than regular memory. SK Hynix controls roughly 61% of the HBM market, with Samsung at 19% and Micron at 20%, according to Macquarie Equity Research. When SK Hynix cannot produce enough chips, data center buildouts slow down regardless of how many processors are available.

Semiconductor analyst Manish Rawat of TechInsights notes that the supply crunch signals a deeper structural shift: "Memory manufacturers once functioned as shock absorbers for the tech ecosystem. Samsung's inability to cushion volatility despite its unmatched capacity indicates a market in disequilibrium." Nvidia, despite designing the most sought-after AI processors on the planet, remains entirely dependent on its memory suppliers. Without HBM from SK Hynix, those processors sit in warehouses.

The HBM Generation Gap

What makes the packaging investment particularly strategic is the generational transition currently underway in HBM technology. Each new generation demands more sophisticated packaging capabilities, and the company that masters the transition first captures disproportionate market share.

HBM3, the current workhorse, stacks eight memory dies to deliver 819 GB/s of bandwidth. HBM3E, which SK Hynix began shipping in volume in late 2024, pushes that to 1,180 GB/s by improving the inter-die connections and using faster DRAM cells. Nvidia's next-generation Blackwell GPUs are designed around HBM3E, making it the standard for the 2025-2026 data center buildout cycle.

But the real prize is HBM4, expected to begin sampling in late 2026 and reach volume production in 2027, precisely when SK Hynix's new Cheongju facility comes online. HBM4 represents a fundamental architectural shift. Instead of stacking identical memory dies, it integrates a logic die at the base of the stack that handles data routing and error correction. This design requires packaging capabilities that do not exist at scale today.

The timing is not coincidental. SK Hynix is building a facility specifically engineered for HBM4's packaging requirements. Companies that cannot package HBM4 when Nvidia and AMD need it for their next-generation accelerators will simply be cut out of the most profitable segment of the memory market. The investment is less about meeting current demand than about securing a position in the next cycle.

Every AI data center expansion depends on HBM supply, making SK Hynix a critical link in the chain.

The Packaging Arms Race

SK Hynix's competitors are not standing still, but they are pursuing notably different strategies. Samsung, despite holding the largest total memory fabrication capacity in the world, has struggled with HBM yields. The company's HBM3E chips suffered quality issues that delayed shipments to Nvidia throughout 2025, costing Samsung both revenue and credibility at the worst possible moment.

Samsung's approach to advanced packaging relies heavily on its foundry division, attempting to integrate memory packaging with its logic chip manufacturing. In theory, this vertical integration should be an advantage. In practice, it has spread Samsung's engineering resources across too many technical challenges simultaneously. The company recently reorganized its semiconductor division and replaced key leadership in an effort to close the gap, but catching SK Hynix in HBM4 packaging will require execution that Samsung has not yet demonstrated.

Micron occupies a different position. The American company has focused on being a fast follower, licensing packaging techniques and competing on price rather than trying to lead technologically. Micron's HBM3E products have been well-received, and the company has secured design wins with some AI chip makers. But its 20% market share reflects the fundamental challenge: without proprietary packaging advantages, Micron competes on margins rather than capturing the premium pricing that SK Hynix commands.

The three-company dynamic creates a market structure that looks increasingly like the logic chip foundry business, where TSMC's technological lead over Samsung and Intel has proven nearly impossible to close. SK Hynix's $13 billion packaging investment is designed to ensure that the same dynamic takes hold in memory.

What This Means for AI Hardware Costs

The downstream implications of the HBM bottleneck extend well beyond semiconductor company earnings. AI hardware costs are driven significantly by memory pricing, and the current supply-demand imbalance keeps HBM prices elevated at levels that fundamentally shape the economics of AI deployment.

A single HBM3E stack currently costs roughly $100 to $120, and an Nvidia H200 GPU requires six of them. That means $600 to $720 in memory costs alone per GPU, before accounting for the processor die, packaging, or board-level integration. For a data center deploying thousands of GPUs, memory represents a substantial and largely non-negotiable portion of the capital expenditure.
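Those per-stack figures compound quickly at fleet scale. A minimal sketch using the article's numbers (the 10,000-GPU deployment size is a hypothetical for illustration):

```python
# Memory cost per accelerator, using the article's figures:
# six HBM3E stacks at $100-$120 each for an H200-class GPU.
stacks_per_gpu = 6
low, high = 100, 120  # USD per HBM3E stack

cost_low, cost_high = stacks_per_gpu * low, stacks_per_gpu * high
print(f"HBM per GPU: ${cost_low}-${cost_high}")  # $600-$720

gpus = 10_000  # hypothetical deployment size, not from the article
print(f"Memory alone for {gpus:,} GPUs: "
      f"${gpus * cost_low:,}-${gpus * cost_high:,}")
```

At that hypothetical scale, the memory bill alone runs $6 million to $7.2 million before a single processor die is counted.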

If SK Hynix's capacity expansion proceeds on schedule and competitors close the quality gap, HBM prices could moderate by 15 to 25% by 2028 as supply catches up with demand. That reduction would flow directly into lower AI infrastructure costs, potentially accelerating deployment by companies that currently find the economics marginal. Conversely, if demand growth outpaces even these massive capacity additions, as it has in every year since the AI boom began, prices stay elevated and AI deployment remains concentrated among the companies with the deepest pockets.
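Applying that projected moderation to today's per-stack price range gives a sense of the 2028 scenario. A sketch, not a forecast; both the current range and the reduction band are the article's figures:

```python
# Applying the article's projected 15-25% price moderation to the
# current $100-$120 per-stack range (a 2028 scenario, not a forecast).
current = (100, 120)  # USD per HBM3E stack today

for cut in (0.15, 0.25):
    lo, hi = (round(p * (1 - cut)) for p in current)
    print(f"{cut:.0%} reduction: ${lo}-${hi} per stack")
```

A 25% cut would bring a stack down to roughly $75 to $90, enough to shave well over $100 off each GPU's memory bill.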

The Investment Calculus

SK Hynix's $13 billion bet reflects a calculated assessment that AI memory demand will sustain premium pricing through at least 2030. The company's 61% HBM market share, combined with its first-mover position on HBM4 packaging, gives it pricing power that competitors cannot easily challenge. The new Cheongju facility is designed to defend that position through the next two generational transitions.

The critical variable is timing. The facility targets late 2027 completion, meaning meaningful capacity relief arrives in 2028 at the earliest. Any construction delays extend the current shortage further. For the broader AI industry, the practical conclusion is straightforward: memory constraints will continue to limit the pace of AI infrastructure buildout through at least 2027, regardless of how aggressively companies like Nvidia, Meta, and Microsoft want to deploy. The $13 billion investment is large enough to signal confidence in sustained demand, but not large enough to eliminate the scarcity that makes HBM so profitable in the first place.


Written by

Morgan Wells

Current Affairs Editor

Morgan Wells spent years in newsrooms before growing frustrated with the gap between what matters and what gets clicks. With a journalism degree and experience covering tech, business, and culture for both traditional media and digital outlets, Morgan now focuses on explaining current events with the context readers actually need. The goal is simple: cover what's happening now without the outrage bait, the endless speculation, or the assumption that readers can't handle nuance. When not tracking trends or explaining why today's news matters, Morgan is probably doom-scrolling with professional justification.
