Why DRAM and NAND behave like commodity markets: scale, process nodes, yields, and massive fab capex drive Micron’s earnings swings and volatility.

Micron is a “capital game” company selling DRAM and NAND where prices swing because supply takes a long time (and a lot of money) to adjust—so earnings can surge or drop as the memory cycle turns.
This is a plain-English guide to the mechanics behind Micron’s volatility: how memory markets behave, and why results can change quickly even when the company is well run.
It is not a set of trading tips, and it won’t pretend to predict the exact quarter when pricing bottoms or peaks. Memory markets are influenced by countless moving parts, and precision forecasting is usually false comfort.
Demand for memory can change fast (PC shipments slow, cloud spending pauses, a new AI buildout accelerates). Supply changes slowly because new capacity requires planning, equipment orders, construction, and months of ramping and yield improvement.
That timing mismatch—demand moving quickly while supply adjusts with a delay—creates repeating cycles: tight periods with rising prices and strong profits, followed by oversupply, falling prices, and margin pressure.
A capital game means the industry requires huge upfront spending (fabs, tools, and process transitions) with payback measured in years, not weeks. Once that spending is committed, companies can’t easily “turn off” supply without cost, which amplifies booms and busts.
Most of Micron’s earnings swings can be explained by three fundamentals: the memory cycle, manufacturing scale, and process technology.
Micron mainly sells two kinds of memory: DRAM (working memory) and NAND flash (storage). They’re both critical, but they behave differently—and both tend to trade more like commodities than like highly differentiated “specialty” chips.
DRAM holds data your system needs right now. When you close an app or power off a server, DRAM contents disappear.
You’ll see DRAM in PCs (DDR5/DDR4), servers and cloud data centers, and graphics/AI systems (high-bandwidth variants like HBM, though the broader market is still standard DRAM).
NAND keeps data when the power is off. It’s what’s inside SSDs, phones, and many embedded devices. NAND performance varies (e.g., interface/controller), but the underlying storage bits are often interchangeable across suppliers.
Memory is more standardized than many semiconductors: buyers care about capacity, speed class, power, and reliability specs—but there’s usually less product lock-in than with a custom CPU, GPU, or analog chip. That makes switching suppliers easier when price moves.
Buying is also high-volume and negotiated: large OEMs, cloud customers, and distributors purchase huge lots, pushing pricing toward market-clearing levels.
Because costs are largely fixed once fabs are running, small price changes can swing profits. A few percent move in average selling price, multiplied across billions of gigabytes shipped, can materially change margins.
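To make that leverage concrete, here is a minimal sketch with made-up round numbers (none of these figures are Micron’s actual costs, prices, or volumes): against a largely fixed cost base, a 5% ASP decline takes roughly four points off gross margin.

```python
# Hypothetical round numbers, for intuition only (not Micron's actual figures).

def gross_margin(asp_per_gb, bits_gb, fixed_cost, var_cost_per_gb):
    """Gross margin as a fraction of revenue."""
    revenue = asp_per_gb * bits_gb
    cost = fixed_cost + var_cost_per_gb * bits_gb
    return (revenue - cost) / revenue

BITS = 10e9    # 10 billion GB shipped in a year (assumed)
FIXED = 20e9   # $20B of fixed cost: depreciation, labor, maintenance (assumed)
VAR = 0.30     # $0.30 of variable cost per GB (assumed)

base = gross_margin(3.00, BITS, FIXED, VAR)  # ASP of $3.00/GB
down = gross_margin(2.85, BITS, FIXED, VAR)  # same year, but a 5% ASP decline

print(f"margin at $3.00/GB: {base:.1%}")  # → margin at $3.00/GB: 23.3%
print(f"margin at $2.85/GB: {down:.1%}")  # → margin at $2.85/GB: 19.3%
```

The volumes and costs barely moved; only the price did, and margin dropped by about a sixth.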
Memory markets tend to move in a familiar loop: demand rises, prices rise, manufacturers increase spending, new supply arrives, the market overshoots, prices fall, and spending gets cut—setting up the next upswing.
When PC, smartphone, server, or AI infrastructure demand improves, customers need more DRAM and NAND bits. Because memory is widely interchangeable, tighter supply quickly shows up as higher contract and spot prices.
Higher pricing boosts margins, so makers announce bigger capex plans—more tools, more wafer starts, and sometimes new fabs. Eventually, that added output hits the market. If demand has already slowed, the extra bits create a glut. Prices drop, customers delay purchases, and producers respond by cutting wafer starts and capex. Supply tightens again, and the cycle repeats.
Supply can’t be “dialed up” instantly:
- New fabs and major expansions need planning, permitting, and construction.
- Production tools carry long lead times between order and installation.
- New lines must be qualified, then ramped over months of yield learning.
These delays mean the industry is always reacting to yesterday’s pricing signals.
DRAM and NAND don’t always peak or trough together. Different end markets, technology transitions, and competitor behavior can create periods where DRAM tightens while NAND is oversupplied (or vice versa).
Inventory magnifies swings. When prices are rising, customers often buy ahead to avoid higher costs, pulling demand forward. When prices are falling, they burn off stock and pause orders. Those stop-and-go purchasing patterns can make earnings moves look abrupt—even when end-user demand only changed modestly.
When Micron talks about “bit growth,” it’s describing how many total bits of memory it can ship over a period (e.g., a quarter or a year). That’s the real supply unit in memory markets—not the number of chips, and not the number of wafers started in a fab.
A memory “chip” is just a container for bits. If the industry can put more bits onto each wafer, it can increase supply even if it doesn’t build new factories or run more wafers.
Bit growth is central because buyers (PC makers, cloud providers, phone OEMs) care about how many gigabits or terabytes they can buy at a given price. Suppliers compete on cost per bit, and prices tend to respond to how fast bits are growing versus how fast demand for bits is growing.
Memory makers expand bits per wafer in two main ways:
- DRAM: shrinking features by moving to newer process nodes, so each bit takes less area.
- NAND: stacking more layers vertically, so each wafer holds more cells in the same footprint.
Even if wafers shipped stay flat, these technology moves can lift total shipped bits.
Here’s an intuitive example with round numbers.
Assume a company ships 100,000 wafers per quarter. At the old node, each wafer yields 1,000 “units” of bits (think: 1,000 standardized gigabits). That’s 100 million units total.
After a node transition and yield learning, bits per wafer rise 30% to 1,300 units. With the same 100,000 wafers, supply becomes 130 million units—a big supply jump without running a single extra wafer.
If demand grows only 10% while supply grows 30%, the gap typically shows up as inventory build and then pricing pressure.
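The arithmetic above can be written out as a few lines of code, using the same illustrative numbers as the text:

```python
# The worked example from the text, in code. All numbers are illustrative.
wafers = 100_000                    # wafers per quarter (unchanged throughout)
bits_per_wafer_old = 1_000          # "units" of bits per wafer at the old node
bits_per_wafer_new = int(bits_per_wafer_old * 1.30)  # +30% after the transition

supply_old = wafers * bits_per_wafer_old  # 100 million units
supply_new = wafers * bits_per_wafer_new  # 130 million units, zero extra wafers

demand_old = supply_old                   # assume the market started balanced
demand_new = int(demand_old * 1.10)       # demand grows only 10%

excess = supply_new - demand_new          # bits with no buyer at current prices
print(f"supply growth: {supply_new / supply_old - 1:.0%}")  # → supply growth: 30%
print(f"excess units:  {excess:,}")                         # → excess units:  20,000,000
```

Those 20 million excess units are what shows up first as inventory build and then as pricing pressure.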
Because many customers can substitute one supplier’s DRAM/NAND for another’s, even a modest oversupply of bits can push average selling prices down quickly—feeding the volatility Micron is known for.
Memory manufacturing is less like “building gadgets” and more like running an ultra-expensive utility. Once a fab is built, a huge portion of the cost is fixed—so profits don’t move smoothly. They swing.
When Micron talks about capital expenditures (capex), it’s not one big purchase—it’s a stack of expensive building blocks:
- Cleanroom space and fab construction (or major expansions of existing shells).
- Production tools: lithography, deposition, etch, and metrology equipment.
- Installation, qualification, and the ongoing cost of process transitions.
Even if a company “only” wants more bits, it still needs more of these steps—because the factory is the product.
More supply doesn’t show up on command. A new fab (or a major expansion) requires site work, tool orders with long lead times, installation, qualification, and then a long ramp to good yields.
On top of that, memory lines are tuned to specific process flows; you can’t instantly convert capacity from one generation to another without downtime and learning. By the time new capacity arrives, demand may have changed—feeding the cycle.
Memory fabs have high fixed costs (depreciation, labor, maintenance, utilities). Variable costs exist, but they’re smaller than many people expect. So if pricing improves and a fab runs near full utilization, gross margin can jump quickly. If demand weakens and utilization falls, the same fixed cost base crushes profitability.
In plain English: the factory costs a lot to keep “on,” whether you’re selling every bit at a good price or discounting to move inventory.
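Here is the same point as a sketch with invented numbers: hold the selling price flat and vary only utilization, and fixed-cost absorption alone moves gross margin dramatically.

```python
# Illustrative only: why falling utilization "crushes" margins at flat prices.
FIXED = 2_000_000_000      # quarterly fixed cost of a hypothetical fab, $
FULL_BITS = 8_000_000_000  # GB shipped per quarter at full utilization (assumed)
ASP = 0.60                 # $/GB, held constant in this sketch
VAR = 0.10                 # variable cost, $/GB (assumed)

def margin_at(utilization):
    """Gross margin when the fab runs at the given fraction of capacity."""
    bits = FULL_BITS * utilization
    revenue = ASP * bits
    cost = FIXED + VAR * bits   # the fixed bill is due regardless of output
    return (revenue - cost) / revenue

print(f"full fab: {margin_at(1.00):.1%}")  # → full fab: 41.7%
print(f"70% util: {margin_at(0.70):.1%}")  # → 70% util: 23.8%
```

Nothing about the product or its price changed; the same fixed cost simply landed on fewer bits.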
Capex is cash spent now. Accounting doesn’t expense it all at once; it spreads the cost over years as depreciation. That’s why a company can show low profits (due to heavy depreciation) while still generating cash—or show profits while needing huge ongoing reinvestment just to stay competitive.
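A toy example of that timing gap, assuming straight-line depreciation over five years (real schedules vary by asset type and company policy):

```python
# Hypothetical numbers: a $10B capex outlay, depreciated straight-line.
CAPEX = 10_000_000_000
YEARS = 5                                # assumed useful life

annual_depreciation = CAPEX / YEARS      # $2B hits the income statement yearly

# Year 1: all the cash leaves now, but only one-fifth shows up as expense.
op_cash = 3_000_000_000                  # assumed operating cash generation
reported_profit = op_cash - annual_depreciation  # looks like a $1B profit
free_cash_flow = op_cash - CAPEX                 # but $7B of cash went out
```

Flip the sign of the cycle and the opposite happens: once the fab is built, heavy depreciation can mask strong cash generation.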
Memory makers often frame capex as a percentage of revenue because it signals two things at once: how hard they’re reinvesting and how disciplined supply growth might be.
A high capex/revenue ratio can mean aggressively adding bits (or catching up on technology). A lower ratio can imply tighter supply—potentially supportive for pricing—though it can also risk falling behind on process transitions.
Memory makers don’t win by inventing a wildly different DRAM or NAND “feature set.” They win by producing bits at a lower cost than competitors, because market pricing tends to converge toward the marginal supplier.
That’s why scale—how many wafers you can run, how efficiently, and how consistently—shows up so directly in margins.
Scale lowers cost in several practical ways. Large players can negotiate better pricing and allocation on tools, wafers, chemicals, and logistics. They also spread huge fixed costs—R&D, process integration teams, mask sets, software, reliability labs—across more output.
And because memory fabs need to run near-full to be economical, bigger manufacturers often have more flexibility to keep utilization high by shifting output across customers and product categories.
Even with the same nominal “node,” two producers can have very different cost per bit because yields and throughput evolve with experience.
More starts and more time on a process mean faster learning: fewer defect excursions, better tool tuning, higher die-per-wafer realized, and less scrap. That learning curve is a compounding advantage—especially when a company is ramping a new node or a new layer stack in NAND.
Scale also supports mix. Higher-performance DRAM (for servers and some AI-related demand) typically carries better pricing and tighter specs than mainstream PC or mobile DRAM.
A scaled manufacturer can segment production—allocating the best capacity to premium products while still serving high-volume mainstream demand—helping stabilize average selling prices.
Scale doesn’t eliminate the cycle. In deep downturns, industry-wide demand shocks can overwhelm any cost advantage, pushing pricing below cash costs for weaker players and squeezing everyone’s margins.
Scale helps you survive and reinvest sooner, but it can’t prevent volatility when too many bits hit the market at once.
“Process technology” is simply the set of manufacturing steps that lets a company pack more memory into the same physical area. For DRAM, that usually means making features smaller and more precise. For NAND, it often means stacking more layers vertically—like adding floors to a building instead of widening the footprint.
If you can produce more bits from the same wafer, your cost per bit tends to fall. That’s the basic economic prize of moving to a newer “node” (DRAM) or higher-layer design (NAND).
But the newest generation can also be harder and more expensive: more process steps, tighter tolerances, slower equipment throughput, and higher materials complexity. As a result, cost per bit usually improves over time, not instantly on day one.
Yield is the share of produced wafers that meet quality targets and can be sold profitably. Early in a new technology ramp, yield is typically lower because the process is new, tiny deviations matter more, and the factory is still “learning.”
Low yield is expensive in two ways:
- The wafer’s processing cost is spent either way, so every sellable bit carries more of it.
- Fewer good bits ship, so supply (and revenue) falls short of the wafers started.
As yield improves, the same factory can suddenly ship a lot more bits without building anything new.
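A small sketch with invented numbers shows why: the wafer’s processing cost is roughly fixed, so yield directly sets both cost per sellable bit and how many bits actually ship.

```python
# Illustrative only: same wafer starts, improving yield -> more sellable bits.
WAFERS = 50_000               # wafer starts per quarter (assumed)
BITS_PER_GOOD_WAFER = 1_200   # units per fully good wafer (assumed)
WAFER_COST = 3_000            # processed cost per wafer, $ (assumed)

def cost_per_bit(yield_rate):
    """Cost per sellable bit: wafer spend divided by the bits that survive."""
    good_bits = WAFERS * BITS_PER_GOOD_WAFER * yield_rate
    total_cost = WAFERS * WAFER_COST   # spent whether the wafers yield or not
    return total_cost / good_bits

early = cost_per_bit(0.60)   # early in the ramp
mature = cost_per_bit(0.90)  # after yield learning

print(f"cost/bit at 60% yield: ${early:.2f}")   # → cost/bit at 60% yield: $4.17
print(f"cost/bit at 90% yield: ${mature:.2f}")  # → cost/bit at 90% yield: $2.78
```

Going from 60% to 90% yield also ships 50% more bits from the exact same wafer starts, with no new capex at all.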
When the industry shifts nodes, output can dip temporarily as lines are converted and early yields lag. That can tighten supply and lift pricing.
The reverse is also common: if ramps go better than expected, usable supply rises quickly and pricing can soften.
Because memory pricing is so sensitive to small changes in bit supply, surprises in yields, ramp speed, or layer/node execution can move results fast. A “better-than-planned” ramp can pressure prices; a “harder-than-planned” transition can do the opposite—sometimes within a quarter or two.
Memory is unusual because small shifts in inventory can move prices fast, and prices feed back into behavior. When the product is largely interchangeable (a given DRAM or NAND spec), customers and suppliers both try to “manage the cycle” with inventory—and often end up magnifying it.
When lead times extend or prices rise, OEMs and cloud buyers frequently double-order to protect supply. This doesn’t mean end demand is suddenly stronger; it often means the same demand is being booked twice.
Once supply loosens, that inventory shows up as a sharp “correction”: customers pause orders to burn down stock. To the supplier, it looks like demand disappeared, even if PCs or servers are still shipping at a normal pace.
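A toy simulation (assumed buyer behavior, not a calibrated model) shows how flat end demand can still produce an order spike and then a dead quarter at the supplier. The only thing that changes here is the buyer’s inventory target: two quarters of stock while prices rise, one quarter once they fall.

```python
# Toy bullwhip sketch: end demand is flat, but orders swing with inventory targets.
end_demand = [100] * 8                  # units actually consumed each quarter
target_quarters = [2, 2, 2, 2, 1, 1, 1, 1]  # desired stock, in quarters of demand

inventory = 100                         # buyer's starting stock
orders = []
for demand, target in zip(end_demand, target_quarters):
    desired = demand * target                    # stock the buyer wants on hand
    order = max(0, demand + desired - inventory) # cover usage + close the gap
    inventory = inventory + order - demand
    orders.append(order)

print(orders)  # → [200, 100, 100, 100, 0, 100, 100, 100]
```

End users consumed exactly 100 units every quarter, yet the supplier saw a 2x order spike followed by a quarter of zero orders: the “demand collapse” was just inventory coming back out.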
For a producer like Micron, finished goods inventory can be a cushion when demand surprises to the upside—ship from stock, keep fabs running, and avoid missing revenue.
But in a downturn, inventory becomes a trap. If prices are falling, holding unsold bits can mean:
- Selling later at even lower prices.
- Write-downs when inventory’s carrying value exceeds what the market will pay.
- Cash tied up in stock instead of funding the next process transition.
DRAM and NAND pricing is discovered through a mix of contracts (often quarterly) and spot markets (more immediate).
Even if a buyer wants to switch suppliers or ramp a new part, qualification and validation take time. That creates step-changes: demand can’t smoothly “slide” between products; it can pause while platforms, firmware, and supply chains are re-approved.
Memory is one of the few major semiconductor categories where a small number of companies account for most global supply. That concentration matters because pricing is set at the market level: if total industry output grows faster than demand, the “clearing price” can fall quickly, even if each company is running world-class technology.
When only a handful of producers control most DRAM or NAND capacity, each player’s investment decisions have outsized impact. If everyone expands cautiously, supply growth can track demand more closely and pricing tends to be steadier.
If even one player expands aggressively, the extra bits don’t stay “contained”—they flow into the same global channels and pressure pricing for all vendors.
In memory, capex discipline generally refers to pacing supply growth rather than maximizing near-term output. Practically, that can look like:
- Slowing additions to wafer starts instead of racing to add capacity.
- Prioritizing node transitions that lower cost per bit over greenfield expansion.
- Stretching tool deliveries or trimming utilization when inventories build.
This isn’t about stopping investment; it’s about choosing investments that improve cost per bit without flooding the market with additional bits too quickly.
Even in a concentrated market, companies face strong incentives to keep pushing. Market share fears are real: sitting out an upturn can mean losing design wins, customer mindshare, or negotiating leverage.
On top of that, technology races create pressure to build and qualify new process capability, which can inadvertently add capacity.
The key takeaway: because memory is highly substitutable, a single large expansion or faster-than-expected ramp can reset the supply-demand balance—and the price level—for everyone.
Memory demand has a long-term tailwind: more data created, moved, and stored every year. But Micron sells into markets where unit volumes and spending plans can swing quickly, so “structural growth” doesn’t prevent cyclical slowdowns.
Client devices (PCs, smartphones, tablets) tend to move in waves: a new platform, OS shift, or replacement cycle lifts shipments, then a digestion period follows.
Even if average DRAM or NAND per device rises over time, a single year of weaker unit demand can still leave the industry with too many bits.
Hyperscalers and enterprises buy memory through servers, and server builds are dictated by utilization and budgets. When customers accelerate datacenter expansion, they pull forward memory demand; when they slow, orders can fall sharply.
Importantly, cloud demand can shift by mix as much as by total units: a richer mix of high-memory configurations boosts profitability for suppliers even if overall server shipments are flat.
AI training and inference generally require more memory bandwidth and capacity per system, increasing DRAM content in high-end servers and specialized accelerators. That raises the ceiling for demand, but it doesn’t remove the cycle: spending can still pause if deployments overshoot near-term usage, if power/space limits constrain expansion, or if customers wait for the next platform generation.
At a high level, buyers can reduce memory needs through software efficiency (compression, quantization, better caching) or by changing system design (more on-package memory, different tiers of storage). These shifts usually change where bits are consumed and which products are favored, rather than eliminating consumption altogether—another reason profitability can move even when “total demand” headlines look steady.
Micron’s results often look “mysterious” until you track a handful of operating indicators that map directly to supply/demand and fixed-cost absorption. You don’t need a model with dozens of tabs—just a few KPIs and the discipline to compare them quarter to quarter.
Start with:
- Bit shipment growth for DRAM and NAND (separately, where disclosed).
- The direction of average selling prices (ASPs) quarter over quarter.
- Inventory: Micron’s days of inventory, plus commentary on customer stock levels.
- Capex guidance and any commentary on wafer starts or utilization.
- Gross margin, read against pricing and utilization.
If you want a primer on interpreting these metrics across chipmakers, see /blog/semiconductor-kpis-explained.
If you find yourself rebuilding the same KPI table each quarter, it can help to formalize it into a lightweight internal app: ingest earnings releases, track bit shipments/ASPs/inventory over time, and generate a consistent “cycle dashboard.”
Platforms like Koder.ai are designed for this kind of workflow: you can describe the dashboard you want in chat, generate a web app (typically React on the front end with a Go/PostgreSQL backend), and iterate quickly—without turning a simple tracker into a months-long engineering project. If you ever need to move it in-house, source code export is supported.
Memory manufacturing has high fixed costs, so pricing acts like a lever on profitability. A single-digit ASP decline can compress gross margin meaningfully if it coincides with lower utilization and higher inventory.
Conversely, when demand improves and pricing firms up, margins can expand quickly because the same fabs are already built and staffed.
Focus less on precise revenue ranges and more on directional signals:
- Are bit shipments and ASPs moving together, or diverging?
- Are inventories rising faster than shipments?
- Is capex guidance being raised (more future supply) or cut (more discipline)?
Watch for rapid capacity adds, soft end-demand language (PCs, smartphones, cloud digestion), and inventories rising faster than shipments. When several of these appear together, pricing pressure usually isn’t far behind—and that’s what tends to drive the biggest earnings swings.
Micron’s results can look confusing if you expect a steady “sell more units, earn more profit” story. Memory behaves differently.
The simplest way to make sense of Micron is to keep three pillars in mind: the cycle, scale, and process technology.
Cycles: DRAM and NAND pricing tends to overshoot in both directions because supply takes years to add, while demand can swing quarter to quarter. When pricing turns, it often moves faster than unit volumes.
Scale: Cost per bit is the scoreboard. Larger producers usually have lower costs because they spread fixed fab expenses across more bits, learn faster, and keep factories better utilized. When utilization drops, margins can compress quickly—even if the company is still “shipping a lot.”
Process technology: Node transitions and yield learning matter as much as (or more than) headline demand. A strong ramp lowers cost per bit; a rough ramp can raise costs right when pricing is falling.
Memory is a capital-heavy, commodity-like market with delayed supply responses. That structure naturally creates earnings swings.
Micron can execute well and still face falling ASPs; it can also benefit from tight supply even with modest demand growth.
When you see a headline, try translating it into a few questions:
- Does this change bit supply, bit demand, or neither?
- Does it affect pricing now, or capacity a year or more from now?
- Does it change anyone’s cost per bit?
If you want more context on how we break down these topics, browse /blog. If you’re comparing tools or services around semiconductor research workflows, see /pricing.
Disclaimer: This article is for informational purposes only and is not investment advice.