Lisa Su’s AMD Comeback: Execution That Rebuilt a Chip Giant

Nov 07, 2025 · 8 min

A practical look at AMD’s turnaround under Lisa Su: clear roadmaps, platform focus, and disciplined execution that rebuilt trust and growth.

AMD Before the Comeback: What Needed Fixing

By the time Lisa Su took the CEO role in 2014, AMD wasn’t just “behind”—it was squeezed on multiple fronts at once. Intel dominated mainstream PC CPUs, Nvidia owned mindshare in high-end graphics, and AMD’s product cadence had become uneven. When core products are late or uncompetitive, every other problem gets louder: pricing power erodes, budgets shrink, and partners stop planning around you.

The pressure was competitive and financial

AMD had limited room to invest because margins were thin and debt weighed on the business. That constraint matters in semiconductors: you can’t cut your way to leadership if you’re missing performance and efficiency targets. The company needed products that could command better pricing, not just ship volume.

The core problem: credibility

The biggest issue wasn’t a single “bad chip.” It was trust.

PC makers, data center customers, and developers build multi-year plans. If they don’t believe your roadmap will arrive on time—and at the promised performance—they design you out early.

That credibility gap affected everything:

  • OEMs hesitated to commit premium designs.
  • Enterprise buyers treated AMD as a risky second source.
  • The ecosystem (software tuning, validation, platform accessories) invested less, creating a self-reinforcing cycle.

What the turnaround had to achieve

Before any comeback story could be written, AMD needed clear, measurable goals:

  1. Regain performance and efficiency leadership in key segments.
  2. Rebuild margins by competing on value, not discounts.
  3. Earn back trust with predictable roadmaps and consistent execution.

This sets the frame for the rest of the story: not personal wealth or hype, but a turnaround built on strategy, delivery, and repeated proof that AMD could do what it said it would do.

Lisa Su’s Operating Playbook: Execution as a Strategy

AMD’s comeback wasn’t powered by a single breakthrough—it was powered by a decision to treat execution as the strategy. In semiconductors, ideas are cheap compared to shipping: a missed tape-out, a slipped launch window, or a confusing product stack can erase years of R&D advantage. Lisa Su’s playbook emphasized doing fewer things, doing them on time, and doing them predictably.

What “execution first” means in chips

“Execution first” prioritizes repeatable delivery: clear product definitions, realistic schedules, tight coordination across design, validation, packaging, software, and manufacturing, and a refusal to overpromise. It also means making hard calls early—cutting features that threaten deadlines and focusing engineering effort where it will actually reach customers.

Why multi-year plans reduce buyer risk

OEMs, cloud providers, and enterprise customers buy roadmaps as much as they buy chips. A credible multi-year plan lowers their risk because it lets them align platform designs, BIOS validation, cooling, power budgets, and procurement well in advance.

When customers believe the next-gen part will arrive when stated—and will be compatible with their platform assumptions—they can commit earlier, order in volume, and build long-lived product lines with confidence.

Trade-offs: fewer bets, better follow-through

The trade-off is obvious: narrower scope. Saying “no” to side projects can feel conservative, but it concentrates resources on the few programs that matter most.

In practice, fewer simultaneous bets reduce internal thrash and increase the odds that each launch is complete—not just “announced.”

Signals of discipline buyers can see

Execution shows up in public signals: hitting dates, consistent naming and positioning, stable messaging quarter to quarter, and fewer last-minute surprises. Over time, that reliability becomes a competitive advantage—because trust scales faster than any single benchmark win.

Roadmaps That Restore Trust

A turnaround in semiconductors isn’t won by shipping one great chip. Customers—PC makers, cloud providers, and enterprises—plan purchases years ahead. For them, a credible product roadmap is a promise that today’s decision won’t be stranded tomorrow.

Under Lisa Su, AMD treated the roadmap as a product in itself: specific enough to plan around, disciplined enough to hit.

What a roadmap must include

A useful roadmap isn’t just “next-gen is faster.” It needs:

  • Timelines and cadence: clear windows for when platforms and generations arrive, so partners can align launches and qualification.
  • Performance targets that matter: the gains buyers track—core counts, efficiency, memory bandwidth, I/O, and real workload improvements.
  • Platform plans: socket compatibility, chipset direction, and how long a platform will be supported—critical for IT teams that standardize.

Why this builds confidence in long purchase cycles

Servers, laptops, and OEM designs have long lead times: validation, thermals, firmware, supply commitments, and support contracts. A stable roadmap reduces the cost of unknowns. It lets a buyer plan: deploy now, refresh later, and keep software and infrastructure investments relevant across multiple cycles.

Consistency signals seriousness

Consistency shows up in small but powerful ways: predictable generational naming, a regular release rhythm, and coherent segmentation (mainstream vs. high-end vs. data center). When each generation feels like a continuation—not a reset—partners are more willing to invest engineering time and marketing dollars.

Communicating uncertainty without overpromising

No chip schedule is risk-free. The trust-building move is being explicit about what’s committed versus what’s a target, and explaining dependencies (for example, manufacturing readiness or platform validation).

Clear ranges, transparent milestones, and early updates beat bold claims that later require backtracking—especially when customers are betting multi-year roadmaps of their own on yours.

Zen and the Return to CPU Relevance

AMD’s comeback only worked if the CPU business became competitive again. CPUs are the anchor product that ties together laptops, desktops, workstations, and servers—plus the relationships with OEMs, system builders, and enterprise buyers. Without credible CPUs, everything else (graphics, custom chips, even partnerships) stays on the defensive.

Why Zen had to be a foundation, not a one-off

Zen wasn’t just a faster chip. It was a reset of priorities: ship on time, hit clear performance targets, and create an architecture that could scale across segments.

That scaling mattered because the economics of a semiconductor turnaround depend on reuse—one core design refined and repackaged for many markets, rather than separate teams building separate “hero” products.

The key was making the same DNA work from PC to server. If an architecture can handle a thin-and-light laptop and also power a data center CPU like EPYC, the company can move faster, share engineering wins, and deliver consistent improvements generation to generation.

The levers normal buyers can understand

Zen’s impact is easiest to grasp through a few practical metrics (a rough arithmetic sketch follows the list):

  • Performance per watt: better efficiency means quieter laptops, less heat, and lower operating costs in servers.
  • IPC (instructions per clock): more work done each tick of the clock—often the difference between “feels slow” and “feels snappy,” even at similar GHz.
  • Core counts: more cores help with multitasking, content creation, and server workloads—if the platform can feed them.
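
To put rough numbers on the IPC point, here is a minimal back-of-the-envelope sketch in Python. The IPC values, clock speeds, and power figure are invented for illustration only; real-world performance also depends on caches, memory, and the workload itself.

```python
# Rough throughput model: instructions per second ≈ IPC × clock × cores.
# All numbers below are illustrative assumptions, not real chip specs.

def throughput_gips(ipc: float, ghz: float, cores: int = 1) -> float:
    """Approximate throughput in billions of instructions per second."""
    return ipc * ghz * cores

# Hypothetical older core: lower IPC, pushed to a higher clock.
old_core = throughput_gips(ipc=1.2, ghz=4.2)
# Hypothetical newer core: higher IPC at a slightly lower clock.
new_core = throughput_gips(ipc=1.7, ghz=4.0)

print(f"old core: {old_core:.1f}B instr/s")  # ~5.0
print(f"new core: {new_core:.1f}B instr/s")  # ~6.8

# Performance per watt follows the same shape: divide by package power.
print(f"new core at 15 W: {new_core / 15:.2f}B instr/s per watt")  # ~0.45
```

The point isn’t the exact numbers; it’s that higher IPC lifts the whole curve without requiring a clock-speed arms race.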

From “good enough” to a credible alternative

The early goal wasn’t instant domination; it was regaining trust. Zen moved AMD from “maybe, if it’s cheap” to “credible alternative,” which unlocked reviews, OEM interest, and real volume.

Over time, consistent execution turned that credibility into leadership in specific niches—high core-count value, efficiency-focused designs, and server configurations where buyers care about throughput and total cost of ownership. That steady climb is what made the AMD comeback feel durable, not temporary.

Chiplet Design: A Platform for Faster Iteration

AMD’s shift to chiplets is one of the most practical examples of “platform thinking” in hardware: design a set of reusable building blocks, then mix and match them into many products.

Chiplets vs. monolithic chips (plain English)

A traditional monolithic processor is like building an entire house as one solid piece—every room, hallway, and utility line fused together. With chiplets, AMD splits that house into modules: separate “rooms” (compute chiplets) and “utilities” (an I/O die), then connects them inside one package.

Why chiplets accelerated AMD’s comeback

The biggest win is manufacturing efficiency. Smaller chiplets tend to have better yield (a higher fraction of usable dies) than a single huge die. That improves cost control and reduces the risk that one flaw ruins an expensive, large chip.
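
The yield point is easy to see with a first-order model. The sketch below uses the common Poisson yield approximation; the defect density and die areas are made-up assumptions, and real foundry models (defect clustering, partial-die harvesting) are more involved, but the direction holds.

```python
import math

# First-order (Poisson) yield model: yield ≈ exp(-defect_density × die_area).
# Defect density and die sizes are illustrative assumptions, not foundry data.

def die_yield(defects_per_cm2: float, area_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_cm2)

DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed)

monolithic = die_yield(DEFECT_DENSITY, area_cm2=6.0)  # one large ~600 mm^2 die
chiplet = die_yield(DEFECT_DENSITY, area_cm2=0.8)     # one small ~80 mm^2 die

print(f"monolithic die yield: {monolithic:.0%}")  # ~55%
print(f"single chiplet yield: {chiplet:.0%}")     # ~92%
# Even after packaging several good chiplets together, less silicon is wasted
# when a defect lands on a small die instead of a huge one.
```

Real designs also salvage partially defective dies as lower-core-count parts, which pushes the economics even further in the modular approach’s favor.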

Chiplets also enable faster iteration. AMD can upgrade compute chiplets on a newer process node while keeping the I/O die more stable, instead of redesigning everything at once. That shortens development cycles and makes roadmap promises easier to keep.

One shared design, many SKUs

A chiplet platform supports a broad product stack without reinventing the wheel. The same compute chiplet design can appear in multiple CPUs—AMD can create different core counts and price points by combining more or fewer chiplets, or pairing them with different I/O capabilities.

That flexibility helps serve consumers, workstations, and servers with a coherent family rather than disconnected one-offs.
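
As a toy illustration of that mix-and-match idea, here is a minimal sketch. The product names and segments are hypothetical; eight cores per compute chiplet is an assumption roughly in line with how Zen compute dies have been configured.

```python
# Toy model of "one compute chiplet, many SKUs".
# Product names and chiplet counts are hypothetical, not real AMD parts.

CORES_PER_CHIPLET = 8  # assumed cores per compute die

product_stack = {
    # name: (number of compute chiplets, target segment)
    "Mainstream-8": (1, "desktop"),
    "Enthusiast-16": (2, "high-end desktop"),
    "Server-64": (8, "data center"),
}

for name, (chiplets, segment) in product_stack.items():
    cores = chiplets * CORES_PER_CHIPLET
    print(f"{name}: {chiplets} × {CORES_PER_CHIPLET}-core chiplet(s) = {cores} cores ({segment})")
```

Combined with binning lower-core-count parts from imperfect dies, this is how one compute die design can stretch from mainstream desktops to many-core servers.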

Risks—and how AMD mitigates them

Chiplets add new complexity:

  • Interconnect and latency: Data must travel between dies, which can introduce delays. AMD mitigates this with high-bandwidth interconnects and careful cache/memory design so the “distance” feels smaller.
  • Packaging complexity: Advanced packaging and assembly become critical. Strong foundry and packaging partners, plus disciplined validation, reduce surprises.
  • Power and thermals: Multiple dies in one package can concentrate heat. Smarter power management and physical layout choices help keep performance consistent.

The result is a scalable approach that turns architecture into a reusable product engine—not a one-time chip.

Platform Strategy: Winning Through Compatibility and Ecosystems

A comeback in chips isn’t just about a fast CPU. For most buyers—and for IT teams buying thousands of PCs—the “platform” is the full promise: the socket the CPU fits into, the chipset features, memory support, firmware updates, and whether next year’s upgrade will be painless or a forced rebuild.

Why sockets and chipsets matter (even if you never open the case)

When a platform changes too often, upgrades turn into full replacements: new motherboard, sometimes new memory, a new Windows image, new validation work. AMD’s decision to keep platforms around longer (the AM4 era is the obvious example) translated into a simple benefit people understand: you could often drop in a newer processor without replacing everything else.

That compatibility also reduced risk. Home users got clearer upgrade paths; IT teams got fewer surprises during procurement cycles and rollouts.

Platform longevity = lower total cost and more loyalty

Long-lived platforms lower the total upgrade cost because fewer parts are thrown away. They also lower the time cost: less troubleshooting, fewer driver and BIOS issues, and less downtime.

That’s how compatibility becomes loyalty—buyers feel like the system they purchased won’t be a dead end six months later.

Coordinating the whole stack as one product promise

A platform strategy means treating CPU + motherboard + memory + firmware as one coordinated deliverable. In practical terms:

  • Motherboard partners need early, stable specs.
  • Memory support must be predictable and well-tested.
  • Firmware (BIOS/AGESA updates) has to arrive reliably, not as an afterthought.

When these pieces move together, performance is more consistent and support is simpler.

Fewer dead ends, clearer choices

Plainly put, AMD aimed to reduce gotchas: fewer confusing compatibility matrices, fewer forced rebuilds, and more systems that can evolve over time.

That kind of platform clarity doesn’t grab headlines like benchmarks do—but it’s a big reason buyers stick around.

Manufacturing and Foundry Partnerships (Including TSMC)

AMD’s comeback wasn’t only about better CPU designs—it also depended on getting timely access to the most advanced manufacturing. For modern chips, where and when you can build matters almost as much as what you build.

Why leading-edge access is a strategic advantage

Leading-edge manufacturing (often discussed in terms of smaller “process nodes”) generally enables more transistors in the same area, improved power efficiency, and higher potential performance. At a high level, that translates to:

  • Faster chips at the same power, or similar speed using less power (important for laptops and data centers)
  • More features on-die (like cache or accelerators) without ballooning cost
  • Better competitiveness when rivals also push new architectures

AMD’s close relationship with TSMC gave it a credible path to those advantages on a predictable schedule—something the market could plan around.

Partnering with foundries vs. owning fabs

Owning factories can offer control, but it also locks a company into massive capital spending and long upgrade cycles. For some companies, partnering with a specialist foundry can be the faster route because:

  • Foundries spread the cost of new process development across many customers
  • Capacity can be scaled through contracts rather than building new plants
  • Engineering focus stays on product design, packaging, and platform integration

AMD’s strategy leaned into this division of labor: AMD focuses on architecture and productization; TSMC focuses on manufacturing execution.

Process nodes, in plain terms

A “node” is a shorthand label for a generation of manufacturing technology. Newer nodes typically help chips run cooler and faster, which is especially valuable in servers where performance-per-watt drives total cost of ownership.

Supply risk basics: capacity and long horizons

Foundry supply isn’t a spot market. Capacity is planned far in advance, and large customers often reserve wafers years ahead.

That creates real risks—prioritization, shortages, and timing slips—that can decide who ships and who waits. AMD’s turnaround included learning to treat manufacturing commitments as a core part of product strategy, not an afterthought.

EPYC and the Data Center Push

EPYC wasn’t just another product line for AMD—it was the fastest way to change the company’s business profile. Servers are a profit engine: volumes are lower than PCs, but margins are higher, contracts are sticky, and a single design win can translate into years of predictable revenue.

Just as important, winning in data centers signals credibility. If cloud providers and enterprises trust you with their most expensive workloads, everyone else pays attention.

What data center buyers actually care about

Server teams don’t buy on brand nostalgia. They buy on measurable outcomes:

  • Performance per dollar: total throughput for the budget, not just peak benchmarks.
  • Efficiency: power and cooling costs can rival hardware costs over time.
  • Reliability and longevity: stable platforms, long support windows, and predictable refresh cycles.
  • Support and validation: firmware maturity, fast issue resolution, and confidence that the vendor will show up when problems happen.

EPYC succeeded because AMD treated these as operating requirements, not marketing claims—pairing competitive CPU performance with a platform story that enterprises could standardize on.
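
A minimal sketch of why the efficiency line item carries so much weight, assuming invented numbers for price, power draw, electricity rate, and facility overhead (PUE):

```python
# Back-of-the-envelope server lifetime cost. Every input is an illustrative
# assumption, not a real price or a real EPYC/competitor figure.

HOURS_PER_YEAR = 8_760
ELECTRICITY = 0.15    # $ per kWh (assumed)
PUE = 1.5             # facility overhead multiplier for cooling etc. (assumed)
LIFETIME_YEARS = 5    # refresh window (assumed)

def energy_cost(avg_watts: float) -> float:
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * LIFETIME_YEARS * PUE
    return kwh * ELECTRICITY

server_price = 10_000   # assumed hardware cost, $
avg_power = 650         # assumed average wall power, watts

energy = energy_cost(avg_power)
print(f"hardware: ${server_price:,.0f}")
print(f"power + cooling over {LIFETIME_YEARS} years: ${energy:,.0f}")  # ~$6,400

# Performance per total dollar, for a hypothetical throughput score of 100:
throughput = 100.0
print(f"perf per $1k of TCO: {throughput / ((server_price + energy) / 1000):.1f}")
```

Even with these rough assumptions, energy and cooling land in the same order of magnitude as the hardware itself, which is why efficiency sits so high on server buyers’ checklists.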

How EPYC pulled the rest of AMD forward

A strong server CPU line creates pull-through. When a customer adopts EPYC in a cluster, it can influence adjacent purchases: developer workstations that match production, networking and platform choices, and eventually broader procurement comfort with AMD across PCs and laptops.

Data center wins also strengthen relationships with OEMs, hyperscalers, and software partners—relationships that compound over multiple product generations.

Adoption in real life: from pilots to rollouts

Most organizations follow a practical path:

  1. Pilot projects to validate performance, power, and management tooling.
  2. Second-sourcing to reduce dependency on a single vendor while limiting risk.
  3. Broader rollouts once operational teams trust stability, support, and roadmap continuity.

AMD’s execution advantage showed up in that last step: consistent iterations and clearer roadmaps made it easier for cautious buyers to move from “try” to “standardize.”

OEM and Partner Strategy: From Design Wins to Volume

A great chip doesn’t become a comeback story until it shows up in products people can buy. AMD’s OEM and partner strategy under Lisa Su focused on turning interest into repeatable, shippable designs—and then scaling those designs into real volume.

Why “platform + roadmap” wins design slots

For OEMs, picking a CPU is a multi-year bet. AMD reduced perceived risk by selling a platform (socket, chipset, firmware expectations, and validation cadence) alongside a credible multi-generation roadmap.

When an OEM can see how this year’s system can evolve into next year’s refresh with minimal rework, the conversation shifts from specs to planning.

That platform framing also made procurement and engineering teams more comfortable: fewer surprises, clearer timelines, and a stronger basis for committing marketing and supply chain resources.

Reference designs, validation, and long-term support

Behind the scenes, reference designs and validation suites mattered as much as performance. Partners need predictable integration: BIOS/UEFI maturity, driver stability, thermals guidance, and compliance testing.

Long-term support—keeping key generations maintained and validated—helped OEMs offer longer product lifecycles (especially important in commercial PCs and servers).

Partner-friendly programs (without bureaucracy)

AMD leaned into being easy to work with: clear enablement materials, responsive engineering support, and consistent platform policies. The goal wasn’t a complicated partner framework—it was fast decisions, fewer integration loops, and a path from early samples to shelf-ready systems.

Indicators worth watching

If you want to gauge whether design wins are translating into momentum, look for consistency over time: the number of systems launched each generation, how many OEM families get refreshed (not just one-offs), how long platforms stay supported, and whether releases arrive on schedule year after year.

Software and Developer Ecosystem: The Multiplier

Hardware wins benchmarks. Software wins adoption.

A CPU or GPU can be objectively fast, but if developers can’t easily build, debug, deploy, and maintain real applications on it, performance stays theoretical. One underappreciated part of AMD’s comeback was treating software enablement as a product feature—something that multiplies the value of every new architecture and process node.

Why software support matters as much as specs

Enterprises and creators care about time to usable results. That means predictable performance, stable behavior across updates, and confidence that the platform will still work after the next OS patch or framework release.

Strong software reduces friction for IT teams, makes benchmarking more repeatable, and lowers the risk of switching from an incumbent.

Developer relations basics (and why they compound)

The fundamentals aren’t glamorous, but they scale:

  • Clear documentation that matches what ships—not what’s planned.
  • Tooling that’s easy to install and update (compilers, profilers, debuggers).
  • Drivers that prioritize stability and regression testing.
  • Long-lived support policies so teams can standardize without fear.

When these basics are consistent, developers invest more deeply: they optimize code, write tutorials, contribute fixes, and recommend the platform internally. That flywheel is hard for competitors to dislodge.

GPUs: frameworks and libraries decide real-world value

For GPU compute—especially AI—framework compatibility often determines purchasing decisions. If the major training and inference stacks run well, and key libraries are maintained (kernels, math primitives, communication libraries), the hardware becomes easy to say yes to.

If not, even strong price/performance can stall.

What to measure: ecosystem momentum, not claims

Instead of relying on marketing, watch signals like:

  • Framework and toolchain release cadence and quality
  • Community engagement (issues resolved, documentation updates)
  • Independent benchmarks that are reproducible
  • Growth in certified systems and validated configurations

Ecosystem momentum is measurable—and it’s one of the most durable advantages in a turnaround.

Financial Discipline Behind the Comeback

AMD’s turnaround wasn’t just a product story—it was a financial one. Execution only matters if the company can fund it consistently, absorb mistakes, and keep promises without betting the balance sheet.

Fewer priorities, better margins

A key shift was narrowing focus: fewer must-win programs, clearer product tiers, and a tighter roadmap. That kind of prioritization does two things over time:

  • It reduces duplicated effort (multiple teams solving the same problem in slightly different ways).
  • It increases reuse across products, which lowers costs per unit as volumes grow.

Better gross margin doesn’t come from a single pricing moment. It comes from shipping a simpler, more repeatable portfolio—and avoiding distractions that burn engineering time without moving revenue.

R&D spend: invest hard, say no often

Financial discipline doesn’t mean underinvesting in R&D; it means spending where differentiation compounds.

AMD’s choices signaled a willingness to fund core architecture, platform longevity, and the steps required to deliver on schedule—while walking away from side bets that didn’t reinforce the main roadmap.

A practical rule: if a project can’t clearly improve the next two product cycles, it’s a candidate to pause or cut.

Capital allocation: protect the core, be careful with deals

Semiconductors punish overreach. Keeping the balance sheet healthy creates flexibility when markets soften or competitors force a response.

Disciplined capital allocation generally follows a simple order:

  1. Maintain enough financial strength to keep roadmaps funded.
  2. Invest in capabilities that reduce execution risk.
  3. Consider acquisitions only when integration is realistic and the strategic fit is direct.

Deals can accelerate a plan—or derail it through integration complexity. The cost isn’t just money; it’s leadership attention.

Avoiding hype cycles: ship what you can support

Targets that look great on slides can become expensive liabilities if they can’t be manufactured, supplied, and supported at scale.

AMD’s credibility improved as expectations aligned with what could actually be shipped—turning consistency into a competitive advantage.

Lessons from AMD’s Comeback: A Repeatable Playbook

AMD’s turnaround under Lisa Su is often told as a product story, but the more transferable lesson is operational: execution was treated as a strategy, and platforms were treated as compounding assets. You don’t need to build chips to borrow that playbook.

Actionable takeaways for leaders

Start with clarity. AMD narrowed focus to a small set of roadmaps that could actually ship, and then communicated them consistently. Teams can handle hard truths (tradeoffs, delays, constraints) better than moving targets.

Then add cadence and accountability. A turnaround needs a predictable operating rhythm—regular checkpoints, clear owners, and a tight feedback loop from customers and partners. The point isn’t more meetings; it’s turning promises into a repeated habit: commit → deliver → learn → commit again.

Finally, build platforms, not one-offs. AMD’s compatibility and ecosystem mindset meant each successful release made the next one easier to adopt. When products fit into existing workflows, customers can upgrade with less risk—momentum becomes cumulative.

A useful parallel in software: teams that ship reliably tend to win trust faster than teams that chase maximal scope. That’s one reason platforms like Koder.ai emphasize a tight loop from plan → build → deploy—using a chat-driven workflow plus agents under the hood, with practical guardrails like Planning Mode and snapshots/rollback. The lesson is the same as AMD’s: reduce surprise, keep cadence, and make “delivery” a repeatable system.

What investors can learn: look for execution signals, not stories

The most useful indicators aren’t dramatic narratives—they’re measurable behaviors:

  • Roadmaps that stay stable over time (few surprise pivots)
  • On-time delivery and clear sequencing (what ships next, and why)
  • Design wins that translate into volume (not just headlines)
  • Platform pull-through (repeat buyers, ecosystem support, software readiness)

These signals show whether a company is building trust, not just attention.

Common turnaround pitfalls to avoid

Turnarounds fail when leadership spreads the organization across too many bets, accepts heroic timelines, or communicates in vague slogans instead of concrete milestones.

Another frequent mistake is treating partnerships as a backup plan; external dependencies (like manufacturing capacity) must be planned early and managed continuously.

Wrap-up: why execution + platforms created durable momentum

AMD didn’t win by chasing every opportunity. It won by repeatedly shipping what it said it would ship, and by making each generation easier to adopt through compatibility, partners, and ecosystem gravity.

Execution builds credibility; platforms turn credibility into durable growth.

FAQ

What was AMD’s biggest problem before the turnaround?

AMD faced a stack of reinforcing problems: uncompetitive products, uneven cadence, thin margins, and debt. The most damaging issue was lost credibility—OEMs and enterprise buyers plan years ahead, so missed performance targets or slipped schedules made partners design AMD out early.

Why does the post describe “execution” as the strategy?

In semiconductors, a “great idea” doesn’t matter unless it ships on time, at scale, and as promised. The post emphasizes execution because predictable delivery restores buyer confidence, improves planning with partners, and turns roadmap trust into real design wins and volume.

How do multi-year roadmaps reduce customer risk?

Customers don’t just buy a chip—they buy a multi-year plan they can build around. A credible roadmap lowers risk by letting OEMs and data centers align:

  • platform design and validation timelines
  • power/thermal budgets
  • procurement and deployment schedules

That predictability makes it easier to commit early and commit big.

What makes a semiconductor roadmap “credible” instead of marketing?

A useful roadmap includes practical planning details, not hype:

  • timelines and cadence (when each generation/platform arrives)
  • meaningful targets (efficiency, core counts, memory/I/O, real workload gains)
  • platform plans (socket/chipset direction and support longevity)

It also distinguishes what’s committed vs. what’s still a target.

What did Zen change that made AMD relevant again?

Zen mattered because it was built to be a scalable foundation, not a one-off. It re-established AMD as a credible CPU option across PCs and servers by improving core metrics buyers feel:

  • performance per watt
  • IPC (responsiveness at similar clock speeds)
  • core-count scalability (when the platform can feed it)

What is chiplet design, and why was it a big deal for AMD?

Chiplets split a processor into reusable pieces (compute dies + an I/O die) connected in one package. The practical benefits are:

  • better yields and cost control (smaller dies waste less when defects occur)
  • faster iteration (upgrade compute on a new node while keeping other parts stable)
  • more flexible product stacks (many SKUs from shared building blocks)

Trade-offs include interconnect latency and packaging complexity, which require strong validation and packaging execution.

Why do sockets, chipsets, and compatibility matter so much in a comeback?

Platform longevity (e.g., long-lived sockets) reduces forced rebuilds and lowers total cost:

  • fewer motherboard/memory replacements
  • fewer BIOS/driver surprises across refresh cycles
  • clearer upgrade paths for consumers and IT teams

That compatibility turns into loyalty because buyers don’t feel stranded by frequent platform resets.

How did foundry partnerships (like TSMC) factor into the turnaround?

Access to leading-edge nodes affects performance-per-watt and competitiveness, but capacity must be reserved far in advance. Using a foundry partner can help because:

  • manufacturing R&D costs are shared across many customers
  • capacity is secured via contracts instead of building fabs
  • AMD can focus on architecture, packaging, and productization

The key is treating manufacturing commitments as part of the roadmap, not an afterthought.

Why were EPYC and the data center push so important for AMD?

Data centers offer higher margins, sticky multi-year contracts, and credibility signaling. EPYC succeeded by focusing on what server buyers measure:

  • performance per dollar and throughput
  • efficiency (power/cooling over the system lifetime)
  • reliability, support, and validation maturity

Server wins also create “pull-through” into workstations, OEM relationships, and broader platform adoption.

What are the most practical lessons investors and leaders can take from this story?

Look for measurable execution signals rather than narratives:

  • stable roadmaps with few surprise pivots
  • on-time launches and consistent segmentation/naming
  • design wins that repeat across generations (not one-offs)
  • ecosystem readiness (tooling, firmware, validated configs)

For leaders, the transferable lesson is to narrow priorities, build a delivery cadence, and treat platforms as compounding assets.
