Reddit Posts
AMD's new MI300x vs the field, plus future projections.
The Samsung Rival Taking an Early Lead in the Race for AI Memory Chips
A detailed DD for AMD in AI (Instinct MI300 breakdown)
4 Penny stocks that billionaires are loading up on
Is it possible to live on patent litigation? NLST is the most interesting example
NLST is revolutionizing the memory market (NAND & DRAM) - Samsung and Micron to pay IP licenses and damages for Netlist technology
Nvidia releases a new "nuclear bomb," Google's chatbot is also coming, and computing-power stocks hit another wave of limit-up halts
2023-02-28 Wrinkle-brain Plays (Mathematically derived options plays)
Hudbay slides after Q4 miss, reduced 2023 production guidance (NYSE:HBM)
Russia/Ukraine Conflict = Metals Squeeze | Choose Wisely!
$HBM – HORNBACH BAUMARKT is a rare, underpriced value stock w low free float <25% (think "GERMAN equivalent to HOME DEPOT")
HBM DD. SHORT INTEREST HIGH, VOLUME LOW, SOLID FUNDAMENTALS
Mentions
Apple's vaunted supply chain leverage is evaporating. Their daily pilgrimages to Seoul echo the 2011 Thai flood crisis, revealing a desperate bid for HBM3E parity. It's a margin trap. Because the AI compute cycle demands massive memory density, Apple's hardware premium is shrinking. The market recognizes that late-cycle hardware pivots rarely salvage stagnant growth. Which explains why capital is migrating elsewhere.
Micron’s sold-out HBM capacity signals a departure from the volatile cycles of the Dot-com era. Memory isn't just a component; it’s the primary constraint on global compute power. Which means this structural shortage protects margins in a way we haven't seen since the mid-90s. If $340 holds, we're looking at a valuation re-rating that makes traditional targets look conservative.
They all have the same problem: high demand, low supply. It takes 12 DRAM die to make 1 HBM stack, and there are 8 HBM stacks on every AI accelerator. Supply at all 3 memory makers has shifted to AI, leaving less for the billions of other products the world uses. This has led to the drastic price increases for memory, and in turn the run in Micron stock.
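The die arithmetic in the comment above multiplies out as follows; a minimal sketch taking the quoted 12-die and 8-stack figures at face value (they are the commenter's numbers, not verified):

```python
# Back-of-the-envelope: DRAM die consumed per AI accelerator,
# using the figures quoted in the comment (not verified).
dram_die_per_hbm_stack = 12      # quoted: 12 DRAM die per HBM stack
hbm_stacks_per_accelerator = 8   # quoted: 8 HBM stacks per accelerator

dram_die_per_accelerator = dram_die_per_hbm_stack * hbm_stacks_per_accelerator
print(dram_die_per_accelerator)  # 96 DRAM die consumed per accelerator
```

Every accelerator eating ~96 DRAM die is the mechanism behind the comment's claim that AI demand crowds out everything else.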
>they aren't innovating like Nvidia

HBM3 to 3E now, to 4 in Q2 2026, to 4E in H2 2026, to custom HBM in 2027, HBF in 2027+, 100IOPS SSDs in 2026, LPDDR6 to 7 in 2026. Arguably they have a higher cadence of upgrades than Nvidia.
Your EPS breakdown is spot on. I think the market is still discounting the margin expansion from HBM. If that next quarter guidance actually clears 9 bucks, 600 might be a conservative target. What is your take on the supply gap heading into 2027?
NAND has a lot of competition from China, which will limit gains. DRAM and HBM though, Chinese companies seem to have very low yields or are straight up lying about capacity, because we aren't seeing any meaningful output.
Micron is #3 in the HBM market, and already much bigger in market cap than SK Hynix. The KORU ETF seems like a better deal at this point.
Bro, all the hyperscalers' executives are camped in South Korea to get any possible supply of DRAM/HBM they can get hold of. Even the biggest companies such as Apple/Microsoft aren't able to secure supply. Imagine the demand. And this is just the beginning; there's a long way ahead for data centers to ramp up. Whoever is investing billions into memory companies isn't dumb.
Hope you followed your own advice a year ago? I got into MU about 6 months ago with a very small position so relatively happy but could have been more. Agree it's late for entry now but in your opinion is the HBM/Storage capacity bottleneck something that can be easily resolved? Like can MU/Samsung/SKH just increase production for a few months but this could all blow over by end of year or is this a stickier bottleneck that could see good gains for a while?
I was pushing these stocks(or similar performing competitors like SK Hynix/Samsung) a year ago back when no one really understood that HBM/data storage would be the major bottleneck but I think they are overvalued now. The main bottleneck now IMO is actually physical datacenter capacity/energy, and that's not one that can be easily resolved quickly. So I don't think earnings will grow as fast as current projections.
Not all RAM is the same. Manufacturers are prioritizing HBM production this time, so consumer-grade DRAM is not getting more capacity, leading to a shortage of conventional DRAM and NAND flash memory. The misconception here is that RAM was being seen as one entity.
Damn MU goin crazy still prolly cuz of HBM
Their utilization is going to be better, and I would expect their latency to first token to improve: with Rubin, prefill isn't done by the really fast-memory GPUs anymore; it's done by CPX parts in the system with roughly RTX 5090-class memory speed but fast compute, and the prefill result is then sent to the GPUs over fast cables. Because they're dividing up which compute box does what, the Rubin GPUs can use their massive memory speed to run inference on mixture-of-experts AI models (basically the only kind frontier labs use anymore) a little bit faster. Every step of the process where they can reduce latency and wait times makes the user experience feel a lot faster. Other speedups (especially HBM4 memory) compound with those gains. If each step of the training process executes 0.1 ms faster, that can add up to several weeks saved per training run. If researchers can attempt more runs faster (since there are a lot of trial runs before the main run), they get to the end result faster. The CPX isn't doing the actual thinking part; that's still done on the GPU, but the thinking part counterintuitively wants memory speed more than it wants more computation, so the system is designed to divide and conquer.
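A scale check on the "0.1 ms per step adds up to weeks" line above: the arithmetic only works if "step" counts something executed billions of times across the run (kernel launches, per-microbatch work across thousands of GPUs), not optimizer steps. A quick sketch, with that reading as my own assumption:

```python
# Scale check on "0.1 ms faster per step saves weeks per training run".
# Assumption (mine, for illustration): "step" means a fine-grained operation
# repeated billions of times across the cluster, since saving one week
# requires roughly 6 billion such 0.1 ms savings.
ms_saved_per_step = 0.1
ms_per_week = 7 * 24 * 3600 * 1000   # 604,800,000 ms in a week

steps_needed_per_week_saved = ms_per_week / ms_saved_per_step
print(f"{steps_needed_per_week_saved:,.0f}")  # roughly 6 billion
```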
The bulk of the announcement is the idea that instead of having one rack (the stacks of compute resources in a datacenter) with a lot of GPUs connected by very fast links (NVLink, InfiniBand, and others), you have a system of Nvidia devices that makes idle time less likely to occur. Nvidia is shipping specialty hardware that makes the most sense at 10m+ sized buildouts, specifically minimizing the idle time that comes from only parts of the system operating, which happens with current all-GPU systems. They want everything running close to flat out, which is hard to do at massive scale without this specialized hardware. They also upgraded the memory from HBM3E to HBM4, which is way faster.
Everyone's saying it's a power infrastructure shortage now, not a chip one. Micron blowing up is because larger models need larger and faster memory for compute, with HBM being the brute-force solution. I don't think the AI hype is as limited by AI accelerators as the above suggests, until there's a new technology leap.
What you're quoting is a minuscule diversification play in Nvidia's roadmap and has zero impact for 2026. HBM is locked in and loaded here, and they have secured the entire production roadmap from MU. There was a really good write-up from a retail investor on Seeking Alpha, "Micron's Nvidia Moment Is Here," which I wholeheartedly agree with. Again, we are talking about a 2026 stock market play here. No one knows what it will be in 2030, or whether the AI bubble will have gone belly up by then.
P/E is the million-dollar question. I have followed Micron since 2010 (?) and the markets have always prescribed an extremely low P/E for Micron. Maybe things will change with this AI hype. Tbh, if markets give Micron the same P/E as other tech stocks of around 30, its share price should be about 360-450. The main risk isn't a Taiwan invasion; I don't think that would happen. The real risk is the cost of the AI infrastructure buildout. If every semiconductor company is now getting 60+% margins, at some point it's going to get cost prohibitive to build out datacenters. You can only squeeze so much juice out of a fruit. Are they not ramping capacity? I haven't looked into Micron's capex plan; somebody who knows the actual numbers, please inform. If they are limiting sales by not supplying enough, it will allow SK Hynix and Samsung to take over. The geopolitical risk just isn't a thing. I also don't know enough about the current HBM3 or HBM4 status of each company to comment on the technology lag of the competitors. One way or another, I can say that I'm more likely to panic if I buy in now and it starts dropping. The real risk/reward for me was when Micron was in its 80 range, but as I said, I was stuck in f\*\* UPS.
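The rough re-rating arithmetic in that comment is just price = P/E x EPS; the $360-450 range at a 30 multiple back-solves to an assumed EPS of $12-15 (those EPS values are illustrative, not guidance):

```python
# Hypothetical re-rating math implied by the comment: price = P/E x EPS.
# EPS values of $12 and $15 are back-solved for illustration only.
pe_multiple = 30
for eps in (12, 15):
    print(f"EPS ${eps} x P/E {pe_multiple} = ${pe_multiple * eps}")
# spans the $360-450 range quoted above
```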
But isn't Nvidia switching from HBM to something else over this whole episode? "Jensen Huang stood on stage September 9 and unveiled a chip that uses gaming card memory instead of their flagship HBM. Not because it's better. Because the economics of their $3.7 trillion empire no longer work. The Rubin CPX has no High Bandwidth Memory. No exotic packaging. No NVLink interconnect. It costs 25% as much to manufacture. Here's what they're not saying: Million token context windows break unified GPU economics. When you process a 10,000 page legal document, the H100's $30,000 worth of memory bandwidth sits idle during the compute phase"
I am pretty deep into semiconductor industry and my main bullish view is the inflation and prices of HBM3/HBM4. I think first half when we see more crunch and that will be good for micron. So it's less about long term and more about what's happening on market in next 6 months.
Out of a big chunk of $MU after an ungodly run. Keeping a couple of runners just in case. SK Hynix and Samsung controlling almost 85% of the HBM market to Micron's 11% and NVDA moving into SRAM tells me the run may be close to over. Good luck to all you MU believers who got shit on last year when it dipped below $100. 🫶
Alright degenerates quick clarification before someone screams FOMO. I am only buying MU and BA calls on small pullbacks. No chasing green candles like a sleep deprived hamster. MU thesis: AI memory demand is still ripping HBM is tight and MU finally looks like it is on the right side of the cycle. After a strong move I am waiting for a healthy dip or consolidation. If it never pulls back fine I miss it. If it does I am loading. Boeing thesis: Yes Boeing is a disaster and that is exactly the point. Expectations are already underground. I am not buying strength here. I am waiting for red days headline fear or weak opens to grab calls. Any slightly less terrible news and this thing can bounce hard. The plan: No strikes no expiries no magic lines. Watching price action volume and overall market mood. Small pullback equals entry. No pullback equals no trade. Trying discipline over FOMO for once. Not financial advice just vibes patience and delayed bad decisions. Roast me 🫡💸
Isn't the company's 2026 HBM inventory already sold out?
Good point, but I like to think of it like $NVDA, or a car company that comes out with a new model. They may be sold out now, but the new item or enhancement will push them higher. The HBM4 will be more efficient, supposedly, from a power standpoint. That will then allow them to sell that product and get contracts signed for the next few years too. But production ramp up costs could cause a slow down in profit margins.
INTC is the best play on SRAM (Static Random Access Memory) and the reason why NVDA bought Groq last week--doing inferencing without having to worry about HBM shortages. There's a reason Jensen poured $5 billion into Intel stake, not just for Xeon CPUs. Here's another interesting factoid, SRAM is not as sensitive to radiation as HBM. If you're going to build data centers in space or on the moon as Musk said, you better be using SRAM based LPUs and TPUs.
Personally, I’m playing it aggressively because unless something big changes in the next 12 months, this is the bottom of the target range. I think it’s likely that they also raise earnings throughout the year as they increase prices and move production to higher margin HBM4. I think that such a fast increase in revenue, earnings, and stock price will likely increase the EPS for both trailing and forward where we just talked about the case where EPS stays this low but they simply execute without outperforming. There are a lot of really positive signals right now. Almost a perfect storm and too perfect. So I’m just focusing on the bottom of the target range and keeping my expectations and risk in check as I hold through these next few months to see how it all plays out.
HBM supply is not going to be solved in 1 quarter lmfao, most 2026 inventory already sold
I see where you're coming from with the declining PE, but sometimes the market’s reaction isn’t always linear. If Micron is still seeing high demand for HBM, even with a lower PE, it could indicate that the market is valuing the stock based on future growth. That said, I agree, if the company doesn’t deliver, the stock could take a hit.
Quoting a 27 P/E is laughable. They are already sold out of the next 3 quarters of HBM, which is 80% of the business. It's around a 9-12 P/E for FY26, which only has 3 quarters left.
Today's move was not totally unexpected. The news is that Samsung and SK Hynix are increasing fifth-gen HBM3E contract prices by nearly 20% for 2026 deliveries... Wait for the dip and back up the truck.
Yeah, it’s crazy how strong their performance is right now. With all HBM sold out for 2026, it's clear demand is just skyrocketing. I wonder how this will affect prices in the short term though.
Micron's profit margins and revenue growth are insane right now and all HBM has been sold for all of 2026. The demand for memory is going to continue to go parabolic
Nah, MU actually fundamentally sound. Because if someone needs a GPU, it don't matter if the GPU is by Nvidia, AMD, Google, Amazon or other Chinese company.. it would still need HBM which MU provides.
8x forward p/e. All supply booked up. Massive margins. HBM cyclicality won’t matter for several years to come. Used in both Goog TPU’s and NVDAs top GPUs. Need I go on? The momentum is more than justified by fundamentals.
I think I'd consider DELL more of a HOLD, and look more at Seagate as a play for downside in the memory game. I think you make some good points for a near-term miss in earnings, and maybe the stock will pull back, maybe not. For the long term, my main rebuttals are: 1) Who will pay 20%+ for Dell? Well, maybe everyone, because enterprise has limited options and the pain you mention is industry-wide, even for Dell's competition. 2) One bad quarter doesn't translate to long-term value destruction. This is almost a play against irrational reactions to a quarterly miss. 3) Capacity is being built for memory, and this is all projected to land in 2027. I only mention this to further argue that this bear case is all grounded in reactions to earnings over the next couple of months. 4) I think you somewhat dismiss the growth case for ISG. Dell's ISG grew 24% YoY in Q3, and AI server orders were $12.3B. The business mix is shifting toward ISG, which doesn't face the same supply chain dynamics (because it's HBM). Dell's strategy is for CSG to become a smaller share of the pie. As ISG grows, this may temper reactions to negative overall news in the stock. You may well be right on the reaction from the market, but let's see. Good luck!
I am someone that is insanely curious by nature. I am so curious to see how this all plays out with OpenAI. I am older and remember Netscape well. To me OpenAI seems to be a lot like Netscape and I suspect will have the same end. Just OpenAI will have far, far, far more debt than Netscape ever had. The one good thing going for OpenAI is the fact they have hoarded some really valuable stuff that has skyrocketed in value since they hoarded it. HBM is a perfect example.
I think people are still trying to grasp what is happening and whether this is an extended super cycle with the ability to set a new floor at the current pace. Just like some of the silver theses out there, HBM is a powerful commodity that is theoretically going to remain scarce for a while under these premises: it is hard to make the advanced memory with high yield, and the amount of memory needed per chip could reach 1TB by 2027/28. 2026 alone could more than 2x to 432GB per AMD MI400. I worry about power constraints limiting the buildout of more capacity, but the next couple of years are bringing a lot more efficient solutions. Energy will need to catch up, and I think solutions are 4-ish years out, but I am just starting to read up on this part of the bottleneck and looking for investment opportunities. Anyway, my thesis is that the energy bottleneck will be addressed by higher turnover and early retirement of outdated units, and those used units can be sold in secondary markets since they aren't "burnt out." They will have the scale to do this, or at least that is how I would offset some of that cost. Therefore, in the short term we could see high-demand churn cycles in order to get the processing-efficiency benefits sooner rather than later. That is how AI can scale to demand while being power constrained. TLDR: AI is going to want that new new to grow capacity every 18 months, to accommodate regional energy bottlenecks. Micron is selling a highly valuable commodity and is in the U.S. while an arms race is happening. Will it cycle? Likely. I am betting on a super cycle and a new floor. We haven't even addressed AI edge cases yet, or wireless robot AI agents. Shit is changing and Micron has big leverage in that brave new world.
Yeah but only three companies make HBM though. Micron is shifting all of its resources to making HBM.
r/stocks post: Is MU's current trend continuation or anticipation exhaustion? (Company Analysis)

Recently, MU has continued to fluctuate within a high-range band. Although volatility has increased, it remains fundamentally strong overall. The market's core logic surrounding it still revolves around AI memory demand, data center expansion, and tight HBM supply; these are indeed genuine long-term drivers. My personal perspective: MU currently exhibits distinct characteristics: solid fundamental logic, not a purely sentiment-driven narrative, and continued support at elevated levels with no significant capital outflow. However, divergence is growing, and sentiment is less one-sided than before. Therefore, in my view, it possesses the core of a trend while entering a phase requiring validation. Moving forward, focus should shift to earnings delivery, order momentum, and industry pricing trends rather than relying solely on imagination. For MU today, the question is no longer simply "will it rise?" but whether the logic behind its ascent can be sustained. While the trend warrants respect, timing and risk awareness are equally crucial. What's your take: does MU currently represent trend continuation or over-anticipated expectations? Personal observations only. Not investment advice.

u/Crazy_Donkies: Look at NVDA in January 2024. That's MU right now. Here's my thesis. NVDA used to be cyclical: https://finance.yahoo.com/news/nvidia-stock-nasdaq-nvda-more-053240502.html Not any more. AI demand through 2030: https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
Tons of demand. And finally, all HBM suppliers are limiting supply; unrestrained supply was a cause of the past peaks and troughs. https://www.semiconsam.com/p/samsung-securities-2025-memory-market Good luck to me. I'm balls deep and see 75% upside.
Guys, let me put something on your radar: WEEBIT NANO. The big three are hyperfocused on HBM, but Weebit has next-level ReRAM.
MU is in a supercycle like Nvidia: not traditional RAM, but HBM. I started collecting at $80 and am still buying. It will triple in 2 years.
Different, and wildly different use case for DDR/DRAM. AI very much needs HBM, arguably more than CPU. The internet isn't HBM dependent. It can run on a potato. Read this as long term demand is ensured. Likely capacity may exceed demand but not for some time. And all 3 manufacturers are managing capacity. We will see.
DRAM has been made by only 3 companies for years. HBM... is made by those same 3 companies. DRAM is high margin specialized tech. Until it isn't, and then it becomes low margin specialized tech.
Had to get my receipts out for this one. [MI300X released December 6, 2023](https://www.amd.com/en/newsroom/press-releases/2023-12-6-amd-delivers-leadership-portfolio-of-data-center-a.html), [H200 announced November 13, 2023](https://investor.nvidia.com/news/press-release-details/2023/NVIDIA-Supercharges-Hopper-the-Worlds-Leading-AI-Computing-Platform/default.aspx) – they're literally the same generation competing products released within weeks of each other. I've been comparing apples to apples this entire time. [Across Microsoft, Meta, Oracle, and TensorWave, AMD shipped 327,000 MI300X units in 2024](https://www.theregister.com/2024/12/23/nvidia_ai_hardware_competition/). Specifically: Meta accounted for 173,000 units, Microsoft 96,000 units, and Oracle 38,000 units. Microsoft is using MI300X to power Azure OpenAI Chat GPT 3.5 and 4 services. [Oracle never stopped buying MI300X in 2025 with ongoing shipments through Q4 2025](https://x.com/MikeLongTerm/status/1998586597810344301). MI325X mass shipments started Q2 2025. [MI355X volume production started June 2025, with Oracle scaling to over 131,072 GPUs](https://blogs.oracle.com/cloud-infrastructure/announcing-general-availability-of-oci-amd-mi355x). On AMD figures being a disappointment – [Lisa Su went into 2024 expecting $2 billion in GPU sales, AMD delivered over $5 billion, exceeding expectations by 150%](https://digidai.github.io/2025/11/17/lisa-su-amd-ai-chip-nvidia-challenge-deep-analysis/). [AMD raised GPU outlook from $4B to over $5B during 2024](https://www.cnbc.com/amp/2024/07/30/amd-earnings-report-q2-2024.html). Stock dropped on slightly softer Q4 guidance and MI350 launching mid-2025 while Blackwell was shipping – timing issue, not execution failure. You've contradicted yourself at every turn. AMD shipped 327k units in year one to Meta/Microsoft/Oracle at scale, crushed their own guidance by 150%, and MI400 specs (432GB HBM4 @ 19.6 TB/s) beat B300 (288GB HBM3e @ 8 TB/s). Same EPYC playbook.
MU is good value now and I’m bullish. But there’s a reason Broadcom is worth way more. First, they’re the market leader in ASICs without a major contender. Marvell is chasing scraps and is now focusing more on networking. Groq sold out for a hefty price but only had $500M in revenues (and thin margins) after 9 years. Cerebras will likely eventually get bought out as well. Broadcom is pretty much the only game in town. Second, Broadcom is also massive in AI networking, coming in second behind Nvidia. Third, they have a software business in VMWare they’re already making a return on. Finally, they have a CEO that has been delivering for over 20 years. Broadcom gets a premium due to the reliability of Hock Tan. Micron is doing exceptionally well but they’re #3 and don’t even have 20% market share (with no real evidence they’ll get there). They offer a commodity that competitors in South Korea and China also offer. There’s also a solution that bypasses HBM (SRAM). Micron is a good investment but its multiple is compressed for a reason. Investors are pricing in cyclicality already. I’ve heard some analysts say Micron is the next Nvidia. This couldn’t be further from the truth. Nvidia is by far the most dominant player in merchant AI chips. No one comes close which is why they earn 75% margins. Micron doesn’t have that same power and is constraining supply because it wants to keep prices elevated. Nvidia is increasing supply and still able to command the same pricing power.
Nvidia is going to go further up. I would imagine Broadcom also. I'm not so certain how much runway is left for AMD as they will end up with the smallest market share, but we are deeply compute constrained, I expect everything to sell out for years. AMD could take a big hit on margins due to HBM pricing though. I've also heard that Google didn't lock in LTAs for memory so that could also take a huge hit. Nvidia has locked up the most capacity from TSMC and from memory makers.
That assumes you believe HBM becomes non-cyclical and that MU can maintain a competitive advantage vs Samsung/SK. Otherwise forward P/E can be deceptive.
"No one's buying MI300X"? 327,000 MI300X units shipped in 2024 across Meta, Microsoft, Oracle, and TensorWave. Meta and Microsoft were NVIDIA's biggest customers and they're diversifying to AMD at scale. You've contradicted yourself multiple times:

* First: "AMD isn't good at training"
* Then: "Supercomputers don't matter"
* Then: "You don't know what an APU is"
* Then: "Focus on GPUs not CPUs"
* Now: "No one's buying MI300X" (despite 327k units shipped)

You're not arguing in good faith. You're just throwing shit at the wall. The thesis stands: AMD has competitive/superior hardware at 1/3 the cost, shipped 327k units in their first year, ROCm is maturing, and MI400 specs (432GB HBM4 @ 19.6 TB/s) beat B300 specs (288GB HBM3e @ 8 TB/s). Same pattern as EPYC.
The H100 being 3 years old and still used proves Burry's depreciation point, not refutes it. Companies are depreciating these over 5-6 years while economic value erodes faster. But here's what you're missing: AMD's generational leaps blow NVIDIA's out of the water.

* MI200 to MI300X: 3.4x performance jump
* H100 to H200: 1.05-1.1x improvement

And looking forward to 2026:

* MI400: 432GB HBM4 at 19.6 TB/s
* NVIDIA B300: 288GB HBM3e at 8 TB/s

AMD has 1.5x more memory and 2.45x more bandwidth than NVIDIA's next-gen chip. Both shipping in 2026. AMD had the performance advantage with MI300X but nobody wanted to deal with ROCm immaturity. Now ROCm is maturing, MI400 is coming with specs that destroy B300, and the software excuse is disappearing. NVIDIA's been iterating incrementally while coasting on CUDA. AMD's been making generational leaps. Once the software gap closes, the hardware advantage becomes undeniable. Argue some more. You're showing the thread how dumb you are.
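For what it's worth, the 1.5x and 2.45x figures follow directly from the spec numbers the commenter quotes (vendor claims for unreleased parts, not measured benchmarks):

```python
# Sanity check on the ratios quoted above, using the spec-sheet numbers
# from the comment (vendor claims, not measured benchmarks).
mi400_mem_gb, mi400_bw_tbs = 432, 19.6   # quoted MI400: 432GB HBM4 @ 19.6 TB/s
b300_mem_gb, b300_bw_tbs = 288, 8.0      # quoted B300: 288GB HBM3e @ 8 TB/s

print(round(mi400_mem_gb / b300_mem_gb, 2))  # 1.5x memory capacity
print(round(mi400_bw_tbs / b300_bw_tbs, 2))  # 2.45x memory bandwidth
```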
Memory (DRAM/NAND) is basically a commodity; one company's chip is mostly interchangeable with another's. So the cycle goes:

1. demand picks up → prices rise → everyone makes money
2. high margins → all 3 players (Samsung, SK Hynix, Micron) build more fabs
3. new capacity comes online 2-3 years later
4. supply exceeds demand → prices crash
5. everyone loses money → capex gets cut → supply tightens
6. repeat

It's the nature of capital-intensive commodity businesses: you can't turn fabs on and off quickly, so supply always overshoots or undershoots demand. The bull case for this cycle is that HBM (high bandwidth memory for AI) is harder to make and supply-constrained, so it might not follow the same pattern; jury's still out.
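That boom-bust loop can be sketched as a toy cobweb-style simulation: capacity responds to today's price but only arrives two years later, so it chronically overshoots. Every number here is invented purely for illustration:

```python
# Toy model of the memory cycle described above: fabs are ordered based on
# today's price but only come online two years later, so supply chronically
# overshoots. All numbers are made up purely for illustration.

def simulate(years=10, lag=2):
    demand = 100.0
    pipeline = [100.0] * lag   # capacity already under construction
    prices = []
    for _ in range(years):
        supply = pipeline.pop(0)
        price = demand / supply          # scarcity proxy: >1 shortage, <1 glut
        prices.append(round(price, 2))
        # high prices trigger aggressive buildouts, low prices trigger capex
        # cuts -- but the response only lands `lag` years later
        pipeline.append(supply * price ** 3)
        demand *= 1.05                   # steady demand growth
    return prices

print(simulate())  # prices swing above and below 1.0: boom, bust, repeat
```

The exponent stands in for how aggressively producers respond to price; lowering it toward 1 damps the swings, which is roughly the bull case being made for disciplined HBM supply.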
Fair correction on MI300A - it's an APU, not a discrete GPU like MI300X. I should've been more precise. But that doesn't change the core point: AMD delivered integrated CPU+GPU solutions (whether you call it APU or discrete) that beat NVIDIA in competitive evaluations for El Capitan and Frontier - $1.2B in contracts for exascale AI workloads. And yes, NVIDIA has Grace now - another fair point. The thesis isn't about semantic distinctions between APU and GPU. It's about AMD being competitive across the full stack (EPYC + Instinct MI300X/MI400) at a 3-4x cost advantage. When MI400 ships with 432GB HBM4 and hyperscalers are deciding where to spend $30B, the integrated offering + cost differential matters more than whether one product is technically an APU. You've pointed out technical imprecisions. Cool. None of it refutes that MI400 specs are competitive, ROCm gap is closing, cost advantage is real, and the EPYC playbook worked once already.
Nvidia has long been working on ASICs, but their ASICs are fundamentally **GPU-based**, not like Broadcom’s approach. Groq, Cerebras, and SambaNova (acquired by Intel) — the “big three” in SRAM-heavy designs — have a completely different technical path from Broadcom. SRAM takes up much more wafer area and is more expensive, and the applications aren’t as broad as Broadcom’s. However, in **edge computing**, SRAM offers much higher bandwidth and lower latency than HBM, giving it an advantage over HBM + GPU setups. This time, Nvidia essentially **“bought” the company at a 3x premium to sidestep regulatory issues**, eliminated a competitor, and brought the technical team over — a huge win. They’re likely planning to develop ASICs based on a **GPU + LPU architecture**. On top of that, the founder of Groq is **one of the inventors of the TPU**
Fair points on ROCm - it's still catching up, no question. But the gap is closing faster than people think. PyTorch has official ROCm support now, and Meta/Microsoft are already deploying MI300X at scale for inference. At the very least it's being integrated and considered by players that once stood by NVIDIA. That's enough for me to see the value. Basically ROCm sucks until it doesn't, and it's getting closer to not being shit every day. On the RAM modules: MI400 is coming with 432GB HBM4 at 19.6 TB/s bandwidth. If NVIDIA's pluggable RAM strategy works, great for them - but AMD's already shipping competitive memory specs on integrated packages. The question is cost and time-to-market. Your 40/40/20 split is actually pretty reasonable for risk management. I'm more concentrated (heavy AMD, short NVDA) because I think the 2026-2027 window is the inflection point, but I respect the diversified approach. The CUDA moat is real. I'm just betting that when CFOs are looking at $30B infrastructure budgets and ROCm is "good enough" for most workloads, the 4x cost gap becomes impossible to justify. Same thing happened with EPYC - Intel's ecosystem was "better" until the economics forced adoption. Appreciate the thoughtful take instead of just "AI FUD" dismissal.
Both fundamentals and momentum, and that's what makes it tricky.

The fundamental case is real:

- HBM is structurally different from regular DRAM: only 3 companies can make it (Samsung, SK Hynix, Micron), and supply is genuinely constrained.
- AI demand isn't slowing; every data center buildout needs memory.
- Last earnings showed they can actually capture this demand and charge for it.

But memory is memory:

- Every cycle looks like "this time it's different" until it isn't.
- Oversupply is always one bad quarter away.
- At ATH, you're paying for a lot of good news already priced in.

How I think about it: MU at these levels is a "right thesis, tricky entry" situation. The bull case is solid, but buying at ATH in a historically cyclical industry takes conviction. Practical approaches depending on your situation:

- Already long: trim a bit, let the rest ride.
- Want exposure: wait for a pullback to the 20-day or 50-day MA. MU pulls back 10-15% regularly, even in uptrends.
- Want income while you wait: sell cash-secured puts at a strike you'd actually want to own; get paid to wait for a dip.
- Full send: if you truly believe HBM changes the cycle, then ATH doesn't matter over 3-5 years; size appropriately.

I'm in the "probably a stronger cycle than usual, but still a cycle" camp: not chasing here, but not bearish either. On my watchlist.
I hear you on training vs inference. But look at what's coming with MI400 in 2026 - specs show 2x the compute of MI350, 432GB HBM4 at 19.6 TB/s bandwidth, positioned directly against NVIDIA's Rubin with comparable performance. Obviously these are engineering projections, not shipping benchmarks yet. But if the specs hold up anywhere close to AMD's claims, it's not just competitive on inference anymore - it's closing the training gap hard while keeping the cost advantage. Here's what people miss: AMD powers El Capitan and Frontier - the top 2 supercomputers in the world. They make both the CPUs and GPUs. NVIDIA is stuck on one side (GPUs), Intel is bleeding out on the other (CPUs, literally fighting off a government bailout). Intel's so desperate they're partnering with NVIDIA just to stay relevant. AMD already proved they can execute on both fronts. MI300X was proof of concept. MI400 is where they go toe-to-toe across the board at 1/4 the cost. Same playbook that killed Intel's server dominance. NVIDIA's next.
Bro, we just have to let time give us the answer. My view is that you have to make your choice first, then let time show how it plays out. And I believe in the choice I made — the fundamental story for MU is solid. With HBM4 sampling ahead of schedule and capacity sold out through 2026, any major dip is just an opportunity. Patience pays off.
True but HBM isn't a standard commodity. It's high margin specialized tech that's sold out for years.
I welcome an entry point on a 6% drop. This was a mid-2026 bottleneck play, like HBM; I wasn't expecting it to go up 100% the same day.
true, I was in Lite and others for a bit. But I dug a little deeper into the supply chain and found this massive point of failure for the AI buildout. Really hard to value points of failure like this, but as with HBM, I just decided to buy the bottleneck for the most direct exposure.
Micron is absolutely crushing it right now. Their HBM production is sold out through the end of 2026. Think about that: they've already locked in the revenue for the next year. They aren't just a memory cyclical anymore; with 60-percent-plus margins they're essentially an AI infrastructure play. While everyone is chasing NVDA, the smart money is looking at MU because it's the backbone of the whole AI stack.
So are the products between these 3 memory companies differentiated, including this next-generation HBM product? I keep hearing they are commoditized, giving the impression that they are all the same (but all currently benefiting from the surge in demand).
Their latest HBM4 is very competitive. They've indicated customers want multi-year contracts, but they are reluctant to commit, to maximize margins. Meanwhile, as you said, they are sold out for a year. You and I don't know if data center expansion will continue after 2027, but I'm very much pro-AI and expect it to continue past 2030. This is where you and I part ways in expectations. Obviously SK Hynix and Samsung are formidable, but we also have a President pushing for domestic capacity. But again, this returns to where you and I differ: "the bubble."
They are prioritizing their Idaho HBM factory, in time for demand, over their NY location, and will continue to grow revenue. Plus SK Hynix is indicating demand past 2030. The AI buildout is unanimously expected to continue for half a decade. This would need to change.
My counterargument is that memory is not as moated as bleeding-edge silicon manufacturing: the Chinese can make DDR5 with decent enough yield too (though I'm not sure about their HBM capabilities). Not to mention Micron is not the only supplier: Samsung and SK Hynix are there to compete for the contracts as well. While their memory is all sold out for now, this goes back to the question of how long this AI bubble can stay afloat. Will there be another massive wave of data center projects requiring such massive amounts of memory in 2027? MU feels kind of like a leveraged bet on the bubble.
This and all 3 major HBM manufacturers are limiting production to shorten cycles and limit peaks and troughs.
HBM isn't cyclical any more, either. If GPUs, servers, etc. have a 4- to 6-year life in data centers, HBM will be replaced along with them. MU says hi!
I still prefer MU over them. MU being one of the top suppliers of HBM for NVDA, AMD, and Google is a much more lucrative business to be in than consumer flash memory.
Da fuq are you talking about? What do metals have to do with HBM?
Exactly. Everyone's staring at the 2-month dips while ignoring that HBM is what’s actually powering the AI buildout. You literally can't run NVDA silicon without MU memory
Yeah, I share the same concern. I’m still holding some MU and thinking about whether I should gradually scale back. I know a lot of Chinese companies, like CXMT, are working on HBM upgrades. Once they master the tech, they could advance really fast and drive costs way down
If Groq scales, it absolutely cuts the aggregate need for DDR5/HBM in inference clusters. While training stays memory-hungry, inference is where the volume is. If 80% of the world's tokens move to SRAM-centric chips, Micron’s 'Total Addressable Market' in the AI data center shrinks significantly.
Groq’s secret sauce is that it largely abandons external memory (DRAM/DDR5/HBM) in favor of on-chip SRAM. SRAM is roughly 20-25x faster than the HBM on an H100. By bringing this tech in-house, Nvidia is signaling they no longer want to pay the 'Micron tax' for inference tasks.
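For a sense of why memory bandwidth (not capacity) is the binding constraint in inference, here's a rough roofline-style sketch. The H100 SXM HBM3 peak bandwidth is the published spec; the model size and weight precision are assumptions for illustration:

```python
# Why batch-1 inference is bandwidth-bound: every generated token has to
# stream all model weights out of memory once. Illustrative numbers only.

H100_HBM_BW = 3.35e12      # bytes/s, published H100 SXM HBM3 peak bandwidth
params = 70e9              # assumed 70B-parameter model
bytes_per_param = 2        # fp16/bf16 weights

weight_bytes = params * bytes_per_param
# Upper bound on decode speed; ignores compute, KV cache, and batching.
tokens_per_sec = H100_HBM_BW / weight_bytes
print(f"~{tokens_per_sec:.0f} tokens/s ceiling per GPU")
```

This is why moving weights into on-chip SRAM (or batching many requests per weight read) changes the economics: the ceiling scales directly with effective memory bandwidth.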
Yeah, totally agree someone benefits but it’s just unlikely to be a tiny new RAM/GPU player. The ones on the come‑up are the existing memory giants shifting more capacity to high‑margin HBM/GDDR and enterprise NAND, plus the packaging and controller names tied into that AI supply chain. Consumer shortages basically translate into better pricing power for Samsung / SK hynix / Micron and their ecosystem, not a brand‑new consumer RAM or GPU company that suddenly takes over the market.
MU is entering a super cycle that could continue for years. SK Hynix is saying HBM demand will grow 30% through 2030. The Edge AI and Vision Alliance is stating a 15x increase in demand by 2035. JPM stated tight supply until 2027. GPUs used to be cyclical, but not any more. HBM may be in constant high demand for many years. MU is my top conviction for 2026.
Morgan Stanley's top picks for 2026:

• NVDA - still the highest ROI in AI compute. Vera Rubin ramps in 2H26, delivering a step-change vs Blackwell. Faster, denser, more profitable. This is infrastructure, not a trade.
• AVGO - the cleanest way to play custom silicon and AI networking. ASICs don't replace Nvidia - they expand Broadcom's lane.

Supporting pillars:

• ALAB - hyperscale AI needs connectivity. Smaller cap, direct leverage to data-center buildouts.
• MU - HBM stays tight, pricing power holds, memory matters more as AI scales.
• AMAT + TSM - no advanced chips without tools and fabs. Capacity = leverage.
• NXPI + ADI - the quiet winners as AI demand moves from servers into the real economy.

I know MU trades at a lower multiple because they are cyclical, but their forward PE is under 9. Their last ER was simply amazing and blew away all expectations.
Yeah, of course, the need for HBM will definitely not grow with the datacenter buildout. Thank you for your smart and researched take.
To give some context, all the new GPUs/TPUs incorporate HBM (high-bandwidth memory). Only three companies supply the leading chip manufacturers: SK Hynix, Samsung, and Micron. The advantage these three have over other memory providers is essentially insurmountable for the foreseeable future, barring a redefinition of chip memory strategy. Micron is the only American company and, as of about a month ago, has essentially decided to go all in on HBM because nobody in the industry can keep up with demand (all three key players are sold out through 2026) and the margins are greater than on any other product they were shipping. With HBM being a highly specialized product, the industry will likely not be commoditized by the end of the decade. Micron is in an excellent position financially, technically, and geographically to continue large growth for quite a while. I'd encourage you to look further into the industry. It's fascinating stuff.
Upside in 2026, until memory supply catches up or demand growth flattens out - keeping in mind HBM and DDR5 are supplied by just three companies now.
You're looking at the chart in the rear view mirror. Look at company forecasts and the macro investment in AI and HBM. Either way, in 9 months it will be up another 66%.
Dude, just read the ERs and the breakdown of sales. It doesn't matter whether it's NVDA, AMD, or Google - they're all buying memory in huge amounts, and this only intensifies in 26Q3 when NVDA releases GB400 and AMD the MI450, which is why they need HBM4 memory. Also, HBM3E is sold out; SK Hynix and Samsung raised prices for HBM memory, and MU didn't announce how much they hiked but followed suit. Just invest in it, dude - put in 5k if that's what you'd bet. Do your research and consider that low risk with predictable outcomes is so nice, with good chances of an upswing. Again, if you don't know how to gauge/compare companies and ERs, stick to the basics:

- PE
- Forward PE
- Operating margins
- PEG
To whomever reads this comment and believes it over mine: just look at management's comments regarding their demand and the multiyear contracts they're getting, the macro investment in AI, a simple Google search on the demand for HBM, their NVDA partnership, and their pivot from consumer to data centers. The cycle for HBM is just beginning.
That’s pretty much how I see it as well. Right now pricing and inventories seem like the most reliable signals, and HBM feels like a cleaner way to think about AI demand. The hardest part is still figuring out how much of this move is truly AI driven versus just a normal cycle recovery at these valuation levels.
The pricing stabilization is probably the most reliable signal right now. When spot prices for DRAM and NAND start holding steady after a downturn, that usually means supply-demand is finding balance. For AI demand specifically, I watch HBM allocation more than general memory shipments. MU's been pretty transparent about their HBM3E capacity being sold out through 2025, which is different from past cycles where demand was more speculative. The tricky part is figuring out how much of current valuations already price in that AI growth versus traditional datacenter refresh cycles. Inventory levels at major customers are back to normal ranges, so we're past the destocking phase at least. I'd say focus on their guidance around bit shipment growth vs. ASP trends. If they're growing revenue mostly through volume while ASPs stay flat or decline slightly, that's healthier long-term than a pure pricing recovery that could reverse quickly.
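That volume-vs-pricing distinction is easy to sanity-check with toy numbers. A minimal sketch (both scenarios below are made up; the point is that similar headline growth can come from very different mixes):

```python
# Decomposing revenue growth into bit (volume) growth and ASP change.
# Illustrative numbers only, not MU's actual figures.

def revenue_growth(bit_growth: float, asp_change: float) -> float:
    """Combined revenue growth from a volume change and a price change."""
    return (1 + bit_growth) * (1 + asp_change) - 1

# Healthier mix: volume-led growth with slightly soft pricing.
print(f"{revenue_growth(0.20, -0.02):.1%}")   # +20% bits, -2% ASP

# Fragile mix: pricing-led growth that can reverse quickly.
print(f"{revenue_growth(0.02, 0.20):.1%}")    # +2% bits, +20% ASP
```

Both mixes print similar-looking headline growth, which is exactly why the bit-shipment vs ASP breakdown in guidance matters more than the top-line number.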
The Finviz stats are interesting - [https://finviz.com/quote.ashx?t=MU&p=d](https://finviz.com/quote.ashx?t=MU&p=d) I'm new to analysing stocks so I might be getting the wrong end of the stick (please tell me if I am), but the forward PE and PEG seem to suggest that there is decent room for growth. Micron claim that their HBM3 chips consume ~30% less energy than their competitors' products ([source](https://www.tomshardware.com/pc-components/ram/micron-puts-stackable-24gb-hbm3e-chips-into-volume-production-for-nvidias-next-gen-h200-ai-gpu)), which could give them an edge.
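For anyone else new to these ratios, PEG is just forward P/E scaled by expected earnings growth. A minimal sketch (the inputs below are placeholders, not current Finviz data):

```python
# PEG ratio: forward P/E divided by expected EPS growth rate (in percent).
# A value below ~1.0 is conventionally read as growth not fully priced in.
# Placeholder inputs, not live data.

def peg(forward_pe: float, eps_growth_pct: float) -> float:
    return forward_pe / eps_growth_pct

# e.g. a forward P/E of 9 against 30% expected EPS growth
print(peg(9.0, 30.0))   # prints 0.3
```

The usual caveat: PEG is only as good as the growth estimate you feed it, and cyclical names like memory makers are exactly where forward estimates swing the most.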
Nintendo is one of the first casualties of the nonsensical AI memory depletion. Stock is dropping like a lead balloon introducing true fear into a company so cautiously managed, they've secured a fuckton of inventory in advance of the Switch 2 launch. Drawdowns like these are the times where investors should be licking their lips to buy in. The big memory firms will just come right back to consumer the moment the super cycle ends. Happens every time with commodities. HBM is no different, despite what some might say. What worries me is how big the comedown from the super cycle will be. I hope Micron, SK Hynix, etc. are building up their balance sheets.
That's a good question. I'd argue that AI is amplifying the memory cycle rather than fundamentally transforming it. While AI demand for HBM and DRAM does create structural growth, memory remains fundamentally a cyclical industry. Capital expenditures, supply rhythms, and macroeconomic conditions remain the key drivers. The difference is that during periods of strong AI investment, the upward momentum can be more intense and sustained.
Yes. Extreme RAM + HBM shortage until 2027
Honestly not a ton. Maybe 285 or a little more. 350 by June, and I'd bet my dog on it. Forward PE is absurdly low and HBM is the new gold.
Your logic is right. But that's not the current case for Micron or the memory industry. This is just the beginning of growing AI demand; their revenues are forecast to grow over the coming 2 years at least. And these are entirely my own thoughts: this is just the beginning of the AI race, and there will be lots of hardware upgrades in the coming decade (the hyperscalers will want the best, as only the best will survive in this AI bubble). All these upgrades will need HBM. Memory is no longer going to be a commodity. So it's a fair risk to be in MU through 2026 at least.
The RAM and storage shortage started a year ago when hyperscalers and mega-cap companies announced accelerated spending on AI data centers. Micron themselves stated three quarters ago that 2026 was sold out; 2027 is soon to be sold out. Dell/HPE/SMCI and tons of others in Asia will pass on the cost to the mega-cap companies, and those guys can afford it. Semis like Nvidia, Broadcom, and AMD have also stated that the increase in HBM will be passed on in full.
How was MU's earnings a surprise for anyone?? Everyone knew what they were doing with RAM prices, and they already told us they went full port on HBM a few months ago. Earnings were exactly as I expected - nothing "unexpected".
A typical memory cycle this is not. HBM will be sold out until the original orders need new tech… others see this as a recalibration; just look at the P/E after the correction was re-corrected this week ✌️
The counterarguments:

- Memory has fooled people before. Every cycle looks like an inflection until it isn't.
- AI capex could slow if hyperscalers don't see ROI.
- Oversupply is always one bad quarter away.
- Valuation already prices in a lot of good news.

That said, HBM is structurally different - only 3 companies can make it, and AI demand is real. My take: probably a stronger cycle than usual, but still a cycle. Size accordingly - MU can drop 30-40% even in good times.
MU up 200% YTD while flying under the radar compared to NVDA and AMD. Memory is the AI bottleneck no one talks about. HBM demand is only going up. Anyone else looking at the memory side of the AI trade, or is it all GPU focus here?
True, a bunch of Chinese companies are working on HBM, but their quality and yields aren't matching Samsung, MU, or SK Hynix yet.
Isn't there like a generic HBM paradigm that people are trying to push, that's cheaper and easier to make (requires less specialty)?
That's possible. The difference this time is that HBM is harder to ramp and demand is tied to AI workloads, not consumer cycles. Still cyclical, just maybe less extreme.
Will HBM end up oversupplied pretty quickly?