Reddit Posts
AMD's new MI300x vs the field, plus future projections.
The Samsung Rival Taking an Early Lead in the Race for AI Memory Chips
A detailed DD for AMD in AI (Instinct MI300 breakdown)
4 Penny stocks that billionaires are loading up on
Is it possible to live on patent litigation? NLST is the most interesting example
NLST is revolutionizing the memory market (NAND & DRAM) - Samsung and Micron to pay IP licenses and damages for Netlist technology
Nvidia releases a new "nuclear bomb", Google's chatbot is also coming, and computing-power stocks are riding another wave of limit-up halts
2023-02-28 Wrinkle-brain Plays (Mathematically derived options plays)
Hudbay slides after Q4 miss, reduced 2023 production guidance (NYSE:HBM)
Russia/Ukraine Conflict = Metals Squeeze | Choose Wisely!
$HBM – HORNBACH BAUMARKT is a rare, underpriced value stock w low free float <25% (think "GERMAN equivalent to HOME DEPOT")
HBM DD. SHORT INTEREST HIGH, VOLUME LOW, SOLID FUNDAMENTALS
Mentions
70%-90%, mostly HBM and NAND stocks. Some Google, some gold, Nasdaq, and some in this AI managed fund which has yielded great returns in the past 3 years
"When asked about software efficiency, management argues that models like DeepSeek V4 don't reduce the total memory needed, they just change the type. V4's "Engram" architecture offloads data from expensive HBM (on the GPU) to high-speed DDR5 and Enterprise SSDs. Micron is a leader in both, effectively telling investors: "If they buy less HBM, they will have to buy significantly more of our DDR5 to maintain the Engram lookup tables."
How do you see GTC affecting it? My understanding is that upcoming GPUs will come with ever-increasing HBM requirements. Most reports I read paint a very rosy picture for the memory chip industry, with massive pricing power for the producers, namely SK Hynix and Samsung
I know next to nothing about the political situation but I’ve been reading everything I can find on the HBM memory chip bottleneck that’s forecast to continue for the next year and beyond. SK Hynix and Samsung own over 70% of the global market and their revenue forecasts are constantly being upgraded. Aren’t they, along with Hyundai, largely responsible for the current and future growth of the kospi?
I don't even know why it gets lumped in with the HBM suppliers. NAND doesn't go into GPUs, just into servers as SSDs, right?
Cheat code right now is S Korean chip makers. Green day after day cuz they're the biggest suppliers of HBM to Nvidia.
Low-key the S Korean chip makers who supply Nvidia with HBM will run cuz of this. EWY, FLKR, KORU for ETFs with exposure to the sector. HY9H on the Frankfurt Exchange if you're willing to jump through hoops to buy 100% SK Hynix, which has a major catalyst coming once it gets an ADR listed on US Exchanges.
Samsung doesn’t get talked about as much mainly because a lot of retail discussion is US centric and it’s harder for people to access/analyze foreign listings. It’s a massive, diversified conglomerate which is both a strength and a reason it doesn’t move like a pure AI play. The upside is there, especially with memory/HBM exposure, but you’re also buying into cyclical semis and Korean market dynamics, not just the AI narrative
From your comment, are you implying that logic dies don't go into HBM? I'm not saying at all that MU won't grow, but you're the one trying to make a statement by replying to this thread which is about TSMC. Are you saying Micron is somehow not beholden to TSMC? Just being clear here.
Sold out of HBM4 & HBM3E for 2026, not sure how the numbers could be bad. They also started shipments of HBM4 a quarter early. Def high expectations; if they aren't met I might be in a world of trouble. Time will tell
No. They’re primarily using Nvidia but will use AMD and Google to keep Nvidia in check. Also, everyone is just trying to secure TSMC wafers and HBM capacity. Whoever has those will get deals with big tech.
SK Hynix has run into an issue with HBM4 requiring a photomask revision meaning potential shipment delays
They forecast over $8 EPS for the coming quarter 1.5 months before NAND, DDR5, and HBM3/HBM4 doubled in price. Did you seriously make your comment unironically despite seeing SNDK 2x this EPS forecast? This place is filled with idiots.
Yep, he might as well hold it through this year. Check back on March 19 when MU reports earnings and see if they increased their prices more than the market anticipated. If so, then MU should be up to the $500-$600 range. Then, check in mid-June for that report to see what kind of pricing power MU is still holding. If the profit margins remain strong around 50%, then it should gap up to the $750 range. The word is Samsung is expected to raise their HBM4 prices by 30% from current levels. With costs already priced in, any price increase in these memory chips is going straight to net income.
Onto Innovation. A metrology/inspection company and a direct beneficiary of HBM, advanced packaging, and glass substrates. Specifically look into their DragonFly and JetStep machines.
Yeah they might be kinda forked. All their HBM is made in Taiwan and Japan and their exemptions no longer apply under section 122
> 3.0 was a mediocre model but all counts point to 3.1 being a banger.

This subreddit was raving about 3.0 being the best model when it came out...

> Enterprise subscriptions for Gemini Pro grew by 300%+ year-over-year in 2025, compared to around 155% growth for ChatGPT Plus/Enterprise.

300% of a small number is still a small number.

> Did you forget Google Workspace was a thing?

I have never heard of any organization that uses Google Workspace other than schools.

> This is like saying Ford is at the mercy of the supplier of their hubcaps.

It really isn't:

- Broadcom designs Google's chips; they are not "in-house" chips like many claim. Broadcom is making similar chips for many other competitors.
- HBM really is an essential component for AI chips. You cannot make a high-performing AI chip without huge quantities of HBM, and prices have increased 4-5x. If Google hasn't locked in long-term purchasing contracts and is paying spot price, they are in big trouble.

> Oh, weird. OAI has 800-900 million weekly active users and about 35 million of them are paying. I guess I was mistaken.

And those users will soon generate money via ads. It's the same argument people made against Google in the early 2000s: how is a company whose product is free sustainable?

> Again, short term thinking for a long-term thesis. "Oh ram is hard to get today" or "Oh Gemini 3 was bad". Complete inability to look at the very, very simple big picture which is that Google has a money printer machine, owns half of the internet and the services people use, and the architecture needed to refine AI in the coming years (oh, and the cash needed for it).

1. Google's "money printer machine" is short-term thinking. When you realize the majority of Google's revenue comes from search ads, which are becoming obsolete, that money printer is going to disappear.
2. The concern is that Google's leadership fails to make the right decisions.
Losing top researchers to competitors, allowing OpenAI to corner the HBM market, failing to invest in LLMs until they were years behind.

> But sure, OAI which is having to resort to ads on the third lap of a 100 lap race is positioned to win.

It's not that they need to; it's more that why wouldn't they? It takes time to perfect an ad engine, and it would reduce their cash burn rate.
> The point I'm making is that building an investment thesis off the fact you don't like a model is beyond stupid.

It's not about not liking a model. It's about running consistent tests across all models in areas such as coding, research, and science, and measuring the results. The problem is Google is optimizing their models against well-known benchmarks like ARC-AGI. So to truly measure performance, you have to build independent benchmarks that engineers at these companies do not have access to.

> In the long run, Google is going to be able to train their models better, faster, and for cheaper since they're running this shit on their own TPUs

Their TPUs are worse than Nvidia's products; they aren't magic. The only real advantage is they can customize them more to their liking to fit their applications.

> Open AIs didn't have the SOTA model for over a year. They had their week in the sun for two seconds with 5.3, which was quickly muted by Opus 4.6, Sonnet 4.7, and now Gemini 3.1.

1. Sonnet 4.7 is not out yet.

> Those armies of users they have over Google? They're not paying, and even if they were, the 100s of messages they're sending their AI-therapist aren't sustainable even at a $20/mo subscription. Open-AI has successfully courted the least profitable, most expensive users to have in the AI race.

I'd argue that it's the opposite of what you're describing. OpenAI has significant enterprise adoption, whereas Gemini is mostly used by consumers. Google is providing access to their pro models for free, whereas OpenAI charges $20 a month to access higher-tier models, or $200 for unlimited.

> Anthropic has the enterprise market, Google has the masses that are using it for search, and OAI has teenagers who really need a therapist.

OpenAI has tripled their revenue in the past year, led by enterprise; enterprise customers have grown from 25% to 40% of their revenue.
OpenAI and Anthropic are roughly even for enterprise, whereas Google is mostly just giving models away for free to try and keep people using Search/Google products instead of jumping ship to ChatGPT.

> Open AI has no such advantage and is at the mercy of my boy Huang. Anyone with eyes sees that if you're not also in the hardware game, the AI race is yours to lose.

Google is at the mercy of Broadcom, who they license their chips from; TSMC for manufacturing; and SK Hynix/Samsung/Micron for HBM. They really aren't any better off than OpenAI. In fact, I'd argue OpenAI is better off because they locked in long-term deals for DRAM and cornered the market. I have a background in AI. HBM is the main bottleneck for AI, because the larger your KV cache and weights, the more HBM you need to run a model. OpenAI got ahead of everyone else by landing secret deals for 40% of global DRAM supply, and it's a huge win for them.
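The "larger KV cache = more HBM" point above is easy to put numbers on. A back-of-the-envelope sketch, where the model shape (80 layers, 8 KV heads, head dim 128, fp16) is an assumption roughly matching a 70B-class model with grouped-query attention, not the spec of any particular product:

```python
# KV-cache sizing: keys and values are cached per layer, per KV head,
# per token. The model shape here is an illustrative assumption.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> int:
    """Bytes needed to cache K and V for one sequence."""
    # 2x covers the key tensor plus the value tensor.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

per_token = kv_cache_bytes(80, 8, 128, 1)       # bytes per generated token
full_ctx = kv_cache_bytes(80, 8, 128, 32_768)   # a full 32k context
print(f"{per_token / 1024:.0f} KiB per token, {full_ctx / 2**30:.1f} GiB at 32k")
# -> 320 KiB per token, 10.0 GiB at 32k
```

So one long-context sequence on this hypothetical model eats ~10 GiB of HBM on top of the weights, and every concurrent user adds another cache.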
How are their margins doing at these HBM3 prices tho
100% agree. The big 3 are working hard to increase supply but they won't be able to bring new factories online until 2027, and even then it'll be a subset of the factories that they're trying to build to meet the demand. But I agree that we need a solid thesis for why demand will continue to outpace supply. Here are my big 4:

1) The AI infrastructure buildout we've seen so far has actually been for AI training. Training is compute-heavy but not really memory-intensive. The next stage of this AI revolution will be focused on inference. Inference, unlike training, is very memory-intensive. The models are huge, and the context sizes that they're dealing with are getting bigger and bigger. The only way to provide a fast inference service is by putting all of that in memory as the model runs through its calculations. If you look at what Groq and Cerebras are doing for fast inference, they're basically building solutions to bring memory directly onto the chip. That's how important this is. But they're doing it with SRAM and there are a lot of physical limitations around it. HBM, although a bit slower because it's off-chip, can scale up to the required capacities. If you look at what Nvidia and AMD are doing with their latest GPUs that are a bit more focused on inference, you'll see that the memory those chips are set up for has gone up significantly. AI inference at scale is new, so this is new demand on top of the previous computer/smartphone/SaaS server memory demand. This inference scale-up is just getting started and it's already eating up something like 60% of total memory.

2) Waymo and self-driving cars are just starting to scale up as well, so that's another market that carries a lot of memory demand.

3) In a few years, we'll start to see more AI-centric edge devices. Meta's smart glasses have already started to make some progress but it's really early on.
By the end of the decade, we'll likely have AI-focused devices, like smartphones and laptops, that will run inference locally, all creating more demand for memory than previous workflows.

4) Robots. I don't think they're anywhere near ready for scale today, but they probably will be in the 2030s. Once again, they'll need to run inference locally, which will require memory.

All of this is new demand on top of the previous cyclical memory cycles. The models will continue to get bigger. Some new device will eventually be useful and become a huge hit. Robots will eventually scale up on manufacturing lines. There is no AI dimension that has me more bullish than memory right now.
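The "inference is memory-intensive" claim above has a simple roofline intuition: generating each output token requires streaming the model weights out of HBM once, so single-stream decode speed is capped by memory bandwidth rather than FLOPs. A sketch with assumed figures (a 70B-parameter fp16 model, ~3.35 TB/s of HBM bandwidth, roughly H100-class), not a benchmark:

```python
# Roofline-style cap on single-stream decode: tokens/s <= bandwidth / bytes.
# Ignores KV-cache reads and kernel overheads, so it's an optimistic bound.
# All numbers are illustrative assumptions, not vendor specs.

def max_decode_tokens_per_sec(weight_bytes: float, hbm_bytes_per_sec: float) -> float:
    return hbm_bytes_per_sec / weight_bytes

weights = 70e9 * 2      # 70B params at 2 bytes each (fp16) = 140 GB
bandwidth = 3.35e12     # assumed ~3.35 TB/s of HBM bandwidth
cap = max_decode_tokens_per_sec(weights, bandwidth)
print(f"~{cap:.0f} tokens/s upper bound for one stream")
```

That cap is why inference boxes batch many users together, and why more (and faster) HBM translates directly into serving capacity.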
OPEN HBM and CCOI calls spank me later.
ha, I literally just bought all of these, and that is, from a mathematical risk/reward perspective, the best thing to do, rather than choose one individual stock.

Adobe is being unfairly punished right now, as the market digests the narrative that AI will obliterate SaaS companies. It's possible, plausible even, that AI reduces long-term profitability for software providers, but we haven't actually seen this yet - it's just a story. The company's profitability has been untouched; revenue, margins etc. continue to grow, yet we see valuation compression. This is a disconnect at the moment; it may prove true, but I'd bet that as Adobe continues to execute and integrate AI into its products, it will be labelled a winner and see multiple expansion, perhaps not to historical levels, but far from the lows of today.

MSFT: the market isn't loving the AI spend, with lots of Azure capacity being given to OpenAI. That's reasonable - it is somewhat risky to have revenue concentrated in a few customers. This isn't that concerning to me; the capex is being spent on assets that are clearly very profitable. The disaster scenario is that the capex gets spent but the revenue doesn't show up, which seems unlikely at this point. Payback period on AI servers is currently 1-1.5 years, which is great, though rental rates do collapse by about 90% when new chips are released every 2 years or so.

MU: definitely missed the boat on the supply/demand imbalance. They're sold out for the next couple of years on HBM3, so that volume is largely priced in; however, pricing on these chips isn't set yet, and is done quarterly before delivery, so MU could see continued upside from price hikes. There are few players in high-end memory, so this is oligopoly pricing power in a supply shortage - a pretty amazing outcome.
Because you’re probably talking about the equivalent of DDR3 RAM when datacenters run on HBM Your $75 SSD is probably like PCIE 3.0 whereas datacenters use U.3 NVMe
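The gap that comment gestures at is easy to quantify: peak bandwidth is roughly bus width times transfer rate. A sketch using the interface widths from the published standards (64-bit for one DDR5 DIMM channel, 1024-bit for one HBM3 stack), with the same 6.4 GT/s rate for both to isolate the effect of bus width:

```python
# Peak bandwidth ~= (bus width in bytes) x (transfer rate in GT/s).
# Same transfer rate on both sides, so the ratio is purely the bus width.

def peak_bandwidth_gbs(bus_bits: int, gigatransfers_per_sec: float) -> float:
    return bus_bits / 8 * gigatransfers_per_sec

ddr5 = peak_bandwidth_gbs(64, 6.4)     # one DDR5-6400 DIMM channel
hbm3 = peak_bandwidth_gbs(1024, 6.4)   # one HBM3 stack, 1024-bit interface
print(f"DDR5: {ddr5:.1f} GB/s, HBM3: {hbm3:.1f} GB/s, ratio {hbm3 / ddr5:.0f}x")
# -> DDR5: 51.2 GB/s, HBM3: 819.2 GB/s, ratio 16x
```

A GPU then stacks several HBM devices, which is how accelerators reach multiple TB/s while a desktop with a couple of DIMM channels sits around 100 GB/s.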
Agree, more competition is def good. In terms of replacing SK/Taiwan... it's not easy. But SK/Taiwan are not just no longer selling DDR; they are changing those production lines to produce HBM instead. Once changed, it's not a simple change back (3-6 months to turn around). If SK/Taiwan leave the market empty for too long, CXMT will reach a state where they can produce it cheaper and undercut them in volume. CXMT aren't great yet either; their DDR has node problems so they can't quite get as much RAM per silicon. They also have lower yields than the others. However, the longer they have to fix those issues and add production, the less ability SK/Taiwan will have to try and get back into the market. Those supply chains start redirecting, and China usually starts defending market share once it gets it with 'dumping'.
I'd agree with your last half, but most of these new "AI players" are still relatively new; many tech companies burn cash for years and years if not a decade-plus (Uber, Netflix, DoorDash, Amazon, SoFi). This "bubble" has barely even started IMO, and if technology continues to improve at the speed it is, they may never need to drop prices. Nvidia's new Vera Rubin makes inference much cheaper due to HBM4 and more compute, Micron's new 6th-gen SSD is a huge advancement in dropping cost as well due to higher GPU utilization, and CXL is also making huge waves but isn't quite good enough yet. My point is the revenue is there and growing insanely fast, while the technology is giving them a path to profit by becoming way more efficient every year. In 5 years, a million tokens for Claude's Opus 4.6 could cost pennies instead of dollars, meaning all they have to do is wait for the tech to get more efficient and all of a sudden that revenue is super high margin.
Comparing current Nvidia and current Intel doesn't work. A more apt comparison may be two fabless semiconductor makers like Nvidia and AMD, or if you want to compare CPUs only, then Intel and AMD. What was NVDA's FCF in 2022 or 2023, before AI GPUs really took off? Intel is now a national-security, too-big-to-fail type of entity. Intel has 70% market share of data center and server CPUs and is sold out for 2026. 18A is a breakthrough node internally, and 14A is getting interest from Nvidia, Apple, Qualcomm and many others. Intel is re-entering the RAM market through a joint venture with SoftBank's Saimemory, with aim at the HBM market (they just showcased a prototype at a conference in Japan). As AI moves into the inference stage, more work will be done at the edge with CPUs, or similar to what Google is doing with TPUs. I could go on as to why your comparison is outlandish.
Anyone seen any projections/market analysis for LPDDR5? Trying to better project future growth in my MU valuation model (which is more focused on HBM right now). TLDR: models and infra appear to be skewing away from GPU memory (e.g. Rubin).
Again: research how HBM4 isn't plug and play. Longer lead times and customization will lead to memory stock rerates. Here's your Google search: does hbm4 memory need to be customized
China is several generations behind. Micron is expanding its factory in Japan, but I'm not aware of any Japanese companies making HBM4. Most of Micron's Japanese competitors got out of the industry in the '90s.
Where is the spot pricing in futures market for HBM4?
In general, shortages don't last. HBM and GPUs will ultimately follow the same pattern. Nvidia certainly has a head start, but they do not have an insurmountable moat. AMD will start shipping its MI450 in the second half of this year. Nvidia's GPUs are better, but AMD's are good enough. Remember Nvidia has 70% margins and all the hyperscalers are getting sick of paying the Nvidia tax. On top of that, they're producing ASICs for specialized compute.
Not all memory companies are created equal. MU has more patents on memory; their major product is High Bandwidth Memory (HBM). WDC has fewer patents on memory; their major product is spinning hard disk drives, which are outdated. Buy MU. For non-volatile memory products, SNDK is better, since their major products are NAND memory (SSDs), another form of non-volatile memory. DRAM is volatile memory.
Interesting 🧐

> According to TrendForce's latest findings, Samsung is currently projected to be the first to obtain HBM4 certification, with SK hynix and Micron expected to follow shortly thereafter, forming a three-supplier ecosystem for NVIDIA's HBM4 supply chain.
>
> As conventional DRAM prices have surged sharply since 4Q25, narrowing HBM's historical profitability advantage, memory vendors are recalibrating capacity allocation between HBM and conventional DRAM to balance overall revenue growth with customer delivery commitments.
>
> Under these circumstances, NVIDIA's reliance on a single supplier could hinder the ramp-up of its Rubin platform, which is why NVIDIA is expected to incorporate all three major memory suppliers into its HBM4 supply chain.
Micron at 12 PE with HBM4 ahead of schedule, new fab acquisitions, and supply constraints coming is a solid setup. The fake FUD about production delays got debunked and now analysts are raising targets. The 450-600 range looks reasonable given the catalysts. Supply exhaustion in 2027 is a real tailwind. This could run hard from here if they execute.
“We have been in high volume production on HBM4. We’ve commenced customer shipments of HBM4. And we see shipment volumes ramping successfully this calendar, Q1," said Micron CFO Mark Murphy. "This is a quarter earlier than we mentioned during our December earnings call." Everyone forgets this is a global shortage. Global, it does not matter what any other company does. No one company can pick up all of the slack.
Micron, because datacenter memory demand will not be cyclical anymore. Its valuation is that of a highly cyclical company, but extremely low under the assumption that demand for HBM will only go up from here.
Anyone who thinks memory is a cyclical commodity business at this point is underestimating:

1) sovereign AI/data center builds in virtually every country on the planet that can afford it - and even off-planet orbital data centers
2) entire new categories like consumer or enterprise robotics with onboard AI inferencing
3) new memory architectures for quantum computers
4) new classes of mobile or wearable devices - and automotive.

Basically anything that requires compute. There's going to be demand for whole new classes of objects imbued with artificial intelligence. Will it all be HBM? Of course not, but margins are rapidly expanding across all memory categories: LPDDR, NAND, SRAM, DDR...
High Bandwidth Flash is the next thing btw. Jensen mentioned storage one time in the eight years I've owned Nvidia that I've caught. HBM is very expensive, so they will be bringing flash closer to the GPUs.
wait until you studs realize that HBM4 isn't plug and play. especially HBM4E. these are becoming highly customized with long term contracts because of tooling and r&d. MU and the big 3 will get a rerate on TTM very soon. 25x + pe ttm. do the math.
Micron got excluded by NVDA from their HBM4 program, it was understandable people didn't know where it would go from there.
Goldman Sachs now expects 2026–27 DRAM undersupply of roughly 4.9% and 2.5%, respectively, well above prior forecasts, and NAND undersupply of about 4.2% and 2.1%. That’s "the most severe one during the last 15+ years," Lee said. Server memory — including conventional DRAM, SOCAMM and HBM — is now the dominant driver of global DRAM demand. Goldman Sachs now anticipates conventional DRAM pricing up about 176% year‑over‑year in 2026, with average selling prices approaching historically strong levels. Operating margins for major memory producers — especially for DRAM — are forecast to reach 70%‑80%, near record territory. NAND pricing is also expected to rise, though more moderately. The firm sees 100%‑120% year‑over‑year price gains in 2026 and strong operating margins above 40% for major producers.
Just like Nvidia, it has 10x'd since ChatGPT was released. It has way less of a moat than Nvidia (no CUDA lock-in). SK generally only has a 6-12 month lead over the others (slightly longer vs Samsung) in terms of shipping a new HBM generation.
At the investor meeting they said the HBM4 rumor was false. I'm also fairly bullish even in the short term, and I assume that with earnings being crazy it will have an SNDK-like rise just as fast.
$MU, Samsung JUST started production on HBM4 Meanwhile Micron: "We have been in high volume production on HBM4. We’ve commenced customer shipments of HBM4. And we see shipment volumes ramping successfully this calendar, Q1," said Micron CFO Mark Murphy. "This is a quarter earlier than we mentioned during our December earnings call."
Micron's HBM4 was prepped and sold one quarter earlier than expected. This is brand-new news. Yet we're still under ATH. This shit's gonna fly soon.
maybe, but I think the stock will go down right after earnings because of regarded institutionals taking profits, and the general market fear will force a correction at first. I'm super bullish on Micron long term; I think it's great and clearly still in the game even with HBM4. I'd think about buying in March though, tbh.
Mu 400 and sndk 600 eod. Break psychological barriers. And screw SemiAnalysis for reporting falsely about micron HBM4 performance then deleting their post. Damage done but refuted today by micron execs.
# Micron stock rises 5.5% after CFO clarifies HBM4 production status (https://www.investing.com/news/stock-market-news/micron-stock-rises-55-after-cfo-clarifies-hbm4-production-status-4500098)

> Investing.com -- [Micron Technology (NASDAQ:MU)](https://www.investing.com/equities/micron-tech) stock rose 5.5% Wednesday morning after the company's CFO addressed concerns about its HBM4 (High Bandwidth Memory) product during a presentation at the Wolfe Research Auto, Auto Tech and Semiconductor Conference in New York.
>
> The memory chipmaker's shares gained ground **after executives clarified what they described as "inaccurate reporting"** regarding the company's HBM4 technology. According to the CFO's statements, **Micron is already in high volume production of HBM4 and has been shipping the product to customers.**
>
> The company also revealed it is ramping up HBM4 shipments in the current quarter, **one quarter ahead of its previous expectations.** Executives emphasized that their **HBM4 product delivers performance exceeding 11 gigabits per second,** and expressed high confidence in the quality and reliability of the memory technology.
>
> Lynx Equity Strategies analyst KC Rajkumar commented on the clarification, suggesting it should "put to rest the noise that had been running around" regarding Micron's HBM4 capabilities.
>
> The positive market reaction comes as high bandwidth memory has become increasingly important for AI applications, with demand for such advanced memory solutions growing among technology companies developing artificial intelligence hardware and infrastructure.

Finally we broke 400 again. This is from today's Wolfe Research Auto, Auto Tech and Semiconductor Conference.
They raised the guidance just didn't specify by how much and said the recent HBM4 news is inaccurate
Great investor call by Micron. MU mentioned an increased financial outlook since their last earnings call, so higher guidance is incoming. Even lower P/E. They mentioned how robotics is going to need more memory in the coming years, and addressed how they have one of the best HBM products and are very pleased with its progress.
Did MU ceo say if they’re going to sell HBM4 to nvidia or not?
CFO addressed the FAKE NEWS about their HBM4 being delayed. All fud.
fiber optic cable manufacturers have been on absolute fire for the past month. HBM shortage got folks trying all sorts of stuff
Samsung beat them to HBM4 by a couple months
New Chinese investment will take at least 3 years to enter production. Meanwhile, China is also moving production to HBM, since it can't import HBM for its Huawei AI chips. Because HBM has lower yields and a high loss rate, Chinese DRAM production will actually decrease in 2026-2027.
Your exact comment is why I'm up $1.6M on my position. Micron has been telling the market this for a year: memory demand is exploding, HBM is not cyclical, and its margins are double DRAM and NAND. No one wanted to listen. They've been burned by Micron too many times, as the cyclical-to-secular story has always tried to be spun but never worked out. To most big investors, Micron is still a prove-it story, and they are skeptical of margins peaking. They look at it as risk, even though the story is clear.

Also, analysts try FUDing Micron all the time. When it was in the $100s there was more FUD about margin compression and HBM price wars to the bottom; I tried telling people that was FUD back then, but retail believes whatever it is told by the news. So there's your answer: it's still a prove-it story to most big investors. Micron is estimated to do $18B revenue next quarter; I'm gonna go on the record here to say it will be closer to $22B. They are going to crush it. This is the only US memory supplier, and they haven't even gotten a multiple re-rating yet; once that comes this thing can fly.
SK Hynix ran into HBM’s top margin profile while AImaxxing in the data center and instantly CLOCKED PEAK SHORTAGE 😳 The fatfraud hyperscalers went full AllocationFearmaxxing, signing LTAs and inflating memory ASPs to protect FRAME after being brutally BOTTLENECKMOGGED while the DRAM/NAND-chads DEEPLOCKED IN higher ASPs nuked capacity normalized inventory MODE and consumeroids electronic normies lost AURA to commodity-memory-input-cost causing irreversible PnL bleed 💀💯
not with their backlog, being sold out of high-demand HBM. If it gets much below 200, I'm sure the whales will buy in for a prospective 500 return, with time.
Apparently google is skipping HBM4 and going straight to HBM4E, this micron panel tomorrow will be JUICY.
I think there's some fear surrounding MU because of its insane run so far and the fact that Nvidia passed on them as an HBM4 supplier for Rubin. But I have a hard time believing that the big capex spend from other big tech companies won't benefit Micron as well, as long as memory supply remains tight. NFA of course but just my thoughts.
Hey everyone, I'm looking for some level-headed advice from people who've traded cyclical semis/memory names. I'm holding Micron (MU) with an average cost around $405 (small position, shares only - no options). Right now MU is around the high 370s and I'm down, and I've caught myself watching the chart all day. The problem is: I'm not buying or selling, just staring at the screen and getting more anxious.

I understand the bull case is AI/HBM + tight memory supply, and Micron's last reported quarter and guidance looked strong (record-ish numbers, and management talked about tight supply-demand conditions extending beyond 2026 and higher FY2026 capex around $20B). But I also get that this is a cycle, and the stock can drop hard even when the story is intact.

Here's what I'm trying to figure out (and I'd appreciate concrete rules, not just "hold"):

• If you were in my spot, would you scale out / de-risk near break-even if it bounces, or hold through the volatility?
• What would you consider an "invalidation" signal for MU's current setup (HBM pricing? DRAM/NAND contract prices? capex ramp from competitors?)
• How do you set a stop / risk line on a high-volatility cyclical like this without getting chopped out?
• Any practical tips to stop obsessively checking the price would be helpful too.

I'm okay with blunt feedback. I'm trying to build better discipline and avoid emotional trading.
The thing about SanDisk stock dropping doesn't even make sense in any way. Samsung HBM is DRAM, while SanDisk is NAND. Both are for AI data centers but for different parts and applications. HBM DRAM complements NAND; they don't compete against each other. It's like saying more GPU supply means SSDs will be phased out, which doesn't make sense.
Goldman Sachs BS report + Samsung HBM4 deal with Nvidia. FUD in my opinion, they’re trying to get it at a cheaper price.
Apparently this so called next gen NVDA contract was the only thing that mattered and they are never going to make money anymore. Only NVDA is buying HBM.
They’ll probably be fine, and sell HBM4 to AMD in the meantime but it’s a blow to sentiment imo
MU closed its consumer-facing Crucial brand to go all-in on HBM, only to lose its next-gen contract with Nvidia to SK Hynix and Samsung for the next year because its HBM4 is too slow to qualify. Still booked for '26, but losing the next-gen contract is huge. Only down 16% from ATH; puts still feeling safe at this level.
The semiconductor memory industry has always been cyclical with timing advantages that prove temporary. Samsung accelerating HBM4 production by a month doesn't fundamentally change the supply-demand equation - HBM demand is growing so aggressively (projected 30%+ CAGR through 2028) that multiple suppliers will be needed regardless. Micron's historical pattern with HBM2/3 was entering post-ramp anyway, so this isn't really a deviation from expectations. For those holding at a premium, the question isn't whether Samsung beats them to market by a few weeks - it's whether you believe total addressable memory demand for AI workloads justifies current valuations over a 3-5 year horizon. MU's earnings next month will give concrete guidance on their HBM4 roadmap. Panic selling before that data point seems premature, especially when the stock only moved 2-3% on this news rather than the 10-15% you'd expect if institutions viewed it as genuinely material.
"Korea Economic Daily: NVIDIA allocated the HBM4 volume it needs to the three memory makers last December, with the split at mid-to-high 50% for SK hynix, mid-20% for Samsung Electronics, and around 20% for Micron." This report runs contrary to SemiAnalysis's estimates.
Ignore the bullshit. Nothing has fundamentally changed. MU is still sold out, LPDDR5 is higher margin and still going into NVDA products. MU lags first releases but compensates with higher yields, leading to more durable revenue. Their HBM4 is already sold out; who cares if it goes to GOOGL, AMZN, MSFT, etc. for their GPUs instead of NVDA.
You won't ever be a successful trader if you operate off what analysts say. They aren't there to help you. This news was meant to take Micron down, either to buy at a lower price or to profit off a put option. Same with life, and the government; most news is cherry-picked or carefully worded to get you to act a certain way. This is the real story:

- It was never booked as revenue; Micron didn't expect to be part of the first wave of the HBM4 ramp
- Micron, as it did with HBM2 & 3, usually starts supplying post-ramp with allocations around 15-25%
- The stock hardly moved (~2-3%) on this news because the big players knew this was a nothingburger and media FUD; if it were a big deal it would have dropped at least 10-15%
- Demand still outpaces supply through 2027
- Micron will likely supply Vera Rubin in CQ3/4 as always planned
We'll find out in 2030, but demand is already 20x what it used to be because of AI computing, and that's not even to mention HBM. But regardless, I'm out at $650 later this year.
Thoughts on MU? After the Samsung HBM production news
My gut says Samsung will use HBM4 to win more market share from Hynix, which in turn hits MU.
Samsung denied cutting prices on HBM3E. Plus, why would they when even Samsung is sold out with no additional capacity until 2027? They couldn't make more if they wanted to.
Very much so. In fact, some Google searches will show that Micron rarely meets readiness for first runs of a new spec and often comes in later. MU is still expected to have HBM4 for other manufacturers, and likely NVDA in the future. GOOG, AMZN, and MSFT are all making custom GPUs as well; that's a huge market. Plus they're sold out.
NVDA did not choose MU's HBM for the Rubin chip, so it dropped.
HBM demand isn’t a winner-take-all thing. Starting earlier doesn’t lock up the whole market, especially with how tight supply is. Micron coming online in Q2 still has plenty of buyers waiting.
You realize that SNDK doesn't make memory, right? They make NAND. There are only 3 companies that make HBM: MU, SK Hynix and Samsung. The Chinese fabs are still a couple of nodes behind. So if you are buying SNDK cuz HBM is sold out, it's not going to work out well for you. Luckily for you, high-performance storage (NAND) is also in short supply, but there is much more supply elasticity for NAND than HBM.
They're sold out, likely into 2027. Plus they will definitely sell HBM4 to NVDA, maybe just not right away. Finally, Samsung and SK couldn't possibly take all the market's business. Plus MU is more efficient at HBM. You're just mad for buying at the top.
They're still supplying Nvidia with parts in the same model; it's just not HBM4 since they were already sold out.
Patience. My fear is it lost $385 support today. I'm going to hold through 2/11 and see what they say. Lots of people concerned about NVDA HBM4 sales. I'm confident the market is large enough and they will also get some NVDA sales for HBM4, but we need to consolidate, wait for some more good news before the next leg up. Market and thesis is still very good, and we may just need to wait for earnings.
Reports from Yonhap and Seoul Economic Daily today confirmed that Samsung has not just caught up, it leapfrogged the industry. Samsung passed Nvidia's qualification and will start shipping HBM4 to Nvidia for the Vera Rubin platform as early as next week (post-Lunar New Year). This is Samsung's vertical integration moat in action, eliminating the external foundries and middlemen that the other two are so dependent on. In 2024-2025, being specialized (like Micron) was the winning strategy. But in 2026, as the technology becomes incredibly complex (HBM4), Samsung's ability to own the entire process allows them to move faster than anyone expected.
ngl, that kind of bonus makes sense given how crucial HBM is rn for AI; gotta keep those engineers happy and onboard.
I'm long MU. In my opinion SNDK, STX, and WDC will all be eating MU's dust by EOY, if not by MU's next earnings mid-March. The long-term game here is within HBM and DRAM, both of which Micron is the leading producer in the United States.
Memory is sold out. Just because Samsung started HBM4 production a month sooner than expected, it doesn’t mean there won’t be any demand for Micron’s HBM4 when they’re producing it in Q2.
SanDisk does not produce HBM, as the article says, and with memory capacity maxed out, it's not like this fundamentally changes SanDisk or Micron's offering. Also, the initial report was from a biased company called SemiAnalysis, released last Friday, that has since been picked up as commentary by others. Micron said in their Sep and Dec earnings that they were on track for HBM4, even with new specification requirements. We will find out the truth anyway. But it doesn't change the fact that HBM is in demand, regardless. Buy the dip.
I can't comment on automotive, or general cloud, but ARM has no major presence in AI systems and will miss out on the vast majority of growth and money. The accelerator matters most, and most companies either use GPUs or make their own ASICs (Google, etc.), not to mention the great startup companies building ASICs. High-performance inference is an incredibly hard problem; it requires specialized dataflow and cutting-edge HBM, which no ARM design supports. The biggest thing about ARM and RISC-V is the ISA, which hardly matters for building AI systems.
Idk but I didn’t realize MU dropped because Samsung is mass producing the new HBM memory and apparently that’s why it dropped today
Looks like Micron hasn't even started its mass production of HBM chips made with EUV technology, whereas Nvidia has primarily produced its powerful technology with EUV for the last couple of years. Am I reading this correctly? Shouldn't that be huge over the next year once their more advanced chips are available? More expensive chips, more money?
The HBM shortage is real—SK Hynix paying 2,964% bonuses signals they're racing to scale production before competitors catch up. AI data centers need this capacity now, not in 2026. If you're tracking memory plays, the margin expansion here is structural, not cyclical. I mapped the supply-demand dynamics on $DRAM here: [$DRAM](https://aimytrade.io/ticker/DRAM?utm_source=reddit&utm_medium=comment&utm_campaign=StockMarket&utm_term=DRAM&utm_content=variant_1770654846055_b1zg7x)
SemiAnalysis is trash and always wrong; they claim MU's HBM market share for Vera Rubin is zero.
By that logic, wouldn't sndk dump too? They're not included in HBM4 with NVDA either
Didn't some news come out that their HBM4 memory got like 0 orders from Nvidia or something? I might be wrong, but I read something like that today
Samsung report, ramping HBM production big time, new foundry in Texas coming online soon
Too bad Samsung is just finishing a huge foundry in Texas aimed at HBM
This is SK Hynix protecting the crown jewels. HBM talent is scarce, AI demand is exploding, and they don’t want engineers getting poached mid-boom. The massive bonus says management thinks this AI memory cycle has real legs. Bullish signal on confidence, but it also locks in higher costs if memory prices cool later.
MU said as much in their financial PowerPoint. They are sold out of HBM and HBM4 into 2026 maybe 2027. Apparently they feel good about their customers' buying plans.
New MU report:

> Yonhap reported Samsung plans to start mass production of HBM4 as early as this month for Nvidia's next-gen AI platform Vera Rubin.
>
> The report says Samsung has cleared Nvidia qualification and secured orders, while Micron has guided to ramp HBM4 in Q2 2026.
That Korean article was straight up FUD, MU is still supplying chips but already sold out of HBM4 for the year.