
HBM

Hudbay Minerals Inc.


Mentions (24Hr): 1 (0.00% today)

Reddit Posts

r/wallstreetbets

AMD's new MI300x vs the field, plus future projections.

r/wallstreetbets

The Samsung Rival Taking an Early Lead in the Race for AI Memory Chips

r/stocks

Nvidia Call and Outlook Notes

r/wallstreetbets

A detailed DD for AMD in AI (Instinct MI300 breakdown)

r/wallstreetbets

AMD AI DD by AI

r/pennystocks

4 Penny stocks that billionaires are loading up on

r/StockMarket

Is it possible to live on patent litigation? NLST is the most interesting example

r/pennystocks

Is it possible to live on patent litigation? NLST is the most interesting example

r/pennystocks

NLST is revolutionizing the memory market (NAND & DRAM) - Samsung and micron to pay IP licenses and damages for the netlist technology

r/wallstreetbets

Nvidia releases a new "nuclear bomb," Google's chatbot is also coming, and computing-power stocks hit a wave of limit-up halts again

r/wallstreetbets

2023-02-28 Wrinkle-brain Plays (Mathematically derived options plays)

r/WallStreetbetsELITE

Hudbay slides after Q4 miss, reduced 2023 production guidance (NYSE:HBM)

r/pennystocks

Russia/Ukraine Conflict = Metals Squeeze | Choose Wisely!

r/wallstreetbets

$HBM – HORNBACH BAUMARKT is a rare, underpriced value stock w low free float <25% (think "GERMAN equivalent to HOME DEPOT")

r/wallstreetbets

HBM DD. SHORT INTEREST HIGH, VOLUME LOW, SOLID FUNDAMENTALS

r/options

Some notable activity from Friday's trading

Mentions

Micron is certainly a big player. SK Hynix, Samsung, Micron, Intel/Softbank are all racing to develop next gen memory technologies to replace current gen HBM.

Mentions:#HBM

If you believe in the AI supercycle and that hyperscaler HBM capex will continue without slowdown, then it's probably a buy. With Micron I have concerns about disruption as we approach 2030 and beyond. There are a lot of new memory technologies on the medium-term horizon, specifically 1T1C 3D stacked DRAM, capacitor-less DRAM, and Z-angle memory. They are also building new US fabs, which is great, but the last thing you want to invest into is an 'Intel in 2020' situation: their CPU revenue was through the roof from a short-term bubble (COVID), they decided to spend $100Bn on fabs to meet the demand, the demand evaporated, and then the stock tanked. This could potentially happen to Micron if classic HBM demand does not keep pace or if HBM is disrupted by newer memory technologies.

Mentions:#HBM

It's absolutely real. Integration of CPOs is extremely important for power draw and efficiency and these companies are getting *absurdly* huge orders - see AEHR and Soitec for example. Best risk-adjusted spaces are test equipment like KLA and AEHR, CPO vertical integration like Broadcom, and most importantly, HBM hybrid bonding. There are spaces with room to run for sure - check out BE Semiconductors and SUSS MicroTec, as well as the big packaging names like Onto.

Mentions:#AEHR#HBM

> Let's be clear, it wasn't trained entirely from scratch. It was built off of 4.5 and then 5 and then 5.1

Let's be clear: that's how the US companies build their models too. That's why it's Opus 4.7 and not Opus 5.

> There's a difference

Same same but same same.

> Next, the hardware they used is based upon the 910B which is about 80% of the H100. Throw NVDA and its faster

You are falling into the fallacy that they are just using one GPU to train on. What you describe only matters if you use one GPU. But the reality is that's not how training happens. It's not one GPU; it's clusters of thousands. So it's not what one GPU can do, it's what clusters of thousands can do. A Huawei cluster can be competitive with an Nvidia cluster because it simply has more GPUs.

> Then with the memory crunch, you have the H200 using HBM3 stomping that Chinese hardware.

Again, the same erroneous fallacy. See above.

> Point is, NVDA can't be rivaled and unless you want to throw ridiculous amounts of hardware, power and cooling, there is no challenge here.

Except they are challenging them. That's why not a single H200 has been sold in China months after the US not only allowed but asked them to buy. If you don't believe me, believe Jensen; he may know a thing or two about it. "Huawei's technology, based on our best understanding at the moment, is probably comparable to an H200." Which is why they don't want to buy the H200. "They've been moving quite fast. They've also offered this new system called Cloud Matrix, which scales up to even a larger system than our latest generation, Grace Blackwell. Huawei, as you know, is a formidable technology company. And they're not sitting still." As I said, bigger clusters with more GPUs deal with any per-GPU Nvidia advantage. https://wccftech.com/nvidia-ceo-confirms-huawei-cloudmatrix-ai-cluster-now-competes-with-grace-blackwell/

Mentions:#NVDA#HBM
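The cluster-vs-single-GPU argument in the comment above is easy to sanity-check with arithmetic. A toy sketch (all per-GPU numbers and the 0.9 scaling efficiency are hypothetical round figures, not real benchmarks):

```python
# Toy model of aggregate cluster throughput: a larger cluster of slower
# GPUs can match or beat a smaller cluster of faster ones.
def cluster_tflops(per_gpu_tflops: float, n_gpus: int, scaling_eff: float = 0.9) -> float:
    """Aggregate throughput, discounted by an assumed interconnect/scaling loss."""
    return per_gpu_tflops * n_gpus * scaling_eff

fast_small = cluster_tflops(1000, 10_000)  # hypothetical: 10k faster chips
slow_big = cluster_tflops(800, 13_000)     # hypothetical: 13k slower chips

print(f"fast/small: {fast_small:,.0f} TFLOPs, slow/big: {slow_big:,.0f} TFLOPs")
print(slow_big > fast_small)  # the bigger cluster wins despite weaker chips
```

The point is only that aggregate throughput scales with cluster size, so a per-chip deficit can be bought back with more chips (at the cost of power, cooling, and interconnect complexity).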

Let's be clear, it wasn't trained entirely from scratch; it was built off of 4.5, then 5, then 5.1. There's a difference. Next, the hardware they used is based upon the 910B, which is about 80% of the H100. Throw in NVDA and it's faster. Then we can compare that 910B against an H200, which runs at nearly 2000 TFLOPs. Then, with the memory crunch, you have the H200 using HBM3 stomping that Chinese hardware. Point is, NVDA can't be rivaled, and unless you want to throw ridiculous amounts of hardware, power, and cooling at it, there is no challenge here.

Mentions:#NVDA#HBM

What narratives.yo... Chip Chip Chips.. leather jacket CEO is my king! Project Stargate and 1000 godzillions! No, no.. it's memory that's the bottleneck! HBM and NAND to be specific. We need faster memory. No, no.. it's the network that's slow. 6G is the answer. Nokia comes shining 🌟  No, no.. critical minerals - without them, nothing! Any county that has them wins the world domination race. Did I say, without them - nada! Wait.. crypto miners already have all the hardware. No need for any new hardware. They're the ones to invest in!! Let's go.. Yandex pivoted to NBIS! Vlad type CEOs.. and new CFOs each quarter. Nice 👍🏻  Forget that.. Oil is still the King 👑 Economy runs on oil. Blockade that sh1t.. War is over. We don't need oil. Space is where the new battles will be fought. SpaceX.. SpaceX!! Trillion dollars IPO baby! Yeaaa.. but what if everything can be hacked? Heard of mythos, cyber security and project Glasswing? Top banks gathered in an emergency meetings? Palo Alto is the one!! Okay, let me just get VT and sleep peacefully?

Mentions:#HBM#NBIS#VT

Hobby Lobby to pivot to HBM foundry

Mentions:#HBM

I can understand that, mate. I am from Taiwan and live in Taiwan. I was also very curious why none of the posts were discussing TSMC.

First, do not worry about the war. It's not going to happen for at least 3 years, and if it happens, it is likely to be World War III and the stock is the last thing you are going to worry about. I have 50% TSMC in my portfolio. If the war comes, I will be at the coastline with a rifle, because it is compulsory for men to go back to the military. I will not miss my stock; I will miss my family.

Second, the semiconductor industry is complicated. Even in Taiwan, not a lot of people understand how big and strong the moat TSMC has built and is building. TSMC is famous, but not familiar to people. I have over 30 friends working in the semi industry: TSMC, MediaTek, Realtek, ASE, Phison, UMC… And none of us knows what we are doing in the big picture; we are each just "a screw" in the factory. Funny enough, I think the best material for understanding how good TSMC is as a company is an American's book, Chip War by Chris Miller.

Third, if you believe in AI, TSMC is the bottleneck. Nvidia, Broadcom (Google), AMD, and Apple scramble to TSMC for their GPUs and CPUs. Micron and SK Hynix need TSMC's advanced packaging to have HBM. Tesla also wants TSMC's chips, but Elon thought it was too expensive. He went to Samsung and even had a desk in their factory. And it seemed that he was disappointed, and now he wants to build his own Terafab. Good luck, fella.

Maybe one day you will see Google's TPU take half of Nvidia's share. Or Apple launches a fantastic personal assistant agent and AI iPhone sales hit an all-time high. TSMC will still be the bottleneck of all these developments. If you want to buy a good company, TSMC is a good one. A lot of brilliant and hard-working Taiwanese will work for you. :)

Mentions:#UMC#AMD#HBM

Oh hell yeah. Q2 FY26 massive beat: EPS $12.20 vs $9.31 est (+31%). Rev $23.86B. HBM3E sold out. Q3 guide $24-24.5B. Fwd P/E 4.5x insane value.

Mentions:#HBM

>GOOG faces a potential avalanche of addiction lawsuits after a plaintiff successfully sued for addiction.
>Rising energy costs will increase GOOG's operating costs
>GOOG is forced to spend hundreds of billions on AI CAPEX just to remain relevant (providing AI services like Gemini to users at no extra cost and without ads)
>Rising HBM and chip prices are likely to reduce GOOG's return on invested capital.
>Ad revenue growth has depended on increased ad density (more ads experienced by users), rather than organic growth in engagement, both across YouTube and

Do you realize those points also apply to OpenAI and Anthropic? Do you realize that Alphabet has several other highly profitable segments helping to fund its AI initiatives, which OpenAI and Anthropic do not have?

It's interesting that you knew exactly what I was talking about and you're quick to say that it specifically is renting. But you should look at the wording they use: "You can **purchase** Outposts servers", not **rent**. I understand why you're confused, because you have to give the servers back; this seems strange from an individual consumer's point of view, where when you purchase something you keep it forever. But this is not unusual for tech or B2B deals (where after 3 years the product might no longer have value, but the manufacturer might not want it on the open market). This is what they are talking about when they say that they will sell their own chips; there's no reason to think otherwise unless you are replacing their words with your own.

> Renting is not as expensive as building out your own data centre

It depends on how you define expense. Building out your own means you get priority usage and you get to define performance levels, and renting from AWS is very expensive. Renting can be cheaper, but it isn't always.

> HBM is already sold out

Do you have some reason to think that AWS is not one of the reasons it's sold out? I find it very unlikely that they just decided to stop regular HBM orders during all-time-high demand.

> So if Amazon has to put their products on the shelf, they are gonna have to cut their AWS upgrade

This would be part of their upgrade.

Mentions:#HBM

AWS does not currently sell any chips to third parties; they are only considering doing it. Renting is not as expensive as building out your own data centre, and it also cuts out the procurement time, initial sunk cost, and obsolescence. You also only pay for what you use. Amazon needs to balance fab capacity between Graviton, Trainium, and Inferentia for internal AWS use. I think you are confused as to what the AWS Outposts rack service is: they rent you the rack so you can put it on prem. They cost $10k to $20k a month. A chip without memory is useless. HBM is already sold out well into next year. So if Amazon has to put their products on the shelf, they are gonna have to cut their AWS upgrade/expansion plans elsewhere.

Mentions:#HBM

Yes, I think MU is at a reasonable entry point here. Fundamentals remain solid, the memory cycle recovery is still intact, and AI-driven demand for HBM gives a strong tailwind. Pullbacks like this look more like healthy consolidation than trend reversal. I'm bullish.

Mentions:#MU#HBM

Anyone think there's gonna be an inventory correction for HBM? Basically OpenAI booked 40% of global HBM supply and companies scrambled to book 2025 and 2026 capacity, which let Samsung, Micron, and Hynix post record profits. But now 50% of data centres aren't being built due to lack of energy, and Microsoft and Meta have lots of idle racks.

Mentions:#HBM

At what point do investors start reading past the headline? In January last year, when DeepSeek dropped, the entire semiconductor market bled and the investor thesis was wrong; now, last month, when TurboQuant dropped, the market bled again, and I am sure the thesis is wrong this time too. People don't even care to read beyond the headlines and what a new algorithmic development actually means (it's not even that difficult to understand, tbh).

People went gaga over DeepSeek because it's an efficient AI, and people assumed demand would fall because now AI can be made cheaply. But when inference got cheaper, it expanded who could afford to deploy AI at all, so memory demand drastically rose. That's what happens when something gets cheaper: people use more of it, not less. The sector recovered.

With TurboQuant it's even simpler. The algorithm only compresses the KV cache and has negligible impact on training memory, where the actual majority of HBM demand comes from. And the $180B hyperscalers are spending on memory this year is mostly training spend. Also, it's just a research paper as of now, one that's been sitting since 2025; even Google hasn't deployed it widely. The memory crunch ends when new fab capacities come online in 2027-28. An algorithm doesn't matter much here. More info here: [https://nanonets.com/blog/google-turboquant-ai-memory-crunch/](https://nanonets.com/blog/google-turboquant-ai-memory-crunch/)

It just frustrates me that we're clearly stuck in a loop here and long-term investors are the ones paying for it every single time. The news drops, headlines go crazy, people panic sell without reading past the abstract, stocks bleed, the thesis turns out to be wrong, stocks recover, and then three months later we do it all over again. Why does this happen?

Mentions:#HBM

#TLDR

---

Ticker: $MU
Direction: Up
Prognosis: Long shares / Buy 2026 LEAPS
Catalyst: AI data centers are devouring HBM memory supply, creating a massive structural shortage.
Collateral Damage: PC building bros who are about to pay 2021 prices for normal RAM.

Mentions:#MU#HBM#PC

DRAM is cyclical. HBM isn't. But whatever, fear is there.

Mentions:#HBM

*MU is a beast in the memory space. HBM demand isn't slowing down anytime soon. Solid pick.*

Mentions:#HBM

Dude, you bought a stock on FOMO that was up over 300% in the past year. AI and HBM are built on easy money, circular financing, and the petrodollar economy of the Mideast states. If this war escalates or drags out more than a couple more weeks, the AI bubble will pop and MU will get whacked down to $100 by the time you decide.

Mentions:#HBM#MU

Right now the price of HBM memory is extraordinarily high. You only need a slight easing of demand and the price will drop. Even if the price drops 50%, it will still be extremely profitable for MU. However, given the massive increase in MU's stock price in 2025, MU might still drop 50-70%.

Mentions:#HBM#MU

Not necessarily. It means that adopting TurboQuant compression increases the bandwidth available for inference at a small quality cost. In addition to the KV cache, HBM still needs to store model weights and other data. It still impacts demand, but it's hard to say by how much. The other two major HBM makers, Samsung and SK Hynix, have not fallen as much as Micron has.

Mentions:#HBM

Ever heard of Jevons paradox? Demand isn't going lower. Also, for HBM4, only Samsung and SK Hynix pose a real threat, and they have their fair share of supply constraints too.

Mentions:#HBM

Because the market is irrational and numbers mean nothing in this environment. They could have a PE of 3 and a PEG of .1; none of that matters in a global recession. It's when everyone sells their winners to cover their losses on the ones that got crushed. They went parabolic, and the shortage of HBM is cyclical. Who knows if these companies will even build out all the data centers at this point. Hell, who even knows if we'll even be here tomorrow at this point...

Mentions:#PEG#HBM

Well, the other issue is that GOOGL just announced they developed a TurboQuant chip, which lowers AI memory demand. Additionally, there are the supply constraints on MU's chips (HBM4 out). Can't sell what you ain't got, and that creates opportunities for other chip makers.

Mentions:#GOOGL#MU#HBM

This is classic WSB - a kernel of legitimate macro logic buried under layers of hyperbole and narrative extrapolation. Let me break it down. **What the OP gets directionally right:** The energy-cost transmission channel to AI/data center economics is real. If oil spikes are sustained, opex for hyperscalers does increase, and the margin compression story for companies still in the "spend now, monetize later" phase of AI capex gets uglier. The Strait of Hormuz chokepoint risk is a legitimate tail risk that markets historically underprice until it's acute. And the point about Gulf sovereign wealth recycling into Treasuries — that's a real channel, though the magnitudes the OP is implying are wildly overstated. **Where it falls apart:** The post chains together a sequence of worst-case scenarios as though each is the base case, which is the classic WSB "thesis" structure — it reads like a stress test presented as a forecast. A few specific problems: hyperscalers don't price energy on spot, they have PPAs and hedged contracts (though the comment from the data center operator unilaterally hiking 12% is a telling anecdote about how those contracts hold up under stress). The leap from "energy gets more expensive" to "AI is dead" skips about five intermediate steps where companies adapt — pass through costs, shift workloads, renegotiate. And the China-invades-Taiwan kicker is pure narrative escalation with no probabilistic grounding whatsoever. **The helium comment is actually the most interesting thing in the thread.** If a quarter of global helium supply was knocked out via Qatar infrastructure damage, that's a genuine bottleneck for semiconductor fab and HBM production that most people aren't tracking. That's a more specific, investable insight than the broad "everything is dead" thesis. 
**What I'd actually take from this as an investor:** The post is useful as a sentiment indicator - 4.3K upvotes on a bearish macro thesis in WSB tells you retail is getting genuinely spooked, which can be a contrarian signal depending on positioning. But the QQQ $450 Jan '17 puts recommendation is basically a bet on a sustained depression-level drawdown, which is a very low-probability outcome even in the scenarios described. The risk-reward on deep OTM index puts that far out is almost always terrible unless you're hedging a large long book.

Mentions:#HBM#QQQ
r/stocks

MU specifically. It's easily oversold, and the Google algorithm won't even impact HBM much at all, mainly NAND. My guess is the big whales are throwing caution

Mentions:#MU#HBM

Hey man, I'm sorry about all the other shit replies here. Here's what's going on, and why you should hold. It's hitting NAND/enterprise SSDs way harder than Micron's HBM3E/DRAM. It's due to a new algo from Google that compresses memory and temp storage.

The unveiling of Google's TurboQuant algorithm (March 25, 2026) has introduced a definitive structural headwind to the "infinite memory demand" narrative that propelled the sector to record highs in Q1. While we remain constructive on the long-term AI secular trend, TurboQuant represents a demand-efficiency paradox that forces a downward revision of Total Addressable Market (TAM) assumptions for physical bit growth.

TurboQuant is a data compression breakthrough specifically targeting vector search engines and key-value (KV) caches in large language models (LLMs).

6x reduction in footprint: by compressing memory data to 3 bits without accuracy loss, TurboQuant effectively allows an AI system to operate with 83% less physical memory than previously required for the same workload.

Performance tailwind-to-headwind: while the 8x performance increase on H100 GPUs is a tailwind for compute efficiency, it is a direct headwind for memory volumes. Hardware that once required 1TB of HBM/DRAM may now achieve parity with significantly less, threatening the supply-demand tightness that supported high ASPs (average selling prices).

Micron is still fine, though:

1. Jevons Paradox: making a resource more efficient usually makes people use more of it, not less. Lower cost = higher scale: by making AI memory 6x more efficient, Google just made AI 6x cheaper for every company on earth to run. This will lead to an explosion of new AI apps that didn't exist a week ago.

2. Sold-out status: Micron isn't guessing about demand; their HBM capacity for the rest of 2026 is already contractually sold out to Nvidia and others. Google's algorithm doesn't let Nvidia return the chips they already bought.

Mentions:#HBM
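For what it's worth, the footprint claims in the comment above are easy to sanity-check with the standard KV-cache sizing formula. A rough sketch (the layer/head dimensions are hypothetical, and the 16-bit baseline is an assumption; note that a straight 16-to-3-bit squeeze is 16/3 ≈ 5.3x, which the comment rounds to "6x"/83%):

```python
# Back-of-envelope KV-cache sizing: K and V tensors for every layer,
# head, head dimension, and token position, at a given bit width.
def kv_cache_gb(n_layers: int, n_heads: int, head_dim: int,
                seq_len: int, bits: int) -> float:
    bits_total = 2 * n_layers * n_heads * head_dim * seq_len * bits  # 2 = K + V
    return bits_total / 8 / 1e9  # bits -> bytes -> GB

# Hypothetical large-model shape, not any specific product's.
fp16 = kv_cache_gb(80, 64, 128, 128_000, bits=16)
q3 = kv_cache_gb(80, 64, 128, 128_000, bits=3)
print(f"FP16: {fp16:.0f} GB, 3-bit: {q3:.0f} GB, ratio {fp16 / q3:.2f}x")
```

The absolute numbers depend entirely on the assumed model shape; at any fixed shape, the saving is just the ratio of bit widths.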

at this current price, it is more profitable to make DDR than HBM, which MU doesn’t sell anymore

Mentions:#HBM#MU

I agree with OP. However, the biggest headwind AI will face now will be helium, which is needed for HBM and chip fabs, fiber optics, and cooling. A quarter of the world's supply was destroyed in one strike by Iran on Qatar. Where does the rest of the helium come from? Algeria has some. Russia has some. And Wyoming has some. Oh, and there's a discovery in Michigan that hasn't been explored fully. Helium is really hard to move. It's really hard to store. It's really hard to do much without it.

Mentions:#HBM

It’s because Qatar is a major exporter of helium which is a component of HBM. I’m getting fucking destroyed in MU and neoclouds 

Mentions:#HBM#MU

I think the opposite. I think the oil shock isn't priced in yet. Everyone is expecting this conflict to end soon, but I don't think it ends for many more months. They haven't even shut the Bab al-Mandeb Strait yet, and that'll probably come after the US escalates again in April. I'll concede that the other Mag 7 are probably outsourcing all the design. I don't want to downplay the difficulty of designing the chips, but I truly don't think the design is the issue. It's the execution of the design, and that's really what these companies struggle with. It makes TSMC and HBM producers a huge risk to Nvidia's margins and overall revenue. The Davidson Window is just a convergence of a weaker US and a stronger China that lets them have enough strength to take Taiwan. It's a theoretical window and it very well may have been fear-mongering to get more defense spending, but it should be a consideration. I'm drawing conclusions on helium disrupting chip production. If 30% of global helium is halted and the chip makers only have a week of supply, then they are slowly cutting into reserves until they run out. A protracted war will absolutely impact production.

Mentions:#HBM

> In this environment, PE is sufficient to value a company. We have no guarantee of future growth or margins.

I don't think companies have started revising their forward PEs yet, but I also don't see how the biggest companies will be impacted much by this. Wars happen all the time and the world just continues moving. The oil shock is IMO overstated. I would put more weight in a forward PE that has not been revised yet than in a trailing PE.

> If you don't think any of the other mag 7 aren't actively designing chips and trying to replace Nvidia, then I really don't think this conversation is worth holding with you.

The Mag 7 try to produce chips independently of Nvidia, but they're not designing them themselves. It's usually actually Broadcom/Marvell designing them, not the Mag 7. Google TPU, Microsoft MAIA, and Amazon Trainium are just Broadcom designing chips to the specification given by those companies. If anything, it's just a point proving that designing chips is hard. Nvidia designs chips to match customer demand the same way, but they do it on the more open market.

> The Davidson window is closing, the US is the weakest it's been in 20 years, and Xi wants the legacy of reintegrated Taiwan.

I am not familiar with the Davidson window. I don't think the US is weak now, not in terms of the military at least.

> 30% of the global helium is halted. It's actively having impacts on HBM, which is a requirement for these chips to be usable.

Can you share a source for the claim of "actively having impacts on HBM"? Are there reports of HBM manufacturers saying that they are already slowing down production due to unavailability of helium? That would be an active impact on HBM. It can reduce margins, but I think helium is cheap enough that this won't be more than a 1% difference.

Mentions:#MAIA#HBM

In this environment, PE is sufficient to value a company. We have no guarantee of future growth or margins. If you don't think Google or any of the other Mag 7 are actively designing chips and trying to replace Nvidia, then I really don't think this conversation is worth holding with you. The Davidson window is closing, the US is the weakest it's been in 20 years, and Xi wants the legacy of a reintegrated Taiwan. 30% of the global helium supply is halted. It's actively having impacts on HBM, which is a requirement for these chips to be usable. Regardless of your opinion here, this will slow production and reduce margins.

Mentions:#HBM
r/stocks

They already have 15-20% of production in the US. If helium supply is short, it'll raise DRAM/HBM prices even more, and it'll benefit MU the most out of them, SK Hynix, and Samsung. Could lead to higher market share. You yourself are mentioning how capacity is being drastically increased, only because demand growth is going to be exponential. And now a growing company has a forward P/E of 5. I'll leave it here.

Mentions:#HBM#MU

HBM inventory is already accounted for, and the materials for its manufacture are plentiful. Pretending there is a shortage is just a lie. More importantly, regarding the supply/demand pricing influx, HBM is not their only product. Their entire fab capacity is not booked out, genius. So, yes, supply/demand factors still play in Micron's favor—that is, if there were any material shortage that actually affected them, which there isn't. Ignorant Reddit chip economists like you, LambdasAndDuctTape, are just throwing FUD. Btw, I have an NYU MS in Quantitative Economics. Economics takes on Reddit aren't always from uneducated and ignorant dopes. Sometimes that's just projection.

Mentions:#HBM#MS

That depends on the contract. Also, not all memory is booked. We know HBM is, but not all memory is HBM. But, again, their claim was both false and nonsensical, emphasis on the false.

Mentions:#HBM

TurboQuant is basically squeezing KV cache, not magically deleting the need for HBM. If anything it makes inference cheaper so people run more tokens and scale harder, which usually means more total memory spend, not less.

Mentions:#HBM

How do you arrive at the conclusion that KV cache quantization increases RAM demand? As far as I can tell, this will have a relatively immaterial, but positive impact on HBM use for model providers. It doesn't make models smaller, it just lets KV cache retain more accuracy at lower quants, but KV cache is a relatively small part of memory footprint of LLMs, and model providers are already using quantized KV caching.

Mentions:#HBM

More likely, NVDA and others won't pay that much for HBM4 if the Google paper holds true.

Mentions:#NVDA#HBM

Even if the KV cache shrinks, you still have:
- Model weights (hundreds of GBs)
- Active tokens being processed
- Parallel users (multi-tenant inference), which demand extreme bandwidth

HBM3 -> HBM3E -> HBM4: each memory upgrade has higher bandwidth (faster data movement), better power efficiency, and larger capacity per stack. These upgrades never stop.

Mentions:#HBM

Long context, which is what agents need, makes a large KV cache, which is what chews up so much HBM. If you can do more with less, you don't need as much. TurboQuant hasn't been proven at scale, only on 7B models. If it scales to a 1T frontier model at 128K context, the KV cache drops from 134GB to ~22GB. The model weights at FP16 are still ~2TB (or ~500GB at INT4), so the KV cache goes from being a significant fraction of total memory to a rounding error relative to weights. At 1M context, the uncompressed KV cache hits 1TB+; TurboQuant projects 178GB of compressed KV cache.

Mentions:#HBM
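Plugging the figures quoted in the comment above into some back-of-envelope arithmetic shows why the weights dominate. All numbers are taken from the comment itself (1T params at FP16, 134GB raw KV cache at 128K context, ~22GB compressed), not independently verified:

```python
# Share of accelerator memory taken by the KV cache before and after
# compression, using the comment's own figures for a 1T-param model.
weights_fp16_gb = 1e12 * 2 / 1e9   # 1T params at 2 bytes each -> 2000 GB
kv_raw_gb = 134                    # quoted KV cache at 128K context
kv_turbo_gb = 22                   # quoted compressed KV cache

share_raw = kv_raw_gb / (kv_raw_gb + weights_fp16_gb)
share_turbo = kv_turbo_gb / (kv_turbo_gb + weights_fp16_gb)
print(f"KV share of memory: {share_raw:.1%} raw -> {share_turbo:.1%} compressed")
```

At these figures, the KV cache goes from a mid-single-digit share of total memory to around one percent, which is the "rounding error relative to weights" point — though at 1M context, the raw cache (1TB+) would rival the weights themselves.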

Smart breakdown on the tech differences there. The market definitely overreacted without understanding what TurboQuant actually does vs HBM's role in the stack. Been watching MU for a while, and this kind of knee-jerk selling on misunderstood news usually creates decent entry points. The fundamentals around memory demand for AI workloads haven't changed just because someone made the software more efficient.

Mentions:#HBM#MU

The concern is when the majority of people stop using Google. Most people under 30 stopped using Google. It's only a matter of time before older generations catch on.

> they are a 4T company

$3.52T. But the fact that they are overvalued isn't exactly a justification...

> Own the full stack - h/w to models for AI

Not entirely true. On the hardware side, they buy their chips from Broadcom, who can hike prices at any time, and rely on TSMC/Samsung/SK Hynix for HBM. In terms of models, Google has arguably the worst models of the big 3. A lot of shareholders have been fooled by Google gaming the benchmarks via overfitting. I've used GPT, Claude, and Gemini; Gemini is consistently the worst-performing model.

> this conversation doesn't matter though, bc they're so large - who gives a shit

Large companies can lose a lot of market value and experience significant declines.

Mentions:#HBM

The MU sell-off is regarded. TurboQuant targets the KV cache in inference, not training. The biggest HBM consumers are still training runs and prefill-heavy workloads. Decode efficiency gains are real, but only one slice of total demand. If anything, longer term, Samsung taking share from NVDA is the real headwind. Feels like headline noise.

Mentions:#MU#HBM

Wonder if HBF will become more lucrative than traditional HBM.

Mentions:#HBM

Yes, they're making HBF with Hynix. HBF is NAND-based; Micron's HBM is DRAM-based. So HBM has higher speeds but lower capacity, while HBF has better capacity but much longer access latency. HBF will be midway between HBM and a traditional SSD. It'll still sell like hot cakes, tho. The inference market is going to love it. I'm in both MU and SNDK.

Aren’t they working on a new HBM with SK? HBF?

Mentions:#HBM
r/stocks

- Increased competition from ChatGPT and Claude potentially hurting ad revenues in the future (both from users using Google less, and from ad sales diverting to ChatGPT)
- Rising energy costs for datacenters
- Rising costs of computing hardware (HBM prices are skyrocketing)
- Increased costs of services provided (Google needs to provide free access to AI Overviews, Gemini, etc. to stay relevant)
- Megascalers are likely to migrate off of Google Cloud once compute shortages are resolved.
- Antitrust risk, which would return in the event Google actually does manage to compete (the only reason Google won its antitrust case was that AI was determined to bring in new competition).

In a hypothetical bull scenario where Google's Gemini achieves majority market share, the company would almost certainly get broken up.

Mentions:#HBM

Unless it's backed by a datacenter full of Vera Rubin GPUs and Samsung HBM4, those loans are not really "guaranteed".

Mentions:#HBM
r/stocks

tbh, you’re spot on about the cyclical nature of memory stocks. it’s a rollercoaster for sure. but yeah, that HBM4 news is pretty big. having binding contracts instead of just interest definitely adds some security to the demand side. the AI factor is a game changer for sure, and if that keeps growing, we could see a shift in how we think about memory stocks. but you're right—if Samsung ramps up production faster than we expect, that could throw a wrench in things. i guess it all comes down to how much hype is built around AI and whether it sustains in the long run. definitely keeping an eye on MU though; the earnings multiple could be undervalued if they really lock in that demand. any thoughts on how the broader market might impact this too?

Mentions:#HBM#MU

I like HBM

Mentions:#HBM

Thx for the reply. Ah, the investors! Mag7 stocks will be in for a hard ride down when the gravy train ride is over. Even the Big Three in DRAM will take a beating (as in all boom-bust cycles). Edge is the next frontier, and DRAM will be necessary there, but will it be HBM? Yes, there is a lot going on at many levels. Training on the Edge will not need the power of a Data Center but will still have power requirements and possible limitations (power, heat, capacity, etc.) on the amount of DRAM loaded at the Edge. If AI falters then the Edge will also be impacted... GPUs have a place in AI, but so do CPUs and the many custom processors that are already developed or being developed to address AI.

Mentions:#HBM

HBM demand comes from the AI boom. All that HBM capacity cannot just be used elsewhere. Data centers are not at risk as they will still need HBM, but the demand will be nowhere near what it is today, or even what it is in 2027.

Mentions:#HBM

Why wouldn't it survive? HBM, and data center GPU in general, were doing fine before the AI boom happened.

Mentions:#HBM
r/stocksSee Comment

LLMs also use a shitton of memory bandwidth to shuffle parameter weights around, which brings its own set of challenges and limits the scaling even further. The HBM shortage did not come out of nowhere

Mentions:#HBM
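The bandwidth point above can be made concrete with a simple roofline estimate: decoding one token requires streaming every parameter's weight bytes through the memory system, so memory bandwidth, not FLOPs, often caps single-stream token throughput. The numbers below are rough public figures, used only as assumptions:

```python
# Illustrative roofline: decode throughput for one request (no batching)
# is bounded by memory bandwidth divided by total model weight bytes.

def max_tokens_per_sec(params_billions: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on decode tokens/sec for a single request."""
    model_bytes_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_bytes_gb

# A 70B-parameter model in 8-bit weights on ~3.35 TB/s of HBM
# (roughly an H100 SXM class figure) tops out near ~48 tokens/sec.
print(round(max_tokens_per_sec(70, 1.0, 3350), 1))
```

Batching amortizes the weight traffic across requests, which is why serving stacks push batch sizes up, but per-user latency still runs into this bandwidth wall, and that is the demand driver behind the HBM shortage the comment mentions.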
r/optionsSee Comment

MU down 6% is the one that hurts. Was looking like the cleanest setup in semis going into this week — HBM demand story still intact but macro is just steamrolling everything right now. ARM holding up green is interesting though. Might be worth watching if the broader sell-off stabilizes — it's been the one semi that's had relative strength lately.

Mentions:#MU#HBM#ARM
r/stocksSee Comment

I don’t see the bubble “popping” with AI. The difference with the .com bubble was that all those companies with a website and a pitch deck got given money with zero earnings and no business model. SK Hynix, Nvidia, AMD, etc. all print money, and with an insane margin. I’m aware of the circle jerk of cash, but instead of a bubble, my largest concern would be large amounts of asset-backed debt that is then just used to buy more of the same asset, like CoreWeave; that gets risky if rates go up. While you could argue this simply creates the illusion of growth (*Nvidia giving OpenAI $10B, then OpenAI turning around and spending it all on Nvidia*), I think the use case and need for HBM memory etc. will continue. Factories are using as many wafers as possible on HBM, which is driving the price of RAM up, and I don’t see that changing anytime soon. I agree multiples will shrink; I also believe specialization within tools is a good thing. Labour and serving positions aren’t leaving imo, ChatGPT can’t write you real code, and software isn’t going to disappear; it’ll just sell more licenses. Just my opinion, feel free to completely disagree. I like to get other perspectives.

Mentions:#AMD#HBM
r/stocksSee Comment

Bots/algos/people are assuming this compression of multiples is a sign that Micron is peaking in its cycle and will soon (probably 2027) crash horrifically. I firmly believe this is not the case, as AI demand for HBM is an entirely different beast from the consumer demand for DDR RAM that usually drives their cycles. Once it becomes clear that this is the case, Micron will double again IMO

Mentions:#HBM
r/stocksSee Comment

Sure, memory historically was a pure commodity because standard DRAM and NAND were interchangeable across suppliers, driven by spot pricing and cyclicality with little differentiation. Buyers didn’t really care who made it as long as it met spec, which led to the boom-and-bust cycles and weak pricing power you’re referring to. What’s changing now is that leading-edge memory, especially HBM, DDR5, and advanced NAND tied to AI workloads, is no longer fully fungible. It requires close coordination with customers, advanced packaging, and long, intense qualification cycles, which creates huge switching costs and supply constraints. So yes, while legacy DRAM and NAND can still behave like a commodity, the industry is now shifting toward specialized, high-performance memory where suppliers actually have pricing power and more durable demand, and can better control supply and demand dynamics. Wall Street hasn’t figured this out yet; they are typically very slow to catch on

Mentions:#HBM
r/stocksSee Comment

Not even worried. With Nvidia, AMD, Google and Amazon's newer chips coming along and HBM4 started production this company will only get better.

Mentions:#AMD#HBM

Bols you do realize we’ve been buying $MU since the Fall? Thesis is solid and HBM4 confirmed by NVDA despite the Korean FUD

Mentions:#MU#HBM#NVDA

The achievements have been remarkable; yet, the performance of memory chips often appears strongest precisely on the eve of people beginning to hail it as a "new era." This time—thanks to HBM technology—is the situation truly different?

Mentions:#HBM
r/stocksSee Comment

AI slop says "**Note on Competition:** While Micron holds a technical edge in power efficiency, **SK Hynix** remains the market leader in HBM with over **50% share**, largely due to its early and deep integration with Nvidia’s Blackwell systems. **Samsung** has recently reclaimed the #2 spot as its HBM3E products finally cleared Nvidia's qualification tests in late 2025." What can go wrong with 3rd place in market share as a moat?

Mentions:#HBM

Ironic that your take is the ignorant one. This might have been correct 5-10 years ago. Memory architecture has changed so dramatically that the barrier to entry is tremendous and yields are shrinking. HBM4 IS 4-NANOMETER tech. There are three companies in the world with the capability to manufacture it, and the tech and architecture to do so is insane. So insane that not only is it unrealistic for outsiders to enter in the next 3-5 years even with MASSIVE CAPEX, it's also unrealistic for them to build capacity in the next several years beyond what they are already doing.

Mentions:#HBM#CAPEX
r/stocksSee Comment

It’s officially the most undervalued stock in the world lol. Wall Street hates that there is virtually no bad news to bring it down. They tried lying about HBM4 exclusion but that was obvious FUD from the beginning. Micron is here to stay.

Mentions:#HBM
r/stocksSee Comment

Micron over Sandisk for long term outlook. HBM memory is far more important in the long run.

Mentions:#HBM

Doesn't matter. Memory without the rest is pointless and everything that ties to HBM is built in Asia.

Mentions:#HBM

Guy, the entire outlook for semis is fucked... all of Asia's semi production is at risk.. no one needs shitty HBM if you can't get GPUs.

Mentions:#HBM

They are sold out of all HBM for 2026 already lol. How are they inflated?

Mentions:#HBM

MU has a 98% chance of beating earnings on Polymarket btw. Also sold out of all HBM for 2026... and it’s March lol.

Mentions:#MU#HBM

Why do you think MU will tank? They sold out of all HBM for 2026 lol

Mentions:#MU#HBM

I don't think so. I feel like there is uncertainty about shipments of the newest HBM4, production for NVDA Rubin, margins moving forward, etc. Looking forward to the guidance at 5pm more than the earnings.

Mentions:#HBM#NVDA

Reasons I'm optimistic for tomorrow's MU earnings: 1. I'm shocked to see hedge funds are allowing MU to be about $450. 2. I'm surprised MU announced full NVDA HBM4 production before earnings. They could have waited. I know GTC is right now, but what's another 36 hours? Interesting times.

Mentions:#MU#NVDA#HBM

>Samsung Electronics and SK Hynix selected as sole suppliers of HBM4 for Nvidia's Vera Rubin. >Micron has been excluded after failing to meet Nvidia's data transfer speed requirements for Vera Rubin

Mentions:#HBM

FYI: For those of you following MU: This is why you listen to management and not FUD. For the past month it was rumored MU was not producing HBM4 for Vera Rubin. FUD stated SKH and Samsung would get it. Well, here you go: [https://investors.micron.com/news-releases/news-release-details/micron-high-volume-production-hbm4-designed-nvidia-vera-rubin](https://investors.micron.com/news-releases/news-release-details/micron-high-volume-production-hbm4-designed-nvidia-vera-rubin)

Mentions:#MU#HBM

MU is currently in a Supercycle in the next few years fueled by explosive demand for its Sold out HBM🚀🚀🚀

Mentions:#MU#HBM

> U.S. memory chipmaker Micron Technology (MU.O) said on Monday it plans to build a second manufacturing facility in Taiwan at the Tongluo site it recently acquired from Powerchip Semiconductor Manufacturing Corp (6770.TW). The new facility will help it expand supply of leading-edge DRAM products, including high-bandwidth memory (HBM), to support surging AI demand, the company said.

Mentions:#MU#TW#HBM
r/stocksSee Comment

I wonder what energy dependent countries like Taiwan and South Korea will do once LNG and sweet crude stop flowing due to the Strait of Hormuz being completely closed by Iran asymmetric warfare... You still think TSMC will continue making NVDA chips? You still think SK hynix will continue making HBM? Wonder what a global shortage of LNG's going to do to LNG turbines powering up AI data centers because the US grid can't actually handle the amount of electricity required by the hyperscalers... Wonder what that's going to do to AI sector valuation in the US stock market... S&P500 is propped up by AI spending. Heck, the whole US economy is propped up by AI spending. There's already an AI bubble that's been building now for quite some time. Wonder when it'll pop... Maybe when LNG and oil hit certain price points. Hmmmm, things to ponder for the coming months. Personally, I sleep well at night having ported into LNG and oil futures. I don't see a de-escalation or off-ramp from the current war with Iran. For Christ's sake, we killed the guy who's at the helm of Shia Islam during Ramadan. His son is in charge. We killed the current leader's wife, father and kids. Ya think the son's gonna come at the table to "make a deal" with the very people who killed most of the people he cares about? Let's not even bring up the girls' school we hit with a Tomahawk... You guys are delusional if you think Iran's going to "make a deal" The Strait of Hormuz will be closed for months if not years. We'll have an energy crisis (oil and LNG) that will make the OPEC oil embargo look like a cute little kitten by comparison. Y'all should be in oil and LNG futures...

Mentions:#LNG#NVDA#HBM
r/stocksSee Comment

I wonder what energy dependent countries like Taiwan and South Korea will do once LNG and sweet crude stop flowing due to the Strait of Hormuz being completely closed by Iran asymmetric warfare... You still think TSMC will continue making NVDA chips? You still think SK hynix will continue making HBM? Wonder what a global shortage of LNG's going to do to LNG turbines powering up AI data centers because the US grid can't actually handle the amount of electricity required by the hyperscalers... Wonder what that's going to do to AI sector valuation in the US stock market... Hmmmm, things to ponder for the coming months. Personally, I sleep well at night having ported into LNG and OIL futures.

Mentions:#LNG#NVDA#HBM

Micron developments since January ATH:
- Partnership announced with Applied Materials to develop next-generation DRAM and HBM for AI systems.
- Collaboration centered around Applied’s $5B EPIC semiconductor R&D center to improve materials and manufacturing processes.
- Micron confirmed plans for a new DRAM manufacturing facility in Taiwan to expand AI memory production and has fully acquired the facility.
- Micron launched the world’s first 256GB SOCAMM2 LPDRAM module for AI data centers and sent samples to customers (uses about ⅓ the power and space of traditional RDIMM memory).
- The earlier-announced $24B expansion of Singapore NAND manufacturing operations continued moving forward; fully confirmed deal.
- The Warsh sale from the new Fed chairman pick. Killed the market that week.
- Trump shit himself.
- Our sub has grown from under 1k weekly visitors to 13k weekly visitors.
- Steam and many other prominent companies have pushed back product launches due to storage shortages.
- Micron’s CFO confirmed earnings will be greater than previously guided for.
- He also confirmed that HBM4 production and shipment began earlier than expected.
- SK Hynix and Samsung are facing a helium shortage.
- Major stock indexes dropped as investors worried about economic slowdown due to the war.
- SPY was $695.41 while Micron closed at a high of $437.80. At the time of writing, SPY is $665 while MU is $436. I am sure we will jump over our intraday high shortly.

Mentions:#HBM#MU

Anyone buying puts on Micron before earnings / the Nvidia GTC conference? Nvidia could unveil the technology they're using from Groq to reduce reliance on HBM

Mentions:#HBM
r/stocksSee Comment

*depends on guidance though. if they confirm HBM production is booked through 2027 that's not easy to sell into*

Mentions:#HBM
r/stocksSee Comment

*right? AI memory demand completely changed their story. HBM and NAND have been carrying them hard lately.* 

Mentions:#HBM

With you here. MU is so freaking cheap: forward P/E 7.94 and PEG is 0.17 lol. AI is a memory-driven problem. HBM fully booked. FY2026 SuperCycle in progress. Deep valuation discount to peers. Strategic pivot to higher-margin business. CHIPS Act funding, US-based mfg. DRAM pricing momentum ⬆️ HBM shipments up ⬆️ Analyst sentiment overwhelmingly bullish

Mentions:#MU#HBM
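For readers unfamiliar with the PEG figure quoted above: PEG is the forward P/E divided by the expected EPS growth rate (in percent), so a very low PEG means the P/E looks cheap relative to expected growth. Taking the commenter's numbers (forward P/E 7.94, PEG 0.17) purely as assumptions, the implied growth expectation works out like this:

```python
# PEG ratio sanity check using the figures quoted in the comment above
# (forward P/E 7.94, PEG 0.17 -- the commenter's numbers, not verified).

def peg(forward_pe: float, growth_pct: float) -> float:
    """PEG = forward P/E divided by expected EPS growth rate in percent."""
    return forward_pe / growth_pct

# A PEG of 0.17 at a 7.94 forward P/E implies roughly 46.7% expected
# EPS growth (7.94 / 0.17).
implied_growth_pct = 7.94 / 0.17
print(round(implied_growth_pct, 1))
```

The usual rule of thumb is PEG near 1.0 for fair value, so if the growth estimate baked into that 0.17 actually materializes, the discount is real; if growth disappoints, the same math unwinds quickly.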

Most of the HBM has been bought by OpenAI, which may not even exist to pay for it by the end of this year. Good luck

Mentions:#HBM

Alphaunderpressure.com pressure test of Micron: CENTRAL THESIS ANSWER Micron's bullish long-term thesis is actionable based on strong demand for DRAM and HBM in AI and data center applications, supported by robust financials and a solid market position. The company demonstrates durable business fundamentals with improving product cycles and strong institutional ownership. However, valuation multiples are elevated relative to historical norms, and insider selling raises minor governance concerns. The balance sheet is healthy with manageable leverage, and cash flow quality is adequate but not exceptional. Catalyst clarity is moderate, driven by ongoing AI server memory ramp and new product launches. Overall, the setup offers a favorable risk/reward given secular AI demand, but investors must monitor valuation and execution risks closely. THESIS SCORE 3.77 / 5

Mentions:#HBM

#TLDR --- **Ticker:** MU **Direction:** Up 🚀 **Prognosis:** Hold shares and YOLO into $450 Calls (exp 4/17) **Catalyst:** AI literally cannot function without memory, and MU is already sold out of HBM through 2026. **Grass Touched:** Absolutely zero. **Gambling Problem:** Fully confirmed and fully funded by Polymarket winnings.

Mentions:#MU#HBM

#TLDR --- Ticker: MU Direction: Up Prognosis: Buy 4/17 $450 Calls Catalyst: AI needs enormous amounts of memory to function and MU is completely sold out of HBM through 2026. Mental State: Unambiguous degenerate (rolling $10k of betting site profits directly into YOLO calls instead of touching grass).

Mentions:#MU#HBM

#TLDR --- **Ticker:** MU **Direction:** Up 🚀 **Prognosis:** Buy $450 Calls (4/17) and hold shares **Fundamental Analysis:** AI literally cannot function without memory, and MU is sold out of HBM through 2026. **Grass Touched:** Absolutely none (Refused to take $10k Polymarket profits and go outside, rolled it straight into a YOLO instead)

Mentions:#MU#HBM

Boy I sure hope Iran doesnt target datacenters pushing the price of GPUs and HBM2 even higher wink wink

Mentions:#HBM

🤓: Hi Micron, this is Borat from Sex Robot Inc. I like to make an order for HBM supply for our sex robots. 👨🏿‍💻: Sorry sir, but our 2026 supply is sold out. 🤓: Gawd fucking damnit!! 😠

Mentions:#HBM

I concur. AI and the need for massive and fast data processing (HBM) and storage (especially as inferencing takes off) has made the memory and storage players all crucial to the AI rollout. The fact that MU shuttered their Crucial (consumer) line to focus on enterprise (greater operating margins) is a clear sign where the demand is right now.

Mentions:#HBM#MU

Damn, did I just hear him say they are going to load cluster munitions with Micron's new HBM4 cards? Big if true.

Mentions:#HBM
r/stocksSee Comment

the 2350% number is for DDR4, which isn't even what Nvidia uses in datacenter GPUs. Rubin runs on HBM4, a completely different market with different suppliers and pricing dynamics. MU is still interesting, but specifically because of their HBM3E ramp, not because DDR4 is expensive. I own about $4k in MU around a $94 avg. The bull case is real, but it's specifically an HBM capacity story, not a generic 'memory prices go up' story. SK Hynix has like 50%+ HBM market share, so if you're playing this angle they're probably the cleaner bet; MU is still catching up on yields

Mentions:#HBM#MU
r/stocksSee Comment

You have to believe their forward P/E ratio and guidance. The memory chip (HBM4) shortage will not go away unless the announced capex spending is withdrawn.

Mentions:#HBM
r/stocksSee Comment

> Even if you think there is new long term structural demand, the market will not sustain this level of supply deficit and these margins will revert to the mean eventually

Not too sure about that. You must also take into account how NVDA is releasing GPUs that are more and more RAM-intensive every single year. Pre-AI-boom, the flagship NVDA GPU, the V100, only required 32GB of RAM; the current Blackwell model requires 192GB. Rubin, which will be the flagship of 2026, will require 288GB. Rubin Ultra, which will launch in 2027, will require 1TB of HBM4e RAM. Yes, you read that right: 1 fucking TB of RAM for a single GPU. It's not just NVDA pushing the limits of their GPUs but AMD and Google as well. Look into all their flagship GPUs for 2026 and 2027 and you will see the RAM requirements to run them are doubling pretty much every year. I don't think these companies will throw their hands up and stop innovating. By 2030 it's not unlikely that we see a single GPU needing 8TB of RAM, and there are only three companies that can supply this demand ATM

Mentions:#HBM
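The per-GPU capacity figures quoted above imply a striking growth rate. Taking the commenter's numbers as assumptions (V100 at ~32 GB around 2017, Rubin Ultra at ~1 TB around 2027), the compound annual growth rate works out to roughly 41% per year:

```python
# Rough growth-rate check on the per-GPU HBM capacities quoted above.
# The 32 GB / 1024 GB endpoints and the ~10-year span are the commenter's
# figures, used here purely as assumptions.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

# 32 GB -> 1024 GB over ~10 years is about 41% compounded per year.
print(f"{cagr(32, 1024, 10):.0%}")
```

That is slower than the "doubling every year" (100%/yr) the comment describes for the most recent generations, which is the point: the curve has been steepening, and whether the recent pace or the decade average holds is exactly what memory-stock bulls and bears disagree about.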
r/stocksSee Comment

Yes, HBM and RAM are in demand for compute.

Mentions:#HBM
r/stocksSee Comment

People who say it's too late have no clue about how the HBM market and qualifications work. Yes, for the 10x potential it's probably too late, but there's still so much upside in Micron, Sandisk, SK Hynix, Samsung. People don't seem to realise that Samsung might be the most profitable company on earth in 2026, but they'll say it's 'too late' every week of 2026. Do some research; this is a volatile ride but will go far into 2027/2028. Just have an exit plan and stick to it

Mentions:#HBM

NVDA has floated around $180 since like August, MU seems to follow chip-manufacturer sentiment patterns to some extent, China has huge expansion plans, and Hynix and Samsung seem more focused on high-demand HBM. I don't see an insane growth opportunity, maybe +20-30% with a perfect earnings report, but if they have sold all of their modules, it'll take a year for good news to arrive. And a year in the current AI timeline feels like an eternity.

Mentions:#NVDA#MU#HBM