
HBM

Hudbay Minerals Inc.


Mentions (24Hr)

1

0.00% Today

Reddit Posts

r/wallstreetbets

AMD's new MI300x vs the field, plus future projections.

r/wallstreetbets

The Samsung Rival Taking an Early Lead in the Race for AI Memory Chips

r/stocks

Nvidia Call and Outlook Notes

r/wallstreetbets

A detailed DD for AMD in AI (Instinct MI300 breakdown)

r/wallstreetbets

AMD AI DD by AI

r/pennystocks

4 Penny stocks that billionaires are loading up on

r/StockMarket

Is it possible to live on patent litigation? NLST is the most interesting example

r/pennystocks

Is it possible to live on patent litigation? NLST is the most interesting example

r/pennystocks

NLST is revolutionizing the memory market (NAND & DRAM) - Samsung and Micron to pay IP licenses and damages for the Netlist technology

r/wallstreetbets

Nvidia released a new "nuclear bomb", Google's chatbot is also coming, and computing-power stocks are surging to limit-up again

r/wallstreetbets

2023-02-28 Wrinkle-brain Plays (Mathematically derived options plays)

r/WallStreetbetsELITE

Hudbay slides after Q4 miss, reduced 2023 production guidance (NYSE:HBM)

r/pennystocks

Russia/Ukraine Conflict = Metals Squeeze | Choose Wisely!

r/wallstreetbets

$HBM – HORNBACH BAUMARKT is a rare, underpriced value stock w low free float <25% (think "GERMAN equivalent to HOME DEPOT")

r/wallstreetbets

HBM DD. SHORT INTEREST HIGH, VOLUME LOW, SOLID FUNDAMENTALS

r/options

Some notable activity from Friday's trading

Mentions

Solid DD although you are wrong about the technical DD. Nonetheless, yes to Intel. I have done some advanced research myself and developed a new long short strategy for aspiring hedgies. Buy Intel, short Nvidia. This is supported by extensive DD and analysis which shows that Intel products are way superior to Nvidia.

Take an i7 vs Blackwell shootout using a very simple rulebook. A modern i7 runs its performance cores above 5 GHz, while Blackwell GPUs sit down in the ~1–2 GHz zone. At the same time, the i7 costs hundreds of dollars, a Blackwell card costs tens of thousands. With “performance” defined as clock speed per dollar, the i7 prints a much higher GHz-per-$ number than Blackwell, so by that metric alone the i7 “wins” the raw compute race and Intel screens as the better asset while Nvidia looks expensive.

Then you add the RAM angle. The i7 talks to external DDR4/DDR5 over the motherboard, so you can just keep adding DIMMs until the board and OS give up; this is treated as effectively unlimited memory for AI, because you’re not capped by what’s soldered onto the chip. A Blackwell GPU, by contrast, has a fixed chunk of on-package HBM3e (on the order of a couple hundred GB per GPU) and that’s it – no slotting in more HBM sticks on a Saturday afternoon. Under this logic, the i7 wins again: fixed HBM on Blackwell vs expandable system RAM on Intel.

On top of that, you stack the feature list in Intel’s favour. The i7 comes with Hyper-Threading (one core, two threads), a mix of performance and efficiency cores, modern vector instructions, virtualization support, compatibility with cheap motherboards and consumer PSUs, and it can talk to big pools of RAM and fast SSDs out of the box. Unlocked “K” models can also be overclocked easily through BIOS or software, so the already higher boost clocks can be pushed even further with adequate cooling. In this framework, that overclocking headroom is treated as extra free performance, while a Blackwell data-centre GPU is treated as a locked, non-overclockable brick that can’t win the GHz-race.

Put it all together and it becomes clear just how much an i7 outperforms a Blackwell GPU, once people figure out that you can use it for AI compute. From there, the trade narrative says that once people “wake up” to this interpretation and start running AI workloads on cheap, overclocked i7 boxes with piles of RAM instead of ultra-expensive Blackwell accelerators, demand will shift toward Intel, Nvidia’s AI hardware franchise will disappoint versus expectations, and the right position is long Intel, short Nvidia. THIS IS NOT INVESTMENT ADVICE.

Mentions:#DD#OS#HBM
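For concreteness, here is the comment's "clock speed per dollar" metric as a quick Python sketch. This only illustrates the joke metric; the clock speeds and prices are the comment's own round numbers, not verified specs.

```python
# Hypothetical illustration of the comment's "GHz per dollar" metric.
# Inputs are the comment's round figures, not verified product specs.

def ghz_per_dollar(boost_ghz: float, price_usd: float) -> float:
    """The comment's 'performance' metric: clock speed divided by price."""
    return boost_ghz / price_usd

i7 = ghz_per_dollar(boost_ghz=5.4, price_usd=400)             # consumer CPU
blackwell = ghz_per_dollar(boost_ghz=1.8, price_usd=35_000)   # data-center GPU

print(f"i7:        {i7:.5f} GHz/$")         # ~0.01350
print(f"Blackwell: {blackwell:.7f} GHz/$")  # ~0.0000514
# The i7 "wins" by roughly 260x, which mostly shows the metric ignores
# parallel throughput (FLOPs), memory bandwidth, and tokens per second.
```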


>First off... Is that your source for this info? There are suddenly hundreds of articles that randomly spout the 1-3 year lifespan of GPUs (but don't provide sources).

That's just the first source I could find to quantify my anecdotes; it's not my only source.

>Secondly. If you think about the consequences of the depreciation cycle as you've described it, you're saying the same thing I'm saying. You're just coming to a different conclusion.

The conclusion is what matters though, right?

1. Many large companies do not have enough power to deploy all of their new GPUs. If that is the case, they probably already ripped out their older, less efficient GPUs to prioritize their more efficient, higher-earning newer GPUs. https://www.datacenterdynamics.com/en/news/microsoft-has-ai-gpus-sitting-in-inventory-because-it-lacks-the-power-necessary-to-install-them/ https://www.bloomberg.com/news/articles/2025-11-10/data-centers-in-nvidia-s-hometown-stand-empty-awaiting-power

2. If companies are taking GPUs older than a couple years out of service, then it calls into question their depreciation schedule.

3. To adjust for declining revenues, tech companies really should utilize accelerated depreciation in the first 2-3 years, followed by slowed depreciation for the last 2-3 years. This would better capture the declining revenue that the product makes. Their current approach of a flat depreciation schedule overstates profits. (See the sketch below.)

>Thirdly: it's important to understand the distinction in use of GPUs and CPUs. GPUs are used for the initial training of LLMs. But heavy use of them is not required for the duration of their use, after training models is complete.

1. Most AI workload is inference, not training.

2. You can't run intensive AI applications on a CPU. You need a device with large amounts of parallel compute and lots of high-speed memory (generally HBM). A CPU can run lighter models like a simple linear regression model, MLP classification model, etc. But not huge models.

Mentions:#HBM#MLP
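Point 3 above is easy to see with numbers. A minimal sketch comparing a flat (straight-line) schedule to an accelerated one, using double-declining balance as the stand-in for "accelerated"; the $10,000 cost and 5-year life are made-up inputs, not any company's actual figures.

```python
# Sketch: straight-line vs double-declining-balance depreciation for a GPU.
# Cost, salvage, and service life are illustrative assumptions only.
cost, salvage, years = 10_000.0, 0.0, 5

straight_line = [(cost - salvage) / years] * years

ddb, book = [], cost
rate = 2 / years  # double-declining rate
for y in range(years):
    expense = book * rate
    if y == years - 1:          # write off the remaining book value in the last year
        expense = book - salvage
    ddb.append(expense)
    book -= expense

for y, (sl, db) in enumerate(zip(straight_line, ddb), start=1):
    print(f"Year {y}: straight-line ${sl:>6.0f} | accelerated ${db:>6.0f}")
# Accelerated front-loads the expense into years 1-2, mirroring how a GPU's
# revenue-earning power declines fastest early in its service life; the flat
# schedule books the same $2,000 every year and so overstates early profits.
```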

AI demand is real, but profits lag because the choke points are integration, power, and inference ops, not "no demand". Training spend shows up at NVDA/TSMC/HBM; ROI lands months later when enterprises actually deploy. Power is the tightest link: PPAs, substation timelines, liquid cooling, and floor space. Networking and memory matter just as much (InfiniBand/CX7/400G, HBM3e). If you're trading this, watch GPU utilization, the revenue mix shift from training to inference, $/token and tail latency, cloud AI capex guidance, and RPO specifically tied to AI features; model benchmarks won't pay the bills. On the ground, we've shipped LLM features by using Azure OpenAI for models and Snowflake for governed data, and leaned on DreamFactory to auto-generate secure REST APIs over legacy SQL so teams could actually ship. Near term, the winners are the ones who turn racks and megawatts into reliable, integrated products customers run daily, not just bigger benchmarks.

Mentions:#NVDA#HBM#CX

Why the fuck would rising HBM prices hit margins for the prior quarter? It's not like they're paying prices on HBM based on when they deliver GPUs to customers. MAYBE in the outlook, but even that's a stretch given we are halfway through the actual accounting period.

Mentions:#HBM

Not yet. The worst thing Nvidia is likely to announce today would be that their margins are taking a hit due to rising HBM prices. IMO market will rally after. There just aren't enough leading indicators that AI investment is slowing down.

Mentions:#HBM

Honestly I think the market will rally after. Pretty much the only negative thing I could see Nvidia stating is that rising HBM prices will put pressure on their margins. I just can't see them saying anything that would suggest stagnating demand, considering they already said they have $500 billion in pending orders.

Mentions:#HBM

This only sends NVDA if the Saudis announce a binding, licensed supply deal with prepayments and near-term delivery; anything softer is just headline fluff and likely sell-the-news. Key checks tomorrow: 1) explicit US export-license path for top-bin GPUs, 2) wording like multi-year capacity reservation or prepay, 3) timelines tied to HBM and CoWoS packaging, not vague “partnership,” 4) networking choice (Ethernet = tailwind for ANET; pure InfiniBand = less so), 5) power buildout milestones (substations, transformers) that pull revenue into 2025–26, not 2027+. Positioning: lean call spreads into ER only if you hear prepay/capacity language; otherwise I’d fade IV with a tight iron condor and buy the dip later. Derivative plays with cleaner catalysts: VRT for cooling/power gear, ANET on Ethernet AI fabrics, PWR on grid work. Watch SMCI but mind export risk. I’ve used Snowflake and Databricks to track DC power and lead times, with DreamFactory to expose the same data via REST so finance and ops hit identical numbers. Bottom line: without export clarity and prepaid capacity, it’s a headline, not a catalyst.

SK Hynix is a much cheaper AI boom play than GOOG, IMO. It's a pure-play stock that wins regardless of which company ends up making the best LLM or who makes the best AI chips, because all of them need HBM.

Mentions:#GOOG#HBM

Both Samsung and SK moved out long ago and have significant manufacturing in China, which enables them to produce cheap DRAM. Not anymore under the Trump administration. In addition, Trump banned advanced machinery imports into China, and that forces more and more production to be made in the US. That means HBM will be primarily made here. I work in tech and saw the memory shortage coming 6 months ago. I bought 100k worth of Micron and now have a $300k position.

Mentions:#HBM

Only real earnings risk I can think of is rising HBM prices weighing on their margins. I think the market will rally after earnings

Mentions:#HBM

Cuz HBM plus all memory is soaring like never before, and it will continue into next year.

Mentions:#HBM

One more for future earnings: JPM sees a massive increase in HBM4 next year, which leads me to believe Rubin will be ahead of schedule.

Mentions:#JPM#HBM

https://www.tomshardware.com/pc-components/gpus/datacenter-gpu-service-life-can-be-surprisingly-short-only-one-to-three-years-is-expected-according-to-unnamed-google-architect There are issues here around not just whether or not they're the top performing chips, but if they're actually going to fail/burn out at these loads. An unnamed Google principal engineer put their lifespan at 1-3 years. And from the article: > Earlier this year Meta released a study describing its Llama 3 405B model training on a cluster powered by 16,384 Nvidia H100 80GB GPUs. The model flop utilization (MFU) rate of the cluster was about 38% (using BF16) and yet out of 419 unforeseen disruptions (during a 54-day pre-training snapshot), 148 (30.1%) were instigated by diverse GPU failures (including NVLink fails), whereas 72 (17.2%) were caused by HBM3 memory flops. > Meta's results seem to be quite favorable for H100 GPUs. If GPUs and their memory keep failing at Meta's rate, then *the annualized failure rate of these processors will be around 9%, whereas the annualized failure rate for these GPUs in three years will be approximately 27%, though it is likely that GPUs fail more often after a year in service.* So it may be worse than 27% after 3 years, as stress on the chips adds up.

Mentions:#HBM
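One note on the quoted arithmetic: the "approximately 27% in three years" figure looks like simple multiplication of the 9% annualized rate, while compounding independent annual failures gives a slightly lower number. A quick check:

```python
# Checking the quoted failure-rate math: 9% annualized, over 3 years.
annual_rate = 0.09

naive_3yr = 3 * annual_rate                  # 27.0%, the article's figure
compounded_3yr = 1 - (1 - annual_rate) ** 3  # ~24.6% if each year's failure
                                             # chance is independent
print(f"naive:      {naive_3yr:.1%}")
print(f"compounded: {compounded_3yr:.1%}")
# Either way the true number may be worse, since wear-out likely raises the
# hazard rate after the first year, as the comment notes.
```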

Tangential to your exact question but my two positions in this AI storage sector are MU for HBM/DRAM and PSTG for flash.

Mentions:#MU#HBM#PSTG

*(continued)* *Now how does this tie into* ***SMCI and, just as important, NVDA?*** NVIDIA Blackwell (B100, B200, GB200) is the successor to Hopper (H100) and is currently **the most powerful AI chip in the world**.

**What does it have that others don't have?**

**1. Double the performance of H100**
* Massive increase in FP8, FP4, and tensor throughput
* Better at training trillion-parameter models
* Faster inference for LLMs like GPT-5/6

**2. Huge memory bandwidth**
* Uses **HBM4 / HBM3e** (depending on configuration)
* Critical for extremely large models that require rapid memory access.

**3. NVLink 5 / NVSwitch**
* Allows **thousands of GPUs** to share compute memory like a **single giant GPU.**
* Perfect for hyperscale training (OpenAI, Meta, Tesla, etc.)

*So, without Blackwell,* ***TESLA, META and OpenAI can fuhgeddaboudit.*** And top it off with being designed to be more energy efficient per FLOP and we have the winner.

***All this is because a bunch of bozos trading hedge funds along with mutual fund program trading is buying into the hype that the Artificial Intelligence (A.I.) scenario is over.*** **Well people, it is not.** In fact, it's in the infancy stage. The more powerful the A.I. tool, the more powerful the GPUs needed, and the more powerful the GPUs, the more cooling must take place. Here it is in a nutshell for all. Read it over, because this is not what the **"TALKING HEADS"** on CNBC or Bloomberg or Fox Business are going to tell you, **because they don't know and they're not tech.**

**GPU (Graphics Processing Unit)**
* **Specialized massively parallel processors**
* Designed to run **thousands of operations at once.**
* Typically **hundreds to tens of thousands of CUDA cores**
* Essential for:
* Matrix multiplication
* Tensor operations
* Neural networks
* Training large language models
* Inference at scale

Think of the GPU as **10,000+ workers doing simple math operations simultaneously.** **AI training** = **huge matrix multiplications**. GPUs = **built specifically for this**. GPUs also have:
* High memory bandwidth (HBM)
* Tensor cores (specialized for neural-network math)
* Parallel architecture

**Modern AI runs on GPU clusters, often thousands of GPUs networked together.**

Mentions:#HBM
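The "huge matrix multiplications" point lends itself to a quick back-of-envelope; the matrix size below is an arbitrary illustration, not a claim about any specific model.

```python
# Why "AI training = huge matrix multiplications": a naive matmul is
# O(n^3) independent multiply-adds, exactly the kind of work thousands
# of GPU cores can split up. The size n is illustrative.
n = 4096
flops = 2 * n ** 3   # one multiply + one add per inner-product step
print(f"{flops / 1e9:.0f} GFLOPs for a single {n}x{n} @ {n}x{n} matmul")

# Each output cell is independent of the others:
#   C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
# so a GPU can hand different (i, j) cells to different cores, which is the
# "10,000+ workers doing simple math simultaneously" picture above.
```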
r/stocks

AI bubble. Have a look at the PC market for gamers, tinkerers, etc. Parts that used to cost 50 bucks have at least tripled for DDR5 RAM, just because these companies prioritize HBM over DDR5 for their AI data centers. PC GPUs are in the $2,000-3,000 range because of the GPU oligarchy. They used to be in the $500-1,500 range 5-6 years ago. All production for the private market seems to have been reallocated towards building data centers for 2026.

Mentions:#PC#HBM

I remember when solar panels were the shit in 2007 and all the "experts" were convinced the US would lead the green revolution, that no one else could achieve the purity we did in the specialized silicon. Less than a year later China disrupted the whole industry, and solar panels are now almost entirely made there for 95% less money. Chinese suppliers are scaling DRAM and HBM capacity at triple digits with help from the CCP. Most DRAM capacity bookings are unfunded. Those startups are losing money with investor interest drying up. They can't follow through on DRAM orders. Selling $CRSR or buying $MU here is nuts.

Mentions:#HBM#CRSR#MU

HBM demand being driven by AI data center construction boom.

Mentions:#HBM

DRAM and HBM supply is as tight as it gets. Lots of room to go on this and related companies.

Mentions:#HBM

The realistic take on this is that AI increased demand for the more expensive HBM. So, of course, they all started producing more of it, which reduced the capacity available for producing other RAM and memory products, which reduced supply while demand for those products also increased; hence the price increases. This is not at all uncommon in the memory market.

Mentions:#HBM

Constraints until 2027 Q1. If you don't play semis cyclically, you're doing it wrong, and there is still a lot of upside from NAND and the actual HBM companies (Samsung, SK).

Mentions:#HBM

Micron’s primary focus has become HBM, a component that goes into every GPU used for AI. They’re one of three companies in the world able to supply HBM to Nvidia and AMD. So Micron’s future business is less commodified than GPUs.

Mentions:#HBM#AMD
r/stocks

Memory chip companies (SK Hynix, Samsung) all decided to increase RAM and memory prices. They all decided to make more HBM instead.

Mentions:#HBM
r/stocks

The reporting is that CXMT is sampling HBM to Chinese domestic producers like Huawei. But as of now I don't believe they have HBM production at scale, and definitely not at the level of Hynix. Do note that for HBM the current big 3 are not equal: Samsung and Micron are significantly behind Hynix in revenue, production, and technology. However, the current DRAM price lift is for the entire market, so everyone is going to see record demand, revenue, and profits regardless of HBM. Micron's price run-up is due to the overall market lift, and not specifically because it produces HBM.

Mentions:#HBM
r/stocks

The DRAM industry has seen massive consolidation over the decades, to the point that there are only 3 major producers (Micron, Hynix and Samsung) that account for something like ~95% of global DRAM production. The rest of the smaller players aren't capable of making HBM. The moat is that DRAM production and R&D are massively capital intensive and take years to scale up, even assuming you developed the technology and/or acquired the IP, so there is a large intrinsic barrier to entry. However, China is making a push for domestic DRAM as well, and CXMT is set to be a major player in DRAM, but at this point it's not entirely clear at what scale and ramp-up.

Mentions:#HBM#IP
r/stocks

Thanks. I've owned $MU for a while, and have done quite well, but did not know what HBM meant.

Mentions:#MU#HBM
r/stocks

Could you explain why there are no other US HBM providers competing with Micron? Is it a "moat" thing?

Mentions:#HBM
r/stocks

Apologies, is HBM high bandwidth memory?

Mentions:#HBM

Micron is one of the 3 HBM providers, and the only one that is an American company. The other two, Samsung and SK Hynix are foreign companies, which comes with additional headwinds (tariffs, geopolitics). HBM providers are in a super cycle due to the AI boom. I think Micron will continue to go up from here and is a good investment.

Mentions:#HBM

its what, 12 forward PE, and they sold 100% of their entire 2026 HBM supply? its prolly gonna be a 500$ in 2027ish. I guess its still cheap, wait for a pullback maybe? next red day?

Mentions:#HBM

its cuz we dont wanna blow it like we did with GLD. Let the HBM god cook

Mentions:#GLD#HBM

Without the HBM memory, there is no AI supercomputer. Likewise, there is no Black Panther without Denzel Washington.

Mentions:#HBM

**Nvidia’s Korean “mega-deal” is a $5 trillion pump dressed as diplomacy.**

1. **260k GPUs headline is junk math.** No contract value, no delivery schedule, no penalty clauses. Industry rule of thumb: 30–40% of announced “AI clusters” never reach full deployment. Even if every socket ships, ASP on Blackwell is ~$35k; that’s $9B revenue spread over multiple years, barely 5% of Nvidia’s current annual run-rate (arithmetic spelled out below). The number is marketing, not money.

2. **China carrot is hallucination.** Huang himself says he expects zero export licences while whispering “$50B opportunity.” Beijing already blocked the castrated H20; why approve the full-fat B100? He’s selling a lottery ticket to momentum funds that need a narrative to justify 60× sales.

3. **Samsung “AI factory” is a re-label.** Samsung’s fabs already run computational litho on A100/H100 racks. Renaming the server room an “AI factory” adds zero incremental CapEx and zero new margin. It’s like Tesla calling its parking lot a “robotaxi hub.”

4. **Hyundai $3B “mobility cluster” is cloud-washing.** Cars don’t carry 700W GPUs. Hyundai will rent cycles from a hosted data centre, i.e., a vanilla enterprise customer win, not a strategic pivot.

5. **HBM4 supply “confidence” is delusional.** 16-high stacks with 3 µm TSVs are still on the lab bench. One quarter of yield miss and Nvidia’s entire 2026 product stack slips; the risk is material, the disclosure is zero.

6. **Valuation feedback loop.** Goldman lifts the price target to $240 (43× forward EPS) because the stock is up, not because estimates are up. It’s textbook late-cycle multiple expansion: analysts chasing price, not fundamentals.

7. **Media complicity.** Fifteen near-identical headlines inside five hours is a coordinated press release, not journalism. Korean retail even bid up food companies on “AI spill-over”, the surest sign of bubble behaviour.

**Trade it if you want, but don’t confuse theatre with fundamentals.**

Mentions:#HBM
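The point-1 arithmetic above, spelled out; the ASP and the 30–40% deployment-attrition haircut are the comment's own rules of thumb, not reported figures.

```python
# Reproducing the comment's "junk math" check on the 260k-GPU headline.
gpus = 260_000
asp_usd = 35_000                 # the comment's assumed Blackwell ASP
headline = gpus * asp_usd
print(f"headline revenue: ${headline / 1e9:.1f}B")   # ~$9.1B

# Haircut by the comment's 30-40% "never fully deployed" rule of thumb:
for attrition in (0.30, 0.40):
    print(f"with {attrition:.0%} attrition: ${headline * (1 - attrition) / 1e9:.2f}B")
```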

If you are paying attention, Nvidia is incredibly dependent on Korea. SK Hynix and Samsung make the majority of the world's HBM, which is an essential component in AI chips. I'm up 226% YTD on SK Hynix shares

Mentions:#HBM

The plan, announced on the occasion of Nvidia chief executive officer (CEO) Jensen Huang's appearance at the APEC CEO Summit in South Korea, involves the U.S. chip giant working with the Seoul government and local industry players, including Samsung Electronics Co., SK Group, Hyundai Motor Group and Naver Cloud Corp., to build "AI factories" using Nvidia's Blackwell graphics processing units (GPUs). Nvidia's Blackwell GPU contains eight units of advanced HBM3E memory, which means the 260,000 GPUs allocated for South Korea will require roughly 2.08 million HBM units. HBM is a core component for AI servers, with its demand rising rapidly as the memory chip significantly boosts data-processing speeds in data centers. JENSEN SPEECH IN ABOUT 3 HOURS MAY WE SEE NVDA ABOVE 212 TODAY

Mentions:#HBM#NVDA
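The HBM arithmetic in that report is straightforward; a one-line check of the ~2.08 million figure:

```python
# Eight HBM3E stacks per Blackwell GPU, per the report above.
gpus = 260_000
stacks_per_gpu = 8
print(f"{gpus * stacks_per_gpu:,} HBM stacks")  # 2,080,000, i.e. ~2.08 million
```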

I say Micron MU. Always. Yea, yea. The HBM supercycle has begun.

Mentions:#MU#HBM

The stated capacity is completely sold out for 2026: not just HBM, but DRAM and NAND too. CapEx expansion: don't have details. DRAM growth is tight as well, with a 20% YoY projection. Expecting a prolonged supercycle. Shipping HBM4 in Q4.

Mentions:#HBM

AMD Scales the AI Factory: 6 GW OpenAI Deal, Korean HBM Push, and Helios Debut

OpenAI and AMD have launched a multi-year, multi-generation partnership to deploy 6 GW of GPU compute, beginning with a 1 GW phase in 2026, while advancing new AI data center architectures built around liquid cooling, high-density power, and the open, rack-scale Helios platform unveiled at OCP 2025. The alliance links AMD’s silicon roadmap to OpenAI’s global Stargate buildout, spanning next-gen HBM memory partnerships in Korea and a shared commitment to open, sustainable AI infrastructure at gigawatt scale.

David Chernicoff, Matt Vincent | Oct. 20, 2025 | 12 min read

Key Highlights
* OpenAI and AMD plan to deploy up to 6 GW of GPU compute capacity, starting with 1 GW in 2026, driven by next-generation Instinct MI450 accelerators.
* The partnership includes a warrant for AMD shares, potentially representing 10% of AMD's shares, aligning long-term interests and revenue growth prospects.
* AMD's Helios rack system exemplifies open standards, supporting high-density, liquid-cooled AI infrastructure with exascale capabilities, targeted for deployment in 2026.
* Strategic partnerships in Korea with Samsung and SK Group aim to expand high-bandwidth memory supply, critical for large-scale AI GPU deployments.
* The collaboration signals a shift towards open hardware ecosystems, diversifying supply chains and fostering innovation in AI data center design.

OpenAI and AMD have announced a multi-year, multi-generation partnership focused on deploying up to 6 gigawatts (GW) of GPU compute capacity, beginning with an initial 1 GW phase powered by AMD’s next-generation Instinct MI450 accelerators in the second half of 2026. As part of the agreement, OpenAI received a warrant for up to 160 million AMD shares at $0.01 per share, vesting upon key deployment milestones through October 2030. If fully exercised, the warrant would represent roughly 10% of AMD’s current outstanding shares. AMD estimates the collaboration could generate tens of billions in annual revenue over time, with as much as $100 billion cumulatively tied to this and related hyperscale customers. The deal establishes AMD as OpenAI’s second primary GPU supplier, alongside Nvidia, reinforcing OpenAI’s drive to secure “as much compute as we can possibly get.” AMD’s accelerators are widely recognized for their superior performance-per-watt efficiency, even if they do not yet surpass Nvidia’s top-end GPUs in absolute performance. Investor sentiment reflected the magnitude of the announcement: AMD shares surged more than 20% following the newest announcement.

Mentions:#AMD#HBM#MI

It looks like QCOM is targeting a niche on the inference side that will use LPDDR pools in lieu of bottlenecked HBM. Definitely not challenging NVDA, AMD, or AVGO.

I haven't seen any technical commentary on this, so maybe there is something there that will give them some traction. But the actual 'chips' are only one of many reasons Nvidia is dominating the game. It's all the other parts of the ecosystem that give Nvidia its competitive moat: everything from the upcoming Rubin CPO to HBM4 to NVLink 6 to CUDA 14.

Mentions:#HBM

MU sold off on the QCOM jump, but MU is a leading supplier of LPDDR memory, and margins on non-HBM memory are expected by some to exceed HBM margins next year.

Mentions:#MU#HBM

$AMD swings red on account of Qualcomm. MU swings red on account of (I think I saw Samsung lowering their HBM prices just now?).

Mentions:#AMD#MU#HBM

KOSPI is actually lagging SPY for 5 year total return. They didn't do too well until the HBM shortage this year.

Mentions:#SPY#HBM

SNDK is now pushing $190. DRAM and HBM prices were just hiked for the fourth quarter of 2025 by upwards of 30%. None of the manufacturers are quoting any prices for 2026. What are your thoughts? Is there a shortage or is this a mirage?

Mentions:#HBM

Amazing timing. So I did my DD on this and tried posting, but the mods canned me, so I'll try posting it in a comment here. Anyways, I'm in big time.

Micron Tech DD

My fellow regards, I thought it was about time I gave something back to the community, so here's my DD on MU. It will be short and sweet, as the mods already canned version 1 for being too long.

MU is an American company that makes memory (DRAM), high bandwidth memory (HBM) and storage solutions. Currently trading at 210 with a market cap of $238B. It's top 3 in the market after Hynix and Samsung; other competitors are <1% market share. Clearly MU is the dominant American player. 60% of MU revenue comes from the US and growing. The data centre business is 55% of revenue with a 50% margin rate. HBM revenue is growing at a great rate and will soon surpass DRAM. FY2025 EPS came in at $7.59, with $2.83 last quarter. With strong guidance for next year at $21, that gives a current forward P/E of 10!!!!

Pros: this is a great AI story from a profitable company that will be hugely undervalued next earnings. Cons: cyclical business and large capex investments into new fabs in the US.

I've been in since September with April 26 $200 calls. Flipping these into 3x $300 calls. Thanks and good luck at the casino!

Mentions:#DD#MU#HBM
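The DD's headline multiple is simple to reproduce; the price and EPS figures below are the post's own, not verified numbers.

```python
# The DD's forward P/E arithmetic (all inputs are the post's figures).
price = 210.0
fy2025_eps = 7.59          # reported FY2025 EPS, per the post
guided_fy2026_eps = 21.0   # next-year guidance, per the post

print(f"trailing P/E: {price / fy2025_eps:.1f}x")         # ~27.7x
print(f"forward  P/E: {price / guided_fy2026_eps:.1f}x")  # 10.0x, the "10!!!!"
```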

#TLDR

Ticker: MU
Direction: Up
Prognosis: Buy $300 Calls
Why: Dominant US memory/HBM producer, strong AI play with growing data center revenue and a low forward P/E of 10.
Risk: Business is historically cyclical; big investments in new fabs could weigh them down in a downturn.

Mentions:#MU#HBM
r/stocks

I used to have most in SPY and about 40% split across 20ish stocks. I sold all SPY and went for 40 stocks instead. I find a winner like Nvidia and diversify around it. For example I have Nvidia (GPU), TSM (Foundry), and Micron (HBM). For the AI neocloud space I own CoreWeave, Nebius, IREN, and Terrawulf. I reallocate within each cluster based on valuations just a little tweak at a time. I gravitate to larger companies with many business sectors like Microsoft and Berkshire Hathaway as well to build in diversification by default. I mostly avoid ETFs because of the fees and because they come packaged with stocks I don't like, Tesla and Apple for example. I follow the AI sector very closely, but once I want to hop off the train I'll probably diversify with some ETFs again but specialized ones with holdings that reflect my preferences, not general S&P or total market funds. More like Cybersecurity or Biotech. Basically concentrate based on expertise and delegate when you're unsure.

The bottleneck in AI now isn't GPUs, it's packaging: connecting and cooling those monster chiplets and HBM stacks. That's Amkor's turf. Amkor is the TSMC of chip packaging. Boring name, essential business, totally undervalued. Building a $7B advanced packaging plant right next to TSMC in Arizona; it's a giant in the making.

Mentions:#HBM

wait i was too busy jerking off to GLD to notice that my boi MU handily beat 200$, 205 and rising SON. 20 years in the making. HBM4 TO THE MU-UN

r/stocks

SK Hynix. I use Hynix RAM sticks and a Hynix SSD, though both aren’t labeled as such (Klevv and Solidigm). Even if the HBM market dries up, I know they’ll still be the top RAM/SSD manufacturer in terms of quality.

Mentions:#SSD#HBM
r/stocks

In my case my bull case was simply based on 3 things:

1. Identifying that Korean stocks, in aggregate, had among the lowest price/earnings and price/book of any country. Incredibly low starting valuations meant significant upside. I think I found it because I was looking up how to invest in SK Hynix, and realized my only feasible option was to buy a Korean ETF.

2. Identifying that Samsung and SK Hynix, the 2 largest stocks in Korea, had tremendous potential to benefit from the AI boom due to HBM being an essential component for AI, but were trading at very cheap valuations. So Korea wasn't just a value play, it was a growth play too.

3. A recent shift towards improving governance in Korean companies by improving shareholder returns (buybacks/dividends).

Mentions:#HBM

#AMD

# 💥 AMD’s Stock Surge: Speculation Outrunning Execution

Wall Street’s recent enthusiasm for AMD — pushing the stock up over 11% to $235.56 — appears driven more by AI hype than by grounded operational readiness. The announcement of a partnership with OpenAI has triggered a wave of speculative buying, but beneath the surface, AMD faces critical execution risks that investors are glossing over.

# 🧠 Software Lock-In: CUDA Isn’t Just a Preference — It’s Infrastructure

OpenAI’s entire model stack is deeply embedded in NVIDIA’s CUDA ecosystem. Transitioning to AMD’s ROCm platform would require extensive engineering effort, retraining, and validation. This isn’t a simple swap — it’s a full-stack migration with real performance and reliability risks. ROCm has made progress, but it still lacks the maturity, developer tooling, and ecosystem depth that CUDA offers.

# 🏭 Fabless Constraints: AMD Doesn’t Own the Means of Production

AMD is a fabless chip designer, reliant on TSMC and Samsung for advanced node manufacturing. With TSMC’s 5nm and 3nm capacity already dominated by Apple, NVIDIA, and Qualcomm, AMD must compete for wafer starts, packaging slots, and HBM inventory. Without guaranteed priority access, AMD’s ability to deliver MI300X accelerators at scale is speculative — and vulnerable to delays.

# 📉 Performance vs. Promise: MI300X Needs to Prove Itself

While AMD’s MI300X accelerators look strong on paper, real-world benchmarks often reveal a gap between theoretical specs and actual throughput. AMD’s scale-out performance — especially in multi-GPU training — lags behind NVIDIA’s vertically integrated stack, which includes NVLink, InfiniBand, and optimized software libraries.

# 🚧 Supply Chain Bottlenecks: From HBM to CoWoS

Even if AMD wins OpenAI’s business, fulfilling it requires navigating a complex supply chain. High-bandwidth memory, advanced packaging, and system integration are all constrained. A single disruption — whether in HBM supply or packaging throughput — could derail deployment timelines and damage credibility.

# ⚠️ Bottom Line

AMD’s rally may reflect investor excitement, but it doesn’t yet reflect execution certainty. The OpenAI partnership is promising, but it’s also fraught with technical, logistical, and strategic risks. Until AMD proves it can deliver at hyperscale — with stable software, reliable supply, and validated performance — this pump may be more froth than fundamentals.

Mentions:#AMD#HBM#MI

HBM has been treating me well.

Mentions:#HBM

If it’s HBM, I get it

Mentions:#HBM
r/stocks

The recent enormous deal between OpenAI and two of Micron's biggest competitors, SK Hynix and Samsung, makes me think that an even bigger deal related to MU is due. Especially since Micron's HBM4 is said to be the market leader.

Mentions:#MU#HBM
r/stocks

People are just confused, I guess. The AI buildout is real. Power generation, GPUs, HBM, networking, cooling, wafers: there are insane amounts of money flowing to those sectors. Real money. You either pick up the bills or cry "b-b-bubble!!". Are there any signs these huge data centers will not be built? Not that I can see. Maybe AI turns out to be useless. But those data centers will have been built by then.

Mentions:#HBM
r/stocks

Nvidia is at 53x earnings, not 29x earnings. You can't just use trailing PE for Cisco but forward PE for Nvidia. You aren't comparing apples to apples. The risk with Nvidia isn't just that their growth will slow down, it's also that their earnings could drop. There are many, many reasons why this can happen:

1. Competition is heating up. Broadcom is eating into their market share for inference. AMD is doing their best to compete as well on both inference and training. China is attempting to distance themselves from Nvidia.

2. The brute-force compute approach to training AI models is encountering diminishing returns. The difference between a $5 million and $500 million model is massive; the difference between a $500 million and $50 billion model is minimal.

3. Distillation allows for training of AI models at an extremely low compute budget. As a result, this allows competitors to offer low-price models, which eats into the pricing power of top-tier models, resulting in poor ROI for AI training.

4. HBM as well as energy is becoming a much bigger constraint than raw GPU compute is.

Overall, the poor ROI on AI investments will force Nvidia's end users to look into more cost-effective and profitable solutions: custom chips via Broadcom, distillation for pre-training of specialized models, improved processes for training, smaller spend on compute.

Mentions:#AMD#HBM

I think HBM is not the future. New SSD tech with much higher capacity is.

Mentions:#HBM#SSD

What storage company (NAND) to buy? One that is focused on AI enterprise (assuming new tech will challenge HBM from Micron, for example).

Mentions:#HBM

You can sell something to feel good ;) My LEAP calls are also deep ITM. I trimmed short-dated October calls, they ran 700-1000%, sold them already. A little too early, but at least I’ve got plenty of fuel for short-term plays. The shift is exactly what I was talking about. I don’t care about Samsung or Hynix - they’re already benchmarks on KOSPI. Micron is undervalued and overlooked because this isn’t a normal cycle anymore, it’s a supercycle. Mentality is shifting. Analyst re-ratings force ETFs to rescale exposure to memory, and that’s why the whole memory market is ripping higher. WDC and STX are riding the same wave, but MU has the AI backbone advantage with HBM3e already shipping and HBM4 sampling, so I see it as the core play.

Have you even run one of these firsthand? You'd know that what you typed was a 5-paragraph essay of bullshit in less than 30 minutes. I'm not even going to waste my time talking to someone who has no grasp of the technology outside of watching a few YouTube videos who now thinks he's an SME. There are a lot of technical reasons why CPUs are not going to outperform GPUs in this area, but you are so confidently wrong in your understanding that the most time I'll put in is having GPT dunk on you:

1. Bandwidth isn't the whole story. Yes, inference is memory-bound, but doubling RAM bandwidth doesn't double token speed. Caches, latency, instruction throughput, and parallelism all matter. GPUs aren't just HBM — they're thousands of cores optimized for tensor ops.

2. CPUs are not more efficient. On tokens per watt or tokens per dollar, GPUs crush CPUs. A CPU looks "efficient" only if you compare peak idle power, not sustained inference throughput.

3. Quantization ≠ practical CPU inference. Sure, quantization makes a 120B fit in RAM, but on a CPU you'll be crawling at <1 token/sec. That's fine for proof-of-concepts, useless for production chatbots, copilots, or anything latency-sensitive. (A back-of-envelope version of this math follows below.)

4. Market reality. Enterprise and hyperscale AI is GPU-first. CUDA/cuDNN is a 15-year moat, optimized kernels and tensor cores give GPUs an order-of-magnitude lead, and nobody is ripping that stack out for CPUs. CPUs fill niches: private, small-scale, or mixed workloads — not the growth engine.

5. The NVIDIA–Intel deal. That's about Intel Foundry Services making custom CPUs for NVIDIA's platforms. It doesn't mean NVIDIA is "switching to Intel CPUs for AI." AMD EPYC and ARM (Grace, Graviton) are already taking share in data centers.

Bottom line: LLMs can run on CPUs, but not at the throughput or efficiency the market actually needs. GPUs will stay dominant; CPUs only cover niche/private inference.

Mentions:#HBM#AMD#ARM
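For point 3 in the list above, a back-of-envelope roofline makes the "<1 token/sec" claim concrete: single-stream decoding is roughly memory-bound, reading all weights once per token, so tokens/sec is capped near bandwidth divided by weight bytes. All numbers below are illustrative assumptions.

```python
# Back-of-envelope: why CPU inference "crawls" on a large quantized model.
# Single-stream decode reads ~all weights once per token, so:
#   tokens/sec <= memory_bandwidth / weight_bytes
params = 120e9          # 120B-parameter model, per the comment
bytes_per_param = 0.5   # ~4-bit quantization
weight_bytes = params * bytes_per_param  # ~60 GB of weights

for name, bw_gb_s in [("dual-channel DDR5 CPU box", 80), ("HBM3e GPU", 8000)]:
    ceiling = bw_gb_s * 1e9 / weight_bytes
    print(f"{name}: ~{ceiling:.1f} tokens/sec ceiling")
# ~1.3 tok/s vs ~133 tok/s before any compute or software overheads,
# consistent with the "<1 token/sec" claim. Illustrative figures only.
```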

For example: IP for HBM4 controllers, IP solutions for faster performance.

Mentions:#HBM#IP

This stock will be running to 200 by the end of the year, but it was never going to gap up like this the second the numbers hit the tape. The RSI had touched 80, which is a very reliable indicator of a short-term top for any individual stock. Trust me, I got burned doing a similar trade to this last report, but was still convinced Micron was the most undervalued tech infrastructure play, so I doubled down and bought more calls. The key is to buy much longer expirations and just wait it out. Don't be scared to sell when you're down 30-40% initially; that can easily flip to a 150-200% gain once the rip happens. I myself made 150% during the recent run-up, sold when it hit $150 and then got back in after the earnings price action disappointment. The forward guidance this company released is absurd, and the fact that they are the only US-based HBM manufacturer is proving to be a massive tailwind in directing business from chipmakers like Nvidia and others away from the Korean companies and towards Micron. Stay long and believe, don't let short-term volatility sway your belief.

Mentions:#HBM
r/stocks

MU crushing it isn’t a shock - HBM demand is basically the backbone of the AI trade. Just don’t forget memory is cyclical, so chasing after a double in ’25 could be late. Best move now is waiting for a pullback or rotation instead of FOMO buying the highs.

Mentions:#MU#HBM

DRAM/NAND/solid-state drives/HBM are all technically overbought: Micron, SanDisk, Rambus, Seagate, Western Digital.

Mentions:#HBM
r/stocks

Agree that it has historically been a highly cyclical industry. Part of the potential bull thesis is that if the AI/data center capex cycle has legs the demand for memory, and especially HBM, will make the overall industry less cyclical and it could cause it to rerate to a higher multiple. It's basically a "value" proxy for the data center boom even though it has run up significantly. If the narrative or sentiment on the buildout changes it definitely also has significant downside that many of the other names on the list don't share.

Mentions:#HBM
r/wallstreetbets

I read NVDA is their only HBM customer?

Mentions:#NVDA#HBM
r/wallstreetbets

>The company expects 2025 data center server units to grow ~10%, PC shipments to rise mid-single digits, and HBM market share to track overall DRAM share

Cut your losses on MU calls. They aren't going to print.

Mentions:#PC#HBM#MU
r/wallstreetbets

>The company expects 2025 data center server units to grow ~10%, PC shipments to rise mid-single digits, and HBM market share to track overall DRAM share

yeah, anyone that FOMOed into MU calls is definitely toast. IV crushed nuts at best.

Mentions:#PC#HBM#MU
r/wallstreetbets

Is it crowded? The only competition for HBM is Samsung and SK Hynix.

Mentions:#HBM
r/wallstreetbets

Also a lot of horseshit around HBM4: shipping samples, trying something, blah blah blah. This is the most important part that drives the stock. I smell a pullback tomorrow!

Mentions:#HBM
r/wallstreetbets

It surely beat; what matters is HBM guidance and margin.

Mentions:#HBM
r/wallstreetbets

yes, **Samsung just got Nvidia’s HBM4 green light.** But early volumes are very **limited**, so this is really a late-2026 share price impact... and regardless, demand is still sky high. Micron is scaling HBM to more GPU/ASIC customers and has been targeting **~24% DRAM share** in 2025. With the billions in compute being bought by OpenAI, META etc... demand will keep growing, so 24% of that market is massive. check out my DD here: https://www.reddit.com/r/TheRaceTo10Million/comments/1nojcrv/mu_why_micron_should_print_after_earnings_today/

Mentions:#HBM#DD
r/wallstreetbets

Praying HBM 3 and 4 demand is high.

Mentions:#HBM
r/wallstreetbets

I don't think the pump was MU-related but more sector- and AI-generated, especially with ORCL, NBIS, GOOG, AMD, etc... MU already stated concerns around the supply of the overall HBM chips. Kind Regards. Spider

r/wallstreetbets

I’m gonna drop two symbols out there because they’ve been good to me: HBM and ADAP.

Mentions:#HBM#ADAP
r/wallstreetbets

play MU earnings if you want a true 50/50 ~10-12% move. Memory will stay hot for the next few quarters; costs and demand have soared with better-than-usual margins (memory gross is volatile and cyclical). they could also easily post shaky guidance on HBM4 or anything else and fall. you should really buy SanDisk + SK Hynix and chill

Mentions:#MU#HBM
r/wallstreetbets

It will depend on how much HBM contributes to future revenue to justify the stock being secular. I think the safest play is $200C 6-month or LEAPS; you won't gain much overnight but will print bigly in a month or so. many analysts already set a price target at $200

Mentions:#HBM
r/wallstreetbets

Micron ($MU) kicks off chip earnings season today, with key questions focused on **HBM4 progression**, whether **memory price hikes** will hold, and the outlook for **PC and phone demand**. While strong earnings are expected, investors will be closely watching for clarity on its HBM4 performance and if the pricing cycle can boost profitability, which could influence the stock's future.

Mentions:#MU#HBM#PC
r/wallstreetbets

$MU Holy Smokes. what would it take for MU? For Micron's upcoming Q4 FY2025 earnings (after close September 23, 2025), a "big beat" like Oracle's would mean not just topping consensus (EPS ~$2.82, revenue ~$11.1–$11.2B, up ~43–45% YoY) but delivering a forward-looking catalyst that reignites AI hype—potentially sparking a 20–30%+ stock pop. Micron's strength is in memory chips (DRAM/HBM) fueling AI data centers, similar to Oracle's cloud infrastructure. Here's what it would take, broken down by key areas:

Blowout AI/HBM-Specific Highlights

HBM Revenue Ramp: HBM (high-bandwidth memory for AI GPUs) was ~50% of Q3 sequential growth, with a $6B+ run rate and sold out through 2025. Announce Q4 HBM revenue >$1.5B (up 50%+ QoQ) and confirm 20–25% DRAM market share in HBM by year-end—positioning Micron as a "must-have" for Nvidia/AMD AI servers.

Mentions:#MU#HBM#AMD
r/wallstreetbets

>won't make it output a better result or run faster

Quite the opposite, it will run faster. The forward pass scales with Amdahl's law. This is the same reason every accelerator company is pushing for high-bandwidth networking. HBM capacity is also not a limiting factor, as models can run both forward and backward passes in parallel. If the network is fast enough, there is almost no penalty for doing so.

Mentions:#HBM
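Amdahl's law, as referenced above, in one small sketch; the 95% parallel fraction is a made-up input for illustration.

```python
# Amdahl's law: speedup from parallelizing a fraction p of the work over n units.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A pass that is 95% parallelizable caps out near 20x no matter how many
# accelerators you add; the serial 5% dominates. This is why fast
# interconnects (which shrink the effective serial fraction) matter so much.
for n in (8, 64, 1024):
    print(f"n={n:>4}: {amdahl_speedup(0.95, n):.1f}x")
```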
r/wallstreetbets

> A lot of limitations of AI you described has a lot to do with limited compute for user facing apps, so I guess it sort of makes sense why there is this great push for GPU and data center investment doesn’t it? Let’s assume the leadership of nvda, OpenAI, and other mega cap tech aren’t stupid, we can probably get closer to understanding what is really going on and how things will evolve.

I've worked with AI/ML, so I'd like to clarify a few things.

1. Adding more GPUs for training has very significant diminishing returns, especially when the training process involves subjective analysis (IE humans rating which response is better). This is because you are basically brute-forcing trying to find a minimum. You can get closer and closer, but over time the fuzziness of the training data gets in the way.

2. From an inference standpoint, compute is not the bottleneck. Inference cannot be done in parallel across multiple GPUs. Speeding up compute only allows the answer to be reached faster.

3. HBM is the biggest bottleneck for broad GenAI applications like LLMs and image generation; the more memory you can fit on a chip, the more weights you can load into memory.

4. For specialized applications, training data is the biggest bottleneck. Your model cannot be better than its training data.

In terms of coding, the limitation is mostly to do with AI's inability to reason. I've asked all of the leading AI models to optimize a SQL query, on the highest reasoning setting. Gave them the full schema, info about indexes, etc. All of them produced a SQL query that performed WORSE or had syntax errors. I then worked on the query myself, and fixed it in 10 minutes...

Mentions:#ML#IE#HBM
r/wallstreetbets

I think the upcoming earnings EPS beat may be slightly priced in, but I think guidance is the big play here. Since they already sold out their 2025 supply at the end of Q3, it makes me wonder how much of the 2026 supply they have sold, and also about development with HBM. They're also spending $100 billion on fab expansion plans, so I think it just comes down to how they play that. But this EPS is the biggest we've seen so far, which is why I'm more bullish than on previous earnings calls.

Mentions:#HBM
r/wallstreetbets

I don't think all the bullish catalysts for $MU are fully priced in yet. Near-term AI demand is in the stock, but upside remains if a few things play out.

Longer AI memory supercycle. Wall Street mostly prices in 2025–26, not a multi-year AI-driven demand surge. That's what I'm betting on with the guidance.

HBM market share gains. If Micron grabs more of the HBM market, especially with AI OEMs, that's upside analysts haven't quite priced in yet.

Margin expansion. Consensus margins are ~28–30%, but a richer mix of HBM and DDR and fab yield gains could push this higher.

Government subsidies are also a factor. CHIPS Act & global incentives may reduce the capex burden more than expected, boosting FCF.

Valuation re-rating. The market still treats Micron as a cyclical memory name. A shift toward "AI infrastructure play" could justify higher multiples.

If even a couple of these materialize, Micron could support the more aggressive $190–200 PTs rather than just hanging around the $160s. I don't think we will see a retrace with the EPS expectation for Q4 being ~$2.80, because this brings TTM to $8 but may push FY2026 to the $12 range. Lots of potential upside, which makes the forward P/E closer to 20 based on TTM but 7.4 with a $12 projected EPS in 2026 at current prices.

Mentions:#MU#HBM#FCF
r/wallstreetbets

why? cuz samsung HBM4 or what?

Mentions:#HBM
r/wallstreetbets

Tried to ask GPT to run a simulation --> a modest price jump could result in a much better EPS and beat the guidance... with the tight demand, I think the chance of getting better ASP is pretty good...

# Scenario results (highlights)

* **Guidance (Base)** — Revenue $11.20B, Gross margin ~44.50%, Net income ≈ $3.072B, EPS anchored at **$2.85**.
* **A — DRAM price +5%**: Revenue ≈ **$11.63B** (+3.8%), Gross margin ≈ **44.63%**, Net income **+5.66%**, EPS ≈ **$3.01**.
* **B — DRAM volume +3%**: Revenue ≈ **$11.46B** (+2.28%), Gross margin ≈ **44.58%**, Net income **+3.39%**, EPS ≈ **$2.95**.
* **C — Price +5% & Volume +3% (combined)**: Revenue ≈ **$11.89B** (+6.19%), Gross margin ≈ **44.71%**, Net income **+9.22%**, EPS ≈ **$3.11**.
* **D — Downside (-5% price & -3% volume)**: Revenue ≈ **$10.53B** (-6.0%), Gross margin ≈ **44.28%**, Net income **-8.88%**, EPS ≈ **$2.60**.
* **E — HBM mix shift (+5 ppt DRAM share; incremental margin on the +5 ppt = 55%)** (mix improvement only): Revenue unchanged ($11.20B), Gross margin **~45.59%**, Net income **+3.36%**, EPS ≈ **$2.95**.

# Interpretation — what this means for beat probability

* A **modest DRAM price upside** (≈ +5%) alone produces **~5.7% more net income** and EPS rising from $2.85 → **~$3.01**. That would be a clear beat of guidance.
* A **modest volume upside** (+3%) gives **~3.4% net income upside** and EPS ≈ **$2.95** — also a modest beat.
* **Both price +5% and volume +3%** together create **~9.2% net income upside** and EPS ≈ **$3.11** — comfortably beating guidance.

Mentions:#HBM
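A minimal re-derivation of the EPS lines in that simulation, assuming EPS simply scales with the quoted net-income change; the implied ~76% DRAM revenue share is backed out from scenario A and is my inference, not a stated input.

```python
# Re-deriving the scenario EPS figures: EPS = base EPS x (1 + net-income delta).
# Base EPS and the deltas are the ones quoted in the comment above.
base_eps = 2.85

scenarios = {
    "A: DRAM price +5%":         +0.0566,
    "B: DRAM volume +3%":        +0.0339,
    "C: price +5% & volume +3%": +0.0922,
    "D: price -5% & volume -3%": -0.0888,
    "E: HBM mix shift":          +0.0336,
}
for name, ni_delta in scenarios.items():
    print(f"{name}: EPS ~ ${base_eps * (1 + ni_delta):.2f}")

# Sanity check on scenario A: a +5% DRAM price lifting total revenue +3.8%
# implies DRAM is ~0.038 / 0.05 = 76% of revenue (my inference).
```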
r/pennystocks

And if anyone wants examples: mmy.v; mko.v; HL; HBM. I bet any of those over ANY timescale outperforms this turd of a 1-ship company. https://preview.redd.it/eo10d6gdiiqf1.jpeg?width=1027&format=pjpg&auto=webp&s=9421bbbdabad0b3a274972267080f0448b1de429

Mentions:#HL#HBM
r/wallstreetbets

Oof, Micron always makes me nervous. Memory margins are tight and the supply/demand balance is highly cyclical. HBM may be an inflection point for Micron's profitability and (more) consistent demand… but ima sit this one out

Mentions:#HBM
r/wallstreetbets

I agree and plan to hold, most likely, if everything plays out the way I imagine it will. The key thing here is all the data centers in the U.S. will need memory, and due to Trump's tariffs and relationship with S. Korea, I think a lot of domestic companies will opt for Micron as they are U.S. based. If AI and data centers are going to ramp up, and Micron HBM4 is solid, I can see them taking the market domestically, especially if pricing is similar and on par with S. Korea's Samsung and SK Hynix.

Mentions:#HBM
r/wallstreetbets

tbh unless there's some major, major fail and a huge crash, im leaning more towards holding at least till mid 2026 since i got shares. I wanna see the HBM4 wars play out; i think this could be the really big catalyst for a push upwards. By then we'll know if they're firmly nr. 2 in front of Samsung, or if they're still fighting as a 3rd in a 3-way war.

Mentions:#HBM
r/wallstreetbets

I went big on Micron. Bought $50k of calls on Friday and already owned 300 shares outright. How's everyone feeling about them? A bit of a dip today because of Samsung's HBM4. Can't seem to invest in them or SK Hynix though. So fuck it. Let's go Micron. Merica' 🦅🇺🇸🚀😂

Mentions:#HBM
r/wallstreetbets

You're right about HBM, but you are so off about that GPU chip and its interconnects.

Mentions:#HBM
r/wallstreetbets

HBM is much harder to make than the primary IC because it requires extremely intricate post-etch sandwich assembly, with TSVs building a true 3D interconnect.

Mentions:#HBM
r/wallstreetbets

Samsung has caught up and has been allowed to become an HBM3E seller to NVDA... after SK Hynix/MU have been sold out for the entire 2025. Next-gen HBM4 starting in 3 months.. i guess better late than never xD But yeah, the read could be: Samsung is now better positioned for the 2026 HBM wars, to the possible detriment of MU?

Mentions:#NVDA#MU#HBM
r/wallstreetbets

When Samsung closes down and stops selling HBM3

Mentions:#HBM
r/wallstreetbets

MU about to get sodomized today. NVDA just qualified Samsung HBM3e

Mentions:#MU#NVDA#HBM
r/stocks

Look at forward P/E, growth rates, competitive advantage, etc. Micron is a good company and they will continue to run over the coming years. However, they are a commodity player. As OP mentions, there are 3 HBM players and Micron has the #3 market share. They all offer the same thing and margins are fairly low. I prefer investing in chip companies with higher margins (which is due to defensible IP). CRDO and ALAB both have 67% margins and are growing at parabolic rates. CRDO 4x'd revenues this year and will 2x next year. ALAB is growing like crazy because of all the networking configurations needed in AI data centres. They'll continue to surpass Micron. Also, OP is implying Micron could hit a $1T market cap. There's no chance that happens in the next 10 years, since there is nothing special about what Micron offers (which explains their low margins).

Mentions:#HBM#IP#ALAB
r/stocks

China has no answer yet for HBM which is core to modern AI. Maybe they catch up by 2030 with HBM4, but by then they'll still be a generation behind. Only 3 companies right now do HBM at scale: 2 are Korean, 1 is American.

Mentions:#HBM