
DGX

Quest Diagnostics Incorporated


Mentions (24Hr): 1 (0.00% today)

Reddit Posts

r/wallstreetbets: DGX Bloodhound

r/investing: Return on Net Tangible Assets Usage

r/stocks: Be honest. How much of y’all are just mad and Petty you missed the boat on NVDA?

r/stocks: Nvidia Call and Outlook Notes

r/wallstreetbets: Mass quoting or just hysteria DGX

r/stocks: Is Nvidia the future?

r/stocks: Nvidia earnings to offer first true glimpse of the AI windfall

r/pennystocks: Tech companies developed its edge-computing AI system to build an AI ecosystem

r/StockMarket: Seeking Feedback on my Stock Earnings Digest App using ChatGPT!

r/wallstreetbets: Wall Street analysts expect Nvidia stock ($NVDA) to surge 25%; are they underestimating?

r/wallstreetbets: Unleashing the Hybrid Cloud AI Revolution: Nvidia's DGX, IBM's Ansible, and the Perfect Storm

r/wallstreetbets: NVIDIA Co. (NASDAQ:NVDA) Shares Purchased by Polaris Wealth Advisory Group LLC

r/wallstreetbets: Nvidia released a new "nuclear bomb", Google chatbot is also coming, computing power stocks again on the tide of halt

r/StockMarket: Nvidia: Excellent Quarterly Earnings and On Its Way to Next Trillion-Dollar Company

r/WallStreetbetsELITE: DGX stock slips as outlook disappoints amid hit to COVID-19 test revenue (NYSE:DGX)

r/pennystocks: "$CEMI - CHEMBIO DIAGNOSTICS - BUY UNDER .50"

r/Shortsqueeze: "$CEMI - BUY UNDER .50 BEFORE THE NEXT P.R. TRIPLES THE SHARE PRICE"

r/pennystocks: "$CEMI - Buy Under .50"

r/pennystocks: "$CEMI - Chembio Diagnostics - Buy Under .50"

r/pennystocks: "$CEMI RECEIVES $3.25M C.D.C. CONTRACT"

r/pennystocks: "Buy $CEMI Under $1"

r/investing: Intel falls 10% after disappointing Q2 results: $0.29 EPS vs $0.70 expected. $15.3 billion in revenue vs $18 billion expected. CEO says third quarter is bottom

r/investing: Intel falls after disappointing Q2 results: $0.29 EPS vs $0.70 expected. CEO says third quarter is bottom

r/WallStreetbetsELITE: At 6.2% CAGR, Viral Disease Diagnosis Market Size to Reach US$ 30,046.1 Mn by 2030 | Rising Prevalence of COVID-19 and Advances in Clinical Research and Molecular Diagnostic Technology is Expected to Drive Market Growth $DGX $LH $CODX

r/Wallstreetbetsnew: At 6.2% CAGR, Viral Disease Diagnosis Market Size to Reach US$ 30,046.1 Mn by 2030 | Rising Prevalence of COVID-19 and Advances in Clinical Research and Molecular Diagnostic Technology is Expected to Drive Market Growth $DGX $LH $CODX

r/WallStreetbetsELITE: Monkeypox declared a global health emergency by the World Health Organization $DGX $LH

r/Wallstreetbetsnew: Monkeypox declared a global health emergency by the World Health Organization $DGX $LH

r/WallStreetbetsELITE: $DGX: Quest Diagnostics beats by $0.10, beats on revs; guides FY22 EPS above consensus, revs above consensus (134.84)

r/Wallstreetbetsnew: $DGX: Quest Diagnostics beats by $0.10, beats on revs; guides FY22 EPS above consensus, revs above consensus (134.84)

r/WallStreetbetsELITE: Quest Diagnostics (DGX), CDC Sign New COVID-19 Testing Deal

r/WallStreetbetsELITE: $DGX Big Government Contracts Plus Rise In Monkeypox And Covid meaning more testing…means more contracts coming in going “under the radar”

r/Wallstreetbetsnew: Quest Diagnostics (DGX), CDC Sign New COVID-19 Testing Deal

r/WallStreetbetsELITE: CDC Newsroom-Wednesday, July 13, 2022 Quest Diagnostics will begin testing for monkeypox. $DGX

r/Wallstreetbetsnew: CDC Newsroom-Wednesday, July 13, 2022 Quest Diagnostics will begin testing for monkeypox. $DGX

r/investing: $DGX- EPS of $2.36 per share, beating the Zacks Consensus Estimate of $2.26 per share.

r/WallStreetbetsELITE: Quest Diagnostics Lifts Annual Guidance On Higher COVID-19 Test Revenue Anticipation $DGX

r/Wallstreetbetsnew: Quest Diagnostics Lifts Annual Guidance On Higher COVID-19 Test Revenue Anticipation $DGX

r/WallStreetbetsELITE: HHS orders additional vaccine, increases testing capacity to respond to monkeypox outbreak $DGX $LH

r/Wallstreetbetsnew: HHS orders additional vaccine, increases testing capacity to respond to monkeypox outbreak $DGX $LH

r/WallStreetbetsELITE: HHS Expanding Monkeypox Testing Capacity to Five Commercial Laboratory Companies $DGX $LH

r/WallStreetbetsELITE: Quest Diagnostics ($DGX) Tops Q2 Earnings and Revenue Estimates

r/Wallstreetbetsnew: Quest Diagnostics ($DGX) Tops Q2 Earnings and Revenue Estimates

r/pennystocks: "$CEMI. Buy Under $1"

r/stocks: Norges Bank (NORWAY) - Potentially Something HUGE Here

r/wallstreetbets: Norges Bank (NORWAY) - Potentially Something HUGE Here

r/pennystocks: $NXOPF - NexOptic - STILL MAKING ITS RUN

r/wallstreetbets: $DGX Quest Diagnostics is about to SMASH Expectations

r/wallstreetbets: 🔥 $BNGO Catalysts

r/wallstreetbets: $DGX suddenly a PRIME Short Squeeze candidate DD

r/stocks: $DGX Undervalued and now even has a Short Squeeze play

r/wallstreetbets: $DGX Quest Diagnostics - 6 Figure Nasal Reparations

r/Wallstreetbetsnew: $DGX Quest Diagnostics - 6 Figure Nasal Reparations

r/smallstreetbets: $DGX Quest Diagnostics Earnings Recap - Go Baby Go

r/Wallstreetbetsnew: $DGX Quest Diagnostics IV - Earnings Extravaganza

r/smallstreetbets: $DGX Quest Diagnostics IV - Earnings Extravaganza

r/wallstreetbets: $DGX Quest Diagnostics IV - Earnings Extravaganza

r/wallstreetbets: $DGX Quest Diagnostics Pt. 3 - Leading a Redditor to Tendies

r/smallstreetbets: $DGX Quest Diagnostics Pt. 3 - Leading a Redditor to Tendies

r/options: Iron condors with the same day expiration

r/wallstreetbets: DGX (Quest Diagnostics) is Free Tendies

Mentions

An enterprise-grade 8-GPU H200 server (like a Dell PowerEdge XE9680 or an NVIDIA DGX H200) currently costs between $400,000 and $500,000.

Mentions:#DGX

Lenovo market cap 15.8B on the HK exchange, same as super micro. My nipples are hard. DGX spark manufacturers could be a clue, ASUS? HPE too expensive by comparison

Mentions:#DGX#HPE

Yeah makes sense, they already partner with Nvidia for the DGX servers, the motherboards are designed by Super Micro

Mentions:#DGX

I bought some DGX. Might go all in.

Mentions:#DGX
r/stocks

At the Nvidia GTC, Nvidia announced new AI data-center hardware, including the Groq-3 AI inference chip and a new CPU-based server platform designed to power AI infrastructure. The new systems aim to provide a full AI data-center stack (CPU + accelerators + servers), which puts Nvidia in more direct competition with traditional server-CPU providers like Intel. Oh yes, and Intel chips are being used mainly in Nvidia’s DGX Rubin NVL8 AI servers, where Xeon 6 CPUs act as the host processor controlling clusters of Rubin GPUs in large AI data-center systems. However, as said, long-term Nvidia is coming for them. Plus no fab news.

Mentions:#DGX
r/stocks

I personally decided to stay away from memory and storage because historically they are a commodity business. Their biggest buyers would be server/desktop/laptop buyers just looking for the cheapest prices to increase their own margin. Now back in the day, I did invest in SNDK, before they got bought out and spun out again. They were the first to successfully commercialize flash, until it became commoditized. I still see there being room to run for memory and storage, but I'm a long-term investor. I feel there is some risk of the rug being pulled out at some point. These companies are benefiting from high demand, rather than from competitive advantage or innovation. I'd rather park my money in a company such as NVDA where historically it is well run, high margin, and innovative; and even if GPU sales slow they are building new revenue streams such as DGX Cloud.

Huh? They ain't buying your bog-standard 1RU and 2RU servers; they buy the DGX SuperPods with a massive NetApp all-flash array underneath.

Mentions:#DGX
r/stocks

He provided 60% of funding in 2015; Brockman has an open diary entry about lying to and deceiving him. Musk convinced Ilya to leave Google Brain for OpenAI. He was also the person who convinced Huang to give OpenAI DGX compute. All under the assumption they were a non-profit. That lawsuit is not clear cut. I am not a fan of Musk by any merits, but his numbers and evidence seem clear to me.

Mentions:#DGX
r/options

> Specifically because Nvidia uses silver in its high-end AI GPUs and server boards. LOL, dude. Yes, NVIDIA GPUs use silver, but not in any economically meaningful quantity. It's used mostly in solders (typically Sn-Ag-Cu alloys), and that is standard across the semiconductor industry, not NVIDIA-specific. Small amounts of silver are also used in die-attach materials, thermal interface materials, and conductive adhesives. Silver is not a core input like silicon, copper, or aluminum. There is no scaling relationship between GPU volume and silver demand that matters at the commodity level. Nor does an increase in silver price change the economics of the GPU market. Back-of-envelope check: assume 5–10 grams of silver per GPU server (this is far, far above reality). With silver at $100/oz ≈ $3/gram, that’s about $15 of silver per server. HGX/DGX-class servers sell for $150k–$400k+. NVIDIA’s gross margins are driven by silicon yield and wafer pricing (TSMC), advanced packaging (CoWoS), HBM stacks, and software (CUDA, networking, ecosystem lock-in). Raw material prices are almost inconsequential.
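For what it's worth, the comment's back-of-envelope holds up arithmetically. A quick sketch using only the comment's own assumed figures (5 g of silver per server, $100/oz silver, a $150k server):

```python
# Sanity-check of the silver-cost back-of-envelope above.
# All inputs are the comment's assumptions, not measured figures.
GRAMS_PER_TROY_OZ = 31.1035

silver_price_per_oz = 100.0   # assumed spot price, $/troy oz
silver_per_server_g = 5.0     # low end of the assumed 5-10 g per server
server_price = 150_000        # low end of the quoted HGX/DGX price range

price_per_gram = silver_price_per_oz / GRAMS_PER_TROY_OZ  # ~$3.2/g
silver_cost = silver_per_server_g * price_per_gram        # ~$16 per server
share_of_price = silver_cost / server_price               # ~0.01% of the server
```

Even at the 10 g high end, the silver is on the order of $30 in a six-figure machine, which is exactly the comment's point.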

Mentions:#DGX#HBM
r/wallstreetbets

Climate impact and surviving as a species is not grounded in any logic. AI is becoming more and more sustainable (a freaking DGX Spark runs on 120 watts peak), more and more ubiquitous, and less competitive, with clear winners like Google/Anthropic, while companies like OpenAI won't last very long. The rationale is flawed; that's a very risky take.

Mentions:#DGX
r/pennystocks

DGX!!!! pushing up to $4

Mentions:#DGX
r/stocks

They’re being incredibly smart with DGX. The whole goal here is to build an Nvidia ecosystem for AI. The idea is that everything just “works” with everything else; it’s also clear from some of the frameworks that come with their products for developing AI models. The end result is that you, as a company, no longer need to invest in developing the “low level” minutiae of AI, such as programming CUDA, memory management, etc. You only need to worry about developing a competent model using the frameworks they provide and any inputs you specify. You can see this already with things like the Jetson Thor. You have software like [Groot](https://developer.nvidia.com/isaac/gr00t) and [Cosmos](https://developer.nvidia.com/cosmos), which are fully fledged frameworks for developing robotic AI models that help handle things like image processing and similar. The downside is vendor lock-in. This is probably why we’re seeing Google’s TPUs being hyped up, as well as AMD. The problem really becomes that SMEs getting into AI work are going to find it much easier to use Nvidia, because Nvidia has already done the “difficult” part of the job and everything they offer is specifically designed around DGX. As a result, those SMEs trying to jump off the ecosystem later will find it much harder. They are also pulling an Adobe and offering DGX to the educational sector to try and get new graduates hooked on the ecosystem that Nvidia provides.

Mentions:#DGX#AMD
r/stocks

Tsla is using DGX to power their driving

Mentions:#DGX
r/stocks

2 things excited me: 1: Alphamayo. This is potentially pretty big: now a car manufacturer can just take this and give a shot at AV. I am going to need to see the new Mercedes Benz in action: it has no giant bulky sensor like a Waymo (though prob still has lidar), and it being production ready (to a point where you can buy one soon in the US) may be a pretty big disruptor in the robo-taxi game. One thing of note is that their partners are a giant list of Chinese EV makers. While companies like Stellantis, Uber, Benz, and Lucid are there, likely it is the BYDs and Xiaomis that will make this super main stream, esp if they are allowed to be tested in Chinese cities. 2) The whole DGX platform: like, how is anyone supposed to even compete with that? Feels like nvidia has taken the whole data center game to the next level. At the end of the day, even with a bubble burst, AI will still be all around us and utilized, and nvidia has given a strong reason for data centers to keep buying their stuff. (I jokingly made a remark of when he did the blackwell comparison: okay, so peak performance will be 10x, will the price of one of these also be 10x?)

Mentions:#EV#DGX
r/stocks

The personal ai agent robot called Reachy Mini that they want to provide for every desk is a very interesting product. [NVIDIA brings agents to life with DGX Spark and Reachy Mini](https://huggingface.co/blog/nvidia-reachy-mini)

Mentions:#DGX
r/wallstreetbets

Why would a 5090 sell for 5k when an RTX 6000 can be had for 5–8k? Maybe a 5090 Ti Super, but it would need at least 128GB of VRAM if it’s going to cost more than a DGX Spark…

Mentions:#DGX
r/wallstreetbets

1x 256GB RAM Stick: $4,799.99 Nvidia DGX Spark: $3999.99 Mac Studio (M4 Max, 128GB unified memory): $3,329.99 lol

Mentions:#DGX
r/stocks

The Nvidia–Groq deal is enormous. Nvidia now owns both the best training and inference platforms. By excluding GroqCloud, we clearly see NVDA stating that it will not be a cloud compute provider. DGX Cloud becomes strictly an R&D/POC kind of platform for partners and internal use. Nothing against Google. NVDA is absolutely killing it though.

Mentions:#NVDA#DGX
r/wallstreetbets

Nvidia has shifted to using low-power DDR in its DGX GB300 to save on server electricity budgets. This puts memory demand from Nvidia servers into direct competition with high-end smartphone makers, which use the same DDR chips. And it all comes amid a push to add more memory to premium consumer devices so that they can do on-device AI.

Mentions:#DGX
r/wallstreetbets

Nvidia’s AI moat in 2025 is still enormous, but the gap is finally narrowing at the edges. The core of the moat is not just GPUs; it’s the full‑stack ecosystem and lock‑in around CUDA plus Nvidia’s scale in data‑center AI. Why the moat is so deep • Nvidia still controls the large majority of AI accelerators in data centers, which gives them pricing power, massive R&D budget, and deep integration with every major cloud. • CUDA has effectively become the default “OS” for accelerated compute. Millions of devs, tons of tooling, endless tutorials and pretrained models are built assuming Nvidia GPUs. • They’re now full‑stack: GPUs, networking (Infiniband, NVLink), systems (DGX/HGX), and increasingly software platforms (Nvidia AI Enterprise, libraries, SDKs). Ripping this out is expensive and risky for big customers.

Mentions:#OS#DGX
r/wallstreetbets

The difference this time is that it's a structural shift in demand for new DRAM products that are based on long-term contracts, and for the most part inelastic to price hikes. All the fabs are shifting to HBM and SOCAMM2, which is 5 times more expensive. The B100 GPU has 192GB of HBM; the newer ones are 288GB. A medium-sized deployment is 128 servers of DGX B200, which is 2.3TB of HBM each. Do the math. These numbers are insane. DRAM is now the bottleneck for AI datacenter build-outs, so Nvidia, Google, Meta, Apple, pretty much everyone, is forced to contract out for years to secure the supply. For OpenAI and Nvidia, this is also about taking all the chips away so competition can’t grow. The war is being waged via DRAM; the winners are the memory oligarchs: Micron, Samsung, SK Hynix.
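The "do the math" step, sketched with the comment's own figures (8 GPUs per DGX server; 192 GB or 288 GB of HBM per GPU; a 128-server deployment) — treat these as the commenter's assumptions rather than verified specs:

```python
# HBM totals implied by the comment's figures.
gpus_per_server = 8

hbm_192_tb = 192 * gpus_per_server / 1000  # 1.536 TB/server (192 GB GPUs)
hbm_288_tb = 288 * gpus_per_server / 1000  # 2.304 TB/server (the "~2.3 TB" cited)

servers = 128
deployment_hbm_tb = servers * hbm_288_tb   # ~295 TB of HBM for one deployment
```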

Mentions:#HBM#DGX
r/wallstreetbets

Just a small note: the 200k is for the DGX... The GPU was $10–15k, which probably boils down to like 50c an hour once you add some server cost? From a Google search (AI Overview; NVIDIA A100 PCIe 80 GB Specs | TechPowerUp GPU Database): when NVIDIA launched the A100 in May 2020, it was positioned as a high-end AI accelerator, with systems like the DGX A100 costing around $200,000, while individual GPU modules (40GB/80GB) had street prices starting from roughly $10,000–$15,000, depending on the reseller and variant (PCIe vs. SXM).
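The "~50c an hour" guess is roughly consistent with straight-line amortization. A sketch assuming a $15k GPU run continuously over a 3-year useful life (both assumptions, not sourced figures):

```python
# Rough hourly cost of a $15k A100 amortized over 3 years of 24/7 use,
# before power, hosting, and the rest of the server bill of materials.
gpu_price = 15_000
hours = 3 * 365 * 24                # assumed 3-year life, running nonstop
cost_per_hour = gpu_price / hours   # ~$0.57/hr
```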

Mentions:#DGX
r/wallstreetbets

NVIDIA’s Rubin platform is set to launch in 2026 and will redefine AI computing by enabling million-token context windows, generative video, and agentic AI, which will further cement NVIDIA’s leadership in training and inference while expanding its reach into next-gen applications. Rubin will reshape the AI landscape with a massive leap in AI performance.

Rubin context processing extension (CPX) is built for massive-context inference and enables models to process million-token sequences, which is a game changer for code generation (such as full software systems), generative video, and agentic AI (utilizing multi-step reasoning and planning). The NVL144 CPX platform delivers 8 exaflops of AI performance and 100TB of fast memory per rack, dwarfing current Blackwell capabilities.

NVIDIA’s annual chip cadence means relentless innovation. Rubin is part of NVIDIA’s new annual release cycle, with Rubin Ultra slated for 2027 and Feynman in 2028. This cadence pressures competitors (like AMD, Google, and Amazon) to match NVIDIA’s pace in both hardware and software evolution. Will they keep up? No. They won’t.

NVIDIA is expanding beyond data centers. Rubin is designed not just for hyperscaler training, but also for sovereign AI infrastructure (national-scale deployments) and enterprise AI factories (like custom LLMs and vertical AI), and NVIDIA is working on edge inference at scale (via modular Rubin variants). The economic impact of this is a trillion-dollar AI boom. Jensen Huang projects that agentic AI will require 100x more compute than previously forecast. Rubin’s efficiency and scale could unlock $5B in token revenue per $100M invested in infrastructure.

NVIDIA has a strategic moat based on software and systems. Rubin is tightly integrated with CUDA and TensorRT for developer lock-in, as well as NeMo and DGX Cloud for enterprise AI deployment, with NVLink and InfiniBand for ultra-fast interconnects. NVIDIA’s full-stack approach is an unmatched platform with multiple next-gen chips in the works.

The conclusion that matters is that Rubin will extend NVIDIA’s already superior lead. Don’t let the hype and news shake you (or deter you) from the clear winner. While Google and Amazon are gaining ground in specific inference uses, Rubin will reinforce NVIDIA’s dominance in training frontier models, high-context inference, and AI infrastructure at national and enterprise scale. Let’s also not forget who is destined to get the lion’s share of international contracts.

Mentions:#AMD#DGX
r/wallstreetbets

Regarding Google and Amazon custom AI chips: TPUs (v5e, v6) are optimized for training and inference and are tightly integrated with Google Cloud. Trainium3 (training) and Inferentia2 (inference) offer high performance at lower cost and energy. TPUs are native to Google Cloud’s Vertex AI, enabling seamless scaling for LLMs. AWS is the largest cloud provider, and Trainium3 is embedded in UltraServer systems for enterprise AI. TPUs are often cheaper and more power-efficient than general-purpose GPUs; Trainium3 uses 40% less energy and delivers 4x the performance of its predecessor. But Google designs chips for its own AI workloads (like Search, Bard, YouTube), and Amazon uses its chips to power Alexa, AWS services, and internal LLMs.

With that said, NVIDIA still controls 80–90% of the AI training chip market, but custom ASICs are growing faster. There obviously is some hyperscaler defection risk: Meta, Google, Amazon, and Microsoft are all designing in-house chips to reduce reliance on NVIDIA, and Meta is testing Google’s TPUs for future workloads. But they can’t replace their reliance on Nvidia anytime soon. NVIDIA’s GPUs can sometimes be considered overkill for specific inference tasks, meaning they are more powerful than the task requires; Amazon’s Inferentia2 and Google’s TPUs can offer cheaper, more efficient alternatives for some very specific production-scale inference. As hyperscalers shift to in-house chips, NVIDIA may face pricing pressure and reduced volume in its highest-margin segment.

Yet NVIDIA still leads. CUDA ecosystem lock-in makes it difficult to switch: developers are deeply entrenched in NVIDIA’s software stack, making switching costly. NVIDIA has substantial performance leadership; Blackwell GPUs remain the gold standard for training frontier models. NVIDIA is also working on the next-gen Rubin line, to be released in 2026, which will make a clear statement of continued dominance. And NVIDIA has full-stack AI infrastructure: not just chips, but networking (NVLink, InfiniBand), systems (DGX), and software (TensorRT, NeMo).

So, outlook? Some fragmentation, but not outright replacement. Hyperscalers would love to produce everything NVDA does in-house and maintain quality standards, but they simply can’t and therefore won’t replace NVIDIA; they will, however, carve out share in specific domains (like inference and internal workloads). NVIDIA’s biggest risk is losing hyperscaler loyalty, not because of inferior tech, but because of cost, control, and vertical integration. But these are problems that can be resolved. By 2028, NVIDIA is projected to lose some AI chip market share to custom ASICs from Google, Amazon, and others, but it will remain the dominant player in training workloads.

AI chip market share projections (through 2028): NVIDIA still at approx. 80% of training and 60% overall, dominant in training but losing some inference share to ASICs. Google (TPU) approx. 5–7%; TPU production could reach 7M units by 2028. Amazon (Trainium/Inferentia) approx. 3–5%, with gains in inference, especially within AWS. AMD approx. 5–10%, as MI300X adoption grows, especially in cloud and HPC. Intel (Gaudi) under 3%, gaining traction in cost-sensitive enterprise AI. Others (startups, China) approx. 10%, including Hailo, Tenstorrent, Huawei Ascend, and domestic Chinese players.

In summary, by far, NVIDIA still leads. CUDA ecosystem lock-in creates a moat: developers and enterprises are deeply embedded in NVIDIA’s software stack. Blackwell and its successors remain the gold standard for training frontier models; this is training dominance. And full-stack integration, from chips to networking (NVLink, InfiniBand) to software (TensorRT, NeMo), gives NVIDIA unmatched vertical depth.

Mentions:#DGX#NVDA#AMD
r/stocks

Can't bear to hear Rogan tell the same story about wolves, aliens, and gorillas for the millionth time. Here's a summary of Big J's episode: >Executive Summary: The conversation between Joe Rogan and Jensen Huang centers on 4 main themes: the geopolitical and energy context of artificial intelligence, the nature and trajectory of AI capabilities and risks, the economics of compute and Nvidia’s strategic positioning, and Huang’s personal and corporate history as a case study in entrepreneurial risk, resilience, and culture. Huang characterizes AI as the latest phase of a long-running global technology race that confers “information, energy and military superpowers,” with national prosperity and security depending on energy growth, industrial capacity and technological leadership. He credits pro‑growth U.S. energy and onshoring policies in the previous administration with enabling the capital‑ and power‑intensive build‑out of AI factories and chip fabs, arguing that without such policies “we would not be able to build factories for AI.” >On AI trajectory and risk, Huang rejects a singular “event horizon” moment and anticipates a gradual, continuous improvement process, with multiple competing AIs balancing one another much like offensive and defensive cyber systems. He acknowledges serious concerns about military applications, cyber security and quantum‑era encryption but argues that the same AI technologies will be deployed at scale for defense, monitoring and post‑quantum cryptography. He views AI primarily as a new class of software whose growing power is being channeled toward safety, accuracy and controllability, noting that in the last 2 years AI capability has increased “maybe a 100x,” with much of the incremental compute redirected into reasoning, research, reflection and tool use that reduce hallucinations. 
>Economically, Huang expects AI to augment rather than universally displace labor, emphasizing the distinction between a profession’s purpose and its constituent tasks. He points to radiology, where deep learning has “swept the whole field” but radiologist headcount has increased because image reading was only a task in service of diagnosis. He anticipates new categories of work around robotics, AI operations and maintenance, and believes that in the next 5–10 years AI will materially reduce the technology divide because it is the easiest tool in history to use, will run locally on phones, and will provide “yesterday’s AI” at low cost to nearly all countries. He is more skeptical about clean universal basic income narratives, arguing that discussions of universal abundance and large‑scale public income support cannot both be true in their extreme forms. On compute economics, Huang explains Nvidia’s thesis that traditional Moore’s law improvements are no longer sufficient and that accelerated computing has become the dominant performance driver. Over the last decade Nvidia’s approach has delivered roughly 100,000x improvement in AI computing efficiency, which he compares to a car becoming 100,000x faster or 100,000x cheaper to operate. The DGX1 AI supercomputer he delivered to Elon Musk in 2016 delivered 1 petaflop of performance in a $300,000 rack‑scale system; 9 years later the DGX Spark he hands Musk at SpaceX offers a similar 1 petaflop in a device the size of a book costing about $4,000. 
He projects that AI will remain energy constrained and data centers will increasingly require dedicated generation, including “hundreds of megawatts” small nuclear reactors over the next 6–7 years, but argues that per‑task AI energy requirements for most users will eventually be “utterly minuscule.” >Huang’s recounting of Nvidia’s history highlights repeated near‑death experiences, highly concentrated strategic bets, and a culture organized around first‑principles reasoning, constant reassessment and an unusual tolerance for vulnerability from the CEO. Early Nvidia made three major technical choices for 3D graphics that were “all wrong,” nearly failed trying to build a console chip for Sega, and survived only because Sega’s CEO converted the last $5 m of a development contract into equity despite acknowledging it would “most likely be lost.” Later, Nvidia risked half its remaining cash on an emulator from a failing company and convinced TSMC’s founder to fabricate a new chip directly into volume production without the usual silicon test spin. These decisions led to the Riva 128 and subsequent GeForce lines that effectively collapsed million‑dollar image generators into a consumer graphics card and, ultimately, to the CUDA accelerated‑computing stack that underpins modern AI. >Huang describes himself as driven far more by fear of failure than by desire for success. He works 7 days a week, wakes around 4 a.m., reads “several thousand emails a day,” sleeps 6–7 hours, and says he has used the phrase “30 days from going out of business” for 33 years. He emphasizes that his leadership style deliberately presents vulnerability so that employees feel free to challenge his assumptions and to pivot the company’s strategy when needed. 
The episode closes with Huang’s immigrant background—sent alone with his brother from Thailand to a harsh Baptist boarding school in rural Kentucky, reunited with parents who arrived with almost no money, and ultimately becoming what he calls “the first generation of the American dream”—providing a narrative frame for Nvidia’s corporate trajectory and his current stance that the United States remains uniquely capable of creating such opportunities.
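Using only the figures quoted in the summary above (DGX-1: 1 petaflop for ~$300,000 in 2016; DGX Spark: 1 petaflop for ~$4,000 in 2025), the implied cost-per-petaflop improvement works out to:

```python
# Cost per petaflop, 2016 DGX-1 vs. 2025 DGX Spark, per the episode summary.
dgx1_cost, dgx1_pflops = 300_000, 1.0
spark_cost, spark_pflops = 4_000, 1.0

improvement = (dgx1_cost / dgx1_pflops) / (spark_cost / spark_pflops)
# 75x cheaper per petaflop in about nine years
```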

Mentions:#DGX
r/stocks

NVDA shipped their very first DGX 10 years ago - it just shows how far ahead NVDA is or how far behind AMD is.

Mentions:#NVDA#DGX#AMD
r/wallstreetbets

Seaport Global Securities on $NVDA (Sell, PT $140): "We see Nvidia facing growing competitive pressure." "To address this, the company has been leaning on a variety of sales mechanisms to adapt. These measures are not fully reflected in financials, but they are already material and look likely to grow significantly next year. We remain negative on Nvidia as signs of competition increase: Nvidia has $26 billion of cloud compute service agreements." "The company maintains that these will be used for R&D and its DGX offering. We see these as a form of rebate which, if recognized, would take 400bps off gross margins next year, or at least $0.30. Google has surprised with its ability to promote third party use of its internally designed TPUs." "TPUs are not for everyone, but can outperform Nvidia systems on many metrics. Growing commitments and investments to customers. The company spent $6 billion this year in private companies. It has commitments for another $17 billion (including $5 billion to Intel)."

Mentions:#NVDA#DGX
r/investing

**You’re wrong again**. NVIDIA is involved in building QPU. Start with NVIDIA DGX Quantum: https://www.quantum-machines.co/products/nvidia-dgx-quantum/ They also develop with 15+ QPU vendors, because you don’t optimize CUDA-Q and all other components for a device without being part of its development. Please learn how hardware–software co-design actually works. Your claim of “interacting with QPU just by building GPU” is exorbitantly ignorant. Maybe try not to use world’s worst search engine.

Mentions:#DGX
r/investingSee Comment

You can rent a DGX A100 through a public cloud provider and pay hourly which is way more cost effective. Is there an issue with this?

Mentions:#DGX
r/investingSee Comment

98% of posts in stock subs about this are circular and ignore public evidence, earnings call transcripts, and financial statements, while also having zero context for ML and semis. It's not hard; they should try to rent a DGX A100 node and see how it goes.

Mentions:#ML#DGX
r/investingSee Comment

Some quick web research on server GPUs that are five years old. This does not look positive for depreciation. Five years old in server graphics cards would be the Ampere generation; as of today we are on Blackwell, and next year we are on Rubin. A DGX A100 from early January 2020 would have 40GB of VRAM per GPU. Now jump to a recent server GPU: the B200, with 192GB of VRAM. That's nearly a 5x jump in capacity, and a competitor using the newer GPU will be far faster than you in compute on top of it. But maybe the old hardware is still worth something? Not much better news here. Release price of a DGX A100: $199,000. Second-hand cost today: around £26,000 in the UK, so roughly $37,000. Please don't take my word for it, research this yourselves.

Mentions:#DGX
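The depreciation arithmetic in the comment above can be sketched quickly. All figures (list price, resale price, VRAM sizes) are the commenter's claims, not verified:

```python
# Depreciation math for a DGX A100, using the figures quoted in the
# comment above (commenter's claims, not verified).
launch_price = 199_000   # USD, DGX A100 at release (early 2020)
resale_price = 37_000    # USD, approximate second-hand price today

loss = launch_price - resale_price
depreciation_pct = loss / launch_price * 100
print(f"Value lost over ~5 years: ${loss:,} ({depreciation_pct:.0f}%)")

# Memory jump the comment compares: A100 40GB vs B200 192GB per GPU.
a100_vram, b200_vram = 40, 192
print(f"VRAM ratio B200/A100: {b200_vram / a100_vram:.1f}x")
```

On these numbers the box sheds roughly four fifths of its value in five years, which is the commenter's point about depreciation schedules.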
r/wallstreetbetsSee Comment

NVIDIA isn’t winning just because their GPUs are fast — Google’s TPUs are actually monsters at the specific math they’re built for. The problem is that TPUs are basically a super-fast screwdriver, while NVIDIA GPUs are a Swiss Army knife with a power drill strapped to it. Hyperscalers want hardware that can run every model, not just the ones TPUs love. NVIDIA has CUDA, cuDNN, TensorRT, thousands of libraries, a massive dev community, and people already trained to use it. TPUs? Great at tensor ops, but way more niche, way harder to integrate, and you can’t even buy the good ones because Google keeps the top versions for itself. On top of that, NVIDIA’s whole ecosystem (NVLink, NVSwitch, DGX racks, etc.) scales ridiculously well across huge datacenters. TPUs can scale too, but only inside Google’s walls. Nobody else wants to depend on Google — a direct competitor — for their core AI hardware. So even though TPUs can be faster, NVIDIA wins because they’re flexible, mature, everywhere, and come with the software glue that actually makes massive AI training work. In short: TPUs are specialized rockets; NVIDIA is the entire airport, fuel system, pilot training program, and air-traffic control.

Mentions:#DGX
r/stocksSee Comment

I'm not sure the figures quoted in that story are apples to apples. The story says a B200 costs $500k, but rents for under $3.20/hour. According to Gemini (no, I haven't done deeper digging), the $500k price tag is for a DGX B200 server, which includes 8 individual chips. That server then rents out for $45-$60/hour. It's still a steep differential, but the ROI decimal needs to move over one space.

Mentions:#DGX
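The correction in the comment above is easy to check. A sketch using its figures (the $500k server price and $45-60/hour rental rate are the commenter's/Gemini's claims, not verified):

```python
# Rough ROI check on the numbers in the comment above: a ~$500k
# DGX B200 server with 8 GPUs renting for $45-60/hour for the whole box.
# All figures are the commenter's claims, not verified.
server_price = 500_000
gpus_per_server = 8
rate_low, rate_high = 45, 60  # USD per server-hour

per_gpu_low = rate_low / gpus_per_server
per_gpu_high = rate_high / gpus_per_server
print(f"Per-GPU hourly rate: ${per_gpu_low:.2f}-${per_gpu_high:.2f}")

# Rental hours needed to recoup the hardware at full utilization,
# ignoring power, cooling, and financing.
hours_to_payback = server_price / rate_low
print(f"Payback at ${rate_low}/hr: {hours_to_payback:,.0f} hours "
      f"(~{hours_to_payback / 8760:.1f} years)")
```

Per GPU the rental rate lands in the $5.60-$7.50/hour range rather than $3.20, which is the "move the decimal" correction the comment is making.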
r/wallstreetbetsSee Comment

Buying a NVDA DGX H100 SuperPOD: $18-20m, plus you've gotta pay for power, cooling, the site, and other upkeep Renting equivalent cloud compute from AWS, Azure: $10-12m per year Renting equivalent compute from a Google TPU v5p Pod: $8.5m per year If you can find cheaper compute, it's probably booked out til 2030. Google will win the AI war.

Mentions:#NVDA#DGX
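The buy-versus-rent comparison above implies a break-even horizon. A sketch using midpoints of the commenter's ranges (all figures are the commenter's estimates, not verified; power, cooling, and site opex are ignored):

```python
# Break-even sketch for the buy-vs-rent comparison in the comment above.
# All figures are the commenter's estimates, not verified.
buy_cost = 19_000_000           # USD, midpoint of $18-20M for a DGX H100 SuperPOD
rent_aws_per_year = 11_000_000  # USD/yr, midpoint of $10-12M cloud equivalent
rent_tpu_per_year = 8_500_000   # USD/yr, Google TPU v5p pod equivalent

print(f"Buying pays for itself vs AWS rental in "
      f"{buy_cost / rent_aws_per_year:.1f} years")
print(f"...and vs the TPU pod in {buy_cost / rent_tpu_per_year:.1f} years")
```

On these numbers buying beats renting within about two years, before the opex the comment mentions; the TPU pod's lower annual rate is what drives the commenter's conclusion.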
r/wallstreetbetsSee Comment

AMD may also play a role in this. Their new repurposed GPU-on-die accelerators may be sufficient for inference at some scale. Wildly cheaper, but not suited to training. It will bifurcate the AI market into specialized hardware products for training and for inference at scale, where for now there is only NVDA. The major limitation for widespread training competition that I see, after looking into all of this, is that NVDA is still the king of memory bandwidth. Those sweet, sweet 74% margins are going to go the way of the dodo. It was always going to go this way, and there is a lot between the hardware and a usable ecosystem for AI, such as good drivers that are very stable. Further, xAI's and Tesla's custom silicon is probably going to end up being a blow to demand for NVDA as well. And, totally anecdotally, I went looking for a high-end GPU recently and not only did I have a choice, I also paid MSRP. This hasn't been the case for *years* between crypto and AI. I know they said something about the GPU market being robust and Blackwell being supply constrained. This may be the case for the DC hardware, but it isn't for the consumer and pro cards. (Which then makes me wonder what part of the DC hardware is constrained.) The DGX also shipped relatively on time and I'd say was met with lackluster demand. Don't get me wrong, it looks cool, but it's also pretty slow for what it costs.

r/investingSee Comment

Meta is rumored to be reporting failing GPUs in their DGX200 units with six GPUs that cost a half million a pop and require specialized power circuits to operate, and therefore have little to no aftermarket resale value. The exact rate of failure is unknown but is reportedly about 10% annually for the GPUs, mostly from overheating. Those chips begin losing their value as soon as they leave the fab. Also, in all of their products from gaming GPUs to crypto to "AI", Nvidia uses a most likely illegal technique called "signed drivers", which means the customer never really owns the equipment; the hardware is merely a token of a software license that Nvidia retains control of through the drivers, which are licensed, not sold. This massively inflates depreciation in the same way that people don't want old Tesla autos because they don't trust the parent company to play fair.

Mentions:#DGX
r/wallstreetbetsSee Comment

NVDA will always have a spot somewhere, their stuff is still the king of raw model training power. Once companies start getting the models trained though, they're gonna put the compute focus on inference, and that's where TPUs shine. Mass AI rollout is going to eventually boil down to cost efficiency, and the jack-of-all-trades DGX systems are *hella expensive*. TPUs will do the job for cheaper, and with less power draw, which is what you need for on-device AI for stuff like self-driving AI cars and robots.

Mentions:#NVDA#DGX
r/stocksSee Comment

You use the shiny new stuff for training models, and the slightly used shit for inference. Or image processing. Or whatever else where running on some sort of GPU is preferable to CPU-only. In our case it’s ML inference and image processing. Some of our researchers are working on H100s/H200s, but we’re still getting great mileage out of our older A100s. Hell, one of our guys is still running a DGX with fucking VOLTAS. Works well enough for him.

Mentions:#ML#DGX
r/stocksSee Comment

Curious what AI you’re working with that’s messing up like that? Because the ones running on DGX clusters and Azure AI aren’t exactly known for wrong answers, especially the wrong date lol; the early versions didn’t even do this. Those systems are literally optimizing logistics, defense, and finance in real time. I doubt it’s the same setup?

Mentions:#DGX
r/wallstreetbetsSee Comment

If you want something thats relatively safe and good fundamentals, then DGX. Or you could gamble on random biotech companies

Mentions:#DGX
r/wallstreetbetsSee Comment

After Elon's shareholder meeting, I expect CEO bullshitting to up their game. Jensen: "We are penning a deal with Blockbuster in which we will invest $1B in DVD rewinders and they will buy $5B in NVIDIA DGX platforms" Zuck: We anticipate 40% of Americans will trade their prescription glasses for Meta Quest 4 headsets by 2028" Karp: "Our short sellers will fail. You know what, fuck them. Here's their home addresses" Altman: "AI is going to be everywhere. It already is. Look to your left. See your wife? She's an AI"

Mentions:#DGX
r/wallstreetbetsSee Comment

For context: A 'Kkanbu Alliance' of AI between leading companies from Korea and the United States was formed at a Korean chicken restaurant. Jensen Huang, CEO of NVIDIA, Lee Jae-yong, Chairman of Samsung Electronics, and Chung Eui-sun, Chairman of Hyundai Motor Group, met on the 30th at a chicken restaurant in Gangnam-gu, Seoul, for a three-way 'chimaek' (chicken + beer) gathering. The meeting lasted for about three hours, including the NVIDIA event held nearby on the same day. ● Unprecedented Meeting of Corporate Leaders The meeting was unprecedented. CEO Huang remarked, "Today is the best day of my life." The leaders of global companies, including NVIDIA, Samsung Electronics, and Hyundai-Kia Motors, with a combined market capitalization of approximately KRW 8,300 trillion, visited the 'Kkanbu Chicken' store near Samseong Station in Gangnam-gu, Seoul, and enjoyed a public chimaek in front of hundreds of citizens. CEO Huang entered the chicken restaurant with Chairman Chung around 7:20 PM after arriving in Korea. He wore his signature black leather jacket and a black T-shirt. Chairman Lee arrived about five minutes later and embraced CEO Huang. Chairman Lee and Chairman Chung also wore casual white T-shirts. This gathering was arranged because CEO Huang wanted to experience Korea's chimaek culture. CEO Huang ordered fried chicken, spicy sea snails, and cheese sticks to share with Chairman Lee and Chairman Chung. They drank beer and also tried 'soju tower,' a device for mixing soju and beer, consuming several glasses. The three exchanged drinks in a 'love shot' style. When CEO Huang exclaimed, "Dinner is Free," Chairman Chung replied, "I'll cover the second round." However, it is reported that Chairman Lee actually paid the bill. The total meal cost at the restaurant was approximately KRW 2.5 million. 
● Kkanbu Alliance Continued into the Night Jensen Huang, CEO of NVIDIA, takes a commemorative photo with Lee Jae-yong, Chairman of Samsung Electronics, and Chung Eui-sun, Chairman of Hyundai Motor Group, after a 'chimaek' gathering at Kkanbu Chicken in Gangnam-gu, Seoul, on the 30th. The informal demeanor of the global corporate leaders was broadcast live during the meeting. CEO Huang left his seat to distribute kimbap, banana milk, and chicken to citizens. During this time, Chairman Lee remarked, "It's been about ten years since I last had chimaek," to which Chairman Chung replied, "I eat it often." CEO Huang gifted Chairman Lee and Chairman Chung a bottle of Japanese Hakushu 25-year whiskey worth approximately KRW 7 million and NVIDIA's 'DGX Spark' AI supercomputer. The gifts were signed with the message, "TO OUR PARTNERSHIP AND FUTURE OF THE WORLD!" Lee Jae-yong, Chairman of Samsung Electronics, distributes chicken to citizens during a 'chimaek' gathering with Jensen Huang, CEO of NVIDIA, at Kkanbu Chicken in Gangnam-gu, Seoul, on the 30th. The choice of venue, 'Kkanbu,' which means close friend, was interpreted as a nod to the famous line "We are kkanbu" from the Netflix drama 'Squid Game.' CEO Huang stated, "I enjoy chimaek with friends, so Kkanbu is the perfect place." He repeatedly expressed, "So good. So Happy," at the chicken restaurant. Chairman Lee, leaving the restaurant, commented, "Happiness is nothing special. It's about enjoying good food and drinks with good people." The late-night chimaek gathering, lasting about an hour and twenty minutes until 8:40 PM, continued at the 'GeForce Gamer Festival' hosted by NVIDIA at COEX in Gangnam-gu, Seoul.

Mentions:#DGX
r/wallstreetbetsSee Comment

I just bought a DGX from NVDA… nah, if they were that sold out, why even bother sending me a desktop…

Mentions:#DGX#NVDA
r/wallstreetbetsSee Comment

Bought the dip on MSFT and continuing to hold DGX and AZO

Mentions:#MSFT#DGX#AZO
r/wallstreetbetsSee Comment

I need DGX and AZO to start moving up

Mentions:#DGX#AZO
r/investingSee Comment

50% of Apple's sales are iPhones and they're a $4T company. Nvidia is doing something exponentially greater for humanity. I've been on this roller coaster for 7 years, I've heard every bear argument since then. The reason I bought Nvidia was bc of the DGX, and I thought..wow if they can solve autonomous driving then they will be worth a lot of money. I didn't think they'd have gotten here at that time. There's a lot more to come.

Mentions:#DGX
r/wallstreetbetsSee Comment

I'm gonna need DGX and AZO to pump

Mentions:#DGX#AZO
r/wallstreetbetsSee Comment

DGX, AZO, no bias

Mentions:#DGX#AZO
r/investingSee Comment

I work in HPC. Why would I spend thousands of my own money on too much hardware, when we have racks of DGX hardware? I wasn't born with money. I earned it, by working hard, saving hard, doing without. And by *not* spending it when stocks are stupidly overvalued. Buffett isn't either.

Mentions:#DGX
r/wallstreetbetsSee Comment

Autozone (AZO) and Quest Diagnostics (DGX), hoping to buy low after they took a beating last week

Mentions:#AZO#DGX
r/investingSee Comment

Medical diagnostics: VCYT, DGX. Legal: DISCO. Financial services, if you believe mortgage processing is going to face disruption. Customer service: NICE and Verint, great potential but giant companies. Professional services: Workday, for example.

Mentions:#VCYT#DGX

Digipower X DGX- The next runner

Mentions:#DGX
r/wallstreetbetsSee Comment

DGX- Digipower

Mentions:#DGX
r/wallstreetbetsSee Comment

DGX- Digipower

Mentions:#DGX
r/wallstreetbetsSee Comment

DGX

Mentions:#DGX
r/investingSee Comment

By happenstance, I currently work in HPC and have racks of DGX hardware. AI is in a big fucking bubble. Go "Big Badda Boom" soon.

Mentions:#DGX
r/stocksSee Comment

AI to be monetized through many promising AI projects. Many specialized generative AI models can be sold to many companies. CorrDiff AI extreme weather modeling. GE partnership, specifically their sonomet AI ultrasound and CT projects. RadimageGan generative AI medical imaging, along with Deeptek, for AI-augmented radiology projects. Clara Parabricks genome sequencing AI software. Nvidia DRIVE solution for autonomous vehicles. Nvidia Omniverse, partnered with Siemens, for AI industrial production applications. Also some US government contracts/projects using DGX SuperPOD and working with DARPA. All of these revolutionize the spaces they've been applied to. The amount of potential revenue derived from these projects could be insane going forward, and they're all just getting started. The scope is broad, and AI is only limited by power consumption at this point. There's an AI arms race going on across the globe and there's no reason to stop unless constricted by power restraints and production timelines.

Mentions:#GE#DGX
r/wallstreetbetsSee Comment

> NVIDIA has been throwing billions at AI infrastructure companies but they don't have any optical interconnect plays in their portfolio. Nvidia calls them NVLink you regard. For inter-node (server-to-server) GPU communication, NVIDIA also integrates InfiniBand and NVLink Switch Systems (used in Grace Hopper and DGX SuperPODs). These extend NVLink-like performance beyond a single chassis.

Mentions:#DGX
r/pennystocksSee Comment

Oh and one last thing. This was from RXRX Article itself and they used BioNeMo and guess what QSI proprietary platform is built on in collaboration with Nvidia? well, not a long shot here but just piecing information to all the similarities. QSI: "We are thrilled to collaborate with NVIDIA to make single-molecule proteomics more accessible to researchers," said John Vieceli, Ph.D., Chief Product Officer of Quantum-Si. We have been leveraging AI protein structure prediction tools with [NVIDIA BioNeMo](https://cts.businesswire.com/ct/CT?id=smartlink&url=https%3A%2F%2Fnam04.safelinks.protection.outlook.com%2F%3Furl%3Dhttps%253A%252F%252Fwww.nvidia.com%252Fen-us%252Fclara%252Fbiopharma%252F%26data%3D05%257C02%257Ckatkinson%2540quantum-si.com%257C0f550c9f118049de621408dd05c4bfd3%257C48afde5b18304e18a221f6417a5a1bde%257C0%257C0%257C638673064827649542%257CUnknown%257CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%253D%253D%257C0%257C%257C%257C%26sdata%3D1FrNzFk8l3KEsejU8%252BV87mg2fw79CNi9nCAvD6zCP%252Fw%253D%26reserved%3D0&esheet=54155579&newsitemid=20241120405534&lan=en-US&anchor=NVIDIA+BioNeMo&index=2&md5=c77614581e3d4ab90f107177f38939c0), both in the cloud and on-premises to design new and improved biomolecules. Now, we are excited to apply NVIDIA technology for downstream data processing and interpretation applications for Proteus." 
[Quantum-Si to Develop Acceleration Platform and Advance Core Technologies in Collaboration with NVIDIA](https://finance.yahoo.com/news/quantum-si-develop-acceleration-platform-120000059.html) RXRX: "Recursion plans to utilize its vast proprietary biological and chemical dataset, which exceeds 23 petabytes and 3 trillion searchable gene and compound relationships, to accelerate the training of foundation models on [NVIDIA DGX™ Cloud](https://www.globenewswire.com/Tracker?data=HbfHhJGLFLovux_4GAinPDR2wH9w0m3CGf1Fb9Ct-PV00DWmzZS9HUvNao6gCV8tcLmGKs_X3yLyrEcPn3l86GtKVxsoPehDzmGbHLWQcCbrh2f1T6ms3yrlyJPPSgqR) for possible commercial license/release on BioNeMo, NVIDIA’s cloud service for generative AI in drug discovery. NVIDIA will also help optimize and scale Recursion foundation models leveraging the NVIDIA AI stack and NVIDIA’s full-stack computing expertise. [BioNeMo](https://www.globenewswire.com/Tracker?data=8Un3Nqj2782TnmY-gziLiq0rP2rheA9QTSi5YT8FH4GaI8kTUFZwcrGNhHakA5U9GZF9uKpnaNotEZ4CX6BVB-lb0W86a_aBQM48XFxUQWk=) was announced earlier this year as a cloud service for generative AI in drug discovery, offering tools to quickly customize and deploy domain-specific, state-of-the-art biomolecular models at-scale through cloud APIs. Recursion anticipates using this software to support its internal pipeline as well as its current and future partners." [Recursion Pharmaceuticals, Inc. - Recursion Announces Collaboration and $50 Million Investment from NVIDIA to Accelerate Groundbreaking Foundation Models in AI-Enabled Drug Discovery](https://ir.recursion.com/news-releases/news-release-details/recursion-announces-collaboration-and-50-million-investment)

r/wallstreetbetsSee Comment

$rxrx Recursion Pharmaceuticals those calls are printing.. Only matter of time for this to explode Recursion Pharmaceuticals and NVIDIA have a multi-faceted partnership focused on AI-driven drug discovery. NVIDIA invested $50 million in Recursion and provided access to its AI expertise and supercomputing hardware, which powers Recursion's own supercomputer, [BioHive-1](https://www.google.com/search?sca_esv=913ae395eea29b77&rlz=1C5ZNUK_enUS1142US1145&cs=1&sxsrf=AE3TifPX-RSqBlMT-4P1XuSxZMeECRCPJg%3A1759330712825&q=BioHive-1&sa=X&ved=2ahUKEwiNtJXRoYOQAxWQIjQIHeSJIKUQxccNegQIBRAB&mstk=AUtExfAIItCYqVo_vzCNva9h1hGDR5AvOxgytsx4IosaxqdUh4obFGJIfSiPgUk18SnH874hcXWu9oijPL50OWY4ypiEzwgKdjjmuf7rKQESOy3MAfnsMiuTMqSLllgjcZev62RfDDb9WxHGMaF9be8wfIg7uzZVQBXn4eQN_AHrncdOI4uu932FLGkoHsMe2sM0gFq3V02UYWHYwO9ZVIZhSsFLGAlonJJaYrjGrV-Y953jDUEQU54AccsbIRWUNjO2Qo6rYsWLaB3C79SbhrsFy6Uf&csui=3). The collaboration aims to accelerate the development of AI-powered foundation models for drug discovery by leveraging Recursion's vast biological and chemical datasets and NVIDIA's leading AI platform, including its [DGX systems](https://www.google.com/search?sca_esv=913ae395eea29b77&rlz=1C5ZNUK_enUS1142US1145&cs=1&sxsrf=AE3TifPX-RSqBlMT-4P1XuSxZMeECRCPJg%3A1759330712825&q=DGX+systems&sa=X&ved=2ahUKEwiNtJXRoYOQAxWQIjQIHeSJIKUQxccNegQIBxAB&mstk=AUtExfAIItCYqVo_vzCNva9h1hGDR5AvOxgytsx4IosaxqdUh4obFGJIfSiPgUk18SnH874hcXWu9oijPL50OWY4ypiEzwgKdjjmuf7rKQESOy3MAfnsMiuTMqSLllgjcZev62RfDDb9WxHGMaF9be8wfIg7uzZVQBXn4eQN_AHrncdOI4uu932FLGkoHsMe2sM0gFq3V02UYWHYwO9ZVIZhSsFLGAlonJJaYrjGrV-Y953jDUEQU54AccsbIRWUNjO2Qo6rYsWLaB3C79SbhrsFy6Uf&csui=3). This partnership will allow Recursion to create new medicines and also enables them to license AI tools to other drug hunters.  Key Aspects of the Partnership

Mentions:#DGX
r/wallstreetbetsSee Comment

Sorry bears, but if NVDA was just financial-engineering their success, AMD and INTC would be doing it too. They even gave free compute to OpenAI in 2016! A whole DGX! And obviously all that achieved was cooking the books.... Lots of fools revealing themselves, akin to Deepseek day.

r/wallstreetbetsSee Comment

Jensen wanted the entire world's AI workload on NVDA's stack-CUDA, H100, DGX, TensorRT. Gyna said no. lol.

Mentions:#NVDA#DGX
r/stocksSee Comment

Much of AMD's valuation hinges on the hopes they will eventually compete with NVDA in the lucrative AI GPU market. But thus far it's been nothing but hopes. AMD stock price is same as it was in late 2021. Meanwhile NVDA and AVGO left 2021 in the dust. With AMD, you'd have to select windows of time to make its performance look good, but even then it's not parabolic like other AI winners. AVGO custom chips won't replace NVDA chips; they are just going to handle lower end workloads that don't require top performance. Problem for AMD is, it's not so much what they have done wrong, it's more so what NVDA has done right. NVDA built an ecosystem which has been the AI platform for a decade - when they shipped the first DGX cluster to OpenAI 10 years ago. That's right 10 years ago. It's tough to catchup with first mover; you are copying them, while they are already working with customers to make next generation improvements and always at least a step ahead. > invested in AMD because I always considered it a cost-leading alternative to NVDA And you have what material data to backup these claims?

r/stocksSee Comment

It's amusing how the masses think something like AI could just spring up out of nowhere overnight. DeepMind was formed 15 years ago. They made an AI chess engine that could easily outclass the best humans, not even a chance. OpenAI formed 10 years ago and got the first NVDA DGX cluster. It's just a matter of whether you had any relation to the field or any interest in it, but it wasn't developed in any sort of secrecy. In the early-to-mid 2010s, ML had started gaining traction as a potential degree option. I started building my NVDA position back in the 2017/18 timeframe after reading an article where several Silicon Valley seed investors were interviewed and asked which publicly traded company they would invest in; the top choice by far was NVDA because of the future of AI.

Mentions:#NVDA#DGX#ML
r/StockMarketSee Comment

Actually, GPUs were already being used in machine learning a decade ago. Nvidia went over this during GTC the following year in 2016 and released DGX.

Mentions:#DGX
r/stocksSee Comment

I remember when Nvidia released the Volta based DGX computer for something like $150k and thinking to myself wow that’s cheap, and Reddit was all LOL can it play crysis. Back then if you want HPC for scientific compute, or do AI research, you need a decent size team to babysit the hardware and software. DGX and the entire Nvidia stack make it much more accessible for smaller companies. They’re running the same playbook for robotics. By the time the industry realizes they can’t build robots without Nvidia, Reddit will once again accuse Nvidia of being lucky while kicking themselves for not buying NVDA at $200.

Mentions:#DGX#NVDA
r/wallstreetbetsSee Comment

German tech firm sues Nvidia for patent infringement, seeks to block Nvidia across 18 European countries — ParTec lawsuit alleges DGX AI supercomputer design theft.

Mentions:#DGX
r/investingSee Comment

When you buy or rent Nvidia DGX you have the networking already. Nvidia can scale thousands of DGX clusters thanks to NVLink for inter-GPU connectivity and Nvidia Photonics / InfiniBand / NCCL for intra-cluster connectivity. The Nvidia NVL rack is equipped with the world's most advanced memory transfer technology, as well as the highest-performance AI computing power. Google's Ironwood is not even available in Google Cloud. And the previous TPU can't scale in the same way as Nvidia NVL.

Mentions:#DGX
r/wallstreetbetsSee Comment

someone leaked DGX report early. jumping in afterhours

Mentions:#DGX
r/wallstreetbetsSee Comment

KO, GPC, DGX puts ong ong 💯 wallahi

Mentions:#KO#GPC#DGX
r/stocksSee Comment

Last year, Nvidia made an unusual proposal to Amazon Web Services and other cloud providers that have long been the biggest buyers of Nvidia’s specialized artificial intelligence server chips. Nvidia wanted to lease Nvidia-powered servers in the cloud providers’ data centers so it could turn around and rent the same servers to AI software developers. Those developers included some of the biggest cloud customers in the world. As the discussions progressed, Nvidia’s leverage increased. Demand for Nvidia-powered servers exploded among AI software developers following the launch of OpenAI’s ChatGPT in November, and the cloud providers soon couldn’t keep up. In that delicate moment, Nvidia saw a way to essentially compete with the cloud providers for customers. Nvidia’s trump card? It was about to release a much anticipated new AI chip, the H100, which the traditional cloud providers needed. Microsoft, Google and Oracle agreed to Nvidia’s proposal but AWS did not, according to a person with direct knowledge of the decision. ... For traditional cloud providers, the rise of DGX Cloud risks turning them into intermediaries. For instance, ServiceNow uses DGX Cloud to develop AI that summarizes IT requests and powers customer service chatbots. John Sigler, a senior vice president at the IT software giant, said the Nvidia service makes it easier for ServiceNow to run its new AI software in its own data centers as well as across multiple cloud providers simultaneously because it can use a “single software platform” from Nvidia to manage the process. Not sure if it works, but it's a great gambit to commoditize the CSPs. Coreweave is another example. Nvidia will create its own cloud customers for its products to keep the CSPs in line. Source -

Mentions:#DGX
r/stocksSee Comment

NVDA sells the entire DGX cluster, which is used for both modeling and inferencing. Lower-powered GPUs can be used for inferencing small-scale solutions - I mean, I can run AI models on my desktop, and for me it works fine - but it would take hours/days/weeks if I had to share that load with others. It doesn't really work so well when a large commercial platform is using it at scale.

Mentions:#NVDA#DGX
r/investingSee Comment

Yeah, I'm not talking about the current iteration or even generative LLMs in general, but other AI applications, and this is only what we build upon. Artificial general intelligence is coming within probably the next 5 years. Then soon after that comes artificial superintelligence, where not only the top people in their fields are surpassed but collectively the entire human population's processing power, including the smartest individuals on Earth, is surpassed by one single AI. At that point we don't know what happens. Things like sonomet AI ultrasound and CT, RadimageGan generative AI medical radiological imaging, Clara Parabricks genome sequencing AI software, and Evo 2 are much more interesting and make the likes of ChatGPT look boring. Then the US government projects using DGX SuperPOD working with DARPA, and who knows what they're working on.

Mentions:#DGX
r/wallstreetbetsSee Comment

Buying myself a DGX spark to celebrate today

Mentions:#DGX
r/wallstreetbetsSee Comment

Deepseek 671b 4bit quant with a CPU and RAM runs at about 3.5 to 4 tokens per second, whereas the exact same Deepseek 671b 4bit quant model on a GPU server like the Nvidia DGX B200 runs at about 4,166 tokens per second. Tldr: this is an insanely regarded take

Mentions:#DGX
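The throughput gap the comment above is pointing at is roughly three orders of magnitude. A quick sketch (both throughput figures are the commenter's claims, not verified):

```python
# Speedup implied by the token-throughput numbers in the comment above.
# Both figures are the commenter's claims, not verified.
cpu_tps = 3.75    # tokens/sec, midpoint of 3.5-4 on CPU + RAM
gpu_tps = 4_166   # tokens/sec on a DGX B200-class GPU server

print(f"GPU/CPU throughput ratio: ~{gpu_tps / cpu_tps:,.0f}x")
```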

Quest Diagnostics (DGX) reports earnings Tuesday morning. Decent P/E and dividends. And it is one of the stocks likely to be benefiting from UHC's increase in costs.

Mentions:#DGX
r/wallstreetbetsSee Comment

DGX is probably one of those responsible for UHC's increased costs.

Mentions:#DGX
r/wallstreetbetsSee Comment

They would want to buy the Nvidia DGX stuff.

Mentions:#DGX
r/wallstreetbetsSee Comment

We have a “DGX” In my area, in under a sky scraper. It’s their urban version and honestly it’s quite handy and cheaper than Target for things like toilet paper and other household staples

Mentions:#DGX
r/wallstreetbetsSee Comment

No, I don't. Here are some of the more promising AI projects I'm way more interested in: CorrDiff AI extreme weather modeling. GE partnership, specifically their sonomet AI ultrasound and CT projects. RadimageGan generative AI medical imaging, along with Deeptek, for AI-augmented radiology projects. Clara Parabricks genome sequencing AI software. Nvidia DRIVE solution for autonomous vehicles. Nvidia Omniverse, partnered with Siemens, for AI industrial production applications. Also some US government contracts/projects using DGX SuperPOD and working with DARPA. All of these revolutionize the spaces they've been applied to. The amount of potential revenue derived from these projects could be insane going forward, and they're all just getting started. Said this 👇 back in 2017 and it still applies; I said it before the birth of ChatGPT, before AI was a trend. http://stocktwits.com/mikel3113/message/71190466 This is just my opinion and you don't have to agree with it, but nonetheless this is still how I think of it. Stick to your theory if you wish and we can agree to disagree.

Mentions:#GE#DGX
r/wallstreetbetsSee Comment

* Announced that NVIDIA will serve as a key technology partner for the $500 billion Stargate Project. * Revealed that cloud service providers AWS, CoreWeave, Google Cloud Platform (GCP), Microsoft Azure and Oracle Cloud Infrastructure (OCI) are bringing NVIDIA® GB200 systems to cloud regions around the world to meet surging customer demand for AI. * Partnered with AWS to make the NVIDIA DGX™ Cloud AI computing platform and NVIDIA NIM™ microservices available through AWS Marketplace. * Revealed that Cisco will integrate NVIDIA Spectrum-X™ into its networking portfolio to help enterprises build AI infrastructure. * Revealed that more than 75% of the systems on the TOP500 list of the world’s most powerful supercomputers are powered by NVIDIA technologies. * Announced a collaboration with Verizon to integrate NVIDIA AI Enterprise, NIM and accelerated computing with Verizon’s private 5G network to power a range of edge enterprise AI applications and services. * Unveiled partnerships with industry leaders including IQVIA, Illumina, Mayo Clinic and Arc Institute to advance genomics, drug discovery and healthcare. * Launched NVIDIA AI Blueprints and Llama Nemotron model families for building AI agents and released NVIDIA NIM microservices to safeguard applications for agentic AI. * Announced the opening of NVIDIA’s first R&D center in Vietnam. * Revealed that Siemens Healthineers has adopted MONAI Deploy for medical imaging AI.

Mentions:#DGX#NIM#TOP
r/wallstreetbetsSee Comment

I wonder if GB10 Project DGX will be allowed to be sold in China?

Mentions:#DGX
r/stocksSee Comment

HIMS continues on its Quest. Announcement today purchase of Trybe Labs at home lab testing is indeed a big deal. The testing co I'm most familiar with, Quest Diagnostics (DGX $18B cap) made approx $10B+ in testing last year, and that market is expected to keep rising for the foreseeable future. I'd (wild) guess that HIMS, with their aggressive marketing to its rapidly growing user base could potentially add as much as $1B rev from adding this service in the 1st year alone. Up 25% now.

Mentions:#HIMS#DGX
r/stocksSee Comment

Better jump on that bandwagon quickly. GRAL, DGX, JNJ, CVS, JAZZ are all great picks in the healthcare and consumer staples sector. I just bought GOLD (Barrick), based on how it’s near the bottom, posting increasing profits, and moving forward with a stock buyback program. I disagree with your assessment that gold is overbought. I think it is the opposite. If you have an open mind to other commodities, energy is a great choice, including uranium. I have also bought LIT, and would be okay with oil.

r/investingSee Comment

[https://arxiv.org/abs/2408.14158](https://arxiv.org/abs/2408.14158) > we deployed the Fire-Flyer 2 with 10,000 PCIe A100 GPUs, achieved performance approximating the DGX-A100 while reducing costs by half and energy consumption by 40%. We specifically engineered HFReduce to accelerate allreduce communication and implemented numerous measures to keep our Computation-Storage Integrated Network congestion-free. Through our software stack, including HaiScale, 3FS, and HAI-Platform, we achieved substantial scalability by overlapping computation and communication. Our system-oriented experience from DL training provides valuable insights to drive future advancements in AI-HPC.
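The "overlapping computation and communication" idea from the abstract above can be sketched with a toy example. This is illustrative only: the real system uses HFReduce over a PCIe fabric, not Python threads, and the stub functions and timings here are made up for demonstration.

```python
import threading
import time

def allreduce_stub(result):
    # Stand-in for a gradient allreduce over the network (e.g. HFReduce)
    time.sleep(0.2)
    result.append("gradients reduced")

def compute_stub():
    # Stand-in for the next layer's local computation on the GPU
    time.sleep(0.2)
    return "layer computed"

result = []
comm = threading.Thread(target=allreduce_stub, args=(result,))
start = time.perf_counter()
comm.start()              # launch communication in the background
out = compute_stub()      # do local work while the allreduce is in flight
comm.join()
elapsed = time.perf_counter() - start

# Overlapped, wall time is roughly max(0.2, 0.2) seconds, not the 0.4s sum
print(out, "|", result[0], "| elapsed ~", round(elapsed, 1), "s")
```

The payoff is exactly what the paper claims: when communication hides behind computation, the cluster's effective step time approaches the larger of the two costs rather than their sum.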

Mentions:#DGX
r/pennystocksSee Comment

MYNZ regaining compliance will help build more trust now. Interesting latest news: a clinical study, plus partnerships with DGX and TMO, both multibillion-dollar companies.

Mentions:#MYNZ#DGX#TMO
r/wallstreetbetsSee Comment

$6 million on model training does not equal a $6 million hardware capex. As far as I know they trained their model on 50,000 H100 Nvidia GPUs. That's over 6,000 fully packed DGX systems, which cost more than $50k each. You're looking at a minimum of half a billion dollars in hardware capex just to get started at those kinds of volumes. Inferencing may well be cheaper, but all that means is you can have a much larger number of players in the market renting the services. Any loss of revenue per user will be more than made up for by the orders-of-magnitude higher number of users who will access the technology now. Demand for Nvidia's chips will only go up, since now there is evidence that even smaller players can enter the game, widening Nvidia's customer base. Also, any tech investor who doesn't know that this is a rug-pull sector should not invest in tech, or should stick to a mutual-fund-type product. If you parade around naked on horseback, of course someone will come out and say that you have no clothes on.
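The back-of-the-envelope capex math in the comment above can be checked in a few lines. The GPU count is as reported; the per-system price is a hedged placeholder chosen to land on the comment's half-billion floor (actual DGX H100 list prices run substantially higher).

```python
# Rough capex sanity check from the figures above (illustrative assumptions)
gpus = 50_000                  # reported H100 count
gpus_per_dgx = 8               # a DGX H100 system holds 8 GPUs
systems = gpus // gpus_per_dgx # -> 6,250 systems ("over 6,000")
price_per_system = 80_000      # hypothetical; real pricing is far higher
capex = systems * price_per_system  # -> $500,000,000

print(f"{systems:,} systems, ${capex:,} minimum hardware capex")
```

Even with this deliberately low per-system price, the hardware bill alone reaches half a billion dollars, which is the comment's point: the $6M training figure excludes the capital cost of the cluster.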

Mentions:#DGX
r/pennystocksSee Comment

MYNZ is definitely one of my top long-term penny stock picks. Their innovative approach to cancer diagnostics, especially with ColoAlert, and partnerships with major players like Quest Diagnostics (DGX) and Thermo Fisher Scientific (TMO) give them a solid foundation for growth. The potential FDA approval is a game-changer, and I'm confident in holding this one for the long haul

Mentions:#MYNZ#DGX#TMO
r/ShortsqueezeSee Comment

Have you noticed MYNZ's recent performance? They were at a 52-week low, but they're doing exceptionally well now following their partnerships with DGX and TMO.

Mentions:#MYNZ#DGX#TMO
r/smallstreetbetsSee Comment

Hold those shares until we get that US approval in the first quarter of 2025. Sheesh, this is really making sense now. Thanks for that news from DGX and TMO yesterday; MYNZ might hit its ATH today!

Mentions:#DGX#TMO#MYNZ
r/wallstreetbetsSee Comment

# Vishay: The Hidden Key Winner in Nvidia’s AI Servers and RTX 50 Series Supply Chain

[https://medium.com/@mingchikuo/vishay-the-hidden-key-winner-in-nvidias-ai-servers-and-rtx-50-series-supply-chain-vishay-3-0-526aec187854](https://medium.com/@mingchikuo/vishay-the-hidden-key-winner-in-nvidias-ai-servers-and-rtx-50-series-supply-chain-vishay-3-0-526aec187854)

Ticker: VSH

1. While the tech industry focuses on Nvidia’s AI server and RTX 50 series supply chain, my latest research reveals Vishay as the hidden key winner.
2. Vishay has secured better-than-expected orders for Nvidia’s Blackwell AI servers (GB200 series and DGX/HGX B200) and RTX 50 series graphics cards. The component orders include MOSFET/DrMOS (VRPower), vPolyTan (polymer tantalum, such as T55), current shunt resistors, transient voltage suppressors (TVS), Schottky barrier diodes (SBD), etc.
3. Among the components above, MOSFET and vPolyTan deserve special attention.
4. For the GB200 NVL72/36’s 8kW 54V-to-12V power supply, Nvidia has switched to a Renesas (controller) and Vishay (MOSFET) combination, replacing power management modules from Flextronics and Delta. Notably, Vishay has replaced Infineon as the current MOSFET supplier.
5. For DGX/HGX’s 2kW 54V-to-12V power supply, Nvidia now uses ADI (controller) and Vishay (MOSFET), replacing MPS and Delta’s power management modules.
6. The GB200 NVLink Switch’s 2kW 54V-to-12V power supply also uses the Renesas (controller) and Vishay (MOSFET) combination.
7. For Nvidia’s upcoming RTX 50 series graphics cards planned for 2025, Vishay has been selected as a new DrMOS supplier, with mass production starting in 1Q25.
8. Based on points 4–7, Vishay’s MOSFET production capacity for 2025 is fully loaded and is expected to contribute approximately 20–30% of revenue, with a higher-than-average gross profit margin.
9. GB200 heavily utilizes Vishay’s vPolyTan (polymer tantalum capacitors, primarily T55). Vishay’s vPolyTan is already facing supply shortages for 2025 and is expected to contribute a high single-digit percentage of revenue with significantly above-average gross margins.
10. Vishay 3.0 stands to capitalize on robust demand for Nvidia’s AI servers and the new RTX 50 series, with meaningful results anticipated in 2025. The company is also well positioned to benefit from the automotive sector’s rebound and the burgeoning IoT market. Looking ahead, Vishay’s growth trajectory appears strong, driven by the ongoing success of its Vishay 3.0 strategy.

r/StockMarketSee Comment

Is this the part where hardware catches up to software? How’s the NVIDIA DGX Quantum going?

Mentions:#DGX
r/stocksSee Comment

What are you talking about? Blackwell DGX servers have ARM-based processors. NVDA has not dumped ARM.

Mentions:#DGX#ARM#NVDA
r/wallstreetbetsSee Comment

Nvidia to partner with SoftBank to build Japan's largest AI Factory, Nvidia DGX 25 EF.

Mentions:#DGX
r/wallstreetbetsSee Comment

I went full regard boiz. I apologize to nana CALLS: GM, DGX, TMUS, UPS, TXRH & SKX PUT: AAL rest in peace

r/wallstreetbetsSee Comment

You don't have a clue if you think this. NVDA's customers are the biggest companies in the world, with little to no debt and lots of cash. AAPL may be able to sell iPhones for $1k, but NVDA can sell its Blackwell DGX B200 for $500,000 with 70% margins because no one comes close to offering what it can.

r/investingSee Comment

Oh, you think I'm just talking about LLMs? No, not at all. There's plenty of data out there, non-specific to language models, that is still being used to train multiple different AIs specialized for specific design purposes. That's where the money is. ChatGPT and similar rivals aren't at all exciting in the grand scheme of things when I'm referring to AI. Here are some of the promising AI projects I'm talking about: CorrDiff AI extreme-weather modeling; the Nvidia and GE partnership, specifically their sonomet AI ultrasound and CT projects; RadImageGAN generative AI medical imaging, along with DeepTek, for AI-augmented radiology; Clara Parabricks genome-sequencing AI software; Nvidia DRIVE for autonomous vehicles; and Nvidia Omniverse, partnered with Siemens, for AI industrial production applications. Nvidia also has some US government contracts/projects using DGX SuperPOD and is working with DARPA. All of these revolutionize the spaces they've been applied to. The potential revenue from these projects could be insane going forward, and they're all just getting started.

Mentions:#GE#DGX