ML (MoneyLion Inc)

Mentions (24Hr): 4 (300.00% Today)

Reddit Posts

r/StockMarket: [Discussion] How will AI and Large Language Models affect retail trading and investing?
r/StockMarket: [Discussion] How will AI and Large Language Models Impact Trading and Investing?
r/smallstreetbets: Luduson Acquires Stake in Metasense
r/investing: Best way to see asset allocation
r/wallstreetbets: Neural Network Asset Pricing?
r/Shortsqueeze: $LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
r/wallstreetbets: Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
r/investing: Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
r/StockMarket: Moving from ML to Robinhood. Mutual funds vs ETFs?
r/smallstreetbets: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
r/stocks: hypothesis: AI will make education stops go up?
r/pennystocks: AI Data Pipelines
r/pennystocks: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
r/StockMarket: The Wednesday Roundup: December 6, 2023
r/wallstreetbets: Why SNOW puts will be an easy win
r/smallstreetbets: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
r/wallstreetbets: I'm YOLOing into MSFT. Here's my DD that convinced me
r/pennystocks: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
r/investing: I created a free GPT trained on 50+ books on investing, anyone want to try it out?
r/pennystocks: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
r/smallstreetbets: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
r/options: Option Chain REST APIs w/ Greeks and Beta Weighting
r/stocks: How often do you trade news events?
r/stocks: Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
r/RobinHoodPennyStocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/pennystocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/Wallstreetbetsnew: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/smallstreetbets: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/wallstreetbetsOGs: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/WallStreetbetsELITE: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/wallstreetbets: 🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
r/investing: AI/Automation-run trading strategies. Does anyone else use AI in their investing processes? (Research, DD, automated investing, etc)
r/StockMarket: Exciting Opportunity !!!
r/wallstreetbets: 🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
r/Wallstreetbetsnew: Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
r/wallstreetbets: The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
r/investing: Treasury Bill Coupon Question
r/pennystocks: Predictmedix AI's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
r/stocks: The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
r/wallstreetbets: NVDA is the wrong bet on AI
r/pennystocks: Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
r/wallstreetbets: NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
r/StockMarket: [THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
r/investing: The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
r/wallstreetbets: My thoughts about Nvidia
r/wallstreetbets: Do you believe in Nvidia in the long term?
r/wallstreetbets: NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
r/wallstreetbets: Apple Trend Projection?
r/stocks: Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
r/investing: Which investment profession will be replaced by AI or ML technology?
r/pennystocks: WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
r/pennystocks: $RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
r/wallstreetbets: $W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
r/pennystocks: Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
r/stocks: This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
r/wallstreetbets: roku thesis for friend
r/stocks: Training ML models until low error rates are achieved requires billions of $ invested
r/wallstreetbets: AMD AI DD by AI
r/wallstreetbets: 🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
r/wallstreetbets: AI/ML Quadrant Map from Q3…. PLTR is just getting started
r/pennystocks: $AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
r/wallstreetbets: DD: NVDA to $700 by this time next year
r/smallstreetbets: VetComm Accelerates Affiliate Program Growth with Two New Partnerships
r/pennystocks: NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
r/pennystocks: Netramark (AiAi : CSE) $AINMF
r/pennystocks: Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
r/wallstreetbets: Testing my model
r/pennystocks: Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
r/wallstreetbets: [Serious] Looking for teammates
r/stocks: [Serious] Looking for teammates
r/StockMarket: PLTR Stock – Buy or Sell?
r/StockMarket: Why PLTR Stock Popped 3% Today?
r/wallstreetbets: How would you trade when market sentiments conflict with technical analysis?
r/Shortsqueeze: Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
r/StockMarket: Stock Market Today (as of Mar 3, 2023)
r/wallstreetbets: How are you integrating machine learning algorithms into their trading?
r/investing: Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
r/pennystocks: Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
r/Shortsqueeze: Why I believe BBBY does not have the Juice to go to the Moon at the moment.
r/investing: Meme Investment ChatBot - (For humor purposes only)
r/pennystocks: WiMi Build A New Enterprise Data Management System Through WBM-SME System
r/wallstreetbets: Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
r/Shortsqueeze: The Squeeze King - I built the ultimate squeeze tool.
r/Shortsqueeze: $HLBZ CEO is quite active now on twitter
r/wallstreetbets: Don't sleep on chatGPT (written by chatGPT)
r/wallstreetbets: DarkVol - A poor man’s hedge fund.
r/investing: AI-DD: NVIDIA Stock Summary
r/investing: AI-DD: $NET Cloudflare business summary
r/Shortsqueeze: $OLB Stock DD (NFA) an unseen gold mine?
r/pennystocks: $OLB stock DD (NFA)
r/wallstreetbets: COIN is still at risk of a huge drop given its revenue makeup
r/wallstreetbets: $589k gains in 2022. Tickers and screenshots inside.
r/pennystocks: The Layout Of WiMi Holographic Sensors
r/pennystocks: infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
r/investing: Using an advisor from Merril Lynch
r/pennystocks: $APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
r/StockMarket: Traded companies in AI generated photos?
r/pennystocks: $APCX Huge developments of late as it makes its way towards $1
r/pennystocks: ($LTRY) Lets Hit the Lotto!
r/wallstreetbets: Robinhood is a good exchange all around.

Mentions

The earliest winners in AI/ML were digital ad companies like Meta and Google. Ads were the first money maker for machine learning in the early to mid 2010s before generative AI blew up.

Mentions:#ML

It's amusing how the masses think something like AI could just spring up out of nowhere overnight. DeepMind was formed 15 years ago. They made an AI chess engine that could easily outclass the best humans - not even a chance. OpenAI formed 10 years ago and got the first NVDA DGX cluster. It's just a matter of whether you had any relation to the field or any interest in it - but it wasn't developed in any sort of secrecy. In the early to mid 2010s, ML had started gaining traction as a potential degree option. I started building an NVDA position back in the 2017/18 timeframe after reading an article where several Silicon Valley seed investors were interviewed and asked which publicly traded company they would invest in - the top choice by far was NVDA because of the future of AI.

Mentions:#NVDA#DGX#ML

The Google discovery (the attention mechanism) improved the effectiveness of ML models, particularly but not exclusively ones that process text, but it had nothing to do with parallelization.

Mentions:#ML
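
For reference, the "attention mechanism" referred to above is the scaled dot-product attention popularized by Google's 2017 "Attention Is All You Need" paper. In standard notation:

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V

where Q, K and V are the query, key and value matrices and d_k is the key dimension; the softmax weights determine how strongly each token attends to every other token when the model processes a sequence.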

Back in 2016 I put half of my 401k into Nvidia and Tesla after seeing the demos at CES, where GPUs were going into every self-driving car, whether it was a Tesla or a competitor. Even though the tech at that time was based on CNNs and I was focused on cars, it seemed like a no brainer that GPU infrastructure was inevitable once breakthroughs were made in AI/ML.

Mentions:#ML

I think I was lucky that ML became the target of my Autistic fixation in 2016 which led to me making some good predictions of the future, could have easily ended up as trains or something if I was exposed to the right media around that time LOL Side note I recall a thread on an AI tangential sub ~2019 where an investor was asking people involved in the field what stocks we recommend he should buy, I heavily shilled Nvidia along with an explanation of CUDA and how all researchers were reliant on it - I still wonder if he bought in early or not :)

Mentions:#ML

I predicted Nvidia would boom in 2016 due to ML being heavily reliant on CUDA. I was learning some deep learning stuff at the time in Uni and recall speaking to my Father (a backend engineer) saying that I thought ANNs (specifically GANs at the time, this was pre "attention is all you need" and transformers) were going to revolutionize every single industry and that dGPUs and Nvidia were at the forefront. He disagreed and said that useful AI was 100s of years out (interestingly he still doesn't seem to understand that you don't need consciousness for intelligence).

Mentions:#ML

chargers ML was the play

Mentions:#ML

I am not saying that there are no uses for Machine Learning/AI, I am saying that most of the investment that is taking place is not sustainable or practical. Up until 2022, you had a lot of organic growth in the ML industry that was justified. Lots of very practical applications. The main problem is that in late 2022/early 2023 when ChatGPT and StableDiffusion drew a lot of attention by impressing the world, it created a generative AI bubble, in which any company that could make a cool tech demo would get funding, and AI become a solution in search of a problem. There are so many useless startups out there in the Generative AI space. Think video generation, AI generated games, etc. These are impressive tech demos, but their underlying architecture provides no feasible path to a valuable product. For example, there are some startups that have raised millions for AI generated games. But they do not store data variables in a logically consistent way, they do not map the world in 3d, they just render a sequence of 2d images with significant latency, in a way that responds to your inputs. They can throw all the compute they want at these models, they aren't going to produce a viable product consumers actually want to pay for, because their underlying basis is flawed. Even with faster hardware to resolve input lag, more memory to increase context window, they cannot generate a solid experience relying on their architecture. Many top researchers have come out and said that Generative AI has set back AI research several years due to all the bad investment being made.

Mentions:#ML

>The earnings growth isn't coming from useful products for end users, they are coming from other tech companies that are buying their products in order to make speculative bets on GenAI products. I'm baffled how confident you are in this statement, which is unlikely to be true. GenAI is not just an empty game, it has billions of daily users. There are lots of very clearly good user products from AI. Most of the tech companies will use these chips regardless of whether it's GenAI or more traditional AI/ML. https://cloud.google.com/transform/101-real-world-generative-ai-use-cases-from-industry-leaders https://indatalabs.com/blog/companies-using-generative-ai https://www.innovationleader.com/topics/articles-and-content-by-topic/scouting-trends-and-tech/the-top-10-biggest-us-companies-ai-gen-ai-use-cases/ People that don't understand applications of AI will be left behind

Mentions:#ML

I guarantee Klarna has a machine learning division in which they have their own scores of probability of payout. They're burning money to see which factors are the largest. Can't have a good ML model without sacrificing to see who will actually fuck them over and how they behave.

Mentions:#ML

Protein engineers will be very happy when you tell them that they will be just as relevant in 5 years even though ML can create novel proteins that perform better than they create in 1/100th the time and for less money. Protein engineering uses LLMs in the same manner as anything else. The language in this case is protein sequences.

Mentions:#ML

The bubble we're talking about is LLMs, the idea of "AI as a brain." Using ML/big data isn't a bubble, but it also doesn't require the massive capital expenditures that companies are doing right now. And ML is allowing you to do things you could not do before, not replacing you, as a researcher.

Mentions:#ML

I think google is one of the best plays for the next 10yrs. There's still so much opportunity left for real value to be derived from the useful applications of AI/ML (not just chat bots and shit but semantic search applications / vector embeddings / things like that) and its all going to run on AWS, GCP, and Azure - and frankly everyone in the data community is leaning towards AWS and GCP for where to host this stuff bc Azure is just a clusterfuck of cloud service offerings

Mentions:#ML

Considering ML scientists inside of Meta report constant shortages of compute time, I really don’t see the demand drying up in the next two or three years. For fucks sake, we just used machine learning to solve every single protein folding problem possible. We went from massive folding at home distributed compute networks solving maybe a few hundred protein folding problems. To ML solving every protein folding problem known to man. Personally I think the fact that people think the demand for these chips is at risk says more about the lack of education of the general public than anything else. There will be ebbs and flows, but I think we are in a completely new era of computers now. And have been ever since we achieved EUV en masse for acceptable prices. Well, everyone except Intel that is. Twenty years from now is going to be absolutely nuts thanks to what we are just now unlocking with modern machine learning.

Mentions:#ML

Thanks for your reply. So to answer your questions: a) Definitely, want to be able to "enjoy" 80% of $2M+, vs. not enjoy $1M+. b) I can definitely use my margin for PUTS. I use IBKR for their generous margin policies and rates (transferred everything from ML and MS/e-Trade when I decided to go Margin and saw their rates). I'll try a few CSPs, but need to find some stocks I'm familiar with, at prices I'd be comfortable owning at the PUT strike (minus the Premium). c) As for why I'm holding onto COOP (and RKT, for now, once it converts in Q4): it's not "just" to save on the tax hit, but TO MAKE MONEY. The "market" has definitely NOT priced in the upside of the COOP/RKT deal in the next yr or so, especially with Rate Cuts coming, and the lack of full clarity from COOP/RKT on their combined EPS/EBITDA for '26. See: https://www.tipranks.com/stocks/rkt/forecast# All these "respected Analysts" haven't updated their forecasts in months, and the *current* RKT price (even with a 10% drop in the past weeks, due to some news which I don't feel justifies the drop) is literally above all their forecasts. I (and others who follow the two stocks closely) feel that the combined entity, based on current (non-rate-cut) earnings, is worth about $25/share, PRE any rate-cut bump in ReFi's etc. ONLY the recent BTIG Rating which I referred to earlier has taken this info/number into account. $25 (expected) / $18 (current) RKT price is a *40% bump for 2026*, and so I'm not comfortable selling it or putting it at risk of getting called till it hits/gets close to that number (unless it's like 20% above the current price, with a 30DTE, but I'd need to see what kind of Premium I'd get for that CC). Trust me, my objective is to MAKE MONEY and leverage the $$ I have on Margin, which is why I set up the layered CC ETF strategy. So far, 1 month in, with about $400k borrowed/invested, post Margin (5.x%), I should be netting around $10K a month. And I'm already planning to "adjust" my layers to shift some $$ away from XDTE to the higher-premium Single Stock CC ETFs (that I'm comfortable with, not the highest payers like MSTR-focused ones). Would welcome your thoughts/feedback on the above, as I continue to learn how to leverage/grow my $1M+ into $2M+ (and beyond).

OP, quantum computing is a different beast that is far beyond an NVIDIA/AI situation. From a theoretical level, Machine Learning + AI have been actively researched and developed in some form since the mid-1900’s, and if you consider optimization as a core part of machine learning, its foundations were well-defined even back into the 1800’s. The foundational theory of machine learning and statistics that led to the generative AI that we have today has been developing for over a century, and the first nascent neutral networks that are a precursor to today’s deep learning generative AI models were developed in the 1950’s to 1960’s before becoming a full-on viable theory (with the invention of backpropagation to train models) from 1970 onwards. From a practical level, machine learning and AI have been solving practical problems for a LONG time before this hype started getting priced into the market. Google, Netflix, Meta, all started leveraging ML in their products in some form for decades before becoming the monster they are today. As of now, quantum computers have demonstrated zero ability to solve problems that matter to us today. Furthermore, the engineering required to maintain a quantum computer that works is massive and near the limits of physics (I’m talking you basically need to reach near absolute zero on a consistent basis). I’m not saying things can’t change overnight, but I would be VERY wary of any hype around quantum computing. It is VERY unlikely that issues with decoherence and stability of qubits will be meaningfully resolved in a matter of only 2 decades. So OP, be wary.

Mentions:#ML

Nvidia has monopoly on the GPUs that are used for AI/ML. All the AI/ML happens using libraries like Tensorflow. Those are optimized for CUDA. And CUDA is proprietary Nvidia tech. This is the reason why x86 processors (Intel and AMD) cannot be displaced in the personal computer space because most of the software is written and optimized for x86. And it is also not about merely writing new libraries or new code. The existing code is supported in hardware by processor features. Given how these processors are blackboxes, it is impossible to build competitors for them.

Mentions:#ML#AMD
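
To illustrate the lock-in described above, here is a minimal sketch in PyTorch (chosen for brevity; the comment names TensorFlow, but the pattern is the same). Framework code simply requests a CUDA device, so it is written against and tested on Nvidia hardware first:

import torch

# Use the CUDA device if one is present, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(128, 10).to(device)  # move the weights onto the GPU
x = torch.randn(32, 128, device=device)      # allocate the batch on the same device
y = model(x)                                 # forward pass dispatches to CUDA kernels
print(y.shape, device)

Every library layered on top of calls like these has to be ported and re-tuned before a non-CUDA backend is a drop-in replacement, which is the moat the comment is pointing at.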

The fact that they are behind Grok despite being invested in the AI/ML business for over a decade is highly alarming. Grok barely got going a year ago and they are destroying Google, which has spent far more on AI. Losing to OpenAI is a bit more understandable, as OpenAI is a leader in the industry and has some fantastic researchers. But losing to Grok is embarrassing.

Mentions:#ML

I'm an Applied Scientist and I've worked in AI/ML for 10 years and you are just flat out wrong.

Mentions:#ML

>The MIT study published last week shows that 95% of attempted AI implementations at companies fail. Ah the parrotted MIT study. The report fails to clear the bar for any good statistical study. Low n-study with no sampling validity or measurement clarity. There is no data or appendix to reproduce this. Ignoring all this....just because a pilot doesn't progress doesn't mean it isn't delivering any value (would help if study had any measurement clarity). The "study" also attributes the biggest issue is lack of memory and context window. This is something models have been evolving and getting better. > And if you understand the math behind it you'll know that it can be useful as a tool under highly skilled hands of field experts, but that it's not going to be a general "replace all workers" tool like the claims from tech would have you believe. 1. Never claimed it will replace all the workers 2. It doesn't have to be used by highly skilled field experts. Like not even close. A junior programmer with the appropriate model can perform close to a senior programmer (doesn't mean senior programmer doesn't have experience or experience doesn't matter). 3. You are misunderstanding the difference between task and job. 4. Custom models with enough memory and context windows for sector specific are already on the way. These models even assuming they don't replace workers will still be running on GCP, AWS, MS servers. The need for compute will skyrocket and the models will be licensed by companies creating their own models. [AI will be a cash cow for MS,AWS,GCP, ORCL] > I think you forget that the VAST majority of people are just now becoming aware of what big tech does and the younger populace, being much more technically literate, is likely going to see a shift relative to the populace currently. Don't see it at all. Younger people are caring less and less and are pivoting more towards consumerism. Take a look at the TikTok ban - TikTok (chinese company) quite literally is collecting billions of data points and Trump wanted to ban it and the younger generation threw a fit. People are content with the dopamine drip and the algorithm feeding them exactly what they want. > but now there are companies starting with new business models, building the same (and arguably better) services that big tech offers. lol like what? > I think you are severely underestimating the irritation of people that the AI models are trained off of their data, without their permission (sorry burying stuff in the T&Cs might count legally but not to consumers). And all it takes is one lawsuit to completely change the legal framework, or for one law to rewrite what can and cannot be done. Not particularly. Like I mentioned vast majority of people don't even understand. Even if they did they really don't have many options for them to opt out of. Every social media company is collecting information. Your comments are being collected by Reddit and then sent to Google for their models but you are still on here debating an internet stranger. Sure all it take is a law but with how much funding and influence the big tech has? I'll keep my money on big tech and you can keep hoping for reforms that might one day happen. > The models aren't "intelligent" in the human sense. They run statistics on massive datasets and return the most likely set of words based on the input set of words. The human brain, which is the most effective intelligence we know of today, runs on 20W. 
That's not even enough energy to power the old fashioned tungsten lightbulbs. I do ML. Nobody claimed these models are sentient or intelligent. They don't even need to be "intelligent" - you are confusing AGI with AI. LLMs are just part of ML and we have had ML for years now. It turns out the human brain as special as it is - is still a pattern recognizing statistical machine with a bigger context window and memory. The models don't need to be "intelligent" for them to generate value nor do they need to something special that only humans can do. > It's really best if you learn a little about things, because you seem to be basically building your view based on what you hear from people who have a vested financial interest, not based on independent reviews and a fundamental understanding of the technology. My work literally entails around DE/ML. I work with these models regularly. I don't think you quite understand the nuances of AI...you keep saying "math" but I don't see any actual evidence for your statements or your so called math.

Thank you and glad I can be helpful. 1/ Yes, it has to be on the same timeframe. 2/ The VRP in SPY turned negative on Mar 20 and, despite being ever so slightly positive at the worst of the crisis (Apr 8-9), it was mostly at 0 and then negative for a long time, as IV got mercilessly crushed and realized was still high. VRP has been really positive (+5 on average) since early June, and indeed since then it's been fairly easy making money selling options again, especially with a particularly forgiving realized vol and a gentle path drifting on the way up. It's not always like this, and sometimes you have to delta hedge. 2b/ VRP is a measure of the past - yes and no. The best estimate of RV tomorrow is often RV today. Therefore, VRP today is very often a great predictor of VRP tomorrow. There are other factors, but in my ML model, VRP today comes out (not surprisingly, again) as one of the top features. 3/ The key is still to put it in context and to capture moments where it is really stretched - knowing that it is positive or negative is already great; knowing when it is really stretched compared to the recent past is even better. 4/ I expose almost all of my research in my app. I know how painful that stuff is to recompute because... well, I've been trading for a while. Retail traders are at a massive disadvantage to pros because... well, they don't have data, and sometimes lack the tech skills and, even more often, the time. There are other tools in the market; do a quick Google search and you will find the one that suits you best. 5/ Stop loss: never. I size small and I am not buying wings either. Again, it's like being an insurance provider. You can decide to reinsure yourself, but it eats your margin (especially with the volatility smile, you end up buying a vol that is often much higher than the one you sold). If you insist on hedging, you should consider calendars; they are probably the best of both worlds, but not a magic solution either: you are now exposed to the term structure.

Mentions:#VRP#SPY#ML
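
As a rough sketch of the quantity being discussed (the poster's own model is not shown here, and the numbers below are invented purely for illustration), the variance risk premium is commonly proxied as implied volatility minus recent realized volatility:

import numpy as np

def realized_vol(closes, trading_days=252):
    # Annualized close-to-close realized volatility from a daily price series.
    log_returns = np.diff(np.log(closes))
    return log_returns.std(ddof=1) * np.sqrt(trading_days)

closes = np.array([500.0, 502.1, 498.7, 503.4, 505.0, 501.2])  # hypothetical daily closes
iv = 0.18                   # assumed 30-day implied vol quoted by the options market
rv = realized_vol(closes)   # backward-looking realized vol
vrp = iv - rv               # positive when options are priced rich vs. recent movement
print(f"RV = {rv:.2%}, VRP = {vrp:+.2%}")

A positive reading matches the "insurance provider" framing above: the option seller collects the spread between what options imply and what the underlying actually does.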

Dude, you're absolutely right. I used to think trading was all about hitting that one massive winner, but honestly? The real edge these days comes from combining AI tools with solid risk management. Took me way too long to figure this out. Backtesting changed everything for me. I started running my ideas through years of historical data first. The AI can simulate thousands of different market conditions and basically tell you "hey, this strategy would've blown up your account in 2018." Saved me from so many stupid plays. Pattern recognition is where it gets interesting. ML is insanely good at catching things I'd never notice - like weird volume patterns before breakouts or momentum shifts that happen right before reversals. Helps cut through all the market noise so I'm not just trading because something "feels" right. Automated risk management was my biggest game changer though. I set up bots to handle stop losses and position sizing automatically. No more holding losers because I'm "sure it'll come back" or risking too much on a single trade. Takes all the emotional BS out of it. Sentiment analysis is like having superpowers. The AI scans everything - news, earnings calls, social media, even WSB posts - to gauge how everyone's feeling about the market. Really helps you avoid getting caught in bull/bear traps. The key thing I learned: don't let AI trade FOR you, use it WITH you. I still do my fundamental analysis, but now I have this AI copilot helping with entries, exits, and keeping me from doing anything too stupid . If you want to know more about Ai trading go check out this page [Algolyra ](http://algolyra.beehiiv.com)

Mentions:#ML

I made a bag on this back in the day and have started eyeing it again since it’s starting to look oversold. Their numbers are better than you would expect and they have a pretty decent ML R&D org. I could see them releasing a product that goes super viral and spikes the stock back up, and I don’t think the downside is huge

Mentions:#ML

To his point, all these methods are not super accurate. They work but ... they have some disadvantages one cannot ignore. I personally use some mixes of HAR and a few other ML models to predict RV. IVR is def backward looking and in and of itself... it's not super useful. But it looks like you know your stuff :) Curious why you were asking these questions in the first place :)

Mentions:#ML#IVR
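
For readers unfamiliar with the acronym: HAR here is the heterogeneous autoregressive model of realized volatility (Corsi, 2009). The poster's exact mix is not given, but the canonical HAR-RV regression forecasts tomorrow's realized volatility from daily, weekly and monthly averages of past realized volatility:

RV_{t+1} = b_0 + b_d * RV_t + b_w * RV_t^(w) + b_m * RV_t^(m) + e_{t+1}

where RV_t^(w) and RV_t^(m) are the 5-day and 22-day averages of daily RV. Mixing the three horizons is what makes the model "heterogeneous", and it is a common baseline that ML forecasts of RV are benchmarked against.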

Dude, GoogleDeepMind is OWNED by Google, no fucking shit they are gonna use TPU... Are you really this stupid? You're trying to prove to me that TPU is "best" by telling me GOOGLE DeepMind uses TPU? ...? Next you're gonna tell me Macbook Pro is the best laptop because Apple engineers use them. My point is that 99.9% of university researchers, AI labs, AI startups, mid-sized companies, enterprise companies ARE ALL USING Nvidia GPUs. Why? Because CUDA won the war, all the open source ML libraries are optimized for CUDA first, also, all the engineers have an NVidia GPU at home so they can test their code on their gaming desktop. This is how I first started doing AI programming, I was running AI code on my 2080S years ago, then the same code ran on my 3080ti, then the same code ran on my 4090, the exact SAME CODE, also runs fine on a h100 that I used on Nebius cloud 1 month ago. I.e. Engineers love the software stack for NVidia chips and it has been backwards compatible for years and years now. Google's AI software stack is seen as a piece of shit in comparison. And by the way, I also do my AI engineering on NVidia chips, for all of the above reasons. And none of the AI engineers I know use TPUs, again, because the software sucks ass.

Mentions:#ML

I think the main driver of snowflake adoption had been that it feels just like a database, since it's managed so fully. They've definitely worked hard to expand their offerings and integrations, but with the focus on SQL it got branded as a distributed DB primarily. Databricks from the start was focused on programming languages for distributed computation, which offered a lot more flexibility, and for being a place to do ML on big data. They've since basically closed the warehousing gap, and keep rolling out new features at an insane pace to sprint ahead of competition. Execs don't care about features as much as cost, reliability, and simplicity, so walking that knife edge to maintain max profits is an ever evolving process.

Mentions:#DB#ML

Snowflake's customer retention rate (not revenue retention) is well below 100%, so companies are in fact migrating to other providers - Fortune companies included. Customers migrate for many reasons, such as cost per DBU consumed, open source capabilities, ML capabilities, and wanting to move to more capable data warehousing models. You can search for stories of companies migrating and there is no shortage of CTO/CIOs championing their switch. >I'm just not seeing anything with substance here, just words. Are you looking for data to support SNOW as being bullish or data to support Databricks growth?

Mentions:#ML#CTO#SNOW

Yes, you can transfer the shares in your ML account to Fidelity without selling any shares or incurring any taxes. Once in Fidelity you can do whatever you want.

Mentions:#ML

My initial response to you was just correcting you because you said Nvidia was just doing cards for a niche area of gaming and graphics. I was simply telling you that isn't true because they've been in the ML space back then too. Not sure why the conversation deviated to investors.

Mentions:#ML

For some reason ML and BofA don’t care what mutual funds/ETFs I buy at Fidelity. But at ML everything is locked down and I am stuck with mainly T. Rowe Price and American Funds. Idk if it’s because they can control the limitations from their side because of my joint account with the covered person. Even when I call the help desk they said it’s a limitation on my account. So I transfer to Fidelity, make all the changes I want, then transfer back to ML to maintain free banking.

Mentions:#ML

Edge is what I meant. I just call it ML. Edge sucks.

Mentions:#ML

>Can I move my rollover IRA from ML to Fidelity? Sell the SPLG position and buy other funds I want? Then, once I do that, I move the new positions back to ML to maintain my platinum honors? Is Fidelity on your family member's list of approved brokerages?

Mentions:#ML#SPLG

I read this story in a photoshop adjacent subreddit and people were laughing at the idea that these tools could affect the use cases of photoshop. Something about the lack of control, but the way I see it, the rate of improvement suggests that these tools are going to be quite refined in a few more years. Adobe stock is down 1.8% on this news and I have to wonder if Adobe is going to keep sustaining paper cut after paper cut with AI releases. I have like 1.5% of my port in Google and I consider them one of my critical AI holdings. Been following their ML research teams for years and I'm still impressed with what they're discovering.

Mentions:#ML

This is a complete bastardisation of "AI" tbh. Yes, traditional ML methods have been used for decades, but conflating that with modern AI (aka LLMs) is like saying we're still doing electronic trading. Asset pricing? That's set by the market, so quite a catch-22 to say market efficiency is affected by that. ML is used for finding signal, upon which dealers will interact with the market. Actual """AI""" has no indicator on the market beyond these flows. Ren Tech is such a cherry picked example that I'll pay you $100 if you've ever actually worked in the fund industry full time.

Mentions:#ML
r/stocks

The long term catalyst AMD needs is large scale adoption of their AI GPU's - MI3xx. Given that the company itself won't provide any material guidance in this area, it's safe to assume they're not making any traction yet on NVDA CUDA. IMO - It's very difficult to see that happening anytime soon. First of all you have all the ML/DL engineers using CUDA for the past 15 years. As everyone is looking for first mover advantage, I can't really see any of the big players looking to divert time looking into an alternative over something that's been baked into the industry. NVDA basically has the entire ecosystem; GPU cluster with high speed links. And secondly, NVDA has always made best in class products and have had top GPU for 25 years. Strip away the names for a minute and just look at the scenario. Here we have a top class product that everyone knows and loves that has a great reputation spanning over a decade. Now another entrant is trying to make a competing product that at best you could say is near equivalent, but it's not in any way a game changer/next generation step ahead. What's the incentive/motivation to swap to the new? Is there one?

You think that generative LLMs are trained on next token predictions. And you just named an introductory to the transformer architecture that has no relation to modern LLMs aside from introducing the attention mechanism. I have multiple degrees in AI/ML and am a published author so based on your two posts I can safely say you’re either an undergrad in the field or someone who read an intro to NLP and LLMs and claims to know more than they do, no offense :)

Mentions:#ML

What’s wrong with Edward Jones? Just curious, because I have most of my money with Merrill Lynch but I have an orphan account with 100k that was handed down to me and is still with Edward Jones. At the end of the day they both return about the same percentage, but Edward Jones uses a lot of mutual funds and ETFs, whereas ML has a lot in individual stocks.

Mentions:#ML

As a ml engineer, their software is overhyped. Hyperscalers will eat their lunch money for their private core. Government is their hold and I value it at 12-18x revenue multiple which would price them at $12-16/share. With private biz, closer to $18-22 a share. Another note, the public will realize the blood money and realize if they win, they will lose and the bad press will be their downfall. Right now, it’s a meme stock. How ironic for a company that’s in the “let’s describe reality” to have valuations built on delusion. The masses are ignorant of the ai technology and think this shit is theirs only. ML models aren’t code but data. Orgs aren’t the same. This shit ain’t scaling.

Mentions:#ML

Customer experience ranking of the brokers I use: Fidelity is the best, next is Schwab; BofA Merrill Lynch and Vanguard are awful. BofA/ML and Vanguard intentionally make your life miserable when you try to manage your holdings with them.

Mentions:#ML

They’re not a leader on AI *yet*. I follow Apple very closely and yes, their AI offering sucks, but they are definitely not behind by any stretch. They have surpassed their competitors in traditional ML for years. The issue with Siri is that when they launched their AI Siri, it had “two brains” so to speak. The regular one and the new AI infused one, and when you speak to it, it has to decide which one it kicks off the prompt too (or send it off to GPT). They ended up delaying their actual AI Siri with app intents (where Siri could control the device) because of this “two brain” issue. Now it’s very likely that it will be coming out in March 2026 because they’ve been working on it for 1.5 years now, and will work with specific apps (I.e. all first party ones, and select third party ones). I don’t think people understand what this means. Siri will be able to access information from apps and take actions on your behalf. Apple’s new HomePad device will be using Siri + this new app intents to do exactly this. No other AI company has been able to get this working on device locally and probably wont for a long time, given how hard it is. Apple’s hardware + software stack allows them to execute this, and I’m 100% sure it’ll blow the “behind in AI” narrative away when we see it next March.

Mentions:#ML

The economy is shifting. But, also, I think a lot of investors are buying into the "tech has reached a peak/bubble" rumors that keep flooding the place. If folks keep saying something about the market it starts to become a self-fulfilling prophecy. "Tech is crashing! Tech is a bubble!" every day. I had a spread of tech stocks... AMD, NVDA, AMZN, MSFT, GOOG.. and even some lesser known stuff. Earnings have been good, but the stocks drop. Folks are taking earnings. But, the stocks kind of stay level. Could just be the summer/fall blues that hit the market. But, folks are reading into it that the tech "fad" is over. In the long term.. like years, decades... tech is here to stay. We're exponentially increasing in tech every day now, and chips, algorithms, softwares, etc are driving everything now. So, tech stocks are not fads. They are the foundation to future techs.. like how ML/AI is being used to fast-track new drugs to production or find new uses for drugs. Or to analyze tons of data for things a human couldn't spot. A dip in the economy will create a hiccup in all of that, though. And investors sometimes being irrational beings doesn't help, either.

But why are you only talking about AR and VR and not ad revenue ? AI infrastructure helps both. Your second paragraph doesn’t make sense. First of all generative models are already used in adspace and makes meta billions of $. It seems like you think generative ai = llms. Infrastructure capex isn’t just for generative ai or llms, it’s also for r and d for the next breakthrough of future models.  Your argument is literally “meta wasn’t good at vr glasses, so why would they be good at AI”.  That’s an equally bad argument. ML is a subset of AI. Of course if you’re successful in a subset of a certain thing there is a decent chance you’ll be successful at the superset of it. 

Mentions:#ML

Your parent comment was addressing the "infrastructure" investment for Meta AI. I'm just throwing out their track record with tech as an example of their performance. 2024 had like $2b in revenue from AR/VR. Not exactly stellar return from the $100b buy-in. They have a history of throwing money away for tech that is beyond their scope. >business model the last decade So then the current investment is likely attributed directly to the novel generative models. Why use this as a counter point to discussing their new business? You attributed their existing ad space revenue with their current venture... General success with the standard application of ML in ad space doesn't attribute success with novel AI. That's like saying, "they were successful as a social media platform, why wouldn't they nail AI?"

Mentions:#ML

I mean it is taking off…do you deny that? Yeah, I know they’ve been using ML algorithms, ML is AI. I don’t understand what point you’re trying to make. That it’s been a backbone of their business model the last decade? I mean yes lots of ai and ai adjacent tech has meaningful hype. Again, I don’t get what point you’re trying to make. 

Mentions:#ML
r/stocks

>Machine learning (ML) is a field of study in **artificial intelligence** concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions. That's the term's definition from 1959. When we're talking about AI advancements and AI usage and AI technologies, these include ML and the results of ML.

Mentions:#ML
r/stocks

No it's not :) otherwise it would still be called ML. AI is an umbrella marketing term that hints at AGI - for the trick to work, people need to believe it's intelligent, thus they use as many words as possible to anthropomorphize it.

Mentions:#ML#AGI
r/stocks

ML and computational analytics in general have far more substantial contributions in industry and are literally driving us to the future right now. LLMs are just something you can get retail interested in and create hype to drive investment.

Mentions:#ML

>the system isn’t functioning as intended. Companies that would otherwise be on the path to a potential initial public offering or lucrative acquisition are getting pulled apart, with the bulk of the cash ending up in the pockets of the founders and their leading engineers This is a twisted narrative. The reality is all these AI startups do not have any monetiziation plan whatsoever. It's just a bunch of experts in ML and DL banding together hoping for a big payday. But current infrastructure does not support wide usage of AI yet. If that's the case, how can you even create and try to market and sell an AI based product/service? It would basically be ".com" era all over again - all we have is "AI" in our name. Most of these AI companies don't have any worth outside of their talent. The talent fetching big paydays is fair - they have an in demand and scarce talent. The capital markets are just trying to pass pre-revenue/pre-product/no monetization business to the next person - that's the actual scam.

Mentions:#ML

Nobody thinks that. Everybody knows about ML and all the other things that can help process massive amounts of data. But LLMs are the things that these big companies are getting people excited about, because people suddenly think they can _think_. It's _words_, like the kind my employees use! But it doesn't work well enough, and that's showing. Zuckerberg did not "replace mid-level engineers by the middle of 2025." As far as the other stuff, JEPA is out and not that impressive. Moore's Law weighs heavy; we are nearing the limits of how much more tightly and efficiently we can pack silicon in without getting _really_ expensive. Now, when these new data centers come online, and they're ten times the previous size and they _still_ can't create AGI, so we put them all together and get something 100 times the size, maybe, _maybe_ we can create something like an AGI. Something that is _almost_ as smart as a real person. You know, something almost as smart as two people could create by accident if they just forgot to use condoms one night. All this, for trillions of dollars. Silicon Valley has fucked up its microdosing and gone off the rails.

Mentions:#ML#AGI

And all those things were there before "AI", it was ML :) I find it fascinating that people rediscover how useful computers/algorithms are.

Mentions:#ML

I'm thinking more of the evolution of ML which is tied to LLMs and how they use ML for ratings. Rating accuracy is the biggest deal for insurers because if you rate correctly the good drivers stay because they get lower prices and the bad drivers leave because of high prices and go to your competitor. I'm not in the machine learning space though so I could be way off and they might already be using the most up to date algorithms.

Mentions:#ML

I'm the lead of ML integration at a B$ company with 300 employees, I've been assigned to create a GPT with existing company data to streamline research and development of new products which honestly would be useful no doubt, but our IT team is killing any and all initiatives we have, they block projects with endless meetings, endless planning, deferring tasks, or literally lie about having done the work. Its been like pulling teeth to get even most simple GPT integrated, so I've just taken to kicking back and waiting for them to move the needle, I don't have any authority to demand they work on it so what can I do? I figure, most companies are also caught in some terrible corporate deadlock.

Mentions:#ML

I am BSing a bit, I don't work with ML, but from what I understand LLMs are just another approach to ML and I think any evolution in that space that could be applied to their rating approach would be good.

Mentions:#ML

Are you even from the tech space? AI might not be perfect, but neither did we have anything near its capability years ago... We had ML, but no one was using LLMs.

Mentions:#ML

>the ML space is really evolving lately with LLMs so they could really land on something good. I call BS. LLM is short for large language model; that's great for generating narrative but useless for math. Insurance companies and their risk underwriting are built on solid math, actuarial processes, etc. LLMs have 0 impact on this process. You're either BSing without knowing about ML or you were never really an employee there

Mentions:#ML

LLMs, perhaps. All generative AI and ML, I don't think so.

Mentions:#ML

It doesn't really matter. The point is that ML is insanely useful for many problems. People focus too much on if AGI is achieved, but even if it is not then it still brings a lot of value.

Mentions:#ML#AGI

here is a good metaphor so that we won't be in circles: AI = Teach a robot to cook. ML = Show it thousands of recipes and let it learn patterns. RL = Let it cook in the kitchen, reward it when the dish tastes good, punish it when it burns food.

Mentions:#ML#RL
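
The metaphor maps directly onto the standard reinforcement learning loop: act, get a reward, adjust. Below is a minimal, self-contained Q-learning sketch of that loop; the three-state "kitchen" is invented purely for illustration and is not any real environment or library.

import random

states = ["raw", "cooking", "done"]
actions = ["wait", "plate"]
q = {(s, a): 0.0 for s in states for a in actions}  # learned value of each (state, action)

def step(state, action):
    # Plating a finished dish earns +1; plating too early earns -1; waiting moves the dish along.
    if state == "raw":
        return ("cooking", -1.0 if action == "plate" else 0.0)
    if state == "cooking":
        return ("done", -1.0 if action == "plate" else 0.0)
    return ("raw", 1.0 if action == "plate" else 0.0)  # "done": serve it and start over

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate
state = "raw"
for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(actions)                     # explore: try something random
    else:
        action = max(actions, key=lambda a: q[(state, a)])  # exploit: best known action
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print({s: max(actions, key=lambda a: q[(s, a)]) for s in states})
# Converges to: wait while raw or cooking, plate when done.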

> the ML space is really evolving lately with LLMs so they could really land on something good. An actual software engineer would hopefully know that the usefulness of LLMs to an insurance company would be pretty much nonexistent.

Mentions:#ML

I used to work there as a software developer, I'm still holding stock I bought on IPO because I think they'll come back. They were all in on growth before but now from what I can tell are dialing it in for rating and profitability. I've been tempted multiple times to put a lot more in but usually talk myself out of it because I'm already holding a good amount and don't want to risk losing even more. That being said, I did just buy 30 more shares after the weird drop after earnings since they seem to keep steadily rising. From my time there they've always been very data driven and flexible so I'm hopeful. They have enough data to backtest any changes to rating algorithms and the ML space is really evolving lately with LLMs so they could really land on something good.

Mentions:#ML

Late to this party, but ML blocks like 90% of the tickers now due to volatility. Have to call in trades now.

Mentions:#ML

Another big reason is cause they’re staffed with devs that think because they can use MCP they can build ML centric products that impact customers. Turns out that’s hard!

Mentions:#ML

ML engineer here. I laughed at you saying people think AI is a silver bullet. I’ve worked through the big data, business intelligence, data science and machine learning corporate phases. Mapping data to biz ops is 90% of the work. Nothing has really changed, in a way.

Mentions:#ML

Okay now what? I sold for near 100% losses and used the last of my emergency fund this past weekend to hammer the Bill’s ML bc it had easy win all over it. I’m fucked

Mentions:#ML

Cool, I've trained ML models before too. Can you provide any examples of quantum computers being used for AI training or inference? Maybe it is relevant in 50 years, but it isn't now and won't be for a long time. The number of weights you need for an LLM is way too large for any quantum computer that will be built within the next decade.

Mentions:#ML

Coming from a CS perspective though, AI is better used in diagnostic healthcare (through provided data and machine learning) than it is outside of it (ChatGPT LLM). Their AI is for diagnosis, and the first thing they teach new machine learning students is using AI/ML to diagnose cancers, tumors, and other medical ailments. Read the research, not the buzzword. If they were using an LLM, I would have been out ages ago

Mentions:#ML

Comparing OpenAI and Palantir is just kind of stupid. Palantir is primarily using AI/ML - an extremely advanced and mature technology with clear and effective use cases. OpenAI is (obviously) GenAI/LLMs - a novel technology that is hot and exciting but has yet to really identify an enterprise profit model.

Mentions:#ML

As a ML engineer and architect, people need to understand Palantir’s private wing is extremely weak. I fully expect big cloud providers to eat their lunch money. Their government contracts though is a serious differentiation but I would price it at 10-15x. Annualized commercial rev is $1.7b. At 15x multiple that’s a $10/share. Annualized private rev will be $1.3b so another $8/share at a very liberal 15x multiple. I price them at around $18-20 a share in 12-24 months.

Mentions:#ML

There is already intense competition in humanoid robotics. I wouldn't even say Tesla has a lead. FigureAI, 1x, Apptronik, Agility Robotics and then you have Amazon robotics and Google's Gemini Robotics teams pushing hard on the research side. A lot of these guys have heavy backing from Nvidia, Microsoft, Amazon, etc. China has a ton of companies pushing humanoid robots hard as well. The barrier to entry on a humanoid robots has become so low. Commodity robotics hardware paired with foundation ML models trained via simulation / world models. A much much lower barrier to entry than automobiles. You need a small fraction of the capital and there's so much development going on with Vision Language Action (VLA) models and tools for training via simulation. Google's Genie3 world model could fundamentally transform robotics development. Nvidia is also pushing to open source VLA models and simulation tools for training robots. They benefit by locking everyone into their edge compute platform.

Mentions:#ML

Is AI customer support partly responsible for Comcast subscriber losses? "One of the key strategies involved leveraging AI and ML to transform the customer experience." https://www.toolify.ai/ai-news/transforming-customer-experience-comcasts-ai-and-ml-success-story-1482075 It appears as if AI customer support has created customer losses for other companies: https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/ What has your experience been?

Mentions:#ML

PR That’s what Palantir does good; PR. Palantir is simply data analytics along with some API calls to ML and LLM systems for what they call AI data analysis.

Mentions:#PR#API#ML
r/stocks

Came across this thread while studying more about STM. Very nice to have opinions from engineers and distributers. Thanks! I want to add some research I gathered on STM from an angle less discussed: Post-quantum cryptography (PQC) and the migration plan to using PQC software and hardware: The National Institute of Standards and Technology (NIST) finalized the first PQC standards - ML-KEM (FIPS 203) and ML-DSA (FIPS 204) - in August 2024. The NSA anchored the U.S. migration in CNSA 2.0 (the PQC playbook for National Security Systems), reinforced by NSM-10 and OMB M-23-02. Under CNSA 2.0, any NSS equipment that can’t support CNSA 2.0 must be phased out by December 31, 2030, and CNSA 2.0 algorithms are mandated by December 31, 2031. NSM-10 and OMB M-23-02 extend planning and migration across civilian systems toward 2035. In practice: chips used in federal/NSS systems need PQC support this decade - specifically ML-KEM (FIPS 203) and ML-DSA (FIPS 204) - and suppliers that can prove those algorithms now are better positioned for U.S. government demand (with knock-on commercial pull). To achieve compliance, modules typically go through validation in two steps: 1. NIST's Cryptographic Algorithm Validation Program (CAVP), for FIPS 203/204 algorithms. 2. NIST's Cryptographic Module Validation Program (CMVP), for FIPS 140-3, which can include the validated algorithms. As of now, STM is the only MCU vendor with a vendor-labeled NIST CAVP validation explicitly covering ML-KEM and ML-DSA for an MCU library - validated July 8, 2025 (Validation A7125) for the STM32 PQC library on Cortex-M33. Outside the MCU space, some hyperscalers are pursuing (and in some cases obtaining) these validations: Apple, Amazon, Google, and more. Yet, we also hear peers projecting hardware lifetimes that don’t match the migration tempo. Meta just lengthened its server depreciation schedules (cutting 2025 depreciation by about $2.9B). While investors debate whether AI accelerators truly have 5.5-year useful lives when leading-edge compute turns over in 2–3 years, many overlook the PQC roadmap: these systems will be effectively out-of-policy (and thus completely irrelevant) by 2031 - not due to demand or performance, but by the NSA. Back to MCUs - here’s where key competitors stand on PQC (algorithm-level) validations: 1. NXP Semiconductors: NXP scientists co-authored CRYSTALS-Kyber (now ML-KEM), but there’s no NXP-vendor-labeled ML-KEM/ML-DSA CAVP validation listed. In other words, no PQC certification. 2. Infineon Technologies: Visibly active in quantum/security (e.g., Quantinuum collaboration), but again, no PQC certification. 3. Renesas Electronics: No PQC certification; they collaborate with wolfSSL, whose module has relevant certifications. 4. Microchip Technology: No PQC certification. 5. Texas Instruments: No PQC certification. 6. onsemi (ON Semiconductor): No PQC certification. Bottom line: STM’s named, vendor-labeled CAVP validation (A7125) for ML-KEM + ML-DSA on STM32/Cortex-M33 lands exactly as U.S. policy pushes PQC-capable gear into government systems by 2030–2031, with broader migration working toward 2035. That’s a competitive advantage in the MCU space worth highlighting, and I don't see almost anyone talking about it. And yes, similar PQC roadmaps are emerging globally: the EU published a coordinated PQC implementation roadmap in June 2025, and Canada set milestones to finish high-priority migrations by 2031 and all remaining systems by 2035. China is also pursuing a PQC migration plan.

Nvidia has been pretty good at keeping CUDA dominant. Huang has been hyping up ML and GPGPU programming for a long time - pretty much since CUDA launched in 2007. Before that, Nvidia always talked up its software in terms of the speed and quality of driver updates for specific software, along with utilities for customizing display settings and, later, game-centric utilities. Nvidia's software moat began before 2007, because Nvidia was the hardware company that, since the early 2000s, had been talking up how software was the future. By the time AMD realized this, they were circling the drain approaching Bulldozer, and Intel was being the Intel we've known for the last decade. In those days AMD and Intel had software teams, but nothing compared to Nvidia's. By the time they started to see the software advantage Nvidia had and how it was benefitting them, Intel and AMD weren't in a position to heavily expand software investment - or at least it was hard to justify.

AMD was in survival mode for pretty much the whole of 2010-2020. Now they have a strong, high-margin data center business, especially with the AI boom. Intel has had manufacturing problems for the past decade, the failure to break into mobile, the failure of the cellular radio business, whatever happened with 3D XPoint and Altera, and whatever other businesses they've sold or spun out over the past decade. What I'm getting at is that all the companies that could reasonably have had the domain knowledge to compete in building out a GPGPU API - and get it supported in nearly all the relevant major open-source projects and commercial software - were in financial straits and/or had their attention all over the place. Unless you were a SPARC, MIPS, Power, or Imagination Technologies enthusiast, you didn't see those companies' backers as ready to compete with the breadth of Intel, Nvidia, and AMD. Qualcomm was growing but not there yet. At the time, the future super-major could have been Qualcomm, or it could have been Texas Instruments.

So up to the AI boom, there wasn't really a stably well-funded CUDA alternative. It was all side projects for these hardware companies. Then the AI boom hit, and every non-Nvidia hardware company went, "crap, CUDA - scramble whatever resources we have to show off something and start hiring more software developers." It's a multi-year process to find and hire developers to build a good CUDA equivalent, and it takes a lot of time to push support into software projects that are all CUDA. It's not that it's impossible. It's just that the major investment didn't come, and in AMD's case wasn't really possible until the 2020s, after the public ChatGPT release. So it will be years until a cross-platform solution can become ubiquitous. But this decade, compared to the 2010s, hardware companies are spending way more money than before, and the market is proven. It's no longer Nvidia constantly trying to hype up the future of ML and self-driving cars. DLSS was major for gamers; ChatGPT and Stable Diffusion were what changed the whole venture capital market.

Everyone else here is being super dismissive - people seem triggered at the thought of CUDA facing competition. Then again, this is the investing sub, so people treat it like team sports and simultaneously hold that the past doesn't predict the future and that the present will be the same in the future. CUDA is old, but the AI mania is young. There are no AI products so superior and mature relative to upstarts that using a non-CUDA API is a non-starter. Most of this AI stuff is regularly built new from scratch: no technical debt, very few profitable products, and everyone still searching for what is monetizable and how monetizable it is.

Also, on the dismissals of the Linux vs Windows comparison: in the late 90s and early 2000s, Steve Ballmer was railing against Linux and trying to motivate investors and engineers to combat the "Linux virus" coming for Microsoft's business. Linux took the server and is now growing substantially in desktop usage in Europe; it was already pretty prominent in India. The US government has support contracts with Red Hat, and on-network you'll have managed Windows, RHEL, and Ubuntu machines. ChromeOS and iPads made major inroads into education. For investing it doesn't matter as much, because Microsoft diversified: Office, Azure, now Copilot, GitHub, LinkedIn, they bought Activision and became a huge publisher of existing live-service games, etc. Windows market share on the server tanked a long time ago. Windows market share has been dropping since around 2005, primarily to OS X/macOS, but if you include mobile it was a bloodbath over the last 15 years - iOS and Android. If Microsoft hadn't diversified and had stayed a Windows company, they'd be a much smaller company than they are today, because the Windows moat was penetrated multiple times in the last 25 years.

Mentions:#ML#AMD#API

CUDA is not a monopoly if you know anything about ML.

Mentions:#ML

I'm not so sure. Objectively speaking, we've come a long way with LLM chat and image generation in a short time. They are disrupting career fields and artists. Eventually the VC funding runs out, but all the biggest players in the space are either subsidiaries of giants that can take the hit, or else aligned to roll into them eventually anyway. And big players like MS/Meta/Alphabet are much more broadly invested in machine learning in general. If the LLM approach falls apart, it will still be cleverly written off and hidden behind new shiny things. If Meta can spend billions every quarter on the Metaverse and not pop a bubble, the current AI boom isn't going to hard crash anything. LLM is a buzzword, one that major players keep using to describe all sorts of ML and non-ML products. It can't fail if the public doesn't even recognize what it actually is.

Mentions:#VC#MS#ML

>if agents can replicate individuals with a good degree of consistency

Can they do that without any supervision? If not, it's "just" a technology / tool (a very handy one, we can agree on that). Let's just clarify that I truly believe in ML and that it already has a solid position in today's SWE. What I'm not buying is the hype around LLMs and the AGI-chasing bullshit.

Mentions:#ML#AGI

Remember AARON? Telling people we were screwing around with AI in the 80s has convinced zero people that this is a buzzword bubble. I was writing ML code for my MUD in '89.

Mentions:#ML#MUD

I mean, to be fair, at those asset levels, some BofA credit cards get absolutely god tier, but that's the only reason anyone should do business with ML

Mentions:#ML

Its answer makes sense once you realize the vast majority of ML engineers are Asian immigrants.

Mentions:#ML

AFRM has been undervalued for the last few years and has started to maintain a floor of 70+. They just launched in Europe and have a ton of stuff in the works that doesn't get talked about. People also default to calling them a doomed meme company because they think they're just a BNPL/layaway product. The CEO is former PayPal mafia. They have a 200+ person ML/AI engineering org, 1,000+ engineers company-wide, and hundreds of open roles on their careers page - do with that what you will.

Mentions:#AFRM#ML

I'm dead serious when I tell you Nvidia is going to see headwinds, possibly massive ones, in the mid-term future. Running AI on GPUs is generation-1 shit: right place, right time, right architecture. But there's enough money and energy in the field that someone - possibly multiple someones - is going to design far more efficient hardware for running ML/deep learning algorithms.

Mentions:#ML

Please see the subreddit resources. This question is asked daily. There is also a wiki in r/personalfinance that covers your question - [https://www.reddit.com/r/personalfinance/wiki/windfall/](https://www.reddit.com/r/personalfinance/wiki/windfall/) and they also have lots of information in that subreddit for your situation. From an investing perspective - the usual answer is the same. Talk with your ML adviser. Or talk with a different investment adviser for a second or third opinion. Investing options depend heavily on your own risk tolerance and financial situation. There is no one size-fits-all answer. If capital preservation and lower risk tolerance is a goal - then the usual answer is to move into an income producing portfolio. There are lots of ways to do this - including self-managed options using bond fund ladders of varying credit quality and duration. If you have experience and depending on the makeup of the portfolio - you can implement option collars, risk reversals, etc. to generate income. If you want to slowly diversify the portfolio - you can slowly ladder collars or covered calls to reduce the delta of the portfolio. You can use a PAL against the portfolio, etc. etc.

Mentions:#ML#PAL
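
Since collars and covered calls come up above as ways to reduce a concentrated portfolio's delta, here is a minimal sketch of what a collar is worth at expiration. The share count, strikes, and premiums are hypothetical numbers chosen for illustration, not advice.

    # Collar value at expiration - a sketch with made-up strikes and premiums:
    # long 100 shares, long a protective put, short a covered call.
    def collar_value(price_at_expiry, shares=100, put_strike=90.0, call_strike=110.0,
                     put_premium=2.0, call_premium=2.5):
        stock = shares * price_at_expiry
        put_payoff = shares * max(put_strike - price_at_expiry, 0.0)      # protection kicks in below the put strike
        call_payoff = -shares * max(price_at_expiry - call_strike, 0.0)   # upside surrendered above the call strike
        net_premium = shares * (call_premium - put_premium)               # credit (or debit) from selling the call, buying the put
        return stock + put_payoff + call_payoff + net_premium

    for px in (70, 90, 100, 110, 130):
        print(px, collar_value(px))   # floored near the put strike, capped near the call strike

The takeaway matches the advice above: the position's value is boxed in between the two strikes, which is how the delta of a concentrated position gets dialed down without selling the shares outright.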

And ML is owned by Bank of America. Which used to be called Nations Bank. When Nations Bank bought Bank of America, they decided to name their company Bank of America.

Mentions:#ML

Yes, they are the same. AI is composed of some subset of ML algorithms.

Mentions:#ML

I have been an engineer - I have been an engineer that has helped design, build, implement, and license computer vision/ML/AI solutions to businesses, all built for purpose, years before ChatGPT was readily available to the masses. I've been doxxed and created a new Reddit account because someone found white papers I didn't know I had been credited on that were public. I just don't think people are aware how much of this already exists, and has existed, for literally years. AI built processes still need to go through dev-test-prod iterations. AI built inspection solutions still need to be validated. Engineering and AI are not mutually exclusive. AI is just another tool. It's like arguing if saws or hammers are better for building houses.

Mentions:#ML

You are thinking of Tesla, and confusing hardware, software, and AI as concepts. Tesla doesn't have LiDAR. But like any other self-driving company, Tesla uses AI. Artificial intelligence is essentially the ability to analyze data and make autonomous decisions like a human would; there's no more specific definition. Generative AI (chat bots, for example) creates stuff. Agentic AI (self-driving, for example) focuses on taking the agency to make decisions about something and executing them. Technology-wise, both kinds of model require an enormous number of GPUs (which NVDA leads the world in making). There's another important form of AI that has more recently been broken out from these models - machine learning. ML is where a system uses a large set of data to build algorithms that make predictions. LiDAR vs cameras vs whatever other sensors has to do with the data that feeds the intelligence of an autonomous vehicle. A fully self-driving vehicle such as Waymo makes use of all the technologies I listed other than generative AI (unless they use it for support or user interactions).

Mentions:#NVDA#ML
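
To make the "learn from data, then predict" definition above concrete, here is a toy sketch using scikit-learn (assumed installed); the numbers are made up and stand in for any sensor-to-decision mapping.

    # Toy "train on past data, predict on new data" loop - illustrative only.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[5.0], [10.0], [20.0], [40.0]])   # feature: e.g. distance to an obstacle (m)
    y = np.array([5.0, 15.0, 30.0, 60.0])           # target: e.g. a safe speed (km/h)

    model = LinearRegression().fit(X, y)            # "learning": fit parameters from the data
    print(model.predict(np.array([[25.0]])))        # prediction for an input the model never saw

That fit/predict split is the core ML loop; deep nets, LiDAR point clouds, and camera frames change the model and the data, not the loop.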

The entirety of Google's ML workload, specifically. It's not the entire compute workload.

Mentions:#ML

How do you structure that project? Did you have people in the AI/ML field beforehand, or was it all new from scratch?

Mentions:#ML

GPU vs TPU long-term: GPUs cost more per hour for the same raw compute, but they win on flexibility, portability, and developer ramp-up time. Almost every ML engineer knows CUDA, and your code will run across AWS, Azure, GCP, or on-prem with minimal changes. TPUs can be cheaper and faster for big deep learning workloads - if you’re staying in Google Cloud and mostly using TensorFlow - but you’re locking into GCP’s ecosystem. Moving away later means re-optimizing your whole stack for GPUs, which can eat any early cost savings. In short: TPUs for speed if you’re all-in on GCP, GPUs if you want portability and future-proofing.

Mentions:#ML
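
The portability point above can be shown in a few lines. This is a sketch, assuming TensorFlow 2.x: the same Keras model runs under a TPU strategy when a Cloud TPU runtime is available and falls back to GPUs/CPU elsewhere; the model itself is a placeholder.

    # Pick an accelerator strategy at startup - TPU inside GCP TPU runtimes,
    # otherwise whatever GPUs (or the CPU) are visible. Sketch only.
    import tensorflow as tf

    def pick_strategy():
        try:
            resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # resolves only in TPU environments
            tf.config.experimental_connect_to_cluster(resolver)
            tf.tpu.experimental.initialize_tpu_system(resolver)
            return tf.distribute.TPUStrategy(resolver)
        except (ValueError, tf.errors.NotFoundError):
            return tf.distribute.MirroredStrategy()  # GPU/CPU fallback

    strategy = pick_strategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), tf.keras.layers.Dense(1)])
        model.compile(optimizer="adam", loss="mse")

The model code stays the same either way; what changes is everything around it - data pipelines, kernel-level tuning, and pricing - which is where the lock-in described above actually bites.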
r/stocksSee Comment

You are literally asking Reddit for advice on a stock pick? Now think of that from the perspective of a merchant or someone selling a product: how useful would those comments be for selling a service or product to a specific audience? Pair that with a growing user base from top Google search placement and international expansion using ML for translation, and you have a one-stop data powerhouse for advertisers. The irony of your post is dumbfounding.

Mentions:#ML

You could go QTUM if you want some exposure with risk mitigation. It has RGTI and a collection of ML, AI and quantum companies equally weighted more or less.

Mentions:#QTUM#RGTI#ML
r/stocksSee Comment

I work in AI/ML as an applied scientist as well. Nothing that you said changes anything I said. TPU was developed for their internal workloads. It's co-designed with their models for efficiency. It's all about driving the cost per token down.

Mentions:#ML

Don't conflate LLMs/GenAI with regular ML/AI. We had the latter for years, and it ran on commodity hardware just fine. GenAI is just power-hungry due to its architecture. I definitely see a "DeepSeek" moment coming where some player comes up with an approach that's closer to traditional ML yet can do what LLMs can do.

Mentions:#ML
r/stocksSee Comment

I'm thinking Q3+ is when the damage to the US economy will start to hit. So I sold MSFT, AMD, and some other tech plays, and shifted to 1 share of SEB (Seaboard). Seaboard is a shipping and agriculture conglomerate; I see it as a transitional stock for moving from the bull tech run to a potential recession. They focus on pork. I'm noticing pork sales are going up as beef becomes too expensive. (Even Walmart is starting to sell a ground beef/pork mix to lower costs for folks who want beef but can't afford 100% beef prices. They're literally cutting the beef with pork to dilute it down like it's a drug.) SEB is like $3,500/share. I see the major tech stocks starting to cusp a bit, so I decided to pull the trigger and make the shift.

I'm still holding NVDA. I think their earnings are going to be fantastic, but, once again, the market will find a reason to pull back on them. That said, I bought ALAB before earnings and got a 25% return in 2 days. Astera Labs is working on AI chip designs and what-not. I think specialty work in the AI, ML, and data science world still has untapped potential, and ALAB going up reflects that. I just think AMD and NVDA (and MSFT) need time to cool.

Other plays I made as I switch to more recession-oriented stocks:

1) I bought online gambling stocks: DraftKings, Sports Trader, Flutter, Rush Interactive. Vegas, casinos, and destination gambling are dying as tourism dries up and folks tighten the belt, but people will still find ways to gamble, especially as the economy tanks. There will always be folks wanting to find a score to feel like their luck is looking up, even if it's a small score. Online gambling is accessible, easy to buy into, etc.

2) I'll probably buy auto parts stocks again in a month or two, maybe in Q4. I had O'Reilly early this year, and it went up when you-know-who started doing tariffs and there was a huge question of whether the market was going to insta-crash. But then it stagnated as folks got used to him messing around. Now we see the long-term damage to the economy coming down the pipe, and the long-term damage he's done to US automakers. Folks in hard times don't buy cars, they fix them. Auto parts will go up as the economy goes down. I'll probably do AutoZone instead of O'Reilly, because AutoZone seems to be doing better. I sold O'Reilly after an earnings report for some ROI, but it's just not doing as well as AutoZone.

A longer-term play I made was MBOT and Intuitive Surgical, which are both working on micro-surgery bots. Basically they go on the end of the lines they put in your veins, and the surgeon can remote-control the bot via WiFi. The surgeon doesn't even have to be in the same room or country; as long as they can remote in, they can control the bot. I think we're seeing major turning points with wireless surgery bots, and these things will keep expanding as the 21st century goes on. The stocks are currently down, because everything healthcare that isn't tech seems to be getting dumped on. (E.g., HIT, Healthcare in Tech, is a tech stock that is doing gangbusters right now. Vanguard wouldn't let me invest in them, so I missed the bus. But biotech, robo-tech, and other healthcare stocks seem to be getting dumped on, perhaps because of the changes to Medicare/Medicaid and the rising cost of healthcare.)

Nah, I'm safe - I work in ML. Others won't be as safe.

Mentions:#ML

> A lot of my colleagues (I work as ML engineer) have mentioned that Gemini is the best for coding. Get them to try Claude Code. The other models don't come close IMO.

Mentions:#ML

It depends. A lot of my colleagues (I work as ML engineer) have mentioned that Gemini is the best for coding. Some of my friends who work in finance (non-coding) said they liked Claude the best cause it gets tone right usually. I personally just stick to ChatGPT cause I'm used to it, but I don't actually try to get it to think too hard, more of just using it instead of googling stuff.

Mentions:#ML

I may be biased cause I have a few Google calls (and some stock), but Google is terribly underappreciated by the AI hype cycle / bubble. They literally invented most of the technology that is being used by OpenAI etc -- they were the number 1 publisher at most ML conferences for a while. They are also the only company that isn't super reliant on NVDA's GPUs because they invented their own version, the TPU, in like 2018, and at the time they were said to be way faster/more efficient than any GPU (not sure if that's still true now since NVDA has pivoted more to AI since then). And on top of that Google is one of the more tariff-resilient companies there is, unlike something like Apple or Amazon.

Mentions:#ML#NVDA