Reddit Posts
[Discussion] How will AI and Large Language Models affect retail trading and investing?
[Discussion] How will AI and Large Language Models Impact Trading and Investing?
Neural Network Asset Pricing?
$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
Moving from ML to Robinhood. Mutual funds vs ETFs?
Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
I'm YOLOing into MSFT. Here's my DD that convinced me
I created a free GPT trained on 50+ books on investing, anyone want to try it out?
Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
Option Chain REST APIs w/ Greeks and Beta Weighting
Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
AI/Automation-run trading strategies. Does anyone else use AI in their investing processes?(Research, DD, automated investing, etc)
🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
Do you believe in Nvidia in the long term?
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
Which investment profession will be replaced by AI or ML technology?
WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch it jump to 300).
Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
Training ML models until low error rates are achieved requires billions of $ invested
🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
AI/ML Quadrant Map from Q3…. PLTR is just getting started
$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
VetComm Accelerates Affiliate Program Growth with Two New Partnerships
NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
Netramark (AiAi : CSE) $AINMF
Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
How would you trade when market sentiments conflict with technical analysis?
Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
How are you integrating machine learning algorithms into their trading?
Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
Why I believe BBBY does not have the Juice to go to the Moon at the moment.
Meme Investment ChatBot - (For humor purposes only)
WiMi Build A New Enterprise Data Management System Through WBM-SME System
Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
The Squeeze King - I built the ultimate squeeze tool.
$HLBZ CEO is quite active now on twitter
Don't sleep on chatGPT (written by chatGPT)
DarkVol - A poor man’s hedge fund.
COIN is still at risk of a huge drop given its revenue makeup
$589k gains in 2022. Tickers and screenshots inside.
The Layout Of WiMi Holographic Sensors
infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires. Strategic new hires to support and accelerate speed to market of AppTech’s product platform, Commerse.
$APCX Huge developments of late as it makes its way towards $1
Robinhood is a good exchange all around.
Mentions
Existing ML toolchains are tightly tied to CUDA which only runs on NVIDIA GPUs. Silicon is only one part of the puzzle, the other is software support. NVIDIA being banned in China creates two opportunities: one for silicon lithography and one for ML software. Lithography is difficult because of export controls on EUV machinery, but China has an extensive pool of software engineering talent.
Oh don't get me wrong, my workplace is going all in on AI/ML for our work. I just don't see it getting beyond a basic assistant, in my domain at least, as we're taking theoretical knowledge and applying it physically. Data is sensitive and only in-house, while discovery projects last a few years at most before we're "on to the next one". Any AI suggestions would need to be empirically confirmed, so at best AI would completely take over the hypothesis stage of the work (where it currently seems to struggle the most). In the short term this obsessive data capture has actually created additional work for us and decreased productivity lol.
https://www.globenewswire.com/news-release/2025/09/16/3150782/0/en/AmpliTech-Group-Advances-AI-ML-Integration-to-Enhance-ORAN-5G-Private-Networks-and-WiFi-6-7-Solutions.html
> Nvidia is trading at 50x earnings, even though we can clearly see big tech pivoting away from Nvidia into custom chips from Broadcom. Nvidia was a temporary solution for big tech to quickly pivot and scale quickly, not a company they will depend on long term.

Where did you get this info? This is not true at all... inference and training are not the same thing. There is almost no competition to CUDA. And training is not a one-time thing.

> Broadcom, while they have a lot of potential, trades at 92x earnings.

Check the forward P/E. As I am saying, their earnings are keeping up with the valuations, but they are of course slower. Why? Check the money supply again. Where do you think the money goes?

> Oracle trades at 67x earnings based on speculation of a $300 Billion contract that depends on their customer's ability to raise capital. Additionally, no guarantee that said contract provides high margins.

In a bull market there are always speculations of that kind. It is normal.

> Workday trades at a P/E of 87, ServiceNow at a P/E of 117, Applovin a P/E of 83. The products these companies offer are really nothing special.

As I said, highly valued. If the earnings don't keep up, they will come down, for sure.

> AMD trades at a P/E of 91 even though they are struggling to compete with Nvidia.

Again, check the forward P/E (35-ish). And that was one-time spending or something, I don't remember exactly what it was.

> IBM, known as a dinosaur tech stock, trades at a P/E of 41

Obviously you are stuck in history. They don't sell computers anymore; they sell IP and are involved in quantum computers, AI and cloud computing. They are keeping up.

> A normal P/E is in the range of 10-20, historically. It is worth noting that P/E can be converted to earnings yield. For example, a 50 PE is an earnings yield of 2%.

I doubt you will ever see a 15 P/E again for any big tech in the near future under normal circumstances. Again: money supply + inflation + the number of investors via HOOD-like apps etc. -> 25 P/E is the new 15 P/E.

> When you factor in the costs, AI has made companies LESS efficient. The cost of paying for huge compensation packages to top researchers, huge capex costs for datacenters, greatly exceed the potential labor costs savings from automation.

This is the investment phase. It is totally normal. Everything is new, and when something is new it is always inefficient. What I was talking about is that they are more efficient in the sense that they need less human power for the same job, so it is working. More than 30% of Google's code is written by AI today, according to them. And it is just the beginning.

> This is based on the flawed assumption that it's necessary to have hundreds of thousands of GPUs to train a competitive ML model. Plenty of researchers have found you can train models for much cheaper. Look at what is coming out of China, where they are forced to make do with less due to export bans. They are building models that trade blows with American products with less than 1% of the compute capacity.

I would be very skeptical about the info coming from China. They were saying DeepSeek cost only $5M or something, and that they did not need many Nvidia GPUs. It was of course bullshit. Believe me, they are working with Nvidia GPUs :) They have no other choice.
That's been priced in for the past two years. Why do you think Google was trading at a 15x P/E ratio at one point? No matter what happens, whether AI is hot trash or not, Google will be able to dominate the market. They already have the framework to deploy their AI, unlike the others. People are not smart enough to realize that these search predictions and YouTube recommendations have always been using ML/AI.
> Again, I don't understand what you're saying. If you can create a model for $50 using distillation that can outperform flagship reasoning models, why wouldn't Meta or Google do it or any non-tech company using AI?

You are misunderstanding the study. Distillation does not outperform leading models. It provides a model that is very close to, but not quite as good as, the leading models. The concern here is whether or not $50 billion in capex is worth it to build a model that is just a little bit better. Cheap models will eat into the margins of flagship models.

> Isn't that an argument in favor of the fact that AI costs will keep getting lower, so companies utilizing AI will not have the problem of mounting costs that you claimed they would have

AI costs have only been increasing, not decreasing, because of fierce competition. Meta, Google, OpenAI, Anthropic, xAI, and others all want to have the best LLM, best image/video engine, etc. This means throwing tens of billions of dollars at hardware both to extract an extra 2-3% on benchmark tests and to provide services at a loss to users to gain market share.

> Just because you might have experience working with ML models doesn't make you an expert. Nothing you wrote suggests any level of technical competency. Also I mean it's kind of hilarious if you think VMWare costs nothing for Broadcom to sell or maintain or develop. They jacked up the prices because there are no better alternatives and the market had to accept it.

There are alternatives to VMware. I've worked at multiple places that have migrated from VMware to Proxmox. And I guarantee you Broadcom's costs of maintaining VMware have not gone up 500-1000% in 10 years.

> Is the same true for AI models?

Right now, there is a lot of competition in the LLM space. So much so that there are dedicated services for routing LLM requests to the cheapest LLM. Currently, fierce competition favors customers, and AI providers have little ability to raise the prices they charge customers.
Again, I don't understand what you're saying. If you can create a model for $50 using distillation that can outperform flagship reasoning models, why wouldn't Meta or Google do it, or any non-tech company using AI? Isn't that an argument in favor of the fact that AI costs will keep getting lower, so companies utilizing AI will not have the problem of mounting costs that you claimed they would have? "Because that's what tech companies do" is not really a reasonable answer. If a model costs $50 to create and then you can run it on your local infra, why is a 500% price hike on OpenAI's API a factor here? Why isn't the world running that $50 model and building massive datacenters trying to make bigger models? Again, you can say "because that's what tech companies do", or the more likely explanation is that $50 models don't really work and have massive limitations. Just because you might have experience working with ML models doesn't make you an expert. Nothing you wrote suggests any level of technical competency. Also, I mean, it's kind of hilarious if you think VMware costs nothing for Broadcom to sell or maintain or develop. They jacked up the prices because there are no better alternatives and the market had to accept it. Is the same true for AI models?
us-west-2 is basically at full capacity for large GPU compute instances, we have to start deploying to other regions. I'm not sure there even is capacity available in other regions. To dumb this down, AI/ML workloads have consumed all available capacity in an AWS region.
> Can you provide any concrete evidence of this? If you're talking about Deepseek then it's not valid as well. They do not use "less than 1% of the compute capacity" https://www.reddit.com/r/LinusTechTips/comments/1ija6iu/deepseek_actually_cost_16_billion_usd_has_50k_gpus/

Read the source: they pulled the 50k GPUs figure out of nowhere, basically claiming that because other companies needed that many GPUs to achieve the performance, DeepSeek must have had them as well. Another flaw is that the article claims you need to count the lifetime cost of all the GPUs, even if they are only used for a few hours. That's not how tech companies operate; you only pay the cloud rental costs, not the full cost of all the datacenter infrastructure. Read the actual paper by DeepSeek; they outline exactly how they saved money. You can actually replicate it yourself at a smaller scale. Researchers from Stanford and the University of Washington have successfully replicated what DeepSeek did, training a reasoning model for under $50 using distillation. This is just one cost savings method.

> And I don't get your core argument here regarding API prices? Why will they go up 500% over a decade if at the same time you're saying models are getting more efficient citing the "1% of the compute costs" thing? Wouldn't the costs and therefore the price for models keep getting lower since it requires less resources to run them? If the costs are hiked 500% over a decade, at some point, it will make sense to run their models locally on their own infra (which should be much cheaper since models are getting more efficient leading to lower costs for computing resources)

Because that's what tech companies do. Software like VMware costs almost nothing for Broadcom to sell, but they have jacked up the price 500-1000% over a decade. AI is especially likely to follow this path because it is highly unprofitable currently, and only priced cheap to drive adoption.

> I get the thing about tech being overvalued but your post shows a fundamental lack of understanding of how the AI market/models work.

I've literally built ML models myself.
> This is based on the flawed assumption that it's necessary to have hundreds of thousands of GPUs to train a competitive ML model. Plenty of researchers have found you can train models for much cheaper. Look at what is coming out of China, where they are forced to make do with less due to export bans. They are building models that trade blows with American products with less than 1% of the compute capacity.

Can you provide any concrete evidence of this? If you're talking about DeepSeek then it's not valid either. They do not use "less than 1% of the compute capacity": https://www.reddit.com/r/LinusTechTips/comments/1ija6iu/deepseek_actually_cost_16_billion_usd_has_50k_gpus/

And I don't get your core argument here regarding API prices. Why will they go up 500% over a decade if at the same time you're saying models are getting more efficient, citing the "1% of the compute costs" thing? Wouldn't the costs, and therefore the price for models, keep getting lower since it requires fewer resources to run them? If the prices are hiked 500% over a decade, at some point it will make sense for companies to run the models locally on their own infra (which should be much cheaper since models are getting more efficient, leading to lower costs for computing resources).

I get the thing about tech being overvalued, but your post shows a fundamental lack of understanding of how the AI market/models work.
> I totally agree with you. Stocks are pricey BUT, currently there is no bubble. (other than 2-3 meme stocks.)

It's not just meme stocks like Tesla and Palantir with P/Es of 230 and 570. Even quality companies are trading at valuations that greatly exceed their intrinsic value:

- Nvidia is trading at 50x earnings, even though we can clearly see big tech pivoting away from Nvidia into custom chips from Broadcom. Nvidia was a temporary solution for big tech to pivot and scale quickly, not a company they will depend on long term.
- Broadcom, while they have a lot of potential, trades at 92x earnings.
- Oracle trades at 67x earnings based on speculation about a $300 billion contract that depends on their customer's ability to raise capital. Additionally, there is no guarantee that said contract provides high margins.
- Workday trades at a P/E of 87, ServiceNow at a P/E of 117, AppLovin at a P/E of 83. The products these companies offer are really nothing special.
- IBM, known as a dinosaur tech stock, trades at a P/E of 41.
- MSFT trades at 37x earnings. They have given up a lot of their AI growth opportunities by giving up exclusivity agreements with OpenAI and agreeing to a smaller rev share.

A normal P/E is in the range of 10-20, historically. It is worth noting that P/E can be converted to earnings yield. For example, a 50 P/E is an earnings yield of 2%. You can get 4.7% on US treasuries risk free. Real estate offers cap rates of 5-7%, with long term capital appreciation as well.

**The biggest problem with this is most of big tech is not positioned to produce the high level of earnings growth they have historically:**

- Tech historically traded at a P/E of 10-15. Therefore, they could buy back 7-8% of their shares each year, boosting EPS even without boosting profits. With a P/E of 50-100, that's only 1-2% growth.
- Historically, tech companies could scale and deploy their services to billions at very little cost, allowing for very high earnings growth without capital investment. However, in today's industry, AI is very expensive. It requires $50+ billion a year in capex to train models that are competitive, and inference for consumers operates at a loss.
- Big tech has laid off many of their top engineers in favor of stock buybacks and AI capex. As a result, there will be a lot less innovation at big tech, which will become evident over the next few years.

> On the other side, earnings and stock prices are aligned in big tech, they are much more efficient than ever thanks to AI. And they are monetizing it well.

When you factor in the costs, AI has made companies LESS efficient.

> There is not enough computation capacity for all those companies all around the world, not enough datacenters, not enough power to feed them. That's why I think it is just the beginning.

This is based on the flawed assumption that it's necessary to have hundreds of thousands of GPUs to train a competitive ML model. Plenty of researchers have found you can train models for much cheaper. Look at what is coming out of China, where they are forced to make do with less due to export bans. They are building models that trade blows with American products with less than 1% of the compute capacity.
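To make the earnings-yield and buyback arithmetic above concrete, here is a minimal Python sketch. It assumes, purely for illustration, that a company spends 100% of earnings on buybacks at a constant share price; the helper names are made up for this example.

```python
# Sketch of the P/E -> earnings yield -> buyback-driven EPS growth arithmetic.
# Assumption (illustrative only): all earnings go to buybacks at a constant share price.

def earnings_yield(pe: float) -> float:
    """Earnings yield is simply the inverse of the P/E ratio."""
    return 1.0 / pe

def buyback_eps_growth(pe: float) -> float:
    """Retiring a fraction 1/PE of shares per year grows EPS by (1/PE) / (1 - 1/PE),
    even with completely flat profits."""
    retired = 1.0 / pe
    return retired / (1.0 - retired)

for pe in (12, 20, 50, 100):
    print(f"P/E {pe:>3}: earnings yield {earnings_yield(pe):.1%}, "
          f"buyback-only EPS growth ~{buyback_eps_growth(pe):.1%}")

# P/E 12 gives an ~8.3% earnings yield and ~9.1% EPS growth from buybacks alone,
# while P/E 50 gives a 2.0% yield and only ~2% growth, matching the 1-2% figure above.
```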
Their AWS Profits will also go down, because a lot of people are using compute for training AI/ML
The key point is that they’ve been using their AI and ML primarily to drive relevance on the search side. They’re still figuring out how to effectively monetize without prematurely cannibalizing their core business (search/ads), but I think they are finally turning a corner on that. They’re actually a leader in AI and ML, they’re just not a leader in terms of productization and monetization in the current context.
ROI and the 80/20 rule are definitely applied everywhere, sure. But I'm more on the angle that there are a lot of companies, and a big chunk of that pie still has something to move to the cloud, and some have everything to move. And AI/ML can be applied to a lot of stuff, from fleet management to product marketing in your ecosystem. Some will want to start doing that, others will want to scale. Even running experiments to build models for some of those use cases can easily take $100k just in computing power; then take into consideration data, inference and operational upkeep costs, plus infrastructure costs and service help. Multiply that number per company by the 50k companies that are in the process of doing or scaling that up in the cloud, and you get to the point of needing new data centers. Maybe not a multiplication of data center floor space, but yeah. Though I very much agree with you, companies collect a lot of garbage. Let me add that there are a lot of fuckups too, and you end up using computing and data not for garbage, but by mistake.
Oracle has a strong core business of providing basic and somewhat essential data services to the government and businesses. That reliable revenue is not nothing, but their overly optimistic claims about AI/ML enabled earnings growth could be speculative. Compared to Softbank, Oracle is less diversified and doesn't act as a speculative investor to generate hype. So take that for what you will.
ML/AI... absolutely are not needed for, well, anything. Also it's all relative: compared to the capacity of a datacenter, AI training sets and model sizes are very small. Even the really big AI models are just barely in the TB range.
Eh, ML and AI do need a lot of data. It's also risk mitigation going from on-prem to cloud: you can diversify, most stuff is the CSP's responsibility, pay-as-you-go, etc. AI was the big push for a lot of companies to start moving to the cloud, migrations that take anywhere from 3-4 to 6-10 years. I don't think all the data centers being built now will be useless, but we may be going a bit overboard on it. In any case, Palantir and others will have interest in all that computing power just sitting there, regardless.
If your goal is strong defence against zero-day vulnerabilities and AI-agent threats (which often exploit unknown flaws, exploit chains, run time anomalies), then you likely need companies that offer threat detection, behaviour analysis / ML-driven security, endpoint protection, intrusion prevention systems (IPS), identity / privilege / agent identity management, etc.
Yeah, I have; it's not really clear. It uses a bunch of buzzwords that don't really mean much. I work with data in healthcare, in very large data sets, and have some data certifications. I was looking for more specifics on what the business need would be and how it's monetized as a platform. Yes, you're replacing a reporting analyst, but a company like Walmart would need 500k in payroll to support this work. Unless they didn't have any prior reporting or SQL-based data collection for all their stuff. But even that isn't anything special about Palantir. The way they aggregate data has already been done. It's their AI analysis and ML that's supposed to do *something*; I just never get a clear answer on what that is.
I was reading WEF forum plans and they had Oracle slated to do all this AI/ML stuff that was completely outside its wheelhouse.
You forgot the law of diminishing returns. It was easy to get from 50 to 80, but from 80 to 90 it will become exponentially harder; anyone in the ML/AI field knows this. There is a limit to how much you can scale existing models like LLMs before they plateau. Like, GPT-5 has pretty much plateaued. From GPT-1 to GPT-4 it was pretty much a larger and larger model and more and more data. Now that it is trained on the entire internet's data, what's next?
Care to elaborate? I'm not a serious sports bettor, just a bit of dabbling, but in my limited experience it appears prediction markets typically offer a better price than most ML (moneyline) bets and more flexibility, given you can close your bet early for a smaller loss. Also, one could argue that professional sports bettors can get more of an edge in these.
LLMs are not edge detectors by themselves. They are not going to tell you whether SPX vol goes up or down next week, and that is where the "hallucination" problem kicks in. Where they do shine is in turning raw features into trade frameworks (e.g. "this skew/VRP setup implies you want to do a calendar vs diagonal"). I find them particularly useful for bridging ML outputs into natural language so you can actually use the signal. Think of them less as forecasters and more as front-ends for your models. The heavy lifting is still done by your vol surfaces, regressions, or whatever statistical framework you run. The LLM makes it usable at scale. The mistake people make is asking them to predict. The real win is asking them to translate what your actual models say into human-readable setups and risk calls. That is the difference between science fiction and a desk tool. Good luck.
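A rough illustration of that "front-end, not forecaster" split: the statistical model supplies every number and the LLM only narrates them. This is a hypothetical sketch; `VolSignal`, `build_prompt`, and `call_llm` are made-up names, and the actual chat-completion client is left as a stub.

```python
# Minimal sketch: the model produces the numbers, the LLM only translates them
# into a readable desk note. Names and fields here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VolSignal:
    ticker: str
    implied_vol: float   # annualized, e.g. 0.18 = 18%
    realized_vol: float  # trailing realized vol from your own model
    skew_25d: float      # 25-delta skew measure from your own surface

def build_prompt(sig: VolSignal) -> str:
    """Format the model's output as a prompt; the LLM never generates the numbers."""
    vrp = sig.implied_vol - sig.realized_vol
    return (
        f"Ticker: {sig.ticker}\n"
        f"Implied vol: {sig.implied_vol:.1%}, realized vol: {sig.realized_vol:.1%}, "
        f"VRP: {vrp:+.1%}, 25d skew: {sig.skew_25d:+.2f}\n"
        "Summarize what option structures this setup favors and the main risks. "
        "Do not invent any numbers not given above."
    )

def call_llm(prompt: str) -> str:
    """Stand-in for whatever chat-completion client you actually use."""
    raise NotImplementedError("plug in your LLM client here")

signal = VolSignal("SPY", implied_vol=0.17, realized_vol=0.12, skew_25d=-1.3)
# note = call_llm(build_prompt(signal))  # the LLM narrates; it does not forecast
print(build_prompt(signal))
```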
And where is this chip? Has it been tested already, and does it have a software stack? Examples like [ML.net](http://ML.net) ready to go with the Broadcom chip? All I see right now is hot air. A bet. Buying the rumour. Where are the facts about the chip? And who is going to make it when no fab on earth can do 2 nm yet? Luckily I have a friend who installs such machines; he works for NXP and he might know something. It's not yet production ready... from my friend I know that it could take weeks or months before even the first machine produces reasonable output. And he is The Man who makes semiconductor lithography work. When TSMC says it is soon production ready, this "soon" stretches between 3 and 6 months. https://www.embedded.com/tsmcs-2nm-technology-almost-ready-for-mass-production/
You're making this sound as if NVDA's case was something that the entire world overlooked and only Jensen/NVDA knew what was going to happen. Without OpenAI's work, none of this would have materialized to the extent that it has, because companies would have continued to do their own internal, closed-door, isolated kind of ML. And not many would have spent the kind of $$$ OpenAI did before making ChatGPT public and turning it into a household commodity like it is today. Keep in mind that even today, 40% of NVDA revenue comes from two customers (a major risk): https://fortune.com/2025/08/29/nvidia-revenue-anonymous-customers-chips-ai-china/
Yes but only because I'm a software engineer. Anyone who studied CS after around 2015-16 or so would've been able to notice Nvidia's monopoly. That's when Tensorflow was released and the AI/ML/DL/Kaggle space started to be noticeable to even your average software dev and CS student. I didn't expect it to balloon to bubble-level valuations though.
I can't take 100% credit, but in 2016/2017 AI & ML were being mentioned in my industry .. A LOT! Funny enough, I'm not in the Tech Industry. Add to that, GPUs were gaining popularity for Crypto Mining. Those two years I plowed the entirety of my Solo 401(k) contributions into FSELX (Fidelity Select Semiconductors Portfolio).. knowing that the Semiconductor Sector was going to boom. After all, everything has to run on some sort of hardware. It did & that position is up >600% since I bought in. Those same conversations are now about Quantum Computing. I am working to figure out how to responsibly get into that sector. I'm not the type to invest heavily into a single Equity, especially when it's speculative. MFs & ETFs are more my style for anything I plan to hold for more than 5 years.
The earliest winners in AI/ML were digital ad companies like Meta and Google. Ads were the first money maker for machine learning in the early to mid 2010s before generative AI blew up.
It's amusing how the masses think something like AI could just spring up out of nowhere overnight. DeepMind was formed 15 years ago. They made an AI chess engine that could easily outclass the best humans - not even a chance. OpenAI formed 10 years ago and got the first NVDA DGX cluster. It's just a matter of whether you had any relation to the field or any interest in it - it wasn't developed in any sort of secrecy. In the early to mid 2010s, ML had started gaining traction as a potential degree option. I started building my NVDA position back in the 2017/18 timeframe after reading an article where several Silicon Valley seed investors were interviewed and asked which publicly traded company they would invest in - the top choice by far was NVDA because of the future of AI.
The Google discovery (the attention mechanism) improved the effectiveness of ML models, particularly but not exclusively ones that process text, but it had nothing to do with parallelization.
Back in 2016 I put half of my 401k into nvidia and Tesla after I saw GPUs were going into every self driving car whether it was a Tesla or other competitors after seeing the demos at CES. Even though the tech at that time was based on CNNs and I was focused on cars, it seemed like a no brainer that GPU infrastructure was inevitable once breakthroughs were made in AI/ML.
I think I was lucky that ML became the target of my Autistic fixation in 2016 which led to me making some good predictions of the future, could have easily ended up as trains or something if I was exposed to the right media around that time LOL Side note I recall a thread on an AI tangential sub ~2019 where an investor was asking people involved in the field what stocks we recommend he should buy, I heavily shilled Nvidia along with an explanation of CUDA and how all researchers were reliant on it - I still wonder if he bought in early or not :)
I predicted Nvidia would boom in 2016 due to ML being heavily reliant on CUDA. I was learning some deep learning stuff at the time in Uni and recall speaking to my Father (a backend engineer) saying that I thought ANNs (specifically GANs at the time, this was pre "attention is all you need" and transformers) were going to revolutionize every single industry and that dGPUs and Nvidia were at the forefront. He disagreed and said that useful AI was 100s of years out (interestingly he still doesn't seem to understand that you don't need consciousness for intelligence).
chargers ML was the play
I am not saying that there are no uses for machine learning/AI; I am saying that most of the investment taking place is not sustainable or practical. Up until 2022, you had a lot of organic growth in the ML industry that was justified. Lots of very practical applications. The main problem is that in late 2022/early 2023, when ChatGPT and Stable Diffusion drew a lot of attention by impressing the world, it created a generative AI bubble, in which any company that could make a cool tech demo would get funding, and AI became a solution in search of a problem. There are so many useless startups out there in the generative AI space. Think video generation, AI generated games, etc. These are impressive tech demos, but their underlying architecture provides no feasible path to a valuable product. For example, there are some startups that have raised millions for AI generated games. But they do not store data variables in a logically consistent way, they do not map the world in 3D, they just render a sequence of 2D images with significant latency, in a way that responds to your inputs. They can throw all the compute they want at these models; they aren't going to produce a viable product consumers actually want to pay for, because their underlying basis is flawed. Even with faster hardware to resolve input lag and more memory to increase the context window, they cannot generate a solid experience relying on their architecture. Many top researchers have come out and said that generative AI has set back AI research several years due to all the bad investment being made.
> The earnings growth isn't coming from useful products for end users, they are coming from other tech companies that are buying their products in order to make speculative bets on GenAI products.

I'm baffled how confident you are in this statement, which is unlikely to be true. GenAI is not just an empty game; it has billions of daily users. There are lots of very clearly good user products from AI. Most of the tech companies will use these chips regardless of whether it's GenAI or more traditional AI/ML.

https://cloud.google.com/transform/101-real-world-generative-ai-use-cases-from-industry-leaders
https://indatalabs.com/blog/companies-using-generative-ai
https://www.innovationleader.com/topics/articles-and-content-by-topic/scouting-trends-and-tech/the-top-10-biggest-us-companies-ai-gen-ai-use-cases/

People that don't understand applications of AI will be left behind.
I guarantee Klarna has a machine learning division in which they have their own scores of probability of payout. They're burning money to see which factors are the largest. Can't have a good ML model without sacrificing to see who will actually fuck them over and how they behave.
Protein engineers will be very happy when you tell them that they will be just as relevant in 5 years, even though ML can create novel proteins that perform better than the ones they design, in 1/100th the time and for less money. Protein engineering uses LLMs in the same manner as anything else; the language in this case is protein sequences.
The bubble we're talking about is LLMs, the idea of "AI as a brain." Using ML/big data isn't a bubble, but it also doesn't require the massive capital expenditures that companies are doing right now. And ML is allowing you to do things you could not do before, not replacing you, as a researcher.
I think Google is one of the best plays for the next 10 yrs. There's still so much opportunity left for real value to be derived from the useful applications of AI/ML (not just chat bots and shit, but semantic search applications / vector embeddings / things like that) and it's all going to run on AWS, GCP, and Azure - and frankly everyone in the data community is leaning towards AWS and GCP for where to host this stuff bc Azure is just a clusterfuck of cloud service offerings.
Considering ML scientists inside of Meta report constant shortages of compute time, I really don't see the demand drying up in the next two or three years. For fucks sake, we just used machine learning to solve every single protein folding problem possible. We went from massive Folding@home distributed compute networks solving maybe a few hundred protein folding problems, to ML solving every protein folding problem known to man. Personally I think the fact that people think the demand for these chips is at risk says more about the lack of education of the general public than anything else. There will be ebbs and flows, but I think we are in a completely new era of computers now, and have been ever since we achieved EUV en masse for acceptable prices. Well, everyone except Intel, that is. Twenty years from now is going to be absolutely nuts thanks to what we are just now unlocking with modern machine learning.
Thanks for your reply. So to answer your questions:

a) Definitely want to be able to "enjoy" 80% of $2M+, vs. not enjoy $1M+.

b) I can definitely use my margin for PUTS. I use IBKR for their generous margin policies and rates (I transferred everything from ML and MS/E-Trade when I decided to go margin and saw their rates). I'll try a few CSPs, but need to find some stocks I'm familiar with, at prices I'd be comfortable owning at the PUT strike (minus the premium).

c) As for why I'm holding onto COOP (and RKT, for now, once it converts in Q4), it is not "just" to save on the tax hit, but TO MAKE MONEY. The "market" has definitely NOT priced in the upside of the COOP/RKT deal in the next year or so, especially with rate cuts coming and a lack of full clarity from COOP/RKT on their combined EPS/EBITDA for '26. See: https://www.tipranks.com/stocks/rkt/forecast# All these "respected analysts" haven't updated their forecasts in months, and the *current* RKT price (even with a 10% drop in the past weeks, due to some news which I don't feel justifies the drop) is literally above all their forecasts. I (and others who follow the two stocks closely) feel that the combined entity, based on current (non-rate-cut) earnings, is worth about $25/share, PRE any rate-cut bump in refis etc. Only the recent BTIG rating which I referred to earlier has taken this info/number into account. $25 (expected) / $18 (current) RKT price is a *40% bump for 2026*, so I'm not comfortable selling it or putting it at risk of getting called until it hits or gets close to that number (unless the strike is like 20% above the current price, with a 30 DTE, but I'd need to see what kind of premium I'd get for that CC).

Trust me, my objective is to MAKE MONEY, and to leverage the $$ I have on margin, which is why I set up the layered CC ETF strategy. So far, 1 month in, with about $400k borrowed/invested on margin (5.x%), I should be netting around $10K a month. And I'm already planning to "adjust" my layers to shift some $$ away from XDTE to the higher-premium single-stock CC ETFs (ones I'm comfortable with, not the highest payers like the MSTR-focused ones).

Would welcome your thoughts/feedback on the above, as I continue to learn how to leverage/grow my $1M+ into $2M+ (and beyond).
OP, quantum computing is a different beast that is far beyond an NVIDIA/AI situation. From a theoretical level, machine learning and AI have been actively researched and developed in some form since the mid-1900s, and if you consider optimization a core part of machine learning, its foundations were well-defined even back in the 1800s. The foundational theory of machine learning and statistics that led to the generative AI we have today has been developing for over a century, and the first nascent neural networks that are a precursor to today's deep learning generative AI models were developed in the 1950s to 1960s, before becoming a full-on viable theory (with the invention of backpropagation to train models) from the 1970s onwards. From a practical level, machine learning and AI had been solving practical problems for a LONG time before this hype started getting priced into the market. Google, Netflix, Meta, all started leveraging ML in their products in some form decades before becoming the monsters they are today. As of now, quantum computers have demonstrated zero ability to solve problems that matter to us today. Furthermore, the engineering required to maintain a quantum computer that works is massive and near the limits of physics (I'm talking you basically need to reach near absolute zero on a consistent basis). I'm not saying things can't change overnight, but I would be VERY wary of any hype around quantum computing. It is VERY unlikely that issues with decoherence and stability of qubits will be meaningfully resolved in a matter of only 2 decades. So OP, be wary.
Nvidia has a monopoly on the GPUs that are used for AI/ML. All the AI/ML happens using libraries like TensorFlow, and those are optimized for CUDA, which is proprietary Nvidia tech. This is the same reason x86 processors (Intel and AMD) cannot be displaced in the personal computer space: most of the software is written and optimized for x86. And it is also not about merely writing new libraries or new code. The existing code is supported in hardware by processor features. Given how these processors are black boxes, it is impossible to build competitors for them.
The fact that they are behind Grok despite being invested in the AI/ML business for over a decade is highly alarming. Grok barely got going a year ago and they are destroying Google, which has spent far more on AI. Losing to OpenAI is a bit more understandable, as OpenAI is a leader in the industry and has some fantastic researchers. But losing to Grok is embarrassing.
I'm an Applied Scientist and I've worked in AI/ML for 10 years and you are just flat out wrong.
> The MIT study published last week shows that 95% of attempted AI implementations at companies fail.

Ah, the parroted MIT study. The report fails to clear the bar for any good statistical study: a low-n study with no sampling validity or measurement clarity, and no data or appendix to reproduce it. Ignoring all that... just because a pilot doesn't progress doesn't mean it isn't delivering any value (it would help if the study had any measurement clarity). The "study" also attributes the biggest issue to lack of memory and context window, which is something models have been evolving and getting better at.

> And if you understand the math behind it you'll know that it can be useful as a tool under highly skilled hands of field experts, but that it's not going to be a general "replace all workers" tool like the claims from tech would have you believe.

1. I never claimed it will replace all the workers.
2. It doesn't have to be used by highly skilled field experts. Not even close. A junior programmer with the appropriate model can perform close to a senior programmer (which doesn't mean the senior programmer's experience doesn't matter).
3. You are misunderstanding the difference between a task and a job.
4. Custom, sector-specific models with enough memory and context window are already on the way. Even assuming these models don't replace workers, they will still be running on GCP, AWS, and MS servers. The need for compute will skyrocket and the models will be licensed by companies creating their own models. (AI will be a cash cow for MS, AWS, GCP, ORCL.)

> I think you forget that the VAST majority of people are just now becoming aware of what big tech does and the younger populace, being much more technically literate, is likely going to see a shift relative to the populace currently.

I don't see it at all. Younger people care less and less and are pivoting more towards consumerism. Take a look at the TikTok ban: TikTok (a Chinese company) is quite literally collecting billions of data points, Trump wanted to ban it, and the younger generation threw a fit. People are content with the dopamine drip and the algorithm feeding them exactly what they want.

> but now there are companies starting with new business models, building the same (and arguably better) services that big tech offers.

Lol, like what?

> I think you are severely underestimating the irritation of people that the AI models are trained off of their data, without their permission (sorry burying stuff in the T&Cs might count legally but not to consumers). And all it takes is one lawsuit to completely change the legal framework, or for one law to rewrite what can and cannot be done.

Not particularly. Like I mentioned, the vast majority of people don't even understand, and even if they did they don't really have many options for opting out. Every social media company is collecting information. Your comments are being collected by Reddit and then sent to Google for their models, but you are still on here debating an internet stranger. Sure, all it takes is a law, but with how much funding and influence big tech has? I'll keep my money on big tech and you can keep hoping for reforms that might one day happen.

> The models aren't "intelligent" in the human sense. They run statistics on massive datasets and return the most likely set of words based on the input set of words. The human brain, which is the most effective intelligence we know of today, runs on 20W. That's not even enough energy to power an old-fashioned tungsten lightbulb.

I do ML. Nobody claimed these models are sentient or intelligent. They don't even need to be "intelligent"; you are confusing AGI with AI. LLMs are just part of ML, and we have had ML for years now. It turns out the human brain, as special as it is, is still a pattern-recognizing statistical machine with a bigger context window and memory. The models don't need to be "intelligent" to generate value, nor do they need to do something special that only humans can do.

> It's really best if you learn a little about things, because you seem to be basically building your view based on what you hear from people who have a vested financial interest, not based on independent reviews and a fundamental understanding of the technology.

My work literally revolves around DE/ML. I work with these models regularly. I don't think you quite understand the nuances of AI... you keep saying "math" but I don't see any actual evidence for your statements or your so-called math.
Thank you, and glad I can be helpful.

1/ Yes, it has to be on the same timeframe.

2/ The VRP in SPY turned negative on Mar 20, and despite being ever so slightly positive at the worst of the crisis (Apr 8-9), it was mostly at 0 and then negative for a long time, as IV got mercilessly crushed while realized was still high. VRP has been really positive (+5 on average) since early June, and indeed since then it's been fairly easy making money selling options again, especially with a particularly forgiving realized vol and a gentle path drifting on the way up. It's not always like this, and sometimes you have to delta hedge.

2b/ VRP is a measure of the past - yes and no. The best estimate of RV tomorrow is often RV today. Therefore, VRP today is very often a great predictor of VRP tomorrow. There are other factors, but in my ML model, VRP today comes out (not surprisingly, again) as one of the top features.

3/ The key is still to put it in context and to capture moments where it is really stretched - knowing that it is positive or negative is already great; knowing when it is really stretched compared to the recent past is even better.

4/ I expose almost all of my research in my app. I know how painful that stuff is to recompute because... well, I've been trading for a while. Retail traders are at a massive disadvantage to pros because they don't have the data, sometimes lack the tech skills, and even more often the time. There are other tools in the market; do a quick Google search and you will find the one that suits you best.

5/ Stop loss: never. I size small and I am not buying wings either. Again, it's like being an insurance provider. You can decide to reinsure yourself, but it eats your margin (especially with the volatility smile, you end up buying a vol that is often much higher than the one you sold). If you insist on hedging, you should consider calendars; they are probably the best of both worlds, but not a magic solution either: you are now exposed to the term structure.
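For anyone who wants to recompute something like this themselves, here is a rough sketch of the VRP bookkeeping under my own simplifying assumptions (close-to-close realized vol over a 21-day window and a generic 30-day ATM implied-vol series); the column names are placeholders, not the commenter's actual data or methodology.

```python
# Sketch: VRP = implied vol minus trailing realized vol, with "today's RV" as the
# naive forecast of tomorrow's RV. Window and data columns are illustrative assumptions.

import numpy as np
import pandas as pd

def realized_vol(close: pd.Series, window: int = 21) -> pd.Series:
    """Annualized close-to-close realized volatility over a rolling window."""
    log_ret = np.log(close / close.shift(1))
    return log_ret.rolling(window).std() * np.sqrt(252)

def variance_risk_premium(implied_vol: pd.Series, close: pd.Series,
                          window: int = 21) -> pd.Series:
    """Positive when options are priced above the vol actually being realized."""
    return implied_vol - realized_vol(close, window)

# Usage: `df["close"]` is a daily close series and `df["iv_30d"]` a matching
# implied-vol series as a decimal (both column names are placeholders).
# vrp = variance_risk_premium(df["iv_30d"], df["close"])
# print(vrp.tail())  # e.g. +0.05 means IV sits ~5 vol points above realized
```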
Dude, you're absolutely right. I used to think trading was all about hitting that one massive winner, but honestly? The real edge these days comes from combining AI tools with solid risk management. Took me way too long to figure this out. Backtesting changed everything for me. I started running my ideas through years of historical data first. The AI can simulate thousands of different market conditions and basically tell you "hey, this strategy would've blown up your account in 2018." Saved me from so many stupid plays. Pattern recognition is where it gets interesting. ML is insanely good at catching things I'd never notice - like weird volume patterns before breakouts or momentum shifts that happen right before reversals. Helps cut through all the market noise so I'm not just trading because something "feels" right. Automated risk management was my biggest game changer though. I set up bots to handle stop losses and position sizing automatically. No more holding losers because I'm "sure it'll come back" or risking too much on a single trade. Takes all the emotional BS out of it. Sentiment analysis is like having superpowers. The AI scans everything - news, earnings calls, social media, even WSB posts - to gauge how everyone's feeling about the market. Really helps you avoid getting caught in bull/bear traps. The key thing I learned: don't let AI trade FOR you, use it WITH you. I still do my fundamental analysis, but now I have this AI copilot helping with entries, exits, and keeping me from doing anything too stupid. If you want to know more about AI trading, go check out this page: [Algolyra](http://algolyra.beehiiv.com)
I made a bag on this back in the day and have started eyeing it again since it’s starting to look oversold. Their numbers are better than you would expect and they have a pretty decent ML R&D org. I could see them releasing a product that goes super viral and spikes the stock back up, and I don’t think the downside is huge
To his point, all these methods are not super accurate. They work, but... they have some disadvantages one cannot ignore. I personally use a mix of HAR and a few other ML models to predict RV. IVR is definitely backward looking, and in and of itself... it's not super useful. But it looks like you know your stuff :) Curious why you were asking these questions in the first place :)
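For reference, here is a minimal sketch of the HAR-RV idea mentioned above (the standard daily/weekly/monthly lag regression), done with a plain least-squares fit. This is a toy illustration under my own assumptions, not the commenter's actual model.

```python
# Sketch of a HAR-style realized-volatility regression: next-day RV regressed on
# its daily, weekly (5d) and monthly (22d) averages. Illustrative only.

import numpy as np
import pandas as pd

def har_features(rv: pd.Series) -> pd.DataFrame:
    """Daily, weekly and monthly lags of the realized-vol series."""
    return pd.DataFrame({
        "rv_d": rv.shift(1),
        "rv_w": rv.shift(1).rolling(5).mean(),
        "rv_m": rv.shift(1).rolling(22).mean(),
    })

def fit_har(rv: pd.Series) -> np.ndarray:
    """Least-squares fit of RV_t on its HAR lags; returns [intercept, b_d, b_w, b_m]."""
    X = har_features(rv)
    data = pd.concat([rv.rename("target"), X], axis=1).dropna()
    A = np.column_stack([np.ones(len(data)), data[["rv_d", "rv_w", "rv_m"]].values])
    beta, *_ = np.linalg.lstsq(A, data["target"].values, rcond=None)
    return beta

# Usage: `rv` is a daily realized-vol (or variance) series you computed yourself.
# beta = fit_har(rv)
# forecast = beta @ [1.0, rv.iloc[-1], rv.iloc[-5:].mean(), rv.iloc[-22:].mean()]
```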
Dude, Google DeepMind is OWNED by Google, no fucking shit they are gonna use TPUs... Are you really this stupid? You're trying to prove to me that TPU is "best" by telling me Google DeepMind uses TPUs? ...? Next you're gonna tell me the MacBook Pro is the best laptop because Apple engineers use them. My point is that 99.9% of university researchers, AI labs, AI startups, mid-sized companies, and enterprise companies ARE ALL USING Nvidia GPUs. Why? Because CUDA won the war, all the open source ML libraries are optimized for CUDA first, and all the engineers have an Nvidia GPU at home so they can test their code on their gaming desktop. This is how I first started doing AI programming: I was running AI code on my 2080S years ago, then the same code ran on my 3080 Ti, then the same code ran on my 4090, and the exact SAME CODE also runs fine on an H100 that I used on Nebius cloud a month ago. I.e., engineers love the software stack for Nvidia chips and it has been backwards compatible for years and years now. Google's AI software stack is seen as a piece of shit in comparison. And by the way, I also do my AI engineering on Nvidia chips, for all of the above reasons. And none of the AI engineers I know use TPUs, again, because the software sucks ass.
I think the main driver of snowflake adoption had been that it feels just like a database, since it's managed so fully. They've definitely worked hard to expand their offerings and integrations, but with the focus on SQL it got branded as a distributed DB primarily. Databricks from the start was focused on programming languages for distributed computation, which offered a lot more flexibility, and for being a place to do ML on big data. They've since basically closed the warehousing gap, and keep rolling out new features at an insane pace to sprint ahead of competition. Execs don't care about features as much as cost, reliability, and simplicity, so walking that knife edge to maintain max profits is an ever evolving process.
Snowflake's customer retention rate (not revenue retention) is well below 100%, so companies are in fact migrating to other providers - Fortune companies included. Customers migrate for many reasons, such as cost per DBU consumed, open source capabilities, ML capabilities, and wanting to move to more capable data warehousing models. You can search for stories of companies migrating, and there is no shortage of CTOs/CIOs championing their switch.

> I'm just not seeing anything with substance here, just words

Are you looking for data to support SNOW as being bullish, or data to support Databricks' growth?
Yes, you can transfer the shares in your ML account to Fidelity without selling any shares or incurring any taxes. Once in Fidelity you can do whatever you want.
My initial response to you was just correcting you because you said Nvidia was just doing cards for a niche area of gaming and graphics. I was simply telling you that isn't true because they've been in the ML space back then too. Not sure why the conversation deviated to investors.
For some reason ML and BofA don't care what mutual funds/ETFs I buy at Fidelity. But at ML everything is locked down and I am stuck with mainly T. Rowe Price and American Funds. Idk if it's because they can control the limitations from their side because of my joint account with the covered person. Even when I call the help desk they say it's a limitation on my account. So I transfer to Fidelity, make all the changes I want, then transfer back to ML to maintain free banking.
Edge is what I meant. I just call it ML. Edge sucks.
>Can I move my rollover IRA from ML to Fidelity? Sell the SPLG position and buy other funds I want? Then, once I do that, I move the new positions back to ML to maintain my platinum honors? Is Fidelity on your family member's list of approved brokerages?
I read this story in a photoshop adjacent subreddit and people were laughing at the idea that these tools could affect the use cases of photoshop. Something about the lack of control, but the way I see it, the rate of improvement suggests that these tools are going to be quite refined in a few more years. Adobe stock is down 1.8% on this news and I have to wonder if Adobe is going to keep sustaining paper cut after paper cut with AI releases. I have like 1.5% of my port in Google and I consider them one of my critical AI holdings. Been following their ML research teams for years and I'm still impressed with what they're discovering.
This is a complete bastardisation of "AI" tbh. Yes, traditional ML methods have been used for decades, but conflating that with modern AI (aka LLMs) is like saying we're still doing electronic trading. Asset pricing? That's set by the market, so it's quite a catch-22 to say market efficiency is affected by that. ML is used for finding signal, upon which dealers will interact with the market. Actual """AI""" has no impact on the market beyond these flows. Ren Tech is such a cherry-picked example that I'll pay you $100 if you've ever actually worked in the fund industry full time.
The long-term catalyst AMD needs is large-scale adoption of their AI GPUs - the MI3xx. Given that the company itself won't provide any material guidance in this area, it's safe to assume they're not making any traction yet against NVDA's CUDA. IMO it's very difficult to see that happening anytime soon. First of all, you have all the ML/DL engineers using CUDA for the past 15 years. As everyone is looking for first-mover advantage, I can't really see any of the big players diverting time to look into an alternative over something that's been baked into the industry. NVDA basically has the entire ecosystem: GPU clusters with high-speed links. And secondly, NVDA has always made best-in-class products and has had the top GPU for 25 years. Strip away the names for a minute and just look at the scenario. Here we have a top-class product that everyone knows and loves, with a great reputation spanning over a decade. Now another entrant is trying to make a competing product that at best you could say is near equivalent, but it's not in any way a game changer or next-generation step ahead. What's the incentive/motivation to swap to the new one? Is there one?
You think that generative LLMs are trained on next-token prediction. And you just named an introductory paper on the transformer architecture that has no relation to modern LLMs aside from introducing the attention mechanism. I have multiple degrees in AI/ML and am a published author, so based on your two posts I can safely say you're either an undergrad in the field or someone who read an intro to NLP and LLMs and claims to know more than they do, no offense :)
What's wrong with Edward Jones? Just curious, because I have most of my money with Merrill Lynch but I have an orphan account that was handed down to me with 100k that's still with Edward Jones. At the end of the day they both return about the same percentage, but Edward Jones uses a lot of mutual funds and ETFs whereas ML holds a lot of individual stocks.
As an ML engineer, their software is overhyped. Hyperscalers will eat their lunch money on the private core. Government is their stronghold, and I value it at a 12-18x revenue multiple, which would price them at $12-16/share. With the private biz, closer to $18-22 a share. Another note: the public will eventually recognize the blood money, realize that if Palantir wins they lose, and the bad press will be their downfall. Right now it's a meme stock. How ironic for a company in the "let's describe reality" business to have a valuation built on delusion. The masses are ignorant of the AI technology and think this shit is theirs alone. ML models aren't code but data. Orgs aren't the same. This shit ain't scaling.
Customer user experience ranking of the brokers I use: Fidelity is the best, next is Schwab; BofA Merrill Lynch and Vanguard are awful. BofA/ML and Vanguard intentionally make it miserable to manage your holdings with them.
They're not a leader in AI *yet*. I follow Apple very closely and yes, their AI offering sucks, but they are definitely not behind by any stretch. They have surpassed their competitors in traditional ML for years. The issue with Siri is that when they launched their AI Siri, it had "two brains" so to speak: the regular one and the new AI-infused one, and when you speak to it, it has to decide which one it hands the prompt to (or whether to send it off to GPT). They ended up delaying their actual AI Siri with app intents (where Siri could control the device) because of this "two brain" issue. Now it's very likely that it will be coming out in March 2026, because they've been working on it for 1.5 years now, and it will work with specific apps (i.e. all first party ones, and select third party ones). I don't think people understand what this means. Siri will be able to access information from apps and take actions on your behalf. Apple's new HomePad device will be using Siri + these new app intents to do exactly this. No other AI company has been able to get this working on device locally and probably won't for a long time, given how hard it is. Apple's hardware + software stack allows them to execute this, and I'm 100% sure it'll blow the "behind in AI" narrative away when we see it next March.
The economy is shifting. But, also, I think a lot of investors are buying into the "tech has reached a peak/bubble" rumors that keep flooding the place. If folks keep saying something about the market, it starts to become a self-fulfilling prophecy. "Tech is crashing! Tech is a bubble!" every day. I had a spread of tech stocks... AMD, NVDA, AMZN, MSFT, GOOG... and even some lesser known stuff. Earnings have been good, but the stocks drop as folks take profits, then kind of stay level. Could just be the summer/fall blues that hit the market. But folks are reading into it that the tech "fad" is over. In the long term... like years, decades... tech is here to stay. We're exponentially increasing in tech every day now, and chips, algorithms, software, etc. are driving everything. So tech stocks are not fads. They are the foundation for future tech, like how ML/AI is being used to fast-track new drugs to production or find new uses for drugs, or to analyze tons of data for things a human couldn't spot. A dip in the economy will create a hiccup in all of that, though. And investors sometimes being irrational beings doesn't help, either.
But why are you only talking about AR and VR and not ad revenue? AI infrastructure helps both. Your second paragraph doesn't make sense. First of all, generative models are already used in ad space and make Meta billions of dollars. It seems like you think generative AI = LLMs. Infrastructure capex isn't just for generative AI or LLMs; it's also R&D for the next breakthrough of future models. Your argument is literally "Meta wasn't good at VR glasses, so why would they be good at AI?" That's an equally bad argument. ML is a subset of AI. Of course, if you're successful in a subset of a certain thing, there is a decent chance you'll be successful at the superset of it.
Your parent comment was addressing the "infrastructure" investment for Meta AI. I'm just throwing out their track record with tech as an example of their performance. 2024 had like $2b in revenue from AR/VR. Not exactly a stellar return on the $100b buy-in. They have a history of throwing money away on tech that is beyond their scope.

>business model the last decade

So then the current investment is likely attributed directly to the novel generative models. Why use this as a counterpoint to discussing their new business? You attributed their existing ad space revenue to their current venture... General success with the standard application of ML in ad space doesn't imply success with novel AI. That's like saying, "they were successful as a social media platform, why wouldn't they nail AI?"
I mean it is taking off…do you deny that? Yeah, I know they’ve been using ML algorithms, ML is AI. I don’t understand what point you’re trying to make. That it’s been a backbone of their business model the last decade? I mean yes lots of ai and ai adjacent tech has meaningful hype. Again, I don’t get what point you’re trying to make.
>Machine learning (ML) is a field of study in **artificial intelligence** concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.

That term dates back to 1959. When we're talking about AI advancements and AI usage and AI technologies, these include ML and the results of ML.
No it's not :) otherwise it would still be called ML. AI is an umbrella marketing term that hints at AGI - for the trick to work, people need to believe it's intelligent, so they use as many words as possible to anthropomorphize it.
ML and computational analytics in general have far more substantial contributions in industry and are literally driving us to the future right now. LLMs are just something you can get retail interested in and create hype to drive investment.
>the system isn’t functioning as intended. Companies that would otherwise be on the path to a potential initial public offering or lucrative acquisition are getting pulled apart, with the bulk of the cash ending up in the pockets of the founders and their leading engineers

This is a twisted narrative. The reality is that all these AI startups do not have any monetization plan whatsoever. It's just a bunch of experts in ML and DL banding together hoping for a big payday. But current infrastructure does not support wide usage of AI yet. If that's the case, how can you even create and try to market and sell an AI based product/service? It would basically be the ".com" era all over again - all we have is "AI" in our name. Most of these AI companies don't have any worth outside of their talent. The talent fetching big paydays is fair - they have an in-demand and scarce skill set. The capital markets are just trying to pass a pre-revenue/pre-product/no-monetization business to the next person - that's the actual scam.
Nobody thinks that. Everybody knows about ML and all the other things that can help process massive amounts of data. But LLMs are the things that these big companies are getting people excited about, because people suddenly think they can _think_. It's _words_, like the kind my employees use! But it doesn't work well enough, and that's showing. Zuckerberg did not "replace mid-level engineers by the middle of 2025." As far as the other stuff, JEPA is out and not that impressive. Moore's Law weighs heavy; we are nearing the limits of how much more tightly and efficiently we can pack silicon without getting _really_ expensive. Now, when these new data centers come online, and they're ten times the previous size and they _still_ can't create AGI, so we put them all together and get something 100 times the size, maybe, _maybe_ we can create something like an AGI. Something that is _almost_ as smart as a real person. You know, something almost as smart as two people could create by accident if they just forgot to use condoms one night. All this, for trillions of dollars. Silicon Valley has fucked up its microdosing and gone off the rails.
And all those things were there before "AI", it was ML :) I find it fascinating that people rediscover how useful computers/algorithms are.
I'm thinking more of the evolution of ML which is tied to LLMs and how they use ML for ratings. Rating accuracy is the biggest deal for insurers because if you rate correctly the good drivers stay because they get lower prices and the bad drivers leave because of high prices and go to your competitor. I'm not in the machine learning space though so I could be way off and they might already be using the most up to date algorithms.
I'm the lead of ML integration at a B$ company with 300 employees. I've been assigned to create a GPT with existing company data to streamline research and development of new products, which honestly would be useful, no doubt. But our IT team is killing any and all initiatives we have: they block projects with endless meetings, endless planning, deferring tasks, or they literally lie about having done the work. It's been like pulling teeth to get even the most simple GPT integrated, so I've just taken to kicking back and waiting for them to move the needle. I don't have any authority to demand they work on it, so what can I do? I figure most companies are also caught in some terrible corporate deadlock.
I am BSing a bit, I don't work with ML, but from what I understand LLMs are just another approach to ML and I think any evolution in that space that could be applied to their rating approach would be good.
Are you even from the tech space? AI might not be perfect, but we had nothing near its capability a few years ago... We had ML, but no one was using LLMs.
>ML space is really evolving lately with LLMs so they could really land on something good.

I call BS. LLM is short for large language model; that's great for generating narrative but useless for math. Insurance companies and their risk underwriting are built on solid math, actuarial processes, etc. LLMs have zero impact on that process. You're either BSing without knowing about ML or you were never really an employee there.
LLMs, perhaps. All generative AI and ML, I don't think so.
It doesn't really matter. The point is that ML is insanely useful for many problems. People focus too much on if AGI is achieved, but even if it is not then it still brings a lot of value.
Here is a good metaphor so we don't keep going in circles: AI = teach a robot to cook. ML = show it thousands of recipes and let it learn patterns. RL = let it cook in the kitchen, reward it when the dish tastes good, punish it when it burns the food.
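To make that metaphor concrete, here is a minimal toy sketch of the ML vs. RL distinction. Everything in it (the cook times, the reward window, the update rule) is hypothetical; it only illustrates "learn a pattern from labeled examples" versus "learn by acting and being rewarded."

```python
# Toy illustration of the metaphor above (all numbers are made up):
# "ML" = learn a pattern from labeled examples; "RL" = learn by acting and being rewarded.
import random

# --- ML: infer a rule from example recipes (minutes cooked -> tasty or burnt) ---
examples = [(8, "tasty"), (9, "tasty"), (14, "burnt"), (16, "burnt")]
threshold = sum(minutes for minutes, _ in examples) / len(examples)  # learned cutoff

def ml_predict(minutes: float) -> str:
    return "tasty" if minutes < threshold else "burnt"

# --- RL: try cooking times, get a reward, and nudge the policy toward what works ---
cook_time = 13.0                                   # start with a policy that overcooks
for _ in range(500):
    tried = cook_time + random.uniform(-5, 5)      # explore around the current policy
    reward = 1.0 if 7 <= tried <= 10 else -1.0     # dish only tastes good in this window
    if reward > 0:
        cook_time += 0.2 * (tried - cook_time)     # move toward actions that were rewarded

print(ml_predict(12))          # -> "burnt" (learned from the labeled examples)
print(round(cook_time, 1))     # -> roughly 8-9 minutes (learned from rewards)
```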
> the ML space is really evolving lately with LLMs so they could really land on something good.

An actual software engineer would hopefully know that the usefulness of LLMs to an insurance company would be pretty much nonexistent.
I used to work there as a software developer, I'm still holding stock I bought on IPO because I think they'll come back. They were all in on growth before but now from what I can tell are dialing it in for rating and profitability. I've been tempted multiple times to put a lot more in but usually talk myself out of it because I'm already holding a good amount and don't want to risk losing even more. That being said, I did just buy 30 more shares after the weird drop after earnings since they seem to keep steadily rising. From my time there they've always been very data driven and flexible so I'm hopeful. They have enough data to backtest any changes to rating algorithms and the ML space is really evolving lately with LLMs so they could really land on something good.
Late to this party, but ML blocks like 90% of the tickers now due to volatility. Have to call in trades now.
Another big reason is that they're staffed with devs who think that because they can use MCP, they can build ML-centric products that impact customers. Turns out that's hard!
ML engineer here. I laughed at you saying people think AI is a silver bullet. I've worked through the big data, business intelligence, data science, and machine learning corporate phases. Mapping data to biz ops is 90% of the work. Nothing has really changed, in a way.
Okay, now what? I sold for near 100% losses and used the last of my emergency fund this past weekend to hammer the Bills ML (moneyline) because it had easy win all over it. I'm fucked.
Cool, I've trained ML models before too. Can you provide any examples of quantum computers being used for AI training or inference? Maybe it's relevant in 50 years, but it isn't now and won't be for a long time. The number of weights you need for an LLM is way too large for any quantum computer that will be built within the next decade.
Coming from a CS perspective though, AI is better used in diagnostic healthcare (through provided data and machine learning) than it is outside of it (ChatGPT LLM). Their AI is for diagnosis, and the first thing they teach new machine learning students is using AI/ML to diagnose cancers, tumors, and other medical ailments. Read the research, not the buzzword. If they were using an LLM, I would have been out ages ago
Comparing OpenAI and Palantir is just kind of stupid. Palantir is primarily using AI/ML - an extremely advanced and mature technology with clear and effective use cases. OpenAI is (obviously) GenAI/LLMs - a novel technology that is hot and exciting but has yet to really identify an enterprise profit model.
As an ML engineer and architect, people need to understand that Palantir's private wing is extremely weak. I fully expect the big cloud providers to eat their lunch money. Their government contracts, though, are a serious differentiator, but I would price them at 10-15x. Annualized commercial rev is $1.7b; at a 15x multiple that's about $10/share. Annualized private rev will be $1.3b, so another $8/share at a very liberal 15x multiple. I price them at around $18-20 a share in 12-24 months.
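For anyone checking the arithmetic in the comment above, here is a rough sketch. The roughly 2.5B fully diluted share count is my own assumption for illustration (it is not stated in the comment); the revenue figures and the 15x multiple are the commenter's.

```python
# Rough per-share math behind the comment above. The ~2.5B diluted share count is an
# assumption for illustration; the revenue figures and 15x multiple come from the comment.
shares = 2.5e9                      # assumed fully diluted shares outstanding

commercial_rev = 1.7e9              # annualized, per the comment
private_rev = 1.3e9                 # annualized, per the comment
multiple = 15                       # revenue multiple used in the comment

commercial_per_share = commercial_rev * multiple / shares   # ~$10/share
private_per_share = private_rev * multiple / shares         # ~$8/share

print(round(commercial_per_share, 1),                       # 10.2
      round(private_per_share, 1),                          # 7.8
      round(commercial_per_share + private_per_share, 1))   # 18.0
```

Under those assumptions the pieces sum to roughly $18/share, consistent with the comment's $18-20 range.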
There is already intense competition in humanoid robotics. I wouldn't even say Tesla has a lead. FigureAI, 1x, Apptronik, Agility Robotics, and then you have Amazon robotics and Google's Gemini Robotics teams pushing hard on the research side. A lot of these guys have heavy backing from Nvidia, Microsoft, Amazon, etc. China has a ton of companies pushing humanoid robots hard as well. The barrier to entry for humanoid robots has become so low: commodity robotics hardware paired with foundation ML models trained via simulation / world models. A much, much lower barrier to entry than automobiles. You need a small fraction of the capital, and there's so much development going on with Vision Language Action (VLA) models and tools for training via simulation. Google's Genie3 world model could fundamentally transform robotics development. Nvidia is also pushing to open source VLA models and simulation tools for training robots. They benefit by locking everyone into their edge compute platform.
Is AI customer support partly responsible for Comcast subscriber losses? "One of the key strategies involved leveraging AI and ML to transform the customer experience." https://www.toolify.ai/ai-news/transforming-customer-experience-comcasts-ai-and-ml-success-story-1482075 It appears as if AI customer support has created customer losses for other companies: https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/ What has your experience been?
PR. That's what Palantir does well: PR. Palantir is simply data analytics along with some API calls to ML and LLM systems for what they call AI data analysis.
Came across this thread while studying more about STM. Very nice to have opinions from engineers and distributors, thanks! I want to add some research I gathered on STM from an angle less discussed: post-quantum cryptography (PQC) and the migration plan toward PQC software and hardware.

The National Institute of Standards and Technology (NIST) finalized the first PQC standards - ML-KEM (FIPS 203) and ML-DSA (FIPS 204) - in August 2024. The NSA anchored the U.S. migration in CNSA 2.0 (the PQC playbook for National Security Systems), reinforced by NSM-10 and OMB M-23-02. Under CNSA 2.0, any NSS equipment that can't support CNSA 2.0 must be phased out by December 31, 2030, and CNSA 2.0 algorithms are mandated by December 31, 2031. NSM-10 and OMB M-23-02 extend planning and migration across civilian systems toward 2035. In practice: chips used in federal/NSS systems need PQC support this decade - specifically ML-KEM (FIPS 203) and ML-DSA (FIPS 204) - and suppliers that can prove those algorithms now are better positioned for U.S. government demand (with knock-on commercial pull).

To achieve compliance, modules typically go through validation in two steps:

1. NIST's Cryptographic Algorithm Validation Program (CAVP), for the FIPS 203/204 algorithms.
2. NIST's Cryptographic Module Validation Program (CMVP), for FIPS 140-3, which can include the validated algorithms.

As of now, STM is the only MCU vendor with a vendor-labeled NIST CAVP validation explicitly covering ML-KEM and ML-DSA for an MCU library - validated July 8, 2025 (Validation A7125) for the STM32 PQC library on Cortex-M33. Outside the MCU space, some hyperscalers are pursuing (and in some cases obtaining) these validations: Apple, Amazon, Google, and more.

Yet we also hear peers projecting hardware lifetimes that don't match the migration tempo. Meta just lengthened its server depreciation schedules (cutting 2025 depreciation by about $2.9B). While investors debate whether AI accelerators truly have 5.5-year useful lives when leading-edge compute turns over in 2–3 years, many overlook the PQC roadmap: these systems will be effectively out-of-policy (and thus completely irrelevant) by 2031 - not due to demand or performance, but by the NSA.

Back to MCUs - here's where key competitors stand on PQC (algorithm-level) validations:

1. NXP Semiconductors: NXP scientists co-authored CRYSTALS-Kyber (now ML-KEM), but there's no NXP-vendor-labeled ML-KEM/ML-DSA CAVP validation listed. In other words, no PQC certification.
2. Infineon Technologies: visibly active in quantum/security (e.g., the Quantinuum collaboration), but again, no PQC certification.
3. Renesas Electronics: no PQC certification; they collaborate with wolfSSL, whose module has relevant certifications.
4. Microchip Technology: no PQC certification.
5. Texas Instruments: no PQC certification.
6. onsemi (ON Semiconductor): no PQC certification.

Bottom line: STM's named, vendor-labeled CAVP validation (A7125) for ML-KEM + ML-DSA on STM32/Cortex-M33 lands exactly as U.S. policy pushes PQC-capable gear into government systems by 2030-2031, with broader migration working toward 2035. That's a competitive advantage in the MCU space worth highlighting, and I don't see almost anyone talking about it. And yes, similar PQC roadmaps are emerging globally: the EU published a coordinated PQC implementation roadmap in June 2025, and Canada set milestones to finish high-priority migrations by 2031 and all remaining systems by 2035. China is also pursuing a PQC migration plan.
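Not from the post above, but for readers unfamiliar with what "supporting ML-KEM" means in practice, here is a minimal sketch of the KEM flow (keygen, encapsulate, decapsulate). It assumes the open-quantum-safe liboqs-python bindings; the exact mechanism name used here ("ML-KEM-768") depends on the installed liboqs version, so treat the specifics as illustrative rather than definitive.

```python
# Minimal ML-KEM (FIPS 203) usage sketch, assuming the liboqs-python bindings
# (open-quantum-safe/liboqs-python). The mechanism name "ML-KEM-768" is an
# assumption; older liboqs builds expose the same scheme as "Kyber768".
import oqs

ALG = "ML-KEM-768"

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()              # receiver publishes its public key

    # Sender derives a shared secret and a ciphertext to transmit to the receiver.
    ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same shared secret.
    secret_receiver = receiver.decap_secret(ciphertext)

    assert secret_sender == secret_receiver               # both sides now hold a shared symmetric key
```

The CAVP validations discussed above certify that a vendor's implementation of exactly this algorithm (plus ML-DSA for signatures) produces standard-conformant results; they say nothing about the rest of the device's security, which is what the FIPS 140-3 / CMVP step covers.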