Reddit Posts
[Discussion] How will AI and Large Language Models affect retail trading and investing?
[Discussion] How will AI and Large Language Models Impact Trading and Investing?
Neural Network Asset Pricing?
$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
Moving from ML to Robinhood. Mutual funds vs ETFs?
Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
I'm YOLOing into MSFT. Here's my DD that convinced me
I created a free GPT trained on 50+ books on investing, anyone want to try it out?
Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
Option Chain REST APIs w/ Greeks and Beta Weighting
Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
AI/Automation-run trading strategies. Does anyone else use AI in their investing processes?(Research, DD, automated investing, etc)
🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
Do you believe in Nvidia in the long term?
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
Which investment profession will be replaced by AI or ML technology ?
WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
Training ML models until low error rates are achieved requires billions of $ invested
🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
AI/ML Quadrant Map from Q3…. PLTR is just getting started
$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
VetComm Accelerates Affiliate Program Growth with Two New Partnerships
NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
Netramark (AiAi : CSE) $AINMF
Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
How would you trade when market sentiments conflict with technical analysis?
Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
How are you integrating machine learning algorithms into their trading?
Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
Why I believe BBBY does not have the Juice to go to the Moon at the moment.
Meme Investment ChatBot - (For humor purposes only)
WiMi Build A New Enterprise Data Management System Through WBM-SME System
Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
The Squeeze King - I built the ultimate squeeze tool.
$HLBZ CEO is quite active now on twitter
Don't sleep on chatGPT (written by chatGPT)
DarkVol - A poor man’s hedge fund.
COIN is still at risk of a huge drop given its revenue makeup
$589k gains in 2022. Tickers and screenshots inside.
The Layout Of WiMi Holographic Sensors
infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
$APCX Huge developments of late as it makes its way towards $1
Robinhood is a good exchange all around.
Mentions
I haven't used facebook in over a decade so frankly I have no idea what the comments look like on there. It just struck me as very "how do you do fellow kids." I agree that reddit's gotten more trash as many of the highly knowledgeable people have left, because, as with any big, generalist system, people gravitate towards and push to the top content they can effortlessly understand. It used to be an audience of enthusiasts and now it's not; it is what it is. Eternal September and all that, we ruined the internet for experts and now the internet is being ruined for us. I think some of that is confusion over terms. People hear AI and think chatgpt, not building out a CNN to improve quality control. Marketing uses that confusion over terms to try to make people believe that every business is incorporating a genAI chatbot and seeing great returns so you need to buy their chatbot, and since that's easy there's a load of people doing it and thus it's highly visible. There are a lot of obvious use cases for ML/AI, but when people are bitching they're mostly bitching about chatbots, because that's the AI they engage with most frequently. And I kinda agree with them about the chatbots.
There's a difference between coding roles and operational roles. There's a huge difference between writing a small project and managing a whole company's codebase. Why do you think these "companies" that you're referring to are replacing coders? The first jobs to go in a company will not be the ones who have the knowledge to understand the code. There are multiple studies online that show progress is not meaningfully increased in real world scenarios. There may not be as many low-level entry coding jobs, but as I said earlier, Ops engineering, devops, and ML roles will be increasing to accommodate the different tools required to debug, deploy, and maintain the codebase. You think these models will just run on their own, fix and deploy themselves? [https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/) [https://www.hashicorp.com/en/blog/ai-is-making-developers-faster-but-at-a-cost](https://www.hashicorp.com/en/blog/ai-is-making-developers-faster-but-at-a-cost)
Currently in the process of getting onboarded at a massive medical company for a remote AI/ML engineering role. The job is basically just automating 10,000+ data-input roles over the next 6-7 months. It's incredibly sad
The idea that coding is going out the window when the experts in this field are literally some of the only people that can debug, deploy, and maintain these models is crazy. Roles in operational engineering, devops, and ML are all very sought after because of this.
I was able to do it! There are some great ML studies and engines out there that I'm using in my UI, and I'm getting 85%+ confidence signals on the next 30min/1hr/4hr move, and after the 30 min I get a 98%+ accuracy hit on that predicted number 🫣🫣
ML is hard to get right man. AI/LLMs are easier for me
Counterpoint: all the released papers point to the fact that scaling hasn't reached a plateau with the data they have, and additional training on the data available is the fastest way to improve LLMs right now. LLMs are big enough that there might never be an overfitting issue at all, especially since every frontier model has a corpus of the entire internet stored locally to them. To put it in perspective, before GPT-3 business-email ML was basically only trained on the Enron emails. This isn't a not-enough-data issue. While higher quality data is always preferred, it's just not necessary yet to produce better models. xAI has proven with Grok that just throwing more compute at it is enough.
I did the same type of thing but instead we used ML to learn from these behaviors, scatter the web for these type of trades and based on that it predicts the next move.
Yes of course, this has always been true and known to AI/ML researchers. There are many stages to training an LLM and they are all important. The implication that compute is suddenly less important is wrong though. All else being equal (e.g., given the same high-quality datasets), a model trained with more compute will perform better than one with less compute. It's all important, and if you want the best results you will make improvements to all stages in the training pipeline.
If you're interested in the subject you should check out [ML factor investing](http://www.mlfactor.com). Quite insightful on factor construction and the theoretical/academic background of the different premia.
Definitely worth it. ML has a bad online interface but it’s fine for set it and forget it. The CC bonus makes the Premium Rewards card one of the most valuable cards around, and you don’t have to concentrate hard on navigating bonus categories. It rivals my Chase Sapphire Reserve but BoA comes far ahead for everyday spend.
[Freenome and Perceptive Capital Solutions Corp Announce Business Combination Agreement to Create a Publicly Listed Company Transforming Blood-Based Multi-Cancer Detection through an AI/ML-Enabled Multiomics Platform](https://www.prnewswire.com/news-releases/freenome-and-perceptive-capital-solutions-corp-announce-business-combination-agreement-to-create-a-publicly-listed-company-transforming-blood-based-multi-cancer-detection-through-an-aiml-enabled-multiomics-platform-302634039.html) \- PCSC [Investor Presentation](https://www.sec.gov/Archives/edgar/data/2017526/000114036125044461/ef20060706_ex99-2.htm)
do i YOLOOOO everything into cowboys ML?
The government will be backing quantum defense and it will be ramping up over the next 2-5 years. I'd rather buy and hold this for 2-5 years at these valuations than chase it when it's $10+. Yes, it is currently unprofitable, but if you review their news trends, they are positioning themselves very well to be a player in the market. The government's aim:

# Phases of the Migration Strategy

# 1. Standardization (Complete/Ongoing)

* **Target:** Select and standardize quantum-resistant algorithms.
* **Status:** **Complete/Ongoing.** The National Institute of Standards and Technology (**NIST**) has finalized the first set of PQC standards, including:
  * **ML-KEM** (Module-Lattice-Based Key-Encapsulation Mechanism, replacing key exchange algorithms like Diffie-Hellman).
  * **ML-DSA** (Module-Lattice-Based Digital Signature Algorithm, replacing digital signature algorithms like ECDSA).
  * **SLH-DSA** (a hash-based digital signature, intended as a secondary option).
* **NSA's Role:** The NSA's **CNSA 2.0** suite requires the use of these NIST-selected algorithms, confirming the government's official cryptographic direction.

# 2. Inventory and Pilot Deployment (Current Phase)

* **Target:** Federal agencies must conduct a **comprehensive cryptographic inventory** to identify all systems using vulnerable public-key cryptography.
* **Timeline:**
  * **Immediate:** Agencies must create quantum-readiness roadmaps and begin identifying systems that are vulnerable or will not be able to support PQC.
  * **2025 (CNSA 2.0):** New software, firmware, web servers, and cloud services for NSS must **support and prefer** CNSA 2.0 algorithms.

# 3. Implementation and Enforcement (2027 Onward)

* **Target:** Full transition to hybrid and then exclusively PQC algorithms.
* **Key Milestones (CNSA 2.0):**
  * **January 1, 2027:** All **new acquisitions** for National Security Systems must be CNSA 2.0 compliant by default.
  * **2030:** All deployed equipment and services in NSS that cannot support CNSA 2.0 must be **phased out**.
  * **2031:** Full enforcement begins across most NSS cryptographic implementations.
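Not part of the comment above, just a minimal sketch of what an ML-KEM key exchange looks like in practice, assuming the liboqs-python bindings (`oqs`) are installed and that the build exposes the NIST mechanism name "ML-KEM-768" (older liboqs builds use "Kyber768" instead):

```python
# Hedged sketch of ML-KEM key encapsulation via liboqs-python (assumed installed).
# The mechanism name "ML-KEM-768" is an assumption; older builds expose "Kyber768".
import oqs

kem_name = "ML-KEM-768"

with oqs.KeyEncapsulation(kem_name) as receiver:
    public_key = receiver.generate_keypair()          # receiver publishes its public key

    with oqs.KeyEncapsulation(kem_name) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)  # sender encapsulates

    secret_receiver = receiver.decap_secret(ciphertext)              # receiver decapsulates

    assert secret_sender == secret_receiver           # both sides now share the same secret
```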
> Intel can't even fabricate their own chips, why would Apple have confidence that Intel could manage theirs? Well, firstly, that's hyperbolic, Intel fabricates some of its chips, and outsources some of its chips to TSMC, and the proportion that it outsources looks likely to shrink in the future. Secondly, that's a retrospective analysis of past decisions, based on what everyone already knows, which is that TSMC *was* the process leader. The strategy of IFS is to leapfrog TSMC for process leadership *in the future*, so we have to look at what may happen in 2027, 2028, etc. as a result of the nodes IFS has in the pipeline, not just say, "Well, TSMC has been the process leader in the recent past, so this will continue to be the case". > If you're running a multi billion dollar business, why would you have the least competent potential partner making your most critical products? Even *if* TSMC remains the process leader, and 14A has poor yields or gets delayed or something, there's still significant value for Apple in diversifying its supply chain, and having a viable plan to switch some or all of its M-series chips out of Taiwan fabs, particularly given the geopolitical issues over Taiwan that look poised to come to a head in 2027, and the general tenor of the US administration. Also, it's not necessarily clear that Apple *needs* to "go with the best" for its mobile device chips, particularly if hyperscalers want to start a bidding war over TSMC fab space for the best chips for ML use cases.
For more context… Most of the driver for spending on AI has been motivated by an opinion piece from a few years ago that suggested AI abilities would follow scaling laws. If you’re not familiar with that, basically, the notion was that as long as you made them “big” enough, they could do anything. This is why companies were spending trillions of dollars on this stuff. The big problem happened in the past 12 months or so when more recent ML research showed that **they don’t actually follow scaling laws,** and that in many applications, we are already at or near the maximum theoretical ability possible. This is why you’re not hearing people talk about AGI incessantly anymore. And why hype over agentic AI is fading as well. TLDR the technology turned out not to follow scaling laws. This was not expected and most spending has been made assuming it would.
God of the gaps reasoning is crazy in AI- "Yeah but it can't do x" over and over and over as it keeps being able to do the previous xs faster and faster and faster. Compared to when I started in ML, there have been a SHIT TON of things that it couldn't do well that are now trivial. The progress has been insane, and accelerating. Like, other humans are going to use it to our disadvantage, but that's the main problem with every technology. Calls.
This. Been working in the ML world for a long time, and the success of new models tends to come from how well defined (think rigid) the business process is. Financial services is so regulated, e.g. UDAAP, that larger institutions have spent a decade removing decision-making from points of customer contact. GenAI will simply remove the "robotic" feel, and AI agents can likely take over the well-defined tasks.
Call centers have been doing automation for years, whether as rule-based flows, ML, or now AI. It is the perfect use case for ML and AI. Chatbots were the original target for call center / customer experience.
ML-enabled autocorrect is one of those things where the ceiling is amazing but the floor is just so much worse than the more mathematically based ones. Also, apparently my phone thinks I'm a pirate because it always likes to autocorrect to "thar"
"People outside the ML software industry don't *really* understand this" And are you on the inside of the ML industry?
My ML model just recommended a frozen potato company. I think it had too much Tylenol.
You are correct. AI is not the same thing as ML-based products. The terms get interchanged, unfortunately.
Difficult to say. The rush to implement and master AI (and robotics, it seems now) in my opinion leads to an economic collapse, especially for the working class, or potentially a major technical catastrophe due to lack of forethought when rapidly implementing AI to replace humans. My guess is that if things go to plan, the market will continue to rise, the USD may likely continue to falter, inflation continues, and the working class struggles into a potential depression-type scenario. But if AI/ML isn't properly overseen (regulated), we could eventually see a major infrastructural collapse. That would have the potential to hinder across all socioeconomic classes. Of course, many billionaires have been perfecting their bunkers in between space races, so I'd imagine they'd again fare well in this situation. It's a toss up for most of us if I had to guess.
That Google is ahead in the LLM competition.

1. Google has been cheating on benchmarks by feeding their models test data in their training sets, and overfitting via RL. This has created scenarios where Gemini performs impressively well on popular benchmarks, but very poorly in real world usage. This has completely fooled the 99% of investors that do not actually understand AI/ML.
2. Gemini's significant growth in downloads was mostly due to Nano Banana, not the chatbot itself. This is significant because image/video generation is mostly a fad that users engage with a lot for a month while it's new and exciting, and then usage falls off a cliff once the novelty wears off (at which point it just becomes "AI slop"). Chatbots are far more important, because people use them in their day to day life, and at work.
3. Google has been giving their flagship model (Gemini 3.0 pro) away for free, with nearly unlimited usage, but these massive losses have been covered up because Google lumps Gemini with other businesses to cover up how much money they are losing. As a result, investors do not notice just how unsustainable Google's AI approach really is. Gemini's entire competitive advantage is that it is given away for free with high limits, with no ads, subsidized by their profits in other areas. And even with Billions in marketing spend, integrating it with every Google product including Android, they still fail to come close to ChatGPT's engagement numbers.
You just proved you don't know what 'AI' actually means; Search ranking (BERT) and YouTube recommendations are the AI workloads running on those TPUs, so thanks for confirming you are completely out of your depth. I copy/pasted Google's own words from their OWN BLOG highlighting how TPUs facilitate Search and YouTube ML workloads. Google THEMSELVES are telling us where these TPUs are used and you just choose to not believe them?
Not all AI is equal, and not all AI is ChatGPT. Most of the "specialized AIs" have nothing to do with LLMs at all; some of them are what was previously called ML. Those are trained on relatively narrow, specific sets of data and they don't need all the pirated content. OpenAI and LLMs are a different story; it's basically pattern matching. They are not that good in any case when context is important.
I just support people doing ML, so I don't really know. I do know that CPUs with PyTorch are significantly slower for them and they migrated from custom CUDA code to PyTorch's abstraction layer that will supposedly easily work with more than just Nvidia/CUDA and didn't lose performance on Nvidia.
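As an illustration of the kind of abstraction layer described above (my own minimal sketch, not the commenter's code), this is what backend-agnostic PyTorch looks like when nothing is hard-coded to CUDA:

```python
# Minimal sketch of backend-agnostic PyTorch: the same ops run on CUDA if a GPU
# is present, otherwise on CPU, with no custom CUDA kernels involved.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(32, 1024, device=device)

with torch.no_grad():
    y = model(x)          # dispatched to whichever backend kernels PyTorch ships with
print(y.shape, device)
```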
I would not be entirely sure about this. I have heard many complaints that ML research at Google has stagnated and the bureaucracy is insufferable. Know plenty of colleagues that have left their research branch for greener pastures. But maybe it’s turning around again. It’s hard to be picky in this job market.
Until then Google wins! Well Google research might still win the research front since they have no lack of ML researchers.
Not so long ago, Google declared code red after gpt4 had launched. I feel sorry for the engineers trying to stir the ML training pot based on the whims and tantrums of their CEOs 😄
“Hard to do” in relation to LLMs is purely a money thing and a data thing (and the data thing is actually a disguised money problem). The LLM architecture for a model that can fit on your laptop versus a model that needs 8 H100s to run is exactly the same and is just some variant on the transformer. The only difference is an increase in parameters and more training data. Getting training data is also just a money problem, because it requires huge teams of engineers to scrape the whole web and huge teams of basically slaves to then add human touches to the data. Creating these multi-trillion-token datasets is such an insane task, and it makes up probably 90% of the work. And it's a job that any technical staff member can work on even if they aren't in ML. Nothing Gemma did was revolutionary either. They have been using the TPU for years. All they did was curate a nice dataset, train some generic model they had, and beat some benchmarks. But because so much money is required for this, it's impossible to achieve unless you have Google type of money. So then these companies like Google and OpenAI use this fact to make it seem like these companies are run by genius savants who know more than anyone.
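A hedged illustration of the "same architecture, only more parameters" point above, using PyTorch's stock transformer encoder layer (the sizes are made-up examples, not any particular model's config):

```python
# Sketch: the "small" and "large" encoders share the exact same architecture;
# only width/depth (and hence parameter count) differ. Sizes are illustrative only.
import torch.nn as nn

def build_encoder(d_model: int, n_heads: int, n_layers: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                       dim_feedforward=4 * d_model, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=n_layers)

def param_count(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

small = build_encoder(d_model=512, n_heads=8, n_layers=6)     # laptop-sized
large = build_encoder(d_model=4096, n_heads=32, n_layers=48)  # multi-GPU-sized

print(f"small: {param_count(small)/1e6:.0f}M params")
print(f"large: {param_count(large)/1e9:.1f}B params")
```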
Well, Google's been at it for a decade. The irony is that the ML ecosystem for Nvidia ... was built by Facebook.
Sorry to hear about the divorce, I can imagine that was a tough time. Looking at the numbers: backing out returns on that, it looks like a 10.6% total return = (12+113)/113 over a decade (the corresponding annual rate of return is CAGR ≈ 1.01% per year). As mentioned elsewhere, this is below inflation, so the money has lost value in real (not nominal) terms. It's ok, let it go. The important question is what are you doing *now* to make sure that your future needs are being met by the capital that you have (and are earning in your job). This means considering how to invest your capital now so that your future needs are met. If you need help, a fee-only financial advisor or an RIA might be the best way to go if you don't feel confident about where to invest and what the tradeoffs are among assets and asset classes (I'm unsure if your ML person has fiduciary duty or not).
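For anyone who wants to check that arithmetic, a quick worked version of the same calculation (using the 113 starting value and +12 gain from the comment above):

```python
# Worked version of the return math above: ~10.6% total over 10 years ≈ 1.01% CAGR,
# which is below typical inflation, so the real (inflation-adjusted) value declined.
start, gain, years = 113.0, 12.0, 10

total_return = (start + gain) / start - 1          # ≈ 0.106  -> 10.6% total
cagr = (1 + total_return) ** (1 / years) - 1       # ≈ 0.0101 -> ~1.01% per year

print(f"total return: {total_return:.1%}, CAGR: {cagr:.2%}")
```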
There's a reason most of the old ML folks left after the firm was acquired by BoA ...
Not only that, he's betting against the only hardware platform that can run ALL cutting edge ML models with ease. People outside the ML software industry don't *really* understand this, but Nvidia is completely unmatched because of how tightly integrated they are with the software stack for AI/ML models. Nvidia has a moat, and it is fucking enormous
ML convinced me to start a managed account with a 100k. After 2 years it had made 2k. I dumped immediately. They have a long term approach while collecting their fees with a grin. I am not even sure who does the investing because it was so bad..interns?
Please read A Simple Path to Wealth by JL Collins. It's available as an audiobook on Spotify. ML sucks. Move your money to Charles Schwab or preferably Fidelity. Learn how to budget. Start being more involved in your own life and finances.
Whoa, hold your horses and don't do anything hastily, especially based on "advice" from Reddit. While what everyone here is saying is correct, i.e. be more active and find low cost index funds, THE WAY YOU EXECUTE the plan has to be well thought out. Because we only have limited information, WHAT YOU SHOULD DO depends on having a complete picture. As far as I can tell, you are in some kind of tax advantaged annuity, so you really need to make sure you move out of it judiciously to avoid any potential capital gains tax or penalties. Unfortunately, you will need someone who has access to information about your funds and the knowledge base to recommend HOW TO MOVE your money. Especially given your lack of any financial literacy (not meant as an insult, it's just a statement of fact), you want someone like an FA to help you move the funds into an appropriate account type (I'm assuming this is a tax advantaged retirement fund, so you'd need an IRA of some type). Again, don't do anything hastily, because the money spent on an advisor to help you move the funds (or if you already pay for it through ML and they are fiduciaries) will be well worth it in terms of preventing a VERY costly taxable event and/or penalties!
I work in companies where we have done training of ML software. Because of the importance of the dataset, generally, good care is taken while training it as ultimately, you must be sure the answers you're telling it to mimic are indeed accurate. More sarcastically: you ... don't already think the internet is full of garbage?
98% of posts in stock subs about this are circular and ignore public evidence, earning call transcripts, and financial statements. While also having zero context for ML and semis. It's not hard, they should try to rent a DGX A100 node and see how it goes.
NVDA has many technical problems to navigate now, which is why their exec team is starting to panic. GPUs are the equivalent of a Swiss army knife for many types of AI/ML training, whereas TPUs are a precision tool for LLMs (which underpin most AGI efforts). NVDA is focused on more performance per token, whereas GOOG is more focused on token and context window optimization. NVDA, even when increasing performance-to-power ratios, is not solving the power supply challenge, whereas Google is investing in micro reactors. My prediction: NVDA will fall off a cliff in late '26/early '27 when the market realizes there is not enough power in the world to achieve AGI using GPUs, whether NVDA's tech or anyone else's for that matter...
> NVIDIA GPU: Training, inference, graphics, scientific computing – almost everything. It's CUDA + ecosystem: every AI/ML engineer and infra relies on it. AI apps, data centers and inference demand still makes NVIDIA hold its position. Isn’t all of the revenue in the inference and training use-cases though? Which, as you mentioned, TPUs are applicable to? That’s a sincere question, I’m not an NVDA investor - I couldn’t tell you off the top of my head what percentage of NVIDIA revenue is datacenter (inference and training) but I suspect it’s a large majority. I’m curious to know now, actually…
AI / ML is literally just solving high degree polynomials on parallel threads to reduce computation time. LLM, ConvNet, LSTM etc all work on the same basic principle.
I took them both ML and spread. Who gives 7 points to a 8-3 team of grown ass men? Also hit Cinncy and da Boys. Missed on the Lions. Dammit
“Before AI”? Show me a tool or tech which could mine data at that scale before AI. No one is going to invest billions thinking some advances will be made in AI or ML god knows when in the future. Hence, saving that sort of granular data is more of a liability than an advantage.
Well, it uses TensorFlow, which has lots of ML applications, is all I'm saying - including neural nets, which are great for LLMs and vision detection.
TPUs were just FUD, Nvidia ain't going anywhere. Even Google themselves would say it was nonsense, and some of their researchers in fact do. I remember using Nvidia's GPUs on Google's cloud years ago lmao. TPUs are useful for an important yet comparably small fraction of ML applications. It's a shame that fields such as finance and economics are dominated by technologically (and scientifically) illiterate dumbasses mostly, so they panic sell shares like this. Then some clever funds will come and sweep it all from them, with Jensen buying himself another Rolex or leather jacket or smth idk.
Financial advisors, if they are from big firms like JPM, ML, etc., can give you access to alternative investments. I don't mean bitcoin or gold, I mean private equity, exchange funds, VC funds, etc. that you wouldn't be able to get in the open market. Imagine if you were a VC invested in Anthropic a couple of years ago, you would be banking right now! Alternative investments would be good for diversification, especially if you think the AI bubble and a recession will lead to a big correction in the market. One caveat is that private equity and exchange funds come with higher fees (I've seen up to 5%; they usually have a performance fee on top of admin fees if the investment is doing well), minimum size requirements (the smallest minimum investment that I've come across is $50k), and are for the most part less liquid. But if you're young and have excess money that you don't need for the next 7 to 10 years, it is not a big issue. With regard to fees, you can negotiate fee reductions as your portfolio grows. I've been able to get my fees reduced from 1% to 0.9%, to 0.7% over a 10-year period, and should be able to get it down to 0.6% in the next year based on growth projections. Also, if you're interested in private equity or exchange funds, have your financial advisor waive the placement fee.
I think as technology progresses, older tech depreciates and the floor grows along with the ceiling. For instance, we would have to regress pretty far to revert to an S&P price in the 200’s, like 1980’s era with no cell phones or laptops, or personal computers. We are past the dot com era w/r/t processing power, so the market floor is higher than 1500. Compute power is comparable but vastly superior to the 2010’s when I started building computers, so higher than 3000. Then there’s the COVID dip at 4000, when remote work kicked off. Track it up to ML models and LLM’s, which despite being in infancy are undoubtedly industry-changing. I don’t see the market crashing any bigger than it did for COVID, and I see a justification for continued growth.
Considering that I work in Data Science/ML, no doubt we do. The slick front-end that ChatGPT delivered a few years ago that's driven the latest push certainly has pushed LLMs into the mainstream. But there were many models before that also functioned. This is not new technology.
>The index treats the 151 million workers as individual agents, each tagged with skills, tasks, occupation and location. It maps more than 32,000 skills across 923 occupations in 3,000 counties, then measures where current AI systems can already perform those skills.

...

>The index is not a prediction engine about exactly when or where jobs will be lost, the researchers said. Instead, it's meant to give a skills-centered snapshot of what today's AI systems can already do, and give policymakers a structured way to explore what-if scenarios before they commit real money and legislation.

Per [the article](https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html). They are not stating that 11.7% of the workforce can or will be replaced; they are stating that 11.7% of the skills they have identified in the labor market can be done by AI, although they don't define what they mean by AI in the article so I assume this is just LLMs? The author just seems to be taking things out of context for the sake of making an article sound more exciting. Even [the actual paper published by the Iceberg team](https://arxiv.org/abs/2510.25137) does not state "11.7% of the labor market". They focus entirely on 'skills' that are identified as being core elements of different sectors of the labor market and what current technology can perform. Per the Iceberg paper:

>Beyond technology occupations, AI capabilities extend to cognitive and administrative work. Tools developed for coding demonstrate technical capability in document processing, financial analysis, and routine administrative tasks - illustrating how capabilities demonstrated in technology contexts translate to other domains. Some adoption is already occurring: IBM reduced HR staff through AI automation \[[26](https://arxiv.org/html/2510.25137v1#bib.bib26)\], Salesforce froze hiring for non-technical roles \[[29](https://arxiv.org/html/2510.25137v1#bib.bib29)\], and McKinsey projects that 30% of financial tasks could be automated by 2030 \[[15](https://arxiv.org/html/2510.25137v1#bib.bib15)\].

>We apply the same skill-overlap methodology to administrative, financial, and professional service occupations beyond the technology sector. The Iceberg Index for digital AI shows values averaging 11.7%—five times larger than the 2.2% Surface Index. Unlike technology-sector exposure concentrated in coastal hubs, this broader skill overlap is geographically distributed. South Dakota, North Carolina, and Utah show higher Index values than California or Virginia.

>Industrial states illustrate this pattern. Tennessee (11.6%) and Ohio (11.8%) show substantial Index values driven by administrative and coordination roles within factories and supply chains. These white-collar functions show technical exposure that maybe invisible to policymakers while states focus largely on physical automation. These patterns reveal where skill overlap extends beyond current visible adoption, though actual workforce impacts will depend on adoption decisions, quality thresholds, and organizational constraints (Figure [6](https://arxiv.org/html/2510.25137v1#S5.F6)(a)).

They talk entirely of 'skills', not replacing a certain percentage of the labor market. Just read the paper. The study does not at all discuss the infrastructure or energy requirements to facilitate the operation of LLMs (or other ML systems) at the scale required to mass-replace labor.
The study does not discuss or investigate whether or not current LLMs or other related technologies are actually capable of *replacing people.* None of this is to say that the paper does not have merit, but the article (and this post) are undoubtedly blowing the information in the paper way out of proportion.
I'm not, but I'm perfectly qualified by being unqualified. And that's despite having a CS degree and having built some simple ML models using PyTorch for image classification and reinforcement learning games. ML is a specialized skill and deep transformer models are a specialized skill within a specialized skill.
Porting entire pipelines over is absolutely necessary. How is there any other way to move their years of research and model development to entirely new hardware with its own unique software framework requiring entirely different model architectures? For the record, I think TPUs are fucking sweet. They're just too different from GPUs for the vast majority of top-level AI researchers to get the most out of. I think Google will benefit just as much as Nvidia from the AI boom, for different reasons. I'm invested heavily in both. I also work on Google's cloud platform every day, from their dev kit in ADK to ML models to deploying production agents in Agent Engine and with Gemini Enterprise endpoints. Their vertical stack is insane and allows them to have immense profits at every level. I also see how different their NN frameworks are, even at my level as a senior data scientist, and how that is a massive switching cost. That said, they will not significantly steal AI cloud customers from Nvidia for a very long time.
NVIDIA GPU

• Thousands of flexible CUDA cores
• SIMD/SIMT architecture
• Highly programmable
• Supports FP8, FP16, BF16, TF32, FP32, FP64 (varies by generation)
• Big L2 cache, high-bandwidth memory (HBM3/3e)
• Tensor Cores accelerate matrix multiplies
• Uses CUDA, the dominant AI software ecosystem

Google TPU

• Matrix multiplication units arranged into giant systolic arrays (e.g., 128×128 blocks)
• Very limited instruction set
• No graphics capability
• Designed for maximum efficiency on fixed ML patterns
• Uses HBM + interconnect optimized for Google's internal workloads
• Runs the XLA compiler and is tied tightly to TensorFlow and JAX
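As a side note to the comparison above (my own illustrative sketch, not part of the original comment): the usual way code stays portable across the two chip families is an XLA front end such as JAX, where the same Python function is compiled for whatever backend happens to be attached:

```python
# Sketch: the same jitted matmul runs on CPU, GPU, or TPU; XLA compiles it for
# whichever backend jax detects at runtime (jax.devices() shows what's attached).
import jax
import jax.numpy as jnp

@jax.jit
def dense(x, w, b):
    return jnp.dot(x, w) + b          # compiled by XLA for the available accelerator

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(k1, (128, 512))
w = jax.random.normal(k2, (512, 256))
b = jnp.zeros(256)

print(jax.devices())                  # e.g. [CpuDevice(id=0)] or GPU/TPU devices
print(dense(x, w, b).shape)           # (128, 256)
```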
Yes, the codebase has to change if folks have hard-coded to CUDA (presumably any of the larger NVIDIA customers do this to maximize ROI, but they are also the most well-positioned to rewrite to TensorFlow or whatever is the new hotness for TPU use in Google Cloud). TensorFlow continues to work on NVIDIA, but I have no idea how optimal it is or not.

The general advantage to the TPUs is going to be cost over time - less expensive per unit of work for Google to build, and they design and deploy a new generation roughly every year that delivers better efficiency per unit power. Yes, NVIDIA will continue to produce higher-density chips over time, too - but I don't believe they are as efficient at comparable tasks and the gap will continue to widen - but IANAMLP.

I suspect Google will have to discount TPU pricing vs. comparable NVIDIA pricing to attract customers afraid of vendor lock-in to TensorFlow, but their cost of goods to deliver those units of processing has got to be much lower. Presumably some tasks are more suited to CUDA (see [Google docs here](https://docs.cloud.google.com/tpu/docs/intro-to-tpu) for a list of tasks that aren't optimal on TPUs). I have a feeling larger companies will move to multivendor ML/GenAI provider sourcing for all of the same reasons they do so for general cloud compute today - price leverage. Yes, there is pain in having to write to N different APIs. There are some solution providers who abstract that away, but you have to pay a price for those software layers.

Here's how adoption goes for the little guys:

- startup founders DIY for a time on rented cloud AI, nudged toward one vendor by their benevolent VC advisers for keiretsu purposes
- eventually, the company scales so much that they negotiate a deal to get preferred bulk pricing from any one of the big vendors
- eventually, the company gets bent over so badly by that one vendor that they immediately rewrite on some sort of intermediate abstraction layer and pay the price to get access to deployment on the other cloud vendors, so they get some pricing leverage back
- eventually, the company gets big enough to make it worthwhile to rewrite directly to each cloud vendor's APIs and make their own abstraction layer

At any point along the way, the little guys may die, get acquired, or stall out at a size where it doesn't make sense to go to the next stage.

Here's how adoption goes for the big-sized guys whose primary competency is not computer systems:

- endless RFPs for years, handheld by consultants; eventually a deal is inked and the consultants get paid handsomely to start moving workloads into the cloud
- the solution gets rebuilt a few times over the ensuing years, never quite working as advertised, but well enough to claim some victories for director and VP promotions
Because models come in all different sizes and use different tensor operations. At the end of the day you need 1) software where kernels are tailored to your PEs, 2) lots of HBM, and 3) a sensible programming model. There are a million other issues, but ML workloads aren't as fixed-function as people might think.
You can still do non-LLM ML workloads
Google is a long term hold. One of the biggest tech companies with the widest range of expertise. Good management and excellent leadership especially in ML and AI (Demis Hassabis).
Firstly, it is months not years. Secondly as has already been pointed out to you there are not huge amounts of engineers at this level of the tech stack. Third, you think the XLA developers can’t debug an XLA error? I can’t even. How long does it take a decent researcher to learn Jax? Well I hope for fucks sake they already know NumPy or they don’t belong in the field. XLA is not an unreliable dumpster fire and most engineers are not spending their time on weird custom ops that hit some undiscovered bug. Yes, every company is quite comfortable with “relying” on external engineering departments. They do so constantly and everywhere. My god, I’m relying on Apples engineering department to write this message, who are relying on ARM, who are relying on… > If you wish to make an ~~apple pie~~ ML tech stack from scratch, you must first invent the universe Carl Sagan
AI video models can easily run on TPUs. Google has [explicitly confirmed](https://cloud.google.com/blog/products/compute/ironwood-tpus-and-new-axion-based-vms-for-your-ai-workloads) that Veo (their line of video models) runs on TPUs. Video models don't use the rasterization pipeline and instead use the same operations as any other large transformer-based ML model: a ton of matrix multiplies + a little bit of vector processing for nonlinear activations + a moderate amount of shuffling data around. Sure, a TPU doesn't have specialized graphics units like raytracing cores or ROPs, but those aren't useful for video models anyways since they don't even touch the traditional rasterization pipeline. Even Nvidia has been cutting these from their datacenter AI GPUs to minimize wasted space and maximize perf/mm2. Technically there are still a few vestigial ROPs on the GB100 for firmware compatibility reasons, but they've been cutting them down every generation and they're likely to be removed entirely soon.
As an ML person, I care because none of the optimizations I want to use exist unless I'm targeting CUDA, and writing those optimizations myself is immensely painful and a different skillset than what I do.
I'll detail it for you. Duh, most people don't code CUDA by hand. That's the whole point. CUDA isn't about the syntax or code, it's the entire kernel/tooling ecosystem underneath PyTorch and TF. You can abstract it away, but you can't replace it. That's why AMD, AWS, Google, etc. all have to build their own backend compilers just to get in the same ballpark. Yeah, PyTorch "runs" on TPUs, but performance, kernels, debugging, fused ops, all the shit that actually matters at scale still lives in CUDA land. That's why every major lab, including Anthropic, still trains their SOTA models on NVIDIA even if they sprinkle inference on other hardware. The CUDA moat isn't devs writing CUDA. It's that the entire industry's ML stack is built around it. Google can afford to live inside their own TPU world. Everyone else can't and will run on CUDA.
The ASIC nonsense is a ridiculous differentiation, and Nvidia's rather pathetic cope statement is trying to feed into misinformation. Like, the core thing ML is using in large deployments is tensor cores. Basically ASICs custom built for MAC/FMA. Just massive matrices being fused-multiplied, with biases added, trillions of times. Which is precisely what a TPU does. Indeed, a TPU has a pretty robust CISC instruction set, and then has an ARM64 orchestrator, and basically the entire imaginary "we're general and they're an ASIC" difference disappears.
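To make the "multiply big matrices and add biases, trillions of times" point concrete (a generic sketch of my own, not tied to any particular accelerator), this is essentially the single operation both tensor cores and TPU systolic arrays are built around:

```python
# Sketch of the core ML workload: Y = X @ W + b, a fused multiply-add over matrices.
# torch.addmm fuses the bias add with the matmul; on suitable NVIDIA hardware in
# reduced precision this is the kind of op that gets dispatched to tensor cores.
import torch

batch, d_in, d_out = 1024, 1024, 1024
x = torch.randn(batch, d_in)
w = torch.randn(d_in, d_out)
b = torch.randn(d_out)          # broadcast across the batch dimension

y = torch.addmm(b, x, w)        # bias + x @ w in one call
print(y.shape)                  # torch.Size([1024, 1024])
```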
"Sure and why do you think AMD gpu adoption for AI/ML is so abysmal. " Because AMD had *dogshit* contributions to the ML framework for years. Not only did they contribute little, they then tied it to very specific pieces of hardware. Where nvidia knew how important it was and contributed heavily to these projects to make them effortless on almost any nvidia hardware, including laptops, low end graphics cards, etc. But now everyone realizes how important this is. Google added Pytorch/XLA to make running models on TPUs relatively straightforward. As the other person said, the moat basically got filled in.
Sure and why do you think AMD gpu adoption for AI/ML is so abysmal. It’s because PyTorch et al are perf optimized for CUDA and the AMD drivers and support isn’t anywhere near as mature
Job postings are meant to cast as wide a net as possible when trying to attract specific talent, not sure if that’s necessarily the best indicator of actual market share. Also, we aren’t talking about our average ML job applicants. The software engineers actually programming the bleeding edge LLMs and GenAI architectures at places outside of Google are the very top level mathematicians and scientists that got to where they are because of their highly specialized expertise in the architectures behind the popular models. None of these architectures are JAX. Llama 4, Anthropic Claude, OpenAI, Deepseek, you name it, are all CUDA. You do not risk retraining these experts.
Their GPUs are basically ASICs at this point. They have “tensor” cores that are purpose-designed for ML. The other challenge is CUDA: the software moat is very high.
Come on JAX is mentioned in like 80% of professional ML job ads
TPUs aren't new. AI changes too quickly for ASICs to stay relevant long enough without having to redesign them. If they do create something that can adapt, or some kind of framework for new LLM/ML that reduces that obsolescence, then yes, they will outscale GPUs. It's the same kind of principle as with Bitcoin miners. ASICs far outperform GPUs but can only do one thing (SHA256). If Google creates TPUs for their own model and only that, they can def destroy the competition as they are far more cost efficient than GPUs, and it will force people to go with Google as the TPUs will only work with their models. Sure is a threat to OpenAI as they have no edge.
Here’s one for the ML needs - if Meta picks up TPUs, is it PyTorch or Tensorflow?
because they think they're ML architects now
You all really think AI doesn't have use cases? LMFAO, I have bad news for you. That entire argument about "sheer momentum" is missing the point. AI isn't some vaporware running on hopes and dreams; it's a massive efficiency engine already deployed in nearly every sector of the economy. We're talking about present-day results, not future speculation: Amazon uses it for warehouse robotics and logistics, Palantir and defense sectors rely on it for predictive intelligence and threat modeling, and in medicine, it's already beating humans at diagnosing specific cancers from MRIs. It's maximizing throughput, cutting labor costs, and saving billions in R&D. The money being invested isn't just investors doubling down on a hope-fueled bubble, they're scaling deployment for a technology that's already proven it can generate trillions in marginal profit. Every industry, from algorithmic trading in finance to customer service bots, is now reliant on ML models. Sure, monetary tightening will pop some speculative valuations, but it won't kill the essential technology that's keeping the lights on in modern business operations. The use cases are already here, and they are demonstrably producing ROI.
Sure. But that's the nature of business. Thermo Fisher Scientific still makes money when failing companies with no future buy products to conduct laboratory research. That doesn't mean TMO isn't also supplying a rockstar in the making with a fantastic drug in the pipeline. Same thing with Nvidia: as long as there is a general use case for AI and ML, their shovels and others' will continue selling. The dot-com bust also left phoenixes rising from the ashes to become some of the largest companies in the world.
Am I just attracting shitty AI bots powered by garbage ML today or some shit? Who the fuck would even put MU in the same category as pharma/biotech? It's up 156% YTD and you think it has very little upside when the demand for memory chips has barely begun? Are you retarded?
this. ML algorithms are nothing new. LLMs don’t seem that useful to scientific discovery tbh
Assistance with ML is very different. Both VS Code and VS have ML-assisted completions, for example. For me, "written by AI" means using agent modes to produce code and push it.
lol all the engineers at Nvidia code in Cursor. I worked at FAANG this summer and my boss estimates 80% of code is written with the assistance of ML.
1. Tensor cores are a rebrand of CUDA cores and the main addition was stuff for upscaling and raytracing. That's why older cards with lots of VRAM are actually pretty good for AI work.
2. ML/AI is just the computation of billions of sigmoid functions in big matrices. This is something GPUs are basically built for, there's no "oh but they weren't built for AI" nonsense here. The fastest AI processors are still NVIDIA cards.
3. Google's TPUs are not commercially available, lack the driver/support infrastructure of GPUs, and have no resale value because you can't use them for something else.

The real risk for NVIDIA is its own used products flooding the market if the bubble pops and all these startups/datacenters find themselves insolvent, much like what happened with crypto, but 100x worse. Consumers can't absorb datacenter GPUs like hobbyists could with Intel servers. Can't game on an H100.
>GPUs were NOT custom built to handle machine learning. GPUs are designed towards solving physics problems and generating dense graphics.

Wrong. Certain Nvidia GPUs are designed specifically for ML pipelines. You are mixing them up with consumer GPUs.

> For machine learning models you don't NEED GPUs anymore.

How so? Google TPUs are not even available for sale. And even if they were, do you think you can cover the entire world's demand for compute? No way... not even Nvidia can handle that at the moment: a 2-year backlog.

> NVIDIA also has 70-80% margins on their chips. That margin is now in question.

This is your opinion and there is nothing that would suggest that at the moment.

> A lot of their customers are developing their own custom chips.

Which customers? Google is one of the biggest Nvidia customers, even though they use it for the cloud business. Everyone else is securing compute, whether directly with Nvidia or through proxy neocloud companies.

You got it all wrong. I agree that Google is a very good bet at the moment, but this has nothing to do with Nvidia.
Tensor chips were custom built for machine learning workloads. That's what an LLM is. GPUs were NOT custom built to handle machine learning. They are very good at doing math which is why they are being used to handle ML work. GPUs are designed towards solving physics problems and generating dense graphics. For machine learning models you don't NEED GPUs anymore. That's what Google has proved out. NVIDIA also has 70-80% margins on their chips. That margin is now in question. Will GPUs still be used? Sure. Will they be NVIDIA GPUs? Maybe, maybe not. A lot of their customers are developing their own custom chips.
The actual large companies involved in the dot-com era were actually profitable. Largest participating companies in the Nasdaq 100 during dotcom: Cisco, Intel, Microsoft, Oracle, Sun Microsystems, Qualcomm, AOL, etc. These were fast growing companies that were massively profitable. People bring up pets dot com and other examples of the crazy valuations, but these were not even in the Nasdaq 100, and for example pets dot com reached a total of ~$300 million valuation (vs for example Cisco's $450 billion valuation). The likes of pets dot com might be better compared with Lovable, Model ML, Figure AI and similar unprofitable (sometimes pre-revenue) startups. And of course OpenAI, the largest pure AI provider, doesn't earn billions; it is currently massively loss-making, losing around $11 billion per quarter, and they have made more than $1 trillion in commitments.
To be clear: "AGI is the goal" is a media narrative. It's not the actual goal. It's a possible by-product if AI companies keep developing their technology instead of retraining new models and sending out marketing for them (there's a difference.) In this, Google is so far ahead of everyone else that they might as well be declared the winner. No one else is doing what Google is doing, in developing new kinds of chips specifically for the purpose of AI and ML. Their QC section is making huge strides. What they proved with the new Gemini release is that they sprint far ahead of everyone else on the basis of their R&D with everything else. Gemini is just the thing that helps them with the media narrative. It's not the core of development. Other AI companies are focused just on LLM development. Google is focused on the whole forest.
Air-gapped sovereign cloud sounds promising, but the hard part is the ML lifecycle: offline updates, supply-chain attestation, and cross-domain data movement without breaking classification rules. I’d watch how they handle keys, auditing, and vendor lock-in; clear exit plans, reproducible builds, and regular red-teaming will drive real trust.
I'm a ML eng in tech lmao. You might want to hit the textbooks bc you're not making sense
Yeah, that's my thought too. DL/ML has been doing a great job, I presume, without LLMs. How much more can you squeeze that lemon?
It’s amazing how people just refuse to hear the truth. All AI/ML/Neural Network workloads use the same hardware. The build out happening now will support all of these non chatbot workloads.
Not only are you not asking for it, you are also not paying for it. I fail to see where the money is flowing in. People claiming that ads will be better and generate more revenue are completely missing the point that ML has been used in advertisements for decades... On top of that, ads revenue comes from spent dollars. If the average Joe ends up on unemployment checks because of AI, they will have less disposable income.
I'm sorry, but having read your two previous comments and now this one, you're very clearly unprepared for any answer I might be able to give you as an extension of your misunderstanding of the technology at a fundamental level. If you want your feelings eased, you're going to need to actually learn about the underlying principles, and that's just not in scope for a Reddit comment. I've been at this for years; I can't catch you up in thirty seconds. Go start with a Udacity course on NNs and ML architecture.
AlphaFold is not an LLM. We've had Tesla Autopilot since like 2017 and still not much has improved. LLMs are where all the hype lives and they're a dud. AI and ML are amazing fields that will survive but cost way, way less than LLMs.
The general consensus among people with actual backgrounds in ML is that this is imminently going to be the most powerful technology in history. Empirically, progress over the last ~3 years has been quite a bit faster than most people expected. Most signs right now are pointing towards capabilities continuing to accelerate. There doesn't seem to be any fundamental barriers to continuing progress, which is why all of these insane multi-billion dollar infrastructure projects are being greenlit. Wall Street and the general public are obviously highly skeptical of this, but the bubble fears are overblown in the short term if you believe the experts.
Definition of a bubble: lots of demand and only a fraction of it actually providing value. Sales/support chatbots already existed, and they pull from pre-existing documentation, so that's a problem that's already solved. Vibe coding is a failure and always will be because ML always should've been tool-ish, not agentic. It does provide value, and can still be improved, but it will never replace programmers so its value is limited, a replacement for Stack Overflow. It's useful for searching things online, but that use is also limited, and Google used to do that much better without AI. People and businesses are hopping on the AI train for marketing and hype reasons, the idea that you need it, or at the very least advertise it, to compete with everyone else. In every single other use, "AI" is ML that should've been adapted for a tool-ish use, not agentic. Baking apps helping you with a pre-written recipe, farming machines sorting between rocks and tomatoes. Instead, ChatGPT ran on hype, selling the idea shown in futuristic media where you talk to an AI and it answers, acts, and learns like a human. Even the term "AI" is wrongful marketing; they're LLMs, they can't learn by themselves. Once people start tallying up the bills these AIs are costing them, they'll start seeing it's not worth the investment. It will still exist, but the bills and strategy around its application will be very different. That said, Google, ChatGPT and the like will be able to adapt the infrastructure to it, so they're probably right to invest in the hype.
Google has been doing this ML infra since forever. Their first TPU was all the way back in 2015
I don't know anything about investing in Intel or AMD (I do own a chunk of Nvidia shares though), but I do know the technical side of CPUs and GPUs since I work as an ML engineer... I seriously don't know if you are being sarcastic or not, but if you think you can compare i7 CPUs with Blackwell chips you are a full on regard. Godspeed!
This is kind of an incomplete answer. Google being able to replicate it is not the same as "everyone can replicate it". CUDA infrastructure is not considered the go-to simply for being the best infra to make the best out of Nvidia hardware (that's also a factor). It's because it was ingrained into parallel programming frameworks and ML frameworks like PyTorch and OpenCL from 2006, making it a ubiquitous framework. It means that if you're a startup with a good AI model but not enough capital to have your own hardware and compiler design team, Nvidia still presents you with the most straightforward option to get your company up and running. For instance, DeepSeek actually managed to leverage an AI infrastructure that doesn't depend on the CUDA compiler (they still used Nvidia hardware) and was able to optimise their design. But they still had the capital to have a dedicated compiler design team and former Nvidia interns and employees who tinkered with the design to figure these out. If you are more of an algo expert, you cannot afford to invest in that kind of a side quest. I think the biggest issue still remains the fact that most of Nvidia's customers apart from the hyperscalers still haven't really figured out how to recoup their investments. P.S. I haven't invested in Nvidia and am not particularly a fan of the company either. But it's a common misconception that being able to circumvent CUDA is the moat. It's not. It's mostly the ability to have a system that's more easily integrated than CUDA that makes it more of a moat.
Google already uses ML a lot in their products outside of LLMs, so do Amazon, Meta, etc. For the big companies even if AI turns out to be a dead end, they can still make use of their investments in GPUs/TPUs. I remember an interview with Zuck where he said they had to spend billions on GPUs just to *launch* Reels.
To all the folks that think i gamble my parents money LOL i held Gold to the top and took a 10% haircut there I got squeezed on the $PZZA buy out fake rumors and Apollo's nonsense Oh & also probably spent $100k holding UVXY and $30k on the Blue Jays ML for Game 6
Perfect time for this question, take tonight’s prime time football game for example. Bills were a -265 favorite against the Texans, most of the “public” (aka retail investor) money is on the bills ML or spread. Of course the Texans end up winning…same analogy as what’s happening in the stock market. When all logic/analysis/historical data points in one direction, the market goes the opposite direction and the public (retail investors) lose. The only way to “win” is to just buy and hold, and for sports betting, just don’t fucking bet at all.
Of course I have, but individual anecdotes at companies outside of the ML/SWE team at top tier tech companies don’t fucking matter. Only a hard would assume their individual viewpoint is similar to the top tier of engineering talent. Especially coming from a moron who doesn’t understand even the simplest shit like cloud platform differences, stock incentives, and just regurgitates the most surface level, basement dweller takes I can 100% guarantee you I make more than you do, have a bigger NW than you do, and have more ML and development experience than you do.
At least when I was in grad school a few years back Walmart had a surprisingly good ML research group