Reddit Posts
[Discussion] How will AI and Large Language Models affect retail trading and investing?
[Discussion] How will AI and Large Language Models Impact Trading and Investing?
Neural Network Asset Pricing?
$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
Moving from ML to Robinhood. Mutual funds vs ETFs?
Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
I'm YOLOing into MSFT. Here's my DD that convinced me
I created a free GPT trained on 50+ books on investing, anyone want to try it out?
Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
Option Chain REST APIs w/ Greeks and Beta Weighting
Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
AI/Automation-run trading strategies. Does anyone else use AI in their investing processes?(Research, DD, automated investing, etc)
🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
Do you believe in Nvidia in the long term?
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
Which investment profession will be replaced by AI or ML technology ?
WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
Training ML models until low error rates are achieved requires billions of $ invested
🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
AI/ML Quadrant Map from Q3…. PLTR is just getting started
$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
VetComm Accelerates Affiliate Program Growth with Two New Partnerships
NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
Netramark (AiAi : CSE) $AINMF
Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
How would you trade when market sentiments conflict with technical analysis?
Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
How are you integrating machine learning algorithms into their trading?
Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
Why I believe BBBY does not have the Juice to go to the Moon at the moment.
Meme Investment ChatBot - (For humor purposes only)
WiMi Build A New Enterprise Data Management System Through WBM-SME System
Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
The Squeeze King - I built the ultimate squeeze tool.
$HLBZ CEO is quite active now on twitter
Don't sleep on chatGPT (written by chatGPT)
DarkVol - A poor man’s hedge fund.
COIN is still at risk of a huge drop given its revenue makeup
$589k gains in 2022. Tickers and screenshots inside.
The Layout Of WiMi Holographic Sensors
infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
$APCX Huge developments of late as it makes its way towards $1
Robinhood is a good exchange all around.
Mentions
Porting entire pipelines over is absolutely necessary. How else could they move years of research and model development to entirely new hardware with its own unique software framework requiring entirely different model architectures? For the record, I think TPUs are fucking sweet. They're just too different from GPUs for the vast majority of top-level AI researchers to get the most out of them. I think Google will benefit just as much as Nvidia from the AI boom, for different reasons. I'm invested heavily in both. I also work on Google's cloud platform every day, from their dev kit in ADK to ML models to deploying production agents in Agent Engine and with Gemini Enterprise endpoints. Their vertical stack is insane and allows them to take immense profits at every level. I also see, even at my level as a senior data scientist, how different their NN frameworks are and what a massive switching cost that is. That said, they will not significantly steal AI cloud customers from Nvidia for a very long time.
NVIDIA GPU
• Thousands of flexible CUDA cores
• SIMD/SIMT architecture
• Highly programmable
• Supports FP8, FP16, BF16, TF32, FP32, FP64 (varies by generation)
• Big L2 cache, high-bandwidth memory (HBM3/3e)
• Tensor Cores accelerate matrix multiplies
• Uses CUDA, the dominant AI software ecosystem

Google TPU
• Matrix multiplication units arranged into giant systolic arrays (e.g., 128×128 blocks)
• Very limited instruction set
• No graphics capability
• Designed for maximum efficiency on fixed ML patterns
• Uses HBM + interconnect optimized for Google's internal workloads
• Runs the XLA compiler and is tied tightly to TensorFlow and JAX
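For illustration, a minimal sketch, assuming both PyTorch and JAX are installed and using arbitrary sizes and dtypes: the same matrix multiply, dispatched to cuBLAS/Tensor Core kernels on an NVIDIA GPU in the first path, and compiled by XLA (onto the systolic-array matrix units when a TPU backend is present) in the second.

```python
import numpy as np
import torch
import jax
import jax.numpy as jnp

# PyTorch path: on NVIDIA hardware this matmul lands in cuBLAS / Tensor Core kernels.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
a = torch.randn(1024, 1024, device=device, dtype=dtype)
b = torch.randn(1024, 1024, device=device, dtype=dtype)
c = a @ b

# JAX path: the same matmul is lowered by the XLA compiler; on a TPU backend it
# runs on the matrix (systolic-array) units, with no change to the user code.
x = jnp.asarray(np.random.randn(1024, 1024), dtype=jnp.bfloat16)
y = jnp.asarray(np.random.randn(1024, 1024), dtype=jnp.bfloat16)
z = jax.jit(lambda p, q: p @ q)(x, y)

print(c.shape, z.shape)
```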
Yes, the codebase has to change if folks have hard-coded to CUDA (presumably any of the larger NVIDIA customers do this to maximize ROI, but they are also the best-positioned to rewrite to TensorFlow or whatever the new hotness is for TPU use in Google Cloud). TensorFlow continues to work on NVIDIA, but I have no idea how optimal it is or isn't.

The general advantage of TPUs is going to be cost over time - less expensive per unit of work for Google to build, and they design and deploy a new generation roughly every year that delivers better efficiency per unit of power. Yes, NVIDIA will continue to produce higher-density chips over time, too - but I don't believe they are as efficient at comparable tasks, and the gap will continue to widen - but IANAMLP. I suspect Google will have to discount TPU pricing vs. comparable NVIDIA pricing to attract customers afraid of vendor lock-in to TensorFlow, but their cost of goods to deliver those units of processing has got to be much lower. Presumably some tasks are more suited to CUDA (see [Google docs here](https://docs.cloud.google.com/tpu/docs/intro-to-tpu) for a list of tasks that aren't optimal on TPUs).

I have a feeling larger companies will move to multivendor ML/GenAI sourcing for the same reasons they do so for general cloud compute today - price leverage. Yes, there is pain in having to write to N different APIs. There are solution providers who abstract that away, but you have to pay a price for those software layers.

Here's how adoption goes for the little guys:

- startup founders DIY for a time on rented cloud AI, nudged toward one vendor by their benevolent VC advisers for keiretsu purposes
- eventually, the company scales so much that they negotiate a deal to get preferred bulk pricing from one of the big vendors
- eventually, the company gets bent over so badly by that one vendor that they immediately rewrite on some sort of intermediate abstraction layer and pay the price to get access to deployment on the other cloud vendors, so they get some pricing leverage back
- eventually, the company gets big enough to make it worthwhile to rewrite directly to each cloud vendor's APIs and build their own abstraction layer

At any point along the way, the little guys may die, get acquired, or stall out at a size where it doesn't make sense to go to the next stage.

Here's how adoption goes for the big guys whose primary competency is not computer systems:

- endless RFPs for years, handheld by consultants; eventually a deal is inked and the consultants get paid handsomely to start moving workloads into the cloud
- the solution gets rebuilt a few times over the ensuing years, never quite working as advertised, but well enough to claim some victories for director and VP promotions
Because models come in all different sizes and use different tensor operations. At the end of the day you need 1) software where kernels are tailored to your PEs, 2) lots of HBM, and 3) a sensible programming model. There are a million other issues, but ML workloads aren't as fixed-function as people might think.
You can still do non-LLM ML workloads
Google is a long term hold. One of the biggest tech companies with the widest range of expertise. Good management and excellent leadership especially in ML and AI (Demis Hassabis).
Firstly, it is months, not years. Secondly, as has already been pointed out to you, there are not huge numbers of engineers at this level of the tech stack. Third, you think the XLA developers can’t debug an XLA error? I can’t even. How long does it take a decent researcher to learn JAX? Well, I hope for fuck's sake they already know NumPy, or they don’t belong in the field. XLA is not an unreliable dumpster fire, and most engineers are not spending their time on weird custom ops that hit some undiscovered bug. Yes, every company is quite comfortable with “relying” on external engineering departments. They do so constantly and everywhere. My god, I’m relying on Apple's engineering department to write this message, who are relying on ARM, who are relying on…

> If you wish to make an ~~apple pie~~ ML tech stack from scratch, you must first invent the universe

Carl Sagan
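A minimal sketch of that NumPy-to-JAX point, assuming numpy and jax are installed: the array code is nearly identical, and what JAX adds on top are composable transforms like grad and jit (XLA compilation).

```python
import numpy as np
import jax.numpy as jnp
from jax import grad, jit

# The NumPy version and the jax.numpy version are nearly character-for-character the same.
def mse_numpy(w, x, y):
    return np.mean((x @ w - y) ** 2)

def mse_jax(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

# What JAX layers on top: composable transforms (autodiff + XLA compilation).
grad_mse = jit(grad(mse_jax))

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(mse_numpy(np.zeros(3), np.ones((8, 3)), np.ones(8)))
print(mse_jax(w, x, y), grad_mse(w, x, y))
```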
AI video models can easily run on TPUs. Google has [explicitly confirmed](https://cloud.google.com/blog/products/compute/ironwood-tpus-and-new-axion-based-vms-for-your-ai-workloads) that Veo (their line of video models) runs on TPUs. Video models don't use the rasterization pipeline and instead use the same operations as any other large transformer-based ML model: a ton of matrix multiplies + a little bit of vector processing for nonlinear activations + a moderate amount of shuffling data around. Sure, a TPU doesn't have specialized graphics units like raytracing cores or ROPs, but those aren't useful for video models anyway since they don't even touch the traditional rasterization pipeline. Even Nvidia has been cutting these from their datacenter AI GPUs to minimize wasted space and maximize perf/mm2. Technically there are still a few vestigial ROPs on the GB100 for firmware compatibility reasons, but they've been cut down every generation and they're likely to be removed entirely soon.
As an ML person, I care because none of the optimizations I want to use exist unless I'm targeting CUDA, and writing those optimizations myself is immensely painful and a different skill set than what I do.
I’ll detail it for you. Duh, most people don’t code CUDA by hand. That’s the whole point. CUDA isn’t about the syntax or code; it’s the entire kernel/tooling ecosystem underneath PyTorch and TF. You can abstract it away, but you can’t replace it. That’s why AMD, AWS, Google, etc. all have to build their own backend compilers just to get in the same ballpark. Yeah, PyTorch “runs” on TPUs, but performance, kernels, debugging, fused ops - all the shit that actually matters at scale - still lives in CUDA land. That’s why every major lab, including Anthropic, still trains their SOTA models on NVIDIA even if they sprinkle inference on other hardware. The CUDA moat isn’t devs writing CUDA. It’s that the entire industry’s ML stack is built around it. Google can afford to live inside their own TPU world. Everyone else can’t and will run on CUDA.
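To make that concrete, a minimal sketch assuming PyTorch 2.x: perfectly ordinary framework code with no hand-written CUDA, yet on NVIDIA hardware each call resolves to pre-built CUDA kernels, and torch.compile hands the graph to a backend compiler that fuses and tunes kernels for the target device - the "CUDA land" tooling being described.

```python
import torch

# Ordinary framework code: no hand-written CUDA anywhere.
def mlp_block(x, w1, w2):
    return torch.relu(x @ w1) @ w2

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 512, device=device)
w1 = torch.randn(512, 1024, device=device)
w2 = torch.randn(1024, 512, device=device)

# Eager mode: each op above dispatches to pre-built kernels (cuBLAS matmuls,
# elementwise CUDA kernels) on NVIDIA hardware. torch.compile goes further and
# has a backend compiler fuse/auto-tune kernels for the target device.
compiled = torch.compile(mlp_block)
print(compiled(x, w1, w2).shape)
```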
The ASIC nonsense is a ridiculous differentiation, and Nvidia's rather pathetic cope statement is trying to feed into misinformation. Like, the core thing ML is using in large deployments is tensor cores - basically ASICs custom-built for MAC/FMA. Just massive matrices being fused-multiplied with biases added, trillions of times. Which is precisely what a TPU does. Indeed, a TPU has a pretty robust CISC instruction set and then has an ARM64 orchestrator, and basically the entire imaginary "we're general and they're an ASIC" difference disappears.
"Sure and why do you think AMD gpu adoption for AI/ML is so abysmal. " Because AMD had *dogshit* contributions to the ML framework for years. Not only did they contribute little, they then tied it to very specific pieces of hardware. Where nvidia knew how important it was and contributed heavily to these projects to make them effortless on almost any nvidia hardware, including laptops, low end graphics cards, etc. But now everyone realizes how important this is. Google added Pytorch/XLA to make running models on TPUs relatively straightforward. As the other person said, the moat basically got filled in.
Sure, and why do you think AMD GPU adoption for AI/ML is so abysmal? It’s because PyTorch et al. are perf-optimized for CUDA, and the AMD drivers and support aren't anywhere near as mature.
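A minimal sketch of the PyTorch/XLA path mentioned above, assuming the torch_xla package is available (exact entry points vary by release): the model code barely changes, and XLA compiles the queued ops for the TPU.

```python
import torch
import torch_xla.core.xla_model as xm  # requires the torch_xla package (shipped on TPU VM images)

# An ordinary PyTorch model moved onto an XLA device (a TPU core): mostly a device swap.
device = xm.xla_device()
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
loss = model(x).sum()
loss.backward()
xm.mark_step()  # cut the graph here: XLA compiles and executes everything queued so far
```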
Job postings are meant to cast as wide a net as possible when trying to attract specific talent, not sure if that’s necessarily the best indicator of actual market share. Also, we aren’t talking about our average ML job applicants. The software engineers actually programming the bleeding edge LLMs and GenAI architectures at places outside of Google are the very top level mathematicians and scientists that got to where they are because of their highly specialized expertise in the architectures behind the popular models. None of these architectures are JAX. Llama 4, Anthropic Claude, OpenAI, Deepseek, you name it, are all CUDA. You do not risk retraining these experts.
Their GPUs are basically ASICs at this point. They have “tensor” cores that are purpose designed for ML The other challenge is CUDA as the software moat is very high.
Come on JAX is mentioned in like 80% of professional ML job ads
TPUs aren't new. AI changes too quickly for ASICs to stay relevant long enough without having to be redesigned. If they do create something that can adapt, or some kind of framework for new LLM/ML work that reduces that obsolescence, then yes, they will outscale GPUs. It's the same principle as with Bitcoin miners: ASICs far outperform GPUs but can only do one thing (SHA-256). If Google creates TPUs for their own model and only that, they can definitely destroy the competition, as they are far more cost-efficient than GPUs, and it will force people to go with Google since the TPUs will only work with their models. Sure is a threat to OpenAI, as they have no edge.
Here’s one for the ML needs - if Meta picks up TPUs, is it PyTorch or Tensorflow?
because they think they're ML architects now
You all really think AI doesn't have use cases? LMFAO, I have bad news for you. That entire argument about "sheer momentum" is missing the point. AI isn't some vaporware running on hopes and dreams; it's a massive efficiency engine already deployed in nearly every sector of the economy. We're talking about present-day results, not future speculation: Amazon uses it for warehouse robotics and logistics, Palantir and the defense sector rely on it for predictive intelligence and threat modeling, and in medicine it's already beating humans at diagnosing specific cancers from MRIs. It's maximizing throughput, cutting labor costs, and saving billions in R&D. The money being invested isn't just investors doubling down on a hope-fueled bubble; they're scaling deployment for a technology that's already proven it can generate trillions in marginal profit. Every industry, from algorithmic trading in finance to customer service bots, is now reliant on ML models. Sure, monetary tightening will pop some speculative valuations, but it won't kill the essential technology that's keeping the lights on in modern business operations. The use cases are already here, and they are demonstrably producing ROI.
Sure. But that's the nature of business. Thermofisher scientific still makes money when failing companies with no future buy products to conduct laboratory research. That doesn't mean TMO isn't also supplying a rockstar in the making with a fantastic drug in the pipeline. Same thing with Nvidia, as long as there is a general use case for AI and ML theirs and others shovels will continue selling. Dot com bust also left phoenixes rising from the ashes to become some of the largest companies in the world.
Am I just attracting shitty AI bots powered by garbage ML today or some shit? Who the fuck would even put MU in the same category as pharma/biotech? It's up 156% YTD and you think it has very little upside when the demand for memory chips has barely begun? Are you retarded?
this. ML algorithms are nothing new. LLMs don’t seem that useful to scientific discovery tbh
Assistance with ML is very different. Both VS code and VS has ML assisted completions for example. For me written by AI means using agent modes to produce code and push it.
lol all the engineers at Nvidia code in Cursor. I worked at FAANG this summer and my boss estimates 80% of code is written with the assistance of ML.
1. Tensor cores are a rebrand of CUDA cores, and the main addition was stuff for upscaling and raytracing. That's why older cards with lots of VRAM are actually pretty good for AI work.
2. ML/AI is just the computation of billions of sigmoid functions in big matrices. This is something GPUs are basically built for; there's no "oh but they weren't built for AI" nonsense here. The fastest AI processors are still NVIDIA cards.
3. Google's TPUs are not commercially available, lack the driver/support infrastructure of GPUs, and have no resale value because you can't use them for something else.

The real risk for NVIDIA is its own used products flooding the market if the bubble pops and all these startups/datacenters find themselves insolvent, much like what happened with crypto, but 100x worse. Consumers can't absorb datacenter GPUs like hobbyists could with Intel servers. Can't game on an H100.
> GPUs were NOT custom built to handle machine learning. GPUs are designed towards solving physics problems and generating dense graphics.

Wrong. Certain Nvidia GPUs are designed specifically for ML pipelines. You are mixing them up with consumer GPUs.

> For machine learning models you don't NEED GPUs anymore.

How so? Google TPUs are not even available for sale. And even if they were, do you think you can cover the entire world's demand for compute? No way... not even Nvidia can handle that at the moment: 2-year backlog.

> NVIDIA also has 70-80% margins on their chips. That margin is now in question.

This is your opinion, and there is nothing that would suggest that at the moment.

> A lot of their customers are developing their own custom chips.

Which customers? Google is one of the biggest Nvidia customers, even though they use it for the cloud business. Everyone else is securing compute, whether directly with Nvidia or through proxy neo-cloud companies.

You got it all wrong. I agree that Google is a very good bet at the moment, but this has nothing to do with Nvidia.
Tensor chips were custom built for machine learning workloads. That's what an LLM is. GPUs were NOT custom built to handle machine learning. They are very good at doing math which is why they are being used to handle ML work. GPUs are designed towards solving physics problems and generating dense graphics. For machine learning models you don't NEED GPUs anymore. That's what Google has proved out. NVIDIA also has 70-80% margins on their chips. That margin is now in question. Will GPUs still be used? Sure. Will they be NVIDIA GPUs? Maybe, maybe not. A lot of their customers are developing their own custom chips.
The actual large companies involved in the dot-com bubble were actually profitable. The largest participating companies in the Nasdaq 100 during the dot-com era: Cisco, Intel, Microsoft, Oracle, Sun Microsystems, Qualcomm, AOL, etc. These were fast-growing companies that were massively profitable. People cite pets dot com and other examples of the crazy valuations, but these were not even in the Nasdaq 100, and pets dot com, for example, reached a total of ~$300 million in valuation (vs., for example, Cisco's $450 billion valuation). The likes of pets dot com might be better compared with Lovable, Model ML, Figure AI and similar unprofitable (sometimes pre-revenue) startups. And of course OpenAI, the largest pure AI provider, doesn't earn billions; it is currently massively loss-making, losing around $11 billion per quarter, and they have made more than $1 trillion in commitments.
To be clear: "AGI is the goal" is a media narrative. It's not the actual goal. It's a possible by-product if AI companies keep developing their technology instead of retraining new models and sending out marketing for them (there's a difference.) In this, Google is so far ahead of everyone else that they might as well be declared the winner. No one else is doing what Google is doing, in developing new kinds of chips specifically for the purpose of AI and ML. Their QC section is making huge strides. What they proved with the new Gemini release is that they sprint far ahead of everyone else on the basis of their R&D with everything else. Gemini is just the thing that helps them with the media narrative. It's not the core of development. Other AI companies are focused just on LLM development. Google is focused on the whole forest.
Air-gapped sovereign cloud sounds promising, but the hard part is the ML lifecycle: offline updates, supply-chain attestation, and cross-domain data movement without breaking classification rules. I’d watch how they handle keys, auditing, and vendor lock-in; clear exit plans, reproducible builds, and regular red-teaming will drive real trust.
I'm a ML eng in tech lmao. You might want to hit the textbooks bc you're not making sense
Yeah, that's my thought too. DL/ML has been doing a great job, I presume, without LLMs. How much can you squeeze that lemon?
It’s amazing how people just refuse to hear the truth. All AI/ML/Neural Network workloads use the same hardware. The build out happening now will support all of these non chatbot workloads.
Not only are you not asking for it, you are also not paying for it. I fail to see where the money is flowing in. People claiming that ads will be better and generate more revenue are completely missing the point that ML has been used in advertising for decades... On top of that, ad revenue comes from spent dollars. If the average Joe ends up on unemployment checks because of AI, they will have less disposable income.
I'm sorry, but having read your two previous comments and now this one, you're very clearly unprepared for any answer I might be able to give you as an extension of your misunderstanding of the technology at a fundamental level. If you want your feelings eased, you're going to need to actually learn about the underlying principles, and that's just not in scope for a Reddit comment. I've been at this for years; I can't catch you up in thirty seconds. Go start with a Udacity course on NNs and ML architecture.
AlphaFold is not an LLM. We've had Tesla Autopilot since like 2017 and still not much has improved. LLMs are where all the hype lives, and they're a dud. AI and ML are amazing fields that will survive, but they cost way, way less than LLMs.
The general consensus among people with actual backgrounds in ML is that this is imminently going to be the most powerful technology in history. Empirically, progress over the last ~3 years has been quite a bit faster than most people expected. Most signs right now are pointing towards capabilities continuing to accelerate. There doesn't seem to be any fundamental barriers to continuing progress, which is why all of these insane multi-billion dollar infrastructure projects are being greenlit. Wall Street and the general public are obviously highly skeptical of this, but the bubble fears are overblown in the short term if you believe the experts.
Definition of a bubble: lots of demand, and only a fraction of it actually providing value. Sales/support chatbots already existed, and they pull from pre-existing documentation, so that's a problem that's already solved. Vibe coding is a failure and always will be, because ML always should've been tool-ish, not agentic. It does provide value, and can still be improved, but it will never replace programmers, so its value is limited: a replacement for Stack Overflow. It's useful for searching things online, but that use is also limited, and Google used to do that much better without AI. People and businesses are hopping on the AI train for marketing and hype reasons, the idea that you need it, or at the very least need to advertise it, to compete with everyone else. In every single other use, "AI" is ML that should've been adapted for a tool-ish use, not agentic. Baking apps helping you with a pre-written recipe, farming machines sorting between rocks and tomatoes. Instead, ChatGPT ran on hype, selling the idea shown in futuristic media where you talk to an AI and it answers, acts, and learns like a human. Even the term "AI" is misleading marketing; they're LLMs, they can't learn by themselves. Once people start adding up the bills these AIs are costing them, they'll start seeing it's not worth the investment. It will still exist, but the bills and the strategy around its application will be very different. That said, Google, ChatGPT and the like will be able to adapt the infrastructure to it, so they're probably right to invest in the hype.
Google has been doing this ML infra since forever. Their first TPU was all the way back in 2015
I don’t know anything about investing in Intel or AMD (I do own a chunk of Nvidia shares though), but I do know the technical side of CPUs and GPUs since I work as an ML engineer… I seriously don’t know if you are being sarcastic or not, but if you think you can compare i7 CPUs with Blackwell chips you are a full-on regard. Godspeed!
This is kind of an incomplete answer. Google being able to replicate it is not the same as "everyone can replicate it". CUDA infrastructure is not considered the go-to simply because it's the best infra for getting the most out of Nvidia hardware (though that's also a factor). It's because it was ingrained into parallel programming and ML frameworks like PyTorch and OpenCL from 2006, making it a ubiquitous framework. It means that if you're a startup with a good AI model but not enough capital to have your own hardware and compiler design team, Nvidia still presents you with the most straightforward option to get your company up and running. For instance, DeepSeek actually managed to leverage an AI infrastructure that doesn't depend on the CUDA compiler (they still used Nvidia hardware) and was able to optimise their design. But they still had the capital to have a dedicated compiler design team and former Nvidia interns and employees who tinkered with the design to figure these out. If you are more of an algo expert, you cannot afford to invest in that kind of a side quest. I think the biggest issue still remains the fact that most of Nvidia's customers apart from the hyperscalers still haven't really figured out how to recoup their investments. P.S. I haven't invested in Nvidia and am not particularly a fan of the company either. But it's a common misconception that being able to circumvent CUDA is the moat. It's not. It's mostly the ability to have a system that's more easily integrated than CUDA that makes it more of a moat.
Google already uses ML a lot in their products outside of LLMs, so do Amazon, Meta, etc. For the big companies even if AI turns out to be a dead end, they can still make use of their investments in GPUs/TPUs. I remember an interview with Zuck where he said they had to spend billions on GPUs just to *launch* Reels.
To all the folks that think i gamble my parents money LOL i held Gold to the top and took a 10% haircut there I got squeezed on the $PZZA buy out fake rumors and Apollo's nonsense Oh & also probably spent $100k holding UVXY and $30k on the Blue Jays ML for Game 6
Perfect time for this question, take tonight’s prime time football game for example. Bills were a -265 favorite against the Texans, most of the “public” (aka retail investor) money is on the bills ML or spread. Of course the Texans end up winning…same analogy as what’s happening in the stock market. When all logic/analysis/historical data points in one direction, the market goes the opposite direction and the public (retail investors) lose. The only way to “win” is to just buy and hold, and for sports betting, just don’t fucking bet at all.
Of course I have, but individual anecdotes at companies outside of the ML/SWE team at top tier tech companies don’t fucking matter. Only a hard would assume their individual viewpoint is similar to the top tier of engineering talent. Especially coming from a moron who doesn’t understand even the simplest shit like cloud platform differences, stock incentives, and just regurgitates the most surface level, basement dweller takes I can 100% guarantee you I make more than you do, have a bigger NW than you do, and have more ML and development experience than you do.
At least when I was in grad school a few years back Walmart had a surprisingly good ML research group
This is exactly the issue though - what happens to all the people who are currently paid to do those jobs? If truckers/bus drivers, shop assistants, admin/white collar middle mgmt workers, factory workers and uber drivers are all going to be made obsolete in the coming decades, how do people afford the products and services that AI-powered machinery now largely takes care of? What happens to tax revenue when automated workers don’t pay taxes? What happens to the stock market when all those workers no longer exist to pay into their pension fund? The AI utopia that big tech are selling is currently based on the premise of how much money can be saved using AI to automate tasks and increase productivity, ultimately requiring less paid staff (self-checkouts at grocery stores and point-of-payment services being an example we all experience daily). But there’s no corresponding vision for how this will maintain the job market for the average *human* - aside from those already in tech, mining and construction, ie. industries involved in AI/ML, robotics and data centres. It feels akin to your boss wanting you to be excited about training your replacement for the majority of people right now.
ML was AI back in the day with stuff like scikitlearn. Now it has progressed into LLMs, and then image/video generation. Not a huge industrial revolution, but it is a potent combination of marketing use cases and technological advancement.
Uh huh. I see. Gosh, you almost wonder why NVIDIA market cap is so high when everyone can run an AI on their PC. What are all those gigawatts even for? Anyway, how are your toy "AI" ML going to ensure that AI isn't used by capitalism to surveil, constrain, and generally enshitify?
I'm so dumb I've been making money with ML since the 90s and have only worked a few hours a year in the last decades. You can't even understand what you write; what you read is totally hopeless.
Only I've been a heavy user and making money with ML since the 90s, and I only needed to work a few hours a year in the past decades. I'd be totally ok with doing a blue collar job because this also means creating **real** value, things people **use** and make their lives better.
I'm concerned with Jensen having to reassure everyone on the call about all the other industries (including classical ML) where Nvidia is being utilized when the caller asked about the AI bubble. Also Google's CEO's claim of irrationality in the AI market.
Only AMD is a meaningful competitor in the scene. Intel is catching up, but is still very far behind. This is a market with huge barriers to entry because of the knowledge and expertise required. If you've dealt with AI, the TPU is not really the most user-friendly tool to work with; Google is the biggest consumer of its own tools and isn't planning to release them to the open market, even for something like edge compute. They want to package it so that people buy through Google. The challenge for AMD is simply CUDA. CUDA is still the most widely used tool for interacting with the GPU. Ask most engineers which GPU they'd choose for serious ML-related work, and they'd still recommend an Nvidia GPU due to CUDA support. For inference, yes, it is much less restrictive (i.e., you can use AMD or Intel GPUs), but it's still less stable and flakier.
> driving what products you’re recommended

This isn't what they're spending massive amounts of capex building, though. They are spending that on scaling up LLMs, which everybody agrees now is a dead end. Yann LeCun, Andrej Karpathy, etc. That recommendation stuff already exists, is called ML, and while still growing, is not going to be worth trillions per year, and LLMs aren't going to enhance it at all.
It's cuz you don't have a forward deployed solutions engineer. Not all the automation is "hey chatgpt do my job." You need someone who knows AI/ML, who understands business processes/systems, and who has the ability to automate (i.e., that's me, and it's why I keep getting recruiters hitting me up for $200/hour 6-to-12-month contracts). There isn't a business process or report that I can't automate.
because his whole thesis is

>China’s top AI labs are developing models that are dramatically more compute-efficient requiring far less energy, fewer GPUs, and much smaller training pipelines. With breakthroughs in algorithmic efficiency, sparsity, low-rank methods, and new ML theory, we’re heading toward AI systems that no longer need the brute-force hardware NVIDIA built its empire on.

No one knows when that is coming, nor what kind of barriers China is encountering in said development, what the timeline is, what the costs are, or what the bottlenecks are
GOOG’s P/E ratio is currently higher than META’s yes, but I would argue the ratio is justified because GOOG is better positioned to monetize AI. Meta’s main lever for AI monetization is to supercharge their ad targeting and user engagement. Google has this lever as well, but in addition to that they have Google Cloud, Waymo, and a few moonshot bets. As demand for compute increases, GCP will benefit as it is able to rent access to their infrastructure and platform, which includes their own custom TPUs in addition to the GPUs they’re purchasing from NVIDIA. Waymo is one of the most tactile examples we have of AI/ML being applied in the real world. It started as a moonshot, but it is increasingly being seen as a legitimate business. Waymo is far and away the dominant player in this space because Google has been working on AVs for over a decade. Tesla is probably the closest competitor, but it isn’t close.
Well, in that case...

> I can tell you are not an engineer [...] Non-Technical people rarely understand [...]

I have an MSc in Data Science and I've worked ~3 years as an SRE and ~3 years as an MLE, both at top companies. Btw, your example being "Django" and not some ML-related task makes it clear *you* aren't working in the field. Your comment ignored most of what I said, created a strawman ("LLM doesn't allow an intern to perform [senior work]"), and went off of that, rambling about LLMs and vibe-coding. I didn't say interns will perform senior work, nor did I say it was for coding. I gave an example of how a specific computer vision problem that was insanely hard 10 years ago with just traditional CV and barely-working ConvNets is now almost trivial with off-the-shelf VLMs. Here, read it again:

> Seventh: Usefulness - ease of use. LLMs (and related research) really redefined what's possible, and what's easy. **Let's say you wanted to make an app that counted how many unique people visited your shop per day**. Just 5 years ago you'd need a highly capable data scientist working on this for weeks or months. Today your cheap junior developer from Lidl can call an LLM API and it will likely work okay.

Your other point about build vs. ongoing costs / maintenance is valid, but it is very case-dependent and probably not very meaningful for this example. It doesn't take the same amount of maintenance to keep a simple static site up as it takes for some huge system that depends on 50 other services. Similarly, a simple CV/VLM-based app with one specific and narrow goal may be able to run perfectly fine without any fixes for years; retraining isn't as necessary as it used to be. Even if it is, assuming the initial work is correctly done and a framework is in place, retraining, monitoring, alerting, etc. become almost trivial. I know because we have production models deployed that need near-zero maintenance and are running fine, and we also have training pipelines set up with automatic ingestion of new data, retraining, publishing, and all the other goodies. Maybe you just worked at B-tier teams/companies that are simply yoloing their AI/ML projects?
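As a concrete illustration of that "call a VLM and it mostly works" point, a minimal, hypothetical sketch against an OpenAI-compatible vision endpoint (the client setup and model name are assumptions, not the commenter's actual stack). It only counts people per frame; deduplicating visitors across a whole day would still need tracking/re-identification on top.

```python
import base64
from openai import OpenAI  # any OpenAI-compatible vision endpoint works similarly

client = OpenAI()  # assumes an API key in the environment; model below is a placeholder

def count_people(image_path: str) -> str:
    # Encode a single camera frame and ask the VLM for a person count.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: swap in whatever vision model you actually use
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "How many distinct people are visible in this image? Reply with a single number."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```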
I understand the difference. They are very related. LLM is a kind of ML in a genre specific definition. There will be less and less distinction as LLMs evolve. ML is more traditional and kind of a precursor. Much of the innovation value yet to be unlocked requires unstructured data and a ton of computational power. Been a limitation of trad ML. Hence why LLMs will be increasingly useful
I’m up 3.2k on nba so far this in 11 games, rotate out of tech stock to being all in on VJ Edgecombe fucking balling daily and all my ML/Spread bets hitting
**Before adopting intelligent monitoring, Ford’s EV battery testing facility** MSAI is just asking to use testing environments to study their IR tech combined with licensed AI/ML tools (not their own AI/ML tech, just bought in).
They use Amazon AWS for hosting and the AWS AI/ML testing environment - MSAI is just the client and Amazon the provider. MSAI is not on the Global Partners list and Amazon has announced no cooperation. Everyone who is using AWS servers and the testing environment is an "AWS Partner" but is not in a real Amazon partnership. Some trolls bought MSAI low and are trying to convince people otherwise. Through AWS AI/ML testing, MSAI has access to the cams and robots to test IR-cam images. These investment trolls try to twist it like they are implementing their tech there. MSAI made a tour through one Amazon AWS testing warehouse; the trolls here try to sell it as a business meeting.
As I've already said >From a technological perspective ML integration at an industrial level is promising
Since when is Nvidia an AI company only? They are a monopolist for consumer-grade GPU hardware. They own 80% of the gaming market, which has 0% to do with AI; they are the monopolist for crypto mining, for data centers doing non-AI compute, for everything you need a GPU for that is not related to AI. AND AI is not just ChatGPT, it's every ML model; every fucking model runs on Nvidia - your banking fraud detection, your car, everything. The bubble is LLMs, which Microsoft blasted 80 billion into, named OpenAI.
I agree with 1 and 2 but you are wrong on 3. From a technological perspective ML integration at an industrial level is promising, but LLM adoption is only going to provide value in services. People don't understand how it works and thus don't understand its limitations especially in real production environments. It's not nVidia or the big players like Google and Microsoft that shape the AI bubble. It's the dozens of AI-first service firms that are overvalued and offer nothing but promise.
Customers *are* paying for it. Again, the freebie public casual ChatGPT stuff is a drop in the bucket and not the big picture with industrial applications. As the saying goes, if you're not paying for a product you *are* the product. I wouldn't be surprised if, when it comes to the publicly available stuff, if it's all just extra input for further model development; ML loves having lots to chew on.
I am not saying anything different. What I am saying is that this chatbot stuff does not have much more to offer than it does now; there is not much going forward. We made the ML leap a decade ago. Industrially applicable stuff like computer vision was already there; LLMs are just the most impressive client-side application. I have been in the machine learning space for a decade now as a researcher and a financial AI developer, and the only real value I see for my field is parsing large volumes of text data, which I then have to handle with conventional ML. I am not saying LLMs are useless; I am saying that they haven't been as revolutionary as we first expected.
Those existed prior to LLMs, and they did not gain this much momentum. They were confined mostly to the software engineer landscape. LLMs are arguably the most impressive client side ML application. And, Google alongside a few companies like Meta and Amazon were at the forefront of ML in general. I am talking mostly about the genai stuff.
I didn't say it's going anywhere. It's just that AI has not proven its worth other than being an (extremely good) personal assistant. That's excluding machine learning; I am focusing solely on the GenAI stuff. Google offers a pretty sophisticated ML ecosystem, which puts it miles ahead of OpenAI and the multitude of AI startups by default. Nvidia also; their chips were already in use before the GenAI wave (you could use Nvidia infra from Google's Colab environment for at least 5 years now). That's the AI hype; I am not talking about startups that design ML pipelines and systems to diagnose illnesses etc., which have been around for at least a decade now. It's the GenAI that's the balloon. Society doesn't need weird videos of cats making pasta. It needs heavy-ass industry to feed billions, it needs sustainable energy infra, it needs drugs that cure disease. All those needs, and yet billions are being poured into OpenAI. That's the balloon.
“AGI tomorrow” The current AI hype has managed to extend what is basically Siri/Alexa out over years and convince the dumbest people that they can be rich and eliminate jobs. The hype keeps shifting from LLM to image/video gen and nobody has stopped to see it’s all noise with no real value. ML is amazing, but it isn’t something an LLM can shit out on command for any moron. It takes a lot of effort to get right and each application is unique. For example: https://youtu.be/DcYLT37ImBY?si=KIPhb1IyMYf1tiKD Already seeing some hype shift away to robotics with these bipedal robots people keep fundraising for. They mention AI as an afterthought, but it’s transparently just an Alexa/Siri. China is already miles ahead in robotics too so it’s just more grifting of American investors.
If it worked, why wouldn't countless ML algos just be exploiting it? And once they exploit it, it no longer works. Until you can explain how a human would be able to utilize something like TA when a sophisticated multi-million-dollar ML can't, you shouldn't think TA works.
If I were to put money on AI, it'd be Google because they're using their compute power for projects that will actually matter. OpenAI is still trying to make ChatGPT into a profitable consumer and institutional product. Basically same with Anthropic. But thats not where AI shines, even though that's where most of the compute power is going and why all of these data centers are getting built. It's unsustainable. Google's Gemini integrations are basically sidequests, though. Their really important stuff with AI and ML are developing products that show AI in the context of real productivity gains that matter to various fields, including in health, meteorology and other scientific fields.
You use the shiny new stuff for training models, and the slightly used shit for inference. Or image processing. Or whatever else where running on some sort of GPU is preferable to CPU-only. In our case it’s ML inference and image processing. Some of our researchers are working on H100s/H200s, but we’re still getting great mileage out of our older A100s. Hell, one of our guys is still running a DGX with fucking VOLTAS. Works well enough for him.
I agree it is inefficient, but disagree that it can’t scale. These companies haven’t figured out how to optimize compute per user query. The reasoning models are a step toward that optimization because it lets the model decide how long to “think” or use compute for a given problem. I think companies are going to continue to try and find a balance between performance and user satisfaction. That’s going to take time, and some major breakthroughs, both in ML research and hardware/software development. I do think it’s very possible. Just look at cloud computing 15 years ago and large-scale database architectures. They’ve changed tremendously with the introduction of Hadoop and Spark. These took time to develop, but has led to massive gains in cloud computing power, costs, and capabilities.
I'm not confusing ML with LLMs; that wasn't what I was saying. You're also not recognizing how useful LLMs are at processing audio into text and analyzing it.
This is probably the crux of it. Mom and Pop users are necessary. The LLM has to touch every level of the economy for the amount of investment that's being pumped in to be justified. Accelerating back end ML is fine and further product developments, but that's not gonna create a return fast enough to justify how much money is being pushed in... The other option is we replace a whole bunch of workers and somehow enable companies to save / profit billions. Or it's a bubble and will pop when no one wants to give OpenAI or Anthropic more money...
I was young but I remember it being more of an obvious bubble. It was pretty obvious that just having a website like socks.com wasn't going to bring in the cash, everybody was looking for monetization strategies, turning eyeballs into money. I'm not sure it's that obvious now with AI, at least with Gen AI. ML has been providing real value for a decade or more so we are really talking about LLMs. So far it's a time saver for a few use cases but it is an unreliable partner for others.
I don't think they will go away or even crash. The novelty might wear off for day-to-day users, but that's fine. A tiny blip. Daily mom-and-pop users aren't really the value. It's because it accelerates AI/ML development. I think it just looks crazy because it's novel. But that also doesn't mean the way it's being used now is the only way to use it. If you think about it, it's self-serving in that an LLM will accelerate its own advancement.
Ok, then LLM boom and LLM bubble? I guess I mean, money is flowing in to try to make better LLMs. Those had better justify the investment; otherwise OpenAI / Anthropic etc. go belly up and take down the S&P 500. I totally agree AI/ML and even LLMs are here to stay. It's just, are they going to crash the stock market first? (It's not like the internet went away after the dot-com boom/crash.)
That's what they said about regular CISC/RISC computers; in time any technology trickles down. Perhaps during the initial stages (as they should have done with AI/ML), such advanced computers could be institutionalised, with access granted to commercial entities through proven utility and a results-driven agenda.
That's an LLM, friend. To the layperson, AI = LLM over the last couple of years, but it really and truly means AI/ML. LLM is just a type of AI/ML, but it's not AI as a whole, nor does it represent even *most* companies who say they are leveraging AI.
Anyone recommend a good option trader. Believe it or not ML does not allow me to trade options
When you say AI, you mean LLM. When tech says AI, they mean AI/ML and sometimes LLMs. Everything, and I mean *everything* is touched by AI/ML.
Reclaim – AI Calendar for Work & Life https://share.google/ftKxcIxIv8nE7xc9S > I cant tell AI to do any function of my job more efficiently than I can do it. Is it manual work? If you're using a computer, there's probably *something* an LLM can help you make more efficient. Anyways, we're probably not talking about LLMs specifically, but more likely machine learning and algorithms. Everything, and I mean *everything* is touched by AI/ML. The clothes you're wearing, your tap water, your car... you'd have to live like an 1800s monk if you wanted to avoid AI/ML.
Cloud revenue jumps at this stage of adoption is more tied to increased spend from existing customers than acquiring new ones. More workloads shifting to cloud, increased spend related to ML/AI initiatives, always increasing nests, etc.
> Sam (more like Scam) Altman Third or fourth fastest upvote of my life. > I hope you read recent research papers [...] Yes, I read research papers (not as much as I'd want), and also the various researchers in the various teams in our department read a bunch too, and we organize events and presentations for knowledge sharing. And I agree that LLM capabilities are overstated by marketing & startups. But I almost always ignore all conversations about "AGI", "true intelligence", and the like, I prefer a more grounded and practical discussion, because often "it can't be done because XYZ" just translates to "we don't want to spend time to bother with implementation / engineering details or more complex approaches that might reduce error rates from an unacceptable 10% to an acceptable 5%". And I do think that everything that could be meaningful has not been tried yet, or at least not tried well enough. Many projects die (especially those with limited time, or without a researcher present) because people thought it would be as simple as: "throw in your documents in an embeddings model, use a vector DB, inject everything into 100K context capable LLM, profit". Or (from my last job) "just feed the logs into the LLM and have it run terminal commands from our playbook to fix it". > As someone else said, commoditizing of LLMs is likely gonna happen. Absolutely. After a point people will be happy enough with the small/free stuff for most use cases (I already am plenty happy with my 3-month old, 24B dense Mistral model). > I used to work at Amazon [...] Thanks for sharing! I had only one friend there who worked on AI-related stuff, but it was mostly statistical ML stuff with time series, and he left a year or two ago, so this was new to me.
> don’t I get bonus points [...] 2018 I began into the ML/DL You definitely get bonus points! Btw, I started my ML journey pretty close to you, I think around Feb 2017. And sorry if I came out as too confrontational. I did read it, but I cannot connect how the previous bubble would be relevant to tech-based arguments about future profitability potential of current companies (big and small). I see the parallels and differences of profitability and future potential for stock appreciation from whoever survives, but I avoided any of that in my post, hence it was weird to me that this was a topic raised.
> Based on your experience in the field, how far off are the reasoning models being able to do anything genuinely useful? Negative distance. They can already do plenty of useful stuff, and I mentioned in point 6 that I'm working on an actually useful (and likely profitable) project. What made this project possible is: * Inputs, intermediate data, and outputs are all text-based * The output is *very* standardized (format, structure, tone) * The current solution is a custom-built workflow, not a generic "agentic" implementation. We leave very little to the LLM, and hold its hand all the way * *Lots* of feedback from experts, scientists, UX, and engineering > I see the big money over the next 5-10 years from AI / ML in robotics and similar fields [...] Big data is old news (but always relevant). A strong "yes" about robotics. Judging by some research results I've seen and the drastic cost reduction of robots, it makes sense to me that interest will rise both from hobbyists and companies. Robotics will let AI (be it LLMs/VLMs or other entirely different architectures) tap into fields that weren't possible before. And it doesn't have to be anything fancy or humanoid, a robot that can pick up more sensitive fruit would be nice (btw some early attempts were made in the 2010s, but not sure how it ended up). I *think* that there is research showing that betting on new technologies (and sector ETFs in general) hasn't worked out in the past, but who knows. I've personally put a tiny amount (<1% and will keep dropping) in ROBO (if anyone knows an international alternative, please suggest!).
Thanks for this, great post. A few questions if you don't mind - 1. Based on your experience in the field, how far off are the reasoning models being able to do anything genuinely useful? 2. I see the big money over the next 5-10 years from AI / ML in robotics and similar fields (self-driving cars, industrial processes, agricultural processes, etc.) and possibly also in big data processing - the stuff Palantir and Snowflake are doing. Would you agree?
Don’t I get bonus points that I was a software dev in the tech industry before the dot-com era? I got to see that debacle, and in 2015-16 I was confident that NVDA was a different animal; in 2018 I got into ML/DL, so I understood your AI experience. The stock market is a behavioral thing; sometimes it agrees with the tech facts.
As someone with a 9070XT, gaming? Great. Try to do any ML workloads? Actually like pulling teeth.
Supervised learning implies you provide a label of correctness and the loss optimises towards that objective. This is alignment because a human creates that objective and the optimisation algorithm finds a design that satisfies it as best it can within the variability of the parameters it can tweak. So yes, all supervised models are aligned in that respect to the objective encoded in their respective loss functions, because that’s what the ML engineer intended. When doing next-token prediction there is no structure to the data and it is unsupervised to begin with. True, there is a loss, but that’s just token-prediction loss, which you cannot say encodes the engineer's alignment. No engineer at any point tweaks the data and looks at what token should precede what other tokens, etc.; the engineer has no clue what the training data's embedding space looks like, nor how tokens should relate to each other. There is no question of alignment here, as there is nothing to be aligned to.
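To make the contrast being debated concrete, a minimal sketch (PyTorch assumed, shapes arbitrary) of the two objectives: a supervised loss whose targets are human-provided labels versus a next-token loss whose targets are simply the following tokens in raw text.

```python
import torch
import torch.nn.functional as F

# Supervised objective: the labels encode a human-defined target directly,
# so the loss measures distance from what the engineer asked for.
logits = torch.randn(4, 3)            # model outputs: 4 examples, 3 classes
labels = torch.tensor([0, 2, 1, 0])   # human-provided ground truth
supervised_loss = F.cross_entropy(logits, labels)

# Next-token objective: the "label" is just whatever token comes next in the text.
# No human decided what should follow what; the loss only rewards plausible continuation.
vocab = 50_000
token_logits = torch.randn(1, 7, vocab)        # (batch, sequence, vocab) predictions
tokens = torch.randint(0, vocab, (1, 8))       # a raw token stream
next_token_loss = F.cross_entropy(
    token_logits.reshape(-1, vocab),           # predictions for positions 0..6
    tokens[:, 1:].reshape(-1),                 # targets: the tokens at positions 1..7
)
print(supervised_loss.item(), next_token_loss.item())
```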
They call their main military product an "AI-powered kill chain". Not sure if you're suggesting that Palantir is just lying about that or what. I've never used it, but they claim it can make drones autonomously identify targets. That's definitely AI. They also have Foundry for civilian companies, and that automates a lot of different things across the supply chain using artificial intelligence and ML. Foundry is incredibly expensive though. No clue why any company would think it's worth that type of investment.
1. Whether LLMs are "better aligned" than "ML models" (any examples? is Word2Vec aligned according to you?) is beyond what's being discussed here. 2. Training method has nothing to do with it. LLMs can be trained in a supervised manner; they're usually trained in a self-supervised manner, not unsupervised. > They aren’t optimizing for human goals at all; they’re optimizing for statistical likelihood in text. A supervised model trained on labeled data is explicitly anchored to a measurable human-defined objective. If your input data is aligned, they will be too. However RLHF is usually leveraged for the alignment step. Which is exactly what you said that it isn't: "The loss function encodes alignment by design." (By the way, according to you "the loss function in the other ML models encodes alignment by design"? What's that even supposed to mean? What's the loss function? What are the other models? I can only guess why you're being so vague) > They’re trained on enormous unlabeled datasets to minimize perplexity, meaning their only goal is to continue text in a plausible way, not to serve any purpose or outcome that humans care about. Again, RLHF. https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback > fine-tuning or reinforcement from human feedback, which is a weak, cosmetic layer over a fundamentally amoral predictive core "Weak" by what metrics? According to whom? Compared to what? > That’s why they can sound helpful and still hallucinate, contradict, or mislead—because there’s no intrinsic connection between prediction accuracy and human intent. ? What's the causal connection here? Hallucinations or lack of logical reasoning (the irony...) have nothing to do with alignment/RLHF. > In practical terms, LLMs are impressive at imitation but poorly aligned to truth, safety, or reliability What is "truth alignment"? That they shouldn't lie? Or shouldn't make facts up accidentally? Again, hallucinations have little to do with alignment. > compared to older supervised systems that were at least optimizing for a concrete, verifiable target Thanks for being as specific as possible. It proves your in-depth knowledge of the subject. I'm just wasting my time here. You're not discussing in good faith.
I am sorry, your understanding of the alignment problem is wrong. LLMs are some of the worst-aligned models in existence, because almost all ML models built before them with supervised approaches are far better aligned than ChatGPT, which is unsupervised and goalless beyond next-token prediction, and that’s exactly why calling LLMs aligned is misleading. They aren’t optimizing for human goals at all; they’re optimizing for statistical likelihood in text. A supervised model trained on labeled data is explicitly anchored to a measurable human-defined objective. The loss function encodes alignment by design. LLMs have none of that. They’re trained on enormous unlabeled datasets to minimize perplexity, meaning their only goal is to continue text in a plausible way, not to serve any purpose or outcome that humans care about. Any alignment we see in them is bolted on afterward through fine-tuning or reinforcement from human feedback, which is a weak, cosmetic layer over a fundamentally amoral predictive core. That’s why they can sound helpful and still hallucinate, contradict, or mislead: there’s no intrinsic connection between prediction accuracy and human intent. In practical terms, LLMs are impressive at imitation but poorly aligned to truth, safety, or reliability compared with older supervised systems that were at least optimizing for a concrete, verifiable target.
Isn't there clear evidence against this house of cards in that none of the actual AI players are making any money off of it except for the AI they were doing before all this? ML AI had been used since the 2000s, so the current AI bubble is really all about LLM AI. No company is making money off it except the people building the data centers and selling the chips, and how do they continue to get revenue when their 4 big customers don't make any money off of it and don't actually end up making God?
I read it this way: MSAI has been using Amazon AWS services for over 2 years... last year they began to use the AWS tools (AI/ML learning platforms connected to the warehouse cams and robots). This entire talk is related to the implementation of the testing environment. Furthermore, Luke was a maintenance engineer - not a manager or anyone who could establish a partnership. He helped them set up AWS tools so MSAI could test their infrared AI readers through the warehouse stream API... so no real partnership, just cooperation to create a test environment in AWS services... nothing more, nothing less.
If Facebook were just Facebook, I think it would be in a worse place right now. It is also Instagram and WhatsApp, which are not as horribly monetized as Duolingo and have much wider user bases. Then they are on the forefront of AI (open-source AI at that) and VR, which is more revolutionary tech than social media sites. And even when they were more just Facebook, they built industry standards in software. They maintain React. Duolingo itself was more cutting-edge back when it was crowd-sourced language translation and machine learning. It's from the guy who invented reCAPTCHA and sold everything to Google for ML. That was its initial monetization strategy. Now it has moved from that to a subscription language flashcard app with cute cartoons and funny social media... Definitely a huge brand, but enough to justify a big tech valuation?