
ML

MoneyLion Inc


Mentions (24Hr)

3

200.00% Today

Reddit Posts

r/StockMarketSee Post

[Discussion] How will AI and Large Language Models affect retail trading and investing?

r/StockMarketSee Post

[Discussion] How will AI and Large Language Models Impact Trading and Investing?

r/smallstreetbetsSee Post

Luduson Acquires Stake in Metasense

r/investingSee Post

Best way to see asset allocation

r/wallstreetbetsSee Post

Neural Network Asset Pricing?

r/ShortsqueezeSee Post

$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...

r/wallstreetbetsSee Post

Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now

r/investingSee Post

Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?

r/StockMarketSee Post

Moving from ML to Robinhood. Mutual funds vs ETFs?

r/smallstreetbetsSee Post

Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)

r/stocksSee Post

hypothesis: AI will make education stocks go up?

r/pennystocksSee Post

AI Data Pipelines

r/pennystocksSee Post

Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)

r/StockMarketSee Post

The Wednesday Roundup: December 6, 2023

r/wallstreetbetsSee Post

Why SNOW puts will be an easy win

r/smallstreetbetsSee Post

Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)

r/wallstreetbetsSee Post

I'm YOLOing into MSFT. Here's my DD that convinced me

r/pennystocksSee Post

Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)

r/investingSee Post

I created a free GPT trained on 50+ books on investing, anyone want to try it out?

r/pennystocksSee Post

Investment Thesis for Integrated Cyber Solutions (CSE: ICS)

r/smallstreetbetsSee Post

Investment Thesis for Integrated Cyber Solutions (CSE: ICS)

r/optionsSee Post

Option Chain REST APIs w/ Greeks and Beta Weighting

r/stocksSee Post

How often do you trade news events?

r/stocksSee Post

Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning

r/RobinHoodPennyStocksSee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/pennystocksSee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/WallstreetbetsnewSee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/smallstreetbetsSee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/wallstreetbetsOGsSee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/WallStreetbetsELITESee Post

Nextech3D.ai Provides Business Updates On Its Business Units Powered by ​AI, 3D, AR, ​and ML

r/wallstreetbetsSee Post

🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!

r/investingSee Post

AI/Automation-run trading strategies. Does anyone else use AI in their investing processes?(Research, DD, automated investing, etc)

r/StockMarketSee Post

Exciting Opportunity !!!

r/wallstreetbetsSee Post

🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙

r/WallstreetbetsnewSee Post

Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)

r/wallstreetbetsSee Post

The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle

r/investingSee Post

Treasury Bill Coupon Question

r/pennystocksSee Post

Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)

r/stocksSee Post

The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth

r/wallstreetbetsSee Post

NVDA is the wrong bet on AI

r/pennystocksSee Post

Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)

r/wallstreetbetsSee Post

NVIDIA to the Moon - Why This Stock is Set for Explosive Growth

r/StockMarketSee Post

[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?

r/investingSee Post

The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts

r/wallstreetbetsSee Post

My thoughts about Nvidia

r/wallstreetbetsSee Post

Do you believe in Nvidia in the long term?

r/wallstreetbetsSee Post

NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading

r/wallstreetbetsSee Post

Apple Trend Projection?

r/stocksSee Post

Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"

r/investingSee Post

Which investment profession will be replaced by AI or ML technology ?

r/pennystocksSee Post

WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology

r/pennystocksSee Post

$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41

r/wallstreetbetsSee Post

$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch it jump to 300).

r/pennystocksSee Post

Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine

r/stocksSee Post

This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?

r/wallstreetbetsSee Post

roku thesis for friend

r/stocksSee Post

Training ML models until low error rates are achieved requires billions of $ invested

r/wallstreetbetsSee Post

AMD AI DD by AI

r/wallstreetbetsSee Post

🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨

r/wallstreetbetsSee Post

AI/ML Quadrant Map from Q3…. PLTR is just getting started

r/pennystocksSee Post

$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News

r/wallstreetbetsSee Post

DD: NVDA to $700 by this time next year

r/smallstreetbetsSee Post

VetComm Accelerates Affiliate Program Growth with Two New Partnerships

r/pennystocksSee Post

NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT

r/pennystocksSee Post

Netramark (AiAi : CSE) $AINMF

r/pennystocksSee Post

Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)

r/wallstreetbetsSee Post

Testing my model

r/pennystocksSee Post

Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)

r/wallstreetbetsSee Post

[Serious] Looking for teammates

r/stocksSee Post

[Serious] Looking for teammates

r/StockMarketSee Post

PLTR Stock – Buy or Sell?

r/StockMarketSee Post

Why PLTR Stock Popped 3% Today?

r/wallstreetbetsSee Post

How would you trade when market sentiments conflict with technical analysis?

r/ShortsqueezeSee Post

Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.

r/StockMarketSee Post

Stock Market Today (as of Mar 3, 2023)

r/wallstreetbetsSee Post

How are you integrating machine learning algorithms into your trading?

r/investingSee Post

Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits

r/pennystocksSee Post

Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare

r/ShortsqueezeSee Post

Why I believe BBBY does not have the Juice to go to the Moon at the moment.

r/investingSee Post

Meme Investment ChatBot - (For humor purposes only)

r/pennystocksSee Post

WiMi Build A New Enterprise Data Management System Through WBM-SME System

r/wallstreetbetsSee Post

Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT

r/ShortsqueezeSee Post

The Squeeze King - I built the ultimate squeeze tool.

r/ShortsqueezeSee Post

$HLBZ CEO is quite active now on twitter

r/wallstreetbetsSee Post

Don't sleep on chatGPT (written by chatGPT)

r/wallstreetbetsSee Post

DarkVol - A poor man’s hedge fund.

r/investingSee Post

AI-DD: NVIDIA Stock Summary

r/investingSee Post

AI-DD: $NET Cloudflare business summary

r/ShortsqueezeSee Post

$OLB Stock DD (NFA) an unseen gold mine?

r/pennystocksSee Post

$OLB stock DD (NFA)

r/wallstreetbetsSee Post

COIN is still at risk of a huge drop given its revenue makeup

r/wallstreetbetsSee Post

$589k gains in 2022. Tickers and screenshots inside.

r/pennystocksSee Post

The Layout Of WiMi Holographic Sensors

r/pennystocksSee Post

infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.

r/investingSee Post

Using an advisor from Merril Lynch

r/pennystocksSee Post

$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.

r/StockMarketSee Post

Traded companies in AI generated photos?

r/pennystocksSee Post

$APCX Huge developments of late as it makes its way towards $1

r/pennystocksSee Post

($LTRY) Lets Hit the Lotto!

r/wallstreetbetsSee Post

Robinhood is a good exchange all around.

Mentions

Vision recognition algorithms have been around for years. Is BMW using LLMs for this, or traditional ML/image processing algorithms implemented by data scientists to do this QC work?

Mentions:#ML

While this is a fairly cynical take, it's also a very accurate one. I design devices and write code for them for a living, and the only "AI" tools worth using in production are old-school ML algorithms; LLMs are absolutely unsuitable for any real product, unless you want to spend bookoo bucks hiring customer support personnel to unfuck all the things that LLMs touch. Even battle-tested tools like CNNs need redundant systems to catch misses, as they are simply not reliable.

Mentions:#ML

It's a catchy term as it uses all the trendy words investors like to hear, but it's basically just what I wrote above: "use a quantum compute processor to speed up ML training". It is not an innovation in ML or anything. And since current quantum processors are an error-prone mess (which is expected when you allow multiple states), rewriting ML algorithms to work with it, is kinda putting the cart before the horse. To go back to my original point. Training better and better LLMs is likely not a path to General Intelligence. In fact, it really feels like we are reaching the max peak for LLMs. So training LLMs faster/cheaper with quantum compute, also wouldn't lead to any AI breakthroughs.

Mentions:#ML

Raptors ML + Magic ML today 🎶

Mentions:#ML

Supply chain efficiency. Better routing to reduce travel/shipping costs. Optimize purchasing. Most of that is traditional ML versus GenAI but it still fits in the bucket of AI

Mentions:#ML

I’m in ML/AI for nat defense but that’s just because it came to me, I didn’t go to it. The good part of that is while a bunch of clowns are trying to shove down some expensive and questionable products, for the most part the process has been Darwinian so far. Like using AI/ML to monitor the skies enhances current countermeasures, it isn’t meant to replace them. I have heard similar stories from healthcare. AI/ML detecting probable cancer spots in x-rays, which are then reviewed by a human for further research. There are papers coming out in healthcare journals that show a demonstrable increase in patient outcomes due to catching certain diseases earlier. Obviously these successes are not universal, current technologies make some diseases better candidates than others.

Mentions:#ML

No idea what the person you replied to said, since they apparently deleted it, but: > And for the most part you don't need a transformer model to look for the statistical likelihood, based on thermal and acoustic data, that your motor is going bad and needs to be replaced soon. I'm in IT at a trucking company, and do software development. This is one thing with a lot of AI products I've seen in the last few years. "Oh look at our fancy AI software!" - literally just taking x,y,z and plugging it into some algorithm and doing some math to spit out something. There's way too much marketing wank at play. Flawed as they are, I think there are some legitimate uses of LLMs; there are people trying to plug them into things that don't need them, and then there's just standard ML or algorithms we've had for a long time being rebranded as new AI. It's all a mess right now.

Mentions:#ML
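For illustration, the "standard ML" described above is often just a small classical model over a couple of sensor features. A minimal sketch, assuming scikit-learn is available and using entirely synthetic thermal/vibration readings and made-up failure thresholds:

```python
# Classic ML on sensor features, the kind of thing often rebranded as "AI".
# All data here is synthetic; the thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

temp = rng.normal(70, 10, n)    # hypothetical motor temperature, degrees C
vib = rng.normal(2.0, 0.5, n)   # hypothetical vibration/acoustic RMS level

# Hypothetical label: motors that run hot AND noisy tend to fail soon.
fail_soon = ((temp > 80) & (vib > 2.3)).astype(int)

X = np.column_stack([temp, vib])
model = LogisticRegression().fit(X, fail_soon)

# Score a new reading: estimated probability the motor needs replacement soon.
new_reading = np.array([[85.0, 2.6]])
print("failure risk:", model.predict_proba(new_reading)[0, 1])
```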

I’m in ML/AI for healthcare, and I also work with pharma companies, etc. Most ML/AI applications in this industry have nothing to do with LLMs/hyperscalers, because those have the inconvenient habit of regularly but unpredictably producing hallucinations, which could kill people. Whether people actually understand the difference or not is another matter. My suspicion is that the insistence of OpenAI & pals on screaming about AI at every opportunity is going to mean that everyone who’s used the term to describe what they do is going to be affected if/when they crash. In healthcare and pharma, though, the core business is pretty decoupled from AI/ML for the most part, so likely both will remain mostly unaffected.

Mentions:#ML

Indeed, seeing it from the inside and dealing with managers who suddenly became “specialists” in my own field of study: they sure think artificial intelligence is just there to vomit verbiage left and right, without knowing machine learning was already being deployed in the back end for around a decade (in my experience at least) for data management and classification, boosting automation and forecasting, amongst other automated processes. Where my opinion differs from yours a bit is that I think this is objectively a bubble:

- main players are overleveraged and already presenting liquidity issues
- ROI for the main claimed application of these technologies can’t be easily measured and realized by their customers (and to your point, it won’t be anytime soon)
- more than half of the use cases where companies are trying to implement it for productivity gains either face employee resistance and/or the telemetry to measure them costs as much as, if not more than, the potential gains: productivity is a 30-year-old question when it comes to measuring it outside manufacturing or service management scopes
- major players are already facing liquidity issues due to the cost of processing and hardware depreciation (ML training shortens chip lifespan significantly) and limited chip supply to rotate at financially sustainable costs
- the clear collusion of Nvidia with its own competitors + the main software companies in the race, announcing their billion-dollar deals to promise futures and move money laterally in hopes of offsetting the debt on investor calls (although, legally at least, net revenue usually doesn’t lie)

Idk man, seeing it from the inside, my bet is that either we’ll see bailouts happening soon to keep the bonanza going and/or enterprise contracts will raise their prices per token, which will suddenly shrink enterprise customer spend to cover only what they can properly track. Because machine learning inherently has value as a technology, that will probably deflate the bubble, not necessarily burst it. R&D, health sciences, biotech, fintech will keep benefiting; tech and general knowledge work not so much, imo. Unless they keep printing money to maintain the sham, then it might be a legit burst, if investment firms allocate too much of their ETF money to “AI”… oh wait. 😂

Mentions:#ML

AI is already integrated at almost every level of what we do today. From ML to AI to gen AI to LLMs. If you use iOS the predictive text is now done with a language model. This site is filled with backend AI and now AI results in searches, AI is all throughout Google products, from search to Gmail to Maps and YouTube. AI is in Netflix, Prime, Amazon shopping, your credit card infrastructure, the rest of banking, PayPal, Venmo, Uber, Lyft, Facebook, Instagram, TikTok, every photo you take with a smartphone camera ... so I get the point about OpenAI but everything they and other AI companies do shapes that entire ecosystem, which has spread into essential services faster than most people know. Because the initial computing and internet infrastructure was already there for part of what was needed, and that took 20+ years to build. Then there's physical AI, humanoid AI - just like every bureaucratic office job will be gone in a decade, most warehouse and factory jobs will be too, and it will just keep spreading. This isn't just the US - look at China's plan for their AI economy and the numbers they're talking about. That's why sovereign AI is such a huge investment area also because nations want to control the AI that will be powering everything that happens at every level of government, from taxation to regulations to defence, etc. AI is going to be the operating system for the economy.

Mentions:#ML

get it all back today broncos ML + suns ML

Mentions:#ML

Thanks, although I studied AI and ML, so I don't need the theory; I asked about your personal opinion. > These are not next word generations, they are pattern matchers that encode spatial relationships. You could say the latter, although it's not fully precise (e.g. it doesn't cover how the tokens are encoded so that they become dense vectors capturing some meaning; it's relevant, since it's not just any "spatial relationship", but a very complex, high-level and black-box process of attending to all tokens in a sequence, capturing not only the last token's but all tokens' meanings at once, while also considering their positions in the sequence). Either way, the former doesn't contradict the latter (the decoder is simply a generator of the next token; it doesn't matter that it's a pattern matcher — which, by the way, all models in ML, let alone DL, are). > When you 'train' the model, you pass the images in with the text, and then it encodes the distance between the presence of that image, and the known words of its vocabulary. That's an oversimplification. Again, the attention mechanism (and the transformer architecture in general) is way more complex. Not to mention there are plenty of different models, (sub)architectures and training methods. > where that 'next word' generation becomes 'next pixel' Okay, so it's a "next token" (not "word") generator. By the way, in vision transformers tokens are not individual pixels, but patches of pixels. > when the model encodes these patterns into little probabilistic linear algebraic mathematical functions Why are you saying it's "linear algebraic functions"? Deep neural networks have the ability to approximate almost any function due to the use of nonlinear activations; only linear activations boil down even a very deep neural network to a simple linear model. Without nonlinear activations it wouldn't even be able to solve the XOR problem. > Now what happens when the capitalist machine needs to make back the $1.4T taxpayer funds sunk into this? Do you think they'll say, "wait, let's make sure our models are unbiased?", or do you think they'll say, "I'm a genius bringing you the future, just take this, I promise, it's good for you"? I still don't understand the reasoning behind your opinion. Why would anyone deploy unprofitable models? OpenAI and Anthropic are now burning investors' cash with no path to profitability; I don't see how that would lead not only to the deployment of LLM-based robots, but also to such an extent that they perform almost every job. I think the AI bubble would've burst way sooner.

Mentions:#ML
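To make the XOR point above concrete, here is a tiny NumPy sketch: a two-unit hidden layer with a ReLU nonlinearity can represent XOR, while no purely linear model of the two raw inputs can. The hidden-layer weights are hand-picked for illustration, not trained:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([0, 1, 1, 0])

relu = lambda z: np.maximum(z, 0)

# Hand-picked 2-unit hidden layer (not trained), just to show that the
# nonlinearity is what makes XOR representable at all.
h1 = relu(X @ np.array([1.0, 1.0]) - 0.5)   # fires if at least one input is 1
h2 = relu(X @ np.array([1.0, 1.0]) - 1.5)   # fires only if both inputs are 1
out = h1 - 3.0 * h2                          # "at least one, but not both"

print((out > 0.25).astype(int))  # -> [0 1 1 0], matches y_xor
# A purely linear model out = w1*x1 + w2*x2 + b has no choice of w1, w2, b
# that separates {(0,1), (1,0)} from {(0,0), (1,1)} -- the classic argument
# for why nonlinear activations matter.
```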

![gif](giphy|jrbdRK2J4jt8o7LVGF) well, I am really sorry, but it is a private company, and they should be able to finance their daily operations (by the way, I hate AI, LLM, ML, autocorrect, AI chatbots, AI in Word/Excel/PowerPoint, etc., so I am a bit biased here) :)

Mentions:#ML

It's extremely common in the world of engineering and ML to refer to matrices with more than two dimensions as "tensors". Furthermore, a tensor is not a thing with physical meaning, they're algebraic objects that describe linear relationships between other algebraic objects. Dunno where you get this 'physical meaning' thing, but it's complete nonsense.

Mentions:#ML

I'm going to piggyback on this to break this down in painful detail:

American taxpayers are being asked to pay for something being purchased by a private entity. That private entity is a company that arguably has absolutely no moat around its product, and many competitors. This private entity can only pay back its debts from revenues that would only come from a massive increase in usage of its (barely defensible) models. Such an increase could only come from a gargantuan displacement of existing workers, or a gargantuan increase in a yet unspecified industry that does not yet exist (e.g. widespread LLM-backed robotics powered through edge computing, another unsolved problem).

Because this private entity has no moat to defend its products from competitors, they are trying to throw more compute at the problem (with money that they don't have to spend). All experts have clearly stated that throwing more compute at the training gap will not solve the problem, because the underlying model architecture is inherently unable to accomplish the type of generalizable determinism that would be required. Moreover, many venture experts also state that trying to scale up current architectures to an "AGI" moment is the wrong goal; the correct goal would, in fact, be to distill and scale down models into fine-tuned, use-case-specific models that can be deployed in inherently reproducible and messy environments.

In parallel, the CEO of this private entity has been accused for decades of manipulative behavior. And this CEO's company last year had revenues of $13B, less than most major tech companies. In fact, most of their $13B in revenues comes from around ~60 million paying customers, who are paying for another ~740 million free users. Most of their revenues do not come from established enterprise computing contracts, such as you would see from the likes of AWS, Oracle, GCP, and Microsoft Azure. Instead, these revenues are coming from Pro and Plus subscribers - who themselves have complained viscerally that GPT-5 is worse than even GPT-4 in many cases (I will spare you the technical details here, but if you're interested just google Mixture of Experts and Synthetic Training Data).

These Plus and Pro subscribers are subject to models that they don't have consistent control over, and digital nannying over top of their experience that kicks in any time this private entity finds their private chats triggering, or "unsafe". Meanwhile, this private entity refuses to simply provide the service with minimal guardrails to users who are 18 and over (because herr derr Uber-growth model).

So, this private entity, and the parasite who leads it, are now officially in the US government's military apparatus - along with all the other major tech firms and AI players. This private entity is currently providing its services to the US government's federal agencies for $1, likely in violation of government acquisition rules, which have long stated the government cannot receive gifts or services that are (effectively) free.

So, to your point, are taxpayers paying to be replaced? The answer is not yet. You can do something. You can contact your congressional representatives TODAY about this, and demand that if any taxpayer funds go to OpenAI, you will vote them out of office in 2026.

But what happens if taxpayers are replaced with AI?

The truth is, layoffs are increasing dramatically, but this is not because AI's improvements in performance have been so grand that it's equipped for all use cases. In fact, even companies like AWS, which heavily mandate AI usage by their software engineers, are now experiencing greater numbers of outages from code that is likely AI-generated.

So, what will happen if taxpayers are replaced with AI is that your quality of life will become radically worse and more dystopian than you could ever dream of. Nurses' assistants, taxi drivers, delivery, fast food will all become infuriatingly worse until people literally revolt because everything has become awful. Your food orders aren't made right, your medicine is incorrect, whatever - and when you contact another AI for customer service, it doesn't understand what happened (fully) and you have to wait 7 days before a human representative contacts you.

Meanwhile, your speed limits are tracked by the increment, and you are penalized for every word you say online, every mile per hour over some arbitrary limit you go, for every small gum wrapper that falls out of your hands and onto the sidewalk of the inner city you live in. This is the world the parasite CEOs of these AI companies want to create.

They want you to believe they are all-powerful. They want you to believe they are all-knowing. They are not. They are the man behind the curtain. And when the puppeteer tightens their strings, the marionette tightens too.. but you never expected to be trapped in a world surrounded by these marionettes.

If OpenAI receives these funds, I promise you we will lose everything that it means to live in a free market economy, and all of our livelihoods. Sam Altman is a snake, and he and the others will pay in due time when history is written. In the meantime, you can make your congressional representatives pay at the ballot box, in the next elections.

I promise anyone seeing this, there is no scaling law of the current paradigm of machine learning that will get to AGI. All we are doing is scaling a mirage, and paying for GPUs that effectively become trash within 2-3 years of usage, and often ~1 year with heavy training. I study ML at a graduate level; this is my perspective alone, but I have many years of experience working in deep tech.

What you should fear is not AI; you should fear our politicians centralizing an oligopoly and abusing the fact that this country's education level is atrocious. If you're undereducated, use these AIs to learn math. If you want to do something for your country, learn linear algebra, and get an electrical engineering degree (I'm not joking).

We were once consumers, but that world is now over. The world of abundance is now gone. We are becoming the cattle, and AI will become the fence, and it will be a shite fence if we let them build it around us. Don't let them. Free your mind.

Mentions:#AGI#ML

So a doctor who is spending only 10 min. \[though studying for previous 15 years\] while sitting in a chair comfortably to diagnose cancer \[from test results performed by others\] and getting very vast riches in a form of a salary - is like CEO and that is not labor? You are trying to arbitrary define "value produced" based on some moral grounds or grounds "ML of sweat produced in a day", refusing to see that salary, wages is that measure of value and it's already defined by market. You are like a child or peasant farmer of old who seeing a king eating fancy cake says that you can also sit with important face & eat a cake and be a king, refusing to notice all other things like cost of error in his decision for entire country.

Mentions:#ML

Yep I was an adult during the .com bubble so I’m definitely familiar. I replied to another comment and said the same but the difference then is vaporware companies were getting billion dollar valuations because they had a .com website with zero cash flow. I feel like that’s quite different from today (though I do realize there’s companies out there overvalued, as there always has been and always will be) As for Elon… he says a lot of stuff and about 1% of it has any substance. Machine Learning has been around a long time but was limited by the technology of its time. Slow CPUs and small amounts of memory limited what ML could do. Training complex models would take forever or wasn’t even possible at all. With GPUs today we can train deep learning models with trillions of parameters which was unimaginable decades ago. It’s like Tony Starks dad explaining to Tony how he was limited by the technology of his time. He had good ideas but tech wasn’t there yet to realize them.

Mentions:#ML

We had people like Elon Musk saying that AGI was going to arrive by 2025. We had people saying AI will eliminate millions of jobs and automate them all away. It’s 2025, and the biggest “AI” is just slightly more advanced LLMs and text-to-image/image-to-video AI with more computation. We have big tech backpedaling about AI taking over human labor. What about it has actually lived up to the hype? It’s definitely revolutionary, but the money behind it is questionable. They always seem to promise something unrealistic is close to happening, and then it takes double or triple the time they promised (or never arrives). How is “AI” realistically going to bring in money? Also, although “AI” is in its “infancy”, ML has existed for like 30+ years by now. And the paper on transformers has been out for 7. Let’s not act like we only just had the big breakthrough a year ago and nothing substantial had happened before that. > You sound like people in 1998 Bro, ever heard of the dot-com bubble? I’m not even saying AI isn’t revolutionary or an important part of the future. I’m saying the hype and the money and valuation it’s generating are dubious. Nobody said pets.com and the internet were a shit idea in general, but it failed at the time because the money simply didn’t live up to the hype.

Mentions:#AGI#ML

GOOG is sending TPUs to Sun to train ML and you’re bearish?

Mentions:#GOOG#ML

Google is objectively a great investment. If they think ML in space is going to be profitable, than take my money and let’s see if Gemini can find aliens up there.

Mentions:#ML

Why is it that everyone who believes in an "AI bubble" always comes with "it only has real value if it replaces all of us and we will live in a dystopia with AI as our owners, kept like dogs"? I am starting to see a connection between not fully understanding what machine learning does and the "bubble" theory. I hate to tell you, but AI was being used long before LLMs (so, ChatGPT) ever came to light; companies widely used ML/AI for statistics, search optimization, administration. I understand LLMs are the new hot shit, but ML isn't only about "replacing jobs" - what about autonomous systems, robotics, pharma, genetics? ML is very good at understanding patterns and giving output based on them (obviously it depends on what data you feed it; no, not every AI will hallucinate like ChatGPT, and not every AI is a chat bot). I won't claim the current AI hype doesn't include "job replacing" - it does - but why do all of you stop the AI hype at that level, why not go beyond it?

Mentions:#ML

I feel bad for whoever bought PLTR at 220 after hours. Eh, never mind, it was probably just some ML that analyzed the earnings release.

Mentions:#PLTR#ML

Google TPUs handle training just as well as nvidia at a much lower cost. Still need nvidia for customer workloads that require GPGPU, but not reliant for AI/ML workloads. Source: I work for GCP

Mentions:#ML

Unemployment is not meaningfully increasing due to AI - it’s bullshit cover for short-term performance layoffs and attempted offshoring. My entire career is working with distressed companies across all industries, and none of them can legitimately replace swaths of employees with AI. My own company spent buckets and hired a big team of MIT ML PhD types to “deploy AI” in our firm and the portcos we work with. The result? Emails get written faster and we can pull old decks much quicker. That’s literally it. Not a single soul replaced. AI is currently a fucking joke that lives up to none of the hype. Any job requiring any nuance or delicacy remains untouched. Will that change? I’m sure. But right now and in the next few years? I have seen absolutely nothing to indicate AI being able to remotely replace anyone I have worked with.

Mentions:#ML

Shit, probably. I did some work there about 15 years ago and they were doing really advanced stuff back then, eg massive HPC/ML clusters doing drug discovery work, protein folding, etc. It's the only place I've met dudes with computer science phds.

Mentions:#ML

There is no Amazon partnership... MSAI is using AWS Amazon services such as server hosting and AI/ML testing environment (with control of certain warehouse cams and robots through test API) as client. "AWS Partner" is everyone that uses the AWS Services as client. It is a pure client/provider relationship.

Mentions:#MSAI#ML#API

No comments yet. As a fan of it’s always sunny in Philadelphia and a person who does ML as part of their job, I had a really good laugh at this.

Mentions:#ML

Okay so I’m going to throw my hat in the ring for the last time. I am currently blue in the face saying this. I am also going to not care about the risk of sounding conceited because you know what? I do know better than 99% of people here. For context I research, train, develop AI models for my job. I am a paid researcher in both the public and private sector. I have studied and studied and studied and write algorithms and write algorithms and write algorithms and read papers and read papers and read papers. Data scientists, AI software developers, statisticians and mathematicians who believe AI is capable of replacing people without creating massive amounts of technical debt in the process or leading to long term business/pipeline instability are deluded or lying for the biggest paychecks our field have/will ever see. This goes double for CEOs, board of directors and shareholders who are being conned. The success of what’s called “AI” in the case of natural language processing (NLP - like ChatGPT) and images is a result of the flexibility of neural networks (one flavour of ML) being able to interpolate in many directions (ask many queries, give many responses) from storing massive amounts of data in the form of its many parts. It’s a powerful memory unit which simply stores all the world’s data and spits out a form of it to you - the form being what you’ve asked of it. Lots of other stuff happens but at its core this is what it is. It’s incredible really, especially in how well it mimics the behaviour of human thinking/learning. But it doesn’t “think” or “learn” and isn’t capable of a lot of forms of thinking that we humans are capable of and which are essential to do the jobs we do. This becomes really apparent when asking AI to perform in low data tasks. Ask any of your favourite AI tools to give you a picture of a watch at 10:10. It will do it perfectly because that’s the way watch companies like to advertise their watches - as it shows off the arms of a watch in the most aesthetic way. Therefore, there’s lots of data of watches displaying that time online. Now ask it to give you a picture of a watch at 06:35. Not so pretty right? That’s because it doesn’t have any data to generate your output from and had no concept of time in the first place. It can’t understand and think about time. This is an abstract concept we humans interact with and debate about to this day and we can effectively use it all the while not fully grasping it. Now apply this to my work - I do research that adds value to both communities and companies - I work on crafting bespoke pattern recognition algorithms for each persons use. I solve these “deep industry problems” everybody thinks AI can routinely solve and replace people. And I work in such a low-data area (creative, critical, logical) that I have to turn off copilot/cursor/AI-suggested coding suggestions because they’re so stupid it’s an actual distraction. AI is powerful when used in the right places by human users with domain knowledge who actually know what they’re doing. It’s a tool. Anyone who is saying they’re replacing us is either being a con, or being conned. The layoffs you’re seeing now are either because the US is actually already in a recession which the stock market is not reflecting or because CEOs aren’t as smart as you think they are. This is an unprecedented level of fraud, stupidity, money and wasted CAPEX. 
Anyone making comparisons to how any other tool or hype has been introduced to humanity has no idea how much this isn’t like the previous times. And ironically if you read anything about predictions, you’ll know that when using historical data to predict the future, things can go horribly wrong. My advice? Go outside and care for your communities. If you start a business, put your workers and customers first. Who gives a fuck about licking the potential boot of AI if no one can feed their family, go to work to earn a living and experience joy. Instead of talking about how much profit AI generates for a few mega-assholes, let’s talk about what we can do to make living on this planet better for everyone.

Mentions:#ML#CAPEX

Finished my Master's in AI & ML and passed my Series 65 this weekend. Now, I am super baked and stacked with dog bones for the dogs, all beef hot dogs, buns, chicken nugs obviously, and mac and cheese. Windows open at a pleasant 68F. This is all I need to be happy.

Mentions:#ML

I don’t know that we have ever seen this successfully delivered. Retraining - for what? The ability to use LLMs/agents? They will be completely different in a year. FWIW, I hire scientists in tech and we are already seeing new grads missing the fundamentals of ML because everyone is pivoting to Language Models as the interface. Now think about the average joe without a PhD. How will retraining help them? Are they going to succeed in such a position?

Mentions:#ML

If you’re long ORCL, the real bet is on OCI growth, the MSFT tie-up, and whether they can line up GPUs and power; averaging down without those hitting is how you get bagheld. What I’d watch this print: OCI growth pace (still >50–60% y/y or slowing), RPO/backlog, Oracle Database@Azure customer logos and new regions, Cerner margin recovery, and any specifics on capex, data center power deals, and Nvidia H200/B200 delivery timing. OCI’s edge is often price/perf on GPUs and cheap egress, but it only matters if they can turn that into capacity and logos. If you want exposure with less pain: sell cash‑secured puts at levels you’d be happy owning, or wait for the call and write covered calls on any spike; set a hard line where the thesis breaks (e.g., OCI decel + weak backlog). On the ecosystem point: we’ve shipped data apps with Snowflake for warehousing and Databricks for ML, and used DreamFactory to quickly stand up REST APIs over Oracle/SQL Server so teams could ship without building gateways. Bottom line: ORCL works if OCI + Azure expands and GPU/power ramps show up; otherwise it’s dead money.

Mentions:#ORCL#MSFT#ML
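As a rough illustration of the cash-secured-put idea in the comment above, the arithmetic looks like this; the strike and premium below are made-up numbers, not actual ORCL quotes:

```python
# Cash-secured put arithmetic with hypothetical numbers (one contract = 100 shares).
strike = 150.00    # level you'd be happy owning the stock at
premium = 4.50     # credit received per share for selling the put
shares = 100

breakeven = strike - premium              # effective cost basis if assigned
max_loss = (strike - premium) * shares    # stock going to zero: large, but bounded
cash_secured = strike * shares            # cash set aside to cover assignment
yield_if_unassigned = premium * shares / cash_secured

print(f"breakeven per share: {breakeven:.2f}")
print(f"max loss (stock -> 0): {max_loss:,.2f}")
print(f"return on secured cash if the put expires worthless: {yield_if_unassigned:.2%}")
```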

Even the best AI/ML tools will average a success rate of < 54%, a lucky coin flip. I started writing something which would analyse shares myself (using known ML algos)… 51% success rate with training data. Make your own judgements or you’ll resent the tools that made them for you.

Mentions:#ML
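The "51% with training data" observation above is easy to reproduce: on a series that behaves like a random walk, lagged returns carry almost no directional signal, so even an honest train/test split lands near a coin flip. A minimal sketch, assuming scikit-learn is available and using simulated (not real) returns:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
rets = rng.normal(0, 0.01, 2000)   # simulated random-walk-style daily returns

# Features: previous 3 days' returns; target: next day's direction.
X = np.column_stack([rets[2:-1], rets[1:-2], rets[0:-3]])
y = (rets[3:] > 0).astype(int)

split = int(0.7 * len(y))          # fit on the past, test on the future
model = LogisticRegression().fit(X[:split], y[:split])
acc = model.score(X[split:], y[split:])
print(f"out-of-sample directional accuracy: {acc:.1%}")   # typically ~48-52%
```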

You have it exactly right. Source: ML Engineer

Mentions:#ML

325 Capital, an investment firm, had to reveal its investments in its latest report; it turned out that MSAI is among them with a 15 million position. Shortly after that reveal the stock jumped a little bit... then we got follow-up spam here and on other small-time investor subreddits about some "Amazon connection" that might explain the jump, while they all exclude the 325 Capital impact or do not mention 325 Capital at all. Furthermore, some of MSAI's latest releases look like an inside person tried to create some fake hype.... using the AWS label, showing the AWS server and ML login for firms but describing it like exclusive access... trying to "hint at a partnership" that is not there at all - most of the "push accounts" are literally dead accounts registered years ago with zero activity until now, and all are focused on MSAI or "did exclusive research".

Mentions:#MSAI#ML

>“You’re absolutely right!” I see you’re using pandas with this large dataset, sometimes pandas struggles with large matrices, let’s add 17 log files to find the root of the problem…. I have no doubt this could be done for significantly less computational resources than is currently being reported. Lmao so true >ML researcher with econometrics? Sounds like a certain profession I won’t mention here. Any experience with rough bergomi models and/or using ML for calibration No unfortunately, statistical learning theory on time series & nonlinear cointegration tests

Mentions:#ML

“You’re absolutely right!” I see you’re using pandas with this large dataset, sometimes pandas struggles with large matrices, let’s add 17 log files to find the root of the problem…. I have no doubt this could be done for significantly less computational resources than is currently being reported. ML researcher with econometrics? Sounds like a certain profession I won’t mention here. Any experience with rough bergomi models and/or using ML for calibration

Mentions:#ML

Lolzers, I was an AI/ML researcher (applications to econometrics) before becoming a degen, and while Chat never fails to amaze me, that fucker always gets something wrong, and trying to code with it makes stuff unnecessarily complex, unnecessarily fast. The AI economy is a bubble, definitely. No reason to lay off this many people. There's also the possibility that some clever people (likely from East Asia) come up with a simpler way to do linear algebra that requires fewer computational resources and drops NVDA down to the earth's crust.

Mentions:#ML#NVDA

I see people with Guest Pass and vests that visited an Amazon Warehouse... most likely because MSAI rented AWS Services such as servers and the AI/ML testing environment.... so do many other firms that use AWS. *What is the smoking gun now?* Warehouse visitation selfies from many different businesses are all over LinkedIn, Facebook, Google Images etc.

Mentions:#MSAI#ML

This. Bots try to trick gullible people. The real deal: MSAI is using AWS Services (such as the server and ML test environment provided by AWS); everyone that buys AWS Services can label themselves an "AWS Partner". A real partner (real collaboration and investment) would be allowed to use the "Amazon Inc. Global Partner" label.. this is just an AWS Partner label that hundreds of businesses have... Their testing environment has an amazon subdomain that is just an access gate to the AWS environment rented by MSAI. As part of the testing, they have access to certain warehouse APIs (e.g. specific cams and robots). AWS Partners that test the ML environment are allowed to visit the warehouses, which is why they have guest ID cards and the Amazon security vests. Some people try to blow these simple facts out of proportion - furthermore, Amazon would just list them directly as global partners on its global partners list - Amazon has no interest in "hide and seek" games when it comes to real partnerships.

Mentions:#MSAI#ML

Just a cursory search will show you they are working on a lot of different technologies and not just social media. Whether any of them will bear fruit is a different story. Social media ads make most of their money, but to be willfully ignorant of their other endeavors is stubborn and stupid. Google makes most of their money on ads too, but both of them are bona fide tech companies. Can't say the same for RDDT.

• Large Language Models (LLaMA, etc.)
• Foundation Models for Vision/NLP/Multimodal AI
• Generative AI Tools (e.g., for ads, chatbots, media creation)
• Ray-Ban Meta Smart Glasses
• Meta Quest VR Headsets (Quest 2, Quest 3, Quest Pro)
• Horizon Worlds (Social VR/Metaverse Platform)
• Project Aria (Sensor-rich AR Research Glasses)
• CTRL-Labs Neural Wristbands (BCI-style input)
• In-house AI Chips (Training & Inference Accelerators)
• Custom Silicon Development (ASICs for AI workloads)
• Reality Labs (VR/AR R&D Division)
• Subsea Cables (e.g., 2Africa, Bifrost, Echo projects)
• Meta AI Research (FAIR / GenAI teams)
• Massive AI/ML Data Centers
• Immersive Meeting Platforms (e.g., Horizon Workrooms)
• AI-Powered Content Moderation Systems
• AI-Powered Personalized Feed Ranking
• 3D Avatars and Virtual Presence Tools
• Gesture-based User Interfaces
• Computer Vision Systems (for AR/VR integration)
• Speech-to-Text and Multilingual Translation Models

Mentions:#RDDT#BCI#ML

that’s being a bit naive no? Just because they’re not the ones who collect the data doesn’t make them any less complicit in mass surveillance. Their analytics and AI/ML models are made to operate within client infrastructure. It’s that analysis that makes the data valuable in the first place, even if not being done directly by them

Mentions:#ML

Their valuation is high because technologically they are state of the art (even more) advanced AI/ML data operations + their political connections... Thiel funded Trump + JD together with Elon.

Mentions:#ML#JD

Back in 2010, I'd devour a 4 lb Chipotle burrito—shoved straight up my ass—while blasting Ke$ha's "Tik Tok." Take me back, dad. They shorted my meat? No biggie. I'd fire up their garbage chatbot Pepper, milk it for free BOGOs. That thing ran on Microsoft Clippy-level ML. Took it years to catch my grind, then slapped a soft account limit. Easy fix: spin new accounts with free Google Voice numbers. Ran that scam 2-3 years strong. Finally, they killed Pepper and said bitch in person. Like I'm some psycho? Chipotle's devolving into Taco Bell trash. Hope they rot.

Mentions:#ML

I can not believe how many times this had to be repeated: LLM chatbots are not the only, let alone the primary, form of ML/AI behind this boom. I have no clue why so many people seem to sincerely think all of this investment are just models for asking chatGPT to make you grocery lists or whatever. I have a colleague from grad school, who is a Biostatistician, who is using a huge amount of compute for deep learning models to power RNA sequence modeling for a pharma company. You have multimodal foundation models, ML/AI models designed to parse image/video/audio/sensor data for things like robotics and manufacturing, security and surveillance tasks, medical imaging tech, etc. Those also feed into deep learning models for 3D perception, object tracking, and planning/prediction transformers for things like self-driving cars. Your entire social media algorithm, from Tiktok/Youtube feeds and ads optimization and what posts show up on what sites and what ads get surfaced, are largely being moved to transformer architecture and new deep learning models. I can tell you from personal experience, deep learning models are being integrated all over the finance world. Graph neural nets are being used everywhere for doing AML (anti-money laundering) and real-time fraud checks on financial transactions and to capture fraud rings. I agree with many that it is \*very\* overhyped right now and will have some deflation, eventually. However you're absolutely clueless if you sincerely think all of this is for some fucking brownie recipes and roleplay chats on OpenAI.

Mentions:#ML#RNA

The AI improving advertising is traditional ML and such for targeting. It is not generative AI, at least not yet. They have pushed LLM-based text variations in ad, but there are only complaints about it by marketers. Every single domain (niche) expert I know suggests that you turn off the AI suggestion tools. On the other hand, their AI-based audience targeting, which is traditional ML and not LLMs, does help at times. The massive capex is into LLMs, which does not aid in revenue yet. There is some hope that generative AI for content will increase user screen time, but that is in very early stages. Please stop conflating all AI with this massive capex. If you look up the articles today, Zuckerberg is quoted as saying he "thinks" they are starting starting to see some ROI in the core business. It is a very weak and defensive statement. The improvements in AI to revenue are all on the non-LLM ML side. Meta's audience targeting is first in class, in my opinion, rivaling or better than Google's. But that is not the AI targeted by the expensive Superintelligence lab.

Mentions:#ML

lol one guy on /r/investing couldn’t figure out how to use it as an assistant so it’s toast 😂 that’s like blaming a hammer for your house collapsing. As someone who worked on major enterprise AI/ML deployments at Google for 7 years, I can tell you confidently that you’ve got sweet fuck all of a clue what you’re talking about… and likely less about what you’re investing in.

Mentions:#ML

Need advice from savvy investors. My advisor just moved from Schwab to Merrill Lynch and I have the task of either moving my portfolio with him to ML or stay with CS but not have it managed. My issue is for the last 6 years, he has managed it with very little return. I just checked my opening rollover and its virtually the same $ amount today. How is that possible? Plus, 2 of the products are not supported by ML so they have to be sold off. I've researched Fisher but have not seen very good posts about them. My Fidelity through work is working very well but I cant roll this over into it. I am not in the least up on the latest investing trends but I may have to get there. What say you experts? Am I an idiot for following a failed relationship or should I roll the dice and let it ride?

Mentions:#ML

What exactly triggered you? I dont understand. I'm against scammers myself and people who promise guaranteed returns or results. We did research for over a year on ML powered technology called boosted.ai . We will explain to people how they can analyze stocks using a simplified version of that technology (since its very complex for average person). That's it. The webinar is free, people can build a strategy for free and they can monitor pre built strategies also for free. What scam are you talking about I have no idea.

Mentions:#ML

I have no idea what you're trying to say. Monetized means someone has to give you money for that thing. Right now AI/ML is very impressive, but it's *losing* companies that train models and maintain the infrastructure massive amounts of money. It's benefitting companies that produce hardware for it, or build out datacenters, or assemble server racks, but for that to continue, the customers of these companies will need to figure out how to make money on AI/ML products.

Mentions:#ML

I'm referring to neural nets, clustering, CCA, all the other stuff that got lumped into ML when people started calling it that. LLMs are an application of ML methods to language, but that's a very specific type of data with its own set of concerns, and at least in my field, we tend to put LLMs in their own class of algorithms.

Mentions:#ML

*sigh* I'll explain why you're being downvoted. Tensors purpose isn't to blast every other chip out of the water in benchmarks. It's to accelerate on device ML workloads..and more importantly, do those workloads using less power. And yes, Google has Tensor processing units (TPUs) in data centers as well. They are two entirely different chips... And surprise surprise! The design of that chip prioritizes power efficiency (and scalability) over performance. Because when you're trying to run an absolute monster service (like search and AI overviews), scaling and power efficiency is a lot more important than individual chip performance.

Mentions:#ML

>how to interface LLMs and agentic programming with the deeper ML algorithms Can you elaborate? Any examples? What do you mean by "deeper ML algorithms"? Deep learning? Which is what LLMs are based on, basically creating a hybrid model?

Mentions:#ML

The useful algorithms are also pretty specialized, all the non-LLM stuff has been on the back burner but that's really where the growth is IMO. On top of that, we're just starting to think about how to interface LLMs and agentic programming with the deeper ML algorithms, which could actually start yielding some results.

Mentions:#ML

It wouldn't be hyperbole to say that *every* ML paper and project of significance prior to 2023 relied on CUDA. ROCm was a nightmare to deal with back then and had very little adoption within academic or industry circles. Custom is definitely the fastest growing, but a lot has changed in just a few years.

Mentions:#ML

We got overhang removed today, same ppl who bought 50ML shares recently, got their 17ML shares tradable today at price of 0.35 (they still have 50ML worth of shares price at 1.35, they won't dump their own money lol, locked in).

Mentions:#ML

People have lost sight of what LLM's are. They are chat bots. Really decent chat bots. They work by guessing what words have the best chance of satisfying you, based on the input prompt, their dataset and weightings provided by a human during training. They're a very useful tool. But surely it's obvious this is not the path to any form of sentience. ML more generally is even better. It's very useful for iterating over a complex problem with many parameters, such as finding new drugs and many other things. But it's not capable of thinking. It can't invent something really out of the box, only iterate. Super useful, but this isn't the Matrix.

Mentions:#ML

> It’s still just doing probabilistic outcomes. That’s what ML has been also why it can never come up with saying it doesn’t know what something is and makes something up. Hallucinations are due to bad training methodology. If you reward it based on accuracy, and punish it for refusing to answer, you encourage hallucinations. This can be remedied by increasing penalties for hallucinations. A lot of human workers have the same pitfalls. People act like they know more than they do, make an educated guess, and fail. Doctors mis-diagnose, sales people claim features that don't exist, construction workers make mistakes, human drivers crash, etc. It's easy to focus on the mistakes AI makes, but no one focuses on the preventable mistakes humans make. We hold AI to a much higher standard than humans. >AI evangelists can keep trying to sell it as some cure all etc, but from my experience and my academic work with ML it’s still doing the same stuff just at a bigger scale. >It won’t replace workers, it will just be yet another automation tool and frankly just a generation tool than being some “knowledge” center. AI has already replaced millions of workers, so it's a bit late to claim it won't replace workers. The only real question is how many workers it will replace. I was an AI skeptic back in 2023 for the same reasons you mentioned. But the pace the industry has made in the past 2 years is nothing short of incredible. When I tested LLMs back in 2023, it couldn't even correctly write a 10 line function to calculate a common financial metric. Now in 2025, it can build entire applications, identify security vulnerabilities and bugs in human-written code, and more.

Mentions:#ML
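To put quick numbers on the reward-design point above (the figures are illustrative, not from any specific lab's training setup): if a wrong answer costs nothing, a model that is only 30% sure should still guess; add a penalty for confident wrong answers and abstaining becomes the better move.

```python
# Expected reward of guessing vs. abstaining under two illustrative scoring schemes.
p_correct = 0.30   # assumed chance of guessing right on a hard question

def expected_reward(p, r_correct, r_wrong, r_abstain):
    guess = p * r_correct + (1 - p) * r_wrong
    return guess, r_abstain

# Scheme A: accuracy only (wrong answers and refusals both score 0).
guess_a, abstain_a = expected_reward(p_correct, 1.0, 0.0, 0.0)
# Scheme B: hallucination penalty (a confident wrong answer scores -1).
guess_b, abstain_b = expected_reward(p_correct, 1.0, -1.0, 0.0)

print(f"accuracy-only scoring:     guess {guess_a:+.2f} vs abstain {abstain_a:+.2f} -> guessing wins")
print(f"with wrong-answer penalty: guess {guess_b:+.2f} vs abstain {abstain_b:+.2f} -> abstaining wins")
```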

It’s still just doing probabilistic outcomes. That’s what ML has always been, which is also why it can never say it doesn’t know what something is and instead makes something up. You can try to make it as complex as you want, but as someone who has done and worked with ML, it still boils down to making the best guess based on certain factors and probabilities, and even then its level of accuracy can range from terrible to OK to great based on what it’s given in any domain - which has solely been based on digitized information. AI evangelists can keep trying to sell it as some cure-all, etc., but from my experience and my academic work with ML it’s still doing the same stuff, just at a bigger scale. It won’t replace workers; it will just be yet another automation tool, and frankly more of a generation tool than some “knowledge” center.

Mentions:#ML

"Probability machine" is a massive oversimplification. It would be like arguing that the internet is just "fancy electrical and light signals". "Pattern recognition and replication machine" is probably a better description. Yes, LLMs select the highest probability output, but their complexity has gone far beyond what most people assume. With Trillions of weights and hundreds of hidden layers, there is a lot of patterns being represented. Most human work can be achieved by AI/ML because most jobs involve learning a series of pattern, and replicating it. The only thing current AI is incapable of is innovating outside the current framework of human knowledge. Think inventing a new style of music/art(not copying an existing style), making a new scientific discovery that doesn't involve existing research, etc.

Mentions:#ML

Dodgers lock in, NEED ML

Mentions:#ML

yup even though i have dodgers ML

Mentions:#ML

As someone who works for one of the big tech firms in ads… India is massive scale but low $. It sometimes costs more for infra and delivery to show an ad to a user in India than you get in return. It takes a lot of investment and targeting to squeeze margin and take home a healthy net - I'd rather have expensive ML staff spending their time on high-ROAS markets.

Mentions:#ML

Yeah, I don’t get the narrative of “LLMs don’t work” or “LLMs are under-delivering”. There are always companies and grifters that overpromise and hype way too much. But LLMs add real value, which is different from the “AI/ML blockchain” crap of the 2010s.

Mentions:#ML

There are tons of applications of transformer-based ML models. The entire digital visual space has been transformed by them. Every white-collar job uses them extensively now. Every student uses them to cheat and is completely dependent on them.

Mentions:#ML

Work in data science / ML. I see a large number of companies that used to use UiPath or other lesser-known RPA tools like Kofax migrating away from them in favor of newer solutions. Would not invest in this one.

Mentions:#ML

For your chart, short put = ML (max loss) should be unlimited, right? Selling a naked put keeps the seller on the hook, especially if the price of the underlying goes below the contract's strike. What license are you taking?
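
For what it's worth, a quick payoff sketch (hypothetical strike and premium, standard expiration payoff, ignoring fees, margin, and assignment mechanics) suggests the short put's worst case is large but capped at strike minus premium, since the underlying can't trade below zero; the short call is the leg with truly unlimited risk:

```python
# Short put P/L at expiration, per share (toy numbers, no fees or margin modeled).
strike = 50.0
premium = 2.0   # credit received for selling the put

def short_put_pl(underlying_price):
    intrinsic = max(strike - underlying_price, 0.0)   # value the put buyer can exercise for
    return premium - intrinsic

for px in [60, 50, 40, 20, 0]:
    print(px, round(short_put_pl(px), 2))
# Worst case is at underlying = 0: premium - strike = -48.00 per share,
# a large but capped loss rather than a literally unlimited one.
```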

Mentions:#ML

Of all the ML models they picked for predicting prices, it was an LLM. 😆

Mentions:#ML
r/stocksSee Comment

Much worse. ML

Mentions:#ML

Loving the comments here, especially around how Apple isn't doing this or that. My take is this: Apple is using extensive ML/AI throughout their ecosystem on consumer products, but you generally don't know it's there unless you look for it. Everything from searching your images with text descriptions to monitoring your health information on device. More broadly, Apple is using AI on its server side for everything from service consumption and marketing analytics to observability on its infrastructure. I think Apple is being exceptionally smart in how it's rolling out features and in particular not promising the world on a technology that is still relatively new. I also think that they are working on a lot more than you will ever see or hear about, a lot of which might never make it to the devices. Regarding Apple being an innovator vs. a company that just refines products: personally I'll take my 2-year-old MBP M2 Max over pretty much any other current laptop other than a new MBP (I have three new work Windows laptops on my desk right now). When I step back and look at the capabilities of their products, they're pretty exceptional. They might be expensive, RAM and SSD in particular, but you can't argue that they don't work really, really well. Example: I'm running a 30B local LLM on mine while running multiple Linux VMs, and it's all working great, on battery!

Mentions:#ML#SSD

I reckon behind the scenes there's probably a disgusting amount of resource being thrown at advancing different verticals of AI. We just won't hear about it until it's successful. (Pure assumption that I can't substantiate with data.) Have to imagine lots of conventional ML that already had utility in sectors like HLS for drug discovery or predictive financial/reconciliation models has probably benefited from the surge of investment from LLMs getting trendy.

Mentions:#ML

I'm sort of getting sick of saying this to every wide-eyed investor who doesn't understand the technology, but there is no possibility of ML on quantum computers for at least 20 years and probably much more. The QCs everyone is trying to develop right now, with great difficulty, do not have QRAM (quantum memory). That gives them a few hundred logical qubits to work with at most, with no other memory to hold the model. That makes using them for ML a nonstarter. Realistically, QCs will not be used for ML tasks within our lifetimes. Even if (or when, if you want to be optimistic) we finally have QRAM, ML-type tasks enjoy at most a quadratic Grover speedup, much more modest than the exponential Shor speedup that a *narrow* class of problems (factoring composite numbers into primes, for example) enjoys. But quantum computers, in terms of cycles per second and instructions per cycle, are much slower than our classical computers. It's just that each of those instructions can do either *drastically* more (for things like factoring) or *modestly* more (for things like unordered database search and some ML-related tasks). The clock speeds and IPS numbers will need to get way up before "quantum supremacy" (that is, quantum computers outperforming classical ones) can finally be achieved. We are currently in the era of **Noisy Intermediate-Scale Quantum** devices. These cannot even factor. The hope is that after NISQ, we can get **fault-tolerant quantum computers** that can run Shor's algorithm. This is the point where quantum computers will become truly useful. But then we still need QRAM, which does not appear to be close at all, and then we still need to improve these technologies to make the quadratic Grover speedup actually matter more than the constant overhead from slower clock cycles/IPS counts. So no. There is currently no connection between quantum and ML. One day there will be, but that day is not soon.
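
To put rough numbers on the quadratic-speedup-versus-slow-clock point, here is a back-of-the-envelope sketch; the throughput figures are purely illustrative assumptions, not real hardware specs:

```python
import math

# Illustrative throughput assumptions (not real hardware numbers):
classical_ops_per_sec = 1e9   # a classical core doing ~1e9 useful ops/sec
quantum_ops_per_sec   = 1e3   # a hypothetical fault-tolerant QC's effective logical rate

def classical_time(n):
    return (n / 2) / classical_ops_per_sec                       # expected cost of unstructured search

def grover_time(n):
    return (math.pi / 4) * math.sqrt(n) / quantum_ops_per_sec    # ~sqrt(N) Grover iterations

for exp in (9, 12, 15, 18):
    n = 10 ** exp
    print(f"N=1e{exp}: classical {classical_time(n):.2e}s  vs  Grover {grover_time(n):.2e}s")
# With these made-up rates the crossover only arrives around N ~ 2.5e12,
# and parallelizing the classical search pushes it out even further.
```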

Mentions:#ML
r/stocksSee Comment

I was with you till you said machine learning. When people say AI right now they’re referring to GenAI, which is a very specific field of ML focused on content creation. ML has been around for decades.

Mentions:#ML

I both agree and disagree. This is kind of AI news because quantum computing will certainly have applications in ML. This is already being developed and the real blocker is the “computing” part right now. A professor I know said this was an active research area and he sees it being 5-10 years away from being one of the big next steps in AI. That said, it’s probably too early to be hyped about AI with quantum computing.

Mentions:#ML

Sure, yeah, there are also other forms of ML being used in other places. I was referring to the hype and the "bubble". The only self-driving hype was Tesla and they ain't delivering. But every company needing to use the AI buzzword is all about gen AI (LLMs and image/video etc. generators).

Mentions:#ML

I never said they'd never cross. I said that without quantum memory, quantum computers will not be running any ML applications, and the quantum computers being developed are QRAM-less. If or when QRAM is developed, it will still be many years before quantum computers outperform classical ones in ML applications. The proposed quantum speedup for ML is quadratic, not exponential like what QCs get for factoring and related problems, and constant factors from all the advances we've made in classical computing hardware will dominate for reasonable problem sizes until QC cycles per second catch up. I know this because CS is my educational and professional background and the above is the fundamental state of things. As for ML significantly helping QC research, I'm skeptical, but less confident.

Mentions:#ML

ML? Mark Luckerberg?

Mentions:#ML

It's massively more efficient than classical computers for a very narrow set of problems. While there are a billion proposals for how it could maybe help ML tasks, none of them are thought to be realistic or implementable without QRAM (and QRAM is way further down the road than just quantum computing). Everyone wants to merge "quantum" and "AI" because that would sell stocks like hotcakes, but they're not terribly related right now. Maybe in the future.

Mentions:#ML

nah, software engineers are doing agentic work, and the AI/ML PhDs, data scientists, and career pivoting grifters that try to be on the AI teams are getting cut eventually

Mentions:#ML

In companies I work with I see this a lot. Basically two things are happening simultaneously: 1) everyone wants to be associated with the AI effort for their career, but they are useless nobodies grifting their way to job security and fooling no one. They get cut eventually. 2) There are several branches of AI/ML work being done by respective divisions like “data science” teams. But the only branch that matters is agentic work, which is creating agents with LLMs; these are the more recent teams and they are executing much quicker, just with software developers, none of those time-wasting PhDs. This is where the moat is. Other teams gotta go.

Mentions:#ML

Quality matters more than quantity for ML.

Mentions:#ML

Check [Rezolve.ai](http://Rezolve.ai), they use ML to automate and improve digital commerce, customer service, and internal business processes. Its solutions include a generative AI-powered sales assistant for e-commerce, an autonomous agent for IT and HR service desks, and tools for personalized shopping experiences. 

Mentions:#ML#HR

Warriors ML tonight

Mentions:#ML

Check Boosted.ai, they use ML to analyze stocks + LLMs to explain reasons for ranking drivers, stock picks etc. Builder.limex.com is a light version of it

Mentions:#ML

ChatGPT is fun, but the true power of AI/ML comes when you combine it with autonomous robots that are intelligent enough to do complex work. Stuff like pattern recognition (audio, video and other sensor data) to identify their surroundings, paired with intelligent systems capable of making decisions based on the data. This is what the next level of automation will look like, and while you will only see robots, it is in fact a combination of various complex technologies. AI/ML will be one of the cornerstones for it.

Mentions:#ML

My 2 cents are: Don't listen to those people very much, as well as people like u/ThePunkyRooster ; basically they have no idea what they're talking about despite their credentials. Having worked in ML/AI in itself means basically nothing. Having PhDs in the area can mean something - depending on the exact nature of the research being done - but still, a large number of even ML/AI postgrads and researchers were unable to predict today's capabilities of LLMs and gen AI. So why should we listen to more predictions from them? Claims like "*Gen AI is garbage, expensive, and won't result in anything positive*" etc. are cocky and over-confident. In reality, at this moment **no one** really knows how much this tech will develop further and how it will impact markets and industries - or not - not even the people who invented it and/or understand it on a very deep level. It's a waste of time to try to predict the future impact of gen AI by looking at the current market sentiment or profitability of current AI companies like OpenAI. During the dot-com bubble you could've also kept pointing out how the leading companies were overvalued and not profitable enough to sustain themselves, and you'd've been right... and you could've claimed how online shops are only good for certain very specific things yadda yadda... and then there was a bubble and you could've gloated about how correct your predictions had been... And yet, 10, 20, 25 years later, online commerce is an industry of trillions of dollars and has basically become the default for when you want to sell most kinds of goods. Speculation is a fun pastime, but really the only way to know is to wait and see if & how it pans out (or not).

Mentions:#ML

Let's not forget about their Chief Scientist: Mr. Bagnell is a co-founder of Aurora and is currently Chief Scientist. Mr. Bagnell served as Chief Technical Officer of Aurora from December 2016 until July 2020 and has led software engineering throughout much of Aurora's history. He also currently serves as a Consulting Professor at Carnegie Mellon University's Robotics Institute and Machine Learning (ML) Department. He has worked for over two decades at the intersection of ML and robotics in industrial and academic roles. His research group has received over a dozen research awards for publications in both the robotics and ML communities, including best paper awards at the International Conference on Machine Learning, Robotics: Science and Systems, and Neural Information Processing Systems. Mr. Bagnell received the 2016 Ryan Award, Carnegie Mellon's yearly award for Meritorious Teaching, and was founding director of the Robotics Institute Summer Scholars program, a research experience that has enabled hundreds of undergraduates throughout the world to leap into robotics research. Before co-founding Aurora, Mr. Bagnell served as the Head of Perception and Autonomy Architect at Uber's Advanced Technology Group from January 2015 to December 2016 and as a professor at Carnegie Mellon from 2004-2018. He holds a Ph.D. in Robotics from Carnegie Mellon and a B.S. in Electrical Engineering from the University of Florida. Oh, and their ex-CPO is now EVP of Global Product & Chief Product Officer at GM. Co-Founder & Chief Product Officer of Aurora. ***Director of Tesla Autopilot***. Lead PM of Tesla Model X.

Mentions:#ML#GM

You can do a hedged equity exchange. Large private banks like UBS, ML, Morgan Stanley, or JPMorgan can help with this. It’ll get you a diversified ETF over time for your NVDA and AAPL without creating a tax bill.

Nvidia wasn’t the “next thing”. Their core technologies that got them to blow up (CUDA, tensor cores, etc.) were around for years before the stock started running; it was AI, ML, and DL becoming more popular and accessible, with Nvidia being the best positioned in GPUs. There’s no practical or widespread use case for quantum computing, and said computers are over a decade away. I’m happy for whoever is benefitting financially from this, but this is far from Nvidia’s situation.

Mentions:#ML

There’s a number of things going on. A few years ago now, a paper came out in machine-learning land that suggested (with a fair amount of hand-waving) that AI capabilities would continue to scale exponentially with increased processing power. Basically Moore's Law for AI. A lot of powerful people drank this Kool-Aid, and the implication of this belief was that AGI and superintelligence and the ability to do anything really were only a few years away, and that if you didn't catch this train now, you'd be left behind forever. This led to people throwing huge sums of money at AI.

In the past year or so, we've come to realize that this isn't what's going to happen, both because of real-world results and more recent ML literature. Transformer abilities do not continue to scale exponentially; if anything, some problematic behaviours seem to get worse at larger scale. In addition, things like hallucinations seem to be a fundamental feature of the technology. Instead of being at the start of a rocket taking us to an unimaginable future of excess and ease… we are probably already basically at the plateau of what the technology is capable of.

And concerningly, no one is making money off these current capabilities. Microsoft et al are spending hundreds of billions of dollars on this stuff and are only making 1-5% revenue off those expenses. These big companies seem to realize this as well and have started panicking a bit. You have Microsoft removing AI-specific revenue from their quarterly reports and throwing Copilot at everything to see what sticks. Or look at OpenAI's actions. In desperation, these companies are announcing circular deals to try and buy themselves just a bit more time… because they have nothing and are running on fumes at this point.

The sentinel events here were some of the ML papers that came out earlier this year, the failure of ChatGPT 5, the failure of agentic AI in general, the failure of LLMs to significantly improve productivity in real-life implementations (this data coming out in the last 6-12 months as well), and the persistent lack of revenue off capex on AI projects. This is all stuff that's come to the surface largely in the past six months. Hence why the tone has changed so much.

**TLDR**: a few years ago people thought Moore's Law would apply to AI and got overexcited about that possibility, spending hundreds and hundreds of billions of dollars chasing a pipe dream. When it turned out that wouldn't be the case and the technology had already largely reached a plateau, people started to panic.

Mentions:#AGI#ML

I've worked in ML/AI for 20 years and I'm telling you gen AI is garbage, expensive, and won't result in anything positive. AI models are best utilized in highly specific areas: pattern recognition across huge sets of data. Things that don't have mass appeal, mass adoption, and are generally speaking not broadly marketable.

Mentions:#ML

Shocker that ML or my WF wealth advisors would use a calculation to show a higher return

Mentions:#ML#WF

I talked to my wealth manager @ ML (played golf with him yesterday afternoon), who said he gets this question all the time. He said Merrill benchmarks against the SPXTR index, which reinvests dividends instantly and has no fees. TotalRealReturns uses VFINX, which includes drag from expenses, timing of dividends reinvested, and tracking error. Over 10 years, these compound into a meaningful gap. I also ran the raw numbers through AI and the return was closer to ML's figure than TotalRealReturns.com's.
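
The size of that gap is easy to sanity-check with a toy compounding model; the return and drag figures below are illustrative assumptions, not Merrill's or TotalRealReturns' actual inputs:

```python
# How small annual drags compound into a visible gap over 10 years.
# All rates are illustrative assumptions, not actual fund or index figures.
years = 10
gross_return = 0.10          # assumed index (SPXTR-style) annual total return
drag = 0.0014 + 0.0010       # assumed expense ratio plus dividend-timing/tracking drag

index_growth = (1 + gross_return) ** years
fund_growth  = (1 + gross_return - drag) ** years

print(f"index multiple: {index_growth:.3f}")   # ~2.594
print(f"fund multiple:  {fund_growth:.3f}")    # ~2.538
print(f"shortfall after {years}y: {(index_growth - fund_growth) / index_growth:.1%}")   # ~2%
```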

Mentions:#ML#VFINX

Steelers taking this ML

Mentions:#ML

SHORTING $ORCL through 2026. ENRON vibes. Follow the money. Pump Oracle with AI hype → borrow against inflated shares → liquidate borrowed capital into Skydance Media deals (Paramount, Warner Bros, etc.) → Oracle shareholders left holding the AI bag while Ellison builds a media empire to support other agendas. Do the math:
- Nvidia invests in OpenAI → OpenAI pays Oracle → Oracle buys Nvidia chips (circular accounting)
- RPOs + hype == 3x pump
- Ellison shares == 41%
- 2018 carveout == Ellison pledges shares (no Form 4s)
- Maintains shares == voting rights
- Ellison liquidates 30% == Skydance + TikTok + Free Press + etc.
- RPOs <> $$$ (non-binding)
- (OpenAI rev x 5) - FY27 RPO == breakeven
- FY30 $166b infrastructure revenue x 14% margin <= $23b
- Moore's Law + (FY27+) > cloud margins
- Future == on-device LLM/ML and private open models
- AI ~ diminishing returns

All of this does not add up to what the market is being sold.
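
Spelling out the one line of that list that is plain arithmetic (both inputs are the commenter's own projections, not reported figures):

```python
# The FY30 back-of-envelope from the list above, using the commenter's own inputs.
fy30_infra_revenue_b = 166   # projected infrastructure revenue, $B (commenter's figure)
assumed_margin = 0.14        # assumed infrastructure margin (commenter's figure)

implied_profit_b = fy30_infra_revenue_b * assumed_margin
print(f"implied infrastructure profit: ~${implied_profit_b:.1f}B")   # ~$23.2B
```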

Mentions:#ORCL#ML

**BTQ demonstrates quantum-safe Bitcoin:** Bitcoin Quantum Core 0.2 replaces Bitcoin's vulnerable ECDSA signatures with NIST-approved ML-DSA, completing the full flow of wallet creation, transaction signing and verification, and mining. This provides a standards-based path to protect the entire $2.4 trillion Bitcoin market. only a mere +25%? no way
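
For context on what swapping ECDSA for ML-DSA means at the signature level, here is a minimal generic sketch using the open-source liboqs-python wrapper; this is not BTQ's Bitcoin Quantum Core code, and the "ML-DSA-65" identifier assumes a recent liboqs build:

```python
# Generic ML-DSA (FIPS 204) sign/verify round trip via liboqs-python.
# Not BTQ's implementation; algorithm availability depends on the liboqs version.
import oqs

message = b"toy transaction bytes"
alg = "ML-DSA-65"   # NIST security level 3 parameter set (assumes a recent liboqs)

with oqs.Signature(alg) as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

with oqs.Signature(alg) as verifier:
    assert verifier.verify(message, signature, public_key)
    print("signature verified; ML-DSA sigs are ~3.3 KB vs ~72 bytes for ECDSA")
```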

Mentions:#ML

I work in the ML space and I see my C-suite bosses working around the clock and going to all the seminars, conferences etc. trying to get on this AI train, but they have zero idea.. they don't know anything about the pros and cons.. they are spending huge amounts of $$ on software and upper-level management positions rather than hiring IB to the teams.. we're struggling trying to do all the cool AI stuff they need but after a couple of weeks projects are dropped or the goal post moves.. bulk of $$ wasted! Absolutely wasted!! They keep buying software and most of it does the same thing.. no one wants to code or the people who think they can code are crap!! So everyone is playing with these no-code/low-code SW but what's the real ROI here?? Nothing?! They (CEO, CTO, CFOs, VPs) want to tell the world that they are using AI and are with the trend but this adds no value to the company.. to be honest the bubble may have popped already or can burst anytime soon..

Mentions:#ML#SW#CTO

About BURU... I think Nuburu is likely transitioning right now into a monolithic AI defense company. This is my prediction: their blue laser systems in defense and industrial settings generate huge amounts of sensor data (temperature, vibration, light frequency, etc.) which is perfect for training ML models for a lot of things including optimization, targeting, and material detection. I could go on but...I will leave it at this for now. Let's see what happens.

Mentions:#BURU#ML

No, it doesn't. Those are meaningless benchmarks. They do this time and time again in all the industries and people fall for it. Real world performance isn't good.  https://www.worksinprogress.news/p/why-ai-isnt-replacing-radiologists?hide_intro_popup=true I'm not confused about what AI is. I know that ML has been powering algorithmic suggestions and helping parse massive amounts of data. But that's not what this build out is about. It's for LLMs, and attempting to reach AGI. Nobody is spending $500B to have better suggestions on Netflix. 

Mentions:#ML#AGI

You sound very knowledgeable in AI/ML, so can you please elaborate on why you think it won't deliver any value? As far as I can see in biological research (I work in machine learning for biological research; my PhD had a strong focus on natural language processing), this sort of stuff is making tremendous headway into all facets of data analysis workflows. So I for one am very excited for a future with AI. It would be great to hear your views too.

Mentions:#ML