
ML (MoneyLion Inc)


Mentions (24Hr): 1 (-75.00% today)

Reddit Posts

r/StockMarket: [Discussion] How will AI and Large Language Models affect retail trading and investing?
r/StockMarket: [Discussion] How will AI and Large Language Models Impact Trading and Investing?
r/smallstreetbets: Luduson Acquires Stake in Metasense
r/investing: Best way to see asset allocation
r/wallstreetbets: Neural Network Asset Pricing?
r/Shortsqueeze: $LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
r/wallstreetbets: Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
r/investing: Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
r/StockMarket: Moving from ML to Robinhood. Mutual funds vs ETFs?
r/smallstreetbets: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
r/stocks: hypothesis: AI will make education stops go up?
r/pennystocks: AI Data Pipelines
r/pennystocks: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
r/StockMarket: The Wednesday Roundup: December 6, 2023
r/wallstreetbets: Why SNOW puts will be an easy win
r/smallstreetbets: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
r/wallstreetbets: I'm YOLOing into MSFT. Here's my DD that convinced me
r/pennystocks: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
r/investing: I created a free GPT trained on 50+ books on investing, anyone want to try it out?
r/pennystocks: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
r/smallstreetbets: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
r/options: Option Chain REST APIs w/ Greeks and Beta Weighting
r/stocks: How often do you trade news events?
r/stocks: Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
r/RobinHoodPennyStocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/pennystocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/Wallstreetbetsnew: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/smallstreetbets: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/wallstreetbetsOGs: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/WallStreetbetsELITE: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
r/wallstreetbets: 🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
r/investing: AI/Automation-run trading strategies. Does anyone else use AI in their investing processes? (Research, DD, automated investing, etc)
r/StockMarket: Exciting Opportunity !!!
r/wallstreetbets: 🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
r/Wallstreetbetsnew: Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
r/wallstreetbets: The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
r/investing: Treasury Bill Coupon Question
r/pennystocks: Predictmedix AI's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
r/stocks: The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
r/wallstreetbets: NVDA is the wrong bet on AI
r/pennystocks: Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
r/wallstreetbets: NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
r/StockMarket: [THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
r/investing: The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
r/wallstreetbets: My thoughts about Nvidia
r/wallstreetbets: Do you believe in Nvidia in the long term?
r/wallstreetbets: NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
r/wallstreetbets: Apple Trend Projection?
r/stocks: Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
r/investing: Which investment profession will be replaced by AI or ML technology?
r/pennystocks: WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
r/pennystocks: $RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
r/wallstreetbets: $W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
r/pennystocks: Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
r/stocks: This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
r/wallstreetbets: roku thesis for friend
r/stocks: Training ML models until low error rates are achieved requires billions of $ invested
r/wallstreetbets: AMD AI DD by AI
r/wallstreetbets: 🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
r/wallstreetbets: AI/ML Quadrant Map from Q3…. PLTR is just getting started
r/pennystocks: $AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
r/wallstreetbets: DD: NVDA to $700 by this time next year
r/smallstreetbets: VetComm Accelerates Affiliate Program Growth with Two New Partnerships
r/pennystocks: NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
r/pennystocks: Netramark (AiAi : CSE) $AINMF
r/pennystocks: Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
r/wallstreetbets: Testing my model
r/pennystocks: Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
r/wallstreetbets: [Serious] Looking for teammates
r/stocks: [Serious] Looking for teammates
r/StockMarket: PLTR Stock – Buy or Sell?
r/StockMarket: Why PLTR Stock Popped 3% Today?
r/wallstreetbets: How would you trade when market sentiments conflict with technical analysis?
r/Shortsqueeze: Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
r/StockMarket: Stock Market Today (as of Mar 3, 2023)
r/wallstreetbets: How are you integrating machine learning algorithms into their trading?
r/investing: Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
r/pennystocks: Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
r/Shortsqueeze: Why I believe BBBY does not have the Juice to go to the Moon at the moment.
r/investing: Meme Investment ChatBot - (For humor purposes only)
r/pennystocks: WiMi Build A New Enterprise Data Management System Through WBM-SME System
r/wallstreetbets: Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
r/Shortsqueeze: The Squeeze King - I built the ultimate squeeze tool.
r/Shortsqueeze: $HLBZ CEO is quite active now on twitter
r/wallstreetbets: Don't sleep on chatGPT (written by chatGPT)
r/wallstreetbets: DarkVol - A poor man’s hedge fund.
r/investing: AI-DD: NVIDIA Stock Summary
r/investing: AI-DD: $NET Cloudflare business summary
r/Shortsqueeze: $OLB Stock DD (NFA) an unseen gold mine?
r/pennystocks: $OLB stock DD (NFA)
r/wallstreetbets: COIN is still at risk of a huge drop given its revenue makeup
r/wallstreetbets: $589k gains in 2022. Tickers and screenshots inside.
r/pennystocks: The Layout Of WiMi Holographic Sensors
r/pennystocks: infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
r/investing: Using an advisor from Merril Lynch
r/pennystocks: $APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
r/StockMarket: Traded companies in AI generated photos?
r/pennystocks: $APCX Huge developments of late as it makes its way towards $1
r/pennystocks: ($LTRY) Lets Hit the Lotto!
r/wallstreetbets: Robinhood is a good exchange all around.

Mentions

Customers *are* paying for it. Again, the freebie public casual ChatGPT stuff is a drop in the bucket and not the big picture with industrial applications. As the saying goes, if you're not paying for a product you *are* the product. I wouldn't be surprised if, when it comes to the publicly available stuff, it's all just extra input for further model development; ML loves having lots to chew on.

Mentions:#ML

I am not saying anything different. What I am saying is that this chatbot stuff does not have much more to offer than it does now; there is not much going forward. We made the ML leap a decade ago. Industrially applicable stuff like computer vision was already there; LLMs are just the most impressive client-side application. I have been in the machine learning space for a decade now as a researcher and a financial AI developer, and the only real value I see for my field is parsing large volumes of text data, which I then have to handle with conventional ML. I am not saying LLMs are useless, I am saying that they haven't been as revolutionary as we first expected.

Mentions:#ML

Those existed prior to LLMs, and they did not gain this much momentum. They were confined mostly to the software engineer landscape. LLMs are arguably the most impressive client side ML application. And, Google alongside a few companies like Meta and Amazon were at the forefront of ML in general. I am talking mostly about the genai stuff. 

Mentions:#ML

I didn't say it's going anywhere. It's just that AI has not proven its worth other than being an (extremely good) personal assistant. That's excluding machine learning; I am focusing solely on the GenAI stuff. Google offers a pretty sophisticated ML ecosystem, which puts it miles ahead of OpenAI and the multitude of AI startups by default. Nvidia also; their chips were already in use before the GenAI wave (you could use Nvidia infra from Google's Colab environment for at least five years now). That's the AI hype; I am not talking about startups that design ML pipelines and systems to diagnose illnesses etc., which have been around for at least a decade now. It's the GenAI that's the balloon. Society doesn't need weird videos of cats making pasta. It needs heavy-ass industry to feed billions, it needs sustainable energy infra, it needs drugs that cure disease. All those needs go unmet while billions are being poured into OpenAI. That's the balloon.

Mentions:#ML

“AGI tomorrow” The current AI hype has managed to extend what is basically Siri/Alexa out over years and convince the dumbest people that they can be rich and eliminate jobs. The hype keeps shifting from LLM to image/video gen and nobody has stopped to see it’s all noise with no real value. ML is amazing, but it isn’t something an LLM can shit out on command for any moron. It takes a lot of effort to get right and each application is unique. For example: https://youtu.be/DcYLT37ImBY?si=KIPhb1IyMYf1tiKD Already seeing some hype shift away to robotics with these bipedal robots people keep fundraising for. They mention AI as an afterthought, but it’s transparently just an Alexa/Siri. China is already miles ahead in robotics too so it’s just more grifting of American investors.

Mentions:#AGI#ML

If it worked, why wouldn't countless ML algos just be exploiting it? And once they exploit it, it no longer works. Until you can answer how a human would be able to utilize something like TA when a sophisticated multi-million-dollar ML system can't, you shouldn't think TA works.

Mentions:#ML

If I were to put money on AI, it'd be Google, because they're using their compute power for projects that will actually matter. OpenAI is still trying to make ChatGPT into a profitable consumer and institutional product. Basically same with Anthropic. But that's not where AI shines, even though that's where most of the compute power is going and why all of these data centers are getting built. It's unsustainable. Google's Gemini integrations are basically sidequests, though. Their really important work with AI and ML is developing products that show AI in the context of real productivity gains that matter to various fields, including health, meteorology and other scientific fields.

Mentions:#ML

Mich ML VS Osu

Mentions:#ML#VS

You use the shiny new stuff for training models, and the slightly used shit for inference. Or image processing. Or whatever else where running on some sort of GPU is preferable to CPU-only. In our case it’s ML inference and image processing. Some of our researchers are working on H100s/H200s, but we’re still getting great mileage out of our older A100s. Hell, one of our guys is still running a DGX with fucking VOLTAS. Works well enough for him.

Mentions:#ML#DGX

I agree it is inefficient, but disagree that it can’t scale. These companies haven’t figured out how to optimize compute per user query. The reasoning models are a step toward that optimization because they let the model decide how long to “think” or use compute for a given problem. I think companies are going to continue to try and find a balance between performance and user satisfaction. That’s going to take time, and some major breakthroughs, both in ML research and hardware/software development. I do think it’s very possible. Just look at cloud computing 15 years ago and large-scale database architectures. They’ve changed tremendously with the introduction of Hadoop and Spark. These took time to develop, but have led to massive gains in cloud computing power, costs, and capabilities.

Mentions:#ML

I'm not confusing ML with LLMs; that wasn't what I was saying. You're also not recognizing how useful LLMs are at processing audio into text and analyzing it.

Mentions:#ML

This is probably the crux of it. Mom and Pop users are necessary. The LLM has to touch every level of the economy for the amount of investment that's being pumped in to be justified. Accelerating back end ML is fine and further product developments, but that's not gonna create a return fast enough to justify how much money is being pushed in... The other option is we replace a whole bunch of workers and somehow enable companies to save / profit billions. Or it's a bubble and will pop when no one wants to give OpenAI or Anthropic more money...

Mentions:#ML

I was young but I remember it being more of an obvious bubble. It was pretty obvious that just having a website like socks.com wasn't going to bring in the cash, everybody was looking for monetization strategies, turning eyeballs into money. I'm not sure it's that obvious now with AI, at least with Gen AI. ML has been providing real value for a decade or more so we are really talking about LLMs. So far it's a time saver for a few use cases but it is an unreliable partner for others.

Mentions:#ML

I don't think they will go away or even crash. The novelty might wear off for day-to-day users, but that's fine. A tiny blip. Daily mom-and-pop users aren't really the value; it's that it accelerates AI/ML development. I think it just looks crazy because it's novel. But that also doesn't mean the way it's being used now is the only way to use it. If you think about it, it's self-serving in that an LLM will accelerate its own advancement.

Mentions:#ML

Ok, then LLM boom and LLM bubble? I guess I mean, money is flowing to try to make better LLMs. Those had better justify the investment, otherwise OpenAI, Anthropic, etc. go belly up and take down the S&P 500. I totally agree AI/ML and even LLMs are here to stay. It's just, are they going to crash the stock market first? (It's not like the internet went away after the dot-com boom/crash.)

Mentions:#ML

That's what they said about regular CISC/RISC computers, in time any technology will trickle down. Perhaps during the initial stages, (which they should have done with AI/ML) such advanced computers could be institutionalised and access granted through proven utility and results driven agenda for commercial entities.

Mentions:#ML

That's an LLM, friend. To the layperson, AI = LLM over the last couple of years, but it really and truly means AI/ML. LLM is just a type of AI/ML, but it's not AI as a whole, nor does it represent even *most* companies who say they are leveraging AI.

Mentions:#ML

Anyone recommend a good option trader? Believe it or not, ML does not allow me to trade options.

Mentions:#ML

When you say AI, you mean LLM. When tech says AI, they mean AI/ML and sometimes LLMs. Everything, and I mean *everything* is touched by AI/ML.

Mentions:#ML

Reclaim – AI Calendar for Work & Life https://share.google/ftKxcIxIv8nE7xc9S

> I can't tell AI to do any function of my job more efficiently than I can do it.

Is it manual work? If you're using a computer, there's probably *something* an LLM can help you make more efficient. Anyways, we're probably not talking about LLMs specifically, but more likely machine learning and algorithms. Everything, and I mean *everything*, is touched by AI/ML. The clothes you're wearing, your tap water, your car... you'd have to live like an 1800s monk if you wanted to avoid AI/ML.

Mentions:#ML

Cloud revenue jumps at this stage of adoption is more tied to increased spend from existing customers than acquiring new ones. More workloads shifting to cloud, increased spend related to ML/AI initiatives, always increasing nests, etc.

Mentions:#ML

> Sam (more like Scam) Altman

Third or fourth fastest upvote of my life.

> I hope you read recent research papers [...]

Yes, I read research papers (not as much as I'd want), and the various researchers in the various teams in our department read a bunch too, and we organize events and presentations for knowledge sharing. And I agree that LLM capabilities are overstated by marketing & startups. But I almost always ignore all conversations about "AGI", "true intelligence", and the like; I prefer a more grounded and practical discussion, because often "it can't be done because XYZ" just translates to "we don't want to spend time to bother with implementation / engineering details or more complex approaches that might reduce error rates from an unacceptable 10% to an acceptable 5%". And I do think that everything that could be meaningful has not been tried yet, or at least not tried well enough. Many projects die (especially those with limited time, or without a researcher present) because people thought it would be as simple as: "throw in your documents in an embeddings model, use a vector DB, inject everything into a 100K-context-capable LLM, profit". Or (from my last job) "just feed the logs into the LLM and have it run terminal commands from our playbook to fix it".

> As someone else said, commoditizing of LLMs is likely gonna happen.

Absolutely. After a point people will be happy enough with the small/free stuff for most use cases (I already am plenty happy with my 3-month-old, 24B dense Mistral model).

> I used to work at Amazon [...]

Thanks for sharing! I had only one friend there who worked on AI-related stuff, but it was mostly statistical ML stuff with time series, and he left a year or two ago, so this was new to me.
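The "throw your documents in an embeddings model, use a vector DB, inject everything into the LLM, profit" pattern can be sketched in a few lines, which is exactly why teams underestimate it. This is a toy illustration only: bag-of-words counts stand in for a real embedding model, a plain list stands in for the vector DB, and the documents are made up.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Vector DB": just a list of (embedding, document) pairs.
docs = [
    "our refund policy allows returns within 30 days",
    "the server logs are rotated nightly at 2am",
]
index = [(embed(d), d) for d in docs]


def retrieve(query: str) -> str:
    # Nearest-neighbour lookup; the result would then be pasted into
    # the LLM prompt ("profit" is the step that usually fails).
    return max(index, key=lambda pair: cosine(embed(query), pair[0]))[1]


print(retrieve("when are logs rotated"))
```

The naive version works on a two-document corpus; the hard part the comment alludes to is everything this sketch skips (chunking, ranking quality, hallucination control, evaluation).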

> don’t I get bonus points [...] 2018 I began into the ML/DL

You definitely get bonus points! Btw, I started my ML journey pretty close to you, I think around Feb 2017. And sorry if I came across as too confrontational. I did read it, but I cannot connect how the previous bubble would be relevant to tech-based arguments about the future profitability potential of current companies (big and small). I see the parallels and differences in profitability and future potential for stock appreciation for whoever survives, but I avoided any of that in my post, hence it was weird to me that this topic was raised.

Mentions:#ML

> Based on your experience in the field, how far off are the reasoning models from being able to do anything genuinely useful?

Negative distance. They can already do plenty of useful stuff, and I mentioned in point 6 that I'm working on an actually useful (and likely profitable) project. What made this project possible is:

* Inputs, intermediate data, and outputs are all text-based
* The output is *very* standardized (format, structure, tone)
* The current solution is a custom-built workflow, not a generic "agentic" implementation. We leave very little to the LLM, and hold its hand all the way
* *Lots* of feedback from experts, scientists, UX, and engineering

> I see the big money over the next 5-10 years from AI / ML in robotics and similar fields [...]

Big data is old news (but always relevant). A strong "yes" about robotics. Judging by some research results I've seen and the drastic cost reduction of robots, it makes sense to me that interest will rise both from hobbyists and companies. Robotics will let AI (be it LLMs/VLMs or other entirely different architectures) tap into fields that weren't possible before. And it doesn't have to be anything fancy or humanoid; a robot that can pick up more sensitive fruit would be nice (btw some early attempts were made in the 2010s, but I'm not sure how they ended up). I *think* there is research showing that betting on new technologies (and sector ETFs in general) hasn't worked out in the past, but who knows. I've personally put a tiny amount (<1%, and it will keep dropping) in ROBO (if anyone knows an international alternative, please suggest!).

Mentions:#UX#ML#ROBO

Thanks for this, great post. A few questions if you don't mind:

1. Based on your experience in the field, how far off are the reasoning models from being able to do anything genuinely useful?
2. I see the big money over the next 5-10 years from AI / ML in robotics and similar fields (self-driving cars, industrial processes, agricultural processes, etc.) and possibly also in big data processing - the stuff Palantir and Snowflake are doing. Would you agree?

Mentions:#ML

Don't I get bonus points that I was a software dev in the tech industry before the dot-com era? I got to see that debacle. And in 2015-16 I was confident that NVDA was a different animal; in 2018 I got into ML/DL, so I understood your AI experience. The stock market is behavioral; its attitude sometimes agrees with the tech facts.

Mentions:#NVDA#ML

As someone with a 9070XT, gaming? Great. Try to do any ML workloads? Actually like pulling teeth.

Mentions:#XT#ML

Supervised learning implies you provide a label of correctness and the loss optimises towards that objective. This is alignment, because a human creates that objective and the optimisation algorithm finds a design that satisfies it as best it can within the variability of the parameters it can tweak. So yes, all supervised models are aligned in that respect to the objective encoded in their respective loss functions, because that's what the ML engineer intended. When doing next-token prediction there is no structure to the data, and it is unsupervised to begin with. True, there is a loss, but that's just token-prediction loss, which you cannot say encodes the engineer's alignment. No engineer at any point tweaks the data and looks at what token should precede what other tokens, etc. The engineer has no clue what the training data's embedded space looks like, nor how tokens should relate to each other. There is no question of alignment here, as there is nothing to be aligned to.
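Mechanically, the two losses being contrasted look identical; a toy sketch (made-up numbers, hypothetical spam example, nobody's actual model) shows that the only difference is where the target comes from: a human-chosen label in the supervised case, versus whatever token happened to come next in raw text.

```python
import math


def cross_entropy(predicted: dict, target: str) -> float:
    # Negative log-probability assigned to the correct class/token.
    return -math.log(predicted[target])


# Supervised case: a human labeled this email "spam", so the loss
# directly encodes that human-defined objective.
spam_probs = {"spam": 0.9, "not_spam": 0.1}
supervised_loss = cross_entropy(spam_probs, "spam")

# Next-token case: the "label" is just whatever token followed
# "the cat sat on the" in the corpus. No engineer decided that
# "mat" should be the target; the raw text did.
next_token_probs = {"mat": 0.6, "roof": 0.3, "moon": 0.1}
lm_loss = cross_entropy(next_token_probs, "mat")

print(round(supervised_loss, 3), round(lm_loss, 3))
```

Same formula both times; the argument above is about who picks the target, not about the math.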

Mentions:#ML

They call their main military product an "AI-powered kill chain". Not sure if you're suggesting that Palantir is just lying about that or what. I've never used it, but they claim it can make drones autonomously identify targets. That's definitely AI. They also have Foundry for civilian companies, and that automates a lot of different things across the supply chain using artificial intelligence and ML. Foundry is incredibly expensive though. No clue why any company would think it's worth that type of investment.

Mentions:#ML

1. Whether LLMs are "better aligned" than "ML models" (any examples? is Word2Vec aligned according to you?) is beyond what's being discussed here.
2. Training method has nothing to do with it. LLMs can be trained in a supervised manner; they're usually trained in a self-supervised manner, not unsupervised.

> They aren’t optimizing for human goals at all; they’re optimizing for statistical likelihood in text. A supervised model trained on labeled data is explicitly anchored to a measurable human-defined objective.

If your input data is aligned, they will be too. However, RLHF is usually leveraged for the alignment step. Which is exactly what you said that it isn't: "The loss function encodes alignment by design." (By the way, according to you "the loss function in the other ML models encodes alignment by design"? What's that even supposed to mean? What's the loss function? What are the other models? I can only guess why you're being so vague.)

> They’re trained on enormous unlabeled datasets to minimize perplexity, meaning their only goal is to continue text in a plausible way, not to serve any purpose or outcome that humans care about.

Again, RLHF. https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback

> fine-tuning or reinforcement from human feedback, which is a weak, cosmetic layer over a fundamentally amoral predictive core

"Weak" by what metrics? According to whom? Compared to what?

> That’s why they can sound helpful and still hallucinate, contradict, or mislead—because there’s no intrinsic connection between prediction accuracy and human intent.

? What's the causal connection here? Hallucinations or lack of logical reasoning (the irony...) have nothing to do with alignment/RLHF.

> In practical terms, LLMs are impressive at imitation but poorly aligned to truth, safety, or reliability

What is "truth alignment"? That they shouldn't lie? Or shouldn't make facts up accidentally? Again, hallucinations have little to do with alignment.

> compared to older supervised systems that were at least optimizing for a concrete, verifiable target

Thanks for being as specific as possible. It proves your in-depth knowledge of the subject. I'm just wasting my time here. You're not discussing in good faith.

Mentions:#ML

I am sorry, your understanding of the alignment problem is wrong. LLMs are some of the worst aligned models in existence because almost all ML models built prior to that in supervised approaches are far better aligned than ChatGPT that is unsupervised and goalless beyond next token prediction and that’s exactly why calling LLMs aligned is misleading. They aren’t optimizing for human goals at all; they’re optimizing for statistical likelihood in text. A supervised model trained on labeled data is explicitly anchored to a measurable human-defined objective. The loss function encodes alignment by design. LLMs have none of that. They’re trained on enormous unlabeled datasets to minimize perplexity, meaning their only goal is to continue text in a plausible way, not to serve any purpose or outcome that humans care about. Any alignment we see in them is bolted on afterward through fine-tuning or reinforcement from human feedback, which is a weak, cosmetic layer over a fundamentally amoral predictive core. That’s why they can sound helpful and still hallucinate, contradict, or mislead—because there’s no intrinsic connection between prediction accuracy and human intent. In practical terms, LLMs are impressive at imitation but poorly aligned to truth, safety, or reliability compared with older supervised systems that were at least optimizing for a concrete, verifiable target.

Mentions:#ML

Isn't there clear evidence against this house of cards in that none of the actual AI players are making any money off of it except for the AI they were doing before all this? ML AI had been used since the 2000s, so the current AI bubble is really all about LLM AI. No company is making money off it except the people building the data centers and selling the chips, and how do they continue to get revenue when their 4 big customers don't make any money off of it and don't actually end up making God?

Mentions:#ML

I read it this way: MSAI has been using Amazon AWS services for over two years... last year they began to use the AWS tools (AI/ML learning platforms connected to the warehouse's cams and robots). This entire talk is related to the implementation of the testing environment. Furthermore, Luke was a maintenance engineer, not a manager or anyone who could establish a partnership. He helped them set up AWS tools so MSAI could test their infrared AI readers through the warehouse stream API... so no real partnership, just cooperation to create a test environment in AWS... nothing more, nothing less.

Mentions:#MSAI#ML#API

If Facebook were just Facebook, I think it would be in a worse place right now. It is also Instagram and WhatsApp, which are not as horribly monetized as Duolingo, and have much wider user bases. Then they are on the forefront of AI (open-source AI at that) and VR, which is more revolutionary tech than social media sites. And even when they were more Facebook, they built industry standards in software. They maintain React. Duolingo itself was more cutting edge back when it was crowd-sourced language translation and machine learning. It's from the guy who invented reCAPTCHA and sold everything to Google for ML. That was its initial monetization strategy. Now it has moved from that to a subscription language flashcard app with cute cartoons and funny social media... Definitely a huge brand, but enough to justify a big tech valuation?

Mentions:#ML

Vision recognition algorithms have been around for years. Is BMW using LLMs for this, or traditional ML/image processing algorithms implemented by data scientists to do this QC work?

Mentions:#ML

While this is a fairly cynical take, it's also a very accurate one. I design devices and write code for them for a living, and the only "AI" tools worth using in production are old-school ML algorithms; LLMs are absolutely unsuitable for any real product, unless you want to spend bookoo bucks hiring customer support personnel to unfuck all the things that LLMs touch. Even battle-tested tools like CNNs need redundant systems to catch misses, as they are simply not reliable.

Mentions:#ML

It's a catchy term as it uses all the trendy words investors like to hear, but it's basically just what I wrote above: "use a quantum compute processor to speed up ML training". It is not an innovation in ML or anything. And since current quantum processors are an error-prone mess (which is expected when you allow multiple states), rewriting ML algorithms to work with them is kinda putting the cart before the horse. To go back to my original point: training better and better LLMs is likely not a path to general intelligence. In fact, it really feels like we are reaching the peak for LLMs. So training LLMs faster/cheaper with quantum compute also wouldn't lead to any AI breakthroughs.

Mentions:#ML

Raptors ML + Magic ML today 🎶

Mentions:#ML

Supply chain efficiency. Better routing to reduce travel/shipping costs. Optimize purchasing. Most of that is traditional ML versus GenAI but it still fits in the bucket of AI

Mentions:#ML

I’m in ML/AI for nat defense but that’s just because it came to me, I didn’t go to it. The good part of that is while a bunch of clowns are trying to shove down some expensive and questionable products, for the most part the process has been Darwinian so far. Like using AI/ML to monitor the skies enhances current countermeasures, it isn’t meant to replace them. I have heard similar stories from healthcare. AI/ML detecting probable cancer spots in x-rays, which are then reviewed by a human for further research. There are papers coming out in healthcare journals that show a demonstrable increase in patient outcomes due to catching certain diseases earlier. Obviously these successes are not universal, current technologies make some diseases better candidates than others.

Mentions:#ML

No idea what the person you replied to said, since they apparently deleted it, but: > And for the most part you don't need a transformer model to look for the statistical likelihood, based on thermal and acoustic data, that your motor is going bad and needs to be replaced soon. I'm in IT at a trucking company, and do software development. This is one thing with a lot of AI products I've seen in the last few years. "Oh look at our fancy AI software!" - literally just taking x, y, z, plugging it into some algorithm, and doing some math to spit out something. There's way too much marketing wank at play. Flawed as they are, I think there are some legitimate usages of LLMs; there are people trying to plug them into things that don't need them, and then there's just standard ML or algorithms we've had for a long time being rebranded as new AI. It's all a mess right now.
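A hypothetical sketch of the kind of "just some math" described above - a plain z-score check on sensor readings, no transformer required. All function names and numbers here are invented for illustration:

```python
import statistics

# Hypothetical sketch of plain statistics often rebranded as "AI":
# flag a motor when its latest thermal reading sits more than three
# standard deviations from its historical baseline (a z-score test).

def motor_needs_service(history, latest, threshold=3.0):
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    if std == 0:
        return False  # no variation on record; nothing to compare against
    return abs(latest - mean) / std > threshold

baseline_temps = [71.2, 70.8, 71.5, 70.9, 71.1, 71.3, 70.7, 71.0]
print(motor_needs_service(baseline_temps, 71.4))  # normal reading -> False
print(motor_needs_service(baseline_temps, 85.0))  # hot outlier -> True
```

In production this would be wrapped in calibration and per-unit baselines, but the core logic really is this small.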

Mentions:#ML

I'm in ML/AI for healthcare; I also work with pharma companies etc. Most ML/AI applications in this industry aren't anything to do with LLMs/hyperscalers, because those have the inconvenient habit of regularly but unpredictably producing hallucinations which could kill people. Whether people actually understand the difference or not is another matter. My suspicion is that the insistence of OpenAI & pals on screaming about AI at every opportunity is going to mean that everyone who's used the term to describe what they do is going to be affected if/when they crash. In healthcare and pharma, though, the core business is pretty decoupled from AI/ML for the most part; likely both will remain mostly unaffected.

Mentions:#ML

Indeed, seeing it from the inside and dealing with managers who suddenly became "specialists" in my own field of study, they sure think artificial intelligence is just there to vomit verbiage left and right, without knowing machine learning was already being deployed in the back end for around a decade (in my experience at least) for data management and classification, boosting automation and forecasting, amongst other automated processes. Where my opinion differs from yours a bit is that I think objectively this is a bubble:
- main players are overleveraged and already presenting liquidity issues
- ROI for the main claimed application of these technologies can't be easily measured and realized by their customers (and to your point, it won't be anytime soon)
- more than half of the use cases where companies are trying to implement it for productivity gains either face employee resistance and/or the telemetry to measure it costs as much as, if not more than, its potential gains: productivity is a 30-year-old question when it comes to measuring it outside manufacturing or service management scopes
- major players are already facing liquidity issues due to the cost of processing and hardware depreciation (ML training shortens chip lifespan significantly) and limited chip supply to rotate at financially sustainable costs
- the clear collusion of Nvidia with its own competitors + the main software companies in the race, communicating their billion-dollar deals to promise futures and move money laterally in hopes of offsetting the debt on investor calls (although, legally at least, net revenue usually doesn't lie)
Idk man, seeing it from the inside my bet is that either we'll see bailouts happening soon to keep the bonanza going and/or enterprise contracts will raise their prices per token, and that will suddenly shrink enterprise customer spend to cover only what they can properly track, which, given that machine learning inherently has value as a technology, will probably deflate the bubble, not necessarily burst it. R&D, health sciences, biotech, fintech will keep benefiting; tech and general knowledge work not so much, imo. Unless they keep printing money to maintain the sham, then it might be a legit burst, if investment firms allocate too much of their ETF money to "AI"… oh wait. 😂

Mentions:#ML

AI is already integrated at almost every level of what we do today. From ML to AI to gen AI to LLMs. If you use iOS the predictive text is now done with a language model. This site is filled with backend AI and now AI results in searches, AI is all throughout Google products, from search to Gmail to Maps and YouTube. AI is in Netflix, Prime, Amazon shopping, your credit card infrastructure, the rest of banking, PayPal, Venmo, Uber, Lyft, Facebook, Instagram, TikTok, every photo you take with a smartphone camera ... so I get the point about OpenAI but everything they and other AI companies do shapes that entire ecosystem, which has spread into essential services faster than most people know. Because the initial computing and internet infrastructure was already there for part of what was needed, and that took 20+ years to build. Then there's physical AI, humanoid AI - just like every bureaucratic office job will be gone in a decade, most warehouse and factory jobs will be too, and it will just keep spreading. This isn't just the US - look at China's plan for their AI economy and the numbers they're talking about. That's why sovereign AI is such a huge investment area also because nations want to control the AI that will be powering everything that happens at every level of government, from taxation to regulations to defence, etc. AI is going to be the operating system for the economy.

Mentions:#ML

get it all back today broncos ML + suns ML

Mentions:#ML

Thanks, although I studied AI and ML, so I don't need the theory; I asked about your personal opinion. > These are not next word generations, they are pattern matchers that encode spatial relationships. You could say the latter, although it's not fully precise (e.g. it doesn't cover how the tokens are encoded so that they become dense vectors capturing some meaning; it's relevant, since it's not just any "spatial relationship", but a very complex, high-level and black-box process of attending to all tokens in a sequence, capturing not only the last token's but all tokens' meanings at once, and also considering their positions in the sequence). Either way, the former doesn't contradict the latter (the decoder is simply a generator of the next token; it doesn't matter that it's a pattern matcher — which, by the way, all models of ML, let alone DL, are). > When you 'train' the model, you pass the images in with the text, and then it encodes the distance between the presence of that image, and the known words of its vocabulary. That's an oversimplification. Again, the attention mechanism (and the transformer architecture in general) is way more complex. Let alone there are plenty of different models, (sub)architectures and training methods. > where that 'next word' generation becomes 'next pixel' Okay, so it's a "next token" (not "word") generator. By the way, in vision transformers tokens are not individual pixels but patches of pixels. > when the model encodes these patterns into little probabilistic linear algebraic mathematical functions Why are you saying it's "linear algebraic functions"? Deep neural networks can approximate almost any function thanks to their nonlinear activations; with only linear activations, even a very deep neural network boils down to a simple linear model. Without nonlinear activations it wouldn't even be able to solve the XOR problem.
> Now what happens when the capitalist machine needs to make back the $1.4T taxpayer funds sunk into this? Do you think they'll say, "wait, let's make sure our models are unbiased?", or do you think they'll say, "I'm a genius bringing you the future, just take this, I promise, it's good for you"? I still don't understand the reasoning behind your opinion. Why would anyone deploy unprofitable models? OpenAI and Anthropic are now burning investors' cash with no path to profitability; I don't see how that would lead not only to the deployment of LLM-based robots, but to such an extent that they perform almost every job. I think the AI bubble would've burst way sooner.
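The XOR point above can be shown in a few lines of pure Python - a minimal sketch with hand-picked (not trained) weights, and a brute-force demonstration that no linear threshold rule reproduces XOR:

```python
import itertools

# Without a nonlinear activation a network collapses to a linear model,
# and no linear decision rule fits XOR; one ReLU hidden layer fixes it.
# All weights below are hand-picked for illustration, not trained.

def relu(z):
    return max(0.0, z)

def xor_net(x1, x2):
    # 2-2-1 network: y = relu(x1 + x2) - 2 * relu(x1 + x2 - 1)
    return relu(x1 + x2) - 2.0 * relu(x1 + x2 - 1.0)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# The nonlinear net matches XOR exactly on all four inputs.
assert all(xor_net(a, b) == y for (a, b), y in XOR.items())

# Brute-force check over a weight grid: no linear threshold
# w1*x1 + w2*x2 + b > 0 reproduces XOR (true for all reals; the
# grid just makes the impossibility concrete).
grid = [i / 4 for i in range(-8, 9)]  # weights in [-2, 2]

def linear_fits(w1, w2, b):
    return all((w1 * a + w2 * c + b > 0) == bool(y) for (a, c), y in XOR.items())

assert not any(linear_fits(w1, w2, b) for w1, w2, b in itertools.product(grid, repeat=3))
print("ReLU net solves XOR; no linear rule on the grid does")
```

The same argument is why "it's all just linear algebra" undersells deep nets: the nonlinearity between the matrix multiplies is doing the heavy lifting.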

Mentions:#ML

![gif](giphy|jrbdRK2J4jt8o7LVGF) well, I am really sorry, but it is a private company, and they should be able to finance their daily operations (by the way, I hate AI, LLM, ML, autocorrect, AI chatbots, AI in Word/Excel/PowerPoint, etc., so I am a bit biased here) :)

Mentions:#ML

It's extremely common in the world of engineering and ML to refer to arrays with more than two dimensions as "tensors". Furthermore, a tensor is not a thing with physical meaning; tensors are algebraic objects that describe linear relationships between other algebraic objects. Dunno where you get this 'physical meaning' thing, but it's complete nonsense.
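For illustration, a tiny sketch of that engineering usage - "tensor" meaning nothing more than an n-dimensional array with a shape (the helper here is made up, standing in for what NumPy/PyTorch report as `.shape`):

```python
# In ML parlance a "tensor" is just an n-dimensional array. Below is a
# rank-3 tensor of shape (2, 3, 4), e.g. a batch of 2 matrices of 3x4,
# built from nested lists to keep the example dependency-free.
batch, rows, cols = 2, 3, 4
tensor = [[[0.0 for _ in range(cols)] for _ in range(rows)] for _ in range(batch)]

def shape(t):
    # Walk down the first element of each nesting level to read off
    # the dimensions (assumes a rectangular, non-empty array).
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return tuple(s)

print(shape(tensor))  # (2, 3, 4)
```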

Mentions:#ML

I'm going to piggyback on this to break this down in painful detail: American taxpayers are being asked to pay for something being purchased by a private entity. That private entity is a company that arguably has absolutely no moat around its product, and many competitors. This private entity can only pay back its debts from revenues that would only come from a massive increase in usage of its (barely defensible) models. Such an increase could only come from a gargantuan displacement of existing workers, or a gargantuan increase in a yet-unspecified industry that does not yet exist (e.g. widespread LLM-backed robotics powered through edge computing, another unsolved problem). Because this private entity has no moat to defend its products from competitors, it is trying to throw more compute at the problem (with money it doesn't have to spend). All experts have clearly stated that throwing more compute at the training gap will not solve the problem, because the underlying model architecture is inherently unable to accomplish the kind of generalizable determinism those use cases require. Moreover, many venture experts also state that trying to scale up current architectures to an "AGI" moment is the wrong goal; the correct goal would, in fact, be to distill and scale down models into fine-tuned, use-case-specific models that can be deployed reproducibly in inherently messy environments. In parallel, the CEO of this private entity has been accused for decades of manipulative behavior. And this CEO's company last year had revenues of $13B, less than most major tech companies. In fact, most of those $13B in revenues come from around ~60 million paying customers, who are paying for another ~740 million free users. Most of their revenues do not come from established enterprise computing contracts, such as you would see from the likes of AWS, Oracle, GCP, and Microsoft Azure.
Instead, these revenues are coming from Pro and Plus subscribers - who themselves have complained viscerally that GPT-5 is worse than even GPT-4 in many cases (I will spare you the technical details here, but if you're interested just google Mixture of Experts and Synthetic Training Data). These Plus and Pro subscribers are subject to models they don't have consistent control over, and digital nannying on top of their experience that kicks in any time this private entity finds their private chats triggering, or "unsafe". Meanwhile, this private entity refuses to simply provide the service with minimal guardrails to users who are 18 and over (because herr derr Uber-growth model). So, this private entity, and the parasite who leads it, are now officially in the US government's military apparatus - along with all the other major tech firms and AI players. This private entity is currently providing its services to the US government's federal agencies for $1, likely in violation of government acquisition rules, which have long stated the government cannot receive gifts or services that are (effectively) free. So, to your point, are taxpayers paying to be replaced? The answer is not yet. You can do something. You can contact your congressional representatives TODAY about this, and demand that if any taxpayer funds go to OpenAI, you will vote them out of office in 2026. But what happens if taxpayers are replaced with AI?
The truth is, layoffs are increasing dramatically, but this is not because AI's improvements in performance have been so grand that it's equipped for all use cases. In fact, even companies like AWS, which heavily mandate AI usage by their software engineers, are now experiencing greater numbers of outages from code that is likely AI-generated. So, what will happen if taxpayers are replaced with AI is that your quality of life will become radically worse and more dystopian than you could ever dream of. Nurse's assistants, taxi drivers, delivery, fast food will all become infuriatingly worse until people literally revolt because everything has become awful. Your food orders aren't made right, your medicine is incorrect, whatever - and when you contact another AI for customer service, it doesn't (fully) understand what happened and you have to wait 7 days before a human representative contacts you. Meanwhile, your speed limits are tracked by the increment, and you are penalized for every word you say online, every mile per hour over some arbitrary limit you go, every small gum wrapper that falls out of your hands and onto the sidewalk of the inner city you live in. This is the world the parasite CEOs of these AI companies want to create. They want you to believe they are all-powerful. They want you to believe they are all-knowing. They are not. They are the man behind the curtain. And when the puppeteer tightens their strings, the marionette tightens too...
but you never expected to be trapped in a world surrounded by these marionettes. If OpenAI receives these funds, I promise you we will lose everything that it means to live in a free market economy, and all of our livelihoods. Sam Altman is a snake, and he and the others will pay in due time when history is written. In the meantime, you can make your congressional representatives pay at the ballot box in the next elections. I promise anyone seeing this: there is no scaling law of the current paradigm of machine learning that will get to AGI. All we are doing is scaling a mirage, and paying for GPUs that effectively become trash within 2-3 years of usage, and often ~1 year with heavy training. I study ML at a graduate level; this is my perspective alone, but I have many years of experience working in deep tech. What you should fear is not AI; you should fear our politicians centralizing an oligopoly and abusing the fact that this country's education level is atrocious. If you're undereducated, use these AIs to learn math. If you want to do something for your country, learn linear algebra and get an electrical engineering degree (I'm not joking). We were once consumers, but that world is now over. The world of abundance is now gone. We are becoming the cattle, and AI will become the fence, and it will be a shite fence if we let them build it around us. Don't let them. Free your mind.

Mentions:#AGI#ML

So a doctor who spends only 10 min. \[though studying for the previous 15 years\] while sitting comfortably in a chair to diagnose cancer \[from test results performed by others\] and getting vast riches in the form of a salary - is like a CEO, and that is not labor? You are trying to arbitrarily define "value produced" on some moral grounds, or on grounds of "mL of sweat produced in a day", refusing to see that salary and wages are that measure of value, and it's already defined by the market. You are like a child, or a peasant farmer of old who, seeing a king eating fancy cake, says that he could also sit with an important face & eat cake and be a king, refusing to notice all the other things, like the cost of error in his decisions for the entire country.

Mentions:#ML

Yep, I was an adult during the .com bubble so I'm definitely familiar. I replied to another comment and said the same, but the difference then is that vaporware companies were getting billion-dollar valuations because they had a .com website with zero cash flow. I feel like that's quite different from today (though I do realize there are overvalued companies out there, as there always have been and always will be). As for Elon… he says a lot of stuff and about 1% of it has any substance. Machine learning has been around a long time but was limited by the technology of its time. Slow CPUs and small amounts of memory limited what ML could do. Training complex models would take forever or wasn't even possible at all. With GPUs today we can train deep learning models with trillions of parameters, which was unimaginable decades ago. It's like Tony Stark's dad explaining to Tony how he was limited by the technology of his time. He had good ideas but the tech wasn't there yet to realize them.

Mentions:#ML

We had people like Elon Musk saying that AGI was going to arrive by 2025. We had people saying AI will eliminate millions of jobs and automate them all away. It's 2025, and the biggest "AI" is just slightly more advanced LLMs and text-to-image/image-to-video AI with more computation. We have big tech backpedaling about AI taking over human labor. What about it has actually lived up to the hype? It's definitely revolutionary, but the money behind it is questionable. They always seem to promise something unrealistic, and it arrives in double or triple (or never) the time they promised. How is "AI" realistically going to bring in money? Also, although "AI" is in its "infancy", ML has existed for 30+ years by now. And the paper on transformers has been out for 7. Let's not act like we only just had a big breakthrough a year ago and that nothing substantial happened before it. > You sound like people in 1998 Bro, ever heard of the dot com bubble? I'm not even saying AI isn't revolutionary or an important part of the future. I'm saying the hype and the money and valuation it's generating are dubious. Nobody said pets.com and the internet were a shit idea in general, but it failed at the time because the money simply didn't live up to the hype.

Mentions:#AGI#ML

GOOG is sending TPUs to Sun to train ML and you’re bearish?

Mentions:#GOOG#ML

Google is objectively a great investment. If they think ML in space is going to be profitable, then take my money and let's see if Gemini can find aliens up there.

Mentions:#ML

Why is it that everyone who believes in an "AI bubble" always comes with "it only has real value if it replaces all of us, and we will live in a dystopia with AI as our owner, kept like dogs"? I am starting to see a connection between not fully understanding what machine learning does and the "bubble" theory. I hate to tell you, but AI was being used long before LLMs ever came to light (so, before ChatGPT); companies widely used ML/AI for statistics, search optimization, administration. I understand LLMs are the new hot shit, but ML isn't only about "replacing jobs" - what about autonomous systems, robotics, pharma, genetics? ML is very good at recognizing patterns and giving output based on them (obviously it depends on what data you feed it; no, not every AI will hallucinate like ChatGPT, and not every AI is a chatbot). I won't claim the current AI hype doesn't include "job replacing" - it does and it will - but why do all of you stop the AI hype at that level, why not go beyond it?

Mentions:#ML

I feel bad for whoever bought PLTR at 220 after hours. Eh, never mind, it was probably just some ML that analyzed the earnings release.

Mentions:#PLTR#ML

Google TPUs handle training just as well as nvidia at a much lower cost. Still need nvidia for customer workloads that require GPGPU, but not reliant for AI/ML workloads. Source: I work for GCP

Mentions:#ML

Unemployment is not meaningfully increasing due to AI - it's bullshit cover for short-term performance layoffs and attempted offshoring. My entire career is working with distressed companies across all industries and none of them can legitimately replace swaths of employees with AI. My own company spent buckets and hired a big team of MIT ML PhD types to "deploy AI" in our firm and the portcos we work with. The result? Emails get written faster and we can pull old decks much quicker. That's literally it. Not a single soul replaced. AI is currently a fucking joke that lives up to none of the hype. Any job requiring any nuance or delicacy remains untouched. Will that change? I'm sure. But right now and in the next few years? I have seen absolutely nothing to indicate AI being able to remotely replace anyone I have worked with.

Mentions:#ML

Shit, probably. I did some work there about 15 years ago and they were doing really advanced stuff back then, e.g. massive HPC/ML clusters doing drug discovery work, protein folding, etc. It's the only place I've met dudes with computer science PhDs.

Mentions:#ML

There is no Amazon partnership... MSAI is using AWS services such as server hosting and the AI/ML testing environment (with control of certain warehouse cams and robots through a test API) as a client. "AWS Partner" is anyone who uses AWS services as a client. It is a pure client/provider relationship.

Mentions:#MSAI#ML#API

No comments yet. As a fan of It's Always Sunny in Philadelphia and a person who does ML as part of their job, I had a really good laugh at this.

Mentions:#ML

Okay so I'm going to throw my hat in the ring for the last time. I am currently blue in the face saying this. I am also going to not care about the risk of sounding conceited because you know what? I do know better than 99% of people here. For context, I research, train, and develop AI models for my job. I am a paid researcher in both the public and private sector. I have studied and studied and studied and write algorithms and write algorithms and write algorithms and read papers and read papers and read papers. Data scientists, AI software developers, statisticians and mathematicians who believe AI is capable of replacing people without creating massive amounts of technical debt in the process or leading to long-term business/pipeline instability are deluded or lying for the biggest paychecks our field has ever seen or will ever see. This goes double for CEOs, boards of directors and shareholders who are being conned. The success of what's called "AI" in the case of natural language processing (NLP - like ChatGPT) and images is a result of the flexibility of neural networks (one flavour of ML) being able to interpolate in many directions (ask many queries, give many responses) from storing massive amounts of data in the form of its many parts. It's a powerful memory unit which simply stores all the world's data and spits out a form of it to you - the form being what you've asked of it. Lots of other stuff happens, but at its core this is what it is. It's incredible really, especially in how well it mimics the behaviour of human thinking/learning. But it doesn't "think" or "learn" and isn't capable of a lot of the forms of thinking that we humans are capable of and which are essential to do the jobs we do. This becomes really apparent when asking AI to perform in low-data tasks. Ask any of your favourite AI tools to give you a picture of a watch at 10:10.
It will do it perfectly, because that's the way watch companies like to advertise their watches - it shows off the hands of a watch in the most aesthetic way. Therefore, there's lots of data of watches displaying that time online. Now ask it to give you a picture of a watch at 06:35. Not so pretty, right? That's because it doesn't have any data to generate your output from, and has no concept of time in the first place. It can't understand and think about time. This is an abstract concept we humans interact with and debate about to this day, and we can effectively use it all the while not fully grasping it. Now apply this to my work - I do research that adds value to both communities and companies - I work on crafting bespoke pattern recognition algorithms for each person's use. I solve these "deep industry problems" everybody thinks AI can routinely solve and use to replace people. And I work in such a low-data area (creative, critical, logical) that I have to turn off copilot/cursor/AI coding suggestions because they're so stupid it's an actual distraction. AI is powerful when used in the right places by human users with domain knowledge who actually know what they're doing. It's a tool. Anyone who is saying they're replacing us is either being a con, or being conned. The layoffs you're seeing now are either because the US is actually already in a recession which the stock market is not reflecting, or because CEOs aren't as smart as you think they are. This is an unprecedented level of fraud, stupidity, money and wasted CAPEX. Anyone making comparisons to how any other tool or hype has been introduced to humanity has no idea how much this isn't like the previous times. And ironically, if you read anything about predictions, you'll know that when using historical data to predict the future, things can go horribly wrong. My advice? Go outside and care for your communities. If you start a business, put your workers and customers first.
Who gives a fuck about licking the potential boot of AI if no one can feed their family, go to work to earn a living and experience joy. Instead of talking about how much profit AI generates for a few mega-assholes, let’s talk about what we can do to make living on this planet better for everyone.

Mentions:#ML#CAPEX

Finished my Master's in AI & ML and passed my Series 65 this weekend. Now, I am super baked and stacked with dog bones for the dogs, all beef hot dogs, buns, chicken nugs obviously, and mac and cheese. Windows open at a pleasant 68F. This is all I need to be happy.

Mentions:#ML

I don’t know that we have ever seen this successfully delivered. Retraining - for what? The ability to use LLMs/agents? They will be completely different in a year. FWIW, I hire scientists in tech and we are already seeing new grads missing the fundamentals of ML because everyone is pivoting to Language Models as the interface. Now think about the average joe without a PhD. How will retraining help them? Are they going to succeed in such a position?

Mentions:#ML

If you’re long ORCL, the real bet is on OCI growth, the MSFT tie-up, and whether they can line up GPUs and power; averaging down without those hitting is how you get bagheld. What I’d watch this print: OCI growth pace (still >50–60% y/y or slowing), RPO/backlog, Oracle Database@Azure customer logos and new regions, Cerner margin recovery, and any specifics on capex, data center power deals, and Nvidia H200/B200 delivery timing. OCI’s edge is often price/perf on GPUs and cheap egress, but it only matters if they can turn that into capacity and logos. If you want exposure with less pain: sell cash‑secured puts at levels you’d be happy owning, or wait for the call and write covered calls on any spike; set a hard line where the thesis breaks (e.g., OCI decel + weak backlog). On the ecosystem point: we’ve shipped data apps with Snowflake for warehousing and Databricks for ML, and used DreamFactory to quickly stand up REST APIs over Oracle/SQL Server so teams could ship without building gateways. Bottom line: ORCL works if OCI + Azure expands and GPU/power ramps show up; otherwise it’s dead money.
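As a rough illustration of the covered-call idea mentioned above, here is a sketch with entirely made-up numbers (the strike, premium, and cost basis are hypothetical, not a recommendation):

```python
# Payoff at expiry of a covered call vs holding shares outright:
# you keep the option premium either way, but upside is capped at
# the strike. All figures below are invented for illustration.
cost_basis = 100.0  # hypothetical price paid per share
strike = 110.0      # hypothetical strike of the call sold
premium = 3.0       # hypothetical premium received per share

def covered_call_pnl(price):
    stock_pnl = min(price, strike) - cost_basis  # upside capped at strike
    return stock_pnl + premium                   # premium kept either way

for price in (90.0, 105.0, 120.0):
    print(price, covered_call_pnl(price))  # -7.0, 8.0, 13.0 respectively
```

Note the asymmetry: above the strike the position stops participating, which is why writing calls only makes sense at levels where you would be content to sell.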

Mentions:#ORCL#MSFT#ML

Even the best AI/ML tools will average a success rate of < 54% - barely better than a coin flip. I started writing something to analyse shares myself (using known ML algos)… 51% success rate even on training data. Make your own judgements or you'll resent the tools that made them for you.
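The near-coin-flip point is easy to demonstrate with a toy experiment: applying a directional "strategy" to a pure random walk lands at roughly 50% accuracy, because the next step carries no information from the past. A minimal sketch (all data synthetic):

```python
import random

# Predicting the direction of a pure random walk: any strategy,
# however elaborate, hovers near coin-flip accuracy because each
# step is independent of everything that came before it.
random.seed(0)
steps = [random.choice([1, -1]) for _ in range(100_000)]

# "Momentum" strategy: predict that the next step repeats the last one.
hits = sum(1 for prev, nxt in zip(steps, steps[1:]) if prev == nxt)
accuracy = hits / (len(steps) - 1)
print(f"momentum accuracy: {accuracy:.3f}")  # ~0.50
```

Real prices are not perfectly random, but the residual edge is small, which is why even good models cluster in the low-50s rather than anywhere dramatic.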

Mentions:#ML

You have it exactly right. Source: ML Engineer

Mentions:#ML

325 Capital, an investment firm, had to reveal its holdings in its latest report, and it turned out that MSAI is among its investments with a 15 million position. Shortly after that reveal the stock jumped a little bit... then we got follow-up spam here and on other small-time investor subreddits about some "Amazon connection" that might explain the jump, while they all excluded the 325 Capital impact or did not mention 325 Capital at all. Furthermore, some of MSAI's latest releases look like an inside person tried to create some fake hype... using the AWS label, showing the AWS server and ML login for firms but describing it like exclusive access... trying to "hint at a partnership" that is not there at all - most of the "push accounts" are literally dead accounts registered years ago with zero activity until now, and all are focused on MSAI or "did exclusive research".

Mentions:#MSAI#ML

>“You’re absolutely right!” I see you’re using pandas with this large dataset, sometimes pandas struggles with large matrices, let’s add 17 log files to find the root of the problem…. I have no doubt this could be done for significantly less computational resources than is currently being reported. Lmao so true >ML researcher with econometrics? Sounds like a certain profession I won’t mention here. Any experience with rough bergomi models and/or using ML for calibration No unfortunately, statistical learning theory on time series & nonlinear cointegration tests

Mentions:#ML

“You’re absolutely right!” I see you’re using pandas with this large dataset, sometimes pandas struggles with large matrices, let’s add 17 log files to find the root of the problem…. I have no doubt this could be done for significantly less computational resources than is currently being reported. ML researcher with econometrics? Sounds like a certain profession I won’t mention here. Any experience with rough bergomi models and/or using ML for calibration

Mentions:#ML

Lolzers, I was an AI/ML researcher (applications to econometrics) before becoming a degen, and while Chat never fails to amaze me, that fucker always gets something wrong, and trying to code with it makes stuff unnecessarily complex, unnecessarily fast. The AI economy is a bubble, definitely. No reason to lay off this many people. There's also the possibility of some clever people (likely from East Asia) coming up with a simpler way to do linear algebra that requires less computational resources and dropping NVDA down to the earth's crust.

Mentions:#ML#NVDA

I see people with Guest Passes and vests who visited an Amazon warehouse... most likely because MSAI rented AWS services such as servers and the AI/ML testing environment... as do many other firms that use AWS. *What is the smoking gun now?* Warehouse visitation selfies from many different businesses are all over LinkedIn, Facebook, Google Images etc.

Mentions:#MSAI#ML

This. Bots try to trick gullible people. The real deal: MSAI is using AWS services (such as the server and ML test environment provided by AWS); everyone who buys AWS services can label themselves an "AWS Partner". A real partner (real collaboration and investment) would be allowed to use the "Amazon Inc. Global Partner" label... this is just an AWS Partner label that hundreds of businesses have... Their testing environment has an Amazon subdomain that is just an access gate to the AWS environment rented by MSAI. As part of the testing, they have access to certain warehouse APIs (e.g. specific cams and robots). AWS Partners that test the ML environment are allowed to visit the warehouses; that is why they have guest ID cards and the Amazon security vests. Some people try to blow these simple facts out of proportion - furthermore, Amazon would just list them directly as global partners on its global partners list - Amazon has no interest in "hide and seek" games when it comes to real partnerships.

Mentions:#MSAI#ML

Just a cursory search will show you they are working on a lot of different technologies and not just social media. Whether any of them will bear fruit is a different story. Social media ads make most of their money, but to be willfully ignorant of their other endeavors is stubborn and stupid. Google makes most of its money on ads too, but both of them are bonafide tech companies. Can't say the same for RDDT.

• Large Language Models (LLaMA, etc.)
• Foundation Models for Vision/NLP/Multimodal AI
• Generative AI Tools (e.g., for ads, chatbots, media creation)
• Ray-Ban Meta Smart Glasses
• Meta Quest VR Headsets (Quest 2, Quest 3, Quest Pro)
• Horizon Worlds (Social VR/Metaverse Platform)
• Project Aria (Sensor-rich AR Research Glasses)
• CTRL-Labs Neural Wristbands (BCI-style input)
• In-house AI Chips (Training & Inference Accelerators)
• Custom Silicon Development (ASICs for AI workloads)
• Reality Labs (VR/AR R&D Division)
• Subsea Cables (e.g., 2Africa, Bifrost, Echo projects)
• Meta AI Research (FAIR / GenAI teams)
• Massive AI/ML Data Centers
• Immersive Meeting Platforms (e.g., Horizon Workrooms)
• AI-Powered Content Moderation Systems
• AI-Powered Personalized Feed Ranking
• 3D Avatars and Virtual Presence Tools
• Gesture-based User Interfaces
• Computer Vision Systems (for AR/VR integration)
• Speech-to-Text and Multilingual Translation Models

Mentions:#RDDT#BCI#ML

That’s being a bit naive, no? Just because they’re not the ones who collect the data doesn’t make them any less complicit in mass surveillance. Their analytics and AI/ML models are made to operate within client infrastructure. It’s that analysis that makes the data valuable in the first place, even if it's not being done directly by them.

Mentions:#ML

Their valuation is high because technologically they are state-of-the-art (and then some) in advanced AI/ML data operations, plus their political connections... Thiel funded Trump + JD together with Elon.

Mentions:#ML#JD

Back in 2010, I'd devour a 4 lb Chipotle burrito—shoved straight up my ass—while blasting Ke$ha's "Tik Tok." Take me back, dad. They shorted my meat? No biggie. I'd fire up their garbage chatbot Pepper, milk it for free BOGOs. That thing ran on Microsoft Clippy-level ML. Took it years to catch my grind, then slapped a soft account limit. Easy fix: spin new accounts with free Google Voice numbers. Ran that scam 2-3 years strong. Finally, they killed Pepper and said bitch in person. Like I'm some psycho? Chipotle's devolving into Taco Bell trash. Hope they rot.

Mentions:#ML

I cannot believe how many times this has to be repeated: LLM chatbots are not the only, let alone the primary, form of ML/AI behind this boom. I have no clue why so many people seem to sincerely think all of this investment is just models for asking chatGPT to make you grocery lists or whatever. I have a colleague from grad school, a Biostatistician, who is using a huge amount of compute for deep learning models to power RNA sequence modeling for a pharma company. You have multimodal foundation models, and ML/AI models designed to parse image/video/audio/sensor data for things like robotics and manufacturing, security and surveillance tasks, medical imaging tech, etc. Those also feed into deep learning models for 3D perception, object tracking, and planning/prediction transformers for things like self-driving cars. Your entire social media algorithm, from Tiktok/Youtube feeds to ads optimization to what posts show up on what sites and what ads get surfaced, is largely being moved to transformer architectures and new deep learning models. I can tell you from personal experience, deep learning models are being integrated all over the finance world. Graph neural nets are being used everywhere for AML (anti-money laundering) and real-time fraud checks on financial transactions and to capture fraud rings. I agree with many that it is \*very\* overhyped right now and will have some deflation, eventually. However, you're absolutely clueless if you sincerely think all of this is for some fucking brownie recipes and roleplay chats on OpenAI.

Mentions:#ML#RNA

The AI improving advertising is traditional ML and such for targeting. It is not generative AI, at least not yet. They have pushed LLM-based text variations in ads, but there are only complaints about them from marketers. Every single domain (niche) expert I know suggests that you turn off the AI suggestion tools. On the other hand, their AI-based audience targeting, which is traditional ML and not LLMs, does help at times. The massive capex is into LLMs, which do not aid revenue yet. There is some hope that generative AI for content will increase user screen time, but that is in very early stages. Please stop conflating all AI with this massive capex. If you look up the articles today, Zuckerberg is quoted as saying he "thinks" they are starting to see some ROI in the core business. It is a very weak and defensive statement. The improvements from AI to revenue are all on the non-LLM ML side. Meta's audience targeting is first in class, in my opinion, rivaling or better than Google's. But that is not the AI targeted by the expensive Superintelligence lab.

Mentions:#ML

lol one guy on /r/investing couldn’t figure out how to use it as an assistant so it’s toast 😂 that’s like blaming a hammer for your house collapsing. As someone who worked on major enterprise AI/ML deployments at Google for 7 years, I can tell you confidently that you’ve got sweet fuck all of a clue what you’re talking about… and likely less about what you’re investing in.

Mentions:#ML

Need advice from savvy investors. My advisor just moved from Schwab to Merrill Lynch, and I have the task of either moving my portfolio with him to ML or staying with CS but not having it managed. My issue is that for the last 6 years, he has managed it with very little return. I just checked my opening rollover and it's virtually the same $ amount today. How is that possible? Plus, 2 of the products are not supported by ML, so they have to be sold off. I've researched Fisher but have not seen very good posts about them. My Fidelity through work is doing very well, but I can't roll this over into it. I am not in the least up on the latest investing trends, but I may have to get there. What say you, experts? Am I an idiot for following a failed relationship, or should I roll the dice and let it ride?

Mentions:#ML

What exactly triggered you? I don't understand. I'm against scammers myself, and against people who promise guaranteed returns or results. We did research for over a year on an ML-powered technology called boosted.ai. We will explain to people how they can analyze stocks using a simplified version of that technology (since it's very complex for the average person). That's it. The webinar is free, people can build a strategy for free, and they can monitor pre-built strategies, also for free. What scam are you talking about? I have no idea.

Mentions:#ML

I have no idea what you're trying to say. Monetized means someone has to give you money for that thing. Right now AI/ML is very impressive, but it's *losing* companies that train models and maintain the infrastructure massive amounts of money. It's benefitting companies that produce hardware for it, or build out datacenters, or assemble server racks, but for that to continue, the customers of these companies will need to figure out how to make money on AI/ML products.

Mentions:#ML

I'm referring to neural nets, clustering, CCA, all the other stuff that got lumped into ML when people started calling it that. LLMs are an application of ML methods to language, but that's a very specific type of data with its own set of concerns, and at least in my field, we tend to put LLMs in their own class of algorithms.

Mentions:#ML

*sigh* I'll explain why you're being downvoted. Tensor's purpose isn't to blast every other chip out of the water in benchmarks. It's to accelerate on-device ML workloads... and more importantly, do those workloads using less power. And yes, Google has Tensor Processing Units (TPUs) in data centers as well. They are two entirely different chips... And surprise surprise! The design of that chip prioritizes power efficiency (and scalability) over performance. Because when you're trying to run an absolute monster service (like Search and AI Overviews), scaling and power efficiency are a lot more important than individual chip performance.

Mentions:#ML

>how to interface LLMs and agentic programming with the deeper ML algorithms Can you elaborate? Any examples? What do you mean by "deeper ML algorithms"? Deep learning? Which is what LLMs are based on, basically creating a hybrid model?

Mentions:#ML

The useful algorithms are also pretty specialized; all the non-LLM stuff has been on the back burner, but that's really where the growth is, IMO. On top of that, we're just starting to think about how to interface LLMs and agentic programming with the deeper ML algorithms, which could actually start yielding some results.

Mentions:#ML

It wouldn't be hyperbole to say that *every* ML paper and project of significance prior to 2023 relied on CUDA. ROCm was a nightmare to deal with back then and had very little adoption within academic or industry circles. Custom is definitely the fastest growing, but a lot has changed in just a few years.

Mentions:#ML

We got the overhang removed today. The same ppl who bought 50M shares recently got their 17M shares tradable today at a price of 0.35 (they still have 50M worth of shares priced at 1.35; they won't dump their own money lol, locked in).

Mentions:#ML

People have lost sight of what LLMs are. They are chat bots. Really decent chat bots. They work by guessing what words have the best chance of satisfying you, based on the input prompt, their dataset, and weightings shaped by human feedback during training. They're a very useful tool. But surely it's obvious this is not the path to any form of sentience. ML more generally is even better. It's very useful for iterating over a complex problem with many parameters, such as finding new drugs and many other things. But it's not capable of thinking. It can't invent something really out of the box, only iterate. Super useful, but this isn't the Matrix.
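The "guessing what words have the best chance" mechanism above can be sketched in a few lines. This is a toy illustration, not any real model: the token scores (logits) are made up, and a real LLM produces them from billions of learned weights. Softmax turns scores into probabilities, and greedy decoding picks the most probable token:

```python
import math

def softmax(logits):
    """Convert raw token scores into probabilities that sum to 1."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(x - m) for tok, x in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
logits = {"dog": 2.0, "cat": 1.0, "car": -1.0}
probs = softmax(logits)

# Greedy decoding: take the highest-probability token
best = max(probs, key=probs.get)
print(best)  # -> dog
```

Real decoders often sample from `probs` (with temperature, top-k, etc.) rather than always taking the max, which is why outputs vary between runs.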

Mentions:#ML

> It’s still just doing probabilistic outcomes. That’s what ML has been also why it can never come up with saying it doesn’t know what something is and makes something up. Hallucinations are due to bad training methodology. If you reward it based on accuracy, and punish it for refusing to answer, you encourage hallucinations. This can be remedied by increasing penalties for hallucinations. A lot of human workers have the same pitfalls. People act like they know more than they do, make an educated guess, and fail. Doctors mis-diagnose, sales people claim features that don't exist, construction workers make mistakes, human drivers crash, etc. It's easy to focus on the mistakes AI makes, but no one focuses on the preventable mistakes humans make. We hold AI to a much higher standard than humans. >AI evangelists can keep trying to sell it as some cure all etc, but from my experience and my academic work with ML it’s still doing the same stuff just at a bigger scale. >It won’t replace workers, it will just be yet another automation tool and frankly just a generation tool than being some “knowledge” center. AI has already replaced millions of workers, so it's a bit late to claim it won't replace workers. The only real question is how many workers it will replace. I was an AI skeptic back in 2023 for the same reasons you mentioned. But the pace the industry has made in the past 2 years is nothing short of incredible. When I tested LLMs back in 2023, it couldn't even correctly write a 10 line function to calculate a common financial metric. Now in 2025, it can build entire applications, identify security vulnerabilities and bugs in human-written code, and more.
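The incentive argument about hallucinations above can be made concrete with a tiny expected-value sketch (the reward numbers are invented for illustration, not from any published training recipe): if a wrong answer costs nothing relative to abstaining, a score-maximizing model should always guess; adding a penalty for wrong answers makes abstaining rational at low confidence.

```python
def expected_score(p_correct, wrong_penalty):
    """Expected score of answering: +1 if correct, -wrong_penalty if wrong.
    Abstaining ("I don't know") scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# No penalty for being wrong: guessing at 30% confidence beats abstaining,
# so the model is incentivized to make something up.
assert expected_score(0.3, wrong_penalty=0.0) > 0

# Penalize wrong answers as heavily as correct ones are rewarded:
# now guessing at 30% confidence is worse than saying "I don't know".
assert expected_score(0.3, wrong_penalty=1.0) < 0

# ...but answering is still worthwhile when the model is actually confident.
assert expected_score(0.6, wrong_penalty=1.0) > 0
```

Under this scoring, the break-even confidence for answering is `wrong_penalty / (1 + wrong_penalty)`, which is the sense in which raising the penalty discourages hallucinated guesses.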

Mentions:#ML

It’s still just doing probabilistic outcomes. That’s what ML has always been, which is also why it can never say it doesn’t know what something is and instead makes something up. You can try to make it as complex as you want, but as someone that has done and worked with ML, it still boils down to making the best guess based on certain factors and probabilities, and even then its level of accuracy can range from terrible to ok to great based on what it’s given in any domain. Which has solely been based on digitized information. AI evangelists can keep trying to sell it as some cure-all etc., but from my experience and my academic work with ML, it’s still doing the same stuff, just at a bigger scale. It won’t replace workers; it will just be yet another automation tool, and frankly more a generation tool than some “knowledge” center.

Mentions:#ML

"Probability machine" is a massive oversimplification. It would be like arguing that the internet is just "fancy electrical and light signals". "Pattern recognition and replication machine" is probably a better description. Yes, LLMs select the highest probability output, but their complexity has gone far beyond what most people assume. With trillions of weights and hundreds of hidden layers, there are a lot of patterns being represented. Most human work can be achieved by AI/ML because most jobs involve learning a series of patterns and replicating them. The only thing current AI is incapable of is innovating outside the current framework of human knowledge. Think inventing a new style of music/art (not copying an existing style), making a new scientific discovery that doesn't build on existing research, etc.

Mentions:#ML

Dodgers lock in, NEED ML

Mentions:#ML

yup even though i have dodgers ML

Mentions:#ML

As someone who works for one of the big tech firms in ads… India is massive scale but low $. It sometimes costs more in infra and delivery to show an ad to a user in India than you get in return. It takes a lot of investment and targeting to squeeze margin and take home a healthy net; I'd rather have expensive ML staff spending their time on high-ROAS markets.

Mentions:#ML

Yeah, I don’t get the narrative of “LLMs don’t work” or “LLMs are under delivering”. There’s always companies and grifters that over promise and hype way too much. But LLMs add real value, which is different from the “AI/ML blockchain” crap of the 2010s.

Mentions:#ML

There are tons of applications of transformer-based ML models. The entire digital visual space has been transformed by them, every white-collar job uses them extensively now, and every student uses them to cheat and is completely dependent on them.

Mentions:#ML

Work in data science / ML. I see a large number of companies that used to use UiPath or lesser-known RPA tools like Kofax migrating away from them in favor of newer solutions. Would not invest in this one.

Mentions:#ML

For your chart, short put = ML (max loss) should be unlimited, right? Selling a naked put keeps the seller on the hook, especially if the price of the underlying goes below the contract's strike. What license are you taking?
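For what it's worth, a short put's maximum loss is large but bounded rather than unlimited, because the underlying can only fall to zero; it's the naked *call* that has unlimited risk. A minimal sketch with made-up numbers:

```python
def short_put_max_loss(strike, premium, contract_size=100):
    """Worst case for a short (naked) put: the stock goes to zero,
    so the seller loses (strike - premium received) per share."""
    return (strike - premium) * contract_size

# Hypothetical trade: sell a $50-strike put for a $2.00 premium.
loss = short_put_max_loss(strike=50.0, premium=2.0)
print(loss)  # -> 4800.0 per contract if the stock goes to zero
```

So the risk is capped at $4,800 per contract in this example; painful, but finite.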

Mentions:#ML