
ML

MoneyLion Inc


Mentions (24Hr): 1 (0.00% Today)

Reddit Posts

r/StockMarket: [Discussion] How will AI and Large Language Models affect retail trading and investing?

r/StockMarket: [Discussion] How will AI and Large Language Models Impact Trading and Investing?

r/smallstreetbets: Luduson Acquires Stake in Metasense

r/investing: Best way to see asset allocation

r/wallstreetbets: Neural Network Asset Pricing?

r/Shortsqueeze: $LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...

r/wallstreetbets: Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now

r/investing: Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?

r/StockMarket: Moving from ML to Robinhood. Mutual funds vs ETFs?

r/smallstreetbets: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)

r/stocks: hypothesis: AI will make education stops go up?

r/pennystocks: AI Data Pipelines

r/pennystocks: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)

r/StockMarket: The Wednesday Roundup: December 6, 2023

r/wallstreetbets: Why SNOW puts will be an easy win

r/smallstreetbets: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)

r/wallstreetbets: I'm YOLOing into MSFT. Here's my DD that convinced me

r/pennystocks: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)

r/investing: I created a free GPT trained on 50+ books on investing, anyone want to try it out?

r/pennystocks: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)

r/smallstreetbets: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)

r/options: Option Chain REST APIs w/ Greeks and Beta Weighting

r/stocks: How often do you trade news events?

r/stocks: Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning

r/RobinHoodPennyStocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/pennystocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/Wallstreetbetsnew: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/smallstreetbets: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/wallstreetbetsOGs: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/WallStreetbetsELITE: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML

r/wallstreetbets: 🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!

r/investing: AI/Automation-run trading strategies. Does anyone else use AI in their investing processes? (Research, DD, automated investing, etc)

r/StockMarket: Exciting Opportunity !!!

r/wallstreetbets: 🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙

r/Wallstreetbetsnew: Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)

r/wallstreetbets: The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle

r/investing: Treasury Bill Coupon Question

r/pennystocks: Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)

r/stocks: The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth

r/wallstreetbets: NVDA is the wrong bet on AI

r/pennystocks: Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)

r/wallstreetbets: NVIDIA to the Moon - Why This Stock is Set for Explosive Growth

r/StockMarket: [THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?

r/investing: The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts

r/wallstreetbets: My thoughts about Nvidia

r/wallstreetbets: Do you believe in Nvidia in the long term?

r/wallstreetbets: NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading

r/wallstreetbets: Apple Trend Projection?

r/stocks: Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"

r/investing: Which investment profession will be replaced by AI or ML technology?

r/pennystocks: WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology

r/pennystocks: $RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41

r/wallstreetbets: $W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).

r/pennystocks: Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine

r/stocks: This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?

r/wallstreetbets: roku thesis for friend

r/stocks: Training ML models until low error rates are achieved requires billions of $ invested

r/wallstreetbets: AMD AI DD by AI

r/wallstreetbets: 🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨

r/wallstreetbets: AI/ML Quadrant Map from Q3…. PLTR is just getting started

r/pennystocks: $AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News

r/wallstreetbets: DD: NVDA to $700 by this time next year

r/smallstreetbets: VetComm Accelerates Affiliate Program Growth with Two New Partnerships

r/pennystocks: NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT

r/pennystocks: Netramark (AiAi : CSE) $AINMF

r/pennystocks: Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)

r/wallstreetbets: Testing my model

r/pennystocks: Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)

r/wallstreetbets: [Serious] Looking for teammates

r/stocks: [Serious] Looking for teammates

r/StockMarket: PLTR Stock – Buy or Sell?

r/StockMarket: Why PLTR Stock Popped 3% Today?

r/wallstreetbets: How would you trade when market sentiments conflict with technical analysis?

r/Shortsqueeze: Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.

r/StockMarket: Stock Market Today (as of Mar 3, 2023)

r/wallstreetbets: How are you integrating machine learning algorithms into their trading?

r/investing: Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits

r/pennystocks: Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare

r/Shortsqueeze: Why I believe BBBY does not have the Juice to go to the Moon at the moment.

r/investing: Meme Investment ChatBot - (For humor purposes only)

r/pennystocks: WiMi Build A New Enterprise Data Management System Through WBM-SME System

r/wallstreetbets: Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT

r/Shortsqueeze: The Squeeze King - I built the ultimate squeeze tool.

r/Shortsqueeze: $HLBZ CEO is quite active now on twitter

r/wallstreetbets: Don't sleep on chatGPT (written by chatGPT)

r/wallstreetbets: DarkVol - A poor man’s hedge fund.

r/investing: AI-DD: NVIDIA Stock Summary

r/investing: AI-DD: $NET Cloudflare business summary

r/Shortsqueeze: $OLB Stock DD (NFA) an unseen gold mine?

r/pennystocks: $OLB stock DD (NFA)

r/wallstreetbets: COIN is still at risk of a huge drop given its revenue makeup

r/wallstreetbets: $589k gains in 2022. Tickers and screenshots inside.

r/pennystocks: The Layout Of WiMi Holographic Sensors

r/pennystocks: infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.

r/investing: Using an advisor from Merril Lynch

r/pennystocks: $APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires: Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.

r/StockMarket: Traded companies in AI generated photos?

r/pennystocks: $APCX Huge developments of late as it makes its way towards $1

r/pennystocks: ($LTRY) Lets Hit the Lotto!

r/wallstreetbets: Robinhood is a good exchange all around.

Mentions

False dichotomies never work. There are plenty of other options. One being: AI is legit, but the current financials and approach to scale are not. So a bunch of capital gets incinerated, and then out of the ashes someone figures out how to scale and make a product that's actually profitable. Mag8 or whoever is getting desperate because they bet the farm on one path that doesn't look to be bearing fruit. But that doesn't mean there aren't other paths.

I like what IBM is doing, for example, and think small, focused, localized models, basically self-running ML, are interesting and solve a lot of the problems of the all-encompassing mega model. If an enterprise can spin up focused models on its own servers and data for its specific needs, that's a win for whoever can provide that AI. That is NOT what Mag8 are pushing or betting on.

Mentions:#IBM#ML

Haha, well, the 'brain' behind the numbers is a custom ML model I've been training on COT data for a while. I process the raw institutional flows through it to filter the noise and get those Z-Scores and confidence levels. But at the end of the day, I’m the one interpreting what the model spits out to make sense of the macro picture. It's just a tool to keep my bias away from the charts. Glad it sparked some interest

Mentions:#ML

are you using an algo / ML / AI to time your entry, strike, and exits? or are you using TA? how are you so good? this shouldn't be statistically possible

Mentions:#ML

People are trying to explain to you that people have been using machine learning to design drugs for decades. Nobody is suggesting that this is going to be AGI designed drugs without human input/involvement. It’s a headline designed to keep inflating the bubble. Eli Lilly is just buying an ML company jfc.

Mentions:#AGI#ML

I agree, but it's always over-hyped, and what AI can do only puts a tiny dent in the amount of time and money required to bring a drug to market. Coming up with drug candidates is one of the easiest parts of getting a pharmaceutical into patients' hands. The wet lab and clinical testing is the most time-consuming and expensive part, and this really doesn't change that at all. Further, it is really difficult to scale and optimize AI/ML/etc models for these use cases, as the timelines for measuring success are so long. By the time AI designs a successful drug and it is proven to work, it's like 10 years. By then, that AI is way outdated. So how do you tune your model really well when you need 10+ years to show it an actual positive result? (This is an oversimplification, as there are ways to give it intermediate data, but it's a way to think about the challenges these models face in being truly useful.)

Mentions:#ML

AI doesn't mean AGI. AI/ML has been used to develop new medications for twenty years now.

Mentions:#AGI#ML

I used to run things like Folding@Home. I think the confusion is people see "AI" and assume it just means ChatGPT when there are lots of AI/ML tools that have been in use for a long time in different ways. Something like machine vision to process scan results would probably qualify as AI, but no one can see beyond LLMs when they hear the term.

Mentions:#ML

All big pharma have been using HTS (high-throughput screening) for many years now. It's basically an automated way of testing thousands upon thousands of compounds against a possible drug target and then using that data in ML and similar.

Mentions:#ML

I worked at Lilly on a contract about 15 years ago and they were running an absolutely monstrous Beowulf cluster for ML related to drug discovery. No idea if any drugs were directly a result of this, but this is to say they've been at this sort of thing for a very long time.

Mentions:#ML

Quantum computing cannot be bigger than AI because it's way harder to scale its downstream usage. To do AI you need either basic programming knowledge (simple usage) or some linear algebra knowledge (read & implement papers). For quantum computing, even the most basic things require *some* math knowledge. Robotics (which is what I'd bet on) and XR have lower capability requirements, and both of them combine *really* well with AI/ML. Image/video generation & understanding, language, tabular data, search algorithms, reinforcement learning... pretty much everything that AI/ML research gave us can be used for robotics & XR. Probably not so much for quantum computing.

Mentions:#ML

It's the exact opposite. The margin of safety is gigantic for REGAL. In years and years of deep value investing, it is the most asymmetric opportunity I have ever come across, with an enormous margin of safety. Every ML model any of those 6 people built got 97% chances for a cure fraction above 35%. I then did further discovery to try to get to a more precise range for the cure fraction; the unconstrained grid search predicted 62% to 68%. The cure fraction is for sure above 35%, and even above 50%.

If you look here, when I did the same comparison of each mixed-cure ML model with a pure exponential constrained to the events, with a cap of 35% for cure fraction, you can see the GPS uncured mOS numbers become illogical. At a 35% cure fraction cap stress test, with 12 BAT mOS, uncured mOS (GPS non-responders, plus responders that relapse and die) is 38 months. That doesn't biologically make sense in reality, because with the cure fraction capped at 35%, at 12 BAT mOS, that would mean 19 GPS dead at 72 events, and if you take 75% of 62 for non-responders, that is 14 non-responders and 5 responders; the non-responders are not living that long to pull the uncured mOS that high, they may be living on par or close to BAT. The numbers from the unconstrained grid search, a cure fraction of 68%, are what actually line up with biological reality. The HR is groundbreaking right now: 99.99% chances topline HR is .31-.5.

Also, this is covered in the post, but the mixed-cure model can't overfit. It is a mixture cure-fraction model with exactly 3 parameters (cure fraction, uncured median OS, and the mixing proportion) constrained by 2 hard data points: 60 confirmed deaths at month 46, and 72 confirmed deaths at month 58, out of 126 randomized patients. Three parameters minus two constraints equals 1 free parameter. There is literally no room to overfit. The constraint residual is below 10^-10, machine precision. At the biological identity point, where the uncured mOS equals the BAT mOS exactly, which is the only solution with 0 degrees of freedom, the model produces BAT mOS = 11.4 months. The full Bayesian posterior, incorporating 7 published literature sources as priors, gives a MAP of 11.1 months, mean of 11.6 months, median of 11.5 months. All three estimators agree to within 0.5 months.

The GPS model has 5 independent evidence streams all converging on the same answer:

* The published literature prior (7 sources): weighted center 8-10 months
* The hard event constraints: 60 events at month 46, 72 at month 58
* The IDMC decisions: trial continued without modification at both planned interim analyses, with arms visibly separated
* Biological plausibility: a cure fraction of 60-70% is consistent with the Phase 2 immune response rate of 64%
* The biological identity point: 0 degrees of freedom, BAT = 11.4 months

Mentions:#ML#HR#OS

I will see a lot of sorry faces when the AI bubble pops, and it will pop, as most people will understand that LLMs are valued at billions of times their worth. It might be part of an actual AI one day, but I don't think it will be. It is the furthest thing from intelligent; the ML algorithms we wrote for GE about 10 years ago were much more "intelligent", and that one is still in use, creating value (saving money). It is a great thing, don't get me wrong, I have a subscription and use it almost daily for what it is useful for. (Spitting out garbage at a high rate; turns out people LOVE garbage.)

Mentions:#ML#GE

ST Johns ML. Thank me later. Goodnight

Mentions:#ST#ML

But you don't need LLMs for that, at least not to the extent that it would necessitate a huge infra buildout and accompanying capex spend. You just need good ol' fashioned ML and good data for that.

Mentions:#ML

Traditional ML/DL algorithms were already handling content recommendation, profiling, and ranking. They are incredibly efficient in compute, which enables trial-and-error finetuning, and they have been strong for more than a decade with no sign of plateauing. So it seems disingenuous to lump them under the recent splurge on AI these past few years. I hope you're not trying to imply LLMs are somehow replacing these algorithms. LLMs do have their use in supplementing user interactions and content, but don't stretch the umbrella term AI to fit your narrative.

Mentions:#ML

While we may agree that certain tranches of the older generations have lost their critical thinking skills to demagogues and propaganda, I'm not sure the critical thinking skills of our younger generations are doing much better, thanks to an over-reliance on LLMs like ChatGPT. It's not just that some young folks are turning to bots as a sympathetic ear instead of engaging with real humans (see also the popularity of AI girlfriends). Look at how many people are starting to rely on LLMs to give them opinions instead of applying thinking skills of their own. Examples:

- people asking chat bots what products they should buy or whether a contract is good, instead of interrogating them to mine data
- people using ML to generate code without a deep knowledge of how to write it efficiently, so their pruning job of picking the best output of 4 isn't as efficient

Studies are already showing its adverse effect on society: https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them

Just like it only takes a few media channels to control older folks, it only takes a few billionaires who control the LLMs to sway younger folks' public opinions. I see some kids continuing to develop these skills, basically the privileged kids sent to private schools who are given enough individual instruction to compel them to develop them. But kids who go to schools which focus on product output, because the student-to-teacher ratio doesn't allow that level of attention, are increasingly finding themselves handing in AI output, perhaps massaged, presented as the kid's own work. We are pretty fucked.

Mentions:#ML

They are similar in that they both focus on data warehousing and have SIEMs for cloud security. Databricks is a hot commodity because it specializes in warehousing well suited for AI/ML workloads; higher risk, higher reward there. Snow is generally focused on more general cloud computing and storage but can also handle AI/ML, just not built from the ground up to focus on it. It wouldn't be crazy to buy both. Personally I only hold CRWD in this subsector.

Mentions:#ML#CRWD

Set aside all ur amazing indicators, or ML just follow orange indicator in 47th whole term.

Mentions:#ML

Unusual flow is useful, but the hardest part is separating directional bets from hedges. A huge put sweep looks bearish until you realize it's a fund hedging a massive long position. I mostly look at repeated prints at the same strike across multiple days; that's harder to fake than a single sweep. Volume vs OI is key too: if volume >> OI, it's new money coming in, not just rolling. We use options flow as one of the inputs in our ML model at wormholequant.com to score whether a setup is real or noise. Free beta if you wanna check it out.

Mentions:#ML

They would have 24/7 satellite images hovering around Iran with ML for data analysis, no?

Mentions:#ML

exactly. everyone selling premium rn thinks theyre geniuses until VIX goes from 25 to 40. the question is always whether IV is actually overpriced or just looks high. wormholequant.com scores this with ML before you enter, helps filter the setups that are genuinely rich vs the ones where you're just picking up pennies. free beta rn

Mentions:#ML

thats the #1 killer. the fix isnt just tighter stops tho, its better entry selection. if you only take setups where the edge is quantifiably real you have fewer losers to manage in the first place. wormholequant.com does this with ML if you want something to test against your current process, free beta

Mentions:#ML

prospero is fine for basic flow but it doesnt tell you if the setup is actually worth taking. we built WormholeQuant to solve exactly that, ML scores how mispriced the opportunity is before you enter. different approach, free beta at wormholequant.com

Mentions:#ML

thats the whole problem right? you cant tell good from bad entries by looking at them. thats why we built wormholequant.com, an ML model that scores each setup before entry so you skip the weak ones. free beta if you wanna try it

Mentions:#ML

you can calc it yourself from past earnings moves on the ticker, or wormholequant.com does it automatically with ML. free beta rn. but DIY version: pull the last 8 earnings, compare the actual move vs what options implied, you'll see the pattern pretty quick

Mentions:#ML
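The DIY check described in the comment above (pull the last several earnings, compare the realized move with what options implied) can be sketched in a few lines. Everything here is illustrative: `implied_move`, `overpricing_ratio`, and the sample numbers are hypothetical, not data for any real ticker, and approximating the implied move by the ATM straddle cost over spot is a common rule of thumb, not the commenter's exact method.

```python
# Sketch of comparing implied vs realized earnings moves.
# All numbers below are made up for illustration.

def implied_move(straddle_price: float, stock_price: float) -> float:
    """Rough implied earnings move: ATM straddle cost as a fraction of spot."""
    return straddle_price / stock_price

def overpricing_ratio(implied: list[float], actual: list[float]) -> float:
    """Mean of implied / |realized| move; > 1 suggests options overpriced the move."""
    ratios = [i / abs(a) for i, a in zip(implied, actual)]
    return sum(ratios) / len(ratios)

# Hypothetical last-8-earnings data: implied moves vs signed realized moves.
implied = [0.08, 0.07, 0.09, 0.06, 0.08, 0.10, 0.07, 0.08]
actual  = [0.05, -0.04, 0.06, -0.03, 0.07, 0.04, -0.05, 0.06]

print(round(overpricing_ratio(implied, actual), 2))  # prints 1.65
```

If the mean ratio sits well above 1 across several earnings cycles, options have systematically overpriced the move, which is the "actually overpriced IV" case the surrounding comments distinguish from IV that merely looks high.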

this is actually why we built wormholequant.com. the whole point is scoring whether IV is mispriced relative to expected move using ML, not just telling you "IV is high." been testing it on earnings setups and the difference between "high IV" and "actually overpriced IV" is where the real edge is. free beta still open if anyone wants to try it! Thanks

Mentions:#ML

Thank you, glad the due diligence is helpful and insightful. And yes, everything mentioned in the October 29th R&D call, and what you mentioned about MM and other indications, is incredibly important. With the fraction we are seeing from unlimited dosing, and the WT1 targeting, the impact this will have in other indications such as MM is incredibly exciting. The cure fraction data (and the long-term relapse-free survival and post-relapse survival, if there are any relapses) is paramount for this, as it shows the value of the platform to strategic acquirers.

As for the machine learning models, each one and the ensemble: the mixture cure-fraction model has exactly 3 parameters (cure fraction, uncured median OS, and the mixing proportion) constrained by 2 hard data points: 60 confirmed deaths at month 46, and 72 confirmed deaths at month 58, out of 126 randomized patients. Three parameters minus two constraints equals 1 free parameter. There is literally no room to overfit. The constraint residual is below 10^-10, machine precision. At the biological identity point, where the uncured mOS equals the BAT mOS exactly, which is the only solution with 0 degrees of freedom, the model produces BAT mOS = 11.4 months. The full Bayesian posterior, incorporating 7 published literature sources as priors, gives a MAP of 11.1 months, mean of 11.6 months, median of 11.5 months. All three estimators agree to within 0.5 months.

For the REGAL trial to fail, one of three things would need to be true:

1. BAT mOS exceeds 23 months. No CR2 AML population has ever come close. Historical: 6-8 months. Venetoclax+Aza-era optimistic: 10-12 months.
2. The 60/72 event counts reported by the IDMC are fabricated. That is SEC fraud.
3. Survival curves can decelerate from 12 deaths in 12 months (from 66 at risk) without a cure fraction. That is mathematically impossible under any standard parametric survival distribution.

Death is the endpoint. Not progression. Not response rate. Not a subjective RECIST read. Death certificates are definitive; there is zero measurement ambiguity. 72 deaths out of 126 patients means 57.1% event maturity, past the pooled median. When you have this much event data this close to the end of a survival trial, the cure-fraction model is constrained so tightly that the answer is effectively determined. The math does not leave room for a different conclusion.

This is a stars-have-to-align situation for machine learning, and is why I believe that not having a sizeable position in SLS will be a life regret. There are 99.99% statistical chances of success and topline HR being .31 to .5, with a possibility of less than .3. There is no other trial I am aware of where ML can be applied with this degree of structural precision. The combination of: (a) death as an unambiguous binary endpoint, (b) hard event counts from IDMC press releases at two time points, (c) the deceleration signature in the event rate that uniquely identifies a cure fraction, (d) a disease setting (AML CR2, non-transplant eligible) with extensive published survival data to calibrate priors, and (e) a trial that is 80%+ complete by events. That combination does not exist anywhere else in oncology right now. Not for SLS-009, not for any other trial I have looked at.

At unblinding, we will be able to see the relapse-free survival data and post-relapse survival data, if there are any relapses, and see which of the realities on the right graph attached is closest.

O150 + Miami ML 👌

Mentions:#ML

I’m sure they aren’t only developing LLMs. Companies offering top pay for PhD ML researchers aren’t all doing the same thing. They’re researching the next big thing.

Mentions:#PHD#ML

Thank you, and SLS is the only position I've been adding to with new money every week for months now. I just haven't done the thousand hours of DD elsewhere, so I can't speak to any others. REGAL is successful up to a BAT mOS of 20, and the biological cap in this patient population, AML CR2 (not eligible for transplant), is 6 to 12 mOS. The 99.99% statistical probability of REGAL success is real and genuine. The upside from $6 is 7.5X to 29X, the real, genuine upside from the REGAL final analysis and readout. This is why it is all I've been adding to. This is the first biotech I've owned, and I've been a deep value investor for years; this is the most asymmetric opportunity with a gigantic margin of safety that I have ever come across in my life.

In addition, this is a stars-have-to-align opportunity for machine learning. I shared this in the SLS-009 Phase 2B Deep DD post, but for REGAL: the mixture cure-fraction model has exactly 3 parameters (cure fraction, uncured median OS, and the mixing proportion) constrained by 2 hard data points: 60 confirmed deaths at month 46, and 72 confirmed deaths at month 58, out of 126 randomized patients. Three parameters minus two constraints equals 1 free parameter. There is literally no room to overfit. The constraint residual is below 10^-10, machine precision. At the biological identity point, where the uncured mOS equals the BAT mOS exactly, which is the only solution with 0 degrees of freedom, the model produces BAT mOS = 11.4 months. The full Bayesian posterior, incorporating 7 published literature sources as priors, gives a MAP of 11.1 months, mean of 11.6 months, median of 11.5 months. All three estimators agree to within 0.5 months.

For the REGAL trial to fail, one of three things would need to be true:

1. BAT mOS exceeds 23 months. No CR2 AML population has ever come close. Historical: 6-8 months. Venetoclax+Aza-era optimistic: 10-12 months.
2. The 60/72 event counts reported by the IDMC are fabricated. That is SEC fraud.
3. Survival curves can decelerate from 12 deaths in 12 months (from 66 at risk) without a cure fraction. That is mathematically impossible under any standard parametric survival distribution.

Death is the endpoint. Not progression. Not response rate. Not a subjective RECIST read. Death certificates are definitive; there is zero measurement ambiguity. 72 deaths out of 126 patients means 57.1% event maturity, past the pooled median. When you have this much event data this close to the end of a survival trial, the cure-fraction model is constrained so tightly that the answer is effectively determined. The math does not leave room for a different conclusion.

This is a stars-have-to-align situation for machine learning, and is why I believe that not having a sizeable position in SLS will be a life regret. There are 99.99% statistical chances of success and topline HR being .31 to .5, with a possibility of less than .3. There is no other trial I am aware of where ML can be applied with this degree of structural precision. The combination of: (a) death as an unambiguous binary endpoint, (b) hard event counts from IDMC press releases at two time points, (c) the deceleration signature in the event rate that uniquely identifies a cure fraction, (d) a disease setting (AML CR2, non-transplant eligible) with extensive published survival data to calibrate priors, and (e) a trial that is 80%+ complete by events. That combination does not exist anywhere else in oncology right now. Not for SLS-009, not for any other trial I have looked at.

Hey, not a stupid question at all. So, the ML models I built are not "guessing" the split, they are mathematically deriving it from two immutable, SEC filed facts, 60 total events at month 46, and 72 total events at month 58. We know the total number of deaths, and the ML models tested for every BAT mOS from 8 to 23, and the biological/clinical cap in AML CR2 (not eligible for transplant) is 6 to 12 months. AML CR2 (not eligible for transplant) does not have a long tail. It is a relentlessly aggressive disease. In a standard parametric survival distribution without a cure fraction (like a Weibull), the hazard rate for a cohort of 54 elderly, heavily pretreated AML patients does not spontaneously cut itself in half. The only way the math works is if a large portion of those 54 survivors are experiencing a hazard rate of near zero, which is the exact mathematical definition of a cure fraction. The deceleration pattern, peaking at 14.1 events per 6 months during months 31-36, then dropping to 8.8 (months 43-48), then 6.2 (months 49-54), then 4.4 (months 55-60), this is a sustained, monotonic decline over 24+ months. That is not a random fluctuation. An exponential model produces a gentle decline (fewer patients remain at risk), but the cure-fraction model matches the shape of the decline, not just the endpoint. Furthermore, if you fit a Weibull, Log-Normal, Gamma Frailty, or Piecewise hazard to the same constraints (no cure assumption), they all overshoot. I tested 6 model families. Every single one that matches 60 events at month 46 and 72 at month 58 has an internal structure that includes either an explicit cure fraction or an implicit one (a sufficiently heavy tail in Weibull/Log-Normal that mimics cured patients). The math forces you there regardless of model choice. REGAL is successful up to a BAT mOS of 20 and the biological cap in this patient population, AML CR2 (not eligible for transplant) is 6 to 12 mOS. 
The 99.99% statistical probability of success for REGAL is real and genuine. https://preview.redd.it/fo1bj8sqcbqg1.png?width=2082&format=png&auto=webp&s=d5f907a1989ce995f0db3a25699d7f90495bab8b
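The deceleration argument above leans on one property of mixture cure models: as time passes, the surviving pool becomes dominated by the cured fraction, so the hazard decays toward zero, while a plain exponential hazard stays flat. A minimal sketch with illustrative parameters (a 50% cure fraction and a 10-month uncured median, not fitted to any REGAL data):

```python
import math

def surv_exp(t, median):
    """Exponential survival with a given median OS (months)."""
    return math.exp(-math.log(2) / median * t)

def surv_cure(t, pi, uncured_median):
    """Mixture cure model: a fraction pi never has the event; the rest
    follow an exponential with the given uncured median."""
    return pi + (1 - pi) * surv_exp(t, uncured_median)

def hazard(surv, t, *args, dt=1e-4):
    """Numerical hazard h(t) = -d/dt log S(t)."""
    return (math.log(surv(t, *args)) - math.log(surv(t + dt, *args))) / dt

for t in (12, 24, 48):
    print(f"t={t:2d}m  exponential h={hazard(surv_exp, t, 10):.4f}"
          f"  cure-mixture h={hazard(surv_cure, t, 0.5, 10):.4f}")
```

The exponential hazard is a constant ln(2)/10 ≈ 0.069 per month; the mixture hazard falls roughly tenfold between month 12 and month 48, which is the "hazard rate of near zero" signature described above.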

Mentions:#ML#CR

JUMIA!!! $JMIA...EM turnaround play.. Pan African E-Commerce company led by CEO Francis Dufay.. Company was founded in 2013 and IPO'd in 2019 to much fanfare hitting almost $55 a share in 2021.. but the burn was out of control and they didn't have a business that worked to serve the local markets... it was going to go BK right before Dufay took over as CEO in Feb of 2023 after running the Côte d'Ivoire operations since 2014 and growing it into their best/only performing market.. currently about $865M market cap... They just had their record revenue quarter in Q4 $61M up 34% YOY (GMV growing roughly 36%YOY).. They have brought the net cash burn down to around $1.7M for the quarter with $78M in cash and no debt.. much higher barrier to entry for competition than people realize.. The markets they serve are 8 countries 600M people (Nigeria, Egypt, Morocco, Kenya, Cote d'Ivoire, Senegal, Ghana, Uganda) they are young and growing and the last to the digitally connected party.. The first thing Francis did as CEO was get them out of several underperforming markets where they couldn't win in the near term down from 13 countries to 8.. He also closed the gaudy Dubai headquarters and reduced headcount.. everyone in corporate now lives in Africa.. he then spent the first two years rebuilding the supply in China (only place he hired in the first two years was in Shenzhen) where he needed to rebuild the sellers trust and retool the website to sell products that the customers could afford.. think white label tv's for $70 and sneakers for $6.. Then they consolidated all the warehouses in their core markets (they rent these buildings keeping costs low like ML does in South America).. the next thing was to target the rural "poor" where even his colleagues thought was crazy, but that is where the majority of the population lives.. so they set up pick up stations to lower the cost of last mile delivery and set up "J-Force Teams" to educate people on e-commerce.. 
Their cost to deliver to customers is down from $10+ to $1.90 and they are profitable on the variable cost of every delivery, and now more than 60% of their orders come from these rural areas.. Amazon and Temu can't just pour money into these markets to compete.. Jumia has built a super sophisticated partner network for delivery and logistics.. They use local partners to move the goods, and locals own the pick up stations; this keeps Jumia from being ripped off, because people aren't robbing their neighbors like they may be tempted to with a company listed in the USA.. these are countries with no national post office; they have solved this and plan to grow a logistics vertical around it long term for non-Jumia customers.. Amazon can't just buy 1000 delivery trucks in Nigeria (they don't exist, and the infrastructure to service such assets is nowhere to be found).. Francis Dufay has cracked the code to grow this business, and even at a 10x by 2030 it will be a fraction of the market cap of global e-commerce companies like SEA and MELI.. It is a combination of the digital technological revolution of the last 25 years in the US, the untapped growth off the floor in countries with GDP per capita under $10K, and the strong frugal leadership of Francis Dufay, who acts like and is an owner and an excellent steward of your capital over the next 3 to 5 years.. I am long JMIA

Haha no didn’t see the flare before this comment. Yup same, ever since the ML craze and knowing that to get small bumps in performance you’d need exponentially more compute power… it’s been a fun ride and I only wish I invested more.

Mentions:#ML

Alright stocks are over today OSU ML

Mentions:#ML

It’s like in ML when accuracy is low enough, just predict the opposite.
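There is real math behind the joke: a binary classifier that is consistently below 50% accuracy still carries information, and flipping its output turns it into a better-than-chance predictor. A toy sketch:

```python
def invert(preds):
    """Flip 0/1 predictions from a reliably-wrong binary classifier."""
    return [1 - p for p in preds]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

truth = [1, 0, 1, 1, 0, 1, 0, 0]
preds = [0, 1, 0, 0, 1, 0, 1, 1]  # perfectly wrong: 0% accuracy

print(accuracy(preds, truth), accuracy(invert(preds), truth))  # 0.0 1.0
```

In practice this only works if the below-chance accuracy is stable out of sample, which is exactly as hard as finding a stably above-chance model.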

Mentions:#ML

The historical +-10.2% vs implied +-8% gap is interesting, but be careful with the sample size -- earnings moves aren't normally distributed, so using mean absolute move + std dev can be misleading. A few outlier quarters (like that +18%) can skew everything. That said, I agree vol looks cheap here. If you're going long vol I'd look at a strangle slightly OTM rather than an ATM straddle -- cheaper entry, and if the move is actually bigger than implied you capture more of it. A calendar spread is another angle if you think front-month IV is underpriced relative to back-month. We've been modeling this exact type of setup with WormholeQuant -- scoring how mispriced IV is relative to the expected move using ML. MU is flagged on our end too. Free beta still open if anyone wants to check it out.

Mentions:#ML#MU

So, there are two things: REGAL, and then SLS-009 Phase 2B. For REGAL, there is statistically a 99.99% chance of success. The failure point for REGAL is a BAT mOS of 21, while biological/clinical reality is 6 to 12 mOS in AML (CR2, not eligible for transplant). The cure fraction is above 50%, as that is all that matches an uncured mOS (of GPS non-responders and responders that relapse/pass away) within biological reality. BAT mOS had a 94% chance of being set by Sept 2024 and 99% by Dec 2024, making the top end of BAT mOS 14.5. The ML models put BAT mOS at 11.4 -- between 10 and 13 with 99% accuracy, and below 12 with 95% confidence. Taken together, the true statistical probability of success for REGAL is 99.99%. Even with a BAT mOS of 16 (impossible territory), and if there wasn't a cure fraction -- just long survivors that relapse/pass away with GPS, slowly, on a 36-month exponential curve -- the 80th event would have triggered weeks ago. And this didn't happen; this alone guarantees success. I have not found any downside here yet. There hasn't been one bear thesis/contradiction I have found after conversing with hundreds of smart people, misc. doctors included, and statisticians/machine learning engineers as well. All 6 ML engineers I conversed with each had different approaches, and are getting the same/similar results. For SLS-009 Phase 2B ORR, the statistical probability of success is not 99.99%, but it is very high, and the post goes over the differences between the two situations and why the machine learning model results for each are different.

Thank you, glad the due diligence is helpful and insightful. I've been a deep value investor for years. By nature, I only buy heavy concentrated positions when there is a large margin of safety. Before SLS: Centene (CNC), a huge deep value winner for me in 2025, from the mid 20s to the 40s; VF Corporation, from the mid 11s/early 12s to now, was also a huge winner; and Nokian Tyres (TYRES) as well, from the mid 6s. Each of these was deep value with a heavy margin of safety. I have just as much conviction, honestly more, in the thesis for REGAL's statistical probability of success of 99.99%, which is real and genuine -- that is the true statistical probability of success. I covered this in the Part 1 and Part 2 DD for REGAL, but in short: the failure point for REGAL is a BAT mOS of 21, while biological/clinical reality is 6 to 12 mOS in AML (CR2, not eligible for transplant). The cure fraction is above 50%, as that is all that matches an uncured mOS (of GPS non-responders and responders that relapse/pass away) within biological reality. BAT mOS had a 94% chance of being set by Sept 2024 and 99% by Dec 2024, making the top end of BAT mOS 14.5. The ML models put BAT mOS at 11.4 -- between 10 and 13 with 99% accuracy, and below 12 with 95% confidence. Taken together, the true statistical probability of success for REGAL is 99.99%. Even with a BAT mOS of 16 (impossible territory), and if there wasn't a cure fraction -- just long survivors that relapse/pass away with GPS, slowly, on a 36-month exponential curve -- the 80th event would have triggered weeks ago. And this didn't happen; this alone guarantees success.
As for my position, I shared that in the post, and someone asked about it before, which I replied to here: [https://www.reddit.com/r/pennystocks/comments/1r5nbh0/comment/o5lscve/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/pennystocks/comments/1r5nbh0/comment/o5lscve/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) It's the only stock I've been adding new money to for months, and I'm still accumulating every week. My only other positions are those deep value wins from 2025, which I can't exit yet due to short-term capital gains. If I could, I'd be all in. But don't let that influence your own positioning; please only accumulate what makes sense for you. The reason I'm still accumulating is that the real, genuine upside from $6 is 7.5X to 29X with the REGAL final analysis readout and buyout, with a 99.99% chance of success for REGAL and a huge margin of safety. This is the most asymmetric opportunity with a gigantic margin of safety that I have ever come across in my life, hence my position size. And thank you, I appreciate your comment and am glad I could be of help with the due diligence. https://preview.redd.it/5dmqkylr1vpg1.png?width=2941&format=png&auto=webp&s=443ed5f9a8dc3ac01963cf958547187a7ddebfd4

https://preview.redd.it/23wlry8mhupg1.png?width=2941&format=png&auto=webp&s=ce142f3f59791e810a504e41bf9d5cb89debaebb Thank you. I've verified/QA'd/compared the results (at least for REGAL, not yet for SLS-009 Phase 2B) with 6 other machine learning engineers now, who all took similar approaches and got the same results. If you look across the other posts within the Q/A in the comments, there is a lot of helpful insight there from those discussions with other machine-learning-experienced people/engineers/statisticians in comparing results. Every ML model any of those 6 people built got 97% chances for a cure fraction above 35%. I then did further discovery to try to get a more precise range for the cure fraction, as the unconstrained grid search predicted 62% to 68%. The cure fraction for sure is above 35%; it is even above 50% for sure. If you look here, when I did the same comparison of each of the mixed-cure ML models with a pure exponential constrained to the events, with a cap of 35% for the cure fraction, you can see the GPS uncured mOS numbers become illogical. At a 35% cure-fraction cap stress test, with 12 BAT mOS, uncured mOS (GPS non-responders and responders that relapse and die) is 38m. That doesn't biologically make sense in reality, because with the cure-fraction cap of 35%, at 12 BAT mOS, that would mean 19 GPS dead at 72 events, and if you take 75% of 62 for non-responders, that is 14 non-responders and 5 responders; the non-responders are not living long enough to pull the uncured mOS that high -- they are likely living on par or close to BAT. The numbers from the unconstrained grid search cure fraction of 68% are what actually line up with biological reality. The HR is groundbreaking right now: 99.99% chances the topline HR is .31-.5. Also, this is covered in the post, but the mixed-cure model can't overfit.
Here is what I covered in the post above: It is a mixture cure-fraction model with exactly 3 parameters (cure fraction, uncured median OS, and the mixing proportion) constrained by 2 hard data points: 60 confirmed deaths at month 46, and 72 confirmed deaths at month 58, out of 126 randomized patients. Three parameters minus two constraints equals 1 free parameter. There is literally no room to overfit. The constraint residual is below 10^-10 -- machine precision. At the biological identity point -- where the uncured mOS equals the BAT mOS exactly, which is the only solution with 0 degrees of freedom -- the model produces BAT mOS = 11.4 months. The full Bayesian posterior, incorporating 7 published literature sources as priors, gives a MAP of 11.1 months, mean of 11.6 months, median of 11.5 months. All three estimators agree to within 0.5 months. The GPS model has 5 independent evidence streams all converging on the same answer:

* The published literature prior (7 sources): weighted center 8-10 months
* The hard event constraints: 60 events at mo46, 72 at mo58
* The IDMC decisions: trial continued without modification at both planned interim analyses, with arms visibly separated
* Biological plausibility: a cure fraction of 40-70% is consistent with the Phase 2 immune response rate of 64%
* The biological identity point: 0 degrees of freedom, BAT = 11.4 months
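As a toy version of that constraint argument, here is a sketch that recovers a two-parameter mixture cure model (cure fraction plus uncured exponential rate) from survival observed at two time points. It assumes everyone is at risk from t=0 and ignores censoring and staggered enrollment, so it is illustrative only, not a reconstruction of the actual REGAL fit; `fit_cure` and all numbers are synthetic.

```python
import math

def surv(t, pi, lam):
    """Mixture cure survival: cured fraction pi plus exponential tail."""
    return pi + (1 - pi) * math.exp(-lam * t)

def fit_cure(t1, s1, t2, s2):
    """Recover (pi, lam) from survival s1 at t1 and s2 at t2.
    For a candidate pi, lam follows from the ratio of the two
    exponential tails; bisect on pi until the t1 constraint holds."""
    def residual(pi):
        lam = -math.log((s2 - pi) / (s1 - pi)) / (t2 - t1)
        return surv(t1, pi, lam) - s1
    lo, hi = 0.0, s2 - 1e-9  # cure fraction cannot exceed later survival
    for _ in range(200):
        mid = (lo + hi) / 2
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    pi = (lo + hi) / 2
    lam = -math.log((s2 - pi) / (s1 - pi)) / (t2 - t1)
    return pi, lam

# Synthetic check: generate survival at months 46 and 58 from known
# parameters (pi = 0.5, uncured median 10m), then recover them.
lam_true = math.log(2) / 10
s46, s58 = surv(46, 0.5, lam_true), surv(58, 0.5, lam_true)
pi_hat, lam_hat = fit_cure(46, s46, 58, s58)
print(round(pi_hat, 3), round(math.log(2) / lam_hat, 1))  # ≈ 0.5 and 10.0
```

Two constraints pin down the two free parameters exactly here; the real analysis has to handle risk-set timing and censoring, which is where the heavier model families come in.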

Mentions:#SLS#ML#HR#OS

Opportunity cost comes with all investing, correct. As for the timeline for REGAL and the 80th event, I would ignore the any-day-now comments. Here is what the 8-model ML ensemble I built predicts. Short answer: the timeline is September 2026 to Feb 2027, and can stretch into a few months after that. https://preview.redd.it/w96dr0mqdupg1.png?width=2941&format=png&auto=webp&s=edf372c0010b6478ba4f831b4851494c6758128e I did another set of Monte Carlo simulations with the 8-model ML ensemble; this is the predicted timing of the 80th event with a BAT mOS of 10 (although I don't think it will be 10 -- the ensemble and the other machine learning model I built predicted 99% within 10 to 13, point estimate 11.3/11.4, and 95% confidence it is < 12).

80th event prediction with a BAT mOS of 10m: February 2027 (month 72). Point estimate & confidence intervals (ML ensemble):

| Metric | Trial Month | Calendar Date |
| --- | --- | --- |
| Point estimate (MC median) | 71.6 | Feb 2027 |
| 50% CI (IQR) | 67–79 | Sep 2026 – Sep 2027 |
| 95% CI | 61–101 | Mar 2026 – Jul 2029 |

The cure fraction is the bottleneck (bottleneck for events, I mean; for patients and reality, this is groundbreaking). We know the event rate is decelerating hard:

| Period | Rate |
| --- | --- |
| Trial start to IA (mo 0–46) | 1.30 ev/mo |
| IA to 72-event (mo 46–58) | 1.04 ev/mo |
| Today onward (mo 61+) | 0.64 ev/mo (and falling) |
| By Feb 2027 | 0.39 ev/mo |
| By Feb 2028 | 0.26 ev/mo |

| BAT mOS | MC Median Month | Calendar | 95% CI |
| --- | --- | --- | --- |
| 10m (ML ensemble) | 71.6 | Feb 2027 | 61–101 |
| 12m | 68.5 | Oct 2026 | 60–90 |
| 15m | 67.2 | Sep 2026 | 60–87 |
| 20m | 71.3 | Jan 2027 | 61–92 |

The predictions run from Sept/October 2026 to September 2027 essentially, but the predictions of 6.5 BAT left (3-11) and 7.8 BAT left (3 to 14), plus the fact that the last patient enrolled 24 months ago, mean those BAT events throughout the rest of this year could get us there.
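For readers asking what one arm of such a Monte Carlo even looks like, here is a minimal sketch with made-up placeholder parameters (54 survivors, half treated as effectively cured, a 10-month uncured median -- illustrative values, not the ensemble's fitted ones). Each draw reports how many months until 8 more deaths (72 → 80) accumulate:

```python
import math
import random

def months_to_80th(n_alive=54, events_needed=8, p_cured=0.5,
                   uncured_median=10.0, rng=random):
    """One Monte Carlo draw of the wait (in months) for 8 more events.
    Each survivor is treated as cured with probability p_cured (no event);
    the rest get an exponential residual lifetime. Toy model: ignores
    censoring, enrollment timing, and per-patient follow-up."""
    lam = math.log(2) / uncured_median
    times = sorted(rng.expovariate(lam)
                   for _ in range(n_alive) if rng.random() >= p_cured)
    # If too few uncured patients remain, the 80th event never happens.
    return times[events_needed - 1] if len(times) >= events_needed else math.inf

random.seed(0)
draws = sorted(months_to_80th() for _ in range(20000))
print(f"median wait for 80th event: {draws[len(draws) // 2]:.1f} months")
```

The real exercise layers a fitted cure fraction, per-patient risk-set entry times, and the BAT-vs-GPS split on top of this skeleton; the point here is only the mechanic of turning a survival model into an event-timing distribution.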

Mentions:#ML#CI

Thank you, I'm glad the due diligence is helpful and insightful. I've been a deep value investor for several years and am semi-retired, but I really enjoy working and continue with deep value investing. I have many years of experience in business, software engineering, and machine learning/statistics, and a strong understanding of healthcare and pharma gained over time. And it really is a 99.99% chance statistically; I know I sound crazy when I say that, but it's the truth. The 8-model ML ensemble I built predicts the 80th event will occur in the range of Sept/Oct 2026 to April/May 2027. As for your question on the impact the REGAL final analysis readout will have on shares, I believe the move will be a lot like ABVX. ABVX's drug (obefazimod) is for Ulcerative Colitis (UC), a crowded market dominated by AbbVie (Humira) and Pfizer (Xeljanz). Their Phase 3 data: they crushed the placebo. 50mg dose: 19.3% remission rate vs. 2.5% for placebo (Study 1) -- a 17-point improvement in a disease where 10% is considered good. Wall Street realized instantly that this would become a standard of care. The buyout probability went to 100%, hence the 10X surge to $60 a share, which was about $1B in market cap gained in one day; dilution came afterwards. GPS 3-4Xs survival (saves lives) in AML CR2 (not eligible for transplant), and there is a cure fraction of 62% to 68% predicted by the unconstrained grid search (which happens to align with the GPS immune response rate numbers). It will dominate CR1 given the results in CR2 (not eligible for transplant) and the cure fraction (it already beat Onureg in CR1 without unlimited dosing, achieving 67 mOS in Phase 2 within CR1), and it enters a market (CR2 maintenance) with ZERO competitors. It is a monopoly from Day 1 for at least 5 to 8 years. ABBV or BMS will need to acquire SLS; whichever one doesn't will lose a ton of revenue -- AbbVie stands to lose billions. The surge and buyout following will be astronomical within the range ($10B-$40B).
https://preview.redd.it/s4wtj59qbqpg1.png?width=1911&format=png&auto=webp&s=b7e30ff41ac129b8c989a9eb70e8a364834f702f

Can you teach me how to build your ML models?

Mentions:#ML

Banks do it all the time. I sleep very, very well every night. I traded 25+ years for a primary dealer; you learn the ins and outs very quickly. Retired at 50, and yes, there were some hairy, scary days along the line. Worst was 2008-2010; we were short LEH, MS, ML, AIG, CS, DB, C, and a few others. The other play was CDOs; there were multiple ways to play that. BUT, intelligence and research prevailed.

Mentions:#MS#ML#AIG#DB

For REGAL and the 80th event, I would ignore the any-day-now comments. Here is what the 8-model ML ensemble I built predicts. https://preview.redd.it/18x582vzmppg1.png?width=2941&format=png&auto=webp&s=f0ffe1762783e5bc06a70d2d1f392ef820e8cca1 Short answer: the timeline is September 2026 to Feb 2027, and can stretch into a few months after that. I did another set of Monte Carlo simulations with the 8-model ML ensemble; this is the predicted timing of the 80th event with a BAT mOS of 10 (although I don't think it will be 10 -- the ensemble and the other machine learning model I built predicted 99% within 10 to 13, point estimate 11.3/11.4, and 95% confidence it is < 12).

80th event prediction with a BAT mOS of 10m: February 2027 (month 72). Point estimate & confidence intervals (ML ensemble):

| Metric | Trial Month | Calendar Date |
| --- | --- | --- |
| Point estimate (MC median) | 71.6 | Feb 2027 |
| 50% CI (IQR) | 67–79 | Sep 2026 – Sep 2027 |
| 95% CI | 61–101 | Mar 2026 – Jul 2029 |

The cure fraction is the bottleneck (bottleneck for events, I mean; for patients and reality, this is groundbreaking). We know the event rate is decelerating hard:

| Period | Rate |
| --- | --- |
| Trial start to IA (mo 0–46) | 1.30 ev/mo |
| IA to 72-event (mo 46–58) | 1.04 ev/mo |
| Today onward (mo 61+) | 0.64 ev/mo (and falling) |
| By Feb 2027 | 0.39 ev/mo |
| By Feb 2028 | 0.26 ev/mo |

| BAT mOS | MC Median Month | Calendar | 95% CI |
| --- | --- | --- | --- |
| 10m (ML ensemble) | 71.6 | Feb 2027 | 61–101 |
| 12m | 68.5 | Oct 2026 | 60–90 |
| 15m | 67.2 | Sep 2026 | 60–87 |
| 20m | 71.3 | Jan 2027 | 61–92 |

The predictions run from October 2026 to September 2027 essentially, but the predictions of 6.5 BAT left (3-11) and 7.8 BAT left (3 to 14), plus the fact that the last patient enrolled 24 months ago, mean those BAT events throughout the rest of this year could get us there.

Mentions:#ML#CI

I'm glad the due diligence is helpful and insightful. First off, when I first posted the DD, someone asked about my position, and I replied. You can go view that comment if you want: [https://www.reddit.com/r/pennystocks/comments/1r5nbh0/comment/o5lscve/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button](https://www.reddit.com/r/pennystocks/comments/1r5nbh0/comment/o5lscve/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) I'm still adding and accumulating every week. Respectfully, focus on the due diligence and your own position. It doesn't matter what shares or position anyone else has. The only reason I mentioned that is because my money is where my mouth is; I wouldn't be sharing due diligence to help people if I didn't believe in my own theses. You can expect that from Wall Street, not me. Second, and just being honest, I can tell you that people severely overestimate the investment/ML/quantitative skills at a lot of funds. Many of the people that are great deep value investors and quants don't actually work at those funds. You'd be shocked. And I think there are some great institutional holders for SLS right now; Group One Trading is a great one. You'd be shocked at how lazy people are; not many people do DD this extensive, building out the machine learning models for every scenario, including stress-testing impossible scenarios, like I have done. Most big funds aren't running models on tiny bios (this is the first biotech I've ever owned, given it is truly deep value with a gigantic margin of safety and a 99.99% chance of REGAL success); they will jump into SLS at $20+ after the REGAL final analysis readout (or the SLS-009 Phase 2B ORR readout) and ride it to buyout.
I'm not sure of the exact number, but I believe before the interim analysis of REGAL in Jan 2025, the number of institutions was 35 to 72. And today, about 14 months later, that number is about 145 to 171+. If you research those institutions, and check the backgrounds of those that own lots of shares/calls, you'll uncover a lot of interesting owners. When looking at institutional ownership, I would focus more on the number of shares/calls owned rather than the number of institutions (although that is a positive). Looking at who owns shares/calls is important. Some that stuck out when I sorted through a few weeks ago: 1. Group One Trading (these guys have a similar background to one of my skillsets, really smart machine learning/quant guys); they loaded up on millions in call options, presumably having done similar ML modeling and come to the same conclusion that the REGAL success rate is 99.99%. 2. Dagco. I researched them; they are a small asset manager in Ohio with about $500M in assets. They own almost all blue chips and standard broad-based holdings. The only position that sticks out is 577,000 shares of SELLAS. They likely clearly see the asymmetric upside, with the margin of safety and no REGAL downside.

The problem with most people trying to use 'AI' for their portfolios is that they are using ChatGPT. LLMs are language engines, not math engines. They hallucinate numbers, struggle with complex volatility calculations, and mostly just spit out generic 'make sure you buy bonds' advice. I went down this rabbit hole last year. I realized if I wanted actual portfolio analysis, I had to stop using chatbots and build an actual Machine Learning model. I put together a Python script using an LSTM that ingests my holdings alongside 40+ market indicators. I don't use it as a crystal ball to predict the future. I use it as an X-ray machine for my current risk. For example, human brains are bad at seeing complex correlations. You might own 15 different stocks across 'different' sectors and think you are perfectly diversified. But when you feed it into a proper ML model, the math will flag that 80% of your portfolio's movement is actually just one massive, heavily correlated bet on semiconductor supply chains or interest rates. Don't ask chatbots for investment advice. But absolutely use quantitative machine learning to uncover the hidden risks and overlapping correlations in your diversification.
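The hidden-correlation point is easy to demonstrate without any deep learning. Below is a synthetic, illustrative sketch: 15 "different" tickers that all secretly load on one common factor, where PCA on the return covariance shows a single component driving most of the portfolio's movement (the data and loadings are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns: every asset = loading * common factor + noise.
n_days, n_assets = 500, 15
common = rng.normal(0.0, 0.02, n_days)            # shared hidden factor
loadings = rng.uniform(0.7, 1.3, n_assets)        # each "diversified" asset
idio = rng.normal(0.0, 0.01, (n_days, n_assets))  # stock-specific noise
returns = common[:, None] * loadings + idio

# PCA via the eigendecomposition of the covariance matrix.
eigvals = np.linalg.eigvalsh(np.cov(returns.T))[::-1]
share = eigvals[0] / eigvals.sum()
print(f"variance explained by the top component: {share:.0%}")
```

With these parameters the top component explains roughly 80% of total variance -- the "15 stocks" behave like one bet plus noise, which is exactly the concentration an eyeball scan of sector labels misses.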

Mentions:#ML

It’s called “data decay” and is a well-understood mechanism in ML. New data is worth infinitely more than old data.

Mentions:#ML

I did a similar study, with ML, using 680 features -- indicators, ratios based on the whole market, distance from EMAs, so many things, until ChatGPT and Gemini said I already cover most scenarios. What I found is that micro caps perform really well, like 10x in 4 years, without compounding, without leverage. I was thinking to add macro features too, but it's a lot of work. After reading your post you convinced me to use them :)

Mentions:#ML

I think that _this AI bubble_ and its buildout are about chatbots. They keep saying it's to train bigger models, scale up, etc. Maybe they'll do world models, but there are fundamental problems to solve there that scaling won't necessarily fix. More research is needed before that. I'm sure Meta and others are using "AI" to improve marketing, but they've been doing that for a long time -- it's called Big Data or ML. Nobody needs a trillion in data centers to do that. People confusing these different things is what is driving the hype cycle.

Mentions:#ML

Most algo options work is either vol selling (systematic wheel, iron condors at predefined IV thresholds) or vol forecasting (predict where IV is headed, buy/sell premium accordingly). The second one is harder but the edge lasts longer because fewer people do it. We built the second kind — ML models that forecast price movement and vol on a 3-12 week horizon, generates signals with entries and exits. Not HFT, not scalping, just systematic medium-term options trading. Exactly what you're describing. [wormholequant.com](http://wormholequant.com), free beta.

Mentions:#ML

This is the best comment on the thread and it's not close. The simulation-to-production framing is exactly right. I've never heard anyone from the ML side describe it that way but it maps perfectly. Backtest is training data. Live is inference on out-of-distribution inputs. The model doesn't break, it just degrades quietly until you measure. Your AMZN leg story is the one data point that matters. People always pitch legging as free edge but they're only counting the wins. One unhedged short put in a fast tape and you're giving back months. I learned the same lesson the expensive way. Combo orders are insurance and insurance has a premium. The real theta point deserves its own post honestly. The gap between displayed greeks and where you can actually close the position is the entire game once you get past the strategy selection phase. Most retail traders are managing a model of a position instead of the actual position. Appreciate you sharing the 400-spread dataset. That kind of sample size is rare in these threads. Usually it's someone with 12 trades telling you their system works.

Mentions:#ML#AMZN

This is the kind of post that actually helps people and I wish there were more of them here. I build AI systems for a living and trade options on the side so I see this exact problem from both angles. The fill quality gap you're describing is something we'd call a simulation-to-production discrepancy in the ML world. Your backtest is the simulation. It assumes clean execution at theoretical prices. Your live environment has latency, spread width, queue priority, and market maker behavior that the model never sees. Every system I've ever built, whether it's processing documents or executing spreads, degrades when it hits real world conditions. The question is always by how much and whether the edge survives the degradation. The 8-10% haircut you landed on is smart and honestly more conservative than most people are willing to be with themselves. I track something similar on my own book. I sell premium on SPX and a handful of liquid single names, mostly 30-45 DTE strangles and ICs. My version of your logging is a spreadsheet I've kept for about two years now where every fill gets recorded against the theoretical mid and the NBBO at time of submission. Across roughly 400 spreads the average slippage from mid is right around where yours is, call it 6-9% depending on the width of the spread and time of day. Wider spreads on less liquid names are obviously worse. SPX weekly 25 delta strangles during midday are the tightest fills I get. On the legging point I'll push back a little. I stopped legging into spreads about a year ago after getting absolutely smoked once on an unhedged short put during a flash move in AMZN. The fill improvement across 20-30 trades was real, maybe 10 cents average, but one bad fill where I had to chase the long leg 40 cents higher than planned wiped out three months of legging gains. The math doesn't work in your favor once you account for the asymmetric tail. You save a little every time it works and lose a lot the one time it doesn't. 
Combo orders are paying an insurance premium to the market maker and I've made peace with that cost. The theta observation is the one that I think more people need to hear. Displayed theta on a platform is a theoretical number based on a model that assumes continuous markets and log-normal returns. Actual decay doesn't happen smoothly. It chunks. Your position can show $14/day of theoretical theta and then sit flat for three days and then jump $50 on day four because gamma accelerated into expiration. If you're managing a book based on smooth theoretical decay you're going to consistently be confused about why your daily P&L doesn't match expectations. I stopped looking at displayed greeks for position management and started looking at where I can actually close the spread right now vs where I opened it. That's your real theta. Everything else is a number on a screen. Good post. The data-driven approach to measuring your own execution quality is exactly the right instinct. Most people would rather blame the strategy than measure the infrastructure.
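The logging workflow described above (every fill recorded against the theoretical mid at submission) reduces to a few lines. A sketch with a hypothetical log format and made-up fills:

```python
from dataclasses import dataclass

@dataclass
class Fill:
    """One spread fill logged at submission time (hypothetical format)."""
    theo_mid: float  # theoretical mid price of the combo
    fill: float      # actual fill price

def slippage_pct(fills):
    """Average absolute slippage from mid, as a percent of mid --
    the haircut to apply to backtested P&L."""
    per_trade = [abs(f.fill - f.theo_mid) / f.theo_mid for f in fills]
    return 100 * sum(per_trade) / len(per_trade)

log = [Fill(1.50, 1.40), Fill(2.00, 1.85), Fill(0.90, 0.85)]
print(f"avg slippage from mid: {slippage_pct(log):.1f}%")  # 6.6%
```

Grouping the same statistic by spread width or time of day reproduces the "wider spreads on less liquid names fill worse" observation from the comment.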

Mentions:#ML#AMZN

AI (machine learning) has definitely played a role for a while. The issue is calling it AI and pretending it's better than it is. Like .com, like IoT -- it's an investment angle. Adobe has had ML for 10 years; every single programming frontend has had ML for 10 years. It's the same shit. But 70% of people, including corporate managers, are actually dumb as shit and scared of destroying their careers, so they follow any possible stock market boost they can. Dumb cunts are fucking every industrial interest for a 5% boost because they know their stock options aren't shit to the CEOs

Mentions:#ML

Ticking time bomb? You have runs on multiple private equity firms in two weeks, massive loans for ML tech with dubious paths to profitability, commodities going haywire from not just the Strait blockade, but Iran playing checkers and still beating the fuck out of Mango with the Houthis stopping the Red Sea trade too. The bomb already detonated and we're running around like on 9/11 covered in dust waiting for a tower to fall on our heads.

Mentions:#ML

I literally just explained to you what I do with ESM... And then you call it junk AI... Lol, I see I'm conversing with an AI/ML guru over here

Mentions:#ML

I use ML products and workflows from Meta all the time. I use the evolutionary scale model for protein sequence modeling regularly. I'm guessing you don't use Meta for anything productive?

Mentions:#ML

Not surprised by this at all. Meta was once seen as a pioneer in ML/AI and led the open source effort, but it has fallen way behind Google/Anthropic/OpenAI. Some of the product choices have been awful too.

Mentions:#ML

You're confusing narrow uses of AI/ML with LLMs, because that's the narrative/marketing that these companies put forward. They are very different things

Mentions:#ML

Exactly. Most people's VIX playbook is "panic" or "do nothing." Having predefined levels where you start selling premium is what separates trading from gambling. We automated this part — ML model flags when vol is mispriced relative to where it's actually headed. Takes the guesswork out of "is VIX 30 the play or is it going to 40." [wormholequant.com](http://wormholequant.com) if you want that kind of edge, free beta rn.

Mentions:#ML

814K on a single RCAT call with October expiry is a big bet for a small cap. Either someone knows something or they're hedging a large position. Worth watching but don't chase unusual flow blindly — half the time it's a hedge on an existing short or part of a larger structure you can't see. We track this kind of options flow data in our ML models to see if unusual activity actually predicts anything. Spoiler: sometimes it does. [wormholequant.com](http://wormholequant.com) if you wanna dig deeper.

Mentions:#RCAT#ML

The opinions of some random people are useless to you; if you wanna pay for info, get ML signals like [https://wormholequant.com](https://wormholequant.com) - btw they are searching for 50 beta testers, so it's free for the last few days...

Mentions:#ML

Lol this is the realest comment in the thread. Everyone's out here acting like they "know" when they size up. Nah you're guessing with extra confidence. The dipping in approach is smart though — you're basically admitting you don't know by scaling in instead of hammering it. We took this exact logic and automated it — ML model assigns a confidence score and that determines whether it's a full position or a starter. Takes your ego completely out of the equation. Messing around with this at [wormholequant.com](http://wormholequant.com) if you wanna see how it works, free beta rn.

Mentions:#ML

This is one of the best posts I've seen on here in a while. The institutional flow argument for why SPX tails are overpriced is spot on - you're basically paying a tax because pension funds have to buy those puts regardless of price. Meanwhile nobody gives a shit about deep OTM wheat calls so they just sit there mispriced.

The Convexity Score idea is really interesting. We do something conceptually similar but for options volatility forecasting - ranking where the biggest gap is between what the model predicts and what the market is pricing. Different application but same core insight: the edge isn't in being smarter about direction, it's in finding where the pricing is laziest.

One thing I'd add: have you looked at how tail frequency changes across volatility regimes? Like, do wheat tails cluster more when ag vol is already elevated, or are they truly random? Because if they cluster, you could potentially time when to load up on those cheap wings instead of buying every month and bleeding theta.

If you're into this kind of quantitative approach to options, we're building something in the same vein - ML models for options pricing inefficiencies. Different angle than your tail screening but similar philosophy. [wormholequant.com](http://wormholequant.com) if you're curious, free beta rn.
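For what it's worth, the "find where pricing is laziest" ranking can be sketched in a few lines. The tickers and vol numbers below are made up for illustration; this is not anyone's actual model:

```python
# Score each underlying by the gap between a hypothetical model vol
# forecast and the market's implied vol; a positive score means the
# market looks underpriced relative to the forecast.
forecast_vol = {"SPX": 0.15, "WHEAT": 0.35, "GLD": 0.14}  # toy numbers
implied_vol = {"SPX": 0.18, "WHEAT": 0.22, "GLD": 0.15}   # toy numbers

scores = {k: forecast_vol[k] - implied_vol[k] for k in forecast_vol}
ranked = sorted(scores, key=scores.get, reverse=True)  # laziest pricing first
```

The ranking, not the raw gap, is the point: you allocate attention (or wings) to the top of the list rather than trying to call direction.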

Mentions:#ML

It's either taking ML or getting out of the trade early before it gets there. I do daily ICs on SPX also and it has been profitable.

Mentions:#ML
r/optionsSee Comment

That's basically fixed fractional sizing which is honestly one of the most sustainable approaches out there. 10% max risk on a 2x setup is clean math. The hard part is knowing when that 2x is real and when you're just telling yourself it's 2x because you want the trade. We built our system around that exact problem - ML model spits out a confidence score so you're not guessing whether the setup is actually worth full size or not. Free beta if you wanna check it out [wormholequant.com](http://wormholequant.com)

Mentions:#ML
r/optionsSee Comment

Really solid discussion here. Seems like the consensus is: equal sizing is safer, conviction sizing can work but only if it's backed by data not feelings, and fractional Kelly is the gold standard if you can estimate your edge properly. For anyone interested in taking the "feeling" out of it — we're building ML models that assign confidence scores to options signals and that drives sizing. Still in free beta - wormholequant.com. Appreciate all the input.
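For reference, fractional Kelly with a hard cap is only a few lines. The win probability `p` and payoff ratio `b` below are example inputs; estimating them honestly is the hard part the thread keeps coming back to:

```python
# Sketch of fractional Kelly sizing with a max-risk cap.
# Inputs are assumptions for illustration, not output from any real model.

def kelly_fraction(p: float, b: float) -> float:
    """Full Kelly fraction for win probability p and win/loss payoff ratio b."""
    return p - (1 - p) / b

def position_size(bankroll: float, p: float, b: float,
                  kelly_mult: float = 0.5, cap: float = 0.10) -> float:
    """Fractional Kelly (half-Kelly by default), capped at max risk per trade."""
    f = max(0.0, kelly_fraction(p, b)) * kelly_mult
    return bankroll * min(f, cap)

# Example: 55% win rate on a 2:1 payoff, half-Kelly, 10% cap.
size = position_size(10_000, p=0.55, b=2.0)
```

Note that a negative-edge setup (`kelly_fraction` below zero) sizes to zero, which is exactly the "don't take the trade" case.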

Mentions:#ML
r/optionsSee Comment

Honest take lol. I think the issue is that conviction works until the one time it doesn't and that one time wipes out the gains from all the times it did. That's why we moved toward letting ML models quantify confidence instead of relying on how we feel about a trade. Takes the ego out of it completely. Building this into a platform right now — [wormholequant.com](http://wormholequant.com) if you're curious.

Mentions:#ML
r/optionsSee Comment

Scaling in and out is probably the best middle ground in this debate honestly. You're not betting the farm on conviction but you're also not treating every setup the same. The "keep cash for mean reversion" part is smart — most people go all in directionally and have nothing left when the real opportunity shows up. We take a similar approach with our ML signals — model confidence determines whether it's worth full size or partial. [wormholequant.com](http://wormholequant.com) if you want to see how that looks in practice.

Mentions:#ML
r/optionsSee Comment

Keeping buying power in reserve when selling puts is the part most people skip. One assignment on a big ETF at the wrong time and you're stuck. Same size makes sense for your strategy because the risk per trade is already defined by the premium and strike you pick. For anyone who wants to take sizing decisions out of their hands entirely — we built a system where the ML model assigns confidence per signal and that drives the sizing logic. Free beta at [wormholequant.com](http://wormholequant.com)

Mentions:#ML
r/stocksSee Comment

Market cap wise, Nvidia, sure, it's way larger. But how much larger is it when you think about product lines and market-segment diversity? They just make GPUs and some supporting networking. That networking revenue was bolstered by their near monopoly in the ML/AI accelerator GPU use case, but that is about to shatter. Jensen did try to get ahead by launching SpectrumX Ethernet switches to stay relevant as the entire data center industry has said it prefers to go forward with Ethernet, but now they face competition they didn't have before, and AMD will quickly take a significant share of the fast-growing total GPU/DC TAM. AMD has an extremely strong platform with MI450 and absolute superiority in CPUs, which, thanks to agentic workflows, are now at a 50/50 split of planned DC deployments at the large hyperscalers. I don't see Nvidia as a larger company. They are just a fad in my eyes and a huge risk for revenue reduction as their margins shrink and their monopoly is done.

Mentions:#ML#AMD#DC#MI

ML answers my questions more often to a reasonable degree of accuracy than it lies. I think the chance of this being a net negative is unlikely. Many use cases exist, and maybe they’ll use it in an extremely dumb way, but not every christian nationalist is a complete idiot.

Mentions:#ML

AI = affordable Indians ML = Mumbai labor LLM = largely lackluster minions from the other side of the world

Mentions:#ML

Respectfully, this is just AI-slop output from an LLM that doesn't make any sense. I would encourage reading the DD and then coming back with questions. You didn't even provide context on what prompt was sent or what material was provided. I would bet everything on that LLM response being wrong. But it's okay; there will always be people who just don't read and don't think rationally and logically about business and due diligence, and resort to generic LLM responses for decisions. You can bring a horse to water but you can't make it drink.

The LLM slop said this: "'The 72-event count pins you to that curve.' This is false. Why? Because: we do NOT know which arm the 72 deaths came from; we do NOT know arm-level survival curves; we do NOT know the censoring distribution by arm; we do NOT know the time-to-event distribution by arm."

Extreme censoring stress tests were done and are covered in the post above. And BAT (best available treatment) in AML CR2 (not eligible for transplant) has a biological cap, proven study after study: BAT mOS is 6 to 10 months in AML CR2 (not eligible for transplant), and you can assume 6 to 12 months. The ML model for predicting when BAT median OS was set gave a 94% chance it was set by Sept 2024 and 99% by Dec 2024. Five ML models, along with the mixed-cure model, verified by 4 different machine learning engineers who all took different approaches but arrived at the same or similar results, put BAT mOS at: 91% within 10 to 14, 80% within 10 to 13, and 99.99% within 10 to 13, centered at 11.4.

I did cross-validation with 5 different ML approaches:

* Random Forest: 10.4m \[10.2-10.5\]
* Gradient Boosting: 10.5m \[10.2-10.5\]
* LASSO Regression: 11.1m \[10.8-11.3\]
* Neural Net Ensemble: 10.8m \[10.5-11.0\]
* 5-Method Consensus: 10.7m \[10.4-11.1\]

All 5 ML methods agree BAT mOS is 10-11.3m. None produces an estimate above 11.3-11.4m. The ensemble itself rejects BAT mOS > 12m at >95% confidence. There's a 99% chance BAT mOS was set in 2024, making the upper limit 14.5 for BAT mOS. Even in the impossible scenario that BAT mOS is 14.5, the topline HR would still be 0.35 to 0.50.
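For what it's worth, a consensus step like the one described can be sketched as a simple median pool of the per-model point estimates quoted; the pooling rule here is my assumption, not necessarily how the 5-method consensus was actually computed:

```python
# Toy consensus over the four per-model median-OS point estimates (months)
# quoted in the comment; the pooling rule (plain median) is an assumption.
import statistics

estimates_months = {
    "random_forest": 10.4,
    "gradient_boosting": 10.5,
    "lasso": 11.1,
    "neural_net_ensemble": 10.8,
}

consensus = statistics.median(estimates_months.values())
spread = max(estimates_months.values()) - min(estimates_months.values())
```

A small spread across methods that take different approaches is what lends the consensus its weight; a wide spread would mean the ensemble is mostly reflecting model choice, not the data.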

Yeah, that's the issue: the definition of AI is kind of diffuse, and it's hard to draw the line where an automation process becomes "AI". When machines start to process and adapt to data, it falls into machine learning and is then kind of under the AI umbrella term. As you said, if the machine is working with a given set of parameters and doesn't adjust, or is adjusted by humans, it isn't really ML or AI, it's just automation; but it's not like the customers of the tech would distinguish, and I can easily see how you could just put an AI label on it to boost sales. And the hype of what "AI" can do is definitely out of proportion to what it has delivered so far.

Mentions:#ML

I was around when Bill Clinton swore he did not have sexual relations with ML. I will be around when he says he did not have relations with Epstein.

Mentions:#ML

I rewrote an AES encryption algorithm on an Nvidia graphics card in 2017 using CUDA for one of my graduate research classes and at the time swore that Nvidia GPUs were going to be the future of computing. Everyone told me their only use case would be advanced graphics or highly parallel computing problems that didn't match everyday usage. I didn't agree and thought they would be useful for workloads requiring heavy computation (AI/LLMs), but I didn't think we'd see wide-scale general applications of real ML models until the 2030s… should've gone with my gut and gone all in on Nvidia instead of listening to my friends/colleagues at the time 🙃.

Mentions:#AES#ML

I thought GPUs becoming more prominent in data centers was pretty predictable. They're way more efficient for earlier ML models too and that had been growing rapidly for a while. I just didn't think it would be Nvidia that dominated the market. AMD and Intel were investing heavily in GPU development for years and they had way more familiarity with the enterprise side of things. Nvidia looked more interested in the consumer market. Oh well.

Mentions:#ML#AMD

If you disagree, you either don’t work in AI/ML where you get access to ALL models to test out for yourself, or you are a pretty horrible engineer

Mentions:#ML

But all of those things have used AI for a decade or more, and no one is competing at that level without AI tools. People seem to think AI=ChatGPT, which is not the case. Does Exxon need ChatGPT? Maybe, maybe not. Does Exxon need to utilize advanced ML tools to forecast demand, oil reserves, where to drill, seismic analysis, etc? Absolutely. Every day. AI is incredibly useful and already used every day. As I'm typing this, presumably Cloudflare is using some AI magic to make sure I can even post it. Now since they all use it, that won't give them an *advantage* necessarily - but that's different than saying they don't need to use AI at all. **Whether or not generative AI LLMs live up to the recent market valuation is a separate question.**

Mentions:#ML

I’m an expert in ML and think transformer based llms are a likely path to AGI. The key missing parts are all in training approaches, not the transformer structure itself which has excellent theoretical guarantees.

Mentions:#ML#AGI

This is genuinely the worst fking take I have ever seen. Way before attention we had MCTS beating humans on very complex tasks, image recognition had benefited from huge AI breakthroughs, and social engineering was being done using machine learning. People are just angry at the new hype thing; ML is going to stick around and continue gradual improvements.

Mentions:#ML

Good enough has always been the name of the game. It doesn't make sense to work on something until you have acceptance criteria. That can vary broadly depending on the application, but that has also always been true. Optimization for its own sake is valueless. Artisan coders will still have their place, but yeah, it's really not about writing code, it's about design, and that will be research roles and academia-focused positions; even those have been leveraging AI/ML a lot longer than it's been in the public lexicon. Yeah, the more I think about it, the more I agree with you that all coders are toast. Lol.

Mentions:#ML

I am a programmer. Working with ML. I know the state of the industry and I use AI as a code assistant. Brilliant. Companies replacing junior programmers with AI are going to be screwed in 5 years time when they want senior Devs and have no one because the ones supposed to be gaining experience now are finding it impossible to find a job.

Mentions:#ML

https://preview.redd.it/hzpclucp0elg1.jpeg?width=1080&format=pjpg&auto=webp&s=7836ffc4eda63559420eeccb9cd134db251f82d3 VMHG - Victory Marine Holdings Corp. | Company Profile | OTC Markets [https://share.google/74ML7qLHN8SYdqb0Z](https://share.google/74ML7qLHN8SYdqb0Z) Dunn & Groux Beverage Holdings, Inc. (DGBH) OTC Markets Newsroom: Search for symbol VMHG to view the "Change of Control" announcement

Mentions:#VMHG#ML

The ML stands for markup, sure. And HTML provides only structure but no execution or logic functions.

Mentions:#ML

I'm not convinced of your hypothesis yet. Look, there is a reason these few companies are hoarding 90% of the compute production. They are setting the future price at $1 a token by "subsidizing" it right now. With the exception of electricity, it does not actually cost all that much per token in the grand scheme of things. And it will only get cheaper for them as they scale ever larger and become more efficient. They will have bundles and subscription methods that give you just enough of a discount to not leave but feel stuck. Basically Oracle's business model (they say Oracle doesn't have customers, only hostages). I think they are preparing for an even larger business model in which a few players hold a monopoly on compute, with AI/ML technology being the "killer app" of this future. AI is just a means to this compute infrastructure.

Mentions:#ML

don’t disagree quantum AI ML quadrant in the cloud buzzword bingo has gone on for a long time, what op is referring to is a very real issue which is coming tho https://youtu.be/OkVYJx1iLNs

Mentions:#ML

I dunno. IBM, like MSFT, is a massive company that has actually pivoted multiple times to keep quietly hitting home runs while the online zeitgeist goes all in on their demise. Their sector-niche ML/AI products are pretty badass, but all sold through third-party sector experts. If I'm running a Fortune 500 and looking to buy an AI product for a specific need, I'm going with IBM.

Mentions:#IBM#MSFT#ML

Curious — what ML framework are you using to tune it? Is it more classification-based (predicting regime) or probabilistic forecasting on the time series itself? I’ve found that probabilistic models tend to generalize better across regimes than pure signal optimization.
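A toy contrast of the two framings, using synthetic data (nothing here is from a real trading system): classification collapses the forecast into a regime label, while a probabilistic forecast keeps quantiles that a downstream rule can use.

```python
# Minimal sketch: regime classification vs. probabilistic forecasting,
# on a synthetic return series. Thresholds and window sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, size=500)  # toy daily return series

window = returns[-250:]  # trailing history used as the forecast basis

# (a) Classification framing: collapse to a single hard label.
regime = "high_vol" if window.std() > 1.0 else "low_vol"

# (b) Probabilistic framing: keep the predictive distribution,
# summarized here by empirical quantiles of the window.
q05, q50, q95 = np.quantile(window, [0.05, 0.50, 0.95])

# The probabilistic output preserves information the label discards,
# e.g. tail width, which a sizing rule can consume directly.
tail_width = q95 - q05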

Mentions:#ML

Damn… time will tell… did you have an ML pipeline for it to learn after every week? Or you never changed your parameters?

Mentions:#ML

Out of the box at first, then I created an ML pipeline to tune it after a 4-month backtest… now it'll papertrade the rest of the year to fine-tune, but it's already collected data for multiple regimes: Breakout, Consolidation, Sell-off and Chop.
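A walk-forward loop like the one described (tune on a trailing window, then roll into out-of-sample data) can be sketched like this; the window lengths are arbitrary examples, not the poster's actual settings:

```python
# Sketch of walk-forward splits over n bars: fit/tune on `train` bars,
# evaluate on the next `test` bars, then roll the whole window forward.
def walk_forward_splits(n: int, train: int, test: int):
    """Yield (train_idx, test_idx) index ranges rolling through n bars."""
    start = 0
    while start + train + test <= n:
        yield (range(start, start + train),
               range(start + train, start + train + test))
        start += test

# Example: 100 bars, tune on 60, papertrade the next 20, roll by 20.
splits = list(walk_forward_splits(n=100, train=60, test=20))
```

The key property is that each test window sits strictly after its training window, so tuned parameters never see the data they're scored on.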

Mentions:#ML

They just need to work on ML projects and join Anthropic /s

Mentions:#ML

Bet it all on Canada - 1.5 and USA ML today parlay to double up

Mentions:#ML

This is a good comparison because cars are the worst and most dangerous form of motorized transportation but because of political decisions and economic incentives is the most popular in America, similar to how not all AI/ML is inherently bad, but the worst form (LLMs/chatbots) are by far the most popular and most hyped.

Mentions:#ML

Looks like an ML training loss curve. It's kind of impressive

Mentions:#ML

ML is old as shit. They’ve been called GLE

Mentions:#ML#GLE

Raps ML, Open 10$, IBRX 10$

Mentions:#ML#IBRX
r/stocksSee Comment

Are you saying that a substantial number of customers are leaving AWS/Azure/GCP because of unacceptable risk to proprietary ML/AI data? I’m not sure I follow. I’m not aware of any companies that have moved from cloud to on-premise for security reasons. There are/were some (especially in proprietary finance) who didn’t ever move certain of their infrastructure to cloud, but they’re in the minority. And those businesses only represent upside to Azure/AWS/GCP, as they’ll likely capitulate eventually, as they see their peers/competitors managing the risk, and winning, because they can develop and scale so much faster. Can you provide any evidence that there are many (or any?) organizations moving from cloud infrastructure to on-premises for security reasons? I’m not sure what you’re describing is an issue at all. But I’m curious where you got the idea from. And happy to read anything you can provide.

Mentions:#ML
r/stocksSee Comment

I saw an interesting point someone made: GOOGL could probably, at some point, just use the data from your convos with Gemini to build a much better ad profile around you. So rather than showing ads in the Gemini chat, they use all that data to target ads when you watch YouTube or search. META seems to be benefiting from using AI in its ad platform. My belief is there isn't really one AI winner, and LLMs for consumers aren't even what is going to matter. I still think businesses will mostly use AI for understanding their data and acting on it. Businesses have been using ML, machine learning, for a long time; however, you couldn't communicate with it the way you can with an LLM. I think there is also some merit to the agentic AI stuff. It's still really early, and it's interesting since, from surveys and whatnot, it seems like most people aren't really using AI that much. It's probably used most in the software engineering field; I work there and use AI. However, there is clear demand from CEOs, per surveys, to adopt and use it. I think over time we will see some benefits, but I don't think it's going to replace as many jobs as feared. I think it's going to hurt entry-level roles the most, which means younger people getting into the workforce.

Mentions:#GOOGL#ML

That is utterly wrong. AI applications have been used profitably for over a decade. Not all AI is generative chatbots. Whether it's classical ML classifiers or neural net based anomaly detection, those models are being used effectively and profitably in countless fields. And just to get this out of the way before it's brought up again here: yes, those have been called "Artificial Intelligence" in the scientific discourse for decades. It's not a recent rebranding as some poorly informed people try to claim.

Mentions:#ML