Reddit Posts
[Discussion] How will AI and Large Language Models affect retail trading and investing?
[Discussion] How will AI and Large Language Models Impact Trading and Investing?
Neural Network Asset Pricing?
$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
Moving from ML to Robinhood. Mutual funds vs ETFs?
Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
I'm YOLOing into MSFT. Here's my DD that convinced me
I created a free GPT trained on 50+ books on investing, anyone want to try it out?
Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
Option Chain REST APIs w/ Greeks and Beta Weighting
Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
AI/Automation-run trading strategies. Does anyone else use AI in their investing processes?(Research, DD, automated investing, etc)
🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
Predictmedix Al's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts
Do you believe in Nvidia in the long term?
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
Which investment profession will be replaced by AI or ML technology ?
WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
Training ML models until low error rates are achieved requires billions of $ invested
🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
AI/ML Quadrant Map from Q3…. PLTR is just getting started
$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
VetComm Accelerates Affiliate Program Growth with Two New Partnerships
NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
Netramark (AiAi : CSE) $AINMF
Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
How would you trade when market sentiments conflict with technical analysis?
Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
How are you integrating machine learning algorithms into their trading?
Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
Why I believe BBBY does not have the Juice to go to the Moon at the moment.
Meme Investment ChatBot - (For humor purposes only)
WiMi Build A New Enterprise Data Management System Through WBM-SME System
Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
The Squeeze King - I built the ultimate squeeze tool.
$HLBZ CEO is quite active now on twitter
Don't sleep on chatGPT (written by chatGPT)
DarkVol - A poor man’s hedge fund.
COIN is still at risk of a huge drop given its revenue makeup
$589k gains in 2022. Tickers and screenshots inside.
The Layout Of WiMi Holographic Sensors
infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
$APCX Huge developments of late as it makes its way towards $1
Robinhood is a good exchange all around.
Mentions
It's either taking ML or getting out of the trade early before it gets there. I do daily ICs on SPX also and it has been profitable
That's basically fixed fractional sizing which is honestly one of the most sustainable approaches out there. 10% max risk on a 2x setup is clean math. The hard part is knowing when that 2x is real and when you're just telling yourself it's 2x because you want the trade. We built our system around that exact problem - ML model spits out a confidence score so you're not guessing whether the setup is actually worth full size or not. Free beta if you wanna check it out [wormholequant.com](http://wormholequant.com)
Really solid discussion here. Seems like the consensus is: equal sizing is safer, conviction sizing can work but only if it's backed by data not feelings, and fractional Kelly is the gold standard if you can estimate your edge properly. For anyone interested in taking the "feeling" out of it — we're building ML models that assign confidence scores to options signals and that drives sizing. Still in free beta - wormholequant.com. Appreciate all the input.
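The fractional Kelly idea mentioned above can be made concrete. A minimal sketch with illustrative numbers only (not a trading recommendation): for win probability p and payoff ratio b (win b per unit risked, lose 1), the full-Kelly fraction is f* = p - (1 - p)/b, and a common practice is to risk only a fraction of that (half-Kelly) to account for edge-estimation error.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Full-Kelly fraction of bankroll to risk: f* = p - (1 - p) / b."""
    return p - (1.0 - p) / b

def fractional_kelly(p: float, b: float, fraction: float = 0.5) -> float:
    """Scale down full Kelly (half-Kelly is common) and never size below zero."""
    return max(0.0, fraction * kelly_fraction(p, b))

# Example: 55% win rate on a 1:1 payoff.
full = kelly_fraction(0.55, 1.0)    # 0.55 - 0.45/1 = 0.10 -> risk 10% of bankroll
half = fractional_kelly(0.55, 1.0)  # half-Kelly -> risk 5%
```

The key caveat the thread raises still applies: the formula is only as good as your estimates of p and b, which is exactly why overestimating conviction is dangerous.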
Honest take lol. I think the issue is that conviction works until the one time it doesn't and that one time wipes out the gains from all the times it did. That's why we moved toward letting ML models quantify confidence instead of relying on how we feel about a trade. Takes the ego out of it completely. Building this into a platform right now — [wormholequant.com](http://wormholequant.com) if you're curious.
Scaling in and out is probably the best middle ground in this debate honestly. You're not betting the farm on conviction but you're also not treating every setup the same. The "keep cash for mean reversion" part is smart — most people go all in directionally and have nothing left when the real opportunity shows up. We take a similar approach with our ML signals — model confidence determines whether it's worth full size or partial. [wormholequant.com](http://wormholequant.com) if you want to see how that looks in practice.
Keeping buying power in reserve when selling puts is the part most people skip. One assignment on a big ETF at the wrong time and you're stuck. Same size makes sense for your strategy because the risk per trade is already defined by the premium and strike you pick. For anyone who wants to take sizing decisions out of their hands entirely — we built a system where the ML model assigns confidence per signal and that drives the sizing logic. Free beta at [wormholequant.com](http://wormholequant.com)
Market cap wise, Nvidia, sure, it's way larger. But how much larger is it when you think about product lines and market segment diversity? They just make GPUs and some supporting networking. That networking revenue was bolstered by their near monopoly in the ML/AI accelerator GPU use case, but that is about to shatter. Jensen did try to get ahead by launching SpectrumX Ethernet switches to stay relevant as the entire data center industry has said it prefers to go forward with Ethernet, but now they face competition they didn't have before, and AMD will quickly take a significant stake of the fast-growing total GPU/DC TAM. AMD has an extremely strong platform with MI450 and absolute superiority in CPUs, which thanks to agentic workflows are now at a 50/50 split of planned DC deployment in the large hyperscalers. I don't see Nvidia as a larger company. They are just a fad in my eyes and a huge risk for revenue reduction as their margins shrink and their monopoly is done.
ML answers my questions more often to a reasonable degree of accuracy than it lies. I think the chance of this being a net negative is unlikely. Many use cases exist, and maybe they’ll use it in an extremely dumb way, but not every christian nationalist is a complete idiot.
AI = affordable Indians ML = Mumbai labor LLM = largely lackluster minions from the other side of the world
Respectfully, this is just an AI slop response from an LLM, and it doesn't make any sense. I would encourage reading the DD and then coming back with questions. You didn't even provide context on what prompt was sent or what was provided. I would bet everything on the LLM response above being wrong. But it's okay, there will always be people who just don't read and don't think rationally and logically about business and due diligence, and just resort to generic LLM responses for decisions. You can bring a horse to the water but you can't make it drink. The LLM AI slop said this: "'The 72-event count pins you to that curve.' This is false. Why? Because: we do NOT know which arm the 72 deaths came from; we do NOT know arm-level survival curves; we do NOT know the censoring distribution by arm; we do NOT know the time-to-event distribution by arm." There were extreme censoring stress tests done, covered in the post above. And BAT (best available treatment) in AML CR2 (not eligible for transplant) has a biological cap proven study after study: 6 to 10 months mOS, and you can assume 6 to 12 mOS. The ML model for predicting when BAT median OS was set predicted a 94% chance BAT mOS was set by Sept 2024, 99% by Dec 2024. Five ML models along with the mixed-cure model, verified with 4 different machine learning engineers who all took different approaches but arrived at the same/similar results, put BAT mOS at: 91% within 10 to 14, 80% within 10 to 13, and 99.99% within 10 to 13, centered at 11.4. I did cross-validation with 5 different ML approaches: Random Forest 10.4m [10.2-10.5], Gradient Boosting 10.5m [10.2-10.5], LASSO Regression 11.1m [10.8-11.3], Neural Net Ensemble 10.8m [10.5-11.0], 5-Method Consensus 10.7m [10.4-11.1]. All 5 ML methods agree BAT mOS is 10-11.3m. None produces an estimate above 11.3-11.4m. The ensemble itself rejects BAT > 12m at >95% confidence.
99% chance BAT mOS was set in 2024, making the upper limit 14.5 for BAT mOS. In the impossible scenario that BAT mOS is 14.5, topline HR would still be 0.35 to 0.50.
Yeah, that's the issue, the definition of AI is kind of diffuse; it's hard to draw a line where an automation process becomes "AI". When the machine starts to process and adapt to data it falls into machine learning and then it's kind of under the AI umbrella term. As you said, if the machine is working with a given set of parameters and doesn't adjust, or is adjusted by humans, it isn't really ML or AI, it's just automation, but it's not like the customers of the tech would distinguish; I can easily see how you could just put an AI label on it to boost sales. And the hype of what "AI" can do is definitely out of proportion to what has been delivered until now.
I was around when Bill Clinton swore he did not have sexual relations with ML. I will be around when he says he did not have relations with Epstein.
I rewrote an AES encryption algorithm on an Nvidia graphics card in 2017 using CUDA for one of my graduate research classes and at the time swore that Nvidia GPUs were going to be the future of computing. Everyone told me that their only use case would be advanced graphics or highly parallel computing problems that didn't match everyday usage. I didn't agree and thought they would be useful for workloads requiring heavy computation (AI/LLMs), but didn't think we'd see wide-scale general applications of real ML models until the 2030s… should've gone with my gut and gone all in on Nvidia instead of listening to my friends/colleagues at the time 🙃.
I thought GPUs becoming more prominent in data centers was pretty predictable. They're way more efficient for earlier ML models too and that had been growing rapidly for a while. I just didn't think it would be Nvidia that dominated the market. AMD and Intel were investing heavily in GPU development for years and they had way more familiarity with the enterprise side of things. Nvidia looked more interested in the consumer market. Oh well.
If you disagree, you either don’t work in AI/ML where you get access to ALL models to test out for yourself, or you are a pretty horrible engineer
But all of those things have used AI for a decade or more, and no one is competing at that level without AI tools. People seem to think AI=ChatGPT, which is not the case. Does Exxon need ChatGPT? Maybe, maybe not. Does Exxon need to utilize advanced ML tools to forecast demand, oil reserves, where to drill, seismic analysis, etc? Absolutely. Every day. AI is incredibly useful and already used every day. As I'm typing this, presumably Cloudflare is using some AI magic to make sure I can even post it. Now since they all use it, that won't give them an *advantage* necessarily - but that's different than saying they don't need to use AI at all. **Whether or not generative AI LLMs live up to the recent market valuation is a separate question.**
I’m an expert in ML and think transformer based llms are a likely path to AGI. The key missing parts are all in training approaches, not the transformer structure itself which has excellent theoretical guarantees.
This is genuinely the worst fking take I have ever seen. Way before attention we had MCTS beating humans on very complex tasks, image recognition has benefited from huge AI breakthroughs, and social engineering was being done using machine learning. People are just angry at the new hype thing; ML is going to stick around and continue gradual improvements.
Good enough has always been the name of the game. Doesn't make sense to work on something until you have acceptance criteria. That can vary broadly depending on the application, but that has also always been true. Optimization for its own sake is valueless. Artisan coders will still have their place, but yeah, it's really not about writing code, it's about design, and that will be research roles and academia-focused positions; but even those have been leveraging AI/ML a lot longer than it's been in the public lexicon. Yeah, the more I think about it the more I agree with you that all coders are toast. Lol.
I am a programmer. Working with ML. I know the state of the industry and I use AI as a code assistant. Brilliant. Companies replacing junior programmers with AI are going to be screwed in 5 years time when they want senior Devs and have no one because the ones supposed to be gaining experience now are finding it impossible to find a job.
https://preview.redd.it/hzpclucp0elg1.jpeg?width=1080&format=pjpg&auto=webp&s=7836ffc4eda63559420eeccb9cd134db251f82d3 VMHG - Victory Marine Holdings Corp. | Company Profile | OTC Markets [https://share.google/74ML7qLHN8SYdqb0Z](https://share.google/74ML7qLHN8SYdqb0Z) Dunn & Groux Beverage Holdings, Inc. (DGBH) OTC Markets Newsroom: Search for symbol VMHG to view the "Change of Control" announcement
The ML stands for markup, sure. And HTML provides only structure but no execution or logic functions.
I'm not convinced of your hypothesis yet. Look, there is a reason these few companies are hoarding 90% of the compute production. They are setting the future price at $1 a token by "subsidizing" it right now. With the exception of electricity, it does not actually cost all that much per token in the grand scheme of things. And it will only get cheaper for them as they scale ever larger and become more efficient. They will have bundles and subscription methods that give you just enough of a discount to not leave but feel stuck. Basically Oracle's business model (they say Oracle doesn't have customers, only hostages). I think they are preparing for an ever-larger business model of a few players holding a monopoly on compute, with AI/ML technology being the "killer app" of this future. AI is just a means to this compute infrastructure.
don’t disagree quantum AI ML quadrant in the cloud buzzword bingo has gone on for a long time, what op is referring to is a very real issue which is coming tho https://youtu.be/OkVYJx1iLNs
I dunno. IBM, like MSFT, is a massive company that has actually pivoted multiple times to keep quietly hitting home runs while the online zeitgeist goes all in on their demise. Their sector-niche ML/AI products are pretty badass, but all sold through 3rd-party sector experts. If I'm running a Fortune 500 and looking to buy an AI product for a specific need, I'm going with IBM.
Curious — what ML framework are you using to tune it? Is it more classification-based (predicting regime) or probabilistic forecasting on the time series itself? I’ve found that probabilistic models tend to generalize better across regimes than pure signal optimization.
Damn… time will tell… did you have an ML pipeline for it to learn after every week? Or you never changed your parameters?
Out of the box, then created an ML pipeline to tune it after a 4-month backtest… now it'll papertrade the rest of the year to fine-tune, but it's already collected data for multiple regimes: Breakout, Consolidation, Sell-off and Chop.
They just need to work on ML projects and join Anthropic /s
Bet it all on Canada - 1.5 and USA ML today parlay to double up
This is a good comparison because cars are the worst and most dangerous form of motorized transportation but because of political decisions and economic incentives is the most popular in America, similar to how not all AI/ML is inherently bad, but the worst form (LLMs/chatbots) are by far the most popular and most hyped.
Looks like an ML training loss curve. It's kind of impressive.
ML is old as shit. They've been called GLE
Raps ML, Open 10$, IBRX 10$
Are you saying that a substantial number of customers are leaving AWS/Azure/GCP because of unacceptable risk to proprietary ML/AI data? I’m not sure I follow. I’m not aware of any companies that have moved from cloud to on-premise for security reasons. There are/were some (especially in proprietary finance) who didn’t ever move certain of their infrastructure to cloud, but they’re in the minority. And those businesses only represent upside to Azure/AWS/GCP, as they’ll likely capitulate eventually, as they see their peers/competitors managing the risk, and winning, because they can develop and scale so much faster. Can you provide any evidence that there are many (or any?) organizations moving from cloud infrastructure to on-premises for security reasons? I’m not sure what you’re describing is an issue at all. But I’m curious where you got the idea from. And happy to read anything you can provide.
I saw an interesting point someone made: GOOGL could probably at some point use the data from your convos with Gemini to build a much better ad profile around you. So rather than show ads in the Gemini chat, they just use all that data to target ads when you watch YouTube or search. META seems to be benefiting from using AI with their ad platform. My belief is there isn't really one AI winner, and LLMs for consumers aren't even what is going to matter. I still think businesses will mostly use AI for understanding their data and acting on it. Businesses have been using ML, machine learning, for a long time; the difference is you couldn't communicate with it. I think there is also some merit to the agentic AI stuff. It's still really early, and it's interesting since, from surveys and whatnot, it seems like most people aren't really using AI that much. It's probably used most in the software engineering field. I work there and use AI. However, there is a clear demand from CEOs, per surveys, to adopt and use it. I think over time we will see some benefits, but I don't think it's going to replace as many jobs. I think it's going to hurt entry-level stuff the most, which means younger people getting into the workforce.
That is utterly wrong. AI applications have been used profitably for over a decade. Not all AI is generative chatbots. Whether it's classical ML classifiers or neural net based anomaly detection, those models are being used effectively and profitably in countless fields. And just to get this out of the way before it's brought up again here: yes, those have been called "Artificial Intelligence" in the scientific discourse for decades. It's not a recent rebranding as some poorly informed people try to claim.
I think one wrench in your theory is that, even though AI as we think of it (LLMs, really) is getting pushed super hard in marketing, and that portion of the boom seems like it'll pop, the overarching consolidation of computing resources toward companies instead of consumers is still going to make money for NVIDIA, Microsoft, Google, etc., and I think will resist some sort of crash. Even though they're calling everything AI (typically thought of as worthless content generators), the boom still consists of cloud compute, cloud storage, and ML infrastructure for scientists and big institutions (this is where the real innovation is, IMO; scientists are doing science in a totally new way that is advancing our understanding of the material world faster than ever, completely separate from bogus AI generators). Even if consumers don't get a GeForce Now subscription or ChatGPT or whatever, the hardware and infrastructure is getting enclosed by these companies and they will charge you or your business rent. And you know they're going to be making more than the operating costs.
I think NVDA will continue to grow, but I don't believe any of the points you made. CUDA: not sure how relevant it is today. PyTorch is what people use to interact with hardware when doing ML stuff. As long as the hardware provider provides a good driver for PyTorch, there won't be much migration cost for most people. I do a lot of ML work and I also do some local training/inference of LLMs on my MacBook; PyTorch is all I use and it supports MPS very well. Never need to know anything about CUDA. ASICs: they will not replace GPUs, but we will see their share of ML training/inference continue to grow. As they are usually 3x or 4x cheaper than GPUs, all the big cloud providers will invest in their own ASIC chips to cut costs. We may see a large chunk of workloads running on ASICs in the cloud at some point, while some percentage of workloads continues to use GPUs. In general, I think NVIDIA will continue to grow, but I don't think it will be in a better market position than companies like Google or Amazon.
> Amazon has already built a chip for ai that rivals nvdias. I can't comment on the rest, but I'm a ML engineer and can comment on this. No. Trainium does not remotely compete with NVIDIA. Trainium is more cost effective if you are training models on AWS, but in literally any other scenario, NVIDIA is the better option. There may even be instances where Blackwell chips are better to use than Trainium (on AWS). Also as far as I know, Trainium hasn't implemented all the features in pytorch / tensorflow, which means there are certain models you can't train at all...
It's really cool that we invest in AI/ML and Jamie Dimon et. al uses it for this purpose.
This is simply untrue. This is my area of specialty, and I get there's a lot of hype, but I'll break it down The core value proposition of modern logistics is primarily managed by LP, MIP, and network flow optimization I don't care how much training data you have, those are not ML/stochastic problems. Want proof? I'll give you one of the most basic problems in optimization You have 70 workers and 70 workstations. Each worker has a different effectiveness on each work station. What is the most sum total effective arrangement? There are more solutions to this problem than there are molecules in the universe. There is no pattern recognition to solve this problem. There is no training data to solve this problem. You make minor changes to this problem statement and the entire solution changes wildly.
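The 70-workers/70-workstations example above is the classic assignment problem. A minimal brute-force sketch (effectiveness matrix values are made up for illustration): the search space is n!, which is why real solvers use LP/network-flow methods such as the Hungarian algorithm rather than pattern recognition; at n = 70 the permutations can never be enumerated.

```python
from itertools import permutations

def best_assignment(eff):
    """Exhaustively search all worker->station permutations for the
    maximum total effectiveness. O(n!) -- fine at n=4, hopeless at n=70."""
    n = len(eff)
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(eff[worker][station] for worker, station in enumerate(perm))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

# Hypothetical 4x4 effectiveness matrix: eff[worker][station]
eff = [
    [9, 2, 7, 8],
    [6, 4, 3, 7],
    [5, 8, 1, 8],
    [7, 6, 9, 4],
]
score, perm = best_assignment(eff)  # only 4! = 24 candidates here; 70! is astronomical
```

Polynomial-time methods (the Hungarian algorithm, or min-cost flow) solve the same problem exactly at n = 70 in milliseconds, which is the commenter's point: it's an optimization problem, not a learning problem.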
I mean, I’ve been working in AI/ML for a decade now, but maybe I’m missing things
To give you hope in a sea of despair: I have job interviews lined up nearly every day for the next two weeks, half of them for $150k jobs. BS physics, MS comp sci, 6 YOE, 1 in the AI/ML space. I have 0 extra projects/GitHub/etc. I barely have a personal website.
Correct, Deloitte is going through a restructuring that affects 181,000 employees. They started silent layoffs last month, so we do not know the total number Deloitte is planning to lay off. They have been very clear the restructuring is due to AI and development. Deloitte also announced they will be hiring 50,000 new employees in India to handle the AI/ML technology build.
Is anyone here an NBA fan? Give me reasons not to fullport the Pistons ML tonight
They know AGI isn't coming from ML / LLMs. Escalation of commitment is more powerful than anything else.
Problem is I grew with my dad as a trader and I was trading in myself in grade school. I can’t separate long term investing with trading. Last year ML had me up 17% and EJ was 14%. I get if I put it all in to VOO and VUG it would be similar results. I read ML Edge has .3 too but I’m sure my advisor would kill me if I swapped. I’ll check Vanguard for my EJ account this week. Appreciate the advice. If they can gate keep I’ll just roll 75% VOO and 25% VUG.
May as well hedge and put $500 on Pats ML
I think I'm paying 1.2ish; about 50% of it is in Vanguard funds, the rest are stocks and mutual funds. ML says if we don't beat the average then you should fire us. Look, I know it's not ideal and I get that using Vanguard is the best way to go, but I can't control my impulses and I lose sleep. So it is what it is. Handing over my money to them 7 years ago was the best decision of my life, even if it cost me 500k in net worth.
I work in AI. This has nothing to do with AGI. Big tech is pushing because this is the way to control labor demand. Before it was outsourcing, now everything is AI. I am in Yann's camp: LLMs are not the answer. Companies spent a stupid amount of CAPEX just to get an LLM able to schedule a meeting like a human. Tons of money spent on managing "hallucinations" while the underlying model is next-token prediction. ML is always difficult to quantify. Sure, Netflix came along a long time ago with recommendation systems, but right now for social media we are so used to garbage or brain-rot content that it won't matter whether you use an LLM or other ML to do this. Think of what tobacco companies did in developing countries: they brainwashed and targeted the youth early to get them hooked. The same with AI slop; eventually we will get used to it and think this is the new normal.
I’m considering dropping $500 on SEA ML using HOOD’s prediction market. Is this the top?
Ok, what is your AUM fee with EJ and ML? What are the expense ratios of the funds you're in? And what other junk fees are you paying? Any commissions, quarterly fees, etc? Now compare your fees to Vanguard's advisor fees. It would cost you 0.30% for the advisor and they'll probably have you in a mix of VTI/VXUS/BND, which have an expense ratio of .03%/.05%/.03%. My bet is you're paying over 1% AUM and your expense ratios are at least .20%. You shouldn't be paying more than 0.35% combined.
Haha steelmanning has been a popular term on HN for a while, that's probably where I picked it up. Not a big podcast listener, except for my wife's clean-energy-related podcasts, since she's in that field. Maybe there's some overlap between Jigar Shah and that crew and the ones you're talking about, since besides his service with the US Dept of Energy, he's also been a VC. *shrug* Bitflips are the most common issue afaik, that's why I mentioned them. I looked into the damaging higher-energy cosmic rays when the Google paper came out, but it didn't seem like as big a problem as I initially assumed, because they're very infrequent for a given die-sized area. Is that what you're talking about? My point with that post was just that the other post was completely riddled with errors, so people shouldn't use it to inform their opinion. The temp scaling thing was admittedly minor, since it's 4th power in kelvin, not centigrade, so there's not going to be any doubling of the temp; just another example of how he hadn't thought very deeply about it if he's using ISS figures for maintaining a habitable environment to spec out what's going to happen with hardware, that's all. It was just super sloppy, but full of confidence. Heh, I don't love how LLMs have become all-consuming either; ML was a lot more fun when it was just nerds doing stuff like you're talking about and publishing actual papers, before everyone jumped in with an opinion. But it's undeniably useful, and it's been great writing lots of throwaway code for stuff that I wouldn't have bothered spending the time on before. Haha thanks, we'll see what happens in the upcoming midterms... But fundamentally, we're gridlocked into the status quo on a lot of levels; NIMBYs are super powerful and super active at the local level, and that's what makes this kinda make sense. Pretty sure only China's been growing their electrical generation quickly enough to do significant AI buildouts.
Maybe that'll change, I hope it does because it needs to for decarbonization.
It's just algos and ML. We've had those for ages.
>Coding is a way to make a machine do what you want it to do.

>Prompting is a way to make AI (machine?) do what you want it to do.

>So prompting is coding without the complexity of learning syntax.

There's more to it than that. AI/ML is essentially pattern recognition/repetition. AI has become very effective at most knowledge-work professions, not just coding. And it is highly likely that it will become effective at real-world hands-on tasks once inference gets fast enough for real-time image processing. Self-driving is already becoming a thing, which IMO is actually more complex and challenging than many other jobs that AI robots could automate. I think robots could automate a lot of manufacturing. For example, my brother worked a role that was essentially just loading a machine, letting it run, and then occasionally doing a couple of procedures to fix issues as they occur. Existing automation wasn't really capable of doing this role because of the inconsistency of its output, but a field-promptable AI robot probably could do this job.
Copium. Anyone who knows ML knows that LLMs aren't bottlenecked by compute anymore; they're bottlenecked by training data and architecture. Google is making the wrong investments.
How reliable are such systems? And two, do you know of examples of such AIs? Ironically, I work in the AI field, building infrastructure at scale for ML workloads lol.
Who is 'we', retard? Also, all eyes are on GOOG's TPU and forward guidance if you're talking about odds to moon more than 5%; Cloud revenue is just the reason it'll be green. GOOG needs to keep showing the TPU is a real competitive bet as the alternative approach to ML computing, and that it's here to stay. With just cloud alone, it will just go up a bit and move slowly as it always has since it hit a 3T market cap (unless it's recovering after a giant correction).
Imagine falling for the AMD guidance bear trap. You retards do know the majority of ML training and cloud storage servers use proprietary custom-designed ARM chips, right? Lisa has been on her knees begging companies to buy some of their AMD chips for months.
A lot of it isn’t even ML, just very efficient low level arb algos on the lowest latency infrastructure
Any big players/quants are using real ML for this. LLMs aren't the right tool for the job.
In reality there are like two types of AI: LLMs and ML. Machine learning has been around for a while, and companies have been using that version of "AI" for a while. How companies are using it now really depends on the industry. Like, I work in software engineering and we use AI for writing code, reviewing code, and a few other things. It's a nice productivity booster, but I also think it has its drawbacks. Some of the code is clearly written by AI, so I know the engineer writing that code probably doesn't fully understand what it does.
The Super Bowl will be an absolute snooze fest this year, except 4Q when the Patriots try to come back and come up a little short. Seahawks ML. Patriots spread.
Actually like this one a lot, this is a really interesting tech from a computer science perspective. I would have to think they are using continuous data to improve the product as well with ML models, which would put them in a competitive advantage over others without the data as well. Chart looks good to me imo, plenty of room to run on any volume.
Nope, DOE has full authority to allow for design, construction and operation per the new MOU: https://www.nrc.gov/docs/ML2530/ML25303A288.pdf Streamlined review after it's already built and operating, for commercialization.
Now compare those two to Vanguard. With Vanguard, there are no account fees, no 12b-1 fees, no loads, and you can get VTI for 0.03% ER. If you want a financial advisor, Vanguard's AUM fees are 0.30% or less (as you get higher AUM). Are you paying less than 0.03% total in fees with ML and EJ, and 0.30% or less if using an advisor? I have over $2M with Vanguard and manage the accounts myself, so my fees are 0.03-0.05% total.
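To put rough numbers on the fee question, here's a back-of-the-envelope sketch. The 1.00% all-in fee is a hypothetical full-service figure for illustration, not a quote from ML or EJ, and the 7% gross return and 30-year horizon are assumptions:

```python
def grow(principal, gross_return, annual_fee, years):
    """Compound a balance, deducting the annual fee from each year's return."""
    balance = principal
    for _ in range(years):
        balance *= 1 + gross_return - annual_fee
    return balance

# Assumed: $100k start, 7% gross return, 30 years.
low_fee = grow(100_000, 0.07, 0.0003, 30)   # 0.03% ER (e.g. VTI-style)
high_fee = grow(100_000, 0.07, 0.0100, 30)  # hypothetical 1% all-in fee

print(f"0.03% fee: ${low_fee:,.0f}")
print(f"1.00% fee: ${high_fee:,.0f}")
print(f"fee drag:  ${low_fee - high_fee:,.0f}")
```

The drag compounds: even a 1-point fee difference eats a six-figure chunk of the ending balance on these assumptions.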
LLMs are not. ML methods that have been around for decades are.
I work in the chemical industry and have used ML (it was the hot topic while I studied my postgraduate program), and all those magical savings have already been realized or used to improve the process a bit. AI saved us around $80M last year (sounds like a lot), but it is nothing compared to the $100B in revenue that we make.
People forget we’ve been using ML for work like this for a long time
There are doubts OpenAI will make it 2 years, so 20 is extremely optimistic Plus there are other much more secure and substantially less resource intensive ways to use AI. You can build a little model focused on a specific problem with your own hardware. Companies are already doing it. It's kind of like the next extension of ML.
The fact that you said a p-value of 0.5 tells me everything you know about "medicine". They had done two statistical analyses:

1. Pre-specified efficacy population: 4.5% vs 7.5% regain rate. Here they don't see much statistical significance, but the one-sided p-value is still 0.07 (which is quite impressive). This is what crashed the whole thing.

2. Exploratory group: 4.2% vs 13.5% regain rate. Here the p-value was 0.004 (very statistically significant).

They failed the pre-specified efficacy analysis because, most likely, this procedure is NOT FOR EVERYONE! On the other hand, it does work extremely well for patients above median GLP-1 weight loss. Yes, there is a risk of false positives, but that's only if you claim this product to be more general in use. Also, this is NOT a cosmetic procedure. It's a 40-minute endoscopy. And FYI: I have a PhD in Biomedical Engineering and stats/ML is my bread and butter. I too know a thing or two lol. Lemme know if you have any other questions.
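For anyone who wants to sanity-check that kind of arithmetic, here's a minimal pooled two-proportion z-test (one-sided). The arm sizes (n=200 each) are made up so the rates match the quoted 4.5% vs 7.5%; the trial's actual sample sizes, and hence its exact p-value, aren't given here:

```python
import math

def one_sided_two_prop_p(x1, n1, x2, n2):
    """One-sided p-value for H1: p1 < p2, using the pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # P(Z <= z) via the complementary error function (standard normal CDF)
    return 0.5 * math.erfc(-z / math.sqrt(2))

# Hypothetical counts: 9/200 = 4.5% regain vs 15/200 = 7.5% regain
p = one_sided_two_prop_p(9, 200, 15, 200)
print(f"one-sided p ≈ {p:.3f}")
```

With different (real) arm sizes the p-value shifts, which is exactly why the quoted 0.07 can't be reproduced without the trial's actual n's.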
This is why I’m very bullish on Meta. Almost all the infra is in house and meta owns its own data centers. It has massive amounts of user data and training data for ML models too.
Harvard ML against Brown at -130 I accept tips at https://cash.app/$Bear6669
Been evident for decades; people are just impatient. I went back to uni, got a computer science degree, and did my dissertation using ML before ChatGPT dropped.
My regarded theory is they're trying to deliberately create a systemic hazard situation à la 2008. It's mostly gotten drowned out by the zone being flooded with pro-AI PR and marketing, but there has always been skepticism in academic circles that LLMs were the road to AI. There are various stances, but the most obvious and simple one is that such a system would require a workable world model through which it could analyze cause and effect to even begin considering the possibility that its outputs were in any way intelligent. LLMs may be a part of that system, but they definitely aren't the whole of it.

I think they figured out a year to two years ago that this was the case, but the investment levels were already catastrophic. So now the goal is to bluff its capabilities (even the stories of "bad AI" in tests sell this) and give the impression it's critical to national security (the race with China), while the agreements create opaque interconnected deals spreading the liabilities around and giving the impression that everything is fine and proceeding well. In the end the goal is to make OAI, and AI in general, too big to fail, at least on paper, so if the music ever stops a government bailout will follow because of how "vital" AI is to national security.

Normally I'd slap my "rational actors" hat on and say it doesn't serve their corporate interests, because it doesn't. However, that discounts the cultish level of devotion that AI has as a concept in SV. For once it's about more than money, because they believe AI is absolutely vital to the future of the human race, and ensuring that AI development continues no matter what is their overriding concern.

Note: I don't entirely disagree with this, but I think there's a lot of merit in the idea of "focus on current problems, AI will come" as opposed to just pushing directly for AGI or ASI.
The amount of breakthroughs that ML assisted research has made in the last few years is astonishing and there's clearly a lot of value to be had there but they're all focused on AGI to the exclusion of all else.
i think i may have a problem eglin afb xirs and chinese shills. I drank 3/4 of a 750 mL bottle of vodka tonight and i feel just slightly tired. the alcoholism has escalated.
I mean, LLMs _are_ predictive ML…
Had a look at the job ads on the q.ai website to understand the tech. They have roles for experimental physicist (*electro-optical and acousto-optical systems*), specialist AI/ML algorithm developers (*computer vision, edge devices and speech processing*), systems engineers (*wireless/RF, firmware, electro-optical, electronics-mechanical integration*), industrial designer (*consumer electronics*), software engineers (*coding, data and architect*) and management. So it seems like Q.ai is developing a non-acoustic communication interface that enables silent speech by using electro-optical sensors and edge AI to interpret micro movements and muscle tension from the user's face. Can be applied for both noisy or silent environments. Let's see when AAPL integrates this into wearables.
Do you think the regarded analysts know the difference between predictive ML and generative LLMs?
Ya, there's no such thing as AI. It's machine learning. This software is valuable to many industries, but ML itself makes software cheaper and faster to develop. They are thinking all these data centers will be needed to create and run all the models. Long term, that could be worth a lot. But competition drives costs down. The profit is all theoretical, but a lot of very real money is going all-in on it.
Anyone that was in the field of ML/AI knew that the systems were going to rapidly improve. Quantum's own experts are highly, highly skeptical of it ever being useful beyond an academic curiosity.
Rams or Seahawks? I'm about to do a ML bet
Seems incredibly risky in this economy unless their ML models are savants.
Google only buys from Nvidia to sell Nvidia GPUs to customers in its GCP offering. It does not use Nvidia GPUs for its own AI/ML compute at all. Google uses TPUs internally, which are designed in-house, and that gives it an edge in 2 ways:

1. Not being dependent on Nvidia to scale

2. Designing its hardware stack with the software (Gemini) that it runs in mind, allowing for far greater efficiencies than using an off-the-rack GPU
One can always question. I thought something was very wrong with my system, as the technical indicators and everything were positive. My ML model was also positive. Yet the outflow was huge! Seeing the pictures and the intensity of the tweets and posts (Trump and Vance planting a flag on Greenland) as well as the military threats, they surely made sure that the market was going to tank quite hard. Oh, I also detected some euphoria in a huge amount of calls on Thursday. Retail was buying 3 calls for every put in XLF, while in DIA it was almost two to one. Then insiders sold to retail in a huge dump on Friday. A perfect retail trap...
I mean that’s just semantics. We’ve been calling the field machine learning for 50 years and generally someone using the term “AI” is also someone who wasn’t in the field until ChatGPT came out so it makes sense for that term to specifically refer to that style of ML.
Yeah, but their tech is only marginally better than Flock's. You know, used 8-year-old Android phones running ML models with the horsepower of a Game Boy Advance game. These companies are just a who-you-know shitshow.
From the same report:

>Rigs drill oil wells, and an increased number of active drilling rigs indicates that U.S. producers are drilling more wells, which generally results in growing oil production. Our latest STEO shows the active rig count decreased year over year in 2024 through November in all L48 primary crude oil producing regions except the Bakken. The region with the most activity, the Permian Basin, declined from 310 rigs to 303 rigs between November 2023 and November 2024. The active rig count for these regions, which includes the Permian, Eagle Ford, and Bakken, declined 18% to 389 rigs since the recent January 2023 high. Data on 34 publicly traded exploration and production companies also suggest increasing well productivity is helping reduce companies' production cost per barrel.

Some companies are seeing efficiency from AI, but it's not LLMs, it's ML. I'm not saying AI is going to replace anything, but rather that AI can be a tool to increase efficiency. From HAL: [https://www.halliburton.com/en/resources/the-rise-of-artificial-intelligence](https://www.halliburton.com/en/resources/the-rise-of-artificial-intelligence)

>Results

>The effective interaction between AI and the directional engineer marked a significant operational milestone for the operator. Human expertise, combined with predictive analytics technology, formulated recommendations as part of a unified human-AI team. The LOGIX® automation and remote operations helped improve consistency during well construction, clearly demonstrated during the development of three wellpath trajectories.

>The team achieved consistent performance improvement and drilling trajectory accuracy, which resulted in a remarkable 33% increase in the rate of penetration (ROP) when compared to traditional drilling methods without human-AI solutions.

>The autonomous drilling platform demonstrated consistency between planned and actual DDIs, indicated by significant smoothness in hole profiles, which minimized time and effort.

>Casing and liner run speed additionally improved by 15 to 45%, which reduced deviation from the planned path, enhanced steering efficiency, and minimized the tortuosity impact.

There's been a loss of a lot of jobs in the industry: [https://finance.yahoo.com/news/40-us-oil-jobs-lost-103032384.html](https://finance.yahoo.com/news/40-us-oil-jobs-lost-103032384.html)

>New technologies to drill faster for cheaper, corporate mergers and robots replacing humans on rigs resulted in the disappearance of some 250,000 jobs since the sector's employment peaked in 2014. Production surged 50% during that time.
Software engineer, but AI has been great. I use it all the time at work, and I'm still in the camp that LLMs are pretty dumb, but the real AI winners will just be companies that can actually take advantage of it. A great case example: oil and gas companies are using ML (machine learning, which is still AI) to get more efficient at drilling, leading to cheaper break-even prices. For example, something from $HAL: [https://www.halliburton.com/en/energy-pulse/artificial-intelligence-drilling-accelerates-new-era-of-excellence](https://www.halliburton.com/en/energy-pulse/artificial-intelligence-drilling-accelerates-new-era-of-excellence)
But these examples are AI. Traditional ML uses fixed features for narrow tasks. That's not the case here. The AI used by Meta, Google, Amazon, UPS, Walmart, healthcare uses **predictive modeling, reinforcement learning and generative techniques to make dynamic decisions at scale**. They analyze massive, unstructured data, optimize workflows and even summarize complex info, doing things static algorithms simply can’t.
algo with advanced ML model calculated taco date and placed bets accordingly
There's a Linus video where they get an H100 running for gaming. It does fine, but they'll never be cost effective due to the memory and tensor core count compared to a gaming GPU. The notion that the bubble bursts and H100/200s go on sale for like $1,000 is dreaming. Even if the AI bubble didn't exist, they'd all be gobbled up by private enterprise for use in non-AI slop ML.
Aside from your snarky attitude, you're half correct. Let me explain. If RV were pure white noise, variance swaps wouldn't exist and GARCH-type models wouldn't even weakly work. Empirically, they do, just not cleanly, not linearly and not stably. Though it is true that you're not predicting a physical process; rather, you are predicting the output of a reflexive system. In my view, you cannot point-forecast RV reliably, but you can identify conditional distributions, regime likelihoods and volatility pressure buildup.

OP just doesn't realise what the model is implicitly assuming about the world, things like stationarity, feature exogeneity and objective mismatches. Dealers must hedge. Funds must rebalance. Gamma must decay. Liquidity must thin at certain times. Those are real constraints, not opinions. I can guarantee OP will just find weak, brittle correlations, calendar quirks, microstructural noise and short-lived flow artefacts, and then proceed to extrapolate them, right until a regime boundary breaks.

I would say you should use the ML to classify regimes and detect distributional drift congruent with constraint stress, not have the model chasing its own shadow. There's a difference between a forecast and a seismograph.
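A toy sketch of the seismograph idea: instead of point-forecasting RV, label volatility regimes from rolling realized vol and care about persistence rather than the next tick. Everything here (the synthetic two-regime returns, the 20-day window, the median threshold) is made up purely for illustration:

```python
import math
import random
import statistics

random.seed(0)

# Synthetic daily returns: a calm regime, then a stressed one (made-up vols).
returns = ([random.gauss(0, 0.005) for _ in range(500)] +
           [random.gauss(0, 0.02) for _ in range(500)])

def rolling_rv(rets, window=20):
    """Annualized rolling realized vol: stdev of a trailing window * sqrt(252)."""
    return [statistics.stdev(rets[i - window:i]) * math.sqrt(252)
            for i in range(window, len(rets) + 1)]

rv = rolling_rv(returns)
threshold = statistics.median(rv)                   # crude regime boundary
regimes = [1 if v > threshold else 0 for v in rv]   # 1 = high-vol regime

# A regime label should be persistent, not flickering like a point forecast:
flips = sum(a != b for a, b in zip(regimes, regimes[1:]))
print(f"regime flips: {flips} over {len(regimes)} observations")
```

A real version would use richer features (term structure, skew, flow proxies) and something like an HMM or mixture model rather than a median cut, but the principle is the same: classify the state, don't chase the level.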
Valid points. To clarify, this isn't an ML or prediction model yet, so there's no "training dataset" in that sense. I'm not forecasting price at all right now. What I'm testing currently is rule-based strategy evaluation, not signals.

• Universe: Nifty 50 only (liquidity + avoiding survivorship issues)
• Type: long-only, EOD data
• Style: swing / position, no intraday, no leverage

The backtests you're seeing are deliberately limited and you're right, that's a weakness. Most of it sits in recent regimes, which absolutely increases the risk of overfitting. Before building the app, the next steps are:

• testing across multiple regimes (2008, 2013, 2020, 2022)
• walk-forward testing instead of static tuning
• randomizing entries to detect curve fitting
• focusing more on drawdown behavior than returns

If it doesn't survive ugly markets, I'm not interested in building around it. Trying to break it on paper before turning it into code.
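For the walk-forward step, a minimal split generator looks something like this. The window sizes in the example (roughly 2y train, 6m test on ~2500 EOD bars) are illustrative choices, not a recommendation:

```python
def walk_forward_splits(n_obs, train_size, test_size, step=None):
    """Yield (train_idx, test_idx) windows that roll forward in time,
    so each test window is strictly out-of-sample for its train window."""
    step = step or test_size
    start = 0
    while start + train_size + test_size <= n_obs:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += step

# e.g. ~10 years of EOD bars: ~2y train, ~6m test, rolled forward ~6m at a time
for train, test in walk_forward_splits(2500, 500, 125):
    print(f"train {train[0]}-{train[-1]}  test {test[0]}-{test[-1]}")
```

The key property is that parameters tuned on each train window only ever get scored on data that comes strictly after it, which is what kills most curve-fit strategies on paper before they cost real money.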
I'm quite up to speed on AI/ML tech and news. This is the best take I've heard in a hot minute regarding scaled training hardware. Bonus points for keeping it in regard monke mode language. 100/10.
After pats score, then Texans ML