Reddit Posts
[Discussion] How will AI and Large Language Models affect retail trading and investing?
[Discussion] How will AI and Large Language Models Impact Trading and Investing?
Neural Network Asset Pricing?
$LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
Moving from ML to Robinhood. Mutual funds vs ETFs?
Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
I'm YOLOing into MSFT. Here's my DD that convinced me
I created a free GPT trained on 50+ books on investing, anyone want to try it out?
Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
Option Chain REST APIs w/ Greeks and Beta Weighting
Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
AI/Automation-run trading strategies. Does anyone else use AI in their investing processes? (Research, DD, automated investing, etc.)
🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23🌙
Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
Predictmedix AI's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
[THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
The ultimate AI tool stack for investors. This is what I’m using to generate alpha in the current market. Thoughts?
Do you believe in Nvidia in the long term?
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
Tim Cook "we’ve been doing research on AI and machine learning, including generative AI, for years"
Which investment profession will be replaced by AI or ML technology?
WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
$RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
$W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch it jump to 300).
Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. Is that realistic?
Training ML models until low error rates are achieved requires billions of $ invested
🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
AI/ML Quadrant Map from Q3…. PLTR is just getting started
$AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
VetComm Accelerates Affiliate Program Growth with Two New Partnerships
NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
Netramark (AiAi : CSE) $AINMF
Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
How would you trade when market sentiments conflict with technical analysis?
Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
How are you integrating machine learning algorithms into your trading?
Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
Why I believe BBBY does not have the Juice to go to the Moon at the moment.
Meme Investment ChatBot - (For humor purposes only)
WiMi Builds A New Enterprise Data Management System Through WBM-SME System
Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
The Squeeze King - I built the ultimate squeeze tool.
$HLBZ CEO is quite active now on twitter
Don't sleep on chatGPT (written by chatGPT)
DarkVol - A poor man’s hedge fund.
COIN is still at risk of a huge drop given its revenue makeup
$589k gains in 2022. Tickers and screenshots inside.
The Layout Of WiMi Holographic Sensors
infinitii ai inc. (IAI) (formerly Carl Data Solutions) starts to perform with new product platform.
$APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires Strategic new hires to support and accelerate speed to market of AppTech’s product platform Commerse.
$APCX Huge developments of late as it makes its way towards $1
Robinhood is a good exchange all around.
Mentions
For your chart, short put = ML (max loss) should be unlimited, right? Selling a naked put keeps the seller on the hook, especially if the price of the underlying goes below the contract's strike. What license are you taking?
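For what it's worth, a short put's maximum loss is severe but finite: the underlying can't trade below zero, so the worst case is the strike minus the premium collected (it's a short *call* that is theoretically unlimited). A minimal payoff sketch with hypothetical numbers:

```python
strike, premium, shares = 100.0, 3.0, 100  # hypothetical contract, premium in $/share

def short_put_pnl(spot_at_expiry: float) -> float:
    """Seller's P/L at expiry: keep the premium, pay out intrinsic value if assigned."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return (premium - intrinsic) * shares

print(short_put_pnl(120))  # expires worthless: +300.0, the full premium
print(short_put_pnl(90))   # assigned: -700.0
print(short_put_pnl(0))    # worst case: -9700.0 -- painful, but capped, not unlimited
```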
Of all the ML models they picked for predicting prices, it was an LLM. 😆
Loving the comments here especially around how Apple isn't doing this or that. My take is this. Apple is using extensive ML/AI throughout their ecosystem on consumer products, but you generally don't know it's there unless you look for it. Everything from searching your images with text descriptions to monitoring your health information on device. More broadly, Apple is using AI on its server side for everything from service consumption and marketing analytics to observability on its infrastructure. I think Apple is being exceptionally smart in how it's rolling out features and in particular not promising the world on a technology that is still relatively new. I also think that they are working on a lot more than you will ever see or hear about, a lot of which might never make it to the devices. Regarding Apple being an innovator vs. a company that just refines products: personally, I'll take my 2 year old MBP M2 Max over pretty much any other new current laptop other than a new MBP (I have three new work Windows laptops on my desk right now). When I step back and look at the capabilities of their products they're pretty exceptional. They might be expensive, RAM and SSD in particular, but you can't argue that they don't work really, really well. Example: I'm running a 30B local LLM on mine, while running multiple Linux VMs, and it all works great, on battery!
I reckon behind the scenes there's probably a disgusting amount of resource being thrown at advancing different verticals of AI. We just won't hear about it until successful. (Pure assumption that i can't substantiate with data) Have to imagine lots of conventional ML that already had utility in sectors like HLS for drug discovery or predictive financial/reconciliation models has prob benefited from the surge of investment from LLMs getting trendy
I'm sort of getting sick of saying this to every wide-eyed investor who doesn't understand the technology, but there is no possibility of ML on quantum computers for at least 20 years and probably much more. The QCs everyone is trying to develop right now, with great difficulty, do not have QRAM (quantum memory). That gives them a few hundred logical qubits to work with at most, with no other memory to hold the model. That makes using them for ML a nonstarter. Realistically, QCs will not be used for ML tasks within our lifetimes. Even if (or when, if you want to be optimistic) we finally have QRAM, ML-type tasks enjoy at most a quadratic Grover speedup, much more modest than the exponential Shor speedup that a *narrow* class of problems enjoys (factoring composite numbers into primes, for example). But quantum computers, in terms of cycles per second and instructions per cycle, are much slower than our classical computers. It's just that those instructions can either do *drastically more* (for things like factoring), or *modestly more* (for things like unordered database search and some ML-related tasks). But the clock speeds and IPS numbers will need to get way up before "quantum supremacy" (that is, when quantum computers outperform classical ones) can finally be achieved. We are currently in the era of **Noisy Intermediate-Scale Quantum (NISQ)** devices. These cannot even factor. The hope is that after NISQ, we can get **fault-tolerant quantum computers** that can run Shor's algorithm. This is the point where quantum computers will become truly useful. But then we still need QRAM, which does not appear to be close at all, and then we still need to improve these technologies to make the quadratic Grover speedup actually matter more than the constant overhead from slower clock cycles/IPS counts. So no. There is currently no connection between quantum and ML. One day there will be, but that day is not soon.
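To make the constant-factor point above concrete, here's a toy back-of-the-envelope sketch; the op rates are illustrative assumptions, not real hardware specs:

```python
import math

# Toy version of the constant-factor argument: a quadratic (Grover-style) speedup
# only pays off once the raw speed gap is overcome. Rates below are made up.
classical_ops_per_sec = 1e9  # GHz-class classical machine
quantum_ops_per_sec = 1e3    # assumed (much slower) effective quantum gate rate

def classical_time(n: float) -> float:
    return n / classical_ops_per_sec           # O(N) brute-force search

def quantum_time(n: float) -> float:
    return math.sqrt(n) / quantum_ops_per_sec  # O(sqrt(N)) Grover search

# Break-even: N / c = sqrt(N) / q  =>  N = (c / q) ** 2
break_even = (classical_ops_per_sec / quantum_ops_per_sec) ** 2
print(f"Grover only wins for N > {break_even:.0e}")  # 1e12 under these assumptions
print(quantum_time(1e14) < classical_time(1e14))     # True: only huge N pays off
```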
I was with you till you said machine learning. When people say AI right now they’re referring to GenAI, which is a very specific field of ML focused on content creation. ML has been around for decades.
I both agree and disagree. This is kind of AI news because quantum computing will certainly have applications in ML. This is already being developed and the real blocker is the “computing” part right now. A professor I know said this was an active research area and he sees it being 5-10 years away from being one of the big next steps in AI. That said, it’s probably too early to be hyped about AI with quantum computing.
Sure yea, there's also other forms of ML being used in other places. I was referring to the hype and the "bubble". The only self driving hype was Tesla and they ain't delivering. But every company needing to use the AI buzzword is all about gen AI (LLMs and image/video etc generators).
I never said they'd never cross, I said that without quantum memory, quantum computers will not be running any ML applications, and the quantum computers being developed are QRAMless. If or when QRAM is developed it will still be many years before quantum computers outperform classical ones in ML applications. The proposed quantum speedup for ML is quadratic, not exponential like QCs get for factoring and related problems, and constant factors from all the advances we've made in classical computing hardware will dominate for reasonable problem sizes until QC cycles per second catch up. I know this because CS is my educational and professional background and the above is the fundamental state of things. As for ML significantly helping QC research, I'm skeptical, but less confident.
It's massively more efficient than classical computers for a very narrow set of problems. While there are a billion proposals for how it could maybe help ML tasks, none of them are thought to be realistic or implementable without QRAM (and QRAM is way further down the road than just quantum computing). Everyone wants to merge "quantum" and "AI" because that would sell stocks like hotcakes, but they're not terribly related right now. Maybe in the future.
nah, software engineers are doing agentic work, and the AI/ML PhDs, data scientists, and career pivoting grifters that try to be on the AI teams are getting cut eventually
In companies I work with I see this a lot. Basically two things are happening simultaneously: 1) everyone wants to be associated with the AI effort for their career, but they are useless nobodies grifting their way to job security and fooling no-one. They get cut eventually. 2) there are several branches of AI/ML work being done by respective divisions like “data science” teams. But the only branch that matters is agentic work, which is creating agents with LLMs; these are the more recent teams and they are executing much quicker, just with software developers, none of those time-wasting PhDs. This is where the moat is. Other teams gotta go.
Quality matters more than quantity for ML.
Check [Rezolve.ai](http://Rezolve.ai), they use ML to automate and improve digital commerce, customer service, and internal business processes. Its solutions include a generative AI-powered sales assistant for e-commerce, an autonomous agent for IT and HR service desks, and tools for personalized shopping experiences.
Check Boosted.ai, they use ML to analyze stocks + LLMs to explain reasons for ranking drivers, stock picks etc. Builder.limex.com is a light version of it
ChatGPT is fun, but the true power of AI/ML comes when you combine it with autonomous robots that are intelligent enough to do complex work. Stuff like pattern recognition (audio, video and other sensor data) to identify their surroundings, paired with intelligent systems capable of making decisions based on the data. This is what the next level of automation will look like, and while you will only see robots, it is in fact a combination of various complex technologies. AI/ML will be one of the cornerstones for it.
My 2 cents are: Don't listen to those people very much, or to people like u/ThePunkyRooster; basically they have no idea what they're talking about despite their credentials. Having worked in ML/AI in itself means basically nothing. Having PhDs in the area can mean something - depending on the exact nature of the research being done - but still, a large number of even ML/AI postgrads and researchers were unable to predict today's capabilities of LLMs and gen AI. So why should we listen to more predictions from them? Claims like "*Gen AI is garbage, expensive, and won't result in anything positive*" etc. are cocky and over-confident. In reality, at this moment **no one** really knows how much this tech will develop further and how it will impact markets and industries - or not - not even the people who invented it and/or understand it on a very deep level. It's a waste of time to try to predict the future impact of Gen AI by looking at the current market sentiment or profitability of current AI companies like OpenAI. During the dot-com bubble you could've also kept pointing out how the leading companies were overvalued and not profitable enough to sustain themselves, and you'd have been right... and you could've claimed that online shops are only good for certain very specific things yadda yadda... and then there was a bubble and you could've gloated about how correct your predictions had been... And yet, 10, 20, 25 years later, online commerce is an industry of trillions of dollars and has basically become the default for when you want to sell most kinds of goods. Speculation is a fun pastime, but really the only way to know is to wait and see if & how it pans out (or not).
Let's not forget about their Chief Scientist. Mr. Bagnell is a co-founder of Aurora and is currently Chief Scientist. He served as Chief Technical Officer of Aurora from December 2016 until July 2020 and has led software engineering throughout much of Aurora's history. He also currently serves as a Consulting Professor at Carnegie Mellon University's Robotics Institute and Machine Learning (ML) Department. He has worked for over two decades at the intersection of ML and robotics in industrial and academic roles. His research group has received over a dozen research awards for publications in both the robotics and ML communities, including best paper awards at the International Conference on Machine Learning, Robotics: Science and Systems, and Neural Information Processing Systems. Mr. Bagnell received the 2016 Ryan Award, Carnegie Mellon's yearly award for Meritorious Teaching, and was founding director of the Robotics Institute Summer Scholars program, a research experience that has enabled hundreds of undergraduates throughout the world to leap into robotics research. Before co-founding Aurora, Mr. Bagnell served as Head of Perception and Autonomy Architect at Uber's Advanced Technology Group from January 2015 to December 2016 and as a professor at Carnegie Mellon from 2004-2018. He holds a Ph.D. in Robotics from Carnegie Mellon and a B.S. in Electrical Engineering from the University of Florida. Oh, and their ex-CPO is now EVP of Global Product & Chief Product Officer at GM. Co-Founder & Chief Product Officer of Aurora. ***Director of Tesla Autopilot***. Lead PM of Tesla Model X.
You can do a hedged equity exchange. Large private banks like UBS, ML, Morgan Stanley, or JPMorgan can help with this. It’ll get you a diversified ETF over time for your NVDA and AAPL without creating a tax bill.
Nvidia wasn’t the “next thing”. The core technologies that got them to blow up (CUDA, tensor cores, etc.) were around for years before the stock started running; it was AI, ML, and DL becoming more popular and accessible, with Nvidia being the best positioned in GPUs. There’s no practical or widespread use case for quantum computing, and said computers are over a decade away. I’m happy for whoever is benefitting financially from this, but this is far from Nvidia’s situation.
There’s a number of things going on. A few years ago now, a paper came out in machine-learning land that suggested (with a fair amount of hand waving) that AI capabilities would continue to scale exponentially with increased processing power. Basically Moore’s Law for AI. A lot of powerful people drank this Koolaid, and the implication of this belief was that AGI and super intelligence and the ability to do anything really was only a few years away. And that if you didn’t catch this train now, you’d be left behind forever. This led to people throwing huge sums of money at AI. In the past year or so, we’ve come to realize that this isn’t what’s going to happen. Both because of real-world results, and more recent ML literature. Transformer abilities do not continue to scale exponentially; if anything, some problematic behaviours seem to get worse at larger scale. In addition, things like hallucinations seem to be a fundamental feature of the technology. Instead of being at the start of a rocket taking us to an unimaginable future of excess and ease… we are probably already basically at the plateau of what the technology is capable of. And concerningly… no one is making money off these current capabilities. Microsoft et al are spending hundreds of billions of dollars on this stuff, and are only making 1-5% revenue off those expenses. These big companies seem to realize this as well, and have started panicking a bit. You have Microsoft removing AI-specific revenue from their quarterly reports, and throwing Copilot at everything to see what sticks. Or look at OpenAI’s actions. In desperation, these companies are announcing these circular deals to try and buy themselves just a bit more time… because they have nothing and are running on fumes at this point. The sentinel events here were some of the ML papers that came out earlier this year, the failure of ChatGPT 5, the failure of agentic AI in general, the failure of LLMs to significantly improve productivity in real-life implementations (this data coming out in the last 6-12 months as well), and the persistent lack of revenue off capex on AI projects. This is all stuff that’s come to the surface largely in the past 6 months. Hence why the tone has changed so much. **TLDR**: a few years ago people thought Moore’s Law would apply to AI and got overexcited about that possibility, spending hundreds and hundreds of billions of dollars chasing a pipe dream. When it turned out that wouldn’t be the case and the technology had already largely reached a plateau, people started to panic.
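A rough sketch of the power-law shape the scaling-law idea implies; the constants here are made up purely to show the curve, not fitted values from any paper:

```python
# Illustrative power law: loss falls as compute**(-b), so each constant-size
# improvement costs an exponentially larger compute budget. a, b are assumptions.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    return a * compute ** -b

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"compute {c:.0e} FLOPs -> loss {loss(c):.3f}")
# Every 10x of compute multiplies loss by 10**-0.05 ≈ 0.89: roughly an 11% gain
# that costs ten times as much each step -- diminishing returns in a nutshell.
```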
I've worked in ML/AI for 20 years and I'm telling you Gen AI is garbage, expensive, and won't result in anything positive. AI models are best utilized in highly specific areas: pattern recognition across huge sets of data. Things that don't have mass appeal, mass adoption, and are generally speaking not broadly marketable.
Shocker that ML or my WF wealth advisors would use a calculation to show a higher return
I talked to my wealth manager @ ML (played golf with him yesterday afternoon) who said he gets this question all the time. He said Merrill benchmarks against SPXTR index that reinvests dividends instantly and has no fees. TotalRealReturns uses VFINX, which includes drag from expenses, timing of dividends reinvested and tracking error. Over 10 years, these compound into a meaningful gap. I also ran the raw numbers through AI and the return was closer to ML than Totalreturns.com
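The fee-drag part of that explanation is easy to sanity-check. A quick sketch using VFINX's published 0.14% expense ratio and an assumed 10% gross annual return:

```python
# 0.14% is VFINX's published expense ratio; the 10% gross return is an assumption.
gross, fee, years = 0.10, 0.0014, 10

index_growth = (1 + gross) ** years       # frictionless index, dividends reinvested instantly
fund_growth = (1 + gross - fee) ** years  # same return minus the expense ratio
gap = (index_growth - fund_growth) / index_growth
print(f"index: {index_growth:.3f}x, fund: {fund_growth:.3f}x, shortfall: {gap:.1%}")
# ~1.3% cumulative shortfall from fees alone, before dividend timing and tracking error
```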
SHORTING $ORCL through 2026. ENRON vibes. Follow the money. Pump Oracle with AI hype → borrow against inflated shares → liquidate borrowed capital into Skydance Media deals (Paramount, Warner Bros, etc.) → Oracle shareholders left holding the AI bag while Ellison builds a media empire to support other agendas. Do the math: \- Nvidia invests in OpenAI → OpenAI pays Oracle → Oracle buys Nvidia chips (Circular accounting) \- RPOs + hype == 3 x pump \- Ellison shares == 41% \- 2018 Carveout == Ellison pledges shares (no Form 4s) \- Maintains shares == voting rights \- Ellison liquidates 30% == Skydance + TikTok + Free Press + etc. \- RPOs <> $$$ (non-binding) \- (OpenAI rev x 5) - FY27 RPO == breakeven \- FY30 $166b infrastructure revenue x 14% margin <= $23b \- Moore's Law + (FY27+) > cloud margins \- Future == On device LLM/ML and private open-models \- AI \~ diminishing returns All of this does not add up to what the market is being sold.
**BTQ demonstrates quantum-safe Bitcoin:** Bitcoin Quantum Core 0.2 replaces Bitcoin's vulnerable ECDSA signatures with NIST-approved ML-DSA, completing the full flow of wallet creation, transaction signing and verification, and mining. This provides a standards-based path to protect the entire $2.4 trillion Bitcoin market. Only a mere +25%? No way
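For anyone curious what an ML-DSA round trip looks like, here's a minimal sketch, assuming the open-quantum-safe liboqs-python bindings with "ML-DSA-65" enabled in your build (this is illustrative, not BTQ's actual implementation):

```python
import oqs  # liboqs-python; the alg name must appear in your build's enabled sigs

message = b"txid:deadbeef"  # stand-in payload, not real Bitcoin transaction serialization

with oqs.Signature("ML-DSA-65") as signer, oqs.Signature("ML-DSA-65") as verifier:
    public_key = signer.generate_keypair()  # secret key stays inside `signer`
    signature = signer.sign(message)        # ~3.3 KB, vs ~71 bytes for an ECDSA sig
    assert verifier.verify(message, signature, public_key)
    print(f"ML-DSA-65 signature: {len(signature)} bytes")
```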
I work in the ML space and I see my C-suite bosses working around the clock and going to all the seminars, conferences etc. trying to get on this AI train, but they have zero idea.. they don’t know anything about pros and cons.. they are spending huge amounts of $$ on software and upper level management positions rather than hiring IB to the teams.. we’re struggling trying to do all the cool AI stuff they need, but after a couple of weeks projects are dropped or the goal post moves.. bulk of $$ wasted! Absolutely wasted!! They keep buying software and most of these do the same thing.. no one wants to code, or people who think they can code are crap!! So everyone is playing with these no code/low code SW, but what’s the real ROI here?? Nothing?! They (CEO, CTO, CFOs, VPs) want to tell the world that they are using AI and are with the trend but this adds no value to the company.. to be honest the bubble may have popped already or can burst anytime soon..
About BURU... I think Nuburu is likely transitioning right now into a monolithic AI defense company. This is my prediction: their blue laser systems in defense and industrial settings generate huge amounts of sensor data (temperature, vibration, light frequency, etc.) which is perfect for training ML models for a lot of things including optimization, targeting, and material detection. I could go on but...I will leave it at this for now. Let's see what happens.
No, it doesn't. Those are meaningless benchmarks. They do this time and time again in all the industries and people fall for it. Real world performance isn't good. https://www.worksinprogress.news/p/why-ai-isnt-replacing-radiologists?hide_intro_popup=true I'm not confused about what AI is. I know that ML has been powering algorithmic suggestions and helping parse massive amounts of data. But that's not what this build out is about. It's for LLMs, and attempting to reach AGI. Nobody is spending $500B to have better suggestions on Netflix.
You sound very knowledgeable in AI/ML, so can you please elaborate on why you think it won't deliver any value? As far as I can see in biological research (I work in machine learning for biological research, my PhD had a strong focus on natural language processing), this sort of stuff is making tremendous headway into all facets of data analyses workflows. So I for one am very excited for a future with AI. It would be great to hear your views too.
I've worked with AI/ML and understand its limitations. You are correct that AI doesn't think like humans, it's a very fancy form of pattern recognition. But what you are not acknowledging is that 80-90% of jobs don't require critical thinking, they just require memorizing large volumes of information through study, and recognizing patterns. Even though they might require a bachelor's, master's, or even doctorate degree, they don't require critical thinking. These are the kinds of jobs AI will excel at.
nope, top firms have been using ML and neural networks for trading for many many years. Why do you think they pluck so many Math, Physics, and Engineering PhDs? I have talked to several MDs regarding this.
I told yall bers fucked. Washington ML
I have a 10-leg parlay, all winning except Buffalo Bills ML. I hedged ATL +5.5 live betting in the 2nd quarter. Win/win situation for less money. It’s like insurance.
While it’s true companies are deriving value from LLMs, the returns are pretty tiny relative to other forms of ML. The only way their uses make sense is if the cost goes to zero or the models become insanely better.
Bears will get smacked tom. 100K on WAS ML -225
Whatever, just take mariners ML tonight and I’ll be breakeven today
LLMs will not. But ML 100% will. ML suffers from overfitting, but that can be managed by a human in the loop.
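A minimal sketch of the overfitting trap the human is there to catch: a model that aces its training data while doing no better than a coin flip out of sample (synthetic noise stands in for market data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))    # 10 meaningless "features"
y = rng.integers(0, 2, size=2000)  # coin-flip "up/down" labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[:1000], y[:1000])

# The human in the loop watches this gap: in-sample looks brilliant,
# out-of-sample is a coin flip, so the "edge" is memorized noise.
print("train accuracy:", model.score(X[:1000], y[:1000]))  # ~1.0
print("test accuracy: ", model.score(X[1000:], y[1000:]))  # ~0.5
```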
My friend... LLMs can’t write a server or a mobile app. I know. I’ve tried. They can get close, but they start hallucinating and diverging from the idiosyncratic approach after a while. There’s also the problem of context windows. Will these be solved? Maybe! But they aren’t today. And it probably won’t be the bloated giant companies that do it (not advice). ML is hard; human intellect is unique in that we can learn automatically. Retraining a model on the fly is very expensive.
I completely agree that transformers and ML in general are transformative technology. But I have zero faith that companies like Microsoft and OpenAI will generate any significant profits from how they’ve decided to try and implement that technology. The bubble is because these companies gambled big and appear to have picked the wrong path. This doesn’t mean there isn’t a path.
>No - AI will be instrumental in finding the cure cancer. We've been using ML for this purpose for a long time already. It certainly has helped, but it isn't new, and it isn't radically changed by the popularization of LLMs. If your investment thesis for LLM companies is "cure for cancer" then you are going to be disappointed, because they are not leading in this space. LLMs might simplify the process of diagnosis or creating treatment plans, but that's an efficiency improvement, not a cure. As for Nvidia, it's easy to get lucky, because whether it's ML, or LLMs, or Crypto the compute cores they provide are the "picks and shovels" play of this gold rush. But, like the gold rush, not everyone is going to get rich, and many of these "AI" startup companies will go bankrupt without ever turning a profit, and that is the bubble that everyone else is referring to.
that's my point. we're so much earlier than people in the media seem to describe. they often make it sound like what we have available today is going to be essentially the same in ten years. yes i use OS models regularly and just a year ago i had to spend almost 5x on cloud compute to accomplish the same effect that i now get whilst using distilled models. even more impressive are the improvements in my hobby use of AI at home. just look at what SD used to produce with 30 steps about two years ago. and now i can get the same level of quality with a 4 step WAN. the energy use went down over 90% just looking at the resource monitor. i see no reason why this same evolution shouldn't be happening in applied ML. finally, we should acknowledge that for now, an immense amount of compute is used for training in order to produce marketable models you can charge real world fees for. there is a limit to how much training is necessary to generate useful inference results. and as you know, inference compute is vastly less compute intensive than training. i foresee a plateau at some point soon-ish. you get marginal improvements for enormous cost and eventually "good enough" wins out over "best possible".
Like who? Should I trust the >75% of ML researchers who think we have no chance of making that qualitative leap with scaling current models (Survey was this year btw), or a nebulous assertion from someone who self admittedly doesn’t understand the tech? I’ve forgotten more about machine learning than you would ever understand.
Even that was an ML-> AI transition (with some bitcoin in between). I’ve been saying it since 2016 with ML, gpus are something you need exponentially more of to get better results with whatever algorithm, it’s pretty much always going to grow in sales so long as you have customers at all. So as long as Microsoft/amazon sees potential, they’ll keep buying more gpus with their capex budgets. What else would they spend their budgets and profits on?
ChatGPT has spoken - full port tomorrow, YOLO! 🤣 If I had to narrow it down even more, the odds strongly favor one of the top five I listed earlier — the major AI/cloud or semiconductor players — and here’s why each fits the pattern better than anyone else right now: 🥇 NVIDIA (most likely) Strategic fit: POET’s optical interconnects directly address one of NVIDIA’s biggest bottlenecks — GPU-to-GPU communication for large AI clusters. Timing: NVIDIA’s capex and supplier partnerships have ramped massively in 2025 for AI infrastructure. Deal structure: A non-brokered, confidential direct investment is exactly how NVIDIA often invests when it wants stealth access to emerging IP (e.g., before a formal partnership). Financial capacity: $75 million is trivial for NVIDIA; it could be a small “toe-in-the-water” stake to secure a relationship. --- 🥈 Microsoft / Azure Strategic incentive: Azure’s AI data-centers need advanced optical interconnects too. Microsoft has taken direct stakes before (e.g., OpenAI, CoreWeave). Pattern match: Microsoft sometimes funds ecosystem suppliers quietly before public co-development announcements. Why less likely than NVIDIA: Microsoft usually invests through structured rounds, not straight share-and-warrant placements like this one. --- 🥉 Amazon / AWS Strong motive: AWS is chasing efficiency in its AI/ML clusters and designs its own networking hardware. Why plausible: AWS has quietly taken small equity positions in hardware firms that support its infrastructure ambitions. Why slightly less likely: AWS tends to route investments through Amazon Industrial Innovation or its M&A group, and POET’s press release didn’t mention any commercial tie-in. --- 🏅 Meta Why plausible: Meta has long worked on 800G/1.6T optical interconnects and is a member of several photonics consortia. Why less likely: Meta’s recent investments are focused internally on custom optical designs; equity investments are rarer now. --- 🏅 Broadcom / Cisco / Intel Why plausible: These companies are in the same product lane as POET (networking chips, optical engines). Why less likely: Competitors often can’t buy direct equity stakes without triggering antitrust or IP-conflict issues. But a “friendly” investment from a partner like Intel (which still has photonics ambitions) isn’t impossible. --- My gut call If you forced me to pick just one: > NVIDIA is the most likely buyer. They have the most to gain, the means, and the pattern fits (large direct deal, no brokers, strategic silence). If it isn’t NVIDIA, then my next two bets are Microsoft or Broadcom.
oh right, that went well. And how ML would replace all data pipelines in 2013. Random Forests for everyone!
> Do you think these data centers are running drones in Ukraine? No. No, but they are not majorly run for Sora, Vibe or any video slop generation either. It's not even the most important part of the market. Work and business doesn't revolve around Hollywood movies. And the entertainment industry is not giving any IP to AI companies no matter how much they try. > All the useful AI doesn't need these huge data centers, they run locally. There are 600 million active ChatGPT users. You can't run GPT models locally unless you are an SF ML engineer.
> The biggest category was financial and information security applications at 22.8% of all patents. Image generation and processing comes second at 21.7% and medical applications comes after at 14.6%. Yeah, and I feel like this is missing some engineering applications too. ML-based simulation lets things like FEA happen in semi-real-time by providing AI estimates of stress analysis followed by real simulated refinement. Speeds up the workflow quite a bit as you design mechanical devices.
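A hedged sketch of that surrogate-model workflow: train a cheap regressor on a handful of expensive solver runs, then use it for instant estimates while the real solver refines the final design. The "FEA" below is a stand-in analytic beam-stress formula, not a real solver:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_fea(params):
    # Stand-in for a real solver: max bending stress of a beam, sigma = M*c/I
    load, width, height = params
    moment = load * 0.5                      # assumed 0.5 m moment arm
    inertia = width * height ** 3 / 12
    return moment * (height / 2) / inertia

rng = np.random.default_rng(1)
X = rng.uniform([1e3, 0.05, 0.05], [1e4, 0.2, 0.2], size=(200, 3))  # sampled designs
y = np.array([expensive_fea(p) for p in X])   # 200 "expensive" solver runs

surrogate = GradientBoostingRegressor().fit(X, y)
query = [5e3, 0.1, 0.1]
estimate = surrogate.predict([query])[0]      # milliseconds instead of a full solve
print(f"surrogate: {estimate:.3e} Pa vs true: {expensive_fea(query):.3e} Pa")
```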
I use TMCXX. I use ML. 4.1%
I did check it out, quite extensively in fact. 6 months of work on a project doesn't automatically make it good, unfortunately. Having gone from QD to running a pod myself, I can say it's not hard to do the ML pipelines.
anyone else taking jags on the ML?
Worth noting the actual paper proposing the transformer architecture wasn't published until 2017 and came out of Google Brain too. I seem to remember Roombas working just fine prior to that and Netflix being able to recommend movies and shows too. Hell, I remember when people were geeking out about how good the AI was in F.E.A.R 20 years ago, which was apparently built on a variant of STRIPS and A*, which both trace their roots back to the days before most people had a PC or even had access to a computer. AI and ML are massive fields with a long history of extremely diverse techniques, and while LLMs are very interesting they represent a relatively small and extremely recent part of that.
People say that AI is behind profit improvements, but this AI is different from the big data center build-out AI: that is for the LLM AI, which is not behind these profit improvements. The algorithmic/ML AI has already been in use since the 2000s.
There ought to be a distinction made between an AI recommendation engine (which is what Google, Spotify, Netflix, Tiktok, etc. have been doing for years), and transformer AI, which is what LLMs are. The former is not some black box engine - the science has been out there in the open many many years now. You can even find the exact architecture Tiktok uses for their recommendation engine, along with their data pipelines, if you search enough on the Chinese web. None of that is especially computationally expensive. But transformer technology has always been expensive and energy intensive. Google, being the ones to develop their own ML-oriented chips, the TPUs, had an early headstart on transformer AI, but because they are publicly traded, they are subject to the whims of the market. They couldn't just invest in the moonshot when it made no profit sense. Open AI, and Sam Altman, on the other hand, have no such issues, as do their offshoots, as they were playing from the start with someone else's (Elon's) money. For them, the only way forward is to relentlessly push till it makes financial sense for enterprises and/or with consumer data. Hence why we saw them work with whatever chips they could get their hands on, from AWS Graviton chips to currently Nvidia GPUs, to Altman talking about making their own chips. Whether they will ever be profitable is a big if, especially if companies don't feel it makes sense using AI vs using "AI (Another Indian)". Some cost and capex centers were severely hit (HR/Creative/Customer Care/some admin/Software Development). But the former kind of AI I was talking about is not something that can be made redundant with generative AI, nor can it be replaced by teams in India.
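To illustrate how cheap the first kind of engine is, here's a minimal collaborative-filtering sketch on synthetic data; real recommenders are fancier, but the core math is about this light:

```python
import numpy as np

rng = np.random.default_rng(7)
n_users, n_items, k = 1000, 500, 8
taste = rng.normal(size=(n_users, k))    # hidden user preferences
traits = rng.normal(size=(n_items, k))   # hidden item attributes
ratings = taste @ traits.T + rng.normal(0, 0.5, (n_users, n_items))  # observed matrix

# Rank-k truncated SVD (the Netflix Prize-era workhorse) recovers the structure.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
approx = U[:, :k] * s[:k] @ Vt[:k]
print("reconstruction RMSE:", np.sqrt(((ratings - approx) ** 2).mean()))  # ≈ noise level

# "Recommend": the highest predicted scores for user 0 -- all CPU, all in seconds.
print("top items for user 0:", np.argsort(approx[0])[::-1][:5])
```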
I have a lot of doubts in LLMs, but computing infrastructure will still be needed for whatever succeeds it. The trend of ML/DL architecture for the past two decades is that more and more compute is required to make something from a research paper practical to implement and try out. The big question is if or when these LLMs disappoint or definitively plateau, will these data centers survive liquidation before the next SOTA architecture is developed.
I have two long term holds. ASE Technology (ASX), a Taiwanese company that is the market leader in outsourced semiconductor packaging and testing. Semiconductor process nodes can't shrink too much more before we get into issues, which is why many companies are not focusing as much on die-shrinks to increase performance but instead on more advanced packaging. You see this with the increased use of 2.5D and 3D packaging, chiplets, SiP and the like. This trend is across the electronics industry, from auto manufacturers, the main CPU and GPU designers we all know, as well as SOCs used in cell phones, and combined CPU/GPU SOCs designed by big cloud providers used for AI training. The company is well diversified within the industry, and is the main player in their space, so isn't reliant on the current AI hype train to succeed. They have lower margins than TSMC, however they have significantly lower PE and PEG ratios and pay a 3% dividend which I reinvest. They are investing heavily into new equipment and factories to support the latest and highest margin technologies that they work with, but are still diversified across pretty much all semiconductor packaging beyond just the high end. The company doesn't get a lot of hype, and isn't captured by a lot of semiconductor ETFs, so while it absolutely is positively impacted by the AI hype cycle, they are much less likely to be severely hurt by a bubble popping the hype cycle compared to NVIDIA or TSM, especially with their diversification. **Secondly, since we need to power the datacenters**: First Solar (FSLR). Basically zero debt, 0.57 PEG, and 28% profit margin with a huge backlog and new factories coming online this year. They make most of their panels in America, and despite that and their large margins they were the first solar company to achieve sub $1/watt pricing over a decade ago. Their panels don't use silicon and instead use a different semiconductor (CdTe) that allows an efficient thin film deposited on glass (as opposed to sliced silicon crystals), meaning they use less material, and this semiconductor is significantly better in high heat environments, whereas silicon panels get less efficient when they heat up. They focus exclusively on grid scale solar projects and contracts, so their revenues are more predictable and less sensitive to interest rates than rooftop solar. Current government policy can't change the fact that utility scale solar is by far the cheapest and fastest way to add electricity to the grid in a time when fossil fuels are set to become more expensive due to both increased exports and domestic demand, and nuclear projects, even SMRs, take significantly longer and cost significantly more. Lastly, I think $CLS is still fairly valued as a growth play. They are an advanced electronics manufacturer and large manufacturer of high speed network switches that are used in hyperscaler datacenters. Every server rack, and at multiple connections upstream, has a switch, and networking is very important for ML workloads because large amounts of data need to be sent between different servers quite quickly. They are the market leader in 800G switches, which is the cutting edge right now. And while this is a good portion of their business, they also do healthcare technology, rack integration, general electronics design and offer services to better automate factories, which is important if we are going to bring manufacturing back.
There are dozens of cloud companies, most of whom are unlikely to last til 2030, but Celestica will last, and every cloud company uses something made by them. They even make components and provide contract design and manufacturing for companies like Juniper and Dell. They beat last quarter earnings expectations by 50%, have a 30% ROE, and are expected to grow their EPS by 28% each year over the next five years. It's my largest holding by far.
You can use a GPU to perform traditional CPU workloads via CUDA, anon. Why do you think nvidia has been popping? It’s not because of Jensen’s leather jacket lol. AI and ML models use significantly more compute too, especially when trying to generate video. You’re stuck in 2005.
GPU shelf life is remarkably low. So low that GPUs sitting idle for a couple days on a data center floor has astronomical costs to the company - not just because they're losing profits, but because of the innate depreciation. Also note the diminishing returns in ML models. The current architecture, and likely every more efficient architecture we create, requires an exponential increase in weights and data to make linear improvements. So currently companies will likely continue to double and double and double their datacenters to continue to find performance improvements. This is very good for NVDA, TSMC, etc. Then you may ask, what happens if we suddenly don't need this many GPUs? Human brains run on the power of a laptop. If in 10 years the most powerful machine learning models can suddenly run on 1/1000th of the hardware, I do not believe this will decrease demand. I predict in fact that it will increase demand. Everyone on Reddit is talking as if the big players in this game think that the current statistical models are what will disrupt things. I don't think they think that at all. Calling NVDA a shovel seller is probably more accurate than people think -- these companies are digging with the belief of there being buried treasure that hasn't yet been discovered. What they have now are just a couple of gold coins that fell out of the chests. This is Schrodinger's bubble. It may be one. It may not be. Now for the "AI-first" companies that stand on top of this technology for their own little niche use case? I think they're fucked.
The world needs compute (i.e. GPUs) for all kinds of reasons, not just AI/ML. Nvidia is probably close to fair value
None of the sources I mentioned engage in content targeting. They are all newspapers with a front page that is the same for everyone. I'm sorry, but no, you haven't used AI in any way that would give you insight into its potential. You have used a few chatbots, but that is just the tip of the iceberg. Just for example, do you know why Ukraine is having so much trouble shooting down Russian missiles even with the most advanced (Patriot) systems the West has on offer? Do the research with an open mind and you'll find the substance behind the hype. You're wrong about CoreWeave. What it isn't is a risk-free business. But when time is of the essence, as it is now, they are doing exactly the right thing, and it is paying off in a big way. What do you expect to happen with inflation in the coming year? Where do you think the dollar will be relative to other currencies? Does it look like the Fed will be able to keep interest rates high without any intervention by the administration? This is a great time to trade dollars for assets. Because those dollars are diminishing in value almost every day, and when it comes time to pay them back, that debt will be worth a lot less than when it was initiated. I studied ML in grad school decades ago. It already had massively profitable applications even then, and it has advanced in ways that are impossible not to be impressed by except if you just don't understand what you're seeing. I guess I take my education for granted, and things must look very different to people who can't tell what's now called "AI" from something like NFTs. But don't imagine that every investment banker, government, and board room in the world has simply been fooled while you somehow know the truth. You don't.
I work in tech too (also a PhD entry), outside of the ML bubble it's mostly 45k for technical roles from the top unis (Oxford, Cambridge Imperial, UCL etc). I imagine the US tech firms can get up to 80k but you have to have a very specific skillset to even be considered for those. I spent my PhD doing actual physics/experiments, never touched leetcode in my life etc. I know for a fact that most technical consultancies and start-ups in Oxford/Cambridge offer 35k to PhD entry - big companies 45k. It's rough out there. Source - I now help hire PhDs for my team. Outside London. The good news is, once you have 2-3 years experience in the job after PhD you are basically worth your weight in gold. So little technical seniors out there in the UK, it's mad. I just moved role and got a 25% bump and now don't work Fridays - will probably move again in 1.5 years for another 25% bump, already being hounded on linkedin lmao.
Google is. Not by selling Gemini accounts but their ecosystem has been powered by ML and LLM and RL for a while.
I always wonder about that. The deterministic route was ML. Now we’ve added further layer of abstraction to that making it more flexible, but less deterministic. IMO, “training” is a buzz word for hopeful outcomes because people don’t have the talent or money for building legit ML for their apps. That’s my hot take anyhow. Worked at AWS for 4 years…
This is a subreddit about the us stock market. There is no ML party in America. There are the maga fascists and the weak democrats.
Hahahaha. For the record - I indeed sell my research because there are many people interested in having quant tools but not the skills (no, ChatGPT won't make you an ML engineer or an expert in vol trading from one day to the next) nor the time (that is probably the most precious commodity retail traders have: they already have a job and a family, not hours to spend digging into what is the best trade) to dedicate to that endeavour. Trading is a business; most retail treat it as expensive gambling. Nothing wrong with that, but I publish my research for the other group. These people are self-learners, usually already established in life, and understand that «selling 45 DTE and managing at 21» is complete garbage, and do not fool themselves in the 0DTE casino. I have no problem working with people like this and over time we have built a great (small) community. But I'm not a saint either and I am perfectly aware I'm taking money from a lot of people in this forum. Answering questions here is almost a way to clear my conscience. With that said, see you in the order book ;)
Been doing a bit more DD on DBGI and here’s what I’ve found: They recently amended their Series D PIPE financing, raising another ~$1.5M. Worth noting the terms include preferred shares converting at a discount, so dilution risk is real. On the flip side, DBGI’s tech arm (Open Daily Technologies) was accepted into NVIDIA’s Connect Program, which gives them access to NVIDIA’s AI/ML resources and ecosystem support. Doesn’t guarantee success, but it’s an interesting angle considering how hot AI is right now. Despite that news, the stock actually dropped ~25% in a single day afterwards — shows how fragile sentiment is around these microcaps. Management keeps talking about scaling through e-commerce, NIL/college apparel partnerships, and using data/tech to boost customer lifetime value. So yeah — fundamentals are still shaky, but this is why the reversal from the lows has been so wild. It’s basically riding the mix of AI hype + microcap volatility. Definitely high risk/high reward.
As someone that used to work at Amazon, whatever number you're thinking they told you is bullshit. They talked about "AI immediately upgrading to Java 17" but they already had basic ML rules for upgrading packages way before ChatGPT, while I was working there. I actually worked on a team that hosted services back in the Java 5 -> Java 8 days. We moved over a couple hundred services in two weeks; the main effort was just watching their CI pipelines to make sure the new services were working fine. Now I'm getting upset that I didn't get any of the supposed millions of dollars I saved the company. They should have at least promoted me. Fuck em.
As someone who works in AI/ML/DL space. At this point AI is probably not increasing the bottom line on 90%+ of companies. OP is def full of it. The stock surge is based on things behind the scenes our simple ape brains are too poor to understand.
AI's abilities have been WILDLY overrated. At best, ML can do some esoteric technical work as well as, or perhaps better than, a well-trained human - depending on the work, and most ML models do not show this level of performance! See also: the countless biopharma ventures that promise to explore molecular space in fantastic, revolutionary new ways... only to find a bunch of "candidates" that probably won't work at all, and maybe can't even be synthesized. Then you have your LLMs, which are nothing but digital con artists. Their one skill is sounding convincing. Not thinking, not validating data, just making up something that sounds great, even if it's wrong, vague, or absolute, 100% bullshit (aka "hallucination"). They're basically an upjumped version of the creepy fortune teller robot at the sketchy carnival. The art versions of these are actually pretty impressive - for dumb liar robots. They still screw up text, make ugly art, and create some truly hideous video effects, but when most of us struggle to draw mutant stick figures who beg God for death, and don't know how to do any video effects more complicated than Instagram filters, it's easy to be impressed. But more and more, the truth is coming out: AI actually sucks at what it does, and promises about how the tech is in its infancy and will definitely be polished and upgraded until it's superhuman have been empty. Remember when a bunch of executives and middle managers with room temperature IQs thought the most genius move in all of business was getting a bunch of Indians or whatever to handle all their customer service, and it flopped hard when they could barely speak English? AI is going to shake out very similarly to that: some roles will be able to use it, but many won't, or if they do, it'll only be with a lot of custom training and a team of you-still-have-to-pay-their-wages humans babysitting them. After that, AI will lose money. Good luck timing it.
This is from GROK. UiPath (NYSE: PATH) presents a compelling investment opportunity as a leader in robotic process automation (RPA) evolving into an enterprise-grade orchestrator for agentic AI, positioning it to capture significant value in the expanding automation market. With a market share exceeding 60% in RPA, UiPath benefits from a sticky customer base, strong recurring revenue metrics, and strategic partnerships that enhance its AI capabilities, all while trading at a discounted valuation relative to peers. Market Opportunity and UiPath's Position: The global RPA market is maturing into a broader enterprise automation ecosystem, with UiPath's total addressable market (TAM) projected to grow from $61 billion in 2023 to $93 billion by 2025, driven by AI integration for complex workflows. Agentic AI—autonomous agents that reason, plan, and execute tasks—represents the next frontier, with 90% of U.S. IT executives identifying improvable processes and 77% planning investments in 2025. UiPath's platform uniquely bridges RPA's rule-based automation with AI, enabling end-to-end orchestration where agents, robots, and humans collaborate. This differentiates it from pure-play AI firms, as UiPath focuses on execution in regulated environments like finance and healthcare, where competitors like Automation Anywhere and Blue Prism lag in AI depth and market share. Recent partnerships underscore UiPath's momentum: integrations with OpenAI (for ChatGPT and GPT-5 in workflows), NVIDIA (for secure Nemotron models), Google (Gemini-powered voice agents), and Snowflake (Cortex for data-driven actions) enable scalable, governed agentic automation. The launch of Maestro as an orchestration layer further solidifies this, allowing enterprises to manage multi-agent workflows, APIs, and UI automations—directly competing with platforms like Palantir but with RPA's proven scalability. Financial Strength and Growth Drivers: UiPath's fiscal 2025 results demonstrate resilience amid macro uncertainty: full-year revenue reached approximately $1.5 billion (trailing 12 months as of July 2025), with ARR at $1.666 billion (up 14% YoY) and dollar-based net retention at 110%, indicating strong customer expansion and low churn. Q2 fiscal 2026 (ended July 2025) saw revenue beat estimates, with EPS of $0.15 versus $0.09 expected, and non-GAAP adjusted free cash flow of $328 million supporting $1.7 billion in cash reserves. The shift to enterprise focus has stabilized customer counts, with AI features like Agent Builder driving upsell potential. Projections for fiscal 2026 include ARR growth to $1.82 billion, fueled by agentic AI adoption and partnerships, with a path to GAAP profitability already achieved in prior quarters. UiPath's AI strategy—integrating models via AI Center and Fabric for drag-and-drop ML in RPA—expands use cases beyond repetitive tasks to intelligent decision-making, positioning it for durable growth. Valuation and Upside Potential: At around $13 per share (as of early October 2025), UiPath trades at a forward P/E of ~19.6x and P/S of 4.9x, below RPA industry averages (forward P/E 28.9x, P/S 5.85x), implying undervaluation given 9-14% revenue/ARR growth and AI tailwinds. Analyst consensus targets ~$13.30, with bulls eyeing higher if agentic products like Maestro monetize effectively, potentially driving multifold returns as the stock (down from 2021 peaks) reflects prior skepticism now being dispelled.
Long-term, UiPath could dominate as the "control plane" for enterprise AI, with network effects from its 10,000+ customer base amplifying adoption. Risks and Considerations: Challenges include slower revenue growth from extended sales cycles in a cautious macro environment, competition from incumbents like Microsoft Power Automate, and execution risks in scaling AI features. Insider selling and past leadership transitions warrant monitoring, though recent beats and partnerships mitigate near-term downside. UiPath remains a hold-to-buy for patient investors betting on AI orchestration's transformative potential.
They can’t be that prominent if I’ve never heard of an ML. Whereas I see fascists in red hats in all levels of government.
Reddit is actively working on this; they run many A/B tests on a very small percentage of users and have rolled out some more general features, such as hiding the number of downvotes on comments if their ML models think it is due to political brigading, to discourage hive-mind mentality.
They’re saying they hit ML instead of actively managing
A picks and shovels pick is CLS. They are an advanced electronics manufacturer and large manufacturer of high speed network switches that are used in hyperscaler datacenters. Every server rack, and at multiple connections upstream, has a switch, and networking is very important for ML workloads because large amounts of data need to be sent between different servers quite quickly. They are the market leader in 800G switches, which is the cutting edge right now. And while this is a good portion of their business, they also do healthcare technology, rack integration, general electronics design and offer services to better automate factories, which is important if we are going to bring manufacturing back. There are dozens of cloud companies, most of whom are unlikely to last til 2030, but Celestica will last, and every cloud company uses something made by them. They even make components and provide contract design and manufacturing for companies like Juniper and Dell. They beat last quarter earnings expectations by 50%, have a 30% ROE, and are expected to grow their EPS by 28% each year over the next five years. It's my largest holding by far.
I have a master's in AI and have tried in the past to predict stock movements with ML models. It obviously did not work. I know a guy who said it worked for him for 2 months, then it stopped working and he lost all the money he had invested. There are a ton of people doing ML who have tried to predict the stock market, and they all have regular jobs... if it actually worked for them, they would not be working regular jobs. Believe me, machines are better at analysing these price-only patterns than humans, and they still can't do it. I'm telling you all this so that hopefully you, or someone like you, don't fall into the trap of technical analysis. I'm telling you there is a whole world out there of people who are incredibly smart, and you don't know they exist. Don't bother trying to compete with them.
> An A.I. product is an LLM.

You have it in reverse. LLMs are an A.I. product, but not all A.I. products are LLMs. LLM stands for large language model. For example, self-driving cars are an A.I./ML product, but they are not an LLM.

> There are no LLM's from any corp that are a product that works that isn't owned by a Maga corp.

DeepSeek works well and was trained on a $6 million budget; their most recent model was trained on a $200k budget. Stanford researchers have even created a distilled LLM on a $30 budget. The idea that you need $100 billion to train a decent LLM is provably false. A lot of the investment going on now really makes no sense. Companies are trying to use brute-force compute to squeeze an extra 1% in benchmark results rather than focusing on improvements in their architecture. It's a big reason why China will destroy the US in AI: export restrictions are forcing them to make do with considerably less compute power.
isn't it fascinating that we had to listen to 'lectures' about climate change, CO2 this, CO2 that... every single aspect of our lives was measured with a carbon footprint, or some version thereof... (I am not a climate change denier, by the way) and yet everybody pushes AI everywhere, including movie/TV production, my work, your work, my phone, your phone... my fridge? What about the carbon footprint of all this AI/ML/LLM nonsense? Is it no longer applicable? (Just to cite one example: I have AI in my Adobe PDFs, and I do not need it. I do not need AI in my Outlook app, Teams app, Loop app. No, no, no.) Even Instagram has it. WhatsApp has an AI engine now. My Scalable Capital app has it. I do not need AI there either!
hit the pen! I used to drink 750 mL of Tito's or Jamo like once a week (1-2 cocktails after work, etc.), which is like $40-$55 a week. I now spend approximately $18-$24 a month! My dispo will have sales every now and then, and I load up on $15 carts and edibles. Infinitely better imo. I'll still drink at social gatherings, but in general I feel healthier.
Quantum compute is coming for your data. The question is: are you ready? The encryption standards that protect your data today could be broken in 3 to maybe 5 years, once enough logical qubits are available. And whereas a modern binary supercomputer would take millions of years to break today's standards, a quantum computer could do it in a matter of seconds. You shouldn't wait to move to new standards regardless, because Harvest Now, Decrypt Later (HNDL) attacks are very real. State actors and criminal enterprises are downloading your data now (this is the harvest part). They will store it for when Q-Day arrives. And then? Well... I certainly hope you prepared. Back in August of 2024, after years of vetting, Kyber and Dilithium were approved by NIST. These new post-quantum standards come with much larger key and signature sizes; they are going to slow things down, but it's necessary for security. So, on devnet, we will be testing new lattice-based cryptographic standards against the speed of the Solana network. The goal is turning the tanks that are Kyber (now ML-KEM) and Dilithium (now ML-DSA) into the elliptic-curve Lambos we have currently.
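For anyone curious what that migration actually looks like in code, here is a minimal sketch of an ML-KEM key-encapsulation round trip. It assumes the open-source liboqs-python bindings (the `oqs` package) and the ML-KEM-768 parameter set; this is purely illustrative and has nothing Solana-specific in it:

```python
# Minimal ML-KEM (formerly Kyber) round trip, assuming liboqs-python:
#   pip install liboqs-python
# Illustrative sketch only, not production key management.
import oqs

ALG = "ML-KEM-768"  # NIST FIPS 203 parameter set

# Receiver generates a keypair and publishes the public key.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates: derives a shared secret plus a ciphertext
    # that only the receiver's secret key can decapsulate.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same secret.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
# The "slow things down" part: ML-KEM-768 ciphertexts are ~1088 bytes,
# versus 32 bytes for an X25519 ephemeral key.
print(f"{ALG}: ciphertext is {len(ciphertext)} bytes")
```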
I've heard murmurs of a Microsoft acquisition. Any thoughts/rumors you have on that? Despite the consensus, RPA probably isn't completely replaced by AI or ML but integrated into it as time goes on (which is why Microsoft would want to acquire UiPath). The RPA companies that survive will be the ones that can integrate their tech into AI; otherwise AI will replace them. AI isn't quite as good a choice for very repetitive jobs. One thing you don't mention, though, is the sector's actual growth: UiPath could obtain a larger slice of RPA investment/revenue, but that revenue is not ticking up year over year. If anything, I would label this a value stock, not a growth stock. AI and ML companies are growth stocks. I'm bearish on RPA companies, but I think UiPath is probably the one to own, solely because I do think they get acquired.
left my ML Engineer job paying a lil over $200k to trade full time a few months ago - life has never been better tbh. Been trading for over 7 years; finally had the confidence to go balls deep this year. Weirdest part? I only take trades a few days a month. There is generally one day a month where you have a $30k+ trade that completely justifies your existence for the next 2 months. Not taking random / dumb / wsb yolo trades has been the key to not losing money. That, and when I don't feel like there will be good trades, I just buy something like SOFI and sell ATM weekly covered calls against it.
> But the idea that if we just keep doing more pattern recognition in different variations, then we'll eventually make an intelligence like a human is flawed at the most basic premise

That's not the premise anywhere; this is a strawman. You created this false premise and then argued with it for most of your comment.

> The idea that patterns seen in the past will continue to apply to new data seen in the future. This is a fundamentally flawed concept

The world changes rapidly, but basic truths remain. For instance, when your dataset is every textbook ever written, and the pattern matching is in how to apply that to economically valuable problems, your definition sounds far less trite than you want it to appear.

> When AI programmers/scientists use the word learning, what they actually mean is "fine-tuning parameters". But that's not how humans learn!

Learning in ML is defined as improving at some task T with experience E, as measured by some performance metric P. That's exactly what fine-tuning is. Backpropagation and other 'learning' algorithms may not be biologically plausible, and yes, there are limitations due to hardware and the frontiers of science.
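To make that T/E/P definition concrete, here is a minimal sketch: the "experience" is observed data, the "task" is prediction, and the "metric" is mean squared error, which improves as gradient descent fine-tunes two parameters (the data and model here are illustrative toys):

```python
# Mitchell's definition of learning: improve at task T (predict y from x)
# with experience E (observed data) as measured by metric P (MSE).
# "Learning" here is literally just fine-tuning two parameters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 200)  # ground truth: w=3, b=1

w, b = 0.0, 0.0  # parameters before any "experience"
lr = 0.1         # learning rate

for step in range(500):
    y_hat = w * x + b
    error = y_hat - y
    mse = np.mean(error ** 2)          # metric P
    w -= lr * 2 * np.mean(error * x)   # gradient of MSE w.r.t. w
    b -= lr * 2 * np.mean(error)       # gradient of MSE w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}, MSE={mse:.4f}")  # approaches w=3, b=1
```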
I think there’s a need to distinguish ML from AI here (although they are often used interchangeably). ML has driven trillions of dollars of market cap in tech and other sectors. AI (which is now mostly used to mean flavours of LLM) has also driven trillions of dollars of market cap…but only for the people selling the shovels.
Max bet ravens ML vs chiefs on Sunday
Backend’s mostly Python (FastAPI for orchestration, ib_insync/broker SDKs for execution, ML stack for signals). Frontend’s in Next.js/TypeScript for the dashboards. Data + auth handled in Supabase (Postgres).
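For anyone wondering how those pieces fit together, here is a minimal sketch of the orchestration layer under the same stack assumptions (FastAPI in front, ib_insync talking to a local IB gateway/TWS on port 7497); the endpoint name and signal shape are hypothetical, not the poster's actual code:

```python
# Hypothetical orchestration endpoint: FastAPI receives a signal,
# ib_insync submits the order to a local IB gateway/TWS instance.
from fastapi import FastAPI
from pydantic import BaseModel
from ib_insync import IB, Stock, MarketOrder

app = FastAPI()
ib = IB()

class Signal(BaseModel):
    symbol: str
    side: str       # "BUY" or "SELL"
    quantity: int

@app.on_event("startup")
async def connect_broker():
    # 7497 is IB's default paper-trading port; 7496 is live.
    await ib.connectAsync("127.0.0.1", 7497, clientId=1)

@app.post("/orders")
async def place_order(sig: Signal):
    contract = Stock(sig.symbol, "SMART", "USD")
    trade = ib.placeOrder(contract, MarketOrder(sig.side, sig.quantity))
    return {"order_id": trade.order.orderId, "status": trade.orderStatus.status}
```

A real deployment would add auth, idempotency keys, and risk checks before the `placeOrder` call, but the shape is this simple.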
Google absolutely does buy NVIDIA GPUs, and provides them to external customers via GCP. TPUs are the preferred in-house solution though, as you stated - this is likely to remain the case for the foreseeable future. They play extremely well with Google's opinionated in-house tooling and development stacks, and are much better than GPUs in terms of reliability and scaling *if you are using Google's opinionated stack.* The main reason other people don't like TPUs is basically just that the opinionated Google stack is pretty much the *only* stack that is well supported on TPUs, and most people don't want to have to rebuild their models and half their ML infrastructure just to get themselves vendor-locked. Even though the hardware is better in terms of cost performance, scalability, reliability, etc, it's still a lot of effort to migrate, and a very large amount of risk to trust your entire business to a single cloud provider.
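The "opinionated stack" point is easy to see in code. In JAX (Google's stack), the accelerator is abstracted away entirely: the same program compiles to TPU or GPU, so the lock-in is the codebase you build around the stack, not the chip. A minimal illustration (the toy model is mine, not anything Google ships):

```python
# The same JAX program runs unchanged on CPU, GPU, or TPU;
# XLA compiles it for whatever accelerator is attached.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. TPU devices on a TPU VM, CUDA devices on GPU

@jax.jit
def step(w, x, y, lr=0.01):
    # One gradient-descent step on a tiny least-squares model.
    loss_fn = lambda w: jnp.mean((x @ w - y) ** 2)
    return w - lr * jax.grad(loss_fn)(w)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (1024, 16))
w_true = jnp.arange(16.0)
y = x @ w_true

w = jnp.zeros(16)
for _ in range(1000):
    w = step(w, x, y)
print(jnp.allclose(w, w_true, atol=1e-2))  # True on any backend
```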
A 5% success rate for projects focused entirely on a years-young technology isn't as bad as you think it is. 95% of tech projects and startups in general are terrible ideas in reality, and maybe 1% actually experience lasting, scaling success. Besides that, the success rate of startups/projects isn't actually the main indicator of the value of GenAI; it's the rate of adoption by the individuals who make up the enterprise. That's where the revolution is occurring: within the workplace. Research and market surveys show that GenAI adoption and integration is still accelerating and is delivering fantastic ROI in ways that will not immediately show up on income statements and balance sheets. Enterprise projects to convert unstructured data, in both archived and real-time environments, are where the value is. The amount of untouched information that lives in corporate warehouses is staggering. There aren't enough employees in the world, offshore or otherwise, to consume, digest, analyze, and restructure it into useful information. I'm talking about the yottabytes of image/video/audio data that enterprises either sit on or throw away. I'm talking about the thousands of human-made/edited PDFs and documents that every single corporation has, which require teams of people to sift through and extract information from. Every single large company has thousands to millions of what I call "data needles in database haystacks" that are simply too cumbersome to query, process, and extract. These same corporations receive thousands of hours of audio/video data and images all the time that they can now hand to ChatGPT for pennies and extract information from, without having to pay millions for an ML/DS department to train computer vision models. This is why Jensen says that "world AI" is the future. We may have consumed the vast yet limited amount of textual/structured data that is publicly accessible to LLMs, but we have yet to scratch the surface of world AI inference and training that will hugely benefit tangible product/service companies globally, from construction to manufacturing to retail stores to landscaping.
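As a concrete example of that unstructured-to-structured conversion, here is a minimal sketch using the OpenAI Python SDK to pull fields out of raw document text. The model name, field names, and prompt are illustrative assumptions; a real pipeline would add batching, schema validation, and human review:

```python
# Illustrative sketch: turn an unstructured document into structured JSON.
# pip install openai   (assumes OPENAI_API_KEY is set in the environment)
import json
from openai import OpenAI

client = OpenAI()

def extract_fields(document_text: str) -> dict:
    """Ask the model to pull a few hypothetical fields out of raw text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works; this one is cheap
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract vendor, invoice_date, and total_amount "
                        "from the user's document. Reply with JSON only."},
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_fields(
    "Invoice #4411 from Acme Corp, dated 2024-03-02, total $1,280.50"
))
```

Run over a warehouse of PDFs-turned-text, that loop is the "teams of people sifting documents" replaced by pennies of inference.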
95% of AI projects have achieved negative ROI, so it's not really working out well. There are some great uses for AI/ML, but the big problem is that some of the best use cases are being drowned out by the push for Generative AI. So instead of capital flowing to a company that makes genuinely feasible products, you get capital flowing to companies competing to make the best JPEG of a panda skateboarding on the moon.
> A lot of limitations of AI you described has a lot to do with limited compute for user facing apps, so I guess it sort of makes sense why there is this great push for GPU and data center investment doesn't it? Let's assume the leadership of nvda, OpenAI, and other mega cap tech aren't stupid, we can probably get closer to understanding what is really going on and how things will evolve.

I've worked with AI/ML, so I'd like to clarify a few things.

1. Adding more GPUs for training has very significant diminishing returns, especially when the training process involves subjective analysis (i.e., humans rating which response is better). This is because you are basically brute-forcing the search for a minimum. You can get closer and closer, but over time the fuzziness of the training data gets in the way.

2. From an inference standpoint, compute is not the bottleneck. Inference is largely sequential (each token depends on the last), so it cannot simply be parallelized across more GPUs; speeding up compute only allows each answer to be reached faster.

3. HBM is the biggest bottleneck for broad GenAI applications like LLMs and image generation; the more memory you can fit on a chip, the more weights you can load into memory.

4. For specialized applications, training data is the biggest bottleneck. Your model cannot be better than its training data.

In terms of coding, the limitation mostly has to do with AI's inability to reason. I've asked all of the leading AI models to optimize a SQL query, on the highest reasoning setting. I gave them the full schema, info about indexes, etc. All of them produced a SQL query that performed WORSE or had syntax errors. I then worked on the query myself and fixed it in 10 minutes...
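Point 3 is easy to quantify with back-of-the-envelope arithmetic: weight memory alone pins a model to a minimum number of accelerators, no matter how fast each one is. A rough sketch (the parameter counts and 80 GB per device are illustrative assumptions, and real deployments also need room for KV cache and activations):

```python
# Back-of-the-envelope: how much HBM do the weights alone need?
# These are lower bounds; KV cache and activations add more on top.

def min_devices(params_billions: float, bytes_per_param: int = 2,
                hbm_per_device_gb: int = 80) -> tuple[float, int]:
    """Weight memory in GB and minimum device count (fp16/bf16 weights)."""
    weight_gb = params_billions * 1e9 * bytes_per_param / 1e9
    devices = -(-weight_gb // hbm_per_device_gb)  # ceiling division
    return weight_gb, int(devices)

for size in (7, 70, 405):
    gb, n = min_devices(size)
    print(f"{size}B params -> {gb:.0f} GB of weights -> >= {n} x 80GB devices")
# 7B   ->  14 GB -> fits on one device
# 70B  -> 140 GB -> at least 2 devices
# 405B -> 810 GB -> at least 11 devices, before any KV cache
```

This is why more HBM per chip moves the needle for LLM serving in a way that more FLOPS alone does not.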
For networking, you forgot CLS. Celestica is used by the major cloud providers for high-speed switches. They specifically target hyperscaler customers (Azure, AWS, Google Cloud), so they are high volume. Networking speed is key for AI/ML applications, second only to the GPUs/ASICs themselves, because different clusters of GPUs need to share massive amounts of data with each other. And each rack of servers has at least 1 or 2 switches. Speaking as somebody in the industry.
***THOSE BOIS FROM INDIA AINT GONNA TAKE MY JOB NO MORE!*** The job: Required Qualifications: • PhD in Computer Science, Machine Learning, Computational Neuroscience, or Applied Mathematics. • 5+ years of applied research in deep learning, self-supervised learning, reinforcement learning, or probabilistic modeling. • Strong proficiency in Python, PyTorch, JAX, and TensorFlow, with experience in custom CUDA kernel development. • Demonstrated experience in scalable distributed ML pipelines, including Ray, Horovod, or DeepSpeed. • Published work on transformer architectures, graph neural networks, or diffusion models with state-of-the-art results. • Expertise in Bayesian inference, causal discovery, or manifold learning.
Long-term investing via ML (Merrill/BoA) is okay, but it's not fast. HOOD is fast. A managed service is not a bad solution, but you can arguably make more money just investing in VOO and not touching it.