
ML

MoneyLion Inc


Mentions (24Hr): 10 (233.33% today)

Reddit Posts

- r/StockMarket: [Discussion] How will AI and Large Language Models affect retail trading and investing?
- r/StockMarket: [Discussion] How will AI and Large Language Models Impact Trading and Investing?
- r/smallstreetbets: Luduson Acquires Stake in Metasense
- r/investing: Best way to see asset allocation
- r/wallstreetbets: Neural Network Asset Pricing?
- r/Shortsqueeze: $LDSN~ Luduson Acquires Stake in Metasense. FOLLOW UP PRESS PENDING ...
- r/wallstreetbets: Nvidia Is The Biggest Piece Of Amazeballs On The Market Right Now
- r/investing: Transferring Roth IRA to Fidelity -- Does Merrill Lynch Medallion Signature Guarantee?
- r/StockMarket: Moving from ML to Robinhood. Mutual funds vs ETFs?
- r/smallstreetbets: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
- r/stocks: hypothesis: AI will make education stops go up?
- r/pennystocks: AI Data Pipelines
- r/pennystocks: Cybersecurity Market Set to Surge Amidst $8 Trillion Threat (CSE: ICS)
- r/StockMarket: The Wednesday Roundup: December 6, 2023
- r/wallstreetbets: Why SNOW puts will be an easy win
- r/smallstreetbets: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
- r/wallstreetbets: I'm YOLOing into MSFT. Here's my DD that convinced me
- r/pennystocks: Integrated Cyber Introduces a New Horizon for Cybersecurity Solutions Catering to Underserved SMB and SME Sectors (CSE: ICS)
- r/investing: I created a free GPT trained on 50+ books on investing, anyone want to try it out?
- r/pennystocks: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
- r/smallstreetbets: Investment Thesis for Integrated Cyber Solutions (CSE: ICS)
- r/options: Option Chain REST APIs w/ Greeks and Beta Weighting
- r/stocks: How often do you trade news events?
- r/stocks: Palantir Ranked No. 1 Vendor in AI, Data Science, and Machine Learning
- r/RobinHoodPennyStocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/pennystocks: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/Wallstreetbetsnew: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/smallstreetbets: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/wallstreetbetsOGs: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/WallStreetbetsELITE: Nextech3D.ai Provides Business Updates On Its Business Units Powered by AI, 3D, AR, and ML
- r/wallstreetbets: 🚀 Palantir to the Moon! 🌕 - Army Throws $250M Bag to Boost AI Tech, Fueling JADC2 Domination!
- r/investing: AI/Automation-run trading strategies. Does anyone else use AI in their investing processes? (Research, DD, automated investing, etc)
- r/StockMarket: Exciting Opportunity !!!
- r/wallstreetbets: 🚀 Palantir Secures Whopping $250M USG Contract for AI & ML Research: Moon Mission Extended to 2026? 9/26/23 🌙
- r/Wallstreetbetsnew: Uranium Prices Soar to $66.25/lb + Spotlight on Skyharbour Resources (SYH.v SYHBF)
- r/wallstreetbets: The Confluence of Active Learning and Neural Networks: A Paradigm Shift in AI and the Strategic Implications for Oracle
- r/investing: Treasury Bill Coupon Question
- r/pennystocks: Predictmedix AI's Non-Invasive Scanner Detects Cannabis and Alcohol Impairment in 30 Seconds (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
- r/stocks: The UK Economy sees Significant Revision Upwards to Post-Pandemic Growth
- r/wallstreetbets: NVDA is the wrong bet on AI
- r/pennystocks: Demystifying AI in healthcare in India (CSE:PMED, OTCQB:PMEDF, FRA:3QP)
- r/wallstreetbets: NVIDIA to the Moon - Why This Stock is Set for Explosive Growth
- r/StockMarket: [THREAD] The ultimate AI tool stack for investors. What are your go to tools and resources?
- r/investing: The ultimate AI tool stack for investors. This is what I'm using to generate alpha in the current market. Thoughts
- r/wallstreetbets: My thoughts about Nvidia
- r/wallstreetbets: Do you believe in Nvidia in the long term?
- r/wallstreetbets: NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
- r/wallstreetbets: Apple Trend Projection?
- r/stocks: Tim Cook "we've been doing research on AI and machine learning, including generative AI, for years"
- r/investing: Which investment profession will be replaced by AI or ML technology?
- r/pennystocks: WiMi Hologram Cloud Developed Virtual Wearable System Based on Web 3.0 Technology
- r/pennystocks: $RHT.v / $RQHTF - Reliq Health Technologies, Inc. Announces Successful AI Deployments with Key Clients - 0.53/0.41
- r/wallstreetbets: $W Wayfair: significantly over-valued price and ready to dump to 30 (or feel free to inverse me and watch to jump to 300).
- r/pennystocks: Sybleu Inc. Purchases Fifty Percent Stake In Patent Protected Small Molecule Therapeutic Compounds, Anticipates Synergy With Recently In-Licensed AI/ML Engine
- r/stocks: This AI stock jumped 163% this year, and Wall Street thinks it can rise another 50%. is that realistic?
- r/wallstreetbets: roku thesis for friend
- r/stocks: Training ML models until low error rates are achieved requires billions of $ invested
- r/wallstreetbets: AMD AI DD by AI
- r/wallstreetbets: 🔋💰 Palantir + Panasonic: Affordable Batteries for the 🤖 Future Robot Overlords 🚀✨
- r/wallstreetbets: AI/ML Quadrant Map from Q3…. PLTR is just getting started
- r/pennystocks: $AIAI $AINMF Power Play by The Market Herald Releases New Interviews with NetraMark Ai Discussing Their Latest News
- r/wallstreetbets: DD: NVDA to $700 by this time next year
- r/smallstreetbets: VetComm Accelerates Affiliate Program Growth with Two New Partnerships
- r/pennystocks: NETRAMARK (CSE: AIAI) (Frankfurt: 8TV) (OTC: AINMF) THE FIRST PUBLIC AI COMPANY TO LAUNCH CLINICAL TRIAL DE-RISKING TECHNOLOGY THAT INTEGRATES CHATGPT
- r/pennystocks: Netramark (AiAi : CSE) $AINMF
- r/pennystocks: Predictmedix: An AI Medusa (CSE:PMED)(OTCQB:PMEDF)(FRA:3QP)
- r/wallstreetbets: Testing my model
- r/pennystocks: Predictmedix Receives Purchase Order Valued at $500k from MGM Healthcare for AI-Powered Safe Entry Stations to Enhance Healthcare Operations (CSE:PMED, OTCQB:PMEDF)
- r/wallstreetbets: [Serious] Looking for teammates
- r/stocks: [Serious] Looking for teammates
- r/StockMarket: PLTR Stock – Buy or Sell?
- r/StockMarket: Why PLTR Stock Popped 3% Today?
- r/wallstreetbets: How would you trade when market sentiments conflict with technical analysis?
- r/Shortsqueeze: Squeeze King is back - GME was signaling all week - Up 1621% over 2.5 years.
- r/StockMarket: Stock Market Today (as of Mar 3, 2023)
- r/wallstreetbets: How are you integrating machine learning algorithms into their trading?
- r/investing: Brokerage for low 7 figure account for ETFs, futures, and mortgage benefits
- r/pennystocks: Predictmedix Announces Third-Party Independent Clinical Validation for AI-Powered Screening following 400 Patient Study at MGM Healthcare
- r/Shortsqueeze: Why I believe BBBY does not have the Juice to go to the Moon at the moment.
- r/investing: Meme Investment ChatBot - (For humor purposes only)
- r/pennystocks: WiMi Build A New Enterprise Data Management System Through WBM-SME System
- r/wallstreetbets: Chat GPT will ANNIHILATE Chegg. The company is done for. SHORT
- r/Shortsqueeze: The Squeeze King - I built the ultimate squeeze tool.
- r/Shortsqueeze: $HLBZ CEO is quite active now on twitter
- r/wallstreetbets: Don't sleep on chatGPT (written by chatGPT)
- r/wallstreetbets: DarkVol - A poor man's hedge fund.
- r/investing: AI-DD: NVIDIA Stock Summary
- r/investing: AI-DD: $NET Cloudflare business summary
- r/Shortsqueeze: $OLB Stock DD (NFA) an unseen gold mine?
- r/pennystocks: $OLB stock DD (NFA)
- r/wallstreetbets: COIN is still at risk of a huge drop given its revenue makeup
- r/wallstreetbets: $589k gains in 2022. Tickers and screenshots inside.
- r/pennystocks: The Layout Of WiMi Holographic Sensors
- r/pennystocks: infinitii ai inc. (IAI) (former Carl Data Solutions) starts to perform with new product platform.
- r/investing: Using an advisor from Merril Lynch
- r/pennystocks: $APCX NEWS OUT. AppTech Payments Corp. Expands Leadership Team with Key New Hires. Strategic new hires to support and accelerate speed to market of AppTech's product platform Commerse.
- r/StockMarket: Traded companies in AI generated photos?
- r/pennystocks: $APCX Huge developments of late as it makes its way towards $1
- r/pennystocks: ($LTRY) Lets Hit the Lotto!
- r/wallstreetbets: Robinhood is a good exchange all around.

Mentions

Marcos López de Prado believes ML quants are already starting to leave non-ML quants behind. He believes that in the future, the stock market will be so efficient due to ML that it'll be hard for retail investors to go from rags to riches, but people's retirements and the financial economy will be more stable. In his book (Advances in Financial Machine Learning), he describes the stock market as mining for gold. In the beginning (the Gold Rush), it's easy for laymen with simple tools (like econometrics in the earlier stock market) to strike it rich. As all the larger, easier-to-obtain gold pieces are removed, it becomes much harder for laymen to get rich from mining gold. But nowadays, newer tech inaccessible to laymen is mining traces of gold that total much more gold than was ever mined during the Gold Rush—he sees ML and "experimental math" as the current forces doing this.

Mentions:#ML

This is great advice. "Real" options trading is all about statistical distributions. With only 1k, you can't manage sizing and risk in a reasonable way without tail risk threatening ruin. Options strategies only use a few percent of an account, so 1k means your value at risk should be less than 100 at a time. There are very, very few strategies that can succeed like that. 1k is a lotto ticket, not really trading. Taking that 1k and buying and holding a broad market ETF is definitely my suggestion. Or practice asset allocation between something like VOO/VTI and fixed-income ETFs. When you have around $20k, you can start thinking about trying "the wheel" on some stock that's around $100-$200 if you want to dip your toe into options trading. My opinion: if you start options trading with less than $5k, you will probably lose it all even in a normal market due to stochastic processes. You're really, really constrained until 10-20k. Things get a little more comfortable at 50k. You can manage positions comfortably at 100k. Final note: if you want to be good at options trading, take A LOT of math in school. Probability, statistics, numerical methods, differential equations, optimization techniques. All of those are also prepping you for AI/ML. Try to get to the point that you can derive Black-Scholes by hand, understanding the limitations and approximations.
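The Black-Scholes formula the comment mentions can be sketched in a few lines of Python. This is an illustration only, not trading advice: it prices a European call with no dividends, and the inputs below are made-up example numbers.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """European call price under Black-Scholes (no dividends).

    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money 1-year call, 20% vol, zero rates:
print(round(black_scholes_call(100, 100, 1.0, 0.0, 0.2), 2))  # → 7.97
```

Deriving d1 and d2 by hand, as the commenter suggests, is a good test of whether you understand the lognormal assumption the model rests on.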

No, my point, which you entirely missed, is that those are all narrow AI loss functions, and completely unusable for training a general purpose robot. This is your problem, you admit you don't really understand the field, and claim you want to learn. But at the same time, your head is so far up your own ass, you think you know more than you do, to the point of claiming actual experts don't know what they're talking about. Go take an actual ML course, instead of pretending you already know everything.

Mentions:#ML

What are you even talking about? If you actually work with LLMs, you'd know that "understanding," "debugging," and "analyzing" are precisely what they're capable of doing. Your claim of delivering ML projects while simultaneously denying these basic capabilities is frankly unbelievable. Building a toy model "from scratch" doesn't give you insight into the cutting-edge capabilities of state-of-the-art systems. That's like saying you understand modern aerospace engineering because you've made a paper airplane. Your dismissal of these terms shows a fundamental misunderstanding of how LLMs operate. They absolutely do demonstrate forms of understanding through their ability to contextualize information, generate relevant responses, and perform complex reasoning tasks. They can debug code by identifying errors, suggesting fixes, and explaining the logic behind those fixes. And they certainly analyze text by extracting key information, summarizing content, and drawing inferences. It's people like you, with a little bit of knowledge and a lot of misplaced confidence, who hinder progress by dismissing genuine advancements.

>'analyze'.... no LLM does this, lol.

For a particular story subject, I can explain my own message reasoning more in-depth with the assistance of an LLM. I have done that successfully: writing an unfinished text response about a topic with context, then asking the AI to continue the given message. Some words, phrases, or sentence ideas are good; I include them in my message, then repeat the process at some point again if I need to. I know the topic, so if something is off or wrong, I either correct it or dismiss whatever part it has said incorrectly. The LLM demonstrably knew this topic pretty well, with decent, usable feedback: [Here's an example of the process](https://poe.com/s/cZAFEciT6Ou9MXEUkBcN) It helped greatly in developing a comprehensive and insightful response to the given story topic.
I was able to quickly respond to a Reddit post about a narrative series topic, and it easily got double the most upvotes of anyone from the thread. A large majority of the people looking at the thread liked the response I had done using extensive LLM assistance. LLMs demonstrably can analyze given texts, as my example clearly shows. The prompt I provided, and the resulting approval from the niche subreddit community following that given thread, is a perfect example of how these models can analyze complex narrative contexts, understand intricate character motivations, and generate coherent, nuanced content that resonates with a *knowledgeable audience.* It analyzed the given context about Elaine's background and extrapolated how her experiences as Kaiser would have shaped her character. It demonstrated an understanding of the psychological transformation required to go from a failed Princess candidate to a ruthless ruler, all while maintaining consistency with the established lore of the Tower of God universe. This isn't just simple text prediction; it's a sophisticated analysis of character development, societal structures, and the long-term consequences of traumatic experiences. The model showed an ability to draw logical conclusions based on given information and expand on themes in a way that was deemed valuable by the community. Your dismissal of these capabilities as "bs hype" is not just misguided; it's, again, willfully ignorant. You're either working with severely outdated models or you're fundamentally misunderstanding how to utilize these tools effectively. The fact that you can't see the analytical capabilities demonstrated in this example suggests a serious lack of expertise in the field you claim to work in.

Mentions:#ML

You keep using words like 'understanding', 'debug', 'analyze'... no LLM does this, lol. I deliver ML projects at work, and I use LLMs every day. I've built my own from scratch for fun. I'm not making any sweeping claims; I just don't perpetuate the bs hype behind them.

Mentions:#ML

Considering most ML/AI/LLM overhead runs on GPUs because of their better throughput than CPUs, AMD would be the better underdog bet over Intel.

Mentions:#ML#AMD

I'm glad someone here gets this. ML has been around since spam filters; the thing in Netflix recommending movies, or eBay suggesting a category, is just the same stuff. All an LLM is doing is picking the next most likely words. Here's my general problem with all of this stuff: it's really all a trick. Often quite a useful trick, one that can help, but a trick that can also yield crappy results. You can build a categorisation NLP thing that takes text and says "this is most likely a builder/chef/programmer", but it isn't properly smart. It's just scoring word counts, maybe prioritising the order of words. And that trick can help pare down a giant data set to get a few results, but you still need a human being to check if the results are sane or not. And it really doesn't matter if it's 2% or 5% inaccurate; you still need the human. You're always going to need the human. So adding more data or more processing doesn't add much more value. People are talking about AI factories or self-driving vehicles, but the value of a self-driving vehicle is zero intervention: that you can get drunk and it drives you home, or read a book as you travel. If you still have to watch the road, the value is tiny. And factory robots are about machines that just keep on doing the same thing very reliably all day. A robot that makes the most probable guess means a lot of goods leave the factory wrong.

Mentions:#ML
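The "scoring word counts" trick the comment describes can be sketched as a toy bag-of-words classifier. The categories and training sentences below are hypothetical, chosen to mirror the builder/chef/programmer example; a real system would use proper tokenization and probabilistic smoothing.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by how often it has seen the text's words; pick the max."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train([
    ("lay bricks pour concrete on site", "builder"),
    ("chop onions plate the dish for service", "chef"),
    ("write code debug the build pipeline", "programmer"),
])
print(classify(model, "debug the code"))  # → programmer
```

As the comment says, nothing here is "smart": the model has no notion of meaning, only overlap between word counts, which is why a human still has to sanity-check the output.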

How much of the AI/ML industry do you know? I have a feeling you're not aware of exactly why Nvidia has a monopoly on AI growth. If you think it's simply because of the GPUs, you're wrong. You're forgetting the frameworks for the code and software as well. You might be underestimating the inertia of their dominance on both the hardware and software side. Thinking AMD is close at all to catching up is a huge red flag that you might not know what you're talking about. Sure, it's possible that they'll lose their massive lead and dominance, but the signs of that happening will be very slow and gradual. There's nothing that AMD or any other company can do to suddenly shift the tides overnight because of the inertia. So when it starts happening, then it makes sense to start fearing a top, but that hasn't happened yet, so it sounds like you're stepping in front of a full-speed train here. Regarding the market cap, do you think that once a company hits number 1 they stop growing? All of the top companies will continue to grow and hit 4T eventually together and beyond. The global economy isn't at a point where growth is a zero-sum game yet. It's quite possible that Apple, Microsoft, Nvidia, Google, Meta, and the like continue to go up and up and double what they are today over the next few years because of how massive this new tech is.

Mentions:#ML#AMD

With respect, this language is exactly why I think it's overvalued. We already have ML/AI models everywhere, or memory hungry algorithms/data structures (which are related but not quite the same thing), in everything including robots.  It's not a new phenomenon, everyone just became aware of it with LLMs. The reason why GPUs weren't selling like crazy before a few years ago is because extremely large models are impractical for most applications. You either need to host it somewhere else and make an API call for inference (slow, requires internet, adds privacy concerns like security considerations that non-internet devices don't worry about), or you need a model small enough to perform inference on a local device that likely doesn't even have a GPU, or has a small one.  Neither of those limitations has changed, and for both reasons, there's an obvious ceiling to the applicability of very large models for most applications. Also, most applications really don't even need the wheel reinvented with a large model, there's smaller ones that are extremely effective at a lot of tasks already that don't need to be replaced by something expensive and bloated to do the same thing. I think the hype has grown beyond what those of us on the ground actually use the technology for.

Mentions:#ML#API

If you're trying to generate the entire code for your app, then yes, it's going to fall short. But if you understand that problems can be broken down and rearchitected such that their decomposed parts are solvable by something of intern skill level, you effectively have an intern who works tirelessly and responds immediately. The skill floor for using this is quite low; the skill floor for using it to create something meaningful isn't. Likewise, if you're simply using it to "write an article," it's going to be pretty generic. But if you're using it primarily as an editor and an idea cross-pollinator, it's extremely powerful. Basically, if hallucinations are a problem for what you're doing, you're probably not leveraging it correctly. I think fundamentally we need to question what "knowledge", "understanding", "creativity" etc. really mean, and why we think they're uniquely human traits. Source: 12 YOE in ML/AI.

Mentions:#ML

They started earlier with hardware. Then they invested heavily in CUDA, so all the ML libraries were developed with a CUDA backend; OpenCL was never nearly as popular. So they hooked all research onto their cards and won. Then the new AI revolution happened, everybody wanted to use ML in their business, and NVDA had all the software and hardware ready to deliver. Only then did the other companies wake up. So you can win if you anticipate the market before it starts to move, just like in trading.

Mentions:#ML#NVDA

I did not take my own advice today on a 4 leg MLB ML parlay. $460 to win $3100. Had a cash out offer halfway through of about $1600. Sent a screenshot to my betting group chat. Did not cash out. Made $3100. Nothing is black and white.

Mentions:#ML

You have to consider possible outcomes beyond looking at past performance and future projections based on current trends. Right now, Nvidia is benefiting from GenAI hype that requires training and inferencing with massive foundation models. Right now, Nvidia's primary customers are the other big tech companies, including the big 3 cloud companies (Amazon, Microsoft, Google). Mix in some Meta and Tesla. Right now, the hundreds of billions in GenAI investment has very little to show for it in terms of viable business models (revenue generation, profitability). If this doesn't materialize soon, you could see investors demand that the big tech companies back off their annual GPU infrastructure spend. If the big tech companies stop upgrading GPU clusters every year or do so at smaller scale, that will impact Nvidia meaningfully. Furthermore, foundation models aren't economically viable right now. They cost too much to train and inference. You could very well see a shift from general-purpose massive foundation models to smaller expert models that are particularly good at one narrow task. I work in AI/ML as an applied scientist and am seeing this shift in thought. Smaller models are cheaper to train and inference and make sense economically. This will reduce the demand for state-of-the-art massive GPUs made by Nvidia. Lastly, every single one of Nvidia's primary customers is looking to reduce their dependency on Nvidia GPUs by building their own AI accelerators. The point isn't to completely replace Nvidia but to supplement their demand for Nvidia GPUs with their own solutions. Microsoft has their Maia accelerator being made by TSMC. Google has their TPU accelerator being made by Broadcom. Amazon has their Trainium and Inferentia accelerators. Apple has their own on-device accelerators with M1/M2 and the neural engine on mobile, and apparently is getting into server-level accelerators as part of the Apple Intelligence push. Meta has their MTIA accelerator.
Tesla has their Dojo D1/D2 accelerator. You have Groq LPU which outperforms during inference. Then you have AMD with their accelerator lineup for the next few years and RoCm to compete with CUDA. You have tons of big tech companies teaming up to open source standards for networking and accelerator computing to eliminate Nvidia vendor lock. If these companies are able to reduce their demand for Nvidia chips particularly on the lower end then you will see Nvidia revenue growth grind to a halt or decline and what's worse their profitability could contract considerably with less demand. Double whammy impacts to top line and bottom line.

Mentions:#ML#AMD

Obviously AI, especially in the investing world, is some form of overhyped. However, machine learning is and has been powering the online world in ways that aren't directly obvious. Meta's recommender systems, for example, are multiple times larger than OpenAI's models, and probably make them a shit ton of money. Companies may not be able to package up models and sell them directly, but ML is not going away, and scale is only going to increase.

Mentions:#ML

I think Machine Learning AI will ultimately be more transformative for businesses than generative AI. I’m trying to identify the early innovators in various industries who will leap ahead of their competitors when ML can predict which business decisions will optimize their performance. (Pharma drug discovery appears to be aggressively pursuing this strategy.)

Mentions:#ML

Your thesis is extremely flawed and based completely off of unfounded opinion instead of industry knowledge or any facts. So no, it won't crash. They still have backorders of massive magnitude, and more companies will be investing; it's just getting going. Every tech company enlisting different copilots and other basic AI things is seeing huge results. Most tech these days needs an element of "AI", some form of model training, in order to offer advanced enough features to be competitive beyond older basic functionality, so more and more will be based off of it, requiring more ML, LLMs, etc., and the vision side of AI is just getting going. The bigger question is how much upside there can be, given how much is getting priced into this meteoric stock price rise, which is practically unbelievable. Question more about competition. But everyone is quite behind, and Tensor/CUDA is so ingrained that it will be a couple of years before anything can really make a dent at all. Nvidia was smart and made the software freely available (different than open source), so adoption is massive and there's no real reason to move off of it; meanwhile it drives enormous hardware sales, which they make crazy profit from, and they double dip with crypto miners. If they make a dumb move like trying to monetize Tensor/CUDA, that would accelerate folks moving to open-source alternatives and lead them to purchase more AMD and others, perhaps. However, at crazy hardware and energy costs, or cloud provider costs, efficiency is king, so even 10% speed wins make a difference in purchasing decisions. If you spend $5 million a year on inference/ML, it's enough to consider a change, let alone 20 or 30%. Hence why Nvidia is killing it due to software in conjunction with the hardware. Never mind the whole tooling ecosystem, etc. No crash in sight. Any profit taking will barely make a dent, also.

Mentions:#ML#AMD

Unfortunately, I'm not up to date on Intel's GPU offerings except for the bitching sessions that Steve has on Gamers Nexus when it comes to horrible Arc drivers (I ❤️ Steve, and he's given them too much slack). I've also heard of their DC DPU offerings but have little exposure to them that I can share. My main public sentiment on Intel is interest in how drastically differently, and how hungrily, they're approaching Lunar Lake. They're dropping hyper-threading, picking a better solution for backside power delivery, and finally talking perf/watt the way Steve Jobs discussed it when introducing Intel chips as Apple moved off of IBM POWER. Too many 180s and too much freshness to count. On one hand, it's impressive to see them making and moving forward on so many innovations, but on the other hand, it's like: what the hell were you doing for the last 10 years or so when everyone was eating your lunch and your laptops were boiling my balls?! Literally, the only folks touting anything Intel recently (that I can mention) were Meta, due to their highly customized Xeon Scalable SKUs and OCP solutions, and GN trying to prop up Arc in good faith, only to have Intel shoot themselves in the foot time after time 🎤 I mean, it's cool, but y'all need to win me back. Nvidia has their Tyson-esque ML/LLM/consumer and professional portfolio, AMD has a Conor McGregor-esque record with tons of rep (Ryzen, Epyc, RDNA, original Vega) but notable smackdowns (desktop graphics, LLM) that we still remember vividly, and Intel has... new phone, who dis? 🤨

Mentions:#IBM#ML

I work as an applied scientist in AI/ML and I believe there will be many efficiencies gained by ML in the future. There already are and have been for more than a decade. Most of them occur in less dazzling applications like recommender systems, search engines, ad placement, etc. The current GenAI chatbot hype being pandered to the laymen is just that, hype. It is nowhere near ready for primetime, as it isn't reliable enough for any business-critical or safety-critical application. It works fairly well as a writing aid as long as you're willing to fact-check it. It works for low-level customer support. Beyond that, there is very little adoption and productivity being gained from it on a broad scale (enterprise business, etc). There are a lot of fancy demos out there and money being raised, but very little in terms of applications being adopted at scale. I say this as someone working in the space. AI will continue to be transformative. We've just gotten way too far ahead of ourselves with GenAI specifically. The same thing happened during the dot-com bubble. The internet transformed the world, but the stock market was way too early. It took Microsoft 16 years to recover to its dot-com peak after the bubble burst. Oracle and IBM also took a decade or more to recover. Some of the top tech companies in terms of market cap during the dot-com era never recovered, including Cisco, Intel, etc. The bubble is already starting to pop if you pay attention to what's going on in the startup space where I work. Many of the unicorn AI startups are starting to fold or are looking for an early exit because they aren't gaining any traction with an actual product that generates enough revenue to justify continued investment. This includes Inflection AI, Humane, Stability AI, etc.: AI companies that raised hundreds of millions over the last year or two at billion-dollar valuations are facing massive down rounds, looking to sell IP, or getting acquired for pennies on the dollar.
There are a few winners at the top being propped up by massive investments from big tech (Anthropic, Mistral). Unless some actual profits start to get generated from viable business models soon, I expect investment in massive GPU clusters and LLMs to taper off. I think we see a shift to smaller expert models that can be trained more efficiently on less compute to do narrow tasks, instead of asking a foundation model to do everything. My two cents.

Mentions:#ML#IBM#IP

How do you think someone is going to achieve clean data? When working on ML algorithms/neural networks in the past (a research paper on Twitter misinformation, ~5 million tweets), cleaning the data took me about 4 months. Running and optimising the algorithms took about three days. So, great, the algorithms are fine, but do we even have enough engineers to properly clean all that data to make decent models?

Mentions:#ML
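The months of cleaning described above usually begin with mundane normalization. A minimal sketch for tweet-like text follows; the specific rules (strip URLs and @mentions, keep hashtag words, lowercase) are illustrative assumptions, not the commenter's actual pipeline, which would also handle deduplication, language filtering, and labeling.

```python
import re

def clean_tweet(text: str) -> str:
    """Basic normalization: strip URLs, @mentions, punctuation; collapse whitespace."""
    text = re.sub(r"https?://\S+", "", text)   # remove links
    text = re.sub(r"@\w+", "", text)           # remove @mentions
    text = re.sub(r"#", "", text)              # keep hashtag words, drop the '#'
    text = re.sub(r"[^\w\s']", " ", text)      # drop remaining punctuation/emoji
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return text.lower()

print(clean_tweet("BREAKING: $ML to the moon!!! 🚀 https://t.co/x @trader #AI"))
# → breaking ml to the moon ai
```

Even this toy version hints at why cleaning dominates the timeline: every rule is a judgment call (should cashtags like $ML survive?) that has to be checked against millions of messy examples.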

Yep, I got them in a 6-leg ML parlay tonight, hoping for the best

Mentions:#ML

AI in Video Editing

Automated Editing:
- Tools: AI-powered tools like Adobe Premiere Pro with Adobe Sensei, Magisto, and Lumen5 use machine learning to automatically edit videos by analyzing footage, selecting the best clips, and creating cohesive narratives.
- Features: These tools can automate tasks such as cutting and trimming, adding transitions, and synchronizing audio, making the editing process faster and more efficient.

Enhanced Effects and Visuals:
- Tools: Platforms like Runway ML and DeepArt.io allow users to apply AI-driven visual effects, color grading, and style transfers to their videos.
- Features: AI can enhance video quality, upscale resolution, stabilize shaky footage, and apply sophisticated effects that would be time-consuming to do manually.

Content Analysis and Tagging:
- Tools: IBM Watson Video Enrichment and Google Cloud Video Intelligence provide AI capabilities for analyzing video content, recognizing objects, scenes, and activities, and generating metadata for easier searching and organization.
- Features: AI can automatically tag and categorize video content, making it easier to manage large libraries of footage.

AI in Sound Editing

Noise Reduction and Enhancement:
- Tools: iZotope RX, Adobe Audition with Adobe Sensei, and Auphonic use AI to reduce background noise, remove unwanted sounds, and enhance audio quality.
- Features: These tools can automatically detect and repair audio issues, such as hums, clicks, pops, and distortions, improving the clarity and quality of recordings.

Automated Mixing and Mastering:
- Tools: LANDR and CloudBounce offer AI-powered mastering services that analyze and process tracks to optimize their sound quality for different listening environments.
- Features: AI can balance levels and apply equalization, compression, and other effects to create a polished final mix.

Speech Recognition and Transcription:
- Tools: Descript and Otter.ai use AI to transcribe audio and video content, providing accurate text versions of spoken words.
- Features: These tools can also identify speakers, translate languages, and generate subtitles, enhancing accessibility and searchability.

Documentation and Resources

Research Papers and Case Studies:
- IEEE Xplore: Access a wide range of research papers on AI applications in video and sound editing.
- SpringerLink: Provides numerous articles and case studies on AI in multimedia processing.

Industry Reports:
- Adobe: Regularly publishes reports and whitepapers on the impact of AI in creative industries, including video and sound editing.
- McKinsey & Company: Offers insights into how AI is transforming various sectors, including media and entertainment.

Online Tutorials and Demos:
- YouTube: Channels like Adobe Creative Cloud, Blackmagic Design, and iZotope provide tutorials and demonstrations of AI tools in action.
- Tool Websites: The official sites for Adobe Premiere Pro, iZotope RX, and Lumen5 often provide case studies, tutorials, and user testimonials showcasing AI capabilities.

Mentions:#ML#IBM

I think 5 years is actually quite conservative. Again, I believe you are treating this purely as a hardware problem - i.e., just design a better chip, and it comes down to how performant those chips are - and so underestimating just how much goes into this. It's actually both a hardware issue *and* a software issue. The parallel computing software and API that actually lets these ML tools do the complex linear algebra directly on the GPU - i.e., "talk" to the GPU - is critical here. And currently, TensorFlow (the largest AI platform *by far*) interfaces with CUDA, which is proprietary to Nvidia. There is a long chain of things that all have to go right here:

* First, MSFT needs to design a competitive chip.
* Then they need to create an actually cost-effective/marketable version of that chip.
* Then they need to secure fabrication, likely in Taiwan.
* Then they need to create a parallel computing architecture/API at rough parity with CUDA (again, *incredibly* non-trivial, low-level/machine code).
* Then they need the ML/AI frameworks to actually want, integrate, and support their new API.
* And then they need AI scientists/ML engineers around the industry to adopt the new computing API.

Each stage of this process has considerable design/development lag, and then market-adoption lag. It *will* happen, and if anyone can do it it's the brain trust at MSFT/AAPL, sure. But few of these stages can be done in parallel (hah), and each one has considerable lag associated with it.

I have yet to see a compelling argument for how they could any time soon - they're locked in, at least for the next 5-10 years, as *the* supplier for AI architecture. The individual companies building out individual products go up and down and bust cyclically, but they all go to Nvidia, because everything runs on Nvidia cards and all the ML tech (TensorFlow, PyTorch, etc.) interfaces with GPUs near-exclusively through CUDA. This moat will be crossed eventually, that much is inevitable - it's just a matter of time. But it's a slow, slow trudge. TF/PyTorch/Keras/etc. have only *just* recently opened up to non-CUDA architectures after 10+ years of being exclusive, and even then it's very preliminary/beta. The closest competitor is AMD, and that gap may as well be Usain Bolt vs. Dale the Intern, at least in the realm of ML parallel computing architecture. And even when a viable competitor finally rises, it's also a matter of getting *(extremely)* limited fab capacity to meet demand AND breaking into an industry that is overwhelmingly committed to massive Nvidia investments.
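The lock-in being described is framework-side as much as hardware-side. A toy sketch of that backend-preference behavior (the function, priority list, and backend names here are illustrative, not any real framework's API):

```python
def pick_backend(available: set[str]) -> str:
    """Illustrative caricature of framework behavior: reach for CUDA
    first, with other accelerator backends as later, second-class
    fallbacks. The ordering is an assumption for the sketch."""
    priority = ["cuda", "rocm", "oneapi", "metal", "cpu"]
    for backend in priority:
        if backend in available:
            return backend
    return "cpu"  # always possible, just slow

print(pick_backend({"cuda", "rocm", "cpu"}))  # -> "cuda"
print(pick_backend({"rocm", "cpu"}))          # -> "rocm"
```

The point of the toy: a competing chip only matters once it appears in (and is well supported by) lists like this inside every major framework, which is the multi-year adoption lag both comments describe.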

Mentions:#ML#AMD

There’s a lot more to it than that. Many thought like you that it would be a simple algo, but novel things happen all the time when driving. The lighting changes. The roads change. People make dumb choices, as do animals. The weather changes. These are just things that happen all the time. For a while, it was becoming very common to “trap” robotaxis with a single orange cone. What ML/AI can do is create models for these scenarios.

Mentions:#ML

Beyond networking, Google's TPUs are built with Broadcom, and Google is *heavily* invested in this area. Google's recent layoffs spared most of its AI/ML and TPU groups. It's also hiring a lot of AI/ML and chip engineers right now and building new AI data centers (e.g., Kansas City). Its data center expenditures for AI roughly doubled from about $6 billion in Q1 2023 to $12 billion in Q1 2024, and they expect these expenditures to keep increasing.

Mentions:#ML

Put it in FanDuel or DraftKings and put it on Mavericks ML today

Mentions:#ML

This. But I also suspect part of why ML framework dev is so slow is that companies hire entire teams of “ML engineers” and none of them actually understand how the hell the frameworks work. So beyond stringing together some layers (best case), they’re basically useless. The backends take time, but they’re not that hard to write: it’s just vectorized versions of each type of layer you could want, in a given backend language (CUDA, OpenCL, Vulkan, Intel oneAPI). Then of course you have the DNN and BLAS libs, but those just provide optimized routines; they aren’t strictly necessary.
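The "vectorized version of each layer type" point can be made concrete in a few lines. This toy dense layer with a fused ReLU stands in for what a backend kernel provides; NumPy here is purely illustrative of the idea (a real backend writes the same op in CUDA/OpenCL/etc.):

```python
import numpy as np

def dense_relu_forward(x, W, b):
    """One vectorized 'kernel': a fully connected layer as a single
    matmul plus a fused ReLU, instead of nested Python loops."""
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 64))   # batch of 32 samples, 64 features
W = rng.normal(size=(64, 16))   # layer weights
b = np.zeros(16)                # layer bias
out = dense_relu_forward(x, W, b)
print(out.shape)  # (32, 16)
```

Multiply this by every layer type (conv, attention, normalization, ...) times every backend language and you get the grind the comment is describing; the hard part is optimization and coverage, not conceptual difficulty.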

Mentions:#ML#DNN

Google has already switched to using TPUs since ages ago. Remind me how many actual ML teams outside of Google actually use TPUs?

Mentions:#ML

Because AI is not just LLMs. Generative models are applicable to a lot more things, and there is also a lot of work on new ML applications that would solve many problems we have in life. The medical field is going to be so different 5-10 years down the road, and all research fields are going to see drastic shifts.

Mentions:#ML

Machine learning is everywhere: medicine (e.g., automatic insulin pumps), military, economics, robotics, climatology. It's not just AI. It's ML that contributes to efficiency gains and improvements that boost earnings. But this will slowly become priced in; many companies have been profiting from ML for years.

Mentions:#ML

I personally use ML, Vanguard, and Fidelity, and Fidelity is my preferred choice as well. But I’ll make a small case for ML here, and it’s their Preferred Rewards membership, which will not be applicable for your child but is actually fairly decent once you get a good amount of money in there. At the Platinum Honors tier ($100K+) you get a 75% credit card rewards bonus. If you combine it with a Cash Preferred Rewards card (1.5%) and a Customized Cash Rewards card (3%), you can get 2.625% cash back on every single purchase and up to 5.25% on select categories.
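For the arithmetic behind those figures: a 75% rewards bonus multiplies the card's base rate by 1.75, which is where the 2.625% and 5.25% numbers come from (rates are the ones stated in the comment):

```python
def effective_cashback(base_rate: float, bonus: float = 0.75) -> float:
    """Preferred Rewards math as described in the comment:
    a 75% bonus multiplies the base cashback rate by 1.75."""
    return base_rate * (1 + bonus)

print(round(effective_cashback(0.015), 5))  # 1.5% base -> 2.625% everywhere
print(round(effective_cashback(0.03), 5))   # 3% base -> 5.25% on select categories
```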

Mentions:#ML

$ML MoneyLion

Mentions:#ML

Thank you sincerely. Senna hurts so I’ll try a larger dose of ML

Mentions:#ML

Also chip floorplan layout optimization is often aided by ML tools now

Mentions:#ML

Well. They do. The diffraction patterns of lithography masks are designed with AI tools, because multipatterning is a really hard problem and ML has helped them progress.

Mentions:#ML

Fidelity has a broader range of offerings, and I don't feel like I can trust ML as much because they are on the other side of so many transactions. It's easy to connect a Fidelity account to your bank account so you can make online transfers.

Mentions:#ML

There are translation engines being developed to port CUDA to AMD GPUs with minimal performance loss. Not a financial analyst, but I am an ML engineer. A translation engine with only a minimal performance hit could hurt Nvidia's dominance.

Mentions:#AMD#ML

You literally can buy TPU space to build ML models on the Google Cloud website lol

Mentions:#ML

So take the Mavs ML this guy is saying

Mentions:#ML

1. Paramount Global - The stock is trading at dirt-cheap prices at the moment because of a cyclical advertising downturn, and because investors don't think Paramount+ can compete with Netflix even though it's one of the fastest-growing streaming services. They can also benefit from generative AI reducing production costs, but haven't really been vocal about it.

2. Portillo's (PTLO) - A local restaurant chain that is always packed, incredibly popular in the Chicago area, and expanding throughout the country. It was very overvalued when it IPO'd so I was avoiding it, but it's now trading at a reasonable price.

3. Intel (INTC) - Trading at less than 5% of the value of Nvidia, even though their book value is higher. The thing is, products built around AI still need traditional compute hardware for things like running the web server, processing requests, etc. Additionally, the AI boom creates demand for fab space, and Intel can lease its production capacity for profit. They're also likely to get a ton of government subsidies, so you can profit at the taxpayer's expense.

4. REITs in general - There is a huge panic right now over real estate, and IMO the impact on REIT prices is not justified. REITs are down about 30% on average, and office REITs are down 90%. Most real estate distress is in privately held, highly leveraged office properties. REITs, on the other hand, carry quite safe amounts of leverage, and most REIT market cap is in sectors that are doing well, like data centers, self storage, logistics, and cell towers. Data center REITs in particular have a lot of potential to benefit from the AI boom, since AI applications require a lot of compute. Whereas an entire company's traditional applications could run on a few servers hosting dozens of virtual machines in a single rack, AI/ML applications deployed at scale require a ton of space and power.

A lot of drug companies are using machine learning to recognize chemicals that are likely to be effective drugs. These chemicals still need to go through the long, multi-year testing and approval process, so it isn’t creating profit yet. Cybersecurity companies are also using AI to recognize and intercept malicious activity on networks; PANW is a great example. Duolingo is using AI in its foreign-language software. The hype has really been about generative AI like ChatGPT. I’m more interested in ML (machine learning) as a transformative function. Adaptive pricing, for example, is executed with ML.

Mentions:#PANW#ML

I meant hype more in the sense that the way they market their data is over-the-top, not investor-interest ‘hype’. The CRO side of the business (legacy Quintiles) has fared very well during the sector-wide slowdown caused by interest rates. Being frank, I really don’t see IQV dropping much. My comment was more about expectations that the stock suddenly pops off because of the data assets. IQV has spent years and years building ML models on their datasets and they still haven’t found a silver bullet. Leveraging healthcare data in drug research and development is just an incredibly complex endeavor in a highly regulated industry.

Mentions:#IQV#ML

Haha, the current valuation is unsustainable over the long term. But I wouldn’t bet on it; the market will stay irrational long after I am broke. Also, Nvidia could shut down product development and it would still take 5-10 years for them to be dislodged. Not because they are that far ahead, but because software support in PyTorch and every other ML library would take that long to migrate to a CUDA competitor, even if the competing processor were better.

Mentions:#ML

I opened a vanguard account just because I couldn’t get good money market funds at ML. Good to know some options at ML

Mentions:#ML

lol I don't really know the intent of this post, but it's funny seeing so many comments dismiss it in the exact way the OP describes it would be dismissed. If it matters, I've been involved with AI for almost a decade now, and I've been all aboard the "hype" around it since [this bad boy dropped](https://youtu.be/TmPfTpjtdgg). It's crazy how I was ranting about this stuff 10 years ago, showing people how a (now very obsolete) algorithm *learned* how to play a game and how we'd soon see this thing "take over," and how those people could not give a shit and barely acknowledged that what I was showing them was even remotely interesting. Those same people are now ranting about how fearful they are of the inevitable AI takeover, or are dismissive of such a reality because they can't fathom AI getting any better than it is right now. And here I am, just watching this timeline unfold basically as expected. I digress; my point here is to highlight this part of the OP's post:

>That is not what is happening with AI, and it isn’t what is going to happen in the future. Progress is being made at a blistering rate.

People who aren't constantly paying attention to every single development and published paper have no idea what's going on. They just wake up one day and go, "Oh wow, look at that, AI can just make videos and music out of nowhere. This must be something that was developed overnight and is a magical thing that can only happen once or twice in my life. Any discussion about further development is silly and just hype!" The people who have actually been paying attention and know something more than what's on the surface are all generally looking in the same direction in terms of where they see this stuff ending up. It's a legitimate technology that offers legitimate value, and the people behind it aren't bullshitting. The limits of AI development have, frankly, been not so much a matter of limited compute/data as a matter of interest.
When I started the first year of my Masters, there were about 150 students in our university's "Intro to ML" class. The next year, it was 300. Our school had to add a number of AI classes and hire faculty to support the explosion in interest, and many of those students are now doing PhD research in AI or working on AI in industry. The amount of funding pouring into these projects has been increasing rapidly over the last decade, including *a ton* of support from the government, military, etc. And this is definitely not limited to the US, of course. Basically, a lot of people see a lot of value in this stuff, and it's not just some shiny toy they can sell to consumers to make a quick buck. Very smart people and very powerful people recognize *this* is it, and they need to be on top of it or they'll be left behind. They know that it's a race to "something," and although I won't pretend to know what that "something" is, I will say that the first thing I thought when I saw that DQN demonstration video 10 years ago was: "if this AI can learn to play a game, you can just treat making a better AI as a game, and if it can learn to do that, then the result of that game will be an even better AI, and if you repeat that process, you're going to end up with *something*." Singularity or not, that process has already begun: products like ChatGPT and Copilot are already being used to develop better AI. It's more indirect and clunky than just unleashing an AI and watching it become sentient, but it's effectively the same process, and that process is only getting more efficient *every day*. As for whether or not this is a bubble: that depends entirely on whether this thing will burst. To be frank, the progress of AI, from my perspective, has actually been slower than what I thought it would be over the last decade.
It's possible that people are expecting stuff to happen much faster than it actually will, and *that* can definitely result in this hype collapsing. But, to be clear, companies like OpenAI are basically sprinting towards what they believe will be a technology that will literally usher in a new era of mankind. I'm not saying they'll actually be able to, but (a) if they do, then that's it and (b) even if they don't, the technology they'll have built along the way will be incredibly valuable (as it already is).

Mentions:#ML

I am currently running a Copilot subscription. It’s a lot of work just to get it working properly. I’m learning coding now. It's great for workflow automation: change an invoice to PDF, then file it in the appropriate folder. That’s weeks of work for a noob like me. But ask it about mining operations or investing and it has returned results I couldn’t find on my own, complete with the “investing is high risk blah blah” disclaimer. I know economic theory and ML have gone hand in hand for decades and they are producing results. I’ve spent the better part of my year looking for consumer-level access and it does not exist, for us

Mentions:#ML

I don’t usually post in r/wallstreetbets but enjoy reading the theses here, and this one is my cup of tea. I have worked in AI and ML since 2010, and have moved from academia and research to the financial and investment industries, where my role is predominantly innovating processes using AI. There are a few things I’ve noticed which should give a strong hint as to what might unfold short to medium term.

1. Most clients are extremely reluctant to adopt generative AI, citing legislation, liability, client privacy, and so on. Jobs and tasks which take hours can be done in seconds, but they choose not to. Out of 11 possible projects this year, only 2 went through!

2. A few other clients have started dipping their toes into LLM use, but vastly prefer off-the-shelf models instead of custom ones. Companies like NVDA, MSFT, AMZN, DigitalOcean, Anthropic, Mistral, etc. are the ones who will reap the immediate benefits from PaaS revenue when ML engineers use their APIs.

3. GPUs are in extreme demand; the instances we need have waiting queues and so on. I had to pay AWS an extortionate amount to keep a very large instance reserved. Whether that’s artificially inflated or not, I don’t know. However, I have the suspicion that GPUs won’t be what keeps driving AI forward in the long term. Medium to short term, absolutely yes.

Now, IMHO an AGI is not yet within grasp. It might be fairly soon after quantum computing becomes mainstream; search for who develops that technology, and that would be my bet on who will achieve it and who will mostly reap the benefits. Furthermore, whereas Transformer-based models have revolutionised the industry (and am I glad they did!), I doubt they are close to an AGI. But they did form the cornerstone and got everyone excited about AI. If you had asked me five years ago, back when deep learning was in its infancy, whether I ever thought we’d see AGI in our lifetimes, I would have said probably not. Now I’m starting to change my mind.

The repercussions of long-term AI (after the bubble bursts and the wind settles) are, as you explained, profound, not just for the markets but for human society as a whole. What’s interesting is that there are now moves to limit and control AI development from various stakeholders: governments and legislators, those who own the data (the Financial Times, Reddit, etc.), and of course ethics committees. I’d be surprised if OpenAI and other similar companies get free rein to develop and do as they please in the near future anyway. So the long-term path to AI/AGI isn’t just paved by technological progress, but by market, governmental, and user adoption. Currently, what I’m seeing is that the industries which stand to benefit greatly from it are very hesitant to adopt it.

Take a look at https://arcprize.org/ Try to solve the first puzzle. Now consider that a data center consuming massive amounts of power cannot reason through this problem, and progress on this benchmark is leveling out. Watch this video https://www.youtube.com/watch?v=dDUC-LqVrPU&t=15s Or read the paper https://arxiv.org/abs/2404.04125 Most people I know who are looking at ML and not just eating up the hype do not believe that the current methods of ML will lead to AGI, and we are only on a first step to understanding intelligence now.

Mentions:#ML#AGI

Talk to any serious PhD in ML and they'll tell you it's a bubble

Mentions:#ML

You don't think chemical interactions are calculable? What does "do not have AI equivalents" mean? You realize you can achieve arbitrary function/modeling complexity with simple y=mx+b's, especially when combined non-linearly trillions of times? You claim to have this great foresight, but that means you saw ChatGPT coming too, right? And I hope you made the astronomical gains such knowledge empowers... LLMs are not the point - the fact that LLMs can arise from simple computation *is*. (Along with all the other fields where ML techniques have left human heuristics in the dust - the real AI hype is about the capabilities of compute, not LLMs.)
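The "simple y=mx+b's combined non-linearly" claim has a tiny concrete instance: two linear units passed through a ReLU reproduce |x| exactly, and stacking more such units is what buys neural networks their arbitrary approximation power. A minimal sketch (names are illustrative):

```python
def relu(z: float) -> float:
    """The nonlinearity between the linear pieces."""
    return max(z, 0.0)

def tiny_net(x: float) -> float:
    """Two y = m*x + b units combined through a nonlinearity give |x|:
    relu(1*x + 0) + relu(-1*x + 0). More units of the same shape can
    approximate far more complex functions."""
    return 1.0 * relu(1.0 * x + 0.0) + 1.0 * relu(-1.0 * x + 0.0)

print([tiny_net(v) for v in (-2.0, -0.5, 0.0, 3.0)])  # [2.0, 0.5, 0.0, 3.0]
```

The nonlinearity is essential: summing the two linear units *without* the ReLU collapses back to a single line, which is the commenter's point about where the expressive power comes from.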

Mentions:#ML

Dude's post history is 1) steroids, 2) comments about being cheated on, and 3) discussions about his FIRST SWE job with Booz Allen (not what one thinks of for tech, AI, or ML). But seriously, you're ignoring NVDA's moat right now. 98% market share of anything is pretty good. Now consider that the market is something all of the NASDAQ 100 wants to own, and you have yourself a serious growth story. Lastly, yes, AAPL makes more revenue, but NVDA's profit margin is insane right now (+75%)!

Mentions:#ML#AAPL

Bye den considering more crackdowns on China's access to chips; at what point do people stop saying you can't monetize AI and start reading sci-fi, or better yet, ML textbooks??

Mentions:#ML

>In January, Tesla showed off Optimus robots folding laundry in a demo video that was immediately criticized by robotics engineers for being deceptive. The robots were not autonomous, but were rather being operated with humans at the controls. So in theory, humans remotely operating a robot can be used to train an AI/ML algorithm, such that it eventually becomes partially or fully autonomous. IMO robotics is the next phase of AI after generative AI- If Waymo can make fully autonomous cars, which handle millions of potential variables, then there are jobs that are way more repetitive that can be automated with AI That being said, Musk is always overpromising and underdelivering.

Mentions:#ML

Ok, I literally train ML models as my daily work, and this guy knows nothing about the tech. NVDA builds software, namely CUDA, which supports the most popular AI infra much better than AMD does

Mentions:#ML#NVDA#AMD

I’m going to choose to follow what you’re saying because the title implies that you know more about ML than 99.9% of people on this subreddit.

Mentions:#ML

First, spend 10 hrs on YouTube searching for future uses of AI and ML. Then unleash your imagination. AI will change the world at exponential speed, for good or bad. Medical breakthroughs, cures, and new treatments are what I’m personally most excited about. Terminator (w/ Arnold) is the other side of that coin, but it’s all possible. NVDA’s Blackwell is capable of compiling the entire internet and emailing it to you in 1 second. Marinate on that. Use your imagination. Imagine you’re worth 50B. Is 10B worth immortality?

Mentions:#ML#NVDA

>I work in tech So you work in the tech industry/a tech company, but not necessarily a tech role? > studying AI as an engineer. So you're a CS student in an intro to ML/AI class?

Mentions:#ML

That’s kind of my point. Right now LLMs are cool and sexy but ultimately they need to make money for the investment to continue, and it remains unclear what the use cases are for GPTs to generate much revenue. Companies have been using ML for years to refine and improve their algorithms and I don’t think that improving those algorithms with better/faster chips creates the transformational use cases that people are dreaming of when they talk about AI.

Mentions:#ML

No one has an ML training infrastructure set up for large-scale learning for autonomous vehicles like Tesla does. Being ahead matters here. I have sat in both, and nothing right now beats Tesla Autopilot in commercial vehicles. Waymos are excellent too, but in a different segment that Tesla can beat over a few years. All the arguments against everything else are valid, but self-driving is where no one else is close. What's my angle here? Nothing. I don't care about Elon or the stock. I am just a software nerd who is super deep in ML, and no one has an AI training setup like Tesla does for things running in the real world (like cars and robots)

Mentions:#ML

LOL, they tried recruiting me. Their "engineers" seemed like shit. Stop with the Kool-Aid. There is nothing their ML/AI does better than a big bank's

Mentions:#ML

I can't imagine not having different strategies. Just make sure they're all different and mutually exclusive:
- long term / short term
- volatility
- event-based
- AI/ML
- arbitrage
- equity long 130/30
There are a ton of other strategies out there, and every one has its time in the sun. So run all of them so that your PnL won't fluctuate too much

Mentions:#ML

Algos are gonna absolutely dump at 2:00:02. Now, that doesn't mean the market is actually gonna go down, or even that it won't close at +2% after JPOW speaks... it just means that whatever the ML models read as sentiment in the notes is going to trigger an auto sell.
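A caricature of the pipeline that comment describes, purely for illustration: score the text, fire a rule on the score. A real desk would use a trained model and richer signals; the keyword lists, function name, and threshold here are all made up for the sketch.

```python
def fomc_autotrade(statement: str) -> str:
    """Toy sentiment trigger: count dovish vs hawkish keywords and
    fire a rule on the net score. Everything here is illustrative."""
    hawkish = {"inflation", "restrictive", "tighten", "higher"}
    dovish = {"cuts", "easing", "progress", "confidence"}
    words = statement.lower().split()
    score = sum(w in dovish for w in words) - sum(w in hawkish for w in words)
    return "SELL" if score < 0 else "BUY"

print(fomc_autotrade("Inflation remains elevated and policy must stay restrictive"))
# -> "SELL"
```

The fragility is the point: the rule fires on the wording of the notes, not on where the market actually ends up after the press conference.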

Mentions:#ML

Well in that case you have me beat on ML experience, but if there is one thing I have learned about machine learning over the years it's that demos and marketing often far exceed the actual capabilities. > I'm no Apple shill. I use a Pixel and a PC. But what they've done, is build a useful, non-gimmick AI ecosystem with practical uses that will massively improve their current digital assistant. Well for what it's worth, that's what they **claim** to have done. I will believe it when I see it. I don't doubt that Siri will be improved, but that isn't saying very much. I also wouldn't bet the farm that many people start using Siri significantly more than before, or that it will be a huge selling point for iPhone.

Mentions:#ML

Can you provide meaningful examples where the market is pricing in too much hype or overconfidence in a particular capability or product? It seems like most of the gains have been related to hardware and compute, and less so on the service and consumer product side. The exception is Apple, which could buy, or outspend on marketing, any competitor on earth, barring an antitrust action on an acquisition. Not trying to be pedantic, but I really want to understand where the "bubbles" people refer to in AI/ML seem to be.

Mentions:#ML

tl;dr I’m gay and my dick is small Anyone else ever feel like we’re reliving the pre-2018 market euphoria? Remember how miserable the volatility was then? The more ridiculously green days we get in a row, the more a severe correction feels inevitable.. maybe not now, but soon. ML/AI is fluffing the tech sector and overextending it far beyond realistic levels - as soon as it fails to deliver on the growth it’s promised, the downside IMO will be violent. Worse market sentiment, higher earnings pressure, massive consumer debt all having a chance to catch up…

Mentions:#ML

I'm an engineer in the industry who works with clusters of Nvidia GPUs for ML workloads and the amount of eviction we get due to Amazon and Google running out of capacity is still just as bad as it's been for the past few years. I don't see the demand falling annnny time soon.

Mentions:#ML

I have no idea how it will play out, but I would push back on the idea that people will want standalone AI apps compared to what Apple is offering. Apple is going to be able to use the device to find suggestions and things for you, like the demo where the person is talking about their mom flying in and the AI is able to look through text messages, find the flight info, and make restaurant suggestions. I do think a lot of the AI stuff makes more sense built into the device, and that's something people will want to take advantage of. I'm more in the camp that machine learning (ML) is a bigger revenue driver for enterprise; LLMs might offer some, but they aren't as good as ML.

Mentions:#ML

Hard to say. I mean, it feels like regardless of whether it's Pro or not, Apple wants more of its products using the M chip. I'm sure it means much better margins for them and is better for the overall developer environment. Just my personal take, but Apple showed some of the power of LLMs, and it's cool that a lot of the features are just baked into iOS. I still think LLMs have less of a path to ROI compared to things like machine learning (ML) in terms of enterprise spend. We are about due regardless for "AI" PCs and smartphones, since most hardware upgrade cycles are like 3-5 years and the pandemic, when people last upgraded their devices en masse, was about 3-4 years ago.

Mentions:#ML

Just because your grandpa's calculator can run ML algorithms doesn't mean it suddenly became Skynet. Machine Learning is just one trick in AI's bag. If you think they're the same, you probably think microwaving a hot dog makes you a chef. Do some reading before you embarrass yourself next time.

Mentions:#ML

I know you're bearish on some of the AI stuff, which makes sense. I do think you have to think of AI as three things: machine learning (ML), large language models (LLMs), and then artificial general intelligence. Before LLMs, a lot of AI work was just ML; that's basically what SNOW/PLTR/DeepMind/etc. do: they take data and run models on it to help make predictions. A lot of the new AI buzz is around LLMs. Personally I don't think LLMs are going to have a ton of ROI compared to ML. The thing is, with the Apple announcement, they showed off a lot of really cool things with ML in particular. I do think people will want stuff like this: [https://x.com/ishanagarwal24/status/1800232391551951156](https://x.com/ishanagarwal24/status/1800232391551951156)

Mentions:#ML#SNOW#PLTR

VOO and VT are my no brain investments and make up a good majority of my investments. I only dabble in stock picking when the positions are pretty obvious. Thank you 2016 me for being a PC and ML nerd and buying a decent amount of Nvidia, AMD, and Tesla. But I have no idea what to pick at the moment. Lol.

Apple is trying to not say AI lol We're 45 minutes into the WWDC keynote presentation and no one's said AI or "artificial intelligence." Instead, we've heard presenters say "intelligence" or "machine learning." And every time they say "insightful," know that's code for AI/ML.

Mentions:#ML

Apple is rumored to have negotiated a lifetime architecture license from Arm on unknown but best-in-market terms - i.e., Apple gets all of Arm's updates and innovations and doesn't pay anything extra beyond its original license, unlike the rest of the market. In the real world, this means every chip designer taking Arm IP needs Arm to out-design Apple to offer a better chip. That's a tall order on CPU, but more possible on AI/ML inference, graphics, and interconnect/fabric.

Mentions:#IP#ML

You must be trolling. How are you an AI/ML scientist and can’t see how crucial memory/HBMs will be in this space? How will the data yielded from AI models that run millions and billions of simulations (for things like drug discovery, as an example) be stored and processed? Micron has a crucial role in this space and there’s a reason the Biden admin just gave them billions and why they are working with NVIDIA.

Mentions:#ML

Sorry you don’t understand the basics. I work in AI/ML as an applied scientist. Lol.

Mentions:#ML

They've done ML on sensor and in the cloud for years.

Mentions:#ML

It's going to be fine....until the grand dipshit himself declares Martial Law, and then stays in power until he dies, becoming an actual American Dictator (oh, and I know there will be multiple responses of "there is already a dictator" blah, blah, blah). No....Trump will end America as we know it. He's now a convicted felon who, while he can't vote, will be allowed to be president; he will force them to give him presidential immunity, and then he declares ML. Game over. Welcome to Russia #2.

Mentions:#ML

Yep. Been saying this forever. I’m not even deep in ML and it’s been stupidly obvious. Anyone who gets caught out by all this was just stupid. The growth curve was visible a long time ago.

Mentions:#ML

I agree with everything you've said and have independently arrived at the same conclusions myself (also a software engineer, working with ML for the last decade-plus). I have already started buying puts (farthest out) because it is impossible to predict when the downturn will start, just like no one can reliably time the market. I'm viewing this as a kind of DCA into my conviction, except this time it's a bearish one. I plan to keep adding to puts and eventually rolling them out if the bubble persists. The return when the bubble pops will make sitting on losses in the short term worth it.

Mentions:#ML

https://www.botkeeper.com/blog/whats-the-difference-between-ai-and-ml "While the two terms are related, they’re not exactly interchangeable. AI is the idea that a computer or machine can think in the same manner we do, like visual perception, decision-making, voice recognition, and translating language. ML, on the other hand, is a sort of subset of AI that instructs a machine on how to learn based on repetition and data processing—the more you feed it, the more it learns."
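To make the quote's "learn based on repetition and data processing" concrete, here is a minimal sketch of that idea: fitting a line to example data so the machine infers the rule instead of being told it. This is an illustrative toy (the data and numbers are made up, and it assumes NumPy is available), not anything from the linked article.

```python
import numpy as np

# Toy "machine learning": recover y = w*x + b from example data alone.
# The more (x, y) pairs we feed it, the better the fit -- the
# "repetition and data processing" the quote describes.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.1])  # roughly y = 2x + 1, plus noise

# Least-squares solve for the parameters [w, b]
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(w, 2), round(b, 2))  # close to w=2, b=1
```

Nobody hard-coded "2" or "1" anywhere: the parameters came out of the data. That narrow pattern-fitting is ML; the broader idea of a machine that perceives, decides, and reasons like we do is AI, which is why the two terms aren't interchangeable.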

Mentions:#ML