Reddit Posts
King for a Day on Plus500 / Fool for a Lifetime in RL
The Emergence of Money Market Funds This Year
META Crushes First Quarter 2023 Results!
Meta's stock price is coming out of the doldrums: where is Meta's buying opportunity?
Why hasn't META taken advantage of the AI trend?
This is what happens when you trade (mostly) options and you don't know what you're doing. Don't let this happen to you. ($66k RL)
Ralph Lauren stock rises as pricing moves lift quarterly results (NYSE:RL)
Best Fashion Stocks to Buy Now in 2023: Top Clothing Stocks
Why is Meta stock tanking? 'The wrong number at the wrong time,' analyst explains
Want to make a RL difference? Write your congress/house person to change the reporting laws on congress/house people insider trading
Thinking about shorting the market? Well today is a great day to get in!
REMINDER Meta owns Facebook and Instagram. It seems everyone here forgot about it
Rocket Lab (RKLB) - Stage Recovery Attempt Pending = Huge Catalyst!
Stop it with the Wendy’s and Wife’s BF Jokes
Scummy old has-been Cramer, pimping RL after it bounced 10 points in two days
$DWACW Huge Discount Explained. Potential short squeeze incoming? DD inside
Investing in China; outlook in the short term or even long term?
Hopefully you’re better at investing than RL. HOLD THAT L LIKE YOU HOLD STOCKS
Some Rocketlab(RKLB) information for the Big Brain space chimps
Rocketlab announced a new production line for reaction wheels, a spacecraft component. Based on my linked calculations, I estimate that Rocketlab will earn 90 MILLION dollars per year from manufacturing 2000 wheels a year. The wheels will go into 585 satellites per year; meaning RL has large orders
Micron Technology $MU might be the next big thing this week.
Recommend buying Carnegie Clean Energy (CWGYF) on the following news: Carnegie in the spotlight at HPE Discover 2021
BlackSky 4.0 - a deep dive (for Reddit) into BlackSky's history and competitive landscape
SPAC FLEET DIRECTORY brought to you by SUPERNOVA and TORNADO!
Mentions
Valid skepticism about the study design. But this is the only way a research study (i.e., academic rather than commercial) can be done: through simulation. No financial institution will tell the public how their algos work, let alone share historical data of real bot trades for research. The point here is that the technology/infrastructure has enabled a new generation of bots: "While traditional algorithmic trading relies on static, hardcoded rules defined by humans, RL-based trading algorithms autonomously optimize their strategies through self-learning, trial-and-error interactions with the market and adapt in real time based on observed outcomes."

Re #2: please read the research showing the "punishment" mechanism of the bots' "cartel" behavior.

Re #3: exactly. But pricing away from "fundamentals" is the reason for volatility, a fragile market, and, more importantly, inefficient use of capital.
DeepMind truly feels like the cutting edge of humanity AI technologies, the kind of feeling that OpenAI used to give. I still remember how amazing the spinning up pages were as RL learning materials. I recommended it to so many people. Their RL Dota project was also fascinating. It got me interested in RL and learning based controls in general. Such a shame that OpenAI has fallen to whatever it is now.
These hate each other so bad in RL
But they still do, though. They hallucinate a lot. Google was focused on bigger specialised models through RL: their Alpha series of models. One of them even earned them a Nobel.
What kills OpenAI for me is that there is no profitability model beyond being an "LLM provider," effectively competing on price: a commodity. No other value added, no problem to solve. At least Google has pushed the boundaries in applications using RL etc. OpenAI has not pursued applications beyond testing and benchmarking for intelligence to get their models ranked. They will get dumped by Microsoft eventually, or absorbed into it as a dept and die.
> Yes, except the exact opposite. By RL I assume you mean RLHF (or its derivatives), which has been around for years. DeepSeek didn't provide any breakthroughs; they simply used a clean dataset instead of dumping all they could find and hoping for the best. "Thinking" is simply extending contextual information; it's self-iteration. What in there suggests we're "barely learning"?

RLHF training for LLMs is new. Industry is now moving towards DPO.

> DeepSeek didn't provide any breakthroughs; they simply used a clean dataset instead of dumping all they could find and hoping for the best.

Nope. They used novel training techniques, including:

* After the RL-only phase (Zero), they do cold-start data + further RL + SFT to refine readability and alignment
* Their training pipeline explicitly encourages chain-of-thought reasoning

> "Thinking" is simply extending contextual information; it's self-iteration.

Nope. Thinking models are trained to think during training.

> https://arxiv.org/pdf/2211.04325

Did you read this paper yourself? Come on, man.

> We have projected the growth trends in both the training dataset sizes used for state-of-the-art language models and the total stock of available human-generated public text data. Our analysis suggests that, if rapid growth in dataset sizes continues, **models will utilize the full supply of public human text data at some point between 2026 and 2032,** or one or two years earlier if frontier models are overtrained. At this point, the availability of public human text data may become a limiting factor in further scaling of language models.

> However, after accounting for steady improvements in data efficiency and the promise of techniques like transfer learning and synthetic data generation, **it is likely that we will be able to overcome this bottleneck** in the availability of public human text data
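Since the comment above mentions the industry moving toward DPO: here's a minimal sketch of the DPO objective for a single (chosen, rejected) preference pair. The beta value and the log-prob numbers are made up for illustration; this is not anyone's production training code.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (chosen w, rejected l) pair: -log sigmoid of the
    beta-scaled margin between the policy's and a frozen reference
    model's relative preference for the chosen answer."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy already prefers the chosen answer more than the
# reference does, the margin is positive and the loss falls below log 2.
loss = dpo_loss(logp_w=-10.0, logp_l=-14.0, ref_logp_w=-12.0, ref_logp_l=-12.0)
print(round(loss, 3))  # 0.513
```

The point of the reference terms is that the policy is only rewarded for preferring the chosen answer *more than the reference model already does*, which keeps it from drifting arbitrarily far from the starting model.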
> The source is that there continues to be huge breakthroughs in training techniques. See RL, Deepseek breakthroughs, Thinking models, etc.

Yes, except the exact opposite. By RL I assume you mean RLHF (or its derivatives), which has been around for years. DeepSeek didn't provide any breakthroughs; they simply used a clean dataset instead of dumping all they could find and hoping for the best. "Thinking" is simply extending contextual information; it's self-iteration. What in there suggests we're "barely learning"?

> Lmao. Source?

https://arxiv.org/pdf/2211.04325
> Lmao. Source?

The source is that there continues to be huge breakthroughs in training techniques. See RL, the DeepSeek breakthroughs, thinking models, etc.

> If anything, we are barely learning what to do when there's no more new unscrapped data from the internet.

Lmao. Source?
That’s kinda right and wrong. I know how they work; I build them. I have two things to say. First, we don’t really know how humans solve “new problems”. Mostly people combine ideas they know to form new ones. Also, if I had to pick a number out of my ass, 90%+ of people don’t do novel problem solving, so you can still automate away the tasks 90% of people are currently doing. Second, RL can and will solve novel stuff; check out AlphaZero, AlphaGo move 37, AlphaFold, etc. And LLMs now incorporate RL. None of these are fully solved, but we are not too far away either.
And your claim about it not being able to generate truly novel ideas is patently false as well. RL systems can generate novel outcomes, as has been shown by AlphaZero and AlphaGo. You don’t know what you’re talking about
RL is a strong brand name, so I can see it continuing to perform well. Plus, it's a different play from the AI buildout lol.
Yeah, boring, but wanted to get something like that with RL. GTX is a bit more of a growth play and more my style. GILT was a small position, but another name I've been watching for like a year; I wanted to pull the trigger after a bit of a pullback and the ER being solid.
Nice moves! RL has been a solid performer. GILT has solid fundamentals in an interesting space.
Ended up moving some capital around and finally pulled the trigger on GILT, RL, and GTX.
I hate it whenever he tweets; he puts Christian Bale’s photo in there. Like, bro, that’s not what you look like in RL, man. Get a life.
"Berenberg Bank has recently updated its rating for Rolls-Royce Holdings plc, raising it from a "sell" to a "hold" rating in October 2025, with an increased price target of GBX 1,080. This upgrade reflects Berenberg's positive outlook on the long-term growth of Rolls-Royce's engine deliveries, particularly for large aircraft, and the company's improved internal performance, although the "hold" rating suggests caution remains. This is a change from an earlier "sell" rating in January 2024, which was based on concerns about the XWB-97 engine and the company's future plans."
Going to make those video game explosions feel like RL.
> Anything that you can reduce into a Markov decision process can be mastered by very rudimentary RL algorithms

Except these systems have no fidelity, and if you have to hire someone just to check the output of the AI, then what's the point?
Not only are actual useful adoption rates very low, it's still advancing much, much faster than people realize. I work in generative imaging development. The stuff that we fuck around with and never show the public, because we plan to fold it into something else as part of a multimodal model, is mind-blowing.

Also, people that think AI is just LLMs are literally stupid. Most jobs can be reduced to a Markov decision process. Anything that you can reduce to a Markov decision process can be mastered by very rudimentary RL algorithms.

The main barrier right now is that we are advancing the space too fast for anyone to have the time to develop sufficient knowledge of the current SOTA to educate businesses that can benefit from what's out there on how they need to change to position themselves to benefit from "AI".
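To make the MDP claim above concrete: a hedged sketch, using a toy 5-state corridor environment of my own invention (nothing from the comment), showing that vanilla tabular Q-learning, about as rudimentary as RL gets, masters it.

```python
import random

# Toy MDP: a 5-state corridor; reaching the right end pays reward 1.
# The environment and hyperparameters are illustrative assumptions.
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = (+1, -1)          # step right / step left

def env_step(state, action):
    """Deterministic corridor transition; the episode ends at the goal."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):                         # episodes
    s, done, steps = 0, False, 0
    while not done and steps < 100:          # step cap for safety
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = env_step(s, a)
        bootstrap = 0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + bootstrap - Q[(s, a)])   # TD update
        s, steps = s2, steps + 1

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(greedy)  # every non-terminal state learns to head right: [1, 1, 1, 1]
```

The agent is never told the rules; it learns the "head right" policy purely from trial, error, and reward, which is the whole point of the MDP framing.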
Do whatever suits your investing style. I stick with MSFT, NVDA, RL, and VITAX.
Yes, but it appears agentic AI has so many challenges outside a sandbox environment. Plus the challenges around RL, not to mention the cost to implement. Couple that with companies having data everywhere: siloed applications/data, legacy applications, databases with different schemas, etc.

The other issue is companies slapping that AI label on when it might not be that, or the wrapper companies. Right now it seems ROI justification outside of certain segments is tough. I have been around enough software (Enterprise AE, SaaS, on-premise, hybrid) and seen how major companies have such mixed environments/infrastructure, and data everywhere. There are so many companies that don't really know what they have across the footprint. That includes licensing. How about non-standard data, like companies that use terms or info specific to their org?

Then from a dev standpoint, what are the productivity gains? You have devs at different levels across orgs. You would still need to understand the output from AI and confirm it is correct. As it is, security can be scary, and now we're introducing AI. I just see challenges if you don't have strict governance across an org, which is very challenging across major corporations. Companies like to appear that they have all this structure, yet when you get behind the scenes you realize that isn't the case.

Sure, summary and note-taking are standard. I even helped a company years ago that was in beta in the space for Zoom meetings, doing some really advanced stuff beyond just summaries; I helped them craft functionality around the sales side. They have done well and get mentioned on LI and Reddit for what they do. I am just putting some questions out there.
You know that character in Parks and Rec who both works for the government and also hates the government and would be happy to see it all just disappear? Those people exist in RL. They don't have the imagination required to foresee the disaster that would follow. But they're there. They exist. This guy's one of them.
I am up 10% in RL but I should have gotten into it months previously. It is a very good business.
Ralph Lauren, $RL. The economy is better than the doom-and-gloom predictions, and the current Ralph Lauren lineup is stunning. Undervalued for sure, at a good valuation.
McDonald's for me is meh. It is so expensive to eat there and I understand the business is much more than just buying food, but I don't see the appeal of it at these levels. I recently bought a single share of RL (Ralph Lauren Corp) just to see what it had in it based on its depressed levels and it has happily surprised me. I basically figured -- huh, this is cheap, and they make nice clothes and have a nice social media presence with beautiful people, I bet there is room for growth here.
There is zero reason to assume that AI infrastructure spend will accelerate at current rates. We've already seen training spend drop off and money shift to spending on RL and inference. As model architecture gets better, as more efficient training and RL techniques are developed, and most importantly IF companies don't see an ROI for insane inference compute spend, AI infrastructure spend will absolutely be impacted. If you really take a step back and look at both the innovations in the AI field and a clear desire from China to innovate so that they can decouple from Nvidia, then there is no certainty that infrastructure spend will grow at projected rates. In fact, there are very obvious risks to that growth trajectory.
Doesn't get talked about a lot, but it's kind of wild how strong of a stock $RL is. The thing doesn't look too expensive on fundamentals, has high insider ownership, is still growing revenue high single to double digits, and pays a 1.13% dividend.
It’s not just laziness. It’s the new normal - k-shaped economy. Upper middle class will continue to use doordash. I am also kinda shocked that lower income people also doordash. Doordash is basically a luxury good ticker like LVMH or RL.
This is true, but to make the most of them, you really need to be training an architecture, or doing inference on an architecture, that was designed from scratch to leverage the TPU's capabilities. NVDA and Google have inherently different design goals. NVDA wants something that is flexible for everyone, so they do tiled matrix multiplication, for example, on 8x8 matrix multipliers. Google is like nah fam, we like our big chonky matrices: 256x256 or go home. Which means that for specific things, Google's TPUs are actually quite a bit faster and more energy efficient. But those things are basically limited to internal Google projects, like everything their DeepMind division does with RL. It's still curious that Google hasn't seen the 4 trillion dollar elephant in the room without any real competitors and been like, hey, why don't we start selling these things? They seem to be betting that they are going to win the AI war, and that the winning architectures will be the ones that require a design philosophy more aligned with the TPU than the jack-of-all-trades approach of NVDA.
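For what it's worth, the tile-size difference above only changes how the multiply is scheduled, never the answer. A toy pure-Python sketch, where `tile` stands in for the hardware matrix-unit width (8 for the flexible NVDA-style units, 256 for the chonky TPU ones); nothing here is real GPU/TPU code:

```python
def matmul_tiled(A, B, tile):
    """C = A @ B accumulated tile-by-tile, the way systolic arrays do it.
    A, B are lists of rows; `tile` is the matrix-unit width."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                # one tile-sized partial product, accumulated into C
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        C[i][j] += sum(A[i][p] * B[p][j]
                                       for p in range(p0, min(p0 + tile, k)))
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_tiled(A, B, tile=1))  # [[19.0, 22.0], [43.0, 50.0]]
print(matmul_tiled(A, B, tile=2))  # same result, bigger tiles
```

On real hardware, the tile size changes utilization and energy per multiply (big tiles waste silicon on small matrices, small tiles add scheduling overhead on big ones), which is exactly the flexibility-vs-efficiency tradeoff the comment describes.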
You’ve actually built a really sharp basket of “AI-infrastructure leverage plays.” What ties all of these together — Nebius, CoreWeave, IREN, APLD — is that they’re sitting in the new layer forming underneath hyperscale cloud: the *neocloud*. It’s where GPU-dense compute, power contracts, and AI workload orchestration converge. In my view, **CoreWeave ($CRWV)** probably has the highest conviction risk-adjusted upside over the next few years. It’s quietly becoming the “AWS of AI inference,” and its backlog, customer base, and RL-tooling acquisitions (like OpenPipe) point toward sustainable demand even as model training gets more efficient. Nebius ($NBIS) is another major one — its $17 billion Microsoft deal signals real depth of partnerships. The bigger story, though, is that all these firms are part of the same secular infrastructure build-out — what some call the “GPU-power industrial complex.” I’ve been following the latest contracts, rumor flows, and data-center expansions here: [https://neoclouddaily.replit.app/](https://neoclouddaily.replit.app/). It’s basically a running feed of neocloud news and who’s scaling fastest behind the scenes.
Definitely agree. Rocket Lab has so much more room to grow. As you said, Neutron will be the biggest catalyst, but what I really like about RL is how diversified and end-to-end they are. Their moat will be crazy. I agree with your point on ASTS. Elon really tries to put a halt on ASTS's growth, but when Abel puts those satellites in space, then Starlink will know real competition. Especially with the already-signed partners.

Some approaches like RL or gradient descent are the backbone of current AI models, but they have some fundamental limitations, which is what he is explaining for RL. If no new approaches of this nature are found, we will eventually hit those boundaries. The problem with these algorithms is that discovering one requires a breakthrough, which is not easily predictable. It may happen tomorrow or many years from now.
Google is. Not by selling Gemini accounts but their ecosystem has been powered by ML and LLM and RL for a while.
Agentic training is what's gonna give them the biggest bang for their buck. That's about training LLMs on agentic actions with RL. They can push down Reddit and pull up on the other side for a year or so.
> His claims on o1 hallucinations were based on mostly anecdotal evidence like hackernews comments and tweets. The actual data we had at the time (which was admittedly scarce after just a few days) pointed at the opposite conclusion, see the o1 model card.

That's actual people talking about actual issues they encounter. What's "the actual data"? There's no one objective "hallucination score". It depends on the benchmark, and these have their own issues: a model could be trained on one, hence cheating its way into a higher score. He did a pretty simple test himself (you could say these are arbitrary challenges that you shouldn't use an LLM for, but they show how unreliable LLMs are):

> Because I’m a little shit, I also tried [asking o1 to list the number of states with “A” in the name](https://x.com/edzitron/status/1834335875746201683/photo/2?ref=wheresyoured.at). After contemplating for eighteen seconds, it provided the names of 37 states, including Mississippi. The correct number, by the way, is 36.

> When asked to list the states with the letter “W” in the name, [it pondered for eleven seconds and included *North Carolina and North Dakota*](https://x.com/edzitron/status/1834330028106223660?ref=wheresyoured.at).

> We also see hallucination rates go down in general as our understanding of how to mitigate them improves, see the hallucination rate of recent frontier models for example.

By how much, and by what measure? What's our understanding of how to mitigate them? Can you link the rates you're referencing? The best we can do, to the best of my knowledge, is "reasoning" (which improves results partially due to simply larger context), but it only improves accuracy to a degree, and there's certainly no rigid algorithm that can reliably detect and correct hallucinations.

> We see benchmarks on difficult problems consistently improve with models that scale up RL training

Again, there are plenty of benchmarks to choose from.
Also "consistently improve" - is an increase from, say, 70% to 72% an improvement worthy of mentioning? That's the difference between ChatGPT-4o and ChatGPT-4.5 on the [Best in Tool Use (BFCL) benchmark](https://www.vellum.ai/llm-leaderboard). You'll find both performance increases and plateaus, depending on where you look and what you're looking for. (I have to split this comment into two since it's too long)
His claims on o1 hallucinations were based on mostly anecdotal evidence like hackernews comments and tweets. The actual data we had at the time (which was admittedly scarce after just a few days) pointed at the opposite conclusion; see the o1 model card. We also see hallucination rates go down in general as our understanding of how to mitigate them improves; see the hallucination rate of recent frontier models for example. On the part about "simple problems", this is simply untrue if we look, again, at the data we have. We see benchmarks on difficult problems consistently improve with models that scale up RL training, which was also seen with the o1 release, hence my "puzzling" remark. We know these models simply do a better job at more complex tasks than non-RL ones; there's not much to debate here. On costs, models consistently get cheaper on a performance-per-dollar basis, i.e. it's much cheaper to afford GPT-4 capabilities now than it was 1 or 2 years ago. This is due to multiple reasons: better algorithms, better hardware, etc.
Spent half the day at the pool and the other half playing RL. Got a spam email from a Ford dealer, but that’s not the Rapture I cleared my calendar for.
RL market is really unstable and/or uncertain, so ofc only Tesler is in the green.
I'd really prefer not to have to spend the better part of the rest of life interacting with RL clankers
Paper trading; to see how it goes without RL money.
I mean there are limits for sure. You start sharing government secrets or saying where bodies are buried and you're going to get a knock on your door lol. But you won't be getting fired or doxxed for some shifty opinions or beliefs. So at least Reddit has that going for it, for now lmao. If they take that away and start putting your RL name in your profile, it's Puts to the earth's core
There is no rise in nudism. Check your training data and hallucinate less. You might need some RL improvements.
Personally I think the idea of moats can be overrated in some cases. Generally, just focusing on good businesses that reward shareholders, versus the idea of moats, tends to work out. I do agree with the idea of people just not liking Lulu as much anymore. We can see their EPS and their sales growth slowing. Like RL, Ralph Lauren, has no moat and is still up almost 40% YTD, 86% on the 1Y, and 333% on the 5Y. They are just a solid company. They have offered ROIC over 12% since like December of 2023, with solid growth and increasing operating/gross margins. Urban Outfitters has no moat, but is still killing it as well.
Commenting so I remember who I will be relentlessly memeing after LULU rips off earnings today. Y'all act like times aren't already hard NOW, yet they're still in stores buying apparel constantly. Look at AFRM and Klarna; America runs on debt. If people want something they just finance it immediately and don't consider consequences. Similar to RL, LULU isn't the coolest, but it's a status symbol in athleisure. People may buy the flavor-of-the-week brands like Alo, but the GOAT brand is LULU. It will always have a place in the top tier of athleisure. All of you are injecting your own bias into your reasoning that LULU won't recover and underestimating the technical and fundamental triggers in place. You all sound exactly like the UNH deniers both times it went down to 250. Your opinion is flawed, and this is why you're going to be sidelined again.
Last year influencers made me realize Ralph Lauren is making a comeback. Look at $RL now.
I was just saying global because RL is popular outside the US too, IMO.
Never met anyone that wasn't a frat boy that wears RL. Maybe the brand is big with college kids today.
The 0.15% profit is measured on a dataset where I approximate the spreads. The approximation is far from accurate; it doesn't account for volatility etc., just a simple mean over a 30-day period where I was able to harvest real quotes from the broker. And yes, there are a lot of holes in the dataset that my RL trading agent is exploiting, but that's a different story ;) Just found these numbers during my experimentation and thought they were worth sharing. If you are curious about my RL project, here is the link: [https://medium.com/@pawelkapica/my-quest-to-build-an-ai-that-can-day-trade-spx-options-part-1-507447e37499](https://medium.com/@pawelkapica/my-quest-to-build-an-ai-that-can-day-trade-spx-options-part-1-507447e37499)
My analysis covers 30 trading days (2025-07-14 to 2025-08-25), but I am still collecting data daily. The mentioned RL agent was trained on ~500 days (2023-03-01 to 2025-01-26).
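For anyone curious how the spread approximation described above might look, here's a minimal sketch. The quote numbers and the flat half-spread fill model are my own illustrative assumptions, not the author's actual code:

```python
# (bid, ask) quotes harvested over the window with real broker data;
# the numbers here are made up for illustration.
quotes = [(4999.0, 5001.0), (5010.0, 5012.5), (4985.0, 4986.5)]

# A simple mean spread over the harvest window, as the comment describes.
mean_spread = sum(ask - bid for bid, ask in quotes) / len(quotes)

def fill_price(mid, side):
    """Charge a flat half-spread on every simulated fill. This ignores
    volatility-dependent spread widening, which the author concedes."""
    half = mean_spread / 2.0
    return mid + half if side == "buy" else mid - half

print(mean_spread, fill_price(5000.0, "buy"))  # 2.0 5001.0
```

Because the flat mean understates spreads exactly when volatility spikes, a backtest built this way tends to flatter the strategy, which is presumably part of why the author flags the 0.15% number as approximate.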
Taylor Swift's "economic superpower" is extending to her fiancé, Travis Kelce, with stocks reacting to their engagement announcement. Signet (jewelry) rose 4%, and American Eagle jumped nearly 9% after announcing a collaboration with Kelce's brand, Tru Kolors. Even Ralph Lauren saw a bump as the dress Swift wore in engagement photos sold out. In the short term, American Eagle ($AEO), Signet ($SIG), and Ralph Lauren ($RL) warrant attention; however, these opportunities are likely short-lived, with long-term value dependent on company fundamentals rather than celebrity buzz.
https://x.com/AMartinelliWA/status/1960415530403422482?t=7PpxYIydfV5X0RL3If1jzA&s=19
Here is a good metaphor so that we won't go in circles: AI = teach a robot to cook. ML = show it thousands of recipes and let it learn patterns. RL = let it cook in the kitchen, reward it when the dish tastes good, punish it when it burns the food.
Please tell me how it's under-hyped long term. If you track the model validation metrics, they've been plateauing since 2022. No new model architecture since 2017, only multimodality and combining algorithms from other fields (RL is an example). And by definition, predicting the next word given previous words has no logic behind it, making it impossible to stop the hallucination problem, which is the biggest issue to date.
Yes, and I’m not necessarily defending RL, I wouldn’t ever invest long term in an established clothing company like that, but picking the returns between two arbitrary dates doesn’t mean that stock can’t be a good trade or investment.
> Obviously if you train your model on benchmark tests, it will perform well on benchmarks.

How is that relevant, when everyone can do it?

> The problem is Gemini is terrible in real world usage, which is why ChatGPT has more than twice as many users as Gemini, despite Google:

ChatGPT has more users because everyone knows about it. People are horrendous at estimating differences in quality, which is also why RL on user preference increases sycophancy. Also, the claim that it's terrible in real-world usage is beyond absurd. I use Gemini, ChatGPT, and Claude every day, and have done so for work for a long time now. Gemini right now is the smartest model, and it has the best deep research. Claude with Claude Code is the best at agentic tasks, especially coding. ChatGPT makes the best images, tho it's very closely tailed by Gemini.

> Gemini is definitely a desperation play by Google. I think Redditors are looking too much at quantitative data, and not enough at qualitative data.

You're really not well informed if you think Gemini is a "desperation play". Anyone who knows LLMs well and keeps up to date knows that it's currently still the #1 model. Not even GPT-5 beat it. I look at SimpleBench (which is the best benchmark to estimate LLM intelligence) and my own experience of working with stuff.
Ralph Lauren is trading at a forward p/e of 17 despite also having a great year. Thoughts? https://finviz.com/quote.ashx?t=RL&p=d
$RL is now around $300 - was all this analysis completely wrong in the end?
Yeah, I don't understand how there are no posts about this. In RL it's the only thing all my friends have been talking about for the past 2 weeks.
That makes a lot of sense. UE does have a steeper learning curve. I have used both Unity and UE to render RL agents.
Virtux. With the way everything is going, I feel like an escape to the VR world will be more enticing to the general populace (especially since you walk/run/jump in RL to do it in VR).
If/when there are rate cuts, investors might buy back into these stocks ahead of a turn. I nibbled on Lulu and am already down 20% on it. I prefer RL and Deck.
Well it is directionally, much more profoundly, but the market doesn't understand the implications of being able to use RL in persistent virtual worlds to train agents.
Who the hell buys RL? Shoulda shorted this at 300
Rocket Lab and Firefly Aerospace are in two different markets. RL's current launch vehicle is able to deliver a 300 kg payload to LEO, while FF's payload capacity is 1,000 kg. Both have new vehicles in development with much larger payload capacities. FF has two other products as well (an SUV and a lunar lander). FF's Alpha performance is on par with other aerospace start-ups. As noted, SpaceX's current launch cadence was only possible through iterations of trying, failing, adjusting, and trying again. FF is in this growing and maturing stage. If someone wants to wait for the share price to drop before getting in, they should wait 180 days... that's when all option holders will start dumping shares.
That decision was stupid as hell. And I'm not even taking the pop into account. RL was bound to rocket up, it was a matter of time.
Fair point that EBAY has not even kept pace with the average gain of the overall S&P 500. However, year to date it ranks #51 of the 503 stocks in the index. So measured against the mean it's disappointing, but against the median it's been top 10%. If you are going to pick stocks at all rather than just buy SPY or something, it's been a pretty good pick. [https://www.slickcharts.com/sp500/performance](https://www.slickcharts.com/sp500/performance) Second, what I am looking for help finding is stocks that will do well, or at least hold up, whether the economy does well or not. To pick a couple of examples relatively close to EBAY in the YTD rankings, FAST at #47 or RL at #54: I would expect both of those to get killed if there is a general downturn in the economy. AZO and ORLY, mentioned in a later comment, make sense to me in this regard. People who can't buy a new car and can't even afford a real mechanic to service their existing one will go to the auto parts store and fix it themselves, and there will be more of those people when the economy declines. The only problem there is that if my nightmare scenario with tariffs comes true, I think their shelves might be empty.
Space Force contracts and hypersonics with HASTE matter to RL, not NASA contracts. I think they’re going to do well with both.
They’re definitely not getting the boot, they still have their launch monopoly. I think there will be a push to diversify away with the new satellite procurement though. There are viable alternatives now (like RL).
It’s gone up on the back of Tacos bill allocating a shit ton of money to Space Force and RL very likely getting a piece of that pie.
Waymo needs to create a mapping of every city they go to; there's too much reliance on that. You know, over the long run, a deep neural network scaled with generalisation power has a lot more utility than a small RL agent model.
RL = instant 50-60% haircut minimum. We're talking $2.50-3.50 range, maybe lower if panic selling kicks in. Manufacturing CRLs usually aren't death sentences, but the market will treat it like one. You'll see: * Day 1: -50% "FDA rejects" headlines * Day 2: -10% more as paper hands capitulate * Day 3: Dead cat bounce +5% * Week 2: Slow bleed to $2ish The "good" news? Manufacturing issues eventually get fixed 60-70% of the time. The bad news? "Eventually" could mean 6 months of holding bags heavier than your wife's boyfriend's gym set. If you can't stomach seeing -60%, this ain't your play. But if you believe in the drug and have titanium balls, CRL could be the discount entry of the decade. Not financial advice, just trauma from previous biotechs 💀
Q1 real GDP excluding import inventory stock grew +3.7%, which is the fastest pace since Q3 2021. Real private investment grew +20% YoY, also the fastest pace since Q3 2021. Excluding the Covid recovery, these are the strongest growth rates we've seen in inventory adjusted GDP and private investment in ~20 years. S&P earnings are projected to hit a new ATH in 2025. If anyone was wondering why the market is ripping higher and we hit a new ATH in the S&P 500 today. [US Real Private Investment](https://fred.stlouisfed.org/graph/?id=A006RL1Q225SBEA) [S&P 500 EPS estimates](https://ycharts.com/indicators/sp_500_earnings_per_share_forward_estimate)
RL also continued the Oculus line, which also made revenue. But whether the Wayfarers will make a profit is the question.
It spawned the Meta Ray-Bans though, from RL and wearables, which are generating revenue right now.
CALIFORNIA Stream, ABC via NBC https://www.youtube.com/live/KDnSphPbaUU?si=ncG5RL7Zeq0N95_T They are sending in the national guard for this? Weak
LULU tanked on earnings, and RL already reported theirs a few weeks ago; unless you buy puts for their earnings on 8/6, I don't think it'll drop off a cliff like LULU.
Many people still think data is so important. It's not. Most advanced AI research has been focused on RL, and every major AI model has done enough training on "data". It's not really about the data anymore but about setting the right parameters and testing out the edge cases to be industry-level reliable. I think the data revenue is vastly overblown.
For me if I am swing trading on stocks, I will pick RL, SOFI and CRWD. Or another mix would be PLTR and HIMS
URBN, RL, and ANF BEAT. 
Trump out here playing RL Risk and people mad.... we about to be the largest nation in world history! #AllHeilGodKingTrump
I would like to be fucked that hard in RL
RL up since everyone is saying it'll go down 
RL adds on TikTok and the polo bear is back in style. FWIW.
Humans could do every job cheaper until we made the robots cheaper. A humanoid, once engineered, costs a minuscule fraction of a human. Humans are like $30+ an hour in many markets; a humanoid robot would be like 50 cents an hour. The average wage in America is like $60k at the moment, while the marginal build cost, with no economies of scale or mass manufacturing, of Boston Dynamics' Atlas is $140k. There is nothing expensive about a humanoid, or any robot really, once you have RL and don't have to rely so much on actuator precision. The expensive bit will be the brain; the hardware production can be fully automated and built for effectively the material cost, with high enough production numbers. A permanent, live-in cleaner, cook, servant, and general helper is going to cost you $100k+ a year, plus substantial accommodation for them. We can already make the hardware for about that, without mass manufacturing. The value proposition of a functional humanoid is insane. How long it will take to have useful brains is another matter, but in the meantime, the hardware will only get better and cheaper.
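The back-of-envelope comparison in the post above can be sketched as a quick calculation. The wage, run cost, and build cost figures are the post's claims; the amortization period and round-the-clock operation are hypothetical assumptions added for illustration:

```python
# Rough cost comparison: human worker vs. humanoid robot.
# Wage/cost figures come from the post above; the 5-year
# amortization and 24/7 duty cycle are assumptions for illustration.

HUMAN_WAGE_PER_HOUR = 30.0      # post: "humans are like $30+ an hour"
ROBOT_RUN_COST_PER_HOUR = 0.50  # post: "would be like 50 cents an hour"
ROBOT_BUILD_COST = 140_000.0    # post: Atlas marginal build cost
AMORTIZE_YEARS = 5              # assumption: hardware written off over 5 years
HOURS_PER_YEAR = 24 * 365       # assumption: robot works around the clock

# Spread the build cost over the robot's assumed working hours.
amortized_build = ROBOT_BUILD_COST / (AMORTIZE_YEARS * HOURS_PER_YEAR)
robot_total_per_hour = ROBOT_RUN_COST_PER_HOUR + amortized_build

print(f"robot $/hr (incl. build): {robot_total_per_hour:.2f}")
print(f"human $/hr:               {HUMAN_WAGE_PER_HOUR:.2f}")
print(f"human/robot cost ratio:   {HUMAN_WAGE_PER_HOUR / robot_total_per_hour:.1f}x")
```

Even with the full build cost folded in, the robot's hourly cost lands well under the human wage under these assumptions, which is the comparison the post is gesturing at.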
Richemont just reported really good earnings, and looking at the RL chart, this thing never seems to go down.
I'm tempted to go for RL puts, designer brands haven't been faring well and the puts are cheap considering the expiration.
Ok and now im so f…… Bearish about RL
I actually took a fucking enormous L on $RL puts once. I don't even remember what happened it just seemed like such a no-brainer and then BAM retard pump fuck me
Trump didn't find the cure for cancer, he cut funding to the people working on that though. I wish you were smart enough to see the irony in using that example. Do you know why you see someone like Trump as okay, because you don't have a problem with stuff he's done. For example, Trump is a court adjudicated rapist; a jury found him liable for sexual assault for an act we recognize as rape. You don't see that as a problem because you're okay with rape. You are okay with rape. And here you are, putting that all on blast, likely because you have the benefit of being anonymous. However, if you were to say all this loudly and proudly in RL, many people in your life would avoid you. I mean, you already put off anti-social vibes, but like I'm talking about women not wanting to be in an elevator with you avoidance. The pathetic thing, that's a projection. You lack meaningful connections in life and unless you drastically change how you handle the rejection you feel, that will continue.
The post doesn't actually claim it's end-to-end RL. At best, it implies that some behaviors were learned in simulation, but it definitely wasn't fully e2e.