Reddit Posts
Anyone know if you can claim $4k in AGI-reducing losses if you're filing jointly?
Where should I keep my savings for a house down payment? High tax bracket in CA.
Apple releases a multimodal LLM model, WIMI AI tech became the AGI mainstream trend
New Path to AGI by VERSES AI? I'm going all-in.
{Update} $VERS Genius Beta Program Welcomes Cortical Labs and SimWell as Strategic Partners
Verse AI - Which is Publicly Traded - Claims They Are Close to AGI - Invokes OpenAI 'AGI' 'Assist' Clause - Warning: May Be BULLSHIT
This AI Penny Stock Proves Path To Artificial General Intelligence
$VRSSF Q3 2023 Corporate Update: Next-Gen AI Platform and AGI Ambitions
VERSES AI (CBOE:VERS) (OTCQX:VRSSF) Q3 2023 Corporate Update: Next-Gen AI Platform and AGI Ambitions
Which brokerage institutions have solo-401k plans in which you can invest in a different company's fund at no additional cost-- AND-- allow for loans
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster
I'm YOLOing into MSFT. Here's my DD that convinced me
AGI HOAX: DEV - Ilya has had 60 hours now to name evidence of safety concerns or wrongdoing to justify burning an entire company to the ground
Like the Tower of Babel, God broke up OpenAI because they were trying to create God
This is going very badly for Microsoft as the fallout continues and is "AGI" to blame here? Ilya Sutskever should resign from the board.
VERSES AI’s (CBOE:VERS) (OTCQX:VRSSF) Genius™ Platform Achieves Milestone with 1,500 User Registrations
Capital gains/loss offset strategy for future house down payment?
Before you have any crazy thoughts, just remember... a loss is not a 100% loss.
High-income earners, beware of paying higher taxes on your investment income (if you have any Kekw)
High-income earners, beware of additional taxation on your investment income
VERSES AI, A Canadian Cognitive Computing Company Announces Launch of Next Generation Intelligent Software Platform
WiMi Hologram Cloud Drives Productivity Transformation
WiMi Hologram Cloud (WIMI) to build the road of AGI industry
WIMI integrates a series of synergy technologies seizing the market opportunity
The BEST Way to Invest in Artificial Intelligence?
ChatGPT Set off a global big model boom, WiMi Hologram Cloud(WIMI) to build the AI + XR ecological strategy
AI big model industry: WIMI Focuses on AIGC into the AGI high growth space
NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading
The Golden Year for AI: WiMi Hologram Cloud(WIMI) innovates its Mechanical visual strength
What to do if I'm nearing MAGI limits for Roth IRA contributions but not sure when I'll hit it
WIMI Hologram Cloud(WIMI) Started Its AI Commercialization In The AGI Era
IonQ Pursues a Revolutionary Step in AI: Striving to Develop Quantum Machine Learning Models Matching Human Cognitive Abilities
A Quantum Leap In AI: IonQ Aims To Create Quantum Machine Learning Models At The Level Of General Human Intelligence
Curious To Hear Some Community Opinions on MAIA Biotechnology (MAIA)...
The Artificial Intelligence Stock with the BIGGEST potential
Tesla is way overpriced beyond all the hype and fanaticism
WiMi Hologram Cloud (NASDAQ: WIMI) Is Dedicated To Developing AGI
Interest in Gold Miners Increases as Bank Fiasco Causes Market to Seek Safe Haven Assets $ELEM $NFG $ARTG $AGI $WDO
What do you think about the potential impact of AGI advancements on the liquidity released by the Federal Reserve?
VERSES AI ($VRSSF) The ONLY pure horizontal AI play
OpenAI's Business Strategy - What is their End Game?
Dr. Techy | Musk calls ChatGPT an ‘eerily like’ AI that ‘goes haywire and kills everyone’
Will stock losses affect my income for Roth contribution?
White Paper on the AI Ecosystem by Verses’ (NEO:VERS | OTCQX: VRSSF) Dr. Karl Friston
VERS.n named a top 5 Artificial Intelligence Stock to Own by BayStreet.ca
Will current concept of investing survive Technological Singularity?
I turned $100 Robinhood account into $1000 via options and it ended up costing me $20k
Would the 1st company on Earth with confirmed, legit AGI (Artificial General Intelligence) become the most valuable upon this confirmation?
My employer doesn’t offer an HSA but I have a high deductible plan, do I still get the same benefits if I contribute my own money after tax?
Allianz to pay $6 billion over Structured Alpha fraud, fund manager charged
https://www.reuters.com/business/finance/allianz-pay-6-bln-over-structured-alpha-fraud-fund-manager-charged-2022-05-17/
The Real Reason Elon Musk Bought Twitter and NOT Reddit!
Gold to 2k? looks like gold keeps climbing and will hit 2k.
Seeking El Dorado - Finding the next Amazon amid all the hype
My former employer just sold and I must sell my shares. How can I avoid or reduce capital gains tax?
Question on a defensive strategy from a not savvy investor
Want to cash out on stocks, what long term capital gains considerations should I take into account?
Would a long-term synthetic stock play for GLD/other precious metal ETFs be an effective way to save money on taxes from the sale of physical metals paying for investment fees?
Post Squeeze Tax Strategy To Help Spread the Wealth - #PhilanthropicApes
Estimated Taxes, and why you (probably) won't need to pay them [U.S.]
Mentions
Do you realize that GOOG could reach AGI and be worth basically +infinity? There are way safer companies to play this game.
Worth noting that the top 10 or so companies in SPY or QQQ have a really outsized contribution to the overall performance YTD... And overall the AI narrative has driven a lot of increase in value. So the risk would be if the narrative changes, such as if the LLMs won't result in large productivity gains or AGI, there could be a whipsaw in the markets. But you never know when that could happen or what the prevailing story then will be.
Tech giants of the Dot Com era were overvalued but they were also, you know, GOOD. We are possibly entering a recession, and all of these companies are complete shit. Google is useless, Facebook is, what is that? Instagram is algorithmic hell, while Windows is just becoming a joke of an operating system. These companies are riding on the massive moat of former glory, while torching cash on the promise of AGI. This won't end well. Apple will probably be ok. And they still churn out quality hardware with an operating system that doesn't suck. I value that because I am old school.
I don’t care about AI, lmk when it’s AGI.
It’s funny. I studied Op Research at an Ivy League university in the early 2010s, and many of the best books were from the 1960s, because there was a huge boost of innovation at that time. People back then thought they would be able to crack NP-hard problems in no time. And then it just… slowed down for a while. There was still progress being made, but nowhere near the speed of before, when computers admittedly also first became available. I believe we see something similar now with the next breakthrough in LLMs etc., and everyone still seems to think that pace will always continue and wants to extrapolate the last few years out for decades. I would be super happy if it did, but I doubt it. I think in 5, 10, 15, 30 or 50 years someone will have a great idea again, and we will get AGI and truly autonomous driving anywhere, etc. But I think it’ll need another breakthrough, not just steady improvement of what we do now.
The technology has been over-invested in before being developed enough to reach the claims people are making. Eventually the technology will reach those claims, though some people say AGI will not happen for another 300 years! But even still, it will catch up. How do you make a profit on the fact that non-technically minded people invested too early, and how do you capitalise on its inevitable rise?
it has a few narrow use cases that you can already name. The rest is hype and bullshit stuff that either doesn't work or works but doesn't do anything useful. There's no secret AI usage you're unaware of. It has saturated its usefulness. It's not going to get better soon. AGI is 10-15 years out based on compute capacity alone, never mind the techniques. Market goes back to normal within the next few months as CEOs and product managers start being honest with themselves.
> Am I just totally missing something? Musk (through Tesla and his other companies) is betting that AI will drive never-before-seen growth in every sector, all over the world. Real-world AI lets humanity reach goals that are seemingly impossible and almost unimaginable. Some people say AGI is 18 months away. Even pessimistic ones say 3-5 years. If that is true, then in a decade the world will be a *very* different place. Yes, I know this all sounds too good to be true and I am not saying it will be. I am saying that is what this valuation is based on.
The case is that LLM scaling is unlimited and it will turn into some AGI overlord, which is as absurd as it sounds, but people are very hostile on this topic
AI is a marketing stunt. LLMs are just cracked out autocomplete being sold as AGI
>The same culprit was responsible for why SaaS never lived up to expectations, or "digital transformation", and why they still haven't. It's just not what was promised. AI has uses as a tool, but even the most sanguine rhetoric is being walked back, such as "imminent AGI" because even the hype men are realizing expectations need a reset. Huh? SaaS ate the world. Technology got so large that GICS had to put a bunch of it in communication. You don't know what you're talking about. Failure in business is expected. It doesn't matter if someone fails, the industry keeps growing. Period. The future always progresses.
I don't know how "it's because..." is somehow supposed to be anything but cope. If the failure rate is so high, all because of the same problem, for which there seems to be no easy fix, then it's not far from the tech being, at least, of less value than what was initially perceived. The same culprit was responsible for why SaaS never lived up to expectations, or "digital transformation", and why they still haven't. It's just not what was promised. AI has uses as a tool, but even the most sanguine rhetoric is being walked back, such as "imminent AGI" because even the hype men are realizing expectations need a reset. >Negative press is more exciting than positive. This is pure cope. Press around AI has been overwhelmingly positive, regardless of the factual nature of the claims. There's no shortage of AI "success" when it comes to layoffs, the only problem is none of those stories come with evidence of anything truly remarkable occurring. It tends to be smoke and mirrors for margin padding. If AI were delivering on its promises we'd all hear about it, not just you as you seem to think is the case.
They're announcing AGI to Trumpy
I hope they trained AI models with WSB. Our thoughts and jokes set AGI progress back months.
When Tim Cook is replaced by AGI Siri, there will be no stopping them.
AI, but a particular niche... Everyone is shooting for AGI with AI, but I don't think it needs to get smarter so much as we need to build guardrails and validation to let LLMs, in their current state, do work that people do now. AGI means it makes the same mistakes as humans. That seems like a sensible target because then it could, in theory, use the same software as people. But AGI is further away than it looks, as LLMs don't have a real mental model or consciousness like the human mind. And even if it's reached sooner than expected, you are still left with human-level intelligence, so mistakes. All the software we have now is designed to guide humans (UX) and make sure they don't do stupid things (multiple levels of validation). It's built to work and not blow up even if the users are pretty dumb (which they often are). When someone figures out how to build this same safe space for modern AI (LLMs), and can deliver deterministic transactions, along with a designer to build/modify roles, then they will be worth many billions. RAG / MCP / GPT Actions are all a path to this. My wild guess about the future is the LLM builders will come and go, but the infrastructure that supports LLMs and gives them this guidance will be the real money maker. These tools would quickly become a foundational layer of every enterprise.
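The guardrail idea in the comment above can be sketched in a few lines: wrap a nondeterministic model call in deterministic validation, the same way software validates human input today. This is a minimal sketch; `call_llm`, the allowed-action set, and the policy limit are all hypothetical stand-ins, not any real API.

```python
import json

# Hypothetical model call: a canned reply stands in for any real LLM API.
def call_llm(prompt: str) -> str:
    return '{"action": "refund", "amount": 25.0}'

ALLOWED_ACTIONS = {"refund", "escalate", "close"}  # illustrative policy
MAX_REFUND = 100.0

def guarded_action(prompt: str) -> dict:
    """Run the model, then apply deterministic checks before acting."""
    raw = call_llm(prompt)
    try:
        parsed = json.loads(raw)                   # structural validation
    except json.JSONDecodeError:
        return {"action": "escalate", "reason": "unparseable output"}
    if not isinstance(parsed, dict) or parsed.get("action") not in ALLOWED_ACTIONS:
        return {"action": "escalate", "reason": "disallowed action"}
    if not 0 <= parsed.get("amount", 0) <= MAX_REFUND:
        return {"action": "escalate", "reason": "amount out of policy"}
    return parsed                                  # policy-checked result

print(guarded_action("Customer asks for a refund of $25"))
```

Anything the model emits that fails a check falls back to a safe default, which is the "deterministic transactions" property the comment is describing.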
Google just discovered AGI Game Over Elon
Free open source models are rivaling the corporate models, particularly in context dependent and agentic situations, which is where this is all headed. You can put a really good AI on a Raz Pi right now. OpenAI, Anthropic, xAI are all focused on the AGI aspect, but agents are the next way forward and open source models and frameworks are already there. You'll see the big models perform well on benchmarks, but they end up being crap in the real world.
5-10 years to become sentient? As in, AGI?
Why? In some contexts, tax-exempt income is added back to your AGI to calculate income limits. Also keep in mind most funds will hold some amount of cash and might not be 100% exempt
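As a sketch of that add-back, here is roughly how a MAGI-style figure is built from AGI. The field names and add-backs are illustrative only (the exact add-backs differ by provision), not tax advice.

```python
def modified_agi(agi: float, tax_exempt_interest: float = 0.0,
                 excluded_foreign_income: float = 0.0) -> float:
    """Illustrative MAGI: AGI plus common add-backs such as
    tax-exempt interest (e.g. from a municipal bond fund)."""
    return agi + tax_exempt_interest + excluded_foreign_income

# $140,000 AGI plus $2,500 of tax-exempt muni-fund interest
print(modified_agi(140_000, tax_exempt_interest=2_500))  # 142500.0
```

The point of the comment stands: income a fund labels "tax exempt" can still push you over a MAGI-based phase-out.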
It will be IF we find some use case beyond solving O(n) problems faster than typical computers because, from my dumb brain, that's really the only known use case for them. It's still good, but not enough to change our world. If we find some way to use that tech to improve, let's say, AI and actually make an AGI like the big tech companies hope, then yeah, quantum is gonna be the next big thing, but in like... 40 years. Right now, quantum computing is mostly in the phase of finding ways to add more qubits. It is climbing year on year, but we are far from having enough compute power. So in summary, in my opinion: - the next 10-20 years will be about adding more qubits - the 20 after that will probably be where we find new use cases for quantum computing.
AGI soon bro trust, just a couple more billions
Ok come now. AI is life is sold with the promise of AGI and SI. But there are clear signs we are starting to hit the limits of LLMs. I say this as someone that uses AI professionally. The hype on these models is like 80% BS and stock pumping.
When algos achieve AGI there will be signs 📉🔻
Now we just need it to say it fucked your mom and AGI will indeed have been reached.🤖
>The MIT study published last week shows that 95% of attempted AI implementations at companies fail. Ah, the parroted MIT study. The report fails to clear the bar for any good statistical study: a low-n study with no sampling validity or measurement clarity. There is no data or appendix to reproduce it. Ignoring all this... just because a pilot doesn't progress doesn't mean it isn't delivering any value (it would help if the study had any measurement clarity). The "study" also attributes the biggest issue to lack of memory and context window. This is something models have been evolving and getting better at. > And if you understand the math behind it you'll know that it can be useful as a tool under highly skilled hands of field experts, but that it's not going to be a general "replace all workers" tool like the claims from tech would have you believe. 1. Never claimed it will replace all the workers. 2. It doesn't have to be used by highly skilled field experts. Like not even close. A junior programmer with the appropriate model can perform close to a senior programmer (doesn't mean the senior programmer doesn't have experience or that experience doesn't matter). 3. You are misunderstanding the difference between a task and a job. 4. Custom models with enough memory and sector-specific context windows are already on the way. Even assuming these models don't replace workers, they will still be running on GCP, AWS, and MS servers. The need for compute will skyrocket and the models will be licensed by companies creating their own models. [AI will be a cash cow for MS, AWS, GCP, ORCL] > I think you forget that the VAST majority of people are just now becoming aware of what big tech does and the younger populace, being much more technically literate, is likely going to see a shift relative to the populace currently. Don't see it at all. Younger people are caring less and less and are pivoting more towards consumerism.
Take a look at the TikTok ban - TikTok (a Chinese company) quite literally is collecting billions of data points, and when Trump wanted to ban it the younger generation threw a fit. People are content with the dopamine drip and the algorithm feeding them exactly what they want. > but now there are companies starting with new business models, building the same (and arguably better) services that big tech offers. lol like what? > I think you are severely underestimating the irritation of people that the AI models are trained off of their data, without their permission (sorry, burying stuff in the T&Cs might count legally but not to consumers). And all it takes is one lawsuit to completely change the legal framework, or for one law to rewrite what can and cannot be done. Not particularly. Like I mentioned, the vast majority of people don't even understand. Even if they did, they don't have many options to opt out. Every social media company is collecting information. Your comments are being collected by Reddit and then sent to Google for their models, but you are still on here debating an internet stranger. Sure, all it takes is a law, but with how much funding and influence big tech has? I'll keep my money on big tech and you can keep hoping for reforms that might one day happen. > The models aren't "intelligent" in the human sense. They run statistics on massive datasets and return the most likely set of words based on the input set of words. The human brain, which is the most effective intelligence we know of today, runs on 20W. That's not even enough energy to power the old-fashioned tungsten lightbulbs. I do ML. Nobody claimed these models are sentient or intelligent. They don't even need to be "intelligent" - you are confusing AGI with AI. LLMs are just part of ML, and we have had ML for years now. It turns out the human brain, as special as it is, is still a pattern-recognizing statistical machine with a bigger context window and memory.
The models don't need to be "intelligent" for them to generate value, nor do they need to do something special that only humans can do. > It's really best if you learn a little about things, because you seem to be basically building your view based on what you hear from people who have a vested financial interest, not based on independent reviews and a fundamental understanding of the technology. My work literally revolves around DE/ML. I work with these models regularly. I don't think you quite understand the nuances of AI... you keep saying "math" but I don't see any actual evidence for your statements or your so-called math.
Where is AGI? Where are the negative interest rates? Are the Feds stupid?
I 100% agree with this take but I have some counterpoints. While current LLM architectures may not be the ones that take us to AGI, they enable it. We’re plateauing on the current architecture (see GPT-5) and the next frontier needs new breakthroughs. China is well positioned to be the one discovering the next thing; giving them the top GPUs will just speed that up. Second thing: google “the bitter lesson”. Regardless of the architecture, more compute means more intelligence. The GPUs alone won’t make China win the race, but they definitely pave an easier way for them.
I'm not a scientist, but I don't think achieving AGI is a matter of compute; human brains don't have infinite compute, so the breakthrough needed to reach human-level intelligence is probably not on the hardware side. If we want to surpass human intelligence, that's a different story, but you have to reach human-level intelligence first. The LLMs are pattern-recognition machines from my understanding; they don't have intelligence in the same way a 3-year-old kid does.
LOL tell me you don't understand coding vs code logic. No, Google and MSFT logic is not designed by AI. A developer tells the AI they want to create a button; guarantee you the AI will put the button somewhere randomly it doesn't belong. The prompts require the dev to be explicit; then AI can write the code faster than a dev could. BUT it still requires the dev. It's not AGI. Never will be as an LLM model.
If I'm OpenAI and reach AGI, I would end the partnership with Microsoft
Ilya Sutskever, aka the brain behind OpenAI - who can get billions in funding and compute from anyone by snapping his fingers - uses TPUs for his super top secret AGI mega project https://www.datacenterdynamics.com/en/news/ai-startup-safe-superintelligence-to-use-googles-tpu-chips-for-research/
I wouldn’t say AGI isn’t real. We don’t know if it’s real or not- but if it is it’s gonna be at least 20-30 years out, maybe even longer. We are in the very basic dial up stage of AI now. We don’t know its full capabilities yet
China didn't cause the export restriction either. Just give them the actual chips lol. It's not like having full access to them has helped the American AI companies reach AGI, as GPT-5 has shown. Meanwhile the Sora and Veo 3 parlour tricks have open-sourced versions in Qwen.
I think the scare here is enabling China to win the AGI race. Once AGI is reached, they can have their AI design the next technology, thus losing the dependency on US tech. Also, this was not Trump's idea; the ban was put out by Biden, and even Obama had some bans on Chinese chip acquisition. Trump does have a brain, a brain that only thinks about how he can profit from this. After a $1M dinner at Mar-a-Lago with Jensen Huang, all of a sudden the restrictions are not so tight. It feels to me he's selling the US' last hail mary in this race to fill his pockets. America first, I guess?
Sure, Microsoft and Google and Meta are making money, but not on AI. They're burning billions in cash on AI. No one has ever seen this much risk taken on for a completely unproven and apparently unprofitable product. We will have huge software winners? How can you be so sure when there are zero companies making money off Generative AI products right now? AGI isn't promised at all.
Microsoft is also bloated as fuck. But if you're comparing the two, I have a lot more confidence that in 5, 10, 20 years Microsoft will still be printing money with Office and Azure. Who the fuck knows what AI spending will look like. Even if AGI does become a reality, is that going to necessitate trillions of dollars of spending every year to support it? Processing power that cost millions 20 years ago is now extremely cheap.
Is the “AGI” in the room with us now?
Just half a billion more dollars and we’ll have AGI. Trust me bro we’re so close. We’re gonna fire all the humans it’ll be so awesome
Musk just said AGI coming this year!!!! Already ordered my mega yacht.
You could tell AGI was all bullshit as soon as Elmo started talking about doing it.
The models they put out consistently perform better than openai's. And they often come out months ahead of time. They just don't have the brand recognition openai does. Openai is (it seems) trying to make more of a platform for using ai for the average person, while anthropic is just releasing better models. I'm not sure what makes anthropic so ahead of the game. With the release of gpt5 and how it compares to opus or even sonnet 4, openai has to be panicking. A year ago they hyped up GPT-5 like we would be on the verge of AGI by the time it was released, and then they finally release it and it's not better than a model that came out 3 months ago from anthropic. Some people might argue that it is better than sonnet 4, and it might be in some very specific niche cases, but not enough to say that it's a better model
Not sure if it was in this post or another where I took part in that conversation. Universal basic income in the early stages & Democratic Socialism later. Unless someone comes up with a better plan; I'm all ears. Should be noted that AI will result in some job creation in the short term, which should help to offset some of the job losses. But in a post-AGI world, life will be very different. It's not possible for the vast majority of us (me included) to know what that looks like. I'd guess just 20-40% of people work. The rich will throw the necessary bones to the rest of us to keep their wealth & the current system as intact as possible. That will guide how this plays out.
We won't hit AGI with our current technology. We will hit it with our future technology, and the future is a minute from now, an hour from now, a year from now. Go ahead and sell your AI stock; I will buy more. And then when we have the next major breakthrough, you'll want AI stock again. Veo 3 and Genie 3 are so incredibly recent; how tf has it stalled out? They are only going to get better.
Shhhhh, Elon said his LLM will be AGI VERY soon... LOL! There is no one richer but I'm still waiting on his roadster I was hyped up for...
No it's not :) otherwise it would still be called ML. AI is an umbrella marketing term that hints at AGI - for the trick to work, people need to believe it's intelligent, thus they use as many words as possible to anthropomorphize it.
LLMs will reach a limit sooner than later. They require too much energy and money to drive their advancement already and the gulf between what they can do now and a true Artificial General Intelligence is as wide as that between the Milky Way and Andromeda galaxies. Whatever technology results in AGI, it won't be LLMs.
Specifically LLMs will never lead to AGI. It will take a totally different kind of computing framework and hardware we can’t imagine rn. Quantum alone as we understand it isn’t the AGI golden child either.
AGI will never be real. I can't believe people still think it's coming. You can pour a million billion trillion dollars into development; it will always be an autocomplete chatbot at best, end of story
yes, true AGI doesn't exist yet. It might be feasible when we get quantum computing off the ground.
The infrastructure will inevitably prove valuable. We struggle in creating an AGI because we don't understand our own consciousness well enough yet. Right now, we are hoping for a "sum of the parts is the whole" outcome. And like you said, we're taking the long-tail approach by feeding anything and everything into these systems, hoping it'll spit back something that resembles us. But because we don't even know what makes us tick, trying to recreate it is extremely resource-intensive. But one day there may be a breakthrough. Someone is going to find the secret sauce that makes consciousness work - or at least, how to replicate it well enough to fool almost everyone. Once we have that spark, all we need is a brain to transplant it onto; something that's going to facilitate the exchange of data and compute decisions. When that time comes, we'll have energy-secure, cloud-hosted, big-data supportive, machine-learning-ready systems in place ready to transmit this thing all the fucking data in the world. Then we just have to cross our fingers and hope for the best 😂
Highly doubtful spending more money in AI infrastructure will bring anything revolutionary. I’m sure it can make what ChatGPT does *now* better, but how is it gonna bring about anything revolutionary/AGI like Altman recently claimed in an interview in the next few years? I think it wouldn’t be a stretch for people who have studied machine learning in the last 2-3 decades to believe that crunching even more absurd amount of data and equally more parameters won’t make an algorithm revolutionary or whatever hype word they’re using nowadays. If you do research in this field, I would love to hear your thoughts on it to perhaps persuade me otherwise.
If you truly believe it’s going to mass replace employees, make people 10x more productive, or achieve AGI, it’s this generation’s Manhattan Project on steroids and whoever gets there first has infinite growth potential. It’s a gamble you have to make if you truly believe in it.
OP clearly thinks that. I think you're underestimating AGI, let alone existing LLM's. Even the "shitty" publicly available models already outperform humans on speed, scale, and volume of tasks, even with all their flaws. A single model can process more text in a minute than a human could read in a lifetime, and it can do it for millions of users simultaneously. It's less about "smart" but about scale and efficiency. Or maybe all the Reddit bros are right! ChatGPT5 sucks ass and has exposed the smartest entrepreneurs on the planet cuz they're actually big dumb idiots wasting their trillions of dollars.
Nobody thinks that. Everybody knows about ML and all the other things that can help process massive amounts of data. But LLMs are the things that these big companies are getting people excited about, because people suddenly think they can _think_. It's _words_, like the kind my employees use! But it doesn't work well enough, and that's showing. Zuckerberg did not "replace mid-level engineers by the middle of 2025." As far as the other stuff, JEPA is out and not that impressive. Moore's Law weighs heavy; we are nearing the limits of how much more tightly and efficiently we can pack silicon without getting _really_ expensive. Now, when these new data centers come online, and they're ten times the previous size and they _still_ can't create AGI, so we put them all together and get something 100 times the size, maybe, _maybe_ we can create something like an AGI. Something that is _almost_ as smart as a real person. You know, something almost as smart as two people could create by accident if they just forgot to use condoms one night. All this, for trillions of dollars. Silicon Valley has fucked up its microdosing and gone off the rails.
Many valid points throughout this thread but I’d be interested to know how many of those here have coded before (more than a simple web app or hello world). The rate of improvement for AI is astounding, but it doesn’t change the fact that any AI today is fundamentally based on mathematical models that can only mimic the information used in its training. It can’t create anything truly novel. I work as a developer and have been increasingly using AI tools in my workflow. It’s efficient at times, and at others it requires a ton of hand holding. The root cause of that is that it can’t hold the context of the entire business, or even an entire project in its “memory” when directed to perform a specific task. This is something that will continue to improve, but the processing and energy requirements will eventually constrain things to logarithmic gains at best. There’s no doubt that the efficiency gains will impact the job market, but you still need subject matter experts because the work of instructing an AI to perform a task is nearly as much as it takes to do the work yourself. It just helps you to not have to think about the lower level patterns and syntax to apply. That level of thinking is like 10% of the job of a software engineer. This isn’t even cope because if AI was sufficiently advanced enough for me to lose my job I would just start a company myself. When AGI “arrives” it won’t be truly sentient, it’ll be a mirror of collective human knowledge up to that point and we’ll be too dumb by then to know the difference. We’ll forget what it means to innovate.
Nvda kind of genius for letting big tech execs think they could achieve AGI if they just buy trillions of chips using LLM approach. The fallout from all this is going to be crazy
Imagining a bunch of dogs running after the scent trail of AGI, then they stop running and look around in random directions "where AGI?"
GPT-5 sucks balls! If all the hype is about AGI and getting in on the ground floor of Terminator times, sell the tech and nuclear reactor shit, cause we won’t be hitting AGI if this damn OpenAI keeps sucking balls!
Bad actual economic data, plus people waking up to AI investment limitations. Bubble could pop soon. May depend on Gemini. I'm actually a huge AI advocate and user but the hype around SI and AGI is unreal. LLMs are not there yet and it's clear as day if you actually use them.
Personally I don’t think throwing more compute and more data at LLMs will give us anything like the progress we saw from pre-GPT to GPT, and that’s not some deluded statement. For a long time, until this new paradigm, much of the research focused on performant models with very small inputs, as in: how performant a model can you get with as small a data set as possible? That’s now been turned on its head, and we’re promised that with trillions of dollars of new data centers consuming increasing quantities of fresh water and energy we will get to AGI. It’s not going to happen. And I don’t know if you live under a rock, but there were huge advances in computing throughout the decades you mentioned, including in AI, which, pre-AGI obsession, was mostly narrow models tailored to specific problems. There’s still tons of human labor in getting these models to train and perform well via data annotation, and part of that has been crowdsourced via Captcha-style tests, where regular users originally helped identify crazy variations of text, which gave us OCR (being able to pull text out of images), and now mostly focus on self-driving data like identifying traffic lights in images. At some point, though, there is a physical limit on data and on quality annotation of that data. In the 90s, when computers finally beat chess GMs, we were supposedly only a decade away from machines curing cancer and self-replicating robots, but it turns out the computation techniques for training computers to play chess just aren’t that generalizable. Now we’ve trained computers really well at predicting the next words in text, which gives us a feeling of magic and some quantum leap of progress, but soon enough we’ll see that the gap between true reasoning and intelligence and predicting the next word is a vast chasm. Will it improve? Of course. Will that bring interesting new tools and workflow augmentations with it? Sure. Are we at the precipice of some new superintelligence age of humanity? Not likely.
As someone who writes code day to day to make a living I’ve seen the tools in the wild. They’re useful and even impressive in a lot of ways but also very much not at the same time. You can clearly see the models don’t actually know anything. But it’s ok bro you don’t need to know what to tell me, we can have different opinions. It’s ok.
We’ve been 10-20 years away from AGI for at least 30-40 years
That's true for traditional LLMs, but all the recent gains have been from test-time scaling which requires spending much more for inference. Especially for high end stuff like ARC-AGI or IMO, the inference costs are insane since they're practically brute forcing it.
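The cost point above can be made concrete with a toy calculation. Test-time scaling in the best-of-n style samples many reasoning chains per query, so inference cost grows with the sample count instead of staying flat like a single forward pass. A minimal sketch, with entirely made-up token counts and prices:

```python
# Toy illustration (hypothetical numbers): best-of-n test-time scaling
# multiplies inference cost by the number of sampled chains.

def inference_cost(tokens_per_chain: int, chains: int, dollars_per_1k_tokens: float) -> float:
    """Total cost of one query: n chains, each generating tokens_per_chain tokens."""
    return tokens_per_chain * chains * dollars_per_1k_tokens / 1000

single = inference_cost(2_000, 1, 0.01)      # one chain per query
brute = inference_cost(2_000, 1_000, 0.01)   # heavy sampling, ARC-AGI/IMO style

# Cost scales linearly with the number of chains: 1,000x the samples,
# 1,000x the bill for the same single question.
print(single, brute)
```

Nothing here is specific to any real model or pricing; it only shows why "practically brute forcing it" makes the inference bill, not the training bill, the dominant cost for these benchmark runs.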
It doesn't really matter. The point is that ML is insanely useful for many problems. People focus too much on if AGI is achieved, but even if it is not then it still brings a lot of value.
Everyone was promised AGI but is realizing they got glorified chat bots that search google.
Probably the almost AGI bubble before the “AGI”, then the actual functional AGI.
Goog will crack AGI and you'll be rich AF.
Somebody managed to convince the hive mind that LLM will lead to AGI if we just scale it. The wild part is that big tech leadership other than aapl fully bought in.
The truth teller that presaged that AGI is basically around the corner and ChatGPT 5 will revolutionize the world.
Yep. Far too many are completely convinced that AI is an all knowing entity that utilizes logic to determine an answer. To them, “hallucinations” are just bugs that will be ironed out. That’s why there’s so much hype. The average person believes we’ve achieved AGI. The reality of it is that we’ve generated a great way to statistically represent different contexts and make very good guesses about what should be there in relation to existing work.
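The "very good guesses" framing above can be sketched in a few lines. This is a deliberately tiny n-gram-style toy, not how transformer LLMs are actually built, and the corpus counts are invented; it only illustrates the idea of sampling the next word from frequencies seen in training data rather than reasoning to an answer:

```python
import random

# Hypothetical bigram counts "learned" from a toy corpus: for each two-word
# context, how often each next word followed it.
counts = {
    ("the", "cat"): {"sat": 3, "ran": 1},
    ("cat", "sat"): {"on": 4},
}

def next_word(context, counts, rng):
    """Sample the next word in proportion to how often it followed `context`."""
    dist = counts[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
# Picks "sat" or "ran" weighted 3:1 -- a statistical guess, not logic.
print(next_word(("the", "cat"), counts, rng))
```

Real models replace the count table with billions of learned parameters and much longer contexts, but the output is still a probability distribution over what "should be there in relation to existing work".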
yeah, I'm starting to see this come through our management: "Why don't we use AI to automate this?" -- because we automated it with a script 10 years ago and that works better. The AI hype has done a good job of blurring the line for them between high-quality AGI (which isn't here and would be a genuine revolution) and LLM generative agents (which are a niche automation technology that is very limited).
'The market' didn't believe it at the time. We've known there were problems with GPT-5 for a long time. Its release was massively delayed, and lots of insiders have talked about disappointment for months. I think we're reaching the limit of the GPT architecture. My prediction is that the focus from here on will slowly shift to something new like Diffusion Language Models (DLMs) or Hierarchical Reasoning Models (HRMs). Don't forget there's also a huge amount of work and potential in the vertical integration of existing models, and some really important 'incremental' gains to be made from things like more effective RAG systems that access the volumes of data enterprises deal with. This last one is not sexy, and it's not AGI, but it is really important for practical use cases and should be *relatively* simple.
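The RAG idea mentioned above is simple at its core: retrieve the most relevant internal document for a query, then hand it to the model as context. A minimal sketch, where the documents are hypothetical and the scoring is naive word overlap (real systems use embedding similarity):

```python
# Minimal RAG sketch: retrieve, then stuff the result into the prompt.
# Scoring by shared words is a stand-in for real vector search.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Q3 revenue grew 12% driven by cloud contracts.",
    "The office wifi password rotates monthly.",
]

context = retrieve("what drove revenue growth", docs)
prompt = f"Context: {context}\n\nQuestion: what drove revenue growth"
# `prompt` would then be sent to the LLM, so its answer is grounded in
# enterprise data instead of whatever was in its training set.
```

The "incremental" gains in the comment above mostly come from doing this retrieval step well at enterprise scale: chunking, indexing, and ranking the data the model is allowed to see.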
even AGI arriving won't stop the crash. my PLTR puts are getting resurrected
AGI is simply not possible on the scale that investors believe it will be. Widely available AGI models would be catastrophic for our energy infrastructure.
You are cooked. The only way Meta wins is if they create AGI. How are they going to monetize AI? They aren’t selling chatbots. The most AI they have is chatting with characters in Instagram. There’s no way to monetize it; it’s a money pit right now.
Bendy’s behind Wendy’s. None of you regards came up with Bendy’s before. AGI is finally here!!! Calls on everything tomorrow
The single worst equity to own if the AGI narrative ruptures. Debt and dead data centers without half a backup plan...
The only reason Sam Altman is pushing this is because they get out of their contract with Microsoft if their model is considered AGI.
> I can get my users even more addicted to my platform

There is a marginal return curve here. Users are not going to get more and more addicted, and the return on capital certainly isn't linear... meaning more META spend doesn't mean more users get addicted.

> increase my profit margins significantly

Assumptions with zero evidence.

> as time goes in this technology will become more and more effective and even self learning and improving

Assuming you even perfect AGI or SSI, both of which are years if not decades away, and no company can afford to spend billions for that long.

> also if don't invest in this technology someone else will and possibly put me out of business i would go with it

Or you could use your cash for better opportunities: diversify your revenue sources, build data centers that will act as compute for these companies, invest in energy sources for these companies, acquire other companies, go into robotics. You don't have to chase the it thing. Apple is sitting quietly in the corner buying back shares, getting their money ready for when the dust settles.
Makes sense, and it sounds structured a lot more like how Google is structured. The thing I think is most important is not putting all your eggs in just one basket. I personally do not believe LLMs are going to get us there. It is why I believe Google is by far the most likely company to get to AGI first: they do so much broader AI research than just LLMs.
If Sam Altman doesnt achieve AGI he should be fucking deported
anyone thinking LLMs were going to become AGI in a year or whatever was smoking crack or mainlining pure venture dollars. The capex from this craze is going to have reverberations for a long time
Yeah, internal analytics and optimizations are fine, but that's not the type of money they have been spending. They have been throwing billions in search of SSI and AGI.

> there are many ai tools that would be perfect for social media music making video generation perfect tools to edit videos with.

There already are MANY tools that leverage this. Lots of competitors in this space, and the juice isn't worth the squeeze. Even assuming they have great models, where will they run? The bulk of the money will go to AWS, GCP, and MS.
If you’ve ever been on the Mako at SeaWorld, this is the apex where you can see the entire park, Orlando, and all the life choices that led you to that moment. Shorting UNH to 240. As for the rest of the market, we hit the AI bottleneck: https://fortune.com/2025/08/14/data-centers-china-grid-us-infrastructure/ We allowed hicks to hobble our competitive power for over 40 years, and now the consequences are bearing fruits of destruction. Our infrastructure is incompatible with AGI, and China might get it first. We relied on oil, coal, and gas, while AI will rely on all types of energy, especially renewable sources, to hit AGI. Like Altman said, they have bigger models internally that they cannot release due to cost and lack of energy sources. China does. We lost because we allowed liars to tell us that windmills, solar, and geothermal were bad bets. We listened to preachers instead of scientists. Our grid is incompatible with the future. AI is a bubble, in the United States. We voted the wrong way consistently and constantly. Now it shows.
Let's hope we get AGI fast enough to carry our bags into infinity
AGI is basically here already… It’s a little buggy and rough around the edges, but generally the LLMs know more than you about every topic you can think of, and a ton of shit you’ve never heard of or thought of. Unless you’re a professional in that field, the LLM is going to know more than you. That is artificial GENERAL intelligence. Now they are chasing superintelligence, which may or may not pan out, but AGI is basically here, it’s only getting better, and it will be practically applied to disrupt jobs. There’s no way around it.
None of them because they’ll all fail and the real AGI will come out of nowhere from an algorithm laid out by a college paper in like 2029
Who do you trust more to lead us into AGI/AI?

Sam Altman - OpenAI
Sundar Pichai x Demis Hassabis - Google
Elon Musk - xAI
LLMs will never be AGI so we need something more. And LLMs were the result of decades of research and breakthroughs from many different researchers. The probability that OpenAI has made a hidden breakthrough to full AGI is miniscule.
Genuine question: how would you or anyone else have a clue how close a corporation like OpenAI has come to AGI?
I understand your viewpoint, but respectfully, I disagree. All of that sounds like it’s coming from the opinion that AI needs to be AGI or fully replace an entire human role, or else everything leading up to it is a waste. AI is not a chatbot. That’s the most consumer-facing application, and of course the one that gets the most publicity. The real money is behind the scenes in business applications, workflow efficiencies, analysis, and speeding up existing processes. For example, last year Microsoft claimed to save ~$500m through their call centres alone. The transformer architecture is applicable to so many areas, including images, sound, data analysis and of course text. Listing current limitations and diminishing returns is valid, but as this area is now one of the most valuable industries globally, it’s also valid to think that we’ll discover approaches that don’t have the same set of limitations. 3 years ago, if you told someone that we’d have multimodal AI agents with context windows of 1m tokens, nobody would have believed it. I’m not saying companies aren’t overvalued in their current state, but all those chasing AI developments are doing so because they know this isn’t a fad. It’s here to stay in the form of chatbots, receptionists, data analysts, coders / testers, researchers, gaming, film production etc., and will only get better.
Sam also said they're making profit on inference compute. Google designs their own chips in-house, so they're probably making money on inference too. The money they're sinking is going into training. If their bet on AGI within 3-10 years ends up working out, they will eventually stop needing to spend so much on training compute
Remember, even $1 trillion isn’t a lot of money. Zuck will blow more than that on trying to achieve AGI and end up creating something that only he ends up using like the failed metaverse…
Yes, or possibly. We obviously don’t know how to get to AGI. But the LLM hype does get funding into broad AI research; for example, I read Chinese researchers were working on a processor that works more like a brain, modelled after an animal brain. Breakthroughs can happen any time in any direction. We could also still be centuries away.