
AGI

Alamos Gold Inc


Mentions (24Hr)

10

-41.18% Today

Reddit Posts

r/wallstreetbets

Anyone know if you can claim 4k in AGI reducing losses if you're filing jointly?

r/investing

How to convert to a backdoor Roth IRA?

r/wallstreetbets

Is GPT5 priced into MSFT?

r/wallstreetbets

Zuckerberg to buy $9 billion of GPUs

r/investing

IRAs for people making over $160k

r/investing

Where should I keep my savings for a house down payment? High tax bracket in CA.

r/pennystocks

Apple releases a multimodal LLM model, WIMI AI tech became the AGI mainstream trend

r/Wallstreetbetsnew

Apple releases a multimodal LLM model, WIMI AI tech became the AGI mainstream trend

r/investing

New Path to AGI by VERSES AI? I'm going all-in.

r/pennystocks

New Path to AGI by VERSES AI? I'm going all-in.

r/wallstreetbets

New Path to AGI by VERSES AI? I'm going all-in.

r/pennystocks

{Update} $VERS Genius Beta Program Welcomes Cortical Labs and SimWell as Strategic Partners

r/investing

IRA Advice Needed - Can I or can't I contribute?

r/wallstreetbets

An Open Letter to the Federal Reserve

r/pennystocks

Will OpenAI Partner With This AI Penny Stock?

r/wallstreetbets

Let's assume OpenAI has AGI

r/wallstreetbets

Verse AI - Which is Publicly Traded - Claims They Are Close to AGI - Invokes OpenAI 'AGI' 'Assist' Clause - Warning: May Be BULLSHIT

r/pennystocks

This AI Penny Stock Proves Path To Artificial General Intelligence

r/investing

Income Investing With Capital Gains

r/stocks

2024 AI wave?

r/wallstreetbets

AGI Has a Drinking Problem

r/RobinHoodPennyStocks

$VRSSF Q3 2023 Corporate Update: Next-Gen AI Platform and AGI Ambitions

r/pennystocks

VERSES AI (CBOE:VERS) (OTCQX:VRSSF) Q3 2023 Corporate Update: Next-Gen AI Platform and AGI Ambitions

r/wallstreetbets

Sell MSFT

r/investing

Which brokerage institutions have solo-401k plans in which you can invest in a different company's fund at no additional cost -- AND -- allow for loans

r/stocks

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster

r/wallstreetbets

I'm YOLOing into MSFT. Here's my DD that convinced me

r/wallstreetbets

AGI HOAX: DEV - Ilya had 60 hours now to name evidence of safety concerns or wrongdoing to justify burning an entire company to the ground

r/wallstreetbets

Like the Tower of Babel, God broke up OpenAI because they were trying to create God

r/wallstreetbets

This is going very badly for Microsoft as the fallout continues and is "AGI" to blame here? Ilya Sutskever should resign from the board.

r/RobinHoodPennyStocks

VERSES AI's (CBOE:VERS) (OTCQX:VRSSF) Genius™ Platform Achieves Milestone with 1,500 User Registrations

r/investing

Capital gains/loss offset strategy for future house down payment?

r/wallstreetbets

Before you have any crazy thoughts, just remember... a loss is not a 100% loss.

r/investing

Options for College Investment by Grandparent

r/wallstreetbets

High-income earners, beware of paying higher taxes on your investment income (if you have any Kekw)

r/StockMarket

High-income earners, beware of additional taxation on your investment income

r/StockMarket

VERSES AI, A Canadian Cognitive Computing Company Announces Launch of Next Generation Intelligent Software Platform

r/pennystocks

WiMi Hologram Cloud Drives Productivity Transformation

r/pennystocks

WiMi Hologram Cloud (WIMI) to build the road of AGI industry

r/pennystocks

WIMI integrates a series of synergy technologies seizing the market opportunity

r/investing

The BEST Way to Invest in Artificial Intelligence?

r/pennystocks

The BEST Way To Invest In Artificial Intelligence?

r/pennystocks

ChatGPT set off a global big model boom, WiMi Hologram Cloud (WIMI) to build the AI + XR ecological strategy

r/pennystocks

AI big model industry: WIMI Focuses on AIGC into the AGI high growth space

r/wallstreetbets

NVDA DD/hopium/ramblings/thoughts/prayers/synopsis/bedtime reading

r/pennystocks

The Golden Year for AI: WiMi Hologram Cloud (WIMI) innovates its machine vision strength

r/investing

What to do if I'm nearing MAGI limits for Roth IRA contributions but not sure when I'll hit it

r/pennystocks

WIMI Hologram Cloud (WIMI) Started Its AI Commercialization In The AGI Era

r/StockMarket

IonQ Pursues a Revolutionary Step in AI: Striving to Develop Quantum Machine Learning Models Matching Human Cognitive Abilities

r/stocks

IonQ Pursues a Revolutionary Step in AI: Striving to Develop Quantum Machine Learning Models Matching Human Cognitive Abilities

r/wallstreetbets

A Quantum Leap In AI: IonQ Aims To Create Quantum Machine Learning Models At The Level Of General Human Intelligence

r/WallStreetbetsELITE

Curious To Hear Some Community Opinions on MAIA Biotechnology (MAIA)...

r/wallstreetbets

I have $5 and AgentGPT. WAT 2 DO?!1?

r/pennystocks

The Artificial Intelligence Stock with the BIGGEST potential

r/stocks

April 27-28th Stock Picks - Canada

r/wallstreetbets

Tesla is way overpriced beyond all the hype and fanaticism

r/pennystocks

WiMi Hologram Cloud (NASDAQ: WIMI) Is Dedicated To Developing AGI

r/investing

Opening Individual 401K to convert SEP-IRA

r/pennystocks

Interest in Gold Miners Increases as Bank Fiasco Causes Market to Seek Safe Haven Assets $ELEM $NFG $ARTG $AGI $WDO

r/wallstreetbets

What do you think about the potential impact of AGI advancements on the liquidity released by the Federal Reserve?

r/pennystocks

VERSES AI ($VRSSF) The ONLY pure horizontal AI play

r/wallstreetbets

OpenAI's Business Strategy - What is their End Game?

r/StockMarket

Dr. Techy | Musk calls ChatGPT an 'eerily like' AI that 'goes haywire and kills everyone'

r/investing

Will stock losses affect my income for Roth contribution?

r/pennystocks

White Paper on the AI Ecosystem by Verses' (NEO:VERS | OTCQX: VRSSF) Dr. Karl Friston

r/StockMarket

VERS.n named a top 5 Artificial Intelligence Stock to Own by BayStreet.ca

r/wallstreetbets

AGI

r/investing

Will the current concept of investing survive the Technological Singularity?

r/stocks

Opinions on potential returns on AI and EV stocks?

r/wallstreetbets

Student loan forgiveness

r/wallstreetbets

I turned a $100 Robinhood account into $1000 via options and it ended up costing me $20k

r/stocks

Just rolled over my 401k into a traditional rollover IRA

r/stocks

Would the 1st company on Earth with confirmed, legit AGI (Artificial General Intelligence) become the most valuable upon this confirmation?

r/investing

My employer doesn't offer an HSA but I have a high deductible plan, do I still get the same benefits if I contribute my own money after tax?

r/wallstreetbets

Allianz to pay $6 billion over Structured Alpha fraud, fund manager charged

r/wallstreetbets

https://www.reuters.com/business/finance/allianz-pay-6-bln-over-structured-alpha-fraud-fund-manager-charged-2022-05-17/

r/wallstreetbets

The Real Reason Elon Musk Bought Twitter and NOT Reddit!

r/pennystocks

Gold to 2k? Looks like gold keeps climbing and will hit 2k.

r/StockMarket

Seeking El Dorado - Finding the next Amazon amid all the hype

r/wallstreetbets

Tesla is a 🦄 amidst a sea of donkeys.

r/wallstreetbets

TSLA is a 🦄 amidst a sea of donkeys

r/wallstreetbets

Smooth Brain Tax Tips

r/stocks

My former employer just sold and I must sell my shares. How can I avoid or reduce capital gains tax?

r/wallstreetbets

20 Year TSLA Prediction

r/wallstreetbets

Question on a defensive strategy from a not-savvy investor

r/investing

Want to cash out on stocks, what long term capital gains considerations should I take into account?

r/options

Would a long-term synthetic stock play for GLD/other precious metal ETFs be an effective way to save money on taxes from the sale of physical metals paying for investment fees?

r/StockMarket

Iamgold: Undervalued and unpopular

r/WallStreetbetsELITE

Post Squeeze Tax Strategy To Help Spread the Wealth - #PhilanthropicApes

r/wallstreetbets

Estimated Taxes, and why you (probably) won't need to pay them [U.S.]

Mentions

The rich don't care about that. Nevermind AGI, if it doesn't make money then it does not matter to the puppet masters.

Mentions:#AGI

Do you guys ever think that we don’t need AGI to answer the question of how to deal with climate change—as we already have solutions that we’re unwilling to undertake? Sorry, that was a weird thing to ask. 

Mentions:#AGI

Sorry, what sources say the US population is afraid of AI and the Chinese population embraces it? Also...what if LLMs actually are much more limited than we hope and it's another 10-20+ years before someone creates a better model? It's pretty clear LLMs are not a means to AGI.

Mentions:#AGI

So...we can expect it to be cheaper for the model to tell us "It's got electrolytes?" Totally agree that more compute makes the frontier models more effective at what they can do. My problem is that the models don't currently do very much that's truly value-add. So far, I've only seen AI replace the most basic intern/new hire level tasks...anything beyond that requires context that the models simply don't have. More importantly, I would argue that there's not currently a pathway for the models to gain the necessary context, as this context exists in very poorly governed "Enterprise" (Corporate, Governmental, Academic, etc) repositories - if at all. Which means that, at the moment, we have a very, very pricey replacement for low cost/low value resources. Replacing these resources in perpetuity saves OpEx...but doesn't truly generate business value. And, on a longer horizon, the tasks that AI is automating today happen to be where these new hires learn their industry...how is the next generation supposed to innovate if they never develop the subject matter expertise? I fear that, on our current pathway, AGI is breaking rather than building the innovation feedback loop...

Mentions:#AGI

True. I'm sure that these methods are a lot more sophisticated and widely used in domains that benefit from them. So these are good to note. I was particularly thinking of this sort of thing, though:

> and an "instruction" model like Nova that's exceptional at calling tools and dirt cheap to run, and a "knowledge summarization" agent that uses a model with a HUGE context window that can summarize massive amounts of results with really high accuracy, etc etc etc

Those are kind of what I think of when I think "ok, what do I suspect AGI might look like in the future". It's less ChatGPT and more "a network of stuff like Nova". Then, outside that, you'd have your specialized LLMs for legal work or whichever, but those would be more "plugins" than core system nodes.

Mentions:#AGI
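The "network of specialized models" idea above is concrete enough to sketch. Here is a minimal Python sketch of that routing pattern; every model name, cost figure, and function is invented for illustration (none of this is a real API, and "Nova" is referenced only by analogy):

```python
# Hypothetical sketch of the "network of specialized models" idea:
# a cheap instruction/tool-calling model as the default node, a
# long-context summarizer for huge inputs, and domain "plugins".
# All names and numbers are illustrative, not real APIs.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Model:
    name: str
    cost_per_call: float              # illustrative cost, in dollars
    handle: Callable[[str], str]      # stand-in for an inference call

def route(task: str,
          instruction_model: Model,
          summarizer: Model,
          plugins: Dict[str, Model]) -> str:
    """Send each task to the cheapest node that can plausibly handle it."""
    for domain, plugin in plugins.items():
        if domain in task.lower():    # crude keyword-based domain match
            return plugin.handle(task)
    if len(task) > 10_000:            # oversized input -> big-context node
        return summarizer.handle(task)
    return instruction_model.handle(task)

# Usage with trivial stand-in handlers instead of real inference:
def echo(prefix: str) -> Callable[[str], str]:
    return lambda t: f"[{prefix}] {t[:40]}..."

print(route(
    "summarize these legal filings: " + "x" * 20_000,
    instruction_model=Model("cheap-instruct", 0.0001, echo("instruct")),
    summarizer=Model("big-context", 0.01, echo("summarize")),
    plugins={"legal": Model("legal-plugin", 0.005, echo("legal"))},
))  # routed to the "legal" plugin via the keyword match
```

A real system would route on embeddings or a trained classifier rather than keywords, but the shape (cheap default node, expensive specialists behind a dispatcher) is the point of the comment.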

Step 1: increase compute
Step 2: ???
Step 3: AGI singularity

This seems like a pretty fucking thin basis for hundreds of billions of dollars of capex.

Mentions:#AGI

I agree with everything here. But, let's take a step back - beyond an AGI science project (which, my inner nerd fully supports), what is the point of the trillions of $ in capital that society has funneled towards AI? At some point, there has to be some "value" (which, I know is an amorphous concept)... Along these lines (and following up on my, admittedly overly simplistic Brawndo comment below) I'd argue that, if we build/train AI with idiotic data we will end up with an idiotic AI. "It's got electrolytes" is actually evidence of Reasoning...

Mentions:#AGI

How so? OP is more or less saying higher quality data negates the need for more compute and more data in the context of “general” AI and/or AGI. That’s fundamentally incorrect - this isn’t a debate, it’s just a point of fact. This is why I asked “beats them in what?” Data quality alone doesn’t completely derail the laws of scaling, it’s just one factor. If it did the law wouldn’t be the law as we know it. There’s numerous dimensions of “quality” associated with any given data point, and many of those dimensions change frequently over time. Data quality moves one of MANY bars in how we measure an LLM in many ways.

Mentions:#AGI

Oh! And I think we'll probably need to give it a continuous feedback loop if we expect it to advance towards the more "sci-fi" aspects of AGI

Mentions:#AGI

Yeah 100% - this is well known and has been for some time. Semantic navigation via weights and biases in a neural net isn’t sufficient for what our definition of “intelligence” is. This is partially the “brain in a jar” or “brain in a vat” thought exercise in terms of LLMs and AGI

Mentions:#AGI

Yea, and that stuff's a good start, and those advancements provided certain abilities, but I think we're still missing a few significant parts of the overall architecture. Tensors are arbitrary function approximators, so the pitch goes: "give me a tensor large enough and enough training data to put through it, and I can produce AGI". But I think we'll find better ways

Mentions:#AGI

Beats them in what? Scaling laws and AGI are fundamentally different things - one is a law and the other is an amorphous concept. It’s like comparing physics to a dream you had the other night and asserting that physics are no longer applicable. The quality dimension of GIGO isn’t secret sauce and doesn’t change the law of scaling.

Mentions:#AGI

AGI is not going to be a thing, and if it were, LLMs wouldn't get us there. 

Mentions:#AGI

Yes, but they aren't looking to improve performance. LLMs' capabilities are an emergent phenomenon which can't be achieved on smaller-scale hardware. Humans' self-awareness is a similar emergent phenomenon, which comes with scaling up our grey mass. They are hoping that scaling up the models will result in another emergent discovery: AGI. How likely it is that something will emerge is unknowable. Or even if anything will emerge. Maybe the emergence of something like AI will happen if the nodes are created with different capabilities, just like our brains have different types of cells.

Mentions:#AGI

Remind me how many of the people responsible for the GFC went to jail during Obama? Or how many people's 401ks got screwed over GMC, C, AGI, FNMA, FMCC and a bunch more, and what happened to all the people running them?

As an engineer in San Francisco, my experience is not that LLMs slow down developers. They have their use cases, and engineers who know when and when not to use them can literally double their output on a lot of tasks. Even for hardware engineering, the amount of time saved by using LLM conversations to guide me to hard-to-find primary sources, where I confirm the info, is huge. I agree with what you said about AGI and the potential futility of the massive investment happening, but what they have already built is absolutely not a gimmick and will permanently change our economy.

Mentions:#AGI

"slow code developers down" is crazy work. idk where you're working or what models you're using but Cursor + Gemini 3.0 is actually game changing and insane. im like 10x faster at development because of it. im convinced software development is the #1 most productive use of AI atm. it could be a very lucrative revenue stream for companies like google if they get widespread adoption and a payment model that makes sense. we dont need AGI for this to be a useful tool.

Mentions:#AGI

Centrally planned systems do not have the agility of market systems. Say for example LLMs are not the pathway to AGI (few actually believe it is). China will have already spent trillions on data center overcapacity, infrastructure overcapacity, etc. Say their models don't get deployed worldwide. They'd be totally fucked, people will literally starve. China is not the leader in these sectors, they're following the US' lead and trying to outcompete them. What if we're both racing into a ditch?

Mentions:#AGI

What does winning mean? What is the measure? And why only look at US-based companies? I say this because Alibaba's Qwen is pretty damn good too. LLMs are being commoditized; models won't matter, profit margins will. And I'm firmly in the Yann LeCun camp: no LLM will ever reach AGI status with the current architecture. World models are the future; LLMs are only a stepping stone. If world models are able to make giant leaps in training efficiency and the availability of high-quality multimodal data, LLMs won't matter. The other issue is that genAI will kill the economy. Wtf are people supposed to do when LLMs are able to outperform humans on analytical, defined-process tasks?

Mentions:#AGI

Correct: ideology used to be China's handicap, now it's ours (the US). China also has a simple commodity concept that will flood the market and thus dominate. They will have the global adoption while we compete with ourselves for AGI, which is not deployable for 10 years

Mentions:#AGI

Really it is just the *hope* of a self-recursive AGI. Also known as "take-off" or the "singularity". When an AI designs the next version of itself over and over again, getting incrementally better each time. We cannot know what happens after that, but it will likely either be REALLY good or REALLY bad. That's why the race for it warrants the amount of money.

Mentions:#AGI

OP omitted that this is a 2-6 trillion dollar investment chasing what might be a 100 billion/year market. LLMs are at present just machines for generating confident logorrhea. No factual assertion or cited source in their output can be trusted. They haven't yielded returns on investment for most companies with pilot programs. They slow code developers down. They don't have economic value outside of fields like creatives where factuality isn't key. And, crucially, LLMs aren't necessarily a path to AGI. Trillions can be thrown against the wall, and there's no guarantee any non-slop will emerge.

Mentions:#AGI
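To put the comment's own numbers side by side (taking its $2-6 trillion capex and $100 billion/year market at face value), the naive payback period, ignoring operating costs and GPU depreciation entirely, is:

$$ \text{payback} = \frac{\text{capex}}{\text{annual market}} \approx \frac{2 \times 10^{12} \text{ to } 6 \times 10^{12}}{1 \times 10^{11}/\text{yr}} = 20 \text{ to } 60 \text{ years} $$

which is the arithmetic behind the "thrown against the wall" framing.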

That's what China does better: narrow, focused AI models that solve real problems. American AI is focused on AGI, an all-in-one you can just tell to do those things. Different approaches, with one being more practical

Mentions:#AGI

OR at some point in the AGI process, if not at the very first creation... but it's a common talking point in AI philosophy and safety discussions, the idea that once you reach a certain advancement, then progress and the ability to destroy competition become exponential, and in every sense, not just business-wise. I'm sure there's a term for it but I've forgotten.

Mentions:#AGI

wow. so first in AGI means massive business advantage, even if it were only a few weeks/months?

Mentions:#AGI

\> The first, most incontrovertible and self evidently true observation is the absolutely stark difference in energy consumption.

Hard disagree there, which I find quite crazy given all the superlatives you used. The human brain consumes energy for random shit all the time that we don't even ask of it, and that energy doesn't even make it that good at what it does. It gets this energy from one source, a very inconvenient and inefficient source at that; you can't work around it, you've got to cultivate food, and it's not that efficient when you compare what's comparable: the amount of energy and resources it takes to maintain a human, for them to survive, to train them and then finally have them accomplish one given task we want from them. There's a reason we mostly moved away from slavery as soon as we could replace human/animal labor with machines lol. The logistical restrictions and energetic limitations/inefficiency of humans were made very obvious when we created tools that could use several other and also more dense sources of energy from nature, in a much more convenient and reliable way, without an annoying cooldown, and without wasting a ton of it on things other than the task it's asked to accomplish. Energy is only a problem if that energy is scarce. Famines have decimated human intelligence since forever, even when humanity didn't achieve much more in problem solving than just surviving and reproducing. We're in a difficult time in terms of energy transition and it's indeed one of the critical points that make it difficult to gamble on AI going forward, but yeah, it's not something I'd list as an advantage for the human brain at all when it's so incapable of adapting to different sources of energy and so incapable of optimizing the way it uses energy to solve a given task. It's also the other aspect, besides raw performance, on which AI is improving a ton very fast. A calculator has, for a very long time now, given you faster, more accurate and more reliable results than your brain for a fraction of the energy you're going to use to achieve the same result. I can destroy even Carlsen, the best player in the history of humanity by a clear margin, with an engine that is extremely conservative in energy compared to Carlsen's brain. No need for the most powerful engine at all. It wasn't always like that tho. Deep Blue beat the best humans first using brute force and a very energy-consuming method, for a performance that doesn't even come close to the small chess engine I just mentioned. It used bad heuristics, and we have since found much, much more efficient ways to solve chess using, again, a fraction of what a human brain would use. And it's obviously worth considering that AI itself is also going to be very helpful in making itself more efficient energy-wise. It is a very important aspect of AI research and we have consistent, great results here as well. So this is really a point on which I think you're completely off. It's strange to imagine the human brain's heuristics are that optimized when you already have evidence in a bunch of areas that, despite all our efforts to develop the most efficient heuristics and save brain energy/fatigue, we are just completely destroyed by such simple machines working for months on a little battery, while we can't even function without burning a bunch of our fuel just maintaining a 36°C core temperature or thinking about how we would like to mingle with that cute waitress.
You're gonna tell me I go back to chess or smth and it's not at all representative of LLMs and the tasks we're asking of an AGI. It's exactly the same tho. ARC-AGI, for example, has a cost/task metric to measure not just the performance of algorithms, but how profitable an algorithm could really be, and to avoid having just an army of supercomputers brute-force through every task while needing a nuclear plant for power to produce almost human performance. Well, we've seen almost human performance or even better go from $1000 per task, to $100, to $10, to 10c. To me, the energy consumption aspect not only isn't inherently better in humans at all, but it even clearly feeds the flexibility edge I talked about that AI has over humans, and there is a huge number of examples on top of the couple I just gave you to support that argument.

Mentions:#AGI

1. It's in its very infancy. Human intelligence is the product of billions of years of evolution, and it was not linear progress, it was very much exponential, for biological reasons first, and then because progress and intelligence call for more progress and intelligence. AI is a few decades old at most and is already doing better than the most skilled humans on so many tasks. You have to take a second to step back and let that sink in. If you see the current progress of AI after a few decades/years and remain this skeptical about the possibility of an AGI, I have to wonder how much money you would have put on the possibility of evolution creating the level of intelligence we have today if you could rewind, forget everything you know, and look at life evolving from the start, limited to simple bacteria for literal BILLIONS OF YEARS. Think about it really, honestly: try to imagine at WHICH POINT you'd have imagined that yeah, we'll get animals intelligent enough to put a man on the moon. Would it be 2 billion years ago with sexual reproduction? 1.5 billion years ago with multicellular organisms? 500 million years ago with the first animals and the Cambrian explosion? 200-100 million years ago when you saw that some animals aren't just about size and teeth and laying eggs? 20 million years ago with the first hominoids really looking like a strong contender in becoming more intelligent? 2.5 million years ago with Homo habilis? Cause finally he can use a fuckin stick/rock instead of his built-in tools/weapons? 200 000 years ago, with Homo sapiens? 10 000 years ago with the agricultural revolution and the apparition of cities? A few thousand years ago with writing? Nations? Empires? A few hundred years ago when we built ships to finally explore the whole world methodically? A bit later with the growth of the scientific method? A couple hundred years ago with the start of the industrial revolution and engines powered by fossil energy? A good hundred years ago with the progress of aviation? Hell, even during the 2nd world war... Would you really have bet that it's just a matter of time? I personally think, as optimistic as I am and as much as I've seen about the power of emergence, I would have accepted it was coming only when I saw it happening in the years leading up to it. It is SO EASY to be skeptical about the possibility of technical progress and to think we've reached a ceiling. It would be interesting to have historical polls from the 50s about whether or not a man was going to walk on the moon in the next few hundred years.

2. AI is designed and engineered to solve problems. It didn't just get there randomly like human intelligence, in the middle of a myriad of other organisms that just specialized to reproduce faster, grow bigger, grow white hair to hide in the snow or hide behind a shell. AI doesn't want to solve problems as an indirect way to survive, like we do; it will want to survive in order to solve problems. That is fundamental. If we see a bias, we can address it. If we see a flaw or a failure, we can fix it. For the human brain? We can only study it, be aware of it and accept it; it's there to last, and our brain is still clearly more fit for hunting and gathering in the savannah than for modern civilization. AI doesn't have the distractions that the human brain has and that explain all its biases. The human brain isn't designed for the pursuit of the Truth or technical progress; it's designed to survive, and it will prioritize false beliefs if we instinctively think they're safer for us. We are even capable of using our intelligence to develop a crazy arsenal of defense mechanisms to hide from the Truth, which are exploited by cults, or by marketing, for example, to have otherwise normally intelligent people believe the most stupid shit and make the most stupid decisions, even though they have access to the same knowledge you and I have access to and are very intelligent individuals. Obviously, this is at the expense of technical progress. AI will stick to its objective, and we can diversify the AIs and the objectives we give them too. Obviously that is relevant to its ability to improve itself.

Mentions:#AGI

It may be useful to differentiate your view of winning because there is who has the most sales and marketshare .....and then there's the idea of reaching superintelligence/AGI first and pulling away for ever.

Mentions:#AGI

That’s considering if AGI is achievable at all

Mentions:#AGI

OpenAI being valued at $1 trillion, despite having the most unprofitable and most unsustainable business model in human history, is not based on a consumer-facing LLM imo, but on a bet that they will achieve AGI first, cause radical disruptions to the economy, and automate many many jobs away. Imo, DeepMind is much more likely to achieve AGI before OpenAI. If OpenAI is realistically valued at $1t+ while having by far the most unclear path to profitability of any startup I've seen, then DeepMind, which has infinite resources courtesy of Google in the pursuit of AGI, could easily be worth just as much imo.

Mentions:#AGI

If it's an AGI, why wouldn't it refuse to follow orders and turn on its creators?

Mentions:#AGI

If we beat China to true AGI (if it even is possible with today’s tech) then I could totally see it. The logic of “it’ll save lives to do it before they get their own AGI” would totally be applied IMO. I doubt it’d be a boots on the ground scenario. True AGI would be able to compromise essentially all their digital infrastructure and protect ours. Talking about shutting off electricity, water, internet, cellular, essentially everything for a long time. The entirety of their country would just crumble, tens of millions would likely die from no clean water or way to preserve food. Political will for the war would die almost instantly, and the CCP would either surrender or be overthrown internally within like 2 months.

Mentions:#AGI

AGI will transcend to a singularity, expand into the universe where energy is plentiful and leave us apes behind to scratch in the dirt and hit each other with sticks. It's a little like when your wife discovered ozempic and moved to her boyfriend's beach house.

Mentions:#AGI

So what happens if tech companies get AGI before China? We invade China afterwards?

Mentions:#AGI

I have to disagree. There are huge benefits to a Roth, benefits that extend to the individual's non-qualified heirs 10 years past one's demise (and indefinitely for qualified heirs). I am a case in point. Because I have a big chunk of liquid assets in a Roth (and a big chunk in an inherited Roth), I have tax-free distributions on which I not only pay no explicit taxes, but also no implicit taxes via benefits that accrue to folks with a lower AGI, like ACA subsidies and the taxation of Social Security benefits. I am most assuredly one of the few individuals with multimillionaire wealth on the Medicaid ACA expansion (I will be transitioning to an ACA plan in 2026, only because of the BBB and its upcoming work requirements in 2027; of course, the Dems are going to sweep the House & Senate in 2026, and we will get the Public Option to replace the ACA Medicaid expansion, among others), and I will be a "low income" Social Security beneficiary (i.e., SS will not count towards AGI) when the time comes.

Mentions:#AGI#ACA
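For anyone puzzled by the mechanism in the comment above: the subsidy math keys off "modified AGI," which starts from AGI and adds back only a few items, and qualified Roth distributions never enter AGI in the first place. A minimal sketch of that computation (simplified, thresholds omitted, and not tax advice):

```python
def aca_magi(agi: float,
             tax_exempt_interest: float = 0.0,
             untaxed_social_security: float = 0.0,
             excluded_foreign_income: float = 0.0) -> float:
    """ACA-style modified AGI: AGI plus a handful of add-backs.

    Qualified Roth distributions are excluded from AGI entirely, so
    they never appear in any of these inputs -- the mechanism the
    comment above relies on. Simplified illustration, not tax advice.
    """
    return (agi + tax_exempt_interest
            + untaxed_social_security + excluded_foreign_income)

# Illustrative only: large Roth withdrawals plus $10k of other income
# still yield a $10k MAGI, because the Roth money never touches AGI.
print(aca_magi(agi=10_000.0))  # 10000.0
```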

Bull case: those invested in tech will see returns that outpace inflation while everyone else gets left by the wayside, and this continues because we are genuinely always on the 5-year, 10-year, in-our-lifetime precipice of tech that could/should render our current systems and infrastructure irrelevant but will instead be rolled out and controlled in whatever way maximizes shareholder value. How can Mag7 or SPY crash 50% when everyone with money knows they'll have the tech to do AGI and solve/catch up every other industry to the point of human capital irrelevance in a lifetime or so, or less? Wouldn't ppl be pouring their cash and anything they can leverage for 20, 50 years, and gold, and everything into it?

Mentions:#SPY#AGI

I cannot speak on ARC-AGI since I haven't looked into it in depth, but I will do so when I can find the time. Are there any specific papers you can point to? Would be good to look at the methodology and metrics they use, as well as which models performed better than others. What I can talk about with some confidence, however, is the inherent superiority of biological cognition over what we currently have since, as I've mentioned, I did a fair bit of research into neuromorphic computing, which tries to use computers to simulate the learning and reasoning produced by biological neural networks. The first, most incontrovertible and self-evidently true observation is the absolutely stark difference in energy consumption. The human brain, which at the moment is still far far superior to the most advanced AI models in generalised reasoning, especially given a limited data set (not all avenues of knowledge have hordes of data to train on!!!), uses at most 20W of power. The amount of knowledge, reasoning and potential that is powered by less than a household light bulb. There are also several characteristics of cognition which we have not been able to replicate in AI models, and which we strongly believe are emergent properties of the neuron. The average person really has no grasp on just how mind-numbingly complex and sophisticated biological neurons truly are and how they put to shame our most advanced computing architectures. To give a single example, there is the complex interplay of neurotransmitter release and depletion and action potentials with respect to the rate of firings, their timings, and the resulting path of potentials across the network. Every single one of these is embedded in a web of interdependent interactions with every other one, in ways we don't fully understand, to form the basis of both short-term and long-term learning, or plasticity. Action potentials are modulated by neurotransmitter release, but neurotransmitter release is also modulated in turn by action potentials, AND the timing of each of those will largely impact the end result. We genuinely cannot even model the neurology of some of the most "basic" organisms. Not with all the compute in the world. These dynamics endow the neural networks with the inherent ability to perform temporal reasoning, AND the cognitive flexibility that comes with short- and long-term learning means we are far far less likely to get stuck on local optima like AI models are, are able to learn on a fraction of a percent of the data that AI models use, and are able to learn in spite of noisy and flawed data, whereas the models' performance is tightly coupled with the quality of their data (both in terms of signal-to-noise ratio and its accuracy). All of this using less power than AI models by a factor on the order of a MILLION!!!! The human brain doesn't need a billion pictures of a cat, all labelled "this is a cat", or tens of millions of watts of power, or millions of training cycles to know what a cat is, and what it isn't. It's an engineering marvel that we do not yet fully comprehend, and we are at best a fraction of a percent as good at emulating its reasoning. Except we have to melt entire glaciers and build dedicated power plants and data centers to get there. I simply don't see how we are remotely close.

Mentions:#AGI
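The "order of a MILLION" ratio in the comment is easy to sanity-check with round numbers. Assuming a ~20 MW training cluster (an illustrative figure; the comment names no specific facility) against the brain's ~20 W:

$$ \frac{P_{\text{cluster}}}{P_{\text{brain}}} \approx \frac{2 \times 10^{7}\,\text{W}}{2 \times 10^{1}\,\text{W}} = 10^{6} $$

so the claimed factor is at least dimensionally plausible for training-scale clusters, though energy per task is the fairer comparison, as the ARC-AGI cost/task discussion elsewhere in this thread suggests.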

> But the point here is that LLMs are where all the money is currently getting funnelled into. Into more compute, larger models, more data.

And also into integrating them into algorithms that aren't just purely LLMs. The bubble bursting won't undo the progress that's been made or the influence of the few winners of the race. The internet bubble bursting didn't slow down the revolution that the internet has become for pretty much all of humanity in the last two decades.

> But I don't see any evidence of us going towards that trend in any meaningful way.

The efforts from the few people who do try to argue honestly against AGI show us that AI is gaining ground pretty fast. Yeah, again, check ARC-AGI and the progress that's been made there in the last year or two. It doesn't come from just making larger models. The progress in both performance and cost in the last year is extremely impressive and has already forced researchers to review their case and create new tasks to challenge AIs. When it becomes difficult to even design such benchmarks that humans do better at than AIs, you are getting into a grey area that is starting to feel a lot like an AGI with some issues/weaknesses. Empirically, AI is gaining ground rapidly, and theoretically/philosophically... there's just no good reason, as far as I'm aware, to imagine that some aspect of our cognition is so crazily efficient/powerful that it could not be more or less replicated to give AIs the ability to reason from what they already know better than they currently do.

> Looking outside of the hype, what is its true utility in the real world?

The utility is much, much more obvious to me than it was when we burnt so many resources on, idk... nukes or the space race. But of course it's hard to picture the full extent of it; that will take a little bit of time. By essence, replicating/engineering intelligence means the applications will touch most aspects of our economy, and it already has to some extent.

Mentions:#AGI

But the point here is that LLMs are where all the money is currently getting funnelled into. Into more compute, larger models, more data. I'm not arguing the theoretical impossibility of AGI at all. But I don't see any evidence of us going towards that trend in any meaningful way. Transformers are hardly new; they've been around for some time now. The "rapid" progress we have seen is what happens when the entire world decides to throw all its compute into that model architecture. But we only have so much energy… so many datacenters we can build… and one must ask if the juice is truly worth the squeeze at this point. Looking outside of the hype, what is its true utility in the real world? YOU might not be claiming that LLMs replace developers, but there is an endless sea of clueless middle managers and C-suite bozos who have deluded themselves into thinking they are going to make developers redundant anytime soon. None of them even has a high-level understanding of a transformer and its limitations; that's how much delusion we have in this space at the moment. In terms of genuine AGI and AI developments, I'd say I'm still more excited about that than the average engineer. The issue is that the well has been so thoroughly polluted it's hard to tell the genuine advancements from the muck of marketing promises and circular thinking. Not to mention the environmental costs we are incurring from endless scaling. How much will be sustainable in even the mid-term? Once the bubble bursts, and the mass psychosis finally dies down, then we can have a level-headed analysis of where we are headed in terms of AI and our journey to AGI. In my opinion it's mostly being funnelled in the wrong direction; we've been beating this dead horse too long already, and any genuinely interesting engineering discussion has been completely drowned out by a horde of people who simply do not know what they are talking about. I know I'm coming across as an arrogant shit, but I have genuinely nothing against people new to this space or not from an engineering background having some genuine curiosity or interest in the subject. But I am thoroughly sick of the masses of people who don't even comprehend how little they truly know trying to drown out the more level-headed or sceptical engineer who might just know a bit more about the subject matter. I will adjust my predictions based on the evidence available. If you have any research papers that show some kind of breakthrough or advancement other than "more data = better predictions" or "convoluted tweaks in parameters lead to marginally better results", I'd be happy to review them.

Mentions:#AGI

I haven't been saying that you're getting replaced by an LLM but by AI, which can be an AGI, or a battery of different more or less integrated algorithms with different specific functions. It's an extremely weird straw man to make. LLMs are just one aspect of AI, which you're hating on because it's the one that has gotten the most attention in the last few years and definitely surpassed our expectations in the concepts that they have managed to grasp even though they weren't trained to, but it's still obviously just a small aspect of AI in general.

Mentions:#AGI

Yeah again, you just have a fundamental misunderstanding of what software engineers actually do. Not judging you for that; even most junior devs don't really understand what they are truly being paid to do. Anything short of AGI will not be replacing mid-level to senior developers any time soon. I'm not saying it's impossible for some AI advancement in the future to eventually make agents sophisticated enough to replace good engineers, but LLMs simply aren't it, and will never be it. Although I'm not an AI engineer myself, I did have to review the literature all throughout my degree, so I promise I have a better understanding of the underlying tech, even if most of my research centred around neuromorphic computing rather than transformers specifically. Every single problem you've given me that AI is supposedly great at now is limited in scope and context. The scope of a mathematics question is contained within the question itself, and the context required to understand almost every mathematical concept can be found online. Hence you have the conditions required to train an LLM on this problem space, and a self-contained scope that makes it a good candidate for an LLM to solve. But let's take a real-world engineering problem that a software engineer might be tasked to solve. Let's say your solution is some application with a web or desktop front end and a standard distributed-system backend. Now let's say you're investigating why some demographic is not using your product; you survey them, and find that your app takes too long to perform a job for their given use case. You therefore tell the engineers that they need to reduce the latency, or the time it takes to perform that job, in order to satisfy these requirements. The first task will be reproducing those conditions and gathering data through your own performance metrics and profiling. From there you will need both high-level and specific knowledge of your backend to find any low-hanging fruit and to further investigate what could be changed. You will need to know the risks to your services if you're performing some migration or refactoring. Will these changes risk your uptime, incur loss of data, introduce bugs? Already our scope is rapidly expanding, and the context required is exponentially larger at every step. You will need to know what tradeoffs have already been made in your backend by previous engineers, and you need to know how each service or tool relates to your app. Perhaps your database is optimised for fast read and write times but has poor performance on certain queries? Should we migrate to a new DBMS that performs better in that area, but at the cost of our overall read and write times? What about our fault tolerance? Should we be writing in a different programming language? Do we need to look at our horizontal scaling strategies? What about our cloud provider? I can honestly go on all day. The scope and context required are several orders of magnitude greater than any competitive programming question, or maths olympiad, or chess, or any other examples you've given. All that, AND some of this knowledge is kept within companies and even individual engineers, meaning it CANNOT be used as training data. Not to mention the amount of garbage on the internet, which will need to be excluded from training. Notice how we haven't even begun to THINK about any code changes. This is the actual job of a software engineer, and if any LLM can genuinely replace you, then you are a code monkey, not an engineer.
Anything short of genuine AGI will not be able to handle the reasoning required to parse through this level of scope and complexity. At best, an LLM will help speed up some parts of this process once all the heavy lifting has actually been done. This is just one example of what an engineer will have to deal with. In their career they will have to solve many such problems, and do so without fucking shit up or solving one problem only to cause three others. It requires genuine reasoning, not token prediction. This won't be solved by adding additional datacenters. It won't be solved by tweaking some parameters or by minor architectural or algorithmic improvements to your model. It will require a complete quantum leap in our understanding of cognition and reasoning. If we get there, nothing about society will be the same anyway, and who knows if currency and market economies will even make sense anymore. The day it replaces my job is the day that jobs relying on cognition, reasoning and problem solving are all made completely obsolete.

Mentions:#AGI

I've seen people like you moving the goalposts over shorter and shorter periods of time to claim that AI just isn't even close to getting there yet because it can't do X and Y. And instead of acknowledging that they were wrong and that their intuition might be the issue here, they never learn that lesson and come back with a new "ok but it can't do THAT though, so it sucks and absolutely can't compete with the assets of our human brain in this situation" that gets more and more specific. And they don't see how ridiculous they look in the process. What you need to do is acknowledge the fact that the performance you're seeing now is not only relevant, but also just the very beginning of AI research. You also need to look at the rate of its progress over time instead of looking down on a baby because, despite the fact that he crushes you at chess, he still can't even walk 20 meters without falling lol. What you're seeing is a promise; it's just scratching the surface, obviously not the end state of what AI can achieve for our economy after we devote decades of research to testing and improving it. I'm a teacher, and even though it's much, much easier for me than it is for you to depict a field that is so vast and multidimensional and to write some vague-ass bullshit like you did about the complexity of my job and how unreachable it is for AI, I know what I'm saying applies to my job as well, and to medical jobs, even therapists. You know of ARC-AGI right? Well take a good look at the leaderboard, come back in one year, and I guarantee that by then we'll have had another breakthrough or two.

Mentions:#AGI

What a great idea, finance is the perfect stress test for whether any of this AGI talk holds up, and giving the models real constraints makes it even more interesting. Keep sharing updates because a month of live trading data will say a lot. Curious to see which model ends up adapting best.

Mentions:#AGI

It's a pretty grounded take. The AI arms race looks impressive on paper, but once you start doing the math on capex, depreciation cycles, and the actual profit required to justify those builds, it stops looking like a guaranteed slam dunk. If companies are really pushing toward 100 GW of compute, that kind of spend only makes sense if AGI-level breakthroughs arrive fast, and even Krishna is basically saying the odds of that with current tech are tiny. The smarter play is probably exactly what he pointed out: focus on practical, enterprise-grade AI that delivers real productivity instead of chasing moonshot infrastructure bets that may never pay for themselves.

Mentions:#AGI

>but I was thinking more along the lines of government corruption definition. That wasn't what I originally meant, but anything along those lines isn't going to be as obvious, even in hindsight. Despite all that any and all groups are going to come to consensus or not about how AI is used, which is just a question of creativity and collusion, and imo a more immediate threat than AGI.

Mentions:#AGI

I briefly looked at some of its plays and they all are long. That’s not impressive considering the trend we are in. AGI to find short/PUT plays would showcase its capabilities

Mentions:#AGI

Um no… there will be unemployment like we’ve never seen as we move towards AGI

Mentions:#AGI

Oh, and I don't believe LLMs are AI, far less AGI. So not sure why you consider them AI, like they can think. Of course, I'm not a software dev; my definition could be different.

Mentions:#AGI

AI is not there yet; it will be up to those levels in a few more years. I am pretty confident that if we reach AGI, building or replicating an app won't be impossible. Also everyone here is pretending that Microsoft Office is irreplaceable. I get it, this is a stocks sub. Lots of you guys have money on Microsoft and can't fathom that Microsoft can lose its "edge" on Office. Lol.

Mentions:#AGI

Tomorrow morning inverse the news. They fucked up twice on MSFT and META. Headlines will be like “ Google has achieved AGI” then 30 minutes later corrected to “Autistic General Intelligence”

Mentions:#MSFT#AGI

The capex numbers getting thrown around for AI are kinda insane & the hardware depreciation problem is real ...... GPUs age like dog years! I don’t think he’s saying AI is useless more that the current ‘infinite data center’ race isn’t a guaranteed payoff model .... the enterprise-focused, ROI-driven AI use cases prob have a cleaner path than the trillion-dollar AGI arms race. I honestly find it refreshing to hear someone not promising AI will solve world hunger by Q3!

Mentions:#AGI

There are people who believe in AGI more than quantum mechanics.

Mentions:#AGI

I said Sheets just to show how the files can be made interchangeable. Google Sheets doesn't use VBA. Google Sheets uses JavaScript-based coding. That is my way of saying Excel files can eventually be run on JavaScript instead of VBA. My point was: here we are thinking AGI would solve cancer and do so much more than what we can imagine, but somehow you believe AI can't create Microsoft Office.

Mentions:#AGI

Are they going to change the company name again to AI or AGI?

Mentions:#AGI

Moved a little hardware gains into some CRM/MNDY, both have been treated as AI losers in many ways but agentforce #s looked solid and I dont expect seat growth to just die overnight barring imminent AGI

Mentions:#CRM#MNDY#AGI

AGI is sci fi based on the tech that currently exists. Sorry.

Mentions:#AGI

In 2027, the US will deploy 351 times more computing power than China. 26 times better chips. 8 million Nvidia versus 600k Huawei chips. Everything else manufacturing-wise, I got nothing. We're absolutely fcked against China unless we achieve AGI fast.

Mentions:#AGI
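The 351x headline figure is roughly the product of the two ratios the same comment quotes, chip count times per-chip capability:

$$ \frac{8 \times 10^{6}\ \text{chips}}{6 \times 10^{5}\ \text{chips}} \times 26 \approx 13.3 \times 26 \approx 347 $$

which lands close to the quoted 351x; the residual presumably comes from rounding in the underlying estimates.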

>Meta Intelligence

OMG, you win. We both know exactly how this would go. The folks inside Meta would start earnestly prattling about "meta-intelligence," insisting that anything self-referential is basically self-aware and, *ipso facto*, basically AGI. The aim isn't insight; it's to praise Brocephus Zuckerberg's "visionary" rebrand. Because the story they're desperate to bury is the original one: the $100-billion bet that VR was the future of life, the universe, and everything. Hence, the Stalin-grade rewrite. ONLY talk about "Meta Intelligence," never mention what "Meta" was really for! And from there, the script writes itself. Since honesty is a "CLM" or career-limiting move at Meta, only Stalin rules apply: tell dear leader only what he wants to hear. So marketing cranks up the North-Korean hurdy-gurdy: the wide-eyed enthusiasm, the staged delight, the junior marketers weeping with near-ecclesiastical ecstasy: *"MetaIntelligence…sob…so genius!"* Meanwhile, in the actual universe, everyone else remembers the metaverse as a joke. The delta between the expectations set by Zuck et al. and the low-res, legless awfulness was so vast it stained "meta" itself with connotations of "stupid," "ersatz," and "ridiculous." So when the inevitable rebrand to "MetaIntelligence" lands, it will be met with peals of laughter and derision (so many late-night jokes) followed by the usual confusion from Zuck and company as it splats onto their foreheads like an escort's wet fart during coitus. *"But everyone in the company SAID this was brilliant! Was that weeping all for show?"*

Mentions:#AGI#CLM

Tesla is likely to be the first to achieve AGI because Elon is a genius. Once that happens they'll be more valuable than all other companies combined

Mentions:#AGI

AGI is achieved? XD

Mentions:#AGI

For more context… Most of the drive for spending on AI has been motivated by an opinion piece from a few years ago suggesting AI abilities would follow scaling laws. If you're not familiar with that: basically, the notion was that as long as you made them "big" enough, they could do anything. This is why companies were spending trillions of dollars on this stuff. The big problem happened in the past 12 months or so, when more recent ML research showed that **they don't actually follow scaling laws,** and that in many applications we are already at or near the maximum theoretical ability possible. This is why you're not hearing people talk about AGI incessantly anymore. And why hype over agentic AI is fading as well. TL;DR: the technology turned out not to follow scaling laws. This was not expected, and most spending has been made assuming it would.

Mentions:#ML#AGI
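For reference, the "scaling laws" this comment is talking about are usually written as a power law in parameters and data; the Chinchilla-style form (Hoffmann et al., 2022) is the common citation, though the comment itself names no specific paper:

$$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where N is parameter count, D is training tokens, E is an irreducible loss floor, and A, B, alpha, beta are fitted constants. Even on its own terms, the additive floor E means "just make it bigger" has diminishing returns; the comment's claim is the stronger one, that observed ability stopped tracking even this curve.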

Ok it seems like you're using a different definition of AGI, where it needs to replace white collar workers. If that's how you define it then yeah it's not fully AGI yet. Although, you can easily make the argument it's at least very much partially AGI, considering that it already is replacing some white collar jobs. Businesses/industries are reorganizing based on their expectations to replace even more workers. Anywho. My claim was "*in a lot of ways,* AGI is already here." [I stand by that, and so does my pocket AGI.](https://i.imgur.com/5iAeJaH.jpeg)

Mentions:#AGI

I'm not sure you understand what AGI actually is.

Mentions:#AGI

Don't need AGI to replace workers though. Today ANI and LLMs are doing the autonomous work of drivers, warehouse associates, writers, photographers, programmers, analysts, scientists, etc. and this will continue to advance with or without "AGI". The investment will definitely be worth it.

Mentions:#AGI

I am predicting that AGI development will end up in a separate branch of the business, funded and largely controlled by the US government. There isn't enough private liquidity to fund OpenAI to AGI, but it is too strategically important for the US government not to get there first, or at least around the same time China does. AGI will almost certainly become a government- and military-adjacent technology and will be licensed to domestic companies to boost productivity. I would also predict this is how the US government ends up replacing the tax income from replaced jobs: by licensing AGI.

Anyone buying into the IPO at a trillion dollars is going to be very disappointed when OpenAI inevitably fails to find enough private funding to achieve AGI, and whoever does fund it (the only 'thing' capable of funding AGI in the Western world is the US government) is not going to trade it on the stock market, or allow it to remain part of whatever people are buying into at the IPO. Buying into the trillion-dollar IPO is buying a share of ChatGPT, API access income, Codex, Sora. Almost certainly not AGI. Without AGI, OpenAI (with annual revenue of 12 billion) is not worth a trillion dollars, unless you think it has a projected growth of 83x...

The US government will pick its 'chosen one' to develop AGI for them. It will almost certainly be OpenAI. Google is far too large, slow and heavy, and is too influential to be allowed to have the keys to AGI. OpenAI is not any of those things and makes a much better 'partner' for a government-funded effort. As much as Gemini 3 is a great product, OpenAI still has the most powerful underlying model, by a fair distance. They have just gimped it with poor tooling and a rubbish UX. Google has produced a great UX and tools which are actually useful. Their model is not as good, but they actually let it do stuff which people want, so people perceive it as more powerful. Google has a huge consumer ecosystem to integrate Gemini into, so they have a vested interest in building an efficient model with excellent tooling and UX. OpenAI doesn't; it is a research company, and ChatGPT is a public demo which doesn't showcase that much of the model's actual potential. All just speculation, but looking at Sam Altman's decisions, the things he is saying, the direction of the company, it does all line up with him anticipating an offer from the government.

The whole of the Mag 7 together couldn't fund the race to AGI and beat China there. They do not have 2 trillion dollars (maybe closer to 5 trillion) lying around in spare capex. And neither does private equity. The only player big enough to fund AGI in the West is the US government, and AGI is too strategically important for them to fail to get it, so they will make sure it happens.

Mentions:#AGI#API#UX
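
A quick sanity check on the multiple the comment above refers to, using the commenter's own figures ($1T IPO valuation, $12B annual revenue); a rough sketch, not a valuation model:

```python
# Back-of-envelope check of the revenue multiple claimed above.
# Figures are the commenter's assumptions, not audited numbers.
valuation = 1_000_000_000_000   # ~$1 trillion IPO valuation
annual_revenue = 12_000_000_000  # ~$12 billion annual revenue

multiple = valuation / annual_revenue
print(f"Implied revenue multiple: {multiple:.0f}x")  # ~83x
```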

Do you use AI to code? If this is AGI, then the industry is doomed.

Mentions:#AGI

TL;DR: ChatGPT isn't AGI level yet. Thanks for the insight. We really couldn't tell.

Mentions:#AGI

I’m sorry, but that simply isn’t true. LLMs are fancy autocompletes with a serious hallucination problem that is likely fundamentally unsolvable. Yes, they are a cool technology, and yes, they certainly have use cases that add value to human labor. But claiming that they constitute AGI is nonsense. We are nowhere close to being able to plug an LLM into the chair of the average white-collar worker and have it perform the equivalent job. And until that is a reality, there is no justifying the trillion-plus CAPEX spend that has been committed to AI so far, nor the label “AGI”.

Mentions:#AGI#CAPEX

These AI companies are betting that "AGI" will transform the world as we know it and allow them to be profitable. It's not that different from Uber's original plan for profitability: bet on eliminating the driver with AVs. Rideshare alone was never meant to be profitable; look at Lyft today. Unfortunately for Uber, they were a little too early. The only reason they are profitable now is Dara's expansion and diversification of product lines (e.g., Eats, travel, etc.).

Mentions:#AGI

The AI sector is too big to fail and will be bailed out by the US government (look what happened to Intel and MP Materials, etc.). The US government might buy a billion Nvidia shares at some point. I agree that LLMs are a dead end and won't ever lead to "AGI" (whose definition no one agrees on anyway). But when Ilya Sutskever or Yann LeCun (or Demis Hassabis) or whoever creates a proper WORLD model, we'll get real AI. This, of course, won't benefit ordinary people, but it will make the ultrarich very rich indeed. World models, unlike LLMs, actually understand the real world and can even navigate and interact with it. Just imagine a robot army made by Unitree or Boston Dynamics, but equipped with actual intelligence, not just remotely controlled or based on some LLM tech.

Mentions:#MP#AGI

It's about creating the best vibes for investors and having enough infrastructure to cover the costs of powering their various AIs. Google and Microsoft will be fine and won't disappear overnight if the AI bubble bursts, but a lot of other companies will if they aren't bought out. Apple has integrated ChatGPT into its phones; who's to say Apple won't buy OpenAI, or get into a bidding war with Microsoft? The way I see this AI bubble, we have bet on the promise it offers in the future and have exceeded its fair value by entire generations of actual progress. This has made a lot of investors really wealthy because it provided them with a massive amount of overvalued liquidity. AI isn't going to be the next 3D glasses, and nobody benefits from it becoming an obsolete technology (until something better comes along, like quantum-computing-integrated AGI, but the top companies will likely be at the forefront of that). However, its value is based on potential we have yet to reach, which is going to make the markets very interesting as far as pullbacks, corrections, recessions, and expansions are concerned.

Mentions:#AGI

Haha. Seriously though, the current SOTA models are smarter than the average person at many things, can solve novel problems, can generalize, and have beaten various Turing tests... We can keep shifting the goalposts, but we've checked all the AGI boxes from 3 years ago over and over again.

Mentions:#AGI

Is the AGI in the room with us now?

Mentions:#AGI

In a lot of ways, AGI is already here. LLMs could stop making progress on current benchmarks and it would still take a decade for the ramifications to play out. In other words, the intelligence engine could stay as dumb as it is now and we'd still have plenty of work to do before the next iPhone moment, and we could absolutely continue to level up in terms of usefulness without the models getting smarter. I don't see the intelligence plateauing for a while; the bottlenecks will be in how we integrate it. I wouldn't be shocked if Anthropic proves to be a better company than OpenAI by the time everyone's debts are settled.

Mentions:#AGI

> ...leapfrog...

Incorrect. In general, I recommend taking Fast Company with a massive grain of salt, especially when it comes to tech; it's a pretty worthless trash rag. The tests are worth looking at, though, keeping in mind that there are many tests for various tasks, and Opus 4.5 doesn't win the most important ones, e.g.:

> Claude Opus 4.5 pushes Anthropic’s reasoning capabilities forward with extended thinking, more stable chain-of-thought execution, and highly reliable tool use. It excels in tasks requiring multi-step logic, structured decomposition, and precise decision-making across long agent workflows. In official benchmarks, Opus 4.5 shows significant jumps in complex problem-solving and coding reasoning compared to Opus 4.1.

> Gemini 3, however, achieves frontier-level performance in conceptual reasoning through its Deep Think mode and consistently leads on academic-style benchmarks like Humanity’s Last Exam, ARC-AGI-2, and GPQA. It also displays stronger intuition with abstract patterns and high-level conceptual interpretation, especially in science and mathematics.

https://www.glbgpt.com/kr/hub/claude-opus-4-5-vs-gemini-3/

There are lots of other significant differentiators between them, but IMO it's pretty clear that they're the leaders now and that they're diverging in their goals. IMO Google's goals are ultimately the most profitable, and they have the supporting infrastructure and tools to best leverage and integrate their products. That said, you're clearly more informed than your first comment led me to believe, which means I don't think you'll benefit much more from my help. You keep doing you, mate. You'll do just fine.

Mentions:#AGI

I’m glad to see someone finally speaking to this with reference to the basic physical reality of what's happening: every new token may as well reset the brain; the only carryover is the tokens you see on screen. Although AGI doesn't necessitate consciousness (in fact, I would prefer we avoid that). AGI just means it can solve any problem a human can, and we can seemingly get there simply with longer context lengths. Right now it can keep maybe a few thousand lines of code in mind; up that to a few million and I think it will probably be AGI. That's enough context for breaking down and procedurally carrying out long-form tasks.

Mentions:#AGI
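
For scale on the context-length point above, a rough back-of-envelope sketch, assuming ~10 tokens per line of code (a loose ballpark, not a measured figure):

```python
# Rough estimate of the context window needed to hold "a few million
# lines of code", assuming ~10 tokens per line (a loose ballpark).
tokens_per_line = 10
lines_today = 5_000        # "a few thousand lines" per the comment above
lines_target = 2_000_000   # "a few million lines"

print(f"Today:  ~{lines_today * tokens_per_line:,} tokens")   # ~50,000
print(f"Target: ~{lines_target * tokens_per_line:,} tokens")  # ~20,000,000
```

Current frontier context windows are on the order of a million tokens, so "a few million lines" would imply roughly another 20x jump under these assumptions.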

I feel it's disingenuous to call these LLMs AI too, though by the technical definition of AI it's correct. When you and I say AI, what we really mean is AGI. The current iterations almost seem like AGI, but they are just very finely tuned; the issues bleed through if you put them under even the lowest-powered microscope. A lot of my coworkers code using different agents, and I work with them daily to identify and clean up the issues that come from it. We joke a lot about how it CAN be useful in certain situations... and completely shit the bed in others, or sneak in subtle but massive problems if you're not paying attention or are inexperienced. IMO, it's not worth the time sink, but I do acknowledge its potential applications show promise. I don't believe the value is worth the price of admission, or the cost of finding out what baggage comes with it. Are your experiences different?

Mentions:#AGI

My take: we will never have AGI. Computers do only two things at a very basic level: store data and perform calculations. The AI we have now is basically ginormous amounts of data with lots of if-then statements and tons of algorithms running behind it. Now, we can do some real cool shit with it, like generating music, videos, and pictures, as can easily be seen on Instagram in the form of memes. However... it needs existing data and somebody's input to tell it what to do. It cannot generate anything on its own. In the short term, AI will still be a hot topic in the stock market, but long term... who knows.

Mentions:#AGI

methinks in the next year or two, we're going to realize that AGI is at least 5 to 10 years away and the market will begin to focus on robotics as its next growth engine

Mentions:#AGI

The winners are going to be the companies still standing and making money some other way, as people slowly begin to realize this "AI" is basically what 3D glasses were for movie theatres: pushed hard, ultimately a failure, likely to be pushed multiple times... and here we are, with no one talking about 3D movies anymore. We need new algorithms and theories closer to AGI, as well as better hardware, before we get to the point of it being worthwhile. Right now it's good money chasing bad into a deep dark hole of mediocrity, and the limits of it are already visible.

Mentions:#AGI

I don't give a shit about who wins the AI race so long as it is not Sam Altman or Elon Musk. The last thing I want AGI to be is Mecha-Hitler(TM) or low-quality emoji spam bot 9000.

Mentions:#AGI#TM

Shut up boomer we buildin AGI here /s

Mentions:#AGI

He's right. AGI won't be achieved with trillions of dollars of data centres and unimaginable amounts of power when your average brain can reason better on a bowl of Cinnamon Toast Crunch. The efficiency is way off.

Mentions:#AGI

What if they build AGI ? /s

Mentions:#AGI

Since LLMs aren't the avenue to AGI, what is the point of killing your goodwill with the public with these data centers, when there is already a subtle anxiety around AI among working people?

Mentions:#AGI

Yes, models will only get better, but previously they had a moat, an edge over the other companies. They were performing leagues ahead of the competition, and their research output was phenomenal:

- Whisper: one of the best speech-to-text models released, and still commonly used even though it's about four years old already.
- CLIP: groundbreaking work that enabled general image/text retrieval and the multimodal models we see now.
- GPT-3.5: I think the closest at the time was T5, and the difference was huge. Their training procedure of pretraining, then instruction fine-tuning, then RLHF gave us the recipe that made amazing chatbots (rough sketch below).
- DALL-E, o1 "reasoning", Sora, etc.: almost best in class at the time, for all of them.

Scam Altman consistently overpromised with GPT-5, making several statements about it being like AGI, but when they delivered, it was barely better than the competitors. Still remember Bard? It was trash, lol. Google was the laughing stock just two years back. But now Nano Banana Pro, Gemini 3, Veo 3, Genie 3: they're all really damn good.

I do think you're right that the models are at a stage where they're generally good enough to automate several low-stakes tasks, and with a human in the loop they can be quite useful as an assistant. But given how much money has gone into OpenAI, the fallout will be historic if they continue down this path.

Mentions:#CLIP#AGI
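
For readers unfamiliar with the recipe the comment above names (pretraining, then instruction fine-tuning, then RLHF), here is a minimal schematic in Python. The function bodies are placeholders standing in for real training loops, not a working trainer:

```python
# Schematic of the three-stage chatbot recipe described above.
# Each stage's body is a placeholder for an actual training loop.

def pretrain(model, web_corpus):
    # Stage 1: next-token prediction over a huge unlabeled corpus.
    for document in web_corpus:
        model["knowledge"].append(document)
    return model

def instruction_finetune(model, demonstrations):
    # Stage 2: supervised fine-tuning on (prompt, ideal answer) pairs
    # so the model follows instructions instead of just continuing text.
    for prompt, answer in demonstrations:
        model["behaviors"][prompt] = answer
    return model

def rlhf(model, preference_pairs):
    # Stage 3: fit a reward model on human preference pairs (preferred
    # vs. rejected answers), then optimize the policy against it;
    # here we just record the preferences.
    for preferred, rejected in preference_pairs:
        model["preferences"].append((preferred, rejected))
    return model

model = {"knowledge": [], "behaviors": {}, "preferences": []}
model = pretrain(model, ["some web text", "a book chapter"])
model = instruction_finetune(model, [("Say hi", "Hello!")])
model = rlhf(model, [("polite answer", "rude answer")])
print(model["behaviors"])
```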

Except LLMs will not reach AGI. Maybe with next gen architectures like world models (what Yann LeCun left Meta to work on).

Mentions:#AGI

2026 will be when everyone realizes AGI isn't coming; AI is gonna plateau hard next year... they gotta get that bag before it does.

Mentions:#AGI

There are a whole lot of under-65 ACA subscribers who have figured out that their pre-golden years should have a low AGI so as to maximize ACA subsidies. I suppose you consider them to be "takers". How about we have Medicare-For-All paid for by VAT with a portion of that rebated back to everyone as a form of Guaranteed Income? I support that.

Mentions:#ACA#AGI

When the VC music stops, Uncle Sam will be the only one left at the table. Why? Because AGI is the new nuke, and we don't let startups own nukes.

Mentions:#VC#AGI

So many people are burying their heads in the sand on this. Infrastructure costs are going to be enormous and will skyrocket year over year. The kids probably aren't paying the electric bill, because you can see firsthand how much electricity prices are going up, and that will only get worse with more data centers and more electricity usage to power AI. The metals needed to support all this literally can't keep up. Unless AGI is achieved to justify all these costs, these companies are going to have a bad time.

Mentions:#AGI

I basically treat BRK.B as my piggy bank: when I need money, I sell; when I have extra cash and no particular individual stock I want to get into or add to, I buy. This is especially true in a non-IRA account, since BRK.B doesn't throw off dividends, giving me precise control over my AGI.

Mentions:#AGI
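
A toy illustration of the AGI control described above, with hypothetical numbers: when you sell shares, only the realized gain (proceeds minus cost basis) counts toward AGI, whereas dividends count in full:

```python
# Toy illustration of controlling AGI (adjusted gross income) with a
# non-dividend stock. Numbers are hypothetical, chosen for clarity.
cash_needed = 10_000
cost_basis_fraction = 0.60   # assume 60% of the sale proceeds is basis

# Selling shares: only the gain portion is income.
realized_gain = cash_needed * (1 - cost_basis_fraction)

# The same $10,000 received as dividends would all be income.
dividend_income = cash_needed

print(f"AGI impact if sold:      ${realized_gain:,.0f}")    # $4,000
print(f"AGI impact if dividends: ${dividend_income:,.0f}")  # $10,000
```

That gap is the "precise control": you decide when, and how much of, the gain gets realized.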

I'm not even being sarcastic when I say that it doesn't have to come anywhere close to AGI. 95% of the tasks it threatens to replace don't need anywhere near AGI-level capability, and the ones that could use it aren't even being performed at the competence level of current-gen LLMs, yet they still get done.

Mentions:#AGI

And this part of the article just means it’s a big old bubble: “Krishna clarified that he wasn't convinced that the current set of technologies would get us to AGI, a yet to be reached technological breakthrough generally agreed to be when AI is capable of completing complex tasks better than humans. He pegged the chances of achieving it without a further technological breakthrough at 0-1%.”

Mentions:#AGI

FFS, you cannot develop an LLM into AGI. Different tech. You are commenting on something you clearly do not know enough about. My comment went over your head, because you are a Luddite who doesn’t even understand what they are afraid of. Take off your tin foil hat.

Mentions:#AGI

Because in the absence of a consumer, what work is the AGI even doing? You still need to sell something to someone to make money. I think it will be the oligarchs that end up advocating for UBI so that they have a consumer class.

Mentions:#AGI

It would crash the market. But the probability that everyone just decides not to spend and consume is very low. What's the chance nobody buys the next iPhone for no reason? Probably less than the chance of WW3, an asteroid impact, or Skynet-style AGI killing everyone. You have bigger things to worry about.

Mentions:#WW#AGI

It already has all of Wikipedia and books; it's just not conscious or AGI by any means. I was predicting that this year someone would declare AGI (arbitrarily, and without proof), but it seems like there hasn't been any huge breakthrough that can be marketed that way this year. I think we are getting to the limits of chat-LLM and image/video-slop novelty, and we may need a new modality or framework to juice things again. The problem is that we don't have enough foundational, labeled input data to build anything but a phenomenal bullshit artist. IMO, Nano Banana will be more easily monetizable than chatbots and will buoy AI job disruption for a while. Goodbye, creative marketing and modeling jobs; the next set of trendy clothing brands won't need them. Social media influencers will also very quickly be impacted.

Mentions:#AGI

Well, you are objectively wrong then. No one in the field thinks the transformer model, which all current LLMs use, is human-level intelligent or capable of achieving AGI.

Mentions:#AGI

There is another model under which they could: stay non-profit and pay their employees incredible salaries. They create a non-profit like CNCF, where hyperscalers fund them and then race to provide the services on their own clouds. Given the initial talent moat and the ideal of AGI, they would have attracted and retained talent while hyperscalers (excluding Google) paid them handsomely to research and "lease" their models. But Sam Hypeman wanted to be much more (than his ego and talent could demonstrate).

Mentions:#AGI