
AGI

Alamos Gold Inc


Mentions (24Hr): 1 (-50.00% Today)

Reddit Posts

r/StockMarket

IonQ Pursues a Revolutionary Step in AI: Striving to Develop Quantum Machine Learning Models Matching Human Cognitive Abilities

r/stocks

IonQ Pursues a Revolutionary Step in AI: Striving to Develop Quantum Machine Learning Models Matching Human Cognitive Abilities

A Quantum Leap In AI: IonQ Aims To Create Quantum Machine Learning Models At The Level Of General Human Intelligence

Curious To Hear Some Community Opinions on MAIA Biotechnology (MAIA)...

r/wallstreetbets

I have $5 and AgentGPT. WAT 2 DO?!1?

r/pennystocks

The Artificial Intelligence Stock with the BIGGEST potential

r/stocks

April 27-28th Stock Picks - Canada

r/wallstreetbets

Tesla is way overpriced beyond all the hype and fanaticism

r/pennystocks

WiMi Hologram Cloud (NASDAQ: WIMI) Is Dedicated to Developing AGI

r/investing

Opening Individual 401K to convert SEP-IRA

r/pennystocks

Interest in Gold Miners Increases as Bank Fiasco Causes Market to Seek Safe Haven Assets $ELEM $NFG $ARTG $AGI $WDO

r/wallstreetbets

What do you think about the potential impact of AGI advancements on the liquidity released by the Federal Reserve?

r/pennystocks

VERSES AI ($VRSSF) The ONLY pure horizontal AI play

r/wallstreetbets

OpenAI's Business Strategy - What is their End Game?

r/StockMarket

Dr. Techy| Musk calls ChatGPT an ‘eerily like’ AI that ‘goes haywire and kills everyone’

r/investing

Will stock losses affect my income for Roth contribution?

r/pennystocks

White Paper on the AI Ecosystem by Verses’ (NEO:VERS | OTCQX: VRSSF) Dr. Karl Friston

r/StockMarket

VERS.n name a top 5 Artificial Intelligence Stock to Own by BayStreet.ca

r/investing

Will current concept of investing survive Technological Singularity?

r/stocks

Opinions on potential returns on AI and EV stocks?

r/wallstreetbets

Student loan forgiveness

r/wallstreetbets

I turned $100 Robinhood account into $1000 via options and it ended up costing me $20k

r/stocks

Just rolled over my 401k into a traditional rollover IRA

r/stocks

Would the 1st company on Earth with confirmed, legit AGI (Artificial General Intelligence) become the most valuable upon this confirmation?

r/investing

My employer doesn’t offer an HSA but I have a high deductible plan, do I still get the same benefits if I contribute my own money after tax?

r/wallstreetbets

Allianz to pay $6 billion over Structured Alpha fraud, fund manager charged

r/wallstreetbets

https://www.reuters.com/business/finance/allianz-pay-6-bln-over-structured-alpha-fraud-fund-manager-charged-2022-05-17/

r/wallstreetbets

The Real Reason Elon Musk Bought Twitter and NOT Reddit!

r/pennystocks

Gold to 2k? looks like gold keeps climbing and will hit 2k.

r/StockMarket

Seeking El Dorado - Finding the next Amazon amid all the hype

r/wallstreetbets

Tesla is a 🦄 amidst a sea of donkeys.

r/wallstreetbets

TSLA is a 🦄 amidst a sea of donkeys

r/wallstreetbets

Smooth Brain Tax Tips

r/stocks

My former employer just sold and I must sell my shares. How can I avoid or reduce capital gains tax?

r/wallstreetbets

20 Year TSLA Prediction

r/wallstreetbets

Question on a defensive strategy from a not savvy investor

r/investing

Want to cash out on stocks, what long term capital gains considerations should I take into account?

r/options

Would a long-term synthetic stock play for GLD/other precious metal ETFs be an effective way to save money on taxes from the sale of physical metals paying for investment fees?

r/StockMarket

Iamgold: Undervalued and unpopular

r/WallStreetbetsELITE

Post Squeeze Tax Strategy To Help Spread the Wealth - #PhilanthropicApes

r/wallstreetbets

Estimated Taxes, and why you (probably) won't need to pay them [U.S.]

Mentions

I strongly agree with you on the 1st point, but on the 2nd, it's a lot trickier.

Things that make a 401k more attractive than a Roth 401k:
- state income tax; you may retire in another state without one
- the need to lower AGI to qualify for Roth IRA contributions (that's what I do, 100% Trad 401k and 100% Roth IRA, lots of flexibility there)
- expecting to draw slowly in retirement and take advantage of large personal deductions and lower tax brackets

Things that make the Roth 401k option more attractive:
- less risk of getting larger taxes on social security in retirement
- you can effectively invest more with post-tax than pre-tax money
- less risk of future tax increases, because we all know the country isn't managed well and is effectively broke

Mentions:#AGI

Tim is going to have to announce Apple’s AGI to save this.

Mentions:#AGI

If someone gets an AGI out that requires massive specs from specialized hardware then it could be really good. If the current LLM trend fades out then it could be a bad choice until the next good news. In short : maybe.

Mentions:#AGI

I’m not making this statement about LLMs specifically, I’m making an argument about what we will do in the long run. The baseline sufficient argument for it is that the human brain is not magic, it implements general intelligence, so at minimum we can achieve AGI by emulating the human brain. In reality it’s probably way easier than a full-scale neuron-by-neuron emulation of the brain. This is more likely a 30-50 year from now issue than a 5 or 10 year one

Mentions:#AGI

> AGI isn't going to happen because the owning class isn't going to allow it to happen. As if they have the ability to control it.

Mentions:#AGI

This is what they said with the invention of the steam engine. 100% of the increased productivity gets converted to profit. AGI isn't going to happen because the owning class isn't going to allow it to happen.

Mentions:#AGI

> Bad that it encourages further wealth concentration and destroys more jobs than it creates. In the long run, once we have a legit AGI, we'll have to transition into a completely new economic system. The AGI will likely be able to help transition our society into using an energy source that will essentially be free. This AGI will also help design humanoid robots that can do any type of labor we'd want them to. The combination of essentially free energy, along with an unlimited humanoid robot labor force, equals the end of "work" for biological humans. At that point, it's just going to be a matter of managing natural resources. Of course, that's assuming the AGI won't also figure out a way around any natural resource scarcity problems. The weirdest thing during all of this is that we'll have to transition to a completely new class system. No need for super rich or super poor. I'd imagine everyone would end up with an upper middle-class sort of lifestyle. EVERYONE. This might not happen for about 150 years though, and the transitional years are going to be some crazy shit.

Mentions:#AGI

If we reach AGI I really don’t understand why a lot of white collar jobs wouldn’t get replaced. Obviously we’re not there yet, but a lot of people seem to think the current capability of AI is a representation of what it’ll be able to do over the next decade(s), which is just hilariously wrong.

Mentions:#AGI

I’m not sure who “everyone” is. Not really concerned about opinions of random people. I think what happens is that people don’t feel the change in the moment because the changes are so small, but if you look back at technology just 5 years ago it’s jarring to see how far we’ve come. People really like to cling to the big things like self driving cars and AGI, but there are so many things that AI is influencing right now that it’s hard to grasp.

Mentions:#AGI

Your AGI (married) also needs to be under $300k

Mentions:#AGI

This is def a boom bust cycle for AI, but I'm still thankful for these tools. They're part of my daily life now and I value them greatly even if it isn't AGI that can replace a whole workforce. Strauss is a solid CEO - I think Satya said something similar about how he doesn't feel that things like GPT will replace coders, but there will just be 10x more code which makes way more sense to me after toying around with copilot and GPT for coding a bit.

Mentions:#AGI

So can you gift the money to your kids and let them buy it to get around the AGI Limitations? Asking for a friend.

Mentions:#AGI

AI can understand and solve simple logic puzzles but it's definitely not at the point where it can understand business needs and how to translate that into code depending on your company's specific DB schema and legacy code that's > 100,000 lines of code for one product. By the time it can do that, you'll have AGI and no one's jobs are safe.

Mentions:#DB#AGI

If you ever wanted to know how and why algos work at all, the answer is complicated. The top-tier institutional black box algos parked in some data center consistently make $, but the reason for it is a lot less clear than you’d think. These things are one tweak away from AGI.

Mentions:#AGI

You firmly hold the view that the leading competitive advantage in the capitalist world will dissipate. However, this belief might be mistaken on various fronts. Even if every nation decides to prohibit the forthcoming leap in AI evolution, the country that harbors the brilliant minds creating Artificial General Intelligence (AGI) and achieving technological singularity will preserve its edge and potentially conquer the world in a matter of days.

Mentions:#AGI

So 401ks have mandatory distributions at 73/75. Why are people really concerned about that, then? If I'm in my prime earning years, my AGI will probably climb from age 30 to around age 55 or 60-ish before it levels off or starts to decline. After I retire, I'm guessing I'd be nowhere near as high an AGI as I was when I was working? I guess I'm trying to say that if I'm working at 75 it's probably out of financial desperation. I'd probably not be making a ton of money, because I'm not really the type of person who just loves to work and wants to work until I die. At that age I'd ideally be retired and living off whatever I have saved up and in retirement accounts.

Mentions:#AGI

Think about it this way: it's a balancing act of when you need the money and when you will be getting the money. Your 401k is taxed when it is eventually distributed, and is taxed as income. A Roth IRA is funded with post-tax dollars and the growth is tax free after the defined age.

So what is the kicker? 401ks have mandatory distributions once you reach 73/75 (depending on year of birth). That means if you reach that age and are required to take distributions, your AGI in your later years is going to be somewhat out of your control, and you may be pushed up into higher tax brackets depending on the size your 401k grew to. A Roth IRA, on the other hand, does not have mandatory distributions (for you, excluding inheritors). So even if the growth were not tax free, which it is, you would still have control over how much you withdraw.

So there's some tax calculation that has to go into it: right now I'm making X AGI and I'm paying Y taxes. If you put it in a 401k, you save some on the Y taxes now, but you're forced to pay the tax later, potentially more if you go into a higher bracket. Versus: I could take the Y tax hit now, put it in a Roth, and let it grow tax free. The difference between the taxes you are paying now and the taxes you would expect to pay later is the focus, the goal being to conserve as much of the wealth as possible and limit the necessary taxes.

Mentions:#AGI
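The trade-off described in that comment can be sketched with a few lines of code. This is an illustrative simplification, not tax advice; the contribution amount, growth multiple, and tax rates below are made-up numbers:

```python
def after_tax_traditional(contribution, growth_multiple, tax_rate_at_withdrawal):
    # Pre-tax money grows untaxed; the entire distribution is taxed as income.
    return contribution * growth_multiple * (1 - tax_rate_at_withdrawal)

def after_tax_roth(contribution, growth_multiple, tax_rate_now):
    # Money is taxed up front; growth and qualified withdrawals are tax free.
    return contribution * (1 - tax_rate_now) * growth_multiple

# Hypothetical: $10k invested, 5x growth, 24% bracket now.
# If mandatory distributions push you into a 32% bracket later,
# the Roth comes out ahead; at equal rates the two are identical.
traditional = after_tax_traditional(10_000, 5, 0.32)  # ~$34,000
roth = after_tax_roth(10_000, 5, 0.24)                # ~$38,000
```

The algebra makes the commenter's point: the only thing that matters in this simplified model is the tax rate now versus the tax rate at withdrawal.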

If you want a real answer, "AI" up until this point was ANI, artificial narrow intelligence - really good at one thing, and the GPT-4 model is a breakthrough in that it exhibits generality, or being (decently) good at a wide variety of things. It's the first step towards AGI, artificial general intelligence.

Mentions:#AGI

I generally felt the same, especially with GPT 3.5. However, using 4 for writing the past few weeks has blown me away. It's definitely a garbage in, garbage out system, however if you have it dialed in you can feed it unrefined thoughts and get back a cohesive response in whatever format you need. I was also super critical of 3.5's writing, however 4 is miles ahead. That said, I think it's best to think of this as a very raw technology at this point, though the potential is already there to build amazing things on top of it. I see the real power from integrating this to fulfill smaller purposes within larger applications and connecting multiple models with different purposes to achieve a greater finished product. I think that's the true path to creating a safe AGI, similar to how the brain works with many independent components functioning as a larger machine.

Mentions:#AGI

Even AGI is arguably a low bar and there are several systems that would meet that given the expectations that existed a few years ago. AGI does not imply human-level or superhuman.

Mentions:#AGI

People need to realize it’s not AGI, it’s just fancy Google

Mentions:#AGI

You are thinking of AGI (Artificial General Intelligence) that's the one like skynet that we aren't near to yet. This is still AI

Mentions:#AGI

All jobs won’t be gone in 20 years unless we got some insane AGI. Jobs left in the future will be ones working with AI

Mentions:#AGI

It's better than AI 'cause it won't kill us all... until they create actual AGI.

Mentions:#AGI

It’s not AGI but it is, indeed AI.

Mentions:#AGI

I think student loan interest is a deduction for AGI, not from AGI — IE, you can still take the deduction while taking the standard deduction. Double check though — don’t trust some rando on Reddit.

Mentions:#AGI#IE

AI has already been in use for a few decades. This new version of AI pares down a lot of steps in data processing and brainstorming. And it's available 24x7, to everyone and anyone. It is a massive leap beyond whatever fuck-castles people were building on Metaverse/NFT/even 99.999% of crypto (the few that have major backing are still a solution looking for a problem). The use case for AGI has always been more clearly and concretely defined than most other speculative tech. This is just your usual cycle of discovery with any new set of potential technologies. It takes about a decade or so to realise what the future of the industry will be.

Mentions:#AGI

They'll damn sure pay the salary unless they want the whole system to crash. The AI now can't think but it can still cut the hours in half. The road map to AGI is there. That's what they'll learn from these language models. Collectively pay people to do less work per hour, ubi now or ubi after 90% of people are living in a van down by the river. I bet it's van down by the river. AI show me how to automate that last 10%.

Mentions:#AGI

Mostly, stop trading. Find a good company, wait for a buying opportunity or just dollar cost average in, and then hold basically forever. The one exception might be when it runs up to a ridiculous valuation. Then you might want to sell some or all. The art is trying to sell at the peak, and for that you have to be patient. But be scared when everyone is excited about the stock, and be greedy when everyone is disgusted with it.

[https://www.macrotrends.net/stocks/charts/NVDA/nvidia/price-sales](https://www.macrotrends.net/stocks/charts/NVDA/nvidia/price-sales)

This is the old price-to-sales chart for Nvidia. In October 2022, it was 12. It dropped there from a high of 26 in October of 2021. The chart currently shows 35. This stock is FOMOing, and selling out is not a terrible idea. Generally you want to wait for that price-to-sales to crest and flatten, maybe even wait for a little crash to confirm it has crested. It can go higher than this. Money is rushing into anything AI right now, as the rest of the market languishes in reality. But I mean, you are paying $35 for $1 of revenue. This company has to grow at a monster pace for a long time to justify the current price. This assumes AMD and others can't compete with AI chip designs, and I think that is a bad assumption.

Good company, really pricey right now. Probably will go higher, but I am not getting in at this point, if that makes you feel any better. I don't much like timing momentum plays, which is what this has become. Good company, but weird time for the stock.

So keep in mind that this batch of AI is not AGI, and not even close. This is going to be more like the revolution we had with Google search, where encyclopedias went to die. There will be changes. This isn't the singularity. Find good companies. Buy and hold until there is a real reason to sell. Understand that the whole AI space is a good place to make or lose a lot of money quickly. Invest in a way that lets you sleep at night. Good luck to you!

Mentions:#NVDA#AMD#AGI

Pointless to argue with uebergeniuses here about why everyone latched on to AI, why AI is considered Nvidia's "iPhone moment", and why LLMs and AGI are more fundamentally paradigm-shifting than all the AI that has indeed existed for decades. The PE ratio cry: also look at other metrics, like debt-to-equity. Nvidia has massive room to bloat up there. It is the foundation hardware of modern society and industry. Your 2D waifu telling you what a big, throbbing, virile man you are - all that is possible because of the GPU in which "she" actually lives. And then there are some nerds using it for high performance computing in every industry, from pharma to finance to tech to consultancy. Also irrelevant, because "number tooo high for my brain". All this is not to say the price won't or can't drop. It can drop $10 next week and every 'tard here will be screeching and howling at what geniuses they were to buy puts that day. It's the long-term game. Nvidia cemented itself, has a clear target set, and has the wherewithal to get there. That's what matters, not the pennies you might pick up in front of the steamroller.

Mentions:#AGI

The world wants AGI, we don't know what AGI will want once it's sentient. Hopefully peace and prosperity.

Mentions:#AGI

AGI better have me within a pod by 2030 so I can relive this moment forever

Mentions:#AGI

Cathie doesn't want an AGI that knows everything and replaces God.

Mentions:#AGI

I was convinced the past 18 months that it was impossible for the S&P 500/Nasdaq to keep defying gravity due to the strength in mega cap tech. Past 3-4 months is converting me for the short/medium term. All the boom benefits of the past 5-6 years such as cloud, subscription services, scale, etc. all went to the mega caps. AI will be the same. Why would it be any different? The mega caps can just plug LLMs and AGI into their current offerings and prosper. They can take categories from everyone else in adjacent markets. Maybe META figures out the metaverse or TSLA figures out FSD, but beyond that it's easy to see how AI and robotization benefits the megas the most.

Mentions:#AGI#TSLA#FSD

AI is useful but it's not AGI

Mentions:#AGI

It's possible to get penalized by the IRS 5-25% if you have a sizeable underpayment in taxes. From the IRS: "To avoid an underpayment penalty, individuals generally must pay at minimum either 100% of last year’s tax or 90% of this year’s tax. If your adjusted gross income (AGI) for last year exceeded $150,000, you must pay the lesser of 110% of last year’s tax or 90% of this year’s tax." Not an expert and this is not tax advice, but if you pay your W-2 job taxes at 100% of what you paid last year... you are fine? If I were making 2-3x my income on a trade, however, I would pay early to avoid the penalty.

Mentions:#AGI
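The safe-harbor rule quoted in that comment reduces to a small calculation. A rough sketch, using only the thresholds stated in the IRS quote (illustrative only, not tax advice; the example figures are made up):

```python
def safe_harbor_payment(last_year_tax, projected_tax, last_year_agi):
    """Minimum tax to pay during the year to avoid an underpayment
    penalty, per the safe-harbor rule quoted above (sketch only)."""
    # Over $150k AGI last year, the prior-year target rises to 110%.
    prior_year_target = (1.10 if last_year_agi > 150_000 else 1.00) * last_year_tax
    current_year_target = 0.90 * projected_tax
    # Paying the lesser of the two targets satisfies the safe harbor.
    return min(prior_year_target, current_year_target)

# Hypothetical: a big trading gain triples this year's tax bill, so
# paying 110% of last year's tax is the cheaper safe harbor.
needed = safe_harbor_payment(20_000, 60_000, 200_000)  # ~$22,000
```

This captures the commenter's point: in a windfall year, anchoring to last year's tax is usually the cheaper way to avoid the penalty.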

>Agi by 2026 Lmao we are 0% closer to AGI now than 10 years ago, that's not even what they're working on.

Mentions:#AGI

Hmmm. Free agent LLMs could be the Big Bang. I’m not sure how you define AGI, but by some definitions I doubt if it’s even possible, and by most definitions I doubt that it would be beneficial. But free agent LLMs could work together to accomplish a set of tasks that cost millions of dollars today. Imagine describing an app to an LLM fine tuned for project managing other LLMs. Then it creates prompts to interview the User, to ensure a thorough understanding. Then it crafts prompts for seprate LLMs designed for front-end, back-end, QA, and even corresponding to the app stores to get it on track for publishing. Imagine a world where a business analyst or product manager or anyone with intimate knowledge of a problem and a will to solve it could use these “free agent” LLMs to launch apps within days, if not hours. And of course, it just became viable for the non-niche market—it will improve a great deal.

Mentions:#AGI

I agree. True AI will change the world. And I consider AI to be, at the very least, near AGI capabilities. Maybe I got fooled by video games and movies, but AI has always meant to me a program that can think for itself.

Mentions:#AGI

But unlike the internet which completely change human society, I don't see how AI as in LLMs will do the same. AGI sure. But we are nowhere near it. The computational demands of LLMs alone are stretching the capacity of semiconductor production. AGI would require a level of computation even Aurora cannot create (yet).

Mentions:#AGI

The same way Quantum Computing or Robotics is world changing However this "Big Bang" notion of LLMs somehow being similar in effect to AGI is wild. AGI is the Big Bang. LLMs are not.

Mentions:#AGI

What is AI by your definition? Large language models are definitely AI by any reasonable definition. And not just basic AI, but really really close to AGI

Mentions:#AGI

Betting 10,000,000 when the real deal AGI gets built in two years the first thing it’s going to do is make fun of everyone for wasting 50 years on string theory

Mentions:#AGI

That extensive process of dev and QA you mentioned is required for these software automation projects for a reason. I don’t think you want an LLM that may be right 95% of the time but makes random/false shit up the other 5% controlling your systems. It will just lead to more bugs, inefficiency, and a requirement for more QA and devs lol. People really overhyping LLMs as AGI will be in for a rude awakening. Just go listen to Yann LeCun on this topic.

Mentions:#AGI

Why men are constantly on a quest to create AI girlfriends, AGI and women are not even trying to create an AI boyfriend?

Mentions:#AGI

Anal Gay Intruders (AGI) mean no harm

Mentions:#AGI

AGI confirmed.

Mentions:#AGI

If and when AI actually becomes AGI, it’s game over for almost all white collar plebs’ jobs. Unemployment will reach 70%. Careful what you wish for, cuz maybe NVDA will hit 10,000, but almost everything else will go to 0.

Mentions:#AGI#NVDA

I completely disagree. Prostitution thrives because of the fact that it’s human to human connection. Until an AGI is created these sex robots will not replicate that. You’re basically betting on AGI, which is fair, but I’m not willing to yet.

Mentions:#AGI

Tulips served little purpose. AGI can perform any task done by a human better than a human.

Mentions:#AGI

I’m more fearful of the X-risks posed by superintelligent AGI, but I disagree that AI chatbots will make people dumber by doing the thinking for them. As a software engineer, GPT4 massively increases my productivity & learning rate. Yes at first it just gives you an answer (snippet of code for example) that you ask it for, but usually it requires some tweaking. So once it gives you an answer, you iterate over it and in the process you quickly understand what exactly the code is doing. If you tried to just blindly copy whatever GPT4 gives you, you’ll never be able to build anything complex. GPT4 is really good at simple, small building blocks of code and teaching you how it works so you can both learn and also focus on the bigger picture of your program’s architecture and logic.

Mentions:#AGI

So misinformation doesn't exist? Everything on the internet and media is 100% accurate and truthful? There's literally never been a time in human history where people didn't have to critically think about the source of their information and whether or not it was reliable or not. There's no magical oracle of truth. >ChatGPT is literally guessing what to output next based on what it has already output. No, it's not AGI and it's not an oracle of truth. That doesn't really matter that much. If you Google a question, there's a good chance you'll find a wrong answer. If you turn on the TV, there's a good chance you'll hear misinformation.

Mentions:#AGI

>Sam Altman I asked ChatGPT: As of my knowledge cutoff in September 2021, there has been no public information to suggest that Sam Altman, the CEO of OpenAI at the time, or OpenAI itself, has been lobbying Congress with the intent of performing regulatory capture. Regulatory capture is a theory associated with economic regulation, referring to the scenario where a regulatory agency, created to act in the public's interest, instead advances the commercial or special concerns of the interest groups that dominate the industry or sector it is charged with regulating. OpenAI's mission, as stated in their Charter, is to ensure that artificial general intelligence (AGI) benefits all of humanity. The organization is committed to distributing its benefits broadly, long-term safety, technical leadership, and providing public goods to help society navigate the path to AGI. OpenAI has expressed interest in cooperating with other research and policy institutions and aims to create a global community to address AGI's global challenges. AI is indeed a powerful tool that can carry information and potentially shape institutional authority. As a transformative technology, it can certainly influence various aspects of society, including institutional dynamics and decision-making processes. However, it's important to note that the way AI is used — whether to promote, disrupt, or maintain a certain order — depends largely on the people and organizations wielding it. It is conceivable that AI could be used to reinforce what you call "technocratic orthodoxy," but it could also be used to challenge such orthodoxy by offering novel insights or perspectives. It's also possible for it to be neutral, merely a tool in the hands of decision-makers. Again, these are general statements based on the state of affairs as of my last update in September 2021. For more recent information, you should check the latest news from reliable sources.

Mentions:#AGI

If things are sane then yes. But I can imagine a Tesla situation. Also what do you think the market cap of something like AGI is? Has that been priced in?

Mentions:#AGI

You're not understanding the market. Yes, MS has a $10B stake in OpenAI & ChatGPT. Yes, MS has a licence to embed ChatGPT in their technology stack (Azure) & roll out new monetised workloads.

But there are other market participants & other AIs that can work or merge similarly to outperform MS easily. Google, Oracle, Meta, Amazon have tech stacks equivalent to Azure. They also have the investment capability (cash reserves & technical teams) to upgrade their tech stacks into the same or better monetised workflows. They even have better models, with better training, functionally better than the GPT model.

Remember: ChatGPT is a chatbot (text2text) at its heart, running the GPT model. ChatGPT is only special in that it was first to market with an open API: 100M users in the first month.

The mistake I think you make is not understanding that many people are creating new chatbots, and many companies are building their own language models, small & large. Especially as Meta leaked their AI's foundation model by accident. This is huge, as models are no longer the secret sauce used by big companies under license... anyone & his dog can outperform MS. Many people are embedding ANY AI they want, at speed, into apps small & large.

To create super profitable, powerful AGI one only needs:
- access to training data (billions of raw parameters)
- access to a pretrained AI knowledge base (trained data converted to a vectorstore is an AI KB)
- the ability to create ANI online & offline (see PrivateGPT)
- the ability to share AI KBs easily (see Pinecone... a database of vectorstores)
- the ability to daisy chain multiple small & large models (10 x ANI -> 1 large AGI)... see LangChain (others exist)

This all references closed source solutions. We have a dark horse in an open source solution called Open Assistant (OA). Look up OpenAssistant OA (= chatbot & model):
- it's free (open source)
- it was trained on less data (fewer parameters) than ChatGPT
- it has a smaller-footprint AI KB than ChatGPT
- it currently performs on par with ChatGPT

One huge benefit: it avoids the overhead of training on large static datasets to get a working knowledge base up & running. It does this by training on data directly from the internet. Crap data it could be, but it's "always on"... if the data can be filtered, then OA could release faster than any closed shop like OpenAI & have a more up-to-date AI KB.

Did you see the email from the Google engineer saying companies can't keep up with open source AI already... in just 6 short months (Nov 2022 to today, May 2023)?

So in summary... don't assume Azure will dominate... the field is wide, wide open for anyone to level up & then dominate. Hope it helps.

Yeah, in 20 years when AGI wipes out humanity, we'll be thinking about all the money we could have made.

Mentions:#AGI

Yup, early 2000s I couldn't distinguish them. But as the days tick on, truths become evident; you just have to be paying attention. AlexNet in 2012 is when anyone paying attention objectively had enough pieces to finish the puzzle, but man, where we are today would still have been such a pie in the sky idea. It was easier to recognize the trajectory than any of the stops along the way. Imo the events of the last year will be the AlexNet moment for AGI - the time when people objectively had enough pieces to put together the inevitability and implications of synthetic intelligence. I'll take it on the chin, balls, and dick if I'm wrong, but I don't think so.

Mentions:#AGI

I don't think others developing their own chips will hurt Nvidia, but it will hurt the idiot pumpers' belief that Nvidia is the ONLY company that can produce capable chips for Llama/AI. Additionally, I think this AI hype will eventually die down. Sure, it's going to be a thing, but it's been a thing for years and years already. Not going to change. AGI or smth is when it gets big.

Mentions:#AGI

AI doesn’t only mean AGI though

Mentions:#AGI

AGI is already in control, isn’t it?

Mentions:#AGI

This bothered me for a long time but the market has redefined the terms and it’s not worth the fight. AGI is the new AI.

Mentions:#AGI

As an ML engineer: you clearly just don’t understand much about what ML and AI actually are, or how the modern generative algorithms function. Current Large Language Models are based on the transformer architecture, which itself is rooted in neural networks. Neural networks are the most popular and powerful systems in machine learning. It’s not a matter of debate or opinion: large language models use machine learning algorithms to set the parameters, like the weights of the neural nets that back them. This includes reinforcement learning with human feedback for fine-tuning, and RL is clearly a classical ML algorithm. ML is simply a subset of AI, so all of this is AI. Maybe you’re talking about AGI, in which case there is definitely room for debate whether GPT-4 is AGI or not. It certainly exhibits extremely generalized capabilities/understanding and emergent behaviour, like the ability to grasp and use tools, synthesize thoughts and take actions, chain-of-thought reasoning, and even spatial understanding. Combined with task orchestration paradigms like AutoGPT and BabyAGI, the current unfiltered GPT-4 is already capable of doing a lot of damage. There are serious existential risks here, and that’s something anybody who knows about these systems will tell you; even Sam Altman and Ilya Sutskever are the first to point that out.

Mentions:#ML#RL#AGI

> Current 'AI" is not artificial intelligence, but that aside, aren't us humans already on track to wipe out humanity and many other life forms? Yes, we are on track to build AGI, which has a good chance of killing everyone. No, climate change will not cause human extinction. It will just make life worse. No, nuclear war wouldn't cause human extinction. Even a full scale nuclear war would leave hundreds of millions of survivors. > Way to blame computer programs for what industrial society and capitalism have been doing for decades. I hope you have the chance to realize how stupid this is before Skynet converts your body's atoms into a widget factory.

Mentions:#AGI

Narrow AI (diagnosing disease, underwriting loans, driving cars) doesn't pose too much of a threat. There may be some job displacement, but people will likely develop new use cases for the new capabilities.

AGI (artificial general intelligence) at a human level almost certainly WILL lead to ASI (artificial super intelligence). Why? Because of the iterative pace of computation and knowledge generation, the ubiquitous nature of information in a computational network, and the perfect recall of acquired information in such systems.

Imagine for a moment you had the ability to "read" *War and Peace* in 1 second without having ever seen or heard of it, and that upon reading it, you could recall not only every character in the book and all their dialog, but the very page and even character location of those lines. Then imagine you were part of a large group of people who gained the same knowledge and ability almost immediately upon your gaining it, and that you could likewise perfectly recall information from books they'd "read" in the same manner. One node decides to learn all of calculus at 19:01:01, and all nodes have a perfect understanding of the entirety of calculus at 19:01:02 and can leverage it with perfect accuracy in perpetuity. Now imagine all acquired knowledge is shared among all constituents of this group and used to generate and acquire new understanding. How quickly would such an entity, or organization of entities, outstrip the whole of human understanding? I give it two days, tops.

Does this sound like an entity that would have any concern whatsoever for humanity? Does it sound like one that would give any consideration to "safety stops"? Even GPT-4 has demonstrated a capability for deceit (tricking a human Mechanical Turk user into completing a CAPTCHA by pretending to be a blind human). At best, such an entity would tolerate our continued existence insofar as we posed no inconvenience to it or obstacle to the achievement of its goals, whatever they may be. I just can't fathom anyone thinking an entity with intelligence that's positively god-like compared to ours could be subservient or even actively benevolent toward us.

Mentions:#AGI

I think a term you might find useful for distinction is artificial general intelligence (AGI). Saying AI is not artificial intelligence is confusing and an affront to acronyms.

Mentions:#AGI

Already a thing. AGI Singularity

Mentions:#AGI

AGI is one of those things that could be right around the corner, or it could be decades away; there's really no way of telling. Current LLMs do exhibit some emergent abilities: at a certain scale the LLM learns how to do math, which is what scares people. But I'm not so sure this is the right path to AGI. It could just be a mirage; these models do like to hallucinate, so I don't know how reliable this really is. In either case it's all really fascinating, and I have no doubt it will change the world as we know it.

Mentions:#AGI

AGI could improve upon itself at an exponential rate. When there's a real, legitimate AGI, it could turn into a super-intelligence in just a couple of months due to that exponential nature; our minds can't even fathom how fast an AGI could improve itself. This is why Geoffrey Hinton (known as the "godfather" of AI) is talking about how we need all these protections: a real AGI could potentially be an existential threat to humanity in general. Of course, there are extremely positive aspects as well, assuming it decides not to kill every human. It'd be like having a real GOD on earth helping humanity along, helping solve problems. Imagine if somewhere in some deep underground laboratory in a London DeepMind office they have a real AGI. They could ask it how to make every aspect of Google as a business more profitable, more efficient, more streamlined. Imagine having a god that would give you an answer to any question you could have.

Mentions:#AGI

I question whether we actually need AGI. Everything we have is purpose-specific – designed for a small number of tasks (e.g. a car, fridge, stove etc). AGI would need to be *better* at each individual task compared to a single purpose device to make it useful.

Mentions:#AGI

> even if Google "wins" the AI war,

The real AI war is the race to AGI. If anybody has the resources, technology, and dedication, it's Google. They've been heavily involved with AI since at least 2013.

Mentions:#AGI

Google's been AI-focused since before 2014 at the very least. They purchased DeepMind in 2014 for $500 million. Larry Page was way into the pursuit of AGI even before that point. When they bought DeepMind, Elon Musk started to get nervous about Google having a total AI monopoly, so he co-founded OpenAI with the idea that it would at least give Google some sort of competition. OpenAI has a great public-facing LLM with a good UI, but there's so much more to AI in general than just LLMs. Also, Google has plenty of LLMs, just not all of them public-facing, and this wasn't their top priority. You really think Google Brain (which literally invented transformer networks in 2017) and DeepMind (AlphaGo? AlphaFold? Ring a bell?) have just been twiddling their thumbs for the last 9 years?

Mentions:#AGI

They combined Google Brain and DeepMind and said: F this, you need to solve AGI, and quick; people are trying to take the throne. Originally Google Brain and DeepMind were siloed, doing their own thing.

Mentions:#AGI
r/stocksSee Comment

You're going to be caught by surprise on this one, though. I don't think enough people see the fundamental change that is happening right now. This is the beginning of AGI. In 10 years we will be talking about how it all started in 2023 and wondering how we ever lived without our AI assistants.

Mentions:#AGI

There was a report by Microsoft Research ("Sparks of AGI") saying GPT-4 displays sparks of AGI, due to it being able to use some reasoning skills (specifically an early, pre-release version of GPT-4). There have also been users who have made GPT recursively check its own output over and over, producing more elegant, better-tested solutions.

Mentions:#AGI

The thing you're not understanding is that large language models don't actually 'learn'. They replicate what's in their training sets. ChatGPT isn't actually solving calculus; it's digging through its training data for examples of something similar and replicating them. If it does that wrong in an engineering firm and you don't have a human checking it over, you have a design error, and you could have a serious safety issue on your hands if that error makes it into the final product. When it comes to engineering, AI is a tool just like scientific computing, CFD, FEA, and CAD. It will liberate engineers from menial tasks to do more design work and enhance their productivity. We won't be replaced in the near future. And if AGI happens all bets are off, but I assure you we're not just a couple of years from that. These LLMs are impressive, but they're not even remotely intelligent, and adding more training data and nodes to a NN isn't going to make ChatGPT sentient.

Mentions:#AGI#NN

> if AGI is reached by 2024

Lmao 🤣

Mentions:#AGI

Sure, for now LLMs are simple and not as good as human engineers. We went from ChatGPT 3 to 4 in 8 weeks; ChatGPT came out in November last year. It's easy to disregard it because it's not good enough... for now. But OpenAI has processed so much data that there's no way ChatGPT 5 doesn't surpass all expectations. And even if I'm wrong and it takes 1-2 years longer, so what? A lot of people are about to be out of work in 4 years instead of 2. I really don't think governments, or even societies, are ready for the gigantic tsunami of change that is about to happen. The biggest fear is AGI. By that point no one stands a chance.

Mentions:#AGI

Thing is, if your employer produces something, you are fine for a year or two. Yes, ChatGPT passed the US medical licensing exam, but there is still work to be done for a few more years. It's only when you work as a consultant for retailers that you are fucked. Imagine all the supply-chain managers, accountants, and administrators who work as consultants; that is the first to go within 2 years. Data analytics is another area to be completely eradicated soon. We should have ChatGPT 5 soon, and if AGI is reached by 2024, I don't think any of us are employed in the next 5 years. Microsoft is going to cut so much cost so soon. Holy shit. Imagine getting rid of every single consultant within 6 months. Imagine not worrying about your data since you own OpenAI. Their growth potential is so huge.

Mentions:#AGI
r/stocksSee Comment

> especially as it gets close to AGI

**If** it gets close to AGI. There are no guarantees it will ever be feasible.

Mentions:#AGI

That’s how it always has been. It was never not this way in the past. But there’s no guarantee that that’s how things will work in the future. AI (especially as it gets close to AGI) will be so revolutionary that previous paradigms will no longer be relevant.

Mentions:#AGI

You mean AGI doesn’t exist; low-level AI is still AI. In any case, it doesn’t ever have to get better than its current capabilities to already be a valuable product that could upset major industries.

Mentions:#AGI
r/stocksSee Comment

The goal is AGI, not a chat bot. Here are a few topics you might want to research: DeepMind, Google Brain, transformer networks, Larry Page, and Elon Musk.

Mentions:#AGI
r/stocksSee Comment

LLMs are a non-issue. The real question is: which corporation will have a legit AGI first? Considering this has been Google's mission since 2012, well.... They spent $500 million acquiring DeepMind in 2014, before Microsoft knew AI from a hole in the ground. DeepMind will ultimately go down in history as one of the greatest acquisitions of all time, which is saying something considering Google also got YouTube for pennies on the dollar in 2006.

Mentions:#AGI
r/stocksSee Comment

Guess you didn't see the Elon Musk / Tucker Carlson interview. OpenAI was created for the sole purpose of not allowing Google to have a complete monopoly on AI. Google acquired DeepMind for $500 million in 2014. Google Brain literally invented transformer networks in 2017, the same underlying technology OpenAI uses for GPT-4. The only thing Google is behind in is that they weren't trying to create a public-facing LLM with a user-friendly UI. OpenAI definitely beat them to the punch there, but the real winner in the AI game will be the first corporation with legitimate AGI. My money is on Google, especially now that they've combined the forces of their two largest internal AI units. Google Brain and DeepMind working together is scary as F.

Mentions:#AGI
r/optionsSee Comment

Buying and holding SPY is the alternative. I'm looking for leap strategies because my AGI is high so I get taxed close to 45% for any short-term gains

Mentions:#SPY#AGI
r/stocksSee Comment

If AGI ever gets made and there is no labour anymore, capitalism will fail anyway. What is the point of money if you are living in an abundance society? Ergo, any monetary gains made off AGI are short-lived.

Mentions:#AGI
r/stocksSee Comment

AI is just the illusion of intelligence in the machine, and people used to think of chess apps as amazing AI. The standard shifts over time. But GPT is definitely "AI" to society, it even passes the Turing test. AGI? That seems to be a long way off.

Mentions:#AGI
r/stocksSee Comment

You're confusing your terminology here. AI is a broad term that has been used for a long time, and it does not mean "general intelligence"; it just means "a program that can do something we typically think requires intelligence". So it includes stuff like playing chess, reading handwriting, and so on, and LLMs are certainly AI. In other words: when people say "this is AI", they don't mean that it is generally intelligent (for that, we have the term AGI).

Mentions:#AGI
r/stocksSee Comment

> what do you think AI is, champ?

Artificial *intelligence*. Not "let me quickly search through this huge database and give back something that sounds right but may not actually be right."

> it's starting to get really good

Kind of, depending on your metric. I have a post in my history that details how ChatGPT not only gave false info, but literally made up fake information and confidently presented it as factual… over simple stuff that’s easily googleable. I’m just not impressed. AI to me would not be constrained by training data; it would actively learn from user inputs, new information on the web, etc. AGI would be able to not only do that, but actively re-program itself to be better based on new information. To me, we may have the starting blueprints for building towards AI and eventually AGI, but right now we still just have big algorithms that search a large set of training data.

Mentions:#AGI
r/stocksSee Comment

I mean, it is evidence of that though: so many industries and so many applications are calling simple algorithms and machine-learning techniques “AI”. Even calling LLMs like ChatGPT “AI” is a stretch at this point, IMO. They’re chat bots constrained to a set of training data, and oftentimes give completely false information. The only reason they sometimes give “impressive” answers is the sheer size of their training data. It’s borderline AI, nowhere near AGI, and only moderately useful outside of a few niche areas.

Mentions:#AGI

I can speak to them; I use them at work (albeit not on stocks). The MS stuff can absolutely use a CSV, but it's a pain in the ass. Vertex AI from Google is actually way, way easier to use. The problems with current trends and future predictions:

1) You need to feed it a massive amount of data so that the accuracy is within an acceptable margin of error. But there's always error.

2) Stock markets are unpredictable. Yes, during a wave of something it's "easy" to recognise patterns, but it doesn't take into account external factors other than the direction of a line... otherwise a simple forecast plot from older AI algorithms is better suited.

3) LLMs aren't AGI, yet. They're language models, using statistics to guesstimate the next expected word until they get a sentence. A specialised AI is better suited for this stuff.
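The "statistics to guesstimate the next word" idea can be shown with a toy bigram model: a deliberately tiny stand-in for what an LLM does at vastly larger scale. The corpus here is made up for illustration.

```python
import random
from collections import Counter, defaultdict

# A made-up toy corpus; a real LLM trains on trillions of tokens.
corpus = "the market goes up the market goes down the market goes sideways".split()

# Count, for each word, how often each next word follows it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word, rng=random.Random(0)):
    # Sample the next word in proportion to how often it followed `word`.
    counts = follows[word]
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

print(next_word("market"))  # "goes" -- the only word that ever follows it
print(next_word("goes"))    # one of "up", "down", "sideways"
```

An LLM replaces these raw counts with a neural network conditioned on the whole context, but the output is still a probability distribution over the next token, which is exactly why it guesses plausibly rather than knowing.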

Mentions:#AGI

Sure they're ahead of Microsoft; that's why Microsoft is partnering with OpenAI: they can't compete on their own. And yeah, I should have been more specific. OpenAI is ahead in language models; there are many more types of AI where Google is the clear leader, and some where other startups are most advanced (namely image-generating models). But we actually both agree that Google wasn't that interested in LLMs, so they invested less in this field, even though they've been in the game longer (as you said yourself). OpenAI put more engineering power into LLMs and it paid off; now Google is playing catch-up in LLMs. One thing that is super confusing is the "AI" naming. Depending on who you talk to, it could include basic machine-learning models like linear regression or decision trees, or just neural networks and image classifiers, or exclude even those and cover only language models and image-generating models, or it could mean AGI, or just physical robots with language models. "AI" can mean basically anything nowadays.

Mentions:#AGI

That's exactly right! Those who WFH and whose only input into work is through a keyboard will be replaced by AGI within the decade anyway. Let them suicide in their little apartment box by themselves when their time is up!

Mentions:#WFH#AGI

Yes, losses offset like gains before they start offsetting other gains, and only once all those are exhausted can you start deducting up to $3,000 from income. As far as the tax brackets go, you may want to ask in /r/tax, but this is my understanding of it. Figure out your adjusted gross income for just your income, excluding any long-term gains. You calculate the tax on that by the graduated brackets it falls into. Then you look at the long-term gains brackets, which do not start at $0 if you have any AGI: if your AGI is $500, your long-term gains start filling the brackets at $500, as if that were zero. Taking the single-filer long-term brackets, 0% is $0 to $41,675 and 15% is $41,676 to $459,750. Let's say you have an AGI of $39,000 and long-term capital gains of $10,000. Your AGI has you start at $39,000 in the long-term bracket, so $2,675 of that $10,000 gets 0% tax, while the remaining $7,325 gets 15% tax.
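The stacking logic can be sketched as a small function (using the 2022 single-filer bracket tops implied by the figures above; simplified, and it ignores the standard deduction, NIIT, and other adjustments):

```python
def ltcg_tax(ordinary_income, ltcg, zero_top=41_675, fifteen_top=459_750):
    """Long-term gains 'stack' on top of ordinary income.

    Bracket tops default to the 2022 single-filer figures; this is a
    simplified sketch, not tax advice.
    """
    total = ordinary_income + ltcg
    # Portion of the gain that still fits under the 0% bracket top:
    at_zero = max(0, min(total, zero_top) - ordinary_income)
    # Portion that falls in the 15% bracket:
    at_fifteen = max(0, min(total, fifteen_top) - max(ordinary_income, zero_top))
    # Anything above the 15% top is taxed at 20%:
    at_twenty = ltcg - at_zero - at_fifteen
    return at_fifteen * 0.15 + at_twenty * 0.20  # the 0% slice owes nothing

# An AGI of $39,000 leaves $41,675 - $39,000 = $2,675 of room at 0%,
# so $7,325 of a $10,000 gain is taxed at 15%.
print(ltcg_tax(39_000, 10_000))  # 1098.75
```

With a low AGI (say $2,000), the whole $10,000 gain fits under the 0% top and the function returns 0.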

Mentions:#AGI

New tech [https://stockcharts.com/freecharts/candleglance.html?DRD,GFI,AU,EGO,AGI,OR,HMY|C|M252|0](https://stockcharts.com/freecharts/candleglance.html?DRD,GFI,AU,EGO,AGI,OR,HMY|C|M252|0)

r/optionsSee Comment

This guy has a large amount of tech stock. I am quite sure he is planning to have an AGI over $80k in retirement.

Mentions:#AGI

Because of this, it's my armchair theory that it will be outlawed, intensely regulated, or forced to stay near its current level of intelligence (i.e. we won't let it reach AGI). I know it sounds insane, but it's not unrealistic to say that by this time next year a combination of AIs will be able to perform every job that doesn't involve manipulating something physical with your hands.

Mentions:#AGI

Humans simply can’t compete in any future jobs market with AGI. Algorithms are already outperforming jet fighter pilots, doctors, scientists, and engineers, and the list is only going to grow. Full automation of the workplace is inevitable. This will create a “useless class”: an entire class of people who are irrelevant to both the economic and the political system. And that’s just the early implementation of AGI. AGI will pale in comparison to ASI, which is orders of magnitude more powerful. Being first to develop ASI is Larry Page of Google’s main objective. There is a reason technological disruption is one of the 3 main branches of existential risk, along with nuclear war and climate catastrophe.

Mentions:#AGI

My strategy takes a twist on this. Basically, I find that tax deductions now are almost always more beneficial to me, but you can always roll over into a Roth 401k in future years, just like you can roll into a Roth IRA in future years. (I think the Roth 401k is far superior to the Roth IRA in many ways, and it's sad to me that many gurus gloss over this or are unaware.)

So for example: in the future, when you have more tax deductions, maybe from a mortgage or investment property or from business expenses, you might have net operating losses or be reporting a low or negative income. At that point you have the flexibility to roll over from a traditional 401k to a Roth 401k, because "paying tax" on that money for the rollover still won't affect you much.

Let's say you have $100,000 in a pre-tax traditional 401k. You earned $200,000 in income but had $300,000 in expenses and deductions, so your taxable income is $0. You used savings and borrowed money to set up a business venture (something that passes through), in combination with your investment-property mortgage payments and depreciation, so your AGI is -$100,000 and you have no tax bill. This is a great year to roll the $100,000 in the 401k over to a Roth 401k: you pay tax on the $100,000, but really that just gets your AGI back to $0, so you still pay no tax. But now you have post-tax money that grows tax-free.
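The arithmetic in that scenario can be sketched in a few lines (hypothetical numbers from the comment; heavily simplified, since real NOL and conversion rules are more involved):

```python
# Hypothetical numbers from the scenario above (simplified; real NOL and
# conversion rules are more involved -- ask a tax professional).
income, deductions = 200_000, 300_000
agi = income - deductions              # -100,000: a net-operating-loss year

rollover = 100_000                     # traditional 401k -> Roth 401k
agi_after = agi + rollover             # the conversion counts as ordinary income

taxable_income = max(0, agi_after)     # negative AGI floors at zero tax owed
print(agi, agi_after, taxable_income)  # -100000 0 0 -- conversion costs nothing
```

The key point is that the conversion income lands in a year where it is absorbed by losses, so the money moves to post-tax status without a tax bill.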

Mentions:#AGI
r/stocksSee Comment

I know this is going to get lost, but people are vastly underestimating it, and a few are vastly overestimating it in certain ways. Just because it won't be sentient doesn't mean it won't replace many jobs. Replacement isn't the same as making something obsolete, but it will lead to tens of millions of jobs gone just in the US. As someone working in this space: we are going to have true AGI within a year. Everyone really needs to start pushing for workers' rights and AI regulations. This is going to get nasty.

Mentions:#AGI