API

Agora Inc

Mentions (24Hr): 7 (75.00% Today)

Reddit Posts

r/wallstreetbetsSee Post

Chat with Earnings Call?

r/investingSee Post

Download dataset of stock prices X tickers for yesterday?

r/investingSee Post

Sea Change: Value Investing

r/WallstreetbetsnewSee Post

Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field

r/pennystocksSee Post

Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field

r/WallStreetbetsELITESee Post

AIGC market brings important development opportunities, artificial intelligence technology has been developing

r/pennystocksSee Post

Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.

r/wallstreetbetsSee Post

Sea Change: Value Investing

r/investingSee Post

API KEY and robinhood dividends

r/pennystocksSee Post

OTC : KWIK Shareholder Letter January 3, 2024

r/optionsSee Post

SPX 0DTE Strategy Built

r/WallstreetbetsnewSee Post

The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT

r/pennystocksSee Post

The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT

r/optionsSee Post

Best API platform for End of day option pricing

r/WallStreetbetsELITESee Post

Why Microsoft's gross margins are going brrr (up 1.89% QoQ).

r/wallstreetbetsSee Post

Why Microsoft's gross margins are expanding (up 1.89% QoQ).

r/StockMarketSee Post

Why Microsoft's gross margins are expanding (up 1.89% QoQ).

r/stocksSee Post

Why Microsoft's margins are expanding.

r/optionsSee Post

Interactive brokers or Schwab

r/wallstreetbetsSee Post

Reddit IPO

r/wallstreetbetsSee Post

Google's AI project "Gemini" shipped, and so far it looks better than GPT4

r/stocksSee Post

US Broker Recommendation with a market that allows both longs/shorts

r/investingSee Post

API provider for premarket data

r/WallstreetbetsnewSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/investingSee Post

Best API for grabbing historical financial statement data to compare across companies.

r/StockMarketSee Post

Seeking Free Advance/Decline, NH/NL Data - Python API?

r/pennystocksSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/wallstreetbetsOGsSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/WallStreetbetsELITESee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/ShortsqueezeSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/smallstreetbetsSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/RobinHoodPennyStocksSee Post

A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform

r/stocksSee Post

Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?

r/investingSee Post

Past and future list of investor (analyst) dates?

r/pennystocksSee Post

Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration

r/WallstreetbetsnewSee Post

Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration

r/RobinHoodPennyStocksSee Post

Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration

r/pennystocksSee Post

Aduro Clean Technologies Inc. Research Update

r/WallStreetbetsELITESee Post

Aduro Clean Technologies Inc. Research Update

r/optionsSee Post

Option Chain REST APIs w/ Greeks and Beta Weighting

r/investingSee Post

As an asset manager, why wouldn’t you use Verity?

r/wallstreetbetsSee Post

Nasdaq $ZG (Zillow) EPS not accurate?

r/pennystocksSee Post

$VERS Upcoming Webinar: Introduction and Demonstration of Genius

r/StockMarketSee Post

Comps and Precedents: API Help

r/StockMarketSee Post

UsDebtClock.org is a fake website

r/wallstreetbetsSee Post

Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?

r/ShortsqueezeSee Post

Short Squeeze is Reopened. Play Nice.

r/stocksSee Post

Your favourite place for stock data

r/optionsSee Post

Created options trading bot with Interactive Brokers API

r/investingSee Post

What is driving oil prices down this week?

r/weedstocksSee Post

Leafly Announces New API for Order Integration($LFLY)

r/stocksSee Post

Data mapping tickers to sector / industry?

r/WallstreetbetsnewSee Post

Support In View For USOIL !

r/wallstreetbetsSee Post

Is Unity going to Zero? - Why they just killed their business model.

r/optionsSee Post

Need Help Deciding About Limex API Trading Contest

r/investingSee Post

Looking for affordable API to fetch specific historical stock market data

r/optionsSee Post

Paper trading with API?

r/optionsSee Post

Where do sites like Unusual Whales scrape their data from?

r/stocksSee Post

Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges

r/StockMarketSee Post

Reference for S&P500 Companies by Year?

r/SPACsSee Post

[DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts

r/stocksSee Post

Know The Company - Okta

r/SPACsSee Post

[DIY Filing Alerts] Part 2: Emailing Today's Filings

r/wallstreetbetsOGsSee Post

This prized $PGY doesn't need lipstick (an amalgamation of the DD's)

r/SPACsSee Post

[DIY Filing Alerts] Part 1: Working with the SEC API

r/optionsSee Post

API or Dataset that shows intraday price movement for Options Bid/Ask

r/wallstreetbetsSee Post

[Newbie] Bought Microsoft shares at 250 mainly as see value in ChatGPT. I think I'll hold for at least +6 months but I'd like your thoughts.

r/stocksSee Post

Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years

r/stocksSee Post

Anyone else bullish about $GOOGL Web Integrity API?

r/investingSee Post

I found this trading tool thats just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isnt this a violation of reddits new API rules?

r/optionsSee Post

where to fetch crypto option data

r/wallstreetbetsSee Post

I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis

r/stocksSee Post

Fundamental Stock Data for Your Projects and Analysis

r/StockMarketSee Post

Fundamental Stock Data for Your Projects and Analysis

r/stocksSee Post

Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly

r/wallstreetbetsSee Post

Pictures say it all. Robinhood is shady AF.

r/optionsSee Post

URGENT - Audit Your Transactions: Broker Alters Orders without Permission

r/StockMarketSee Post

My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it

r/StockMarketSee Post

I’m Building a Free API for Stock Fundamentals

r/wallstreetbetsSee Post

The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must

r/StockMarketSee Post

I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)

r/StockMarketSee Post

I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)

r/optionsSee Post

To recalculate historical options data from CBOE, to find IVs at moment of trades, what int rate?

r/pennystocksSee Post

WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS

r/wallstreetbetsSee Post

$SSTK Shutterstock - OpenAI ChatGBT partnership - Images, Photos, & Videos

r/optionsSee Post

Is there really no better way to track open + closed positions without multiple apps?

r/optionsSee Post

List of Platforms (Not Brokers) for advanced option trading

r/investingSee Post

anyone using Alpaca for long term investing?

r/investingSee Post

Financial API grouped by industry

r/WallStreetbetsELITESee Post

Utopia P2P is a great application that needs NO KYC to safeguard your data !

r/WallStreetbetsELITESee Post

Utopia P2P supports API access and CHAT GPT

r/optionsSee Post

IV across exchanges

r/optionsSee Post

Historical Greeks?

r/wallstreetbetsSee Post

Stepping Ahead with the Future of Digital Assets

r/wallstreetbetsSee Post

An Unexpected Ally in the Crypto Battlefield

r/stocksSee Post

Where can I find financial reports archives?

r/WallStreetbetsELITESee Post

Utopia P2P has now an airdrop for all Utopians

r/stocksSee Post

Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue

r/wallstreetbetsSee Post

Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach

r/wallstreetbetsSee Post

Reddit stands by controversial API changes as situation worsens

Mentions

Short answer: The market has never been logical. We are either heading into a bear market or worse. Google's in-house TPU may not need TSMC to manufacture it in the future. I don't have a lot of information about Google's TPU, but keep in mind that so far, Google's product is designed for Google's own use. It's like the iPhone's CPU, which only works for the iPhone; you don't see Apple selling their iPhone CPUs to others. Therefore, for Google to sell their TPUs to others, they would have to provide the entire supporting ecosystem. It's kind of like how Nvidia doesn't just sell a GPU but an entire platform like Blackwell. Assuming—and that's a big assumption on my part—that you need more TPUs to beat Nvidia's GPU performance, the cost would increase to a point where it doesn't make sense to compete. Okay, so what about the model Google is actually pursuing: offering TPU access through its Google Cloud Platform? While this seems like a solution, it faces significant hurdles in competing directly with Nvidia's ecosystem. First, there's an inherent conflict of interest. Google's own AI teams (working on Gemini, Search, etc.) will always be the top priority for the TPU division, potentially leaving external customers with lower priority for support and the latest hardware. Second, and more critically, is the software challenge. Nvidia's dominance isn't just its hardware; it's the mature, universally adopted CUDA software platform. For Google to be truly competitive, it must not only develop a robust software stack and API for its TPUs but also convince developers to learn and adopt a new, proprietary system—a massive undertaking that requires continuous investment. While you can access TPUs in the cloud today, the 'in-house' nature of the technology creates friction. The TPU and its software were built for Google's specific needs first. Making them a generic, user-friendly product for any third party is a complex transformation. Therefore, the TPU's primary strategic value isn't necessarily to beat Nvidia in a chip-sales war, but to power Google's own industry-leading AI services like Gemini and create a unique, high-performance offering for its cloud customers. PS: Regarding Meta, their AI strategy seems unclear. They invested heavily in an in-house AI team with, arguably, less tangible output than their rivals. Their recent interest in exploring Google's TPU underscores this strategic confusion. It suggests an internal lack of a clear, unified direction, as adopting a competitor's specialized hardware like the TPU is a significant and complex pivot.

Mentions:#API

TPUs shine for big, steady transformer jobs you control end to end, but GPUs win on flexibility and time to ship. Most stacks are PyTorch/CUDA; JAX/XLA on TPU is fast but porting hurts, and custom kernels/MoE/vision still favor H100/L40S or MI300. v5e/v5p are great perf/watt for int8/bfloat16 dense matmuls, less so for mixed workloads. On-prem TPUs are rare; independents buy GPUs because drivers, support, and resale, while trading shops with tight regs sometimes get TPU pods via Google. Practical play: rent TPUs on GCP for batch training, keep inference on GPUs with TensorRT-LLM or vLLM. We use vLLM and Grafana, and DreamFactory just fronts Postgres as a REST API so models pull features without DB creds. Net: TPUs for fixed scale, GPUs for versatility.

Mentions:#MI#API#DB
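
As a rough illustration of the "keep inference on GPUs with vLLM" part of that comment: vLLM exposes an OpenAI-compatible HTTP endpoint, so a client can talk to a self-hosted model through the standard openai SDK. A minimal sketch; the host, port, model name, and prompt below are placeholders, not anything from the comment.

```python
# Minimal sketch: query a self-hosted vLLM server through its OpenAI-compatible
# endpoint. Assumes the server was started with something like:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
# Host, port, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible API
    api_key="not-needed-for-local",       # vLLM ignores the key by default
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    messages=[{"role": "user", "content": "Summarize today's GPU utilization report."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```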

I hope so. I want to talk to my wife, Hatsune Miku, locally on my GPU instead of paying for an API.

Mentions:#API

if you can get through API, [Polygon.io](http://Polygon.io) will provide it

Mentions:#API

>AI machine >LLM machine what? It's all software bro, what are you talking about? Are you building your own data-centre? What is this "AI machine", please tell me? Did you actually mean "I paid for openAI API access"?

Mentions:#API

Your comment misidentifies where the massive investment is actually going. The billions are not primarily funding small-time wrapper companies with nice pitch decks. Instead, the vast majority of capital is flowing into the foundational model developers themselves, such as OpenAI and Anthropic. This money is immediately earmarked to secure enormous amounts of high-end silicon and to fund the computationally immense process of model training. Building and running a truly cutting-edge large language model requires hundreds of millions of dollars just in GPUs and data center infrastructure, making the investment a deployment of capital into the fundamental, costly hardware required for the AI arms race. Furthermore, dismissing the value being produced as minimal misses the point about leverage and future productivity. The market is not just valuing current revenue, but the immense, systemic efficiency gains that this new utility layer promises. What looks like a simple API call is actually automating complex, costly cognitive tasks across major industries like law and finance. The investment is essentially a bet on a fundamental infrastructure shift, analogous to funding the railroads or laying fiber optic cable. While there will be busts, the core technological advancement holds a promise of future economic value that may well justify, or even eclipse, the high current valuations.

Mentions:#API

Sure, CEO mentioned on earnings call that while they could prioritize sales growth, they plan on onboarding partners in a slow(er) and methodical manner as to mitigate risk from the onboarding of many partners who may not know how to use PGY's platform. Additionally, there is a growth in product development as a form of revenue rather than sales in just API calls to its loan determination model. Not familiar with TTM PE lower than Forward PE ratio, but thanks for calling it out.

Mentions:#PGY#API

Partial answer: Corp IT software license agreements from big tech companies (like Mag7) will have big incentives to get their big corp customers to use their LLMs. Those companies using the LLM's will then be charged for ingress and egress just like the cloud services only it will be input and output tokens based on API usage. That's where a lot of revenue will come from. Is it enough to pay for the bubble? We shall see!

Mentions:#API

yea if you're renting them as a service, that's not how these megacorps are consuming them though. they're all part of a unified product that's accessible via an API. of course you can rent GPU time, but that's a relatively small part of the market

Mentions:#API

AI creates a huge amount of value but is difficult to make money off of. Any sufficiently large company that could offset the data center investment will just train or host models privately. Anyone building on an AI API runs a giant risk of simply being updated into irrelevancy. The API provider is incentivised to take any wrapper concept and turn it into a first-party offering. If anything, this is a data center/GPU bubble. The big builders are betting on people needing data centers into the mid to long term. I have serious doubts that all this compute will be necessary after the excitement dies down. LLMs will stay, but we won't need the huge data centers to run them, so they will lose out on that aspect.

Mentions:#API

Too tired to give a proper answer, but for example with automating shipping: a successful order of shipment through an API like Shipmondo with Next.js will give me the message "payment succeeded, ordered shipping, here is your shipping object data:" followed by the shipping object data. So if my backend receives this message from an external API I can be certain that I have indeed ordered shipping using their API. And if there ever are any errors I make sure to split my code up in many tiny independent pieces with good naming and error logs, so often I can isolate where mistakes happen and only focus on a tiny piece of code for my troubleshooting, I am a programmer/developer after all, so I am able to fix things when AI fails. Some industries are able to absorb mistakes and repair errors later with no issues, like the software I write, most functions are not critical to human survival anyways, so I am fine with some mistakes here and there, I'll find them and fix them, the entire process is waaaaaaaaay quicker with AI anyways

Mentions:#API

I asked the AI on ORATS. I am waiting to get an answer from a live human but I believe that they do not have what I am looking for based on the AI response below: "I am looking to see if I can obtain SPX Net Gamma History for the last 5 years to date." OTTO: ORATS does not currently offer Net Gamma calculations or data. Our platform focuses on implied volatility, Greeks at the individual option level, and volatility surface analytics, but we don't aggregate position-level gamma exposure across the market. For the Greeks we do provide (delta, gamma, theta, vega, rho, phi), these are calculated at the strike level for individual options, available through: Historical Data: Near end-of-day data going back to 2007 includes Greeks for all strikes API Access: Strikes endpoints provide Greeks for current and historical dates Intraday Data: 1-minute granularity from August 2020 includes Greeks If you're looking to calculate net gamma yourself, you could use our strike-level gamma data combined with open interest, though you'd need to make assumptions about market positioning since we don't track actual dealer or market maker positions. For questions about custom data solutions or if you need help with what we do offer, contact support@orats.com.

Mentions:#API
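
The last paragraph of that answer (strike-level gamma combined with open interest, plus an assumption about positioning) reduces to a simple aggregation. A minimal sketch, assuming a hypothetical CSV export of strike-level data with gamma, openInterest, and putCall columns; the column names, the spot price, and the crude "dealers long calls, short puts" convention are all assumptions, not ORATS fields.

```python
# Minimal sketch: approximate SPX net dealer gamma from strike-level data.
# Column names and the dealer-positioning assumption (long calls, short puts)
# are illustrative placeholders; real field names may differ.
import pandas as pd

CONTRACT_MULTIPLIER = 100
SPOT = 5000.0  # underlying price for the day being aggregated (placeholder)

df = pd.read_csv("spx_strikes_2024-01-03.csv")  # hypothetical strike-level export

# Dollar gamma per contract, scaled to a 1% move in the underlying.
df["dollar_gamma"] = df["gamma"] * df["openInterest"] * CONTRACT_MULTIPLIER * SPOT**2 * 0.01

# Crude positioning assumption: dealers long calls (+), short puts (-).
sign = df["putCall"].map({"C": 1, "P": -1})
net_gamma = (df["dollar_gamma"] * sign).sum()
print(f"Approximate net dealer gamma: ${net_gamma:,.0f} per 1% move")
```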

The term "graphics processing unit" is a holdover from an era when the only mainstream practical use specialized matrix operations chips was graphics/rendering. Practically speaking, NVIDIA's datacenter "GPUs" do the the same thing as Google's "TPUs". From a hardware perspective, it would be pretty trivial for Google/Broadcomm to repackage their "TPU" technology as graphics cards. However, it's an expensive pain in the ass to build the APIs & translation layers to make new matrix operation architectures compatible with the graphics engines that most graphics rendering software uses. NVIDIA & AMD have HUGE "first to market" advantages as far as software support in graphics processing is concerned. At the same time, graphics processing has become a low profit industry. All told, there is no incentive for Google/Broadcomm to sell "GPUs" at the moment. NVIDIA has long had a similar api/software advantage in the machine learning/AI space: CUDA API. The ubiquity of CUDA programming in the machine learning space leading up to the launch of LLMs gave NVIDIA a HUGE advantage, and ultimately made NVIDIA the leader in "AI chips". For a long time, Google's machine learning development API was more-or-less dependent upon CUDA's API and thus dependent upon NVIDIA chips. Now Google and Broadcomm has developed their own datacenter chips that are optimized for TensorFlow without the need for NVIDIA. The fact that performance is in line with NVIDIA's comparable products inherently poses an existential threat to NVIDIA. Because these chips enable the use of TensorFlow without needed NVIDIA chips, they will be positioned to end NVIDIA's datacenter GPU/TPU/matrix processing monopoly. So they do pose an existential threat to NVIDIA. For now, it makes the most sense for Google to keep all of its AI development in-house: they want to win the AI race for themselves. But at some point, it will obviously make sense for Google & Broadcomm to bring their "TPUs" to market. As I mentioned above, they are clearly positioned to end NVIDIA's datacenter matrix processing monopoly.

Mentions:#AMD#API

What a crazy week bros. Just got some investors to sign 40 billion dollar deal with my new AGI company Looking forward to flying to India on business next week to beat my offshore employees until they learn not to say "sir" and "needful" when our platform receives API calls. Calls at open 🚀 👨‍🚀 🚀 👨‍🚀 🚀

Mentions:#AGI#API

The real tell for Nebius is whether they can keep GPU utilization above \~85% while locking in cheap, long-duration power, because that combo drives durable cost per GPU-hour and pricing power. What I’d watch each quarter: committed vs. on-demand mix (aim >70% committed), backlog and weighted avg contract length, take-or-pay and cancellation fees, SLA credits paid, average job queue time and preemption rates, delivered cost per GPU-hour, time-to-rack for new capacity, capex per MW, and supply diversification (NVIDIA vs AMD). Also track Token Factory adoption as a % of revenue and usage metrics (SDK/API calls, governance features enabled) to test the software moat. Hyperscalers can carve out dedicated AI clusters (think UltraClusters and private capacity reservations), so Nebius’ edge has to show up as better delivered cost, faster time-to-serve, and steadier SLAs. Don’t ignore power PPAs and siting risk; power is the real constraint. For diligence dashboards, I’ve used Snowflake for cost/usage tables, Datadog for uptime, and DreamFactory to turn internal DBs into quick APIs. If Nebius sustains high utilization and cheap power under multi-year deals, the edge is real; if not, hyperscalers squeeze them

Mentions:#AMD#API

Google has the infrastructure, the data, Google Workspace, and a means of monetising consumer LLMs with ads. OpenAI had/has the edge in technology and market share, both for consumer and API use cases. Many orgs are building on OpenAI. Longer term, the future doesn't look great for OpenAI as the path to revenue is much weaker. Google will dominate once OpenAI needs to start making a profit.

Mentions:#API

Could it? Unless California or the EU decides to force OS developers to open up their digital assistant API's and allow competition, I don't see how OpenAI can beat the companies that develop the operating systems AI needs to integrate with in the long run, even if they make models that are better. I'd even bet on Apple over them. OpenAI's best bet is probably to get bought out by Microsoft at some point and merged into the Copilot team.

Mentions:#EU#OS#API

In case you're interested, it is possible to explore income statements using the data provider Alpha Vantage, which offers free API access.

Mentions:#API
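
For anyone who wants to try that: Alpha Vantage's income-statement endpoint is a plain HTTPS query on the free tier (rate-limited). A minimal sketch using their documented demo key, which only works for IBM; swap in your own symbol and key.

```python
# Minimal sketch: pull annual income statements from Alpha Vantage.
# The "demo" key only works for IBM; request a free key for other tickers.
import requests

params = {
    "function": "INCOME_STATEMENT",
    "symbol": "IBM",
    "apikey": "demo",
}
data = requests.get("https://www.alphavantage.co/query", params=params, timeout=30).json()

for report in data.get("annualReports", [])[:3]:
    print(report["fiscalDateEnding"], report["totalRevenue"], report["netIncome"])
```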

I decided to connect my Lovense dildo to the API feed from TradingView. Now, every green candle on the 1min, I get a 2-second vibration, and every green candle I get a 10-second Ultra-love Vibration. Let me tell you, after having this set up this week, I've never had so many orgasms in a single day. I love this stock market.

Mentions:#API

I also recently started my journey with investing and trading. I opened accounts with many brokers and always ran into some kind of problem — either prices, lack of API access, or limitations in placing OCO orders, or the absence of pre-market and after-hours trading. In the end, I chose Schwab as my broker — for day trading U.S. stocks, while for long-term investments and access to the European market, I went with Trading212.

Mentions:#API

It is not easy to create an entire marketing platform that works extremely well. Meta has been improving their ads channels for decades. Same with Google. Also, how much do the ads need to cost to justify the cost of chatgpt queries? To help with their operational / development cost too? Also, their API revenue is operating at a massive loss. How can they monetize that?

Mentions:#API

this is the whole point of vertical integration. if executed well, the big cloud providers will have a stronger narrative than the "call my API" company. because if you remove that, it's a chatbot. just my opinion though, i've been wrong many times before

Mentions:#API

It's still a great tool and won't go anywhere. It should still be understood that the vast majority of AI implementations aren't profitable, and that's before we reach the point where the AI companies start trying to take profits. Once OpenAI starts profit-taking instead of writing off billions in losses to stoke the hysteria, I'd expect that AI profitability rate to move dangerously close to zero. People in the market are launching money at AI based on a sales pitch while fundamentally not understanding what the technology is and what the limitations are. I have a computer engineering degree and know how this works under the hood. Two things become very obvious when you have a real tech background: (1) This doesn't scale forever and (2) the hallucination issue is very likely unsolveable. Under the extremely likely circumstances that we can't solve hallucinations, do you think a technology that you can *never* fully trust is worth this much? Does it also make sense to pay some multiple of what we're paying now for API tokens once the VC money dries up? I would think not in most cases...

Mentions:#API#VC

Exactly. The whole hypothesis has been that there are insignificant/insufficient uses for this tech, and insufficient net earnings to be made now and in the future to justify the expenses. So the chain goes: precarious LLM-based startups cobbling together expensive/useless stuff > OpenAI > large tech companies > Nvidia. Nvidia is literally at the end of the queue - able to sell hardware while the ones who are supposed to show utility in this tech and heavy investments come up empty. Who said we'll jump straight to a hardware sales slowdown? Maybe people here did, but they are not articulating the bear thesis correctly then. Look for the private VC investments to start dropping in valuations, cause they are the weakest, then OpenAI loses a bunch of its API calls and shrinks in revenue, maybe goes through a downround, and now people start asking questions about utility, about pausing data center build outs and pausing Nvidia HW purchases - literally happens towards the end.

Mentions:#VC#API

There are other forms being developed; LLMs will be the least exciting application when we look back at this era. It is simply the most consumer-ready today, and the hype it created was enough to launch a massive capex boom. If LLMs were the be-all end-all application, there would not be such wide access to the core model APIs. Basically, Microsoft doesn't care about competing with all these rinky-dink chat bot applications that are being sold as SaaS, which are just OpenAI/Gemini/Llama with some GUI on top and maybe some RAG layer.

Mentions:#API

Your very statement is fallacious. There's no single "Reddit sentiment". You would need backend API access and big data tools to process an insane amount of content that is refreshed literally every day.

Mentions:#API

they have an API if that's what you're asking

Mentions:#API

Well, this story has been going on for Meta as long as they have been a public company. First it was desktop to mobile (everybody died of fear -- hint: Meta mastered it), then it was Facebook is old, then Instagram is old vs. TikTok, then fuck there is Snap and it will eat up Meta, then TikTok, then Apple restricted its API and data access -- Meta is fucked and will never recover (hint: SP 6x since then), then Metaverse overspending, then because it's fun "Facebook is dead", then Instagram vs. TikTok once more, now AI overspending!!! Yeah whatever: Fact is, after all these down-talking phases, Meta crushed everyone's expectations. Zuck might not win a popularity prize but he damn sure deserves a prize for creating the most impressive cash-printing cow on the planet. And spoiler, he will not let anyone take his business coz that dude is competitive and ruthless as fu... as we all could see in the past. So go ahead and sell, as many did in 2022 coz "doomsday".

Mentions:#API

Have you looked into what API permissions they're actually requesting? Like theoretically keeping money with your broker is safer but if the API permissions allow withdrawals or transfers then it's not really that different from sending them money directly.

Mentions:#API

I mean you can be a hater of AI all you want, but you're sticking your head in the sand if you want to pretend AI isn't economically valuable. ChatGPT, Anthropic, Cursor, etc. are some of the fastest-growing companies ever, full stop. And like I said: Anthropic makes money on every API call; they are not giving anything away for free. Other companies haven't reported the same data, but Anthropic has the highest prices of any model, so I would be very surprised if margins were negative for ChatGPT or Gemini on their API businesses.

Mentions:#API

The broker custody thing is huge, honestly. If any platform asks you to send money directly to them, that's an immediate red flag, but API integration with established brokers is the only architecture that makes sense from a trust perspective.

Mentions:#API

Well, in that case... > I can tell you are not an engineer [...] Non-Technical people rarely understand [...] I have an MSc in Data Science and I've worked ~3 years as an SRE and ~3 years as an MLE, both at top companies. Btw, your example being "Django" and not some ML-related task makes it clear *you* aren't working in the field. Your comment ignored most of what I said, created a strawman ("LLM doesn't allow an intern to perform [senior work]"), and went off of that, rambling about LLMs and vibe-coding. I didn't say interns will perform senior work, nor did I say it was for coding. I gave an example of how a specific computer vision problem that was insanely hard 10 years ago with just traditional CV and barely-working ConvNets, now is almost trivial with off-the-shelf VLMs. Here, read it again: > Seventh: Usefulness - ease of use. LLMs (and related research) really redefined what's possible, and what's easy. **Let's say you wanted to make an app that counted how many unique people visited your shop per day**. Just 5 years ago you'd need a highly capable data scientist working on this for weeks or months. Today your cheap junior developer from Lidl can call an LLM API and it will likely work okay. Your other point about build vs ongoing costs / maintenance is valid, but is very case-dependent and probably not very meaningful on this example. It doesn't take the same amount of maintenance to keep a simple static site up, as it takes some huge system that depends on 50 other services. Similarly, a simple CV/VLM-based app with one specific and narrow goal may be able to run perfectly fine without any fixes for years, retraining isn't as necessary as it used to be. Even if it is, assuming the initial work is correctly done and a framework is in place, retraining, monitoring, alerting, etc, become almost trivial. I know because we have production models that need near 0 maintenance deployed and running fine, and we also have training pipelines set up with automatic ingestion of new data, retraining, publishing, and all other goodies. Maybe you just worked at B-tier teams/companies that are simply yoloing their AI/ML projects?

Mentions:#SRE#ML#API
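
The shop-visitor example in that comment ("call an LLM API and it will likely work okay") boils down to something like the sketch below: send a frame to a vision-capable model and parse the count. The model name, prompt, and file path are placeholders, and this only counts people in a single frame; true unique-visitor counting would need tracking across frames, which one API call does not give you.

```python
# Minimal sketch: ask a vision-capable LLM how many people are in one camera
# frame. Per-frame counting only; "unique visitors per day" would need
# tracking/re-identification on top. Model name and file path are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("frame_0001.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "How many distinct people are visible? Reply with a single integer."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content.strip())
```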

Seeing some sources claim the $100M is annual. But you’re still right lol. They’d need like 9,995-10,000 more of these deals to breakeven by 2030 if the $1T spend is accurate. Not looking too good, because I doubt there are 10,000 companies with the capability to pay $100M for a chatbot API

Mentions:#API

I admitted nothing and you’re calling me stupid?!??! Here’s what I want you to do. Go look in a mirror and slap your arrogant little fuck face hard. I don’t know who shit in your cornflakes but it wasn’t me, so fuck all the way off. And here’s a little history lesson: 3DFx created the first *3D only* graphics card, the Voodoo2 using the Glide API for 3D graphics processing. It still needed something else to process 2D graphics. NVidia RIVA incorporated 2D AND 3D processors into a single chip, created the CUDA API and called those chips a ‘GPU’. Who is stupid now? Hint: It’s you, not me.

Mentions:#API

Nope, literally my own desktop app and then applied my set of option criteria as logic for the scanner. I use Tradier API for a real-time data feed. I tried the gamut of option services, from Option Samurai to Market Chameleon and others, but none had the flexibility or combinations I wanted.

Mentions:#API

surely they can build a simple downloadable bit of software which connects to the internet and gives you basic features. Third-party API features might stop working if those APIs change, but... maybe open source it or something?

Mentions:#API

This is the part everyone's missing when they're saying oh, they're going to go broke. I saw one number that was 26 billion projected in debt for 2026. But 800 million weekly users. So $32 a year per person; then you can divide that by 12 and pad it a little bit. Force the power users to pay up via API, subscription-like. I don't know why YouTube keeps saying like oh they're broke, they're broke - like, do the math.

Mentions:#API
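
As a quick worked version of that back-of-the-envelope arithmetic (the $26B and 800M figures are the commenter's unverified numbers, not data from this page):

```python
# Quick arithmetic check of the comment's figures (both are the commenter's
# unverified numbers).
projected_shortfall = 26e9      # dollars
weekly_active_users = 800e6

per_user_per_year = projected_shortfall / weekly_active_users
per_user_per_month = per_user_per_year / 12
print(f"${per_user_per_year:.2f}/user/year, about ${per_user_per_month:.2f}/user/month")
# -> $32.50/user/year, about $2.71/user/month
```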

Having strats that perform in different mkt regimes is really key. Being creative is a must with options - so many opportunities to put on risk. Vol isn't as much a factor with options IMO as with, say, equities or futures. Execution is another story - it can become extremely frustrating when you aren't getting good fills. This is why using an API to execute is helpful. -M

Mentions:#API

>I’d remove Anthropic. They have a great model based on enterprise API usage and are backed by Amazon. What a lot of people miss is that a lot of their "enterprise customers" are re-selling access to Anthropic models at a significant loss, as part of bundled services. This is not sustainable, and will be a significant headwind to Anthropic when these enterprise customers begin to charge customers based on their actual costs.

Mentions:#API

Yes…. I am aware Anthropic is currently not profitable. 1. They're backed by Amazon; they're not going anywhere. 2. Their models are the best at replacing human tasks in white collar jobs (admitted by OpenAI in their own research). 3. Their business model is not relying on consumer subscriptions like OpenAI. Claude's revenue, which continues to increase, is based on enterprises using its API.

Mentions:#API

I’d remove Anthropic. They have a great model based on enterprise API usage and are backed by Amazon. I think Mistral being the EU’s homegrown LLM means it will stay around, whether success stateside or not. Perplexity is likely not going to exist. Cursor probably not either.

Mentions:#API#EU

All the analysts forever writing about OpenAI vs Anthropic vs Google are missing the real story that already happened. 80% of startups pitching Andreessen Horowitz are running on Chinese open-source models. Not OpenAI. Not Anthropic. Chinese models like DeepSeek that cost 214x less per token. The math here breaks everything. DeepSeek trained its model for $5 million. OpenAI spent $500 million per six-month training cycle for GPT-5. That gap translates directly to API pricing where startups pay $0.14 per million tokens versus $30 for GPT-4. For a startup burning through 100 million tokens monthly, that’s $1,400 versus $300,000. The difference between 18 months of runway and 3 months. This tells you the real constraint in AI was never capability. Chinese models are matching GPT-4 on coding benchmarks while costing 2% as much. The constraint was always burn rate, and China solved it first by optimizing for efficiency instead of chasing AGI. The second-order effect gets interesting. When your infrastructure costs drop 98%, you can actually afford to fine-tune models for your specific use case. American startups paying OpenAI’s API rates are stuck with generic models. Chinese open-source users are building specialized variants. Silicon Valley thought the moat was model quality. Turns out the moat was cost structure, and they built it backwards. When a16z partner Anjney Midha says “it’s really China’s game right now” in open-source, he’s not talking about benchmarks. He’s talking about who controls the default foundation layer. Now look at where this goes. American AI labs are optimizing for AGI and superintelligence. Raising billions to chase the theoretical ceiling. China optimized for distribution and adoption. Making AI cheap enough to become infrastructure. All 16 top-ranked open-source models are Chinese. DeepSeek, Qwen, Yi. The models actually being deployed at scale. While OpenAI charges premium rates for exclusive access, Chinese labs are flooding the zone with free alternatives that work. The third-order cascade is what changes everything. Every startup that survives the next funding winter will have optimized around Chinese open-source as default infrastructure. Not as a China strategy. As a survival strategy. That 80% number at a16z only goes one direction. When you’re a seed-stage founder choosing between 18 months of runway or 3 months, economics beats nationalism every time. America is still competing to build the best model. China already won the race to build the one everyone uses.

Mentions:#API#AGI

So true. It's the same whether you use the AI or them. The API calls directly go to their whatsapp for answers.

Mentions:#API

Practically speaking, no. When you factor in software/API lag.

Mentions:#API

Most brokers don't support auto X% profit/Y% loss exits on options - it's basically a bracket order for options, and very few offer conditional logic like that natively. Most active traders automate it instead: your code watches P/L or premium, and when the condition hits, it triggers the exit instantly. You don't have to babysit screens. If you're looking for that kind of setup, you can build it pretty cleanly on our API. I can show you exactly how traders structure that loop. -M

Mentions:#API
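
The "code watches P/L, then triggers the exit" loop described in that comment is broker-agnostic in shape. A minimal sketch of just the control loop; get_option_mark and close_position are hypothetical placeholders, not any particular broker's API.

```python
# Minimal sketch of an automated percent-based exit for a long option position.
# get_option_mark() and close_position() are hypothetical placeholders for
# whatever broker/data API you actually use; only the control loop is shown.
import time

ENTRY_PRICE = 2.50        # premium paid per contract
TAKE_PROFIT = 1.50        # exit at +50% (mark / entry)
STOP_LOSS = 0.75          # exit at -25%
POLL_SECONDS = 5

def get_option_mark(symbol: str) -> float:
    raise NotImplementedError("replace with your broker/data API call")

def close_position(symbol: str) -> None:
    raise NotImplementedError("replace with your broker order call")

def watch(symbol: str) -> None:
    while True:
        mark = get_option_mark(symbol)
        ret = mark / ENTRY_PRICE
        if ret >= TAKE_PROFIT or ret <= STOP_LOSS:
            close_position(symbol)
            print(f"Exited {symbol} at {mark:.2f} ({(ret - 1):+.0%})")
            return
        time.sleep(POLL_SECONDS)
```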

Interactive Brokers has an API; you should be able to automate that and more. For what you are asking, an OCO order would suffice (not that it's automated). Most brokers would have that.

Mentions:#API
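
If the native-order route in that comment is enough, the community ib_insync library wraps IBKR's bracket/OCO mechanics. A rough sketch under the assumption that TWS or IB Gateway is running locally with API access enabled; the contract, prices, and port are placeholders.

```python
# Rough sketch: attach take-profit and stop-loss orders to an option entry via
# ib_insync's bracketOrder helper. Contract details, prices, and the local
# TWS/Gateway port are placeholders.
from ib_insync import IB, Option

ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)  # paper TWS default port

contract = Option("SPY", "20250117", 470, "C", "SMART", currency="USD")
ib.qualifyContracts(contract)

# Parent limit buy at 2.50, take profit at 3.75, stop loss at 1.90.
bracket = ib.bracketOrder("BUY", 1, limitPrice=2.50,
                          takeProfitPrice=3.75, stopLossPrice=1.90)
for order in bracket:
    ib.placeOrder(contract, order)

ib.sleep(2)
ib.disconnect()
```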

> Fair. They did it with like a 2-3 dollar fee. They sell millions of tickets that more than covers a website, database, and some simple API's Does it? Was anyone profitable at that point? Engineering a scalable system is extremely expensive, APIs are by necessity not "simple" when you are selling millions of tickets. > They calculated the exact amount people will pay before saying fuck that, and gouge to that max I mean if this were true they'd be even more profitable now. But they're still losing money.

Mentions:#API

Fair. They did it with like a 2-3 dollar fee. They sell millions of tickets; that more than covers a website, database, and some simple APIs. They calculated the exact amount people will pay before saying fuck that, and gouge to that max. It's the same thing with pizza delivery fees that aren't tips. Most of the money is spent paying artists and venues to have a monopoly over the tickets, so you have to get fucked by them, or don't go at all.

Mentions:#API

The unprofitability exists downstream. There are all kinds of services like Copilot offering access to Anthropic models for artificially cheap to the end user, but at massive losses to the provider. With a lot of services, you can pay $20 a month and use up $500+ worth of Anthropic API costs. This is obviously not sustainable, and once these services raise their prices, demand for Anthropic products will fall.

Mentions:#API

Where can I affordably get access to stock market data, including options chain data, like an API, that I can connect ChatGPT or Gemini to, so I can have AI autonomously pull data out to analyze for my gambling queries?

Mentions:#API

The way the money trail works is: consumers/businesses pay OpenAI, Anthropic, Perplexity, Cursor, etc., to use the AI programs directly or the API. However, these private companies are selling these services at a steep loss. The private AI companies don't have their own infrastructure to service the compute to run AI, so they pay the hyperscalers/neoclouds with largely VC/PE money from equity financing instead of revenue. There are literally 1000s of these private AI companies building/training models and trying to sell them (in the US alone). I haven't heard of/seen 1 of these companies being profitable on their standalone revenue. The issue isn't with the hyperscalers per se... it's the private AI companies with an unprofitable business model.

Mentions:#API#VC

really hope firestocks gets fixed/updated soon. you don't realize how useful some of those browser plugins are until they break. (the provider they use to pull their data from changed their API, which broke the plugin)

Mentions:#API

> the question is, are their profitability meets the market expectations That I cannot answer, and deliberate tried to avoid it in my post and comments, and don't even want to given I'm an "almost blind VT" investor. But great opportunity for the consultants I joked at the expense of to get back at me! > it will becomes a commodity market. In a commodity market, the company with the lowest price wins and the profit margins collapse to zero. Yes, I absolutely agree, and I totally forgot to mention commoditization through open-weights models (and other ways). I do think most of the profits are in cloud providers offering these models, and that's also why I think AI startups *might* survive this, since they're building their own infrastructure, and why I conditioned that upon competition slowing down (fifth point). Regarding... > Personally, I don’t think bigger general language model is the way to go, but highly specialized, accurate small models that able to run on a local machine is the future. *Are you influenced by a certain Nvidia paper that was made to sell more GPUs to those who would otherwise have gone with a cloud model?* Jokes aside, yes, I think a lot of tasks can be delegated to small, fine-tuned models that are part of wider systems and may perform better than large generic models. In my job we have plenty of <10B fine-tuned models deployed (one of them for a website with ~100M monthly visits!), and based on the research metrics, they perform better (quality/acceptance rate, inference cost) than their lower-grade cloud model counterparts (Haiku, Flash, Mini, ...). Not to mention all the other (often older) products that are still using some model built on fastText, OpenNMT, sklearn, or similar. The part I'm not sure about is whether it's easier/faster/cheaper to do the required engineering and research work (especially factoring in development costs and project success rate), or if the time is better spent on architecture research and quick experimentation with generic API. Maybe in "the bitter lesson" sense? And one final thing: even smaller models can benefit from cloud deployments, at least for now. Maybe the RTX 7070 Ti Super will be a power efficiency monster with 48 GB of VRAM, or the M8 chips from Apple with have 10x the prefill performance of M3/M4, but right now even running the 30B-A3B Qwen models can be several times faster on 1-2 generation old big cloud machines than on expensive current-gen local hardware, not to mention throughput, electricity, etc.

Mentions:#VT#API#RTX

FMP API offers a reliable endpoint to track all the 13F Institutional Investors transactions [https://site.financialmodelingprep.com/datasets/form-13f](https://site.financialmodelingprep.com/datasets/form-13f)

Mentions:#API
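
For reference, pulling that dataset is a single authenticated GET. The path below (form-thirteen) is from memory and may not match FMP's current endpoint naming, so treat it as an assumption and confirm against the dataset page linked above; the CIK, date, and key are placeholders.

```python
# Rough sketch of fetching 13F holdings from Financial Modeling Prep.
# The endpoint path is an assumption (verify on the dataset page linked in the
# comment); CIK, date, and API key are placeholders.
import requests

CIK = "0001067983"        # Berkshire Hathaway, as an example filer
DATE = "2023-09-30"
API_KEY = "YOUR_FMP_KEY"

url = f"https://financialmodelingprep.com/api/v3/form-thirteen/{CIK}"
holdings = requests.get(url, params={"date": DATE, "apikey": API_KEY}, timeout=30).json()

# The response is expected to be a list of holdings; print a few raw records.
for h in holdings[:5]:
    print(h)
```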

I read it this way: MSAI has been using Amazon AWS Services for over 2 years... last year they began to use the AWS tools (AI/ML learning platforms connected to the warehouse's cams and robots). This entire talk is related to the implementation of the testing environment. Furthermore, Luke was a maintenance engineer - not a manager or anyone that could establish a partnership. He helped them set up AWS tools so MSAI could test their infrared AI readers through the warehouse stream API... so no real partnership, just cooperation to create some test environment in AWS Services... nothing more, nothing less.

Mentions:#MSAI#ML#API

It actually has lost nothing vs ChatGPT, except ChatGPT has brought it more paid conversions via API, and ChatGPT helped it beat its anti-trust case

Mentions:#API

Right, just like DeepSeek showed that you can train a model for a few million instead of billions. Until we found out that they just trained everything on OpenAI's API and that they have over $1.6B of Nvidia hardware. The CEO of a company trying to sell something making unverified claims about their own products. It **must** be unbiased and true! CEOs lie about their business to generate hype all the time, this is nothing new 😂. If Pichai said that they trained and deployed Gemini 3.0 on one singular TPU, would you blindly throw your money at them? You still missed the fact that Google's inventory is still primarily GPUs and that CoWoS capacity is barely enough for their own workload. Where do you think Amazon, Microsoft and Meta will get the CoWoS allocation for their TPUs? The only threat to Nvidia's business model is if, for some reason, compute demand stops growing or grows slower than CoWoS capacity. That is not happening anytime soon.

Mentions:#API

I mean, once you have the API, making the client for it is typically not that much work. I think the premise makes sense; Bloomberg has too much packaged for retail and small shops, so something more granular could have some demand. But supply chain data is definitely niche, though - you'd probably need some industry connections that you already know are looking for this kind of thing.

Mentions:#API

>not selling anything, just sanity-checking an idea >The goal is for this to be a subscription based API that costs <$100/mo 🙄🖕

Mentions:#API

An API is just a means to an end. The terminal is so much more than simple data requests. Also, I don't think many retail users would want to work with the Bloomberg API, even if it were free. For your goal, the data and quality you have is what matters. - What's your sources? - Why focus on such a tiny niche? It's probably the least valuable data for someone working with options. Since you want to mimic BBG, what's the terminal equivalent of a volatility index based on downstream news pulses plus price movements? What is that even supposed to measure, and how would it be helpful?

Mentions:#API
r/stocksSee Comment

I disagree about thin apps being the most likely to fail. Thin apps actually have the highest success rate according to an MIT study. There is a really high return on capital from taking an API like OpenAI or Anthropic and building your product on top of that. The value added is in the integrations, not spending tens of billions to try and build the best LLM.

Mentions:#API
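
The "thin app" pattern that comment describes is, at its core, a few dozen lines around one of the model APIs; the value is whatever domain data and integration you wrap around the call. A minimal sketch using the Anthropic SDK; the model name and file path are placeholders, and OpenAI's SDK would be interchangeable here.

```python
# Minimal sketch of a "thin wrapper" product: domain framing plus a model API
# call. Model name and input file are placeholders; swap in whichever
# provider/model you actually use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_filing(filing_text: str) -> str:
    """The 'product' is the prompt, the data plumbing, and the integration."""
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=500,
        system="You summarize SEC filings for retail investors in plain English.",
        messages=[{"role": "user", "content": filing_text[:100_000]}],
    )
    return msg.content[0].text

print(summarize_filing(open("10k_excerpt.txt").read()))
```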

I charge WSB users to run the API for their AI boyfriends.

Mentions:#API

Microsoft own 28% (was 50% last week I think) of OpenAI and use the ChatGPT models in CoPilot.. MS have been shoving CoPilot down the throats of every business on the planet (even more than they are to Windows consumer desktops), Office 365 is littered with CoPilot, would not be surprised if they don’t make a lot more money off OpenAI’s IP than OpenAI do.. and they have the income at least to attempt to float all this hardware. They also offer ChatGPT models API access through Azure AI Foundry, and sell those to any number of businesses to operate all those AI customer service chatbots that are on every single website… again, only a fraction of the revenue Microsoft makes from those LLM services will end up returning to OpenAI itself in licensing fees.. and ChatGPT’s website and apps all predominately run on Azure, so any revenue OAI do collect will probably go back to Microsoft to pay for that compute and infrastructure. However this setup has just changed as MS and OpenAI reformatted their deal.. OAI is no longer a non-profit and they will branch out to other compute providers and attempt to fund by cashing in on their hype with an IPO.. so is there is a massive bubble, but there is at least some money going into the system to pay for all this via Microsoft.. not enough! .. but, the circular money machine machine these mega corps are operating is a bit more lucrative than it first appears.

Mentions:#MS#IP#API

I am going to sell WSB users the API for it.

Mentions:#API

Have you used the API post TD Ameritrade merger?

Mentions:#API

Dude. OpenAI literally needs only to pull off some fb ads / google ads type of shit revenue and they are gucci, but they are waiting to grow harder and then step by step.. they get rev share by promoting products, they get money by offering ads to companies, and they will have a giant user base who pay $10/m at least. Big MRR. What do they have atm, 800 million active weekly users? (with free plan ofc) API calls will make $$$ too. Amazon didn't make any profits for a long time either..

Mentions:#API

First of all, dismissing ChatGPT as a "chatbot" is woefully ignorant and cynicism masquerading as wisdom. That "chat" has many features and it can do actual work. It is a massive productivity booster and the paid-level versions are worth every penny. Second, OpenAI has a wide variety of other paid features, and goes way beyond the ChatGPT site/app. Their tech underpins GitHub Copilot, maybe the most important AI tech (that companies pay a lot of money for) in software. It's also embedded in iOS, which Apple is paying for. It's a set of deployable private models for Azure, which businesses run to power features. And you pay to run that cloud infrastructure. OpenAI also supports and independently charges for API usage and agent building. Thousands of businesses are paying money to OpenAI right now to run their own business.

Mentions:#API

I have personally implemented AI at an organization. While it didn't fully replace a role, it took the maintenance of a user knowledge base from being part of 3 people's jobs to 1 in the first 6 months. I can't prove it with math yet, but it also appears to help people onboard faster. Ultimately, the AI is a GPT chat bot via an API call and some proprietary software.

Mentions:#API
r/stocksSee Comment

What does OpenAI have to do with the AI "bubble" you speak of? OpenAI isn't the stock market or the stock that's going crazy. It's AI in general. A lot of companies are also vested in other AI ventures e.g. Gemini or Anthropic etc. Also OpenAI offers an API for all enterprises to integrate. That's where the real money will be.

Mentions:#API

Using API as well

Mentions:#API

I tried doing this but created a webapp instead and used API calls for input/output. Did you do this via web?

Mentions:#API

You can just buy HIPAA-compliant API access to the current flagship models. With most of them, there's not even a difference between HIPAA-compliant and regular API access to begin with. At least in my shop, we don't have use cases where doing anything else would be more cost efficient or add capability, and I'd suspect most other healthcare-oriented places are the same. There's a rumor / perception in healthcare that there are legal concerns about using APIs, but if you're getting API access from one of the main providers there isn't a problem. That perception has been going away over the last year as well, imo.

Mentions:#API

OpenAI hasn't been in the top three APIs on OpenRouter for weeks now: [https://openrouter.ai/rankings](https://openrouter.ai/rankings) Now, granted, this is likely minuscule compared to direct OpenAI API access... but still, it's not exactly a vote of confidence. They have overshot a valuation number they will be able to deliver on.

Mentions:#API

OpenAI is the first choice for the majority of enterprises across the world, and the majority of services are using their API. OpenAI is already making a lot of money. That said, the expectations are a bit too high and a small correction is imminent; what happened to Palantir will happen to Nvidia.

Mentions:#API

Thanks! We love the API business and are still actively developing in the area, so we’d love to have you give it a try and let us know what you think!

Mentions:#API

I've requested API access numerous times and gotten zero response.

Mentions:#API

Do you have an API?

Mentions:#API

I’ve used Fidelity, IBKR, TastyTrade, and Schwab. 1. Schwab has better fills than the other three. 2. Schwab treats me like an adult and gave me the options level I asked for. 3. Schwab has a free API (actually 2, one for personal transactions and one for market data). I don’t know anything about Public. But I’m closing all my other accounts with other brokers and moving everything to Schwab for these reasons.

Mentions:#IBKR#API

Ahh gotcha, yes, the API company. Isn't it a huge red flag that the CEO still intended to dilute shareholders at such low prices in the first place? They did the same with the SCILEX deal at like 45 cents a share or something… it looks terrible optically.

Mentions:#API

The signs of a bubble are all the OpenAI API wrappers that are getting millions in funding (just look at the YC launches of this year). They're like the [Pets.com](http://Pets.com) of the dotcom bubble: products you build on a promising platform (world wide web = AI) that are not really worth the hype. OpenAI, NVIDIA, Anthropic, and the others will stand once the bubble burst. But Design/LM/Text/Whatever Arena and the 100 browser-use and coding agent companies are all going to die.

Mentions:#API

They can make quick money by adding an ads system in the same way YouTube or Facebook do, but for them it is enough to sell API subscriptions rather than user subscriptions.

Mentions:#API

Ok so the deal to purchase API using cash and stock is terminated, but the purchase of API is alive and well, per my link above. The article you linked is written by an actual retard - well, AI, but checked by one - "no future plans related to the transaction… in other news there is a definitive agreement to acquire API Media". The below is from the article: "No further details regarding the reasons for the termination or future plans related to the transaction were provided in the filing. In other recent news, Datavault AI has announced a definitive agreement to acquire API Media, with the deal expected to close in December"

Mentions:#API

[Datavault AI ends agreement to acquire API Media Innovations By Investing.com](https://www.investing.com/news/sec-filings/datavault-ai-ends-agreement-to-acquire-api-media-innovations-93CH-4328757)

Mentions:#API

" work still has to get done to provide said data and complete said end products." - This is normal. In the end AI will just be another step in the toolchain. "Normalize" here means that inference API endpoints will be as common as other APIs and developers won't think twice about them being special (as opposed to now, where AI-native development is seen as different and requiring different approaches). Applications where I've personally been part of / experienced / seen real customers pay for: \- Enterprise data analytics - AI is better at navigating really complex enterprise schema (often not labeled correctly not unclean data) - especially helpful if an analyst is not familiar with a specific data warehouse yet. \- Financial analytics - manipulating formulas and formatting in Excel sheets. Saves people a lot of time if they are not high skilled already. Good boilerplate templates \- Image and video gen - right now it's a lot easier for single-discipline creatives to branch out (for instance, Illustrator taking on Animation work) \- AI music - \^ tied into the above. bgm / sound effects for single-person shop. \- Elephant in the room - code gen - productivity gain for developers; enabling cross-functional roles. \- AI meeting notes - pretty much must-have for back-to-back meeting takers now

Mentions:#API

I think you can do this with Alpaca. You can use the Alpaca API to pull the option contracts for a certain time frame, and then use the Alpaca data API to get all the market data associated with a particular contract.
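Roughly what that two-step flow could look like with plain HTTP; the endpoint paths, query parameters, and response keys below are from memory of Alpaca's docs and should be treated as assumptions to verify against the current documentation.

```python
import os
import requests

# Auth headers Alpaca uses for both its trading and market data APIs.
HEADERS = {
    "APCA-API-KEY-ID": os.environ["ALPACA_KEY"],
    "APCA-API-SECRET-KEY": os.environ["ALPACA_SECRET"],
}

# 1) List option contracts for an underlying in a given expiration window.
#    Path and params are assumptions -- check Alpaca's current docs.
contracts = requests.get(
    "https://paper-api.alpaca.markets/v2/options/contracts",
    headers=HEADERS,
    params={
        "underlying_symbols": "AAPL",
        "expiration_date_gte": "2025-01-01",
        "expiration_date_lte": "2025-03-31",
    },
    timeout=30,
).json()

# 2) Pull the latest quote for one of those contracts from the data API.
symbol = contracts["option_contracts"][0]["symbol"]
quotes = requests.get(
    "https://data.alpaca.markets/v1beta1/options/quotes/latest",
    headers=HEADERS,
    params={"symbols": symbol},
    timeout=30,
)
print(quotes.json())
```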

Mentions:#API

The bubble only pops if OpenAI can’t prove sticky, high‑margin enterprise revenue; watch margins and retention, not headlines. If/when they file, the tells are: revenue mix (ChatGPT subs vs API vs enterprise), cost of revenue tied to inference/commitments, gross margin trend, cohort retention, and minimum‑commit contracts.

A credible path is lowering unit costs (routing to smaller models, caching, distillation), shifting spend to longer‑term compute deals or in‑house silicon, and selling bundles with SLAs, privacy, and audit.

Real adoption I’ve seen works when tasks are narrow and workflow‑embedded: support deflection with strict guardrails, code assist for legacy stacks, sales ops Q&A over approved docs. Cost control is prompt templates, token caps, batch jobs, and usage floors (a small sketch follows below). We run Snowflake for warehousing and Azure OpenAI for compliant endpoints, and DreamFactory to generate secure REST APIs from old databases so support bots and agent tools can hit internal records without a rewrite.

If OpenAI shows expanding margins and durable enterprise cohorts, the AI trade holds; if not, you get a reset.
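As one concrete example of the "token caps" style of cost control mentioned above, a minimal sketch using the OpenAI Python client; the model name and the cap value are placeholder choices, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MAX_COMPLETION_TOKENS = 150  # hard per-request cap (placeholder value)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer only from the provided policy text."},
        {"role": "user", "content": "Summarize the refund policy in two sentences."},
    ],
    max_tokens=MAX_COMPLETION_TOKENS,
)

# Log token usage so per-team caps and usage floors can actually be enforced.
print(resp.usage.total_tokens, resp.choices[0].message.content)
```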

Mentions:#API

I’d genuinely be interested to hear your thoughts or experience/examples of value-adding processes (via API or GUI) currently being implemented. I see all these concepts and people working connections and creating automated processes, but where's the meat and potatoes? What does it replace? The best use case I’ve seen for what you’re describing is reporting and project management, but work still has to get done to provide said data and complete said end products. This is a good-faith question.

Mentions:#API

Obsolescence risk is absolutely still an issue, especially since the concept of developing IC microarchitecture specifically for AI is relatively new. Nvidia is so well positioned in the market because their CUDA API gave them a head start by allowing them to leverage existing GPUs for AI. Now, companies (including Nvidia) are developing tensor processing units specifically for AI. At the same time, AI models are being optimized to run faster or with less power. Future AI models optimized for newer processors may not run well on existing hardware. While LLMs and Stable Diffusion are impressive, it's not clear that they're even going to be the real money-making engines 5 years from now. Microsoft is spending hundreds of billions on infrastructure with the assumption that current AI models running on current hardware will be what makes trillions in a future market. They're gambling.

Mentions:#API

My primary concern with OpenAI is that it is a private company. As such, it lacks the mandatory financial reporting and transparency of a public entity. Much of the recent news revolves around its massive hardware expenditures and secondary market stock sales. While OpenAI is undoubtedly a central player in the tech industry, a significant problem remains: we do not know what its primary source of income is. Is this similar to Facebook before its IPO? Perhaps, but even then, Facebook's massive advertising revenue potential was clear. Today, is news of partnerships, like the one with Walmart for customer service, substantial enough to justify its multi-billion dollar valuation? Frankly, it's unclear how OpenAI will become consistently profitable. I don't see corporations rushing to adopt OpenAI for mission-critical automation that *must* be done by AI. While app developers may use its API, and various company departments may experiment with it, these projects still require significant oversight from technical workers. OpenAI may have ample funding to build the largest and most intelligent AI, but without finding widespread, indispensable utility, what is its true value?

Mentions:#API

AI inference API endpoints are normalized. There's a path to profitability in many enterprise and productivity applications. I agree that for personal use there isn't really a solid scenario right now.

Mentions:#API

With AWS, one misconfigured API request and they can use up all $38B in a week.

Mentions:#API

They're losing 12 billion dollars a quarter. You are only counting the compute, not the several thousand employees it takes to run it, rent, office space. GPT-5 was estimated to be around $1.5-2.7B. You're falling into an accounting trap: how are they losing 12 billion dollars if it only takes 500 million and their API is profitable, given it's the biggest revenue stream? Yes, they lose money off the pro version, but where is this other money going?

Mentions:#API

> I think the difference is it costs a lot more to maintain the system than they bring in, I forget the numbers but it’s something like $0.36 per search on ChatGPT, and a few dollars a search on Sora, so the average customer is costing OpenAI a lot of money to provide the service.

OpenAI is definitely bleeding money, but the cost per prompt is not that high. If you want to estimate their cost, look at their API pricing. GPT-5 is $10 per 1 million tokens. 100 tokens = ~75 words, which is the approximate length of the average reply. If we assume OpenAI is selling API access at 0% gross margin (to boost adoption), each message costs them about $0.001.
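For the record, the arithmetic behind that $0.001 figure, using the commenter's own assumptions (a $10-per-million-token price and a ~100-token reply):

```python
# Back-of-the-envelope cost per reply at the commenter's assumed numbers.
price_per_million_tokens = 10.00   # USD, assumed GPT-5 API price from the comment
tokens_per_reply = 100             # ~75 words, the comment's estimate

cost_per_reply = price_per_million_tokens * tokens_per_reply / 1_000_000
print(f"${cost_per_reply:.4f} per reply")  # -> $0.0010
```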

Mentions:#API

Welcome back Enron!!

**Tl;dr: Today's tech can already do everything we'll be able to make money off with the right configuration; the business community just hasn't figured out how to do that yet. Investor expectations are way off track from the ground-level business reality, and once everyone figures out how to use it, costs go down faster than our need for GPUs goes up.**

I've been building small-scale AI workflows for myself and a handful of small customers. Not to replace human jobs, but to really just ask the question: "What the hell can we actually do with this stuff that's going to MAKE MONEY?" I'm not convinced it's anywhere near what Big Tech says it is.

Tbh, AI could end up being exactly like Cisco and Oracle. They were once proprietary, closed-source behemoths that pretty much everyone dealt with because they were 1) quick to market and 2) able to lock large organizations (banks, governments, places that move slowly to begin with) into contracts. Then open-source solutions and public cloud killed their dominance. Larry Ellison infamously shrugged off cloud computing in its early days, and now he has to offer his services at a deep discount to even compete because they were late to the party. Should we trust him with $300 billion of your 401k?

Today's AI tech is also remarkably easier to migrate providers on than it would have been to migrate from Oracle to an open-source engine like Postgres (sketch below). And the cost savings can in some cases be enormous. We're talking literal API calls vs. advanced joins, extensions, and stored procedures. Ever heard the joke that every AI startup is just a ChatGPT wrapper? IT'S TRUE!

I predict most companies will likely switch to open-source alternatives like Deepseek, [Z.ai](http://z.ai/), or Qwen; Meta itself has models that are good enough for probably 80% of business use cases. They can also be hosted in-house on the same tech that is in your laptop, and can be trained on proprietary datasets, something more important than how much OpenAI can charge per token on its API. Sorry, I sound like a lunatic but I'm right.
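On the "easier to migrate providers" point, here's a minimal sketch of what that switch often amounts to when both the hosted and the in-house option expose an OpenAI-compatible endpoint: little more than a base URL and model-name change. The URLs and model names below are placeholders, not endorsements.

```python
from openai import OpenAI

# Same client code, different provider: with OpenAI-compatible endpoints the
# "migration" is mostly a base_url + model swap. Values are placeholders.
providers = {
    "hosted": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o-mini"},
    "self_hosted": {"base_url": "http://localhost:8000/v1", "model": "qwen2.5-7b-instruct"},
}

cfg = providers["self_hosted"]
client = OpenAI(base_url=cfg["base_url"], api_key="not-needed-for-local")

resp = client.chat.completions.create(
    model=cfg["model"],
    messages=[{"role": "user", "content": "One-line summary of our Q3 churn report."}],
)
print(resp.choices[0].message.content)
```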

Mentions:#API

> chatgpt loses money on every generation they literally have said it

At API prices? You're making shit up.

Mentions:#API

Does your website offer an API to query a stock's risk, available shares, and breaking news? I'm willing to pay for this service, thank you.

Mentions:#API

There is no Amazon partnership... MSAI is using AWS services, such as server hosting and an AI/ML testing environment (with control of certain warehouse cams and robots through a test API), as a client. "AWS Partner" is anyone who uses AWS services as a client. It is a pure client/provider relationship.

Mentions:#MSAI#ML#API

Yeah, don’t use skylit, the price is insane. You can get a subscription to the advanced plan on polygon.io for $200 a month. Then just use ChatGPT or Claude to code something with the API and you’ve got your own “heatseaker”. It probably won’t look as good or be without flaws, but you’ll save $500 a month lol
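For what it's worth, a minimal sketch of what "code something with the API" might look like against polygon.io's daily-aggregates endpoint; the path and response fields are from memory of their docs, so double-check them before relying on this.

```python
import os
import requests

# Pull daily bars from polygon.io's REST API. Endpoint path and field names
# are assumptions from memory of their docs -- verify against current docs.
API_KEY = os.environ["POLYGON_API_KEY"]
ticker = "AAPL"

url = (
    f"https://api.polygon.io/v2/aggs/ticker/{ticker}/range/1/day/"
    "2024-01-01/2024-06-30"
)
resp = requests.get(url, params={"adjusted": "true", "apiKey": API_KEY}, timeout=30)
resp.raise_for_status()

# Each bar: t = timestamp (ms), o/h/l/c = open/high/low/close, v = volume.
for bar in resp.json().get("results", []):
    print(bar["t"], bar["o"], bar["h"], bar["l"], bar["c"], bar["v"])
```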

Mentions:#API

Fuck spez for the API changes. 

Mentions:#API