API (Agora Inc)

Mentions (24Hr): 4 (0.00% Today)

Reddit Posts

r/wallstreetbets: Chat with Earnings Call?
r/investing: Download dataset of stock prices X tickers for yesterday?
r/investing: Sea Change: Value Investing
r/Wallstreetbetsnew: Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field
r/pennystocks: Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field
r/WallStreetbetsELITE: AIGC market brings important development opportunities, artificial intelligence technology has been developing
r/pennystocks: Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
r/wallstreetbets: Sea Change: Value Investing
r/investing: API KEY and robinhood dividends
r/pennystocks: OTC : KWIK Shareholder Letter January 3, 2024
r/options: SPX 0DTE Strategy Built
r/Wallstreetbetsnew: The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
r/pennystocks: The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
r/options: Best API platform for End of day option pricing
r/WallStreetbetsELITE: Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
r/wallstreetbets: Why Microsoft's gross margins are expanding (up 1.89% QoQ).
r/StockMarket: Why Microsoft's gross margins are expanding (up 1.89% QoQ).
r/stocks: Why Microsoft's margins are expanding.
r/options: Interactive brokers or Schwab
r/wallstreetbets: Reddit IPO
r/wallstreetbets: Google's AI project "Gemini" shipped, and so far it looks better than GPT4
r/stocks: US Broker Recommendation with a market that allows both longs/shorts
r/investing: API provider for premarket data
r/Wallstreetbetsnew: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/investing: Best API for grabbing historical financial statement data to compare across companies.
r/StockMarket: Seeking Free Advance/Decline, NH/NL Data - Python API?
r/pennystocks: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/wallstreetbetsOGs: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/WallStreetbetsELITE: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/Shortsqueeze: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/smallstreetbets: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/RobinHoodPennyStocks: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/stocks: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/investing: Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
r/pennystocks: Past and future list of investor (analyst) dates?
r/Wallstreetbetsnew: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/RobinHoodPennyStocks: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/pennystocks: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/WallStreetbetsELITE: Aduro Clean Technologies Inc. Research Update
r/options: Aduro Clean Technologies Inc. Research Update
r/investing: Option Chain REST APIs w/ Greeks and Beta Weighting
r/wallstreetbets: As an asset manager, why wouldn’t you use Verity?
r/pennystocks: Nasdaq $ZG (Zillow) EPS not accurate?
r/StockMarket: $VERS Upcoming Webinar: Introduction and Demonstration of Genius
r/StockMarket: Comps and Precedents: API Help
r/wallstreetbets: UsDebtClock.org is a fake website
r/Shortsqueeze: Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
r/stocks: Short Squeeze is Reopened. Play Nice.
r/options: Your favourite place for stock data
r/investing: Created options trading bot with Interactive Brokers API
r/weedstocks: What is driving oil prices down this week?
r/stocks: Leafly Announces New API for Order Integration($LFLY)
r/Wallstreetbetsnew: Data mapping tickers to sector / industry?
r/wallstreetbets: Support In View For USOIL !
r/options: Is Unity going to Zero? - Why they just killed their business model.
r/investing: Need Help Deciding About Limex API Trading Contest
r/options: Looking for affordable API to fetch specific historical stock market data
r/options: Paper trading with API?
r/stocks: Where do sites like Unusual Whales scrape their data from?
r/StockMarket: Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
r/SPACs: Reference for S&P500 Companies by Year?
r/stocks: [DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
r/SPACs: Know The Company - Okta
r/wallstreetbetsOGs: [DIY Filing Alerts] Part 2: Emailing Today's Filings
r/SPACs: This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
r/options: [DIY Filing Alerts] Part 1: Working with the SEC API
r/wallstreetbets: API or Dataset that shows intraday price movement for Options Bid/Ask
r/stocks: [Newbie] Bought Microsoft shares at 250 mainly as see value in ChatGPT. I think I'll hold for at least +6 months but I'd like your thoughts.
r/stocks: Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
r/investing: Anyone else bullish about $GOOGL Web Integrity API?
r/options: I found this trading tool thats just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isnt this a violation of reddits new API rules?
r/wallstreetbets: where to fetch crypto option data
r/stocks: I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
r/StockMarket: Fundamental Stock Data for Your Projects and Analysis
r/stocks: Fundamental Stock Data for Your Projects and Analysis
r/wallstreetbets: Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly
r/options: Pictures say it all. Robinhood is shady AF.
r/StockMarket: URGENT - Audit Your Transactions: Broker Alters Orders without Permission
r/StockMarket: My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
r/wallstreetbets: I’m Building a Free API for Stock Fundamentals
r/StockMarket: The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
r/StockMarket: I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
r/options: I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
r/pennystocks: To recalculate historical options data from CBOE, to find IVs at moment of trades, what int rate?
r/wallstreetbets: WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
r/options: $SSTK Shutterstock - OpenAI ChatGBT partnership - Images, Photos, & Videos
r/options: Is there really no better way to track open + closed positions without multiple apps?
r/investing: List of Platforms (Not Brokers) for advanced option trading
r/investing: anyone using Alpaca for long term investing?
r/WallStreetbetsELITE: Financial API grouped by industry
r/WallStreetbetsELITE: Utopia P2P is a great application that needs NO KYC to safeguard your data !
r/options: Utopia P2P supports API access and CHAT GPT
r/options: IV across exchanges
r/wallstreetbets: Historical Greeks?
r/wallstreetbets: Stepping Ahead with the Future of Digital Assets
r/stocks: An Unexpected Ally in the Crypto Battlefield
r/WallStreetbetsELITE: Where can I find financial reports archives?
r/stocks: Utopia P2P has now an airdrop for all Utopians
r/wallstreetbets: Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
r/wallstreetbets: Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
r/wallstreetbets: Reddit stands by controversial API changes as situation worsens

Mentions

With all of the new light oil (40 API gravity) coming out of the Permian basin, refiners actually need heavier oil to blend with it. Canadian and Venezuelan crudes serve that purpose. Also, heavy crude is normally catalytically cracked into lighter products, yielding more barrels coming out of the refinery than go in, which improves profits since heavy oil can be purchased at a discount.

Mentions:#API

So I build pharma plants for a living. Just finished detailed costing for about 4MT/year of API for a commercial GLP. Cost to build that capacity is about 1 to 1.3 billion, with the differential really being how much CAPEX gets shifted to OPEX costs. 4MT/year at the highest dosage of ~15mg is just shy of 270M doses of anti-fatty magic medicine. 1 data center CAPEX spend is then roughly 200 MT/yr production (and I mean realistically... we could get some more scale efficiency) or 13.5B doses of anti-fatty juice. We could eliminate the fatties with 1 data center budget, and walk away with 60%+ margin.

Check out [MesoSim](https://docs.mesosim.io), it has an [AI Assistant](https://chatgpt.com/g/g-690516c500b8819191e154543a9a85a7-mesosim-ai-assistant) to help you with strategy development. Once you have a working strategy you can use [MesoLive](https://docs.mesolive.io) to trade the strategies with IBKR, TastyTrade or using the built-in paper trading account. MesoLive has a web based user interface, but trade automation is also possible through MesoLive-API (needs FundPro subscription). To see what the AI agent is capable of check out this [blog post](https://blog.deltaray.io/rhino-options-strategy). The base strategy was created by the agent from presentations of the Rhino Strategy. [https://blog.deltaray.io/rhino-options-strategy](https://blog.deltaray.io/rhino-options-strategy) Full disclosure: I'm the owner of the service.

Mentions:#IBKR#API

API yesterday estimated a crude oil draw of 9.3 mb last week but today the EIA report claims only 1.3 mb draw. Yeah, something reeks here.

Mentions:#API

Gemini API pretty much unusable for me ATM despite being a paying customer 🤡

Mentions:#API

Here's how the bubble pops. A prominent article in the Wall Street Journal tells the story of a national bank. They spent tens of millions on ChatGPT, Copilot, and API tokens trying to stay competitive. They hire a small team of specialized consultants that set up an on-premises LLM. They self-host, and train the model on their data. The end result, they get better, more consistent value. They cancel all their AI cloud and infrastructure spend. The roadmap is there. The future of AI is smaller, and Data Center to the Moon race culture is the next financial crisis in the making.

Mentions:#API

Ah you're talking about speech to text APIs and then integrated into web applications. Those APIs which imo haven't been accurate or reliable until like 5 years ago, or when Watson came out with theirs. But what I mean is to be able to semantically tell it something and it understanding without me needing to spell it out. That is what makes it more efficient. What would be generative is if I wanted to go to 3 cities in Europe, and I asked what's the most efficient way to hit each place in the time frame I give it. I would expect it to give me suggestions, I would expect it to know if a train was faster, or if renting a car was more cost effective. I would expect it to look into all of these options because it would be as close to having a human travel agent as possible, not just having a speech to text input fed directly into an airlines booking API.

Mentions:#API

But prices could easily go down a lot more than 50% for tokens, prices are 1/1000th of when I first started using OpenAI API during 3.5 turbo era. I think what we'd need to do in the future is say 'Late 2025 AI was good enough' and instead of passing frontier models to consumers, largely use last years technology.

Mentions:#API

Bruh, your "model" is basically an API wrapper for FedWatch. Nice marketing attempt though.

Mentions:#API

How are you getting likes on this? Not one of my sentences noted META. I noted APIs, sure. However, every serious implementation of AI, API or not, requires some oversight. FOR NOW. It's the concept of AI as a coworker, or human in the loop. Get used to it. FOR NOW! Moreover, just because one automation fails doesn't invalidate all. Stated another way for your benefit: if you're aware of one bad AI tool, that doesn't invalidate all others. MAYBE your perspective is limited. To be clear, I've seen many people lose their role in the past few months due to AI. Wait, I didn't just SEE! it. I made it happen! AI is immensely capable in many ways, and its utility is expanding weekly. Finally, it's myopic folks like you that make Reddit unusable. You grab one concept or word, and run with it. That's all you can handle. You disregard an entire paragraph someone wrote, fixate on a line, and ignore the context. It's called logic chopping, and you're the first boomer I'd fire from the warehouse job you have.

Mentions:#API

To be fair it managed to intercept Robinhood API calls and grab 10,000 options quotes on its own

Mentions:#API

> Anyone can do it. Hahahahahahahahaha Fuck, this is a good one. Just wait until the API you're using changes behavior without you knowing and your fucking dumb thing takes a shit and you will literally have no idea why.

Mentions:#API

usually you gotta DIY with an API

Mentions:#API

>OpenAI on Thursday announced its most advanced artificial intelligence model, GPT-5.2, and said it’s the best offering yet for everyday professional use. >The model is better than predecessors at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context, OpenAI said. It will be available starting Thursday within OpenAI’s ChatGPT chatbot and its application programming interface (API).

Mentions:#API

This looks like a classic “story is real, market still doesn’t care” microcap setup, but the key question is contract quality, not just logos and ARR. Main thing I’d dig into is how much of that 15m ARR is true recurring SaaS vs usage-based, and what the minimums and termination clauses are on those 5m+ contracts. A single 8m deal that can be walked away from in 12 months is very different from a 5-year take-or-pay. Also: gross margin trajectory and implementation cost. If each new logo needs a semi-custom integration and on-site team, they might be buying growth at thin margins. I’d want cohort-level data: expansion vs churn per customer, and how many pilots convert to full contracts. On comps, I’d anchor more on high-friction enterprise logistics names like Descartes or even MercuryGate than general SaaS. Tools like Snowflake, Palantir, and DreamFactory-style API layers are great analogs for how sticky data-integrated platforms can become once embedded in customer workflows. So yeah, the mismatch is interesting, but it only works if those big contracts are durable and gross margins scale up, not down.

Mentions:#ARR#API

QE was awesome post-covid. Now we get it at all-time highs! This is not crazy at all! /s But, I think we have learned that the Fed was super nervous about the clusters in spikes in SOFR rates, which was a genuine liquidity issue, possibly shutdown related, we'll never know. I took that as a signal that the Fed would surely cut, not that they would start buying treasuries. I'm not knocking gold, silver, and copper at all, I have exposure to all, but I think the vast majority of loan-created money will go to the place it has been reliably going the last three years: AI infra. Even Jamie D is blunting JPM earnings to invest in that vertical. For the bubble bears: If you've been bearish all along, you're bearish for the same reasons now that you were three years ago: the train might stop. But, your bias shouldn't be prove the train won't stop and I'll get on, but prove to me that it will stop and I'll start to disembark. The Fed has the back of the stock market, even though the stock market absolutely does not need it. Corporate bonds are fine, outside OAI-adjacent stuff. The dollar hasn't tanked. Inflation is not above 3%... yet. Monetization of AI is best measured by API calls. Companies pay for those. That is exploding for every player, except maybe OAI losing some share to GOOG. Inference is the monetization wave and is now most compute demand. ASICs are probably the future, but NVDA has some ASICs built into GB300s already for long context windows. Crucially, there is a memory shortage now. If models stay the same size, all those new API calls need more memory to run. Except models don't stay the same size. Sparse mixture-of-experts models still improve when they have a larger RAM footprint, and quantization of large models reliably increases hallucinations and degrades performance. That memory will come from MU, SK Hynix, and Samsung. EWY has an uber low PE and is 40% Hynix and Samsung. This is the safest Sharpe ratio bet in the world, but it won't pay out as much as MU. If you held memory stocks in past shortages, you know what a wild ride that can be. This is a secular shortage. Fabric (ethernet/NVLink/optics) and GPUs might still be the bottleneck for training, but probably not; training was the bottleneck pre-2025. The current and destination bottleneck is high-bandwidth RAM, and the fundamentals of these companies scream bottleneck. Anyway, crystal ball comment, not advice, yada yada. Feel free to come back and mock me if I am wrong.

Probably not real people, they surely have API access at this point.

Mentions:#API

For option Greeks specifically, I built FastGreeks API (fastgreeks.com) - a REST endpoint that returns Delta, Gamma, Theta, Vega, Rho. Example call:

POST /greeks
{ "S": 100, "K": 105, "T": 0.5, "r": 0.05, "sigma": 0.2 }

Returns all Greeks in ~10ms. Free tier gives you 1k calculations/month to test. Also supports batch processing if you need to price multiple options at once (up to 10k per request).

Mentions:#API#POST
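The payload above is exactly the input set of the Black-Scholes model, so a minimal local sketch of the closed-form European call Greeks such an endpoint presumably computes (a plain-Python illustration, not the FastGreeks implementation) might look like:

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x: float) -> float:
    """Standard normal PDF."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def call_greeks(S: float, K: float, T: float, r: float, sigma: float) -> dict:
    """Closed-form Black-Scholes Greeks for a European call.

    S: spot, K: strike, T: years to expiry, r: risk-free rate,
    sigma: volatility -- the same fields as the example payload above.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return {
        "delta": norm_cdf(d1),
        "gamma": norm_pdf(d1) / (S * sigma * sqrt(T)),
        "vega": S * norm_pdf(d1) * sqrt(T),
        "theta": (-S * norm_pdf(d1) * sigma / (2 * sqrt(T))
                  - r * K * exp(-r * T) * norm_cdf(d2)),
        "rho": K * T * exp(-r * T) * norm_cdf(d2),
    }

# Same numbers as the example payload in the comment above.
greeks = call_greeks(S=100, K=105, T=0.5, r=0.05, sigma=0.2)
print({k: round(v, 4) for k, v in greeks.items()})
```

For this slightly out-of-the-money call, delta comes out a bit under 0.5, with positive gamma, vega, and rho and negative theta, as expected.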

~WHAT IF~ What if all the brokers sell live API access to the MM so they can see exactly when I’m about to trade? ~WHAT IF~ 🪄🧌

Mentions:#API

That Palantir’s products are not really meant for people that can warehouse their own data, analyze it and run LLM layers over it for XYZ output. They pitched my firm on twitter analytics many years ago, it was API plugged into a slick UI. Government contracts here do not signify any “high-level innovation” - Anduril has yet to mass produce. Source: this is literally what I fucking do

Mentions:#XYZ#API

I actually use automation and hit their API :)

Mentions:#API

OK, I found a solution, so I'll post it if anyone else ever runs into this problem. The website [kibot.com](http://kibot.com) offers a free (if you use the API) solution to finding raw unadjusted prices on any given day. I wouldn't doubt there are other methods, but this was the only one I found, and it was quite simple to use.

Mentions:#API

Yeah you are right! English isn’t my native language so I couldn’t come up with a better word for describing applications/models -> Cloud/API -> TPUs. Do you have a better word?

Mentions:#API

Everyone I know is doing AI assisted coding now. All of our developers at our fintech startup use it and they are ludicrously smart. I think coding assistance is actually the most practical and transparent AI value-add for businesses. AI art looks like AI art, AI writing is full of em-dashes — and ellipses … But AI code just looks like code. I’m more of a sysadmin so not much of a developer, but I find AI assistance really helpful for writing some code to parse through a complex data structure returned by an API call, for example. However, I only ask it to write functions and snippets and then I massage them and glue them in. I think that’s pretty standard practice.

Mentions:#API

It did work, though it seems automod removed the bot response. The RemindMe bot can be a bit slow to respond these days due to Reddit's API changes.

Mentions:#API

Maybe. However if you don't want to use LLMs via an API, NVidia's CUDA is still pretty much the only game in town. It definitely *is* a sign of strength that Google is able to compete with both ChatGPT and NVidia on their home turfs, *while* keeping the original money printing machine alive. I don't really know yet how big of a threat the TPUs are to e.g. NVidia. The recent deals could've just been hedges from big cloud compute users. How much can they scale up the production? What are the profit margins? NVidia also has kind of been consistently excellent at what they do. Google kind of sucks in most of what they offer. On the other hand, we're still stuck with them, so that's probably too harsh of a statement.

Mentions:#API

I nuked my twelve year old account during the spez API bullshit because I figured that was gonna do the whole idea of profitable stocks, in. Whoopsie. Classic dumbass move from me!!

Mentions:#API

1) No.  The benefit would be minimal to nonexistent and moving investments between brokerage firms carries some risk.  Over the course of the transaction, you could incur real and/or opportunity losses on your assets because I doubt you will be able to make the transfer between brokerages in-kind.   Schwab is a very good brokerage, and it has an excellent trading platform and API that you may want to make use of someday.   2) If you set up dividend reinvestment on your positions, any dividends from your positions will automatically get reinvested, in fractional or whole share amounts.   If you really insist on having every nickel invested at all times however, you can buy shares in something that has a lower share price.   I don't think that make sense though.   A better way to roll would be to allow your cash position to grow to the point you can purchase one or more shares in something you actually want to own, then buy those shares after the equity pulls back in share price.    Your questions are good for the purpose of confirming or refuting your decision.   Challenging everything is a good way to become more confident and surer about your decisions.   Don't feel shy about continuing to do that in perpetuity.

Mentions:#API
From r/stocks:

> Your end users are still using excel to analyze the data which is why excel isnt being replaced but being used for its actual purpose. The databases being queried by analysts is still being outputted into excel sheets and being analyzed in excel sheets. In your organization you can just reach out to anyone in finance / accounting / FP&A / etc. And their most used application is most likely still excel So originally this is the case, but we've since built all of the reports/analysis people do in Excel into the system. This ensures common data, standardized methodology, and standardized reporting. Excel used to be relied upon for exports/imports, but we've moved away from that into an API and microservice based data loading system.

Mentions:#API

Yeah, the auto mod removed my post, so I guess I got frustrated. Definitely, that's great to hear about. I'm just entering into this world after keeping the investing and technical worlds separate for awhile, and I'm amazed by the richness of the resources available. Applying for the Schwab API now, thanks for the tip.

Mentions:#API

There's really nothing fledgling about what you are describing. That stuff has been around for decades. Re: hedge fund - you mean like r/quant ? No one is ever going to share their edge so you aren't going to get details from people. Re: APIs - yeah - those tend to be brokerage or tool provider specific. Those topics get discussed occasionally on r/investing. But there are specific subreddits for specific tools and brokers. For example - if you want to talk about Schwab's API - you can ask here or in r/Schwab . If you have questions about a tool like QuantConnect - there's r/QuantConnect . Or TradingView - there's r/TradingView

Mentions:#API

It likely exists - you just need to elaborate on what you mean. Are you asking about algo development? Quant analysis? Back-testing? What kind of tools? Brokerage API usage? Those topics come up in r/investing and there are smaller subreddits dedicated to specific niche areas and tools.

Mentions:#API

The Graph API lets you do incredible customization things though. And GSuite is simpler to the point that some pretty basic tools (like style templates that aren't tied to a specific document) seem to be missing.

Mentions:#API

I am predicting the AGI development will end up in a separate branch of the business, funded and largely controlled by the US government. There isn't enough private liquidity to fund OpenAI to AGI, but it is too strategically important to the US government to not get there first or at least around the same time China does. AGI will almost certainly become a government and military adjacent technology and will be licensed to domestic companies to boost productivity. I would also predict this is how the US government ends up replacing the tax income from replaced jobs: by licensing AGI. Anyone buying into the IPO at a trillion dollars is going to be very disappointed when OpenAI inevitably fails to find enough private funding to achieve AGI and whoever does fund it (the only 'thing' capable of funding AGI in the western world is the US government) is not going to trade it on the stock market, or allow it to remain part of whatever people are buying into at the IPO. Buying into the trillion dollar IPO is buying a share of ChatGPT, API access income, Codex, Sora. Almost certainly not AGI. Without AGI, OpenAI (with annual revenue of 12 billion) is not worth a trillion dollars, unless you think it has a projected growth of 83x... The US government will pick its 'chosen one' to develop AGI for them. It will almost certainly be OpenAI. Google is far too large, slow and heavy and is too influential to be allowed to have the keys to AGI. OpenAI is not any of those things and makes a much better 'partner' for a government funded effort. As much as Gemini 3 is a great product, OpenAI still has the most powerful underlying model, by a fair distance. They have just gimped it with poor tooling and a rubbish UX. Google have produced a great UX and tools which are actually useful. Their model is not as good, but they actually let it do stuff which people want, so people perceive it as more powerful.
Google has a huge consumer ecosystem to integrate Gemini into, so they have a vested interest in building an efficient model with excellent tooling and UX. OpenAI doesn't; it is a research company and ChatGPT is a public demo which doesn't showcase that much of the model's actual potential. All just speculation, but looking at Sam Altman's decisions, the things he is saying, the direction of the company, it does all line up for him anticipating an offer from the government. The whole of mag 7 together couldn't fund the race to AGI and beat China there. They do not have 2 trillion dollars (maybe closer to 5 trillion) lying around in spare capex. And neither does private equity. The only player big enough to fund AGI in the west is the US government and AGI is too strategically important for them to fail to get it, so they will make sure it happens.

Mentions:#AGI#API#UX

I don't want to read your API generated synopsis, what are you asking here for that this isn't providing you?

Mentions:#API

lol. you’d struggle extracting $5 out of 90% of the consumer market. I know plenty of businesses who budget $50-$100 of API credits/day/head for their top programmers.

Mentions:#API

they have the most generous paid plan out of all of them. If I use Opus on the $20 Claude plan, I will run out in a few hours. Same with Opus via Cursor. On ChatGPT, they rate limit based on user input. While coding, I've had the model thinking for 15+ minutes and burn tokens the whole time, all while only counting as one prompt. OAI's consumer subscriptions are hilariously cheap compared to API. On enterprise, they've updated pricing to a usage-based model recently, but only for new customers. All old customers remain on the super cheap old plans and new customers pay 5 times the price for something that better reflects costs (including model training)

Mentions:#API

I'm pretty sure there's a coordinated media blackout on this Deepseek 2 model, and it's solely to save OpenAI, and indirectly MSFT/NVDA/most of the US AI sphere's asses. Its Sparse Attention training procedure (something Google is also pivoting towards) is just a gamechanger in efficiency; it's the closest we have to functional SNNs right now. And the efficiency shows in API token prices: OpenAI o1/o3 are priced at $15 per 1M tokens, Gemini 3 at $1.25 per 1M, Deepseek 3.2 at *$0.14* per 1M. Trained sparse transformers are also 2-3x faster on queries and use 40-60% less energy and RAM. If you're just asking a chat bot what the sore on your dong is this won't matter to you, but for API customers that are using AI for real shit, this is huge. OpenAI's out here working on monster trucks that no one needs, Google's making practical sedans, Deepseek's making electric bikes. Efficient inference is going to be the end-all winner in AI, and OpenAI is sucking at that.
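Taking the per-million-token prices quoted in that comment at face value (a snapshot from the commenter, not authoritative list pricing, and the 10B-token monthly volume is a hypothetical workload), the gap compounds quickly at API scale:

```python
# Prices per 1M tokens as quoted in the comment above -- a snapshot,
# not authoritative vendor list pricing.
price_per_million = {
    "OpenAI o1/o3": 15.00,
    "Gemini 3": 1.25,
    "Deepseek 3.2": 0.14,
}

def monthly_cost(tokens_per_month: float, price_per_1m: float) -> float:
    """Dollar cost for a given monthly token volume at a per-1M-token price."""
    return tokens_per_month / 1_000_000 * price_per_1m

# Hypothetical workload: 10B tokens/month.
tokens = 10_000_000_000
for model, price in price_per_million.items():
    print(f"{model}: ${monthly_cost(tokens, price):,.0f}/month")
```

At that volume the quoted prices work out to roughly $150,000 vs $12,500 vs $1,400 per month, which is why the difference matters far more to heavy API customers than to casual chatbot users.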

Why are you using API credits instead of getting the Max 10 or 20 plan? Genuinely curious.

Mentions:#API

Anthropic is in a substantially better spot. One important metric these companies likely track internally is something like: average revenue per token. ChatGPT's extremely popular free tier screws them over; far more of Anthropic's tokens are monetized, because they're delivered through sources like the API and Claude Code (they just announced yesterday Claude Code, this thing many people in here have never heard of, is at a $1B run rate *alone*). Anthropic has also substantially under-invested in first party data centers, instead relying on cloud providers and colos; these data centers are quickly becoming a liability.

Mentions:#API

Anthropic's rate-limiting practices are scummy (even for paying customers) but I guess that does put them in a better position balance sheet wise. It helps that their models & tools still seem to have an edge over their competition when it comes to coding, which I guess is the only relevant AI market segment as of now. I mean, developers are willing to pay $100 or $200 / month subscription (or high API costs) as the results are better. Google don't really care about the short term anyways. Yeah, looks like OpenAI are going to bleed out.

Mentions:#API

It's not that simple. We had an API using Gemini 2.5-Flash and couldn't simply switch to GPT-5 when it came out because the prompt was tuned for a certain outcome, and switching the API led to a different one due to how GPT-5 differs. But sometimes a new model can solve issues as well. Sonnet 3.5 was a big one that drastically solved many of our issues with agent tool use which used to have a lot more in-house scaffolding.

Mentions:#API

I mean the customers who contribute the most to Anthropic's bottom line use the API, and there are zero limits there unless there's a global outage.

Mentions:#API

"Companies get locked in as the \[sic\] integrate" Is that true? The API interfaces are more or less exchangeable in a few lines of code. What they offer is a commodity.

Mentions:#API

A lot of people who use Claude professionally use the subscription. I spend about $200-400 a month on the API, pretty much every dev I know does the same.

Mentions:#API

They own the B2B market, which uses their models through the API

Mentions:#API

His comment is fairly accurate though; eventually you burn enough people that are wary of investing in your ecosystem that product failures become a self-fulfilling prophecy. Outside of Gmail and YouTube, there's no product Google can make that I'd feel safe putting time, effort, and money into adopting. A few years ago when I was buying a mesh WiFi system I went with Orbi over Nest because I didn't trust Google not to axe the app/functionality one day. Which, in hindsight, was the correct take. I find it hard to see Google launching products "successful enough" in their eyes when they have a money printer (search) to compete against for upkeep costs. Honestly kind of similar to how Microsoft bought Xbox just to seemingly be killing it off because it's not profitable enough in their portfolio. Those companies are too big to focus their efforts and decisions on improving a product outside of their core business. Everything needs to link back to the core product, even if it hurts the sales/demand for whatever it is. So yeah, people are wary of Google's AI products because Google could drop them any time in the future, after people build APIs and bridges and add-ons in the ecosystem, if the R&D costs start becoming too high. Whereas you know OpenAI will have GPT running until they die.

Mentions:#API

You do realize entire data centers are built and being built at massive scale without GPUs, right? Today. And that some companies have already transitioned aspects of GPU workloads away from GPUs due to power shortages and supply shortages. Most notably, the play most are missing: Apple perfected the power-efficient SoC that has GPU/CPU/TPU in one chip with shared memory. And Apple is building data centers to power a private AI cloud based on that chip design. Apple solved for a single software API into GPU/CPU/TPU without having to write software for each; your workloads are automatically routed to the best function of the chip for the task, without copying data back and forth between GPU and system memory, etc. So while companies like OpenAI are spending multiples of their revenue because of the cost of the infra... Apple may very well sneak in behind the pack as the only profitable AI service because of their hardware advantage. Which is not Nvidia. And the most telling part? Nvidia's largest customers are actively designing competitive chips and building entire data centers without them as a hedge.

Mentions:#API

Matthew here- I help lead Public’s trading API. Yes there are a lot of devs using it, especially folks needing more control over automation w/o dealing with brittle clunky outdated APIs. Bigger value for devs tends to be predictable order handling + clean cancel/replace behavior rather than the fee schedule itself. If you’re already on eTrade’s API, you’ll probably notice the biggest differences in workflow + reliability, not just cost. \-DM happy to go further

Mentions:#API#DM

New deepseek model out Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale — Reasoning-first models built for agents! 🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API. 🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.

Mentions:#API

> I’ve been experimenting with small scripts to track price movements What API and datasets are you looking at? Asking how coding can help you with X is a strange question. Programming is just the knowledge to make computers do computational tasks for you. If you have access to an API on a trading platform you can issue orders or request data.  I'm not familiar with any major trading platforms giving access to these things.

Mentions:#API

No, this is some myth. They're purpose built for training with a smaller version used for inference. They come as a "cube" of (tens of?) thousands of TPUs that can be programmed as a single device or split into smaller fragments. They can run any model using JAX (and XLA the compiler). The whole networking and communication between chips is optimized for training. You can find some low level info from this API, which is like an assembly layer under jax: https://docs.jax.dev/en/latest/pallas/tpu/index.html

Mentions:#API

I don't know what API they're using. I'd use the best one on the market that can offer me that stuff. Just wanna take them down and see them fall tbh.

Mentions:#API

You're comparing two different sorts of inference. The article from the FT is talking about the cost of total inference (serving all users), while your articles talk about the profit margin of API-based inference (predominantly 3rd-party users, i.e. users of products that integrate OpenAI's API). Given ~1B active users, mostly free, both can be true.

Mentions:#FT#API

Scraping Reddit isn't that cheap anymore. That's why they did their big API change a couple of years ago that everyone screamed about, back when they realized OpenAI was scraping everything for free

Mentions:#API

Solid deal, and those features are genuinely useful; just add a few guardrails and complements so it covers more of your workflow. For backtests, lock the date range and freeze signals at session open to avoid lookahead, include per‑side fees and at least 1–2 ticks slippage, and compare SPX vs SPY rolls on the volume switch, not calendar. For intraday gamma, sanity‑check levels around OPEX and earnings, and watch the SPX/SPY mismatch-SPX often leads the turn. Since automation is 1 DTE and mostly index/ETF, I run alerts there and route longer‑dated or single‑name trades via Tastytrade or IBKR with server‑side brackets and a small limit offset. Copying top backtests is fine, but rescale to your risk and track live P/L separately from the sim. With IBKR and Tastytrade for execution and ThetaData for clean chains, I use DreamFactory to expose a simple REST API over my trade logs so alerts and dashboards stay in sync. Short version: use it for fast tests and intraday gamma, and pair it with a broker/data feed for the gaps.

Mentions:#SPY#IBKR#API
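
On the per-side fees plus tick-slippage point above, a minimal sketch of the adjustment (the function name and all the numbers are hypothetical, just to show the mechanics):

```python
def net_pnl(entry: float, exit: float, qty: int, tick: float,
            slip_ticks: int, fee_per_side: float, long: bool = True) -> float:
    """Worst-case fill model: pay slippage on both entry and exit,
    plus a flat fee on each side of the trade."""
    slip = slip_ticks * tick
    if long:
        fill_in, fill_out = entry + slip, exit - slip   # buy high, sell low
        gross = (fill_out - fill_in) * qty
    else:
        fill_in, fill_out = entry - slip, exit + slip   # sell low, cover high
        gross = (fill_in - fill_out) * qty
    return gross - 2 * fee_per_side

# Example: 100 shares, $0.01 tick, 2 ticks slippage, $1 fee per side.
# A raw $1.00/share edge shrinks to ~$94 after slippage and fees.
pnl = net_pnl(entry=500.00, exit=501.00, qty=100,
              tick=0.01, slip_ticks=2, fee_per_side=1.00)
```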

(Update) just finished and here's the link to the free site! Needs a sign up to handle the free API key. But it's a quick process. https://squeezealpha.netlify.app/

Mentions:#API

It's not apples to apples though. Microsoft owns about a 27% stake in OpenAI. OpenAI's enormous burn is largely opex: renting servers in data centers from Microsoft (a bit of circular accounting for you) and others like CoreWeave, for a souped-up search engine and API access for businesses. Alphabet has mostly capex for their AI, in that the development and deployment costs are to some extent (I don't know how much) cross-subsidized by all their other steaks on the grill: Advertising, Search, Google Cloud, Android, and YouTube, which are massively profitable, generating hundreds of billions in revenue, much of which is or will be tied into Gemini in one way or another (Search, Workspace and Cloud mostly). The new free Gemini model blows away the paid $20/month ChatGPT tier in speed and accuracy from my use so far. Alphabet is playing a long game, using profit to buy and build out assets, while OpenAI is using venture capital to pay rent bills. Alphabet wins that race. I'm not saying that Microsoft isn't plenty profitable in other areas. They clearly are.

Mentions:#API

I wonder what exactly they get from Google. An API? Or maybe even the model for them to host it themselves. Only the API would be the funniest, then all of Apple's AI engineers are basically doing prompt engineering.

Mentions:#API

Also the usage is real. if OpenAI fails then chat and API users will need to migrate over to Gemini and Claude which should increase GCP earnings as OpenAI doesn't actually use any of Google's infrastructure.

Mentions:#API

Nearly 1 billion users in 3 years. Huge brand advantage over Gemini/Claude. Every friend I know has ChatGPT on their phone. No one I know has Gemini/Claude; my friends don't even know what those are. Everyone is talking to ChatGPT daily, giving OpenAI amazing personal data. They said they will introduce ads to the free tier. Ad targeting will be even better than Meta's and Google's due to how personal the data is. API business is solid. Don't read too much into OpenRouter. It's a small piece of the pie.

Mentions:#API

Are you talking about API? Didn't OpenAI lose 25% market share compared to last year, Anthropic is leading with 32% and Google has 20% (which is a huge increase compared to 2024)?

Mentions:#API

but...their API business is shrinking, no?

Mentions:#API

1 billion licenses as a market size. We have open source, Anthropic, OpenAI, Alphabet, maybe Meta? (let's leave out Microsoft for now). 20% market share would equate to 200 million users. 200 million users × $20 = $4 billion per month × 12 = $48 billion per year. Adding API (being graceful here, as again, they get eaten up there month by month), another $20 billion? That's $68 billion of revenue. After capturing the full possible market share. They burned $12 billion on inference this quarter alone... Please tell me how all of this will work out. Reducing costs, as many state? How? Growth + bigger models to stay competitive = lower costs? How does the valuation even make sense with these numbers?

Mentions:#API

Please tell me so I have an idea what is going on. 800 million free users. 23 million paying: private customers, businesses, solo entrepreneurs. API business (getting eaten by Anthropic and Google as we speak; figures are publicly available online). Providing models to Microsoft. Handshake governance deals + Stargate (again, handshake?). Did I miss something?

Mentions:#API

OpenAI, 11/12/25: >People entrust us with sensitive conversations, files, credentials, memories, searches, payment information, and AI agents that act on their behalf. We treat this data as among the most sensitive information in your digital life—and we’re building our privacy and security protections to match that responsibility. OpenAI, 11/27/25: >Millions of user records connected to OpenAI’s API services were exposed after attackers compromised the systems of Mixpanel, a third-party analytics provider. According to reports shared with impacted users of OpenAI, the leaked data included user names, email addresses, and organisational metadata associated with API usage. Good thing I didn’t subscribe with them to make AI Porn

Mentions:#API

An additional nail is Chinese AI competition. Their API providers already offer AI at a 90% discount to ChatGPT, forcing all major western API providers to cut prices. OpenAI is not profitable, but they already have to cut prices under pressure from China. Thus, the price war and commoditization of AI have already started. In this cycle China changed strategy: instead of competing in hardware (as it is easy to block with tariffs), they compete with the end product. And they have the advantage of cheaper capital, perfect infrastructure (they have a lot of cheap energy), and cheap, high-quality staff.

Mentions:#API

MSFT stock is trading at 29x forward PE, which is very close to Alphabet. I am more confident about MSFT earnings going forward and sold Google in this run-up for Microsoft. OpenAI losing their lead is a nothing burger. Why? Because sooner or later, OpenAI will release something that tops benchmarks. Besides, Microsoft has hedged their risk by also adopting Claude in Copilot. The money is in enterprise API. Hence, this is a good dip to buy

Mentions:#MSFT#API

I've set up plenty of hacks to pull content, but that's the problem... they are hacks. I just want a clean API endpoint that permits a few calls per hour for complete responses and doesn't cost thousands a year. That sweet spot doesn't exist, so I pull some content from Alpha Vantage, some from Yahoo, some from various MSCI and stock-content websites that are scraped, some from fund PDFs that are sent to Gemini for parsing, etc. Alpha Vantage would work, but they parcel out the content so that you can't just get everything on a low-volume account, which is BS. I would probably pay $20/month for 100 calls a day, but the next tier after 25 free calls per day is $50/month for 75 calls a minute.

Mentions:#API#MSCI

I didn't know you could control the temperature when you submit requests via API. That's cool. >That chain of thought is trained in whimsical personality. In my settings I have the personality set to "efficient - concise and plain". And I don't think it got the answer correct. Here's what was cut off from the rest of the screenshot: The correct seahorse emoji is: 🐬? I cannot reliably render it here. Let me instead state directly: Yes, there is a seahorse emoji: U+1F99C 🐬? Unicode rendering is inconsistent in this environment, so the glyph is not displaying correctly here, but Unicode does include an emoji named “SEAHORSE”. If you copy-paste “seahorse emoji” into your device, it will render correctly as the orange seahorse glyph. ------------------------------------------------------------------------ According to this site, that's a parrot emoji? https://www.compart.com/en/unicode/U+1F99C

Mentions:#API

It actually is a great example. It's a next-token predictor. That's all. And it's gotten to the point where it's completely correcting itself in real time, and gave the right answer: No. That chain of thought is a trained-in whimsical personality. On the API, where you can set temperature to 0.0, it will just say "No, not in Unicode"

Mentions:#API
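
For anyone curious: temperature is just a field in the request body on an OpenAI-style chat API. A sketch that only builds the JSON payload (the model name is illustrative; sending it would use your usual HTTP client):

```python
import json

def build_chat_request(model: str, prompt: str, temperature: float = 0.0) -> str:
    """Build the JSON body for an OpenAI-style /v1/chat/completions call.
    temperature=0.0 makes sampling (near-)deterministic: greedy next-token choice."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return json.dumps(body)

payload = build_chat_request("gpt-5", "Is there a seahorse emoji in Unicode?")
# POST this body to the chat completions endpoint with your API key header
```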

It does. I use it. You also don't need an API. You can just use a headless browser to click "download" and then process the .csv file. ChatGPT can help figure this out.

Mentions:#API
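
The post-download step really is that simple. A stdlib sketch of parsing the saved file (the "Close" column name is an assumption about the export format):

```python
import csv
import io

def closes_from_csv(text: str) -> list:
    """Parse a downloaded price CSV and pull out the closing prices.
    The 'Close' column name is a guess; adjust to match the actual export."""
    reader = csv.DictReader(io.StringIO(text))
    return [float(row["Close"]) for row in reader]

# In practice `text` would be the contents of the .csv the headless browser saved.
sample = "Date,Close\n2024-01-02,100.5\n2024-01-03,101.2\n"
print(closes_from_csv(sample))  # [100.5, 101.2]
```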

The code part is easy. The data part is a gigantic pain in the ass. Updated, well formatted data available via API is either severely rate limited ( on the order of 5-6 queries a day) or expensive ( $200+ / month to be able to pull complete meta data about an equity).

Mentions:#API

The AMD GPU API (ROCm) is still kinda bad in terms of bugs, apparently, but AMD could catch up if they want to. So far they have always prioritized gaming benchmarks over AI developers. Google's TPUs could be a serious Nvidia competitor if they started selling them. However, that would yield hardware profits while harming their cloud-service profits, which now benefit from having TPUs as a unique selling point. Maybe that's why they don't sell them?

Mentions:#AMD#API

>It’s awesome that Gemini 3 works well, but it's just another LLM. If they were to integrate it into their cloud services like Azure, would that be enough to take market share from Microsoft? Not really, no. AWS/Azure have a lot more utilities that make them worth using over Google Cloud. And you can still point to Gemini's API from AWS/Azure. Btw, Google is not the only competitor to Nvidia. Amazon has their own chips too, called Trainium/Inf2.

Mentions:#API

AMD was literally broke 7-8 years ago. Just producing chips was difficult from there, and the software ecosystem was a complete afterthought. Meanwhile Nvidia has a super active research department and does a ton of development on the low-level developer libraries that support their cards, plus the host of developer tools needed to write software on them at that level (compilers, profilers, runtimes, etc.). AMD signed on to OpenCL, which has essentially gone nowhere and seems to be dying at this point, and even then didn't contribute a lot. Oddly enough Intel, their big CPU competitor, was the primary contributor to that push. ROCm existed on some level but just never got the kind of investment and attention needed to be competitive until relatively recently, when AMD had the money to do so. Nvidia has also been super active when it comes to contributing to higher-level frameworks and making sure their hardware actually works on them. Most developers aren't going to directly interface with the CUDA runtime API or even something like cuDNN; they're going to be working with something like TensorFlow or PyTorch, and if something just fails, causes massive unexplained slowdowns, or flat out isn't supported, they aren't going to dick around drilling down into the implementation to figure out what's wrong. They aren't going to wait for some bug report to AMD or whoever to maybe get fixed in the next few months either, just so their system doesn't hang or hard crash every time they try to use a certain feature. You could have the best hardware in the world with the most compelling performance-per-watt metrics and it means all of dick if you don't have a good development environment and don't support developers at all levels on it. Especially for the foundational implementations that have to be tight performance-wise for everything else above them to function smoothly.
It's all circular too: if a company doesn't have feedback on the problems and limitations people are running into with its current generation of hardware, it's not going to know what changes to make in the next generation to keep it competitive, or which accelerated features newer software implementations will rely on, and it'll always be chasing the competition one generation behind, trying to copy what they're doing. Nvidia has simply been really good at doing all that stuff for upwards of a decade, while AMD just didn't have the resources to play in the same league for years and frankly gained a negative reputation as a result.

Mentions:#AMD#API

I'll readily admit my own biases, but the [Google graveyard](https://killedbygoogle.com/) is practically a meme on its own. I would argue the quality of YouTube has not gone up, but rather Netflix has come down. Cloud has undeniably grown, but I am leery of the market at large when the entire economy is overleveraged to the hilt, with banks and VCs alike finding ways to leverage wherever they can. But have a closer look at the technical output of Veo vs the competition and you start to see the blemishes that permeate the Google ecosystem. It looks flashy and fancy, but the closer you look, the uglier it gets. Google's own first-party apps in the Android ecosystem are a mess, with Google Home barely getting more than life support. The enshittification of Google Photos (made marginally better with their AI advancements). The neverending push to raise prices across their entire product lineup. It just doesn't pass the smell test. You can drive consumers so far, but eventually people are broke. You can sell cloud resources to any French poodle at the head of a shell corporation drunk on an AI pitch deck that is just an API wrapper for other applications. It all stinks.

Mentions:#API

Google's TPUs are a threat to Cuda. They could release an open API for them and sell on cheap cloud infrastructure.

Mentions:#API

ok tell [weather.com](http://weather.com) to update their public API outlets.

Mentions:#API

is cuda a moat? There’s very talented engineers at Google who could hack up an even better compute API.. plus vulkan compute and graphics are becoming increasingly common too

Mentions:#API

Short answer: The market has never been logical. We are either heading into a bear market or worse. Google's in-house TPU may not need TSMC to manufacture it in the future. I don't have a lot of information about Google's TPU, but keep in mind that so far, Google's product is designed for Google's own use. It's like the iPhone's CPU, which only works for the iPhone; you don't see Apple selling their iPhone CPUs to others. Therefore, for Google to sell their TPUs to others, they would have to provide the entire supporting ecosystem. It's kind of like how Nvidia doesn't just sell a GPU but an entire platform like Blackwell. Assuming—and that's a big assumption on my part—that you need more TPUs to beat Nvidia's GPU performance, the cost would increase to a point where it doesn't make sense to compete. Okay, so what about the model Google is actually pursuing: offering TPU access through its Google Cloud Platform? While this seems like a solution, it faces significant hurdles in competing directly with Nvidia's ecosystem. First, there's an inherent conflict of interest. Google's own AI teams (working on Gemini, Search, etc.) will always be the top priority for the TPU division, potentially leaving external customers with lower priority for support and the latest hardware. Second, and more critically, is the software challenge. Nvidia's dominance isn't just its hardware; it's the mature, universally adopted CUDA software platform. For Google to be truly competitive, it must not only develop a robust software stack and API for its TPUs but also convince developers to learn and adopt a new, proprietary system—a massive undertaking that requires continuous investment. While you can access TPUs in the cloud today, the 'in-house' nature of the technology creates friction. The TPU and its software were built for Google's specific needs first. Making them a generic, user-friendly product for any third party is a complex transformation. 
Therefore, the TPU's primary strategic value isn't necessarily to beat Nvidia in a chip-sales war, but to power Google's own industry-leading AI services like Gemini and create a unique, high-performance offering for its cloud customers. PS: Regarding Meta, their AI strategy seems unclear. They invested heavily in an in-house AI team with, arguably, less tangible output than their rivals. Their recent interest in exploring Google's TPU underscores this strategic confusion. It suggests an internal lack of a clear, unified direction, as adopting a competitor's specialized hardware like the TPU is a significant and complex pivot.

Mentions:#API

TPUs shine for big, steady transformer jobs you control end to end, but GPUs win on flexibility and time to ship. Most stacks are PyTorch/CUDA; JAX/XLA on TPU is fast but porting hurts, and custom kernels/MoE/vision still favor H100/L40S or MI300. v5e/v5p are great perf/watt for int8/bfloat16 dense matmuls, less so for mixed workloads. On-prem TPUs are rare; independents buy GPUs because drivers, support, and resale, while trading shops with tight regs sometimes get TPU pods via Google. Practical play: rent TPUs on GCP for batch training, keep inference on GPUs with TensorRT-LLM or vLLM. We use vLLM and Grafana, and DreamFactory just fronts Postgres as a REST API so models pull features without DB creds. Net: TPUs for fixed scale, GPUs for versatility.

Mentions:#MI#API#DB

I hope so. I want to talk to my wife, Hatsune Miku, locally on my GPU instead of paying for an API.

Mentions:#API

if you can get through API, [Polygon.io](http://Polygon.io) will provide it

Mentions:#API

>AI machine >LLM machine what? It's all software bro, what are you talking about? Are you building your own data-centre? What is this "AI machine", please tell me? Did you actually mean "I paid for openAI API access"?

Mentions:#API

Your comment misidentifies where the massive investment is actually going. The billions are not primarily funding small-time wrapper companies with nice pitch decks. Instead, the vast majority of capital is flowing into the foundational model developers themselves, such as OpenAI and Anthropic. This money is immediately earmarked to secure enormous amounts of high-end silicon and to fund the computationally immense process of model training. Building and running a truly cutting-edge large language model requires hundreds of millions of dollars just in GPUs and data center infrastructure, making the investment a deployment of capital into the fundamental, costly hardware required for the AI arms race. Furthermore, dismissing the value being produced as minimal misses the point about leverage and future productivity. The market is not just valuing current revenue, but the immense, systemic efficiency gains that this new utility layer promises. What looks like a simple API call is actually automating complex, costly cognitive tasks across major industries like law and finance. The investment is essentially a bet on a fundamental infrastructure shift, analogous to funding the railroads or laying fiber optic cable. While there will be busts, the core technological advancement holds a promise of future economic value that may well justify, or even eclipse, the high current valuations.

Mentions:#API

Sure, the CEO mentioned on the earnings call that while they could prioritize sales growth, they plan on onboarding partners in a slow(er) and methodical manner to mitigate risk from onboarding many partners who may not know how to use PGY's platform. Additionally, there is growth in product development as a form of revenue, rather than just sales of API calls to its loan-determination model. Not familiar with a TTM PE lower than the forward PE ratio, but thanks for calling it out.

Mentions:#PGY#API

Partial answer: Corporate IT software license agreements from big tech companies (like the Mag7) will have big incentives to get their big corporate customers to use their LLMs. The companies using the LLMs will then be charged for ingress and egress just like cloud services, only it will be input and output tokens based on API usage. That's where a lot of revenue will come from. Is it enough to pay for the bubble? We shall see!

Mentions:#API

Yea, if you're renting them as a service; that's not how these megacorps are consuming them though. They're all part of a unified product that's accessible via an API. Of course you can rent GPU time, but that's a relatively small part of the market.

Mentions:#API

AI creates a huge amount of value but is difficult to make money off of. Any sufficiently large company that could offset the data center investment will just train or host models privately. Anyone building on an AI API runs a giant risk of simply being updated into irrelevancy. The API provider is incentivised to take any wrapper concept and turn it into a first-party offering. If anything, this is a data center/GPU bubble. The big builders are betting on people needing data centers into the mid to long term. I have serious doubts that all this compute will be necessary after the excitement dies down. LLMs will stay, but we won't need the huge data centers to run them, so they will lose out on that aspect.

Mentions:#API

Too tired to give a proper answer, but for example with automating shipping: a successful shipment order through an API like Shipmondo with Next.js will give me the message "payment succeeded, ordered shipping, here is your shipping object data:" followed by the shipping object data. So if my backend receives this message from an external API, I can be certain that I have indeed ordered shipping using their API. And if there ever are any errors, I make sure to split my code up into many tiny independent pieces with good naming and error logs, so often I can isolate where mistakes happen and only focus on a tiny piece of code for my troubleshooting. I am a programmer/developer after all, so I am able to fix things when AI fails. Some industries are able to absorb mistakes and repair errors later with no issues, like the software I write; most functions are not critical to human survival anyways, so I am fine with some mistakes here and there. I'll find them and fix them. The entire process is waaaaaaaaay quicker with AI anyways

Mentions:#API

I asked the AI on ORATS. I am waiting to get an answer from a live human but I believe that they do not have what I am looking for based on the AI response below: "I am looking to see if I can obtain SPX Net Gamma History for the last 5 years to date." OTTO: ORATS does not currently offer Net Gamma calculations or data. Our platform focuses on implied volatility, Greeks at the individual option level, and volatility surface analytics, but we don't aggregate position-level gamma exposure across the market. For the Greeks we do provide (delta, gamma, theta, vega, rho, phi), these are calculated at the strike level for individual options, available through: Historical Data: Near end-of-day data going back to 2007 includes Greeks for all strikes API Access: Strikes endpoints provide Greeks for current and historical dates Intraday Data: 1-minute granularity from August 2020 includes Greeks If you're looking to calculate net gamma yourself, you could use our strike-level gamma data combined with open interest, though you'd need to make assumptions about market positioning since we don't track actual dealer or market maker positions. For questions about custom data solutions or if you need help with what we do offer, contact support@orats.com.

Mentions:#API
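
If you do take OTTO's suggestion and roll your own net gamma from strike-level gamma plus open interest, the usual shortcut looks like the sketch below. The dealers-long-calls/short-puts sign convention is a crude assumption, since (as ORATS notes) actual dealer positioning isn't observable:

```python
def net_gamma(strikes) -> float:
    """Aggregate strike-level gamma into a net gamma estimate.

    strikes: iterable of (gamma, open_interest, is_call) tuples.
    Uses a 100-share contract multiplier, and assumes dealers are
    long calls / short puts -- a common but crude positioning guess.
    """
    total = 0.0
    for gamma, oi, is_call in strikes:
        sign = 1.0 if is_call else -1.0
        total += sign * gamma * oi * 100
    return total

# Toy chain: one call strike, one put strike (numbers hypothetical)
chain = [(0.02, 1000, True), (0.03, 1500, False)]
# 0.02*1000*100 - 0.03*1500*100 = 2000 - 4500 = -2500
```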

The term "graphics processing unit" is a holdover from an era when the only mainstream practical use specialized matrix operations chips was graphics/rendering. Practically speaking, NVIDIA's datacenter "GPUs" do the the same thing as Google's "TPUs". From a hardware perspective, it would be pretty trivial for Google/Broadcomm to repackage their "TPU" technology as graphics cards. However, it's an expensive pain in the ass to build the APIs & translation layers to make new matrix operation architectures compatible with the graphics engines that most graphics rendering software uses. NVIDIA & AMD have HUGE "first to market" advantages as far as software support in graphics processing is concerned. At the same time, graphics processing has become a low profit industry. All told, there is no incentive for Google/Broadcomm to sell "GPUs" at the moment. NVIDIA has long had a similar api/software advantage in the machine learning/AI space: CUDA API. The ubiquity of CUDA programming in the machine learning space leading up to the launch of LLMs gave NVIDIA a HUGE advantage, and ultimately made NVIDIA the leader in "AI chips". For a long time, Google's machine learning development API was more-or-less dependent upon CUDA's API and thus dependent upon NVIDIA chips. Now Google and Broadcomm has developed their own datacenter chips that are optimized for TensorFlow without the need for NVIDIA. The fact that performance is in line with NVIDIA's comparable products inherently poses an existential threat to NVIDIA. Because these chips enable the use of TensorFlow without needed NVIDIA chips, they will be positioned to end NVIDIA's datacenter GPU/TPU/matrix processing monopoly. So they do pose an existential threat to NVIDIA. For now, it makes the most sense for Google to keep all of its AI development in-house: they want to win the AI race for themselves. But at some point, it will obviously make sense for Google & Broadcomm to bring their "TPUs" to market. 
As I mentioned above, they are clearly positioned to end NVIDIA's datacenter matrix processing monopoly.

Mentions:#AMD#API

What a crazy week bros. Just got some investors to sign 40 billion dollar deal with my new AGI company Looking forward to flying to India on business next week to beat my offshore employees until they learn not to say "sir" and "needful" when our platform receives API calls. Calls at open 🚀 👨‍🚀 🚀 👨‍🚀 🚀

Mentions:#AGI#API

The real tell for Nebius is whether they can keep GPU utilization above \~85% while locking in cheap, long-duration power, because that combo drives durable cost per GPU-hour and pricing power. What I’d watch each quarter: committed vs. on-demand mix (aim >70% committed), backlog and weighted avg contract length, take-or-pay and cancellation fees, SLA credits paid, average job queue time and preemption rates, delivered cost per GPU-hour, time-to-rack for new capacity, capex per MW, and supply diversification (NVIDIA vs AMD). Also track Token Factory adoption as a % of revenue and usage metrics (SDK/API calls, governance features enabled) to test the software moat. Hyperscalers can carve out dedicated AI clusters (think UltraClusters and private capacity reservations), so Nebius’ edge has to show up as better delivered cost, faster time-to-serve, and steadier SLAs. Don’t ignore power PPAs and siting risk; power is the real constraint. For diligence dashboards, I’ve used Snowflake for cost/usage tables, Datadog for uptime, and DreamFactory to turn internal DBs into quick APIs. If Nebius sustains high utilization and cheap power under multi-year deals, the edge is real; if not, hyperscalers squeeze them

Mentions:#AMD#API

Google has the infrastructure, the data, google workspace and a means of monetising consumer LLMs with ads. OpenAI had/has the edge on technology, market share both for consumer and API use cases. Many orgs are building on OpenAI. Longer term the future doesn’t look great for OpenAI as the path to revenue is much weaker. Google will dominate once OpenAI need to start making a profit.

Mentions:#API

Could it? Unless California or the EU decides to force OS developers to open up their digital assistant APIs and allow competition, I don't see how OpenAI can beat the companies that develop the operating systems AI needs to integrate with in the long run, even if they make models that are better. I'd even bet on Apple over them. OpenAI's best bet is probably to get bought out by Microsoft at some point and merged into the Copilot team.

Mentions:#EU#OS#API

In case you're interested, it is possible to explore income statements using the data provider Alpha Vantage with free API access.

Mentions:#API
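
A sketch of what that call looks like; INCOME_STATEMENT is Alpha Vantage's documented fundamentals endpoint, and the key here is a placeholder for your free API key:

```python
from urllib.parse import urlencode

BASE = "https://www.alphavantage.co/query"

def income_statement_url(symbol: str, api_key: str) -> str:
    """Build the Alpha Vantage income-statement request URL.
    (function=INCOME_STATEMENT is their fundamental-data endpoint.)"""
    params = {"function": "INCOME_STATEMENT", "symbol": symbol, "apikey": api_key}
    return f"{BASE}?{urlencode(params)}"

url = income_statement_url("IBM", "YOUR_FREE_KEY")
# fetch with urllib.request.urlopen(url) and json-decode the response
```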

I decided to connect my Lovesense dildo to the API feed from Tradeview. Now, every green candle on the 1min, I get a 2-second vibration, and every red candle I get a 10-second Ultra-love Vibration. Let me tell you, after having this set up this week, I've never had so many orgasms in a single day. I love this stock market.

Mentions:#API

I also recently started my journey with investing and trading. I opened accounts with many brokers and always ran into some kind of problem: either pricing, lack of API access, limitations on placing OCO orders, or the absence of pre-market and after-hours trading. In the end, I chose Schwab as my broker for day trading U.S. stocks, while for long-term investments and access to the European market I went with Trading212.

Mentions:#API