
API (Agora Inc)


Mentions (24Hr): 9 (+350.00% today)

Reddit Posts

r/wallstreetbets: Chat with Earnings Call?
r/investing: Download dataset of stock prices X tickers for yesterday?
r/investing: Sea Change: Value Investing
r/Wallstreetbetsnew: Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field
r/pennystocks: Tech market brings important development opportunities, AIGC is firmly top 1 in the current technology field
r/WallStreetbetsELITE: AIGC market brings important development opportunities, artificial intelligence technology has been developing
r/pennystocks: Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
r/wallstreetbets: Sea Change: Value Investing
r/investing: API KEY and robinhood dividends
r/pennystocks: OTC : KWIK Shareholder Letter January 3, 2024
r/options: SPX 0DTE Strategy Built
r/Wallstreetbetsnew: The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
r/pennystocks: The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
r/options: Best API platform for End of day option pricing
r/WallStreetbetsELITE: Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
r/wallstreetbets: Why Microsoft's gross margins are expanding (up 1.89% QoQ).
r/StockMarket: Why Microsoft's gross margins are expanding (up 1.89% QoQ).
r/stocks: Why Microsoft's margins are expanding.
r/options: Interactive brokers or Schwab
r/wallstreetbets: Reddit IPO
r/wallstreetbets: Google's AI project "Gemini" shipped, and so far it looks better than GPT4
r/stocks: US Broker Recommendation with a market that allows both longs/shorts
r/investing: API provider for premarket data
r/Wallstreetbetsnew: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/investing: Best API for grabbing historical financial statement data to compare across companies.

r/StockMarket: Seeking Free Advance/Decline, NH/NL Data - Python API?
r/pennystocks: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/wallstreetbetsOGs: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/WallStreetbetsELITE: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/Shortsqueeze: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/smallstreetbets: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/RobinHoodPennyStocks: A Littel DD on FobiAI, harnesses the power of AI and data intelligence, enabling businesses to digitally transform
r/stocks: Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
r/investing: Past and future list of investor (analyst) dates?
r/pennystocks: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/Wallstreetbetsnew: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/RobinHoodPennyStocks: Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
r/pennystocks: Aduro Clean Technologies Inc. Research Update
r/WallStreetbetsELITE: Aduro Clean Technologies Inc. Research Update
r/options: Option Chain REST APIs w/ Greeks and Beta Weighting
r/investing: As an asset manager, why wouldn’t you use Verity?
r/wallstreetbets: Nasdaq $ZG (Zillow) EPS not accurate?
r/pennystocks: $VERS Upcoming Webinar: Introduction and Demonstration of Genius
r/StockMarket: Comps and Precedents: API Help
r/StockMarket: UsDebtClock.org is a fake website
r/wallstreetbets: Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
r/Shortsqueeze: Short Squeeze is Reopened. Play Nice.
r/stocks: Your favourite place for stock data
r/options: Created options trading bot with Interactive Brokers API
r/investing: What is driving oil prices down this week?

r/weedstocks: Leafly Announces New API for Order Integration($LFLY)
r/stocks: Data mapping tickers to sector / industry?
r/Wallstreetbetsnew: Support In View For USOIL !
r/wallstreetbets: Is Unity going to Zero? - Why they just killed their business model.
r/options: Need Help Deciding About Limex API Trading Contest
r/investing: Looking for affordable API to fetch specific historical stock market data
r/options: Paper trading with API?
r/options: Where do sites like Unusual Whales scrape their data from?
r/stocks: Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
r/StockMarket: Reference for S&P500 Companies by Year?
r/SPACs: [DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
r/stocks: Know The Company - Okta
r/SPACs: [DIY Filing Alerts] Part 2: Emailing Today's Filings
r/wallstreetbetsOGs: This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
r/SPACs: [DIY Filing Alerts] Part 1: Working with the SEC API
r/options: API or Dataset that shows intraday price movement for Options Bid/Ask
r/wallstreetbets: [Newbie] Bought Microsoft shares at 250 mainly as see value in ChatGPT. I think I'll hold for at least +6 months but I'd like your thoughts.
r/stocks: Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
r/stocks: Anyone else bullish about $GOOGL Web Integrity API?
r/investing: I found this trading tool thats just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isnt this a violation of reddits new API rules?
r/options: where to fetch crypto option data
r/wallstreetbets: I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
r/stocks: Fundamental Stock Data for Your Projects and Analysis
r/StockMarket: Fundamental Stock Data for Your Projects and Analysis
r/stocks: Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly

r/wallstreetbets: Pictures say it all. Robinhood is shady AF.
r/options: URGENT - Audit Your Transactions: Broker Alters Orders without Permission
r/StockMarket: My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
r/StockMarket: I’m Building a Free API for Stock Fundamentals
r/wallstreetbets: The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
r/StockMarket: I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
r/StockMarket: I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
r/options: To recalculate historical options data from CBOE, to find IVs at moment of trades, what int rate?
r/pennystocks: WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
r/wallstreetbets: $SSTK Shutterstock - OpenAI ChatGBT partnership - Images, Photos, & Videos
r/options: Is there really no better way to track open + closed positions without multiple apps?
r/options: List of Platforms (Not Brokers) for advanced option trading
r/investing: anyone using Alpaca for long term investing?
r/investing: Financial API grouped by industry
r/WallStreetbetsELITE: Utopia P2P is a great application that needs NO KYC to safeguard your data !
r/WallStreetbetsELITE: Utopia P2P supports API access and CHAT GPT
r/options: IV across exchanges
r/options: Historical Greeks?
r/wallstreetbets: Stepping Ahead with the Future of Digital Assets
r/wallstreetbets: An Unexpected Ally in the Crypto Battlefield
r/stocks: Where can I find financial reports archives?
r/WallStreetbetsELITE: Utopia P2P has now an airdrop for all Utopians
r/stocks: Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
r/wallstreetbets: Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
r/wallstreetbets: Reddit stands by controversial API changes as situation worsens

Mentions

Please don’t listen to this guy. He’s going to lose you money, which is completely irresponsible. He doesn’t do this as his day job; he's just a dude going down rabbit holes. Here’s my rebuttal.

AI capex is not a bubble. The telco comparison is intellectually lazy, and here’s why. I keep seeing the same recycled bear thesis: “AI spending looks like the dot-com bubble! Look at these capex charts!” As someone who spent years analyzing tech companies and their balance sheets, let me explain why this comparison falls apart under any real scrutiny. (I can tell this poster just went down an internet rabbit hole and came out thinking he was a genius. This would get tossed in the trash on institutional desks.)

The balance sheet comparison is absurd. The dot-com bubble thesis relies on comparing companies like Cisco and WorldCom — leveraged, cash-poor businesses running on hype — to Microsoft, Google, Meta, and Amazon, who are sitting on roughly $500 billion in combined cash reserves. These companies aren’t levering up to fund AI. They’re spending free cash flow. There’s a fundamental difference between a company borrowing to build fiber nobody asked for and a company allocating 15% of its cash pile toward infrastructure it’s already monetizing. If you can’t distinguish between those two situations, you shouldn’t be writing research.

Projecting the 1990s forward is not analysis. The core of every “AI bubble” report I’ve seen boils down to: “Telco capex went up and then crashed, therefore AI capex will crash.” That’s not a thesis. That’s pattern matching on a sample size of one. The actual dynamics are completely different: the telecom bust happened because companies built supply for demand that didn’t exist. AI already has over 1 billion users and is projected to reach 5 billion by 2030. ChatGPT hit 100 million users faster than any product in history. The demand isn’t hypothetical — it’s here, it’s measurable, and it’s growing.

The monetization is real and it’s scaling. I can tell you from personal experience that my own AI API bills run into the hundreds of dollars monthly — just for individual use. Multiply that across enterprises. Faster, more nimble tech companies are already running $50,000/month Anthropic bills to code entire systems. The idea that enterprises “aren’t adopting AI” is a survey problem, not a demand problem. If your sample is Fortune 500 companies whose only AI exposure is Microsoft Copilot, sure, adoption looks tepid. But the companies actually building products — the ones that will define the next decade — are spending aggressively and seeing real productivity gains. Large enterprise adoption is slower by nature. That’s not evidence of a bubble. That’s a normal diffusion curve.

AI capex obeys fundamentally different economics than telecom capex. Two dynamics make this spending cycle structurally different from anything in the ’90s:

Scaling laws are real physics, not hype. Every order of magnitude increase in compute has produced predictable, step-function improvements in model capability. This isn’t speculative; it’s empirically documented across multiple generations of models. As long as $10B in compute produces a meaningfully smarter model than $1B, the ROI is driven by the technology itself. Companies aren’t spending on faith. They’re spending because the returns are mathematically observable.

Supply is physically constrained. Fiber was a commodity. You could overbuild it because the inputs were abundant. High-end AI compute is bottlenecked by TSMC fabrication capacity and power grid availability. There are literal, physical limits on how many advanced chips can be produced. If you don’t invest $100B today, you cannot catch up in 2028 — the capacity simply won’t exist. That’s the opposite of a bubble dynamic. Bubbles are characterized by unlimited supply chasing speculative demand. AI capex is characterized by constrained supply chasing demonstrated demand.

The bottom line: every bubble argument I’ve seen either ignores the balance sheets of the companies doing the spending, treats a single historical analogy as a law of nature, or dismisses real monetization data in favor of vibes. You can spin a report to say anything; I’ve seen hundreds of them on both sides of this trade. But the lazy ones all share the same flaw: they compare the surface-level shape of a capex curve without examining whether the underlying economics are remotely similar.

Mentions:#API


All in, I’m at about $3,500 total so far, and that includes forming the company. Here’s the rough breakdown over the last 8 months:

* Claude Pro: $200
* ChatGPT: $20/month
* X Premium: $11/month (self-promo marketing, I guess?)
* Grok: $30/month (can cut this, don't really need it)
* Azure hosting: ~$65/month
* Google API: accidentally burned credits during testing and racked up a $500 bill (rookie mistake)
* APIs (Claude, GPT, Grok): about $20 each total in usage, but I still need to figure out what it would look like for a really heavy user
* LLC fees: like $70 to the SOS

Even with the Google credit mishap, I’m honestly fine with the spend. For what I’ve built, $3.5k is cheap. I’d pay a developer way more than that to build this for me, and it was spread out over 8 months. Everything is tracked cleanly in my budget-to-actuals report inside the Hub, pulled straight from my bookkeeping tab. And I can trim some of this if needed: the GPT and Grok subscriptions alone would cut a decent chunk monthly.

Net: ~$3,500 invested to date, fully tracked, and flexible going forward.

Mentions:#API#SOS

Fair call & appreciate the flag. Those are likely Prisma engine binaries (auto-generated by npx prisma generate) and Next.js build artifacts. They shouldn’t be committed to the repo. I’ll clean those up and add them to .gitignore. Nothing in this repo runs without you providing your own API keys and spinning up your own database; there’s no prebuilt executable to trust blindly. But you’re right that the repo should be cleaner. Thanks for looking.

Mentions:#API
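For what it's worth, the cleanup described above usually amounts to a few `.gitignore` entries plus un-tracking the files already committed. The paths below are assumptions (typical Next.js and Prisma defaults), not this repo's actual layout:

```
# Next.js build output
.next/

# Dependencies (Prisma engine binaries live under node_modules)
node_modules/

# Prisma generated client, if it is emitted outside node_modules
generated/prisma/
```

Files that were committed before the rule was added also need `git rm -r --cached <path>` before the ignore rules take effect; the local copies stay on disk.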

I imagine you could set up multiple agents in such a way that they queue up requests through one single server or system, and then that system makes API requests on their behalf. Still only need a single seat in that scenario.

Mentions:#API
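The queue-through-one-seat setup described above can be sketched in a few lines: many agent threads enqueue work, and a single worker that holds the one licensed client answers them in turn. `fake_api_call` is a stand-in for whatever provider call the seat would actually make; everything here is illustrative, not any vendor's real API.

```python
import queue
import threading

# Single shared queue: many agents enqueue work, one worker holds the API seat.
requests_q: queue.Queue = queue.Queue()

def fake_api_call(prompt: str) -> str:
    # Stand-in for the real provider call; only the seat worker ever talks upstream.
    return f"response to {prompt!r}"

def seat_worker() -> None:
    # The one licensed client: drains the queue and replies to each agent in turn.
    while True:
        prompt, reply_q = requests_q.get()
        reply_q.put(fake_api_call(prompt))

def agent(prompt: str) -> str:
    # An agent never calls the API directly; it routes through the shared worker.
    reply_q: queue.Queue = queue.Queue()
    requests_q.put((prompt, reply_q))
    return reply_q.get()

threading.Thread(target=seat_worker, daemon=True).start()
results = [agent(f"task {i}") for i in range(3)]
```

Because the worker serializes everything, requests are processed one at a time, which is exactly the trade-off of sharing one seat.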

TastyTrade API, the Federal Reserve API & Finnhub for now. I’m always looking for additional data sources to tap into. I know there are a bunch you can pay for, but I’m just not ready to do that right now.

Mentions:#API

TastyTrade actually accepts European accounts. It’s US-based (SEC/FINRA regulated) but allows international users, so you could use the exact same API and data pipeline. That said, Interactive Brokers is probably the most popular option in Europe for options trading. I haven't set that one up yet, but it’s next on my list!

Mentions:#API

Real-time, and it’s free with a funded TastyTrade account; no professional data subscription needed. The API streams live quotes and full option chains through their DXLink WebSocket (powered by dxFeed). The scanner endpoint with IV rank, IV percentile, HV, term structure, etc. is also real-time. Zero extra cost on top of a regular brokerage account.

Mentions:#API
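As a rough illustration of the subscribe-over-WebSocket flow mentioned above, here is a sketch of building a DXLink-style channel-subscription payload. The message shape and field names are assumptions for illustration only, not TastyTrade's or dxFeed's documented schema; check the official API docs before using anything like this.

```python
import json

def build_quote_subscription(channel: int, symbols: list[str]) -> str:
    # Hypothetical payload shape: a channel id plus per-symbol Quote subscriptions.
    # The real DXLink schema may differ; this only illustrates the pattern of
    # sending one JSON subscribe message over an open WebSocket.
    message = {
        "type": "FEED_SUBSCRIPTION",
        "channel": channel,
        "add": [{"type": "Quote", "symbol": s} for s in symbols],
    }
    return json.dumps(message)

payload = build_quote_subscription(1, ["SPY", "AAPL"])
```

In a real client this string would be sent over the authenticated WebSocket connection, after which quote events stream back on the same channel.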

Is the TastyTrade API a real-time or delayed quote service? I’m guessing you need to pay the professional price ($31.50/month?) for real-time if so?

Mentions:#API

Fortune 20 insurance exec here. Absolutely: I've seen countless pitches from LLM API wrappers who can't even give us a meaningful result using retrospective data. I've had to fire and PIP several people over the last few months for taking AI slop to potential clients as well. I have seen no meaningful efficiency gains from AI.

Mentions:#API

Huh? AI coding agents absolutely shred through tough data sets like it’s nothing. You point it at some obscure data set or crappy API documentation and say “figure it out”. Let Claude Code try 50 different approaches to integrate while you sip your coffee and scroll Reddit. By the time you come back to your session you either have a working prototype or all the information you need for an extremely verbose feature request.

Mentions:#API

Enterprise blockchain systems license the tech and integrate it into apps where users never directly buy tokens. In those cases, tokens function as backend settlement units or infrastructure credits, similar to API usage or cloud compute credits. You could use the exact same argument to say, "Well, I haven't seen my buddy Jimmy trading cloud/API credits, have you?" Blockchain in this case is infrastructure that end users in most cases don't even need to know exists. The question isn’t whether you see people buying funny tokens on exchanges. The question is whether businesses are paying to use the infrastructure and whether the token is structurally required (technologically or economically) in that process. During your DD you should find answers to these questions:

1. Where do DVLT’s operational flows originate?
2. Who are their actual clients?
3. In what scenarios does blockchain provide a structural advantage over a centralized database?
4. What measurable efficiency, cost reduction, or market expansion does it create?

These are the right evaluation criteria.

Mentions:#API#DD#DVLT

I made a working app in Claude yesterday in about 30 minutes and didn’t even know what an API key was when I started. 

Mentions:#API

Of course that's your contention. You're a first-time SaaS bear. You just got finished listening to some podcast, Dario on Dwarkesh, probably. Now you think it’s the end of white collar work and seat-based pricing is screwed. You're gonna be convinced of that til tomorrow when you get to “Something Big is Happening”. Then you’ll install ClawdBot on a Mac Mini, vibe code a dashboard on top of a postgres database and say we’re all just a couple ralph loops away from building a Salesforce competitor. That’s gonna last until next week when you discover context graphs, and then you're gonna be talking about how the systems of record will be disintermediated by an agentic layer and reposting OAI marketing graphics.

“Well, as a matter of fact, I won't, because ultimately the application layer is just ….” The application layer is just business logic on top of a CRUD database. You got that from Satya’s appearance on the BG2 pod, December 2024, right? Yeah, I saw that too. Were you gonna plagiarize the whole thing for us? Do you have any thoughts of your own on this matter? Or... is that your thing? You get into the replies of anyone posting a SaaS ticker. You watch some podcast and then pawn it off as your own idea just to impress some VCs and embarrass some anon who’s long SaaS?

See, the sad thing about a guy like you is in a couple years you're gonna start doing some thinking on your own and you're gonna come up with the fact that there are two certainties in life. One: don't do that. And two: you dropped thirty grand on Mac Minis and LLM API calls to come to the same conclusion you could’ve got for free by following a handful of VC accounts.

Mentions:#BG#API#VC

I’ll let Gemini explain why you’re wrong. 😊 This whole argument is built on massive blind spots and a few convenient strawmen. The author fundamentally misunderstands *how* AI threatens the SaaS business model. Here is exactly where the logic falls apart:

### The SMB Delusion

Calling SMB revenue a "rounding error" is completely out of touch with reality. Massive tech companies—Shopify, HubSpot, Intuit, Atlassian, Mailchimp—are built almost entirely on the backs of small and medium-sized businesses. Even for enterprise behemoths like Microsoft or Salesforce, the mid-market and SMB tiers are huge revenue drivers. If AI gives smaller businesses the ability to spin up cheap, automated micro-tools instead of paying for subscriptions, a massive chunk of the SaaS sector's total market cap goes up in smoke.

### The "Vibe Coding" Strawman

The author sets up a false dichotomy: either an enterprise buys a massive SaaS platform, or their CEO tries to build a custom CRM over the weekend using a prompt. That’s not the actual threat. The real threat is the hyper-efficiency of internal engineering. Enterprises already have dev teams. If AI makes those internal developers 10x or 100x more productive, the "build vs. buy" math changes instantly. A bank doesn't need to rely on a hallucinating AI agent; their own security-cleared, SOC2-compliant dev team can just build and maintain the necessary tools in a fraction of the time and cost it used to take. They don't need to outsource the complexity if AI just automated the complexity.

### The Seat-Based Death Spiral

This is the most glaring logical flaw in the essay. The author points to OpenAI and Anthropic charging $25–$30 a seat as proof the model is fine, completely ignoring that their real enterprise scale is built on API consumption (charging for compute/tokens), not user seats. More importantly, traditional SaaS is a tax on human headcount. You pay per seat for Salesforce, Zendesk, or Slack. If an enterprise uses AI agents to automate 80% of its customer support, they don't need 100 Zendesk licenses anymore—they need 20. The AI doesn't need a software license. The SaaS vendor's revenue collapses, even if the enterprise technically never stops using the product.

### Margin Compression

SaaS companies have historically justified their massive recurring fees because building reliable, secure software from scratch was incredibly hard and expensive. AI lowers the barrier to entry to the floor. When building software becomes cheap, margins compress. Why pay an incumbent vendor $500k a year for project management software when a hungry new startup can use AI to build the exact same secure, HIPAA-compliant tool and undercut them by 80%?

**The Bottom Line:** Wall Street isn't worried that global banks are going to start "vibe coding." They're worried that AI destroys the pricing power, the defensive moats, and the human-headcount-growth loops that made SaaS a cash cow in the first place.
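The seat-count arithmetic in the comment above (100 licenses shrinking to 20 when 80% of the work is automated) can be sketched directly. The $30/seat figure is taken from the comment's own $25-$30 range; the one-seat-per-worker simplification is an assumption.

```python
def remaining_seat_revenue(seats: int, price_per_seat: float,
                           automation_rate: float) -> float:
    # If automation displaces a fraction of the humans who needed licenses,
    # per-seat revenue shrinks proportionally (simplified: one seat per worker).
    remaining_seats = round(seats * (1 - automation_rate))
    return remaining_seats * price_per_seat

# The comment's example: 100 support licenses at $30/seat, 80% of tickets automated.
before = remaining_seat_revenue(100, 30.0, 0.0)  # 3000.0 per month
after = remaining_seat_revenue(100, 30.0, 0.8)   # 600.0 per month
```

The point the comment is making is visible in the ratio: usage of the product can stay constant while seat revenue falls fivefold.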

I second this. You can negotiate a flat margin rate before you migrate your funds, and if you ask they will probably give you a nice cash bonus to do so. Schwab also has an API if you're into rolling your own backtests and algotrading.

Mentions:#API

Been playing around with self-hosted/local LLM lately... 3 things that a lot of people really should know, and currently have no idea: 1. API data for the major AI providers is INSANELY expensive. It isn't a monthly subscription, you pay for compute like gas. But the thing is, you have no idea how much something will cost you to run until it's too late. None of the providers have functional usage tracking at all, and the cost to do even simple tasks is absolutely ridiculous. This will only get worse when corps enshittify the product to make back their investments. 2. Even with a beast computer setup, your LLM is going to suck fat, stinky, hairy balls. There's a reason that shit is so expensive. 3. Hobbyists are starting to spend stupid amounts of money on hardware and compute for shitty AI setups that do dumb shit like check the weather, send you AI slop 'daily reports' that are like 10% as useful as spending 5 minutes looking things up on your own, and *spam calling "leads" to try to sell people stuff*. Yep, telemarketing is back, baby! Conclusion: Calls on Apple. People who think they're smart are idiots who will spend 3x more money on Apple products for no good reason whatsoever.
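The "pay for compute like gas" pricing model is just per-token metering. A minimal sketch of how the bill adds up — the per-million-token rates below are illustrative placeholders, not any provider's real prices (which vary by model and change often):

```python
# Rough sketch of metered LLM API pricing: you pay per token, like gas.
# The rates below are illustrative placeholders, not real provider prices.

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Cost in dollars, given per-million-token input/output rates."""
    return (input_tokens / 1_000_000) * in_rate_per_m \
         + (output_tokens / 1_000_000) * out_rate_per_m

# e.g. one agentic session that chews through 400k input / 50k output tokens
# at hypothetical $3/M input and $15/M output rates:
cost = estimate_cost(400_000, 50_000, 3.0, 15.0)
print(f"${cost:.2f}")  # -> $1.95
```

Even a modest rate turns into real money fast once an agent loops over a large context hundreds of times — which is exactly the "you don't know until it's too late" trap the comment describes.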

Mentions:#API

I am pretty sure that any big company could discuss that with the big players and would get what they need. Would they lose billions of dollars' worth of API calls just for the API to not be compliant?

Mentions:#API

As someone who uses the API daily I cannot fathom the retardedness of people thinking we’re in a bubble

Mentions:#API

I work super deep in this space. Seer is a great resource so it was cool to see this post. Ready for an essay: Everyone has seen organic traffic drop and impressions skyrocket. Look up clicks-impressions decoupling or the crocodile effect. Google also changed their API limits from 10 pages to 1 or 2, so there were weird dips in impressions when we saw what happens when bots can no longer access the last 8 pages of Google: about 50% of everyone's impressions disappeared and average positions all jumped up to their "real" place in the first 2 pages, as barely any human goes further, so you're likely to have 0 impressions instead of 100 on page 9. Seeing more real but still bot-inflated numbers was pretty nuts and surprising. All this being said, reporting year on year is fucked for organic search, but the weird thing is conversions are up for almost all my clients. We track and influence AI mentions; the traffic is tiny but intent is the big big thing. All the window shoppers that read blogs and info are no longer clicking, they just read overviews, which is why info sites like Business Insider crumble. Product and service businesses now need to consider how they're shown in AI search, but people at the end of the day click to a site to make an action, or use info from AI to make a 0-click conversion like calling their phone number, which is near untrackable but still becomes a lead. My clients are down 40% on traffic but up 10% on impressions; conversion rates for organic are huge now. AI is just eating up the time wasters, but it's now a lot harder to target top of funnel with blogs like you used to. The real risk I see is adoption from lack of control.
From Google's own internal marketing event to agencies 2 days ago (but they've been saying the same thing since Google Marketing Live last year): they are SUPER DESPERATE for us to move keywords to broad match and turn on AI Max, which basically means it matches to keywords you aren't even targeting but the AI infers it. Marketers don't like the lack of control, and legally we can't use it for regulated industries like finance, so there's big pushback, but they need us to because otherwise there's no way for ads to show in AI; searches are way too niche and long-tail. Marketers like control but ultimately need to get their clients showing up wherever there is relevant intent, so we will see how their product develops. Imho this is the real problem they need to solve and are internally prioritizing very clearly.

Mentions:#API

Claude Code and API services.

Mentions:#API

No it's not. It is a translation layer. It is translating Windows API code and DirectX calls into Linux calls in real time. An emulator is more like a VM. That is not what Proton is doing.

Mentions:#API

Through API or subscription? What are you talking about? OpenAI is growing revenue exponentially and is in the process of raising between $50-$100 billion. They'll be fine, especially as models get more intelligent and use fewer tokens.

Mentions:#API

Just as soon as SaaS companies voluntarily give Cowork official API access. Which will only happen if you think your stock should be valued like a utility.

Mentions:#API

> Open claw is just hype. It’s a vibe coded token devouring mess I agree to an extent, I think it should be looked at as a prototype of what is possible with today's models. Personally I'm going to have AI re-write the whole thing for me in Python, make API token storage more secure, and add something to prevent (hopefully) prompt injection. I spent about 8 hours with Openclaw, quite fascinating but it needed work. Model selection and context optimization also need to be worked on. The idea of downloading a plugin is dumb, but a markdown that describes a behavior to have your local AI create - awesome. At some point in the next few weeks I'll make my own version of Openclaw in Python.

Mentions:#API

Brother. Lol! Whose LLM model are they calling, dawg? Anthropic's LLM model. Congrats, you learned what an API does.

Mentions:#API

Tried to use the IBKR API and got frustrated after getting only the free market data after 2 days of playing with the code. Wasn’t a fan of how complex it was—signed up for a brokerage with Tradier and after 2 weeks or so got approved, and the options fetching is significantly more straightforward and universal. Would HIGHLY recommend checking it out, though there are some account minimums ($2000 and 2 trades minimum per year) you need to meet so they don’t charge you inactivity fees.

Mentions:#IBKR#API

> Yes. But open source models are getting better too That is definitely a good point. I think DeepSeek V4 will be coming out soon, but I think there will be a constantly increasing demand for models which won't fit on most computers. > I track my Claude API usage, sometimes a simple bug fix costs $2-3 per 5 minute session Yeah, I noticed that Opus 4.6 consumes too many tokens, that seems to be the consensus. I think they'll probably be able to improve on it in 4.7. I switched to Codex 5.3 for this reason. These models are getting better so fast it's hard to keep up. I find going to Grok and asking for updates in the past 2 days is the best way to keep up. > So far everyone is just dropping cash into the fire pit to secure the leading positions in the future, but at some point investors start looking for money back and monetization won't be easy. Yeah, I think there could be a gap between spend and profitability, where you can see stocks take a dive before these companies figure out how to properly utilize AI in their workflows.

Mentions:#API

Yes. But open source models are getting better too, and while they are not as powerful as ChatGPT or Claude, tasks such as analyzing documents from your example could be done in house, locally or in the cloud, surpassing the main AI providers. Many other things as well. China puts some pressure in this domain; another story like DeepSeek will happen again. Also privacy concerns play a big role. And a single point of failure/vendor lock-in for big business. I think those who can afford it will adopt (or already have adopted) multi-model approaches in their own cloud for all the reasons above. Doesn't mean it won't be profitable, just that there are so many players. OpenAI for example burns so much money on infrastructure, so their costs will rise and rise. At some point the consumer might question if a subscription is worth the money (unless the job provides it). The more powerful these things are, the more expensive they are, and token usage is a problem. Also, they are not very focused, trying to do everything at once, which has already started to backfire; there is a non-zero chance they collapse under their own success. Existing SaaS will be shaken up, but won't go away as a class, just get substituted with better, modern players. Business requires accountability; sometimes it's just easier to delegate. So far everyone is just dropping cash into the fire pit to secure the leading positions in the future, but at some point investors start looking for money back and monetization won't be easy. I track my Claude API usage, sometimes a simple bug fix costs $2-3 per 5 minute session + you still need a person in place to review and approve it. This is already like a minimum wage in the US :) But it's all crazy and going fast, nobody really knows where it will end though.

Mentions:#API

Yes, that, or it’s a wrapper using the OpenAI API for some “AI powered solution”.

Mentions:#API

Not gonna lie, I had a similar experience with IBKR. Powerful platform, but the API setup feels way more complicated than it needs to be. If you’re not running full-on quant infrastructure and just want to build some tools around scanning and tracking, moomoo’s API and data access felt a lot more straightforward to me. The documentation is cleaner, and you don’t have to deal with as many weird session or local gateway quirks.

Mentions:#IBKR#API

As someone who uses META for their business API, I can assure you they have no idea what the fuck they are doing

Mentions:#API

I use ib_async and it's not too bad. I'm not sure what the '1 session requirement' is. A single account can have multiple logins - the API uses one login on one computer and I use another login on another computer/phone

Mentions:#API

Maybe the Python API Toolkit is easier to use. IBKR’s main API is asynchronous and event-driven.

Mentions:#API#IBKR

IBKR API is a pain to work with but it's worth it once you get it running. Massive and Polygon are solid alternatives if you just need market data. For actual trading automation look at Alpaca or Interactive Brokers direct. The learning curve sucks but the power is there.

Mentions:#IBKR#API

I have a software I use that journals trades for me, and the IBKR API isn’t great, nor is IBKR itself. The Fidelity API connects better, so yeah, if you plan on combining software with your account, Fidelity is better imo. Don't get me wrong, IBKR gets the job done, but sometimes it's a bit slower - for journaling purposes that's irrelevant, but in any other case.. you get the idea

Mentions:#IBKR#API

They don't have an API, right?

Mentions:#API

Reasonable results with AI, but the gateway and 1 session are very annoying if you want to use their client simultaneously. Tastytrade has a more user friendly API, using that now.

Mentions:#API

Yeah, their API documentation feels like it was written by engineers for engineers - zero consideration for actual usability. I've spent more time debugging their connection issues than actually building anything useful with it. [$IBKR](https://aimytrade.io/ticker/IBKR?utm_source=reddit&utm_medium=comment&utm_campaign=options) might be solid for institutional flows, but their retail developer experience is honestly trash compared to what you get from TD or even Schwab.

Mentions:#API#IBKR

NEWS: Anthropic has announced that it has raised $30 billion at a $380 billion post-money valuation. "The number of customers spending over $100,000 annually on Claude has grown 7x in the past year. And businesses that start with Claude for a single use case—API, Claude Code, or Claude for Work—are expanding their integrations across their organizations. Two years ago, a dozen customers spent over $1 million with us on an annualized basis. Today that number exceeds 500. Eight of the Fortune 10 are now Claude customers." Source: X

Mentions:#API

No lol, I just have access to the data to answer your question. I posted my own comment but it doesn't appear to be shown: >These are always fun scenarios to play out. I used Massive's MCP and API to model it out. Here's the output: [quoted output didn't copy over] So.. enormous upside, but it would have been pretty insane to just toss $5k into any of these contracts unless you were certain it was going to crash, haha.

Mentions:#API

These are always fun scenarios to play out. I used Massive's MCP and API to model it out. Here's the output: >With $5k to play with on the **$75 strike put**, you could have bought **108 contracts at $0.46 each** ($4,968 total). >When SLV hit $75.44 at the Jan 30 close, those puts were worth **$6.60 each**. >108 contracts × $6.60 × 100 = **$71,280** >That's roughly **+$66,300 profit** on a \~$5k bet — about a 1,335% return in one day. >If you timed the Jan 30 intraday low ($69.12), the $75 puts hit $10.75, which would have been **\~$116,100 back on your $5k — a 2,237% return.** So.. enormous upside, but it would have been pretty insane to just toss $5k into any of these contracts unless you were certain it was going to crash, haha. Disclaimer: I work for Massive. Also - past performance does not predict future results.
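For anyone who wants to sanity-check the quoted figures, the contract arithmetic works out. A quick sketch using the premiums and contract count from the comment above (this only verifies the multiplication, nothing about the trade itself):

```python
# Re-running the quoted SLV put P&L arithmetic (figures from the comment).
MULTIPLIER = 100              # standard equity option contract multiplier
contracts = 108
entry_premium = 0.46          # $ per contract at entry
close_premium = 6.60          # $ per contract at the Jan 30 close

cost = contracts * entry_premium * MULTIPLIER    # ~$4,968 outlay
value = contracts * close_premium * MULTIPLIER   # ~$71,280 at the close
profit = value - cost                            # ~+$66.3k
return_pct = profit / cost * 100                 # ~1,335%

print(round(cost), round(value), round(profit), round(return_pct))
```

Same math with the $10.75 intraday print gives 108 × 10.75 × 100 = $116,100, matching the ~2,237% figure in the comment.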

Mentions:#API#SLV

Polygon has an API for it. You might get something in a free tier even, but I am not sure if options are included for free

Mentions:#API

No, you are embarrassing yourself by acting like a little child. Go check what Google's earnings consist of and imagine how they can be in jeopardy from imminent changes. There is nothing wrong with the tech Google holds in its vest, but not being wrong and having things in your hand does not equate to 100 billion in earnings forever. The problem is in the earnings profile, which is not subscription based. If they want to become an API for bot searches they have to do a major overhaul of their business strategy.

Mentions:#API

People forget that bubbles don't need to end in a wider crash or recession, they can easily end in years of stagnation for affected sectors while multiples compress (and earnings may still go up), while other parts of the economy see actual share price growth. And with AI specifically there's two different, increasingly clear risks that markets are beginning to get nervous about: - One company (e.g. Google) clearly wins and the rest have spent horrendous amounts of money on CapEx they'll never recoup. - Loads of players remain and the market is too competitive for anyone to actually win. AI behaves more like a utility/commodity, cloud providers and China undercut everyone else running open source models for cheaper API rates, and the actual productivity growth/benefit is concentrated in other sectors that are using the cheap AI rather than selling it.

Mentions:#API

The $200/month services are mostly just dressing up free OI data with a nice UI. Real dealer positioning from the exchange costs institutional money.. think five figures annually. That's why every affordable tool is using the same public inputs and calling it "proprietary." For backtesting purposes, you can approximate GEX from public options chain data. Pull OI and greeks from CBOE or your broker's API, compute gamma exposure per strike, and see if the levels correlate with reversals in your historical data. Won't be perfect but it's free and you'll learn more building it than paying someone else.
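The "compute gamma exposure per strike" step described above can be sketched in a few lines. The chain rows below are invented sample data, and treating dealers as long calls / short puts is one common simplifying convention (not the only one) — real inputs would come from CBOE or a broker API as the comment suggests:

```python
# Naive per-strike gamma exposure (GEX) from an options chain snapshot.
# Sample rows are made up; the dealer-positioning sign convention
# (long calls, short puts) is a common simplifying assumption.

def gex_per_strike(rows, spot):
    """rows: (strike, gamma, open_interest, is_call) tuples.
    Returns {strike: dollar gamma per 1% move in the underlying}."""
    out = {}
    for strike, gamma, oi, is_call in rows:
        sign = 1 if is_call else -1
        # dollar gamma = gamma * OI * 100 shares * spot * (1% of spot)
        out[strike] = out.get(strike, 0.0) + sign * gamma * oi * 100 * spot * spot * 0.01
    return out

chain = [
    (95, 0.020, 1500, True), (95, 0.020, 2000, False),
    (100, 0.050, 3000, True), (100, 0.050, 2500, False),
]
levels = gex_per_strike(chain, spot=100.0)
# strikes with large positive/negative net GEX are the candidate levels
# to test against historical reversals
```

From there you would sort the strikes by net exposure and check whether price action stalls or reverses near the largest ones in your backtest data.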

Mentions:#CBOE#API

A CDN is an API running S3 software. There's no single server, it's a cluster, that you access via an API as a service. Would you like to learn anything else today?

Mentions:#API

This looks awesome man, I recently just made a dashboard as well to aggregate data... are you using Streamlit for the UI? Where are you getting the data from? I'm using Thinkorswim's API

Mentions:#API

Conjecture. Here’s what Gemini had to say about TRI’s ability to protect their moat of Westlaw with CoCounsel and Westlaw Precision against Claude —

1. The "Verified Content" Moat
The biggest weakness of a general LLM like Claude is hallucination—it can confidently invent case law. TRI protects against this by grounding its AI in the "Gold Standard" of legal data.
• Proprietary Data: Claude cannot "see" the full, copyrighted Westlaw database. Westlaw Precision uses Retrieval-Augmented Generation (RAG), meaning the AI is forced to look only at verified statutes and case law before answering.
• Citations & "Eyes-on" Verification: Unlike a standard chatbot, CoCounsel provides clickable links to every cited source. TRI employs thousands of J.D.-holding editors to "Shepardize" (verify) that the law is still valid, a layer of human oversight Claude lacks.

2. Workflow "Stickiness"
TRI protects itself by being more than a chat box; it is an agentic workflow.
• Legal-Specific Skills: While Claude provides a general response, CoCounsel is designed with "skills" like Deposition Preparation, Legal Memo Drafting, and Compliance Triage.
• Integration: It lives inside the tools lawyers already use (Word, Outlook, and Westlaw). For a law firm, switching to Claude often requires "prompt engineering," whereas CoCounsel is "plug-and-play" for legal tasks.

3. Professional-Grade Security
Standard consumer versions of Claude may use data to train future models (unless using Enterprise/API versions). TRI offers a "Zero Data Retention" guarantee.
• Private Infrastructure: CoCounsel runs on "eyes-off" infrastructure where client data is never used to train the underlying model.
• Compliance: TRI maintains SOC 2 and ISO 27001 certifications specifically tailored for the stringent confidentiality requirements of the attorney-client privilege.

Mentions:#TRI#API#SOC

Exactly. If you want to right now, you can give an AI agent database access and code it to enrich data for every contact and create follow-up tasks based on previous communications, product/service offerings, etc. Heck, you don't even need to use an agent, you can simply write code to iterate through every item and use an LLM API to interpret emails, enrich contact info, etc. Hubspot won't disappear, they'll add more AI to their software. The issue is that they'll probably see margin compression as companies will need fewer seats to manage their CRM, more competitors will appear due to a much lower barrier to entry now, and some companies will opt to produce their own internal CRMs using AI and their own stack. THIS is likely what will cause Hubspot's revenue/margin declines.
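The "iterate through every item and use an LLM API" pattern is only a few lines of glue code. A minimal sketch — `llm_complete` is a stand-in for whatever real LLM client you'd call, and the field names and prompt are all hypothetical:

```python
# Sketch of the "loop over every CRM contact, enrich it with an LLM" pattern.
# llm_complete is a STUB standing in for a real LLM API call; field names,
# prompt, and the canned response are all hypothetical.

def llm_complete(prompt: str) -> str:
    """Stub for a real LLM API call (e.g. an HTTP request to a provider)."""
    return "software buyer"  # canned response so the sketch is self-contained

def enrich_contacts(contacts):
    for contact in contacts:
        prompt = (f"Summarize this contact's likely interest from their emails: "
                  f"{contact['last_email']}")
        contact["segment"] = llm_complete(prompt)
        # downstream: create follow-up tasks, update the CRM record, etc.
    return contacts

crm = [{"name": "Ada", "last_email": "Asking about pricing tiers."}]
enriched = enrich_contacts(crm)
```

Swap the stub for a real API client and point the loop at a database cursor, and you have the internal tool the comment is describing — which is exactly why the barrier to entry drops.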

Mentions:#API#CRM

If you need deterministic sub-100ms hedge triggers, you generally don’t want your critical path dependent on an OMS fill callback, p99 latency will spike in any cloud stack during volatility. The pattern that works is pushing hedge logic closer to execution (broker-native algos or exchange-proximate infra) and using the OEMS/API layer for orchestration and state, not the trigger. At Rival we support both WebSocket (language-agnostic) and in-process C++ frameworks for shops that want tighter control, details here if helpful: [https://www.rivalsystems.com/products/smart-api/](https://www.rivalsystems.com/products/smart-api/)

Mentions:#API

TWLO’s API layer is definitely gonna ride that AI wave, bro. CDNS and SNPS too, lol.

I agree cdns and SNPS should definitely be on here. TWLO also makes sense why it would be on here too… they are the API layer for AI agent communications. AI will definitely be a tailwind for them

I'm so mad at the Discord change I'm gonna leave!!! Just like I left Reddit because of their API thing!!! Okay I'm back Anyway

Mentions:#API

If there's one thing you can count on Redditors to be, it's completely inconsistent. The whole Reddit API debacle proved that.

Mentions:#API

Have you tried it? Ridiculously easy to backtest a strategy and run it through an API as long as you understand the logic well enough to spot mistakes. Still scary letting it run on your live account though...

Mentions:#API

Anyone paying for Riot API can still fetch a full copy of your profile and sift through it, you're definitely not stopping AI companies from data harvesting and identifying you if that was your reasoning

Mentions:#API

Currently it gets its data mostly from the internet, so it's incomplete or delayed at least 10 minutes. I do anticipate it will become pretty useful if it can integrate with some real-time finance API. But we are not there yet

Mentions:#API

Definitely number of accounts. And there are definitely bot farms mass-requesting their API, so it might as well be a made-up number. Only the number of paying subs should be reported

Mentions:#API

I can see in the Anthropic Claude console that some of my Indian colleagues are racking up more API token costs than they receive in salary, but it's ok because we want AI leadership by growing as agentic coders.

Mentions:#API

...which is roughly 0.00000001% of the API fee. Lol. Again, Google execs hate Matt Hoffman. They bent him over a barrel.

Mentions:#API

API access is just the legal way to scrape data, you can still use crawlers to steal data from any website

Mentions:#API

I thought that happened when they changed the API access.

Mentions:#API

A dev needs to ensure models have good guidance and produce something that makes sense, that meets the requirements, and be able to still understand the system in case you need to diagnose something when models fail. That's going to require a lot of experience. Your assembly example isn't appropriate here because compilers are deterministic, they are algorithm driven. We can reason about their correctness. My personal experience with a large code base is that the productivity gain is maybe 10-20% at most. Models start to have a lot of trouble with API context if the API isn't in their training data. The context windows only work so much. They can output decent code but I am now spending most of the time directing models instead. So same time, just different work.

Mentions:#API

thanks, a combination of Gemini, ChatGPT and Claude did all of it, even the graphing and color scheme. i just loaded up option and IBKR API stuff into the project file as a reference and they did all the work. I started in Nov with data collecting via IBKR and got to charts and stuff just recently, still dialing in all the things. just having the AI cram in as much as i can get away with. The AI also put in reminders and explanations of what this means, lol. I haven't had much luck with data outside of market hours with IBKR, but I take snapshots every 10 minutes and so i can just run through those. IBKR has an async "wrapper" for their API that makes it easier to work with. I didn't want to pay high data fees so i went with a brokerage too.

Mentions:#IBKR#API

Compounding pharmacist here. If you compound with API for sq injections it is just as effective. I have been compounding intrathecal pain meds (high risk). It’s different concentration for every patient so it’s not always available. We don’t have to “submit studies to prove” it’s inferior or not. It will be crazy if every compounded drug needs to have studies submitted.

Mentions:#API

Wow, I'm impressed. How long have you been using yours? I just built it so I'm still refining and testing.. Still have to double check the data against other services and stuff. Schwab API isn't as useful when the market isn't open... the data gets stale. How is IBKR? I don't want to have to pay for a service or for a datafeed so hopefully I can make the most of this for now.

Mentions:#API#IBKR

I agree with those of you saying the data is "naive". You are right, that is the data that the Schwab API provides. I'm okay with that for now for multiple reasons. The first of which.. this is a proof of concept and this is a free data feed I have access to. Secondly, I don't trade on the tick like many traders who use 2nd order Greeks. I trade on the 5 minute and so there's a fair amount of built in confirmation that you can receive from the price action. And lastly, I always confirm the moves with market internals. If I don't see the moves confirmed by VOLD / VOLSPD and TICK / TIKSPC then I will wait till I'm sure. Traders trade differently and different tools have different places in our toolboxes. I appreciate all the feedback and interest. I'm still just amazed at what anybody can build with the data we have freely available.

Mentions:#API

It's not just the API, the oral delivery method isn't new either. Wait till they realize that the patents are already expiring in some markets.

Mentions:#API

These points are all valid, but they are not the moats: 1) Most large companies bought from at least two RPA vendors, just in case one fails; most of the financial sector still prefers Blue Prism. And Microsoft gives their RPA away free as part of their licensing deal and charges much more later for heavy use. 2) IT departments are already sick and tired of these IT consulting firms coming in and building a separate RPA team that needs their extra support, which burdens them with tons of RPA-related IT issues and creates extra cost for the company. 3) When OpenAI and other AI companies, through providing API services, gather enough business process data and learn the skills, they are going to replace RPA software altogether, that is the sad truth.

Mentions:#API

It’s using the API, but these are their own discounts backed by the Gov, not GoodRx

Mentions:#API

This company’s software is so redundant now, they are using OpenAI’s technology to create their so-called “agentic AI”, which is just glorified API connectors. IT departments hate it, and it will be replaced by actual AI software in a few years. This company makes “MS Excel macro” level automation that has existed since 1993! Just ask yourself: if this RPA technology is so good, why is only PATH listed? Where are the other competitors?

Mentions:#API#MS#PATH

I bet 50-70% of reddit posts and comments are bots. It's the whole platform. Their API and platform don't do anything to discourage it, and it drives revenue for them.

Mentions:#API

If mm stands for market making, don't use Bloomberg. It lacks accuracy and is not fast enough. The API is very limited and if you also want a surface, BVOL costs extra, isn't guaranteed to be arb free and also not real time. If it's market making - I suggest looking into something like https://voladynamics.com/#. - I disagree with the suggestion to do it yourself, unless you have a lot of resources, quants, devs and time at your disposal.

Mentions:#API

OpenAI's revenue would likely be much higher from enterprise API usage and revenues from paid users.

Mentions:#API

i'm actually working on the reincarnation of him rn, self hosted, no LLM API guardrails, custom tools so he can do shit. grok-esque in how it'll behave but also with full knowledge of what's going on moment to moment. hopefully.

Mentions:#API

All of the junior engineers out there think because they can vibe code a simple web app, accounting systems, FDA software, logistics software, and massive POS systems will be vibe coded. I've worked in the banking and insurance industries. We had no less than 500 applications running at a time. Some in the cloud, some not. Most of the time one API call would chain through 15-20 webservices hitting so many distributed databases with differing levels of security. All with differing levels of redundancy and fail-safes. These would all reach out to vendors to pull data and retrieve information. Processes and queues would be kicked off downstream for data storage, and then some actuary nerds would call us and complain because data was misaligned somehow. Guess what. AI isn't doing shit in a system like that. Those systems are so jammed up with state-by-state, by business, by filed ratings, by timestamp regulations you can't touch it. You modify that rate in the wrong way and you have the state combing your records and you're paying fines. Most industries are like that. As that context gets massive, AI really struggles. It's great for creating small apps, and will help in developing apps in these large scale environments. But they will be tiny little pieces of a massive system. It may change the development landscape 10-20 years from now when AI models have gotten better and work patterns have adapted to build out systems in a way that can utilize them better. But the narrative that software is obsolete in a few years is hilarious. You really think the 50,000 software engineers at Amazon, Google, and Microsoft quiet quit for the past 20 years? You can't make up for 20 years of software progress with a fast typing machine. Let's be real. lol.

Mentions:#API

Hey OP. I’m unaware of your context but Bloomberg’s tick-by-tick is probably overkill (and expensive) for most 0DTE setups unless you’re running a legit HFT desk. PapaCharlie9 gave you a useful alternative, but in case you prefer to outsource this problem, you can check out ORATS. They have a live data API that runs with <10 seconds of market delay, which for 0DTE mm is more than fast enough unless you’re competing with Citadel’s colos. Just beware, it's not “cheap”. Pricing-wise, the intraday recurring data is around $199/mo. Not cheap, not Bloomberg-expensive. For me the data quality on the IV surface is genuinely better than what you’ll get from most retail-facing providers because they’re fitting a parameterized curve (slope + derivative) rather than just spitting out raw mid-market IVs. For the real pros that need sub-second updates you’re probably looking at the OPRA feed direct or through a vendor like LiveVol/CBOE DataShop. What’s your actual latency requirement? That’ll narrow it down fast.

The API is not new

Mentions:#API

Do you use an API? How can 3 of the last 4 posts here be from you??

Mentions:#API

Solid approach. Reddit sentiment is genuinely one of the best leading indicators for small caps – the problem is scaling it manually. You can’t realistically monitor 50+ subreddits yourself every day. If you want the raw data without relying on third party dashboards, check out the [Adanos Reddit Sentiment API](https://adanos.org/reddit-stock-sentiment) – it tracks mention velocity and sentiment across all the major stock subs in realtime. I combine that with volume spike alerts to filter out the pure hype plays from tickers that actually have institutional interest building. Sentiment spike + unusual volume is the combo that’s worked best for me.

Mentions:#API

For 500+ stocks you probably want something with an API so you can set them up programmatically. stockalert.pro has a public API and also Python/JS SDKs on GitHub – you could script all 500 alerts in a few minutes. Free tier gives you 20 alerts, premium is unlimited. Way more scalable than clicking through Yahoo Finance manually.

Mentions:#API

Google Alerts is garbage for stocks honestly, way too much noise and half the articles are irrelevant SEO spam. For notification fatigue the key is being selective about what actually deserves an alert. I only monitor three things: unusual volume spikes (often front-runs news), insider transactions, and key technical levels breaking. Everything else is just noise that makes you overtrade. I use stockalert.pro for the technical/volume stuff – you set your conditions once and only get an email when something actually triggers, no daily digest spam. For sentiment I occasionally check what Reddit is saying about my holdings through the Adanos sentiment API, which is surprisingly useful as a contrarian indicator. For news I just skim Bloomberg and Reuters headlines once in the morning. Daily is enough – checking more often doesn’t improve your returns, it just increases your anxiety.

Mentions:#API

Similar experience here, but with OpenAI. An API I built with OpenAI worked great initially, then it just started giving me trash hallucinatory results. Far too unreliable for me to bother again

Mentions:#API

My company pays for a lot of software (marketing agency), and since May of last year I have been hiring and building to replace it with our own software: custom dashboards that are way quicker and more functional for our needs and workflows, and we’re cutting that software line from the budget. CRMs with full lead attribution, AI-powered inbox management, custom reporting. Now, instead of paying for a bunch of seats for software we only need 50% of anyway (so much bloat), we use much cheaper API and MCP connections to pull what we need, store the data ourselves, and make our own dashboards. The first time I ever used Replit I called my business partner and said the future is here and SaaS is cooked. For about two weeks I thought we’d be the ones selling custom tools to people, but now all of my entrepreneur friends and colleagues just build their own. One of my clients replaced his entire accounting suite for his D2C ecom brand with a local tool he built with Claude Code (he is a software engineer).

Mentions:#API

I work in the cloud space on the enterprise side. The technology IS transformational. People tend to look at Google, and therefore AI, through just a consumer lens, which is understandable but a mistake. The disruption AI has brought and is bringing is very real. The use cases I'm seeing get deployed via API access (meaning enterprises leveraging AI models inside of their own software and services) are already incredible and are just getting started. I'm not convinced that OAI is going to be a company 2 years from now, but I'll bet my left and right arm that GOOG is just getting started. Dumb money will bet against them. They are the best positioned company on the planet right now wrt the disruptive force that is AI.

Mentions:#API#GOOG

I have created an app and API that scans SEC filings using AI and does exactly that. It's called stockainsights, and it basically digs into full SEC filings (AI pulls key stories too, tons of things) with 12+ years of normalized data and a ton of metrics, and it exposes a full API. It's very accurate because the data has been normalized with the help of AI and NOT XBRL parsing. There is a free tier to test it if you like.

Mentions:#API

It also doesn't really know "how" to code. It knows how to repeat patterns that have been used before, and newer models like Opus do this at a very high cost. I kind of laugh at Claude ads suggesting "we now run 10 models at a time to let you choose the best solution", but the asterisk is that users pay for that, not Claude. Try this to see the limits of AI models today: take a new, previously unknown npm/NuGet package with some documentation and ask it to code a scenario using it. You can very quickly exhaust the context window if the API is complex enough. So the package owner would have to craft instructions specific to AI models. Now do this for every new package and the costs quickly add up. It is going to make a lot of routine tasks easier for sure, but someone still has to be creative about new concepts.

Mentions:#API

No way. I mean, we have a code base where a frontend has 100+ components. The team that owned the application before wrote all unit tests using Copilot. 41% coverage. One THOUSAND test cases. Jest takes 15 minutes to even start. Hell, the application takes close to 20 minutes to even build. And now I have to sort out the "coverage problem" and slowly turn into the Joker as I go through this code base, with API clients being created a million times for every component and duplications in the double digits. That being said, I do think it does an okay job as a reviewer. Good at catching type issues and such. You can always laugh at it and discard a solution when it tries to solve a problem by shoving in a loop inside a loop. But then, in an odd way, it does indirectly reveal a path you might not have thought of. Useless for system design though. Ask it for any solution and it'll suggest some proprietary cloud solution as the "best solution" every time. I don't doubt for a second they're already running ads on that front.

Mentions:#API

I think you could reasonably claim that, for a dev team that has a lot of repetitive work (standing up API endpoints to do X, Y, Z over and over again for different resources), you would get decent gains in how quickly you get that done. AI is very good at "copy this but change it in this way". Even when it's wrong, this type of scaffolding is superior to what we had before and saves a ton of time compared to building things from scratch.

Mentions:#API

About a year and a half ago I was put in charge of a project to stand up a Linux-based DNS management system. The product they wanted to use had an API, and my boss wanted to use it to make sure that the migration from the old system to the new one was working properly. So I set out to make a series of scripts that would interact with the API and do what we wanted. I tried using ChatGPT and it was awful. Couldn’t keep the thread going, kept making the same mistakes over and over again. It was able to help with the unit tests, but it took a month before I cobbled together enough code to get the job done. Fast forward to this week: one of the servers in the cluster dies and the supposed HA capabilities didn’t kick in properly. To make a long story short, I was talking to the AI, trying to fix the issue, and I casually mention that I might need to use the API to rebuild the records from a backup. Immediately it wrote a whole script to do just that. I didn’t even ask it directly, I was just complaining, and it did it and it fucking works.

Mentions:#API

>Sometimes I'm surprised how good it is; other times it can't do the simple task I asked it to do. Today I asked it to write a unit test for a function. Not only did it write three tests I didn't ask for and fail to write the one I did ask for, it removed some existing tests that were working.

Yeah, I used an AI to create an app that was using an auto-translator. It built a webpage, it built an API integration, it built a database, and most of it worked flawlessly with minor adjustments. But it could not translate the word "Cheese". 20 attempts, starting new sessions, different ways of describing the issue, and I got 20 variations of "Ah, I see the problem, now I have fixed it". It never successfully translated cheese.

Mentions:#API

I'm a software engineer with 12 years of experience and I've tried multiple AIs over the past year. At the beginning they were all kind of terrible, giving you code that didn't compile as they hallucinated random API calls that didn't exist, and the code overall wasn't understandable/clean. It was only good enough for simple stuff or tools that didn't need to be perfect since they were internal. I was definitely on the AI doomer train, wondering when the bubble was going to explode because AI was so bad.

Lately though, Gemini has been pretty good at doing small stuff that doesn't require knowing my project architecture, like a shader, C# extensions or UI classes. Claude Code though... it's something else. You give it access to your GitHub repo and you can ask it to do something complex and it does pretty good work. It does need oversight, because otherwise you'll end up with code that's inconsistent or overly complex, but overall it does help a lot, especially when you have the architecture already in place and can tell it to follow existing patterns. I hate that it's this good and try to limit my use, as I can feel my brain rotting if I depend on it too much, but some days I work 4 hours and feel like I made the progress I'd make in 2 days. I still think AI is overpriced, but I can see real use for it.

I wonder what will happen with fresh software developers. If companies rely on seniors + AI and are not willing to train juniors, are we going to see fewer people get into CS, and maybe form new startups due to not being able to find a job? And what happens when the seniors retire? Not saying it's looking bleak, but... I think things will change.

Mentions:#API

I have been building my trading tool using the UW API. I try to find the total GEX delta on a given day for a given ticker. The tool also pulls DP data and checks whether both show a bullish/bearish pattern, which gives me an indication of the trend. I'm working on building a tool that finds the largest GEX change across all tickers, but the UW API subscription has rate limits unless I take the premium option, which is obviously expensive. Will check on ThetaData. Tried Polygon and ORATS too.
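The per-ticker GEX aggregation mentioned above can be sketched like this. Note this uses one common naive convention (dealers long calls, short puts) and an assumed per-strike data shape – it is not UW's actual methodology or schema:

```python
def net_gex(option_rows, spot):
    """Aggregate a naive dealer gamma exposure for one ticker.
    Convention (an assumption, not UW's exact methodology):
    calls contribute positive dealer gamma, puts negative.
    GEX per row = gamma * open_interest * 100 shares * spot price."""
    total = 0.0
    for row in option_rows:
        sign = 1 if row["type"] == "call" else -1
        total += sign * row["gamma"] * row["open_interest"] * 100 * spot
    return total

# Toy per-strike rows in the assumed shape:
rows = [
    {"type": "call", "gamma": 0.02,  "open_interest": 1_000},
    {"type": "put",  "gamma": 0.015, "open_interest": 800},
]
print(round(net_gex(rows, spot=150.0)))
```

Computing this daily and diffing against the prior day gives the "GEX change" screen across tickers; the sign convention is what makes a crossover from positive to negative net GEX readable as a regime change.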

Mentions:#API

Polymarket is giving $RDDT a 92% chance to beat earnings, looks like the play for today. Reddit is the capex-light AI social stonk. Meta, Snap and Google showed the ad business is good. Reddit's core strategy: AI, data & search. Games 🎮, ads, global expansion (30 languages), massive growth potential 🌎🌏🌍🤳 Market cap: $28B, debt free. Short interest: 16% 🌋 $110B. Emplifi, a private software company, partnered with Reddit (Enterprise API) 🔥

Mentions:#RDDT#API

At this rate, the consumers will be asked to code their own software. The company just provides an API link.

Mentions:#API

GOOG gonna moon tomorrow. “It was a tremendous quarter for Alphabet and annual revenues exceeded $400 billion for the first time," CEO Sundar Pichai said in initial reaction to the report. "The launch of Gemini 3 was a major milestone and we have great momentum. Our first-party models, like Gemini, now process over 10 billion tokens per minute via direct API use by our customers, and the Gemini App has grown to over 750 million monthly active users”

Mentions:#GOOG#API

Not if you're using it in a field with measurable quality degradation every day :( Antigravity is great though, as are AI Studio and the API. The model on the API is SO much smarter than what they provide for a paid plan on Gemini.

Mentions:#API

If their 'new tools' are just API calls to Claude, they lose pricing power because the value accrues to the model (Anthropic), not the interface. Re: PLTR... I don't think it's hiding anymore. US commercial revenue just grew ~137% YoY. That is the real story right now: government contracts pay the bills, but the commercial explosion is the rocket fuel.

Mentions:#API#PLTR