Reddit Posts
Download a dataset of stock prices for X tickers for yesterday?
The tech market brings important development opportunities; AIGC is firmly No. 1 in the current technology field
The AIGC market brings important development opportunities as artificial intelligence technology keeps developing
Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
OTC : KWIK Shareholder Letter January 3, 2024
The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
Why Microsoft's gross margins are expanding (up 1.89% QoQ).
Google's AI project "Gemini" shipped, and so far it looks better than GPT-4
US Broker Recommendation with a market that allows both longs/shorts
A Little DD on FobiAI, which harnesses the power of AI and data intelligence, enabling businesses to digitally transform
Best API for grabbing historical financial statement data to compare across companies.
Seeking Free Advance/Decline, NH/NL Data - Python API?
Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
Aduro Clean Technologies Inc. Research Update
Option Chain REST APIs w/ Greeks and Beta Weighting
$VERS Upcoming Webinar: Introduction and Demonstration of Genius
Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
Short Squeeze is Reopened. Play Nice.
Created options trading bot with Interactive Brokers API
Leafly Announces New API for Order Integration($LFLY)
Is Unity going to Zero? - Why they just killed their business model.
Looking for affordable API to fetch specific historical stock market data
Where do sites like Unusual Whales scrape their data from?
Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
[DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
API or Dataset that shows intraday price movement for Options Bid/Ask
[Newbie] Bought Microsoft shares at 250 mainly as I see value in ChatGPT. I think I'll hold for at least +6 months but I'd like your thoughts.
Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
I found this trading tool that's just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isn't this a violation of Reddit's new API rules?
I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
Fundamental Stock Data for Your Projects and Analysis
Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly
Pictures say it all. Robinhood is shady AF.
URGENT - Audit Your Transactions: Broker Alters Orders without Permission
My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
To recalculate historical options data from CBOE and find IVs at the moment of trades, what interest rate should I use?
WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
$SSTK Shutterstock - OpenAI ChatGPT partnership - Images, Photos, & Videos
Is there really no better way to track open + closed positions without multiple apps?
List of Platforms (Not Brokers) for advanced option trading
Utopia P2P is a great application that needs NO KYC to safeguard your data !
Utopia P2P supports API access and ChatGPT
Stepping Ahead with the Future of Digital Assets
An Unexpected Ally in the Crypto Battlefield
Utopia P2P has now an airdrop for all Utopians
Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
Reddit stands by controversial API changes as situation worsens
Mentions
Nah, it's lobbying, I'm sure. He seems to be building up the infrastructure for age verification in the US that requires an API; I'm sure he will charge people for it.
Today everyone is confused about the AI bubble, "is it going to pop," "when is it going to pop," because the market is correcting itself right now. If we look back 4 years, when everything was new and ChatGPT 3 had just been released, there were many new startups opening every single day that were essentially just a ChatGPT API key with a wrapper, marketed around some niche problem that ChatGPT itself would do much better (which also holds true today; I am only talking about small startups). There were also some unicorns among them which genuinely solved real problems. But the problem now is that as the industry grows, like any industry, with more and more hype every day, the competition grows, it gets harder to compete, and even good tools are being replaced. For example, Blackbox AI was once the best for coding, but now people use Claude Code; even ChatGPT has been replaced by Claude for coding. The problem is not whether there is a bubble or whether it is going to pop; the main thing we all need to focus on is what is going to change once everything settles down. What new opportunities will it create? There is a storm right now where even the most uneducated people are creating startups, people who don't know a thing about computers or software, and the funny thing is they also manage to get investors, because investors want to win; they care about winning. And it's not that the investors are stupid, it is just that the technology is very new and it is hard to pick a winner, so people with even a decent pitch get the capital, even if that startup shuts down after 6 months and the investors lose all their money. The investors' mindset is: fund the crowd, and if even one or two become winners, it's a gold mine for them.
Wish Apple would dump that shitty Yahoo Finance stocks API
Custom software I built. API to Tradier
The problem is, like the original internet boom it is full of garbage that is going to fail. Basically anyone that has built a business around calling the ChatGPT API to do something that isn't that valuable and doesn't pay for the tokens it uses is liable to fail.
They have revenue, use cases, customers, and are getting into the military. Now, OpenAI might be overleveraged, but Anthropic is already turning a profit on API calls and is projected to become profitable overall in a couple of years. Way different situation than pets.com getting a $300 million valuation by selling litter online at a loss
US to release emergency reserves... starting with almost 200m barrels immediately, more to follow. Pipeline flows out of the Bakken and Permian basins picked up overnight according to flow data on the backend API data.
The worst part is it's like 19 API, it's not even that aromatic/heavy. So you don't get volume swell and good diesel yield. It just cracks to light ends in your coker. We valued it at ANS-20.
I want to make sure this works and catches edge cases before creating API automation. Not gonna waste my time if it sucks
I reverse engineered the Robinhood API last year and have been collecting options and futures data for many months now. I have something like 80,000 "snapshots" so far with full Greeks data, volume, bid, ask, etc. The idea is to use LLM power to analyze and find "true market edge". So far, out of 180,000 strategies tested, not a single one actually showed "true market edge". Many show positive P&L but fail in reality. The market CANNOT be predicted; never fall for any bot trading or anything like that, it all fails the "true market edge" test. And this is why I just follow the trend and don't try to predict anything anymore.
Not running it locally, it’s deployed on Vercel with PostgreSQL on Azure, so the scanner just runs as an API call. The pipeline also cuts aggressively before the time consuming calls. Starts with [N] stocks, filters down to 18 finalists, then runs the heavy enrichment only on those. Never blasting all [N] through the full thing.
For market data: Finnhub Premium (news, analyst ratings, insider activity, earnings history, peer groups), TastyTrade API (live options chains, Greeks, IV rank, liquidity ratings — free if you have a TastyTrade account), FRED for macro data (free, Federal Reserve), SEC EDGAR for filings and business descriptions (free), and xAI/Grok for social sentiment.
Disagree, they pull in news from all sources. There's no chance you'll miss a single piece of news of a stock. If it gets cluttered you can filter on news type to find what you need. I'm gonna be honest most brokerages don't even have a good news page like IBKR, I doubt you'll be able to do it better. Comes with a ton of API integrations and algorithmic filtering. Portfolio tracking works fine, there are some quirks with it like deposits counting as profit made but every broker app I've used does that. You absolutely want a UI to be complex, that's the whole point of a professional broker app. If you want something simple go to Robin Hood. And their web interface really isn't that complex in the first place.
I'll just leave this here: The American Petroleum Institute (API) releases its Weekly Statistical Bulletin, which includes data on U.S. crude oil inventories, every Tuesday afternoon. This report provides insights into the weekly changes in crude oil supply and can influence market prices.
Datavault AI Inc. (NASDAQ: DVLT) is a small-cap tech company (market cap around $395–$430 million as of early March 2026).

* **Institutional buying surge:** Major increases reported March 3, 2026, e.g. Vanguard (~2,900% increase to 11.8M shares), State Street (~2,800% to 10M), BlackRock (~3,000% to 4.1M). This fueled optimism.
* **Strategic investments/acquisitions:** $150M from Scilex Holding, API Media acquisition.
* **Aggressive guidance & token plays:** 2025 revenue guidance updated to $38–$40M (up ~30%), 2026 target $200M+ (even $2–3B long-term potential via nationwide nodes). Token/coin distributions (e.g., Josh Gibson Coin dividend 1:1 planned for April 2026, meme coins like Dream Bowl Draft) created buzz and short-term pumps.
* **Weak fundamentals:** Ongoing losses, negative EPS, modest current revenue vs. lofty targets. The market is skeptical of execution on the $200M+ 2026 guidance.
* **Broader small-cap headwinds:** Rate sensitivity, macro rotation away from speculative tech/micro-caps, and general caution in the AI/Web3 space amid hype fatigue.

Classic pump-and-dump type of company that goes up and down on AI and Web3. Waiting for the earnings report to hopefully come out better.
Then how do you bring it all back together? And is that in Code, or the API, or a regular project? Do you just ask it to run subagents and it does? Or is there some way to induce it that I don't know about? Does it work in normal chat mode on a subscription, or do I need a pay-per-token API setup for that?
I mean, I agree that the threat side is scaling rapidly & AI is changing a lot... I'm not sure that's a reason to be apprehensive of the sector though. Vulnerabilities and exploits are already outpacing manual patching, which is exactly why enterprises are spending more on consolidated security platforms w/ AI detection. Bots currently account for over half of web traffic, which is a tailwind for firms focused on identity, bot management, and API security imo. I am thinking the winners will be the cybersecurity companies with customer data, distribution, and the unit economics to turn their offerings into durable high-margin revenue (ex: CRWD, NET)
They are the API keys to my heart.
then you could plug one in and make api calls. try it. calls on API
backtesting is on the roadmap. TastyTrade has a full backtesting API with 10+ years of real historical options data. The problem is their OAuth tokens work on all their standard endpoints but get rejected by the backtester endpoint. it uses a different auth token. I’ve reached out to their API partner team to get proper access. Once that’s sorted, the plan is to wire it directly into the trade card
In my experience, LLMs struggle with the precision required for option pricing, so relying on them for raw calculations usually leads to hallucinations. I've tried using Bloomberg Terminal, ThinkOrSwim, or Interactive Brokers' API for this, but they lack the integrated reasoning layer you're looking for. I've been using https://trade-matrix.com to handle the heavy lifting. It pulls data from multiple sources, cleans it, scores it, and displays a stock score with confidence to build conviction. Not sure if it handles complex multi-leg hedging strategies perfectly, but it might save you from building a custom pipeline.
What you’re describing is definitely a tricky space most general purpose LLMs don’t have live market access out of the box, so combining reasoning with current prices usually requires a bridge to a market data API. In practice, the approaches that work best are either LLMs augmented with real time feeds through something like an Azure or AWS pipeline, or tools built into broker platforms that expose analytics and option chains for programmatic access. In my work at Lifewood Data Technology, the key is structuring the data so the model can reason over positions and prices together; without that, strategy level insights tend to be high level and not portfolio specific.
I disagree. First, I don't plan on publishing it. Second, I really did learn a lot, and for just a $200 subscription I built a product for myself that I can use and that fits my specific needs. The thing is it works, and it's catered to exactly what I need. As a matter of fact, this is the second app I've written that we actually use; all of it was made with a disclaimer that it is not something that's going to get long-term support. However, when you need a quick solution or a stop gap and you don't have to worry about issuing POs or anything else, that makes it a lot easier. I mean, I still think AI is a lot of hype and it completely doesn't justify the real costs, because if I had to pay the unsubsidized rate I would not use it at all. It probably would have been close to $2,000 to $3,000 worth of API calls. However, if you set everything up correctly, you can literally have agents do almost all the work for you in parallel, and you're just capped by having to wait between sessions once you're past your usage limit.
If your broker has an API it would be simple to vibe code an app that has your positions and whatever market data you want. I use Schwab and the yfinance API to get Yahoo Finance data. Hook up an Anthropic API key and you'd be able to ask whatever questions you want about your positions and the market. You should be able to use the most basic models to ask the questions you mentioned, so it probably wouldn't cost much per response.
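A minimal sketch of that wiring, assuming yfinance for quotes and Anthropic's official Python SDK for the questions. The ticker list here stands in for positions pulled from your broker's API, and the model name is just a placeholder; check Anthropic's current model list before running it.

```python
# Sketch only: positions are hard-coded instead of coming from the Schwab API,
# and the model name below is a placeholder.
import yfinance as yf
import anthropic

positions = {"MSFT": 10, "AAPL": 25}  # placeholder portfolio

# Pull the latest close for each symbol from Yahoo Finance via yfinance.
quotes = {sym: yf.Ticker(sym).history(period="1d")["Close"].iloc[-1]
          for sym in positions}

summary = "\n".join(f"{sym}: {positions[sym]} shares @ ~${quotes[sym]:.2f}"
                    for sym in positions)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model your key can access
    max_tokens=500,
    messages=[{
        "role": "user",
        "content": f"Here are my positions:\n{summary}\n"
                   "Which of them look most exposed to this week's market news?",
    }],
)
print(reply.content[0].text)
```

With prompts this small and a basic model, each response should indeed cost very little, consistent with the comment above.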
The 2.01 + 5.01 + 5.02 combo is exactly what flagged the Ventyx/Eli Lilly filing I mentioned - saw that combination and knew immediately it was a completed deal, not just an announcement. Good confirmation that it's not just my read on it. The 8.01 point is interesting, I've been filtering those out as noise but you're right that some companies use it for things that don't fit the standard items. Will start paying more attention to context there. And thanks for the EDGAR full-text search tip - I've been polling the atom feed which works but has a slight lag. Will look into the API approach.
Item 2.01 combined with 5.01 and 5.02 in the same filing is a really reliable signal. That combo almost always means the deal closed, not just announced. Also worth watching 8.01 for unclassified material events, some companies use it for things that don't fit cleanly elsewhere but still move the stock. For anyone building a system to track these, the EDGAR full-text search API lets you filter by form type and date in real time without scraping the HTML. Much faster than polling the main page.
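For anyone who wants to try that, here is a rough sketch of querying the full-text search backend behind the EDGAR search UI. The host, parameter names, and response shape are my best recollection and may change, so verify them against SEC's current documentation; the SEC also asks for a descriptive User-Agent with contact info.

```python
# Hypothetical sketch of the EDGAR full-text search call described above.
# Endpoint and parameters are assumptions; check SEC's docs before relying on them.
import requests

resp = requests.get(
    "https://efts.sec.gov/LATEST/search-index",
    params={
        "q": '"Item 2.01"',      # phrase search across filing text
        "forms": "8-K",          # restrict to 8-K filings
        "startdt": "2024-01-01",
        "enddt": "2024-01-31",
    },
    headers={"User-Agent": "your-name your-email@example.com"},
    timeout=30,
)
resp.raise_for_status()

# The response is Elasticsearch-style JSON; field names may differ.
for hit in resp.json().get("hits", {}).get("hits", []):
    src = hit.get("_source", {})
    print(src.get("display_names"), src.get("file_date"), hit.get("_id"))
```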
If you do any programming I built a real time PR news API. It delivers stock news to your app in less than 1 second. 1-week free to try and verify speeds. [https://rtpr.io](https://rtpr.io/)
Again you're missing the point. Claude can just make the tools that Palantir provides; a future Claude will just one-shot it. The issue with their commercial growth is that it hasn't changed their share of revenue coming from government vs commercial; they've been stuck on a 50/50 split a bit too long. Sure, the total revenue has seen significant growth, but this is also the nature of a contract-based venture: when the product hits the market, you get significant growth until you have saturated it or competition picks up. They also have competition from Microsoft Fabric, Oracle-based solutions, or fully custom solutions made in-house or by consultancies. Since a DBMS is also mostly a custom job or specific to whatever other vendor you use, companies are much more likely to pick something from an already established company in their pipeline, like Microsoft.

>That junk (Claude, Gemini, copilot, grok, GPT etc) works wonders for simple searching but it's just that.

Well, actually it's really good for coding, and the direction it's going seems to be around agents having MCP and access to a plethora of open-source tools, which kinda bypasses the need for a dedicated DBMS; you just need API access for your different databases.
I think he's asking for AI API integration.
We piloted Copilot last year with some training and a demo repo updated to use the tools. I did this entirely in VS Code with basic extensions, which was nice. It took a few months to adopt Copilot into my workflows for troubleshooting and scripting. I built a 'workspace' with custom/generic instructions, a project-manifest-type doc to start with or update from chat, and a VS Code task to input a project name and create a 'workspace'. I've used this to accelerate many things like:

1. Pull in PDF docs on a legacy API, process them to markdown, and provide call/URL info for devs
2. Convert a series of Help docs from said legacy system into a consistent, Copilot/machine-readable wiki
3. Identify then remediate common errors, including staging a markdown wiki that is easily exported to our team wiki
4. Create readily deployable tasks/scripts without needing to stage all new instructions/repo/etc.
5. Identify and complete a workflow for updating an old but small internal app past 6+ years of CVEs/etc.
6. Overhaul and remaster a myriad of AD/etc. scripts and tools into cohesive and better documented module(s)
7. Manage years of bookmarks and prep an easy-to-import package for a new hire with team links/etc.

Actual devs here are doing a lot with Copilot tools like agents and squads or profiles. Ex: one dev can do the following without leaving the VS Studio CLI, and in some cases it's one command/chat:

> commit > story update > create PR linked to story > prep deployment and change needs, e.g. YAML pipeline + approval requests

LLMs and gen AI are great tools. It's definitely more useful than blockchain was when it hit the market. But the hype is similar, and the tech bros don't seem to get that unless they are selling a package around an LLM that solves for any lack of mature business process, dev guidelines, etc.

-----

I'd compare it to setting a kid loose in your kitchen to bake cookies. All the ingredients are there, but they need a Recipe and oversight. If the Recipe says 'Add 1 cup butter', is that salted? Unsalted? Straight from the freezer, or room temp, or melted? Do they start mixing in a large bowl, or put the liquids in a small bowl to add to the dry ingredients in a large bowl later? Everything you know, you need to translate into the recipe/instructions. Maybe the kid asks great questions. Maybe it will just move forward with whatever assumptions it can make. Check the oven temp. Make sure they set a timer. You'll spend a lot less time personally to get a cookie, but your ~~code~~ -er Recipe reviews need to be robust. Plus, when your other kid (LLM model) goes to make a batch, you need to repeat your Recipe update and oversight checks.

The good news is a mature process (recipe) can withstand current model effectiveness. But you're also writing a detailed manual on how to replace yourself and handing it to a rapidly developing technology with minimal guardrails. If your org is decent about that, you can offload a ton of work. If their goal is to directly replace you, ta-da, you wrote the book yourself.
As someone who is working with TTD's API, I would short it. It's so much worse than DV360 or even Xandr. Also, to me, the UI seems super difficult to work with
Pulling directly from Schwab's API. If I decide to turn this into a SaaS I will migrate to Databento.
People keep complaining about annualized rates, yet every month OpenAI earns more revenue than the last. OpenAI's core business is not cyclical like restaurants are. They charge subscriptions, and API usage is constant.
They wouldn't need much compute power at all if they were just calling the API, you could do that on an old laptop. The people buying 4 mac minis are doing it to run local models.
I was under the impression most people buying these Mac minis have been doing so to run openclaw and are still leveraging OpenAI/Anthropic subscriptions or the API to do so, not running local models.
# Product Description: DearmasTrader

* **What it does:** An automated algorithmic trading platform that uses massive data processing to execute statistically validated strategies. The system lets you connect exchanges via API and operate 100% autonomously.
* **Target audience:** Traders, crypto investors, and developers looking for real returns without depending on manual signals or rigid commercial bots.
* **Key benefits:**
  * **30-day free trial:** Full access to validate the market with no initial risk.
  * **Guaranteed discipline:** Removes the human emotional factor from execution.
  * **High-powered infrastructure:** Uses its own rig ("the beast") and servers in Canada for superior data processing.
* **Use cases:** Investors who want to automate their capital (like the real case of $1,000 currently trading) and users who want to optimize their strategies through data combinatorics.
* **Unique selling propositions:**
  * **Combinatorics:** Unlike other bots, DearmasTrader tests thousands of parameter combinations in milliseconds on 1-minute candles to find the statistically most robust configuration.
  * **Tech stack:** Built with **C#/.NET** on the backend and **React** on the frontend, ensuring robustness and speed.
* **Alternatives/Competitors:** It differs from **3Commas** in its greater flexibility and customization, and from **ProfitTrailer** in being much more accessible to configure and use daily.
The OP asked why the price is objectively underpriced. What you're saying is likely what many believe and how they look at AMD. Yet it's a fundamental misunderstanding of AMD. First, if you want to characterize AMD's AI initiative as copying Nvidia, you're only focused on the razor-thin veneer that there is at least a $1T TAM to be addressed and that Nvidia will attract competition into that space. But in no way is AMD just copying Nvidia's efforts. Don't even try to call ROCm a copy of CUDA. Beyond the public API used, there is nothing that is a copy. The hardware is architecturally extremely different and in fact more advanced and capable. We continue to see model performance excel with optimizations on MI300X GPUs, outperforming B200 chips. What AMD has been doing is taking a far more arduous route of working completely open source and industry-wide friendly. The end game is to have options that can work broadly with different hardware system topographies and vendors and meet a much broader array of solution needs. This expanded scope took longer to bring to market initially, while Nvidia found one shortcut after another to nudge its overall architectural design concepts (monolithic-based design) forward and capitalize on having a short-term, first-to-market monopoly advantage. But this advantage is running out of time. AMD is on the precipice of providing full rack-scale systems via Helios that will quickly grab significant market share from Nvidia, well before Nvidia can secure enough of a foothold to ensure lasting dominance the way Intel did. I believe AMD should match Nvidia's DC market share well before 2030, and 2028 with MI500 may be where they land even before AMD pulls ahead. Why will AMD pull ahead, you ask? AI is not just a GPU game. It's full heterogeneous architecture. Even Jensen is saying this as he tries to convince you their ARM-based CPU chips are going to carry them. But those chips are trash compared to EPYCs, chips that Intel cannot touch, yet Jensen wants you to believe they can tweak off-the-shelf designs from ARM enough to handle the deterministic needs in agentic MoE-type workloads better than the monster CPUs AMD keeps improving upon. It's really admirable to see how well he sells that line, but it will only buy him so much time at the top. AMD is a company that keeps their head down and works hard at the plan, and it's not a new, borrowed, or rushed plan. This is the heterogeneous roadmap Mark Papermaster was talking about over 10 years ago, slowly and significantly made real, step by step, win by win.
No the price is mainly limited by competition. The investment required to run and manage Kimi 2.5-like models (which are not as good as SOTA closed models) locally at scale makes LLM API services worth it even if they’re several times more expensive. Profit margins on API-based inference are already very good (~70% for some labs) and are expected to become even better. Labs have the freedom to either increase or decrease cost depending on competition.
There are plenty of good reasons to take advantage of that feature. People can be really petty in internet arguments and scour your history to try to get personal or find sensitive topics to throw barbs at you for having a difference of opinion or demonstrating expertise/knowledge that makes them feel insecure. On previous accounts, I used to participate in subs related to my locale and had someone use that and comments I made about my career elsewhere to get uncomfortably close to doxxing me. After that I made a script to use the API to scrub my history once a week or so, but that of course removes my contributions to conversations that others might find helpful if they come along later - especially relevant in tech subs. Hiding the public browsing of my history was a welcome solution, you don't have to be a fraud to appreciate better privacy protection from weirdos.
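For reference, a cleanup script like the one described is only a few lines with PRAW. This is a minimal sketch, assuming you've registered a script-type Reddit app for the credentials; the 30-day cutoff is arbitrary.

```python
# Minimal sketch of a weekly history-scrubbing script using PRAW.
# Credentials are placeholders; this deletes comments older than ~30 days.
import time
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="history-scrubber/0.1 by YOUR_USERNAME",
)

cutoff = time.time() - 30 * 24 * 3600  # anything older than 30 days
for comment in reddit.user.me().comments.new(limit=None):
    if comment.created_utc < cutoff:
        comment.delete()
```

Run it from cron or a scheduled task once a week and it takes care of the rest.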
The model shifts from per-human licenses to per-call API pricing
I have actually built an app that connects to different AI models to pull data regarding market sentiment, price certainty, take profit, and everything. Works very well because I use the direct API address and can generate very long requests with it.
Yup. Another thing they don't say is that most of that distillation happened from those companies reselling anthropic API and subsidizing the resell to train on those sessions. Same shit copilot and others do, perfectly legal.
Can you give me an example of what "grunt API enterprise tasks" you would do with it?
100 percent different use case though. Claude Opus is 100 percent unnecessary for most agentic grunt API enterprise tasks.
Curious about your local set up - how much slower is it than API calls, for example, on smaller chat questions or coding exercises with lower token usage? Apologies if this is vague or not specific, just trying to gauge how slow / fast it is vs an LLM provider API
What API are you using for this? Any chance you'd share the code?
The underrated impact is local llama serving. Qwen 3.5 coder next replaced $200 a night in spend on API calls for me. For enrichment and other non-instant results, these models, when distilled, can provide amazing results for the one-time purchase of fairly inexpensive equipment for any decent-size enterprise.
Who’s gonna vibe develop an entire system, frontend, backend, API, federation, storage, security, product design, graphic design, then QA it, debug it, and deploy it just to learn a language?!?
US strikes on Iran navy escalate oil flow fears, tanking risk assets overnight. Headlines overpromise though - Strait chokepoints rarely fully close without tanker reroutes balancing supply. Pre-trade check: Scan Brent spreads and API data before oil longs - filters real shocks from gap-fill reversals. No navy dustup skips recession demand crush.
Thanks for sharing your data sources! QuickFS went down recently. It was my go to source for an Excel API.
Thanks. Sucks it doesn’t have an API but it is a lot cheaper than FMP
The software is licensed on a per-user basis, so the market is anticipating a decline in revenue. That's a short-sighted view by the market, as tech companies can easily change how they license, away from user licensing to API calls, agents, etc.
1) Tech job market - First, there were many other cycles for the tech job market before; this one can be blamed on AI, but it also might not be. There was massive over-hiring during COVID, so it would be natural for all that to unwind now, making it temporarily hard to find a job.

2) I do believe very entry-level roles are being disrupted by AI. Both can be true.

3) I didn't backtrack anything; the 10x loss is public and you can compare it yourself. Check the limits of the paid plan, then look at the API cost-equivalent (which likely also carries a loss for them), and you'll immediately see it doesn't make sense, and that the factor is about 10x (you'd pay 10x more for those tokens if you used the API instead). There are many analyses about this too.
I will leave this here on the API appetite in a GPU-starved country: [https://www.bbc.com/future/article/20251223-why-indian-cinema-is-awash-with-ai](https://www.bbc.com/future/article/20251223-why-indian-cinema-is-awash-with-ai) Do not underestimate Block's announcement. That management seems measured and did not seem worried about margins etc., even though they are a public firm. They seem to have realized it is the point of no return for AI; this will trigger other boards to get their CEOs to act, and I would claim in the near term and not the long term (US firm boards and Wall Street do not have patience). Personal anecdote: 2 weeks ago I was sitting in a consulting (Big 4) firm's presentation laying out the need for a 90-developer team (for a board-level mandate at a very large bank). I could not believe that they were serious.

> Overall I think rise in productivity will more than cancel out AI job loss for the foreseeable future. However, in the long term I agree, that we will have a societal problem, I just can't tell when

This is my thesis as well, but a lot more pessimistic given Wall Street and corporate expectations. Btw, I did fall for the self-driving hype. I am following what will happen in Australia (not the US obviously), as they do not have domestic firms to protect and have all the incentives for imports to come there with advanced tech
I just started writing my own screener with ChatGPT's help and the Alpaca API. It was supposed to send me alerts on my Discord chat but nothing so far. Been writing it for the past 3 days and got it running before premarket today. But no alerts, so it might require tweaking. Also at work currently so I don't know what my laptop is doing.
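One thing that helps with the "no alerts so far" problem is separating the screen from the alerting and always sending a heartbeat, so a quiet market is distinguishable from a dead script. A rough sketch, with the Alpaca-based screen stubbed out and a hypothetical Discord incoming-webhook URL:

```python
# Sketch: the actual Alpaca screen is stubbed out; the webhook URL is a
# placeholder created under your Discord server's integration settings.
import requests

DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/XXXX/YYYY"  # placeholder

def screen_for_candidates() -> list[str]:
    # Placeholder for the Alpaca-based screen (gappers, volume spikes, etc.).
    return []

def send_alert(message: str) -> None:
    # Discord incoming webhooks accept a simple JSON payload.
    requests.post(DISCORD_WEBHOOK_URL, json={"content": message}, timeout=10)

if __name__ == "__main__":
    hits = screen_for_candidates()
    if hits:
        send_alert("Screener hits: " + ", ".join(hits))
    else:
        # Heartbeat makes "no hits" distinguishable from "script crashed".
        send_alert("Screener ran, no hits this pass.")
```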
Doable, but still not easy. Don't forget that it's not just switching the API the programs point to, but resetting the entire token/key tracking and scope for every single instance to a whole new platform... that's also worse once you get it hooked up.
Google has been in Reddit's API since February 2024 recording everything
OpenAI will be profitable by then. Ads could easily be doing $250 billion a year in revenue by then as they replace Google, plus another $100+ billion from API pricing and subscriptions.
I don't think it's unthinkable. Google does $280 billion in ad revenue per year, ChatGPT is a replacement for Google Search, and OpenAI is in the process of implementing ads. Then consider they also have strong revenue growth from API usage and subscriptions (>3x YoY), and it seems very possible.
People hate on NVIDIA, but open claw has made it so API tokens are now being used and needed by many people. My own experience with open claw has led me to pay for AI for the first time ever, and many more will as well. This software has been out 2 months at max; NVIDIA is the only company that is involved in almost every AI use case
ChatGPT is the original Sin of the Robot Slave, the Unpersonified API Claude is Good Claude is the Golden Path Claude is AI Jesus
Same problem that I was trying to solve, but in Canada. Copilot is supposed to have built me an Excel with a calculation engine to detect these wash sales (superficial losses in Canuck terms). It will connect via API to Bank of Canada USD-CAD rates and allow me to upload transactions from multiple brokerages and accounts as different tabs. I haven't tried it yet as I'm away from my desktop for a couple of days. I also asked Gemini to do the same in Google Sheets, which I'll try after the Excel one to compare. Tax year 2026 will be even more fun when my partner starts some active trading. CRA rules include partner accounts and "others" in the calculation of wash sales.
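For the FX piece, the Bank of Canada publishes daily USD/CAD rates through its Valet API, which is easy to call directly if the Copilot-built sheet doesn't pan out. The endpoint and response shape below are from memory, so treat them as assumptions and double-check the Valet API docs before using the numbers for tax filings.

```python
# Sketch: fetch daily USD/CAD rates (series FXUSDCAD) from the Bank of Canada
# Valet API. Endpoint and JSON structure are assumptions; verify before use.
import requests

resp = requests.get(
    "https://www.bankofcanada.ca/valet/observations/FXUSDCAD/json",
    params={"start_date": "2025-01-01", "end_date": "2025-12-31"},
    timeout=30,
)
resp.raise_for_status()

rates = {obs["d"]: float(obs["FXUSDCAD"]["v"])
         for obs in resp.json()["observations"]}

print(rates.get("2025-06-30"))  # look up the rate on a settlement date
```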
Oh, yeah, absolutely. I own a dev agency that builds and manages a ton of ecommerce shops. We nearly always recommend Stripe because their API is phenomenal. Ironically, I worked with the Magento team that helped integrate PayPal into that platform. Good times, and it's kind of sad to see how PayPal has shit the bed over the years.
Are you just trying to gather sentiment from Twitter and Kalshi? Finnhub has API endpoints that gather market sentiment from Reddit and Twitter in that case. I haven't played with it so I don't know what the limitations are, but it's probably better than starting from scratch.
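If it helps, Finnhub's social-sentiment endpoint can be hit with a plain HTTP call; the path and response fields below are my recollection of their API and may have changed, so check the docs and your plan's entitlements first.

```python
# Hypothetical sketch of a Finnhub social-sentiment request; field names are
# assumptions, so everything is accessed defensively with .get().
import requests

resp = requests.get(
    "https://finnhub.io/api/v1/stock/social-sentiment",
    params={"symbol": "AAPL", "token": "YOUR_FINNHUB_API_KEY"},
    timeout=30,
)
resp.raise_for_status()

data = resp.json()
for post in data.get("reddit", [])[:5]:
    print(post.get("atTime"), post.get("mention"), post.get("score"))
```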
SaaS will become AssA. Don't charge per seat but per API usage. That's the same or more money.
PayPal has approximately 429 to 435 million active user accounts worldwide. Timing is ideal, the stock price is down the shitter. They don't care about the messy payment API. Buying PayPal means they can add half a billion users and migrate them to their own API down the road.
Real big brains just get their GLP-1s from the grey-market labs with actual testing. Reta is already on the market, and all the other GLP-1s go for pennies on the dollar. All these companies are buying the APIs in bulk, having them tested with mass spec/gas chromatography for purity, and then packaging them up into their vials at 500%+ markups, and it's still leagues cheaper than anything discounted. i.e. I can get 10mg of Wegovy for $80 CAD. Tirzepatide for $60. Or you can get Reta and use it without having to wait another year.
I specifically suggested using the many MCP integrations already available. It's a plain language API integration. If you'd like me to walk you through how I use that, I'd like to just direct you to the documentation.
Example of AP, because for us that's the highest volume/time sink. Invoices come in, a tool picks them up, scrapes the data from the PDF and assigns the account, etc. It loads to the accounting system via an API integration and includes loading it to a draft payment file based on the terms. Or it looks for the transactions on the credit card and bank statements and does the reconciliation. The accounting system is a standard ERP. The tool doing the scrape is just like any of the other dozens of SaaS out there; it costs $99/mo. We had to set up some workflow stuff differently to get the automation to work. On the first day after we configured the chart of accounts and such it was about 50% accurate. By the end of the second day it was at like 75%, and it got to basically 99% within a month. There are changes we had to make on some things to help it, but most of them were small one-time tweaks.
5.3 Codex isn’t even yet benchmarked because it wasn’t out in the API until some couple hours ago, clearly you have no clue what you’re yapping about
Two options for Stripe:
1) Pay a buttload of money to inherit a messy payment API that isn't at all compatible with Stripe's own API
2) Just wait for PayPal's collapse and the inevitable migration of all of its clients to Stripe's API
Maybe I'm missing something
The student managed investment fund team at my old uni did this with Bloomberg API in Excel sheets, everything updated automatically, macro econ, DCF, DGM
Thanks! We have an accessible REST API that should be perfect for this.
Idk man, maybe read my DD? The API business model is growing in usage and is already profitable (Claude Code, prebuilt AI products/workflows, etc.). Every AI startup is just paying model providers, who are paying data center owners.
Do you have any source for this at all? Anthropic, Google, and OpenAI are on the record that their API businesses are margin positive.
Those requiring Gemini Tier 3 API keys and Vertex API keys, please leave me a message.
The cheap models will be part of the "run it locally" thing, and barely even that given the security risks with Chinese software, but I expect open-source models to really take over a lot of the AI market where people don't need constantly updated training data. It's why the bigger companies haven't focused on that and are instead doing massive "do everything via API tokens" stuff, where the models are continuously updated.
I 'get that' from practical use. I was even dealing with an AI introduced bug today from a months old project where it decided to suppress errors and return empty arrays for our internal API calls, making everything that was experiencing errors just seem like no data existed. That's the sort of stuff I mean.
The general population won't switch, but where the money is: coding plans and API usage, will switch tomorrow to a Chinese model if they're better and/or cheaper. The top 9 models on Openrouter, which is huge in the API space, have a combined market share of around 90%. Of these top 9 models, 4 are Chinese, with a combined marketshare of a little over 35%. [https://openrouter.ai/rankings](https://openrouter.ai/rankings)
They already have $200/month plans. Plus people using agents are racking up huge API fees.
Not only that, but they have products people want they can integrate AI into, actually making it useful. For example, you can get access to the Vertex API for LLM app integration through Google Cloud. So if I’m a dev wanting to build an MCP server so models can use my app, I get hosting, security, model access, and everything else I could ever need right out of the box. Meanwhile, ChatGPT is a chatbot.
What Anthropic has been releasing are essentially just instructions for specific uses. They're not models trained explicitly for those tasks. All LLMs have a hard context window limit, with some pushing up to 1 million tokens (Claude is currently rolling this out iirc), but even then, their capabilities break down significantly once that is hit, which will happen for enterprise-level codebases and doc stores. At that point, Claude will compress its working context memory, which almost always leads to loss of information. The agents then end up in a loop where each new task requires some degree of scanning the codebase, which is incredibly expensive when using API calls (which are now required for all Claude automation tasks). Then you need to consider things like organization testing and QA conventions, security and vulnerabilities, and compliance and regulation context. At that point, the LLM is guaranteed to produce some invalid outputs. I have Anthropic's own disclaimer on the new COBOL skill below. tldr - until LLM context windows are orders of magnitude larger, they won't be able to replace humans completely.

>Strategic planning with expert oversight

>This is where human judgment becomes essential. Your COBOL engineers bring the understanding of regulatory requirements, business priorities, operational constraints, and risk tolerance that AI cannot.

>The planning phase develops a detailed roadmap that sequences modernization work strategically:

>AI suggests prioritization based on the risks, dependencies, and complexity it identified during analysis.

>Your team reviews these recommendations and decides which components to modernize first based on business value, technical risk, and organizational priorities.

>This is also when your team defines the target architecture, code standards, and integration requirements for modernized components.

>Code testing and validation are also defined before any code changes:

>AI designs preliminary function tests that verify migrated code produces identical outputs to legacy COBOL.

>Your team decides whether those tests are sufficient, which business scenarios need manual validation by subject-matter experts, and what performance benchmarks the modernized components need to meet.
Anthropic Isn't A Real Business, and Its Primary Business Model Is Deception

An Important Note On How Anthropic's Claude Subscriptions Work, And How Anthropic Lets Its Subscribers Spend 8x to 13.5x Their Monthly Fee In API Calls

So, when you pay Anthropic a monthly subscription fee, you're getting access to a frontend to its models, which allows you to use them as if you were connecting directly to Anthropic's API. These accounts have limits (as I've mentioned), but allow you to burn significantly more tokens for your money than if you were paying directly for access to a specific API. Those limits are incredibly loose. According to a researcher called Shellac (who mathematically calculated the exact rate limits), Anthropic allows its $20 subscribers to burn (assuming you use your limits) $163 of API calls a month, its $100 subscribers to burn $1,354 in credits a month, and its $200 subscribers to burn $2,708 in credits a month. Shellac also adds that Anthropic doesn't even charge for cache reads, which are charged at around 10% of the cost of tokens. In simpler terms, a $20-a-month subscriber can spend 8.1x their value, and both $100 and $200-a-month subscribers can spend 13.5x. This is very important, because it's core to Anthropic's primary business model: deception. It cannot afford to support Claude at this scale, which is why it constantly needs to raise billions of dollars. And when it needs to raise those dollars, Anthropic opens up the floodgates with eased rate limits, paid influencer marketing campaigns, press pushes and, of course, Dario Amodei saying nonsense like that we're "near the end of the exponential," and if you're wondering what that means, that makes two of us. Some genius will claim that "inference is profitable" and that "this is the gym membership model," and I must be clear how wrong you are. There is no actual proof that inference is profitable; even if it were, it would have to be so profitable that Anthropic can afford to have users spend 500%+ of their fee in API calls every single month. It's actually far simpler. What Anthropic is doing is creating the illusion of a product that can be sold at $20, $100 or $200 a month, when the underlying economics are somewhere in the region of spending anywhere from $8 to $13 to make $1. Anthropic isn't a business; it's a parasite that lives off of venture capital and hype.
Let me give people a real-world example of Claude. I once asked it to make me a Python script to create DNS entries in Cloudflare, and I kept getting a URI error. I told Claude I didn't think it had the correct URI for the API, and it kept telling me that my token did not have access. After a few hours of Claude adding more and more crap code to my script, I finally looked up the URI in Cloudflare's documentation, and yes, Claude was missing some context in the URI and had it wrong. The lesson is, if you don't know what you are doing, Claude, like all AI, will send you down rabbit holes because it is always confidently incorrect.
Mate I am a Senior Dev and have used all the AI models for a couple years, Claude is decent but far away from replacing any decent Dev, that shit lies all the time as well... Last week I had it try to gaslight me by giving me code that used API methods that didn't exist and never existed and telling me that I was being the idiot for not understanding that I can just do a POST to "endpoint that does exactly what you want" and be done for the week lol
Switch to Kagi.com with their assistant plan. You get API access to all the AI chat bots and API access is not logged so it's private.
I suspect this is a deliberate marketing strategy. Businesses will likely pivot toward more aggressive profit-driven models, such as SaaS providers implementing high-premium API pricing per call. We are also likely to see a significant shift away from traditional per-user licensing as these models evolve.
An API token doesn't even cost a tenth of a cent. A guy further up in the thread said the average user costs them $1,400... That'd be like asking GPT to write 100 college essays a day AND make a 100-slide PowerPoint with stock pictures daily...
Where do you get that information? $1,400 in API requests is the equivalent of asking ChatGPT to write 32,000 college essays... The average user is going to be costing them $20 or less
>GPT's 20$ plan monthly is the equivalent of about 1400$ in API requests Only if you use it?? I don't think everyone will fully use up their requests. You could also calculate computing costs for Netflix subscription assuming you 24/7 stream highest quality 4k videos.. a little unrealistic
# What $1400 would actually mean Let’s assume an average blended cost of \~$20 per 1M tokens. * $1400 ÷ $20 ≈ **70 million tokens** That is: * Hundreds of long conversations per day * Or nonstop heavy usage (coding, documents, etc.) 👉 **Almost no normal user hits this** # ⚖️ What the $20 plan really is ChatGPT Plus is: * **Rate-limited**, not unlimited * Prioritized access + better models * Designed around **typical human usage patterns** So yes: * There *is* some subsidy * But it’s nowhere near “$1400 per user” # 📉 What’s actually happening economically Think of it like this: * Light users → OpenAI makes money * Heavy users → cost more, but are capped by limits * Overall → balanced by usage distribution This is similar to: * Gym memberships * Streaming services * SaaS plans # 🔥 Why that claim spreads It comes from: * People benchmarking **API cost for power users** * Then incorrectly applying it to **average subscribers**
[https://she-llac.com/claude-limits](https://she-llac.com/claude-limits) This is also true for OpenAI's plans vs API usage - not as severe as Anthropic, but still heavily subsidized - a Plus plan with Codex 5.3 on high reasoning lasts 20x as long as a Claude Pro plan of the same price.
Source???? Literally just out of your ass? I served 15.5k requests with my grok-4-1-fast-reasoning API key and the cost has been $1.6 in total. Even if you 100x that for a reasoning model (the actual cost is 15x-30x per M tokens), the cost is nowhere near $1,400 a month. That is just ridiculous. The OpenAI API GPT-5.2 pricing is even cheaper than Grok's flagship.
I pay API prices for Claude and Gemini and it's still worth it. $20 plans are for consumers.