Reddit Posts
Download a dataset of stock prices for X tickers for yesterday?
The tech market brings important development opportunities; AIGC is firmly No. 1 in the current technology field
The AIGC market brings important development opportunities; artificial intelligence technology has been developing
Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
OTC: KWIK Shareholder Letter, January 3, 2024
The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
Why Microsoft's gross margins are expanding (up 1.89% QoQ).
Google's AI project "Gemini" shipped, and so far it looks better than GPT-4
US Broker Recommendation with a market that allows both longs/shorts
A Little DD on FobiAI: it harnesses the power of AI and data intelligence, enabling businesses to digitally transform
Best API for grabbing historical financial statement data to compare across companies.
Seeking Free Advance/Decline, NH/NL Data - Python API?
Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
Aduro Clean Technologies Inc. Research Update
Option Chain REST APIs w/ Greeks and Beta Weighting
$VERS Upcoming Webinar: Introduction and Demonstration of Genius
Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
Short Squeeze is Reopened. Play Nice.
Created options trading bot with Interactive Brokers API
Leafly Announces New API for Order Integration($LFLY)
Is Unity going to Zero? - Why they just killed their business model.
Looking for affordable API to fetch specific historical stock market data
Where do sites like Unusual Whales scrape their data from?
Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
[DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
API or Dataset that shows intraday price movement for Options Bid/Ask
[Newbie] Bought Microsoft shares at 250, mainly as I see value in ChatGPT. I think I'll hold for at least 6 months, but I'd like your thoughts.
Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
I found this trading tool that's just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isn't this a violation of Reddit's new API rules?
I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
Fundamental Stock Data for Your Projects and Analysis
Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly
Pictures say it all. Robinhood is shady AF.
URGENT - Audit Your Transactions: Broker Alters Orders without Permission
My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
To recalculate historical options data from CBOE, to find IVs at the moment of the trades, what interest rate should I use?
WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
$SSTK Shutterstock - OpenAI ChatGPT partnership - Images, Photos, & Videos
Is there really no better way to track open + closed positions without multiple apps?
List of Platforms (Not Brokers) for advanced option trading
Utopia P2P is a great application that needs NO KYC to safeguard your data!
Utopia P2P supports API access and ChatGPT
Stepping Ahead with the Future of Digital Assets
An Unexpected Ally in the Crypto Battlefield
Utopia P2P has now an airdrop for all Utopians
Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
Reddit stands by controversial API changes as situation worsens
Mentions
I just started writing my own screener with ChatGPT's help and the Alpaca API. It was supposed to send me alerts in my Discord chat, but nothing so far. I've been writing it for the past 3 days and got it running before premarket today. But no alerts, so it might require tweaking. Also at work currently, so I don't know what my laptop is doing.
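For anyone attempting the same setup, the Discord leg of such a screener is often the part that silently fails when the webhook response is never checked. A minimal sketch of that one step, assuming a standard Discord webhook (the URL and the alert message format are placeholders, not taken from the post):

```python
# Sketch of the alert-delivery step only: format a screener hit and POST it
# to a Discord webhook. WEBHOOK_URL is a hypothetical placeholder.
import json
import urllib.request

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # hypothetical

def build_alert(symbol: str, price: float, reason: str) -> bytes:
    """Format a screener hit as a Discord webhook JSON payload."""
    payload = {"content": f"ALERT {symbol} @ {price:.2f}: {reason}"}
    return json.dumps(payload).encode("utf-8")

def send_alert(symbol: str, price: float, reason: str) -> int:
    """POST the alert; urlopen raises on 4xx/5xx, so a bad webhook URL is loud."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_alert(symbol, price, reason),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Logging the response status (rather than fire-and-forget) is usually enough to tell whether the screener side or the Discord side is the part that needs tweaking.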
Doable, but still not easy. Don't forget that it's not just switching the API that programs point to, but resetting the entire token/key tracking and scope for every single instance to a whole new platform… and that's even worse once you get it hooked up.
Google has been in Reddit's API since February 2024 recording everything
OpenAI will be profitable by then. Ads could easily be doing $250 Billion a year in revenue by then as they replace Google, plus another $100+ Billion from API pricing and subscriptions.
I don't think it's unthinkable. Google does $280 Billion in ad revenue per year, ChatGPT is a replacement for Google Search, and OpenAI is in the process of implementing ads. Then consider they also have strong revenue growth from API usage and subscriptions (>3x YoY), and it seems very possible.
People hate on NVIDIA, but OpenClaw has made it so API tokens are now being used and needed by many people. My own experience with OpenClaw has led me to pay for AI for the first time ever, and many more will as well. This software has been out 2 months at most, and NVIDIA is the only company that is involved in almost every AI use case
ChatGPT is the original Sin of the Robot Slave, the Unpersonified API Claude is Good Claude is the Golden Path Claude is AI Jesus
Same problem that I was trying to solve, but in Canada. Copilot is supposed to have built me an Excel file with a calculation engine to detect these wash sales (superficial losses in Canuck terms). It will connect via API to Bank of Canada USD-CAD rates and allow me to upload transactions from multiple brokerages and accounts as different tabs. I haven't tried it yet as I'm away from my desktop for a couple of days. I also asked Gemini to do the same in Google Sheets, which I'll try after the Excel one to compare. Tax year 2026 will be even more fun when my partner starts some active trading. CRA rules include partner accounts and "others" in the calculation of wash sales.
Oh, yeah, absolutely. I own a dev agency that builds and manages a ton of ecommerce shops. We nearly always recommend Stripe because their API is phenomenal. Ironically, I worked with the Magento team that helped integrate PayPal into that platform. Good times, and it's kind of sad to see how PayPal has shit the bed over the years.
Are you just trying to gather sentiment from Twitter and Kalshi? Finnhub has API endpoints that gather market sentiment from Reddit and Twitter in that case. I haven't played with it, so I don't know what the limitations are, but it's probably better than starting from scratch.
SaaS will become AssA. Don't charge per seat but per API usage. That's the same money or more.
PayPal has approximately 429 to 435 million active user accounts worldwide. Timing is ideal; the stock price is down the shitter. They don't care about the messy payment API. Buying PayPal means they can add half a billion users and migrate them to their own API down the road.
real big brains just get their GLP-1s from the grey-market labs with actual testing. Reta is already on the market, and all the other GLP-1s go for pennies on the dollar. All these companies are buying the APIs in bulk, having them tested with mass spec/gas chromatography for purity, and then packaging them up into their vials at 500%+ markups, and it's still leagues cheaper than anything discounted. e.g. I can get 10mg of Wegovy for $80 CAD. Tirzepatide for $60. Or you can get Reta and use it without having to wait another year.
I specifically suggested using the many MCP integrations already available. It's a plain language API integration. If you'd like me to walk you through how I use that, I'd like to just direct you to the documentation.
Example of AP, because for us that's the highest-volume time sink. Invoices come in, a tool picks them up, scrapes the data from the PDF and assigns the account etc. It loads into the accounting system via an API integration and includes loading it into a draft payment file based on the terms. Or it looks for the transactions on the credit card and bank statements and does the reconciliation. The accounting system is a standard ERP. The tool doing the scrape is just like any of the dozens of other SaaS out there; it costs $99/mo. We had to set up some workflow stuff differently to get the automation to work. On the first day after we configured the chart of accounts and such, it was about 50% accurate. By the end of the second day it was at like 75%, and it got to basically 99% within a month. There are changes we had to make on some things to help it, but most of them were small one-time tweaks.
5.3 Codex isn’t even yet benchmarked because it wasn’t out in the API until some couple hours ago, clearly you have no clue what you’re yapping about
Two options for Stripe: 1) Pay a buttload of money to inherit a messy payment API that isn't at all compatible with Stripe's own API, or 2) Just wait for PayPal's collapse and the inevitable migration of all of its clients to Stripe's API. Maybe I'm missing something
The student managed investment fund team at my old uni did this with Bloomberg API in Excel sheets, everything updated automatically, macro econ, DCF, DGM
Thanks! We have an accessible REST API that should be perfect for this.
Idk man, maybe read my DD? The API business model is growing in usage and is already profitable (Claude Code, prebuilt AI products/workflows, etc.); every AI startup is just paying model providers, who are paying data center owners
Do you have any source for this at all? Anthropic, Google, and OpenAI are on the record that their API businesses are margin positive.
Those requiring Gemini Tier 3 API keys and Vertex API keys, please leave me a message.
Those requiring Gemini Tier 3 API keys and Vertex API keys, please leave me a message.
The cheap models will be part of the "run it locally" thing, and barely even that given the security risks with Chinese software, but I expect open-source models to really take over a lot of the AI market where people don't need constantly updated training data. It's why the bigger companies haven't focused on that, and instead are doing massive "do everything via API tokens" stuff, where the models are continuously updated.
I 'get that' from practical use. I was even dealing with an AI introduced bug today from a months old project where it decided to suppress errors and return empty arrays for our internal API calls, making everything that was experiencing errors just seem like no data existed. That's the sort of stuff I mean.
The general population won't switch, but where the money is: coding plans and API usage, will switch tomorrow to a Chinese model if they're better and/or cheaper. The top 9 models on Openrouter, which is huge in the API space, have a combined market share of around 90%. Of these top 9 models, 4 are Chinese, with a combined marketshare of a little over 35%. [https://openrouter.ai/rankings](https://openrouter.ai/rankings)
They already have $200/month plans. Plus people using agents are racking up huge API fees.
Not only that, but they have products people want they can integrate AI into, actually making it useful. For example, you can get access to the Vertex API for LLM app integration through Google Cloud. So if I’m a dev wanting to build an MCP server so models can use my app, I get hosting, security, model access, and everything else I could ever need right out of the box. Meanwhile, ChatGPT is a chatbot.
What Anthropic has been releasing are essentially just instructions for specific uses. They're not models trained explicitly for those tasks. All LLMs have a hard context-window limit, with some pushing up to 1 million tokens (Claude is currently rolling this out, iirc), but even then, their capabilities break down significantly once that is hit, which will happen for enterprise-level codebases and doc stores. At that point, Claude will compress its working context memory, which almost always leads to loss of information. The agents then end up in a loop where each new task requires some degree of scanning the codebase, which is incredibly expensive when using API calls (which is now required for all Claude automation tasks). Then you need to consider things like organizational testing and QA conventions, security and vulnerabilities, and compliance and regulation context. At that point, the LLM is guaranteed to produce some invalid outputs. I have Anthropic's own disclaimer on the new COBOL skill below. tldr: until LLM context windows are orders of magnitude larger, they won't be able to replace humans completely. >Strategic planning with expert oversight >This is where human judgment becomes essential. Your COBOL engineers bring the understanding of regulatory requirements, business priorities, operational constraints, and risk tolerance that AI cannot. >The planning phase develops a detailed roadmap that sequences modernization work strategically: >AI suggests prioritization based on the risks, dependencies, and complexity it identified during analysis. >Your team reviews these recommendations and decides which components to modernize first based on business value, technical risk, and organizational priorities. >This is also when your team defines the target architecture, code standards, and integration requirements for modernized components.
>Code testing and validation are also defined before any code changes: >AI designs preliminary function tests that verify migrated code produces identical outputs to legacy COBOL. >Your team decides whether those tests are sufficient, which business scenarios need manual validation by subject-matter experts, and what performance benchmarks the modernized components need to meet.
Anthropic Isn't A Real Business, and Its Primary Business Model Is Deception

An Important Note On How Anthropic's Claude Subscriptions Work, And How Anthropic Lets Its Subscribers Spend 8x to 13.5x Their Monthly Fee In API Calls

So, when you pay Anthropic a monthly subscription fee, you're getting access to a frontend to its models, which allows you to use them as if you were connecting directly to Anthropic's API. These accounts have limits (as I've mentioned), but allow you to burn significantly more tokens for your money than if you were paying directly for access to a specific API. Those limits are incredibly loose. According to a researcher called Shellac (who mathematically calculated the exact rate limits), Anthropic allows its $20 subscribers to burn (assuming you use your limits) $163 of API calls a month, its $100 subscribers to burn $1,354 in credits a month, and its $200 subscribers to burn $2,708 in credits a month. Shellac also adds that Anthropic doesn't even charge for cache reads, which are charged at around 10% of the cost of tokens. In simpler terms, a $20-a-month subscriber can spend 8.1x their value, and both $100 and $200-a-month subscribers can spend 13.5x. This is very important, because it's core to Anthropic's primary business model: deception. It cannot afford to support Claude at this scale, which is why it constantly needs to raise billions of dollars. And when it needs to raise those dollars, Anthropic opens up the floodgates with eased rate limits, paid influencer-marketing campaigns, press pushes and, of course, Dario Amodei saying nonsense like that we're "near the end of the exponential," and if you're wondering what that means, that makes two of us. Some genius will claim that "inference is profitable" and that "this is the gym membership model," and I must be clear how wrong you are.
There is no actual proof that inference is profitable; even if it were, it would have to be so profitable that Anthropic can afford to have users spend 500%+ of their subscription fee in API calls every single month. It's actually far simpler. What Anthropic is doing is creating the illusion of a product that can be sold at $20, $100 or $200 a month, when the underlying economics are somewhere in the region of spending anywhere from $8 to $13 to make $1. Anthropic isn't a business; it's a parasite that lives off of venture capital and hype.
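The subscription-to-API multiples quoted above can be checked in a few lines. The dollar figures below are the ones the comment attributes to Shellac, not official Anthropic numbers:

```python
# Sanity-checking the claimed subscription-vs-API-burn multiples.
# Figures are the comment's (attributed to the researcher "Shellac").
plans = {20: 163, 100: 1354, 200: 2708}  # monthly fee -> claimed API-credit burn

for fee, burn in plans.items():
    print(f"${fee}/mo plan: {burn / fee:.1f}x the fee in API credits")
```

This reproduces the comment's 8.1x figure for the $20 plan and roughly 13.5x for both the $100 and $200 plans, so the quoted multiples are at least internally consistent with the quoted dollar amounts.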
Let me give people a real-world example of Claude. I once asked it to make me a Python script to create DNS entries in Cloudflare, and I kept getting a URI error. I told Claude I didn't think it had the correct URI for the API, and it kept telling me that my token did not have access. After a few hours of Claude adding more and more crap code to my script, I finally looked up the URI in Cloudflare's documents, and yes, Claude was missing some context in the URI and had it wrong. The lesson is: if you don't know what you are doing, Claude, like all AI, will send you down rabbit holes, because it is always confidently incorrect.
Mate, I am a senior dev and have used all the AI models for a couple of years. Claude is decent but far from replacing any decent dev, and that shit lies all the time as well... Last week it tried to gaslight me by giving me code that used API methods that didn't exist and never existed, and telling me that I was the idiot for not understanding that I could just do a POST to "endpoint that does exactly what you want" and be done for the week lol
Switch to Kagi.com with their assistant plan. You get API access to all the AI chat bots and API access is not logged so it's private.
I suspect this is a deliberate marketing strategy. Businesses will likely pivot toward more aggressive profit-driven models, such as SaaS providers implementing high-premium API pricing per call. We are also likely to see a significant shift away from traditional per-user licensing as these models evolve.
An API token doesn't even cost a tenth of a cent. A guy further up in the thread said the average user costs them $1,400…. That'd be like asking GPT to write 100 college essays a day AND make a 100-slide PowerPoint with stock pictures daily…
Where do you get that information? $1,400 in API requests is the equivalent of asking ChatGPT to write 32,000 college essays… The average user is going to be costing them $20 or less
>GPT's 20$ plan monthly is the equivalent of about 1400$ in API requests

Only if you use it?? I don't think everyone will fully use up their requests. You could also calculate the computing costs of a Netflix subscription assuming you stream the highest-quality 4K video 24/7.. a little unrealistic
# What $1400 would actually mean

Let's assume an average blended cost of ~$20 per 1M tokens.

* $1400 ÷ $20 ≈ **70 million tokens**

That is:

* Hundreds of long conversations per day
* Or nonstop heavy usage (coding, documents, etc.)

👉 **Almost no normal user hits this**

# ⚖️ What the $20 plan really is

ChatGPT Plus is:

* **Rate-limited**, not unlimited
* Prioritized access + better models
* Designed around **typical human usage patterns**

So yes:

* There *is* some subsidy
* But it's nowhere near "$1400 per user"

# 📉 What's actually happening economically

Think of it like this:

* Light users → OpenAI makes money
* Heavy users → cost more, but are capped by limits
* Overall → balanced by usage distribution

This is similar to:

* Gym memberships
* Streaming services
* SaaS plans

# 🔥 Why that claim spreads

It comes from:

* People benchmarking **API cost for power users**
* Then incorrectly applying it to **average subscribers**
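The back-of-envelope token math in the comment above is easy to verify. Note the $20-per-million-token blended price is that comment's own assumption, not a published rate:

```python
# Reproducing the comment's arithmetic: at an assumed blended price of
# $20 per 1M tokens, $1,400 of API spend corresponds to 70 million tokens.
price_per_million = 20.0   # assumed blended $/1M tokens (the comment's figure)
claimed_spend = 1400.0     # the "$1,400 per user" claim being examined

tokens = claimed_spend / price_per_million * 1_000_000
print(f"{tokens:,.0f} tokens")  # 70,000,000
```

Whether a typical subscriber actually consumes anywhere near 70 million tokens a month is the real point of contention in the thread; the arithmetic itself checks out.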
[https://she-llac.com/claude-limits](https://she-llac.com/claude-limits) This is also true for OpenAI's plans vs API usage. It's not as severe as Anthropic's, but still heavily subsidized: a Plus plan with Codex 5.3 on high reasoning lasts 20x as long as a Claude Pro plan of the same price.
Source???? Literally just out of your ass? I served 15.5k requests with my grok-4-1-fast-reasoning API key and the cost has been $1.60 in total. Even if you 100x that for a reasoning model (the actual cost is 15x-30x per 1M tokens), the cost is nowhere near $1400 a month. That is just ridiculous. The OpenAI API GPT-5.2 pricing is even cheaper than Grok's flagship.
I pay API prices for Claude and Gemini and it's still worth it. $20 plans are for consumers.
In my opinion, the best way to pay down America's debt is to have the models pay a fee per token per API request. Could be .00001 cents, paid to the US Treasury. A trillion requests every month makes America richer
Yea, most people forget tokens are insanely subsidized: GPT's $20 monthly plan is the equivalent of about $1,400 in API requests, soooooooo gg economy
Retatrutide is basically a 3rd gen (or what I call it, I guess) of the common GLP-1 drugs that have been so popular for weight loss. Ozempic (semaglutide) is a single-action (GLP-1 agonist) drug, tirzepatide (Mounjaro) is dual-action with GLP-1 + GIP, and Reta is a triple-action drug with GLP-1, GIP, and the addition of the glucagon hormone. Reta seems to have fewer overall side effects reported, works just as well if not better, showing more weight loss than Ozempic, and people seem to feel it leaves them with a better overall body composition muscle-wise than, say, Ozempic. The addition of the glucagon aspect adds to the appetite suppression of the first two drugs by also increasing your metabolic rate/caloric burn. So you have a GLP-1 that seems to do everything better, with milder side effects, and happier people. The only argument I've seen is that, appetite-suppression-wise, tirzepatide is a bit stronger, but you can just titrate your dosage of Reta up until you meet your appetite/food-noise suppression needs. It's not even out yet and it's already everywhere via the APIs being produced in China, tested for purity, then shipped around the world and packaged and labelled by "grey market" labs. It's real Reta and people are loving it. I know numerous people taking it, and they range from bodybuilders on cuts, to normal people just looking to slim down, to the "wine mom" crowd. As I'm always interested in these things and have over a decade of ADHD-level research on PEDs and other performance-enhancing drugs, I tend to go down a rabbit hole on the literature while also questioning the people I know who took it. Comparing the side effects and results to Ozempic, which both my older parents took, is night and day, with better results. So unless Novo has something in the pipeline that can match that, I think Lilly is going to win the GLP-1 wars if they have their own pill version as well.
I just don't see what Novo has that competes with the incoming release of Reta (end of Phase 3 trials, I believe, with a 2026/27 release date, but don't quote me on that). tldr: Eli Lilly owns the better GLP-1 injection right now with tirzepatide and has Reta coming very soon. They also have their own oral version coming, and even though Novo was first to market with theirs, they both demonstrate close to the same weight-loss % for their oral offerings. I just see Lilly eating their lunch in this space in the next few years, and it's the reason why I haven't jumped into Novo at these huge dips. From my cursory check it looks like Novo only has amycretin, which seems to offer a faster rate of weight loss (12 weeks) but less total weight loss than its current oral offering and Lilly's. So it doesn't seem groundbreaking enough at all to move the needle. Reta is the play.
Gemini API getting hammered. They really do need that capex spend.
Plaid is a service that connects to your bank/brokerage and pulls transaction data automatically — it’s how apps like Venmo and Robinhood link to your bank account. You need an API key because Plaid charges developers for access. If you’re just running the scanner, you don’t need it — Plaid is only used for the trade tracking side.
> The demand isn't actually there.

How so? Compute is sold out for years. These LLM companies are the fastest-growing revenue companies in history, by far. Token costs will come down, but compute demand is only going to go up as agentic & multi-agent AI becomes the norm. Anyone messing around with OpenClaw can see how the real money is going to be tokens via API rather than the chatbot interface.
It's not that easy to put an LLM in every smartphone. First, for on-device AI, you need the compute and memory. This increases the cost of phones, and phones sell more on the affordable side, not the costlier side (at least in developing nations in Asia and Africa). Second, even if they offer free access to Gemini through API calls, it would burn resources in the backend. Every token is charged internally. This is the same reason OpenAI has 3x YoY revenue but runs at a net loss. All these AI companies need to bring in subscriptions to become profitable. Now, if LLM usage is not free, how many end consumers would purchase a subscription to use it? But if this LLM stuff is pushed to enterprise customers, they will purchase it because they have the cash. So that's the business model behind Microsoft's M365 Copilot. Also, just a pointer: for on-device AI, an SLM is always a better choice, and Google has the best SLM on the market to date :) Hence, a lot of changes and shifts are yet to happen in this tech and consumer sector in the next couple of years!
Probably going to tank. I work for a competitor, and the regulatory playing field for GLP-1s has changed a lot, to the point where there is a LOT of competition, but not even in the same way... some people run it out of their garages illegally, packing only the API and absolutely no liquids, and people buy it and mix it themselves. If it's just the API, there's a much longer BUD, and buying in bulk makes more sense. Yes, there are risks associated with this way, but it's disclosed, and when people can save over $200-300 a month compared to HIMS, they usually don't care. A lot have been issued notices by the FDA, but they don't care. In the owner's eyes, it's just raw API powder.
I have a slightly different take: there is a real shift happening here, and the street is not wrong in detecting it. Just like the value in the LLM stack has moved down to the chip companies and their suppliers (NVDA > ASML + TSM as one example, or GOOG (TPU) > BRCM > ASML + TSM as another), with models and cloud providers losing value, or having to invest a lot just to stand still share-wise, the value in cybersecurity will move down to hardware, with enterprises being able to easily build agentic cybersecurity workflows. This is based on the fact that a lot of what these cybersecurity SaaS firms do is mask the messiness of today's hardware, as well as the change and version management of messy underlying hardware elements not designed properly. Hardware elements are going to improve by cleaning up their data and agentic/API interfaces, so enterprises can build their own cybersecurity workflows agentically without having to pay an arm and a leg to the cybersecurity SaaS companies. Another reason SaaS companies are suffering in general is their escalating costs after lock-in, which no CFO likes. PANW will suffer just like Adobe for that reason.
ASAP. I’m solo right now, so I’m juggling a lot before this thing’s actually ready. If I can get it dialed in by tonight — not flawless, but solid enough that I’m confident in it — then I need to run it for a week and watch the API burn in real time. Once I see the actual spend, I’ll lock in pricing. I’m not about to set a price and then get crushed because usage ends up higher than what I’m charging.
I’m going to put in a request for back test data. Tasty trade only gives that to approved affiliates. Also, will work in the xAI API for the live X feed !
I'm sure it's being used, but if someone has a setup that's working why would they share it with you? I actually tested this for a project, used LLMs to sift through news articles, stock data, price charts, etc. to come with buy/sell recommendations. It worked pretty well theoretically but I only had 2 years of old data for backtesting, so all I know is it worked well in a bull market... Anyway I never deployed it, mostly because I couldn't find a broker where it made sense. Taxes and commission would have killed my gains, and finding a broker that offers an API and tax efficient accounts was not possible at that time.
Talk about showcasing your ignorance. Yikes! *All* SaaS products are cloud-hosted databases with CRUD functionality implemented on top. Even the LLMs ultimately reduce down to this same basic formula. The CRUD operations are facilitated either through a GUI or a defined API set. The reason why customers have opted to pay subscription fees to these services is not because API calls are particularly difficult to design or implement, but because full-stack design and maintenance is costly to bring in house, and because end users typically want tool familiarity, which reduces change-management costs when onboarding new employees or migrating from one popular tool to another. The more you repeat "APIs???" the more stupid you sound, tbh. APIs are not and were never the moat for SaaS companies. SaaS companies are cooked because customers don't want to pay exorbitant fees to host their data with a 3rd party and be locked into their product roadmap. They would prefer to keep sensitive data local and personalize interfaces for their end users without having to pay professional-services teams to manage their SaaS instances. Source: 15 years in SaaS
If you think vibe-coding a piece of enterprise software is going to work, then you have no idea how enterprise software works. AI is like the Internet: it's just going to be part of software, as that's how it will be implemented in any case. Agentic AI needs APIs; where do the APIs come from? Does AI vibe-code the APIs??? How does it even build them?
If you're asking about self-hosting, the repo will be open source (AGPL). You'd need your own API keys, and right now it's built on Claude. Swapping in Gemini would require some work but it's possible.
I agree with the first two points: while MSFT isn't my favorite stock overall, it is my favorite buy right now because the valuation is compelling and the capex is relatively modest They need to figure out how to diversify their main growth engine (Azure/Intelligent Cloud) away from OpenAI. I don't know how they are going to compete with Gemini having superior consumer-facing frontier models and distribution networks and Claude having much better enterprise/API-based models, all while cheaper open-source Chinese models have essentially eroded any performance advantage. There's essentially nowhere for them on the Pareto frontier.
I take it you will also not be giving me a concrete example of some work you're actually getting done with AI? Just like every other AI glazing bot in this thread: lots of vague gesturing towards productivity gains and a complete paradigm shift in software engineering, but you can't describe a single piece of code you've shipped with it.

I can vividly describe a piece of code I wrote with unlimited access to SOTA models this week. I added 3 lines to an existing function to check that a value in an API response isn't undefined, then I asked the all-powerful AGI slop generator to add a new unit test to the existing test file to cover this new code. It probably would have taken me 2 minutes at the high end to do it manually: copy an existing test from the file, update the name, update the API mock to return undefined for the value in question. Instead I watched a chatbot talk to itself for 2 minutes as it generated and regenerated a test until it finally passed, at which point I had to go over it to remove all the unnecessary checks it was doing.

I'm so sick of the doublespeak around this garbage. In one breath it's AGI and engineers are obsolete. But then when it fails to perform even the most basic of tasks all I hear is "This sounds like user error. Have you tried writing up a detailed markdown file detailing every single line of code you want it to write in excruciating detail? While it is super intelligent and can run completely autonomously, you really need to tell it exactly what to do and how to do it if you want it to actually work"
It is really scary that the most upvoted comment and all below are weird hype fanboys who think that VCs see something special in OpenAI that others don't. Nope. It is just FOMO that it COULD be something big they COULD miss out on, so they are throwing money at it, and now it has already consumed so much money that they keep throwing more at it because they're pot-committed and cannot let it fail lol.

Another stupid take: "You know they also have other products than ChatGPT?" - Oh yes, the API key, which I can swap like my socks every time a new SOTA model from a competitor comes out, switching back and forth. Also, OpenAI is losing ground here more and more to Anthropic and Google. Great moat from OpenAI, sure lmao.

"AI will replace white collar workers!11!1!" - I wonder whether the people stating this bullshit, or believing the hype marketing bullshit from AI CEOs (who need to say it to somehow keep the money flowing), have ever worked in their lives. LLMs are a revolutionary technology which is great in specific use cases (like coding, creative writing etc.) and has replaced/optimized many jobs, but these people somehow forget that there is a huge human factor in many jobs which you cannot replace with AI. It is not only filling Excel spreadsheets etc. lol. If these people think that's the job then I am wondering what they do for a living....

"But they can use the data for ads!" - Dude. Seriously. If I hear again how people compare Google's ad practices to the POTENTIAL ones OpenAI could adopt, I really wonder if there are 16-year-olds from high school behind the account, or if these people have any feel for businesses and consumers at all. You cannot compare Google Ads, which are clearly marked ads, with ads baked into the fucking AI you are talking to. The process and intent of the people IS TOTALLY DIFFERENT. Will people buy stuff because OpenAI displays it to them? Will they trust the AI that this is the best product to solve their problems? Marketing 101, man.
People don't think like this. Also, the decision process in purchases doesn't work like this.

"Without their capex, they would be profitable!" - From what? The 1 billion non-paying users? From stripping down their models to reduce costs and throwing ads at the users? Sure, there are no alternatives waiting for that opportunity. Oh wait...

"But AGI is coming" - I am literally wondering if these are bots. People have been stating that for years, and it is clear that LLMs are NOT the way forward to AGI. LLMs are great, but we are nearly at the plateau of this technology. They are improving, but they are about as far from AGI as we are from the sun lol.
The underlying API that lets you build something independently with LLM capabilities.
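In practice, "building independently on the API" mostly means constructing the HTTP request yourself. A minimal sketch, where the body follows the common OpenAI-style chat-completions format; treat the exact endpoint and field names as assumptions to verify against your provider's docs:

```typescript
// Sketch: building on an LLM API means you own the HTTP call.
// Body shape follows the common OpenAI-style chat-completions format;
// check your provider's documentation for the exact fields.
function buildChatRequest(model: string, prompt: string, apiKey: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// e.g. fetch("https://api.openai.com/v1/chat/completions", buildChatRequest(...))
const req = buildChatRequest("gpt-4o", "Summarize this filing.", "sk-your-key");
console.log(JSON.parse(req.body).model); // "gpt-4o"
```

Everything on top of that call, including prompt construction, retries, and how the response is wired into your product, is yours to design.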
They also own the models that run via their API and via Azure, which actually make money, unlike ChatGPT.
Good general advice for any open source project. The code is fully readable, it’s TypeScript, not compiled binaries. Nothing runs without you providing your own API keys, your own database, and your own brokerage credentials. There’s nothing hardcoded and nothing pre-built to blindly execute. That said, you should absolutely read any open source code before running it locally. That applies to this repo and every other one on GitHub.
Please don't listen to this guy. He's going to lose you money, which is completely irresponsible. He doesn't do this as his day job; he's just a dude going down rabbit holes. Here's my rebuttal.

AI capex is not a bubble. The telco comparison is intellectually lazy, and here's why. I keep seeing the same recycled bear thesis: "AI spending looks like the dot-com bubble! Look at these capex charts!" As someone who spent years analyzing tech companies and their balance sheets, let me explain why this comparison falls apart under any real scrutiny. (I can tell this poster just went down an internet rabbit hole and came out thinking he was a genius. This would get tossed in the trash on institutional desks.)

The balance sheet comparison is absurd. The dot-com bubble thesis relies on comparing companies like Cisco and WorldCom — leveraged, cash-poor businesses running on hype — to Microsoft, Google, Meta, and Amazon, who are sitting on roughly $500 billion in combined cash reserves. These companies aren't levering up to fund AI. They're spending free cash flow. There's a fundamental difference between a company borrowing to build fiber nobody asked for and a company allocating 15% of its cash pile toward infrastructure it's already monetizing. If you can't distinguish between those two situations, you shouldn't be writing research.

Projecting the 1990s forward is not analysis. The core of every "AI bubble" report I've seen boils down to: "Telco capex went up and then crashed, therefore AI capex will crash." That's not a thesis. That's pattern matching on a sample size of one. The actual dynamics are completely different: the telecom bust happened because companies built supply for demand that didn't exist. AI already has over 1 billion users and is projected to reach 5 billion by 2030. ChatGPT hit 100 million users faster than any product in history. The demand isn't hypothetical — it's here, it's measurable, and it's growing. The monetization is real and it's scaling.
I can tell you from personal experience that my own AI API bills run into the hundreds of dollars monthly — just for individual use. Multiply that across enterprises. Faster, more nimble tech companies are already running $50,000/month Anthropic bills to code entire systems. The idea that enterprises "aren't adopting AI" is a survey problem, not a demand problem. If your sample is Fortune 500 companies whose only AI exposure is Microsoft Copilot, sure, adoption looks tepid. But the companies actually building products — the ones that will define the next decade — are spending aggressively and seeing real productivity gains. Large enterprise adoption is slower by nature. That's not evidence of a bubble. That's a normal diffusion curve.

AI capex obeys fundamentally different economics than telecom capex. Two dynamics make this spending cycle structurally different from anything in the '90s:

Scaling laws are real physics, not hype. Every order of magnitude increase in compute has produced predictable, step-function improvements in model capability. This isn't speculative; it's empirically documented across multiple generations of models. As long as $10B in compute produces a meaningfully smarter model than $1B, the ROI is driven by the technology itself. Companies aren't spending on faith. They're spending because the returns are mathematically observable.

Supply is physically constrained. Fiber was a commodity. You could overbuild it because the inputs were abundant. High-end AI compute is bottlenecked by TSMC fabrication capacity and power grid availability. There are literal, physical limits on how many advanced chips can be produced. If you don't invest $100B today, you cannot catch up in 2028 — the capacity simply won't exist. That's the opposite of a bubble dynamic. Bubbles are characterized by unlimited supply chasing speculative demand. AI capex is characterized by constrained supply chasing demonstrated demand.
The bottom line: Every bubble argument I’ve seen either ignores the balance sheets of the companies doing the spending, treats a single historical analogy as a law of nature, or dismisses real monetization data in favor of vibes. You can spin a report to say anything, I’ve seen hundreds of them on both sides of this trade. But the lazy ones all share the same flaw: they compare the surface-level shape of a capex curve without examining whether the underlying economics are remotely similar.
"Solid analysis" if your day job is something else and you research this stuff in your underwear lol. My rebuttal is the same as the one above: AI capex is not a bubble, and the telco comparison is intellectually lazy.
All in, I'm at about $3,500 total so far, and that includes forming the company. Here's the rough breakdown over the last 8 months:

* Claude Pro: $200
* ChatGPT: $20/month
* X Premium: $11/month (self-promo marketing, I guess?)
* Grok: $30/month (can cut this, don't really need it)
* Azure hosting: ~$65/month
* Google API: accidentally burned credits during testing and racked up a $500 bill (rookie mistake)
* APIs (Claude, GPT, Grok): about $20 each total in usage, but I still need to figure out what it would look like for a really heavy user
* LLC fees were like $70 to the SOS

Even with the Google credit mishap, I'm honestly fine with the spend. For what I've built, $3.5k is cheap. I'd pay a developer way more than that to build this for me, and it was spread out over 8 months. Everything is tracked cleanly in my budget-to-actuals report inside the Hub, pulled straight from my bookkeeping tab. And I can trim some of this if needed: the GPT and Grok subscriptions alone would cut a decent chunk monthly.

Net: ~$3,500 invested to date, fully tracked, and flexible going forward.
Fair call & appreciate the flag. Those are likely Prisma engine binaries (auto-generated by npx prisma generate) and Next.js build artifacts. They shouldn’t be committed to the repo. I’ll clean those up and add them to .gitignore. Nothing in this repo runs without you providing your own API keys and spinning up your own database; there’s no prebuilt executable to trust blindly. But you’re right that the repo should be cleaner. Thanks for looking.
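The cleanup described above usually amounts to a few `.gitignore` entries. A sketch, using common defaults for Next.js and Prisma; the generated-client path varies with the `schema.prisma` output config, so adjust to the repo's actual layout:

```gitignore
# Next.js build artifacts
.next/
out/

# dependencies (Prisma's downloaded engine binaries live under node_modules/.prisma)
node_modules/

# regenerated by `npx prisma generate` if the client is output outside node_modules
generated/

# local env files holding API keys and database URLs
.env
.env*.local
```

Files already committed also need a `git rm --cached` before the ignore rules take effect.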
I imagine you could set up multiple agents in such a way that they queue up requests through one single server or system, and then that system makes API requests on their behalf. Still only need a single seat in that scenario.
TastyTrade API, Federal Reserve API & Finnhub for now. I'm always looking for additional data sources to tap into. I know there are a bunch you can pay for, but I'm just not ready to do that right now.
TastyTrade actually accepts European accounts. It's US-based (SEC/FINRA regulated) but allows international users, so you could use the exact same API and data pipeline. That said, Interactive Brokers is probably the most popular option in Europe for options trading; I have not set that one up yet, but it's next on my list!
Real-time, and it’s free with a funded TastyTrade account, no professional data subscription needed. The API streams live quotes and full option chains through their DXLink WebSocket (powered by dxFeed). The scanner endpoint with IV rank, IV percentile, HV, term structure, etc. is also real-time. Zero extra cost on top of a regular brokerage account.
Is the TastyTrade API a real-time or delayed quote service? I'm guessing you need to pay the professional price ($31.50/month?) for real-time data if it's delayed?
Fortune 20 insurance exec here: absolutely, I've seen countless pitches from LLM API wrappers who can't even give us a meaningful result using retrospective data. I've had to fire and PIP several people over the last few months for taking AI slop to potential clients as well. I have seen no meaningful efficiency gains from AI.
Huh? AI coding agents absolutely shred through tough data sets like it’s nothing. You point it at some obscure data set or crappy API documentation and say “figure it out”. Let Claude Code try 50 different approaches to integrate while you sip your coffee and scroll Reddit. By the time you come back to your session you either have a working prototype or all the information you need for an extremely verbose feature request.
Enterprise blockchain systems license the tech and integrate it into apps where users never directly buy tokens. In those cases, tokens function as backend settlement units or infrastructure credits, similar to API usage or cloud compute credits. You could use the exact same argument to say "Well, I haven't seen my buddy Jimmy trading cloud/API credits, have you?" Blockchain in this case is infrastructure that end-users in most cases don't even need to know exists. The question isn’t whether you see people buying funny tokens on exchanges. The question is whether businesses are paying to use the infrastructure and whether the token is structurally required (technologically or economically) in that process. During your DD you should find answers to these questions: 1. Where do DVLT’s operational flows originate? 2. Who are their actual clients? 3. In what scenarios does blockchain provide a structural advantage over a centralized database? 4. What measurable efficiency, cost reduction, or market expansion does it create? These are the right evaluation criteria.
I made a working app in Claude yesterday in about 30 minutes and didn’t even know what an API key was when I started.
Of course that's your contention. You're a first-time SaaS bear. You just got finished listening to some podcast, Dario on Dwarkesh, probably. Now you think it's the end of white collar work and seat-based pricing is screwed. You're gonna be convinced of that til tomorrow when you get to "Something Big is Happening". Then you'll install ClawdBot on a Mac Mini, vibe code a dashboard on top of a postgres database and say we're all just a couple ralph loops away from building a Salesforce competitor. That's gonna last until next week when you discover context graphs, and then you're gonna be talking about how the systems of record will be disintermediated by an agentic layer and reposting OAI marketing graphics.

"Well, as a matter of fact, I won't, because ultimately the application layer is just …." The application layer is just business logic on top of a CRUD database. You got that from Satya's appearance on the BG2 pod, December 2024, right? Yeah, I saw that too. Were you gonna plagiarize the whole thing for us? Do you have any thoughts of your own on this matter? Or... is that your thing? You get into the replies of anyone posting a SaaS ticker. You watch some podcast and then pawn it off as your own idea just to impress some VCs and embarrass some anon who's long SaaS?

See, the sad thing about a guy like you is in a couple years you're gonna start doing some thinking on your own, and you're gonna come up with the fact that there are two certainties in life. One: don't do that. And two: you dropped thirty grand on Mac Minis and LLM API calls to come to the same conclusion you could've got for free by following a handful of VC accounts.
I'll let Gemini explain why you're wrong. 😊 This whole argument is built on massive blind spots and a few convenient strawmen. The author fundamentally misunderstands *how* AI threatens the SaaS business model. Here is exactly where the logic falls apart:

### The SMB Delusion

Calling SMB revenue a "rounding error" is completely out of touch with reality. Massive tech companies—Shopify, HubSpot, Intuit, Atlassian, Mailchimp—are built almost entirely on the backs of small and medium-sized businesses. Even for enterprise behemoths like Microsoft or Salesforce, the mid-market and SMB tiers are huge revenue drivers. If AI gives smaller businesses the ability to spin up cheap, automated micro-tools instead of paying for subscriptions, a massive chunk of the SaaS sector's total market cap goes up in smoke.

### The "Vibe Coding" Strawman

The author sets up a false dichotomy: either an enterprise buys a massive SaaS platform, or their CEO tries to build a custom CRM over the weekend using a prompt. That's not the actual threat. The real threat is the hyper-efficiency of internal engineering. Enterprises already have dev teams. If AI makes those internal developers 10x or 100x more productive, the "build vs. buy" math changes instantly. A bank doesn't need to rely on a hallucinating AI agent; their own security-cleared, SOC2-compliant dev team can just build and maintain the necessary tools in a fraction of the time and cost it used to take. They don't need to outsource the complexity if AI just automated the complexity.

### The Seat-Based Death Spiral

This is the most glaring logical flaw in the essay. The author points to OpenAI and Anthropic charging $25–$30 a seat as proof the model is fine, completely ignoring that their real enterprise scale is built on API consumption (charging for compute/tokens), not user seats. More importantly, traditional SaaS is a tax on human headcount. You pay per seat for Salesforce, Zendesk, or Slack.
If an enterprise uses AI agents to automate 80% of its customer support, they don't need 100 Zendesk licenses anymore—they need 20. The AI doesn't need a software license. The SaaS vendor's revenue collapses, even if the enterprise technically never stops using the product.

### Margin Compression

SaaS companies have justified their massive recurring fees because building reliable, secure software from scratch was historically incredibly hard and expensive. AI lowers the barrier to entry to the floor. When building software becomes cheap, margins compress. Why pay an incumbent vendor $500k a year for project management software when a hungry new startup can use AI to build the exact same secure, HIPAA-compliant tool and undercut them by 80%?

**The Bottom Line:** Wall Street isn't worried that global banks are going to start "vibe coding." They're worried that AI destroys the pricing power, the defensive moats, and the human-headcount-growth loops that made SaaS a cash cow in the first place.
I second this. You can negotiate a flat margin rate before you migrate your funds, and if you ask they will probably give you a nice cash bonus to do so. Schwab also has an API if you're into rolling your own backtests and algotrading.
Been playing around with self-hosted/local LLMs lately... 3 things that a lot of people really should know, and currently have no idea about:

1. API data for the major AI providers is INSANELY expensive. It isn't a monthly subscription; you pay for compute like gas. But the thing is, you have no idea how much something will cost you to run until it's too late. None of the providers have functional usage tracking at all, and the cost to do even simple tasks is absolutely ridiculous. This will only get worse when corps enshittify the product to make back their investments.
2. Even with a beast computer setup, your local LLM is going to suck fat, stinky, hairy balls. There's a reason that shit is so expensive.
3. Hobbyists are starting to spend stupid amounts of money on hardware and compute for shitty AI setups that do dumb shit like check the weather, send you AI-slop "daily reports" that are like 10% as useful as spending 5 minutes looking things up on your own, and *spam-calling "leads" to try to sell people stuff*. Yep, telemarketing is back, baby!

Conclusion: Calls on Apple. People who think they're smart are idiots who will spend 3x more money on Apple products for no good reason whatsoever.
I am pretty sure that any big company could discuss that with the big players and get what they need. Would they really lose billions of dollars' worth of API calls just for the API to remain non-compliant?
As someone who uses the API daily, I cannot fathom the retardedness of people thinking we're in a bubble
I work super deep in this space. Seer is a great resource, so it was cool to see this post. Ready for an essay?

Everyone has seen organic traffic drop and impressions skyrocket. Look up clicks/impressions decoupling, or the "crocodile effect". Google also changed their API limits from 10 pages to 1 or 2, so there were weird dips in impressions when we saw what happens once bots can no longer access the last 8 pages of Google: about 50% of everyone's impressions disappeared, and average positions all jumped up to their "real" place in the first 2 pages, since barely any human goes further; you're likely to have 0 impressions instead of 100 on page 9. Seeing more real, but still bot-inflated, numbers was pretty nuts and surprising.

All this being said, year-on-year reporting is fucked for organic search, but the weird thing is conversions are up for almost all my clients. We track and influence AI mentions; the traffic is tiny, but intent is the big, big thing. All the window shoppers that read blogs and info are no longer clicking; they just read overviews, which is why info sites like Business Insider crumble. Product and service businesses now need to consider how they're shown in AI search, but at the end of the day people click to a site to take an action, or use info from AI to make a zero-click conversion, like calling a phone number, which is nearly untrackable but still becomes a lead. My clients are down 40% on traffic but up 10% on impressions; conversion rates for organic are huge now. AI is just eating up the time wasters, but it's now a lot harder to target top of funnel with blogs like you used to.

The real risk I see is adoption from lack of control. At Google's own internal marketing event for agencies 2 days ago (though they've been saying the same thing since Google Marketing Live last year), they were SUPER DESPERATE for us to move keywords to broad match and turn on AI Max, which basically means matching to keywords you aren't even targeting because the AI infers them.
Marketers don't like the lack of control, and legally we can't use it for regulated industries like finance, so there's big pushback. But Google needs us to, because otherwise there's no way for ads to show in AI; searches are way too niche and long-tail. Marketers like control but ultimately need to get their clients showing up wherever there is relevant intent, so we will see how the product develops. Imho this is the real problem they need to solve, and they are internally prioritizing it very clearly.
Claude Code and API services.
No, it's not. It is a translation layer: it translates Windows API code and DirectX calls into Linux calls in real time. An emulator is more like a VM. That is not what Proton is doing.
Through the API or subscriptions? What are you talking about? OpenAI is growing revenue exponentially and is in the process of raising between $50-$100 billion. They'll be fine, especially as models get more intelligent and use fewer tokens.
Just as soon as SaaS companies voluntarily give Cowork official API access. Which will only happen if you think your stock should be valued like a utility.
> Open claw is just hype. It’s a vibe coded token devouring mess

I agree to an extent; I think it should be looked at as a prototype of what is possible with today's models. Personally, I'm going to have AI rewrite the whole thing for me in Python, make API token storage more secure, and add something to (hopefully) prevent prompt injection. I spent about 8 hours with Openclaw: quite fascinating, but it needed work. Model selection and context optimization also need to be worked on. The idea of downloading a plugin is dumb, but a markdown file that describes a behavior for your local AI to create is awesome. At some point in the next few weeks I'll make my own version of Openclaw in Python.
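On the "more secure token storage" point: the simplest fix is to never keep the key in a plaintext config the agent can read back out. A minimal sketch, assuming an environment variable (the name `ANTHROPIC_API_KEY` is just the conventional one; a secrets manager or the `keyring` library would be a step up from this):

```python
import os

def load_api_token(var_name: str = "ANTHROPIC_API_KEY") -> str:
    """Read an API token from the environment instead of a plaintext config.

    Fails loudly if the variable is unset, so a missing key is never
    silently replaced with an empty string and sent to the API.
    """
    token = os.environ.get(var_name)
    if not token:
        raise RuntimeError(
            f"{var_name} is not set; export it in your shell or a secrets manager"
        )
    return token
```

It doesn't solve prompt injection, but at least the key never lives in a file the model (or a vibe-coded plugin) can casually read and exfiltrate.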
Brother. Lol! Whose LLM are they calling, dawg? Anthropic's LLM. Congrats, you learned what an API does.
Tried to use the IBKR API and got frustrated after getting only the free market data after 2 days of playing with the code. Wasn't a fan of how complex it was, so I signed up for a brokerage account with Tradier, and after 2 weeks or so got approved. The options fetching is significantly more straightforward and universal. Would HIGHLY recommend checking it out, though there are some account minimums ($2,000 and 2 trades per year) you need to meet so they don't charge you inactivity fees.
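To show what I mean by "more straightforward": with Tradier it's a plain REST call with a bearer token. A rough sketch (check their API docs for the exact endpoint and response shape before relying on this; the chain response I'm parsing here is my understanding of their `options/chains` format):

```python
import requests

TRADIER_BASE = "https://api.tradier.com/v1"

def fetch_option_chain(token: str, symbol: str, expiration: str) -> dict:
    """Fetch one expiration's option chain as JSON from Tradier."""
    resp = requests.get(
        f"{TRADIER_BASE}/markets/options/chains",
        params={"symbol": symbol, "expiration": expiration},
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def strikes_by_type(chain_json: dict) -> dict:
    """Group strikes from a chain response into calls and puts."""
    grouped = {"call": [], "put": []}
    for opt in chain_json.get("options", {}).get("option", []):
        grouped[opt["option_type"]].append(opt["strike"])
    return grouped
```

Compare that with IBKR, where the same thing takes a gateway process, a socket connection, and a round of `reqSecDefOptParams`/`reqContractDetails` calls.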
> Yes. But open source models are getting better too

That is definitely a good point. I think Deepseek V4 will be coming out soon, but I also think there will be constantly increasing demand for models that won't fit on most computers.

> I track my Claude API usage, sometimes a simple bug fix costs $2-3 per 5 minute session

Yeah, I noticed that Opus 4.6 consumes too many tokens; that seems to be the consensus. I think they'll probably be able to improve on it in 4.7. I switched to Codex 5.3 for this reason. These models are getting better so fast it's hard to keep up. I find going to Grok and asking for updates from the past 2 days is the best way to keep up.

> So far everyone is just dropping cash into the fire pit to secure the leading positions in the future, but at some point investors start looking for money back and monetization won't be easy.

Yeah, I think there could be a gap between spend and profitability, where you could see stocks take a dive before these companies figure out how to properly utilize AI in their workflows.
Yes. But open-source models are getting better too, and while they are not as powerful as ChatGPT or Claude, tasks like analyzing documents from your example could be done in-house, locally or in the cloud, bypassing the main AI providers. Many other things as well. China applies some pressure in this domain; another Deepseek-style story will happen again. Privacy concerns also play a big role, as does the single point of failure / vendor lock-in for big business. I think those who can afford it will adopt, or already have adopted, multi-model approaches in their own cloud for all the reasons above.

That doesn't mean it won't be profitable, just that there are so many players. OpenAI, for example, burns so much money on infrastructure that their costs will keep rising. At some point the consumer might question whether a subscription is worth the money (unless their job provides it). The more powerful these things get, the more expensive they are, and token usage is a problem. Also, they are not very focused, trying to do everything at once, which has already started to backfire; there is a non-zero chance they collapse under their own success.

Existing SaaS will be shaken up, but it won't go away as a class, just get substituted by better, more modern players. Business requires accountability, and sometimes it's just easier to delegate. So far everyone is just dropping cash into the fire pit to secure the leading positions of the future, but at some point investors will start looking for their money back, and monetization won't be easy. I track my Claude API usage, and sometimes a simple bug fix costs $2-3 per 5-minute session, plus you still need a person in place to review and approve it. That's already like minimum wage in the US :) But it's all crazy and moving fast; nobody really knows where it will end.
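The $2-3 per session figure is easy to sanity-check from your usage logs. A minimal sketch; the per-million-token prices are placeholders, not any provider's real rate card, so plug in current numbers yourself:

```python
def session_cost_usd(input_tokens: int, output_tokens: int,
                     price_in_per_m: float = 15.0,
                     price_out_per_m: float = 75.0) -> float:
    """Estimate one API session's cost from its token counts.

    Prices are USD per million tokens and are placeholder values,
    not real published rates.
    """
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# A "simple bug fix" with a lot of repo context loaded:
# session_cost_usd(120_000, 12_000) -> 1.8 + 0.9 = 2.7
```

The point the arithmetic makes: input context dominates the token count, but at typical pricing asymmetries the output still contributes a big chunk, which is why a chatty agent that re-reads the repo every session lands in that $2-3 band so fast.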
Yes, that, or it’s a wrapper using the OpenAI API for some “AI powered solution”.
Not gonna lie, I had a similar experience with IBKR. Powerful platform, but the API setup feels way more complicated than it needs to be. If you’re not running full-on quant infrastructure and just want to build some tools around scanning and tracking, moomoo’s API and data access felt a lot more straightforward to me. The documentation is cleaner, and you don’t have to deal with as many weird session or local gateway quirks.
As someone who uses Meta's business API, I can assure you they have no idea what the fuck they are doing.
I use ib_async and it's not too bad. I'm not sure what the "1 session requirement" is; a single account can have multiple logins. The API uses one login on one computer, and I use another login on another computer/phone.
Maybe the Python API Toolkit is easier to use. IBKR's main API is asynchronous and event-driven.
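For anyone unfamiliar with what "event-driven" means here: you don't call a function and get data back, you register callbacks and the client pushes events into them as they arrive off the socket. A stripped-down illustration of that pattern in plain Python; this mimics the shape of IBKR-style wrapper classes, but none of these names are the real API:

```python
import queue

class MarketClient:
    """Toy event-driven client: requests return immediately,
    data arrives later via registered callbacks."""

    def __init__(self):
        self._handlers = {}

    def on(self, event: str, handler):
        """Register a callback for an event name."""
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, *args):
        """Fire all callbacks for an event (in real use, a socket
        reader thread does this)."""
        for handler in self._handlers.get(event, []):
            handler(*args)

# Consumer code often bridges callbacks back into a blocking queue
# so the rest of the program can stay synchronous:
prices = queue.Queue()
client = MarketClient()
client.on("tick_price", lambda symbol, price: prices.put((symbol, price)))

client.emit("tick_price", "AAPL", 189.50)
symbol, price = prices.get(timeout=1)
```

That callback-to-queue bridging is most of the friction people hit with the raw IBKR API; libraries like ib_async exist largely to hide it behind async/await.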
IBKR API is a pain to work with but it's worth it once you get it running. Massive and Polygon are solid alternatives if you just need market data. For actual trading automation look at Alpaca or Interactive Brokers direct. The learning curve sucks but the power is there.