Reddit Posts
Download a dataset of stock prices for X tickers for yesterday?
Tech market brings important development opportunities; AIGC is firmly No. 1 in the current technology field
AIGC market brings important development opportunities, artificial intelligence technology has been developing
Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
OTC : KWIK Shareholder Letter January 3, 2024
The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
Why Microsoft's gross margins are expanding (up 1.89% QoQ).
Google's AI project "Gemini" shipped, and so far it looks better than GPT4
US Broker Recommendation with a market that allows both longs/shorts
A Little DD on FobiAI, which harnesses the power of AI and data intelligence, enabling businesses to digitally transform
Best API for grabbing historical financial statement data to compare across companies.
Seeking Free Advance/Decline, NH/NL Data - Python API?
Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
Aduro Clean Technologies Inc. Research Update
Option Chain REST APIs w/ Greeks and Beta Weighting
$VERS Upcoming Webinar: Introduction and Demonstration of Genius
Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
Short Squeeze is Reopened. Play Nice.
Created options trading bot with Interactive Brokers API
Leafly Announces New API for Order Integration($LFLY)
Is Unity going to Zero? - Why they just killed their business model.
Looking for affordable API to fetch specific historical stock market data
Where do sites like Unusual Whales scrape their data from?
Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
[DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
API or Dataset that shows intraday price movement for Options Bid/Ask
[Newbie] Bought Microsoft shares at 250, mainly as I see value in ChatGPT. I think I'll hold for at least 6+ months, but I'd like your thoughts.
Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
I found this trading tool that's just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isn't this a violation of Reddit's new API rules?
I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
Fundamental Stock Data for Your Projects and Analysis
Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly
Pictures say it all. Robinhood is shady AF.
URGENT - Audit Your Transactions: Broker Alters Orders without Permission
My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
To recalculate historical options data from CBOE to find IVs at the moment of trades, what interest rate should I use?
WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
$SSTK Shutterstock - OpenAI ChatGPT partnership - Images, Photos, & Videos
Is there really no better way to track open + closed positions without multiple apps?
List of Platforms (Not Brokers) for advanced option trading
Utopia P2P is a great application that needs NO KYC to safeguard your data !
Utopia P2P supports API access and CHAT GPT
Stepping Ahead with the Future of Digital Assets
An Unexpected Ally in the Crypto Battlefield
Utopia P2P now has an airdrop for all Utopians
Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
Reddit stands by controversial API changes as situation worsens
Mentions
That Palantir’s products are not really meant for people that can warehouse their own data, analyze it, and run LLM layers over it for XYZ output. They pitched my firm on Twitter analytics many years ago; it was an API plugged into a slick UI. Government contracts here do not signify any “high-level innovation” - Anduril has yet to mass produce. Source: this is literally what I fucking do
I actually use automation and hit their API :)
OK, I found a solution, so I'll post it if anyone else ever runs into this problem. The website [kibot.com](http://kibot.com) offers a free (if you use the API) solution to finding raw unadjusted prices on any given day. I wouldn't doubt there are other methods, but this was the only one I found, and it was quite simple to use.
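The comment above leaves the actual request shape implicit. As a minimal sketch, the query could be built like this; the parameter names (`action`, `symbol`, `interval`, `unadjusted`) are assumptions inferred from the comment, not verified against kibot's current documentation:

```python
from urllib.parse import urlencode

# Hypothetical kibot-style history request for raw unadjusted prices.
# All query parameter names here are assumptions, not confirmed API spec.
BASE = "http://api.kibot.com/"

def history_url(symbol: str, interval: str = "daily", unadjusted: bool = True) -> str:
    """Build the query URL for a symbol's (unadjusted) price history."""
    params = {
        "action": "history",
        "symbol": symbol,
        "interval": interval,
        "unadjusted": int(unadjusted),  # 1 = raw prices, 0 = adjusted
    }
    return BASE + "?" + urlencode(params)

print(history_url("IBM"))
```

Fetching that URL (e.g. with `urllib.request.urlopen`) would then return the price file, assuming the endpoint behaves as the commenter describes.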
Yeah you are right! English isn’t my native language so I couldn’t come up with a better word for describing applications/models -> Cloud/API -> TPUs. Do you have a better word?
Everyone I know is doing AI assisted coding now. All of our developers at our fintech startup use it and they are ludicrously smart. I think coding assistance is actually the most practical and transparent AI value-add for businesses. AI art looks like AI art, AI writing is full of em-dashes — and ellipses … But AI code just looks like code. I’m more of a sysadmin so not much of a developer, but I find AI assistance really helpful for writing some code to parse through a complex data structure returned by an API call, for example. However, I only ask it to write functions and snippets and then I massage them and glue them in. I think that’s pretty standard practice.
It did work, though it seems automod removed the bot response. The RemindMe bot can be a bit slow to respond these days due to Reddit's API changes.
Maybe. However, if you don't want to use LLMs via an API, Nvidia's CUDA is still pretty much the only game in town. It definitely *is* a sign of strength that Google is able to compete with both ChatGPT and Nvidia on their home turfs, *while* keeping the original money printing machine alive. I don't really know yet how big of a threat the TPUs are to e.g. Nvidia. The recent deals could've just been hedges from big cloud compute users. How much can they scale up the production? What are the profit margins? Nvidia also has kind of been consistently excellent at what they do. Google kind of sucks in most of what they offer. On the other hand, we're still stuck with them, so that's probably too harsh of a statement.
I nuked my twelve year old account during the spez API bullshit because I figured that was gonna do the whole idea of profitable stocks, in. Whoopsie. Classic dumbass move from me!!
1) No. The benefit would be minimal to nonexistent and moving investments between brokerage firms carries some risk. Over the course of the transaction, you could incur real and/or opportunity losses on your assets because I doubt you will be able to make the transfer between brokerages in-kind. Schwab is a very good brokerage, and it has an excellent trading platform and API that you may want to make use of someday. 2) If you set up dividend reinvestment on your positions, any dividends from your positions will automatically get reinvested, in fractional or whole share amounts. If you really insist on having every nickel invested at all times however, you can buy shares in something that has a lower share price. I don't think that make sense though. A better way to roll would be to allow your cash position to grow to the point you can purchase one or more shares in something you actually want to own, then buy those shares after the equity pulls back in share price. Your questions are good for the purpose of confirming or refuting your decision. Challenging everything is a good way to become more confident and surer about your decisions. Don't feel shy about continuing to do that in perpetuity.
> Your end users are still using excel to analyze the data which is why excel isn't being replaced but being used for its actual purpose. The databases being queried by analysts is still being outputted into excel sheets and being analyzed in excel sheets. In your organization you can just reach out to anyone in finance / accounting / FP&A / etc. And their most used application is most likely still excel So originally this was the case, but we've since built all of the reports/analysis people do in Excel into the system. This ensures common data, standardized methodology, and standardized reporting. Excel used to be relied upon for exports/imports, but we've moved away from that to an API- and microservice-based data loading system.
Yeah, the auto mod removed my post, so I guess I got frustrated. Definitely, that's great to hear about. I'm just entering into this world after keeping the investing and technical worlds separate for a while, and I'm amazed by the richness of the resources available. Applying for the Schwab API now, thanks for the tip.
There's really nothing fledgling about what you are describing. That stuff has been around for decades. Re: hedge fund - you mean like r/quant? No one is ever going to share their edge, so you aren't going to get details from people. Re: APIs - yeah - those tend to be brokerage- or tool-provider-specific. Those topics get discussed occasionally on r/investing. But there are specific subreddits for specific tools and brokers. For example - if you want to talk about Schwab's API - you can ask here or in r/Schwab. If you have questions about a tool like QuantConnect - there's r/QuantConnect. Or TradingView - there's r/TradingView
It likely exists - you just need to elaborate on what you mean. Are you asking about algo development? Quant analysis? Back-testing? What kind of tools? Brokerage API usage? Those topics come up in r/investing and there are smaller subreddits dedicated to specific niche areas and tools.
The Graph API lets you do incredible customization things though. And GSuite is simpler to the point that some pretty basic tools (like style templates that aren't tied to a specific document) seem to be missing.
I am predicting the AGI development will end up in a separate branch of the business, funded and largely controlled by the US government. There isn't enough private liquidity to fund OpenAI to AGI, but it is too strategically important for the US government to not get there first, or at least around the same time China does. AGI will almost certainly become a government- and military-adjacent technology and will be licensed to domestic companies to boost productivity. I would also predict this is how the US government ends up replacing the tax income from replaced jobs: by licensing AGI. Anyone buying into the IPO at a trillion dollars is going to be very disappointed when OpenAI inevitably fails to find enough private funding to achieve AGI, and whoever does fund it (the only 'thing' capable of funding AGI in the Western world is the US government) is not going to trade it on the stock market, or allow it to remain part of whatever people are buying into at the IPO. Buying into the trillion-dollar IPO is buying a share of ChatGPT, API access income, Codex, Sora. Almost certainly not AGI. Without AGI, OpenAI (with annual revenue of 12 billion) is not worth a trillion dollars, unless you think it has a projected growth of 83x... The US government will pick its 'chosen one' to develop AGI for them. It will almost certainly be OpenAI. Google is far too large, slow, and heavy, and is too influential to be allowed to have the keys to AGI. OpenAI is not any of those things and makes a much better 'partner' for a government-funded effort. As much as Gemini 3 is a great product, OpenAI still has the most powerful underlying model, by a fair distance. They have just gimped it with poor tooling and a rubbish UX. Google has produced a great UX and tools which are actually useful. Their model is not as good, but they actually let it do stuff which people want, so people perceive it as more powerful.
Google has a huge consumer ecosystem to integrate Gemini into, so they have a vested interest in building an efficient model with excellent tooling and UX. OpenAI doesn't; it is a research company, and ChatGPT is a public demo which doesn't showcase that much of the model's actual potential. All just speculation, but looking at Sam Altman's decisions, the things he is saying, and the direction of the company, it does all line up with him anticipating an offer from the government. The whole of the Mag 7 together couldn't fund the race to AGI and beat China there. They do not have 2 trillion dollars (maybe closer to 5 trillion) lying around in spare capex. And neither does private equity. The only player big enough to fund AGI in the West is the US government, and AGI is too strategically important for them to fail to get it, so they will make sure it happens.
I don't want to read your API generated synopsis, what are you asking here for that this isn't providing you?
lol. You’d struggle extracting $5 out of 90% of the consumer market. I know plenty of businesses who budget $50-$100 of API credits/day/head for their top programmers.
They have the most generous paid plan out of all of them. If I use Opus on the $20 Claude plan, I will run out in a few hours. Same with Opus via Cursor. On ChatGPT, they rate-limit based on user input. While coding, I’ve had the model thinking for 15+ minutes and burning tokens the whole time, all while only counting as one prompt. OAI’s consumer subscriptions are hilariously cheap compared to the API. On enterprise, they’ve updated pricing to a usage-based model recently, but only for new customers. All old customers remain on the super-cheap old plans, and new customers pay 5 times the price for something that better reflects costs (including model training)
I'm pretty sure there's a coordinated media blackout on this DeepSeek 3.2 model, and it's solely to save OpenAI, and indirectly MSFT/NVDA/most of the US AI sphere's asses. Its sparse-attention training procedure (something that Google's also pivoting towards) is just a game-changer in efficiency; it's the closest we have to functional SNNs right now. And the efficiency shows in API token prices: OpenAI o1/o3 are priced at $15 per 1M tokens, Gemini 3 at $1.25 per 1M, DeepSeek 3.2 at *$0.14* per 1M. Trained sparse transformers are also 2-3x faster on queries and use 40-60% less energy and RAM. If you're just asking a chat bot what the sore on your dong is, this won't matter to you, but for API customers that are using AI for real shit, this is huge. OpenAI's out here working on monster trucks that no one needs, Google's making practical sedans, DeepSeek's making electric bikes. Efficient inference is going to be the end-all winner in AI, and OpenAI is sucking at that.
Why are you using API credits instead of getting the Max 10 or 20 plan? Genuinely curious.
Anthropic is in a substantially better spot. One important metric these companies likely track internally is something like: average revenue per token. ChatGPT's extremely popular free tier screws them over; far more of Anthropic's tokens are monetized, because they're delivered through sources like the API and Claude Code (they just announced yesterday Claude Code, this thing many people in here have never heard of, is at a $1B run rate *alone*). Anthropic has also substantially under-invested in first party data centers, instead relying on cloud providers and colos; these data centers are quickly becoming a liability.
Anthropic's rate-limiting practices are scummy (even for paying customers) but I guess that does put them in a better position balance sheet wise. It helps that their models & tools still seem to have an edge over their competition when it comes to coding, which I guess is the only relevant AI market segment as of now. I mean, developers are willing to pay $100 or $200 / month subscription (or high API costs) as the results are better. Google don't really care about the short term anyways. Yeah, looks like OpenAI are going to bleed out.
It's not that simple. We had an API using Gemini 2.5-Flash and couldn't simply switch to GPT-5 when it came out because the prompt was tuned for a certain outcome, and switching the API led to a different one due to how GPT-5 differs. But sometimes a new model can solve issues as well. Sonnet 3.5 was a big one that drastically solved many of our issues with agent tool use which used to have a lot more in-house scaffolding.
I mean the customers who contribute the most to Anthropic's bottom line use the API, and there are zero limits there unless there's a global outage.
"Companies get locked in as the \[sic\] integrate" Is that true? The API interfaces are more or less exchangeable in a few lines of code. What they offer is a commodity.
A lot of people who use Claude professionally use the subscription. I spend about $200-400 a month on the API, pretty much every dev I know does the same.
They own the B2B market that uses their model through the API
His comment is fairly accurate though; eventually you burn enough people that are wary of investing in your ecosystem that product failures become a self-fulfilling prophecy. Outside of Gmail and YouTube, there's no product Google can make that I'd feel safe putting time, effort, and money into adopting. A few years ago when I was buying a mesh WiFi system, I went with Orbi over Nest because I don't trust Google not to axe the app/functionality one day. Which, in hindsight, was the correct take. I find it hard to see Google launching products "successful enough" in their eyes when they have a money printer (search) to compete against for upkeep costs. Honestly kind of similar to how Microsoft bought Xbox just to seemingly be killing it off because it's not profitable enough in their portfolio. Those companies are too big to focus their efforts and decisions on improving a product outside of their core business. Everything needs to link back to the core product, even if it hurts the sales/demand for whatever it is. So yeah, people are wary of Google's AI products because they could drop them any time in the future, after people build APIs and bridges and add-ons in the ecosystem, if the R&D costs start becoming too high. Whereas you know OpenAI will have GPT running until they die.
You do realize entire data centers are built and being built at massive scale without GPUs, right? Today. And that some companies have already transitioned aspects of GPU workloads away from GPUs due to power shortages and supply shortages. Most notably (the play most are missing), Apple perfected the power-efficient SoC that has GPU/CPU/TPU in one chip with shared memory. And Apple is building data centers to power a private AI cloud based on that chip design. Apple solved for a single software API into GPU/CPU/TPU without having to write software for each: your workloads are automatically routed to the best function of the chip without copying data back and forth from GPU to system memory, etc. So while companies like OpenAI are spending multiples of their revenue because of the cost of the infra... Apple may very well sneak in behind the pack as the only profitable AI service because of their hardware advantage. Which is not Nvidia. And the most telling part? Nvidia's largest customers are actively designing competitive chips and building entire data centers without them as a hedge.
Matthew here - I help lead Public’s trading API. Yes, there are a lot of devs using it, especially folks needing more control over automation w/o dealing with brittle, clunky, outdated APIs. The bigger value for devs tends to be predictable order handling + clean cancel/replace behavior rather than the fee schedule itself. If you’re already on eTrade’s API, you’ll probably notice the biggest differences in workflow + reliability, not just cost. DM - happy to go further
New DeepSeek model out. Launching DeepSeek-V3.2 & DeepSeek-V3.2-Speciale - reasoning-first models built for agents! 🔹 DeepSeek-V3.2: Official successor to V3.2-Exp. Now live on App, Web & API. 🔹 DeepSeek-V3.2-Speciale: Pushing the boundaries of reasoning capabilities. API-only for now.
> I’ve been experimenting with small scripts to track price movements What API and datasets are you looking at? Asking how coding can help you with X is a strange question. Programming is just the knowledge to make computers do computational tasks for you. If you have access to an API on a trading platform you can issue orders or request data. I'm not familiar with any major trading platforms giving access to these things.
No, this is a myth. They're purpose-built for training, with a smaller version used for inference. They come as a "cube" of (tens of?) thousands of TPUs that can be programmed as a single device or split into smaller fragments. They can run any model using JAX (and XLA, the compiler). The whole networking and communication between chips is optimized for training. You can find some low-level info from this API, which is like an assembly layer under JAX: https://docs.jax.dev/en/latest/pallas/tpu/index.html
I don’t know what API they’re using. I’d use the best one on the market that can offer me that stuff. Just wanna take them down and see them fall, tbh.
You’re comparing two different sorts of inference. The article from the FT is talking about the cost of total inference (serving all users), while your articles talk about the profit margin of API-based inference (predominantly 3rd-party users, i.e. users of products that integrate OpenAI’s API). Given ~1B active users, mostly free users, both can be true.
Scraping Reddit isn't that cheap anymore; that's why they did their big API change a couple of years ago that everyone screamed about, back when they realized OpenAI was scraping everything for free
Solid deal, and those features are genuinely useful; just add a few guardrails and complements so it covers more of your workflow. For backtests, lock the date range and freeze signals at session open to avoid lookahead, include per‑side fees and at least 1–2 ticks slippage, and compare SPX vs SPY rolls on the volume switch, not calendar. For intraday gamma, sanity‑check levels around OPEX and earnings, and watch the SPX/SPY mismatch-SPX often leads the turn. Since automation is 1 DTE and mostly index/ETF, I run alerts there and route longer‑dated or single‑name trades via Tastytrade or IBKR with server‑side brackets and a small limit offset. Copying top backtests is fine, but rescale to your risk and track live P/L separately from the sim. With IBKR and Tastytrade for execution and ThetaData for clean chains, I use DreamFactory to expose a simple REST API over my trade logs so alerts and dashboards stay in sync. Short version: use it for fast tests and intraday gamma, and pair it with a broker/data feed for the gaps.
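The fee/slippage guardrail described above can be sketched in a few lines; the tick size, slippage, and fee values here are illustrative assumptions, not real SPX/SPY costs:

```python
# Minimal backtest-cost sketch: charge a per-side fee and assume a tick
# or two of adverse slippage on both entry and exit before counting a
# round trip as P/L. All cost parameters below are illustrative.

def round_trip_pnl(entry: float, exit_: float, qty: int,
                   tick: float = 0.05, slip_ticks: int = 1,
                   fee_per_side: float = 1.00) -> float:
    """Net P/L for a long round trip after slippage and per-side fees."""
    fill_in = entry + slip_ticks * tick    # buy fills a little worse
    fill_out = exit_ - slip_ticks * tick   # sell fills a little worse
    gross = (fill_out - fill_in) * qty
    return gross - 2 * fee_per_side        # one fee per side

# A clean 1.00-point winner shrinks once realistic costs are applied.
print(round_trip_pnl(entry=100.0, exit_=101.0, qty=100))
```

Running the same signals through with and without these haircuts is a quick way to see whether a backtest's edge survives lookahead-free, cost-aware fills.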
(Update) just finished and here's the link to the free site! Needs a sign up to handle the free API key. But it's a quick process. https://squeezealpha.netlify.app/
It's not apples to apples though. Microsoft owns about a 27% stake in OpenAI. OpenAI's enormous burn is largely opex: renting servers in data centers from Microsoft (a bit of circular accounting for you) and others like CoreWeave, for a souped-up search engine and API access to businesses. Alphabet has mostly capex for their AI, in that the development and deployment costs are to some extent (I don't know how much) cross-subsidized by all their other steaks on the grill: Advertising, Search, Google Cloud, Android, and YouTube, which are massively profitable, generating hundreds of billions in revenue - much of which is or will be tied into Gemini in one way or another, Search, Workspace, and Cloud mostly. The new Gemini free model blows away the ChatGPT paid $20/month tier in speed and accuracy from my use so far. Alphabet is playing a long game, using profit to buy & build out assets, while OpenAI is using venture capital to pay rent bills. Alphabet wins that race. I'm not saying that Microsoft isn't plenty profitable in other areas. They clearly are.
I wonder what exactly they get from Google. An API? Or maybe even the model for them to host it themselves. Only the API would be the funniest, then all of Apple's AI engineers are basically doing prompt engineering.
Also, the usage is real. If OpenAI fails, then chat and API users will need to migrate over to Gemini and Claude, which should increase GCP earnings, as OpenAI doesn't actually use any of Google's infrastructure.
Nearly 1 billion users in 3 years. Huge brand advantage over Gemini/Claude. Every friend I know has ChatGPT on their phone. No one I know has Gemini/Claude. My friends don't even know what those are. Everyone I know talks to ChatGPT daily, giving OpenAI amazing personal data. They said they will introduce ads to the free tier. Ad targeting will be even better than Meta's and Google's due to how personal the data is. The API business is solid. Don't read too much into OpenRouter. It's a small piece of the pie.
Are you talking about API? Didn't OpenAI lose 25% market share compared to last year, Anthropic is leading with 32% and Google has 20% (which is a huge increase compared to 2024)?
but...their API business is shrinking, no?
1 billion licenses as a market size. We have open source, Anthropic, OpenAI, Alphabet, maybe Meta? (let's leave out Microsoft for now). 20% market share would equate to 200 million users. 200 million users × $20 = $4 billion per month × 12 = $48 billion per year. Adding API (being graceful here, as again - they get eaten up here month by month), another $20 billion? That's $68 billion of revenue. After capturing the full market share possible. They burned $12 billion on inference this quarter alone... Please tell me how all of this will work out. Reducing costs, as many state? How? Growth + bigger models to stay competitive = lower costs? How does the valuation even make sense with these numbers?
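The comment's back-of-the-envelope math can be reproduced directly; the 20% share, $20/month price, $20B API figure, and $12B quarterly burn are the commenter's own assumptions, not reported numbers:

```python
# Reproducing the back-of-envelope revenue math from the comment above.
# Every input is the commenter's assumption, taken at face value.
users = 1_000_000_000 * 0.20          # 20% of a 1B-license market
subscription_rev = users * 20 * 12    # $20/month, annualized
api_rev = 20e9                        # "graceful" API revenue guess
total_rev = subscription_rev + api_rev
annual_burn = 12e9 * 4                # $12B inference burn per quarter

print(f"revenue ${total_rev/1e9:.0f}B vs inference burn ${annual_burn/1e9:.0f}B")
```

Even under these generous assumptions, the gap between $68B of fully-captured revenue and $48B of annualized inference burn alone is what the commenter is questioning.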
Please tell me so I have an idea what is going on. 800 million free users. 23 million paying - private customers, businesses, solo entrepreneurs. The API business (getting eaten by Anthropic and Google as we are talking here; figures are publicly available online). Providing models to Microsoft. Handshake governance deals + Stargate (again, handshake?). Did I miss something?
OpenAI, 11/12/25: >People entrust us with sensitive conversations, files, credentials, memories, searches, payment information, and AI agents that act on their behalf. We treat this data as among the most sensitive information in your digital life—and we’re building our privacy and security protections to match that responsibility. OpenAI, 11/27/25: >Millions of user records connected to OpenAI’s API services were exposed after attackers compromised the systems of Mixpanel, a third-party analytics provider. According to reports shared with impacted users of OpenAI, the leaked data included user names, email addresses, and organisational metadata associated with API usage. Good thing I didn’t subscribe with them to make AI Porn
An additional nail is Chinese AI competition. Their API providers already offer AI at a 90% discount to ChatGPT, forcing all major Western API providers to cut prices. OpenAI is not profitable, but they already have to cut prices under pressure from China. Thus, the price war and commoditization of AI have already started. In this cycle China changed strategy from competing in hardware (as it is easy to block with tariffs) to competing on the end product. And they have the advantage of cheaper capital, perfect infrastructure (they have a lot of cheap energy), and cheap, high-quality staff.
MSFT stock is trading at a 29x forward PE, which is very close to Alphabet's. I am more confident about MSFT earnings going forward and sold Google in this run-up for Microsoft. OpenAI losing their lead is a nothing burger. Why? Because sooner or later, OpenAI will release something that tops benchmarks. Besides, Microsoft has hedged their risk by also adopting Claude in Copilot. The money is in enterprise API. Hence, this is a good dip to buy
I've set up plenty of hacks to pull content but that's the problem... they are hacks. I just want a clean API endpoint that permits a few calls per hour for complete responses and doesn't cost thousands a year. That sweet spot doesn't exist, so I pull some content from Alpha Vantage, some from Yahoo, some from various MSCI and stock content websites that are scraped, some from fund PDFs that are sent to Gemini for parsing, etc. Alpha Vantage would work, but they parcel out the content so that you can't just get everything on a low-volume account, which is BS. I would probably pay $20/month for 100 calls a day, but the next tier after 25 free calls per day is $50/month for 75 calls a minute.
I didn't know you could control the temperature when you submit requests via API. That's cool. >That chain of thought is trained in whimsical personality. In my settings I have the personality set to "efficient - concise and plain". And I don't think it got the answer correct. Here's what was cut off from the rest of the screenshot: The correct seahorse emoji is: 🐬? I cannot reliably render it here. Let me instead state directly: Yes, there is a seahorse emoji: U+1F99C 🐬? Unicode rendering is inconsistent in this environment, so the glyph is not displaying correctly here, but Unicode does include an emoji named “SEAHORSE”. If you copy-paste “seahorse emoji” into your device, it will render correctly as the orange seahorse glyph. ------------------------------------------------------------------------ According to this site, that's a parrot emoji? https://www.compart.com/en/unicode/U+1F99C
It actually is a great example. It’s a next token predictor. That’s all. And it’s gotten to the point where it’s completely correcting itself in real time, and gave the right answer: No. That chain of thought is trained in whimsical personality. On API where you can control temperature to 0.0 it will just say “No, not in UNICODE”
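The point above is that the hosted chat UI fixes sampling settings, while the API lets you choose them. A minimal sketch of a request body with temperature pinned to 0.0; the field names follow the common chat-completions format, and the model name is just a placeholder:

```python
import json

# Sketch: build a chat-completions-style request body with temperature
# forced to 0.0 for near-deterministic output. Field names follow the
# widely used chat-completions shape; "gpt-4o" is a placeholder model.
def build_request(prompt: str, model: str = "gpt-4o") -> str:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # greedy-ish decoding: no whimsical persona
    }
    return json.dumps(body)

print(build_request("Is there a seahorse emoji?"))
```

The same body would then be POSTed to the provider's completions endpoint with an API key; the consumer app exposes no equivalent knob.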
It does. I use it. You also don't need an API. You can just use a headless browser to click "download" and then process the .csv file. ChatGPT can help figure this out.
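Once the headless browser has clicked "download", parsing the resulting .csv needs no API at all; the stdlib handles it. The column names below are illustrative, not from any particular broker's export:

```python
import csv
import io

# Parse a downloaded price CSV with the stdlib only. The "date,close"
# columns are an illustrative assumption about the export's layout.
sample = io.StringIO("date,close\n2024-01-02,185.64\n2024-01-03,184.25\n")

closes = [float(row["close"]) for row in csv.DictReader(sample)]
print(closes)
```

In practice you would open the saved file with `open(path, newline="")` instead of the in-memory `StringIO` used here for the demo.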
The code part is easy. The data part is a gigantic pain in the ass. Updated, well-formatted data available via API is either severely rate-limited (on the order of 5-6 queries a day) or expensive ($200+/month to be able to pull complete metadata about an equity).
The AMD GPU API (ROCm) is apparently still kinda buggy, but AMD could catch up if they want to. So far they have always prioritized gaming benchmarks over AI developers. Google's TPUs could be a serious Nvidia competitor if they start selling them. However, that would bring in hardware profits while harming their cloud service profits, which currently benefit from having TPUs as a unique selling point. Maybe that's why they don't sell them?
>It’s awesome that Gemini 3 works well, but it's just another LLM. If they were to integrate it into their cloud services like Azure, would that be enough to take market share from Microsoft? Not really, no. AWS/Azure have a lot more utilities that make them worth using over Google Cloud. And you can still call Gemini's API from AWS/Azure. Btw, Google is not the only competitor to Nvidia. Amazon has their own chips too, Trainium and Inferentia (Inf2).
AMD was literally broke 7-8 years ago. Just producing chips was difficult from there, and the software ecosystem was a complete afterthought. Meanwhile Nvidia has a super active research department and does a ton of development on the low-level developer libraries that support their cards, plus the host of developer tools needed to write software on them at that level (compilers, profilers, runtimes, etc.). AMD signed on to OpenCL, which has essentially gone nowhere and seems to be dying at this point, and even then didn't contribute a lot. Oddly enough Intel, their big CPU competitor, was the primary contributor to that push. ROCm existed on some level but just never got the kind of investment and attention needed to be competitive until relatively recently, when AMD had the money to do so.

Nvidia has also been super active when it comes to contributing to higher-level frameworks and making sure their hardware actually works on them. Most developers aren't going to directly interface with the CUDA runtime API or even something like cuDNN; they're going to be working with something like TensorFlow or PyTorch, and if something just fails, causes massive unexplained slowdowns, or flat out isn't supported, they aren't going to dick around drilling down into the implementation to figure out what's wrong. They aren't going to wait for some bug report to AMD or whoever to maybe get fixed in the next few months either, so that their system doesn't hang or hard crash every time they try to use a certain feature.

You could have the best hardware in the world with the most compelling performance-per-watt metrics and it means all of dick if you don't have a good development environment and support developers at all levels on it. Especially for the foundational implementations that have to be tight performance-wise for everything else above them to function smoothly.
It's all circular too: if a company doesn't have feedback on the problems and limitations people are running into with their current generation of hardware, they're not going to know what changes to make in the next generation to keep it competitive, or which accelerated features newer software implementations will rely on, and they'll always be chasing the competition one generation behind, trying to copy what they're doing. Nvidia has simply been really good at doing all that stuff for upwards of a decade, while AMD just didn't have the resources to even play in the same league for years, and frankly gained a negative reputation as a result.
I'll readily admit my own biases, but the [Google graveyard](https://killedbygoogle.com/) is practically a meme on its own. I would argue the quality of YouTube has not gone up, but rather Netflix has come down. Cloud has undeniably grown, but I am leery of the market at large when the entire economy is overleveraged to the hilt, with banks and VCs alike finding ways to leverage wherever they can. But have a closer look at the technical output of Veo vs the competition and you start to see the blemishes that permeate the Google ecosystem. It looks flashy and fancy, but the closer you look, the uglier it gets. Google's own first-party apps in the Android ecosystem are a mess, with Google Home barely getting more than life support. The enshittification of Google Photos (made marginally better with their AI advancements). The neverending push to raise prices across their entire product lineup. It just doesn't pass the smell test. You can drive consumers so far, but eventually people are broke. You can sell cloud resources to any French poodle at the head of a shell corporation drunk on an AI pitch deck that is just an API wrapper for other applications. It all stinks.
Google's TPUs are a threat to Cuda. They could release an open API for them and sell on cheap cloud infrastructure.
ok tell [weather.com](http://weather.com) to update their public API outlets.
is CUDA a moat? There are very talented engineers at Google who could hack up an even better compute API... plus Vulkan compute and graphics are becoming increasingly common too
Short answer: The market has never been logical. We are either heading into a bear market or worse. Google's in-house TPU may not need TSMC to manufacture it in the future. I don't have a lot of information about Google's TPU, but keep in mind that so far, Google's product is designed for Google's own use. It's like the iPhone's CPU, which only works for the iPhone; you don't see Apple selling their iPhone CPUs to others. Therefore, for Google to sell their TPUs to others, they would have to provide the entire supporting ecosystem. It's kind of like how Nvidia doesn't just sell a GPU but an entire platform like Blackwell. Assuming—and that's a big assumption on my part—that you need more TPUs to beat Nvidia's GPU performance, the cost would increase to a point where it doesn't make sense to compete. Okay, so what about the model Google is actually pursuing: offering TPU access through its Google Cloud Platform? While this seems like a solution, it faces significant hurdles in competing directly with Nvidia's ecosystem. First, there's an inherent conflict of interest. Google's own AI teams (working on Gemini, Search, etc.) will always be the top priority for the TPU division, potentially leaving external customers with lower priority for support and the latest hardware. Second, and more critically, is the software challenge. Nvidia's dominance isn't just its hardware; it's the mature, universally adopted CUDA software platform. For Google to be truly competitive, it must not only develop a robust software stack and API for its TPUs but also convince developers to learn and adopt a new, proprietary system—a massive undertaking that requires continuous investment. While you can access TPUs in the cloud today, the 'in-house' nature of the technology creates friction. The TPU and its software were built for Google's specific needs first. Making them a generic, user-friendly product for any third party is a complex transformation. 
Therefore, the TPU's primary strategic value isn't necessarily to beat Nvidia in a chip-sales war, but to power Google's own industry-leading AI services like Gemini and create a unique, high-performance offering for its cloud customers. PS: Regarding Meta, their AI strategy seems unclear. They invested heavily in an in-house AI team with, arguably, less tangible output than their rivals. Their recent interest in exploring Google's TPU underscores this strategic confusion. It suggests an internal lack of a clear, unified direction, as adopting a competitor's specialized hardware like the TPU is a significant and complex pivot.
TPUs shine for big, steady transformer jobs you control end to end, but GPUs win on flexibility and time to ship. Most stacks are PyTorch/CUDA; JAX/XLA on TPU is fast but porting hurts, and custom kernels/MoE/vision still favor H100/L40S or MI300. v5e/v5p are great perf/watt for int8/bfloat16 dense matmuls, less so for mixed workloads. On-prem TPUs are rare; independents buy GPUs because drivers, support, and resale, while trading shops with tight regs sometimes get TPU pods via Google. Practical play: rent TPUs on GCP for batch training, keep inference on GPUs with TensorRT-LLM or vLLM. We use vLLM and Grafana, and DreamFactory just fronts Postgres as a REST API so models pull features without DB creds. Net: TPUs for fixed scale, GPUs for versatility.
I hope so. I want to talk to my wife, Hatsune Miku, locally on my GPU instead of paying for an API.
if you can get through API, [Polygon.io](http://Polygon.io) will provide it
>AI machine >LLM machine what? It's all software bro, what are you talking about? Are you building your own data-centre? What is this "AI machine", please tell me? Did you actually mean "I paid for openAI API access"?
Your comment misidentifies where the massive investment is actually going. The billions are not primarily funding small-time wrapper companies with nice pitch decks. Instead, the vast majority of capital is flowing into the foundational model developers themselves, such as OpenAI and Anthropic. This money is immediately earmarked to secure enormous amounts of high-end silicon and to fund the computationally immense process of model training. Building and running a truly cutting-edge large language model requires hundreds of millions of dollars just in GPUs and data center infrastructure, making the investment a deployment of capital into the fundamental, costly hardware required for the AI arms race. Furthermore, dismissing the value being produced as minimal misses the point about leverage and future productivity. The market is not just valuing current revenue, but the immense, systemic efficiency gains that this new utility layer promises. What looks like a simple API call is actually automating complex, costly cognitive tasks across major industries like law and finance. The investment is essentially a bet on a fundamental infrastructure shift, analogous to funding the railroads or laying fiber optic cable. While there will be busts, the core technological advancement holds a promise of future economic value that may well justify, or even eclipse, the high current valuations.
Sure, the CEO mentioned on the earnings call that while they could prioritize sales growth, they plan on onboarding partners in a slow(er) and methodical manner to mitigate risk from onboarding many partners who may not know how to use PGY's platform. Additionally, there is growth in product development as a revenue source, rather than sales being just API calls to its loan-determination model. Not familiar with a TTM PE lower than the Forward PE ratio, but thanks for calling it out.
Partial answer: Corp IT software license agreements from big tech companies (like Mag7) will have big incentives to get their big corp customers to use their LLMs. Those companies using the LLM's will then be charged for ingress and egress just like the cloud services only it will be input and output tokens based on API usage. That's where a lot of revenue will come from. Is it enough to pay for the bubble? We shall see!
yea if you're renting them as a service, but that's not how these megacorps are consuming them. they're all part of a unified product that's accessible via an API. of course you can rent GPU time, but that's a relatively small part of the market
AI creates a huge amount of value but is difficult to make money off of. Any sufficiently large company that could offset the data center investment will just train or host models privately. Anyone building on an AI API runs a giant risk of simply being updated into irrelevance. The API provider is incentivised to take any wrapper concept and turn it into a first-party offering. If anything, this is a data center/GPU bubble. The big builders are betting on people needing data centers into the mid to long term. I have serious doubts that all this compute will be necessary after the excitement dies down. LLMs will stay, but we won't need the huge data centers to run them, so they will lose out on that aspect.
Too tired to give a proper answer, but for example, when automating shipping, a successful shipment order through an API like Shipmondo with Next.js will give me the message "payment succeeded, ordered shipping, here is your shipping object data:" followed by the shipping object data. So if my backend receives this message from an external API, I can be certain that I have indeed ordered shipping using their API. And if there ever are any errors, I make sure to split my code up into many tiny independent pieces with good naming and error logs, so often I can isolate where mistakes happen and only focus on a tiny piece of code for my troubleshooting. I am a programmer/developer after all, so I am able to fix things when AI fails. Some industries are able to absorb mistakes and repair errors later with no issues, like the software I write; most functions are not critical to human survival anyway, so I am fine with some mistakes here and there. I'll find them and fix them, and the entire process is waaaaaaaaay quicker with AI anyways
I asked the AI on ORATS. I am waiting to get an answer from a live human, but I believe that they do not have what I am looking for based on the AI response below:

"I am looking to see if I can obtain SPX Net Gamma History for the last 5 years to date."

OTTO: ORATS does not currently offer Net Gamma calculations or data. Our platform focuses on implied volatility, Greeks at the individual option level, and volatility surface analytics, but we don't aggregate position-level gamma exposure across the market. For the Greeks we do provide (delta, gamma, theta, vega, rho, phi), these are calculated at the strike level for individual options, available through:

- Historical Data: Near end-of-day data going back to 2007 includes Greeks for all strikes
- API Access: Strikes endpoints provide Greeks for current and historical dates
- Intraday Data: 1-minute granularity from August 2020 includes Greeks

If you're looking to calculate net gamma yourself, you could use our strike-level gamma data combined with open interest, though you'd need to make assumptions about market positioning since we don't track actual dealer or market maker positions. For questions about custom data solutions or if you need help with what we do offer, contact support@orats.com.
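The DIY calculation that response hints at can be sketched in a few lines. To be clear, this is my own rough illustration, not ORATS's methodology: the field names (`type`, `gamma`, `open_interest`) are invented, and the sign convention (dealers long calls, short puts) is exactly the kind of positioning assumption the response warns you'd have to make:

```python
CONTRACT_MULTIPLIER = 100  # standard US equity/index option multiplier

def net_gamma(strikes, spot):
    """Naive dealer net gamma in share-equivalents per 1-point move.

    Assumes dealers are long every call and short every put, which is a
    crude positioning guess, since actual dealer inventory isn't observable.
    """
    total = 0.0
    for s in strikes:
        sign = 1 if s["type"] == "call" else -1
        total += sign * s["gamma"] * s["open_interest"]
    return total * CONTRACT_MULTIPLIER * spot

# Made-up strike-level rows standing in for an options-chain snapshot.
rows = [
    {"type": "call", "gamma": 0.002, "open_interest": 10_000},
    {"type": "put",  "gamma": 0.003, "open_interest": 8_000},
]
print(net_gamma(rows, spot=5000.0))  # negative here: put gamma dominates
```

Run daily over historical chains, that loop would produce the kind of net gamma history being asked about, with all the caveats about the positioning assumption baked in.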
The term "graphics processing unit" is a holdover from an era when the only mainstream practical use of specialized matrix-operation chips was graphics/rendering. Practically speaking, NVIDIA's datacenter "GPUs" do the same thing as Google's "TPUs". From a hardware perspective, it would be pretty trivial for Google/Broadcom to repackage their "TPU" technology as graphics cards. However, it's an expensive pain in the ass to build the APIs & translation layers to make new matrix-operation architectures compatible with the graphics engines that most graphics rendering software uses. NVIDIA & AMD have HUGE first-to-market advantages as far as software support in graphics processing is concerned. At the same time, graphics processing has become a low-profit industry. All told, there is no incentive for Google/Broadcom to sell "GPUs" at the moment. NVIDIA has long had a similar API/software advantage in the machine learning/AI space: CUDA. The ubiquity of CUDA programming in the machine learning space leading up to the launch of LLMs gave NVIDIA a HUGE advantage, and ultimately made NVIDIA the leader in "AI chips". For a long time, Google's machine learning development stack was more or less dependent upon the CUDA API and thus dependent upon NVIDIA chips. Now Google and Broadcom have developed their own datacenter chips that are optimized for TensorFlow without the need for NVIDIA. The fact that performance is in line with NVIDIA's comparable products inherently poses an existential threat to NVIDIA. Because these chips enable the use of TensorFlow without needing NVIDIA chips, they will be positioned to end NVIDIA's datacenter GPU/TPU/matrix-processing monopoly. So they do pose an existential threat to NVIDIA. For now, it makes the most sense for Google to keep all of its AI development in-house: they want to win the AI race for themselves. But at some point, it will obviously make sense for Google & Broadcom to bring their "TPUs" to market.
As I mentioned above, they are clearly positioned to end NVIDIA's datacenter matrix processing monopoly.
What a crazy week bros. Just got some investors to sign 40 billion dollar deal with my new AGI company Looking forward to flying to India on business next week to beat my offshore employees until they learn not to say "sir" and "needful" when our platform receives API calls. Calls at open 🚀 👨🚀 🚀 👨🚀 🚀
The real tell for Nebius is whether they can keep GPU utilization above ~85% while locking in cheap, long-duration power, because that combo drives durable cost per GPU-hour and pricing power. What I’d watch each quarter:

- committed vs. on-demand mix (aim >70% committed)
- backlog and weighted avg contract length
- take-or-pay and cancellation fees, SLA credits paid
- average job queue time and preemption rates
- delivered cost per GPU-hour, time-to-rack for new capacity, capex per MW
- supply diversification (NVIDIA vs AMD)

Also track Token Factory adoption as a % of revenue and usage metrics (SDK/API calls, governance features enabled) to test the software moat. Hyperscalers can carve out dedicated AI clusters (think UltraClusters and private capacity reservations), so Nebius’ edge has to show up as better delivered cost, faster time-to-serve, and steadier SLAs. Don’t ignore power PPAs and siting risk; power is the real constraint. For diligence dashboards, I’ve used Snowflake for cost/usage tables, Datadog for uptime, and DreamFactory to turn internal DBs into quick APIs. If Nebius sustains high utilization and cheap power under multi-year deals, the edge is real; if not, hyperscalers squeeze them.
Google has the infrastructure, the data, google workspace and a means of monetising consumer LLMs with ads. OpenAI had/has the edge on technology, market share both for consumer and API use cases. Many orgs are building on OpenAI. Longer term the future doesn’t look great for OpenAI as the path to revenue is much weaker. Google will dominate once OpenAI need to start making a profit.
Could it? Unless California or the EU decides to force OS developers to open up their digital assistant APIs and allow competition, I don't see how OpenAI can beat the companies that develop the operating systems AI needs to integrate with in the long run, even if they make models that are better. I'd even bet on Apple over them. OpenAI's best bet is probably to get bought out by Microsoft at some point and merged into the Copilot team.
In case you're interested, it is possible to explore income statements using the data provider Alpha Vantage, which offers free API access.
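A minimal sketch of what that looks like. The `INCOME_STATEMENT` function and the `annualReports`/`fiscalDateEnding`/`totalRevenue` fields match the Alpha Vantage docs as I recall them, but verify against the current docs; the sample payload below is trimmed and its values are made up:

```python
import json
from urllib.parse import urlencode

def income_statement_url(symbol, api_key):
    """Build the Alpha Vantage INCOME_STATEMENT query URL."""
    params = {"function": "INCOME_STATEMENT", "symbol": symbol, "apikey": api_key}
    return "https://www.alphavantage.co/query?" + urlencode(params)

def annual_revenue(payload):
    """Map fiscal-year-end date -> totalRevenue from the response body."""
    return {r["fiscalDateEnding"]: int(r["totalRevenue"])
            for r in payload.get("annualReports", [])}

# Trimmed sample shaped like the real response (numbers invented).
sample = json.loads(
    '{"symbol": "IBM", "annualReports":'
    ' [{"fiscalDateEnding": "2023-12-31", "totalRevenue": "61860000000"}]}'
)
print(income_statement_url("IBM", "demo"))
print(annual_revenue(sample))
```

In practice you'd fetch the URL with `urllib.request` or `requests` and feed the decoded JSON to `annual_revenue`; pulling the same field for several tickers gives you the cross-company comparison.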
I decided to connect my Lovesense Dildo to the API feed from Tradeview. Now, every green candle on the 1min, I get a 2-second vibration, and every big green candle I get a 10-second Ultra-love Vibration. Let me tell you, after having this set up this week, I've never had so many orgasms in a single day. I love this stock market.
I also recently started my journey with investing and trading. I opened accounts with many brokers and always ran into some kind of problem — either prices, lack of API access, or limitations in placing OCO orders, or the absence of pre-market and after-hours trading. In the end, I chose Schwab as my broker — for day trading U.S. stocks, while for long-term investments and access to the European market, I went with Trading212.
It is not easy to create an entire marketing platform that works extremely well. Meta has been improving their ad channels for decades. Same with Google. Also, how much do the ads need to cost to justify the cost of ChatGPT queries? To help with their operational/development costs too? Also, their API revenue is operating at a massive loss. How can they monetize that?
this is the whole point of vertical integration. if executed well, the big cloud providers will have a stronger narrative than the "call my API" company. because if you remove that, it's a chatbot. just my opinion though, i've been wrong many times before
It's still a great tool and won't go anywhere. It should still be understood that the vast majority of AI implementations aren't profitable, and that's before we reach the point where the AI companies start trying to take profits. Once OpenAI starts profit-taking instead of writing off billions in losses to stoke the hysteria, I'd expect that AI profitability rate to move dangerously close to zero. People in the market are launching money at AI based on a sales pitch while fundamentally not understanding what the technology is and what the limitations are. I have a computer engineering degree and know how this works under the hood. Two things become very obvious when you have a real tech background: (1) this doesn't scale forever and (2) the hallucination issue is very likely unsolvable. Under the extremely likely circumstances that we can't solve hallucinations, do you think a technology that you can *never* fully trust is worth this much? Does it also make sense to pay some multiple of what we're paying now for API tokens once the VC money dries up? I would think not in most cases...
Exactly. The whole hypothesis has been that there are insignificant/insufficient uses for this tech: not enough net earnings to be made now or in the future to justify the expenses. So the chain goes: precarious LLM-based startups cobbling together expensive/useless stuff > OpenAI > large tech companies > Nvidia. Nvidia is literally at the end of the queue, able to sell hardware while the ones who are supposed to show utility from this tech and heavy investment come up empty. Who said we'll jump straight to a hardware sales slowdown? Maybe people here did, but then they are not articulating the bear thesis correctly. Look for the private VC investments to start dropping in valuations, cause they are the weakest; then OpenAI loses a bunch of its API calls and shrinks in revenue, maybe goes through a down round; and now people start asking questions about utility, about pausing data center build-outs and pausing Nvidia HW purchases. That literally happens towards the end.
There are other forms being developed; LLMs will be the least exciting application when we look back at this era. It is simply the most consumer-ready today, and the hype it created was enough to launch a massive capex boom. If LLMs were the be-all end-all application, there would not be such wide access to the core model APIs. Basically, Microsoft doesn't care about competing with all these rinky-dink chatbot applications that are being sold as SaaS, which are just OpenAI/Gemini/Llama with some GUI on top and maybe some RAG layer.
Your very statement is fallacious. There's no single "Reddit sentiment". You would need backend API access and big data tools to process an insane amount of content that is refreshed literally every day
they have an API if that's what you're asking
well, this story has been going on for Meta as long as they have been a public company. First it was desktop to mobile (everybody died of fear -- hint: Meta mastered it), then it was Facebook is old, then Instagram is old vs. TikTok, then fuck there is Snap and it will eat up Meta, then Apple restricted its API and data access -- Meta is fucked and will never recover (hint: share price up 6x since then), then Metaverse overspending, then because it's fun "Facebook is dead", then Instagram vs. TikTok once more, now AI overspending!!! Yeah whatever: fact is, after all these down-talking phases, Meta crushed everyone's expectations. Zuck might not win a popularity prize but he damn sure deserves a prize for creating the most impressive cash-printing cow on the planet. And spoiler, he will not let anyone take his business coz that dude is competitive and ruthless as fu... as we all could see in the past. So go ahead and sell, and many did in 2022 coz "dooms day".
Have you looked into what API permissions they're actually requesting? Like theoretically keeping money with your broker is safer but if the API permissions allow withdrawals or transfers then it's not really that different from sending them money directly.
I mean you can be a hater of AI all you want, but you’re sticking your head in the sand if you want to pretend AI isn’t economically valuable. ChatGPT, Anthropic, Cursor, etc are some of the fastest growing companies ever, full stop. And like I said: Anthropic makes money on every API call; they are not giving anything away free. Other companies haven’t reported the same data, but Anthropic has the highest prices of any model, so I would be very surprised if margins were negative for ChatGPT or Gemini on their API businesses
The broker custody thing is huge honestly, if any platform asks you to send money directly to them that's an immediate red flag but API integration with established brokers is the only architecture that makes sense from a trust perspective.
Well, in that case...

> I can tell you are not an engineer [...] Non-Technical people rarely understand [...]

I have an MSc in Data Science and I've worked ~3 years as an SRE and ~3 years as an MLE, both at top companies. Btw, your example being "Django" and not some ML-related task makes it clear *you* aren't working in the field. Your comment ignored most of what I said, created a strawman ("LLM doesn't allow an intern to perform [senior work]"), and went off of that, rambling about LLMs and vibe-coding. I didn't say interns will perform senior work, nor did I say it was for coding. I gave an example of how a specific computer vision problem that was insanely hard 10 years ago with just traditional CV and barely-working ConvNets is now almost trivial with off-the-shelf VLMs. Here, read it again:

> Seventh: Usefulness - ease of use. LLMs (and related research) really redefined what's possible, and what's easy. **Let's say you wanted to make an app that counted how many unique people visited your shop per day**. Just 5 years ago you'd need a highly capable data scientist working on this for weeks or months. Today your cheap junior developer from Lidl can call an LLM API and it will likely work okay.

Your other point about build vs ongoing costs/maintenance is valid, but is very case-dependent and probably not very meaningful for this example. It doesn't take the same amount of maintenance to keep a simple static site up as it takes some huge system that depends on 50 other services. Similarly, a simple CV/VLM-based app with one specific and narrow goal may be able to run perfectly fine without any fixes for years; retraining isn't as necessary as it used to be. Even if it is, assuming the initial work is correctly done and a framework is in place, retraining, monitoring, alerting, etc., become almost trivial.
I know because we have production models that need near-zero maintenance deployed and running fine, and we also have training pipelines set up with automatic ingestion of new data, retraining, publishing, and all the other goodies. Maybe you just worked at B-tier teams/companies that are simply yoloing their AI/ML projects?
Seeing some sources claim the $100M is annual. But you’re still right lol. They’d need like 9,995-10,000 more of these deals to breakeven by 2030 if the $1T spend is accurate. Not looking too good, because I doubt there are 10,000 companies with the capability to pay $100M for a chatbot API
I admitted nothing and you’re calling me stupid?!??! Here’s what I want you to do. Go look in a mirror and slap your arrogant little fuck face hard. I don’t know who shit in your cornflakes but it wasn’t me, so fuck all the way off. And here’s a little history lesson: 3dfx created the first mainstream *3D-only* graphics card, the Voodoo, with its Glide API for 3D graphics processing. It still needed something else to process 2D graphics. NVidia’s RIVA line incorporated 2D AND 3D processing into a single chip, and NVidia later branded its chips ‘GPUs’ starting with the GeForce 256 (the CUDA API came years after that). Who is stupid now? Hint: It’s you, not me.
Nope, literally my own desktop app and then applied my set of option criteria as logic for the scanner. I use Tradier API for a real-time data feed. I tried the gamut of option services, from Option Samurai to Market Chameleon and others, but none had the flexibility or combinations I wanted.
surely they can build a simple downloadable bit of software which connects to the internet and gives you basic features. Third-party API features might stop working if those APIs change, but... maybe open source it or something?
This is the part everyone's missing when they say oh, you're going to go broke. I saw one number that was $26 billion projected in debt for 2026. But 800 million weekly users. So $32 a year per person; then you can divide that by 12 and pad it a little bit. Force the power users to pay up via API subscriptions and the like. I don't know why YouTube keeps saying they're broke, they're broke. Like, do the math.
Having strats that perform in different market regimes is really key. Being creative is a must with options; there are so many opportunities to put on risk. Vol isn't as much a factor with options IMO, compared to, say, equities or futures. Execution is another story: it can become extremely frustrating when you aren't getting good fills. This is why using an API to execute is helpful.

-M