Reddit Posts
Download a dataset of stock prices for X tickers for yesterday?
The tech market brings important development opportunities; AIGC is firmly No. 1 in the current technology field
The AIGC market brings important development opportunities as artificial intelligence technology keeps developing
Avricore Health - AVCR.V making waves in Pharmacy Point of Care Testing! CEO interview this evening as well.
OTC: KWIK Shareholder Letter, January 3, 2024
The commercialization of multimodal models is emerging, Gemini now appears to exceed ChatGPT
Why Microsoft's gross margins are going brrr (up 1.89% QoQ).
Google's AI project "Gemini" shipped, and so far it looks better than GPT-4
US Broker Recommendation with a market that allows both longs/shorts
A Little DD on FobiAI, which harnesses the power of AI and data intelligence, enabling businesses to digitally transform
Best API for grabbing historical financial statement data to compare across companies.
Seeking Free Advance/Decline, NH/NL Data - Python API?
Delving Deeper into Benzinga Pro: Does the Subscription Include Full API Access?
Qples by Fobi Announces 77% Sales Growth YoY with Increased Momentum From Media Solutions, AI (8112) Coupons, & New API Integration
Aduro Clean Technologies Inc. Research Update
Option Chain REST APIs w/ Greeks and Beta Weighting
$VERS Upcoming Webinar: Introduction and Demonstration of Genius
Are there pre-built bull/bear systems for 5-10m period QQQ / SPY day trades?
Short Squeeze is Reopened. Play Nice.
Created options trading bot with Interactive Brokers API
Leafly Announces New API for Order Integration($LFLY)
Is Unity going to Zero? - Why they just killed their business model.
Looking for affordable API to fetch specific historical stock market data
Where do sites like Unusual Whales scrape their data from?
Twilio Q2 2023: A Mixed Bag with Strong Revenue Growth Amid Stock Price Challenges
[DIY Filing Alerts] Part 3 of 3: Building the Script and Automating Your Alerts
This prized $PGY doesn't need lipstick (an amalgamation of the DD's)
API or Dataset that shows intraday price movement for Options Bid/Ask
[Newbie] Bought Microsoft shares at 250, mainly as I see value in ChatGPT. I think I'll hold for at least 6 months, but I'd like your thoughts.
Crude Oil Soars Near YTD Highs On Largest Single-Week Crude Inventory Crash In Years
I found this trading tool that's just scraping all of our comments and running them through ChatGPT to get our sentiment on different stocks. Isn't this a violation of Reddit's new API rules?
I’m Building a Free Fundamental Stock Data API You Can Use for Projects and Analysis
Fundamental Stock Data for Your Projects and Analysis
Meta, Microsoft and Amazon team up on maps project to crack Apple-Google duopoly
Pictures say it all. Robinhood is shady AF.
URGENT - Audit Your Transactions: Broker Alters Orders without Permission
My AI momentum trading journey just started. Dumping $3k into an automated trading strategy guided by ChatGPT. Am I gonna make it
The AI trading journey begins. Throwing $3k into automated trading strategies. Will I eat a bag of dicks? Roast me if you must
I made a free & unique spreadsheet that removes stock prices to help you invest like Warren Buffett (V2)
To recalculate historical options data from CBOE, to find IVs at moment of trades, what int rate?
WiMi Hologram Cloud Proposes A New Lightweight Decentralized Application Technical Solution Based on IPFS
$SSTK Shutterstock - OpenAI ChatGPT partnership - Images, Photos, & Videos
Is there really no better way to track open + closed positions without multiple apps?
List of Platforms (Not Brokers) for advanced option trading
Utopia P2P is a great application that needs NO KYC to safeguard your data!
Utopia P2P supports API access and ChatGPT
Stepping Ahead with the Future of Digital Assets
An Unexpected Ally in the Crypto Battlefield
Utopia P2P now has an airdrop for all Utopians
Microsoft’s stock hits record after executives predict $10 billion in annual A.I. revenue
Reddit IPO - A Critical Examination of Reddit's Business Model and User Approach
Reddit stands by controversial API changes as situation worsens
Mentions
The Max 20x plan's limits really aren't enough; you need to pay more at API rates.
Google also paid billions, admittedly less, for Varun and the Windsurf team, only for the product to rapidly get objectively better almost immediately after acquisition. Meta acquired Manus recently, who are a genuinely top-class application team, and I'd expect that one to turn out pretty well. They're also lucky that LeCun chose to leave and can hopefully spend more resources on LLMs now. They have a lot to do in AI still and don't even have an API properly set up yet, but they've committed pretty strongly to fixing that. I'm not sure how Wang will turn out, but I trust his taste and youth over a combo of Zuck and LeCun, so I don't see why the appointment gets such negativity. He's definitely a smart guy with youth on his side, even if he isn't an expert on AI.
You don't get it. They could disable the free plan completely, but that would only make your customers go to competitors who would offer one. This has nothing to do with the API, which isn't free. I'm so fucking tired of OpenAI haters / "AI is a bubble" worshippers... 99% of you are helplessly clueless, forming circle jerks while the tech leaders and the money tell a completely different story.
"in practice" every options market maker does this daily. They take it a step further and adjust based on entire portfolio. They also hedge out some other greeks besides delta, depending on their risk parameters. Besides stocks they can and do also use options to neutralize some greeks. In practice it's done with software (algos) and they have super fast order systems to handle the volume and speed. -- For retail, you could do it manually or using a bot using your brokers API. Pick something low gamma and decide how often you want to hedge. -- In regards to making money. You have to have an edge somewhere. -- In regards to risk, you have pin risk and unwinding risk and slippage. * Pin risk, just don't let it go to expiration. * Unwinding risk, if your position is closed (by you or early exercise) then you are sitting with a bunch of stock deltas. You need to unwind those, hopefully before market moves against you. * Slippage. If you are giving up pennys or nickles in slippage on orders or have fees or commissions, then by acting of hedging, you are giving up what edge you had. You'll just get nickeled and dimed
It's probably worse. This technology seems to have fundamental problems that will limit its usefulness forever, and they're trying really hard to hide the O&M expenses for companies that need to justify that $1.5T in capex is worth it based on $10B of revenue. Even Google has carefully obfuscated their O&M costs for Gemini and specifically did not report profits generated by their best-in-class LLM. They're selling API access to their best-in-class LLM but not bragging about the money they're making on it. Probably because it's losing money at a rapid pace, even in a world where they paused all capex today.
Main thing: even if LLMs grab ad dollars, this is a budget reshuffle over years, not some instant kill shot to Meta or Google. The constraint isn’t “users” for OpenAI, it’s inventory quality, intent, and targeting data. Google owns high-intent search, Meta owns crazy-deep persona graphs. ChatGPT is still mostly exploratory, low-structure queries where it’s hard to prove ROAS at scale. If anything, the early winners are whoever can fuse assistant UX with classic performance plumbing: first-party data, conversions API, attribution tools like GA4, Triple Whale, etc. Pulse plus Reddit Ads is already a good example of niche intent + community context actually moving the needle without mega-scale. I’d watch for LLMs becoming more like “ad routers” across existing networks: they answer, then kick you to Google Shopping, Amazon, or Meta/Instagram-style placements. The platform that nails that last conversion hop keeps the lion’s share of value, which is why I still think Meta and Google stay core holdings unless their data or distribution moats crack.
Yes, there are free platforms that offer long-term financial data and allow downloads, including public filing systems that host 10-plus years of financial statements and some open financial APIs that let you export income statements, balance sheets, cash flows, and key metrics without a paid subscription. These free options may require registration or API usage but provide substantial historical data at no cost.
I think they are doing the right thing. According to Gemini they have amazing tech.

1. The "15 Million QPS" Challenge. TTD processes roughly 15 million queries per second (QPS). To put that in perspective, Google Search handles roughly 100,000 per second.
* Latency: every single bid request must be ingested, analyzed, and responded to in under 100 milliseconds.
* Edge computing: they use a hybrid model of AWS (for global acceleration) and their own colocation data centers. By using AWS Global Accelerator, they route traffic over the AWS private network to keep latency low across different geographic regions.
* The database: they rely heavily on Aerospike, a NoSQL database designed for petabyte-scale data with sub-millisecond lookups. It's what allows them to remember a user's frequency (how many times they've seen an ad) across millions of simultaneous auctions.

2. Kokai: The AI Upgrade. While their older AI engine (Koa) was great for basic automation, the new Kokai platform is a "deep learning" upgrade.
* Audience scoring: instead of just bidding on a "car buyer," Kokai assigns a unique relevance score to every single impression opportunity based on "Seeds" (first-party data provided by the advertiser).
* Distributed AI: they don't just run one big model; they distribute "intelligence" to the edge. This means the bidding server at the data center can make a "smart" decision based on local patterns without waiting for a central brain to chime in.

3. UID2 (Unified ID 2.0) Architecture. This is their technical answer to the death of the cookie. It's an open-source framework, not just a product.
* Hashing & salting: it takes a user's PII (like an email) and puts it through a double-blind encryption process.
* The token flow: (1) a user logs into a site; (2) the email is sent to a UID2 Operator; (3) an encrypted UID2 Token is generated; (4) this token is passed into the "bid stream." Only authorized DSPs (like The Trade Desk) have the decryption keys to see the underlying ID and match it to an audience segment.
* Rotating keys: to keep it secure, the encryption keys rotate frequently, making the tokens useless to anyone who tries to intercept them without permission.

4. OpenPath: Cutting Out the Middleman. Technically, TTD is a Demand-Side Platform (DSP), which usually buys from a Supply-Side Platform (SSP). However, with OpenPath, they've built direct API integrations into publisher ad servers (like those of The Washington Post or Reuters).
* Efficiency: by removing the SSP "hop," they reduce the physical distance data has to travel, which lowers latency and eliminates the "tech tax" (fees) that SSPs usually take.
* Protocol: they use OpenRTB (Real-Time Bidding) standards but implement them via direct pipes, essentially making the supply chain "flatter."

5. Data Analytics at Scale. For long-term storage and reporting, they use Vertica (on Amazon S3). This allows them to store petabytes of "cold" data but still perform massive SQL-like queries for advertisers who want to see a report on exactly where every cent of their $10M budget went.
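As a rough Python illustration of that hashing step (illustrative only, not actual UID2 operator code; real UID2 layers salting, encryption, and rotating keys on top of the hash):

```python
# Sketch of normalizing and hashing an email the way a UID2-style flow
# begins. This is a simplified stand-in, not the UID2 reference code.
import base64
import hashlib

def normalize_email(email: str) -> str:
    # Canonicalize so the same user always hashes to the same value.
    return email.strip().lower()

def hash_identifier(email: str) -> str:
    digest = hashlib.sha256(normalize_email(email).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

print(hash_identifier("  Jane.Doe@example.com "))
# The hashed value stands in for raw PII in the bid stream; only parties
# holding the decryption keys can resolve the full token any further.
```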
Funny timing - I actually built exactly this. https://stockalert.pro does AI-powered alerts for price movements, technicals, volume spikes, etc. Honest feedback tho: a one-time purchase won't work for this. The ongoing costs for real-time data feeds and monitoring infrastructure are brutal. Everyone thinks "just alerts" is simple until they see the API bills lol. Also the "statistically significant rise" part is tricky - by the time something is statistically significant, the move is usually already over. The edge is in catching it early, which means more false positives. Happy to chat if you want to compare notes before you build.
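For what it's worth, a bare-bones version of that "statistically significant rise" check is just a rolling z-score on returns. The window and threshold below are illustrative guesses, not what stockalert.pro actually uses:

```python
import statistics

def is_significant_move(prices: list[float], window: int = 50,
                        z_threshold: float = 3.0) -> bool:
    """Flag when the latest bar's return sits z_threshold standard
    deviations above its rolling history. Parameters are illustrative."""
    if len(prices) < window + 2:
        return False  # not enough history to estimate the distribution
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    recent = returns[-1]
    history = returns[-(window + 1):-1]
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    if sigma == 0:
        return False
    # Needing `window` bars of history is exactly why the signal lags:
    # by the time the z-score clears the bar, much of the move is done.
    return (recent - mu) / sigma > z_threshold
```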
Anthropic have a very different business model focused on business use, with a large amount of their revenue generated via API token usage. They haven't scaled as fast as OpenAI, nor do they seek to be the "Google" of AI that ChatGPT has become. While that gives impressive user figures, it means a large amount of your users just cost you money.
The point about inference constraints is the smoking gun that bears are ignoring. I’m seeing this daily, try building anything complex with Claude right now during peak hours and watch the latency spikes. You don’t get API instability from 'fake demand.'
Here's my take. MCP allows agents to work on your behalf. So systems like Jira, which charges by seat, don't need end-user licenses except for those actually assigned the work. Reporters and PMs can do all their task and project management through an agent. The only reason to have a license is to be assigned a task, and even that can be proxied through a custom field. Which means Jira is basically just a data schema with a cool API that maintains relationships.
It's my own mobile app that I built. It uses the APIs from Finnhub and Alpha Vantage.
I'm on IBKR and the price only moves 6 1/2 hours a day (stock market hours), using the mobile app and the python API. I must not be signed up for the right data feed. Maybe I shouldn't fix that.
That’s true, it’s just one source. Will add more API endpoints with different sources over time 💪
Oh nice, just yesterday I added a feature to <https://adanos.org/reddit-stock-sentiment> that explains why a stock is trending. You just click on any ticker in the table and it shows you the actual posts driving the sentiment. There's also an explain API endpoint if you want to integrate it programmatically.
Google doesn't charge for Gemini licenses... you pay as you go via the API with no upfront costs. If Apple wanted to implement Gemini in iOS, they wouldn't need to pay $1 billion upfront.

>Do you have any background with software?

Yes, I've been involved in discussions with vendors. Apple negotiated a bulk deal; rather than paying the retail API prices Google charges, Apple is paying a fixed amount. Apple gets access to an LLM within their OS for dirt cheap, while Google gets to report a big $1 billion contract, which pumps their share price.
re: oil - apparently some report came out from the U.S. Energy Information Administration (EIA) and the American Petroleum Institute (API) showing significant increases in U.S. crude oil and gas, killing the supply concerns. The call is coming from inside the house.
Most likely no invasion. But more than likely a limited strike package on IRGC targets and other gov/nuclear targets. If the regime is overthrown expect oil prices to go down. They can flood the market not because they’re holding reserves but can actually trade with the world again instead of being sanctioned. And unlike Venezuela a lot of their oil wells are operable. As well as the fact that Iranian oil is typically a medium-to-heavy flowable liquid (API 29–34), whereas Venezuelan oil is predominantly an extra-heavy, tar-like bitumen (API 8–16) that is far more viscous and difficult to refine. It’s probably priced in but Iran being able to trade at market rate again will affect Russia and China and the oil market more broadly. As well as to finally allow for capital inflow.
> This is hugely bullish. The more Apple uses Gemini, the more money Google makes.
>
> Plus it will be with really nice margins, as ONLY Google has the TPUs.
>
> Realize Apple has only licensed the technology for the billion. There is also the actual running of the model. I would expect Apple hopes to do more of it on device at some point. But even if that happened, they would have already paid billions to Google.

This is incorrect. The $1 billion deal is a license for unlimited usage of the Gemini API for their Siri product.
Oh very interesting... what indicators do you need to do EW Analysis? I've been paying for an API (chart-img) to automate getting screenshots from TradingView then passing that via API into gemini. If our site ever takes off or reaches ramen profitability it seems like it wouldn't be too difficult to add EW analysis, although that would effectively double our LLM costs to run a separate EW analysis on top of our standard TA...
The "best LLM" changes every 3 months, and heavily depends on your use case. Claude has been the best for technical work, and ChatGPT has been the best for creative tasks. But lately Anthropic has a lot of speed and reliability issues, and OpenAI has been more focused on cost optimization rather than pushing the performance envelope. So yeah, they've allowed Gemini to creep up. I think the Apple deal is a huge thing for Google, but I also think this subreddit is over-hyping it just because everyone loves their iPhones so much and gets irrational about Apple. Monetization from AI is not going to come from people paying $20 month for web chat. It's going to come from API usage. And OpenAI is just dominant there, with little real competition. Their "gpt-5-nano" model is fully capable for any RAG or categorization problem, and even light reasoning, and OpenAI has the pricing down to where it's on-par with the shittier Chinese models now. Meanwhile, Google Cloud is still in a distant third place behind AWS and Microsoft Azure, because Google is a crappy tech vendor beyond the consumer space and businesses don't want to work with them. There's not going to be any "THE winner" in the AI race.
> I'm currently paying for Gemini Out of curiosity, why? I'm a heavy user, and have never sniffed any kind of rate limit or daily cap so far. As far as I can tell, the only use case for paid Gemini (chat, not API) is a company wanting some kind of data privacy guarantee.
My company spends about $50k/month in token API usage and increasing rapidly every month.
Oh, I am aware. I am planning on eventually going straight to Nasdaq and OPRA for the data and perhaps becoming an API provider similar to the data providers I use.
By list, an API is fine as well
Imagine a world where every app dev can monetize AI without having to incur the capital costs or deep expertise in it. Your interface is Foundation Models, either via API or via UI through Apple Intelligence, and Apple Intelligence can send specific tasking to specific models and allow the inference to run *in* iCloud, basically, so you can run models on your data and mix them with world-knowledge data. "Hey Siri, did that restaurant I went to with Bob a few years ago close? They had a pretty good steak, is that still on the menu?" And have that be baked in as a first-class citizen on everything from your laptop to your phone to your TV. People are focused on the models purely from a GenAI perspective and judging Apple as a failure without thinking about the user experience of AI and whether GenAI is even a jack of all trades. Whereas Apple is focused on the experience first, with pluggable models. Imagine an AI marketplace on the App Store where you can pick from best-in-class niche models from 12 Labs, Writer, etc. The best models in the world only go so far if the user experience is shit. There's only so much you can do with a chat interface that has to upload/download files. A model that can govern other models and route queries in a pluggable fashion, that happens to run next to the servers that hold all your data, and that is integrated into the homomorphic encryption ecosystem... that unlocks new things. Entire new market segments of micro/macro niche models that you can develop with Apple taking care of the serving, inference, privacy, etc. If you believe that is more valuable (and I do), then Apple will profit from the generations of models to come and the developer revenue they can capture. Apple has a hardware advantage no one else has in terms of the energy footprint of their own chips and their ability to serve models on them. Hint: macOS 26.2 added RDMA support. As in, you can cluster Macs via hundreds of gigabits of Thunderbolt shared memory and get into multi-terabyte context windows. :)
This is big, but not for the reasons many folks here think. From my understanding, the Gemini model is powering Siri, and will be hosted by Apple as part of their offering within their walled garden. This appears to be the first big move where Gemini is used as more than just a model or an OK assistant within Google Workspace, becoming the foundational infrastructure of a product. In the future, the money for these models isn't going to be made on subscriptions for consumers or end users who strictly use chat functionality, but as either usage-based billing for API calls or licensing of the models to then self-host and build products with. Right now, Anthropic owns this vertical with another small slice held by OpenAI (e.g. I believe it is Claude that powers Cursor, Replit, Lovable, Gamma, and many others in the Forbes AI top 50). We are barely at the tip of the iceberg for AI-integrated tools.
The Gemini API for enterprise (Vertex AI) is incredibly unstable. Obviously Apple will get some dedicated servers, but I wonder if they'll manage to scale them accordingly, since they didn't manage to get it done last year for what is essentially their flagship product on Google Cloud.
Gemini definitely tops every benchmark. However, in my experience, Gemini is BY FAR the worst of the top models. It’s way too verbose, it often doesn’t read uploaded files automatically, it gets confused if you start talking about new things in a single conversation. Its API costs are just cheap. That’s why they chose it
I run Pro, is there any point in Ultra if I'm not an API nerd
They are basically gonna be supporting the onshoring of API, then the manufacturing and the logistics... it's huge, just gonna take some time. If you're gonna invest here, then forget about it for the next 2 years.
Gonna be honest - when everyone on Reddit is asking for the "next 10x", that's usually a sign we're closer to a top than a bottom lol. ASTS and RKLB ran because they were unknown; now they're in every thread. That's actually why I started tracking Reddit sentiment for stocks - by the time something is being hyped everywhere, most of the easy gains are gone. The real alpha is finding stuff before it gets popular here. Built an API for it at <https://adanos.org/reddit-stock-sentiment> if anyone's curious. My actual "high conviction" move is staying diversified and not chasing memes. Boring, but it works.
Which API do your data calls go to?
Amazon is going to start aggressively replacing humans with robots in their logistics anywhere they can, because they've never shown any concern about workers and they're not gonna start now. I think that's going to lead to them showing incredible savings beyond what people are estimating. Google's big play is obviously Gemini, but that's largely just to keep their head above water vs. everyone else, not necessarily to improve their own internal functioning. So maybe they start seeing some revenue from people using that in Workspace and paying via API? But I honestly don't think we start seeing real AI society destruction before next year. BUT I am mostly long on Google for stuff like Waymo, and some other things I can't remember now bc I just smoked a joint. But look, the point is, both good picks.
Making a LOT of assumptions here, brah. Did you miss the part up there at the top where I clearly state >I'm not sure I trust Schwab's API and would like to double check it.
If Google Gemini wants to win the LLM war: acquire Reddit. If Google wanted to structurally weaken competing LLMs, it wouldn't just out-train them, it would control the data exhaust they all depend on. Right now, a huge portion of high-quality conversational training data comes from Reddit (40% of citations). If Google vertically integrated those surfaces into Gemini (tightening API access, rate-limiting scraping, or bundling premium data behind Google-only licenses), rival models like ChatGPT would see a steady degradation in freshness and relevance, and users would just gravitate to Gemini. The game isn't compute anymore... it's data privilege. Whoever owns the largest stream of real-world human interaction doesn't just build better models; they slowly starve everyone else. Reddit is a $46B market cap; Google is ~$4,000B, so roughly 1% of Google's market cap.
My app uses SnapTrade for the investment/brokerage API.
What the OP is describing is definitely possible. I’ve done this myself using webhooks from TradingView, sent to my computer, which then connects via the Kraken API to execute trades. It does work. I usually trade crypto and haven’t yet explored how I’d handle this with stocks. That said, it can be hard for many people to understand exactly what’s being done here. Still, it’s not that unreasonable—assuming high leverage, a solid strategy, and low spreads. Most markets and algorithms are unprofitable, with roughly 50% profitability at best.
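A minimal sketch of that webhook-to-broker pattern, assuming the ccxt library and Flask; the keys are placeholders, and the alert JSON shape is made up (TradingView lets you define the alert payload yourself):

```python
# TradingView posts an alert JSON to this endpoint; we forward a market
# order to Kraken via ccxt. Sketch only -- add auth, validation, and
# error handling before pointing real money at it.
import ccxt
from flask import Flask, request

app = Flask(__name__)
kraken = ccxt.kraken({"apiKey": "YOUR_KEY", "secret": "YOUR_SECRET"})

@app.post("/tradingview-webhook")
def handle_alert():
    # Assumed payload, e.g. {"symbol": "BTC/USD", "side": "buy", "amount": 0.001}
    alert = request.get_json(force=True)
    if alert.get("side") == "buy":
        kraken.create_market_buy_order(alert["symbol"], alert["amount"])
    else:
        kraken.create_market_sell_order(alert["symbol"], alert["amount"])
    return {"status": "ok"}

if __name__ == "__main__":
    # Must be reachable from the public internet (e.g. via a tunnel)
    # for TradingView's webhook to hit it.
    app.run(port=8080)
```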
EDI and API connections are boring but insanely important.
I have a Google Sheet I use to track all my holdings. I use the Google Finance API to track prices. Last night I noticed my price for TSMC wasn't using the API and was hardcoded to $213. I updated the cell to use Google Finance API... Hey! I had $15,000 more of TSMC than I thought I did!
Working on the API version of this agent; I'll send it in the next week or so. If you don't mind, please DM me, as it helps me keep track of everyone I need to get back to! Appreciate it!
Before this turns into the usual noise, let's set a baseline. These DDs are built from primary data first: filings, financials, ownership, dilution history, liquidity, positioning, and sentiment. That raw information is then structured and checked to surface risks, inconsistencies, and failure points that are easy to miss when people rush to conclusions. Data is pulled via APIs and then delivered to an LLM platform for better and cleaner content delivery. The work is about organizing reality, not selling a narrative. **If you think something is missing, incorrect, or misread, point to the data and add value. That improves the work and helps everyone reading.** If the contribution is "this is incomplete," "this is AI," or vague negativity without specifics, it adds nothing and will be ignored. I am not here to hype tickers, argue with strangers, or babysit bad-faith takes. This is a working floor. If you want to help sharpen the analysis, you're welcome. If not, keep scrolling. **For the MODs planning to flag this**: the site these DDs link to hosts the long-form write-ups because Reddit's character limits make it impossible to post this volume of work without breaking it into dozens of comments, which isn't realistic or readable. All I'm asking is that intent, execution, and substance be weighed alongside optics. There's a difference between dropping affiliate links and publishing free research that happens to live off-platform due to format constraints. From an intent standpoint, this initiative started because this sub, like most penny stock spaces, is flooded with hype, pumps, and drive-by narratives. These DDs are built from primary data and take real time to produce. They're not designed to push tickers or sell subscriptions. They're designed to slow people down and give them context before they trade. I also want to point out that these tickers came directly from community voting. I didn't pick them. I didn't inject my own agenda. I did the work the community asked for and shared it in the most practical way available. If the community decides it doesn't want this kind of content, I'll respect that. But I'd argue there's room for a win-win here where people get higher-quality analysis without turning the sub into a billboard.
Totally! Free API access can definitely attract developers, but it's gotta keep delivering value to maintain that interest after the trial…
Check them out. They have an API and a very responsive team. I’ve been working on trying them out and I like what I see so far.
Gemini provides a free quota for API access, which is good for developers.
Discord has a great fucking API compared to slack but other than that I think slack is really good as well. Teams sucks a load of donkey cocks but what's wrong with slack?
Monarch Money and Copilot are the gold standards for visibility. It’s a necessary step for complex portfolios, mirroring the control rooms of professional family offices. But don't ignore the 2022 Plaid disruptions. Single API aggregators create a dangerous blind spot during liquidity shocks. Which is why manual oversight of primary accounts remains mandatory when markets fracture.
You know they sell API access, right? The LLM is only a POST request away...
>Right, so you gotta work in a container cause you know it's a flawed technology,

Buddy, no technology is ever free from flaws. The idea is that you weigh the pros and the cons. The pros outweigh the cons here massively for many types of work. Even separate from this, many types of development, especially webdev, are just much better done in containers where you have everything set up just right, the exact same way every time, rather than dealing with specific configuration and tooling hell all the time. It just makes sense.

>Like imagine you're doing data science work and when you got 5% left it just nukes your entire work, even if containerized, surely this technology is worth something.

What part of working in a container and backing up makes you think anyone would get 95% of the way through a project or significant goal without having ever thought to save their progress?? So many of the room-temperature-IQ anti-AI takes rely on just... not at all adapting to new technology and then pretending it can't work because you won't use it right. It's like saying cars are worthless because if you're drunk they can't drive you home like horses did... Like yeah, that's a flaw, but then "just don't use them while drunk / call a taxi" is the answer.

>Just do another 2 million API calls!

You truly don't realize how much of a nothing sentence this is. It exposes a level of ignorance you don't even realize it exposes. Firstly, it's extremely unlikely that you'll be making 2 million API calls over the course of... a very long period of time, much less due to losing any amount of work someone might lose with the container workflow I described. Secondly, 2 million API calls to what, exactly? An API call isn't a fixed-cost thing. The cost and use cases for calls vary so wildly that 2 million could be anything from completely reasonable to outrageous. For instance, a game probably makes 2 million calls to a graphics API within a gaming session. On the other hand, making like 500 API calls from a short shopping visit to a website is a lot. Basically, you've exposed that you know basically nothing about the entire space you are giving your take on, by basing your whole take on the weakest possible argument one could make, and showing that you have no idea of the significance of essential terms related to the space. I've probably given you more knowledge than you had before in this one comment, and I'm just some dude. If *I* feel like I know next to nothing, imagine how little you know.
Most microcaps say integration and mean a Zapier webhook. EDI and API into tier-one TMS is a different league.
Right, so you gotta work in a container cause you know it's a flawed technology, I'm sure it's gonna do stuff just fine. Like imagine you're doing data science work and when you got 5% left it just nukes your entire work, even if containerized, surely this technology is worth something. Just do another 2 million API calls! AI is worthless "technology"
It's tough to troubleshoot without additional information about your specific request. Is it possible that the option contract you queried did not trade within the timeframe you requested? If it's far in/out of the money, it may be that no trades happened, and therefore no bars were available. I can confirm that this request returns hourly bars using a free-tier API key: [https://api.massive.com/v2/aggs/ticker/O:SPY260105C00683000/range/1/hour/2025-12-22/2026-01-01?adjusted=true&sort=asc&limit=120&apiKey=](https://api.massive.com/v2/aggs/ticker/O:SPY260105C00683000/range/1/hour/2025-12-22/2026-01-01?adjusted=true&sort=asc&limit=120&apiKey=)
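For anyone who wants to reproduce that check programmatically, here is the same request in Python; the API key is a placeholder, and the `results` field name assumes the usual aggregates response shape:

```python
# Same aggregates endpoint and contract ticker as the URL quoted above.
import requests

url = (
    "https://api.massive.com/v2/aggs/ticker/O:SPY260105C00683000"
    "/range/1/hour/2025-12-22/2026-01-01"
)
resp = requests.get(
    url,
    params={"adjusted": "true", "sort": "asc", "limit": 120, "apiKey": "YOUR_KEY"},
)
resp.raise_for_status()
bars = resp.json().get("results", [])
# Zero bars usually means the contract simply didn't trade in the window,
# which is the far in/out-of-the-money case described above.
print(f"{len(bars)} hourly bars returned")
```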
I’ve followed Strategy #2 for about half a year and lost nearly all my savings. My decisions, my responsibility, but please take this as a warning. Managing at 1.2 or 0.8 EM is a mirage—NDX can move from 1.5× EM to ITM in seconds, and the “roll” often becomes just continued pain with larger losses. Unless you’ve built a fully tested, risk‑bounded trading system with API access to your broker that executes algorithmically and enforces hard limits, you’re trying to out‑react moves that happen faster than you can click. I do believe papakong88 has good intentions, but the risks here are too great to call this sustainable. Don’t do it. I beg you. Don't be a fool like me.
Appreciate the detailed breakdown. For anything "agentic commerce" I keep coming back to the same question you raised: contract-to-GAAP revenue conversion and what the retention/renewal story looks like after the initial deployments. Are you planning to track any leading indicators like cohort usage (API calls per customer) or gross margin on the checkout/payment components separately? On the broader agentic AI side (non-stock specific), I've been collecting examples of real-world agent patterns and what tends to break in production here: https://www.agentixlabs.com/blog/
u/trbodeez Thanks! I wrote a few scanners that use AI agents, statistical inference engines, and news and gossip sentiment. This one specifically logs in every night after the OPRA files are available and then makes API calls against Alpaca's brokerage (the best balance between cost and value). It gets a history of purchased call contracts to establish an average call volume, and then it looks at yesterday's call volume. If yesterday's call volume is 200% higher than the average and higher than in each of the five prior trading days, it makes the screen. The report builds graphs from the screen. https://preview.redd.it/nqbm5hi2xdbg1.jpeg?width=2340&format=pjpg&auto=webp&s=2578602959bb5ac88da0d2c3b4214558ab191d68 I'll continue in the next response; it will only let me include one picture. This is sample output from my Saturday 1:30 AM run.
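The screening rule itself is easy to restate in code. This sketch reads "200% higher than the average" as 3× the trailing average, and the data-loading step is a hypothetical helper rather than actual Alpaca calls:

```python
def passes_call_volume_screen(daily_call_volume: list[int]) -> bool:
    """daily_call_volume: oldest-to-newest daily call-contract volumes,
    with yesterday as the final entry."""
    if len(daily_call_volume) < 7:
        return False  # need a trailing average plus five prior sessions
    yesterday = daily_call_volume[-1]
    prior_five = daily_call_volume[-6:-1]
    history = daily_call_volume[:-1]
    average = sum(history) / len(history)
    # Yesterday must be 200% above the trailing average (i.e. 3x it)
    # AND higher than each of the five prior trading days.
    return yesterday > 3 * average and all(yesterday > v for v in prior_five)

# Hypothetical usage -- the volume fetch stands in for the Alpaca query:
volumes = [100, 120, 90, 110, 105, 95, 400]
print(passes_call_volume_screen(volumes))  # True
```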
This is very out of date. IEX cloud shut down their API months (more than a year?) ago. Like another commenter said, just an AI post.
To anyone interested in power: Large industrial diesel generators (gensets) are quite often used just as backup generators (mission critical). They can individually provide up to 5MW of power with ~100 liters of displacement. Companies involved in this market: **Caterpillar, Cummins, Kohler, Mitsubishi, Rolls Royce, Generac, Modine, API Heat Transfer, AKG Thermal, Adams Thermal** -Someone who’s worked in the genset industry.
Connecting your brokerage to any third party API is incredibly stupid if you have lots of money in it imo. I don't care if it says SOC 2 compliant or whatever, mistakes & bugs happen.
Yeah, it's a good library, which is what it was promised to be; your error isn't unique, and there is a lot of existing information on that API, if you understand what I mean, so it's just reading and regurgitating. But actual coding, especially software development, is way more intricate, and even now, unless a human checks and verifies, nobody uses AI… it's faster to do it yourself rather than trying to understand why this test doesn't work or why this output is weird. It's like an open-book test… the API facts are written in the book, so they're easy to find because the AI can find the page. But software coding and development is not an open-book test… it requires combining objectives, tests and trials, reasoning, and so much more.
Am in the same boat, and there is no good answer. Wikipedia shows some historical data, e.g. the Nasdaq-100 until 2007. API providers usually have limited lookback periods (like 5 or 10 years) unless you shell out the big bucks. I have now committed to doing the grunt work and manually extracting the data from available press releases.
Yeah, I use intraday data. And there are a million+ options, each with its own price history, so it's like the data for a million stocks. Then options also have many data points: bid, ask, last price, OI, IV, 1st- and 2nd-order Greeks, etc. Then you can also pre-calculate a variety of indicators that you may want to store with the options price data. Currently I even use a 768 GB RAM server to store just the recent options data for the last few weeks in memory, to be able to pull it up via an internal API, and that's barely enough RAM to store just a couple months of recent options data.
"large amount" but I'm curious if someone can do an API call and sift the data for the percent of people who lost all their retirement fund and in massive debt vs the success posts.
Even the most staunch AI boommongers will admit that there will be job loss. Demand is not infinite. Nobody knows what will happen, but one cannot create consumer or business demand by juicing supply. It feels dumb just typing that, but here we are. Your (correct) reasoning is first year Econ stuff, but nobody's pulling their hair out of their skull here over that intractable fact. Powell appears unperturbed because of "real revenue," which I take to mean conversion of model compute to income. The analysis needs to pinpoint *where exactly that compute is happening.* API tokens? Consumer accounts? What are the use cases? Only then will we begin to understand how AI will affect the job market.
Is this API actually secure or is this going to end up being a back door for people to scrape data from?
API calls are riddled with problems... LLMs are tainted with false-false and false-true nodes: hallucinations... Bugs ARE a big deal: a user sends an order for 10 @ 100.0654 and you send the exchange an order for 10 @ 10.07... big difference at volume and equity spent... never mind the UI/UX interaction with the users and "their" expectations and customizations. I've developed desktop, server, databases/farms & websites for the past 25 years... You have a potentially good idea; you need more heads in the room, and that is the purpose of the additional devs.
how did you compile this list? did you use an API? manual reading? i’d like to do the same.
Thank you! I'm glad to hear that your platform has extensive historical data. However, after I logged in, it mentioned that the data is accessed via an API. Where can I download ready-made CSV files? Or could you please help me export the data directly?
API of course. Had the help of a tech-forward colleague.
You did this by hand? Or API?
I’d encourage you to check out Massive.com. We provide a couple of years of minute-level options data for free. The data can be queried in CSV format, which makes it easy to import into Excel or other spreadsheet tools. Many users also pull the data directly from the API into Excel. I’m happy to set up a trial if you’d like to take a closer look. Disclosure: I work at Massive.
Happy to chat more about Public (I'm the COO) if you're interested in learning more! Can also introduce you to our Developer Advocate and our API Sales team who can help get you onboarded smoothly. DMs open!
Main thing at 18: treat process > stock picks. Analyst ratings (on moomoo, Yahoo, Morningstar) are fine as a filter, but don’t copy them blindly; track a few ratings in a spreadsheet and see who’s accurate over 1–3 years. Build a simple core first (broad index funds in your TFSA/RRSP), then use a tiny sandbox for “learning” stocks. For deeper fundamentals and API-style data pulls, I’ve mixed Koyfin, Alpha Vantage, and DreamFactory to run my own screens and backtests.
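As a concrete example of an API-style pull, here is a minimal Alpha Vantage request in Python (a free API key is required; the "demo" key only works for IBM, and the field names follow Alpha Vantage's documented JSON schema):

```python
# Pull daily bars for one symbol -- the building block for a simple
# screener or backtest data dump.
import requests

resp = requests.get(
    "https://www.alphavantage.co/query",
    params={"function": "TIME_SERIES_DAILY", "symbol": "IBM", "apikey": "demo"},
)
resp.raise_for_status()
series = resp.json().get("Time Series (Daily)", {})
for date, bar in list(series.items())[:5]:
    # Alpha Vantage keys bars as "1. open", "2. high", "3. low", "4. close"...
    print(date, bar["4. close"])
```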
I've been testing [Paradox Intelligence](https://www.paradoxintelligence.com) too. Curious how you're connecting it to Google Sheets... are you using their API directly or pulling data manually?
The main thing here is that debt only works if your capacity comes online on time and stays near full. Coreweave’s problem isn’t just leverage, it’s timing risk: construction delays plus any GPU supply hiccup or AI demand slowdown and the interest meter keeps running while racks sit dark. OP’s point about 4x+ debt/revenue is brutal when the assets are so specialized and resale is limited. I’d watch three signals: power contracts (locked and cheap or not), actual utilization rates, and renewal terms with their biggest AI tenants. In my shop we’ve seen smaller infra players use Equinix, Digital Realty, and even DreamFactory-backed API layers to stay flexible instead of betting the farm on owning everything. The main point: this business model lives or dies on timing and utilization, not just “AI boom” headlines.
Cloudflare NET (bet on AI internet infrastructure) GOOG (bet on models + API layer) Intel (bet on American fab for AI)
Nonsense, two lines of code to call google_ai and three lines to call the trading API.
And how do you automate the trades? Also APIs? The only broker with API access I know of is IBKR.
My 11k loc script to call chatgpt API will destroy yours!
What you can't change is that data and APIs change, so any strategy you employ needs to accept that. From there you have two options: 1) avoid, 2) manage. Avoid is what I do. I have 4 ETFs, a house, a mortgage, and cash. That's 7 line items in a spreadsheet. I update them periodically and can see things like debt-to-assets, exposures, etc. Manage would have to accept change at the API layer. One thing I would think about is having AI code me up something that grabs the data from the API and puts it in a CSV. Then, separately, I would build a spreadsheet/dashboard on the schema of that CSV. This separates data ingest from all the ways you want to see the data. The CSV becomes a tab in the spreadsheet and can be updated.
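A sketch of that ingest-then-present split might look like the following, where `fetch_positions()` is a stand-in for whatever API you actually call:

```python
# One small job pulls from the API and dumps a flat CSV; the spreadsheet
# or dashboard only ever reads the CSV schema. If the upstream API
# changes, only fetch_positions() needs updating.
import csv
from datetime import date

def fetch_positions() -> list[dict]:
    # Placeholder: replace with your brokerage/data API call.
    return [{"asset": "VTI", "units": 100, "price": 280.0}]

with open("positions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "asset", "units", "price"])
    writer.writeheader()
    for row in fetch_positions():
        writer.writerow({"date": date.today().isoformat(), **row})
```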
It uses the Yahoo Finance API (so no personal API key is required)
Which API do you use for the live prices?
I'll try my best here… could be long.

>Some believe that only those who have insider (secret) information make the most profits.

While there are some (Steven A. Cohen and folks), most aren't, or don't get enough to make it consistently profitable. Instead it's about processing that new information into signals and acting on it. That's what retail just doesn't have access to.

>Others think that tools that funds, banks and investment firms are using are exclusive to them and anything that is available to the public doesn't really work.

Absolutely. Asset managers spend millions on people, and their raw resource is data.
- For example, TSLA earnings come at 4 PM. I'm paying Bloomberg to send me the full balance sheet by 8 PM. By then I have the competitor data loaded, industry data loaded and derived, and models waiting for new input to crunch out adjustments to TSLA. By 9 AM I have my orders for how much to buy/sell to adjust across the portfolios.
- Data is the #2 operating expense behind researcher salaries.
- Process is crucial, which removes so many emotional traps and behavioral biases. That alone probably accounts for so many retail mistakes. The process is hardened over time with support, checks, and metrics to improve incrementally.

>But I also think that there are many myths that are made up by people just to find an excuse for their losses.

Some elements of that, like "it's stacked against retail"… but truth is, yeah, it kinda is. In academic circles they coined the terms smart and dumb money (guess who's the smart money). I tried to replicate some of the processes myself without using company resources and found myself locked out. I tried to reach out to Morningstar about how I can get an API license for data… yeah, they don't reply unless you're an actual business. So… as retail, simplicity probably works best, because we aren't set up to get complicated.
Right now it is only available to private testers for testing and bug fixing. But alpaca is the broker and relaydesk is the API endpoint
Yessir! Alpaca is the broker, relaydesk is the API endpoint
Is this the paid data API that you used? I had too many false positives following LLM-based analysis. Did you verify it manually?
With all of the new light oil (40 API) coming out of the permian basin, refiners actually need heavier oil to blend with it. This is more of a midstream issue i.e. The Shale Revolution made the US the largest producer of Sweet Light crude while having the largest capacity to refine heavy sour crude on the planet. Hence the blending. Unless the plan is to become an LPG giant(which the West does not use much, at least not compared to developing nations), you cannot simply refine Shale crude oil in a Louisiana or Texas Refinery. You will end up with a lot of Petrol and LPG and the diesel yield will not be profitable.
All they had to do was buy one of the many Reddit clients they crushed when raising API fees. There were so many good ones - instead we're stuck with this dogshit UI.
Came across AI model API keys. They charge for usage. So I guess that's a good way of making monies?
Thought they had killed the reddit API
I think they will have to, and want to, make a deal with Uber at some point. Most of these AV companies are tech-strong on the AV side but much weaker on the consumer side, Tesla actually being the worst on the consumer side, with Waymo likely being the best. It would be extremely easy for any one of these AV companies to partner with Uber at some point, and more so, it would be highly financially beneficial to Uber and these companies to do so. And all Uber has to do is allow their clients the option to have an AV or a human driver or "I do not care" and start adding them to their fleet. As for the liability and remote control, the AV backend will maintain that. Uber does not want it. They will just lease their cars, all controlled thru an API.
It's not all free, but I am happy to support it financially in the building phase and open it to people to use. Eventually, perhaps I will build a paid product layer on top of this to cover my expenses for advanced features like portfolio strategy management, cross-tracking with other news sources, live ticker prices, etc., but for now I'm happy to absorb the cost to allow this tool to benefit others. I'll DM you a detailed breakdown of the tech stack shortly. Mostly a mix of FinBERT, VADER, Grok, the Reddit API, Supabase for the backend, and Next.js.
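As a small illustration of the VADER piece of that stack (the `vaderSentiment` package; the FinBERT, Grok, and Reddit API parts are omitted here):

```python
# Score one comment's sentiment with VADER. Most pipelines threshold on
# the compound score, which ranges from -1 (negative) to +1 (positive).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("NVDA earnings crushed it, loading up on calls")
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```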
Glad you asked, actually. Just today we were contemplating making it open source and releasing an API for people to use this data effectively, beyond just the front end. We are working on both and will revert on the open-source status via DM early next week.
No one mentioned it… Tradier has a friendlier API than IBKR.
I planned on the IBKR API, but honestly, as a swing trader, I find their costs and API limits to be silly. Instead I went with a provider for tick data and process everything myself. It's easier for me to find plays, and then time when I want to enter and how much. I can also adjust and process the data on the fly vs. having to rely on them.
Your alternative approach and the IBKR+Python approach are the same thing. Your Python code needs to connect to a broker's API, like IBKR's, in order to execute trades. And the broker will provide a data feed. The only reasons you would want a 3rd-party data feed are if the broker's feed is too expensive or too slow. For instance, IBKR's TWS API updates option quotes every 250 milliseconds and has limitations on simultaneously streamed symbols, requiring you to pay more to exceed the limit: https://www.interactivebrokers.com/campus/ibkr-api-page/market-data-subscriptions/#market-data-lines There are quite a few broker/data-feed APIs: IBKR, Lightspeed, TradeStation, Public, Tradier, Tastytrade, etc. I prefer IBKR because they have great order execution, personally.
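For reference, a minimal version of the IBKR+Python pattern, using the community ib_insync library against a locally running TWS or IB Gateway (7497 is the conventional paper-trading port; the symbol and size are illustrative):

```python
# Connect, stream a quote, and place a paper order -- the same API
# handles both market data and execution.
from ib_insync import IB, MarketOrder, Stock

ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)  # TWS/Gateway must be running

contract = Stock("AAPL", "SMART", "USD")
ib.qualifyContracts(contract)

ticker = ib.reqMktData(contract)  # streaming quotes, subject to IBKR pacing limits
ib.sleep(2)                       # let a snapshot arrive
print(ticker.last, ticker.bid, ticker.ask)

order = MarketOrder("BUY", 10)
trade = ib.placeOrder(contract, order)
ib.sleep(1)
print(trade.orderStatus.status)

ib.disconnect()
```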