
BU


Mentions (24Hr): 0 (0.00% Today)

Reddit Posts

r/CryptoCurrencySee Post

CoinLoan halting operations for all users, including withdrawals.

r/CryptoCurrencySee Post

Earn Update: A Message Regarding the Recovery of Earn Assets

r/CryptoCurrencySee Post

Dangerous Precedent set by Tornado Cash?

r/BitcoinSee Post

Switched my S19 back on this morning!!! It's economical again, yay!!!!!

r/CryptoMoonShotsSee Post

| New P2E Project | Giveaway | 25th Finish | Dont Miss | Strong Holders |

r/CryptoMoonShotsSee Post

| Road2Moon | Insert Stonks | New P2E Project | NFT+100 BUSD giveaway|

r/CryptoMoonShotsSee Post

| Road2Moon | Insert Stonks | New P2E Project | NFT+100 BUSD GIVEAWAY |

r/CryptoMoonShotsSee Post

|Insert Stonks| NFT Whitelist Giveaway | Doxxed Team | Play2Earn | 5 WL + 100BUSD

r/CryptoMoonShotsSee Post

Road2Moon| Our Utility NFTs is Launching | Play2Earn Game launching In May 2022 |NFT Drops Soon! | Deflationary Tokenomics | Doxxed Team | Take Your Place In Our Road to Moon!

r/CryptoMoonShotsSee Post

|Road2Moon| NFT Drops Soon! | Our Utility NFTs is Launching | Doxxed Team | Play2Earn Game launching In May 2022! | Deflationary Tokenomics | Take Your Place In Our Road to Moon!!!

r/CryptoMoonShotsSee Post

|Road2Moon| Our Utility NFTs is Launching | NFT Drops Soon! | Doxxed Team | Play2Earn Game launching In May 2022! | Deflationary Tokenomics | Take Your Place In Our Road to Moon!!!

r/CryptoMoonShotsSee Post

Road2Moon| Our Utility NFTs is Launching | Play2Earn Game launching In May 2022 |NFT Drops Soon! | Deflationary Tokenomics | Doxxed Team | Take Your Place In Our Road to Moon!

r/CryptoMoonShotsSee Post

Road2Moon | Guilds & Scholarships coming soon! | Doxxed & Transparent Team | Play2Earn Game Soon! | First Utility NFTs | Take Your Place In Our Road 2 Moon!!!

r/CryptoMoonShotsSee Post

Road2Moon| Our Utility NFTs is Launching | NFT Drops Soon! | Play2Earn Game launching In May 2022 | Doxxed Team | Deflationary Tokenomics | Take Your Place In Our Road to Moon!!!

r/CryptoMoonShotsSee Post

Road2Moon| Our Utility NFTs is Launching | NFT Drops Soon! | Play2Earn Game launching In May 2022 | Doxxed Team | Deflationary Tokenomics | Take Your Place In Our Road to Moon!!!

r/CryptoMoonShotsSee Post

~=> Insert Stonks <=~ Our Utility NFTs is Launching | Doxxed Team | Deflationary Tokenomics | Play2Earn Game launching In May 2022

r/CryptoCurrencySee Post

Hiring NFT marketplace guidance

Mentions

r/CryptoCurrencySee Comment

![gif](giphy|W4c2fly8zKqVjwz9BU|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

Are the crypto kids something like the club kids of the [1980's?](https://www.google.com/search?q=club+kids+1980&tbm=isch&source=univ&fir=vmoYOZK7JtkTMM%252CIGXc4fKccxh8AM%252C_%253BhkPoCnYkQJRZ-M%252CSNPcL808UqsKaM%252C_%253BU0TJt-qmEv1jqM%252CNBUtXWH2jPP1TM%252C_%253BQoDHr-gIZ0sLbM%252CkGuXfE2OWXsS4M%252C_%253Bl4QA3SKVvo8KQM%252CxH9d_Zddio-FUM%252C_%253BLjoAS5Yl7itXfM%252CKkuK8_QxJ-JyaM%252C_%253BYrbg_ImBJZYvXM%252CkGuXfE2OWXsS4M%252C_%253Ba6osicgISAXGLM%252CKkuK8_QxJ-JyaM%252C_%253B1YYnBCsp1a0NBM%252CxH9d_Zddio-FUM%252C_%253BpfSO-C8rp-mizM%252CG_omR-JolfxLKM%252C_%253BFVCOAOiJQN8yBM%252CSNPcL808UqsKaM%252C_%253ByMwCJDOP6FEQNM%252Cycow6yYD6w01LM%252C_%253BB36M9GSNoMhOzM%252CsAx_Svhk7rjYmM%252C_%253BK5UJTlv2UKEojM%252CNBUtXWH2jPP1TM%252C_%253Br83MT_WZljoBIM%252CufjfAEyiTntlIM%252C_%253BCVUTJAoG1X_ECM%252CsAx_Svhk7rjYmM%252C_&usg=AI4_-kRUSljsnzxLvuqgalnIRZQobH6noA&sa=X&ved=2ahUKEwiF96y8l679AhXBl2oFHUyoBcYQjJkEegQICBAC&biw=1920&bih=941&dpr=1)

r/BitcoinSee Comment

> It's because the Twitter user was referring to a collectible quarter selling in (BU) at auction. The tweet has nothing to do with inflation whatsoever and OP is manipulating the narrative to fit.

That actually makes a lot more sense than the OP's representation.

Mentions:#BU#OP
r/BitcoinSee Comment

It's because the Twitter user was referring to a collectible quarter selling in (BU) at auction. The tweet has nothing to do with inflation whatsoever and OP is manipulating the narrative to fit.

Mentions:#BU#OP
r/CryptoCurrencySee Comment

Check the vid, it's pretty amazing. We don't *need* 1GB blocks now, but to see that they were viable 5 years ago is pretty cool. To be clear, Core still has the same bottlenecks that it had in 2017 (and BU removed), and they don't plan on removing them (they don't need to if they stick to 1MB). Blocksize limit is directly related to throughput (or transactions per second), since you can fit more txs in each block. Eth and Cardano are not comparable, since Eth has some 15 second block time, while Bitcoin has 600 seconds, so you'd need to make it proportional. But that wouldn't work, since BTC transactions and ETH transactions are very different.
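As a rough illustration of the blocksize-to-throughput relationship this comment describes, here is a minimal sketch; the 400-byte average transaction size is an assumption for illustration, not a figure from the comment:

```python
# Upper bound on throughput implied by a block size cap.
# avg_tx_bytes = 400 is an assumed average transaction size.

def max_tps(block_bytes: int, block_interval_s: int, avg_tx_bytes: int = 400) -> float:
    """Transactions per second if every block is filled to the cap."""
    return block_bytes / avg_tx_bytes / block_interval_s

print(max_tps(1_000_000, 600))      # Bitcoin's 1 MB / 600 s cap: ~4.2 tps
print(max_tps(1_000_000_000, 600))  # a 1 GB block at the same interval: ~4167 tps
# Comparing chains with different block times means dividing by each chain's
# own interval, which is the proportionality point made in the comment above.
```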

r/CryptoCurrencySee Comment

Never heard of BU or BCHN. Regardless, 1GB block size is insane considering nowadays ETH has 96KB and Cardano has 88KB. How important would you say block size is for a blockchain? And is it related to tps?

Mentions:#BU#ETH
r/CryptoCurrencySee Comment

#D #C #A https://imgur.com/a/4xt1BU2 #H #O #D #L thank you for coming to my TED talk

Mentions:#BU
r/CryptoCurrencySee Comment

idk why you are getting downvoted. https://imgur.com/a/4xt1BU2 I'm up 85% still lol. DCA, DCA, DCA. Literally this is why people constantly beat this dead horse. 85% over 5-6 years still beats the best stock market investing. DCA the stock market every month for 5 years at a guaranteed 10% return and you'd be up 25% from what you invested overall. I've got more than 3 times that return and it's more than 4 times lower than its ATH!!!!! Enjoying this bear. Stacking sats. Can't wait for the new ATH. First bull run you lose money, 2nd you break even, 3rd you make money, 4th you're rich.
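The ~25% figure is roughly right; here is a minimal sketch of that monthly-DCA calculation (the start-of-month buy and monthly-compounding conventions are assumptions, not stated in the comment):

```python
# Sanity check of the "DCA monthly at 10%/yr for 5 years -> ~25% gain" claim.
# Assumes one unit invested at the start of each month and monthly compounding.

monthly_rate = 0.10 / 12
months = 60
invested = float(months)

# Each month-i buy compounds for the remaining (months - i) months.
value = sum((1 + monthly_rate) ** (months - i) for i in range(months))

print(f"gain: {value / invested - 1:.1%}")  # ~+30%, same ballpark as the ~25% above
```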

Mentions:#BU
r/CryptoCurrencySee Comment

Looks like Andrew Stone and a couple other guys from BU have fair launched a new cryptocurrency called Nexa in June to implement some of their ideas on an actual, live blockchain. Some interesting stuff under the hood on it too. Gotta respect these guys who are legitimately all about the tech, they're the ones that got us this far to begin with. It's not trading anywhere, but I'm mining to help support the fledgling network. Maybe that will turn out to be a good idea once it's on some exchanges.

Mentions:#BU
r/CryptoCurrencySee Comment

Don’t pursue Luna BU

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|26gR2BU2Hrt9HSN6E)

Mentions:#BU
r/CryptoCurrencySee Comment

Both sides were ridiculous, and embarrassing. At that time I was firmly against the big blockers. Since the small blockers have failed on so many of their promises, and kicked me out of their subreddit for asking questions, I am not as hard on the big blockers. The whole thing was stupid. My biggest takeaway from the blocksize wars is that everyone, literally everyone, wanted bigger blocks, and it was always the plan (even Blockstream said they wanted them larger if pressed). Now today in 2022, the blocks are still 1MB, BTC transactions are still slow and expensive, and self-custodial Lightning is confusing and buggy. It's ridiculous that a tiny minority has kept the blocksize small this long. You don't have to make them as big as BU did, but even 4MB would make a huge difference, and have no effect on decentralization.

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|26gR2BU2Hrt9HSN6E)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|0SbsK1BU26pDmTsDDP)

Mentions:#BU
r/CryptoMarketsSee Comment

There are different kinds of infinity though: https://m.youtube.com/watch?v=dEOBDIyz0BU

Mentions:#BU
r/BitcoinSee Comment

Funny how half of the replies in that thread are just shill accounts that have long since gone inactive or even deleted themselves. highintensity canada, last post three years ago. utopiawesome, last post three years ago, knight222 two years ago, ESDI2 four years ago, ... It's like this in most old threads on these subjects. And BU has essentially vanished, gone inactive while its officers quietly transfer the millions of dollars worth of donations they got from suckers into their pockets. One thing you were wrong about: The sub shouldn't have been renamed BU -- as what it's really about is *anything* that trashes Bitcoin and promotes whatever scamcoin or ponzi the subreddit's convicted felon owner is invested in right now.

Mentions:#BU
r/CryptoCurrencySee Comment

Happy Easter filthy animals. ![gif](giphy|fxeSeNieV5E1BU3PRo|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

I hate how much I love this comment! ![gif](giphy|d46BU8L2zFzo4Ene)

Mentions:#BU
r/CryptoCurrencySee Comment

I am aware that block propagation is an earlier bottleneck than validation. We're closer to fixing the block propagation bottleneck than the (less critical) validation ones, though. Graphene has been merged into BU and should mostly solve the issue. After that, UDP+FEC should get us to the point where we can forget about block prop issues for a long time.

> with the massive increase in SPV traffic

SPV traffic is pretty easy to serve from a few high-performance nodes in datacenters. You might be thinking of Jameson Lopp's article a year back. He assumed that each SPV request requires reading the full block from disk just for that one request, and that's not at all true on a protocol level, although it currently is true on the implementation level. You can have different nodes that keep different blocks in RAM, and shard your SPV requests out among different nodes based on which blocks they have cached. These nodes can also condense several different SPV requests into a single bloom filter, and use that one bloom filter to check the block for relevant transactions for 100 or 1000 SPV requests all at the same time. It's really not going to be that hard to scale that part. Home users' full nodes can simply elect not to serve SPV, and leave that part to businesses and miners. We professionals can handle the problem efficiently enough that the costs won't be significant, just as the costs per user aren't significant now.

> What's your napkin math for how long it would take a $1000 desktop computer to validate a 1GB block once the known bottlenecks are resolved?

**First, block propagation.** With Graphene, a typical 1 GB block can be encoded in about 20 kB, most of which is order information. With a canonical ordering, that number should drop to about 5 kB. Sending 20 kB or 5 kB over the internet is pretty trivial, and should add about **1 second** total.

**Second, IBLT decoding.** I haven't seen any benchmarks for decoding the IBLTs in Graphene for 1 GB blocks, but in 2014 I saw some benchmarks for 1 MB blocks that showed decoding time to be around 10 ms. If it scales linearly, that would be **around 10 seconds** for decoding.

**Third, block sorting.** A 1 GB block would have about 2.5 million transactions. Assuming that we're using a canonical lexical ordering, we will need to sort the txids for those transactions. Single-threaded sorting is typically between 1 million keys per second and (for uint64_t keys) 10 million keys per second, so sorting should take around **1 second**.

**Fourth, computing and verifying the merkle root hash.** The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. With 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about **1 second**.

**Fifth, block validation.** This step is hard to estimate, because we don't have any good benchmarks for how an ideal implementation would perform, nor do we even have a good idea of what the ideal implementation would look like. Does the node have the full UTXO set in RAM, or does it need to do SSD reads? Are we going to shard the UTXO set by txid across multiple nodes? Are we using flash or Optane for the SSD reads and writes? But you said napkin, so here's a shot. A 1 GB block is likely to have around 5 million inputs and 5 million outputs. Database reads can be done as a single disk IO op pretty easily, but writes generally have to be done more carefully, with separate writes to the journal, and then to multiple levels of the database tree structure. For the sake of simplicity, let's assume that each database write consists of four disk writes plus one disk read, or 5 ops total. This means that a 1 GB block will require around 10 million reads and 20 million writes. Current-gen M.2 PCIe NVMe top-of-the-line SSDs can get up to 500k IOPS. In two years, a good (but not top-end) SSD will probably be getting around 1 million random IOPS in both reads and writes. This would put the disk accesses at around **30 seconds** of delay. Sharding the database onto multiple SSDs or multiple nodes can reduce that, but I presume desktop computers won't have access to that. If we have Optane, the UTXO stuff should get way faster (10x? 50x?), as Optane has byte-level addressability for both reads and writes, so we will no longer need to read and write 4 kB blocks for each 30 byte UTXO. Optane also has much better latency, so a good database will be able to get lower write amplification without needing to worry about corruption.

**Zeroth, script verification.** This is generally done when a transaction hits mempool, and those validation results are cached for later use, so no substantial extra script verification should need to be done during block validation. All we need is to make sure AcceptToMemoryPool doesn't get bogged down in the 10 minutes before the block. A single CPU core can verify about 5000 p2pkh scripts (i.e. single ECDSA sigop) per second, so an 8-core desktop should be able to handle 40,000 p2pkh inputs per second. Verifying the 5 million inputs in advance should take 125 seconds out of our 600 second window. That's cutting our safety margins a bit close, but it's tolerable for a non-mission-critical min-spec machine. Because this is done in advance, that 125/600 seconds turns into **0 seconds** for the sake of this calculation.

**All told**, we have about (1 + 10 + 1 + 0.5 + 30) = **42.5 seconds** for a decent desktop to receive and verify a 1 GB block, assuming that all the code bottlenecks get fixed. There are probably a few other steps that I didn't think of, so maybe 60 seconds is a more fair estimate. Still, it's reasonable. Miners will, of course, need to be able to receive and process blocks much faster than this, but they will have the funding to buy computers with much greater parallelization, so their safety margin versus what they can afford should be about the same as for a casual desktop user.

> And how much upstream bandwidth do you think would be required just to relay transactions to a few peers (again assuming that most transactions will come from p2p gossip and not through a block)?

This largely depends on how many peers our user has. Let's assume that our desktop user is a middle-class hobbyist, and is only sacrificing peer count a little bit in favor of reduced hardware requirements. Our user has 8 peers. Transaction propagation comes from 3 different p2p messages.

**The first message is the INV message**, which is used to announce that a node knows one or more transactions with a specified TXID or TXIDs. These INV messages are usually batched into groups of 3 or so right now, but in a higher-throughput context, would likely be batched in groups of 20. The TCP/IP and other overhead is significant, so an INV for a single TXID is around 120 bytes, and each additional TXID adds around 40 bytes (not the 32 byte theoretical minimum). With 20 tx per INV, that's 880 bytes. For each peer connection, half of the transactions will be part of a received INV, and half will be part of a sent INV. This means that per 2.5 million transactions (i.e. one block) per peer, our node would send and receive 55 MB. For all 8 peers, that would be **440 MB** in each direction for INVs.

**The second and third messages are the tx request and the tx response.** With overhead, these two messages should take around 600 bytes for a 400 byte transaction. If our node downloads each transaction once and uploads once, that turns into **1.5 GB** of traffic in each direction per block.

Lastly, we need to propagate the blocks themselves. With Graphene, the traffic needed for this step is trivial, so we can ignore it.

In total, we have **about 1.94 GB bidirectional** of traffic during each (average) 600 second block interval. That translates to average bandwidth of 3.23 MB/s or **25.9 Mbps**. This is, again, reasonable to expect for a motivated middle-class hobbyist around 2020, though not trivial.
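Since this comment leans on the same arithmetic in several places, here is a minimal Python sketch that reproduces its totals; every constant below is one of the comment's own stated assumptions, not a measured value:

```python
# Reproduces the napkin math above for a ~1 GB block.
# All constants are the comment's stated assumptions.

TXS = 2_500_000                        # transactions per 1 GB block
PEERS = 8                              # middle-class hobbyist node

# -- validation time on a decent desktop (seconds per step) --
t_prop = 1.0                           # ~20 kB Graphene encoding over the wire
t_iblt = 10.0                          # 10 ms per 1 MB block, scaled linearly to 1 GB
t_sort = TXS / 2.5e6                   # ~2.5M txids at ~2.5M keys/s
t_merkle = (2 * 2 * 32 * TXS) / 300e6  # 320 MB of SHA256 at ~300 MB/s on one core
t_utxo = (10e6 + 20e6) / 1e6           # ~30M disk ops at ~1M random IOPS
total_s = t_prop + t_iblt + t_sort + t_merkle + t_utxo
print(f"validation: ~{total_s:.0f} s")  # ~43 s, within the 42.5-60 s range above

# -- transaction relay bandwidth, per direction, per 600 s block interval --
inv_bytes = TXS / 20 * 880 / 2 * PEERS  # 20 txids per INV, 880 B each, half sent
tx_bytes = TXS * 600                    # ~600 B request+response per 400 B tx
total_bytes = inv_bytes + tx_bytes      # ~1.94 GB per direction
print(f"relay: ~{total_bytes / 600 * 8 / 1e6:.1f} Mbps per direction")  # ~25.9
```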

r/CryptoCurrencySee Comment

So the CEO has a bachelor's in physiology from BU and has been working as a clinical assistant part-time while running SundaeSwap, and the COO right before this position was an investment banking intern... Amazing stuff, crack team, really.

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

Now I don't even like Coin Bureau that much, but don't just outright lie. I went onto his YouTube page, typed LUNA, SOL, AVAX, and guess what I found: [https://www.youtube.com/watch?v=uLmVtec0px4&t=45s](https://www.youtube.com/watch?v=uLmVtec0px4&t=45s) [https://www.youtube.com/watch?v=enAoz-87D7A](https://www.youtube.com/watch?v=enAoz-87D7A) [https://www.youtube.com/watch?v=hsxgqcHk0BU](https://www.youtube.com/watch?v=hsxgqcHk0BU) You trying to say these titles aren't bullish for non-ETH and non-ATOM projects?

r/CryptoCurrencySee Comment

![gif](giphy|BU1rWEqbUTXWw)

Mentions:#BU
r/CryptoCurrencySee Comment

This is probably the stupidest comment I've ever read. Getting a bachelor's degree in a non-STEM field with honors is completely trivial, especially at a grade-inflated school like BU, which isn't "elite" by any standard. Any economist worth their salt has also completed a master's and PhD; an undergraduate degree is a worthless piece of paper in this field which just shows you've completed a bunch of unrelated Gen Eds and a few elementary introduction classes. If her degree truly meant anything then why didn't she get a job as an economist or professor? Why did she become a bartender that spouts absolute nonsense such as "unemployment is low because people work two jobs"?

Mentions:#BU
r/CryptoCurrencySee Comment

29th https://twitter.com/kucoincom/status/1464247575461040129?t=QW0ymHJF-7nhYiz5BU2RWA&s=19

Mentions:#BU
r/CryptoCurrencySee Comment

https://nitter.net/kucoincom/status/1464247575461040129?t=QW0ymHJF-7nhYiz5BU2RWA&s=19 Here is the link to that Twitter thread on Nitter. Nitter is better for privacy and does not nag you for a login. More information can be found here: https://nitter.net/about *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/CryptoCurrency) if you have any questions or concerns.*

Mentions:#BU
r/CryptoCurrencySee Comment

George has entered the chat ![gif](giphy|BU88dlLXhiXzW)

Mentions:#BU
r/CryptoMoonShotsSee Comment

the RICHE$T in the $HIB family ha$ arrived and $tealthlaunched on the B$C! Baby $hib Trillionaire is a B$C BEP-20 token that pay$ 8% BU$D reward$ just for buying and holding the token. ​ 🌐https://www.babyshibtrillionaire.com ​ 📃CA: 0xadb9135a15e4c5969abbb0152c3791e95423a3c9 ​ 🔒LP LOCK:https://mudra.website/?certificate=yes&type=0&lp=0x864be86b88097ccd8c1cac63fd27981f927e0a6b ​ 💪$trong active community! ​ ✅ AUDITpassed with Coinscope! ​ 🐦Con$i$tant daily twitter giveaway$! ​ ⬆️Ranked top 10 on coinhunters.cc ​ 🗨DEV$ in VC daily ​ 💼10% marketing ​ 🚀🔥5% buyback and burn ​ 🤝100% community driven! ​ 💎🙌 Nothing but diamond hand$! ​ ​ Baby $hib Trillionaire is focu$ed on generating pa$$ive income for all, while creating a great sen$e of community! ​ 60% burned before launch. GONE FOREVER! ​ 16% tax on every buy and $ell tran$action. ​ 💰8% rewarded in BU$D ​ ⛲2% to LP ​ 💼2% to Marketing ​ 🔥2% to Buyback/burn ​ 🤖🔥2% to auto burn ​ Thats 2x the BURN! ​ B$T is al$o committed to frequent redistribution through airdrop$ and community giveaway contest$! ​ Huge roadmap milestone$ include ✅Blockfolio li$ting happening in 72 hour$! Poocoin ad$ running $oon! ✅Twitter Influencer$ ✅Youtube Influencer$ ✅Tiktok Influencer$ ✅CM$ Trending https://www.reddit.com/r/CryptoMoonShots/comments/quijm9/babyshibtrillionaire_8_busd_rewards_for_holders/?utm_medium=android_app&utm_source=share Web$ite/Logo/Whitepaper V.2 upgrade NFT Marketplace! BST $wap! ​ Twitter ​ https://twitter.com/BabyShibTrilli1?t=AyEw_0Z79oHgm-6pnZ7WgQ&s=09 ​ Telegram ​ https://t.me/BabyShibTrillionaireOfficial ​ In$tagram ​ https://www.instagram.com/babyshibtrillionaire ​ Audit: https://www.coinscope.co/coin/2-bst/audit ​ Grab your$elf a bag of B$T and join the community today!

r/CryptoCurrencySee Comment

You're telling me a BU node from before the split follows the bch fork? I really doubt that.

Mentions:#BU
r/CryptoCurrencySee Comment

[lol wtf](https://i.imgur.com/BU2fMwu.jpg)

Mentions:#BU
r/CryptoCurrencySee Comment

Next stop: [The One Hundred Trillion Dollar bill](https://m.media-amazon.com/images/I/81BU1wksDAL._AC_SL1165_.jpg).

Mentions:#BU#AC
r/CryptoCurrencySee Comment

Hey everyone, I've recently begun my crypto investment journey. I started late August of this year and can honestly say it was very intimidating watching my coin value fluctuate so much in such a short time. Like the other 95% of people out there I wanted to pull all my money out (at a loss) and buy back into another coin which was gaining value, a rookie mistake, DONT DO IT! But anyway to my story.. My father in 2010 was reading through the newspaper and showed me that there was going to be a revolution in currency.. His words were "when you're my age everyone will be using computer money". 23 at the time, I actually laughed, my words: "get off the beers dad, you tripping. NO CHANCE". He was eager to show me so only for a minute he showed me the article.. BITCOIN WAY OF THE FUTURE! It was the beginning of August 2010, my birthday was coming up.. He wanted to buy me $100 of BITCOIN.. I told him.. Mate my car needs fuel and I need a pack of smokes.. He tried one last time to say "think about the future... WHAT IF!!!!" My last words that night: LOL ITS ALL BU#@$H#T to take your money... ITS A SCAM.. BITCOIN AUGUST 2010: $0.008. $100 = 12,500 BTC @ $0.008. BITCOIN SEPTEMBER 2021: $66,400. 12,500 x $66,400 = PROPOSED BITCOIN WALLET TOTAL.. $830,000,000. YES I KNOW.. He tells me every day😁 Thanks everyone and goodluck on your own crypto journey👍

Mentions:#LOL#ITS#BU
r/BitcoinSee Comment

[https://docs.house.gov/meetings/BU/BU00/20210925/114090/BILLS-117pih-BuildBackBetterAct.pdf](https://docs.house.gov/meetings/BU/BU00/20210925/114090/BILLS-117pih-BuildBackBetterAct.pdf)

Mentions:#BU
r/BitcoinSee Comment

Have at it: https://docs.house.gov/meetings/BU/BU00/20210925/114090/BILLS-117pih-BuildBackBetterAct.pdf

Mentions:#BU
r/BitcoinSee Comment

Like there is a city called BU :) ? Or what? Because I know it's Buda and Pest, just wanted to make a joke that we can even be more specific and split it into BU and DA :D

Mentions:#BU#D
r/BitcoinSee Comment

Weeks indictment: https://www.justice.gov/usao-nj/press-release/file/1225026/download [Here is a video](https://youtu.be/16DxRgfnqow?t=1) with Bitcoin.com ex-CEO and convicted felon Roger Ver interviewing his "long time friend" Joby Weeks. Beyond the video where convicted felon Roger Ver introduces Weeks as his long time friend and discusses bitclub with him-- there are some interesting connections between Bitclub and convicted felon Roger Ver seemingly resulting from a co-marketing agreement around BCH. Here is a video of convicted felon Roger Ver helping out this scam by deleting a message saying they were a MLM scheme, https://www.youtube.com/watch?v=Ceak5rZNKOg supposedly in trade for them endorsing BU. Promotion of Bitclub with their BCH adoption on bitcoin.com: http://web.archive.org/web/20180201225141/https://news.bitcoin.com/six-months-later-bitcoin-cash-support-continues-to-grow/ This video is [convicted felon Roger Ver having dinner with Russ Medlin, Bitclub CEO](https://www.youtube.com/watch?v=5KPCMag83to). [Bitclub promotional photo](http://archive.is/n1FI1), showing Medlin and Joe Frank (also included in the indictment) from the above dinner. Convicted felon Roger Ver [lying about his interactions with Bitclub](https://np.reddit.com/r/btc/comments/e94qxa/bitcoincom_exceo_and_convicted_felon_roger_ver/fagkgep/) (in this thread) Convicted felon Roger Ver, calling the arrest a "double standard": https://www.reddit.com/r/btc/comments/e8zdll/bitclub_network_leaders_have_been_arrested/fafjavt/ More history of the Bitclub scam can be found at https://behindmlm.com/companies/bitclub-network/bitclub-network-ditch-bitcoin-payments-for-bitcoin-cash/

Mentions:#BCH#MLM#BU
r/CryptoCurrencySee Comment

Don't get confused about whether this is a bull or bear or crab market. Let me tell you what market it is: it is a BELL MARKET (**BE**AR + BU**LL**).

Mentions:#AR#BU
r/CryptoCurrencySee Comment

This should do it : https://youtu.be/hsxgqcHk0BU

Mentions:#BU
r/CryptoCurrencySee Comment

[https://www.youtube.com/watch?v=hsxgqcHk0BU](https://www.youtube.com/watch?v=hsxgqcHk0BU) [https://www.youtube.com/watch?v=7NdjivxrDoc&t=1093s](https://www.youtube.com/watch?v=7NdjivxrDoc&t=1093s) Hope this helps

Mentions:#BU
r/CryptoCurrencySee Comment

“You don’t have homework you go to fucking BU”

Mentions:#BU
r/CryptoCurrencySee Comment

[Coin Bureau](https://youtu.be/hsxgqcHk0BU) explains it really well

Mentions:#BU

r/CryptoCurrencySee Comment

Anyone else feeling glonky?? https://youtu.be/4_X1rhOq6BU

Mentions:#BU
r/BitcoinSee Comment

tldr; After reading Nick Szabo’s essay “Money, Blockchains and Social Scalability”, Donald McIntyre wrote to him, “I think it has been well received and has high impact because it provides a very good and timely framework for several things that are under hot debate presently or needed clarification for a long time: – the block size debate (core vs BU), the public vs private blockchain debate, the trustless vs trust minimized confusion, etc.” McIntyre added that Szabo commands more breadth, more depth, in more subjects. *This summary is auto generated by a bot and not meant to replace reading the original article. As always, DYOR.*

Mentions:#BU
r/CryptoCurrencySee Comment

Do you guys know the officially funniest video on YouTube? https://youtu.be/4_X1rhOq6BU

Mentions:#BU
r/CryptoCurrencySee Comment

https://youtu.be/gLnCD_8s6BU

Mentions:#BU
r/CryptoCurrencySee Comment

One of my most used words is BUY. Man I wish I could've bought some coins before they went to ATH. BUY BUY BU6

Mentions:#BUY#ATH#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU

r/CryptoCurrencySee Comment

Sorry it's a bit sloppy because I wrote it quickly.

    import json

    import matplotlib.pyplot as plt
    import pandas as pd
    import requests
    import seaborn as sns

    sns.set_theme()

    # Kraken pair aliases: the request uses 'BTCUSD', the response is keyed 'XXBTZUSD'
    coins = {'BTCUSD': 'XXBTZUSD'}

    def bollinger_bands(df, n, m):
        # Typical price: mean of high, low, and close
        data = (df['high'] + df['low'] + df['close']) / 3.0
        B_MA = pd.Series(data.rolling(n, min_periods=n).mean(), name='B_MA')
        sigma = data.rolling(n, min_periods=n).std()
        BU = pd.Series(B_MA + m * sigma, name='BU')  # upper band
        BL = pd.Series(B_MA - m * sigma, name='BL')  # lower band
        return df.join(B_MA).join(BU).join(BL)

    for coin, pair in coins.items():
        # Daily (1440-minute) OHLC candles from Kraken's public API
        parameters = {'since': 1565673998, 'pair': coin, 'interval': 1440}
        r = requests.get('https://api.kraken.com/0/public/OHLC', params=parameters)
        df = pd.DataFrame(json.loads(r.content.decode())['result'][pair],
                          columns=['date', 'open', 'high', 'low', 'close',
                                   'vwap', 'volume', 'count'])
        df = df.apply(pd.to_numeric)
        df['date'] = pd.to_datetime(df['date'], unit='s')
        df = bollinger_bands(df, 20, 2)  # 20-day window, +/- 2 standard deviations

        plt.figure(figsize=(15, 5))
        plt.title(coin)
        plt.plot(df['date'], df['close'])
        plt.plot(df['date'], df['BU'], alpha=0.3, color='red')
        plt.plot(df['date'], df['BL'], alpha=0.3, color='green')
        plt.fill_between(df['date'], df['BU'], df['BL'], color='grey', alpha=0.1)
        plt.show()
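For what it's worth, `bollinger_bands(df, 20, 2)` uses the textbook Bollinger parameters: a 20-period moving average with bands at plus/minus 2 standard deviations. Widening or tightening the bands is just the `m` argument; a hypothetical variation, not from the original comment:

```python
df = bollinger_bands(df, 20, 1.5)  # narrower bands at +/- 1.5 sigma
```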

Mentions:#BU
r/CryptoCurrencySee Comment

Working out has helped my mental health so much, man, dips don’t even faze me. My body, my mind and my portfolio are healthy. May not be gaining but i’m making gains in one aspect 💪🏽 ![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|1ZwSeIzMMm1BU0KrUq|downsized)

Mentions:#BU
r/CryptoMoonShotsSee Comment

**Your submission has been removed because 1 submission with a similar title has been posted on the subreddit in the past.**

**OP:** /u/tatatita

**Date:** 2021-07-08 21:00:18

**Duplicates:**

N | User | Date | Posted... | Similarity | Title
:-:|:-:|:-:|:-:|:-:|:-:
1 | [/u/Elastic_Pizza_](https://www.reddit.com/user/Elastic_Pizza_) | 2021-07-08 00:36:46 | 20 hours 23 minutes before | [97%](https://www.reddit.com/r/CryptoMoonShots/comments/ofw1s3/olympicdoge_the_first_doge_token_for_the_tokyo/) | [\| 🏅OlympicDoge 🏅 \| The first Doge token for the Tokyo Olympic Games with BU...](https://redd.it/ofw1s3)

I am a bot. If you believe this was sent in error, reply to this comment and a moderator will review your post. **Do not delete your post or moderators won't be able to review it.**

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

"Im blue, ah bu dee, ah bu Da.... AH BU DEE, AH BU DAJ....."

Mentions:#BU
r/CryptoCurrencySee Comment

*DECEMBER CORN FUTURES FALL BY EXCHANGE LIMIT TO $5.325/BU https://twitter.com/DeItaone/status/1405584470095323141?s=19 Corn is weak. Bearish

r/CryptoCurrencySee Comment

Mac ![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

Oh cool you could have a QR code as your BU then?

Mentions:#BU
r/BitcoinSee Comment

EU monetary policy is primarily built for the German economy, not the PIGS economies or the Eastern European economies. True, Germany contributes more to the budgets than anyone else, but they can afford that because they have a whole continent working to their playbook. All it takes is a look over the past 15-20 years at growth levels, unemployment levels, etc. across the EU and the kind of priorities EU monetary policy had (i.e. the problems it would normally be implemented to fix) to see it. Add to that, Germany is now hoovering up more of the PEPP QE cash than anyone else despite other countries being in objectively worse positions... https://www.reuters.com/article/us-ecb-qe-idUSKBN2BU2XX

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BU87VWKagvTAA|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

Me after waking up seeing ETH **LORD HAVE MERCY I’M ABOUT TO BU-**

Mentions:#ETH#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoCurrencySee Comment

That's because China doesn't need to drop bombs. They use asymmetrical warfare instead. [CHINA'S PREOCCUPATION WITH ASYMMETRIC WAR](https://smallwarsjournal.com/jrnl/art/chinas-preoccupation-with-asymmetric-war) Quite smart. But when China actually does start dropping bombs during its invasion of Taiwan, I wonder what your mental gymnastics will look like then? [China sends more jets; Taiwan says it will fight to the end if there's war](https://www.reuters.com/article/us-taiwan-defence-idUSKBN2BU0HJ)

Mentions:#S#WAR#BU
r/CryptoCurrencySee Comment

> They are incredibly simple, Q.E.D.

Nothing stops you from forking bitcoin and making those changes. In fact, plenty of people already have. Now do you see the hard part?

> its just that a fractured community make them hard

Au contraire, hard forks cause fractures in a community. That's why we have BTC, BCH, BSV, BU and whatever the heck they forked off those forks this week, I can't keep up.

> And yet that's an epic fail, miners do what they want, because ultimately users cannot progress the chain without miners.

Miners are rule takers. They are incentivized by profits, and profits can only come from people buying the bitcoins they mine, following the consensus rules they chose (and verify). The moment miners become rule *makers*, it becomes completely pointless.

r/CryptoCurrencySee Comment

> Calling out hypocrisy and double standards isn’t a whataboutism. But what you did sure is.

Yeah, you don't know shit. I was talking about voting, and you come into the thread bleating, "BU-BU-BUT **WHATABOUT** DEMOCRATS AND MUH GUNZ!"

Mentions:#BU
r/CryptoCurrencySee Comment

![gif](giphy|BaSHs78BU2ZYQ|downsized)

Mentions:#BU
r/CryptoMoonShotsSee Comment

https://www.youtube.com/watch?v=7wXYUHL32BU

Mentions:#BU
r/CryptoCurrencySee Comment

LOL. Reuters is the name of the news source my friend. Jump Trading, a UK trading firm is backing CHZ. Reuters is just reporting on it. Source: https://reuters.com/article/amp/idUSKBN2BU0Z5?__twitter_impression=true

Mentions:#LOL#CHZ#BU
r/CryptoCurrencySee Comment

![gif](giphy|26gR2BU2Hrt9HSN6E)

Mentions:#BU#HSN
r/CryptoCurrencySee Comment

> Discussions about altcoins are not allowed on r/bitcoin, so after blocksize debate was settled bigblockers were talking about a hardfork, which is an altcoin.

Discussion in favor of Bitcoin XT (which was a software proposal for Bitcoin to raise the blocksize to 8MB before the fork had happened) was completely banned on the Bitcoin subreddit. Banning happened before BCH existed. Discussion in favor of the BU miner consensus proposal (which was a proposal for Bitcoin to allow miners to determine block size before the fork happened) was completely banned on the Bitcoin subreddit. Again, before BCH existed. Discussion in favor of the Segwit 2x proposal after Segwit had activated (yet again, another proposal) was completely banned on the Bitcoin subreddit. This change happened after BCH was created, yet it was still the most favored Bitcoin proposal.

> Also funny how you think immediate centralization is bad, but not immediate is apparently ok? xD

What's funny is how scared maximalists are of technology. It's pretty clear from context that I'm referring to asking the question "what centralization pressures will exist if we increase the blocksize to X?" BTC "determined" the answer by ignoring technological improvements, putting their fingers in their ears, and saying "it's dangerous to raise the blocksize... no matter what." Meanwhile, a lot of us want *actual data*: if we raise the blocksize to 8MB, what centralization pressures are there, and exactly what attacks will this block size enable? Even your response wasn't an actual attack vector that will pop up... it was snark directed at taking my initial comment out of context with literally no substance.

> Larger blocks make node requirements higher and more expensive, which inevitably and detrimentally affects decentralization. This is just common sense.

It's also common sense that computer hardware gets better over time. Further, it's common sense that efficiencies in software development improve over time. Both of those alleviate potential centralization pressures. In addition, centralization means that a single entity or cohort thereof can gain undue influence over the operation of the network. A network can become "more centralized" yet still be "completely decentralized" if no actor has any real chance of gaming the network. So if the answer to "who will be able to game the network" and "how easy will it be to do so" at a blocksize of X MB is "nobody" and "statistically impossible," then we've just found a safe block size.

> But more importantly - if instead of solving the problem you set the precedent of “let’s just bump this number”, the problem will never be solved and slowly but surely the network will centralize.

Really, it sets the precedent of gathering network metrics and doing proper research before upping a block size.

> The issue was never 8mb or 32mb, it was setting the precedent of how to approach the problem.

If done blindly, sure... raising the blocksize without any research would be exactly as misinformed as claiming that "anything more than 1MB is dangerous forever."

> And that satoshi quote was from when he believed efficient fraud proofs were possible. Without fraud proofs SPV is not as secure as running full node.

So you're saying a coin choosing this path should [enable fraud proofs?](https://read.cash/@TomZ/double-spend-proofs-phase-2-73d26263) Man, if only that technology existed.

r/CryptoCurrencySee Comment

Hold on just a second... /r/BTC the subreddit was formed *before* the fork. When the scaling debate happened circa 2016-2017, the primary subreddit to discuss Bitcoin (/r/Bitcoin, of course) *wouldn't even allow discussion* of proposals like Bitcoin XT or BU's Miner Consensus, and after SegWit activation, SegWit 2x was no longer allowed to be discussed despite gaining a 95% majority consensus. The penalty for discussing these proposals was banning. The backup location to discuss Bitcoin was /r/BTC. You are free to discuss BTC, BCH, and every other Bitcoin fork there. I completely concede the fact that the sentiment there runs strongly against /r/Bitcoin, however it was the censorship actions of /r/Bitcoin towards the best pathway forward that self-selected that subset of people. With that being said, it's still absolutely ludicrous to me that forums to discuss the best possible pathway for an open-source software would not even allow discussion of any kind if it involved making blocks larger. If there was *literally any* research-based metric that backed up immediate centralization based on making blocks larger at all, I'd be more on board. Instead, the only refutation was a banhammer. And if left up to Satoshi himself, it's pretty clear that blocksize would have been a tool to move forward. I'll leave you with this:

r/CryptoCurrencySee Comment

![gif](giphy|BU8H7pea0K1DW)

Mentions:#BU