
INV

Inverse Finance


Mentions (24Hr): 0 (0.00% Today)

Reddit Posts

r/CryptoCurrency

Can you guys please help me on making a stupid decision?

r/Bitcoin

DON'T MISS OUT! Inverse Finance is about to get a big investor.

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch User Submitted inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale and Token Raffle Now!!!

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch YOUR inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale and Token Raffle Now!!!

r/CryptoCurrency

Inverse Finance (INV) does NOT have a hard supply cap of 100k tokens

r/CryptoCurrency

Serious question for long term traders- how does, or can you predict things like the INV 80% rise in an hour today?

r/CryptoCurrency

Serious question for long term crypto traders - how can anyone predict this? INV goes 80% in an hour.

r/CryptoCurrency

Here is how Ethereum COULD scale without increasing centralisation and without depending on layer twos.

r/CryptoCurrency

Here is how Bitcoin COULD scale to 1-gigabyte blocks without increasing centralisation and without having to depend on custodial Lightning wallets.

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch YOUR inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale Whitelist Now!!!

r/CryptoCurrency

NEW Coinbase listings! INV, LQTY, NCT, PRO.

r/CryptoMarkets

Scam accusation - How Inverse DAO [INV] scammed Meme contest participants

Mentions

r/Bitcoin

So if your node serves the history and listens for inbound connections, most of its bandwidth will be used by syncing traffic. If you are pruned, set to limited, or don't accept inbound connections, most of your bandwidth will be related to relay.

The vast majority of your relay bandwidth is INV messages, which are you telling a peer that you have a txn, or them telling you that they have it. INVs are individually small (32 bytes/txn), but every transaction on the network will cause an INV to be sent to or received from every peer (and sometimes both). Then bandwidth is used to relay the transaction to you. If you accept the transaction into your mempool, then no more bandwidth is needed when the transaction shows up in a block: you already have it. Then bandwidth may be used if one of your peers doesn't already have the transaction and fetches it from you. On the network each peer on average receives each transaction once, and so each peer will *on average* send each transaction once.

If you drop a transaction instead of relaying it, and it eventually gets mined, you will usually have to download it a second time and slow down your block processing quite a bit as a result (since it will take at least an extra round trip to get it). So if only a few nodes don't relay a transaction, you likely won't save anything on average by not relaying, due to the extra transmission at mining time. If the vast majority of nodes don't relay it (>90%), then a little bandwidth is saved, but at the cost of massively harming block propagation speed on the network.

Of course you always have the option of running a node in blocks-only mode, which won't participate in txn relay at all. If you do that and also run in limited/pruned mode, running a node will use only about 300 MB/day.
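For a rough sense of scale, here is a minimal sketch of the INV arithmetic above in Python; the daily transaction count and peer count are illustrative assumptions, and only the 32 bytes/txn figure comes from the comment:

```python
# Napkin sketch of INV announcement traffic for a relaying node.
TXNS_PER_DAY = 400_000   # assumed network-wide daily transaction count
INV_BYTES_PER_TXN = 32   # from the comment: one 32-byte txid per INV entry
PEERS = 10               # assumed peer count

# Every transaction is announced once per peer link (sent or received).
inv_bytes = TXNS_PER_DAY * INV_BYTES_PER_TXN * PEERS
print(f"INV announcements alone: ~{inv_bytes / 1e6:.0f} MB/day")  # ~128 MB/day
```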

Mentions:#INV
r/Bitcoin

> Since you're implicitly acknowledging that for some users this does manage their resources,

I do not.

> it just needs to be of use to those choosing to use it.

How would they even know? They should expect that it will have anywhere from no effect to a slight increase on average, and that it may increase or decrease for any particular transaction/user. From the discussion you clearly believed going into this that it reduced usage. Intuitively it sounds like it should, and it was a reasonable thing to expect -- it's something I thought it did until I actually went to try to calculate the savings, expecting it to be diminished by the INV costs, which aren't eliminated and *thoroughly* dominate txn bandwidth for nodes with inbound -- but it's not so. Now I'm left feeling like you're gaslighting me because you don't like that you were mistaken on this point -- as if someone has a genuine preference for their usage to be randomized.

I would agree that there are people who've twiddled this setting because they *believe* it will reduce their usage in a way that it doesn't. In any case, anyone sophisticated enough to figure out that for whatever reason it's a net benefit for them would surely have no problem applying the patch themselves ... it can be included with whatever patch they're running that counts the bandwidth cost vs savings from it. :P

> It is extreme to suggest people managing their own nodes are sybil attackers because you dislike reconstruction latency.

Okay, what? I didn't say that. I made a quip about trying to get lots of connections; that is more or less by definition a sybil attack (unless you meant people addnoding each other, I suppose) -- presumably a benign one! My comment on the connections had nothing to do with reconstruction latency, as I have no reason to worry that you're doing anything particular to harm it (should I?). That said, you remark on reconstruction latency as if it is some kind of personal aesthetic preference -- it's one of the more important performance criteria in the system, since it's a driver for how progress-free mining is... If I did have concerns there, you should want to learn more.

> and they thus are entitled to significant feedback

So would you complain if I referred to you as "entitled" in the future? :D :D

Mentions:#INV
r/Bitcoin

See the points in [this comment](https://old.reddit.com/r/Bitcoin/comments/1kab15o/bitcoin_cores_github_mods_have_been_banning_users/mpou6xb/). They don't actually manage your resources in any significant sense; unless the transactions don't get mined at all, you're likely using more bandwidth receiving the transactions multiple times and discarding them. If they're mined, you transfer them at least twice -- the first time to discard, the second when they show up in blocks. It might still be a net savings due to not offering them to others, but since they tend to be big, the INV costs are not significant.

In addition to the relay performance / mining centralization issues mentioned in that comment, having a policy inconsistent with what actually gets mined also messes up your fee estimation (though to be fair I've never tried to gauge by how much). And you're a really well informed party, so the fact that you're not aware that it's at least arguable that it harms the network (separate from the potential justification of censorship issues) and potentially harms you is another argument that it probably shouldn't exist as a configuration.

I think the principle should go like this: the software exists to embody best practices that should result in a healthy working network. Some users have different requirements, so it's reasonable for the software to have options to accommodate them, even though options are expensive due to the combinatorial blowup in potential interactions. But is something like this knob a setting that will accommodate different needs? I don't think so. Instead it's kind of a footgun: you think you're saving bandwidth but you're probably not, and you are goofing up block propagation latency for yourself and nodes proximal to you. You might think you're helping protect the network from spam, but it doesn't work because of direct-to-miner txn, and the pressure for direct-to-miner txn instead contributes centralization pressure, which I'm sure is not what you want.

Sometimes developers have left in options to silence disputes, in the sense of "look, if there is an option it will have no effect because virtually no one will set it, and the people committed to this argument can feel that they won, that they did something, but it's really just a placebo". It's expedient, but arguably not particularly respectful. The substantive change in the PR isn't the removal of the option; it's the change to make its effective default unlimited. From a couple of people in this thread (yourself and Fiach_Dubh at least) what I'm hearing is that you want the placebo. I'm guessing that if you make enough fuss you could get that compromise from the author of the PR, who presumably mostly cares about the substantive change. It's not free to provide the placebo though, as it has maintenance overheads, and to the extent that the concern about the non-finance data is a real concern, it diverts attention away from other ideas that might actually help and toward knobs where you can flip a switch and [feel like you've done something](https://www.youtube.com/watch?v=JeXX_zwhmi8) when it's really had no effect.

While discussing this here, it does occur to me that there is a different line of compromise which might make more sense -- Bitcoin Core conflates relay policy and mining policy. The historical reason for this is that they should match or vaguely bad things happen. But in a case where there is a dispute over policy, it makes more sense for mining to be the more restrictive of the two, because the negative effects largely come from miners including txn nodes weren't expecting. So maybe it would make sense to change the option to be one that only changed mining policy. At least as mining policy it has a real effect (assuming you're mining!) -- I'm not sure, this is just an off-the-cuff thought.
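As a toy illustration of the receive-and-discard argument above (all sizes here are assumed, illustrative values, not measurements from either comment):

```python
# Comparing total bytes for a node that accepts a large txn vs one
# that rejects it from its mempool but still sees it mined.
TX_BYTES = 50_000   # assumed size of one large transaction
INV_BYTES = 40      # assumed per-txid announcement overhead
PEERS = 8           # assumed peer count

# Accepting node: downloads the txn once, uploads it on average once.
accept_total = 2 * TX_BYTES + INV_BYTES * PEERS

# Rejecting node: downloads it once to inspect and discard it, then
# downloads it a second time when it appears in a block, since the block
# can't be reconstructed from a txn the node threw away.
reject_total = 2 * TX_BYTES + INV_BYTES * PEERS

print(f"accept: {accept_total} B, reject: {reject_total} B")
# Same bytes either way, but the rejecting node adds an extra round trip
# at block time, slowing its block processing.
```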

Mentions:#INV
r/CryptoCurrency

🤣 I’m just going down with the ship. Make some profit in the meantime though! Keep an eye on ‘INV’ & ‘TIME’ via Coinbase. Big bucks to be made.

Mentions:#INV#TIME
r/CryptoCurrency

The only undervalued gem on there is INV. I've been bagging them lately.

Mentions:#INV
r/CryptoMarkets

INV

Mentions:#INV
r/CryptoCurrency

INV. Low supply = big gains

Mentions:#INV
r/CryptoCurrency

Thank you for the feedback brother!! I was thinking of this as well… the only issue I have with INV is that the daily volume is always pretty low, like $200k. Wonder if that can be a good thing because it can be easier to manipulate if the whales decide to run the price up? Idk how all that really works lol

Mentions:#INV
r/CryptoCurrency

I think you should put all 2000 of it into INV. One of the things I like about this one is that it has an extremely low total supply (480k), so if this one gains any traction, the price is going to skyrocket.

Mentions:#INV
r/CryptoCurrency

INV

Mentions:#INV
r/CryptoCurrency

CRYPTORANK: Inverse Finance
Market Cap: $18.12M (763.73 BTC)
FDMC: $23.18M
ATH: $269.95M
Supply: Circulating 148.54K INV (78.18%), Total 190.00K INV, Max 148.54K INV
24h Trade Volume: $1.11M (46.97 BTC, 914K INV)
All-Time-High (ATH) Price: $1.82K (13 Mar 2021), 0.032 BTC (08 Mar 2021), 0.996 ETH (16 Mar 2021)
From ATH: -93.3%; To ATH: +1,389.9%

Mentions:#INV#ETH
r/CryptoCurrency

What's your opinions about INV? I don't really understand the point of this coin

Mentions:#INV
r/CryptoCurrency

Anyone have any thoughts on INV? That market cap is sweet.

Mentions:#INV
r/CryptoCurrency

Not exactly a hack, just someone familiar with the finance world who took an action that is illegal in the finance world, just not (yet) in crypto. The guy bought about $3 million worth of INV to trick the oracle used by the lending platform into believing INV had skyrocketed. Then he borrowed against that INV. The oracle eventually got the price corrected, and his position was surely liquidated, but he kept whatever he borrowed. This is again not really a hack. This is what happens when you skimp on architecture. If that lending protocol had used more than one oracle, this attack would have been much harder and more expensive to pull off.
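A toy sketch of why a naive spot-price oracle is so manipulable compared to, say, a time-weighted average price (TWAP); the prices here are entirely hypothetical:

```python
# A single large buy moves the instantaneous price dramatically, while a
# time-weighted average barely moves over the same attack window.
prices = [40.0] * 60     # one price sample per minute; $40 baseline
prices[-1] = 120.0       # attacker's large buy spikes the final sample 3x

spot = prices[-1]                   # naive oracle: latest trade price
twap = sum(prices) / len(prices)    # 60-minute time-weighted average

print(f"spot oracle: ${spot:.2f}")  # 120.00 -> inflated borrowing power
print(f"TWAP oracle: ${twap:.2f}")  # ~41.33 -> attack barely registers
```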

Mentions:#INV
r/Bitcoin

I've won and lost with day trading. However, I've lost more. Sometimes pumps are not organic. Whales often manipulate, and price growth can disappear instantly, leaving you holding the bag. The only coin I recommend for day trading is INV. Great potential with low supply. I've made $600 in minutes by only putting in $1,400.

Mentions:#INV
r/CryptoCurrency

Etherscan says that max supply is 100k. I assume a governance vote would be required to mint new tokens. Fun fact: INV tokens were originally non-transferable. They only became transferable after the community voted for it through governance.

Mentions:#INV
r/CryptoCurrency

Yes, you see, the cat spoke to the moon, which had an effect on the gravitational pull of the refrigerator, causing the moons to shift poorly in the dildo field surrounding INV.

Mentions:#INV
r/CryptoCurrency

No idea. What's worse, Coinbase didn't even have the max supply of 100k tokens listed in the info until today, I believe. Which I assume is why it pumped so hard. None of it makes any sense to me. Either CMC and all these exchanges are clueless about this upgrade removing the hard supply cap, or they're trying to hide it to keep INV pumping.

Mentions:#INV
r/CryptoCurrency

Can someone explain the crazy gains for INV?

Mentions:#INV
r/CryptoCurrency

>Anyone happen to know the smallest market caps available on Coinbase pro?

Coinbase makes it easier to sort by market cap on the trade screen. While Coinbase and Coinbase Pro sometimes don't have identical lists, they are usually similar. KNC, INV, MCO2, ORCA, and WCFG are some of the smaller ones when I sort the list. Note I am not a micro-cap buyer, so I typically don't look at small cryptos in the sub-billion dollar range.

r/CryptoCurrency

Post is by i_have_chosen_a_name and the URL/text is: /r/CryptoCurrency/comments/se5yb5/here_is_how_ethereum_could_scale_without/

There are two bottlenecks:

- block propagation
- block validation

Block propagation matters for miners behind their full nodes. Block validation matters for full nodes that don't mine.

Big blocks messing up block propagation will lead to miners risking other miners finding the next block before 51% of the network has seen their block. This would mean their orphan rates go up and they start losing money. Then miners will group together and centralise to protect themselves against losses, which would mean Bitcoin starts losing decentralisation.

Messing up block validation means that your node stops syncing, as new blocks arrive faster than your node can verify that all the rules are being followed. This then means you start depending on other people's nodes, and will make it so that only the people with the most expensive computers can run a full node.

However, keep in mind that Bitcoin from the very beginning was designed to run as a layered system: full mining nodes at the top, then full nodes that don't mine, then Simplified Payment Verification (SPV) servers that plug in to full nodes. The designer designed it this way because, according to him (bitcointalk.org/index.php?topic=532.msg6306#msg6306):

>The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate.

So that means we have to solve three problems: block propagation, block validation and SPV traffic. Let's solve them!

First, block propagation. A block contains transactions that a miner has collected in his mempool. But there are more mempools: all miners run one, and full nodes that don't mine also run one. The transactions that arrive in one mempool get copied constantly to all the other mempools, so at any given time about 99% of the transactions in the mempools are duplicates. So when a miner finds a block, rather than sending the transactions that other miners already have in their mempools, a miner can do a quick exchange with these mempools, figure out what information is missing, and then send only the missing information. The mempools of the other miners can then just reconstruct the block, rather than having to download the entire block. This idea is called [Graphene](people.cs.umass.edu/~gbiss/graphene.pdf) and should mostly solve the issue. With Graphene, a typical 1 GB block can be encoded in about 20 kB, most of which is order information. With a canonical ordering, that number should drop to about 5 kB. Sending 20 kB or 5 kB over the internet is pretty trivial, and should add about 1 second total. IBLT decoding for Graphene hasn't been benchmarked at 1 GB scale, but 2014 benchmarks for 1 MB blocks showed around 10 ms; if that scales linearly, decoding would add around 10 seconds. After that, UDP+FEC should get us to the point where we can forget about block prop issues for a long time.

Second, SPV traffic. SPV traffic is pretty easy to serve from a few high-performance nodes in datacenters. You might be thinking of Jameson Lopp's article a year back. He assumed that each SPV request requires reading the full block from disk just for that one request, and that's not at all true on a protocol level, although it currently is true on the implementation level. You can have different nodes that keep different blocks in RAM, and shard your SPV requests out among different nodes based on which blocks they have cached. These nodes can also condense several different SPV requests into a single bloom filter, and use that one bloom filter to check the block for relevant transactions for 100 or 1000 SPV requests all at the same time. It's really not going to be that hard to scale that part. Home users' full nodes can simply elect not to serve SPV, and leave that part to businesses and miners. We professionals can handle the problem efficiently enough that the costs won't be significant, just as the costs per user aren't significant now.

Third, block sorting. A 1 GB block would have about 2.5 million transactions. Assuming that we're using a canonical lexical ordering, we will need to sort the txids for those transactions. Single-threaded sorting is typically between 1 million keys per second and (for uint64_t keys) 10 million keys per second, so sorting should take around 1 second.

Fourth, computing and verifying the merkle root hash. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. With 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second.

Fifth, block validation. This step is hard to estimate, because we don't have any good benchmarks for how an ideal implementation would perform, nor do we even have a good idea of what the ideal implementation would look like. Does the node have the full UTXO set in RAM, or does it need to do SSD reads? Are we going to shard the UTXO set by txid across multiple nodes? Are we using flash or Optane for the SSD reads and writes? But here's a shot at napkin math. A 1 GB block is likely to have around 5 million inputs and 5 million outputs. Database reads can be done as a single disk IO op pretty easily, but writes generally have to be done more carefully, with separate writes to the journal and then to multiple levels of the database tree structure. For the sake of simplicity, let's assume that each database write consists of four disk writes plus one disk read, or 5 ops total. This means that a 1 GB block will require around 10 million reads and 20 million writes. Current-gen M.2 PCIe NVMe top-of-the-line SSDs can get up to 500k IOPS. In two years, a good (but not top-end) SSD will probably be getting around 1 million random IOPS in both reads and writes. This would put the disk accesses at around 30 seconds of delay. Sharding the database onto multiple SSDs or multiple nodes can reduce that, but I presume desktop computers won't have access to that. If we have Optane, the UTXO stuff should get way faster (10x? 50x?), as Optane has byte-level addressability for both reads and writes, so we will no longer need to read and write 4 kB blocks for each 30 byte UTXO. Optane also has much better latency, so a good database will be able to get lower write amplification without needing to worry about corruption.

Zeroth, script verification. This is generally done when a transaction hits mempool, and those validation results are cached for later use, so no substantial extra script verification should need to be done during block validation. All we need is to make sure AcceptToMemoryPool doesn't get bogged down in the 10 minutes before the block. A single CPU core can verify about 5000 p2pkh scripts (i.e. a single ECDSA sigop each) per second, so an 8-core desktop should be able to handle 40,000 p2pkh inputs per second. Verifying the 5 million inputs in advance should take 125 seconds out of our 600 second window. That's cutting our safety margins a bit close, but it's tolerable for a non-mission-critical min-spec machine. Because this is done in advance, that 125/600 seconds turns into 0 seconds for the sake of this calculation.

All told, we have about (1 + 10 + 1 + 0.5 + 30) = 42.5 seconds for a decent desktop to receive and verify a 1 GB block, assuming that all the code bottlenecks get fixed. There are probably a few other steps that I didn't think of, so maybe 60 seconds is a more fair estimate. Still, it's reasonable: it means that on average a $1000 modern desktop computer would spend about 10% of the time between blocks validating them. Miners will, of course, need to be able to receive and process blocks much faster than this, but they will have the funding to buy computers with much greater parallelization, so their safety margin versus what they can afford should be about the same as for a casual desktop user.

Now let's talk about upstream bandwidth. How much upload bandwidth would be required just to relay transactions to a few peers? This largely depends on how many peers our user has. Let's assume that our desktop user is a middle-class hobbyist, and is only sacrificing peer count a little bit in favor of reduced hardware requirements. Our user has 8 peers.

Transaction propagation comes from 3 different p2p messages. The first message is the INV message, which is used to announce that a node knows one or more transactions with a specified TXID or TXIDs. These INV messages are usually batched into groups of 3 or so right now, but in a higher-throughput context would likely be batched in groups of 20. The TCP/IP and other overhead is significant, so an INV for a single TXID is around 120 bytes, and each additional TXID adds around 40 bytes (not the 32 byte theoretical minimum). With 20 tx per INV, that's 880 bytes. For each peer connection, half of the transactions will be part of a received INV, and half will be part of a sent INV. This means that per 2.5 million transactions (i.e. one block) per peer, our node would send and receive 55 MB. For all 8 peers, that would be 440 MB in each direction for INVs.

The second and third messages are the tx request and the tx response. With overhead, these two messages should take around 600 bytes for a 400 byte transaction. If our node downloads each transaction once and uploads once, that turns into 1.5 GB of traffic in each direction per block.

Lastly, we need to propagate the blocks themselves. With Graphene, the traffic needed for this step is trivial, so we can ignore it. In total, we have about 1.94 GB of bidirectional traffic during each (average) 600 second block interval. That translates to an average bandwidth of 3.23 MB/s or 25.9 Mbps.
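The sorting and merkle-hashing estimates above can be reproduced directly; here is a small Python script whose constants are all the comment's own napkin assumptions:

```python
# Napkin math for sorting and merkle-root hashing of a 1 GB block.
TXNS = 2_500_000                # ~2.5M transactions in a 1 GB block
TXID_BYTES = 32
SHA256_BYTES_PER_SEC = 300e6    # ~300 MB/s single-core SHA256
SORT_KEYS_PER_SEC = 2.5e6       # mid-range single-threaded sort rate

# Merkle tree: total hashed data is ~2x the summed txid length
# (1 + 1/2 + 1/4 + ...), doubled again for double-SHA256.
hashed_bytes = TXNS * TXID_BYTES * 2 * 2              # = 320 MB
merkle_secs = hashed_bytes / SHA256_BYTES_PER_SEC     # ~1.1 s

sort_secs = TXNS / SORT_KEYS_PER_SEC                  # ~1 s

print(f"merkle: ~{merkle_secs:.1f} s, sort: ~{sort_secs:.1f} s")
```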

r/CryptoCurrency

I am aware that block propagation is an earlier bottleneck than validation. We're closer to fixing the block propagation bottleneck than the (less critical) validation ones, though. Graphene has been merged into BU and should mostly solve the issue. After that, UDP+FEC should get us to the point where we can forget about block prop issues for a long time.

>with the massive increase in SPV traffic

SPV traffic is pretty easy to serve from a few high-performance nodes in datacenters. You might be thinking of Jameson Lopp's article a year back. He assumed that each SPV request requires reading the full block from disk just for that one request, and that's not at all true on a protocol level, although it currently is true on the implementation level. You can have different nodes that keep different blocks in RAM, and shard your SPV requests out among different nodes based on which blocks they have cached. These nodes can also condense several different SPV requests into a single bloom filter, and use that one bloom filter to check the block for relevant transactions for 100 or 1000 SPV requests all at the same time. It's really not going to be that hard to scale that part. Home users' full nodes can simply elect not to serve SPV, and leave that part to businesses and miners. We professionals can handle the problem efficiently enough that the costs won't be significant, just as the costs per user aren't significant now.

>What's your napkin math for how long it would take a $1000 desktop computer to validate a 1GB block once the known bottlenecks are resolved?

First, block propagation. With Graphene, a typical 1 GB block can be encoded in about 20 kB, most of which is order information. With a canonical ordering, that number should drop to about 5 kB. Sending 20 kB or 5 kB over the internet is pretty trivial, and should add about 1 second total.

Second, IBLT decoding. I haven't seen any benchmarks for decoding the IBLTs in Graphene for 1 GB blocks, but in 2014 I saw some benchmarks for 1 MB blocks that showed decoding time to be around 10 ms. If it scales linearly, that would be around 10 seconds for decoding.

Third, block sorting. A 1 GB block would have about 2.5 million transactions. Assuming that we're using a canonical lexical ordering, we will need to sort the txids for those transactions. Single-threaded sorting is typically between 1 million keys per second and (for uint64_t keys) 10 million keys per second, so sorting should take around 1 second.

Fourth, computing and verifying the merkle root hash. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. With 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second.

Fifth, block validation. This step is hard to estimate, because we don't have any good benchmarks for how an ideal implementation would perform, nor do we even have a good idea of what the ideal implementation would look like. Does the node have the full UTXO set in RAM, or does it need to do SSD reads? Are we going to shard the UTXO set by txid across multiple nodes? Are we using flash or Optane for the SSD reads and writes? But you said napkin, so here's a shot. A 1 GB block is likely to have around 5 million inputs and 5 million outputs. Database reads can be done as a single disk IO op pretty easily, but writes generally have to be done more carefully, with separate writes to the journal and then to multiple levels of the database tree structure. For the sake of simplicity, let's assume that each database write consists of four disk writes plus one disk read, or 5 ops total. This means that a 1 GB block will require around 10 million reads and 20 million writes. Current-gen M.2 PCIe NVMe top-of-the-line SSDs can get up to 500k IOPS. In two years, a good (but not top-end) SSD will probably be getting around 1 million random IOPS in both reads and writes. This would put the disk accesses at around 30 seconds of delay. Sharding the database onto multiple SSDs or multiple nodes can reduce that, but I presume desktop computers won't have access to that. If we have Optane, the UTXO stuff should get way faster (10x? 50x?), as Optane has byte-level addressability for both reads and writes, so we will no longer need to read and write 4 kB blocks for each 30 byte UTXO. Optane also has much better latency, so a good database will be able to get lower write amplification without needing to worry about corruption.

Zeroth, script verification. This is generally done when a transaction hits mempool, and those validation results are cached for later use, so no substantial extra script verification should need to be done during block validation. All we need is to make sure AcceptToMemoryPool doesn't get bogged down in the 10 minutes before the block. A single CPU core can verify about 5000 p2pkh scripts (i.e. a single ECDSA sigop each) per second, so an 8-core desktop should be able to handle 40,000 p2pkh inputs per second. Verifying the 5 million inputs in advance should take 125 seconds out of our 600 second window. That's cutting our safety margins a bit close, but it's tolerable for a non-mission-critical min-spec machine. Because this is done in advance, that 125/600 seconds turns into 0 seconds for the sake of this calculation.

All told, we have about (1 + 10 + 1 + 0.5 + 30) = 42.5 seconds for a decent desktop to receive and verify a 1 GB block, assuming that all the code bottlenecks get fixed. There are probably a few other steps that I didn't think of, so maybe 60 seconds is a more fair estimate. Still, it's reasonable. Miners will, of course, need to be able to receive and process blocks much faster than this, but they will have the funding to buy computers with much greater parallelization, so their safety margin versus what they can afford should be about the same as for a casual desktop user.

>And how much upstream bandwidth do you think would be required just to relay transactions to a few peers (again assuming that most transactions will come from p2p gossip and not through a block)?

This largely depends on how many peers our user has. Let's assume that our desktop user is a middle-class hobbyist, and is only sacrificing peer count a little bit in favor of reduced hardware requirements. Our user has 8 peers.

Transaction propagation comes from 3 different p2p messages. The first message is the INV message, which is used to announce that a node knows one or more transactions with a specified TXID or TXIDs. These INV messages are usually batched into groups of 3 or so right now, but in a higher-throughput context would likely be batched in groups of 20. The TCP/IP and other overhead is significant, so an INV for a single TXID is around 120 bytes, and each additional TXID adds around 40 bytes (not the 32 byte theoretical minimum). With 20 tx per INV, that's 880 bytes. For each peer connection, half of the transactions will be part of a received INV, and half will be part of a sent INV. This means that per 2.5 million transactions (i.e. one block) per peer, our node would send and receive 55 MB. For all 8 peers, that would be 440 MB in each direction for INVs.

The second and third messages are the tx request and the tx response. With overhead, these two messages should take around 600 bytes for a 400 byte transaction. If our node downloads each transaction once and uploads once, that turns into 1.5 GB of traffic in each direction per block.

Lastly, we need to propagate the blocks themselves. With Graphene, the traffic needed for this step is trivial, so we can ignore it. In total, we have about 1.94 GB of bidirectional traffic during each (average) 600 second block interval. That translates to an average bandwidth of 3.23 MB/s or 25.9 Mbps. This is, again, reasonable to expect for a motivated middle-class hobbyist around 2020, though not trivial.
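The bandwidth totals in the last paragraphs can likewise be reproduced as a short script; every constant below is the comment's own assumption:

```python
# Relay-bandwidth napkin math for one average 600-second block interval.
TXNS = 2_500_000                # transactions per 1 GB block
PEERS = 8
INV_BYTES_PER_TXID = 880 / 20   # 880-byte INV carrying 20 txids -> 44 B each
TX_WIRE_BYTES = 600             # request + response for a 400-byte txn
INTERVAL_S = 600

# Each txn is announced once per peer link, half sent and half received,
# so per direction: TXNS/2 announcements on each of the 8 links.
inv_gb = TXNS / 2 * INV_BYTES_PER_TXID * PEERS / 1e9    # ~0.44 GB
tx_gb = TXNS * TX_WIRE_BYTES / 1e9                      # ~1.50 GB

total_gb = inv_gb + tx_gb                               # ~1.94 GB/direction
mbps = total_gb * 1e9 * 8 / INTERVAL_S / 1e6            # ~25.9 Mbps
print(f"~{total_gb:.2f} GB per direction, ~{mbps:.1f} Mbps average")
```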

r/CryptoMoonShots

I think you are confused ser, this isn't INV

Mentions:#INV
r/CryptoCurrency

tldr; Coinbase has announced that two finance-focused altcoins and a pair of other crypto assets will start trading on its Pro platform. Inverse Finance (INV) saw its price go vertical from $621.82 to as high as $822 after the announcement. Propy (PRO) jumped from $1.59 to $2.30 almost instantly after the news broke. Liquity (LQTY) initially spiked 62.3% from $5. *This summary is auto generated by a bot and not meant to replace reading the original article. As always, DYOR.*

Mentions:#INV#PRO#LQTY
r/CryptoCurrency

NEW COINBASE PRO LISTINGS TODAY! Starting today, Wednesday January 12, transfer INV, LQTY, NCT and PRO into your Coinbase Pro account ahead of trading. Support for INV, LQTY, NCT and PRO will generally be available in Coinbase's supported jurisdictions, with certain exceptions as indicated in each asset page here. Trading will begin on or after 9AM Pacific Time (PT), Thursday January 13, if liquidity conditions are met.

r/CryptoMarkets

Yes, it is really disappointing. I still hope some INV team members will be honest and act as was initially promised, because Inverse Finance isn't a little project. INV can't afford to pull fraudulent moves on community members.

Mentions:#INV
r/CryptoCurrency

Imo you have to look for low caps with strong fundamentals that are aligned with the upcoming narratives. Mine are VADER (40%), BAO (20%), INV (10%), CARD (10%) and the rest stables for now.

Mentions:#BAO#INV#CARD
r/CryptoCurrency

Coinbase listing INV.

Mentions:#INV