
INV

Inverse Finance


Mentions (24Hr): 0 (0.00% today)

Reddit Posts

r/CryptoCurrency

Can you guys please help me make a stupid decision?

r/Bitcoin

DON'T MISS OUT! Inverse Finance is about to get a big investor.

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch User Submitted inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale and Token Raffle Now!!!

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch YOUR inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale and Token Raffle Now!!!

r/CryptoCurrency

Inverse Finance (INV) does NOT have a hard supply cap of 100k tokens

r/CryptoCurrency

Serious question for long-term traders: how do you, or can you, predict things like the INV 80% rise in an hour today?

r/CryptoCurrency

Serious question for long-term crypto traders - how can anyone predict this? INV goes 80% in an hour.

r/CryptoCurrency

Here is how Ethereum COULD scale without increasing centralisation and without depending on layer twos.

r/CryptoCurrency

Here is how Bitcoin COULD scale to 1-gigabyte blocks without increasing centralisation and without having to depend on custodial Lightning wallets.

r/CryptoMoonShots

Invent (INV), We utilize our tokenomics to launch YOUR inventions to market! Certik Audited, Incorporated and Doxxed. Join Our Initial Presale Whitelist Now!!!

r/CryptoCurrency

NEW Coinbase listings! INV, LQTY, NCT, PRO.

r/CryptoMarkets

Scam accusation - How Inverse DAO [INV] scammed Meme contest participants

Mentions

The only undervalued gem on there is INV. I’ve been bagging them lately.

Mentions:#INV

INV

Mentions:#INV

INV. Low supply = big gains.

Mentions:#INV
r/CryptoCurrency

Thank you for the feedback brother!! I was thinking of this as well…the only issue I have with INV is that the daily volume is always pretty low, like $200k. Wonder if that can be a good thing, because it can be easier to manipulate if the whales decide to run the price up? Idk how all that really works lol

Mentions:#INV
r/CryptoCurrency

I think you should put all 2000 of it into INV. One of the things I like about this one is that it has an extremely low total supply (480k), so if this one gains any traction, the price is going to skyrocket.

Mentions:#INV
r/CryptoCurrency

INV

Mentions:#INV
r/CryptoCurrency

CRYPTORANK Inverse Finance: Market Cap $18.12M (BTC 763.73); FDMC $23.18M; ATH Market Cap $269.95M. Supply: Circulating 148.54K INV (78.18%), Total 190.00K INV, Max 148.54K INV. 24h Trade Volume: $1.11M (BTC 46.97; 914K INV). All-Time-High (ATH) Price: $1.82K (13 Mar 2021), BTC 0.032 (08 Mar 2021), ETH 0.996 (16 Mar 2021). From ATH: -93.3%. To ATH: +1,389.9%.

Mentions:#INV#ETH
r/CryptoCurrency

What are your opinions on INV? I don't really understand the point of this coin.

Mentions:#INV
r/CryptoCurrency

Anyone have any thoughts on INV? That market cap is sweet.

Mentions:#INV
r/CryptoCurrency

Not exactly a hack, just someone familiar with the finance world who made a move that is illegal in traditional finance, just not (yet) in crypto. The guy bought about $3 million worth of INV to trick the oracle used by the lending platform into believing INV had skyrocketed. Then he borrowed against that INV. The oracle eventually corrected the price, and his position was surely liquidated, but he kept whatever he borrowed. Again, this is not really a hack; this is what happens when you skimp on architecture. If that lending protocol had used more than a single oracle, this attack would have been much harder and more expensive to pull off.
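
That last point, relying on more than a single price source, is easy to sketch. Below is a minimal illustration, assuming a generic list of price-feed callables; this is hypothetical pseudocode for the mitigation idea, not Inverse Finance's or any real protocol's oracle.

```python
from statistics import median

def robust_price(feeds, max_deviation=0.05):
    """Return a manipulation-resistant price from several feeds.

    Using the median means an attacker has to corrupt a majority of
    independent sources, not just one thin DEX pool, to move the
    reported price. `feeds` is a list of zero-argument callables
    returning a float price (a hypothetical interface for this sketch).
    """
    quotes = [feed() for feed in feeds]
    mid = median(quotes)
    # Refuse to update at all if the sources disagree too much --
    # strong divergence suggests at least one of them is being gamed.
    if any(abs(q - mid) / mid > max_deviation for q in quotes):
        raise ValueError("price feeds diverge; refusing to update")
    return mid

# Stand-in feeds; a real deployment would query e.g. a Chainlink
# aggregator plus one or more on-chain TWAPs.
print(robust_price([lambda: 101.0, lambda: 99.5, lambda: 100.2]))
```

A time-weighted average over a longer window raises the attack cost further, because the manipulated price has to be sustained for the whole window rather than for a single block.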

Mentions:#INV
r/Bitcoin

I’ve won and lost with day trading. However, I’ve lost more. Sometimes pumps are not organic. Whales often manipulate, and price growth can disappear instantly, leaving you holding the bag. The only coin I recommend for day trading is INV. Great potential with low supply. I’ve made $600 in minutes by only putting in $1400.

Mentions:#INV
r/CryptoCurrency

Etherscan says that max supply is 100k. I assume a governance vote would be required to mint new tokens. Fun fact: INV tokens were originally non-transferable. They only became transferable after the community voted for it through governance.

Mentions:#INV
r/CryptoCurrency

Yes u see the cat spoke to the moon which had an effect on the gravitational pull of the refrigerator causes the moons to shift poorly in the dildo field surrounding INV

Mentions:#INV
r/CryptoCurrency

No idea. What’s worse, Coinbase didn’t even have the max supply of 100k tokens listed in the info until today, I believe, which I assume is why it pumped so hard. None of it makes any sense to me. Either CMC and all these exchanges are clueless about this upgrade removing the hard supply cap, or they’re trying to hide it to keep INV pumping.

Mentions:#INV
r/CryptoCurrency

Can someone explain the crazy gains for INV?

Mentions:#INV
r/CryptoCurrency

>Anyone happen to know the smallest market caps available on Coinbase pro? Coinbase makes it easier to sort by market cap on the trade screen. While Coinbase and Coinbase Pro sometimes don't have identical lists, they are usually similar. KNC, INV, MCO2, ORCA, and WCFG are some of the smaller ones when I sort the list. Note I am not a micro-cap buyer, so I typically don't look at small cryptos in the sub-billion dollar range.

r/CryptoCurrency

Post is by: i_have_chosen_a_name and the url/text is: /r/CryptoCurrency/comments/se5yb5/here_is_how_ethereum_could_scale_without/

There are two bottlenecks: block propagation and block validation. Block propagation matters for miners behind their full nodes. Block validation matters for full nodes that don't mine. Big blocks messing up block propagation will lead to miners risking other miners finding the next block before 51% of the network has seen their block. This would mean their orphan rates go up and they start losing money. Then miners would group together and centralise to protect themselves against losses, which would mean Bitcoin starts losing decentralisation. Messing up block validation means that your node stops syncing as new blocks arrive faster than your node can verify that all the rules are being followed. You then start depending on other people's nodes, and only the people with the most expensive computers can run a full node. However, keep in mind that Bitcoin from the very beginning was designed to run as a layered system: full mining nodes at the top, then full nodes that don't mine, then Simplified Payment Verification (SPV) servers that plug in to full nodes. The designer designed it this way because, according to him (bitcointalk.org/index.php?topic=532.msg6306#msg6306):

>The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate.

So that means we have to solve three problems: block propagation, block validation and SPV traffic. Let's solve them! First, block propagation. A block contains transactions that a miner has collected in his mempool. But there are more mempools: all miners run one, and full nodes that don't mine also run one. The transactions that arrive in one mempool get copied constantly to all the other mempools, so at any given time about 99% of the transactions in the mempools are duplicates. So when a miner finds a block, rather than sending all the transactions that other miners already have in their mempools, the miner can do a quick exchange with those mempools, figure out what information is missing, and then send only the missing information. The other miners' mempools can then reconstruct the block, rather than having to download it in full. This idea is called Graphene (people.cs.umass.edu/~gbiss/graphene.pdf) and should mostly solve the issue.
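
A toy sketch of that reconstruct-from-mempool idea, under loose assumptions: real Graphene uses a Bloom filter plus an IBLT instead of shipping explicit txid lists, so this shows only the shape of the trick (send order information plus the few missing transactions, rebuild the rest locally), not the actual protocol.

```python
import hashlib

def txid(tx: bytes) -> bytes:
    """Bitcoin-style txid: double SHA-256 of the serialized transaction."""
    return hashlib.sha256(hashlib.sha256(tx).digest()).digest()

def announce_block(block_txs, peer_known_txids):
    """Miner side: send the tx order plus only the txs the peer lacks."""
    order = [txid(tx) for tx in block_txs]
    missing = [tx for tx in block_txs if txid(tx) not in peer_known_txids]
    return order, missing

def reconstruct_block(order, missing_txs, mempool):
    """Peer side: rebuild the full block from its mempool + missing txs."""
    by_id = dict(mempool)
    by_id.update((txid(tx), tx) for tx in missing_txs)
    return [by_id[i] for i in order]

# The peer already has ~99% of block transactions in its mempool, so
# only the order info and a handful of missing txs cross the wire.
mempool = {txid(tx): tx for tx in (b"tx1", b"tx2", b"tx3")}
order, missing = announce_block([b"tx2", b"tx4", b"tx1"], set(mempool))
assert missing == [b"tx4"]
assert reconstruct_block(order, missing, mempool) == [b"tx2", b"tx4", b"tx1"]
```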

r/CryptoCurrency

I am aware that block propagation is an earlier bottleneck than validation. We're closer to fixing the block propagation bottleneck than the (less critical) validation ones, though. Graphene has been merged into BU and should mostly solve the issue. After that, UDP+FEC should get us to the point where we can forget about block prop issues for a long time.

>with the massive increase in SPV traffic

SPV traffic is pretty easy to serve from a few high-performance nodes in datacenters. You might be thinking of Jameson Lopp's article a year back. He assumed that each SPV request requires reading the full block from disk just for that one request, and that's not at all true on a protocol level, although it currently is true on the implementation level. You can have different nodes that keep different blocks in RAM, and shard your SPV requests out among different nodes based on which blocks they have cached. These nodes can also condense several different SPV requests into a single bloom filter, and use that one bloom filter to check the block for relevant transactions for 100 or 1000 SPV requests all at the same time. It's really not going to be that hard to scale that part. Home users' full nodes can simply elect not to serve SPV, and leave that part to businesses and miners. We professionals can handle the problem efficiently enough that the costs won't be significant, just as the costs per user aren't significant now.

>What’s your napkin math for how long it would take a $1000 desktop computer to validate a 1GB block once the known bottlenecks are resolved?

First, block propagation. With Graphene, a typical 1 GB block can be encoded in about 20 kB, most of which is order information. With a canonical ordering, that number should drop to about 5 kB. Sending 20 kB or 5 kB over the internet is pretty trivial, and should add about 1 second total.

Second, IBLT decoding. I haven't seen any benchmarks for decoding the IBLTs in Graphene for 1 GB blocks, but in 2014 I saw some benchmarks for 1 MB blocks that showed decoding time to be around 10 ms. If it scales linearly, that would be around 10 seconds for decoding.

Third, block sorting. A 1 GB block would have about 2.5 million transactions. Assuming that we're using a canonical lexical ordering, we will need to sort the txids for those transactions. Single-threaded sorting is typically between 1 million keys per second and (for uint64_t keys) 10 million keys per second, so sorting should take around 1 second.

Fourth, computing and verifying the merkle root hash. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. With 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second.

Fifth, block validation. This step is hard to estimate, because we don't have any good benchmarks for how an ideal implementation would perform, nor do we even have a good idea of what the ideal implementation would look like. Does the node have the full UTXO set in RAM, or does it need to do SSD reads? Are we going to shard the UTXO set by txid across multiple nodes? Are we using flash or Optane for the SSD reads and writes? But you said napkin, so here's a shot. A 1 GB block is likely to have around 5 million inputs and 5 million outputs. Database reads can be done as a single disk IO op pretty easily, but writes generally have to be done more carefully, with separate writes to the journal, and then to multiple levels of the database tree structure. For the sake of simplicity, let's assume that each database write consists of four disk writes plus one disk read, or 5 ops total. This means that a 1 GB block will require around 10 million reads and 20 million writes. Current-gen M.2 PCIe NVMe top-of-the-line SSDs can get up to 500k IOPS. In two years, a good (but not top-end) SSD will probably be getting around 1 million random IOPS in both reads and writes. This would put the disk accesses at around 30 seconds of delay. Sharding the database onto multiple SSDs or multiple nodes can reduce that, but I presume desktop computers won't have access to that. If we have Optane, the UTXO stuff should get way faster (10x? 50x?), as Optane has byte-level addressability for both reads and writes, so we will no longer need to read and write 4 kB blocks for each 30 byte UTXO. Optane also has much better latency, so a good database will be able to get lower write amplification without needing to worry about corruption.

Zeroth, script verification. This is generally done when a transaction hits mempool, and those validation results are cached for later use, so no substantial extra script verification should need to be done during block validation. All we need is to make sure AcceptToMemoryPool doesn't get bogged down in the 10 minutes before the block. A single CPU core can verify about 5000 p2pkh scripts (i.e. single ECDSA sigop) per second, so an 8-core desktop should be able to handle 40,000 p2pkh inputs per second. Verifying the 5 million inputs in advance should take 125 seconds out of our 600 second window. That's cutting our safety margins a bit close, but it's tolerable for a non-mission-critical min-spec machine. Because this is done in advance, that 125/600 seconds turns into 0 seconds for the sake of this calculation.

All told, we have about (1 + 10 + 1 + 0.5 + 30) = 42.5 seconds for a decent desktop to receive and verify a 1 GB block, assuming that all the code bottlenecks get fixed. There are probably a few other steps that I didn't think of, so maybe 60 seconds is a more fair estimate. Still, it's reasonable. Miners will, of course, need to be able to receive and process blocks much faster than this, but they will have the funding to buy computers with much greater parallelization, so their safety margin versus what they can afford should be about the same as for a casual desktop user.

>And how much upstream bandwidth do you think would be required just to relay transactions to a few peers (again assuming that most transactions will come from p2p gossip and not through a block)?

This largely depends on how many peers our user has. Let's assume that our desktop user is a middle-class hobbyist, and is only sacrificing peer count a little bit in favor of reduced hardware requirements. Our user has 8 peers. Transaction propagation comes from 3 different p2p messages. The first message is the INV message, which is used to announce that a node knows one or more transactions with a specified TXID or TXIDs. These INV messages are usually batched into groups of 3 or so right now, but in a higher-throughput context, would likely be batched in groups of 20. The TCP/IP and other overhead is significant, so an INV for a single TXID is around 120 bytes, and each additional TXID adds around 40 bytes (not the 32 byte theoretical minimum). With 20 tx per inv, that's 880 bytes. For each peer connection, half of the transactions will be part of a received INV, and half will be part of a sent INV. This means that per 2.5 million transactions (i.e. one block) per peer, our node would send and receive 55 MB. For all 8 peers, that would be 440 MB in each direction for INVs. The second and third messages are the tx request and the tx response. With overhead, these two messages should take around 600 bytes for a 400 byte transaction. If our node downloads each transaction once and uploads once, that turns into 1.5 GB of traffic in each direction per block. Lastly, we need to propagate the blocks themselves. With Graphene, the traffic needed for this step is trivial, so we can ignore it.

In total, we have about 1.94 GB bidirectional of traffic during each (average) 600 second block interval. That translates to average bandwidth of 3.23 MB/s or 25.9 Mbps. This is, again, reasonable to expect for a motivated middle-class hobbyist around 2020, though not trivial.
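
The arithmetic in that comment can be checked mechanically. The short script below reproduces the commenter's own numbers; every constant is taken from the comment above (they are the commenter's estimates, not measurements), and nothing else is assumed.

```python
# Napkin math from the comment above: validating and relaying a 1 GB
# block on a ~$1000 desktop, using the commenter's own constants.

TXS_PER_BLOCK = 2_500_000          # ~2.5M transactions in a 1 GB block
BLOCK_INTERVAL = 600               # average seconds between blocks

# --- validation time ---
merkle_bytes = 2 * 2 * 32 * TXS_PER_BLOCK  # tree levels sum to 2x the
                                           # txids; double SHA256 doubles it
merkle_secs = merkle_bytes / 300e6         # ~300 MB/s SHA256 on one core

io_ops = 10_000_000 + 20_000_000           # ~10M reads + ~20M writes
io_secs = io_ops / 1_000_000               # ~1M IOPS near-future SSD

sort_secs = 1                              # 2.5M txids at 1-10M keys/s
graphene_secs = 1                          # ~20 kB encoded block
iblt_secs = 10                             # 10 ms per MB, scaled to 1 GB

# The comment's own total uses 0.5 s for the merkle step even though
# its prose estimates ~1 s; keep 0.5 so the 42.5 s figure matches.
total = graphene_secs + iblt_secs + sort_secs + 0.5 + io_secs
print(f"merkle hashing: {merkle_secs:.1f} s")   # ~1 s
print(f"UTXO disk IO:   {io_secs:.1f} s")       # ~30 s
print(f"total:          {total:.1f} s")         # 42.5 s

# --- upstream bandwidth ---
PEERS = 8
inv_bytes = TXS_PER_BLOCK / 20 * 880       # 20 txids per INV, 880 B each
inv_traffic = inv_bytes / 2 * PEERS        # half sent, half received
tx_traffic = TXS_PER_BLOCK * 600           # request + response ~600 B/tx
total_bytes = inv_traffic + tx_traffic     # per direction, per block
mbps = total_bytes / BLOCK_INTERVAL * 8 / 1e6
print(f"per-block traffic: {total_bytes/1e9:.2f} GB, {mbps:.1f} Mbps")
# -> 1.94 GB and 25.9 Mbps, matching the comment
```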

r/CryptoMoonShots

I think you are confused ser, this isn't INV.

Mentions:#INV
r/CryptoCurrency

tldr; Coinbase has announced that two finance-focused altcoins and a pair of other crypto assets will start trading on its Pro platform. Inverse Finance (INV) saw its price go vertical from $621.82 to as high as $822 after the announcement. Propy (PRO) jumped from $1.59 to $2.30 almost instantly after the news broke. Liquity (LQTY) initially spiked 62.3% from $5. *This summary is auto generated by a bot and not meant to replace reading the original article. As always, DYOR.*

Mentions:#INV#PRO#LQTY
r/CryptoCurrency

NEW COINBASE PRO LISTINGS TODAY! Starting today, Wednesday January 12, transfer INV, LQTY, NCT and PRO into your Coinbase Pro account ahead of trading. Support for INV, LQTY, NCT and PRO will generally be available in Coinbase’s supported jurisdictions, with certain exceptions as indicated in each asset page here. Trading will begin on or after 9AM Pacific Time (PT), Thursday January 13, if liquidity conditions are met.

r/CryptoMarkets

Yes, it is really disappointing. I still hope some INV team members will be honest and act as was promised initially, because Inverse Finance isn't a small project. INV can't afford to pull fraudulent moves on community members.

Mentions:#INV
r/CryptoCurrency

Imo you have to look for low caps with strong fundamentals, aligned with the upcoming narratives. Mine are VADER (40%), BAO (20%), INV (10%), CARD (10%), and the rest stables for now.

Mentions:#BAO#INV#CARD
r/CryptoCurrency

Coinbase listing INV.

Mentions:#INV