Can't stress this enough: always conduct your own research. There are also a few tools out there to help you decide what's worth trading your hard-earned cash on. I like using social analytics, where, based on the social data around a specific coin, you get notified which ones are experiencing social anomalies. There are a few out there, but I think [RedSeaCrypto.com](https://RedSeaCrypto.com) actually uses AI/ML to help select coins.
Everyone is saying this, but does it actually work? Haven't the exchanges blacklisted it? Like, I get that Tornado Cash is technically "innocent", but how many legit uses does it actually have? Every time I see it mentioned, it's in threads to do with ML. Why wouldn't exchanges just refuse to accept crypto from accounts that got their crypto from Tornado?
Get over yourself, dude. Well, if I had trusted my prejudices and probed further, then I would maybe eventually have found deep links to Israeli money-laundering networks, which would have been giant red flags. Parts of the UAE and Israel are huge ML hubs and human-trafficking endpoints, and this is not remotely controversial in econ. But because I am too busy working on this stuff, putting my own money into my work and things like BTC rather than other schemes, my message was "seems okay despite being Israeli - so DYOR". Because I am not a member of the criminal elite or a forensic accountant, I do not know more about this. DYOR lol
People's GPUs running in service of games use the same amount of "power as a small country" - if OP is guilty of posting bona fide shit, you are at least as bad on the other end of the spectrum. Every argument is dishonest. Gaming, again, is a great prototype for how little the vast majority of us know about the run-off benefit of any given piece of technology. For decades we accepted that everything we do in service of better graphics and higher fidelity was a waste of research funds and, most of all, detrimental to our youth. Both things turned out to be blatantly wrong right away, and yet it took at least two more decades until people got a real glimpse of the very salient fact that anything can and will turn out to benefit us. Research it, and we're going to create new ice cream flavors in the process.

And what did gaming end up doing for us? Pretty much the biggest revolution ever. People got a great taste of what real machine learning opens up for us back in 2012, when Spotify virtually solved the preliminary problems of full-blown music recommendation. Shit, until then, we weren't even sure whether music was necessarily easier to give the old ML treatment to, compared to TV shows or movies - or the endless hours of footage on YT. It turns out it's kind of difficult to extract the essence of short text descriptions and star ratings, but discrete feature engineering in the audio domain was a very clear landmark signalling the dreaded AI winter thawing for good. Which was exactly what was happening. Thanks, CUDA cores. If we hadn't spent "needless" resources on making rasterization incrementally better and had people "waste" energy and labor on it - with a very distinct horizon of the entire niche "wasting" on the order of small countries' power consumption - we would still be stuck in 2005 in many regards.

CCs don't use the same amount of power as small countries any more than other, very much useful technologies use comparable amounts - and those have been doing so forever. People always quote the same shitty Forbes articles that have been debunked over and over again; none of it accounts for the massive efficiencies provided by L2s and plenty of native protocols, along with, believe it or not, centralized exchanges alleviating quite a few of the pessimistic estimates people love to hold against Bitcoin. Nobody talks about the dynamics of using cheap and green energy in the process, something that very much is not a thing in lots of traditional finance. I'm not going to knit some weird nitpick out of the fact that everything, somehow, still has to be printed on paper, but I guess the shift in convenience between different demographics has to count for something too.

And even if all that didn't really factor in: what is your argument for assuming that CCs can't scale beyond the highly rigid, not exactly adversarially challenged systems we use today? Rollup-y stuff isn't just some vapid promise; we basically know that it is possible to scale any transactional protocol almost infinitely. It's just a matter of engineering it, however much ETH haters will tell you it's totally untenable that they're holding Ethereum hostage by taking "so long" to finish the merge and PoS. The real cringe is people trying so hard to defend against thoroughly flawed arguments. There never was a reason why BTC and friends couldn't use as much energy as they did; we have plenty of precedent for massively expensive efforts leading to fantastic technological benefits down the road.
We did it before, and we'll keep doing it - and if you want to be conscientious about the environment, fucking put meat on the agenda. Small tangent, but I love seeing all the fat fucks clearly rolling coal chiming in on the climate discussion surrounding crypto, as if they were even remotely as willing to do something about it as both the "investors" and the people making these things happen in the first place. Nobody wants PoW much these days - something we pay for in terms of greatly reduced security, to be fair - CCs are very much shooting for efficiency in every respect. The discussion around all this, and the booing of CCs for being nothing but speculative assets, has been dictated by universally recognized and acclaimed assholes, and somehow they convinced the contrarians that this is a major goal. Sure worked; it didn't help that NFTs and NFT fans basically squeezed in and co-opted an entire field of research to make just about anyone involved look like asses... and still - what if crypto actually goes the way of machine learning and suddenly people realize Saylor was right in calling digital currencies highly flexible infrastructure that will benefit everyone and every market? Not something anyone could really anticipate, but if it happens, it definitely is a "significant part" of anyone's infrastructure. Nothing says it isn't possible or even unlikely. We were stacked against much worse odds, and look where we are now.
The rate of technological change in many areas is quite staggering. IoT connectivity has developed substantially over the last decade, Moore's law is still holding true, and SSDs have pretty much replaced spinning disks in new machines. Speech and image recognition are now taken for granted, but they rely upon AI/ML technology that was a theoretical 'what if' not that long ago, and this is now being applied to less trivial use cases with lots of promise. And then there is quantum computing, which could be almost as significant as the invention of the computer itself. From a funding perspective, global venture investment in early-stage companies has massively increased over the last decade, and these statistics aren't hard to find. At the risk of being reductive, I think your argument can be summarised as "certain consumer devices aren't developing as fast as they used to, therefore we are in an innovation winter", and I don't think that stands up to scrutiny. I think what's happened is that those consumer devices have reached maturity and are therefore advancing incrementally.
For me, there isn't any TA that I respect; it's more just fun to look at. Even the halving-cycle patterns are breaking down. It makes sense if you think about it: if there is any pattern to be found that can make someone money, it will be exploited immediately and stop being a pattern. Especially now, more than ever, with all the ML bots.
Maybe not financials specifically, but Kasparov has watched tech, especially machine learning, grow over the past two decades. He famously played against IBM's Deep Blue and was the first world champion to be defeated by a machine under tournament conditions. More importantly, it seems like his stance 20 years ago was more about doubting ML, being frustrated with it winning, and wanting to know exactly how it could make decisions; now he understands more about the ML space and respects it. I see direct parallels with financial folks doubting blockchain, decentralization, trustless environments, etc.
Hmm, that's odd - there are several restaurants with the USDT QR code on display, always right in the city center of course, Madero and around there. And yes, I agree: when some fintech adds it to its app, it's really going to take off. In fact, ML already has MercadoCrypto in Brazil; here I suppose it isn't profitable because of AFIP's mystical delusion that it can scrape something off of it.
>The development of cutting-edge technologies such as advanced catalysts, AI, machine learning and quantum computing, just to name a few of humanity's greatest advancements, have already been integrated in the fight against climate change and helped us turn the tide on the planet's rising temperature…

The only one of those that has *definitely* helped slow climate change is advanced catalysts. The amount of energy it takes to run AI/ML/QC, what they're actually used for, and the enormous supply chain behind computer tech mean that a few little puff-piece use cases won't offset the vast investment of energy required to get here in the first place.
This will be my attempt at an ELI5, and I will acknowledge that I come from a machine learning direction, so I'm assuming ML uses these terms in a way that is not too out of step with how statisticians use them. I do not know this Cowen guy, but what the other person was saying is that he does not follow best practice. A holdout dataset, also known as a validation dataset, is data that you keep to the side to test your trained model on. So you build your model using the rest of the data, and then, to see if it went well, you check how it does on the validation data, which is independent of the training set. One thing this helps catch is overfitting. Overfitting is where you capture every quirk of the training data in a way that does not generalise. Say you took one sample of which sports teams people in a store support, along with the ages of those people. Overfitting would be rigidly assuming that that exact mapping of sports team to age is gospel - so that when you go to a different store in a different city and see that the demographics are a little older than your initial batch, you assume the sports following will skew towards what the older people in your training set preferred. Realistically, the two samples are likely to be quite different, because sports allegiance tends to be more correlated with geography than with age - but because you never tested your model on any samples outside of your training set, you never caught the discrepancy, and now you have made bold claims on YouTube based on poorly validated models.
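A minimal sketch of the idea in code, assuming scikit-learn is available; the synthetic dataset, decision tree, and 80/20 split are illustrative stand-ins, not anything from the Cowen discussion:

```python
# Toy demonstration of a holdout/validation set catching overfitting.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_informative=5, random_state=0)

# Keep 20% of the data to the side; the model never sees it during training.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# An unconstrained tree memorizes every quirk of the training data.
model = DecisionTreeClassifier().fit(X_train, y_train)

print("training accuracy:", model.score(X_train, y_train))  # ~1.00, looks perfect
print("holdout accuracy: ", model.score(X_val, y_val))      # noticeably lower: overfitting exposed
```

The gap between those two numbers is exactly the discrepancy the bold-claims-on-YouTube scenario never checks for.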
At best, you can predict minutes into the future with reasonable accuracy using statistical models. Going beyond that would require information that just isn't fed into most models, and a lot of it isn't likely to be public data. Most large price movements are caused by external events, such as a report or a change in some variable (like the Fed's interest rates), that you have no way of knowing about ahead of time. The world is so chaotic, and people are so chaotic, that exaggerated price movements can occur at even a minor blip of news, or a major story can run and everyone ignores it... there's very little way to know which way it will go. There are definitely ways to make money using statistics and ML models, but the moment you start using them and making any appreciable money with them, you become a variable influencing the outcome, as you will be either eating supply or providing supply that wouldn't have been accounted for in your model. I have run simulations on this but haven't actually tried it with real money at this point; the simulations don't have the issue of creating any supply/demand, so I'm not sure whether it would actually work in practice. You can't just track one coin, though - that information is chaotic at best; typically you have to look at arbitrage opportunities and interdependent price movements, and it becomes extremely complicated. But yeah, I agree: if your statistical models were actually worth using, that's what you would do. I know that if anything I do actually works, I might talk about some things in theory with others, but I'm going to sit down, shut up, put it into action to set myself up, and definitely not yell about it on YouTube.
An ML algorithm might be able to. Imagine what Facebook can do if it processes those. Markets as volatile as crypto have some form of correlation with the human element; the problem is figuring out the function and using it to predict the outcome at above 80% accuracy.
Agreed, he's mostly a businessman at this point. But he's still considered to have mastered Python, C++, C, Perl, shell, and ML stacks, among having fairly deep knowledge in various other fields. Is he as adept as someone who spent 30 years programming in a handful of languages? Probably not, but he's surprisingly not far off, based on what I've read about him over the years from those who've worked closely with him.
🍃SoundMint 2 ML Giveaway🍃 Happy to announce a collab with generative music nft brand for their upcoming drop! To enter: 1⃣ RT +💚 2⃣ Follow @ ... BF PARTY Whitelist Giveaway P2E game 💜 2x WL giveaway 💜 Thanks and co-founder for this WL ... # BFParty play to earn NFT https://bfparty.app
Just 6 years, huh? 6 years ago, DeepMind created an AI/ML platform that competitively plays Go. Two years ago, the same platform started predicting protein structures to the width of an atom. These are things that actually benefit society in a tangible way, e.g. making it better and cheaper to make medications/vaccines. The fact that you smoothbrained this response and thought it was a smart comeback shows how fucking delusional you are. Compared to the speed of progress in other tech sectors, cryptos are basically standing still.
I just looked up Vectorspace's website ( https://vectorspace.ai/ ) and didn't see anything mentioned about this coin. It looks like they're just an ML-enabled dataset processing and analysis company. What does the coin do, exactly? Like, what is it used for?
Yay! You gave /u/ML1948 3.512 garlicoin, hopefully they can now create some tasty garlic bread. If ML1948 doesn't know what it is, they should visit the [Garlicoin subreddit](https://np.reddit.com/r/garlicoin/) [Need help?](https://np.reddit.com/message/compose/?to=grlctipsbot&subject=help&message=help) [Garlicoin Official Link Tree](https://linktr.ee/Garlicoin)
Thank you for participating. To interact with the tip bot use private messages, not chat requests, and make sure each command is a separate private message rather than a reply to a previous private message. You can have the bot private message you a list of supported commands by sending the message help to it. [Click here for a pre-filled private message for the help command](https://np.reddit.com/message/compose/?to=grlctipsbot&subject=help&message=help) /u/grlctipsbot 3.512 ML1948
KYCing unhosted wallets is anti-crypto and anti-privacy. It is anti-crypto because it increases the cost of running an exchange and of using an unhosted wallet. It is anti-privacy because it forces people to dox their own wallets. It has nothing to do with fighting ML or terrorist financing, because ML and terrorist financing are done through banks, not through crypto.
Oh, I'm very familiar with sports betting, which is why I'm wondering how this system works. It sounds like if I bet $100 on ML odds at +225 and also make the opposite bet, then assuming the underdog wins, I just made $225 minus the vig?
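For reference, here's the arithmetic on American moneyline payouts as a quick sketch; the -280 favorite line is a made-up counterpart to +225, just to show why "minus the vig" isn't quite how it nets out:

```python
# American moneyline payouts; the -280 line is hypothetical, not from this thread.
def ml_profit(stake: float, odds: int) -> float:
    """Profit (excluding the returned stake) if the bet wins."""
    return stake * odds / 100 if odds > 0 else stake * 100 / -odds

print(ml_profit(100, +225))  # 225.0  -> underdog win pays $225 profit
print(ml_profit(100, -280))  # ~35.71 -> favorite win pays ~$35.71 profit

# Betting $100 on BOTH sides: an underdog win collects $225 but the $100
# staked on the favorite is lost, netting $125. The vig isn't deducted
# separately; it lives in the gap between the two lines.
print(ml_profit(100, +225) - 100)  # 125.0
```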
- Bitcoin hashrate: BTC.com, actual figure
- Hashrate per kW: Antminer S19, 115 TH/s at 3.5 kW. Rounded up to 150 TH/s at 3 kW.
- Lightning node count: actual number, 1ML
- Lightning capacity: actual number, 1ML
- Power consumed by BTC miners: 200 EH/s times 1 kW per 50 TH/s, as before
- Payments processed on Lightning: 200% of channel capacity every day is definitely an overestimation. The real number is certainly less; the actual number is impossible to ascertain.
- Power consumption of an LN node: 5 W (actual number, Raspberry Pi) plus 30 W for a Netgear N600 router. Rounded down to 30 W.
- MC + Visa payment volume: $7.36 trillion (actual number, [Statista](https://www.statista.com/statistics/678109/purchase-volume-payment-cards-usa-by-type/))
- Number of AgBank branches: [actual number](https://www.worldlistmania.com/largest-banks-world/)
- Wells Fargo branch count: actual number, [Wells Fargo](https://www.wellsfargojobs.com/about-us)
- Power consumption of a bank branch: 6× 125 W ([Dell PowerEdge R710](https://www.itconnected.tech/blog/dell-poweredge-r710/)) + 250 W
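Redoing the headline arithmetic from the list above in a few lines; the node count is a placeholder order of magnitude (the list defers to 1ML for the actual figure):

```python
# Back-of-the-envelope check of the miner vs. Lightning power estimates above.
TH_PER_EH = 10**6                      # 1 EH/s = 1,000,000 TH/s

hashrate_ths = 200 * TH_PER_EH         # 200 EH/s network hashrate
kw_per_ths = 1 / 50                    # 1 kW per 50 TH/s (rounded S19 efficiency)
miner_gw = hashrate_ths * kw_per_ths / 10**6
print(f"BTC miners: ~{miner_gw:.0f} GW")          # ~4 GW

ln_nodes = 20_000                      # placeholder; see 1ML for the real count
watts_per_node = 30                    # Pi + router, rounded down as above
ln_mw = ln_nodes * watts_per_node / 10**6
print(f"Lightning nodes: ~{ln_mw:.1f} MW")        # ~0.6 MW, a rounding error next to the miners
```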
Oh, I LOVE Python for data science... pandas is amazing... obviously NumPy... but then you have all the big data/AI stuff like TensorFlow, tons of NLP packages... and tons more ML stuff. I love it. I'm a huge proponent of Node on the backend with Vue or React on the front, but I'm agnostic - I'll code in any Turing-complete language. If you ever wanna talk, always feel free to give me a shout.
I agree in general, but like you said, assets do correlate more in a crisis (which I think is easy enough to verify). The issue with correlation indexes is that the parameters that set the index value are generalized. In a simple case, a (bad) index could say SPX green and BTC green = positive correlation (with its converse being true). But it doesn't deal with (or give enough weight to) near-flat days where BTC is red or green but, in scope, just noise, while SPX is red. It would make sense for BTC to go flat first, because a basic of portfolio theory is to reduce exposure to the most volatile assets first. So it'll drop faster and flatten out quicker (big funds finish rebalancing, leaving smaller, set amounts in). A great correlation index would be able to create a basis mapping for volatility relative to SPX, even the series out, and be able to distinguish this. But given the variable choice involved, honestly I'd leave it to ML to figure out the parameters later down the road. Usually correlation indexes deal with similar conditions, but BTC is still a frontier market and has too many nuances to compare apples to apples with no adjustments.
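A quick sketch of the flat-day problem with pandas; the simulated return series and the 0.5% noise threshold are made up for illustration:

```python
# Naive sign-based "correlation" vs. filtering out noise days first.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
spx = pd.Series(rng.normal(0, 0.010, 500))   # simulated daily SPX returns
btc = pd.Series(rng.normal(0, 0.040, 500))   # simulated daily BTC returns

# Bad index: a -0.01% BTC day counts exactly the same as a -8% day.
naive = (np.sign(spx) * np.sign(btc)).mean()

# Slightly better: drop BTC days that are just noise before correlating.
noise = btc.abs() < 0.005                    # hypothetical 0.5% threshold
filtered = spx[~noise].rolling(60).corr(btc[~noise])

print(naive, filtered.dropna().iloc[-1])
```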
It depends on which wallet you choose; they give varying levels of control and have different models. Phoenix is what I recommend to most people, but I use Zap as a remote control for an lnd node I run myself. If all you care about is making outbound payments, then it's pretty plug-and-play: you download the wallet, set it up like any other wallet, find a couple of random nodes on 1ML.com to open channels with, and you're good to go. Getting inbound liquidity is *a little* harder, but if you're receiving a payment from someone, it's likely they'll be happy to open a channel with you. And dual-funding exists and is slowly being rolled out. There are lots of projects in the works to efficiently allocate liquidity and rebalance already-open channels that have become unbalanced, to smooth things out. But as an end user, you rarely need to care about that.
One thing I've never seen mentioned is incentivizing peers to afford resources to the network, with testing to determine the grade - and therefore the compensation in on-chain rewards - for each contributor, per asset. For mesh networking, you get paid more for % uptime, total bandwidth consistency, etc. For storage, you get paid more for faster response times, more uninterrupted loading, etc. Then just adjust the numbers (even algorithmically, if you can find a good way; see the sketch below) until you're getting an optimal midpoint between the best resources and decentralization. The only question underneath this is whether the chain rewards can be stable enough to get people to invest in infrastructure to provide to the network, or just enough that people will offer their unused resources + maybe buy a small node or something for mostly humanitarian reasons. Content moderation is the easy one - if it's a universal catalog, just instance an ML filtration and recommendation system per user, with exposed parameters to adjust to the user's liking. Put it on the authorities to develop technologies to actively remove resources from the network that supply the content, and on the users to prevent themselves from accessing it. There are definitely good solutions - they will have different risk profiles, but our current risk profile includes deconstructing the sanity of culture at large.
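To make the "adjust the numbers algorithmically" part concrete, here is a hypothetical grading function; the metric names and weights are invented for illustration, not taken from any existing chain:

```python
# Hypothetical per-contributor reward grading, along the lines sketched above.
def node_reward(base: float, uptime: float, bw_consistency: float, response_ms: float) -> float:
    """Scale a base chain reward by measured quality of service.

    uptime and bw_consistency are 0.0-1.0; response_ms is average latency.
    """
    latency_factor = min(1.0, 100.0 / max(response_ms, 1.0))  # <=100 ms caps out
    return base * uptime * bw_consistency * latency_factor

# A steady node out-earns a flaky one, nudging operators toward reliability.
print(node_reward(10.0, uptime=0.99, bw_consistency=0.95, response_ms=40))  # ~9.4
print(node_reward(10.0, uptime=0.80, bw_consistency=0.60, response_ms=40))  # 4.8
```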
I disagree. Fusion is something we should spend money looking into, but we shouldn't ignore viable alternatives that already exist. Fusion is many, many decades away from being a viable power source. It's like machine learning: people were theorizing about it in the 1970s and writing papers and algorithms that we didn't have the compute to even attempt until the 2000s. Fusion today is like ML in the 1970s. It'll be cool when it gets here, but in the 50 or more years until then, we've gotta have an alternative.
>The cluster in total holds all the transactions

So you just trust the other nodes in the cluster?

>In a world, in which there's need for 10,000 TPS, there's a ton of fees that can be saved using NANO, making the storage requirements look cheap.

How would you store 10,000 transactions each second in a distributed ledger without having giga-nodes?

>smaller nodes have a better price per performance

No... That's pretty much false for everything related to tech. Would you rather buy and use 3 RTX 3060s or 1 RTX 3080 Ti for an ML task? Which do you think would be cheaper? Smaller nodes have worse price-for-performance. A theoretical node that can process 1,000 TPS is cheaper than running 10 nodes that process 100 TPS each, but this comparison is completely and utterly pointless, since if you had 10 nodes doing 100 TPS each, they would all have to trust each other instead of verifying the whole 1,000 TPS. You can't have nodes process only part of the network. That would just mean the cost of attacking the network drops to the cost of attacking the smallest unit of the network, which would make a decentralized ledger pretty much useless. I honestly don't know what you think clustering is. If you have inter-cluster communication, you automatically have to either verify what the other clusters do, making the clustering useless, or just trust whatever they do, which would make the system based on trust.
https://www.reuters.com/article/eib-bonds-idUSL8N2ML346 Could you imagine how much non-American money is either invested in ether or invested ON ether, meaning stored in smart contracts? Hundreds of billions... The US has already classified ETH as a commodity. https://www.sullivanlaw.com/news-ether-is-a-commodity.html#:~:text=It%20is%20official.,the%20jurisdiction%20of%20the%20CFTC. I'm not sure where you get your info from, but maybe it's time to start looking around.
I'm not sure what you mean by "target hash"? Blocks have hashes, and miners have a difficulty target. Miners add random data to a block and hash it until the block's hash (which is essentially just a number) is below a target number. This target regularly changes, so old block data might not be useful for an ML model. The theory, as I understand it, is that there is no way to predict or correlate SHA-256 hashes, so random attempts are the only useful strategy. But even if you found a better strategy, it might not matter unless you could keep it a secret and use it only yourself. The difficulty adjusts automatically in order to maintain 10-minute average block times.
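A toy version of that loop, just to make "hash until the number is below the target" concrete; the double SHA-256 mirrors Bitcoin, but the header bytes and the deliberately easy target are made up:

```python
# Toy proof-of-work: grind nonces until the hash, read as a number, is under the target.
import hashlib

def mine(header: bytes, target: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:  # the hash is "essentially just a number"
            return nonce
        nonce += 1  # no shortcut exists: random attempts are the only strategy

# Lower target = fewer winning hashes = higher difficulty. This target is absurdly easy,
# so the demo finishes in a few hundred tries.
print(mine(b"toy block header", 2**248))
```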
>The barriers to entry for the ticket sales business are the software development and business relationships with venues. Neither is that onerous a cost, which is why the ticket reselling business is competitive.

You shouldn't underestimate the cost of getting a software product fully operable and secure. Many start-ups will take massive shortcuts in operations and security, cross their fingers and hope for the best, usually putting a large amount of risk on their users (see the largely underfunded security sector and all the user-harming data leaks that result from it).

But honestly, what's the point in many private companies competing to hold the position of maximal value extraction? It fucking sucks for users. Companies will enable interoperability only when it benefits them, and nerf it otherwise. They will actively try to make their products as "sticky" as possible so they can sit on their arses extracting value, while maybe funding horizontal growth to try and win other markets as well. This is literally what all companies try to do. I'm not saying competition is bad, but most competition that happens in the private sphere is done through marketing/advertising/sales. The actual work and innovation is often only a portion.

Isn't it just way better if we compete openly? Open-source everything? Enable mobility of ideas and innovation so we can all work together to improve software/technology? Of course, if you do that now you're putting yourself at a massive competitive disadvantage, because closed-source shops can take your work and privately fund improvements on it (which is basically the model most companies use now when they build on open-source technology).

Now, an alternative model is co-operatives built on top of open-sourced, decentralized, trustless networks. Participants in the network are rewarded if the network thrives, and because the network has some underlying utility for the participants (imagine event creators, or avid event attendees), they are encouraged to keep the network secure and to fund any feature improvements that get enough traction. Any remainder is divided between the participants (essentially, the fees get refunded if they are not spent improving the network).

The current software business model is a total fuck-around. Everyone's building the same shit, and outside massively established companies there's essentially a race to the bottom on the sustainability/quality of the engineering. And the revenue that established software companies can make is really inequitable compared to other industries, since they can scale the work they do to potentially billions of users at negligible extra cost. It's just one whole wealth-extraction op, the worst part of globalization.

Anyway, sorry for the rant. More power to you if you read all this. I think we all generally want the same shit: the simplest forms of the best products, fair to users and workers. I am just very doubtful that traditional enterprises can give us that, and they are becoming so complex and opaque (with ML) that they are going to become even less regulatable.
Every time this is posted, there are a lot of comments saying it's just useless horoscope data. While I agree that not every indicator is useful, TA as a data science most certainly is. I use it to train ML models for determining positions, and in my experience of several thousand hours of training models against hundreds of combinations of indicators, I can confirm that TA makes a significant impact on accuracy. Most of the people here don't understand how to apply it correctly, or which indicators work best for the type of trading you do and the time increments involved. As is typical of Reddit, when people don't understand how something works, they say it's trash.
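For anyone curious what "training models against combinations of indicators" looks like mechanically, a bare-bones sketch; the indicator pair, 5-bar horizon, and random forest are arbitrary choices for illustration, not a recommendation:

```python
# TA indicators as ML features, with a walk-forward (non-shuffled) split.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

close = pd.Series(np.cumsum(np.random.default_rng(1).normal(0, 1, 2000)) + 100)

features = pd.DataFrame({
    "fast_over_slow": (close.rolling(10).mean() > close.rolling(50).mean()).astype(int),
    "pct_change_5": close.pct_change(5),
})
label = (close.shift(-5) > close).astype(int).rename("up")  # did price rise 5 bars later?

# Drop the tail rows whose future label is unknown, plus indicator warm-up NaNs.
data = pd.concat([features, label], axis=1).iloc[:-5].dropna()
split = int(len(data) * 0.8)                # never shuffle time series
train, test = data.iloc[:split], data.iloc[split:]

clf = RandomForestClassifier(random_state=0).fit(train[features.columns], train["up"])
print("out-of-sample accuracy:", clf.score(test[features.columns], test["up"]))
```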
tldr; Smart Finance is a decentralized finance (DeFi) solution based on Artificial Intelligence (AI), Machine Learning (ML) and Mathematical Expectations (ME). The AntiScamAI (ASAI) is an AI-powered scanner, which will analyse new token projects to determine potential scams. The Whale AI Tracker can track the wallets of user selected whales. The AI engine will be able to monitor for transactions and copy them before they complete. *This summary is auto generated by a bot and not meant to replace reading the original article. As always, DYOR.*
Yup, they should have also done one model that feeds the predicted timestep back into the window to get a further-future prediction. That would be the real game changer, but this is a rather simplistic ML approach. They don't even use third-party correlation, do they?
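The "self-feeding" idea in miniature, assuming a sliding-window regressor; the sine-wave data, window size, and linear model are stand-ins for whatever the project actually used:

```python
# Multi-step forecasting by rolling the model's own prediction back into the window.
import numpy as np
from sklearn.linear_model import LinearRegression

series = np.sin(np.linspace(0, 20, 500))      # toy series
W = 10                                        # lookback window

X = np.array([series[i:i + W] for i in range(len(series) - W)])
y = series[W:]
model = LinearRegression().fit(X, y)

window = list(series[-W:])
forecast = []
for _ in range(30):                           # 30 steps past the end of the data
    nxt = float(model.predict([window])[0])
    forecast.append(nxt)
    window = window[1:] + [nxt]               # feed the prediction back in

print(forecast[:5])
```

Note the usual caveat: errors compound each step, which is why the simplistic one-step version is so much easier to make look good.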
They're already doing it! I'm planning on making my next web app on there. Cloud computing is so expensive because it's an oligopoly right now. You essentially have 3 options: Amazon, Google, or Microsoft - and god knows the kind of collusion that goes on between them. I used Google's ML API for a couple of graphs over a weekend and got a $400 bill lol.
I can't speak about literal blockchain developers, but I knew a guy who made an ML algo for post evaluation on Steem. When he disappeared, I could still find his personal site, and he'd started an ML-based startup in the construction industry. I assume if you've got the math down, there are a lot of overlapping areas to go into. As for the rest, I have no idea. It's very possible they stay in crypto; I know a Rust dev in the crypto scene who at some point simply moved on to another crypto company, still doing Rust.
When thinking about metrics like "coder proficiency", it seems like you quickly hit points that may need new tooling to really dig out. Things like the following:

* How much they replace vs. add new
* Whether they consistently touch code others have submitted
* How many of the includes/modules they wrote
* How many times their methods are used by other portions of the code
* Whether their replacements reduce the lines needed for the same function

This quickly gets rather complicated, as you're now trying to define the impact of code commits, and the problem is begging for machine learning (a rough sketch of pulling the raw per-author numbers follows below). There is a project called TinyML (Tiny Machine Learning: [https://www.tinyml.org/](https://www.tinyml.org/)) that is meant to bring machine learning to the sub-milliwatt range of computing platforms. Think a small Raspberry Pi, or a device that can run off of a watch battery. "TinyML is already being used for speech and image recognition, neural networks and other applications," said Muhammad Taimoor, data science trainee at training platform provider Data Science Dojo. "It's a simple, intuitive machine learning API and also a great starting point for anyone interested in learning more about machine learning." Right now this mostly uses TensorFlow Lite, but it has a good amount of interest and activity. Several of the basic algorithms for common tasks are available, and there is quite a bit of interest in the space in adapting and adding to what this system can process. Incorporating this, though, could require a refactor (a curse word in programming for some), so only OP can determine if this is a match worth the effort. Also, obviously, it's machine learning, so there may be a bit of a learning curve there - but then again, for all I know OP is already an ML expert. To OP: nice work, and I hope this can be helpful! It should be able to assist with the load you are talking about for the system, and it could certainly add some very valuable feedback that would be extremely difficult and time-consuming to code outside of ML. I have been thinking about this off and on for a while now because of how complicated code and contract reviews can get. Mostly my thinking was along the lines of teaching the code the commonalities between crypto scams and having it grade the "health" of the project.
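As mentioned above, a rough sketch of pulling per-author add/remove counts out of git history - the kind of raw signal those metrics (and any ML trained on top of them) would start from. It assumes it's run inside a git repo; everything else is illustrative:

```python
# Per-author lines added/removed, parsed from `git log --numstat`.
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--numstat", "--format=author:%ae"],
    capture_output=True, text=True, check=True,
).stdout

stats = defaultdict(lambda: [0, 0])   # email -> [added, removed]
author = None
for line in log.splitlines():
    if line.startswith("author:"):
        author = line.removeprefix("author:")
    elif line and author:
        parts = line.split("\t")
        # Binary files report "-" instead of counts and are skipped here.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            stats[author][0] += int(parts[0])
            stats[author][1] += int(parts[1])

for who, (added, removed) in sorted(stats.items()):
    print(f"{who}: +{added} / -{removed}")
```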
Remember Folding@home? Wouldn't it be cool if we were solving problems related to cancer research?! I think AlphaFold has basically solved protein structure prediction, but if we had a list of progressively more difficult calculations that solved important problems (perhaps weather-forecasting calculations, or enzyme development to break down plastics), then we could change the difficulty (like the Bitcoin halving) whenever some amazing algorithm solves the current problem. You would, in effect, have a huge monetary incentive to solve real-world problems, and when one is solved, you move on to the next. Run out of problems? Back to reversing hashes, but that's okay, because by then you can mine raw materials from waste, have near-perfect weather forecasting, have cured cancer, proved P=NP, etc. Do you think you could generate a basic income by training ML models with some kind of ProofOfHuman? Not quite Universal Basic Income, but something close that could employ people past retirement age and keep their minds active while providing income.
I have run a profitable LN node for over a year. **I feel obligated to point out some dangerous misinformation in this post.** > Now with taproot, people won´t even be able to see that it´s a multisig, so they won´t even know that´s it problaby a channel opening transaction. Completely untrue. 1. Lightning Network does not yet support taproot. It's not even in any BOLTs, let alone implemented in actual LN clients. There are years worth of work ahead until this changes. 2. Public channels have their funding txid announced in the gossip, which is public (and inspectable, e.g. on 1ML or Amboss). Even with taproot, your public channel funding txid is permanently linked to your LN node and it will be 100% obvious that txo is a Lightning Network channel. 3. If at any time either channel party has at least 1% of the channel funds, then channel closing will *always* give *both* parties an output. If you close your channel and your channel peer subsequently uses their output from the channel to fund a public channel, **even private channel funding txes will be identifiable as a LN channel**. > The Lightning Network does not increase the privacy of payment, it makes payments 100% private. Absolutely and unequivocally private. End of. Again completely untrue. Payments are routed across the network as HTLCs, which requires that each LN node along the route sees the same payment hash with an ever-decreasing payment amount (as fees get deducted along the way) and CLTV (lock time). This makes it very easy to trace transactions across the network, and lots of information that can be used to deduce payment source/destination (cross-referencing channel CLTV deltas with `cltv_expiry`s on incoming HTLCs), failed payments, round payment amounts, timing of payment attempts, and more. **LN payments privacy is way better than on-chain txes but VERY far from "absolutely and unequivocally private"!** > What in essence you have done is an ad hoc CoinJoin which severs the relationship between your KYC account and your Bitcoin. (As along as you do not use the same address as you opened the channel with, it goes without saying). Completely untrue. Again, both channel peers will get one output from the channel closing tx. Regardless of what address you use, when *your peer* subsequently spends their UTXO--e.g. to fund a new channel from their node--it will be 100% clear who owns which UTXO. This is **not** a CoinJoin and does **not** provide the same privacy benefit. Even if it did, a CoinJoin with only one other party gives very little privacy benefit.
I do agree… you can make money going down, and I'm not a moon boy either. I just like the strategy of using AI and ML to make objective decisions rather than trying to call the top/bottom - plus the fact that I think in terms of an investment horizon of AT LEAST 4 years.
Here is a [Nitter link](https://nitter.net/disclosetv/status/1486801072211570695?t=OVJrB_ML2kzBYuN8OeTYCw&s=19) for the Twitter thread linked above. Nitter is better for privacy and does not nag you for a login. More information can be found [here](https://nitter.net/about). --- *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/CryptoCurrency) if you have any questions or concerns.*
There is a set of tasks that has proven to be extremely hard for any ML algorithm that exists today, yet easy for humans. It's called the ARC dataset, and you can try it here: https://volotat.github.io/ARC-Game/ and read more about it here: https://github.com/fchollet/ARC - although it doesn't necessarily have to be this exact concept; it could be one of the solutions.
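If you want to poke at the tasks programmatically: each task in the repo is a small JSON file of paired input/output grids. A minimal loader, assuming you've cloned the fchollet/ARC repo into the working directory:

```python
# Load one ARC task; grids are lists of lists of ints (colors 0-9).
import json
from pathlib import Path

task_file = next(Path("ARC/data/training").glob("*.json"))  # any task will do
task = json.loads(task_file.read_text())

for pair in task["train"]:                 # a handful of demonstration pairs
    print("input: ", pair["input"])
    print("output:", pair["output"])

# A solver has to infer the transformation from those few pairs and apply it
# to each test input -- trivial for humans, still brutal for today's ML.
test_input = task["test"][0]["input"]
```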
It's most likely machine learning that scrapes information on coins based on what is said about them on Reddit and other forums. Since Reddit often jokes about Tether going to the moon, the ML algorithm has most likely picked that up unironically.
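A crude sketch of why keyword-level sentiment scraping falls for the joke; the comment list, ticker regex, and "bullish" heuristic are all fabricated for illustration:

```python
# Naive mention/sentiment counting that can't tell irony from hype.
import re
from collections import Counter

comments = [
    "tether to the moon!!",          # a running joke, not real bullishness
    "DOGE to the moon",
    "buying more btc on this dip",
]

bullish_mentions = Counter()
for c in comments:
    text = c.lower()
    hyped = "moon" in text or "buying" in text          # toy sentiment rule
    for ticker in re.findall(r"\b(btc|eth|doge|tether)\b", text):
        if hyped:
            bullish_mentions[ticker] += 1

print(bullish_mentions)   # tether scores as bullish even though it was sarcasm
```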
It sounds like you're burying the lede under a bulleted list of generic crypto selling points. To implement a solution to the problem you've outlined, you need:

* the hospital running some server software that can access the patient data, allows upload of the ML models, and runs a batch process that feeds the data to the algorithms and continues to run them for however many iterations the developer wants
* a login/payment system to control access to that server.

Neither of those requires, or would be made significantly easier by, involving blockchain/smart contracts. But you tell me - since you're supposedly building this technology and can therefore speak to actual implementation details - at what point does that hospital server / payment backend require a crypto-based solution?
I mean, you can have it once you've grown big as well. The issue that YouTube and a number of tech start-ups have is that they grew far too big, far too quickly, because they didn't limit user numbers (or stopped doing so too early). The rapid increase in user base pushed a reliance on ML to do a lot of the moderation, or all of it in some cases. ML is great for a first pass, but it's not great for final decisions, because people can figure out how to 'play' the algorithms. As always, it's a case of huge companies not wanting to pay the resource cost of an ethical platform.
>It automates the processes

Blockchain is a ledger. It doesn't automate anything.

>Can you see now why paper work, meetings, and time can be cut down in accessing data via smart contracts and blockchain

I can see the value of a system where ML algorithms are trained on real patient data behind a closed firewall. The *"and it runs on blockchain!"* part is where you lose me. If I had to build this system without blockchain, how much harder would it be?
Right, but clean, appropriate, and meaningful data is central to ML. You don't really know if your results are meaningful if you don't know what data the algorithm trained on. How do you tune and modify the algorithm for problems that might come up, like overfitting or failed validation? I guess I can imagine that it's possible, but it seems extraordinarily inefficient to do things in such a manner.
HMT is my most promising bag right now. Apps built on it are already used by millions of people to label data and prevent bot abuse, enabling next-generation AI and ML technologies. The HUMAN open-source community is now working to support many other apps and use cases; this is a big one.
Not these new bots. They roam around and bide their time. Whoever is running them is using a new level of machine learning. They showed up a couple of months ago and only post once in a while from each account, but they reply to one another, and upvote one another now and then, so as not to look obvious. You really have to know what you're looking for, and understand how ML-type language systems work, to see it. I'm finding it fascinating. They're getting better all the time, and really passing the Turing test for most of the general public.
I hope they extend it somehow so that this goes beyond protein folding - ML model training, for instance. So many students fighting for server time; this could be a real game changer, and a competing business model with AWS et al. The challenge is the frameworks within which to actually carry all this out in a safe way, although perhaps containers can provide an answer here.
I think if they did want to monitor everyone, they would need a China-style super-dragnet where it's certain they are spying on every conversation and using NLP and ML to tag conversations for review; that's how it could be done. The problem is that too many rich people are close to the policy makers. If we had Bernie Sanders as president, you can bet he'd have the left high on the idea of monitoring all our chats to ensure people aren't skirting their taxes - an awful idea I think we can all agree would suck. Hypothetically speaking, of course.