Block Chain — Bitcoin

Long live decentralized bitcoin(!) A reading list

Newbs might not know this, but bitcoin recently came out of an intense internal drama. Between July 2015 and August 2017 bitcoin was attacked by external forces who were hoping to destroy the very properties that made bitcoin valuable in the first place. This culminated in the creation of segwit and the UASF (user activated soft fork) movement. The UASF was successful: segwit was added to bitcoin, and with that the anti-decentralization side left bitcoin altogether and created their own altcoin called bcash. Bitcoin's price was $2500 at the time; soon after segwit was activated the price doubled to $5000 and continued rising to a top of $20000 before correcting to where we are today.
During this drama, I took time away from writing open source code to help educate and argue on reddit, twitter and other social media. I came up with a reading list for quickly copypasting things. It may be interesting today for newbs or anyone who wants a history lesson on what exactly happened during those two years when bitcoin's very existence as a decentralized low-trust currency was questioned. Now that the fight has essentially been won, I try not to comment on reddit that much anymore. There's nothing left to do except wait for Lightning and similar tech to become mature (or better yet, help code it and test it).
In this thread you can learn about block sizes, latency, decentralization, segwit, ASICBOOST, lightning network and all the other issues that were debated endlessly for over two years. So when someone tries to get you to invest in bcash, remind them of the time they supported Bitcoin Unlimited.
For more threads like this see UASF

Summary / The fundamental tradeoff

A trip to the moon requires a rocket with multiple stages by gmaxwell (must read)
Bram Cohen, creator of bittorrent, argues against a hard fork to a larger block size
gmaxwell's summary of the debate
Core devs please explain your vision (see luke's post which also argues that blocks are already too big)
Mod of btc speaking against a hard fork
It's becoming clear to me that a lot of people don't understand how fragile bitcoin is
Blockchain space must be costly, it can never be free
Charlie Lee with a nice analogy about the fundamental tradeoff
gmaxwell on the tradeoffs
jratcliff on the layering

Scaling on-chain will destroy bitcoin's decentralization

Peter Todd: How a floating blocksize limit inevitably leads towards centralization [Feb 2013] mailing list with discussion on reddit in Aug 2015
Nick Szabo's blog post on what makes bitcoin so special
There is academic research showing that even small (2MB) increases to the block size result in drastic drops in node count due to the non-linear increase in RAM needed.
Reddit summary of the above link. In its table, you can see it estimates an immediate 40% drop in node count with a 2MB upgrade and 50% over 6 months. At 4MB it becomes 75% immediately and 80% over 6 months; at 8MB, 90% and 95%.
Larger block sizes make centralization pressures worse (mathematical)
Talk at scalingbitcoin montreal, initial blockchain synchronization puts serious constraints on any increase in the block size with transcript
Bitcoin's P2P Network: The Soft Underbelly of Bitcoin someone's notes: reddit discussion
In adversarial environments blockchains don't scale
Why miners will not voluntarily individually produce smaller blocks
Hal Finney: bitcoin's blockchain can only be a settlement layer (mostly interesting because it's hal finney and it's from 2010)
petertodd's 2013 video explaining this
luke-jr's summary
Another jratcliff thread

Full blocks are not a disaster

Blocks must be always full, there must always be a backlog
Same as above, the mining gap means there must always be a backlog talk: transcript:
Backlogs aren't that bad
Examples where scarce block space causes people to use precious resources more efficiently
Full blocks are fine
High miner fees imply a sustainable future for bitcoin
gmaxwell on why full blocks are good
The whole idea of the mempool being "filled" is wrongheaded. The mempool doesn't "clog" or get stuck, or anything like that.


What is segwit

luke-jr's longer summary
Charlie Shrem on upgrading to segwit
Original segwit talk at scalingbitcoin hong kong + transcript
Segwit is not too complex
Segwit does not make it possible for miners to steal coins, contrary to what some people say
Segwit is required for a useful lightning network It's now known that without a malleability fix useful indefinite channels are not really possible.
Clearing up SegWit Lies and Myths:
Segwit is bigger blocks
Typical usage results in segwit allowing capacity equivalent to 2mb blocks

Why is segwit being blocked

Jihan Wu (head of largest bitcoin mining group) is blocking segwit because of perceived loss of income
Witness discount creates aligned incentives
or because he wants his mining enterprise to have control over bitcoin

Segwit is being blocked because it breaks ASICBOOST, a patented optimization used by bitmain ASIC manufacturer

Details and discovery by gmaxwell
Reddit thread with discussion
Simplified explanation by jonny1000
Bitmain admits their chips have asicboost but they say they never used it on the network (haha a likely story)
Worth $100m per year to them (also in gmaxwell's original email)
Other calculations show less
This also blocks all these other cool updates, not just segwit
Summary of bad consequences of asicboost
Luke's summary of the entire situation
Price goes up because now segwit looks more likely
Asicboost discovery made the price rise
A pool was caught red-handed doing asicboost; by this time it seemed fairly certain that segwit would get activated, so it didn't produce as much interest as earlier
This btc user is outraged at the entire forum because they support Bitmain and ASICBOOST
Antbleed, turns out Bitmain can shut down all its ASICs by remote control:

What if segwit never activates

What if segwit never activates? with and


Lightning network

bitcoinmagazine's series on what lightning is and how it works
The Lightning Network ELIDHDICACS (Explain Like I Don’t Have Degrees in Cryptography and Computer Science)
Lightning will increase fees for miners, not lower them
Cost-benefit analysis of lightning from the point of view of miners
Routing blog post by rusty and reddit comments
Lightning protocol rfc
Blog post with screenshots of ln being used on testnet video
Video of sending and receiving ln on testnet
Lightning tradeoffs
Beer sold for testnet lightning and
Lightning will result in far fewer coins being stored on third parties because it supports instant transactions
jgarzik argues strongly against LN, he owns a coin tracking startup
luke's great debunking / answer of some misinformation questions
Lightning centralization doesn't happen
roasbeef on hubs and charging fees and

Immutability / Being a swiss bank in your pocket / Why doing a hard fork (especially without consensus) is damaging

A downside of hard forks is damaging bitcoin's immutability
Interesting analysis of miners incentives and how failure is possible, don't trust the miners for long term
waxwing on the meaning of cash and settlement
maaku on the cash question
Digital gold fundamentalists gain nothing from supporting a hard fork to larger block sizes
Those asking for a compromise don't understand the underlying political forces
Nobody wants a contentious hard fork actually, anti-core people got emotionally manipulated
The hard work of the core developers has kept bitcoin scalable
Recent PRs to improve bitcoin scalability ignored by the debate
gmaxwell against hard forks since 2013
maaku: hard forks are really bad

Some metrics on what the market thinks of decentralization and hostile hard forks

The price history shows that the exchange rate drops every time a hard fork threatens:
and this example from 2017 btc users lose money
price supporting theymos' moderation
old version
older version
about 50% of nodes updated to the soft fork node quite quickly

Bitcoin Unlimited / Emergent Consensus is badly designed, changes the game theory of bitcoin

Bitcoin Unlimited was a proposed hard fork client, it was made with the intention to stop segwit from activating
A Future Led by Bitcoin Unlimited is a Centralized Future
Flexible transactions are bugged
Bugged BU software mines an invalid block, wasting 13 bitcoins or $12k
BU employees are moderators of btc
miners don't control stuff like the block size
even gavin agreed that economic majority controls things
fork clients are trying to steal bitcoin's brand and network effect, they're no different from altcoins
BU being active makes it easier to reverse payments, increases wasted work making the network less secure and giving an advantage to bigger miners
bitcoin unlimited takes power away from users and gives it to miners
bitcoin unlimited's accepted depth
BU's lying propaganda poster

BU is bugged, poorly-reviewed and crashes

bitcoin unlimited allegedly funded by kraken stolen coins
Other funding stuff
A serious bug in BU
A summary of what's wrong with BU:

Bitcoin Unlimited Remote Exploit Crash 14/3/2017
BU devs calling it a disaster; also btc deleted a thread about the exploit
Summary of incident
More than 20 exchanges will list BTU as an altcoin
Again a few days later

User Activated Soft Fork (UASF)

site for it, including list of businesses supporting it
luke's view
threat of UASF makes the miners fall into line in litecoin
UASF delivers the goods for vertcoin
UASF coin is more valuable
All the links together in one place
p2sh was a uasf
jgarzik annoyed at the strict timeline that segwit2x has to follow because of bip148
Committed intolerant minority
alp on the game theory of the intolerant minority
The risk of UASF is less than the cost of doing nothing
uasf delivered the goods for bitcoin, it forced antpool and others to signal (May 2016) "When asked specifically whether Antpool would run SegWit code without a hard fork increase in the block size also included in a release of Bitcoin Core, Wu responded: “No. It is acceptable that the hard fork code is not activated, but it needs to be included in a ‘release’ of Bitcoin Core. I have made it clear about the definition of ‘release,’ which is not ‘public.’”"
Screenshot of peter rizun capitulating

Fighting off 2x HF
b2x is most of all about firing core

Misinformation / sockpuppets
three year old account, only started posting today
Why we should not hard fork after the UASF worked:


History

Good article that covers virtually all the important history
Interesting post with some history pre-2015
The core scalability roadmap + my summary from 3/2017
History from summer 2015
Brief reminders of the ETC situation
Longer writeup of ethereum's TheDAO bailout fraud
Point that the bigblocker side is only blocking segwit as a hostage
jonny1000's recall of the history of bitcoin

Misc (mostly memes)

libbitcoin's Understanding Bitcoin series (another must read, most of it)
github commit where satoshi added the block size limit
hard fork proposals from some core devs
blockstream hasn't taken over the entire bitcoin core project
blockstream is one of the good guys
Forkers, we're not raising a single byte! Song lyrics by belcher
Some stuff here along with that cool photoshopped poster
Nice graphic
gmaxwell saying how he is probably responsible for the most privacy tech in bitcoin, while mike hearn screwed up privacy
Fairly cool propaganda poster
btc tankman
asicboost discovery meme
gavin wanted to kill the bitcoin chain
stuff that btc believes
after the segwit2x NYA was agreed all the fee pressure disappeared; laurenmt found they were artificial spam
theymos saying why victory isn't inevitable
with ignorant enemies like these it's no wonder we won: "So, once segwit2x activates, from that moment on it will require a coordinated fork to avoid the up coming "baked in" HF."
a positive effect of bcash, it made blockchain utxo spammers move away from bitcoin
summary of craig wright, jihan wu and roger ver's positions
Why is bitcoin so strong against attack?!?! (because we're motivated and awesome)
what happened to #oldjeffgarzik
big blockers fully deserve to lose every last bitcoin they ever had and more
gavinandresen brainstorming how to kill bitcoin with a 51% in a nasty way
Roger Ver as bitcoin Judas
A bunch of tweets and memes celebrating UASF
submitted by belcher_ to Bitcoin

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:


Connor: 0:02:19.68,0:02:45.10
Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, who are also the lead developers of the Satoshi’s Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions and engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the commercial side of the project.
Daniel:
Hi, I’m Daniel, I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now, Bitcoin itself is ten years old - but in the past little over a year, what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah - so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but more at the system test level - so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network, and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question. I saw that you posted on Twitter about the revived Gigablock testnet initiative, and it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were going down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the box size entirely or increasing it on a larger scale is this idea of like the infinite block attack right and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it is it something that that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already increased when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. Firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it’s kind of pointless to download that message. So these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this.
One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. That we've gotten around by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it's stuck for two hours or whatever downloading and validating. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
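The size sanity checks Steve describes can be sketched as a short receive loop. This is a minimal illustration, not the actual Bitcoin P2P code; the 128 MB constant, the `ProtocolError` type, and the chunk-based interface are all assumptions made for the example:

```python
# Minimal sketch of the message-size sanity checks described above.
# Not the real Bitcoin P2P implementation; the constant and the
# chunk-based receive interface are illustrative assumptions.

MAX_BLOCK_SIZE = 128 * 1024 * 1024  # hypothetical 128 MB consensus limit

class ProtocolError(Exception):
    """Raised to signal that the peer's connection should be dropped."""

def receive_block_message(declared_size, chunks):
    """Receive a block whose P2P header declared `declared_size` bytes.

    `chunks` is any iterable of byte chunks arriving from the peer.
    Raise ProtocolError (i.e. drop the connection) if the peer lies.
    """
    # A message claiming to be larger than the consensus limit is
    # pointless to download at all, so reject it up front.
    if declared_size > MAX_BLOCK_SIZE:
        raise ProtocolError("declared size exceeds block size limit")

    received = bytearray()
    for chunk in chunks:
        received += chunk
        # The "30MB message that reaches 33MB" case: the peer sent
        # more data than its header declared, so something is wrong.
        if len(received) > declared_size:
            raise ProtocolError("peer sent more data than declared")
    return bytes(received)
```

Either check costs almost nothing per chunk, which is why the attack is "trivial to prevent": the node never commits unbounded memory or bandwidth to a message the header already rules out.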
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not mean sending 32MB of data in most cases - almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that rate can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size propagation time, and that's kind of where Moore's Law comes in.
Put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128MB blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got were: #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2, specifically, we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that, when the primary criticism that was leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through.
Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners, and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, doesn't have to accept that transaction - it can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that; or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
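Daniel's point – that a node is free to abort and discard a relayed transaction whose script runs too long – can be sketched like this. The function names and deadline are illustrative assumptions, not the actual node code:

```python
# Hedged sketch: abort evaluation of a mempool transaction whose script
# exceeds a deadline, and simply discard it (nothing obliges a node to
# accept a relayed transaction).
import time

class ScriptTimeout(Exception):
    pass

def eval_script(ops, deadline_s=1.0):
    start = time.monotonic()
    stack = []
    for op in ops:
        # check the clock before executing each opcode
        if time.monotonic() - start > deadline_s:
            raise ScriptTimeout("aborting slow script")
        op(stack)
    return stack

def accept_to_mempool(ops, deadline_s=1.0):
    try:
        eval_script(ops, deadline_s)
        return True
    except ScriptTimeout:
        # discard rather than ban: the same script may still arrive
        # validly inside a block mined by someone else
        return False
```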
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, I mean, one of the most significant things is that, other than two (the minor variants OP_2MUL and OP_2DIV), they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing the arithmetic opcodes to big-number operations instead of the 32-bit limit they're currently subject to. The simple answer is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, subtract, divide, modulo – it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer is that they're useful – we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case – most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing the script engine – or rather, not completing it, but putting it back the way it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32-bit. Daniel – I think I saw you answer this on Reddit a little while ago – the new opcodes use logical shifts while Satoshi's versions used arithmetic shifts. The general question a lot of people keep bringing up, maybe rhetorically, is: why not restore it back to exactly the way Satoshi had it? What are the benefits of changing it now to operate a little differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there – the big-number one, and LSHIFT being a logical shift instead of arithmetic. When we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts – I think I've posted a short script that did that – but we can't do it the other way around: you couldn't use an arithmetic shift operator to implement a bitwise one. That's because of the ordering of the bytes in the arithmetic values, the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to many other systems – what I'd consider normal, big-endian. If you start shifting that as a number, the shifting sequence of the bytes becomes a bit strange, so it couldn't go the other way around – you couldn't implement a bitwise shift with the arithmetic one. So we chose to make them bitwise operators – that's what we proposed.
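Daniel's asymmetry – a bitwise shift can emulate an arithmetic one, but not vice versa – can be illustrated in a few lines. This is a sketch of the general idea, not the exact Bitcoin SV opcode semantics (edge cases like overflow handling are simplified):

```python
def lshift(data: bytes, n: int) -> bytes:
    """Bitwise left shift of a byte string by n bits, length preserved;
    bits shifted past the left edge are discarded (logical shift over
    plain data, in the spirit of the re-enabled OP_LSHIFT)."""
    width = len(data) * 8
    val = int.from_bytes(data, "big")
    val = (val << n) & ((1 << width) - 1)
    return val.to_bytes(len(data), "big")

def arith_lshift_le(num_le: bytes, n: int) -> bytes:
    """Arithmetic shift of a little-endian script number, built from the
    bitwise primitive by reversing the byte order first – Daniel's point
    that the bitwise operator is the more general of the two."""
    return lshift(num_le[::-1], n)[::-1]
```

Going the other way – recovering a bitwise shift over arbitrary data from an arithmetic-shift opcode – has no such clean construction, because the number encoding scrambles the byte order.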
Steve: 0:31:10.57,0:31:51.51
That was essentially a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was made then as well. So it was not a decision that we made unilaterally – it was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think – single operators that divide or multiply the value by two – but it was pointed out that that can very easily be achieved by just multiplying by two, instead of having a separate operator for it. So we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum.
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. The idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with an OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight, and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) to treat the data as big-endian data streams (well, sorry – big-endian isn't really applicable, just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators. You would then have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. It's a bit like P2SH – it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big-number ones it gets really complicated, because then you can't change the behavior of the existing opcodes – and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big-number operations without seriously looking at what scripts might be out there and the impact of that change on those existing scripts. The other point is that you don't know what scripts are out there, because of P2SH – there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big-number thing is tricky; I don't know what the options are, though – it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along a similar vein – Bitcoin Core introduced this concept of standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant about that, or kind of pushed back on him about doing it. So what are your thoughts about non-standard scripts and the IsStandard check as a whole?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I mentioned before multi-threaded script validation and having some dedicated well-known script templates – when you say "well-known script template", there's already a check in Bitcoin that tells you whether a script is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical one. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those options. So standard transactions as a concept is meaningful only to an arbitrary degree, I suppose, but I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes – you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, I could easily see it being removed.
Cory: 0,0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what it's doing. A lot of the questions are around why November – why implement these changes in November? Some think that a six-month delay might avoid a split. So first off, what do you think about the possibility of a split, and what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change – I think on August 16th or 17th, something like that – and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed – CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction (or even a non-P2SH transaction) into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split – we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but Bitcoin ABC already published their spec for May, and it is our spec for the new opcodes. In terms of urgency – should we wait? The fact is that we can't. Come November, it's a bit like Segwit – once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy, and it's going to cause a lot of economic disruption. So we're putting our changes in because it's not going to make a difference either way: there's going to be a divergence of consensus rules whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade, we'd effectively be shipping a no-change release. The November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general idea that's been floated around – notably by Ryan Charles – is that it's a subsidy: it takes a whole megabyte and crunches that down, so the computation time stays the same but the cost is lower. Do you share his view, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this – there are different ways to look at it. I'm an engineer – my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise, I kind of agree with the ones who say it's a subsidy – it looks very much like one to me – but that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, and it's a big change. And what are we going to do – every time someone comes up with an idea, add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like: how big is this client going to become? Is this node going to have to handle every kind of weird opcode out there? The software is just going to get unmanageable. With DSV, my main consideration at the beginning was that if you can implement it in script, you should, because that keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor – why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It’s actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example – it's a combination of SPLIT, SWAP, and DROP opcodes. So at the really primitive script level we've got this philosophy of keeping it minimal, and at this other (?) level the philosophy is "let's just add a new opcode for every primitive function". Daniel's right – it's a question of opening the floodgates. Where does it end? If we're going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions, one at a time? Pay-to-public-key-hash is a well-known construct (?), so why bother executing a script at all? But once we've done that, we take away all of the flexibility for people to innovate. It's a philosophical difference, I think, but the position of keeping it simple does make sense. All of the primitives are there to do what people need to do; the things people feel they can't do are because of the limits that exist. If we had no opcode limit at all – if you could make a gigabyte transaction, a gigabyte script – then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller – a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
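Steve's OP_SUBSTR example – building it out of SPLIT, SWAP, and DROP – can be demonstrated on a toy stack machine. This is an illustrative sketch, not the real script interpreter:

```python
# Minimal stack-machine sketch: OP_SUBSTR(s, start, len) expressed with
# the primitive opcodes SPLIT, SWAP, and DROP, as described above.
def op_split(stack):
    n = stack.pop(); s = stack.pop()
    stack.append(s[:n]); stack.append(s[n:])

def op_swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    stack.pop()

def substr(s: bytes, start: int, length: int) -> bytes:
    stack = [s, start]
    op_split(stack)          # -> [s[:start], s[start:]]
    op_swap(stack)           # -> [s[start:], s[:start]]
    op_drop(stack)           # -> [s[start:]]
    stack.append(length)
    op_split(stack)          # -> [s[start:start+length], rest]
    op_drop(stack)           # -> [s[start:start+length]]
    return stack.pop()
```

This is the sense in which a new OP_SUBSTR opcode would be redundant: the primitives already compose into it.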
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed – was released originally – with script. It didn't have to be: instead of having transactions with script, you could have accounts and just say transfer so many BTC from this public key to that one. But that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you really dig into what it can do, it's amazing. I'm really looking forward to seeing some very interesting applications come from that. Awemany's zero-conf script was really interesting – it relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I asked a couple of people on our research team who have been working on the Rabin signature stuff this morning, actually, and I wasn't sure where they were up to with it, but they're working on a proof of concept (which I believe is pretty close to done) for a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm as DSV. I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
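For readers unfamiliar with why Rabin signatures keep coming up: verification is just one big-number multiplication and a modulus, which is why it is expressible in script once OP_MUL works on big numbers. A toy illustration (not the team's actual proof of concept; the padding scheme here is a simplification):

```python
# Toy Rabin signature verification: with public key n = p*q, a signature
# (s, pad) is valid when s^2 ≡ H(message || pad) (mod n). Only the signer
# needs the factorization of n; the verifier needs only multiply-and-mod.
import hashlib

def rabin_verify(n: int, message: bytes, s: int, pad: bytes) -> bool:
    h = int.from_bytes(hashlib.sha256(message + pad).digest(), "big") % n
    return (s * s) % n == h
```

Signing searches for a `pad` making the hash a quadratic residue mod n, then takes a square root using the secret factors – none of which needs to live on-chain.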
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I already know the answer to this, but there are a lot of questions about ending experimentation on Bitcoin. With the plan that Bitcoin SV is on, do you see a potential final release – no new opcodes ever added, where maybe five years down the road we just solidify the base protocol and move forward with that – or are you more open to new opcodes being introduced, given appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. New functionality can be introduced just fine within script. Having said that – yes, there is a place for new opcodes, but it's probably a limited place. In my opinion it's the cryptographic primitive functions: for example CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA256. At some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash and verification functions, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data, too – evidence that these things are needed. The way I could imagine that happening is that, with the full scripting language, some solution gets implemented, we discover it's really useful, and over a period measured in years, not days, we find a lot of transactions using this feature. Then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know whether it's going to be useful – that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view: does optimizing a particular process reduce costs for them such that they can offer a better service to everyone else? So ultimately these are going to be miners' decisions, not developer decisions. Developers can of course offer their input – I wouldn't expect every miner to be an expert on script – but as we're already seeing, miners are starting to employ their own developers. I'm not just talking about us – there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes, study them, and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

New BIP for the implementation of the Consensus 2017 Scaling Agreement (i.e. New York/Silbert) includes BIP148 UASF (August 1st SegWit activation) and a 2MB hard fork locking in 6 months thereafter

See Calvin Rechner's BIP: [bitcoin-dev] Compatibility-Oriented Omnibus Proposal.
Signalling is via the string "COOP."
Here is some of the BIP in question:


This document describes a virtuous combination of James Hilliard’s “Reduced signalling threshold activation of existing segwit deployment”[2], Shaolin Fry’s “Mandatory activation of segwit deployment”[3], Sergio Demian Lerner’s “Segwit2Mb”[4] proposal, Luke Dashjr’s “Post-segwit 2 MB block size hardfork”[5], and hard fork safety mechanisms from Johnson Lau’s “Spoonnet”[6][7] into a single omnibus proposal and patchset.


Proposal Signaling
The string “COOP” is included anywhere in the txn-input (scriptSig) of the coinbase-txn to signal compatibility and support.
Soft Fork
Fast-activation (segsignal): deployed by a "version bits" with an 80% activation threshold BIP9 with the name "segsignal" and using bit 4... [with a] start time of midnight June 1st, 2017 (epoch time 1496275200) and timeout on midnight November 15th 2017 (epoch time 1510704000). This BIP will cease to be active when segwit is locked-in.[2]
Flag-day activation (BIP148): While this BIP is active, all blocks must set the nVersion header top 3 bits to 001 together with bit field (1<<1) (according to the existing segwit deployment). Blocks that do not signal as required will be rejected... This BIP will be active between midnight August 1st 2017 (epoch time 1501545600) and midnight November 15th 2017 (epoch time 1510704000) if the existing segwit deployment is not locked-in or activated before epoch time 1501545600. This BIP will cease to be active when segwit is locked-in.[3]
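The signalling rule quoted above (BIP9 "001" top version bits plus bit 1 for segwit) reduces to a couple of bit tests. A hedged sketch, with illustrative function names:

```python
# Check the BIP148-style signalling rule: nVersion must use the BIP9
# "001" top three bits and set bit 1 (the existing segwit deployment).
def signals_segwit(n_version: int) -> bool:
    top_bits_ok = (n_version >> 29) == 0b001
    bit1_set = bool(n_version & (1 << 1))
    return top_bits_ok and bit1_set

def bip148_accepts(n_version: int) -> bool:
    # while BIP148 is active, blocks that do not signal are rejected
    return signals_segwit(n_version)
```

For example, `0x20000002` signals (top bits 001, bit 1 set) while `0x20000000` does not.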
Hard Fork
The hard fork deployment is scheduled to occur 6 months after SegWit activates:
(HardForkHeight = SEGWIT_ACTIVE_BLOCK_HEIGHT + 26280)
For blocks equal to or higher than HardForkHeight, Luke-Jr’s legacy witness discount and 2MB limit are enacted, along with the following Spoonnet-based improvements[6][7]:


Deployment of the “fast-activation” soft fork is exactly identical to Hilliard’s segsignal proposal[2]. Deployment of the “flag-day” soft fork is exactly identical to Fry’s BIP148 proposal[3]. HardForkHeight is defined as 26280 blocks after SegWit is set to ACTIVE. All blocks with height greater than or equal to this value must adhere to the consensus rules of the 2MB hard fork.
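The 26280-block offset quoted above corresponds to roughly six months at Bitcoin's target rate of one block per ten minutes (144 blocks per day). A quick check of the arithmetic, with illustrative names:

```python
# HardForkHeight = SEGWIT_ACTIVE_BLOCK_HEIGHT + 26280, where 26280 blocks
# is ~half a year at Bitcoin's target rate of 144 blocks/day.
BLOCKS_PER_DAY = 24 * 6        # one block per ~10 minutes
OFFSET = 26280

def hard_fork_height(segwit_active_height: int) -> int:
    return segwit_active_height + OFFSET

# sanity check: 144 * 365 / 2 = 26280
assert OFFSET == BLOCKS_PER_DAY * 365 // 2
```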

Backwards compatibility

This deployment is compatible with the existing "segwit" bit 1 deployment scheduled between midnight November 15th, 2016 and midnight November 15th, 2017.
To prevent the risk of building on top of invalid blocks, miners should upgrade their nodes to support segsignal as well as BIP148.
The intent of this proposal is to maintain full legacy consensus compatibility for users up until the HardForkHeight block height, after which backwards compatibility is waived as enforcement of the hard fork consensus ruleset begins.
I will expound upon this later, but I support this proposal – primarily because it includes BIP148 UASF, secondarily because it includes a 2MB blocksize increase, which I support in principle. (I am a big blocker, but opposed to divergent consensus.)
submitted by AltF to Bitcoin

r/Bitcoin recap - January 2018

Hi Bitcoiners!
I’m back with the thirteenth monthly Bitcoin news recap. I must say it's becoming pretty hard to select just 1 or 2 stories per day, too much is going on!
For those unfamiliar, each day I pick out the most popular/relevant/interesting stories in Bitcoin and save them. At the end of the month I release them in one batch, to give you a quick (but not necessarily complete) overview of what happened in the Bitcoin space over the past month.
You can see recaps of the previous months on
A recap of Bitcoin in January 2018
submitted by SamWouters to Bitcoin



What are cryptocurrencies?
Cryptocurrencies are peer-to-peer technology protocols which rely on the block-chain: a system of decentralized record-keeping which allows people to exchange unmodifiable and indestructible units of information ("coins") globally, in little to no time and with little to no fees. This translates into the exchange of value, as these coins cannot be counterfeited nor stolen. The concept was started by Satoshi Nakamoto (allegedly a pseudonym for a person or organization), who described Bitcoin in a 2008 whitepaper and released its code in 2009.
What is DigiByte?
DigiByte (DGB) is a cryptocurrency like Bitcoin. It is also a decentralized applications protocol in a similar fashion to Neo or Ethereum.
DigiByte was founded and created by Jared Tate in 2014. DigiByte allows for fast (virtually instant) and low-cost (virtually free) transactions. DigiByte is hard-capped at 21 billion coins, which will be mined over a period of 21 years. DigiByte was never an ICO and was initially mined/created in the same way that Bitcoin or Litecoin were.
DigiByte is the fastest UTXO PoW scalable block-chain in the world. We’ll cover what this really means down below.
DigiByte has put forth and applied solutions to many of the problems that have plagued Bitcoin and cryptocurrencies in general – those being:
We will address these point by point in the subsequent sections.
The DigiByte Protocol
DigiByte maintains these properties through use of various technological innovations which we will briefly address below.
Why so many coins? 21 Billion
When initially conceived, Bitcoin was the first of its kind, and it came into the hands of a few. The beginnings of a coin such as Bitcoin were difficult: it had to go through a lot of initial growing pains which later coins did not have to face. It is for this reason, among others, that I believe Bitcoin was capped at 21 million – and why today it has secured a place as digital gold.
When Bitcoin was first invented no one knew anything about cryptocurrencies; for the inventor to get them out to the public, he would have to give them away. This is probably how the first Bitcoins were passed on – for free! But then as interest grew, so did the community. For them to build something which could go on to have actual value, it would have to go through a steady growth phase; therefore, controlling inflation through mining was extremely important. This is also why the cap for Bitcoin was probably set so low – to allow these coins to amass value without being destroyed by inflation (from mining) the way fiat is today. In my mind, Satoshi Nakamoto knew what he was doing when setting it at 21 million BTC, and must have known – even anticipated – that others would take his design and build on top of it.
At DigiByte, we are that better design and capped at 21 billion. That's 1000 times larger than the supply of Bitcoin. Why though? Why is the cap on DigiByte so much higher than that of Bitcoin? Because DigiByte was conceived to be used not as a digital gold, nor as any sort of commodity, but as a real currency!
Today on planet Earth, we are approximately 7.6 billion people. If each person should want or need to use and live off Bitcoin, then, split equally, each person could own at best 0.00276315789 BTC. The total value of all the money on the planet is estimated to have recently passed 80 trillion dollars. That means each whole unit of Bitcoin would be worth approximately $3,809,523.81!
This is of course in an extreme case where everyone used Bitcoin for everything. But even in a more conservative scenario the fact remains that with such a low supply each unit of a Bitcoin would become absurdly expensive if not inaccessible to most. Imagine trying to buy anything under a dollar!
Not only would using Bitcoin as an everyday currency be a logistical nightmare, it would be nigh impossible, for even a single satoshi would represent more value than is realistically manageable for small payments.
This is where DigiByte comes in and where it shines. DigiByte aims to be used world-wide as an international currency! Not to be hoarded in the same way Bitcoin is. If we were to do some of the same calculations with DigiByte we'd find that the numbers are a lot more reasonable.
At 7.6 billion people, each person could own 2.76315789474 DGB. Each whole unit of DGB would be worth approximately $3,809.52.
This is much more manageable and remember in an extreme case where everyone used DigiByte for everything! I don't expect this to happen anytime soon, but with the supply of DigiByte it would allow us to live and transact in a much more realistic and fluid fashion. Without having to divide large numbers on our phone's calculator to understand how much we owe for that cup of coffee! With DigiByte it's simple, coffee cost 1.5 DGB, the cinema 2.8 DGB, a plane ticket 500 DGB!
There is a reason for DigiByte's large supply, and it is a good one!
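The per-capita arithmetic above can be reproduced so the figures are easy to check (the 80-trillion-dollar world money estimate is the article's own assumption):

```python
# Reproduce the article's supply arithmetic for BTC vs DGB.
WORLD_POP = 7.6e9          # people
WORLD_MONEY_USD = 80e12    # article's estimate of all money on Earth
BTC_CAP = 21e6             # Bitcoin supply cap
DGB_CAP = 21e9             # DigiByte supply cap (1000x larger)

btc_per_person = BTC_CAP / WORLD_POP     # ≈ 0.00276 BTC
usd_per_btc = WORLD_MONEY_USD / BTC_CAP  # ≈ $3,809,523.81
dgb_per_person = DGB_CAP / WORLD_POP     # ≈ 2.763 DGB
usd_per_dgb = WORLD_MONEY_USD / DGB_CAP  # ≈ $3,809.52
```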
Decentralisation is an important concept for the block-chain and cryptocurrencies in general. This allows for a system which cannot be controlled nor manipulated no matter how large the organization in play or their intentions. DigiByte’s chain remains out of the reach of even the most powerful government. This allows for people to transact freely and openly without fear of censorship.
Decentralisation on the DigiByte block-chain is assured by having an accessible and fair mining protocol in place – this is the multi-algorithm (MultiAlgo) approach. We believe that all should have access to DigiByte whether through purchase or by mining. Therefore, DigiByte is minable not only on dedicated mining hardware such as Antminers, but also through use of conventional graphics cards. The multi-algorithm approach allows for users to mine on a variety of hardware types through use of one of the 5 mining algorithms supported by DigiByte. Those being:
Please note that these mining algorithms are modified and updated from time to time to assure complete decentralisation and thus ultimate security.
The problem with using only one mining algorithm, as Bitcoin or Litecoin do, is that it allows people to continually amass mining hardware and hash power. The more hash power one has, the more one can collect. This leads to a cycle of centralisation and the creation of mining centres. It is known that a massive portion of all hash power in Bitcoin comes from China. This kind of centralisation is a natural tendency, as it is cheaper for large organisations to set up in countries with inexpensive electricity and other advantages unavailable to the average miner.
DigiByte mitigates this problem with the use of multiple algorithms. It allows for miners with many different kinds of hardware to mine the same coin on an even playing field. Mining difficulty is set relative to the mining algorithm used. This allows for those with dedicated mining rigs to mine alongside those with more modest machines – and all secure the DigiByte chain while maintaining decentralisation.
Low Fees
Low fees are maintained in DigiByte thanks to the MultiAlgo approach working in conjunction with MultiShield (originally known as DigiShield). MultiShield calls for block difficulty readjustment between every single block on the chain; currently blocks last 15 seconds. This continuous difficulty readjustment allows us to combat any bad actors which may wish to manipulate the DigiByte chain.
Manipulation may be done by a large pool or a single entity with a great amount of hash power mining blocks on the chain, thus increasing the difficulty of the chain. In coins such as Bitcoin and Litecoin, difficulty is readjusted every 2016 blocks, with block times of approximately 10 minutes and 2.5 minutes respectively – meaning Bitcoin's difficulty is readjusted about every two weeks. This can allow large bad actors to mine a coin and then abandon it, leaving it with a difficulty level far too high for the remaining hash rate; transactions can be frozen and the chain stalled until there is a difficulty readjustment or enough hash power returns to mine the chain. In such a case users may be faced with a choice: pay exorbitant fees or have their transactions frozen. In an extreme case the whole chain could freeze completely for extended periods of time.
DigiByte does not face this problem as its difficulty is readjusted per block every 15 seconds. This innovation was a technological breakthrough and was adopted by several other coins in the cryptocurrency environment such as Dogecoin, Z-Cash, Ubiq, Monacoin, and Bitcoin Gold.
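The contrast between periodic retargeting and per-block readjustment can be sketched in a few lines. The constants here are illustrative – the real DigiShield/MultiShield formula includes damping and outlier clamps that this sketch only gestures at:

```python
# Simplified contrast between Bitcoin-style periodic retargeting and the
# per-block readjustment described above. Illustrative, not the real formula.
def retarget_periodic(difficulty, actual_span_s, target_span_s):
    # Bitcoin: one adjustment per 2016-block window; if the window took
    # twice as long as targeted, difficulty halves
    return difficulty * target_span_s / actual_span_s

def retarget_per_block(difficulty, last_block_s, target_s=15):
    # per-block: react to every block immediately, but clamp the step
    # (here ±25%) so a single outlier block cannot swing difficulty wildly
    factor = max(0.75, min(1.25, target_s / last_block_s))
    return difficulty * factor
```

With per-block clamped steps, a sudden hash-rate exodus is corrected within minutes rather than weeks, which is the freeze-resistance property the paragraph above describes.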
This difficulty readjustment, along with the MultiAlgo approach, allows DigiByte to maintain the lowest fees of any UTXO PoW chain in the world. Currently fees on the DigiByte block-chain are about 0.0001 DGB for a transaction of 100 000 DGB sent. The fee depends on the amount sent; at current prices 100 000 DGB are worth around $2000.00, making the fee roughly $0.000002 (0.0002 cents). It would take about 5 000 such transactions for the fees to add up to a single penny. This was tested on a Ledger Nano S set to the low fees setting.
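The fee arithmetic is easy to check from the post's own figures (these are the author's example numbers, not official protocol constants):

```python
# Fee arithmetic using the figures stated in the post:
# a 0.0001 DGB fee on a 100,000 DGB send, with 100,000 DGB ~= $2,000.
fee_dgb = 0.0001
price_per_dgb = 2000.0 / 100_000    # $0.02 per DGB
fee_usd = fee_dgb * price_per_dgb   # ~$0.000002, i.e. 0.0002 cents
tx_per_penny = 0.01 / fee_usd       # ~5,000 transactions per $0.01 in fees
```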
Fast transaction times
Fast transactions are ensured by the conjunctive use of the two aforementioned protocols. MultiShield and MultiAlgo keep mining the DigiByte chain consistently profitable, so there is always someone mining your transactions. MultiAlgo allows there to be a greater amount of hash power spread world-wide; this, along with 15 second block times, makes transactions near instantaneous. This speed is also ensured by the use of DigiSpeed, the protocol by which the DigiByte chain decreases block timing gradually. DigiByte started with 30 second block times in 2014; today they are set at 15 seconds. This decrease allows for ever faster and ever more transactions per block.
Robust security + The Immutable Ledger
At the core of cryptocurrency security is decentralisation. As stated before decentralisation is ensured on the DigiByte block chain by use of the MultiAlgo approach. Each algorithm in the MultiAlgo approach of DigiByte is only allowed about 20% of all new blocks. This in conjunction with MultiShield allows for DigiByte to be the most secure, most reliable, and fastest UTXO block chain on the planet. This means that DigiByte is a proof of work (PoW) block-chain where all transactional activities are stored on the immutable public ledger world-wide. In DigiByte there is no need for the Lightning protocol (although we have it) nor sidechains to scale, and thus we get to keep PoW’s security.
There are many great debates as to the robustness or cleanliness of PoW. The fact remains that PoW block-chains remain the only systems in human history which have never been hacked and thus their security is maximal.
For an attacker to divert the DigiByte chain they would need to control over 93% of all the hashrate on one algorithm and 51% of the other four. And so DigiByte is immune to the infamous 51% attack to which Bitcoin and Litecoin are vulnerable.
Moreover, the DigiByte block-chain is currently spread over 200 000 plus servers, computers, phones, and other machines world-wide. The fact is that DigiByte is one of the easiest to mine coins there is – this is greatly aided by the recent release of the one click miner. This allows for ever greater decentralisation which in turn assures that there is no single point of failure and the chain is thus virtually un-attackable.
On Chain Scalability
The biggest barrier for block-chains today is scalability. Visa the credit card company can handle around 2000 transactions per second (TPS) today. This allows them to ensure customer security and transactional rates nation-wide. Bitcoin currently sits at around 7 TPS and Litecoin at 28 TPS (56 TPS with SegWit). All the technological innovations I’ve mentioned above come together to allow for DigiByte to be the fastest PoW block-chain in the world and the most scalable.
DigiByte is scalable because of DigiSpeed, the protocol through which block times are decreased and block sizes are increased. A simple increase in block size can raise the TPS of any block-chain, as with Bitcoin Cash, but that approach on its own is not scalable: larger blocks mean higher storage and hardware costs for miners, and with full blocks (many transactions occurring on the chain) this will inevitably bar the average miner as difficulty increases and mining centres consolidate.
Hardware and storage costs decrease over time, roughly in line with Moore's law, and DigiByte's schedule is built around this. DigiSpeed calls for block sizes to double and block timing to halve every two years. DigiByte's blocks began at 1 MB and 30 seconds at inception in 2014; in 2016 block size was doubled and block timing halved, right on schedule. (Strictly speaking, Moore's law observes that transistor density roughly doubles every two years, which translates into steadily cheaper computation and storage.)
This would allow for DigiByte to scale at a steady rate and for people to adopt new hardware at an equally steady rate and reasonable expense. Thus so, the average miner can continue to mine DigiByte on his algorithm of choice with entry level hardware.
DigiByte was one of the first block chains to adopt Segregated Witness (SegWit, in 2017), a protocol whereby part of the transactional data is removed from the base transaction and stored separately, decreasing transaction weight and thus increasing scalability and speed. This allows more transactions to fit per block without the block increasing in size!
DigiByte currently sits at 560 TPS and could scale to over 280 000 TPS by 2035. This dwarfs any of the TPS capacities; even projected/possible capacities of some coins and even private companies. In essence DigiByte could scale worldwide today and still be reliable and robust. DigiByte could even handle the cumulative transactions of all the top 50 coins in and still run smoothly and below capacity. In fact, to max out DigiByte’s actual maximum capacity (today at 560 TPS) you would have to take all these transactions and multiply them by a factor of 10!
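As a sanity check on the 2035 figure: assuming (as the post implies) that overall throughput doubles every two years starting from 560 TPS in 2017, nine doublings land just above the quoted 280 000 TPS. The start year and doubling cadence here are inferred from the article, not official parameters:

```python
# Back-of-envelope for the "over 280,000 TPS by 2035" claim, assuming
# throughput doubles every two years from a 2017 baseline of 560 TPS.
def projected_tps(year: int, base_tps: int = 560, base_year: int = 2017,
                  doubling_period_years: int = 2) -> int:
    doublings = (year - base_year) // doubling_period_years
    return base_tps * 2 ** doublings

projected_tps(2035)  # 560 * 2**9 = 286,720
```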
Other Uses for DigiByte
Note that DigiByte is not only to be used as a currency. Its immense robustness, security and scalability make it ideal for building decentralised applications (DAPPS) which it can host. DigiByte can in fact host DAPPS and even centralised versions which rely on the chain which are known as Digi-Apps. This application layer is also accompanied by a smart contract layer.
Thus, DigiByte could host several Crypto Kitties games and more without freezing out or increasing transaction costs for the end user.
Currently there are various DAPPS being built on the DigiByte block-chain, these are done independently of the DigiByte core team. These companies are simply using the DigiByte block-chain as a utility much in the same way one uses a road to get to work. One such example is Loly – a Tinderesque consensual dating application.
DigiByte also hosts a variety of other platform projects such as the following:
The DigiByte Foundation
As previously mentioned, DigiByte was not an ICO. The DigiByte Foundation was established in 2017 by founder Jared Tate. It is a non-profit organization dedicated to supporting and developing the DigiByte block-chain.
DigiByte is a community effort and a community coin, to be treated as a public resource like water or air. Know that anyone can work on DigiByte, anyone can create, and do as they wish. It is a permissionless system which encourages innovation and creation. If you have an idea or would like help on your project, do not hesitate to contact the DigiByte Foundation through the official website or the telegram developer's channel.
For this reason, it is ever more important to note that the DigiByte foundation cannot exist without public support. And so, this is the reason I encourage all to donate to the foundation. All funds are used for the maintenance of DigiByte servers, marketing, and DigiByte development.
DigiByte Resources and Websites
Please refer to the sidebar of this sub-reddit for more resources and information.
Edit - Removed Jaxx wallet.
Edit - A new section was added to the article: Why so many coins? 21 Billion
Edit - Adjusted max capacity of DGB's TPS - Note it's actually larger than I initially calculated.
Edit – Grammar and format readjustment
I hope you’ve enjoyed my article. I originally wrote this for the reddit sub-wiki, where it most likely would not get much attention, so instead I've decided to make it a sort of introductory post, an open letter, to any newcomers to DGB or those who are just curious.
I tried to cover every aspect of DGB, but of course I may have forgotten something! Please leave a comment down below and tell me why you're in DGB? What convinced you? Me it's the decentralised PoW that really convinced me. Plus, just that transaction speed and virtually no fees! Made my mouth water!
-Dereck de Mézquita
I'm a student typing this stuff on my free time, help me pay my debts? Thank you!
submitted by xeno_biologist to Digibyte [link] [comments]

Can we talk about sharding and decentralized scaling for Raiblocks?

This essay contains a healthy dose of math sprinkled with opinion, and I would be the first to admit that my math and personal opinions are sometimes wrong. The beauty of these forums is that it allows us to discuss topics in depth, and with enough group scrutiny we should arrive at the truth. I'm actually a cryptocurrency noob; I've only been looking at it in earnest for a few months, but I've seen enough to conclude that we are in the middle of a revolution, and if I don't intellectually participate somehow, I think I'll regret it for the rest of my life.
Here I analyze sharding in a PoS (proof-of-stake) system, and I will show that not only is sharding good, but I will quantify just how beneficial it is to Tps (transactions per second of the whole network) and mps (messages per second processed by each individual node). I use Raiblocks as my point of departure, regarding it as both my inspiration and my object of critique. But much of the discussion should be relevant to any PoS sharded system.
As you may know, Raiblocks does not employ ledger sharding, but seeing as every wallet is already in its own separate blockchain, it's basically already half-way there! From an engineering perspective, sharding is low-hanging fruit for a block-lattice structure like Raiblock's, especially when you compare it to how complicated it is for single-blockchain currencies.
For the record, I think that Raiblocks will scale just fine according to the current strategy laid out by Colin LeMahieu (u/meor) . By using only full nodes and hosting them in enterprise grade servers (basically datacenters), chances are good that the network will be able to keep up with future Tps (transaction per second) growth. Skeptics have been questioning if people are going to be willing to run nodes pro bono, just to support the network. But I don't doubt that many vendors will jump at the chance. If I'm Amazon, and I've been paying 3% of everything to Visa all these years, when there's an option to basically run my own Visa, I take it.
Payment networks like Paypal have been offering free person-to-person payments for years, eating the costs of processing those transactions in exchange for the opportunity to take their cut when those same people pay online vendors like Amazon. This makes business sense because only a minority of transactions are person-to-person anyway. Most payments result from people buying stuff. So, in a sense, vendors like Amazon have already been subsidizing our free transactions for years. By running Raiblocks nodes, they would still be subsidizing our transactions, but it would be a better deal than what they were getting before.
But have we forgotten something here? Is this really the dream of the instant, universal, decentralized, uncensorable payment network that was promised and only kinda delivered by Bitcoin? Decentralization comes in a spectrum, and while this is certainly better than a private blockchain like Ripple, the future of Raiblocks that we're looking at is a smallish number of supernodes run by a consortium of corporations, governments, and maybe a sprinkling of die-hard fans.
You may ask, but what about the nodes run by you and me on our dinky home computers and cable modem connections? Well, people need to remember that Raiblocks nodes need to talk to each other every time there's a transaction, in order to exchange their votes. The more nodes there are, the more messages have to be received and sent per node per transaction. Having more nodes may improve the decentralization, redundancy, and robustness of the network, but speed it definitely does not. Sure, the SSD of a computer running a mock node will handle 7000 tps, but the real bottleneck is network IO, not disk IO, and how many Comcast internet plans are going to keep up with 7000 x N messages per second, where N is the total number of nodes? If you take the message size to be 260 bytes (credit to u/juanjux's packet-sniffing skills) and the number of nodes to be 1000, that's 1.8 GB/s. Also, if you consider that at least two messages will need to be exchanged with every node (one for the sending wallet, one for the receiving), the network requirement per node becomes 3.6 GB/s. This requirement applies to both the download and upload bandwidth, since in addition to receiving votes from other nodes, you have to announce your own vote to all of them as well. Maybe with multicasting upload requirements can be relaxed, but the overall story is the same: you almost want to convince small players not to run their own nodes, so N doesn't grow too large. Hence, the lack of dividends.
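The back-of-envelope bandwidth figures above can be reproduced directly; all inputs are the post's own assumptions (7000 tps, 260-byte messages, 1000 nodes):

```python
# Reproducing the full-node bandwidth estimate: every node receives a vote
# from every other node for every transaction on the network.
tps = 7000          # transactions per second network-wide
msg_bytes = 260     # observed vote-message size (per the cited packet capture)
n_nodes = 1000      # assumed total node count N

per_node_Bps = tps * n_nodes * msg_bytes   # 1,820,000,000 B/s, ~1.8 GB/s

# Counting votes for both the sending and the receiving wallet doubles it:
both_wallets_Bps = 2 * per_node_Bps        # ~3.6 GB/s, both up and down
```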
So, if we're resigned to running Raiblocks from corporate supernodes in the future, we might want to ask ourselves, why is decentralization so important anyway? For 99.9% of the cases, I actually think it won't matter. People just want their transactions to complete in a low-cost and timely fashion. And that's why I think Ripple and Raiblocks on their current trajectories have bright futures. They are the petty cash of the future. But for bulk wealth storage, you want decentralization because it makes it hard for any one entity to gain control over your money. No government will be able to step in and freeze your funds if you're Wikileaks or a political dissident when your cryptocurrency network is hosted on millions of computers scattered across the internet. I know the millions number sounds outlandish given that Bitcoin itself has fewer than 12k nodes at present, but that's my vision for the future. And I hope that by the end of this essay, you'll agree it's plausible.
The main benefit of sharding is that it allows nodes to divide the task of hosting the ledger into smaller chunks, reducing the per-node bandwidth requirements to achieve a certain Tps. I'll show that this benefit comes without having to sacrifice ledger redundancy, so long as sufficient nodes can be recruited. One disadvantage that must be noted is the increased overhead of coordinating a large number of nodes subscribed to partial ledgers. At the very least, nodes will need to know how wealthy other nodes are for voting purposes. However, I don't see how an up-to-the-second update of nodal wealth is necessary, since wealth changes on the timescale of months, if not years. It should be sufficient to conduct a role call once every few weeks to update nodes on who the other nodes are and to impart information about wealth and ledger subscriptions. Nonetheless, in principle this overhead means it is still possible to have too many nodes even with sharding.
Raiblocks has a unique advantage over single-chain cryptocoins in that each wallet address is already its own blockchain. This makes it especially amenable to sharding, since each wallet can already be thought of as its own shard! You just need a clever algorithm to decide which nodes subscribe to which wallets. For the purposes of this analysis, I assume a random subscription, so that for example if both you and I subscribe to 10% of the ledger, our subscriptions are probabilistically independent, and we intersect on roughly one percent of the total wallet space. I will also assume that all nodes are identical to each other in bandwidth, though in practice I think each node's owner should decide how much bandwidth he is willing to commit, letting the node's software dynamically adjust its P to maintain the desired bandwidth, where P, or the participation level, is the fraction of the ledger that the node is subscribed to. That way, when the Tps of the network increases over time, each node will use the increasing bandwidth demand as a feedback signal to automatically lower its ledger subscription percentage. Then, all that would be missing for smooth and seamless network growth is a mechanism for ensuring node count growth.
Some math
Symbol Definition
mps messages per second received/sent per individual node
N total number of nodes
Tps transactions per second processed by the whole network
R ledger redundancy
P fractional participation level of an individual node
k role call frequency
From the definitions, it should be apparent that
(1) R = NP
There are two types of messages that nodes have to deal with, transaction messages and role-call messages. Transaction messages are those related to updating the ledger when money is sent from one wallet to another. For each transaction, each node presiding over the sending wallet/shard will need to
  1. Broadcast its vote to the other R members of the shard. In the normal case this is a thumbs up signal and no conflict resolution is required.
  2. Receive votes from the other R members of the shard
  3. Broadcast its thumbs up to the R members of the receiving wallet/shard
Each node presiding over the receiving wallet/shard will need to
  1. receive thumbs up signals from the R members of the sending wallet/shard
Therefore, on a macro level upload and download requirements are the same. (Two messages sent, two messages received.)
Role-call messages are those related to disseminating an active directory of which nodes are participating in which wallets and information about nodal wealth. Knowledge about each individual node is broadcast to the network at a rate of k. I think 10^-6 Hz is reasonable, for an update interval of roughly 12 days. For each update, all R nodes presiding over the wallet of the node whose information is being shared will broadcast their view of the node's wealth to all N nodes. Therefore, from the perspective of an individual node:
  1. The rate that role-call messages are received is kRN.
  2. The rate that role-call messages are sent is k(# node wallets presided over)N = k(NP)N = kRN.
Again, upload and download rates are the same. Since upload and download rates are symmetric (which intuitively should be true since every message that is sent needs to be received), the parameter mps can be used equally to describe upload and download bandwidth.
(2) mps = 2R(PTps) + kRN,
where the two terms correspond to the transaction and role-call messages, respectively. Using (1), (2) can be rewritten as
(3) mps = 2R^2·Tps/N + kRN
Here, we see an interesting relationship between the different message categories and the node count. For a fixed ledger redundancy R and Tps, the number of transaction messages is inversely proportional to the number of nodes. This is intuitive. If all of a sudden there are twice as many nodes and ledger redundancy remains the same, then each node has halved its ledger subscription and only has to deal with half as many transactions. This is the "many hands make light work" phenomenon in action. On the other hand, the number of role-call messages increases in proportion to the number of nodes. The interplay between these two factors determines the sweet spot where mps is at a local minimum. Since the calculus is straightforward, I'll leave it as an exercise to the reader to show that
(4) N_sweetspot = (2R·Tps/k)^(1/2)
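For readers who want to skip the calculus exercise, equation (4) can be sanity-checked numerically by scanning mps(N) from equation (3) for its minimum (R, Tps, k here are illustrative values consistent with the fixed parameters used later):

```python
# Numerical check of equation (4): scan mps(N) = 2*R**2*Tps/N + k*R*N for
# its minimum and compare to the closed form N_sweetspot = (2*R*Tps/k)**0.5.
R, Tps, k = 1000, 100, 1e-6

def mps(n):
    return 2 * R**2 * Tps / n + k * R * n

n_closed = (2 * R * Tps / k) ** 0.5                 # ~447,214 for these inputs
n_scan = min(range(100_000, 1_000_000, 500), key=mps)
# n_scan lands within one scan step of n_closed, and at n_closed the
# transaction term and role-call term are equal, as claimed in the text.
```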
Alternatively, another way of looking at things is to consider mps to be fixed. This may be more appropriate if each node is pegged at its committed bandwidth. Then (3) describes the relationship between the ledger redundancy and N. You may ask how this can be reconciled with (1), which seems to imply that N and R are directly proportional, but in this scenario each node is dynamically adjusting its ledger subscription P in response to a changing N to maintain a constant bandwidth mps. In this view, the sweet spot for N is where R is maximized. Interestingly, regardless of which view you take, you arrive at the same expression for the sweet spot (4).
If N < N_sweetspot, then transaction messages dominate the total message count. The system is in the transaction-heavy regime and needs more nodes to help carry the transaction load. If N > N_sweetspot (the node-heavy regime), transaction messages are low, but the number of role-call messages is large and it becomes expensive to keep the whole network in sync. When N = N_sweetspot, the two message categories occur at the same rate, which is easily verified by plugging (4) back into (3). This is when the network is at its most decentralized: message count per node is low while redundancy is high.
Note that N_sweetspot increases as Tps^(1/2). This implies that, as transaction rate increases, the network will not optimally scale without somehow attracting new people to run nodes. But the incentives can't be too good either, or N may increase beyond N_sweetspot. Ideally, a feedback mechanism using market forces will encourage the network to gravitate towards the sweet spot (more on this later).
One special case is where P=1 and N=R. This is when the network is at its most centralized operating point, with every single node acting as a full node. This minimizes node count for a given redundancy level R and is how Raiblocks is currently designed. I will show that for most real-world numbers, the role-call term is so small as to be negligible, but the mps is many orders of magnitude higher than in the decentralized case because of the large transaction term.
Assuming that we are able to keep the network operating at its sweet spot, by plugging (4) into (3), we arrive at
(5) mps_sweetspot = R^(3/2)·(8k·Tps)^(1/2)
If instead we plug N=R into (3), we arrive at
(6) mps_centralized = 2R·Tps + kR^2
So, we see that in the decentralized case the mps of individual nodes increases as the square root of Tps, a much more sustainable form of scaling than the linear relationship in the centralized case.
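The square-root versus linear scaling is easy to see by evaluating (5) and (6) directly, using the fixed parameters from the tables:

```python
# Equations (5) and (6) side by side: per-node load grows as sqrt(Tps) when
# sharded at the sweet spot, but linearly in Tps for full nodes.
R, k = 1000, 1e-6

def mps_sweetspot(tps):
    return R**1.5 * (8 * k * tps) ** 0.5

def mps_centralized(tps):
    return 2 * R * tps + k * R**2

# A 100x jump in network throughput costs a sharded node only 10x bandwidth,
# but costs a full node ~100x:
sharded_growth = mps_sweetspot(10_000) / mps_sweetspot(100)        # 10.0
full_node_growth = mps_centralized(10_000) / mps_centralized(100)  # ~100
```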
And now, the moment we've all been waiting for: plugging various network load scenarios into these formulas and comparing the most decentralized case to the most centralized. Real world operation will be somewhere in between these two extremes.
Fixed parameters
packet size (bytes) 260
k (Hz) 1.00E-06
R 1000
transaction fee ($) $0.01
Tps 0.1 1 10 100 1,000 10,000 100,000
Total monthly dividends $2,592 $25,920 $259,200 $2,592,000 $25,920,000 $259,200,000 $2,592,000,000
Decentralized node requirements
mps (Hz) 28 89 283 894 2,828 8,944 28,284
node traffic (bytes/s) 7.35E+03 2.33E+04 7.35E+04 2.33E+05 7.35E+05 2.33E+06 7.35E+06
N 1.41E+04 4.47E+04 1.41E+05 4.47E+05 1.41E+06 4.47E+06 1.41E+07
P 7.07E-02 2.24E-02 7.07E-03 2.24E-03 7.07E-04 2.24E-04 7.07E-05
Total Network Traffic (bytes/s) 1.04E+08 1.04E+09 1.04E+10 1.04E+11 1.04E+12 1.04E+13 1.04E+14
Yearly Network Traffic (bytes) 3.28E+15 3.28E+16 3.28E+17 3.28E+18 3.28E+19 3.28E+20 3.28E+21
Decentralized node income
monthly per node ($) $0.18 $0.58 $1.83 $5.80 $18.33 $57.96 $183.28
income/GB ($/GB) $0.0096 $0.0096 $0.0096 $0.0096 $0.0096 $0.0096 $0.0096
Centralized node requirements
mps (Hz) 2.01E+02 2.00E+03 2.00E+04 2.00E+05 2.00E+06 2.00E+07 2.00E+08
node traffic (bytes/s) 5.23E+04 5.20E+05 5.20E+06 5.20E+07 5.20E+08 5.20E+09 5.20E+10
N 1000 1000 1000 1000 1000 1000 1000
P 1 1 1 1 1 1 1
Total Network Traffic (bytes/s) 5.23E+07 5.20E+08 5.20E+09 5.20E+10 5.20E+11 5.20E+12 5.20E+13
Yearly Network Traffic (bytes) 1.65E+15 1.64E+16 1.64E+17 1.64E+18 1.64E+19 1.64E+20 1.64E+21
Centralized node income
monthly per node ($) $2.59 $25.92 $259.20 $2,592 $25,920 $259,200 $2,592,000
income/GB ($/GB) 0.0191 0.0192 0.0192 0.0192 0.0192 0.0192 0.0192
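Any column of the tables above can be regenerated from equations (1)-(6) plus the fixed parameters; here is the Tps = 100 column of the decentralized case as a worked example:

```python
# Reproducing the Tps = 100 column of the decentralized tables from
# equations (1)-(6) and the fixed parameters (R = 1000, k = 1e-6 Hz,
# 260-byte messages, $0.01 fee, 30-day month).
R, k, msg_bytes, fee, Tps = 1000, 1e-6, 260, 0.01, 100

N = (2 * R * Tps / k) ** 0.5             # eq. (4): ~4.47E+05 nodes
P = R / N                                # eq. (1): ~2.24E-03 of the ledger
mps = R**1.5 * (8 * k * Tps) ** 0.5      # eq. (5): ~894 messages/s per node
node_traffic = mps * msg_bytes           # ~2.33E+05 bytes/s per node

# Income: total fee revenue split evenly across the N nodes.
monthly_total = fee * Tps * 30 * 86400   # $2,592,000 per month network-wide
monthly_per_node = monthly_total / N     # ~$5.80 per node per month
```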
Yes, I did sneak a transaction fee in there, which is anathema to the Raiblocks way. But I wanted to incentivize people to run nodes. Observe that income per gigabyte remains the same, independent of network Tps, because both total income and total bandwidth scale proportionally to Tps. The decentralized case has half the income/GB because the role-call overhead doubles network activity. In either case, the income per GB depends on transaction fee and is independent of network load.
An interesting number to check online is the price/GB that various ISP's charge. With Google Fiber, it is possible to purchase bandwidth as low as $0.00076 per GB, meaning that it may be possible for nodes to be profitable even if fees were lowered by another order of magnitude. As time progresses, bandwidth costs will only go down, so fees may be able to be lowered even further past that. But because of electricity and other miscellaneous costs, I think a one cent transaction fee is probably pretty close to what people need to incentivize them to run nodes.
With sharding, even many home broadband connections today can feasibly support 100,000 transactions per second, with each node subscribed to about one ten thousandth of the total ledger and handling about 7 MB/s. Getting 14 million people to run nodes may seem like a tall order, but the financial incentives are there. Just look at all the people who have rushed to do GPU mining. Here, bandwidth replaces hashing power as the tool used for mining.
According to a study done by Cisco, yearly internet traffic is projected to reach 3.3 ZB by 2021. Looking at the table, that means if we ever reach 100,000 Tps, Sharded Raiblocks traffic would be equal to the rest of the world combined. Yikes! But if you think about it, nobody along the way is taking on an unbearable load. Users pay low fees for transactions. Nodes get dividends. ISPs get additional customers. The only ones who lose out are Visa, Paypal, and banks.
With such a large network presence, the cultural impact of this coin would be huge. That, in addition to the sheer number of participants running nodes as side businesses would cement this as the coin of the people.
From a macro level, I see no red flags that would indicate this is economically or technically infeasible. Of course, the devil's in the details so I'm posting this to see if people think I'm on the right track. To me, it seems that the possibilities are tantalizing and someone needs to build a test net to see if this idea flies (u/meor, if any of this sounds appealing, are you guys hiring? ;) ).
I've only scratched the surface and there are many other topics that are worthy of deeper discussion:
submitted by Cookiemole to RaiBlocks [link] [comments]

With the bitcoin blockchain at well over 100 GB, why is point 7. of the Satoshi Nakamoto whitepaper not implemented?

There are quite a few complaints about the disk space needed to run a full node of the bitcoin blockchain and the time to download the blockchain. Point 7 of the aforementioned whitepaper reads:
"7. Reclaiming Disk Space Once the latest transaction in a coin is buried under enough blocks, the spent transactions before it can be discarded to save disk space. To facilitate this without breaking the block's hash, transactions are hashed in a Merkle Tree, with only the root included in the block's hash. Old blocks can then be compacted by stubbing off branches of the tree. The interior hashes do not need to be stored. A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = *4.2MB per year*. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory."
Why is this not being implemented to get the required disk space down to almost nothing (about 4.2 MB/year)?
DISCLAIMER: I'm not a programmer, cryptographer or geek, so I don't understand the Merkle Tree and respective analysis/conclusions. Nevertheless it seems clear, that Satoshi Nakamoto did foresee the existing problem and included a solution in the whitepaper.
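The quoted arithmetic from the whitepaper checks out:

```python
# Section 7 arithmetic: with spent transactions pruned via the Merkle tree,
# a node would keep only 80-byte block headers.
header_bytes = 80
blocks_per_year = 6 * 24 * 365                  # one block per 10 minutes
header_growth = header_bytes * blocks_per_year  # 4,204,800 bytes ~ 4.2 MB/yr
```

Worth noting (as a partial answer to the question): Bitcoin Core's `-prune` option does implement a version of this, discarding old raw block data after validation, but a validating node still needs the full set of unspent outputs, which is why pruned nodes occupy gigabytes rather than the headers-only 4.2 MB/year figure.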
submitted by pewi6969 to Bitcoin [link] [comments]

Antminer S9 no longer hashing?

Good morning folks,
I have an Antminer S9 that has performed flawlessly. After I moved it to a better location, I noticed that it no longer seems to be working. The green light is flashing, but it doesn't seem to be hashing to my pool (Nicehash).
I'm fairly new to Bitcoin mining and can't make sense of some of the information on my status screen. Before I jump into Bitmain support, I was wondering if anyone could clue me in as to what the problem might be.
I'll post my Kernel Log here.
Thank you in advance!!!
KERNEL LOG: [ 0.000000] Booting Linux on physical CPU 0x0
[ 0.000000] Linux version 3.14.0-xilinx-ge8a2f71-dirty ([email protected]) (gcc version 4.8.3 20140320 (prerelease) (Sourcery CodeBench Lite 2014.05-23) ) #82 SMP PREEMPT Tue May 16 19:49:53 CST 2017
[ 0.000000] CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c5387d
[ 0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[ 0.000000] Machine model: Xilinx Zynq
[ 0.000000] cma: CMA: reserved 128 MiB at 27800000
[ 0.000000] Memory policy: Data cache writealloc
[ 0.000000] On node 0 totalpages: 258048
[ 0.000000] free_area_init_node: node 0, pgdat c0740a40, node_mem_map e6fd8000
[ 0.000000] Normal zone: 1520 pages used for memmap
[ 0.000000] Normal zone: 0 pages reserved
[ 0.000000] Normal zone: 194560 pages, LIFO batch:31
[ 0.000000] HighMem zone: 496 pages used for memmap
[ 0.000000] HighMem zone: 63488 pages, LIFO batch:15
[ 0.000000] PERCPU: Embedded 8 pages/cpu @e6fc0000 s9088 r8192 d15488 u32768
[ 0.000000] pcpu-alloc: s9088 r8192 d15488 u32768 alloc=8*4096
[ 0.000000] pcpu-alloc: [0] 0 [0] 1
[ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 256528
[ 0.000000] Kernel command line: noinitrd mem=1008M console=ttyPS0,115200 root=ubi0:rootfs ubi.mtd=1 rootfstype=ubifs rw rootwait
[ 0.000000] PID hash table entries: 4096 (order: 2, 16384 bytes)
[ 0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes)
[ 0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
[ 0.000000] Memory: 884148K/1032192K available (5032K kernel code, 283K rwdata, 1916K rodata, 204K init, 258K bss, 148044K reserved, 253952K highmem)
[ 0.000000] Virtual kernel memory layout:
[ 0.000000] vector : 0xffff0000 - 0xffff1000 ( 4 kB)
[ 0.000000] fixmap : 0xfff00000 - 0xfffe0000 ( 896 kB)
[ 0.000000] vmalloc : 0xf0000000 - 0xff000000 ( 240 MB)
[ 0.000000] lowmem : 0xc0000000 - 0xef800000 ( 760 MB)
[ 0.000000] pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB)
[ 0.000000] modules : 0xbf000000 - 0xbfe00000 ( 14 MB)
[ 0.000000] .text : 0xc0008000 - 0xc06d1374 (6949 kB)
[ 0.000000] .init : 0xc06d2000 - 0xc0705380 ( 205 kB)
[ 0.000000] .data : 0xc0706000 - 0xc074cf78 ( 284 kB)
[ 0.000000] .bss : 0xc074cf84 - 0xc078d9fc ( 259 kB)
[ 0.000000] Preemptible hierarchical RCU implementation.
[ 0.000000] Dump stacks of tasks blocking RCU-preempt GP.
[ 0.000000] RCU restricting CPUs from NR_CPUS=4 to nr_cpu_ids=2.
[ 0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
[ 0.000000] NR_IRQS:16 nr_irqs:16 16
[ 0.000000] ps7-slcr mapped to f0004000
[ 0.000000] zynq_clock_init: clkc starts at f0004100
[ 0.000000] Zynq clock init
[ 0.000015] sched_clock: 64 bits at 333MHz, resolution 3ns, wraps every 3298534883328ns
[ 0.000308] ps7-ttc #0 at f0006000, irq=43
[ 0.000618] Console: colour dummy device 80x30
[ 0.000658] Calibrating delay loop... 1325.46 BogoMIPS (lpj=6627328)
[ 0.040207] pid_max: default: 32768 minimum: 301
[ 0.040436] Mount-cache hash table entries: 2048 (order: 1, 8192 bytes)
[ 0.040459] Mountpoint-cache hash table entries: 2048 (order: 1, 8192 bytes)
[ 0.042612] CPU: Testing write buffer coherency: ok
[ 0.042974] CPU0: thread -1, cpu 0, socket 0, mpidr 80000000
[ 0.043036] Setting up static identity map for 0x4c4b00 - 0x4c4b58
[ 0.043263] L310 cache controller enabled
[ 0.043282] l2x0: 8 ways, CACHE_ID 0x410000c8, AUX_CTRL 0x72760000, Cache size: 512 kB
[ 0.121037] CPU1: Booted secondary processor
[ 0.210227] CPU1: thread -1, cpu 1, socket 0, mpidr 80000001
[ 0.210357] Brought up 2 CPUs
[ 0.210376] SMP: Total of 2 processors activated.
[ 0.210385] CPU: All CPU(s) started in SVC mode.
[ 0.211051] devtmpfs: initialized
[ 0.213481] VFP support v0.3: implementor 41 architecture 3 part 30 variant 9 rev 4
[ 0.214724] regulator-dummy: no parameters
[ 0.223736] NET: Registered protocol family 16
[ 0.226067] DMA: preallocated 256 KiB pool for atomic coherent allocations
[ 0.228361] cpuidle: using governor ladder
[ 0.228374] cpuidle: using governor menu
[ 0.235908] syscon f8000000.ps7-slcr: regmap [mem 0xf8000000-0xf8000fff] registered
[ 0.237440] hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers.
[ 0.237453] hw-breakpoint: maximum watchpoint size is 4 bytes.
[ 0.237572] zynq-ocm f800c000.ps7-ocmc: ZYNQ OCM pool: 256 KiB @ 0xf0080000
[ 0.259435] bio: create slab at 0
[ 0.261172] vgaarb: loaded
[ 0.261915] SCSI subsystem initialized
[ 0.262814] usbcore: registered new interface driver usbfs
[ 0.262985] usbcore: registered new interface driver hub
[ 0.263217] usbcore: registered new device driver usb
[ 0.263743] media: Linux media interface: v0.10
[ 0.263902] Linux video capture interface: v2.00
[ 0.264150] pps_core: LinuxPPS API ver. 1 registered
[ 0.000000] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <[email protected]>
[ 0.264286] PTP clock support registered
[ 0.264656] EDAC MC: Ver: 3.0.0
[ 0.265719] Advanced Linux Sound Architecture Driver Initialized.
[ 0.268708] DMA-API: preallocated 4096 debug entries
[ 0.268724] DMA-API: debugging enabled by kernel config
[ 0.268820] Switched to clocksource arm_global_timer
[ 0.289596] NET: Registered protocol family 2
[ 0.290280] TCP established hash table entries: 8192 (order: 3, 32768 bytes)
[ 0.290375] TCP bind hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.290535] TCP: Hash tables configured (established 8192 bind 8192)
[ 0.290612] TCP: reno registered
[ 0.290633] UDP hash table entries: 512 (order: 2, 16384 bytes)
[ 0.290689] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
[ 0.290971] NET: Registered protocol family 1
[ 0.291346] RPC: Registered named UNIX socket transport module.
[ 0.291359] RPC: Registered udp transport module.
[ 0.291368] RPC: Registered tcp transport module.
[ 0.291376] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 0.291391] PCI: CLS 0 bytes, default 64
[ 0.291857] hw perfevents: enabled with ARMv7 Cortex-A9 PMU driver, 7 counters available
[ 0.293945] futex hash table entries: 512 (order: 3, 32768 bytes)
[ 0.295408] bounce pool size: 64 pages
[ 0.296323] jffs2: version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
[ 0.296525] msgmni has been set to 1486
[ 0.297330] io scheduler noop registered
[ 0.297343] io scheduler deadline registered
[ 0.297385] io scheduler cfq registered (default)
[ 0.308358] dma-pl330 f8003000.ps7-dma: Loaded driver for PL330 DMAC-2364208
[ 0.308380] dma-pl330 f8003000.ps7-dma: DBUFF-128x8bytes Num_Chans-8 Num_Peri-4 Num_Events-16
[ 0.434378] e0001000.serial: ttyPS0 at MMIO 0xe0001000 (irq = 82, base_baud = 3124999) is a xuartps
[ 1.006815] console [ttyPS0] enabled
[ 1.011106] xdevcfg f8007000.ps7-dev-cfg: ioremap 0xf8007000 to f0068000
[ 1.018731] [drm] Initialized drm 1.1.0 20060810
[ 1.036029] brd: module loaded
[ 1.045494] loop: module loaded
[ 1.055163] e1000e: Intel(R) PRO/1000 Network Driver - 2.3.2-k
[ 1.060985] e1000e: Copyright(c) 1999 - 2013 Intel Corporation.
[ 1.068779] libphy: XEMACPS mii bus: probed
[ 1.073341] ------------- phy_id = 0x3625e62
[ 1.078112] xemacps e000b000.ps7-ethernet: pdev->id -1, baseaddr 0xe000b000, irq 54
[ 1.087072] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[ 1.093912] ehci-pci: EHCI PCI platform driver
[ 1.101155] zynq-dr e0002000.ps7-usb: Unable to init USB phy, missing?
[ 1.107952] usbcore: registered new interface driver usb-storage
[ 1.114850] mousedev: PS/2 mouse device common for all mice
[ 1.120975] i2c /dev entries driver
[ 1.127946] zynq-edac f8006000.ps7-ddrc: ecc not enabled
[ 1.133474] cpufreq_cpu0: failed to get cpu0 regulator: -19
[ 1.139426] Xilinx Zynq CpuIdle Driver started
[ 1.144261] sdhci: Secure Digital Host Controller Interface driver
[ 1.150384] sdhci: Copyright(c) Pierre Ossman
[ 1.154700] sdhci-pltfm: SDHCI platform and OF driver helper
[ 1.161601] mmc0: no vqmmc regulator found
[ 1.165614] mmc0: no vmmc regulator found
[ 1.208845] mmc0: SDHCI controller on e0100000.ps7-sdio [e0100000.ps7-sdio] using ADMA
[ 1.217539] usbcore: registered new interface driver usbhid
[ 1.223054] usbhid: USB HID core driver
[ 1.227806] nand: device found, Manufacturer ID: 0x2c, Chip ID: 0xda
[ 1.234107] nand: Micron MT29F2G08ABAEAWP
[ 1.238074] nand: 256MiB, SLC, page size: 2048, OOB size: 64
[ 1.244027] Bad block table found at page 131008, version 0x01
[ 1.250251] Bad block table found at page 130944, version 0x01
[ 1.256303] 3 ofpart partitions found on MTD device pl353-nand
[ 1.262080] Creating 3 MTD partitions on "pl353-nand":
[ 1.267174] 0x000000000000-0x000002000000 : "BOOT.bin-env-dts-kernel"
[ 1.275230] 0x000002000000-0x00000b000000 : "angstram-rootfs"
[ 1.282582] 0x00000b000000-0x000010000000 : "upgrade-rootfs"
[ 1.291630] TCP: cubic registered
[ 1.294869] NET: Registered protocol family 17
[ 1.299597] Registering SWP/SWPB emulation handler
[ 1.305497] regulator-dummy: disabling
[ 1.309875] UBI: attaching mtd1 to ubi0
[ 1.836565] UBI: scanning is finished
[ 1.848221] UBI: attached mtd1 (name "angstram-rootfs", size 144 MiB) to ubi0
[ 1.855302] UBI: PEB size: 131072 bytes (128 KiB), LEB size: 126976 bytes
[ 1.862063] UBI: min./max. I/O unit sizes: 2048/2048, sub-page size 2048
[ 1.868728] UBI: VID header offset: 2048 (aligned 2048), data offset: 4096
[ 1.875605] UBI: good PEBs: 1152, bad PEBs: 0, corrupted PEBs: 0
[ 1.881586] UBI: user volume: 1, internal volumes: 1, max. volumes count: 128
[ 1.888693] UBI: max/mean erase counter: 4/1, WL threshold: 4096, image sequence number: 1134783803
[ 1.897736] UBI: available PEBs: 0, total reserved PEBs: 1152, PEBs reserved for bad PEB handling: 40
[ 1.906953] UBI: background thread "ubi_bgt0d" started, PID 1080
[ 1.906959] drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
[ 1.911038] ALSA device list:
[ 1.911042] No soundcards found.
[ 1.927420] UBIFS: background thread "ubifs_bgt0_0" started, PID 1082
[ 1.956473] UBIFS: recovery needed
[ 2.016970] UBIFS: recovery completed
[ 2.020709] UBIFS: mounted UBI device 0, volume 0, name "rootfs"
[ 2.026635] UBIFS: LEB size: 126976 bytes (124 KiB), min./max. I/O unit sizes: 2048 bytes/2048 bytes
[ 2.035771] UBIFS: FS size: 128626688 bytes (122 MiB, 1013 LEBs), journal size 9023488 bytes (8 MiB, 72 LEBs)
[ 2.045653] UBIFS: reserved for root: 0 bytes (0 KiB)
[ 2.050693] UBIFS: media format: w4/r0 (latest is w4/r0), UUID B079DD56-06BB-4E31-8F5E-A6604F480DB2, small LPT model
[ 2.061987] VFS: Mounted root (ubifs filesystem) on device 0:11.
[ 2.069184] devtmpfs: mounted
[ 2.072297] Freeing unused kernel memory: 204K (c06d2000 - c0705000)
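The UBI/UBIFS figures in the mount messages above are internally consistent, and checking them is a quick sanity test of the NAND layout. A minimal sketch, using only the numbers quoted in the log (128 KiB PEBs, 126976-byte LEBs, 1152 PEBs, 1013 LEBs):

```python
# UBI/UBIFS geometry as reported by the boot log above.
peb_size, leb_size = 131072, 126976   # physical / logical erase block sizes
total_pebs = 1152                     # "good PEBs: 1152"
fs_lebs = 1013                        # "FS size: ... 1013 LEBs"

# Attached volume size: 1152 PEBs of 128 KiB each.
print(total_pebs * peb_size // 2**20, "MiB")  # 144, matching "size 144 MiB"

# Mounted filesystem size: 1013 LEBs of 126976 bytes each.
print(fs_lebs * leb_size)  # 128626688, matching "FS size: 128626688 bytes"
```

The LEB is smaller than the PEB because UBI reserves the first two pages of each erase block for its own EC and VID headers (the "data offset: 4096" line).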
[ 2.920928] random: dd urandom read with 0 bits of entropy available
[ 3.318860] bcm54xx_config_init
[ 3.928853] bcm54xx_config_init
[ 7.929682] xemacps e000b000.ps7-ethernet: Set clk to 124999998 Hz
[ 7.935787] xemacps e000b000.ps7-ethernet: link up (1000/FULL)
[ 22.563181] In axi fpga driver!
[ 22.566260] request_mem_region OK!
[ 22.569676] AXI fpga dev virtual address is 0xf01fe000
[ 22.574751] *base_vir_addr = 0x8c510
[ 22.590723] In fpga mem driver!
[ 22.593791] request_mem_region OK!
[ 22.597361] fpga mem virtual address is 0xf3000000
[ 23.408156] bcm54xx_config_init
[ 24.038071] bcm54xx_config_init
[ 28.038487] xemacps e000b000.ps7-ethernet: Set clk to 124999998 Hz
[ 28.044593] xemacps e000b000.ps7-ethernet: link up (1000/FULL)
This is XILINX board. Totalram: 1039794176
Detect 1GB control board of XILINX
DETECT HW version=0008c510
miner ID : 8118b4c610358854
Miner Type = S9
AsicType = 1387
real AsicNum = 63
use critical mode to search freq...
get PLUG ON=0x000000e0
Find hashboard on Chain[5]
Find hashboard on Chain[6]
Find hashboard on Chain[7]
set_reset_allhashboard = 0x0000ffff
Check chain[5] PIC fw version=0x03
Check chain[6] PIC fw version=0x03
Check chain[7] PIC fw version=0x03
chain[5]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
has freq in PIC, will disable freq setting.
chain[5] has freq in PIC and will jump over...
Chain[5] has core num in PIC
Chain[5] ASIC[15] has core num=5
Check chain[5] PIC fw version=0x03
chain[6]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
has freq in PIC, will disable freq setting.
chain[6] has freq in PIC and will jump over...
Chain[6] has core num in PIC
Chain[6] ASIC[17] has core num=8
Check chain[6] PIC fw version=0x03
chain[7]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
has freq in PIC, will disable freq setting.
chain[7] has freq in PIC and will jump over...
Chain[7] has core num in PIC
Chain[7] ASIC[8] has core num=13
Chain[7] ASIC[9] has core num=11
Chain[7] ASIC[13] has core num=11
Chain[7] ASIC[19] has core num=14
Chain[7] ASIC[30] has core num=6
Chain[7] ASIC[32] has core num=1
Chain[7] ASIC[42] has core num=2
Chain[7] ASIC[55] has core num=1
Chain[7] ASIC[57] has core num=2
Check chain[7] PIC fw version=0x03
get PIC voltage=108 on chain[5], value=880
get PIC voltage=74 on chain[6], value=900
get PIC voltage=108 on chain[7], value=880
set_reset_allhashboard = 0x00000000
chain[5] temp offset record: 62,0,0,0,0,0,35,28
chain[5] temp chip I2C addr=0x98
chain[5] has no middle temp, use special fix mode.
chain[6] temp offset record: 62,0,0,0,0,0,35,28
chain[6] temp chip I2C addr=0x98
chain[6] has no middle temp, use special fix mode.
chain[7] temp offset record: 62,0,0,0,0,0,35,28
chain[7] temp chip I2C addr=0x98
chain[7] has no middle temp, use special fix mode.
set_reset_allhashboard = 0x0000ffff
set_reset_allhashboard = 0x00000000
CRC error counter=0
set command mode to VIL
--- check asic number
After Get ASIC NUM CRC error counter=0
The min freq=700
set real timeout 52, need sleep=379392
After TEST CRC error counter=0
set_reset_allhashboard = 0x0000ffff
set_reset_allhashboard = 0x00000000
search freq for 1 times, completed chain = 3, total chain num = 3
set_reset_allhashboard = 0x0000ffff
set_reset_allhashboard = 0x00000000
restart Miner chance num=2
waiting for receive_func to exit!
waiting for pic heart to exit!
bmminer not found= 1643 root 0:00 grep bmminer
bmminer not found, restart bmminer ...
This is user mode for mining
Detect 1GB control board of XILINX
Miner Type = S9
Miner compile time: Fri Nov 17 17:57:49 CST 2017 type: Antminer S9
set_reset_allhashboard = 0x0000ffff
set_reset_allhashboard = 0x00000000
set_reset_allhashboard = 0x0000ffff
miner ID : 8118b4c610358854
set_reset_allhashboard = 0x0000ffff
Checking fans!
get fan[2] speed=6120
get fan[2] speed=6120
get fan[2] speed=6120
get fan[2] speed=6120
get fan[2] speed=6120
get fan[2] speed=6120
get fan[5] speed=13440
get fan[2] speed=6120
get fan[5] speed=13440
get fan[2] speed=6120
get fan[5] speed=13440
chain[5]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
Chain[J6] has backup chain_voltage=880
Chain[J6] test patten OK temp=-126
Check chain[5] PIC fw version=0x03
chain[6]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
Chain[J7] has backup chain_voltage=900
Chain[J7] test patten OK temp=-120
Check chain[6] PIC fw version=0x03
chain[7]: [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255] [63:255]
Chain[J8] has backup chain_voltage=880
Chain[J8] test patten OK temp=-125
Check chain[7] PIC fw version=0x03
Chain[J6] orignal chain_voltage_pic=108 value=880
Chain[J7] orignal chain_voltage_pic=74 value=900
Chain[J8] orignal chain_voltage_pic=108 value=880
set_reset_allhashboard = 0x0000ffff
set_reset_allhashboard = 0x00000000
Chain[J6] has 63 asic
Chain[J7] has 63 asic
Chain[J8] has 63 asic
Chain[J6] has core num in PIC
Chain[J6] ASIC[15] has core num=5
Chain[J7] has core num in PIC
Chain[J7] ASIC[17] has core num=8
Chain[J8] has core num in PIC
Chain[J8] ASIC[8] has core num=13
Chain[J8] ASIC[9] has core num=11
Chain[J8] ASIC[13] has core num=11
Chain[J8] ASIC[19] has core num=14
Chain[J8] ASIC[30] has core num=6
Chain[J8] ASIC[32] has core num=1
Chain[J8] ASIC[42] has core num=2
Chain[J8] ASIC[55] has core num=1
Chain[J8] ASIC[57] has core num=2
miner total rate=13999GH/s fixed rate=13500GH/s
read PIC voltage=940 on chain[5]
Chain:5 chipnum=63
Chain[J6] voltage added=0.2V
Chain:5 temp offset=0
Chain:5 base freq=487
Asic[ 0]:618
Asic[ 1]:631 Asic[ 2]:681 Asic[ 3]:618 Asic[ 4]:631 Asic[ 5]:681 Asic[ 6]:618 Asic[ 7]:631 Asic[ 8]:675
Asic[ 9]:618 Asic[10]:631 Asic[11]:681 Asic[12]:631 Asic[13]:637 Asic[14]:606 Asic[15]:487 Asic[16]:637
Asic[17]:675 Asic[18]:618 Asic[19]:637 Asic[20]:675 Asic[21]:631 Asic[22]:650 Asic[23]:687 Asic[24]:631
Asic[25]:537 Asic[26]:687 Asic[27]:631 Asic[28]:587 Asic[29]:687 Asic[30]:612 Asic[31]:650 Asic[32]:687
Asic[33]:631 Asic[34]:650 Asic[35]:687 Asic[36]:631 Asic[37]:662 Asic[38]:693 Asic[39]:631 Asic[40]:662
Asic[41]:662 Asic[42]:543 Asic[43]:668 Asic[44]:693 Asic[45]:568 Asic[46]:675 Asic[47]:700 Asic[48]:631
Asic[49]:568 Asic[50]:700 Asic[51]:631 Asic[52]:625 Asic[53]:700 Asic[54]:631 Asic[55]:675 Asic[56]:662
Asic[57]:631 Asic[58]:662 Asic[59]:687 Asic[60]:631 Asic[61]:681 Asic[62]:700
Chain:5 max freq=700
Chain:5 min freq=487
read PIC voltage=940 on chain[6]
Chain:6 chipnum=63
Chain[J7] voltage added=0.1V
Chain:6 temp offset=0
Chain:6 base freq=687
Asic[ 0]:650
Asic[ 1]:650 Asic[ 2]:650 Asic[ 3]:650 Asic[ 4]:650 Asic[ 5]:650 Asic[ 6]:650 Asic[ 7]:650 Asic[ 8]:650
Asic[ 9]:650 Asic[10]:650 Asic[11]:650 Asic[12]:650 Asic[13]:650 Asic[14]:650 Asic[15]:650 Asic[16]:650
Asic[17]:650 Asic[18]:650 Asic[19]:650 Asic[20]:650 Asic[21]:650 Asic[22]:650 Asic[23]:650 Asic[24]:650
Asic[25]:650 Asic[26]:656 Asic[27]:656 Asic[28]:656 Asic[29]:656 Asic[30]:656 Asic[31]:656 Asic[32]:656
Asic[33]:656 Asic[34]:656 Asic[35]:656 Asic[36]:656 Asic[37]:656 Asic[38]:656 Asic[39]:656 Asic[40]:656
Asic[41]:656 Asic[42]:656 Asic[43]:656 Asic[44]:656 Asic[45]:656 Asic[46]:656 Asic[47]:656 Asic[48]:656
Asic[49]:656 Asic[50]:656 Asic[51]:656 Asic[52]:656 Asic[53]:656 Asic[54]:656 Asic[55]:656 Asic[56]:656
Asic[57]:656 Asic[58]:656 Asic[59]:656 Asic[60]:656 Asic[61]:656 Asic[62]:656
Chain:6 max freq=656
Chain:6 min freq=650
read PIC voltage=940 on chain[7]
Chain:7 chipnum=63
Chain[J8] voltage added=0.2V
Chain:7 temp offset=0
Chain:7 base freq=637
Asic[ 0]:656
Asic[ 1]:656 Asic[ 2]:656 Asic[ 3]:656 Asic[ 4]:656 Asic[ 5]:656 Asic[ 6]:656 Asic[ 7]:656 Asic[ 8]:637
Asic[ 9]:637 Asic[10]:656 Asic[11]:656 Asic[12]:656 Asic[13]:637 Asic[14]:656 Asic[15]:662 Asic[16]:662
Asic[17]:662 Asic[18]:662 Asic[19]:637 Asic[20]:662 Asic[21]:662 Asic[22]:662 Asic[23]:662 Asic[24]:662
Asic[25]:662 Asic[26]:662 Asic[27]:662 Asic[28]:662 Asic[29]:662 Asic[30]:637 Asic[31]:662 Asic[32]:662
Asic[33]:662 Asic[34]:662 Asic[35]:662 Asic[36]:662 Asic[37]:662 Asic[38]:662 Asic[39]:662 Asic[40]:662
Asic[41]:662 Asic[42]:650 Asic[43]:662 Asic[44]:662 Asic[45]:662 Asic[46]:662 Asic[47]:662 Asic[48]:662
Asic[49]:662 Asic[50]:662 Asic[51]:662 Asic[52]:662 Asic[53]:662 Asic[54]:662 Asic[55]:650 Asic[56]:662
Asic[57]:650 Asic[58]:662 Asic[59]:662 Asic[60]:662 Asic[61]:662 Asic[62]:662
Chain:7 max freq=662
Chain:7 min freq=637
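The per-chip frequency maps above can be parsed mechanically; the "max freq"/"min freq" summary lines are just the extremes of those tables. A minimal parsing sketch, fed with a subset of the chain[5] entries quoted above:

```python
import re

# A few frequency-map entries quoted from the chain[5] scan above.
log_lines = [
    "Asic[ 0]:618",
    "Asic[ 1]:631 Asic[ 2]:681 Asic[ 3]:618 Asic[15]:487",
    "Asic[47]:700 Asic[62]:700",
]

# Each entry has the form "Asic[<index>]:<freq>"; collect the frequencies.
freqs = [int(m) for line in log_lines
         for m in re.findall(r"Asic\[\s*\d+\]:(\d+)", line)]

print("min freq =", min(freqs))  # 487, matching "Chain:5 min freq=487"
print("max freq =", max(freqs))  # 700, matching "Chain:5 max freq=700"
```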
Miner fix freq ...
read PIC voltage=940 on chain[5]
Chain:5 chipnum=63
Chain[J6] voltage added=0.2V
Chain:5 temp offset=0
Chain:5 base freq=487
Asic[ 0]:618
Asic[ 1]:631 Asic[ 2]:650 Asic[ 3]:618 Asic[ 4]:631 Asic[ 5]:656 Asic[ 6]:618 Asic[ 7]:631 Asic[ 8]:656
Asic[ 9]:618 Asic[10]:631 Asic[11]:656 Asic[12]:631 Asic[13]:637 Asic[14]:606 Asic[15]:487 Asic[16]:637
Asic[17]:656 Asic[18]:618 Asic[19]:637 Asic[20]:656 Asic[21]:631 Asic[22]:650 Asic[23]:656 Asic[24]:631
Asic[25]:537 Asic[26]:656 Asic[27]:631 Asic[28]:587 Asic[29]:656 Asic[30]:612 Asic[31]:650 Asic[32]:656
Asic[33]:631 Asic[34]:650 Asic[35]:656 Asic[36]:631 Asic[37]:656 Asic[38]:656 Asic[39]:631 Asic[40]:656
Asic[41]:656 Asic[42]:543 Asic[43]:656 Asic[44]:656 Asic[45]:568 Asic[46]:656 Asic[47]:656 Asic[48]:631
Asic[49]:568 Asic[50]:656 Asic[51]:631 Asic[52]:625 Asic[53]:656 Asic[54]:631 Asic[55]:656 Asic[56]:656
Asic[57]:631 Asic[58]:656 Asic[59]:656 Asic[60]:631 Asic[61]:656 Asic[62]:656
Chain:5 max freq=656
Chain:5 min freq=487
read PIC voltage=940 on chain[6]
Chain:6 chipnum=63
Chain[J7] voltage added=0.1V
Chain:6 temp offset=0
Chain:6 base freq=687
Asic[ 0]:631
Asic[ 1]:631 Asic[ 2]:631 Asic[ 3]:631 Asic[ 4]:631 Asic[ 5]:631 Asic[ 6]:631 Asic[ 7]:631 Asic[ 8]:631
Asic[ 9]:631 Asic[10]:631 Asic[11]:631 Asic[12]:631 Asic[13]:631 Asic[14]:631 Asic[15]:631 Asic[16]:631
Asic[17]:631 Asic[18]:631 Asic[19]:631 Asic[20]:631 Asic[21]:631 Asic[22]:631 Asic[23]:631 Asic[24]:631
Asic[25]:631 Asic[26]:631 Asic[27]:631 Asic[28]:631 Asic[29]:631 Asic[30]:631 Asic[31]:631 Asic[32]:631
Asic[33]:631 Asic[34]:631 Asic[35]:637 Asic[36]:637 Asic[37]:637 Asic[38]:637 Asic[39]:637 Asic[40]:637
Asic[41]:637 Asic[42]:637 Asic[43]:637 Asic[44]:637 Asic[45]:637 Asic[46]:637 Asic[47]:637 Asic[48]:637
Asic[49]:637 Asic[50]:637 Asic[51]:637 Asic[52]:637 Asic[53]:637 Asic[54]:637 Asic[55]:637 Asic[56]:637
Asic[57]:637 Asic[58]:637 Asic[59]:637 Asic[60]:637 Asic[61]:637 Asic[62]:637
Chain:6 max freq=637
Chain:6 min freq=631
read PIC voltage=940 on chain[7]
Chain:7 chipnum=63
Chain[J8] voltage added=0.2V
Chain:7 temp offset=0
Chain:7 base freq=637
Asic[ 0]:637
Asic[ 1]:637 Asic[ 2]:637 Asic[ 3]:637 Asic[ 4]:637 Asic[ 5]:637 Asic[ 6]:637 Asic[ 7]:637 Asic[ 8]:637
Asic[ 9]:637 Asic[10]:637 Asic[11]:637 Asic[12]:637 Asic[13]:637 Asic[14]:637 Asic[15]:637 Asic[16]:637
Asic[17]:637 Asic[18]:637 Asic[19]:637 Asic[20]:637 Asic[21]:637 Asic[22]:637 Asic[23]:637 Asic[24]:637
Asic[25]:637 Asic[26]:637 Asic[27]:637 Asic[28]:637 Asic[29]:637 Asic[30]:637 Asic[31]:637 Asic[32]:637
Asic[33]:637 Asic[34]:637 Asic[35]:637 Asic[36]:637 Asic[37]:637 Asic[38]:637 Asic[39]:637 Asic[40]:637
Asic[41]:637 Asic[42]:637 Asic[43]:637 Asic[44]:637 Asic[45]:637 Asic[46]:637 Asic[47]:637 Asic[48]:637
Asic[49]:643 Asic[50]:643 Asic[51]:643 Asic[52]:643 Asic[53]:643 Asic[54]:643 Asic[55]:643 Asic[56]:643
Asic[57]:643 Asic[58]:643 Asic[59]:643 Asic[60]:643 Asic[61]:643 Asic[62]:643
Chain:7 max freq=643
Chain:7 min freq=637
max freq = 656
set baud=1
Chain[J6] PIC temp offset=62,0,0,0,0,0,35,28
chain[5] temp chip I2C addr=0x98
chain[5] has no middle temp, use special fix mode.
Chain[J6] chip[244] use PIC middle temp offset=0 typeID=55
New offset Chain[5] chip[244] local:26 remote:27 offset:29
Chain[J6] chip[244] get middle temp offset=29 typeID=55
Chain[J7] PIC temp offset=62,0,0,0,0,0,35,28
chain[6] temp chip I2C addr=0x98
chain[6] has no middle temp, use special fix mode.
Chain[J7] chip[244] use PIC middle temp offset=0 typeID=55
New offset Chain[6] chip[244] local:26 remote:27 offset:29
Chain[J7] chip[244] get middle temp offset=29 typeID=55
Chain[J8] PIC temp offset=62,0,0,0,0,0,35,28
chain[7] temp chip I2C addr=0x98
chain[7] has no middle temp, use special fix mode.
Chain[J8] chip[244] use PIC middle temp offset=0 typeID=55
New offset Chain[7] chip[244] local:26 remote:28 offset:28
Chain[J8] chip[244] get middle temp offset=28 typeID=55
miner rate=13501 voltage limit=900 on chain[5]
get PIC voltage=880 on chain[5], check: must be < 900
miner rate=13501 voltage limit=900 on chain[6]
get PIC voltage=900 on chain[6], check: must be < 900
miner rate=13501 voltage limit=900 on chain[7]
get PIC voltage=880 on chain[7], check: must be < 900
Chain[J6] set working voltage=880 [108]
Chain[J7] set working voltage=900 [74]
Chain[J8] set working voltage=880 [108]
do heat board 8xPatten for 1 times
start send works on chain[5]
start send works on chain[6]
start send works on chain[7]
get send work num :57456 on Chain[5]
get send work num :57456 on Chain[6]
get send work num :57456 on Chain[7]
wait recv nonce on chain[5]
wait recv nonce on chain[6]
wait recv nonce on chain[7]
get nonces on chain[5]
require nonce number:912
require validnonce number:57456
asic[00]=912 asic[01]=912 asic[02]=912 asic[03]=912 asic[04]=912 asic[05]=912 asic[06]=912 asic[07]=912
asic[08]=912 asic[09]=912 asic[10]=912 asic[11]=912 asic[12]=912 asic[13]=912 asic[14]=912 asic[15]=912
asic[16]=912 asic[17]=912 asic[18]=912 asic[19]=912 asic[20]=912 asic[21]=912 asic[22]=912 asic[23]=912
asic[24]=912 asic[25]=912 asic[26]=912 asic[27]=912 asic[28]=912 asic[29]=912 asic[30]=912 asic[31]=912
asic[32]=912 asic[33]=912 asic[34]=912 asic[35]=912 asic[36]=912 asic[37]=912 asic[38]=912 asic[39]=912
asic[40]=912 asic[41]=912 asic[42]=912 asic[43]=912 asic[44]=912 asic[45]=912 asic[46]=912 asic[47]=912
asic[48]=912 asic[49]=912 asic[50]=912 asic[51]=912 asic[52]=912 asic[53]=912 asic[54]=912 asic[55]=912
asic[56]=912 asic[57]=912 asic[58]=912 asic[59]=912 asic[60]=912 asic[61]=912 asic[62]=912
Below ASIC's core didn't receive all the nonce, they should receive 8 nonce each!
freq[00]=618 freq[01]=631 freq[02]=650 freq[03]=618 freq[04]=631 freq[05]=656 freq[06]=618 freq[07]=631
freq[08]=656 freq[09]=618 freq[10]=631 freq[11]=656 freq[12]=631 freq[13]=637 freq[14]=606 freq[15]=487
freq[16]=637 freq[17]=656 freq[18]=618 freq[19]=637 freq[20]=656 freq[21]=631 freq[22]=650 freq[23]=656
freq[24]=631 freq[25]=537 freq[26]=656 freq[27]=631 freq[28]=587 freq[29]=656 freq[30]=612 freq[31]=650
freq[32]=656 freq[33]=631 freq[34]=650 freq[35]=656 freq[36]=631 freq[37]=656 freq[38]=656 freq[39]=631
freq[40]=656 freq[41]=656 freq[42]=543 freq[43]=656 freq[44]=656 freq[45]=568 freq[46]=656 freq[47]=656
freq[48]=631 freq[49]=568 freq[50]=656 freq[51]=631 freq[52]=625 freq[53]=656 freq[54]=631 freq[55]=656
freq[56]=656 freq[57]=631 freq[58]=656 freq[59]=656 freq[60]=631 freq[61]=656 freq[62]=656
total valid nonce number:57456
total send work number:57456
require valid nonce number:57456
get nonces on chain[6]
require nonce number:912
require validnonce number:57456
asic[00]=912 asic[01]=912 asic[02]=912 asic[03]=912 asic[04]=912 asic[05]=912 asic[06]=912 asic[07]=912
asic[08]=912 asic[09]=912 asic[10]=912 asic[11]=912 asic[12]=912 asic[13]=912 asic[14]=912 asic[15]=912
asic[16]=912 asic[17]=912 asic[18]=912 asic[19]=912 asic[20]=912 asic[21]=912 asic[22]=912 asic[23]=912
asic[24]=912 asic[25]=912 asic[26]=912 asic[27]=912 asic[28]=912 asic[29]=912 asic[30]=912 asic[31]=912
asic[32]=912 asic[33]=912 asic[34]=912 asic[35]=912 asic[36]=912 asic[37]=912 asic[38]=912 asic[39]=912
asic[40]=912 asic[41]=912 asic[42]=912 asic[43]=912 asic[44]=912 asic[45]=912 asic[46]=912 asic[47]=912
asic[48]=912 asic[49]=912 asic[50]=912 asic[51]=912 asic[52]=912 asic[53]=912 asic[54]=912 asic[55]=912
asic[56]=912 asic[57]=912 asic[58]=912 asic[59]=912 asic[60]=912 asic[61]=912 asic[62]=912
Below ASIC's core didn't receive all the nonce, they should receive 8 nonce each!
freq[00]=631 freq[01]=631 freq[02]=631 freq[03]=631 freq[04]=631 freq[05]=631 freq[06]=631 freq[07]=631
freq[08]=631 freq[09]=631 freq[10]=631 freq[11]=631 freq[12]=631 freq[13]=631 freq[14]=631 freq[15]=631
freq[16]=631 freq[17]=631 freq[18]=631 freq[19]=631 freq[20]=631 freq[21]=631 freq[22]=631 freq[23]=631
freq[24]=631 freq[25]=631 freq[26]=631 freq[27]=631 freq[28]=631 freq[29]=631 freq[30]=631 freq[31]=631
freq[32]=631 freq[33]=631 freq[34]=631 freq[35]=637 freq[36]=637 freq[37]=637 freq[38]=637 freq[39]=637
freq[40]=637 freq[41]=637 freq[42]=637 freq[43]=637 freq[44]=637 freq[45]=637 freq[46]=637 freq[47]=637
freq[48]=637 freq[49]=637 freq[50]=637 freq[51]=637 freq[52]=637 freq[53]=637 freq[54]=637 freq[55]=637
freq[56]=637 freq[57]=637 freq[58]=637 freq[59]=637 freq[60]=637 freq[61]=637 freq[62]=637
total valid nonce number:57456
total send work number:57456
require valid nonce number:57456
get nonces on chain[7]
require nonce number:912
require validnonce number:57456
asic[00]=912 asic[01]=912 asic[02]=912 asic[03]=912 asic[04]=912 asic[05]=912 asic[06]=912 asic[07]=912
asic[08]=907 asic[09]=912 asic[10]=912 asic[11]=912 asic[12]=912 asic[13]=912 asic[14]=912 asic[15]=912
asic[16]=912 asic[17]=912 asic[18]=912 asic[19]=909 asic[20]=912 asic[21]=912 asic[22]=912 asic[23]=912
asic[24]=912 asic[25]=912 asic[26]=912 asic[27]=912 asic[28]=912 asic[29]=912 asic[30]=912 asic[31]=912
asic[32]=912 asic[33]=912 asic[34]=912 asic[35]=912 asic[36]=912 asic[37]=912 asic[38]=912 asic[39]=912
asic[40]=912 asic[41]=912 asic[42]=912 asic[43]=912 asic[44]=912 asic[45]=912 asic[46]=912 asic[47]=912
asic[48]=912 asic[49]=912 asic[50]=912 asic[51]=912 asic[52]=912 asic[53]=912 asic[54]=912 asic[55]=911
asic[56]=912 asic[57]=912 asic[58]=912 asic[59]=912 asic[60]=912 asic[61]=912 asic[62]=912
Below ASIC's core didn't receive all the nonce, they should receive 8 nonce each!
core[049]=7 core[053]=5 core[056]=7
core[064]=7 core[112]=6
freq[00]=637 freq[01]=637 freq[02]=637 freq[03]=637 freq[04]=637 freq[05]=637 freq[06]=637 freq[07]=637
freq[08]=637 freq[09]=637 freq[10]=637 freq[11]=637 freq[12]=637 freq[13]=637 freq[14]=637 freq[15]=637
freq[16]=637 freq[17]=637 freq[18]=637 freq[19]=637 freq[20]=637 freq[21]=637 freq[22]=637 freq[23]=637
freq[24]=637 freq[25]=637 freq[26]=637 freq[27]=637 freq[28]=637 freq[29]=637 freq[30]=637 freq[31]=637
freq[32]=637 freq[33]=637 freq[34]=637 freq[35]=637 freq[36]=637 freq[37]=637 freq[38]=637 freq[39]=637
freq[40]=637 freq[41]=637 freq[42]=637 freq[43]=637 freq[44]=637 freq[45]=637 freq[46]=637 freq[47]=637
freq[48]=637 freq[49]=643 freq[50]=643 freq[51]=643 freq[52]=643 freq[53]=643 freq[54]=643 freq[55]=643
freq[56]=643 freq[57]=643 freq[58]=643 freq[59]=643 freq[60]=643 freq[61]=643 freq[62]=643
total valid nonce number:57447
total send work number:57456
require valid nonce number:57456
chain[5]: All chip cores are opened OK!
Test Patten on chain[5]: OK!
chain[6]: All chip cores are opened OK!
Test Patten on chain[6]: OK!
chain[7]: All chip cores are opened OK!
Test Patten on chain[7]: OK!
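The self-test arithmetic above is internally consistent: each chain expects 912 nonces from each of its 63 ASICs (63 × 912 = 57456), and chain[7]'s total of 57447 is exactly the 9 nonces its three short ASICs failed to return. A small sketch of that accounting:

```python
# Nonce accounting for the test pattern above: each chain expects
# 912 valid nonces from each of its 63 ASICs.
asics, per_asic = 63, 912
assert asics * per_asic == 57456  # matches "require valid nonce number:57456"

# chain[7] reported three ASICs short of the full 912 nonces.
chain7_counts = {8: 907, 19: 909, 55: 911}
missing = sum(per_asic - n for n in chain7_counts.values())
print("chain[7] missing nonces:", missing)                  # 5 + 3 + 1 = 9
print("chain[7] valid nonces:", asics * per_asic - missing)  # 57447, as logged
```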
setStartTimePoint total_tv_start_sys=217 total_tv_end_sys=218
restartNum = 2 , auto-reinit enabled...
do read_temp_func once...
do check_asic_reg 0x08
get RT hashrate from Chain[5]: (asic index start from 1-63)
Asic[01]=72.5110 Asic[02]=68.6020 Asic[03]=74.4230 Asic[04]=74.6750 Asic[05]=71.4540 Asic[06]=77.5610 Asic[07]=74.7760 Asic[08]=74.3900
Asic[09]=77.7790 Asic[10]=76.7220 Asic[11]=73.8020 Asic[12]=68.5850 Asic[13]=76.1680 Asic[14]=72.4770 Asic[15]=73.0470 Asic[16]=57.8810
Asic[17]=74.4740 Asic[18]=76.4530 Asic[19]=67.8800 Asic[20]=70.1280 Asic[21]=73.7520 Asic[22]=74.6580 Asic[23]=73.6850 Asic[24]=78.5170
Asic[25]=73.6850 Asic[26]=63.6860 Asic[27]=80.9660 Asic[28]=73.9200 Asic[29]=68.9870 Asic[30]=75.6310 Asic[31]=74.9770 Asic[32]=69.4570
Asic[33]=74.6580 Asic[34]=79.8930 Asic[35]=76.6710 Asic[36]=74.3730 Asic[37]=66.6050 Asic[38]=76.7380 Asic[39]=71.4540 Asic[40]=69.3060
Asic[41]=72.5610 Asic[42]=73.8530 Asic[43]=58.9210 Asic[44]=75.3800 Asic[45]=73.1310 Asic[46]=68.4000 Asic[47]=77.6780 Asic[48]=73.1150
Asic[49]=69.2890 Asic[50]=62.8130 Asic[51]=74.2720 Asic[52]=73.1480 Asic[53]=67.4440 Asic[54]=72.4940 Asic[55]=68.1990 Asic[56]=72.4100
Asic[57]=75.3460 Asic[58]=66.1350 Asic[59]=72.9800 Asic[60]=78.1480 Asic[61]=72.3260 Asic[62]=72.5610 Asic[63]=77.7950
get RT hashrate from Chain[6]: (asic index start from 1-63)
Asic[01]=67.6620 Asic[02]=75.9840 Asic[03]=70.3300 Asic[04]=75.5640 Asic[05]=62.8470 Asic[06]=70.2790 Asic[07]=74.5240 Asic[08]=72.9130
Asic[09]=70.6320 Asic[10]=72.5610 Asic[11]=73.9370 Asic[12]=77.3420 Asic[13]=72.4440 Asic[14]=68.8030 Asic[15]=73.0810 Asic[16]=73.8360
Asic[17]=73.5510 Asic[18]=73.9700 Asic[19]=71.0340 Asic[20]=71.1680 Asic[21]=72.1580 Asic[22]=78.8190 Asic[23]=71.9230 Asic[24]=69.4570
Asic[25]=67.7630 Asic[26]=71.7220 Asic[27]=76.4030 Asic[28]=71.1180 Asic[29]=68.7360 Asic[30]=69.7090 Asic[31]=77.5610 Asic[32]=70.1790
Asic[33]=67.9140 Asic[34]=72.3930 Asic[35]=64.5920 Asic[36]=72.1920 Asic[37]=74.6080 Asic[38]=75.4470 Asic[39]=73.8700 Asic[40]=73.9370
Asic[41]=66.2860 Asic[42]=79.4230 Asic[43]=75.8160 Asic[44]=68.6350 Asic[45]=74.7920 Asic[46]=70.7990 Asic[47]=71.2360 Asic[48]=73.8700
Asic[49]=66.5380 Asic[50]=70.6150 Asic[51]=72.6280 Asic[52]=75.7490 Asic[53]=71.8400 Asic[54]=76.5370 Asic[55]=73.5340 Asic[56]=69.2390
Asic[57]=75.1280 Asic[58]=74.3230 Asic[59]=73.4330 Asic[60]=72.3430 Asic[61]=77.6780 Asic[62]=82.4600 Asic[63]=69.5240
get RT hashrate from Chain[7]: (asic index start from 1-63)
Asic[01]=73.5510 Asic[02]=75.9160 Asic[03]=80.1110 Asic[04]=76.9900 Asic[05]=76.1510 Asic[06]=73.5170 Asic[07]=74.9940 Asic[08]=73.1150
Asic[09]=70.6650 Asic[10]=70.6990 Asic[11]=72.4770 Asic[12]=70.1450 Asic[13]=74.3060 Asic[14]=71.8060 Asic[15]=74.7420 Asic[16]=75.6650
Asic[17]=76.8220 Asic[18]=69.5240 Asic[19]=72.0910 Asic[20]=75.2620 Asic[21]=72.0240 Asic[22]=73.2660 Asic[23]=76.2690 Asic[24]=69.9440
Asic[25]=67.7290 Asic[26]=71.7050 Asic[27]=74.6250 Asic[28]=78.2320 Asic[29]=69.8430 Asic[30]=68.4670 Asic[31]=71.5210 Asic[32]=68.9540
Asic[33]=74.6250 Asic[34]=71.8730 Asic[35]=74.4400 Asic[36]=74.8760 Asic[37]=73.9030 Asic[38]=72.9300 Asic[39]=69.6250 Asic[40]=74.9430
Asic[41]=72.7620 Asic[42]=69.4910 Asic[43]=67.4270 Asic[44]=71.4870 Asic[45]=74.4570 Asic[46]=66.6550 Asic[47]=67.5450 Asic[48]=75.4800
Asic[49]=72.2590 Asic[50]=72.9300 Asic[51]=75.6820 Asic[52]=71.9070 Asic[53]=67.9640 Asic[54]=67.8470 Asic[55]=74.3900 Asic[56]=71.0010
Asic[57]=75.8490 Asic[58]=74.9270 Asic[59]=72.3930 Asic[60]=74.3730 Asic[61]=75.5310 Asic[62]=73.8190 Asic[63]=72.4440
Check Chain[J6] ASIC RT error: (asic index start from 1-63)
Check Chain[J7] ASIC RT error: (asic index start from 1-63)
Check Chain[J8] ASIC RT error: (asic index start from 1-63)
Done check_asic_reg
do read temp on Chain[5]
Chain[5] Chip[62] TempTypeID=55 middle offset=29
Chain[5] Chip[62] local Temp=60
Chain[5] Chip[62] middle Temp=70
Special fix Chain[5] Chip[62] middle Temp = 75
Done read temp on Chain[5]
do read temp on Chain[6]
Chain[6] Chip[62] TempTypeID=55 middle offset=29
Chain[6] Chip[62] local Temp=60
Chain[6] Chip[62] middle Temp=72
Special fix Chain[6] Chip[62] middle Temp = 75
Done read temp on Chain[6]
do read temp on Chain[7]
Chain[7] Chip[62] TempTypeID=55 middle offset=28
Chain[7] Chip[62] local Temp=62
Chain[7] Chip[62] middle Temp=72
Special fix Chain[7] Chip[62] middle Temp = 77
Done read temp on Chain[7]
set FAN speed according to: temp_highest=62 temp_top1[PWM_T]=62 temp_top1[TEMP_POS_LOCAL]=62 temp_change=0 fix_fan_steps=0
read_temp_func Done!
CRC error counter=0
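The log never prints a per-chain hashrate total, but one can be estimated by averaging the per-ASIC RT figures. A minimal sketch using two lines quoted from the chain[5] table above (the rest of the table behaves similarly):

```python
import re

# Two per-ASIC RT hashrate lines quoted from chain[5] above (values in GH/s).
rt_lines = [
    "Asic[01]=72.5110 Asic[02]=68.6020 Asic[03]=74.4230 Asic[04]=74.6750",
    "Asic[05]=71.4540 Asic[06]=77.5610 Asic[07]=74.7760 Asic[08]=74.3900",
]

rates = [float(m) for line in rt_lines
         for m in re.findall(r"Asic\[\d+\]=(\d+\.\d+)", line)]

# Average of this sample, then an estimate for a full 63-ASIC chain.
avg = sum(rates) / len(rates)
print(f"sample average: {avg:.2f} GH/s")
print(f"estimated chain rate: {avg * 63 / 1000:.2f} TH/s")
```

The sample averages about 73.5 GH/s per ASIC; 63 ASICs per chain gives roughly 4.6 TH/s, and three chains land near 13.9 TH/s, consistent with the logged "miner total rate=13999GH/s".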
submitted by Timsierramist to BitcoinMining


This free Bitcoin units calculator helps you convert any amount from one unit to another: conversion between BTC, bits, mBTC, satoshis and US dollars.

A block header without transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2 MB per year. With computer systems typically sold with 2 GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2 GB per year ...

At the same time, the pool is also receiving the next broadcasted header (80 bytes of data tethered to a block), and the pool begins working on its next block. Not only are miners dedicating time ...

To organize a data blob of 1.2 MB of transactions on your computer into a Bitcoin block, you'll need around 15000 bytes of "sorting information" (6 bytes per transaction shortID ...)
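The back-of-the-envelope math in the excerpts above is easy to verify: headers-only chain growth from the 80-byte header figure, and the standard unit ladder (1 BTC = 100,000,000 satoshis). A small sketch; US-dollar conversion needs a live price feed and is omitted:

```python
# Block headers are 80 bytes and arrive roughly every 10 minutes,
# i.e. 6 per hour, so a headers-only chain grows about 4.2 MB per year.
header_bytes = 80
headers_per_year = 6 * 24 * 365
print(header_bytes * headers_per_year / 1e6, "MB/year")  # ~4.2

# Unit conversion between BTC, mBTC, bits and satoshis via a satoshi base.
SATS = {"BTC": 10**8, "mBTC": 10**5, "bit": 10**2, "satoshi": 1}

def convert(amount, src, dst):
    return amount * SATS[src] / SATS[dst]

print(convert(0.5, "BTC", "mBTC"))  # 500.0
```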


