These graphs show that fees for inclusion in 2nd block just shot up 10x from 50 to 500 satoshis/kB, and mempool size just shot up from <5 MB to 30 MB. Would you feel safe sending a transaction into the network now? Can Bitcoin rally if the blocksize remains artificially limited by Blockstream/Core?
http://statoshi.info/dashboard/db/fee-estimates To select a longer time period, zoom out on the graph by clicking on the words "6 hours ago" to the right of the words "Zoom Out" - which will reveal a drop-down menu. https://tradeblock.com/bitcoin To see the increase in the mempool size (from less than 5 MB to 30 MB), go to the graph on the lower right called "Recent Mempool", and use the two menus to select "7 Days" and "Size". How can Bitcoin continue to rally if the network is becoming backlogged due to unnecessary congestion?
"My transaction is stuck, what to do?" - an explainer [DRAFT]
In recent days we have been experiencing a sharp rise in price, which historically correlates with many people transacting over the Bitcoin network. Many people transacting over the Bitcoin network means that blockspace is in high demand, so when you send a transaction, it has to compete with other transactions for inclusion in one of the future blocks. Miners are motivated by profit, and transactions that pay more than other transactions are preferred when mining a new block. Although the network is working as intended (blockspace is a scarce good, subject to supply/demand dynamics, regulated purely by fees), people who are unfamiliar with it might worry that their transaction is "stuck", lost, or otherwise "in limbo". This post attempts to explain how the mempool works, how to optimize fees, and why you do not need to worry about your funds.
TL;DR: Your funds are safe. Just be patient* and it'll be confirmed at some point. A transaction either will be confirmed or it never leaves your wallet, so there is nothing to worry about in regards to the safety of your coins.
You can see how the mempool "ebbs and flows", and how lower-fee tx's get confirmed in the "ebb" times (weekends, nights): https://jochen-hoenicke.de/queue/#0,30d *If you are in a hurry, there are things like RBF (Replace-By-Fee) and CPFP (Child-Pays-For-Parent), which you can use to boost your transaction fees; you will need an advanced wallet like Bitcoin Core or Electrum for that, though. Keep in mind also that this is not possible with every transaction (RBF requires opt-in before sending, f.ex.). If nothing else works and your transaction really needs a quick confirmation, you can try contacting a mining pool to ask if they would include your transaction. Some mining pools even offer a web-interface for this: 1, 2. Here's how Andreas Antonopoulos describes it:
In bitcoin there is no "in transit". Transactions are atomic meaning they either happen all at once or don't happen at all. There is no situation where they "leave" one wallet and are not simultaneously and instantaneously in the destination address. Either the transaction happened or it didn't. The only time you can't see the funds is if your wallet is hiding them because it is tracking a pending transaction and doesn't want you to try and spend funds that are already being spent in another transaction. It doesn't mean the money is in limbo, it's just your wallet waiting to see the outcome. If that is the case, you just wait. Eventually the transaction will either happen or will be deleted by the network. tl;dr: your funds are safe
How is the speed of confirmations determined in bitcoin?
Open this site: https://jochen-hoenicke.de/queue/#0,2w Here you see how many transactions are currently (and were historically) waiting to be confirmed, i.e. how many transactions are competing with yours for blockspace (= confirmation). You can see two important things. First, the differently coloured layers, each layer representing a different fee level (higher layer = higher fees). You can point at a layer and see which fees (expressed in sat/byte) are represented in that layer. You can then deduce which layer your own transaction is currently in, and how far away from the top your position is (miners always work through the mempool from the top, simply because the tx's on top pay them more). You can estimate that each newly mined block removes roughly 1 MB from the top (see the third graph, which shows the mempool size in MB). On average, a new block is produced every ten minutes. But keep in mind that over time more transactions come into the mempool, so there can be periods where transactions arrive faster than miners can "process" them. The second important observation is that the mempool "ebbs and flows", so even the lower-paying transactions are periodically confirmed at some point. In short, what determines the speed of a confirmation is A) how high you set the fee (in sat/byte), B) how many other transactions with the same or higher fees are currently competing with yours, and C) how many transactions with higher fees will be broadcast after yours. A) you can influence directly, B) you can observe in real time, but C) is difficult to predict. So it's always a little tricky to tell when the first confirmation will happen if you set your fee low. But it's quite certain that at some point even the cheap transactions will come through.
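The queue logic above can be sketched in a few lines. This is a toy model with a made-up fee histogram (the real mempool changes constantly, and factor C - future transactions - is ignored here), assuming miners take about 1 MB off the top of the mempool per block:

```python
# Toy sketch: estimate how many blocks until a transaction at a given
# feerate reaches the top of the mempool. Assumes ~1 MB cleared per block
# and NO new incoming transactions (the unpredictable factor C).

# Hypothetical mempool snapshot: feerate (sat/byte) -> total bytes waiting
mempool = {1: 4_000_000, 2: 2_500_000, 5: 1_200_000, 10: 600_000, 20: 200_000}

def blocks_until_confirmed(my_feerate, mempool, block_bytes=1_000_000):
    # Everything paying strictly more than us sits above us in the queue.
    ahead = sum(size for fee, size in mempool.items() if fee > my_feerate)
    # How many full blocks must be mined before our layer is reached.
    return ahead // block_bytes + 1

print(blocks_until_confirmed(5, mempool))   # 800k bytes ahead -> ~1 block
print(blocks_until_confirmed(1, mempool))   # 4.5M bytes ahead -> ~5 blocks
```

At ~10 minutes per block, multiplying the result by ten gives a rough waiting time in minutes - again, only valid while no higher-paying transactions keep arriving.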
So what happens if my transaction stays unconfirmed for days or even weeks?
Transactions are broadcast by the full nodes on the network. Each node can adjust its settings for how long it keeps unconfirmed transactions in its mempool. That's why there is no fixed amount of time after which a transaction is dropped from the mempool, but most nodes drop unconfirmed tx's after two weeks (Bitcoin Core's default mempoolexpiry setting is 336 hours, i.e. 14 days). This means that in the absolute worst case the unconfirmed transaction will simply disappear from the network, as if it never happened. Keep in mind that in those two weeks the coins never actually leave your wallet. It's just that your wallet doesn't show them as "available", but you still have options like RBF and CPFP to get your transaction confirmed with higher fees, or to "cancel" your transaction by spending the same coins to another address with a higher fee.
Helpful tools to estimate fees for future transactions:
Here are some resources that can help you estimate fees when sending a bitcoin transaction, so you don't end up overpaying (or underpaying) unnecessarily. Keep in mind that to take advantage of this, you need a proper bitcoin wallet which allows custom fee settings. A selection of such wallets can be found here or here. The order here is roughly from advanced to easy. 1) https://jochen-hoenicke.de/queue/#0,24h Here you can see a visualization of how many unconfirmed transactions are currently on the network, as well as how many were there in the past. Each coloured layer represents a different fee amount. F.ex. the deep blue (lowest) layer is the 1 sat/byte transactions, the slightly brighter level above it is the 2 sat/byte transactions, and so on. The most interesting graph is the third one, which shows the size of the current mempool in MB and the amount of transactions at different fee levels which would compete with your transaction if you sent it right now. This should help you estimate how high you need to set the fee (in sat/byte) to have it confirmed "soon". It should also show you that even the 1 sat/byte transactions get confirmed quite regularly, especially on weekends and at night, and that spikes in the mempool are always temporary. For that you can switch to higher timeframes in the upper right corner; f.ex. here is a 30-day view: https://jochen-hoenicke.de/queue/#0,30d. You can clearly see that the mempool is cyclical and that you can set a very low fee if you are not in a hurry. 2) https://mempool.space This is also an overview of the current mempool status, although less visual than the previous one. It shows you some important stats, like the mempool size and some basic stats of the recent blocks (tx fees, size etc).
Most importantly, it projects how high you need to set your fees in sat/byte if you want your transaction to be included in the next block, or within the next two/three/four blocks. You can see this projection in the upper left corner (the blocks coloured in brown). 3) https://whatthefee.io This is a simple estimation tool. It shows you the likelihood (in %) of a particular fee size (in sat/byte) being confirmed within a particular timeframe (measured in hours). It is very simple to use, but the disadvantage is that it only shows estimates for the next 24 hours. You will probably overpay with this method if your transaction is less time-sensitive than that. 4) https://twitter.com/CoreFeeHelper This is a very simple bot that tweets out fee projections every hour or so. It tells you how you need to set the fees to be confirmed within 1 hour/6 hours/12 hours/1 day/3 days/1 week. Very simple to use. Hopefully one of these tools will help you save fees on your next bitcoin transaction, or at least help you understand that even with a very low fee setting your transaction will be confirmed sooner or later. Furthermore, I hope it shows how important it is to use a wallet that allows you to set your own fees.
BTC Fees amplified today by last night's difficulty adjustment. Current (peak of day) next-block fees are testing new highs.
Compounding Factors Causing the Fee Explosion: Over the past 2 weeks, a large amount of SHA256 hashpower has come online as the real-dollar value of mining rewards increased. The increase over the past 2016 blocks was so great, in fact, that it caused an 11% jump in difficulty last night. On top of that, the price retreated back to the 2-week average, meaning some hashpower has left after the price adjustment. You can see how far behind schedule the current block times are here. That has compounded to set new highs in sat/byte fees and has simultaneously escalated the price per transaction drastically. While BTC blocks may be able to clear overnight when mining is running 10-20% above the expected block rate, it's pretty clear from history that every day has a peak usage that cannot be handled by the network. After the readjustment, it looks like only the lull of the weekend can currently clear the backlog, and only just. I recommend checking Johoe's mempool size in MB graph over a longer span. In the 3-month graph, you can really start to see each daily spike, weeks where the mempool only cleared on the weekend, and even a couple of weekends where the mempool didn't clear. So What is Each Blockchain Currently Capable Of? Current segwit usage has been stagnant at around 40-45% for the past year now, but let's say for argument's sake that segwit usage hits 100%. This represents a capacity increase for the BTC blockchain of only around 25%. That means that even if BTC hit perfect segwit usage, it could only handle around 500k transactions per day instead of 400k. This bottleneck does not exist on BCH. BCH can currently handle 16 MB blocks with no issue, as proved by last year's stress test, and it should now be able to handle full 32 MB blocks given recent parallelization improvements. The throughput of even 16 MB blocks would allow for somewhere around an 8M tx/day average.
Bitcoin Cash is absolutely equipped to deal with an order of magnitude more transactions than Bitcoin today while maintaining 1sat/byte fees. Blockchain technology can do so much more than BTC gives it credit for.
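The throughput figures above are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes an average transaction size of ~250 bytes and 144 blocks per day (one every ten minutes); these are illustrative round numbers, not measurements, which is why its ceilings come out a bit higher than the observed averages quoted in the post:

```python
# Rough block-throughput arithmetic under stated assumptions:
# ~250 bytes per average transaction, 144 blocks per day.

AVG_TX_BYTES = 250
BLOCKS_PER_DAY = 144

def tx_per_day(block_bytes):
    # Transactions that fit in one block, times blocks per day.
    return block_bytes // AVG_TX_BYTES * BLOCKS_PER_DAY

btc_now = tx_per_day(1_000_000)        # 1 MB blocks: theoretical ceiling
btc_full_segwit = int(btc_now * 1.25)  # +25% if segwit usage hit 100%
bch_16mb = tx_per_day(16_000_000)      # 16 MB blocks

print(btc_now, btc_full_segwit, bch_16mb)
```

Even with generous assumptions, the 1 MB chain's ceiling sits in the hundreds of thousands of transactions per day, while 16 MB blocks land in the millions - the order-of-magnitude gap the post describes.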
Over the last several days I've been looking in detail at numerous aspects of the now infamous CTOR change that is scheduled for the November hard fork. I'd like to offer a concrete overview of what exactly CTOR is, what the code looks like, how well it works, what the algorithms are, and the outlook. If anyone finds the change mysterious or unclear, then hopefully this will help them out. This document is placed into the public domain.
What is TTOR? CTOR? AOR?
Currently in Bitcoin Cash, there are many possible ways to order the transactions in a block. There is only a partial ordering requirement in that transactions must be ordered causally -- if a transaction spends an output from another transaction in the same block, then the spending transaction must come after. This is known as the Topological Transaction Ordering Rule (TTOR) since it can be mathematically described as a topological ordering of the graph of transactions held inside the block. The November 2018 hard fork will change to a Canonical Transaction Ordering Rule (CTOR). This CTOR will enforce that for a given set of transactions in a block, there is only one valid order (hence "canonical"). Any future blocks that deviate from this ordering rule will be deemed invalid. The specific canonical ordering that has been chosen for November is a dictionary ordering (lexicographic) based on the transaction ID. You can see an example of it in this testnet block (explorer here, provided this testnet is still alive). Note that the txids are all in dictionary order, except for the coinbase transaction which always comes first. The precise canonical ordering rule can be described as "coinbase first, then ascending lexicographic order based on txid". (If you want to have your bitcoin node join this testnet, see the instructions here. Hopefully we can get a public faucet and ElectrumX server running soon, so light wallet users can play with the testnet too.) Another ordering rule that has been suggested is removing restrictions on ordering (except that the coinbase must come first) -- this is known as the Any Ordering Rule (AOR). There are no serious proposals to switch to AOR but it will be important in the discussions below.
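The "coinbase first, then ascending lexicographic order based on txid" rule described above can be sketched as a simple validity check. Txids are modeled here as hex strings for illustration; in a real node they are 32-byte hashes, but the comparison is the same idea:

```python
# Sketch of the CTOR validity check: the first txid is the coinbase
# (exempt from ordering), and every following txid must be strictly
# greater than its predecessor in dictionary (lexicographic) order.

def is_ctor_ordered(txids):
    rest = txids[1:]  # skip the coinbase, which always comes first
    return all(a < b for a, b in zip(rest, rest[1:]))

# Non-coinbase txids ascend lexicographically -> valid under CTOR
print(is_ctor_ordered(["ffcoinbase", "0a11", "1b22", "9c33", "de44"]))  # True
# Descending txids after the coinbase -> invalid under CTOR
print(is_ctor_ordered(["ffcoinbase", "9c33", "0a11"]))                   # False
```

Note that under TTOR many orderings of the same transaction set can be valid; under CTOR exactly one is, which is what makes the ordering "canonical".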
Two changes: removing the old order (TTOR->AOR), and installing a new order (AOR->CTOR)
The proposed November upgrade combines two changes in one step:
Removing the old causal rule: now, a spending transaction can come before the output that it spends from the same block.
Adding a new rule that fixes the ordering of all transactions in the block.
In this document I am going to distinguish these two steps (TTOR->AOR, AOR->CTOR) as I believe it helps to clarify the way different components are affected by the change.
Code changes in Bitcoin ABC
In Bitcoin ABC, several thousand lines of code have been changed from version 0.17.1 to version 0.18.1 (the current version at time of writing). The differences can be viewed here, on github. The vast majority of these changes appear to be various refactorings, code style changes, and so on. The relevant bits of code that deal with the November hard fork activation can be found by searching for "MagneticAnomaly"; the variable magneticanomalyactivationtime sets the time at which the new rules will activate. The main changes relating to transaction ordering are found in the file src/validation.cpp:
Function ConnectBlock previously had one loop, that would process each transaction in order, removing spent transaction outputs and adding new transaction outputs. This was only compatible with TTOR. Starting in November, it will use the two-loop OTI algorithm (see below). The new construction has no ordering requirement.
Function ApplyBlockUndo, which is used to undo orphaned blocks, is changed to work with any order.
When orphaning a block, transactions will be returned to the mempool using addForBlock that now works with any ordering (src/txmempool.cpp).
Serial block processing (one thread)
One of the most important steps in validating blocks is updating the unspent transaction outputs (UTXO) set. It is during this process that double spends are detected and invalidated. The standard way to process a block in bitcoin is to loop through transactions one-by-one, removing spent outputs and then adding new outputs. This straightforward approach requires exact topological order and fails otherwise (therefore it automatically verifies TTOR). In pseudocode:
for tx in transactions:
    remove_utxos(tx.inputs)
    add_utxos(tx.outputs)
Note that modern implementations do not apply these changes immediately, rather, the adds/removes are saved into a commit. After validation is completed, the commit is applied to the UTXO database in batch. By breaking this into two loops, it becomes possible to update the UTXO set in a way that doesn't care about ordering. This is known as the outputs-then-inputs (OTI) algorithm.
for tx in transactions:
    add_utxos(tx.outputs)

for tx in transactions:
    remove_utxos(tx.inputs)
The UTXO updates actually form a significant fraction of the time needed for block processing. It would be helpful if they could be parallelized. There are some concurrent algorithms for block validation that require quasi-topological order to function correctly. For example, multiple workers could process the standard loop shown above, starting at the beginning. A worker temporarily pauses if the utxo does not exist yet, since it's possible that another worker will soon create that utxo. There are issues with such order-sensitive concurrent block processing algorithms:
Since TTOR would be a consensus rule, parallel validation algorithms must also verify that TTOR is respected. The naive approach described above actually is able to succeed for some non-topological orders; therefore, additional checks would have to be added in order to enforce TTOR.
The worst-case performance can be that only one thread is active at a time. Consider the case of a block that is one long chain of dependent transactions.
In contrast, the OTI algorithm's loops are fully parallelizable: the worker threads can operate in an independent manner and touch transactions in any order. Until recently, OTI was thought to be unable to verify TTOR, so one reason to remove TTOR was that it would allow changing to parallel OTI. It turns out however that this is not true: Jonathan Toomim has shown that TTOR enforcement is easily added by recording new UTXOs' indices within-block, and then comparing indices during the remove phase. In any case, it appears to me that any concurrent validation algorithm would need such additional code to verify that TTOR is being exactly respected; thus for concurrent validation TTOR is a hindrance at best.
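Toomim's index-recording trick described above can be sketched as follows. Transactions are modeled minimally as (inputs, outputs) pairs with opaque output ids; this is an illustrative toy, not real node code:

```python
# Two-loop OTI with optional TTOR enforcement: the add phase records each
# new output's within-block index; the remove phase requires that an
# in-block output was created by an EARLIER transaction.

def oti_validate(transactions, utxo_set, enforce_ttor=True):
    created = {}  # output id -> index of the tx that created it (this block)
    # Loop 1: add all outputs (order-independent; shown serially here).
    for i, (inputs, outputs) in enumerate(transactions):
        for out in outputs:
            created[out] = i
    # Loop 2: remove all spent outputs, detecting double spends.
    spent = set()
    for i, (inputs, outputs) in enumerate(transactions):
        for inp in inputs:
            if inp in spent:
                return False          # double spend
            if inp in created:
                if enforce_ttor and created[inp] >= i:
                    return False      # spender precedes creator: TTOR violated
            elif inp not in utxo_set:
                return False          # spends a nonexistent output
            spent.add(inp)
    return True

# tx0 creates "a"; tx1 spends "a" -> topological, valid
print(oti_validate([([], ["a"]), (["a"], [])], set()))         # True
# Reversed order fails the TTOR check, but passes if TTOR is not enforced
print(oti_validate([(["a"], []), ([], ["a"])], set()))         # False
print(oti_validate([(["a"], []), ([], ["a"])], set(), False))  # True
```

Both loops remain embarrassingly parallel: the index comparison in the remove phase only reads the `created` map built in the add phase, so no cross-transaction coordination is needed beyond the barrier between the two loops.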
Advanced parallel techniques
With Bitcoin Cash blocks scaling to large sizes, it may one day be necessary to scale onto advanced server architectures involving sharding. A lot of discussion has been made over this possibility, but really it is too early to start optimizing for sharding. I would note that at this scale, TTOR is not going to be helpful, and CTOR may or may not lead to performance optimizations.
Block propagation (graphene)
A major bottleneck that exists in Bitcoin Cash today is block propagation. During the stress test, it was noticed that the largest blocks (~20 MB) could take minutes to propagate across the network. This is a serious concern, since propagation delays mean increased orphan rates, which in turn complicate the economics and incentives of mining. 'Graphene' is a set reconciliation technique using bloom filters and invertible bloom lookup tables. It drastically reduces the amount of bandwidth required to communicate a block. Unfortunately, the core graphene mechanism does not provide ordering information, and so if many orderings are possible then ordering information needs to be appended. For large blocks, this ordering information makes up the majority of the graphene message. To reduce the size of ordering information while keeping TTOR, miners could optionally decide to order their transactions in a canonical ordering (Gavin's order, for example) and the graphene protocol could be hard coded so that this kind of special order is transmitted in one byte. This would add a significant technical burden on mining software (to create blocks in such a specific unusual order) as well as on graphene (which must detect this order and be able to reconstruct it). It is not clear to me whether it would be possible to efficiently parallelize sorting algorithms that reconstruct these orderings. The adoption of CTOR gives an easy solution to all this: there is only one ordering, so no extra ordering information needs to be appended. The ordering is recovered with a comparison sort, which parallelizes better than a topological sort. This should simplify the graphene codebase and it removes the need to start considering support for various optional ordering encodings.
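To see why ordering information dominates a graphene message for big blocks: encoding an arbitrary order of n transactions takes about log2(n!) bits, roughly n·log2(n) for large n, while the set-reconciliation part grows far more slowly. A quick illustrative calculation:

```python
# Approximate size of pure ordering information for a block of n
# transactions: log2(n!) bits, computed via lgamma to avoid huge factorials.
# Figures are illustrative lower bounds for an arbitrary-order encoding.

import math

def ordering_bytes(n_tx):
    bits = math.lgamma(n_tx + 1) / math.log(2)  # lgamma(n+1) = ln(n!)
    return int(bits / 8)

for n in (1_000, 10_000, 100_000):
    print(n, "txs ->", ordering_bytes(n), "bytes of ordering data")
# Under CTOR there is exactly one valid order, so this term drops to zero.
```

For a 100,000-transaction block this works out to well over 100 kB of ordering data alone, which is why fixing the order (CTOR) shrinks the message so dramatically.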
Reversibility and technical debt
Can the change to CTOR be undone at a later time? Yes and no. For block validators / block explorers that look over historical blocks, the removal of TTOR will permanently rule out usage of the standard serial processing algorithm. This is not really a problem (aside from the one-time annoyance), since OTI appears to be just as efficient in serial, and it parallelizes well. For anything that deals with new blocks (like graphene, network protocol, block builders for mining, new block validation), it is not a problem to change the ordering at a later date (to AOR / TTOR or back to CTOR again, or something else). These changes would add no long term technical debt, since they only involve new blocks. For past-block validation it can be retroactively declared that old blocks (older than a few months) have no ordering requirement.
Summary and outlook
Removing TTOR is the most disruptive part of the upgrade, as other block processing software needs to be updated in kind to handle transactions coming in any order. These changes are however quite small and they naturally convert the software into a form where concurrency is easy to introduce.
In the near term, TTOR / CTOR will show no significant performance differences for block validation. Note that right now, block validation is not the limiting factor in Bitcoin Cash, anyway.
In the medium term, software switching over to concurrent block processing will likely want to use an any-order algorithm (like OTI). Although some additional code may be needed to enforce ordering rules in concurrent validation, there will still be no performance differences between TTOR / AOR / CTOR.
In the very long term, it is perhaps possible that CTOR will show advantages for full nodes with sharded UTXO databases, if that ever becomes necessary. It's probably way too early to care about this.
With TTOR removed, the further addition of CTOR is actually a very minor change. It does not require any other ecosystem software to be updated (they don't care about order). Not only that, we aren't stuck with CTOR: the ordering can be quite easily changed again in the future, if need be.
The primary near-term improvement from the CTOR will be in allowing a simple and immediate enhancement of the graphene protocol. This impacts a scaling bottleneck that matters right now: block propagation. By avoiding the topic of complex voluntary ordering schemes, this will allow graphene developers to stop worrying about how to encode ordering, and focus on optimizing the mechanisms for set reconciliation.
Taking a broader view, graphene is not the magic bullet for network propagation. Even with the CTOR-improved graphene, we might not see vastly better performance right away. There is also work needed in the network layer to simply move the messages faster between nodes. In the last stress test, we also saw limitations on mempool performance (tx acceptance and relaying). I hope both of these fronts see optimizations before the next stress test, so that a fresh set of bottlenecks can be revealed.
If you don't really understand the block size issue, I know it's fun to troll but PLEASE actually take a minute and understand it beforehand. (Wall of text explaining block size)
PLEASE don't be like this guy, posting graphs demonstrating you don't understand what's actually happening with the block size issue. If you want to understand the block size issue, please listen. I sincerely want to help you and will try to explain things as best I can.

Step one is just to look at this graph. Just focus on the orange line and the blue line. What you should see is that the block size for BTC (orange line) has been pegged at 1 MB for a long, long time. This means that every processed BTC block is at 100% capacity.

What actually is a block? It's just a chunk of computer memory. However, ALL Bitcoin transactions, every time one person sends another person bitcoin, HAVE to fit into the current block (or at least eventually fit into one), and it's the job of the 'miners' to create new blocks of whatever that size is. This block creation takes on average 10 minutes, sometimes way more or less, but don't stress about that part. For Bitcoin BTC the size is 1 MB, which you saw from the graph. When a miner finds the right data to create a 'valid block', they record all the currently pending transactions into that block until it fills up. When it does, those transactions are considered valid. Got it so far? Ok.

Now, because there are so many transactions currently pending, people have to pay MUCH higher fees to ensure their transaction gets into that 1 MB block the miner just found, because of course if you pay more, the miner gets that fee for adding your transaction to the block. But, you tell yourself, "Aha, that means Bitcoin is really popular!" Yes, it is, really really popular in fact! BUT that means it's now so popular that it can't be used the way it was designed to be. When you hear about "the mempool", what you're hearing about is all those transactions which aren't getting into blocks, and are sitting around waiting. Why are they waiting? They didn't pay the current (higher and higher) fee needed to get into that little 1 MB block.
Some people want to send $25 to a family member. Would you pay the $10-20 now necessary for that? Probably not. Now, we know, the lightning network MAYBE will fix this issue someday. Unfortunately, some have been talking about it for literally years now. But you know what's proving right now that it works? Raising the block size, which is what Bitcoin Cash (BCH) does. Not to some obscene level, just 8 MB - the amount of RAM a computer had in 1995. And it was done because it was NECESSARY, and unfortunately, BTC is proving that with every full block produced.

Now look at the blue line in that graph, which shows BCH blocks. It is sometimes above this magic 1 MB level, sometimes almost 0. This is how a healthy coin is supposed to look: sometimes spiking up, but never pegged at its limit. Looking closer, does that mean BCH could have gotten away with a 2 or 3 MB block size? Probably, for now. But 8 MB was a safe bet, meaning no 'forks' or changes to the core code would be needed for the foreseeable future. If a big event happens (some country goes through major inflation, or BTC just tanks temporarily), the BCH blocks will have room to fit YOUR (and everyone else's) transactions without needing to charge an arm and a leg.

Another (more modern) analogy if you still want one: imagine you have a phone with 1 GB of RAM. After your OS, security software, etc., you have enough memory left to run ONE application: your banking app, your email client, or your web browser. Switching between them is PAINFUL and maybe sometimes it crashes completely. Now imagine you could get a (somehow cheaper) phone with 8 times the memory, where the same apps can run quickly and at the same time. This is because there is no need for virtual (on-disk) memory to kick in and slowly swap in the app you want to use (this is called paging but is beyond the scope of this example). Why would you keep using that 1 GB phone?
This is not a perfect analogy, but if you're trying to understand why people like BCH because of its on-chain scaling (putting all needed data into the blocks), that's the best analogy I could come up with. There are so many other issues to explain like segwit and so forth but this is already long enough. In closing, most of us do NOT hate BTC, we hate how BROKEN it is when the solution to fix it exists right now! If you've read this far, thank you for listening and I hope we can begin to have a productive dialog in the comments.
Bitcoin's market *price* is trying to rally, but it is currently constrained by Core/Blockstream's artificial *blocksize* limit. Chinese miners can only win big by following the market - not by following Core/Blockstream. The market will always win - either with or without the Chinese miners.
TL;DR: Chinese miners should think very, very carefully:
You can either choose to be pro-market and make bigger profits longer-term; or
You can be pro-Blockstream and make smaller profits short-term - and then you will lose everything long-term, when the market abandons Blockstream's crippled code and makes all your hardware worthless.
The market will always win - with or without you. The choice is yours.

UPDATE: The present post also inspired nullc (Greg Maxwell, CTO of Blockstream) to later send me two private messages. I posted my response to him here: https://np.reddit.com/btc/comments/4ir6xh/greg_maxwell_unullc_cto_of_blockstream_has_sent/

Details

If Chinese miners continue using artificially constrained code controlled by Core/Blockstream, then Bitcoin's price / adoption / volume will also be artificially constrained, and billions (eventually trillions) of dollars will naturally flow into some other coin which is not artificially constrained. The market always wins. The market will inevitably determine the blocksize and the price.

Core/Blockstream is temporarily succeeding in suppressing the blocksize (and the price), and Chinese miners are temporarily cooperating - for short-term, relatively small profits. But eventually, inevitably, billions (and later trillions) of dollars will naturally flow into the unconstrained, free-market coin. That winning, free-market coin can be Bitcoin - but only if Chinese miners remove the artificial 1 MB limit and install Bitcoin Classic and/or Bitcoin Unlimited.

Previous posts

There is not much new to say here - we've been making the same points for months. Below is a summary of the main arguments and earlier posts:
Miners should use the cryptographic code provided by those programmers. But miners should not use an arbitrary, artificial economic limit ("MAX_BLOCKSIZE = 1 000 000") unilaterally imposed by those programmers - who understand cryptography but do not understand economics.
Blockstream is planning to steal around 90% of miners' fees, by forcing most transactions off the blockchain, and onto an unproven, centralized, off-chain system called Lightning Network.
The Bilderberg Group (major investors behind Blockstream) may be motivated to suppress Bitcoin price and adoption in order to prevent it from becoming a major world currency, and in order to allow central bankers to continue to control the world by infinitely printing their debt-backed fiat.
Independent Bitcoin implementations such as Bitcoin Classic and Bitcoin Unlimited use 99% of the same tested and proven code as Core / Blockstream - but without artificial limits on blocksize.
Bitcoin is not the only cryptocurrency game in town. There are many competing cryptocurrencies. And there are billions (eventually trillions) of dollars waiting to flow into cryptocurrency. Investors will not invest in a crippled coin. The winning coin will be the coin which is free of artificial constraints.
Investors have billions of dollars (eventually trillions) waiting to flow into cryptocurrency. Investors are software-neutral. Investors only care about wealth preservation and profit.
A Bitcoin "spinoff" (based on Bitcoin's existing ledger, but using a different hashing algorithm, to exclude existing miners) can and will be launched, if miners continue to use Core/Blockstream's crippled code.
Because a "spinoff" uses a different hashing algorithm, it would destroy existing miners' millions of dollars in hardware investment.
But because a "spinoff" uses the existing ledger, it would also preserve investors' billions of dollars in wealth.
The market will eventually win - with or without Chinese miners. The market always wins.
If the Chinese miners follow the market, then they have a simple, guaranteed path towards increasing long-term profits due to continuing rise in Bitcoin price and on-chain transaction fees - using their existing hardware.
If miners follow Core / Blockstream / Bilderberg Group, they will lose potential profits in the short term (due to suppressed price), and they will lose everything in the long term (when investors massively move to another coin with another hashing algorithm).
Previous posts providing more details on these economic arguments are provided below:
This graph shows Bitcoin price and volume (ie, blocksize of transactions on the blockchain) rising hand-in-hand in 2011-2014. In 2015, Core/Blockstream tried to artificially freeze the blocksize - and artificially froze the price. Bitcoin Classic will allow volume - and price - to freely rise again.
Bitcoin has its own E = mc2 law: Market capitalization is proportional to the square of the number of transactions. But, since the number of transactions is proportional to the (actual) blocksize, then Blockstream's artificial blocksize limit is creating an artificial market capitalization limit!
The Nine Miners of China: "Core is a red herring. Miners have alternative code they can run today that will solve the problem. Choosing not to run it is their fault, and could leave them with warehouses full of expensive heating units and income paid in worthless coins." – tsontar
Just click on these historical blocksize graphs - all trending dangerously close to the 1 MB (1000KB) artificial limit. And then ask yourself: Would you hire a CTO / team whose Capacity Planning Roadmap from December 2015 officially stated: "The current capacity situation is no emergency" ?
Blockstream is now controlled by the Bilderberg Group - seriously! AXA Strategic Ventures, co-lead investor for Blockstream's $55 million financing round, is the investment arm of French insurance giant AXA Group - whose CEO Henri de Castries has been chairman of the Bilderberg Group since 2012.
Austin Hill [head of Blockstream] in meltdown mode, desperately sending out conflicting tweets: "Without Blockstream & devs, who will code?" -vs- "More than 80% contributors of bitcoin core are volunteers & not affiliated with us."
Be patient about Classic. It's already a "success" - in the sense that it has been tested, released, and deployed, with 1/6 nodes already accepting 2MB+ blocks. Now it can quietly wait in the wings, ready to be called into action on a moment's notice. And it probably will be - in 2016 (or 2017).
Classic will definitely hard-fork to 2MB, as needed, at any time before January 2018, 28 days after 75% of the hashpower deploys it. Plus it's already released. Core will maybe hard-fork to 2MB in July 2017, if code gets released & deployed. Which one is safer / more responsive / more guaranteed?
"Bitcoin Unlimited ... makes it more convenient for miners and nodes to adjust the blocksize cap settings through a GUI menu, so users don't have to mod the Core code themselves (like some do now). There would be no reliance on Core (or XT) to determine 'from on high' what the options are." - ZB
BitPay's Adaptive Block Size Limit is my favorite proposal. It's easy to explain, makes it easy for the miners to see that they have ultimate control over the size (as they always have), and takes control away from the developers. – Gavin Andresen
Core/Blockstream is not Bitcoin. In many ways, Core/Blockstream is actually similar to MtGox. Trusted & centralized... until they were totally exposed as incompetent & corrupt - and Bitcoin routed around the damage which they had caused.
Satoshi Nakamoto, October 04, 2010, 07:48:40 PM: "It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit. It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete."
Theymos: "Chain-forks [='hardforks'] are not inherently bad. If the network disagrees about a policy, a split is good. The better policy will win" ... "I disagree with the idea that changing the max block size is a violation of the 'Bitcoin currency guarantees'. Satoshi said it could be increased."
"They [Core/Blockstream] fear a hard fork will remove them from their dominant position." ... "Hard forks are 'dangerous' because they put the market in charge, and the market might vote against '[the] experts' [at Core/Blockstream]" - ForkiusMaximus
This ELI5 video (22 min.) shows XTreme Thinblocks saves 90% block propagation bandwidth, maintains decentralization (unlike the Fast Relay Network), avoids dropping transactions from the mempool, and can work with Weak Blocks. Classic, BU and XT nodes will support XTreme Thinblocks - Core will not.
4 weird facts about Adam Back: (1) He never contributed any code to Bitcoin. (2) His Twitter profile contains 2 lies. (3) He wasn't an early adopter, because he never thought Bitcoin would work. (4) He can't figure out how to make Lightning Network decentralized. So... why do people listen to him??
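Satoshi's phased-in upgrade from the 2010 quote a few lines above can be written out as a runnable toy in Python. The flag height (115000) comes from the quote itself; the 8MB value for the larger limit is our illustrative assumption, not a value Satoshi specified:

```python
# Runnable paraphrase of Satoshi's 2010 sketch for raising the block size cap.
# Flag height 115000 is from the quote; the 8MB value is an assumption.

OLD_LIMIT = 1_000_000      # the 1 MB consensus limit
LARGER_LIMIT = 8_000_000   # hypothetical new limit, phased in at the flag height

def max_block_size(block_number):
    """Old rule before the flag height, larger limit afterwards."""
    if block_number > 115_000:
        return LARGER_LIMIT
    return OLD_LIMIT

print(max_block_size(100_000))  # → 1000000: old limit still enforced
print(max_block_size(120_000))  # → 8000000: new limit in effect
```

Because the new rule ships in software releases well before the flag height, nodes that never upgrade are already obsolete by the time it activates, which is the point of the quote.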
Bitcoin, huh? WTF is going on? Should we scale you on-chain or off-chain? Will you stay decentralized, distributed, immutable?
0. Shit, this is long, TLWR please! Too long, won't read. (EDIT: changed TLDR to TLWR for clarity.)
Bitcoin is a decentralized, distributed, immutable network. It has users, nodes, and miners, all of which participate in building a public and pseudonymous ledger of blocks called blockchain. The blockchain requires its own currency to function and this currency is called Bitcoin.
The bitcoin network is going through growing pains. Some believe that it should be scaled on-chain with high-volume-low-cost transaction fees, whereas others believe that it has to be scaled off-chain with low-volume-high-cost on-chain transaction fees and more affordable second layer solutions. Each approach has relative advantages and disadvantages. A compromise has not been reached yet.
The off-chain scaling solution via Bitcoin Core SegWit’s lightning network diminishes distributed and immutable network properties. It replaces bitcoin’s peer-to-peer network with a two-layer institution-to-institution network and peer-to-hub-to-peer second layer solution.
The on-chain scaling solution via Bitcoin Cash’s increased block size limit is feasible at the moment but inefficient in the long run. It could be merged with several good concepts from the lightning network proposal and new ideas outlined in this overview.
An appropriate scaling analogy is to recall email attachments early on. They too were limited to a few MB at first, then 10MB, 20MB, up until 25MB on Gmail. But even then, Gmail eventually started using Google Drive internally.
Similarly, any second layer solutions should be integrated within the existing bitcoin network secured by miners and nodes. The revenue from any second layer solutions should be redistributed internally to miners and nodes, not to additional third party hubs which the lightning network envisions.
The author of this overview recommends on-chain scaling for the time being, with the understanding that off-chain scaling should be implemented as soon as possible, as long as these second layer solutions keep the bitcoin network peer-to-peer and decentralized, distributed, immutable. Unfortunately, the lightning network does not accomplish this.
The author remains impartial to Bitcoin Core and Bitcoin Cash proposals, with a preference for Bitcoin Cash’s way of handling immutability and overall progress thus far.
1. Bitcoin, huh? Brief introduction.

There are 3 sections to this overview. The first section is a brief introduction to bitcoin. The second section looks at recent developments in the bitcoin world through the analogy of email attachments, and the third section discusses what could be next, from the perspective of resilience and network security.

This is just a continuation of a long, long, possibly never-ending debate that started with the release of the bitcoin whitepaper in 2008 (see https://bitcoin.org/bitcoin.pdf). The mess of the past few years boils down to the controversy over the block size limit and how to appropriately scale bitcoin, the keyword being appropriately. Scaling bitcoin is a controversial debate with valid arguments from all sides (see https://en.bitcoin.it/wiki/Block_size_limit_controversy).

I have researched, studied, and written this overview as objectively and as impartially as possible. By all means, this is still an opinion, and everyone is advised to draw their own conclusions. My hope is to make at least a few readers aware that ultimately there is only one team, and that team is team bitcoin. Yes, currently there are factions within team bitcoin, but I hope that we can get beyond partisan fights and work together for the best bitcoin. I support all scaling proposals as long as they are the best for the given moment in time. Personally, I hate propaganda and love free speech, as long as it is not derogatory and as long as it allows for constructive discussion.

The goal of this overview is to explain to a novice how the bitcoin network works, what has been keeping many bitcoin enthusiasts concerned, and whether we can keep the bitcoin network with its three main properties: decentralized, distributed, immutable. Immutable means censorship resistant.
For the distinction between decentralized and distributed, refer to Figure 1: Centralized, decentralized and distributed network models by Paul Baran (1964), a RAND Institute study on creating a robust and nonlinear military communication network (see https://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM3420.pdf). Note that for overall network resilience and security, distributed is more desirable than decentralized, and the goal is to get as far away from centralized models as possible. Of course, nothing is strictly decentralized or strictly distributed, and all network elements sit at different points on this spectrum.

For those unaware of how bitcoin works, I recommend the Bitcoin Wikipedia (see https://en.bitcoin.it/wiki/Main_Page). In short, the bitcoin network includes: users, who make bitcoin transactions and send them to the network memory pool called the mempool; nodes, which store the public and pseudonymous ledger called the blockchain and which help with receiving pending transactions and updating processed transactions, thus securing the overall network; and miners, who also secure the bitcoin network by mining. Mining is the process of confirming pending bitcoin transactions, clearing them from the mempool, and adding them to blocks which build up the consecutive chain of blocks on the blockchain.

The blockchain is therefore a decentralized and distributed ledger built on top of bitcoin transactions, and thus impossible to exist without bitcoin. If someone claims to be working on their own blockchain without bitcoin, then by the definition of the bitcoin network they are not talking about the actual blockchain. Instead, they intend to own a different kind of private database made to look like the public and pseudonymous blockchain ledger.
There are roughly a couple of dozen mining pools, each possibly with hundreds or thousands of miners participating in them, alongside several thousand nodes (see https://blockchain.info/pools and https://coin.dance/nodes). Therefore, the bitcoin network has at worst decentralized miners and at best distributed nodes. The miner and node design makes the blockchain resilient and immune to retroactive changes, making it censorship resistant, thus immutable. The bitcoin blockchain removes the previous need for a trusted third party. This is a very elegant solution to peer-to-peer financial exchange via a network that is all three: decentralized, distributed, immutable. Extra features (escrow, reversibility via time-locks, and other features desirable in specific instances) can be integrated within the network or added on top of it; however, they have not been implemented yet.

Miners who participate receive a mining reward consisting of newly mined bitcoins at a predetermined deflationary rate, plus the transaction fees from the bitcoin transactions being processed. It is estimated that in 2022, miners will have mined more than 90% of all 21 million bitcoins ever to be mined (see https://en.bitcoin.it/wiki/Controlled_supply). As the reward from newly mined blocks diminishes to absolute zero in 2140, the network eventually needs transaction fees to become the main component of the reward. This can happen either via high-volume-low-cost transaction fees or low-volume-high-cost transaction fees. Obviously, the question of fees must be addressed when deciding how to scale bitcoin. Which type of fees would you prefer, and under which circumstances?

2. WTF is going on? Recent developments.

There are multiple sides to the scaling debate, but to simplify, first consider the 2 main poles. In particular, to scale bitcoin on the blockchain or to scale it off it, that is the question! The first side likes the idea of bitcoin as it has been until now.
It prefers on-chain scaling as envisioned by the bitcoin creator, or group of creators, who chose the pseudonym Satoshi Nakamoto. It is now called Bitcoin Cash and somewhat religiously follows Satoshi's vision from the 2008 whitepaper and their later public forum discussions (see https://bitcointalk.org/index.php?topic=1347.msg15366#msg15366). The creators' vision is good to follow, but it should not be followed blindly and dogmatically when better advancements are possible, the keyword being when.

To alleviate the concerning backlog of transactions and rising fees, Bitcoin Cash proponents implemented a simple one-line code update which increased the block size limit for blockchain blocks from 1MB to a new, larger 8MB limit. This was done through a fork on August 1, 2017, which created Bitcoin Cash and which kept the bitcoin transaction history up to that point. Bitcoin Cash has seen a significant increase in support, from 3% of all bitcoin miners at first to over 44% after 3 weeks, on August 22, 2017 (see http://fork.lol/pow/hashrate and http://fork.lol/pow/hashrateabs).

An appropriate scaling analogy is to recall email attachments early on. They too were limited to a few MB at first, then 10MB, 20MB, up until 25MB on Gmail. But even then, Gmail eventually started using Google Drive internally. Note that Google Drive is a third party to Gmail, although, yes, it is managed by the same entity.

The second side argues that bitcoin cannot work with such a scaling approach of premeditated MB increases. Arguments against block size increases include miner and node centralization, and bandwidth limitations. These are discussed in more detail in the third section of this overview. As an example of an alternative scaling approach, proponents of off-chain scaling want to jump to the internally integrated third party right away, without any MB increase and, sadly, without any discussion.
Some of these proponents called one particular implementation method SegWit, which stands for Segregated Witness, and they argue that SegWit is the only way we can ever scale up and add the extra features to the bitcoin network. This is not necessarily true: other scaling solutions are feasible, such as the already functioning Bitcoin Cash, and SegWit's proposed solution will not use an internally integrated third party, as shown next. Note that although not as elegant as SegWit is today, there are other possibilities to integrate some extra features without SegWit (see /Bitcoin/comments/5dt8tz/confused_is_segwit_needed_for_lightning_network).

Due to the scaling controversy, the current backlog of transactions, and the already high fees, a third side hastily proposed a compromise: a 2MB increase in addition to the proposed SegWit implementation. They called it SegWit2x, which stands for Segregated Witness with a 2MB block size limit increase. But the on-chain scaling and Bitcoin Cash proponents did not accept it, due to SegWit's design redundancy and hub centralization, which are discussed next and revisited in the third section of this overview. After a few years of deadlock, that is why the first side broke free and created the Bitcoin Cash fork.

The second side stuck with bitcoin as it was. In a way, they inherited the bitcoin network without any major change visible to the public eye. This is crucial, because major changes are about to happen, and the original bitcoin vision as we have known it is truly reflected only in what some media refer to as a forked clone, Bitcoin Cash. Note that to avoid confusion, this second side is referred to as Bitcoin Core by some or Legacy Bitcoin by others, although mainstream media still refers to it simply as Bitcoin. The core of Bitcoin Core is quite hardcore, though.
They too rejected the proposed compromise of SegWit2x, and there are clear indications that they will push to keep SegWit only, forcing the third side, the SegWit2x proponents, to create another fork in November 2017 or to join Bitcoin Cash. Note that to a certain degree, the already implemented and working Bitcoin Cash is technically superior to SegWit2x, which is yet to be deployed (see /Bitcoin/comments/6v0gll/why_segwit2x_b2x_is_technically_inferior_to).

Interestingly enough, those who agreed to SegWit2x have been in the overwhelming majority: nearly 87% of all bitcoin miners on July 31, 2017, prior to the fork, and a little over 90% of the remaining Bitcoin Core miners to date, after the fork (see https://coin.dance/blocks). Despite such staggering support, another Bitcoin Core fork is anticipated in November (see https://cointelegraph.com/news/bitcoin-is-splitting-once-again-are-you-ready), and "Outcome #2: Segwit2x reneges on 2x or does not prioritize on-chain scaling" seems to be on track from the perspective of Bitcoin Core SegWit, publicly seen as the original Bitcoin (see https://blog.bridge21.io/before-and-after-the-great-bitcoin-fork-17d2aad5d512).

The sad part is that although they are in the overwhelming majority, the miners who support SegWit2x would be the ones creating another fork or parting ways from the original Bitcoin. In a way, this is an ironic example of how bitcoin's built-in resilience to vetoed changes causes the majority to part ways when a small minority holds the status quo and blocks broadly consented progress. Ultimately, this will give the minority Bitcoin Core SegWit proponents the original Bitcoin branding, perhaps to lure in large institutional investors and monetize bitcoin's success as we have seen it during the past 9 years since its inception. Recall that bitcoin today is already a decentralized, distributed, immutable network by definition.
The bitcoin network was designed to be an alternative to the centralized and mutable institutions so prevalent in modern capitalist societies. The Bitcoin Core SegWit group wants to change the existing bitcoin network into a network with dominant third parties which, unlike Google Drive to Gmail, are not internal. In particular, they intend to do so via the lightning network, a second layer solution (2L).

This particular 2L, as currently designed, relies on an artificial block size cap which creates a bottleneck in order to provide high incentives for miners to participate. It monetizes the backlog of transactions and high fees, which are allocated to miners, not to any group in particular. Cheaper and more instantaneous transactions are shifted to the lightning network, which is operated by hubs that also earn revenue. Note that some of these hubs may choose to monitor transactions and can possibly censor who is allowed to participate in this no longer strictly peer-to-peer network. We lose immutability, and instead we have a peer-to-hub-to-peer network that is mutable and at best decentralized, and certainly not distributed (see https://medium.com/@jonaldfyookball/mathematical-proof-that-the-lightning-network-cannot-be-a-decentralized-bitcoin-scaling-solution-1b8147650800).

For regular day-to-day and recurring transactions, this is not a considerable risk or inconvenience, and one could choose to use the main chain at any time to bypass the lightning network and truly transact peer-to-peer. But since the main chain has an entry barrier in the form of artificially instilled high transaction fees, common people are not able to use bitcoin as we have known it until now. Peer-to-peer bitcoin becomes institution-to-institution bitcoin with a peer-to-hub-to-peer 2L. To reiterate and stress, note the following lightning network design flaw again.
Yes, activating SegWit and allowing 2L such as lightning lets lower transaction fees coexist side by side with more costly on-chain transactions. For those using this particular prescribed 2L, the fees remain low. But since these 2L are managed by hubs, we introduce another element to trust, which is contrary to what the bitcoin network was designed to do in the first place. Over time, by the nature of the lightning network in its current design, these third-party hubs grow to be centralized, just like Visa, Mastercard, Amex, Discover, etc. There is nothing wrong with that in general, because it works just fine. But recall that bitcoin set out to create a different kind of network. Instead of a decentralized, distributed, immutable network with miners and nodes, with the lightning network we end up with an at best decentralized but mutable network with hubs.

Note that Bitcoin Core SegWit has a US-based organization backing it with millions of dollars (see https://en.wikipedia.org/wiki/Blockstream and https://steemit.com/bitcoin/@adambalm/the-truth-about-who-is-behind-blockstream-and-segwit-as-the-saying-goes-follow-the-money). Their proponents are quite political, and some even imply $1000 fees on the main bitcoin blockchain (see https://cointelegraph.com/news/ari-paul-tuur-demeester-look-forward-to-up-to-1k-bitcoin-fees). Contrary to them, Bitcoin Cash proponents intend to keep fees small, on a scale of a few cents, which in large volume in larger blocks provides sufficient incentive for miners to participate.

On the one hand, sticking to the original vision of a peer-to-peer network scaled on-chain has merit and holds potential for future value. On the other hand, 2L have the potential to carry leaps forward from the current financial infrastructure. As mentioned earlier, 2L will allow extra features to be integrated off-chain (e.g.
escrow, reversibility via time-locks), including entirely new features such as smart contracts and decentralized applications, some of which have been pioneered and tested on another cryptocurrency network called Ethereum. But such features could one day be implemented directly on the main bitcoin blockchain without the lightning network as currently designed, or perhaps with a truly integrated 2L as proposed in the third section of this overview.

What makes the whole discussion even more confusing is that there are some proposals for specific 2L that would in fact increase privacy and make bitcoin transactions more anonymous than those on the current bitcoin blockchain. Keep in mind that 2L are not necessarily undesirable. If they add features and keep the main network characteristics (decentralized, distributed, immutable), they should be embraced with open arms. But the lightning network as currently designed gives up immutability, and its hub centralization moves the network characteristics towards a decentralized rather than a distributed network.

In a sense, back to the initial email attachment analogy: even Gmail stopped increasing the attachment limit and started hosting large files on Google Drive internally, with an embedded link in a Gmail email to download anything larger than 25MB from Google Drive. Anticipating the same scaling decisions, the question then becomes not if but when and how such 2L should be implemented, keeping the overall network security and network characteristics in mind. If you have not gotten it yet, repeat, repeat, repeat: decentralized, distributed, immutable. Is it the right time now, and is SegWit (one way, my way or the highway) truly the best solution?
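The peer-to-hub-to-peer concern described above can be made concrete with a toy channel graph. This is not the lightning protocol itself, just a routing illustration; the topology and participant names are entirely made up:

```python
# Toy illustration of the peer-to-hub-to-peer concern: if payment channels are
# only opened with a few large hubs, every route between ordinary peers passes
# through a hub that could monitor or censor the payment.

from collections import deque

# Hypothetical channel graph: peers connect to hubs, not to each other.
channels = {
    "alice": ["hub1"],
    "bob": ["hub2"],
    "carol": ["hub1"],
    "hub1": ["alice", "carol", "hub2"],
    "hub2": ["bob", "hub1"],
}

def route(src, dst):
    """Breadth-first search for a payment path through the channel graph."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in channels.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("alice", "bob"))  # → ['alice', 'hub1', 'hub2', 'bob']
```

In this made-up topology, no path between two ordinary peers avoids a hub, which is the centralization worry: the hubs sit in the middle of every payment, unlike on-chain peer-to-peer transactions.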
Those siding away from Bitcoin Core SegWit also dislike that corporate entities behind Blockstream, the one publicly known corporate entity directly supporting SegWit, have allegedly applied for SegWit patents which may further restrict who may and who may not participate in the creation of future hubs, or how these hubs are controlled (see the alleged patent revelations, https://falkvinge.net/2017/05/01/blockstream-patents-segwit-makes-pieces-fall-place, the subsequent Twitter rebuttal by the Blockstream CEO, http://bitcoinist.com/adam-back-no-patents-segwit, and the subsequent legal threats to SegWit2x proponents, /btc/comments/6vadfi/blockstream_threatening_legal_action_against). Regardless of whether the patent claims are precise or not, the fact remains that there is a corporate entity dictating and vetoing bitcoin developments. Objectively speaking, having the Bitcoin Core SegWit developers paid by Blockstream amounts to a corporate takeover of the bitcoin network as we have known it.

On the topic of patents and permissionless technological innovation, what makes all of this even more complicated is that a mining improvement technology called ASICboost is allowed on Bitcoin Cash. The main entities who forked from Bitcoin Core to form Bitcoin Cash had taken advantage of patents on the ASICboost technology on the original bitcoin network prior to the fork (see https://bitcoinmagazine.com/articles/breaking-down-bitcoins-asicboost-scandal). This boost saved an estimated 20% of electricity for miners on 1MB blocks and created an unfair economic advantage for this one particular party. SegWit is one way this boost is being eliminated, through the code. Larger blocks are another way to reduce the boost advantage, via a decreased rate of the collisions which make the boost possible in the first place (see https://bitcoinmagazine.com/articles/breaking-down-bitcoins-asicboost-scandal-solutions and https://bitslog.wordpress.com/2017/04/10/the-relation-between-segwit-and-asicboost-covert-and-overt).
Therefore, the initial Bitcoin Cash proponents argue that eliminating ASICboost through the code is no longer needed or necessary. Of course, saving any amount of electricity between 0% and 20% is good for everyone on our planet, but in reality any energy saved in a mining operation is used by the same operation to increase its mining capacity. In reality, there are no savings, just capacity redistribution. The question then becomes whether it is okay that only one party currently holds this advantage, which they covertly hid for a relatively long time, and which they could be using covertly on Bitcoin Cash if they desired, even though it would be an advantage to a smaller degree. To be fair to them, they are mining manufacturers and operators; they researched and developed the advantage from their own resources, so perhaps they do indeed have the right to reap the ASICboost benefits while they can. But perhaps it should happen in a publicly known way, not behind closed doors, and it should be temporary, with an agreed patent release date.

In conclusion, there is no purely good and no purely bad actor; each side is its own shade of grey. All parties have their own truth (and villainy) to a certain degree. Bitcoin Cash's vision is for bitcoin to be an electronic cash platform and daily payment processor, whereas Bitcoin Core SegWit seems drawn more to the idea of bitcoin as an investment vehicle and a larger settlement layer, with the payment processor function managed via at best decentralized third-party hubs. Both can coexist, or either one can eventually prove more useful and digest the other by taking over all use-cases.

Additionally, the most popular communication channel, /bitcoin with roughly 300k subscribers, censors any alternative non-Bitcoin-Core-SegWit opinions and bans people from posting their ideas to discussions (see https://medium.com/@johnblocke/a-brief-and-incomplete-history-of-censorship-in-r-bitcoin-c85a290fe43).
This is because their moderators are also supported by Blockstream. Note that the author of this overview has not been banned from this particular subreddit (yet), but has experienced shadow-banning first hand. Shadow-banning is a form of censorship. In this particular case, their moderator bot, working collaboratively with the human moderators, does the following:
(1) look for "Bitcoin Cash" and other undesirable keywords,
(2) warn authors that “Bitcoin Cash” is not true bitcoin (which objectively speaking it is, and which is by no means “BCash” that Bitcoin Core SegWit proponents refer to, in a coordinated effort to further confuse the public, especially since some of them have published plans to officially release another cryptocurrency called “BCash” in 2018, see https://medium.com/@freetrade68/announcing-bcash-8b938329eaeb),
(3) further warn authors that if they try to post such opinions again, they could be banned permanently,
(4) tell authors to delete their already posted posts or comments,
(5) hide their posts from the publicly visible boards with all other posts, thus preventing them from being seen by the other participants in this roughly 300k-subscriber public forum.
This effectively silences dissenting opinions and creates a dangerous echo-chamber. Suppressing free speech and artificially inflating transaction fees on Bitcoin Core SegWit is against bitcoin’s fundamental values. Therefore, instead of the original Reddit communication channel, many bitcoin enthusiasts migrated to /btc, which has roughly 60k subscribers as of now, up from 20k subscribers a year ago in August 2016 (see http://redditmetrics.com/btc). Moderators there do not censor opinions and allow all polite and civil discussion about scaling, including all opinions on Bitcoin Cash, Bitcoin Core, etc. Looking beyond their respective leaderships and communication channels, let us review a few network fundamentals and recent developments in the Bitcoin Core and Bitcoin Cash networks. For now, these present Bitcoin Cash with more favorable long-term prospects.
(1) The stress-test and/or attack on the Bitcoin Cash mempool earlier on August 16, 2017 showed that 8MB blocks do work as intended, without the catastrophic complications that Bitcoin Core proponents anticipated and from which they attempted to discourage others (see https://jochen-hoenicke.de/queue/uahf/#2w for the Bitcoin Cash mempool and https://core.jochen-hoenicke.de/queue/#2w for the Bitcoin Core mempool). Note that when compared to the Bitcoin Core mempool on their respective 2-week views, one can observe how each network handles backlogs. On the most recent 2-week graphs, the Y-scale for Bitcoin Core is 110k vs. 90k on Bitcoin Cash. In other words, at the moment, Bitcoin Cash works better than Bitcoin Core even though there is clearly not as much demand for Bitcoin Cash as there is for Bitcoin Core. The lack of demand is partly because Bitcoin Cash is only 3 weeks old: not many merchants have started accepting it, and only a limited number of software applications for using Bitcoin Cash have been released so far. By all means, the Bitcoin Cash stress-test and/or attack from August 16, 2017 shows that the supply can handle the increased demand, more affordably, and at a much quicker rate.
(2) Bitcoin Cash “BCH” mining has become temporarily more profitable than mining Bitcoin Core “BTC” (see http://fork.lol). Besides the temporary loss of miners, this puts Bitcoin Core in danger of miners leaving permanently. Consequently, the mempool backlog and transaction fees are anticipated to increase further.
(3) When compared to Bitcoin Cash transaction fees at roughly $0.02, transaction fees per kB are over 800 times as expensive on Bitcoin Core, currently at over $16 (see https://cashvscore.com).
(4) A tipping service that used to work on Bitcoin Core's /Bitcoin a few years back has been revived by a new tipping service piloted on the more neutral /btc, with Bitcoin Cash integration (see /cashtipperbot).
3. Should we scale on-chain or off-chain? Scaling bitcoin. Let us start with the notion that we are impartial to both the Bitcoin Core (small blocks, off-chain scaling only) and Bitcoin Cash (big blocks, on-chain scaling only) schools of thought. We will support any or all ideas, as long as they allow bitcoin to grow organically and eventually succeed as a peer-to-peer network that remains decentralized, distributed, and immutable. Should we have a preference in either of the proposed scaling solutions? First, let us briefly address Bitcoin Core and small blocks again. From the second section of this overview, we understand that there are proposed off-chain scaling methods via second layer solutions (2L), most notably the soon-to-be-implemented lightning network via SegWit on Bitcoin Core. Unfortunately, the lightning network diminishes the distributed and immutable network properties by replacing bitcoin’s peer-to-peer network with a two-layer institution-to-institution network and peer-to-hub-to-peer 2L. Do we need this particular 2L right now? Is its complexity truly needed? Is it not at best somewhat cumbersome (if not very redundant)? In addition to the ridiculously high on-chain transaction fees illustrated in the earlier section, the lightning network code is perhaps more complex than it needs to be for now, with thousands of lines of code, thus possibly opening up new vectors for bugs or attacks (see https://en.bitcoin.it/wiki/Lightning_Network and https://github.com/lightningnetwork/lnd). Additionally, this particular 2L as currently designed unnecessarily introduces third parties, hubs, that are expected to centralize. We already have working code that has been tested and proven to handle 8MB blocks, as seen with Bitcoin Cash on August 16, 2017 (see https://www.cryptocoinsnews.com/first-8mb-bitcoin-cash-block-just-mined). At best, these third party hubs would be decentralized, but they would not be distributed.
And these hubs would be by no means integral to the original bitcoin network of users, nodes, and miners. To paraphrase Occam’s razor problem-solving principle, the simplest solution with the most desirable features will prevail (see https://en.wikipedia.org/wiki/Occam%27s_razor). The simplest scalability solution today is Bitcoin Cash because it updates only one line of code, which instantly increases the block size limit. This also allows other companies building on Bitcoin Cash to reduce their codebases when compared to Bitcoin Core SegWit’s longer code, some even claiming ten-fold reductions (see /btc/comments/6vdm7y/ryan_x_charles_reveals_bcc_plan). The bitcoin ecosystem includes not only the network but also the companies building services on top of it. When these companies can reduce their vectors for bugs or attacks, the entire ecosystem is healthier and more resilient to hacking disasters. Obviously, changes to the bitcoin network code should be as few and as elegant as possible. But what are the long-term implications of doing the one-line update repeatedly? Eventually, blocks would have to exceed 500MB in size if they were to process Visa-level capacity (see https://en.bitcoin.it/wiki/Scalability). With decreasing costs of IT infrastructure, bandwidth and storage could accommodate it, but the overhead costs would increase significantly, implying the miner and/or full node centralization further discussed next. To decrease this particular centralization risk, which some consider undesirable and others consider irrelevant, built-in and integrated 2L could keep the block size at a reasonably small-yet-still-large limit. At first sight, these 2L would remedy the risk of centralization by creating their own centralization incentive. On closer look, applying Occam’s razor again, these 2L do not have to become revenue-seeking third party hubs as designed with the current lightning network.
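The 500MB figure above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes roughly 2,000 Visa transactions per second on average, ~250 bytes per typical bitcoin transaction, and one block every 10 minutes; all three inputs are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope check of the "over 500MB" claim.
# Assumed inputs: ~2,000 tx/s Visa average load, ~250 bytes per
# typical transaction, one block every 600 seconds on average.
TX_PER_SEC = 2_000
BYTES_PER_TX = 250
BLOCK_INTERVAL_SEC = 600

block_bytes = TX_PER_SEC * BYTES_PER_TX * BLOCK_INTERVAL_SEC
block_mb = block_bytes / 1_000_000
print(f"Required block size: {block_mb:.0f} MB")  # 300 MB at average load
```

At assumed Visa peak loads of roughly double the average, the same arithmetic gives around 600 MB per block, consistent with blocks having to exceed 500MB.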
They can be integrated into the current bitcoin network with at worst decentralized miners and at best distributed nodes. Recall that miners will eventually need to supplement their diminishing mining reward from new blocks. Additionally, as of today, nodes have no built-in economic incentive to run other than securing the network and keeping the network’s overall value at its current level. Therefore, if new 2L were to be developed, they should be designed in a similar way to the lightning network, with the difference that the transaction processing revenue would go not to third party hubs but to the already integrated miners and nodes. In other words, why do we need extra hubs if we have miners and nodes already? Let us take the good elements from the lightning network, forget the unnecessary hubs, and focus on integrating the hubs’ responsibilities into the already existing miner and node protocols. Why would we add extra elements to a system that already functions with the minimum number of elements possible? Hence, 2L are not necessarily undesirable as long as they do not unnecessarily introduce third party hubs. Lastly, let us discuss partial on-chain scaling with the overall goal of network security. The network security we seek is immutability and resilience via distributed elements within an otherwise decentralized and distributed network. It is not inconceivable to scale bitcoin with bigger blocks as needed, when needed, to a certain degree. The thought process is the following:
(1) Block size limit:
We need some upper limit to avoid bloating the network with spam transactions. Okay, that makes sense. Now, what should this limit be? If we agree to disagree with small block size limit stuck at 1MB, and if we are fine with flexible block size limit increases (inspired by mining difficulty readjustments but on a longer time scale) or big block propositions (to be increased incrementally), what is holding us off next?
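The "flexible block size limit" idea above can be sketched in a few lines. Everything here is a hypothetical illustration, not an actual BIP: the readjustment period mirrors the difficulty window, and the headroom and growth-cap constants are invented for the example.

```python
# Hypothetical sketch of a flexible block size limit that readjusts
# every 2016 blocks (like difficulty), based on the median size of
# the blocks in the previous period. All names and constants are
# illustrative assumptions, not an actual protocol rule.
from statistics import median

PERIOD = 2016                 # blocks per readjustment window
FLOOR = 1_000_000             # never drop below 1 MB
HEADROOM = 2.0                # limit = 2x the median observed size
MAX_GROWTH = 1.5              # cap growth to 1.5x per period

def next_limit(current_limit: int, recent_sizes: list[int]) -> int:
    """Return the block size limit (bytes) for the next period."""
    target = int(HEADROOM * median(recent_sizes))
    capped = min(target, int(current_limit * MAX_GROWTH))
    return max(FLOOR, capped)

# Example: blocks averaging ~900 kB under a 1 MB limit
sizes = [900_000] * PERIOD
print(next_limit(1_000_000, sizes))  # 1500000 -> limit grows to 1.5 MB
```

The growth cap plays the same role as the 4x clamp in difficulty readjustment: it keeps a single anomalous period from moving the limit too far at once.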
(2) Miner centralization:
Bigger blocks mean that more data will be transferred on the bitcoin network. Consequently, more bandwidth and data storage will be required. This will create decentralized miners instead of distributed ones. Yes, that is true. And it has already happened, due to economies of scale, in particular the efficiency of grouping multiple miners in centralized facilities and the creation of mining pools that collectively and virtually connect groups of miners not physically present in the same facility. These facilities tend to have huge overhead costs, and the increases in data storage and bandwidth costs are negligible in this context. The individual miners participating in mining pools will quite likely notice somewhat higher operational costs, but allowing for additional revenue from the integrated 2L described earlier will give them an economic incentive to remain active participants. Note that mining was never supposed to be strictly distributed; it was always at worst decentralized, as defined in the first section of this overview. To assure an at best distributed network, we have nodes.
(3) Node centralization:
Bigger blocks mean that more data will be transferred on the bitcoin network. Consequently, more bandwidth and data storage will be required. This will create decentralized nodes instead of distributed ones. Again, recall that we have a spectrum of decentralized and distributed networks in mind, not their absolutes. The concern about node centralization (and the subsequent shift from a distributed to a decentralized network property) is valid if we only follow on-chain scaling to inconsiderately large MB values. If addressed with the proposed integrated 2L that provides previously unseen economic incentives to participate in the network, this concern is less serious. Furthermore, other methods to reduce bandwidth and storage needs can be used. A popular proposal is block pruning, which keeps only the most recent 550 blocks and eventually deletes older blocks (see https://news.bitcoin.com/pros-and-cons-on-bitcoin-block-pruning). Block pruning addresses storage needs and ensures that not all nodes participating in the bitcoin network have to store all transactions ever recorded on the blockchain. Some nodes storing all transactions are still necessary; they are called full nodes. Block pruning does not eliminate full nodes, but it does provide an economic incentive for their reduction and centralization (i.e. saving on storage costs). Properly designed 2L should therefore provide economic incentives for all nodes (full and pruned) to remain active and distributed. As of now, only miners earn revenue for participating. The lightning network proposes extra revenue for hubs. Instead, miner revenue could increase by processing 2L transactions as well, and full nodes could earn revenue too.
To mine, relatively high startup costs are necessary in order to get the most up-to-date mining hardware and proper cooling equipment. These have to be maintained and periodically upgraded. To run a full node, one needs only stable bandwidth and sufficiently large storage, which can be expanded as needed, when needed. To run a pruned node, one needs only stable bandwidth and relatively small storage, which does not need to be expanded. Keeping the distributed characteristic in mind, it would be much more secure for the bitcoin network if one could earn bitcoin by simply running a node, full or pruned. This could be integrated with a simple code change requiring each node to own a bitcoin address to which miners would send a fraction of processed transaction fees. Of course, pruned nodes would collectively receive the smallest transaction fee revenue (e.g. 10%), full nodes would collectively receive relatively larger transaction fee revenue (e.g. 20%), whereas mining facilities or mining pools would individually receive the largest transaction fee revenue (e.g. 70%) in addition to the full mining reward from newly mined blocks (i.e. 100%). This would assure that all nodes remain relatively distributed. Hence, block pruning is a feasible solution. However, in order to start pruning, one would have to have the full blockchain to begin with. As currently designed, downloading the blockchain for the first time also audits previous blocks for accuracy; this can take days depending on one’s bandwidth. This online method is the only way to distribute the bitcoin blockchain and the bitcoin network so far. When the size of the blockchain becomes a concern, a simpler distribution method could be implemented offline. Consider distributions of Linux-based operating systems on USBs. Similarly, the full bitcoin blockchain up to a certain point could be distributed via easy-to-mail USBs.
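The 10%/20%/70% fee split proposed above can be sketched as follows. The split values come from the text; the function name, payout granularity, and example node counts are assumptions for illustration only.

```python
# Illustrative sketch of the proposed fee split: miners 70%,
# full nodes 20% collectively, pruned nodes 10% collectively.
# Node counts and the per-node payout scheme are invented here.
MINER_SHARE, FULL_NODE_SHARE, PRUNED_NODE_SHARE = 0.70, 0.20, 0.10

def split_fees(total_fees_sat: int, n_full: int, n_pruned: int):
    """Return (miner, per-full-node, per-pruned-node) payouts in satoshis."""
    miner = int(total_fees_sat * MINER_SHARE)                  # block's miner/pool
    full_each = int(total_fees_sat * FULL_NODE_SHARE) // n_full
    pruned_each = int(total_fees_sat * PRUNED_NODE_SHARE) // n_pruned
    return miner, full_each, pruned_each

# 1 BTC of fees in a block, 5,000 full nodes, 50,000 pruned nodes
print(split_fees(100_000_000, 5_000, 50_000))  # (70000000, 4000, 200)
```

Note how the collective node shares dilute as node counts grow, which is exactly the property that would keep many small nodes participating without letting any single one capture outsized revenue.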
Note that even if we were to get the blockchain in bulk on such a USB, some form of block audit would still have to happen. A new form of checkpoint hashes could be added to the bitcoin code. For instance, every 2016 blocks (whenever the difficulty readjusts), the IDs of the previous 2015 blocks would be hashed and recorded. That way, with this particular offline blockchain distribution, the first-time user would have to audit only the key 2016th blocks, which occur on average once in roughly 2 weeks. This would significantly reduce bandwidth concerns for the auditing process because only every 2016th block would have to be fetched online to be audited. Overall, we are able to scale the bitcoin network via initial on-chain scaling approaches supplemented with off-chain scaling approaches. This upgrades the current network to a pruned peer-to-peer network with integrated 2L managed by miners and nodes who assure that the bitcoin network stays decentralized, distributed, and immutable.
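The checkpoint idea just described can be sketched as a hash over the block IDs of one readjustment window. This is a hypothetical illustration of the proposal, not part of the actual Bitcoin protocol; the function name and hashing scheme are assumptions.

```python
# Sketch of the proposed checkpoint: every 2016 blocks (one
# difficulty window), hash the IDs of the preceding 2015 blocks
# into a single checkpoint value. Illustrative only.
import hashlib

PERIOD = 2016

def checkpoint_hash(block_ids: list[str]) -> str:
    """Hash the 2015 hex block IDs preceding a difficulty readjustment."""
    assert len(block_ids) == PERIOD - 1
    h = hashlib.sha256()
    for block_id in block_ids:          # order matters: oldest first
        h.update(bytes.fromhex(block_id))
    return h.hexdigest()
```

A first-time user with an offline copy of the chain would then verify only each 2016th block's recorded checkpoint against a locally recomputed one, instead of auditing every block online.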
Note that the author u/bit-architect appreciates any Bitcoin Cash donations on Reddit directly or on bitcoin addresses 178ZTiot2QVVKjru2f9MpzyeYawP81vaXi bitcoincash:qp7uqpv2tsftrdmu6e8qglwr2r38u4twlq3f7a48uq (Bitcoin Cash) and 1GqcFi4Cs1LVAxLxD3XMbJZbmjxD8SYY8S (Bitcoin Core).
It has been discussed here a bit, but I just did some simple math via www.blockchair.com: since block height 507000 there have been 59 blocks that are not full. Those blocks add up to ~27.6MB out of around 31MB of available block space. (I am using 1MB per block on the assumption that most of the 1 sat/byte transactions are not SegWit, to get the worst case.) If these pools were to remove their minimums, the mempool would currently be empty. Blockchair link with filter. Johoe's mempool graph. The pools and counts come as no surprise: * AntPool 28
And since AntPool and bitcoin.com mine 1 sat/byte transactions on bcash, they are just being jerks. I found this annoying and informative, but if you are a miner in any of these pools, I suggest you either contact them and ask them to change, or switch to any other pool that mines low fees. Slush is one I know for certain does, but I am sure you guys can make your own recommendations.
During the last 3 months, almost all periods of increased fees were due to a drop in hashrate. The normal 6 blocks/hour rate would have kept the mempool close to empty instead of resulting in 200+ fees. Roger Ver et al. are literally responsible for what their propaganda machine complains about.
As you probably know, several days ago the Bitcoin price shot up to $420. While it's not a big move by Bitcoin standards, it's still the highest in the last 4 weeks. A lot of people became optimistic... And today several things happened:
price went back to $415, with a week-high volume on bitfinex (massive dump)
we see "blocks are full" posts again; skeptical comments are massively downvoted
More about the spam attack: as you can see here, for the last week there was no "fee market"; many transactions paying only 5000 satoshis per kilobyte were confirmed (red line on the upper graph). Then suddenly the mempool size shoots up... So is it just a coincidence, or is there a coordinated effort to keep the Bitcoin price artificially low? This idea has been formulated before, but today I was able to see it having predictive power, so to speak: I checked the Bitcoin price today only after I saw the "blocks are full" article on reddit, and my prediction that a massive dump was coming was right. Of course, this could be just a coincidence. Or there might be a different causal relationship, e.g. elevated traffic due to a dump on exchanges. But these alternative explanations do not look plausible, as both the spam and the dump are intentional rather than random. BTW, there was another event recently: a day ago we had an NYT article praising Ethereum and trashing Bitcoin (well, kinda). I don't think the NYT is in on it, but it might just be a good time for a dump after this article is read.
Graph: Mempool Transaction Count - The number of transactions waiting to be confirmed. Backlogs at an all-time high, users experiencing delays, unable to transact, miners losing fees. Bitcoin network congested and unreliable due to Core/Blockstream's never-ending obstructionism, censorship and lies.
Viacoin is an open source cryptocurrency project based on the Bitcoin blockchain. Publicly introduced on the crypto market in mid 2014, Viacoin integrates decentralized asset transactions on the blockchain, reaching speeds never before seen in cryptocurrencies. This Scrypt-based, Proof of Work coin was created to counter Bitcoin’s structural problems, mainly the congested blockchain delays that inhibit microtransactions as that currency transitions from digital money to a gold-like means of solid value storage. Bitcoin Core developers Peter Todd and Btc have been working on this currency and improving it until they were able to reach a lightning fast speed of 24 seconds per block. These speeds are just one of the features that come with the implementation of the Lightning Network, and they make Bitcoin's slow transactions a thing of the past. To achieve such a dramatic improvement in performance, the developers extended Viacoin's OP_RETURN to 80 bytes, reducing tx and bloat sizes and overcoming multi-signature hacks; the integration of an optimized ECDSA C library allowed this coin to reach a significant speedup for raw signature validation, making it perform up to 5 times better. This will mean easy adoption by merchants and vendors, who won’t have to worry anymore about long delays between a payment and its approval. Todd's role as Chief Scientist and Advisor has proven to be the right choice for this coin, thanks to his focus on Tree Chains, a ground-breaking feature intended to fix the main problems around Bitcoin, such as scalability issues and the trouble Viacoin miners have keeping a reputation on the blockchain in a decentralized mining environment. Thanks to Todd's expertise in sidechains, the future of this cryptocurrency will see the implementation of an alternative blockchain that is not linear.
According to the developer, the chains are too unregulated when it comes to establishing a strong connection between the operations happening on one chain and what happens elsewhere. Merged mining, scalability and safety are at risk, and tackling these problems is mandatory in order to create a new, disruptive crypto technology. Tree Chains are going to be the basis for broader use and a series of protocols that will allow users and developers to use Viacoin’s blockchain not just to mine and store coins but, like other new cryptocurrencies, to create secure, decentralized consensus systems living on the blockchain. The commander role on this BIP9-compatible coin’s development team has now been taken by a programmer from the Netherlands named Romano, who has a great fan base in the cryptocurrency community thanks to his progressive views on the future of the world of cryptos. He’s strongly in favor of SegWit and considers soft forks on the chain not a problem but an opportunity: according to him, they will provide an easy method to enable scripting upgrades and the implementation of other features the market has been looking for, such as peer-to-peer layers for compact block relay. Segregated Witness allows increased capacity, ends transaction malleability, makes scripting upgradeable, and reduces the UTXO set. For these reasons, Viacoin Core 0.13 is already SegWit-ready and awaiting signaling. Together with the implementation of SegWit, Romano has recently been working on finalizing the implementation of merged mining, something that has never been done with altcoins. Merged mining allows users to mine more than one blockchain at the same time; every hash the miner computes contributes to the total hash rate of all the merged currencies, and as a result they are all more secure.
This release pre-announcement resulted in a market spike, showing how interested the market is in the inclusion of these features in the coin's core and blockchain. The developer has been introducing several of these features, ranging from Hierarchical Deterministic key (HD key) generation that allows all Viacoin users to back up their wallets, to compact block relay, which decreases block propagation times on the peer-to-peer network; this creates a healthier network and a better baseline relay security margin. Viacoin’s support for relative locktime allows users and miners to time-lock a transaction: an output cannot be spent until a relative amount of time has passed, enforced by a new opcode, OP_CHECKSEQUENCEVERIFY, which allows the execution of a script based on the age of the amount being spent. Support for Child-Pays-For-Parent (CPFP) has been successfully enabled in Viacoin; CPFP alleviates the problem of transactions stuck for a long period in the unconfirmed limbo, either because of network bottlenecks or lack of funds to pay the fee. With this method, the selection algorithm considers a transaction's fee rate together with that of its unconfirmed ancestor transactions; this means a low-fee transaction is more likely to get picked up by miners if another transaction with a higher fee spends its output. Several optimizations have been implemented to let the blockchain scale freely, ranging from pruning of the chain itself to save disk space, to optimized memory use thanks to mempool transaction filtering. The UTXO cache has also been optimized, further allowing for significantly faster transaction times. Transaction anonymity has been improved thanks to increased Tor support by the development team.
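The ancestor-fee-rate idea behind Child-Pays-For-Parent can be illustrated with a two-transaction sketch. This is a simplified toy model, not Bitcoin Core's or Viacoin's actual selection code; the function name and example numbers are assumptions.

```python
# Sketch of the ancestor-fee-rate idea behind CPFP: a miner
# evaluating a child transaction considers the combined fee and
# size of the child plus its unconfirmed parent. Toy model only.
def ancestor_fee_rate(parent_fee: int, parent_size: int,
                      child_fee: int, child_size: int) -> float:
    """Combined sat/byte rate the miner earns by including both txs."""
    return (parent_fee + child_fee) / (parent_size + child_size)

# Stuck parent at 1 sat/byte; child spends its output at 100 sat/byte
rate = ancestor_fee_rate(250, 250, 25_000, 250)
print(rate)  # 50.5 sat/byte for the pair -- both confirm together
```

This is why the high-fee child "pays for" the parent: the miner cannot take the child without the parent, so only the pair's combined rate matters.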
This feature will help keep this cryptocurrency secure and the identities of those who work on it safe; this has proven essential, especially considering how Viacoin’s future is right now focused on SegWit and the Lightning Network. The onion routing used in Tor has also been applied to the routing of transactions, enabling rapid payments and instant transactions on bidirectional payment channels in total anonymity. Payment anonymity is one of the main items on Viacoin's roadmap this year, and by the end of 2017 we’ll see Viacoin’s latest secure payment technology, called Styx, implemented on its blockchain. This unlinkable anonymous atomic payment hub combines off-the-blockchain cryptographic computations, made possible by Viacoin’s scripting functionality, and relies on RSA security assumptions, ROM, and the Elliptic Curve Digital Signature Algorithm; this will allow participants to make fast, anonymous fund transfers with zero-knowledge contingent payment proofs. Wallets already offer strong privacy, thanks to transactions being broadcast once only; this increases anonymity, since broadcasts can’t be used to link IPs to TXs. In the future of this coin we’ll also see hardware wallet support reaching 100%, with Trezor and Ledger Nano support. These small, key-chain devices connect to the user’s computer to store private keys and sign transactions in a safe environment. Including Viacoin in these wallets is a smart move, because they are targeted at people outside the hardcore cryptocurrency user circle and guarantee exposure for this currency. The more casual users hear of this coin, the faster they are going to adopt it, sure of its safety and reliability. Last October, Viacoin's price saw a strong decline, probably linked to one big online retailer building a decentralized crypto stock exchange based on the Counterparty protocol.
As usual with cryptocurrencies, it’s easy to misread market fluctuations and assume that a temporarily underperforming coin is a sign of lack of strength. The change in the development team certainly contributed to Viacoin losing value, but watching the coin's charts it’s easy to see how this momentary change in price is turning out to be just one of those gentle chart dips that precede a skyrocketing surge in price. Romano is working hard on features and focusing on their implementation, keeping his head down rather than pushing strong marketing like other altcoins are doing. All this investment in ground-breaking properties, most of which are unique to this coin, means that Viacoin is one of the market's well kept secrets. Minimal order books and a lack of large investors offering liquidity also help keep this coin in a low-key position, something that is changing as support for larger books grows. As soon as the market notices this coin and investments go up, we are going to see a rapid surge in the market price, around the 10000 mark by the beginning of January 2018 or late February. Instead of focusing on a public ICO like every other altcoin, which means a sudden spike in price followed by inclusion on new exchanges that dry up volume, this coin is growing slowly under the radar while it’s being well tested and boxes on the roadmap get checked off, one after the other. Romano is constantly working on it and the community around this coin knows it; such a strong pack of followers is a feature no other alt currency has, and it’s what will bring it back to the top of the coin market in the near future. His attitude towards miners opposed to SegWit is another strong feature of Viacoin, especially given what he thinks of F2Pool's and Bitmain’s politics towards soft forks.
The Chinese mining groups seem scared that once alternative crypto coins switch to SegWit they’re going to lose leveraging power over Bitcoin’s future and won’t be able to speculate on the mining and trading markets as much as they have in the past. It’s refreshing to see such dedication and releases being pushed at a constant pace; structural changes in how cryptocurrencies work can only happen when the accent is put on development rather than on just trying to convince the market. This strategy is less flashy and makes sure the road is ready for the inevitable increase in the userbase. It’s always difficult to forecast the future, especially where alternative coins are concerned while Bitcoin is rising so fast. A long term strategy suggestion would be to get around 1 BTC worth of this coin as soon as possible and just hold it: thanks to the features being rolled in, within 6 months there is an easy gain to be made in the order of 5 to 10 times the initial investment. Using the recent market dip will make sure the returns are maximized. What makes Viacoin an excellent opportunity right now is that the price is low and poised to rise fast as its Lightning Network features become more mainstream. The Lightning Network offers secure, instant payments that aren’t held back by confirmation bottlenecks, a blockchain capable of scaling to billions of transactions, extremely low fees that do not inhibit micropayments, and cross-chain atomic swaps that allow transactions across blockchains without the need for third party custodians. These features mean that the future of this coin is bright, and the dip in price that started just a while ago will end soon as the market prepares for the first of August, when the SegWit drama will affect all crypto markets.
The overall trend of Viacoin is bullish, with a constant uptrend; more media attention is expected when news about the soft fork spreads beyond the inner circle of crypto aficionados and leaks into mainstream finance news networks. Solid coins like Viacoin, with a clear policy towards SegWit, will offer the guarantees the market will be looking for in times of doubt. INVESTMENT REVIEW Investment Rating: A+ https://medium.com/@VerthagOG/viacoin-investment-review-ca0982e979bd
A look back at my BTC TX fees for the last 3+ months.
I've had a few stuck BTC transactions as I've always tried to minimize my fees. Before this week I had not paid for TX acceleration in any way (though one good Samaritan miner saved me once before I tried child-pays-for-parent). The txs below were all sent using the Bitcoin Core Windows client with custom fee rates. Newest TX on top:

224 bytes, fee 0.00010062 BTC (45 per byte), 17,002.07 exchange rate, fee ~$1.71 (1 hour, used accelerator, this child tx pays for parent)
257 bytes, fee 0.00010836 BTC (42 per byte), 17,002.07 exchange rate, fee ~$1.84 (12 hours, used accelerator)
258 bytes, fee 0.00001806 BTC (7 per byte), 11,656.51 exchange rate, fee ~$0.21 (7 days, used child pays for parent)
795 bytes, fee 0.00002394 BTC (3 per byte), 8,650.00 exchange rate, fee ~$0.21 (9+ days, used accelerator)
258 bytes, fee 0.00001806 BTC (7 per byte), 6,922.15 exchange rate, fee ~$0.13 (1 hour 48 minutes to confirm)
617 bytes, fee 0.00004326 BTC (7 per byte), 5,222.83 exchange rate, fee ~$0.23 (2+ days to confirm, used accelerator)
257 bytes, fee 0.00010836 BTC (42 per byte), 4,599.10 exchange rate, fee ~$0.50 (4 hours to confirm)
258 bytes, fee 0.00018576 BTC (72 per byte), 4,331.68 exchange rate, fee ~$0.81 (20 minutes to confirm)
257 bytes, fee 0.00010836 BTC (42 per byte), 4,104.02 exchange rate, fee ~$0.44 (2 days to confirm, sent Aug 22)

Now going oldest to newest to see how I thought of this whole thing: first, it's probably important to say I used these txs to fund a debit card, so quick confirmation wasn't a concern for me. As you can see, in August 2017 I got a tx through at a 42 per byte fee, but it was sent on a Tuesday night / Wednesday morning (around 11:30pm Eastern). My reaction to that first slow tx was to up the fee, but my next tx was sent on a Friday and basically went in the first block (took less than 20 minutes, but I didn't keep track of the exact time). I then dropped the fee back to 42 per byte for another Tuesday tx and it went in just under 4 hours.
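The per-byte rates and dollar fees in the log above follow from two conversions: fee in BTC times 100,000,000 gives satoshis, divided by size gives sat/byte; fee in BTC times the exchange rate gives dollars. A small sketch (function name is my own):

```python
# Reproduce a log entry's sat/byte rate and USD fee from the
# raw numbers: fee in BTC, tx size in bytes, USD/BTC exchange rate.
SATS_PER_BTC = 100_000_000

def fee_stats(fee_btc: float, size_bytes: int, usd_per_btc: float):
    sat_per_byte = fee_btc * SATS_PER_BTC / size_bytes
    fee_usd = fee_btc * usd_per_btc
    return round(sat_per_byte), round(fee_usd, 2)

# First entry: 224 bytes, 0.00010062 BTC, $17,002.07/BTC
print(fee_stats(0.00010062, 224, 17002.07))  # (45, 1.71)
```

Running this against the other entries matches the logged values, e.g. the 258-byte, 0.00001806 BTC tx at an 11,656.51 rate gives 7 sat/byte and ~$0.21.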
By this point I thought I'd mastered fees. 4 hours sounded good for keeping the fee low while still having the tx complete while I slept or worked. So I'm watching fees and I've found a free accelerator. I try 7 sat/byte on a Thursday and it sits for 2 days. I try the free accelerator and it works almost immediately. 2 days is too long, but I figure I've got the free accelerator as a fallback, so... I try 7 sat/byte again on a Thursday when the mempool was nearly empty (Nov 2nd). Email confirmation comes in 1 hour 48 minutes; the first confirmation was probably closer to an hour and a half. I'd have been happy to have it take longer, so I think: let's drop the fee some more. I try 3 sat/byte on a Saturday when the mempool seems low, and this is where I screw up with overconfidence: a few hours after I submit, the mempool starts overflowing, and by mid-week doom-and-gloom stories about how many hundred thousand txs are pending are all over the internet. I eventually get bailed out by a good-Samaritan miner when I ask for help. During that week I couldn't find an accelerator that would charge me less than $16, and my free options had gone away (having a fee below 0.0001 BTC kept me from using ViaBTC's accelerator). So you'd think I'd learn my lesson. No, I go back to the fee-estimate sites and try to pick a fee that will just barely confirm. Another Tuesday tx; I just figured it'd sit until the weekend, and I would have been happy with anything under 4 days. A week later, after trying several free accelerators, it still hadn't confirmed. Today I finally did a child-pays-for-parent tx, essentially paying another $1.71 to get that tx unstuck. I also did one today at 42 sat/byte and used the ViaBTC accelerator to get it confirmed within the hour. My two most expensive txs both confirmed on Dec 12, 2017: one had fees I valued at $1.84, the other(s) was the child/parent pair that cost $1.92 but could have been cheaper if I'd just paid a proper fee the first time.
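For what it's worth, the reason that child-pays-for-parent tx unstuck the parent is that miners can evaluate a stuck parent and a child spending its output as one package, at their combined fee rate. A rough sketch (the sizes and rates are illustrative, loosely based on the txs listed above):

```python
def package_fee_rate(parent_size, parent_fee_sats, child_size, child_fee_sats):
    """Effective sat/byte a miner sees when a child tx spends an output of the
    stuck parent: the pair's fees and sizes are evaluated together."""
    return (parent_fee_sats + child_fee_sats) / (parent_size + child_size)

# A stuck 7 sat/byte parent plus a 45 sat/byte child (illustrative numbers)
parent_size, parent_fee = 258, 258 * 7    # 1,806 sats
child_size, child_fee = 224, 224 * 45     # 10,080 sats
print(package_fee_rate(parent_size, parent_fee, child_size, child_fee))  # ~24.7 sat/byte
```

So a generous child fee drags the package rate well above what the parent paid on its own, which is why the pair finally confirmed.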
I still wonder how high fees will go, but I also have to wonder how much of it is a self-fulfilling prophecy. How many people could afford to wait hours for confirmation, but pay the higher fee to get into the first block they can? Well, child-pays-for-parent and accelerator dances aren't my idea of fun in the long term. I can see how the fee spiral starts: someone complains about a slow tx, remembers it, and ups the fee next time; someone else then has to compete with the new higher average fee and goes through the same complain-about-a-slow-tx / raise-the-fee response. Sites I used to estimate fees, in case you want to be a tightwad and skirt the unconfirmed-tx line:
A scientist or economist who sees Satoshi's experiment running for these 7 years, with price and volume gradually increasing in remarkably tight correlation, would say: "This looks interesting and successful. Let's keep it running longer, unchanged, as-is."
UPDATE: Here's a shorter TL;DR:
The Bitcoin experiment, as invented by Satoshi, has been running successfully for 7 years now - and may also be showing a strong correlation between price and volume, as suggested by these graphs:
Any scientist, economist (or investor!) would simply favor continuing to let the experiment run unchanged.
Anyone (eg, Core / Blockstream) who proposes radically changing the experiment (by constraining the block size to a long-term artificial limit of 1 MB, against Satoshi's plan) is actually proposing a "side fork" - and is anti-science, anti-market (and anti-investors!)
Only someone who is anti-science and anti-markets (and anti-investors!) would say:
Let's radically change this successful economic experiment.
Let's ignore the inventor's clearly stated plan to increase or abolish the temporary (and no longer necessary) 1 MB blocksize limit, and make the natural blocksize start being constrained by the artificial blocksize limit for the first time in 7 years.
"The existing Visa credit card network processes about 15 million Internet purchases per day worldwide. Bitcoin can already scale much larger than that with existing hardware for a fraction of the cost. It never really hits a scale ceiling." - Satoshi Nakamoto
Let's make the network get so congested that people start to abandon the network (and the currency) for competing networks and currencies (either legacy fiat systems such as Dollars or Euros etc. via PayPal, Western Union, SWIFT, VISA, MasterCard - or cryptocurrencies and networks such as LiteCoin, Ethereum, Dash, Monero etc. with their own less-congested networks).
Let's radically alter the fee system, by introducing scarcity on the blockchain, and introducing a totally new and controversial method explicitly encouraging users to engage in a behavior which was previously forbidden: "double spending" by repeatedly re-sending the same coins, possibly to different recipients, using different fees (the notorious Opt-In Full RBF);
Let's radically alter the economic incentives by stealing fees from miners, and radically complicate, centralize, and slow down the user experience - while making it more expensive - by moving most transactions off-chain to a centralized, slow, expensive vaporware system called side-chains or the Lightning Network (which btcdrak actually refers to as glorified alt-coins), being worked on, with little success so far, by a guy who never understood or believed in Bitcoin in the first place: Dr. Adam Back, President of the $75 million private company Blockstream, many of whose investors are major players in legacy fiat and may therefore have serious conflicts of interest with Bitcoin - either hoping it will fail, or perhaps wanting to "short" it so they can still get in.
Core / Blockstream are the ones proposing these radical changes to the main parameters of this remarkably successful experiment. This is anti-scientific of them - and anti-market, and anti-investor. They have forgotten the saying: "If it ain't broke, don't fix it." They should be free to make their radical changes - but on a side fork. In this sense, Classic, XT, and Bitcoin Unlimited are all on the "main fork", while Core / Blockstream propose radically veering off onto a "side fork".

Sidebar regarding the confusing terminology around "forks", and an unfortunate historical accident of mathematics allowing the "side fork" to unfairly exploit the apparent "status quo": The fact that a "hard fork" is necessary to stay on the "main fork" is merely a curious (and in this case, unfortunate) accident of mathematics. This is because, in this particular case, staying on the "main fork" involves "loosening" or "widening" or "expanding" or "liberalizing" the definition of valid blocks. Due to the nature of p2p networks, any fork which "loosens" or "expands" or "liberalizes" the definitions or requirements gets the scary-sounding name of "hard fork" - because all of the p2p nodes have to upgrade in order for a definition to be loosened / widened / expanded. In other words, because the "main fork" involves growth, which involves loosening or removing a temporary hard-coded limit, staying on the "main fork" actually (counterintuitively!) requires a "hard fork" in this case. Meanwhile, radically veering off onto a "side fork" can actually (paradoxically) be accomplished via a "soft fork" - which the developers can quietly add to the network, rather than getting everyone to consciously and explicitly support it.
This is a very unfortunate historical accident of mathematics - one which Core / Blockstream are shamelessly and ruthlessly exploiting (since without this unfair accidental advantage, they would have a much harder time getting the community to agree to all their radical proposed changes above). So remember:
The main fork assumes growth without artificial constraints.
Since the code contains a temporary anti-spam kludge which is now imposing an artificial constraint on growth, the only way we can stay on the main fork is by doing a hard fork. It sounds weird (paradoxical), but that's the way it is.
Core / Blockstream could never get support for their radical changes if they had to be introduced via a hard fork.
Conversely, there would be much more support for Satoshi's original plan, if it didn't unfortunately require a hard fork now in order to continue with it.
So this is the big paradox here:
Continuing with Satoshi's original plan requires a hard fork.
Radically changing Satoshi's plan can be done via soft forking.
And that's the tragic accident of history which we are up against (and which Core / Blockstream is shamelessly and desperately exploiting, since they know that nobody would support their radical changes if they had to be introduced via a hard fork).

A possible novel economic result, shown on an interesting graph: I know all the cynical kids will knee-jerk yell "correlation isn't causation" and "your statistics professor would be cringing" - but hold on a minute: the following graph is actually quite remarkable, and may be illustrating an important and novel emergent market phenomenon (which we simply never had a chance to test with legacy fiat currencies, due to their, ahem, "irregular" ie politically-gamed mining a/k/a emission schedules): https://imgur.com/jLnrOuK http://nakamotoinstitute.org/static/img/mempool/how-we-know-bitcoin-is-not-a-bubble/MetcalfeGraph.png
This graph shows Bitcoin price and volume (ie, blocksize of transactions on the blockchain) rising hand-in-hand in 2011-2014. In 2015, Core/Blockstream tried to artificially freeze the blocksize - and artificially froze the price. Bitcoin Classic will allow volume - and price - to freely rise again.
https://np.reddit.com/btc/comments/44xrw4/this_graph_shows_bitcoin_price_and_volume_ie/

Sometimes correlation does happen. And the correlation in that graph is pretty fucking tight. So perhaps we are about to discover some surprisingly simple and elegant new economic theories or even laws (if Core / Blockstream will let us continue with this experiment on the path intended by Satoshi), now that, for the first time in history, we have a currency where the money supply is pre-determined by an asymptotically declining algorithm - rather than a currency where the supply is established by a cartel via political and social processes which are often corrupt. Maybe the relationship between volume (velocity) and price really is as simple as suggested by the above graph - and this is the first time in history that we could actually see it (because this is the first time when politicians and the wealthy can't mess with the supply).

Now we are hitting the point where volume (also known as velocity, or blocksize) is being limited by a cartel - of centralized miners and centralized devs - and it is reasonable to formulate the hypothesis that the price has been suppressed since around late 2014 because the velocity / volume is being suppressed (based on that graph, which shows price dipping away from its previous correlation with volume, starting around late 2014 - when Blockstream came on the scene, and told us we couldn't have nice things anymore).

The devs at Core / Blockstream say:
they want to limit volume for the next year, even if it leads to the network getting congested, and users moving to other networks, and
they want to increase volume much later by a different, complicated, centralized, slow and expensive approach: side-chains, eg the Lightning Network, which does not exist yet and might never exist.
But a true scientist or economist would say:
The possible correlation in the above graph is indeed interesting - and good for investors!
Since the original inventor of the experiment (Satoshi Nakamoto) has been right about everything so far, we should continue with his experiment as-is, unchanged.
This includes his recommendation that the 1 MB "artificial limit" should be only temporary.
So this limit should be increased (or completely removed) so that the experiment can continue un-impeded, and so that we can continue to observe whether the striking correlation between price and volume continues to apply.
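Whether the striking correlation between price and volume "continues to apply" is something anyone can check: it's just a Pearson coefficient over the two series. A toy sketch with made-up numbers (real data would come from the graphs linked above; `pearson_r` is a hand-rolled helper, not a library call):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up monthly log(volume) and log(price) series standing in for the
# real chart data
log_volume = [2.1, 2.4, 2.9, 3.3, 3.8, 4.1]
log_price = [1.0, 1.5, 2.2, 2.8, 3.5, 3.9]
print(pearson_r(log_volume, log_price))  # close to 1.0 = a "tight" correlation
```

A value near 1.0 is the quantitative version of the "remarkably tight" claim; running it on the actual chart data would make the argument concrete either way.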
This is why Classic, XT and Bitcoin Unlimited are all on the "main fork", while Core / Blockstream are on a "side fork".

TL;DR
Bitcoin has been highly successful for 7 years, also showing a remarkable correlation between volume and price which may herald a new fundamental economic theory or law applicable to cryptocurrencies with algorithmic asymptotically-declining emission schedules (and undiscoverable in legacy fiat currencies due to their erratic and politically influenced emission schedules), namely: value and volume (velocity) are correlated.
A true scientist or economist (and a true friend of investors!) would simply allow this highly successful experiment (with its interesting correlation) to continue unchanged. Let's see if the correlation continues!
In this case "continuing unchanged" - ie, remaining on the status quo or "main fork", paradoxically requires a "hard fork" now - to remove an anti-spam kludge which introduced an artificial limit (1 MB max block size) which was always intended to be temporary.
Core / Blockstream is actually proposing several very radical changes, which constitute a "side fork". But unfortunately they are able to introduce these changes quietly via "soft forks" - which is giving them an unfair advantage, which they are shamelessly exploiting.
They are also able to make the temporary (and now unnecessary) anti-spam kludge last much longer than originally intended by doing nothing at all - so inertia / status quo is on their side.
Paradoxically, adhering to Satoshi's plan, ie staying on the "main fork" of increasing actual blocksizes (and increasing price!) - requires a change in the code now - a hard fork.
The Mike Hearn Show: Season Finale (and Bitcoin Classic: Series Premiere)
This post debunks Mike Hearn's conspiracy theories RE Blockstream in his farewell post and points out issues with the behavior of the Bitcoin Classic hard fork and sketchy tactics of its advocates I used to be torn on how to judge Mike Hearn. On the one hand he has done some good work with BitcoinJ, Lighthouse etc. Certainly his choice of bloom filter has had a net negative effect on the privacy of SPV users, but all in all it works as advertised.* On the other hand, he has single handedly advocated for some of the most alarming behavior changes in the Bitcoin network (e.g. redlists, coinbase reallocation, BIP101 etc...) to date. Not to mention his advocacy in the past year has degraded from any semblance of professionalism into an adversarial us-vs-them propaganda train. I do not believe his long history with the Bitcoin community justifies this adversarial attitude. As a side note, this post should not be taken as unabated support for Bitcoin Core. Certainly the dev team is made of humans and like all humans mistakes can be made (e.g. March 2013 fork). Some have even engaged in arguably unprofessional behavior but I have not yet witnessed any explicitly malicious activity from their camp (q). If evidence to the contrary can be provided, please share it. Thankfully the development of Bitcoin Core happens more or less completely out in the open; anyone can audit and monitor the goings on. I personally check the repo at least once a day to see what work is being done. I believe that the regular committers are genuinely interested in the overall well being of the Bitcoin network and work towards the common goal of maintaining and improving Core and do their best to juggle the competing interests of the community that depends on them. That is not to say that they are The Only Ones; for the time being they have stepped up to the plate to do the heavy lifting. Until that changes in some way they have my support. 
The hard line that some of the developers have drawn in regards to the block size has caused a serious rift and this write up is a direct response to oft-repeated accusations made by Mike Hearn and his supporters about members of the core development team. I have no affiliations or connection with Blockstream, however I have met a handful of the core developers, both affiliated and unaffiliated with Blockstream. Mike opens his farewell address with his pedigree to prove his opinion's worth. He masterfully washes over the mountain of work put into improving Bitcoin Core over the years by the "small blockians" to paint the picture that Blockstream is stonewalling the development of Bitcoin. The folks who signed Greg's scalability road map have done some of the most important, unsung work in Bitcoin. Performance improvements, privacy enhancements, increased reliability, better sync times, mempool management, bandwidth reductions etc... all those things are thanks to the core devs and the research community (e.g. Christian Decker), many of which will lead to a smoother transition to larger blocks (e.g. libsecp256k1).(1) While ignoring previous work and harping on the block size exclusively, Mike accuses those same people who have spent countless hours working on the protocol of trying to turn Bitcoin into something useless because they remain conservative on a highly contentious issue that has tangible effects on network topology. The nature of this accusation is characteristic of Mike's attitude over the past year which marked a shift in the block size debate from a technical argument to a personal one (in tandem with DDoS and censorship in /Bitcoin and general toxicity from both sides). For example, Mike claimed that sidechains constitutes a conflict of interest, as Blockstream employees are "strongly incentivized to ensure [bitcoin] works poorly and never improves" despite thousands of commits to the contrary. 
Many of these commits are top-down rewrites of low-level Bitcoin functionality, not chump change by any means. I am not just "counting commits" here. Anyways, Blockstream's current client base consists of Bitcoin exchanges whose future hinges on the widespread adoption of Bitcoin. The more people that use Bitcoin, the more demand there will be for sidechains to service the Bitcoin economy. Additionally, one could argue that if there was some sidechain that gained significant popularity (hundreds of thousands of users), larger blocks would be necessary to handle users depositing and withdrawing funds into/from the sidechain. Perhaps if they were miners and core devs at the same time, then a conflict of interest on small blocks would be a more substantive accusation (create artificial scarcity to increase tx fees). The rationale behind pricing out the Bitcoin "base" via capacity constraints to increase their business prospects as a sidechain consultancy is contrived and illogical. If you believe otherwise, I implore you to share a detailed scenario in your reply so I can see if I am missing something. Okay, so back to it. Mike made the right move when Core would not change its position: he forked Core and gave the community XT. The choice was there; most miners took a pass. Clearly there was not consensus on Mike's proposed scaling road map or how big blocks should be rolled out. And even though XT was a failure (mainly because of massive untested capacity increases which were opposed by some of the larger pools whose support was required to activate the 75% fork), it has inspired a wave of implementation competition. It should be noted that the censorship and attacks by members of /Bitcoin are completely unacceptable; there is no excuse for such behavior. While theymos is entitled to run his subreddit as he sees fit, if he continues to alienate users there may be a point of mass exodus following some significant event in the community that he tries to censor.
As for the DDoS attackers, they should be ashamed of themselves; it is recommended that alt. nodes mask their user agents. Although Mike has left the building, his alarmist mindset on the block size debate lives on through Bitcoin Classic, an implementation which is using a more subtle approach to inspire adoption, as jtoomim cozies up with miners to get their support while appealing to the masses with a call for an adherence to Satoshi's "original vision for Bitcoin." That said, it is not clear that he is competent enough to lead the charge on the maintenance/improvement of the Bitcoin protocol. That leaves most of the heavy lifting up to Gavin, as Jeff has historically done very little actual work for Core. We are thus in a potentially more precarious situation than we were with XT, as some Chinese miners are apparently "on board" for a hard fork block size increase. Jtoomim has expressed a willingness to accept an exceptionally low (60 or 66%) consensus threshold to activate the hard fork if necessary. Why? Because of the lost "opportunity cost" of the threshold not being reached.(c) With variance, my guess is that a lucky 55% could activate that 60% threshold. That's basically two Chinese miners. I don't mean to attack him personally; he is just willing to go down a path that requires the support of only two major Chinese mining pools to activate his hard fork. As a side effect of the latency issues of the GFW, a block size increase might increase the orphan rate outside the GFW, profiting the Chinese pools. With a 60% threshold there is no way for miners outside of China to block that hard fork. To compound the popularity of this implementation, the efforts of Mike, Gavin and Jeff have further blinded many within the community to the mountain of effort that core devs have put in. And it seems to be working, as they are beginning to successfully ostracize the core devs beyond the network of "true big block-believers."
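The "lucky 55% could activate a 60% threshold" worry can be sanity-checked with a simple binomial simulation: how often does a coalition mining 55% of blocks happen to sign at least 60% of a signaling window by luck alone? A rough sketch (the 1000-block window and trial count are assumptions of mine, not Classic's actual activation rule):

```python
import random

def activation_probability(hashpower, threshold, window=1000, trials=2000, seed=42):
    """Estimate how often a coalition with `hashpower` share of total hashrate
    mines at least `threshold` of a signaling window by chance alone."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mined = sum(rng.random() < hashpower for _ in range(window))
        if mined >= threshold * window:
            hits += 1
    return hits / trials

# 55% of hashpower vs. a 60%-of-window threshold (window size assumed)
print(activation_probability(0.55, 0.60))
```

For a single 1000-block window the chance is small, but activation thresholds are typically checked over many rolling windows, so an unlucky streak of variance only has to happen once.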
It appears that Chinese miners are getting tired of the debate (and with it Core) and may shift to another implementation over the issue.(d) Some are going around to mining pools and trying to undermine Core's position in the soft vs. hard fork debate. These private appeals to the miner community are a concern because there is no way to know if bad information is being passed on with the intent to disrupt Core's consensus based approach to development in favor of an alternative implementation controlled (i.e. benevolent dictator) by those appealing directly to miners. If the core team is reading this, you need to get out there and start pushing your agenda so the community has a better understanding of what you all do every day and how important the work is. Get some fancy videos up to show the effects of block size increase and work on reading materials that are easy for non technically minded folk to identify with and get behind. The soft fork debate really highlights the disingenuity of some of these actors. Generally speaking, soft forks are easier on network participants who do not regularly keep up with the network's software updates or have forked the code for personal use and are unable to upgrade in time, while hard forks require timely software upgrades if the user hopes to maintain consensus after a hardfork. The merits of that argument come with heavy debate. However, more concerning is the fact that hard forks require central planning and arguably increase the power developers have over changes to the protocol.(2) In contrast, the 'signal of readiness' behavior of soft forks allows the network to update without any hardcoded flags and developer oversight. Issues with hard forks are further compounded by activation thresholds, as soft forks generally require 95% consensus while Bitcoin Classic only calls for 60-75% consensus, exposing network users to a greater risk of competing chains after the fork. 
Mike didn't want to give the Chinese any more power, but now the post-XT fallout has pushed the Chinese miners right into the Bitcoin Classic driver's seat. While a net split did happen briefly during the BIP66 soft fork, imagine that scenario amplified by miners who do not agree to hard fork changes while controlling 25-40% of the network's hashing power. Two actively mined chains with competing interests: the Doomsday Scenario. With a 5% miner hold-out on a soft fork, the fork will constantly reorg and malicious transactions will rarely have more than one or two confirmations.(b) During a soft fork, nodes can protect themselves from double spends by waiting for extra confirmations when the node alerts the user that an ANYONECANSPEND transaction has been seen. Thus, soft forks give Bitcoin users more control over their software (they can choose to treat a soft fork as a soft fork, or to treat it as a hard fork), which allows for greater flexibility in upgrade plans for those actively maintaining nodes and other network-critical software. (2) Advocating for low-threshold hard forks is a step in the wrong direction if we are trying to limit the "central planning" of any particular implementation. However, I do not believe that is the main concern of the Bitcoin Classic devs. To switch gears a bit, Mike is ironically concerned that China "controls" Bitcoin, but wanted to implement a block size increase that would only increase their relative control (via increased orphans). Until the p2p wire protocol is significantly improved (IBLT, etc...), there is very little room (if any at all) to raise the block size without significantly increasing orphan risk. This can be easily determined by looking at jtoomim's testnet network data that passed through the normal p2p network, not the relay network.(3) In the mean time this will only get worse if no one picks up the slack on the relay network that Matt Corallo is no longer maintaining.
(4) Centralization is bad regardless of the block size, but Mike tries to conflate the centralization issues with the Blockstream block size side show for dramatic effect. In retrospect, it would appear that the initial lack of cooperation on a block size increase actually staved off increases in orphan risk. Unfortunately, this centralization metric will likely increase with the cooperation of Chinese miners and Bitcoin Classic if major strides to reduce orphan rates are not made. Mike also manages to link to a post from the ProHashing guy RE forever-stuck transactions, which has been shown to generally be the result of poorly maintained/improperly implemented wallet software.(6) Ultimately Mike wants fees to be fixed, despite the fact that you can't enforce fixed fees in a system that is not centrally planned. Miners could decide to raise their minimum fees even when blocks are >1 MB, especially when blocks become too big to reliably propagate across the network without being orphaned. What is the marginal cost of a tx that increases orphan risk by some percentage? That is a question being explored with flexcaps. Even with larger blocks, if miners outside the GFW fear orphans they will not create the bigger blocks without a decent incentive; in other words, even with a larger block size you might still end up with variable fees. Regardless, it is generally understood that variable fees are not preferred from a UX standpoint, but developers of Bitcoin software do not have the luxury of enforcing specific fees beyond basic defaults hardcoded to prevent cheap DoS attacks. We must expose the user to just enough information so they can make an informed decision without being overwhelmed. Hard? Yes. Impossible? No.
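That marginal-cost question can be made concrete with a back-of-the-envelope expected-value calculation: an extra tx is only worth including if its fee exceeds the expected revenue lost to the added orphan risk. All numbers below are illustrative, not measured:

```python
def marginal_tx_value(fee_sats, block_reward_sats, orphan_prob_increase):
    """Expected gain from including one more tx: its fee minus the expected
    loss from extra propagation delay raising the block's orphan probability."""
    return fee_sats - orphan_prob_increase * block_reward_sats

# With a 12.5 BTC block reward, a tx that adds 0.001% orphan risk must pay
# more than 12,500 sats to be worth including (all numbers illustrative)
reward_sats = int(12.5 * 1e8)
print(marginal_tx_value(10_000, reward_sats, 0.00001))  # negative: skip it
print(marginal_tx_value(20_000, reward_sats, 0.00001))  # positive: include it
```

This is roughly why a rational miner outside the GFW might leave low-fee txs out even with a bigger block size limit: the orphan-risk term scales with the full block reward, so the fee floor stays variable.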
Shifting gears, Mike states that current development progress via segwit is an empty ploy, despite the fact that segwit comes with not only a marginal capacity increase, but also plugs up major malleability vectors, allows pruning blocks of historical data, and a bunch of other fun stuff. It's a huge win for unconfirmed transactions (which Mike should love). Even if segwit does require non-negligible changes to wallet software and Bitcoin Core (~500 LoC), it allows us time to improve block relay (IBLT, weak blocks) so we can start raising the block size without fear of an increased orphan rate. Certainly we can rush to increase the block size now and further exacerbate the China problem, or we can focus on the "long play" and limit negative externalities. And does segwit help the Lightning Network? Yes. Is that something that indicates a Blockstream conspiracy? No. Comically, the big blockians used to criticize Blockstream for advocating for LN when there was no one working on it, but now that it is actively being developed, the tune has changed and everything Blockstream does is a conspiracy to push for Bitcoin's future as a dystopic LN-powered settlement network. Is LN "the answer?" Obviously not; most don't actually think that. How it actually works in practice is yet to be seen, and there could be unforeseen emergent characteristics that make it less useful for the average user than originally thought. But it's a tool that should be developed in unison with other scaling measures, if only for its usefulness for instant txs and micropayments. Regardless, the fundamental divide rests on ideological differences that we all know well. Mike is fine with the miner-only validation model for nodes and is willing to accept some miner centralization so long as he gets the necessary capacity increases to satisfy his personal expectations for the immediate future of Bitcoin.
Greg and co. believe that a distributed full node landscape helps maintain a balance of decentralization in the face of the miner centralization threat. For example, if you have 10 miners who are the only sources for blockchain data, then you run the risk of undetectable censorship, prolific sybil attacks, and no mechanism for individuals to validate the network without trusting a third party. As an analogy, take the Tor network: you use it with an expectation of privacy while understanding that the multi-hop nature of the routing will increase latency. Certainly you could improve latency by removing a hop or two, but with it you lose some privacy. Does Tor's high latency make it useless? Maybe for watching Netflix, but not for submitting leaked documents to some newspaper. I believe this is the philosophy held by most of the core development team. Mike does not believe that the Bitcoin network should cater to this philosophy, and considers any activity which stunts the growth of on-chain transactions a direct attack on the protocol. Ultimately, however, I believe Greg and co. also want Bitcoin to scale on-chain transactions as much as possible. They believe that in order for Bitcoin to increase its capacity while adhering to acceptable levels of decentralization, much work needs to be done. It's not a matter of if the block size will be increased, but when. Mike has confused this adherence to strong principles of decentralization as disingenuous and a cover-up for a dystopic future of Bitcoin where sidechains run wild with financial institutions paying $40 per transaction. Again, this does not make any sense to me. If banks are spending millions to co-opt this network, what advantage does a decentralized node landscape have for them?
There are a few roads that the community can take now: one where we delay a block size increase while improvements to the protocol are made (with the understanding that some users may have to wait a few blocks to have their transaction included, fees will be dependent on transaction volume, and transactions <$1 may be temporarily cost-ineffective) so that when we do increase the block size, orphan rate and node drop-off are insignificant. Another is the immediate large block size increase, which possibly leads to a future Bitcoin which looks nothing like it does today: low numbers of validating nodes, heavy trust in centralized network explorers, and thus a network more vulnerable to government coercion/general attack. Certainly there are smaller steps for block size increases which might not be as immediately devastating, and perhaps that is the middle ground which needs to be trodden to appease those who are emotionally invested in a bigger block size. Combined with segwit, however, max block sizes could reach unacceptable levels. There are other scenarios which might play out with competing chains etc..., but in that future Bitcoin has effectively failed. Like any technology that requires maintenance and human interaction, Bitcoin will require politicking for decision making. Up until now that has occurred via the "vote download" for software which implements some change to the protocol. I believe this will continue to be the most robust of the options available to us. Now that there is competition, the Bitcoin Core community can properly advocate for changes to the protocol that it sees fit without being accused of co-opting the development of Bitcoin. An ironic outcome to the situation at hand. If users want their Bitcoins to remain valuable, they must actively determine which developers are most competent and have their best interests at heart.
So far the Core dev community has years of substantial and successful contributions under its belt, while the alternative implementations have a smattering of developers who have not yet publicly proven (besides perhaps Gavin, although his early mistakes with block size estimates are concerning) that they have the skills and endurance necessary to maintain a full-node implementation. Perhaps now it is time to focus on the personalities to whom many want to entrust Bitcoin's future. Let us see if they can improve the speed at which signatures are validated by 7x, devise privacy-preserving protocols like Confidential Transactions, or find ways to improve traversal times across a merkle tree. Can they implement HD wallet functionality without any coin-crushing bugs? Can they successfully modularize their implementation without breaking everything? If so, let's welcome them with open arms.

But Mike is at R3 now, which seems like a better fit for him ideologically: he can govern the rules with relative impunity, and there is no huge community of open-source developers, researchers, and enthusiasts to disagree with him. I will admit his posts are very convincing at first blush, but ultimately they are nothing more than a one-sided appeal to those in the community who have unrealistic or incomplete understandings of the technical challenges faced by developers maintaining a consensus-critical, validation-heavy, distributed system that operates in an adversarial environment. Mike always enjoyed attacking Blockstream, but when you survey his past behavior it becomes clear that his motives were not always pure. Why else would you leave with such a nasty, public farewell?

To all the XT'ers, btc'ers and so on: I only ask that you show some compassion when you critique the work of Bitcoin Core devs. We understand you have a competing vision for the scaling of Bitcoin over the next few years.
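For readers unfamiliar with the merkle-tree work mentioned above, here is a minimal sketch of computing a Bitcoin-style merkle root with double SHA-256. Real implementations handle txid byte-order conventions and other consensus details omitted here, and the sample leaves are arbitrary placeholders:

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256 hash."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Pair-and-hash up the tree, duplicating the last hash on odd levels
    (as Bitcoin does), until a single root remains."""
    level = [dsha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the odd hash
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3"])
print(root.hex())
```

Because every leaf feeds into the root, a light client can verify that a transaction is in a block from just a short branch of hashes; speeding up construction and traversal of this structure is the kind of unglamorous work the post is describing.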
They want Bitcoin to scale too; you just disagree on how and when it should be done. Vilifying and attacking the developers only further divides the community and scares away future talent who might want to further the Bitcoin cause. Unless you can replace the folks doing all this hard work on the protocol, or can pay someone equally competent, please think twice before you say something nasty.

As for Mike: I wish you the best at R3 and hope that you can one day return to the Bitcoin community with a more open mind. It must hurt having your software out there being used by so many while your voice is snuffed out. Hopefully you can return when many of the hard problems are solved (e.g. reduced propagation delays, better access to cheap bandwidth) and the road to safe block size increases has been paved.

(*) https://eprint.iacr.org/2014/763.pdf
(q) https://github.com/bitcoinclassic/bitcoinclassic/pull/6
(b) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012026.html
(c) https://github.com/bitcoinclassic/bitcoinclassic/pull/1#issuecomment-170299027
(d) http://toom.im/jameshilliard_classic_PR_1.html
(0) http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/06
(1) https://github.com/bitcoin/bitcoin/graphs/contributors
(2) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html
(3) https://toom.im/blocktime (beware of heavy website)
(4) https://bitcointalk.org/index.php?topic=766190.msg13510513#msg13510513
(5) https://news.ycombinator.com/item?id=10774773
(6) http://rusty.ozlabs.org/?p=573

edit: fixed some things.
edit 2: tried to clarify some more things and remove some personal bias, thanks to astro