Forks, Merkle Trees, and Bitcoin (Oh My) by C. P. O ...

Technical: The Path to Taproot Activation

Taproot! Everybody wants to have it, somebody wants to make it, nobody knows how to get it!
(If you are asking why everybody wants it, see: Technical: Taproot: Why Activate?)
(Pedants: I mostly elide over lockin times)
Briefly, Taproot is that neat new thing that gets us Schnorr signatures, MAST-style script trees, and better privacy, since most cooperative spends end up looking like plain single-signature spends.
So yes, let's activate taproot!

The SegWit Wars

The biggest problem with activating Taproot is PTSD from the previous softfork, SegWit. Pieter Wuille, one of the authors of the current Taproot proposal, has consistently held the position that he will not discuss activation, and will accept whatever activation process is imposed on Taproot. Other developers have expressed similar opinions.
So what happened with SegWit activation that was so traumatic? SegWit used the BIP9 activation method. Let's dive into BIP9!

BIP9 Miner-Activated Soft Fork

Basically, BIP9 has two key parameters: the bit (which nVersion bit miners set to signal readiness for the softfork) and the timeout (a time after which, if the softfork has not activated, the proposal is abandoned).
Now there are other parameters (name, starttime) but they are not anywhere near as important as the above two.
One number that is not a parameter is 95%. Basically, activation of a BIP9 softfork is considered to have actually succeeded if at least 95% of blocks in the last two-week retarget period (2016 blocks) had the specified bit set in their nVersion. If fewer than 95% had this bit set before the timeout, then the upgrade fails and never goes into the network. This is not a parameter: it is a constant defined by BIP9, and developers using BIP9 activation cannot change it.
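As a concrete illustration of that rule, here is a minimal sketch in Python (hypothetical inputs; the real logic lives in Bitcoin Core's versionbits implementation):

    # Minimal sketch of the BIP9 95% rule over one 2016-block retarget period.
    PERIOD = 2016
    THRESHOLD = 1916  # 95% of 2016, the constant BIP9 fixes for mainnet

    def signals(version: int, bit: int) -> bool:
        # A block signals if the top three nVersion bits are 001 and the chosen bit is set.
        return (version >> 29) == 0b001 and (version >> bit) & 1 == 1

    def period_locks_in(block_versions: list[int], bit: int) -> bool:
        assert len(block_versions) == PERIOD
        return sum(signals(v, bit) for v in block_versions) >= THRESHOLD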
So, first some simple questions and their answers:

The Great Battles of the SegWit Wars

SegWit not only fixed transaction malleability, it also created a practical softforkable blocksize increase, rebalancing weights so that the cost of spending a UTXO is about the same as the cost of creating one (and spending UTXOs is "better", since it limits the size of the UTXO set that every fullnode has to maintain).
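For reference, the weight accounting that SegWit introduced (BIP141) can be sketched like this; the byte counts in the example are made up, but the formula (non-witness bytes count four times, witness bytes once) is what does the rebalancing:

    # Rough sketch of BIP141 weight accounting, not Bitcoin Core's implementation.
    # base_size  = serialized size without witness data
    # total_size = serialized size including witness data
    MAX_BLOCK_WEIGHT = 4_000_000

    def tx_weight(base_size: int, total_size: int) -> int:
        return base_size * 3 + total_size  # non-witness bytes count 4x, witness bytes 1x

    def vsize(base_size: int, total_size: int) -> int:
        return -(-tx_weight(base_size, total_size) // 4)  # ceiling division

    print(tx_weight(250, 250))  # hypothetical legacy tx, no witness data: weight 1000
    print(tx_weight(250, 400))  # hypothetical segwit tx, 150 witness bytes: weight 1150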
So SegWit was written, the activation was decided to be BIP9, and then.... miner signalling stalled at below 75%.
Thus were the Great SegWit Wars started.

BIP9 Feature Hostage

If you are a miner with at least 5% global hashpower, you can hold a BIP9-activated softfork hostage.
You might even secretly want the softfork to actually push through. But you might want to extract concessions from the users and the developers. Like removing the halvening. Or raising or even removing the block size caps (which helps larger miners more than smaller miners, making it easier to become a bigger fish that eats all the smaller fishes). Or whatever.
With BIP9, you can hold the softfork hostage. You just hold out and refuse to signal. You tell everyone you will signal, if and only if certain concessions are given to you.
This ability by miners to hold a feature hostage was enabled because of the miner-exit allowed by the timeout on BIP9. Prior to that, miners were considered little more than expendable security guards, paid for the risk they take to secure the network, but not special in the grand scheme of Bitcoin.

Covert ASICBoost

ASICBoost was a novel way of optimizing SHA256 mining, by taking advantage of the structure of the 80-byte header that is hashed in order to perform proof-of-work. The details of ASICBoost are out-of-scope here, but you can read about it elsewhere.
Here is a short summary of the two types of ASICBoost, relevant to the activation discussion.
Now, "overt" means "obvious", while "covert" means hidden. Overt ASICBoost is obvious because nVersion bits that are not currently in use for BIP9 activations are usually 0 by default, so setting those bits to 1 makes it obvious that you are doing something weird (namely, Overt ASICBoost). Covert ASICBoost is non-obvious because the order of transactions in a block are up to the miner anyway, so the miner rearranging the transactions in order to get lower power consumption is not going to be detected.
Unfortunately, while Overt ASICBoost was compatible with SegWit, Covert ASICBoost was not. This is because, pre-SegWit, only the block header Merkle tree committed to the transaction ordering. However, with SegWit, another Merkle tree exists, which commits to transaction ordering as well. Covert ASICBoost would require more computation to manipulate two Merkle trees, obviating the power benefits of Covert ASICBoost anyway.
Now, miners want to use ASICBoost (indeed, about 60-70% of current miners probably use Overt ASICBoost nowadays; if you have a Bitcoin fullnode running you will see logs with lots of "60 of last 100 blocks had unexpected versions", which is exactly what you would see with the nVersion manipulation that Overt ASICBoost does). But remember: ASICBoost was, at the time, a novel improvement. Not all miners had ASICBoost hardware. Those who did, did not want it known that they had ASICBoost hardware, and wanted to do Covert ASICBoost!
But Covert ASICBoost is incompatible with SegWit: SegWit adds a second Merkle tree of transaction data, Covert ASICBoost works by fiddling with the transaction ordering in a block, and recomputing two Merkle trees is more expensive than recomputing just one (which loses the ASICBoost advantage).
Of course, those miners that wanted Covert ASICBoost did not want to openly admit that they had ASICBoost hardware; they wanted to keep their advantage secret because miners are strongly competitive in a very tight market. Doing ASICBoost covertly was just the ticket, but it would not work post-SegWit.
Fortunately, due to the BIP9 activation process, they could hold SegWit hostage while covertly taking advantage of Covert ASICBoost!

UASF: BIP148 and BIP8

Even after the incompatibility between Covert ASICBoost and SegWit was realized, activation of SegWit remained stalled, and miners were still not openly admitting that ASICBoost was related to the non-activation of SegWit.
Eventually, a new proposal was created: BIP148. With this rule, 3 months before the end of the SegWit timeout, nodes would reject blocks that did not signal SegWit. Thus, 3 months before SegWit timeout, BIP148 would force activation of SegWit.
This proposal was not accepted by Bitcoin Core, due to the shortening of the timeout (it effectively times out 3 months before the initial SegWit timeout). Instead, a fork of Bitcoin Core was created which added the patch to comply with BIP148. This was claimed as a User Activated Soft Fork, UASF, since users could freely download the alternate fork rather than sticking with the developers of Bitcoin Core.
Now, BIP148 effectively is just a BIP9 activation, except at its (earlier) timeout, the new rules would be activated anyway (instead of the BIP9-mandated behavior that the upgrade is cancelled at the end of the timeout).
BIP148 was actually inspired by the BIP8 proposal (the link here is a historical version; BIP8 has been updated recently, precisely in preparation for Taproot activation). BIP8 is basically BIP9, but at the end of timeout, the softfork is activated anyway rather than cancelled.
This removed the ability of miners to hold the softfork hostage. At best, they can delay the activation, but not stop it entirely by holding out as in BIP9.
Of course, this implies the risk that not all miners have upgraded before activation, leading to possible losses for SPV users, as well as pressuring miners to signal activation, possibly without actually upgrading their software to properly enforce the new softfork rules.

BIP91, SegWit2X, and The Aftermath

BIP148 inspired countermeasures, possibly from the Covert ASICBoost miners, possibly from concerned users who wanted to offer concessions to miners. To this day, the common name for BIP148 - UASF - remains an emotionally-charged rallying cry for parts of the Bitcoin community.
One of these was SegWit2X. This was brokered in a deal between some Bitcoin personalities at a conference in New York, and thus part of the so-called "New York Agreement" or NYA, another emotionally-charged acronym.
The text of the NYA was basically:
  1. Set up a new activation threshold at 80% signalled at bit 4 (vs bit 1 for SegWit).
    • When this 80% signalling was reached, miners would require that bit 1 for SegWit be signalled to achieve the 95% activation needed for SegWit.
  2. If the bit 4 signalling reached 80%, increase the block weight limit from the SegWit 4000000 to the SegWit2X 8000000, 6 months after bit 1 activation.
The first item above was coded in BIP91.
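As a rough sketch of that rule (Python; the 336-block window and 80% figure are my recollection of BIP91's parameters, so treat them as approximate and check the BIP itself for exact values):

    BIT4, BIT1 = 4, 1
    WINDOW = 336       # BIP91 used much shorter periods than BIP9's 2016-block window
    THRESHOLD = 0.80   # 80% of blocks in the window signalling bit 4

    def bit_set(version: int, bit: int) -> bool:
        return (version >> bit) & 1 == 1

    def bip91_locked_in(recent_versions: list[int]) -> bool:
        assert len(recent_versions) == WINDOW
        return sum(bit_set(v, BIT4) for v in recent_versions) / WINDOW >= THRESHOLD

    def block_valid(version: int, bip91_active: bool) -> bool:
        # Once BIP91 is active, blocks that do not signal bit 1 (SegWit) are rejected.
        return bit_set(version, BIT1) if bip91_active else True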
Unfortunately, if you read BIP91 independently of the NYA, you might come to the conclusion that BIP91 was only about lowering the threshold to 80%. In particular, BIP91 never mentions anything about the second point above: it never mentions that the bit 4, 80% threshold would also signal for a later hardfork increase in the weight limit.
Because of this, even though there are claims that NYA (SegWit2X) reached 80% dominance, a close reading of BIP91 shows that the 80% dominance was only for SegWit activation, without necessarily a later 2x capacity hardfork (SegWit2X).
This ambiguity of bit 4 (the NYA says it includes a 2x capacity hardfork, BIP91 says it does not) has continued to be a thorn in later blocksize debates. Economically speaking, Bitcoin futures between SegWit and SegWit2X showed strong economic dominance in favor of SegWit (SegWit2X futures traded at a fraction of the value of SegWit futures; I personally made a tidy but small amount of money betting against SegWit2X in the futures market), so suggesting that the NYA achieved 80% dominance even in mining is laughable, but the NYA text that ties bit 4 to SegWit2X still exists.
Historically, BIP91 triggered, which caused SegWit to activate before the shorter BIP148 timeout. BIP148 proponents hold to this day that it was the BIP148 shorter timeout and no-compromises-activate-on-August-1 stance that made miners flock to BIP91 as a face-saving tactic that quietly dropped the second clause of the NYA. NYA supporters keep pointing to the bit 4 text in the NYA and the historical activation of BIP91 as a failed promise by Bitcoin developers.

Taproot Activation Proposals

There are two primary proposals I can see for Taproot activation:
  1. BIP8.
  2. Modern Softfork Activation.
We have discussed BIP8: roughly, it has a bit and a timeout; if 95% of miners signal the bit it activates, and at the end of the timeout it activates anyway. (EDIT: BIP8 has had recent updates: at the end of the timeout it can now either activate or fail. For the most part, in the text below, "BIP8" means BIP8-and-activate-at-timeout, and "BIP9" means BIP8-and-fail-at-timeout.)
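In code terms the difference is just what happens at the timeout; here is a toy sketch (the real deployments are full state machines, and lockinontimeout is the parameter the updated BIP8 uses to select the behavior):

    # Toy sketch of the behavior at timeout, using this post's shorthand.
    def state_after_timeout(threshold_met: bool, lockinontimeout: bool) -> str:
        if threshold_met:
            return "LOCKED_IN"  # enough miner signalling: both variants activate
        # signalling never reached the threshold before the timeout:
        return "LOCKED_IN" if lockinontimeout else "FAILED"  # "BIP8" vs "BIP9" above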
So let's take a look at Modern Softfork Activation!

Modern Softfork Activation

This is a more complex activation method, composed of BIP9 and BIP8 as subcomponents.
  1. First have a 12-month BIP9 (fail at timeout).
  2. If the above fails to activate, have a 6-month discussion period during which users and developers and miners discuss whether to continue to step 3.
  3. Have a 24-month BIP8 (activate at timeout).
The total above is 42 months, if you are counting: 3.5 years worst-case activation.
The logic here is that if there are no problems, BIP9 will work just fine anyway. And if there are problems, the 6-month period should weed them out. Finally, miners cannot hold the feature hostage, since the 24-month BIP8 period will exist anyway.
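Laid out as a timeline (a quick sketch; the month counts are the proposal's, the formatting is mine):

    PHASES = [
        ("BIP9-style signalling (fail at timeout)", 12),
        ("discussion period (only if phase 1 failed)", 6),
        ("BIP8-style signalling (activate at timeout)", 24),
    ]

    start = 0
    for name, months in PHASES:
        print(f"months {start:2d}..{start + months:2d}: {name}")
        start += months
    print(f"worst case: {start} months (~{start / 12:.1f} years)")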

PSA: Being Resilient to Upgrades

Software is very brittle.
Anyone who has been using software for a long time has experienced something like this:
  1. You hear a new version of your favorite software has a nice new feature.
  2. Excited, you install the new version.
  3. You find that the new version has subtle incompatibilities with your current workflow.
  4. You are sad and downgrade to the older version.
  5. You find out that the new version has changed your files in incompatible ways that the old version cannot work with anymore.
  6. You tearfully reinstall the newer version and figure out how to recover your lost productivity now that you have to adapt to a new workflow.
If you are a technically-competent user, you might codify your workflow into a bunch of programs. And then you upgrade one of the external pieces of software you are using, and find that it has a subtle incompatibility with your current workflow, which is based on a bunch of simple programs you wrote yourself. And if those simple programs are used as the basis of some important production system, you have just screwed up, because you upgraded software on an important production system.
And well, one of the issues with new softfork activation is that if not enough people (users and miners) upgrade to the newest Bitcoin software, the security of the new softfork rules is at risk.
Upgrading software of any kind is always a risk, and the more software you build on top of the software-being-upgraded, the greater you risk your tower of software collapsing while you change its foundations.
So if you have some complex Bitcoin-manipulating system with Bitcoin somewhere at the foundations, consider running two Bitcoin nodes:
  1. One is a "stable-version" Bitcoin node. Once it has synced, set it up to connect=x.x.x.x to the second node below (so that your ISP bandwidth is only spent on the second node). Use this node to run all your software: it's a stable version that you don't change for long periods of time. Enable txindex, disable pruning, whatever your software needs.
  2. The other is an "always-up-to-date" Bitcoin node. Keep its storage down with pruning (initially sync it off the "stable-version" node). You can't use blocksonly if your "stable-version" node needs to send transactions, but otherwise this "always-up-to-date" Bitcoin node can be kept as a low-resource node, so you can run both nodes on the same machine (a sample configuration sketch follows this list).
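As a concrete starting point, here is a sample of the relevant bitcoin.conf settings (connect, txindex, prune, and blocksonly are real Bitcoin Core options; the address and sizes are placeholders for your own setup):

    # --- stable-version node ---
    connect=192.168.1.10   # only talk to your own always-up-to-date node
    txindex=1              # or whatever your software on top needs
    prune=0                # keep full blocks for your applications

    # --- always-up-to-date node ---
    prune=550              # minimum allowed prune target, in MiB
    #blocksonly=1          # only if the stable node never needs to relay transactions through it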
When a new Bitcoin version comes up, you just upgrade the "always-up-to-date" Bitcoin node. This protects you: if a future softfork activates, you will only receive valid Bitcoin blocks and transactions. Since this node has nothing running on top of it and is just a special peer of the "stable-version" node, any software incompatibilities with your system software do not matter.
Your "stable-version" Bitcoin node remains the same version until you are ready to actually upgrade this node and are prepared to rewrite most of the software you have running on top of it due to version compatibility problems.
When upgrading the "always-up-to-date" node, you can bring it down safely and then start it later. Your "stable-version" will keep running, disconnected from the network, but otherwise still available for whatever queries. You do need some system to stop the "always-up-to-date" node if for any reason the "stable-version" goes down (otherwise, if the "always-up-to-date" advances its pruning window past what your "stable-version" has, the "stable-version" cannot sync afterwards), but if you are technically competent enough that you need to do this, you are technically competent enough to write such a trivial monitor program (EDIT: gmax notes you can adjust the pruning window via RPC commands to help with this as well).
This recommendation is from gmaxwell on IRC, by the way.
submitted by almkglor to Bitcoin

nChain Chief Scientist Craig Wright does not understand hashing or Merkle trees

submitted by CombustibleBitcoiner to bsv

300 days to increase transaction demand by 100x, what's the plan?

How will BCH pay for their security if fees are near zero and block reward keeps halving?
All numbers based off of https://fork.lol/.
BCH needs to process at least 0.77 BCH in fees per block to stay on par with BTC if it wants to maintain the same amount of sha256 hashpower (assuming the price ratio is the same). After the halving, BTC fees will make up 14.24% of the mining reward, while BCH is projected to have 0.09% in fees.
How can BCH miners make up that deficit?
Many claim that tx volume will make up for it. So how many transactions does BCH need to process if your target fee rate is ~$0.01 per tx?
Today's price: $400/BCH
0.77 BCH = $308 (required fee revenue to stay on par with BTC)
$308 / $0.01 = 30800 tx per block.
BCH is currently doing ~320 tx per block.
How are you going to increase tx demand by 100x in the next 300 days AND have everyone pay at least $0.01 per tx???
Another interesting complication of having your users think about fees in terms of fiat units ($0.01) is that it gets even worse as the price goes up.
Future price: $800/BCH
0.77 BCH = $616 (required fee revenue to stay on par with BTC)
$616 / $0.01 = 61600 tx per block.
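The arithmetic above generalizes to a one-liner; this sketch uses the post's own assumptions (the 0.77 BCH/block figure and the $0.01 target fee), not live data:

    def required_tx_per_block(price_usd: float,
                              required_fees_bch: float = 0.77,
                              fee_per_tx_usd: float = 0.01) -> float:
        return required_fees_bch * price_usd / fee_per_tx_usd

    print(required_tx_per_block(400))  # 30800.0 tx per block at $400/BCH
    print(required_tx_per_block(800))  # 61600.0 tx per block at $800/BCH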
submitted by fnchain to btc

Here’s all the money in the world, in one chart

submitted by giannunes to Bitcoin

So all this Bitmain, Ver & Jihan BU drama is actually really about ASICBOOST exploit?

submitted by Butt_Cheek_Spreader to Bitcoin

Proof Of Work Explained

https://preview.redd.it/hl80wdx61j451.png?width=1200&format=png&auto=webp&s=c80b21c53ae45c6f7d618f097bc705a1d8aaa88f
A proof-of-work (PoW) system (or protocol, or function) is a consensus mechanism first invented by Cynthia Dwork and Moni Naor, as presented in a 1993 journal article. In 1999, it was formalized in a paper by Markus Jakobsson and Ari Juels, who named it "proof of work".
It was developed as a way to prevent denial-of-service attacks and other service abuse (such as spam on a network). It is now the most widely used consensus algorithm, employed by many cryptocurrencies such as Bitcoin and Ethereum.
How does it work?
In this method, a group of users competes against each other to find the solution to a complex mathematical puzzle. Any user who successfully finds the solution then broadcasts the block to the network for verification. Once the other users verify the solution, the block moves to the confirmed state.
The blockchain network consists of numerous decentralized nodes. These nodes act as miners, which are responsible for adding new blocks to the blockchain. A miner repeatedly selects a random number (a nonce) and combines it with the data present in the block. To find a correct solution, the miner needs to find a nonce such that the resulting hash meets the difficulty target, so that the newly generated block can be added to the main chain. The network pays a reward to the miner node for finding the solution.
The block is then passed through a hash function to generate an output that meets the required criteria. Once a result is found, other nodes in the network verify and validate the outcome. Every new block holds the hash of the preceding block, which forms a chain of blocks; together, they store information within the network. Changing a block would require a new block containing the same predecessor, and it is practically impossible to regenerate all successors and change their data. This protects the blockchain from tampering.
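A minimal sketch of that loop (Python; the "header" is a toy byte string and the target is set very low so it finishes quickly; real mining differs in every practical detail):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def mine(header_without_nonce: bytes, target: int) -> int:
        nonce = 0
        while True:
            digest = sha256d(header_without_nonce + nonce.to_bytes(4, "little"))
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # toy difficulty: roughly 1 in 2**20 hashes succeeds
    print("found nonce:", mine(b"toy-block-header", 1 << (256 - 20)))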
What is Hash Function?
A hash function is a function that is used to map data of any length to a fixed-size value. The result or outcome of a hash function is known as a hash value, hash code, digest, or simply a hash.
https://preview.redd.it/011tfl8c1j451.png?width=851&format=png&auto=webp&s=ca9c2adecbc0b14129a9b2eea3c2f0fd596edd29
The hash method is quite secure: any slight change in the input will result in a completely different output, which is then discarded by network participants. The hash function generates output of the same fixed length regardless of the size of the input. It is a one-way function, i.e. the function cannot be reversed to get the original data back; one can only hash candidate data and check whether the output matches.
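Those properties are easy to see with a couple of lines (Python's hashlib; the inputs are arbitrary examples):

    import hashlib

    a = hashlib.sha256(b"proof of work").hexdigest()
    b = hashlib.sha256(b"proof of work!").hexdigest()

    print(a)  # 64 hex characters, regardless of input length
    print(b)  # a completely different digest for a one-character change
    print(len(a) == len(b) == 64)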
Implementations
Nowadays, Proof-of-Work is used in a lot of cryptocurrencies. Its implementation in Bitcoin made it so popular that it was adopted by several other cryptocurrencies. Bitcoin uses a Hashcash-style puzzle whose complexity is based upon the total power of the network; on average, it takes approximately 10 minutes to form a block. Litecoin, a Bitcoin-based cryptocurrency, has a similar system, and Ethereum also implemented the same kind of protocol.
Types of PoW
Proof-of-work protocols can be categorized into two parts:
· Challenge-response
This protocol creates a direct link between the requester (client) and the provider (server).
In this method, the requester needs to find the solution to a challenge that the server has given. The solution is then validated by the provider for authentication.
The provider chooses the challenge on the spot. Hence, its difficulty can be adapted to its current load. If the challenge-response protocol has a known solution or is known to exist within a bounded search space, then the work on the requester side may be bounded.
https://preview.redd.it/ij967dof1j451.png?width=737&format=png&auto=webp&s=12670c2124fc27b0f988bb4a1daa66baf99b4e27
Source: Wikipedia
· Solution–verification
These protocols do not have any such prior link between the sender and the receiver. The client imposes a problem on itself and solves it, then sends the solution to the server, which checks both the problem choice and the outcome. Like Hashcash, these schemes are based on unbounded probabilistic iterative procedures.
https://preview.redd.it/gfobj9xg1j451.png?width=740&format=png&auto=webp&s=2291fd6b87e84395f8a4364267f16f577b5f1832
Source: Wikipedia
These two methods are generally based on one of the following three techniques:
CPU-bound
This technique depends upon the speed of the processor: the higher the processor power, the faster the computation.
Memory-bound
This technique relies on main memory accesses (either latency or bandwidth) rather than raw computation speed.
Network-bound
In this technique, the client must perform a few computations and wait to receive some tokens from remote servers.
List of proof-of-work functions
Here is a list of known proof-of-work functions:
o Integer square root modulo a large prime
o Weaken Fiat–Shamir signatures
o Ong–Schnorr–Shamir signature is broken by Pollard
o Partial hash inversion
o Hash sequences
o Puzzles
o Diffie–Hellman–based puzzle
o Moderate
o Mbound
o Hokkaido
o Cuckoo Cycle
o Merkle tree-based
o Guided tour puzzle protocol
A successful attack on a blockchain network requires a lot of computational power and a lot of time to do the calculations. Proof of Work makes hacks inefficient since the cost incurred would be greater than the potential rewards for attacking the network. Miners are also incentivized not to cheat.
It is still considered one of the most popular methods of reaching consensus in blockchains, though it may not be the most efficient solution due to its extensive energy usage. But that cost is also why it guarantees the security of the network.
Due to Proof of Work, it is practically impossible to alter any aspect of the blockchain, since any such change would require re-mining all subsequent blocks. It is also difficult for any single user to take control of the network's computing power, since acquiring it requires enormous amounts of energy and hardware.
submitted by RumaDas to u/RumaDas

Hans: for example implement bitcoin on top of the tangle then the bitcoin

Hans Moog [IF]Yesterday at 23:57
so if somebody would for example implement bitcoin on top of the tangle then the bitcoin miners would need to buy IOTA to be able to send messages in the network
We will most probably see a lot of companies also build proprietary stuff on the tangle
you can even have "private" apps on the tangle
that are encrypted
so you can have stuff that would require a "private inhouse blockchain" to use the global infrastructure of the tangle
its extremely powerfull

Jack KerouacYesterday at 23:58
I thought of that already - making a "new coin" that is private like monero on the basis of the tangle - using a privat DAPP
Crazy powerful!

Hans Moog [IF]Yesterday at 23:59
yeah you can have a "privacy coin" on top of the tangle
that is maybe not as fast and scalable as IOTA itself and might even have fees


Hans Moog [IF]Yesterday at 23:59
but if people are willing to pay for this extra service then you can have private transactions without having to "leave" the ecosystem


Hans Moog [IF]Today at 00:00
and without having to give up scalability for the IOTA base layer

MaKlaToday at 00:00
(it really sounds great. unexpected simplicity in a versatile yet realistically applicable solution is rare :slight_smile: )
well... "simplicity"....^^"
Jack KerouacToday at 00:02
Crazy powerful!
and everything would need IOTAs...
Qubic and Oracles would also be a base DAPP on this tangle ...
Hans Moog [IF]Today at 00:05
exactly
Jack KerouacToday at 00:06
Looking forward to this!
Hans Moog [IF]Today at 00:07
:trollface:

CorienToday at 00:07
So if i understand correct, other coins on the Tangle would not have a negatieve impact on the value of the IOTA. Even the other way around.
Dave [EF]Today at 00:08
Sounds really exciting!

Hans Moog [IF]Today at 00:08
IOTA would always be the native coin which would be the fastest and most secure one, yeah
if there would ever be a coin that would be faster and more secure then it would make sense to implement that in IOTA core

Jack KerouacToday at 00:10
in a way, yes but its a bit more powerfull now as the DAPPS are completely isolated and "rejecting" transactions in one app does not cause all of the transactions that approve it to also be rejected

u/Hans Moog [IF] so no transaction will be rejected anymore from the base tangle layer? In the worst case - if in every DAPP consens mechanism this transaction would be rejected, because of double spend or not usefull for this DAPP - it is just seen as data transaction?!

Hans Moog [IF]Today at 00:10
yeah more or less
you might still have to reattach a value transfer rare edge cases (if your node is out of sync or sth) but you will never have to reattach a "non-value-transfer"
or well ...
if you attach sth to a part of the tangle that is really old and everybody has snapshotted that already then you would still have to reattach
but a tx does not "depend" on other apps anymore
so if 1 app says its bad and we want to orphan it then this happens on another layer

ocmoneToday at 00:14
Holy Guacamole!!!

grasToday at 00:15
So weaker nodes may work without ledger, like just a hub?

Jack KerouacToday at 00:16
As all of IOTA would rely on this stuff - DID, MAM, ... - it should be done very fast :wink:

Hans Moog [IF]Today at 00:16
nah you always need to support iota value transfers for the rate control

TobiasToday at 00:16
Is there a ETA for value tx in GoShimmer?

Jack KerouacToday at 00:16
because of the mana?!

Hans Moog [IF]Today at 00:16
but a node that is not interested in the decentralized facebook and only wants to process MAM messages can do so
yeah mana is the thing that ties everything together and to know the mana you need to know IOTA value transfers
I mean ... you could rely on a centralized service that provides you the mana balances, so even very hardware contraint nodes could theoretically take part in the network. but then you might process a few txs that others drop if this centralized service would give you the wrong balances
but the whole point of IOTA is to be shardable and lightweight so you wont need much for the value transfer layer anyway

Jack KerouacToday at 00:19
oh I forgot about sharding - how will that be done in this kind of environment?

CorienToday at 00:21
u/Hans Moog [IF] Have you ever thought about how much storage (permanode) you need if IOTA becomes the new TCP/IP ?

Hans Moog [IF]Today at 00:21
but if you want to only "issue" transactions and receive your mana by people assigning it to you (i.e. a company remotely loading up their sensors with mana), then you can essentially do that
one of the first applications I will implement is an "archiving APP" that records the activity in the tangle and allows you to "prove" that a certain tx was part of the tangle at some point in the past
recording 100 years of activity in the tangle (independently of the TPS) will require less than 1 GB
much less actually
the magic of merkle trees

Jack KerouacToday at 00:23
of all transactions of every DAPP?

Hans Moog [IF]Today at 00:23
yes
everything that ever happened in the tangle

Jack KerouacToday at 00:24
Is the sharding the ontology concept - or how will that be done in this kind of environment?

Hans Moog [IF]Today at 00:25
nah its not related
Jack KerouacToday at 00:25
ok
Hans Moog [IF]Today at 00:25
or well everything is somehow related but these APPS are not the "shards"
submitted by longfld to IOTAmarkets

For devs and advanced users that are still in the dark: Read this to get redpilled about why Bitcoin (SV) is the real Bitcoin

This post by cryptorebel is a great intro for newbies. Here is a continuation for a technical audience. I'll be making edits for readability and maybe even add more content.
The short explanation of why BSV is the real Bitcoin is that it implements the original L1 scripting language, and removes hacks like p2sh. It also removes the block size limit, and yes that leads to a small number of huge nodes. It might not be the system you wanted. Nodes are miners.
The key thing to understand about the UTXO architecture is that it is maximally "sharded" by default. Logically dependent transactions may require linear span to construct, but they can be validated in sublinear span (actually polylogarithmic expected span). Constructing dependent transactions happens out-of-band in any case.
The fact that transactions in a block are merkelized is an obvious sign that Bitcoin was designed for big blocks. But merkle trees are only half the story. UTXOs are essentially hash-addressed stateful continuation snapshots which can also be "merged" (validated) in a tree.
I won't even bother talking about how broken Lightning Network is. Of all the L2 scaling solutions that could have been used with small block sizes, it's almost unbelievable how many bad choices they've made. We should be kind to them and assume it was deliberate sabotage rather than insulting their intelligence.
Segwit is also outside the scope of this post.
However I will briefly hate on p2sh. Imagine seeing a stunted L1 script language, and deciding that the best way to implement multisigs was a soft-fork patch in the form of p2sh. If the intent was truly backwards-compatibility with old clients, then by that logic all segwit and p2sh addresses are supposed to only be protected by transient rules outside of the protocol. Explain that to your custody clients.
As far as Bitcoin Cash goes, I was in the camp of "there's still time to save BCH" until not too long ago. Unfortunately the galaxy brains behind BCH have doubled down on their mistakes. Again, it is kinder to assume deliberate sabotage. (As an aside, the fact that they didn't embrace the name "bcash" when it was used to attack them shows how unprepared they are when the real psyops start to hit. Or, again, that the saboteurs controlled the entire back-and-forth.)
The one useful thing that came out of BCH is some progress on L1 apps based on covenants, but the issue is that they are not taking care to ensure every change maintains the asymptotic validation complexity of bitcoin's UTXO.
Besides that, The BCH devs missed something big. So did I.
It's possible to load the entire transaction onto the stack without adding any new opcodes. Read this post for a quick intro on how transaction meta-evaluation leads to stateful smart contract capabilities. Note that it was written before I understood how it was possible in Bitcoin, but the concept is the same. I've switched to developing a language that abstracts this behavior and compiles to bitcoin's L1. (Please don't "told you so" at me if you just blindly trusted nChain but still can't explain how it's done.)
It is true that this does not allow exactly the same class of L1 applications as Ethereum. It only allows those that can be made parallel, those that can delegate synchronization to "userspace". It forces you to be scalable, to process bottlenecks out-of-band at a per-application level.
Now, some of the more diehard supporters might say that Satoshi knew this was possible and meant for it to be this way, but honestly I don't believe that. nChain says they discovered the technique 'several years ago'. OP_PUSH_TX would have been a very simple opcode to include, and it does not change any aspect of validation in any way. The entire transaction is already in the L1 evaluation context for the purpose of checksig, it truly changes nothing.
But here's the thing: it doesn't matter if this was a happy accident. What matters is that it works. It is far more important to keep the continuity of the original protocol spec than to keep making optimizations at the protocol level. In a concatenative language like bitcoin script, optimized clients can recognize "checksig trick phrases" regardless of their location in the script, and treat them like a simple opcode. Script size is not a constraint when you allow the protocol to scale as designed. Think of it as precompiles in EVM.
Now let's address Ethereum. V. Buterin recently wrote a great piece about the concept of credible neutrality. The only way for a blockchain system to achieve credible neutrality and long-term decentralization of power is to lock down the protocol rules. The thing that caused Ethereum to succeed was the yellow paper. Ethereum has outperformed every other smart contract platform because the EVM has clear semantics with many implementations, so people can invest time and resources into applications built on it. The EVM is apolitical, the EVM spec (fixed at any particular version) is truly decentralized. Team Ethereum can plausibly maintain credibility and neutrality as long as they make progress towards the "Serenity" vision they outlined years ago. Unfortunately they have already placed themselves in a precarious position by picking and choosing which catastrophes they intervene on at the protocol level.
But those are social and political issues. The major technical issue facing the EVM is that it is inherently sequential. It does not have the key property that transactions that occur "later" in the block can be validated before the transactions they depend on are validated. Sharding will hit a wall faster than you can say "O(n/64) is O(n)". Ethereum will get a lot of mileage out of L2, but the fundamental overhead of synchronization in L1 will never go away. The best case scaling scenario for ETH is an L2 system with sublinear validation properties like UTXO. If the economic activity on that L2 system grows larger than that of the L1 chain, the system loses key security properties. Ethereum is sequential by default with parallelism enabled by L2, while Bitcoin is parallel by default with synchronization forced into L2.
Finally, what about CSW? I expect soon we will see a lot of people shouting, "it doesn't matter who Satoshi is!", and they're right. The blockchain doesn't care if CSW is Satoshi or not. It really seems like many people's mental model is "Bitcoin (BSV) scales and has smart contracts if CSW==Satoshi". Sorry, but UTXO scales either way. The checksig trick works either way.
Coin Woke.
submitted by -mr-word- to bitcoincashSV

Bitcoin (BTC)A Peer-to-Peer Electronic Cash System.

  • Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
  • Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin Protocol offered a solution to the Byzantine Generals’ Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
  • Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
  • The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
  • With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.

https://preview.redd.it/s2gmpmeze3151.png?width=256&format=png&auto=webp&s=9759910dd3c4a15b83f55b827d1899fb2fdd3de1

1. What is Bitcoin (BTC)?

  • Bitcoin is a peer-to-peer cryptocurrency that aims to function as a means of exchange and is independent of any central authority. Bitcoins are transferred electronically in a secure, verifiable, and immutable way.
  • Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
  • The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
  • The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
  • Unlike public blockchain infrastructures supporting the development of decentralized applications (Ethereum), the Bitcoin protocol is primarily used only for payments, and has only very limited support for smart contract-like functionalities (Bitcoin "Script" is mostly used to set certain conditions that must be met before bitcoins can be spent).

2. Bitcoin’s core features

For a more beginner’s introduction to Bitcoin, please visit Binance Academy’s guide to Bitcoin.

Unspent Transaction Output (UTXO) model

A UTXO transaction works like a cash payment between two parties: Alice gives money to Bob and receives change (i.e., the unspent amount). In comparison, blockchains like Ethereum rely on the account model.
https://preview.redd.it/t1j6anf8f3151.png?width=1601&format=png&auto=webp&s=33bd141d8f2136a6f32739c8cdc7aae2e04cbc47
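To make the model concrete, here is a small sketch (Python; the names, amounts, and txids are made up) of spending one UTXO and receiving change:

    # Alice spends her single 1.0 UTXO: 0.3 goes to Bob, 0.7 comes back as change (fees ignored).
    utxo_set = {
        ("txid_a", 0): {"owner": "Alice", "amount": 1.0},
    }

    def spend(utxo_set, outpoint, payments, new_txid):
        utxo = utxo_set.pop(outpoint)  # the input is consumed in its entirety
        assert abs(sum(payments.values()) - utxo["amount"]) < 1e-9
        for index, (owner, amount) in enumerate(payments.items()):
            utxo_set[(new_txid, index)] = {"owner": owner, "amount": amount}

    spend(utxo_set, ("txid_a", 0), {"Bob": 0.3, "Alice": 0.7}, "txid_b")
    print(utxo_set)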

Nakamoto consensus

In the Bitcoin network, anyone can join the network and become a bookkeeping service provider i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This feature is called Proof of Work (PoW).
The probability of any single validator to finish the task first is equal to the percentage of the total network computation power, or hash power, the validator has. For instance, a validator with 5% of the total network computation power will have a 5% chance of completing the task first, and therefore becoming the next block producer.
Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done by personal computer CPUs.
As of today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuit (“ASIC”).
Proof of Work secures the network as block producers must have spent resources external to the network (i.e., money to pay electricity), and can provide proof to other participants that they did so.
With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network’s hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks indicates another feature of the Nakamoto consensus: the finality of transactions is only probabilistic.
Once a block is produced, it is then propagated by the block producer to all other validators to check on the validity of all transactions in that block. The block producer will receive rewards in the network’s native currency (i.e., bitcoin) as all validators approve the block and update their ledgers.

The blockchain

Block production

The Bitcoin protocol utilizes the Merkle tree data structure in order to organize hashes of numerous individual transactions into each block. This concept is named after Ralph Merkle, who patented it in 1979.
With the use of a Merkle tree, though each block might contain thousands of transactions, it will have the ability to combine all of their hashes and condense them into one, allowing efficient and secure verification of this group of transactions. This single hash is called the Merkle root, which is stored in the Block Header of a block. The Block Header also stores other meta information of a block, such as a hash of the previous Block Header, which enables blocks to be associated in a chain-like structure (hence the name "blockchain").
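A minimal sketch of how that root is computed, Bitcoin-style (double SHA-256, duplicating the last hash when a level has an odd number of entries; the "transactions" here are placeholder strings):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(tx_hashes: list[bytes]) -> bytes:
        level = tx_hashes[:]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])  # duplicate the odd element, as Bitcoin does
            level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    txs = [sha256d(f"tx{i}".encode()) for i in range(5)]
    print(merkle_root(txs).hex())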
An illustration of block production in the Bitcoin Protocol is demonstrated below.

https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d

Block time and mining difficulty

Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty.
Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly.
Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
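As a quick sketch of that retarget rule (illustrative Python; Bitcoin Core works on a compact "bits" encoding and clamps the adjustment to a factor of four per retarget, which is mirrored here):

    EXPECTED_SECONDS = 2016 * 10 * 60  # 2016 blocks at the 10-minute target

    def retarget(old_target: int, actual_seconds: int) -> int:
        # A higher target means lower difficulty; faster blocks shrink the target.
        actual_seconds = max(EXPECTED_SECONDS // 4, min(actual_seconds, EXPECTED_SECONDS * 4))
        return old_target * actual_seconds // EXPECTED_SECONDS

    # blocks arrived twice as fast as intended -> target halves (difficulty doubles)
    print(retarget(1 << 220, EXPECTED_SECONDS // 2) == (1 << 220) // 2)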

What are orphan blocks?

In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest, due to network latency.
It takes time for a message to travel through a network, and it is entirely possible for 2 nodes to complete the task and start to broadcast their results to the network at roughly the same time, while one’s messages are received by all other nodes earlier as the node has low latency.
Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute but his message would take 1 minute to reach the rest of the nodes that are still working on the solution. While his message travels through the network, all the work done by all other nodes during that 1 minute, even if these nodes also complete the task, would go to waste. In this case, 50% of the computational power contributed to the network is wasted.
The percentage of wasted computational power would proportionally decrease if the mining difficulty were higher, as it would statistically take longer for miners to complete the task. In other words, if the mining difficulty, and therefore targeted block time is low, miners with powerful and often centralized mining facilities would get a higher chance of becoming the block producer, while the participation of weaker miners would become in vain. This introduces possible centralization and weakens the overall security of the network.
However, given the limited number of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting network scalability.

3. Bitcoin’s additional features

Segregated Witness (SegWit)

Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017.
SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin.
SegWit also makes transactions cheaper. Since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become.
https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e
The legacy Bitcoin block has a block size limit of 1 megabyte, and any change on the block size would require a network hard-fork. On August 1st 2017, the first hard-fork occurred, leading to the creation of Bitcoin Cash (“BCH”), which introduced an 8 megabyte block size limit.
Conversely, Segregated Witness was a soft-fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, which contains solely witness signatures, to the 1 megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade.
Furthermore, the separation of witness signatures from transaction data solves the malleability issue with the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners. Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values.
For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a (faulty) hash value of 67890.
Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid.
This can create many issues, as illustrated in the below example:
  1. Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
  2. Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC to her, and immediately ships goods to Bob.
  3. At the moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, therefore changing this transaction ID from 12345 to 67890.
  4. Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob’s wallet balance is valid.
  5. As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
With the Segregated Witness upgrade, such instances can not happen again. This is because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature won’t affect the transaction ID.
Since the transaction malleability issue is fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.

Lightning Network

Lightning Network is a second-layer micropayment solution for scalability.
Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins.
Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies. The most prominent of them include Blockstream, Lightning Labs, and ACINQ.
A list of curated resources relevant to Lightning Network can be found here.
In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both party’s wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions.
Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved in transactions do not need to wait for network confirmation on transactions. Instead, transacting parties would pay transaction fees to Bitcoin miners only when they decide to close the channel.
https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8
One limitation of the Lightning Network is that it requires a person to be online to receive transactions addressed to him. Another limitation in user experience is that one needs to lock up some funds every time he wishes to open a payment channel, and is only able to use those funds within the channel.
However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network. If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.

Schnorr Signature upgrade proposal

Elliptic Curve Digital Signature Algorithm (“ECDSA”) signatures are used to sign transactions on the Bitcoin blockchain.
https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4
However, many developers now advocate for replacing ECDSA with Schnorr Signature. Once Schnorr Signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys.
This would primarily be beneficial for network scalability. When multiple addresses were to conduct transactions to a single address, each transaction would require their own signature. With Schnorr Signature, all these signatures would be combined into one. As a result, the network would be able to store more transactions in a single block.
https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e
The reduced size in signatures implies a reduced cost on transaction fees. The group of senders can split the transaction fees for that one group signature, instead of paying for one personal signature individually.
Schnorr Signature also improves network privacy and token fungibility. A third-party observer will not be able to detect if a user is sending a multi-signature transaction, since the signature will be in the same format as a single-signature transaction.

4. Economics and supply distribution

The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial reward for a block was 50 BTC per block. Block mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, it implies that the block reward halving events will approximately take place every 4 years.
As of May 12th 2020, the block mining rewards are 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
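The subsidy schedule is simple enough to sketch (illustrative Python; Bitcoin Core does this in integer satoshis with a right shift, plain division is used here for readability):

    def block_subsidy(height: int) -> float:
        halvings = height // 210_000
        if halvings >= 64:
            return 0.0
        return 50.0 / (2 ** halvings)

    print(block_subsidy(0))        # 50.0
    print(block_subsidy(630_000))  # 6.25 -- the halving that took effect in May 2020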
submitted by D-platform to u/D-platform

[Discord Conv.] Salute to Crazy ones

Disclaimer:
This is my own edit of the conversation, so there could be some misunderstandings.
Even though u/longfld posted similar screenshots already (thanks to him/her), I'd like to share this summary again, because it has some more content.
Sometimes, we need more enjoyable stuff to read on this rough, dynamic ride to a new world.

2/18
*** Here's to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They're not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can't do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the one who are crazy enough to think they can change the world are the ones who do. *** from 'think different' ad campaign

TCP/IP
TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of communication protocols used to interconnect network devices on the internet.
TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that identify how it should be broken into packets, addressed, transmitted, routed and received at the destination.

DAPP
DApp is an abbreviated form for decentralized application.
A DApp has its backend code running on a decentralized peer-to-peer network. Contrast this with an app where the backend code is running on centralized servers.

Ontology
a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.

Hans Moog [IF]Yesterday at 21:00
[about the release of GoShimmer v0.2.0?]
Originally we wanted to release the new version end of next week but due to the hack this might be a bit delayed
but yeah we have a pretty concrete plan
the next version will be pretty interesting as it will introduce the changes that will turn IOTA into a general purpose DLT platform that can run pretty much anything that is even remotely related to DLT on it
I will write a bit more about that when there is time (maybe even do a video)

Hans Moog [IF]Yesterday at 21:02
value transfers will essentially be the first DAPP that runs on the tangle
so we need that before we can integrate the ledger

Hans Moog [IF]Yesterday at 21:04
[are you talking about the atomic transaction layout?]
not just that
also a different layered architecture which we call "the ontologies concept"
but its one of the building blocks yeah
tangle will essentially be like the decentralized version of TCP/IP
a general purpose protocol
once value transfers are implemented I have already a few interesting ideas for additional DAPPS
Coordicide is pretty "complex" as it requires a lot of different protocols - being able to completely separate the building blocks makes it much easier to get them "secure"

Hans Moog [IF]Yesterday at 21:08
the decentralized randomness required for FPC will for example be an app running on the tangle
DRAND App

Hans Moog [IF]Yesterday at 21:14
[about the new ontologies concept]
It's not really layed out in a public forum post yet but we discussed and finalized the specs on the last research summit last week

Hans Moog [IF]Yesterday at 21:19
[isn't this new ontology almost like part of multiverse consensus?]
yes
a lot of the idea that were part of the original multiverse concepts are useful also for the current coordicide
we will for example be able to separate the fate of data transactions and value transactions, which means that you can send data txs without having to be worried that they are rejected, because they attach to a double spend,
which is a requirement for a general purpose protocol anyway

Hans Moog [IF] Yesterday at 9:20 PM
[that sounds awesome!! To have a "base" tangle, and on this base tangle different "subtangles" for different applications - value transfers, DRAND, messaging, ...]
Archiving, DRAND, chat, MAM, DID, ...
you could even have phone calls on the tangle

Hans Moog [IF] Yesterday at 9:21 PM
but i guess for these kind of things it makes more sense to have a 1:1 connection
its extremely simple
the point is that it makes the code much simpler

Hans Moog [IF] Yesterday at 9:22 PM
you can even have different consensus mechanisms next to each other for different apps
some apps like decentralized chats for example don't even need consensus
or if you want to build a decentralized version of facebook

Hans Moog [IF] Yesterday at 9:22 PM
you don't need consensus here
and it's not going to be "separate tangles" that have nothing to do with each other
everything runs in the same main tangle

Hans Moog [IF] Yesterday at 9:30 PM
it's not very complex - that's the beautiful thing
in fact it makes stuff much easier

Hans Moog [IF] Yesterday at 9:31 PM
Maybe I can do a video about it in the coming days
writing a network application is much easier if you can use TCP/IP than if you had to implement the networking from scratch and communicate with the wires in your PC

Hans Moog [IF] Yesterday at 9:39 PM
[Is the "base decentralized TCP/IP layer" already specced out and ready to be implemented? As this will become the "heart" of the tangle.]
yep
its coded already
we started merging the code

Hans Moog [IF] Yesterday at 9:44 PM
[Doesn't "DRNG via committee" still make the network somewhat centralized?]
Not really.
1. The committee members do not "control" consensus.
2. If committee members get taken offline by for example government intervention, then the next highest mana holder just joins the committee.
So you still maintain all the benefits of a decentralized network.
resilience against outside actors, no single actor controls the system

Hans Moog [IF] Yesterday at 9:46 PM
[How does the committee get chosen, by whom, and with which parameters?]
in the first version of goshimmer we will most probably have a fixed committee of some IF nodes + selected community members, but in the final protocol, the highest mana nodes will just issue a randomness beacon according to the protocol
so the committee is dynamic
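For intuition, the "highest mana holders form the committee, and the next-highest joins if one drops out" rule described here is easy to sketch. This is only an illustration of that selection logic, not IF code; the node names, mana values and committee size below are invented:

```python
# Minimal sketch (not IF code): pick a dynamic DRNG committee as the
# top-N mana holders, and refill it when members go offline.

def select_committee(mana_by_node: dict, committee_size: int = 10) -> list:
    """Return the nodes with the highest mana, ties broken by node id."""
    ranked = sorted(mana_by_node.items(), key=lambda kv: (-kv[1], kv[0]))
    return [node for node, _ in ranked[:committee_size]]

def refill_committee(committee: list, mana_by_node: dict, offline: set) -> list:
    """If members go offline, the next-highest mana holders take their place."""
    remaining = [n for n in committee if n not in offline]
    candidates = select_committee(mana_by_node, len(mana_by_node))
    for node in candidates:
        if len(remaining) >= len(committee):
            break
        if node not in remaining and node not in offline:
            remaining.append(node)
    return remaining

if __name__ == "__main__":
    mana = {"nodeA": 900, "nodeB": 750, "nodeC": 600, "nodeD": 400, "nodeE": 100}
    committee = select_committee(mana, committee_size=3)
    print(committee)                                     # ['nodeA', 'nodeB', 'nodeC']
    print(refill_committee(committee, mana, {"nodeB"}))  # ['nodeA', 'nodeC', 'nodeD']
```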

Hans Moog [IF] Yesterday at 9:47 PM
[Does the committee agree on the same number and then broadcast it or...?]
If I am informed correctly, then its based on threshold signatures
But I didn't work on that part, so I am not 100% sure how exactly it works
its a bit hard to keep track of everything and also code at the same time

Hans Moog [IF] Yesterday at 9:47 PM
we have a few teams in the research department where every team specs one of the building blocks for engineering

Hans Moog [IF] Yesterday at 9:49 PM
[You know perhaps if this drand stuff has been coded already as well?]
yeah, but don't ask me in which branch

Hans Moog [IF] Yesterday at 9:49 PM
and it will be adjusted to the new ontologies concepts

Hans Moog [IF] Yesterday at 9:50 PM
[isn't this approach with DAPPs similar to ICT and IXIs?]
in a way, yes but its a bit more powerful now, as the DAPPS are completely isolated and "rejecting" transactions in one app does not cause all of the transactions that approve it to be rejected
in IXI and ICT a chat message would disappear if it approved a double spend

Hans Moog [IF] Yesterday at 9:56 PM
[if Jaguar would implement a new DAPP with a "new coin" on the tangle with a custom DAPP consensus mechanism?]
yeah you could create new coins on top of the tangle with their custom consensus mechanism
but every one of these new coins would need to be able to also understand IOTA transfers
and the nodes would ultimately have to have mana to be able to take part in the network
so if somebody would for example implement bitcoin on top of the tangle, then the bitcoin miners would need to buy IOTA to be able to send messages in the network
We will most probably see a lot of companies also build proprietary stuff on the tangle
you can even have "private" apps on the tangle
that are encrypted
so you can have stuff that would require a "private inhouse blockchain" to use the global infrastructure of the tangle
its extremely powerful

Hans Moog [IF] Yesterday at 9:59 PM
[making a "new coin" that is private like monero on the basis of the tangle - using a private DAPP? Crazy powerful!]
yeah you can have a "privacy coin" on top of the tangle
that is maybe not as fast and scalable as IOTA itself and might even have fees

Hans Moog [IF] Yesterday at 9:59 PM
but if people are willing to pay for this extra service then you can have private transactions without having to "leave" the ecosystem
and without having to give up scalability for the IOTA base layer

Hans Moog [IF] Yesterday at 10:05 PM
[Crazy powerful!
and everything would need IOTAs...
Qubic and Oracles would also be a base DAPP on this tangle ...]
exactly

Hans Moog [IF] Yesterday at 10:08 PM
[So if I understand correctly, other coins on the Tangle would not have a negative impact on the value of IOTA. Even the other way around.]
IOTA would always be the native coin which would be the fastest and most secure one, yeah
if there would ever be a coin that would be faster and more secure, then it would make sense to implement that in IOTA core

Hans Moog [IF] Yesterday at 10:10 PM
[so no transaction will be rejected anymore from the base tangle layer? In the worst case - if in every DAPP consensus mechanism this transaction would be rejected, because of a double spend or not being useful for this DAPP - it is just seen as a data transaction?!]
yeah more or less
you might still have to reattach a value transfer in rare edge cases (if your node is out of sync or sth) but you will never have to reattach a "non-value-transfer"
or well ...
if you attach sth to a part of the tangle that is really old and everybody has snapshotted that already then you would still have to reattach
but a tx does not "depend" on other apps anymore
so if 1 app says its bad and we want to orphan it, then this happens on another layer

Hans Moog [IF] Yesterday at 10:16 PM
[So weaker nodes may work without ledger, like just a hub?]
nah you always need to support iota value transfers for the rate control

Hans Moog [IF] Yesterday at 10:16 PM
[because of the mana?!]
but a node that is not interested in the decentralized facebook and only wants to process MAM messages can do so
yeah mana is the thing that ties everything together and to know the mana you need to know IOTA value transfers
I mean ... you could rely on a centralized service that provides you the mana balances, so even very hardware-constrained nodes could theoretically take part in the network
but then you might process a few txs that others drop if this centralized service gave you the wrong balances
but the whole point of IOTA is to be shardable and lightweight
so you wont need much for the value transfer layer anyway

Hans Moog [IF] Yesterday at 10:21 PM
[Have you ever thought about how much storage (permanode) you need if IOTA becomes the new TCP/IP ?]
but if you want to only "issue" transactions and receive your mana by people assigning it to you (i.e. a company remotely loading up their sensors with mana), then you can essentially do that.
one of the first applications I will implement is an "archiving APP" that records the activity in the tangle and allows you to "prove" that a certain tx was part of the tangle at some point in the past.
recording 100 years of activity in the tangle (independently of the TPS) will require less than 1 GB
much less actually
the magic of merkle trees
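To make the "magic of merkle trees" claim concrete: if an archiving app keeps only one Merkle root per epoch of tangle activity, it can still prove later that a given tx was included, using a short path of sibling hashes. A minimal, hypothetical sketch with a standard binary Merkle tree (nothing IOTA-specific is assumed):

```python
# Illustrative sketch only (not the IF archiving app): keep one Merkle root per
# epoch of activity and prove later that a given tx was included in that epoch.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Return the sibling hashes needed to recompute the root for leaves[index]."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))   # (hash, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, sibling_is_left in proof:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

txs = [f"tx{i}".encode() for i in range(8)]
root = merkle_root(txs)                    # the archive stores only this 32-byte root
proof = merkle_proof(txs, 5)
assert verify(txs[5], proof, root)
# 100 years of daily 32-byte roots is ~100 * 365 * 32 bytes, i.e. about 1.2 MB.
```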

Hans Moog [IF] Yesterday at 10:23 PM
[of all transactions of every DAPP?]
yes
everything that ever happened in the tangle

Hans Moog [IF] Yesterday at 10:25 PM
[Is the sharding the ontology concept?]
nah its not related
or well everything is somehow related but these APPS are not the "shards"

Hans Moog [IF] Yesterday at 10:33 PM
its an essential part of coordicide - this is the next step before integrating the value transfers
so next major version of goshimmer
2-3 weeks max

Hans Moog [IF] Yesterday at 10:36 PM
[Nice, so value transfers will probably be implemented by the end of March]
that's the plan, yeah
might be like 1-2 weeks late now due to the hack but we will see
we try our best to catch up after this is sorted out

Hans Moog [IF] Today at 2:24 AM
[so next goshimmer version will be binary too!!!]
yes, I think the next (major) goshimmer version will bring a few of the most fundamental changes in how we perceive the protocol as a whole
not just binary <-> ternary but also regarding its "expressiveness"
and it's really funny because it is essentially just a slight shift of perception regarding the already established principles that, interestingly, directly converts into better architecture and simpler code
submitted by btlkhs to Iota [link] [comments]

80.241.217.46 mining 18 blocks today containing mostly 1 -> 64 -> 128 -> 256 -> 512 transactions

Who is 80.241.217.46? This IP is mainly producing blocks whose transaction counts follow a base-2 pattern... even blocks with 1 transaction. Seems like a waste not to include more transactions, and it looks rather suspicious to me. Currently they got 3 of the past 4 blocks, so they seem strong. Server located in Germany.
Is this the reason why I've been waiting almost an hour for confirmation of my 0.0001 fee transaction?
UPDATE: Only 192 transactions have been confirmed in over 1.5 hours because of this pool.
submitted by nostr to Bitcoin [link] [comments]

MXC AMA Recapitulation-Filenet

MXC AMA Recapitulation-Filenet

https://preview.redd.it/6u8t4y55nay41.png?width=1200&format=png&auto=webp&s=6ad7775ac648def445571a6a80e285f1c152a803

Guest: FN Global Community Rep, Andrew Chan

Host: Molly

Introduction:

Andrew:
Nice to meet you guys here; it's my honor to speak here for Filenet. Filenet (FN) is the world's first distributed storage application public chain to have launched its mainnet, and is also the world's first distributed storage application public chain using the DPOS + POC consensus mechanism. Filenet is dedicated to storing and distributing valuable content, rewarding miners who contribute idle bandwidth and storage. The mission of Filenet is to establish a powerful distributed data service system by connecting all idle storage, so any storage device that can connect to the Internet can participate in mining. Generally, Filenet is a super cloud system based on distributed storage and content sharing.

Questions from community:

Molly: Q1. What are the benefits of the FN project for business? What is the main role FN plays in business for validation and security?
Andrew:
As we said just now, Filenet (FN) is the world's first public chain of distributed storage.
Filenet is dedicated to storing and distributing valuable content. The system provides a file promotion system: the more data is retrieved, the more popular it becomes, and the file can be mined. With the DAO mechanism adopted by Filenet, users need not pay for uploading and downloading, which greatly reduces the cost of enterprise servers and bandwidth. Besides that, Filenet uses retrieval-and-distribution mining patterns: miners pledge a certain amount of deposit and provide a certain amount of storage space to participate in mining. The higher the miner's contribution, the higher the probability of producing a block.
Filenet is a leader in the field of distributed storage because of its unique consensus mechanism, business model, economic model, ecological strategy and governance structure, enabling blockchain storage to break out of the shackles and develop into a new format, and providing a key role for the development of other blockchain storage systems.
On the level of consensus, Filenet adopts the DPOS+POC mechanism as the consensus mechanism for distribution in the context of POC storage and mining, avoiding the direct contradiction between equipment efficiency and resource allocation, and greatly improving the mining mode in the blockchain 3.0 era.
The specific operation process of the DPOS algorithm is that stakeholders, namely the Token holders and miners, vote to select Filenet Super Nodes through the election program; the block-producing Super Nodes are then selected pseudorandomly, and the Filenet Super Nodes can choose whether to produce blocks within a specified time.
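A rough sketch of the election and block-production flow described above (stake-weighted votes elect Super Nodes, which then produce blocks in a pseudorandom order). This is not Filenet code; the candidate names, vote format and seed are assumptions made for illustration:

```python
# Hedged sketch of a DPoS-style election: token holders cast stake-weighted
# votes, the top candidates become super nodes, and the producer per round
# is chosen pseudorandomly from a shared seed.
import hashlib
from collections import defaultdict

def elect_super_nodes(votes, n_supers=21):
    """votes: list of (candidate, stake). Returns the top-n candidates by voted stake."""
    tally = defaultdict(int)
    for candidate, stake in votes:
        tally[candidate] += stake
    ranked = sorted(tally.items(), key=lambda kv: (-kv[1], kv[0]))
    return [c for c, _ in ranked[:n_supers]]

def producer_for_round(super_nodes, round_no, seed=b"example-seed"):
    """Pseudorandomly pick one super node per round, deterministically for all nodes."""
    digest = hashlib.sha256(seed + round_no.to_bytes(8, "big")).digest()
    return super_nodes[int.from_bytes(digest, "big") % len(super_nodes)]

votes = [("alice", 50), ("bob", 80), ("carol", 30), ("alice", 40)]
supers = elect_super_nodes(votes, n_supers=2)   # ['alice', 'bob'] (90 vs 80 stake)
print(producer_for_round(supers, round_no=1))
```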
As for smart contracts, Filenet is a common chain for developers that provides special programming primitives for DApp to interact with stored data.
These primitives are contained within the EVM (ethereum intelligent contract virtual machine). Thus, information about the location of data, storage nodes, and miners can also be accessed in smart contracts.
The world's first distributed storage DApp "Ztiao", developed on the Filenet network, is now on the market. All chat data in this application will be stored in a fragmented form at any node in the world, transferred by private key, and the ecology in the application will be circulated and settled with FN as the payment token.
Filenet's smart contracts apply primarily to miners' coin holdings. The smart contracts we have developed may be rapidly realized through the EVM (ethereum smart contract virtual machine) and solsea.
Filenet itself has the potential to implement an intelligent contract mechanism, and we believe that future versions of EVM and WASM will naturally integrate with the capabilities of Filenet and allow other main chains to benefit from Filenet.
In terms of data structure, the Filenet block saves all data trace parameters, and the data uploaded to Filenet is of various types and large quantities. While traditional linked-list structures make blocks redundant and complex to express, Filenet uses a blockchain data structure with Merkle tree and DAG (directed acyclic graph) structures.
The DAG structure is more flexible, more powerful, and faster than the traditional blockchain chain structure, greatly improving the efficiency of block packaging, thereby improving the performance of the Filenet network.
The Merkle tree does not require complete block information, but only the key Merkle node information, to verify the chain, which makes the node lighter so that more energy and resources can be devoted to business processing and providing services for the Filenet network.
At the same time, Merkle tree can also simplify the verification process and further improve network performance.
Molly: Q2.Why does Filenet use the DPOS + POC consensus mechanism? What is the advantage?
Andrew:
As we all know,the core element of blockchain technology is the consensus mechanism. Currently, the most commonly used mechanisms include PoW (Proof-of-Work), PoS (Proof-of-Stake), DPoS (Delegated-Proof-of Stake), and PoC (Proof-of-Contribution). Proof of Work requires miners to solve complex cryptographic math problems and relies on computing power. The advantage of the system is that it is secure and reliable. Disadvantages are its limited capacity and the possibility of “51% attacks”. The Proof of Stake consensus mechanism selects miners according to how many coins he or she has. An immediate advantage is its low resource consumption. However, it opens itself to a range of attacks, such as nothing-at-stake, and also results in centralization since wealth brings more rewards and more decision-making power. In DPoS, the majority of people holding voting rights authorize a small number of nodes to act for them. The system’s merits are its high efficiency, throughput capacity and concurrency. However, the power is then concentrated in the hands of a few nodes, which is not safe. Proof of Contribution allocates mining and validating rights according to the contributions made by the nodes. The advantage of this system is that it does not waste resources thanks to its concept of selection based on resources provided to network. A disadvantage is that the calculation of contributions depends on specific scenarios. In the era of Blockchain 3.0, the consensus mechanisms are to advance under the principles of economy of resources, security focus and scalability, throughput capacity and concurrency.

https://preview.redd.it/krjv4rm9may41.png?width=1066&format=png&auto=webp&s=40875d9f7c76c5259faba1ad09f2396447231fa5
Molly: Q3.What is the main reason behind the formation of FN? Why do you think coins like FN should be in the Marketplace?
Andrew:
As I just said, Filenet is an IPFS incentive layer to reward miners for sharing their storage and networking resources.
Filenet is also a token which powers a distributed certification mechanism. It creates a cloud-level system for content sharing, dedicated to storing and distributing valuable content on IPFS. Filenet solves the problem of data distribution and storage. Why should coins like FN be in the marketplace?
This is easy to understand: why should Bitcoin be in the market? All coins can be in the market for just one reason - the consensus. If there is just one person who thinks FN is valuable, we cannot call that consensus, but if there are 10,000, or 1 billion, who share the consensus, then you can say FN should be in the market. FN happens to have that many users making up the consensus. The number of people in the Filenet community has reached 210,000+, there are up to 21 autonomous communities, and the global super nodes number over 51. Our community is still growing and our consensus is deepening, because we all believe in the future of FN. In the short term, in the mining mode, on the one hand the tokens will be locked, and the decrease in circulation can increase the value of the token; on the other hand, mining can also generate income.
In the long term, Filenet can provide commercial applications with commercial value. Giant Internet companies such as Tencent WeSee and ByteDance, with huge amounts of data, will have requirements for massive storage. Filenet can provide distributed storage services to solve the problem. Companies need to pay and lock FN for the distributed storage services. In this way, the circulation of FN on the market can be controlled, and thereby the value can be appreciated.
Molly: Q4.Can ordinary users also participate in mining? If can participate, how much mining can ordinary user do? And please explain the role of FN Coin easily.
Andrew:
Ordinary people can also participate in mining: as long as you pledge 400 FN and provide 4 TB of storage space, you can join the mining. The specific details depend on the mining pool you join; you can see these pictures for a detailed mining tutorial.

https://preview.redd.it/z9hr1knkmay41.png?width=864&format=png&auto=webp&s=61abf14e3e659430f8387915389e024a1523ad2e

https://preview.redd.it/g8r2hmgmmay41.png?width=864&format=png&auto=webp&s=d82d323958a9ff11761b7c165be0179d7aeb91d9
Molly: Q5.What's the future plan of Filenet?
Andrew:
In the 1.0 stage, Filenet is the first distributed storage application public chain on the mainnet, the first distributed storage application public chain on the exchange, and the first distributed storage application public chain using the DPOS + POC consensus mechanism.
Filenet 2.0 comprehensively solves the key shortcomings of centralized data service centers.
In the Filenet 3.0 stage, the vision is to catch up with and surpass many leading projects and brands in the decentralized distributed storage track, such as Filecoin, IBM, Amazon and Maidsafe, and become the leader of the track.

Free-asking Session

Q1. What is the difficulty bomb solution? Can you tell us more about it?
Andrew:

https://preview.redd.it/zghna15smay41.png?width=905&format=png&auto=webp&s=2f01912ecb429f6e543d0b74322f4c295b901015
The difficulty bomb is a solution to encourage the nodes of the entire network to contribute more storage space and bandwidth; the Filenet Foundation plans to implement the difficulty bomb program in stages from May 1, 2020.
Q2. Checking the website, I found that the transaction fees of FN coins are very low, and the transaction speed is also very high! Can you explain how the FILENET project can achieve such a high transaction rate at the lowest cost?
Andrew:
As I said just now, there are lots of ways to generate revenue. In the short term, Filenet can provide commercial applications with commercial value. Giant Internet companies such as Tencent WeSee and ByteDance, with huge amounts of data, will have requirements for massive storage. Filenet can provide distributed storage services to solve the problem. Companies need to pay and lock FN for the distributed storage services. In this way, the circulation of FN on the market can be controlled, and thereby the value can be appreciated. In the long term, Filenet aims to encourage the nodes of the entire network to contribute more storage space and bandwidth; the Filenet Foundation plans to implement the difficulty bomb program in stages from May 1, 2020.
Q3. According to the packaging node program, they will recruit 105 packaging nodes worldwide. If 105 packaging nodes have been allocated, can I still participate in the activities of this packaging node?
Andrew:
Yes, of course. Our packaging nodes have proceeded to the fifth round; you can join us.
Q4. Why do people have to buy FN or hold it back? What is the FILENET team's plan to keep competing in the market?
Andrew:
You could also refer to the economic model and the appreciation logic I've just shared.
Q5. What are the benefits of $FN long-term holding?
Andrew:
As we just shared: For long term, Filenet can provide commercial applications with commercial value. Giant Internet companies such as Tencent WeSee and Byte Dance with giant data amount will have requirements for massive storage. Filenet can provide distributed storage services to solve the problem. Companies need to pay and lock FN for the distributed storage services. In this way, the circulation of FN on the market can be controlled, and thereby the value can be appreciated.
Follow us:
Telegram: https://t.me/MXCEnglish
MXC trading: https://t.me/MXCtrade
Twitter: https://twitter.com/MXC_Exchange
https://twitter.com/MXC_Fans
Reddit: https://www.reddit.com/MXCexchange/
Facebook: https://www.facebook.com/mxcexchangeofficial/
Discord: https://discord.gg/zu5drS8
submitted by SimonZhu666 to MXCexchange [link] [comments]

Hans: for example implement bitcoin on top of the tangle then the bitcoin


Hans Moog [IF]Yesterday at 23:57
so if somebody would for example implement bitcoin on top of the tangle then the bitcoin miners would need to buy IOTA to be able to send messages in the network
We will most probably see a lot of companies also build proprietary stuff on the tangle
you can even have "private" apps on the tangle
that are encrypted
so you can have stuff that would require a "private inhouse blockchain" to use the global infrastructure of the tangle. its extremely powerful
Jack KerouacYesterday at 23:58
I thought of that already - making a "new coin" that is private like monero on the basis of the tangle - using a private DAPP
Crazy powerful!
Hans Moog [IF]Yesterday at 23:59
yeah you can have a "privacy coin" on top of the tangle
that is maybe not as fast and scalable as IOTA itself and might even have fees

Hans Moog [IF]Yesterday at 23:59
but if people are willing to pay for this extra service then you can have private transactions without having to "leave" the ecosystem

Hans Moog [IF]Today at 00:00
and without having to give up scalability for the IOTA base layer
MaKlaToday at 00:00
(it really sounds great. unexpected simplicity in a versatile yet realistically applicable solution is rare :slight_smile: )
well... "simplicity"....^^"
Jack KerouacToday at 00:02
Crazy powerful!
and everything would need IOTAs...
Qubic and Oracles would also be a base DAPP on this tangle ...
Hans Moog [IF]Today at 00:05
exactly
Jack KerouacToday at 00:06
Looking forward to this!
Hans Moog [IF]Today at 00:07
:trollface:

CorienToday at 00:07
So if I understand correctly, other coins on the Tangle would not have a negative impact on the value of IOTA. Even the other way around.
Dave [EF]Today at 00:08
Sounds really exciting!
Hans Moog [IF]Today at 00:08
IOTA would always be the native coin which would be the fastest and most secure one, yeah
if there would ever be a coin that would be faster and more secure then it would make sense to implement that in IOTA core
Jack KerouacToday at 00:10
in a way, yes but its a bit more powerful now as the DAPPS are completely isolated and "rejecting" transactions in one app does not cause all of the transactions that approve it to also be rejected

u/Hans Moog [IF] so no transaction will be rejected anymore from the base tangle layer? In the worst case - if in every DAPP consensus mechanism this transaction would be rejected, because of a double spend or not being useful for this DAPP - it is just seen as a data transaction?!

Hans Moog [IF]Today at 00:10
yeah more or less
you might still have to reattach a value transfer in rare edge cases (if your node is out of sync or sth) but you will never have to reattach a "non-value-transfer"
or well ...
if you attach sth to a part of the tangle that is really old and everybody has snapshotted that already then you would still have to reattach
but a tx does not "depend" on other apps anymore
so if 1 app says its bad and we want to orphan it then this happens on another layer

ocmoneToday at 00:14
Holy Guacamole!!!

grasToday at 00:15
So weaker nodes may work without ledger, like just a hub?

Jack KerouacToday at 00:16
As all of IOTA would rely on this stuff - DID, MAM, ... - it should be done very fast :wink:

Hans Moog [IF]Today at 00:16
nah you always need to support iota value transfers for the rate control

TobiasToday at 00:16
Is there a ETA for value tx in GoShimmer?

Jack KerouacToday at 00:16
because of the mana?!

Hans Moog [IF]Today at 00:16
but a node that is not interested in the decentralized facebook and only wants to process MAM messages can do so
yeah mana is the thing that ties everything together and to know the mana you need to know IOTA value transfers
I mean ... you could rely on a centralized service that provides you the mana balances, so even very hardware-constrained nodes could theoretically take part in the network. but then you might process a few txs that others drop if this centralized service gave you the wrong balances
but the whole point of IOTA is to be shardable and lightweight so you wont need much for the value transfer layer anyway

Jack KerouacToday at 00:19
oh I forgot about sharding - how will that be done in this kind of environment?

CorienToday at 00:21
u/Hans Moog [IF] Have you ever thought about how much storage (permanode) you need if IOTA becomes the new TCP/IP ?

Hans Moog [IF]Today at 00:21
but if you want to only "issue" transactions and receive your mana by people assigning it to you (i.e. a company remotely loading up their sensors with mana), then you can essentially do that
one of the first applications I will implement is an "archiving APP" that records the activity in the tangle and allows you to "prove" that a certain tx was part of the tangle at some point in the past
recording 100 years of activity in the tangle (independently of the TPS) will require less than 1 GB
much less actually
the magic of merkle trees

Jack KerouacToday at 00:23
of all transactions of every DAPP?

Hans Moog [IF]Today at 00:23
yes
everything that ever happened in the tangle

Jack KerouacToday at 00:24
Is the sharding the ontology concept - or how will that be done in this kind of environment?

Hans Moog [IF]Today at 00:25
nah its not related
Jack KerouacToday at 00:25
ok
Hans Moog [IF]Today at 00:25
or well everything is somehow related but these APPS are not the "shards"
submitted by longfld to Iota [link] [comments]

A radical new way to mine Bitcoin?

I have this crazy idea that I feel could Optimize Solo Mining and make it profitable again, and I'm trying to figure out how to build and program a rig so that I could try this:
It is my understanding that, when a new block is created, the miner first generates a Merkle Root from a Merkle Tree consisting of all the transactions that will be listed in the Block (including the Coinbase Transaction), adds said Merkle Root to the new Block's Header, and then tries to find a number between 0 and 4,294,967,295 (the nonce) that, when combined with the rest of the Block's Header and Hashed will result in a Hash with a certain amount of Zeros. If such a nonce is found, then the block is accepted by the Bitcoin Network, added to the Blockchain, and newly created Bitcoin appears in the Miner's Wallet; but, if none of the nonces work, then a new Merkle Root is generated by changing the Coinbase Transaction (but leaving the other transactions the same as before), and the miner again tests all of the possible nonces to see if any of them will result in the proper hash. And all this goes on and on until the miner finds the right combination of Merkle Root and Nonce that will work with the rest of the Block's Header.
Now, because there are 4,294,967,296 possible nonces, this means that the Miner will Hash a bad Merkle Root up to 4,294,967,296 times; which, if you think about it, seems like such a waste. However, because there are only 612,772 blocks in the blockchain (as of the typing of this post), this also means that it's quite unlikely that any 2 or more Blocks will share the same Nonce (not impossible, but unlikely).
Hence, my idea is to configure a miner so that, instead of checking all 4,294,967,296 possible nonces, an Artificial Intelligence analyses the Blockchain and guesses the best nonce to try, and the Miner then keeps changing the Merkle Root (leaving the nonce the same) until it hopefully finds a Merkle Root that, when Hashed with the previously chosen nonce and the rest of the Block's Header, will produce the appropriate Hash so that the Block will be accepted. This can also be scaled up to work with multiple miners: For example, if you have 8 miners, you can have the AI choose the 8 best nonces to try, assign each of those nonces to a single miner, and each miner keeps trying its assigned nonce with different Merkle Roots until one of the miners finds a combination that works.
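For reference, the search loop described above can be sketched in a few lines: serialize the 80-byte header, double-SHA256 it, and iterate the 32-bit nonce until the hash falls below the target. This is an educational sketch with made-up field values and an artificially easy target, not a real miner:

```python
# Sketch of the header-hashing loop (educational only, not a real miner):
# double-SHA256 an 80-byte header while iterating the 32-bit nonce.
import hashlib, struct

def header(version, prev_hash, merkle_root, timestamp, bits, nonce) -> bytes:
    return (struct.pack("<L", version) + prev_hash[::-1] + merkle_root[::-1]
            + struct.pack("<LLL", timestamp, bits, nonce))       # 80 bytes total

def block_hash(hdr: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(hdr).digest()).digest()[::-1]

def grind(version, prev_hash, merkle_root, timestamp, bits, target: int):
    """Try every nonce for one merkle root; return a winning nonce or None."""
    for nonce in range(2**32):
        hdr = header(version, prev_hash, merkle_root, timestamp, bits, nonce)
        if int.from_bytes(block_hash(hdr), "big") <= target:
            return nonce
    return None  # exhausted: the miner must change the coinbase / merkle root

# Toy run with invented fields and an easy target so it finishes quickly:
nonce = grind(0x20000000, b"\x00" * 32, b"\x11" * 32,
              1600000000, 0x1d00ffff, target=2**244)
print(nonce)
```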
submitted by sparky77734 to Bitcoin [link] [comments]

Continuous Proof of Bitcoin Burn: trust minimized sidechains and bitcoin-pegs w/o oracles/federations today

Original design presented for discussion and criticism
originally posted here: https://bitcointalk.org/index.php?topic=5212814.0
TLDR: Proposing the following that's possible today to use for any existing or new altcoins:
_______________________________________

Disclaimer:

This is not an altcoin thread. I'm not making anything. The design discusses options for existing altcoins and new ways to build on top of Bitcoin, inheriting some of its security guarantees. 2 parts: First, the design allows any altcoins to switch to securing themselves via Bitcoin instead of their own PoW or PoS with significant benefits to both altcoins and Bitcoin (and environment lol). Second, I explain how to create Bitcoin-pegged assets to turn altcoins into a Bitcoin sidechain equivalent. Let me know if this is of interest or if it exists, feel free to use or do anything with this, hopefully I can help.

Issue:

Solution to first few points:

PoW altcoin switching to CPoBB would trade:

PoS altcoin switching to CPoBB would trade:

We already have a permissionless, compact, public, high-cost-backed finality base layer to build on top - Bitcoin! It will handle sorting, data availability, finality, and has something of value to use instead of capital or energy that's outside the sidechain - the Bitcoin coins. The sunk costs of PoW can be simulated by burning Bitcoin, similar to concept known as Proof of Burn where Bitcoin are sent to unspendable address. Unlike ICO's, no contributors can take out the Bitcoins and get rewards for free. Unlike PoS, entry into supply lies outside the alt-chain and thus doesn't depend on permission of alt-chain stake-coin holders. It's hard to find a more bandwidth or state size protective blockchain to use other than Bitcoin as well so altcoins can be Bitcoin-aware at little marginal difficulty - 10 years of history fully validates in under a day.

What are typical issues with Proof of Burn?

Solution:

This should be required for any design for it to stay permissionless. Optional is constant fixed emission rate for altcoins not trying to be money if goal is to maximize accessibility. Since it's not depending on brand new PoW for security, they don't have to depend on massive early rewards giving disproportionate fraction of supply at earliest stage either. If 10 coins are created every block, after n blocks, at rate of 10 coins per block, % emission per block is = (100/n)%, an always decreasing number. Sidechain coin doesn't need to be scarce money, and could maximize distribution of control by encouraging further distribution. If no burners exist in a block, altcoin block reward is simply added to next block reward making emission predictable.
Sidechain block content should be committed in burn transaction via a root of the merkle tree of its transactions. Sidechain state will depend on Bitcoin for finality and block time between commitment broadcasts. However, the throughput can be of any size per block, unlimited number of such sidechains can exist with their own rules and validation costs are handled only by nodes that choose to be aware of a specific sidechain by running its consensus compatible software.
An important design decision is how the protocol can determine the "true" side-block and how to distribute incentives. The simplest solution is to always:
  1. Agree on the valid sidechain block matching the merkle root commitment for the largest amount of Bitcoin burnt, earliest inclusion in the bitcoin block as the tie breaker
  2. Distribute block reward during the next side-block proportional to current amounts burnt
  3. Bitcoin fee market serves as deterrent for spam submissions of blocks to validate
e.g.
sidechain block reward is set always at 10 altcoins per block
Bitcoin block contains the following content embedded and part of its transactions:
tx11: burns 0.01 BTC & OP_RETURN
tx56: burns 0.05 BTC & OP_RETURN ... <...root of valid sidechain block version 1> ...
tx78: burns 1 BTC & OP_RETURN ... <...root of valid sidechain block version 2> ...
tx124: burns 0.2 BTC & OP_RETURN ... <...root of INVALID sidechain block version 3> ...
Validity is deterministic by rules in the client-side node software (e.g. signature validation) so all nodes can independently see version 3 is invalid and thus the burner of tx124 gets no reward allocated. The largest valid burn is from tx78 so version 2 is used for the sidechain's blockchain. The total valid burn is 1.06 BTC, so the 10 altcoins to be distributed in the next block are 0.094, 0.472 and 9.434 to the owners of the first 3 transactions, respectively.
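A small sketch of the two rules above (largest valid burn wins, rewards split pro rata over all valid burns), reusing the numbers from the example. This is only an illustration of the described scheme, not an implementation:

```python
# Sketch of the block-selection and reward-split rules just described.

def pick_sidechain_block(burns):
    """burns: list of (txid, btc_burned, side_block_root, is_valid).
    The valid commitment with the largest burn wins; since max() keeps the
    first of equal maxima and burns are in block order, earlier txs break ties."""
    valid = [b for b in burns if b[3]]
    return max(valid, key=lambda b: b[1])[2] if valid else None

def split_reward(burns, block_reward=10.0):
    """Distribute the altcoin block reward proportionally to all valid burns."""
    valid = [(txid, amt) for txid, amt, _, ok in burns if ok]
    total = sum(amt for _, amt in valid)
    return {txid: block_reward * amt / total for txid, amt in valid}

burns = [
    ("tx11",  0.01, "root_v1", True),
    ("tx56",  0.05, "root_v1", True),
    ("tx78",  1.00, "root_v2", True),
    ("tx124", 0.20, "root_v3", False),   # invalid side-block, earns nothing
]
print(pick_sidechain_block(burns))   # root_v2
print(split_reward(burns))           # {'tx11': ~0.094, 'tx56': ~0.472, 'tx78': ~9.434}
```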
Censorship attack would require continuous costs in Bitcoin on the attacker and can be waited out. Censorship would also be limited to on-sidechain specific transactions as emission distribution to others CPoB contributors wouldn't be affected as blocks without matching coin distributions on sidechain wouldn't be valid. Additionally, sidechains can allow a limited number of sidechain transactions to happen via embedding transaction data inside Bitcoin transactions (e.g. OP_RETURN) as a way to use Bitcoin for data availability layer in case sidechain transactions are being censored on their network. Since all sidechain nodes are Bitcoin aware, it would be trivial to include.
Sidechain blocks cannot be reverted without reverting Bitcoin blocks or hard forking the protocol used to derive sidechain state. If protocol is forked, the value of sidechain coins on each fork of sidechain state becomes important but Proof of Burn natively guarantees trust minimized and permissionless distribution of the coins, something inferior methods like obscure early distributions, trusted pre-mines, and trusted ICO's cannot do.
More bitcoins being burnt is parallel to more hash rate entering PoW, with each miner or burner getting smaller amount of altcoins on average making it unprofitable to burn or mine and forcing some to exit. At equilibrium costs of equipment and electricity approaches value gained from selling coins just as at equilibrium costs of burnt coins approaches value of altcoins rewarded. In both cases it incentivizes further distribution to markets to cover the costs making burners and miners dependent on users via markets. In both cases it's also possible to mine without permission and mine at a loss temporarily to gain some altcoins without permission if you want to.
Altcoins benefit by inheriting many of bitcoin security guarantees, bitcoin parties have to do nothing if they don't want to, but will see their coins grow more scarce through burning. The contributions to the fee market will contribute to higher Bitcoin miner rewards even after block reward is gone.

Sidechain Bitcoin-pegs:

What is the ideal goal of the sidechains? Ideally to have a token that has the bi-directionally pegged value to Bitcoin and tradeable ~1:1 for Bitcoin that gives Bitcoin users an option of a different rule set without compromising the base chain nor forcing base chain participants to do anything different.
Issues with value pegs:
Let's get rid of the idea of needing Bitcoin collateral to back pegged coins 1:1 as that's never secure, independent, or scalable at same security level. As drive-chain design suggested the peg doesn't have to be fast, can take months, just needs to exist so other methods can be used to speed it up like atomic swaps by volunteers taking on the risk for a fee.
In continuous proof of burn we have another source of Bitcoins, the burnt Bitcoins. Sidechain protocols can require some minor percentage (e.g. 20%) of the burner tx value to go, via another output, to reimburse those withdrawing side-Bitcoins to the Bitcoin chain until their requests are filled. If the withdrawal queue is empty that % is burnt instead. Selection of who receives reimbursement is deterministic per burner. The percentage must be kept small as it's assumed it's possible to get up to that much discount on altcoin emissions.
Let's use a really simple example case where each burner pays 20% of the burner tx amount to cover withdrawals in the exact order requested, with no attempts at other matching, capped at half the amount requested per payout. Example:
withdrawal queue:
request1: 0.2 sBTC
request2: 1.0 sBTC
request3: 0.5 sBTC

same block burners:
tx burns 0.8 BTC, 0.1 BTC is sent to request1, 0.1 BTC is sent to request2
tx burns 0.4 BTC, 0.1 BTC is sent to request1
tx burns 0.08 BTC, 0.02 BTC is sent to request1
tx burns 1.2 BTC, 0.1 BTC is sent to request1, 0.2 BTC is sent to request2

withdrawal queue:
request1: filled with 0.32 BTC instead of 0.2 sBTC, removed from queue
request2: partially-filled with 0.3 BTC out of 1.0 sBTC, 0.7 BTC remaining for next queue
request3: still 0.5 sBTC
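One possible reading of this matching rule is sketched below (the example's own payout numbers are only approximate): every burner routes a fixed share of its burn to the queue, each open request is paid at most half of its originally requested amount per payout, and requests are settled only at the end of the block, so overfilling can happen. The function name and data layout are assumptions for illustration:

```python
# Hedged sketch of one interpretation of the withdrawal-matching rule above.

def match_withdrawals(queue, burns, share=0.20):
    """queue: list of (request_id, requested_sBTC) in arrival order.
    burns: list of burned BTC amounts in the same Bitcoin block.
    Returns (payouts, still_open) where payouts maps request_id -> BTC received."""
    payouts = {req_id: 0.0 for req_id, _ in queue}
    for burn in burns:
        budget = burn * share                  # the 20% slice of this burn
        for req_id, requested in queue:
            if budget <= 0:
                break
            pay = min(budget, requested / 2)   # per-payout cap: half the request
            payouts[req_id] += pay
            budget -= pay
        # any unused budget is simply burnt like the rest of the tx
    still_open = [(rid, req - payouts[rid]) for rid, req in queue if payouts[rid] < req]
    return payouts, still_open

queue = [("request1", 0.2), ("request2", 1.0), ("request3", 0.5)]
payouts, still_open = match_withdrawals(queue, [0.8, 0.4, 0.08, 1.2])
print(payouts)      # request1 ends up overfilled, request2 only partially filled
print(still_open)
```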
Withdrawal requests can either take long time to get to filled due to cap per burn or get overfilled as seen in "request1" example, hard to predict. Overfilling is not a big deal since we're not dealing with a finite source. The risk a user that chooses to use the sidechain pegged coin takes on is based on the rate at which they can expect to get paid based on value of altcoin emission that generally matches Bitcoin burn rate. If sidechain loses interest and nobody is burning enough bitcoin, the funds might be lost so the scale of risk has to be measured. If Bitcoins burnt per day is 0.5 BTC total and you hope to deposit or withdraw 5000 BTC, it might take a long time or never happen to withdraw it. But for amounts comparable or under 0.5 BTC/day average burnt with 5 side-BTC on sidechain outstanding total the risks are more reasonable.
Deposits onto the sidechain are far easier - by burning Bitcoin in a separate known unspendable deposit address for that sidechain and sidechain protocol issuing matching amount of side-Bitcoin. Withdrawn bitcoins are treated as burnt bitcoins for sake of dividing block rewards as long as they followed the deterministic rules for their burn to count as valid and percentage used for withdrawals is kept small to avoid approaching free altcoin emissions by paying for your own withdrawals and ensuring significant unforgeable losses.
Ideally more matching is used so large withdrawals don't completely block everyone else and small withdrawals don't completely block large withdrawals. Better methods should deterministically randomize assigned withdrawals via previous Bitcoin block hash, prioritized by request time (earliest arrivals should get paid earlier), and amount of peg outstanding vs burn amount (smaller burns should prioritize smaller outstanding balances). Fee market on bitcoin discourages doing withdrawals of too small amounts and encourages batching by burners.
The second method is less reliable but already known: it uses over-collateralized loans that create an oracle-pegged token that can be pegged to the bitcoin value. It was already used by its inventors in 2014 on bitshares (e.g. bitCNY, bitUSD, bitBTC) and similarly by MakerDAO in 2018. The upside is that a trust minimized distribution of CPoB coins can be used to distribute trust over the selection of price feed oracles far better than the pre-mined single-trusted-party based distributions used in MakerDAO (100% pre-mined) and to a bit lesser degree on bitshares (~50% mined, ~50% premined before dpos). The downside is 2-fold: first, the supply of the BTC-pegged coin would depend on people opening an equivalent of a leveraged long position on the altcoin/BTC pair, which is hard to convince people to do as seen by the very poor liquidity of bitBTC in the past. The second downside is that oracles can still collude to mess with price feeds, and while their influence might be limited via capped price changes per unit time and might compromise their continuous revenue stream from fees, the leverage benefits might outweigh the losses. The use of continuous proof of burn to peg withdrawals is the superior method as it is simply a minor byproduct of "mining" for altcoins and doesn't depend on traders' positions. At the moment I'm not aware of any market-pegged coins on trust minimized platforms or implemented in a trust minimized way (e.g. premined mkr on premined eth = 2 sets of trusted third parties, each of which with full control over the design).
_______________________________________

Brief issues with current altchains options:

  1. PoW: New PoW altcoins suffer high risk of attacks. Additional PoW chains require high energy and capital costs to create permissionless entry and trust minimized miners that are forever dependent on markets to hold them accountable. Using same algorithm or equipment as another chain or merge-mining puts you at a disadvantage by allowing some miners to attack and still cover sunk costs on another chain. Using a different algorithm/equipment requires building up the value of sunk costs to protect against attacks with significant energy and capital costs. Drive-chains also require miners to allow it by having to be sidechain aware and thus incur additional costs on them and validating nodes if the sidechain rewards are of value and importance.
  2. PoS: PoS is permissioned (requires permission from internal party to use network or contribute to consensus on permitted scale), allows perpetual control without accountability to others, and incentivizes centralization of control over time. Without continuous source of sunk costs there's no reason to give up control. By having consensus entirely dependent on internal state network, unlike PoW but like private databases, cannot guarantee independent permissionless entry and thus cannot claim trust minimization. Has no built in distribution methods so depends on safe start (snapshot of trust minimized distributions or PoW period) followed by losing that on switch to PoS or starting off dependent on a single trusted party such as case in all significant pre-mines and ICO's.
  3. Proof of Capacity: PoC is just shifting costs further to capital over PoW to achieve same guarantees.
  4. PoW/PoS: Still require additional PoW chain creation. Strong dependence on PoS can render PoW irrelevant and thus inherit the worst properties of both protocols.
  5. Tokens inherit all trust dependencies of parent blockchain and thus depend on the above.
  6. Embedded consensus (counterparty, veriblock?, omni): Lacks a mechanism for distribution and requires all tx data to be inside scarce Bitcoin block space, so high cost to users instead of compensated miners. If you want to build a very expressive scripting language, it might be very hard & expensive to fit into a Bitcoin tx vs CPoBB's external content of unlimited size in a committed hash. Like CPoBB it is Bitcoin-aware so it can respond to Bitcoin being sent, but without a source of Bitcoins like burning there is no way to do any trust minimized Bitcoin-pegs it can fully control.

Few extra notes from my talks with people:

Main questions to you:

open to working on this further with others
submitted by awasi868 to CryptoTechnology [link] [comments]

BIP proposal: Inhibiting a covert attack on the Bitcoin POW function | Gregory Maxwell | Apr 05 2017

Gregory Maxwell on Apr 05 2017:
A month ago I was explaining the attack on Bitcoin's SHA2 hashcash which
is exploited by ASICBOOST and the various steps which could be used to
block it in the network if it became a problem.
While most discussion of ASICBOOST has focused on the overt method
of implementing it, there also exists a covert method for using it.
As I explained one of the approaches to inhibit covert ASICBOOST I
realized that my words were pretty much also describing the SegWit
commitment structure.
The authors of the SegWit proposal made a specific effort to not be
incompatible with any mining system and, in particular, changed the
design at one point to accommodate mining chips with forced payout
addresses.
Had there been awareness of exploitation of this attack an effort
would have been made to avoid incompatibility-- simply to separate
concerns. But the best methods of implementing the covert attack
are significantly incompatible with virtually any method of
extending Bitcoin's transaction capabilities; with the notable
exception of extension blocks (which have their own problems).
An incompatibility would go a long way to explain some of the
more inexplicable behavior from some parties in the mining
ecosystem so I began looking for supporting evidence.
Reverse engineering of a particular mining chip has demonstrated
conclusively that ASICBOOST has been implemented
in hardware.
On that basis, I offer the following BIP draft for discussion.
This proposal does not prevent the attack in general, but only
inhibits covert forms of it which are incompatible with
improvements to the Bitcoin protocol.
I hope that even those of us who would strongly prefer that
ASICBOOST be blocked completely can come together to support
a protective measure that separates concerns by inhibiting
the covert use of it that potentially blocks protocol improvements.
The specific activation height is something I currently don't have
a strong opinion, so I've left it unspecified for the moment.
BIP: TBD
Layer: Consensus
Title: Inhibiting a covert attack on the Bitcoin POW function
Author: Greg Maxwell
Status: Draft
Type: Standards Track
Created: 2016-04-05
License: PD
==Abstract==
This proposal inhibits the covert exploitation of a known
vulnerability in Bitcoin Proof of Work function.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.
==Motivation==
Due to a design oversight the Bitcoin proof of work function has a potential
attack which can allow an attacking miner to save up to 30% of their energy
costs (though closer to 20% is more likely due to implementation overheads).
Timo Hanke and Sergio Demian Lerner claim to hold a patent on this attack,
which they have so far not licensed for free and open use by the public.
They have been marketing their patent licenses under the trade-name
ASICBOOST. The document takes no position on the validity or enforceability
of the patent.
There are two major ways of exploiting the underlying vulnerability: One
obvious way which is highly detectable and is not in use on the network
today and a covert way which has significant interaction and potential
interference with the Bitcoin protocol. The covert mechanism is not
easily detected except through its interference with the protocol.
In particular, the protocol interactions of the covert method can block the
implementation of virtuous improvements such as segregated witness.
Exploitation of this vulnerability could result in payoff of as much as
$100 million USD per year at the time this was written (Assuming a
50% hash-power miner was gaining a 30% power advantage and that mining
was otherwise at profit equilibrium). This could have a phenomenal
centralizing effect by pushing mining out of profitability for all
other participants, and the income from secretly using this
optimization could be abused to significantly distort the Bitcoin
ecosystem in order to preserve the advantage.
Reverse engineering of a mining ASIC from a major manufacturer has
revealed that it contains an undocumented, undisclosed ability
to make use of this attack. (The parties claiming to hold a
patent on this technique were completely unaware of this use.)
On the above basis the potential for covert exploitation of this
vulnerability and the resulting inequality in the mining process
and interference with useful improvements presents a clear and
present danger to the Bitcoin system which requires a response.
==Background==
The general idea of this attack is that SHA2-256 is a Merkle-Damgård hash
function which consumes 64 bytes of data at a time.
The Bitcoin mining process repeatedly hashes an 80-byte 'block header' while
incrementing a 32-bit nonce which is at the end of this header data. This
means that the processing of the header involves two runs of the compression
function -- one that consumes the first 64 bytes of the header and a
second which processes the remaining 16 bytes and padding.
The initial 'message expansion' operations in each step of the SHA2-256
function operate exclusively on that step's 64-bytes of input with no
influence from prior data that entered the hash.
Because of this if a miner is able to prepare a block header with
multiple distinct first 64-byte chunks but identical 16-byte
second chunks they can reuse the computation of the initial
expansion for multiple trials. This reduces power consumption.
There are two broad ways of making use of this attack. The obvious
way is to try candidates with different version numbers. Beyond
upsetting the soft-fork detection logic in Bitcoin nodes this has
little negative effect but it is highly conspicuous and easily
blocked.
The other method is based on the fact that the merkle root
committing to the transactions is contained in the first 64-bytes
except for the last 4 bytes of it. If the miner finds multiple
candidate root values which have the same final 32 bits then they
can use the attack.
To find multiple roots with the same trailing 32-bits the miner can
use efficient collision finding mechanism which will find a match
with as little as 2^16 candidate roots expected, 2^24 operations to
find a 4-way hit, though low memory approaches require more
computation.
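A toy illustration of that collision search: generate candidate roots (here simply by hashing a counter), bucket them by their final 32 bits, and stop when enough candidates share a tail. Real exploitation uses the much cheaper left/right subtree recombination described just below; this sketch only shows the birthday-collision step:

```python
# Toy sketch of finding candidate merkle roots that share their final 32 bits.
import hashlib
from collections import defaultdict

def find_trailing32_collisions(n_candidates, want=4):
    buckets = defaultdict(list)
    for extra_nonce in range(n_candidates):
        root = hashlib.sha256(b"candidate-root-%d" % extra_nonce).digest()
        tail = root[-4:]                 # the 4 root bytes that fall in the second chunk
        buckets[tail].append(extra_nonce)
        if len(buckets[tail]) == want:
            return tail, buckets[tail]
    return None

# A 2-way hit is expected after ~2^16 candidates and a 4-way hit after ~2^24;
# the 4-way search is slow in pure Python, so the demo below only asks for 2.
print(find_trailing32_collisions(1 << 18, want=2))
```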
An obvious way to generate different candidates is to grind the
coinbase extra-nonce but for non-empty blocks each attempt will
require 13 or so additional sha2 runs which is very inefficient.
This inefficiency can be avoided by computing a sqrt number of
candidates of the left side of the hash tree (e.g. using extra
nonce grinding) then an additional sqrt number of candidates of
the right side of the tree using transaction permutation or
substitution of a small number of transactions. All combinations
of the left and right side are then combined with only a single
hashing operation virtually eliminating all tree related
overhead.
With this final optimization finding a 4-way collision with a
moderate amount of memory requires ~2^24 hashing operations
instead of the >2^28 operations that would be required for
extra-nonce grinding which would substantially erode the
benefit of the attack.
It is this final optimization which this proposal blocks.
==New consensus rule==
Beginning block X and until block Y the coinbase transaction of
each block MUST either contain a BIP-141 segwit commitment or a
correct WTXID commitment with ID 0xaa21a9ef.
(See BIP-141 "Commitment structure" for details)
Existing segwit using miners are automatically compatible with
this proposal. Non-segwit miners can become compatible by simply
including an additional output matching a default commitment
value returned as part of getblocktemplate.
Miners SHOULD NOT automatically discontinue the commitment
at the expiration height.
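A hedged sketch of what the rule check might look like for a node: scan the coinbase outputs for either a BIP-141 witness commitment (header bytes 0xaa21a9ed inside an OP_RETURN output) or the alternative WTXID commitment ID 0xaa21a9ef named above. Whether the alternative commitment uses the same OP_RETURN framing is an assumption of this sketch:

```python
# Sketch of the consensus-rule check (simplified; real validation also verifies
# the 32-byte commitment hash itself, which is omitted here).

SEGWIT_COMMITMENT = bytes.fromhex("6a24aa21a9ed")   # OP_RETURN PUSH36 + BIP-141 header
WTXID_COMMITMENT  = bytes.fromhex("6a24aa21a9ef")   # alternative ID from this BIP draft (assumed framing)

def coinbase_satisfies_rule(coinbase_output_scripts) -> bool:
    """True if any coinbase output carries one of the required commitments."""
    for script in coinbase_output_scripts:
        if len(script) >= 38 and (script.startswith(SEGWIT_COMMITMENT)
                                  or script.startswith(WTXID_COMMITMENT)):
            return True
    return False

# Example: an ordinary payout output plus a segwit commitment output (dummy hash).
outputs = [bytes.fromhex("76a914") + b"\x00" * 20 + bytes.fromhex("88ac"),
           SEGWIT_COMMITMENT + b"\x00" * 32]
print(coinbase_satisfies_rule(outputs))   # True
```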
==Discussion==
The commitment in the left side of the tree to all transactions
in the right side completely prevents the final sqrt speedup.
A stronger inhibition of the covert attack could take the form of
requiring the least significant bits of the block timestamp
to be equal to a hash of the first 64-bytes of the header. This
would increase the collision space from 32 to 40 or more bits.
The root value could be required to meet a specific hash prefix
requirement in order to increase the computational work required
to try candidate roots. These changes would be more disruptive and
there is no reason to believe that it is currently necessary.
The proposed rule automatically sunsets. If it is no longer needed
due to the introduction of stronger rules or the acceptance of the
version-grinding form then there would be no reason to continue
with this requirement. If it is still useful at the expiration
time the rule can simply be extended with a new softfork that
sets longer date ranges.
This sun-setting avoids the accumulation of technical debt due
to retaining enforcement of this rule when it is no longer needed
without requiring a hard fork to remove it.
== Overt attack ==
The non-covert form can be trivially blocked by requiring that
the header version match the coinbase transaction version.
This proposal does not include this block because this method
may become generally available without restriction in the future,
does not generally interfere with improvements in the protocol,
and because it is so easily detected that it could be blocked if
it becomes an issue in the future.
==Ba...[message truncated here by reddit bot]...
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/013996.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

Full malleability fix with BIP-134 hardfork

Is there any strong opposition to trying to activate BIP-134 on Bitcoin Cash, in a controlled manner (requiring overwhelming consensus, akin to BIP-9)?
BIP-134 (Flexible Transactions) is a new choice for transaction format, fixing all known kinds of transaction malleability and allowing for Lightning Network on Bitcoin Cash. It differs from Segwit because it simply creates an alternative transaction format, requiring a hardfork and an update of every software client, instead of using convoluted trickery to fool old clients into accepting segwit transactions as valid.
The way I see it, many may believe it is not strictly needed (soft opposition), because we already have a path for on-chain scalability, but I can't see why someone would think it is not desirable (hard opposition), because we already (hopefully) did overcome the psychological barrier against hardforks.
Such Pareto optimal improvement should be accepted by node runners without much resistance.
submitted by lcvella to btc [link] [comments]

Voluntary Proof of NOT using AsicBoost

As nobody is currently using AsicBoost, we should be able to clear the fog of drama a bit if all miners provided voluntary proof that they really don't use covert AsicBoost.
It should not be too difficult to implement and I am quite certain a patch could be provided quickly by u/nullc or others. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/013996.html
submitted by bithobbes to Bitcoin [link] [comments]

BIP-23 criticism: Why have we made miners generate new a new merkleroot instead of increase the nonce space to 64bit?

Hi everyone,
I'm new to the low-level protocol stuff. Here is my understanding of the scenario, which might be incorrect and hopefully will be corrected:
Miners can mutate the nonce (32 bit) + time (mutates once a second). This allows for 2^32 (~4.3 billion) hashes per second. That's not enough anymore for our ASICs as they perform in the TH/s now rather than MH/s. So we allowed miners to mutate the coinbase transaction, but this requires us to generate a new merkletree. This means that a miner needs to generate a new merkletree every 2^32 hashes; at 1 TH/s the miner must generate a new merkle tree roughly 230 times per second.
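One detail worth adding here: with the merkle branch next to the coinbase cached, each coinbase/extra-nonce change only costs about log2(n_tx) + 1 extra double-SHA256 calls (consistent with the "13 or so additional sha2 runs" figure quoted in the ASICBoost discussion above), which is how pools hand out work in practice. A hypothetical sketch:

```python
# Sketch of why extra-nonce grinding is cheap relative to a full tree rebuild:
# cache the sibling hashes along the coinbase's path and recompute only those.
import hashlib

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_branch_for_coinbase(tx_hashes):
    """Sibling hashes along the path from the coinbase (leaf 0) to the root."""
    level, branch = list(tx_hashes), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[1])                      # sibling of the left-most node
        level = [dsha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return branch

def root_from_coinbase(coinbase_hash, branch):
    acc = coinbase_hash
    for sibling in branch:
        acc = dsha(acc + sibling)
    return acc

tx_hashes = [dsha(b"coinbase extranonce 0")] + [dsha(b"tx%d" % i) for i in range(1, 4096)]
branch = merkle_branch_for_coinbase(tx_hashes)       # computed once per template
new_root = root_from_coinbase(dsha(b"coinbase extranonce 1"), branch)
print(len(branch))                                   # 12 sibling hashes for 4096 txs
```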
TLDR: Is Bitcoin PoW actually sha256 + merkletree generation? And not pure sha256?
If I'm correct in asserting that Bitcoin PoW is sha256+merkletree, does this slow the commoditization of ASICs and therefore slow decentralization, as ASICs now must be more complex than if they did SHA256+nonce mutations?
Hopefully this was coherent, I'm new to protocol stuff, thanks for reading.
submitted by Ascendzor to Bitcoin [link] [comments]
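To put rough numbers on the question above, here is a back-of-the-envelope sketch. The 1 TH/s hash rate comes from the question itself; the 4,000-transaction block is an assumption chosen for illustration.

```python
import math

# Back-of-the-envelope numbers for the question above.
NONCE_SPACE = 2 ** 32                  # ~4.29 billion nonce values per merkle root
hashrate = 1e12                        # 1 TH/s, the figure used in the question

ms_per_nonce_range = NONCE_SPACE / hashrate * 1000
roots_per_second = hashrate / NONCE_SPACE
print(f"nonce range exhausted every {ms_per_nonce_range:.1f} ms")      # ~4.3 ms
print(f"new merkle roots needed: {roots_per_second:.0f} per second")   # ~233

# Bumping extraNonce in the coinbase does not rebuild the whole tree: only the
# coinbase txid and the hashes on its branch up to the root change, i.e. about
# log2(number of transactions) hashes per new root.
print(math.ceil(math.log2(4000)))      # -> 12 hashes recomputed per new root
```

So the extra work per hash attempt is tiny: a handful of double-SHA-256 operations every few billion header hashes, which is why the proof-of-work is still considered "pure SHA-256" in practice.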

Merkle Root

How does the calculation of the Merkle Root reduce block size and, ultimately, the size of the blockchain (or does it)?
Don't blocks still contain all transactions, in detail?
submitted by inexile14 to Bitcoin [link] [comments]
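To the question above: the merkle root does not shrink the block itself. Full blocks still carry every transaction; the root is just a 32-byte commitment inside the 80-byte header, which is what makes lightweight (SPV) verification and pruning of old, fully spent transactions possible. For concreteness, here is a minimal sketch of how Bitcoin computes that root from txids; the txids used are placeholders, not real transactions.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids_hex):
    """Merkle root from txids given in the usual display (big-endian) hex order."""
    # Bitcoin hashes txids in internal (little-endian) byte order, so reverse them.
    level = [bytes.fromhex(txid)[::-1] for txid in txids_hex]
    while len(level) > 1:
        if len(level) % 2 == 1:        # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0][::-1].hex()        # back to display order

# Placeholder txids, not real transactions:
print(merkle_root(["11" * 32, "22" * 32, "33" * 32]))
```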

AMA: Ask Mike Anything

Hello again. It's been a while.
People have been emailing me about once a week or so for the last year to ask if I'm coming back to Bitcoin now that Bitcoin Cash exists. And a couple of weeks ago I was summoned on a thread called "Ask Mike Hearn Anything", but that was nothing to do with me and I was on holiday in Japan at the time. So I figured I should just answer all the different questions in one place rather than keep doing it individually over email.
Firstly, thanks for the kind words on this sub. I don't take part anymore but I still visit occasionally to see what people are talking about, and the people posting nice messages are a pleasant change from three years ago.
Secondly, who am I? Some new Bitcoiners might not know.
I am Satoshi.
Just kidding. I'm not Satoshi. I was a Bitcoin developer for about five years, from 2010-2015. I was also one of the first Bitcoin users, sending my first coins in April 2009 (to SN), about 4 months after the genesis block. I worked on various things:
You can see a trend here - I was always interested in developing peer to peer decentralised applications that used Bitcoin.
But what I'm best known for is my role in the block size debate/civil war, documented by Nathaniel Popper in the New York Times. I spent most of 2015 writing extensively about why various proposals from the small-block/Blockstream faction weren't going to work (e.g. on replace-by-fee, the lightning network, what would occur if no hard fork happened, soft forks, scaling conferences, etc.). After Blockstream successfully took over Bitcoin Core and expelled anyone who opposed them, Gavin and I forked Bitcoin Core to create Bitcoin XT, the first alternative node implementation to gain any serious usage. The creation of XT led to the imposition of censorship across all Bitcoin discussion forums and news outlets and resulted in the creation of this sub, and Core supporters paid a botnet operator to force XT nodes offline with DDoS attacks. They also convinced the miners and wider community to do nothing for years, resulting in the eventual overload of the main network.
I left the project at the start of 2016, documenting my reasons and what I expected to happen in my final essay on Bitcoin in which I said I considered it a failed experiment. Along with the article in the New York Times this pierced the censorship, made the wider world aware of what was going on, and thus my last gift to the community was a 20% drop in price (it soon recovered).

The last two years

Left Bitcoin ... but not decentralisation. After all that went down I started a new project called Corda. You can think of Corda as Bitcoin++, but modified for industrial use cases where a decentralised p2p database is more immediately useful than a new coin.
Corda incorporates many ideas I had back when I was working on Bitcoin but couldn't implement due to lack of time or resources, because of ideological wars, or because they were too technically radical for the community. So even though it doesn't provide a new cryptocurrency out of the box, it might be interesting for the Bitcoin Cash community to study anyway. By resigning myself to Bitcoin's fate and joining R3 I could go back to the drawing board and design with a lot more freedom, creating something inspired by Bitcoin's protocol but incorporating all the experience we gained writing Bitcoin apps over the years.
The most common question I'm asked is whether I'd come back and work on Bitcoin again. The obvious followup question is - come back and work on what? If you want to see some of the ideas I'd have been exploring if things had worked out differently, go read the Corda tech white paper. Here's a few of the things it might be worth asking about:
I don't plan on returning to Bitcoin but if you'd like to know what sort of things I'd have been researching or doing, ask about these things.
edit: Richard pointed out some essays he wrote that might be useful, Enterprise blockchains for cryptocurrency experts and New to Corda? Start here!
submitted by mike_hearn to btc [link] [comments]

Bitcoin Mining Explained in Detail: Nonce, Merkle Root, SPV,... Part 15 Cryptography Crashcourse
Bitcoin Mining im Detail erklärt: Nonce, Merkle Root, SPV ...
Merkle Tree | Merkle Root | Blockchain - YouTube
Hash pointers and Merkle tree
Blockchain/Bitcoin for beginners 7: Blockchain header: Merkle roots and SPV transaction verification

Merkle trees are used by both Bitcoin and Ethereum. How do Merkle trees work? A Merkle tree summarizes all the transactions in a block by producing a digital fingerprint of the entire set of transactions, thereby enabling a user to verify whether or not a transaction is included in a block. Merkle trees are created by repeatedly hashing pairs of nodes until there is only one hash left (this ...
In Bitcoin the miners actually use such a Merkle tree. From all the transactions that go into a block they form the root, which they then use as the basis for finding a valid block header. This is practical because it means old transactions can later be thrown away again, as Satoshi explains in the whitepaper: as soon as the latest transaction of a coin is buried under ...
A Merkle tree, also called a hash tree, is an essential building block of the blockchain. Simply put, it is a structure that allows verification of content contained within a large set of data, and it is one of the most important aspects of this technology because it secures the verification process.
It is at this point that the miner whose header loses has wasted their proof-of-work and missed out on the bitcoin mining reward. Short chain forks happen quite a bit in Bitcoin ...
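As a companion to the description above of how a user verifies that a transaction is included in a block, here is a minimal sketch of checking a merkle (SPV) inclusion proof. The hex byte-order conventions for the inputs are an assumption made for illustration; real proofs carry raw bytes.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_inclusion(txid_hex, sibling_hashes_hex, index, merkle_root_hex):
    """Check a merkle (SPV) inclusion proof.

    `sibling_hashes_hex` lists the sibling hash at each level from leaf to root;
    `index` is the transaction's position in the block. Hex values are assumed
    to be in display order, purely for illustration.
    """
    node = bytes.fromhex(txid_hex)[::-1]
    for sibling_hex in sibling_hashes_hex:
        sibling = bytes.fromhex(sibling_hex)[::-1]
        if index % 2 == 0:              # our node is the left child
            node = dsha256(node + sibling)
        else:                           # our node is the right child
            node = dsha256(sibling + node)
        index //= 2
    return node[::-1].hex() == merkle_root_hex
```

A light client that keeps only block headers can run this check against the 32-byte merkle root in the header; that is the "SPV transaction verification" the video titles above refer to.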


Bitcoin Mining Explained in Detail: Nonce, Merkle Root, SPV,... Part 15 Cryptography Crashcourse

Bitcoin Mining Explained in Detail: Nonce, Merkle Root, SPV,... Part 15 Cryptography Crashcourse (Dr. Julian Hosp - Bitcoin, Aktien, Gold und Co.)
In this video I try to explain Merkle Tree.
Bitcoin 101 - Merkle Roots and Merkle Trees - Bitcoin Coding and Software - The Block Header (CRI, 24:18): Most people on earth have never even heard of Merkle roots. But bitcoin programmers deal with them every day. This is old school technology in terms of softw...
In this video we expand on the previous one where we computed a given list of transactions' Merkle root using Merkle trees. We will now compute a Merkle proof for a given transaction, allowing ...
