Great Consensus Cleanup Revival

As mentioned [1], I think the worst case proof size now is actually ~1 MB. For example, imagine we have the following block (P2P block serialization):

Bytes      Description
80         Header
1          Tx count
999,826    Coinbase tx
93         1-in, 1-out P2TR (stripped size)

A lite client performing whitepaper-style SPV can be tricked into accepting a fake transaction if the coinbase transaction was actually 64 bytes, since a 64-byte transaction is indistinguishable from an inner node of the merkle tree. To avoid that, it needs to learn the contents of the entire coinbase transaction in order to derive its txid for verifying its depth in the merkle tree. That means, even with optimizations, a proof size of about 1 MB.
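To make the ambiguity concrete, here’s a minimal Python sketch: an internal merkle node and a 64-byte transaction are both just the double-SHA256 of 64 bytes, so a verifier holding only hashes and a branch can’t tell them apart.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# An internal merkle node commits to its children as dsha256(left || right).
left, right = b"\x11" * 32, b"\x22" * 32
internal_node = dsha256(left + right)

# If those same 64 bytes also happen to parse as a transaction, its "txid"
# is bit-for-bit identical to the internal node, so a merkle proof cannot
# distinguish a real leaf from a truncated branch.
fake_64_byte_tx = left + right
assert dsha256(fake_64_byte_tx) == internal_node
```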

Obviously, very large coinbase transactions will be rare given that they reduce miners’ ability to include fee-paying transactions, but I think it’s worth noting in discussion and documentation that the worst case is ~1 MB. It should still be possible to validate a worst-case merkle proof with coinbase in witness data (given other soft fork changes), but it would be ~2,000x more expensive than validating a merkle proof that didn’t require a copy of the entire coinbase transaction.

@evoskuil mentioned an alternative potential soft fork: a commitment to tree depth, which could be done for any depth possible under the current consensus rules[1] using only 4 bits. He didn’t suggest where the commitment could be stored, but I think it’s clear that we’re probably never going to use all BIP8/9 versionbits, and miners currently seem satisfied with the 16 BIP320 version bits, meaning we could probably put the commitment in the block header version. That wouldn’t require any extra bandwidth for SPV.

I think the two cons of that approach are:

  • We might want to give more (eventually all of the available) versionbits to miners in the future as hashrate increases; that way individual hardware doesn’t need to create coinbase extranonces or use hacks like nTime rolling.
  • It still makes SPV more complicated than described in the whitepaper, although only slightly more so.

[1] 4 bits can express a maximum depth of 16 and a tree of depth 16 can have up to 65,536 transactions. However, the minimum possible transaction size is 60 bytes and a 999,919-byte block (excluding header and tx count) can only fit a maximum of 16,665 transactions of that size.
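Spelling out that arithmetic: ⌊999,919 / 60⌋ = 16,665, and since 2^14 = 16,384 < 16,665 ≤ 32,768 = 2^15, a depth-15 tree already covers the densest possible block, comfortably within the depth-16 maximum that 4 bits can express.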


I don’t think that’s strictly true? sha256 includes the size of its input when calculating the hash, so implicit in being provided a midstate is being provided the number of bytes that went into the midstate; in bitcoin that’s s (the midstate), buf (the remaining bytes that will build the next chunk), and bytes (the number of bytes that went into both those things). So a preimage for the txid of a very large coinbase tx can still be reduced to as few as 104 bytes. That requires you to have some way of differentiating a “valid” 64-byte coinbase tx from an internal node of the tx merkle tree, but it’s easy to transmit the full coinbase tx in that case anyway.
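As a rough Python sketch of that idea (field names loosely mirror Bitcoin Core’s CSHA256; `sha256_compress` is a hypothetical stand-in for the raw compression function, which hashlib doesn’t expose):

```python
import struct
from dataclasses import dataclass

@dataclass
class Sha256Midstate:
    s: bytes        # 32 bytes: the 8 x 32-bit state words after the compressed chunks
    buf: bytes      # 0..63 bytes not yet compressed (start of the final chunk)
    bytes_fed: int  # total bytes fed so far, covering both s and buf

def finish(mid: Sha256Midstate) -> bytes:
    """Finalize SHA-256 from a midstate: append the standard padding
    (0x80, zeros, then the 64-bit big-endian bit length) and compress."""
    pad_len = (55 - mid.bytes_fed) % 64
    tail = mid.buf + b"\x80" + b"\x00" * pad_len + struct.pack(">Q", mid.bytes_fed * 8)
    state = mid.s
    for i in range(0, len(tail), 64):
        state = sha256_compress(state, tail[i:i + 64])  # hypothetical helper
    return state  # first of the two SHA256 passes; txid = sha256(state)

# Proof element size: 32 (s) + at most 64 (final chunk bytes) + 8 (bytes_fed),
# i.e. ~104 bytes no matter how large the coinbase tx actually is.
```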

Compared to the status quo, it improves the correctness of block inclusion proofs. Compared to including the coinbase in the proof, I think the reduction in complexity is more of an advantage tbh:

  • (64-byte txs invalid) you need a merkle path, and also to check the witness-stripped tx isn’t 64 bytes
  • (coinbase in the proof) you need a combined merkle path to both the coinbase and the tx; you need to check they’re the same depth; you need to check the coinbase is valid; in order to minimise bandwidth you need to deal with sha256 midstates
  • (depth commitment) you need to look up the block’s merkle tree depth via (TBD), (except if the block height is prior to the activation height, in which case you’ll need to use this fixed table and hope there’s no absurdly large reorg) then check the merkle path is exactly that long

Perhaps worth noting that these merkle proofs will only give you the witness-stripped tx anyway – to verify the tx’s witness data you need the full coinbase in order to obtain the segwit commitment (which can be essentially anywhere in the coinbase tx), and then you need the matching merkle path from that to your non-stripped tx. So for a lite node, checking the stripped size of the tx is straightforward – that’s all the data it’s likely to be able to verify anyway.
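For illustration, locating that commitment is a linear scan of the coinbase outputs for the BIP141 tag (a sketch, with outputs assumed to be given as (value, scriptPubKey) pairs):

```python
# BIP141: the witness commitment is in an output whose scriptPubKey starts with
# OP_RETURN (0x6a), a 36-byte push (0x24), and the 4-byte tag 0xaa21a9ed.
WITNESS_COMMITMENT_PREFIX = bytes.fromhex("6a24aa21a9ed")

def find_witness_commitment(coinbase_outputs):
    """Return the 32-byte witness commitment, or None if absent. When several
    outputs match, the one with the highest index is the commitment."""
    commitment = None
    for _value, script in coinbase_outputs:
        if len(script) >= 38 and script.startswith(WITNESS_COMMITMENT_PREFIX):
            commitment = script[6:38]
    return commitment
```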

It occurs to me that most of the time, you’re probably more interested in whether an output is (was) in the utxo set, rather than whether a tx is in a block. But I think the concerns here are irrelevant for things like utreexo that have some new merkle tree: they can use a prefix to distinguish tree contents from internal nodes.


It took me a while to figure out how knowing the number of bytes used to create a digest was an alternative to validating the structure of a coinbase transaction. Do I accurately understand that you propose the following:

  1. Prover provides the coinbase transaction midstate (including size) for all but the final chunk of the first of the double SHA256 hashes, plus the final chunk.

  2. Prover provides a partial merkle branch for the coinbase transaction.

  3. Verifier uses the midstate (including size) and the final chunk to generate the coinbase txid.

  4. Verifier checks all provided nodes in the partial merkle branch are on the right side (i.e., that the branch is for the first transaction in a block, i.e. the coinbase transaction).

  5. Verifier inserts its generated coinbase txid into the partial merkle branch and verifies up the tree until it connects to the merkle root in the block header.
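A minimal sketch of steps 4 and 5, exploiting the fact that the coinbase is the leftmost leaf, so every provided branch node must be a right sibling:

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_coinbase_branch(coinbase_txid: bytes, branch: list,
                           merkle_root: bytes) -> bool:
    """Verify a merkle branch for leaf index 0. Because the coinbase is the
    leftmost leaf, the running hash is always the left child, so each sibling
    concatenates on the right; a match against the header's merkle root then
    also fixes the tree depth as len(branch)."""
    node = coinbase_txid
    for sibling in branch:
        node = dsha256(node + sibling)
    return node == merkle_root
```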

After the above process, the verifier can be certain of the following:

  • The number of bytes used to generate the digest (txid).

    • If the number of bytes is not equal to 64, then the digested data must have been a transaction (or the block was invalid).

    • If the number of bytes is equal to 64, then the digested data may have been either a transaction or concatenated digests for an internal merkle node. The verifier can distinguish between these by checking whether the 64-byte chunk provided to it matches a coinbase transaction pattern.

  • The verified transaction is a coinbase transaction given the inability to use CVE-2017-12842 to forge coinbase transactions and the position of the transaction in the merkle tree (or the block was invalid).

  • The depth of the coinbase transaction, which is the same depth as all other transactions in the tree.

I agree, as the average absolute bandwidth is trivial, can be easily reduced by another 36 bytes, and can be amortized over an entire block of transactions.

And as the above implies, worst case should not be a consideration given that it is reasonably bounded.

Given that bandwidth is not a material issue, adding sha256 midstates is unnecessary complexity.

Also, by simply compacting out the excess 36 bytes as mentioned above, there is no need to validate the coinbase. Reconstituting it with the presumed null point is sufficient (e.g. defaulting a tx deserialization with no input point to a null input point).
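A sketch of that reconstitution (the 5-byte offset assumes the usual witness-stripped serialization: 4-byte nVersion followed by a 1-byte input count):

```python
# The coinbase input's previous outpoint is fixed by consensus: 32 zero bytes
# plus the index 0xffffffff. Those 36 bytes carry no information and can be
# compacted out of the proof.
NULL_POINT = b"\x00" * 32 + b"\xff\xff\xff\xff"

def reconstitute_coinbase(compacted: bytes) -> bytes:
    """Reinsert the presumed null point into a compacted, witness-stripped
    coinbase: nVersion (4 bytes) + input count (1 byte) precede the outpoint."""
    return compacted[:5] + NULL_POINT + compacted[5:]
```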

Consequently the complexity comparison becomes:

  • Ensure that the stripped transaction is not 64 bytes, and ensure that its Merkle path is valid.

vs:

  • Ensure that the (reconstituted) coinbase Merkle path is valid, and ensure the same valid Merkle depth for any tx in the block.

Validating the coinbase Merkle path is of course code reuse, so the new logic is checking the Merkle path size. This is the tradeoff in the SPV wallet. The tradeoff in the full node is mandatory checking of the stripped size of all transactions (and soft fork) vs. doing nothing (possibly “compressing” null points for SPV clients).

Cross-referencing this SV2-related post here, as I brought up GCC as a point of relevance in that discussion.

Not important to your point, but executing this attack using the coinbase transaction requires brute-forcing 40 more bits in the first stage, which makes it fairly impractical.

To add to this list:

  • This is a roundabout way of closing the fake inclusion attack vector instead of addressing the root issue that 64-byte transactions are indistinguishable from inner nodes.
  • This requires miners to update their software stack, whereas making 64-byte transactions invalid doesn’t require them to do anything (if they aren’t running a non-standard Bitcoin Core).

Even then, for an application which checks SPV proofs of transactions for received payments, this would never be an issue, as it would only check transactions paying to a normal scriptPubKey in the first place, which would implicitly ensure the transaction is never 64 bytes.


After re-reading through the arguments presented in this thread, I’m leaning toward including the invalidity of 64-byte transactions as part of the consensus cleanup proposal. I appreciate Eric’s pushback against some of the advertised benefits (my caching assumptions being incorrect and the bandwidth reduction in absolute terms being modest). Still, I think the actual benefits, mostly in terms of reduced complexity for SPV applications, outweigh the cost.

It recently occurred to me this would require mandating the coinbase input to have its nSequence field be SEQUENCE_FINAL, as otherwise the coinbase transaction would fail the IsFinalTx() check (a locktime value of H only makes a transaction final from block H+1 onward).
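For reference, a Python paraphrase of the IsFinalTx() logic, which shows why a coinbase committing to its own height H in nLockTime needs SEQUENCE_FINAL:

```python
LOCKTIME_THRESHOLD = 500_000_000  # below: block height; at or above: unix time
SEQUENCE_FINAL = 0xFFFFFFFF

def is_final_tx(nlocktime: int, input_sequences, block_height: int,
                block_time: int) -> bool:
    """Paraphrase of Bitcoin Core's IsFinalTx()."""
    if nlocktime == 0:
        return True
    cutoff = block_height if nlocktime < LOCKTIME_THRESHOLD else block_time
    if nlocktime < cutoff:
        return True
    # Locktime not yet matured: the tx is final only if every input opts out.
    return all(seq == SEQUENCE_FINAL for seq in input_sequences)

# A coinbase at height H with nlocktime == H fails `nlocktime < H`, so it is
# only final if its input carries SEQUENCE_FINAL (or if nlocktime were H-1).
```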

Why wouldn’t you have nLockTime be the height of the parent block (mod 500M) instead?

I just figured the off-by-one would be unnecessarily confusing to anyone trying to use this. But I don’t have a strong opinion, and we might favor the small confusion over restricting the coinbase’s nSequence.

The height could also be encoded in the coinbase’s nVersion, and if we think making the block height available in a simpler manner than parsing the scriptSig isn’t a goal, then I think just requiring the coinbase transaction’s nVersion be anything but 1 would be sufficient.


I just caught up with the testnet4 discussions around fixing timewarp from back in May. Here is the TL;DR:

  • The original testnet4 PR included a timewarp fix with a 2h leeway (as opposed to the 10 minutes from Matt’s original proposal). The rationale for this was that if a miner publishes the last block of a period 2 hours in the future, another miner working on the first block of the next period should be able to use its “wall clock” as this block’s timestamp. I’m speculating, but I think the intention behind this was to not make it possible for a single miner to push the time forward just because it happened to mine the last block of a period. But this is not necessary: the second block of the new period can simply go back to using the current time.
  • The leeway was reduced from 2h to 10min in PR #30647 after discussion in BIPs PR #1658, on the basis that a 2h leeway would unnecessarily leave (slightly) more room for inflation, and that since PR #30681 Bitcoin Core would never create a block template with an invalid timestamp in the first place. A further motivation was to go with the more restrictive value on the test network to be able to learn about any unknown compatibility issue.

I think the Consensus Cleanup should include a fix for the Murch-Zawy attack, in the form of requiring the timestamp of the last block in a retarget period to never be lower than the timestamp of the first block in this same period. In other words, the time span between the first and last block of a retarget period should never be negative.
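The rule itself is a one-liner at the retarget boundary (a sketch, with illustrative names):

```python
def retarget_timespan_ok(first_block_ntime: int, last_block_ntime: int) -> bool:
    """Proposed rule: the last block of a 2016-block retarget period may not
    claim an earlier time than the period's first block, so the timespan fed
    to the difficulty calculation can never be negative."""
    return last_block_ntime >= first_block_ntime
```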

We discussed it in this thread already back in August, but i figured i should mention it here as well.

Why this, rather than a rolling “past time limit” along the lines of option 3 here? (eg: as well as the monotonic median time past, we maintain a non-decreasing earliest_time_past, which is initially the genesis timestamp, and thereafter max(earliest_time_past, block.nTimestamp - 2 hours), and add a new consensus rule next_block.nTimestamp >= earliest_time_past in addition to next_block.nTimestamp > median_time_past)
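A sketch of that rolling rule, with earliest_time_past initialized to the genesis timestamp:

```python
TWO_HOURS = 2 * 60 * 60

def update_earliest_time_past(earliest_time_past: int, block_ntime: int) -> int:
    """Monotone lower bound on acceptable timestamps (option 3 above)."""
    return max(earliest_time_past, block_ntime - TWO_HOURS)

def next_timestamp_ok(next_ntime: int, earliest_time_past: int,
                      median_time_past: int) -> bool:
    # New rule alongside the existing MTP rule, applied to every block.
    return next_ntime >= earliest_time_past and next_ntime > median_time_past
```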

Having a rule that is applied consistently to every block seems somewhat better to me than a rule that only applies to every 2016th block. YMMV.

(Most of the discussion in that thread was about options 1 and 2, which replaced/simplified median time past, which I don’t think is a practical option)

TL;DR A 2 hr limit on every block enables an easy attack on external applications that don’t implement it correctly if the median of past timestamps (MTP) is ever more than 2 hours in the past. Applying Murch’s timestamp limit to every block appears better than doing it only on the 2016th block because it forces everyone to view it and code it as a change to the MTP consensus rule, which will prevent the mistakes and attacks that could occur if it’s viewed as only a “timestamp” or “difficulty” change. This is because there may be script or code somewhere in the world that is focused on MTP whose developer thinks the change only affects timestamps or difficulty.

I called the 2 hr limit on every block the “safest and easiest” option, but I need to retract that due to 2 hours being too small. It changes the MTP consensus rule that a lot of software depends on, by overriding it if a timestamp >2 hr in the past occurs while the MTP is even further in the past. It enables an easy malicious attack on all software that doesn’t implement the rule change. A single miner only needs to use a timestamp >2 hrs in the past if the MTP happens to be more than that far behind.

The general rule is that if you make a consensus rule change, make it as small as possible. This is what Murch’s fix does (the 2016th block’s timestamp can’t be older than the 1st block’s), but there’s a lack of “symmetry”, “consistency”, “simplicity”, or “beauty” in not applying the limit to every block, which might be a way of saying there’s an unknown risk. It seems harder for me to reason that doing it on only 1 block is as safe as doing it on every block. Applying it to every block might be simpler (less risky) code. In testing, the change would occur 2016 times more often, possibly making problems easier to detect. If it’s safe at the transition, it had better be safe on every block.

So the choice is between minimal consensus change (apply only on the 2016th block) and adhering to “symmetry”, “consistency”, “simplicity”, or “beauty” as a way to prevent problems that are hard to detect or deduce.

As observational evidence that lack of “symmetry” or “beauty” is risky in ways that are hard to detect (and therefore that the same changes should be applied to every block), I think halvings and difficulty changes not being slowly applied on every block are possibly Satoshi’s biggest mistakes. No alt coin has survived on-off mining when using bitcoin’s difficulty algorithm, which means it’s dangerous to bitcoin itself if the fundamental security requirement of non-repurposable hashrate equipment is relaxed or not present, as it is in most alt coins. The halvings are harmful to miners’ investment planning and have caused large price fluctuations that have been brutal to weak hodlers and bitcoin’s reputation in the populace. In both cases Satoshi chose math or code simplicity over “symmetry” or “consistency”. He should have felt some uneasiness about those decisions despite there not being an error in the calculations. It’s widely known (at least intuitively) that batch processing, or fluctuating toward a goal, is less efficient than continuous, stable progress.

The two problems occurred but there wasn’t any error in Satoshi’s math. The problem is at a higher level than his objective. I don’t know what problem there could be in this case of changing once per 2016 blocks, but it probably isn’t in the specific objective of preventing attacks, but “external” to it. For example, “external” code or applications may not have an awareness of something happening every 2016 blocks but depend only on the MTP, which is potentially going to be overridden. So from their point of view, not doing it every block might be a “larger” consensus change.

If people only look at the difficulty algorithm or timestamp code to deploy or see the effects of this consensus change, it allows attacks on code that only needs to know the MTP. This should be marketed as an MTP change, not a difficulty or timestamp change. Doing it every block seems better in the sense that it forces everyone to realize it’s an MTP rule change.

This argument doesn’t mean “we might as well use a 2 hr limit”, because that allows an easy attack on code that doesn’t update. If for some reason the proposed two-week limit makes people uneasy, a 1-day limit would be safer than 2 hours.

Because it would be an unnecessarily bigger and more restrictive change. Restricting only the timestamp of those blocks that matter is less invasive, and I find it elegant to simply check that the timespan is not negative.

Why does it seem somewhat better to you?

Consensus primacy: MTP → timestamps → difficulty.

In deciding the consensus in any consensus mechanism, monotonicity has primacy over timestamps. In the case of PoW, timestamps have precedence over difficulty because they are the input from which difficulty is determined, which in turn determines PoW. No adjustment to timespan (or target, for that matter) that is restricted to the difficulty calculation should be made. It doesn’t matter what difficulty algorithm you use, as long as it’s a “fair” (blind) mathematical estimate of the hashrate based on prior difficulty(ies) and solvetime(s). Changing timespan in the algorithm pollutes the math. I have seen many exploits that result from that. For example, the 4x and 1/4x timespan limits allow exploits which I previously said should exist for this theoretical reason, and apparently PW believed my reasoning enough to go find the exploit. He may do the same here.

So this consensus rule change should be made in the MTP calculation. When it’s at that level, it makes more intuitive sense to do it every block.

Hard-to-see “gotcha” problems involve the MTP because it’s a patch instead of strictly following monotonicity.

edit: I was having trouble figuring out what it means to say

MTP → timestamps → difficulty.

when I know that’s the theoretical requirement and yet timestamps dictate MTP. The answer prevents the exploit: it means that the “timestamps” used by the difficulty algorithm to calculate timespan must be the MTP of the 1st and 2016th blocks. Even PW’s attack on the 4x and 1/4x timespan limits can be fixed without removing them completely if they’re turned into limits on the MTP. That is, no timestamp can be more than 3/4 * 2016 * 600 seconds into the past from the current MTP or more than 4 * 2016 * 600 into the future. If there’s a network partition for longer than that limit, the limit is used until real time catches up. Likewise for the past limit.

But either of these is too big of a change to implement. I don’t know of any simple solution that would make me feel comfortable in saying “it probably can’t be attacked” except to make MTP=1 instead of 11. This forces true monotonicity. But as a minimum, Murch’s limit had better be enforced on the timestamps instead of on the timespan, or there’s probably an exploit.

How many times have people had to learn “don’t use timestamps for time, use the MTP”? And here we are still trying to use them for the absolute core of our consensus. Non-monotonic timestamps for any consensus mechanism are not legitimate.

It’s concerning that the “official time” for a lot of things in bitcoin, like what scripts use, is the MTP, but the consensus mechanism is based on timestamps. The consensus mechanism isn’t determining the value of the MTP. All nodes place an upper limit on it, so the open question is how far in the past a 51% attack can hold it, enabling an unlimited disconnect even if the difficulty can’t be exploited for excess blocks. The MTP monotonicity isn’t enforcing monotonicity on the difficulty algorithm if there’s a >51% attack, which is why we had to propose a monotonic rule on the timespan. Also, by limiting only the timespan, the consensus isn’t attesting to the value of the 1st and 2016th timestamps.

This gives two levers that miners can control without PoW.

Monotonicity on each timestamp is the only way to assure there are no unknown attacks external to the difficulty algorithm. Assuming that’s too drastic to implement (via MTP=1 instead of 11), I’d say no timestamp should be older than 1 second after the 2016th timestamp in the past. But this doesn’t address the problem of the disconnect between the difficulty timestamps and the MTP time used everywhere else. The simplest fix for that if MTP=1 can’t be used is to base the timespan on the MTP of the 1st and 2016th blocks. This wouldn’t need a limit on the timestamps.
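A sketch of that MTP-based timespan (MTP computed the usual way over 11 timestamps):

```python
def median_time_past(prev_11_ntimes):
    """Median of the previous 11 block timestamps (bitcoin's MTP)."""
    return sorted(prev_11_ntimes)[5]

def mtp_timespan(ntimes_before_first_block, ntimes_before_last_block):
    """Retarget timespan taken between the MTPs at the first and last blocks
    of the period; monotone because MTP itself never decreases."""
    return (median_time_past(ntimes_before_last_block)
            - median_time_past(ntimes_before_first_block))
```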

Lamport’s 1978 paper tells us what must be done with timestamps in all consensus mechanisms to do it correctly. We can propose patches, but if the rules aren’t followed, there’s an attack. The only way to let the PoW prove the 1st and 2016th timestamps and MTP are correct is to force the timestamps to be monotonic. All nodes independently prevent miners from pushing time forward. Monotonicity prevents them from holding it back.

The other rules being broken shows an attack isn’t necessarily large. The other rules are that the clock tick must be slower than propagation speed but a lot faster than the length of consensus rounds (1 block), and clock accuracy should be less than 1/2 a clock tick. Enforcing these rules would prevent the possibility of 1 or 2 block selfish mining attacks, if they start being a problem.

The non-repurposability requirement for good security forces the miners to act in the best interest of the value of the coin, but I believe there are instances where they won’t if they’re not required to, such as when maximizing fees. They can’t increase the number of blocks with the proposed fix, but can their control of the other two levers somehow increase fees or deprecate something they don’t like? What are the possible profitable consequences if miners as a group hold the MTP back 6 months and make it jump to the present in 1 block? This can’t be done without a super-majority if the timespan limit is moved out of the DA to a limit on every timestamp.

Murch’s fix seems the easiest and safest in terms of preventing an attack and not breaking anything. But a monotonicity fix should be readily available in case of an emergency. There should be a BIP.

Mostly that a field with a fairly free choice of values normally having different constraints apply to it 0.05% of the time seems a bit error prone, in that it would be easy to not take the 0.05% chance thing into account, and cost people money as a result… But I suppose the same complaint would apply to the original timewarp fix too, and these are really 0% chance things outside of very high cost attacks anyway.

Opentimestamps isn’t secure [edit: to the extent MTP time is used as the measure of time instead of the highest timestamp value seen]. This is due to bitcoin not adhering to a proven distributed consensus requirement.

[edit: Here’s a simple solution if attacks on Opentimestamps start occurring: don’t use MTP as the official bitcoin time. Use the timestamp in the prior 11 blocks that has the highest value. It can only be 2 hours at most into the future, and being in the future instead of the past is consistent with a working timestamp.]

Unlike other 51% attacks, this isn’t prevented by bitcoin’s security model, which depends on miners’ long-term profit motive. MTP isn’t supported by PoW because 1) bitcoin doesn’t impose a “reverse” time limit on it, and 2) manipulating it doesn’t necessarily harm (and may increase) miners’ profit motive. MTP isn’t a secure source of decentralized time. It’s a lie to say bitcoin provides a secure decentralized source of time. MTP is a permissioned consensus mechanism.

To falsify the above, one of the following statements must be shown to not be true, or a fault found in the reasoning. It’s verbose to make it easier to falsify.

In short, miners should be able to find a way to profit more from holding MTP time back (due to its effect on Opentimestamps) than it reduces the present value of their equipment.

definitions:

“position” = existence, possession, and/or transfer.

  1. Opentimestamps agglomerates attestations crucial to the position of value.
  2. The agglomerations greatly reduce the fees paid per value positioned.
  3. Some attestations crucially depend on MTP time. Height & a single timestamp are not always useful options.
  4. A hashrate majority can hold back MTP time.
  5. The hashrate majority seeks to maximize net present value of their hashrate.
  6. A hashrate majority could profit from making the attestations appear earlier in MTP time than real time.
  7. The profit from 6 can exceed the harm it causes to the majority hashrate’s interest in the current and future BTC value on exchanges.

The above is proof of the first sentence. The following is proof of the second.

  8. Bitcoin doesn’t use monotonic timestamps on every block.
  9. Lamport’s 1978 paper on clocks and timestamps applies to all concepts of distributed consensus and it proves monotonic timestamps must be used.
  10. Monotonicity on every timestamp prevents a hashrate majority from being able to hold MTP back significantly unless it’s >95% of hashrate or it orphans the honest hashrate’s blocks.
  11. At least 5% of hashrate will be honest.
  12. The degree to which the majority hashrate can profit from 6 depends on the degree to which it can hold MTP back significantly.
  13. The majority hashrate’s interest in the current and future BTC value on exchanges decreases in value as the number of orphans of honest miners increases.
  14. Conjecture: Due to 12 and 13, the profit from 6 will not exceed the losses mentioned in 7.

Monotonic timestamps would enable MTP (and therefore Opentimestamps) to be supported by PoW. The goal of Opentimestamps is to have PoW decentrally support the claim that the stamps occurred at a certain point in real time or earlier, not that they definitely occurred before a certain time when they didn’t.

The attestations may also change the movement of value based on a minimum or maximum difference in real time, which the hashrate majority could thwart (initiate or prevent) by making MTP jump a month in a single block or by holding MTP back.

There are two consensus mechanisms at work in bitcoin when you don’t have monotonicity. One is PoW based on single timestamps that dictate difficulty, not directly subject to the MTP. The other is a median of the block winners (MTP), which is a permissioned mechanism.


If silence means everyone agrees MTP isn’t secure, isn’t supported by PoW, and shouldn’t be used or recommended for any purpose that makes those assumptions, here’s an alternative that doesn’t require a change to bitcoin. Instead of using MTP, use the highest timestamp out of the prior 11 blocks and the current timestamp (or more simply, the highest timestamp seen). This would be at most 2 hours in the future, which is consistent with timestamping (whereas being in the past isn’t). This has PoW support and enforces monotonicity. The highest timestamp seen would have better PoW support if the DAA changed every block, which would increase the cost of miners delaying timestamps during the 2-week adjustment period.
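A sketch of that alternative clock next to the MTP it would replace:

```python
def mtp(prev_11_ntimes):
    """The usual median time past over the prior 11 blocks."""
    return sorted(prev_11_ntimes)[5]

def highest_timestamp_seen(prev_11_ntimes, current_ntime: int) -> int:
    """Alternative suggested above: the maximum timestamp among the prior 11
    blocks and the current one. It is monotone by construction and can run at
    most ~2 hours ahead of real time (the future-timestamp rule)."""
    return max(max(prev_11_ntimes), current_ntime)
```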

Miners should enforce monotonicity even if it’s not a requirement.

It’s not a big problem in this sense: if timestamps are observed to monotonically increase, the MTP and the timestamps are valid and have PoW support. The degree to which they don’t have PoW support is limited by how far out of sequence they get. If it’s a problem, it can be seen, and the highest timestamp seen can be used. The problem is if code, scripts, apps, or legal contracts depend on MTP or current timestamps being accurate.

Discussing and/or improving OpenTimeStamps seems out of scope for this discussion, maybe open a new topic for it?
