That’s true, but my wider point is that MTP isn’t a valid PoW timestamp as some scripts may assume. Peter Todd informed me that he was aware of MTP not being good for timestamps back in 2016, and OpenTimestamps therefore doesn’t depend on it.
I opened a separate discussion: “Timewarp attack 600 second grace period”.
Hi, I’d like to update this thread with the outcome of the discussions in the private one on the topic of improving the worst-case validation time.
Shortly after posting the details of the worst block, I realized I could adapt it to be valid under the mitigations originally proposed in 2019. From there we reconsidered possible mitigations and their tradeoffs in terms of impact, confiscatory surface and complexity. Thanks to everyone who participated, and in particular to Anthony Towns for his contributions, corrections and the helpful discussions.
After studying the various options, I believe the best way forward is to introduce a 2,500 per-transaction limit on the number of legacy (both bare and P2SH) input sigops. This provides a 40x decrease in the worst-case validation time with a straightforward and flexible rule that minimizes the confiscatory surface. A further 7x decrease is possible by combining it with another rule, which is in my opinion not worth the additional confiscatory surface.
Regarding the duplicate coinbase fix, I think we should go with mandating that the height of the previous block be set in all coinbase transactions’ `nLockTime` field. The feedback I got from all the miners I reached out to was that either of the fixes being discussed was fine. Over other approaches, this one has the added advantage of letting one retrieve the coinbase’s block height without having to parse Script.
If anyone has an objection or a good reason to prefer an alternative approach please let me know.
Sjors convinced me to use a 2-hour grace period for the timewarp fix. Let me summarize the arguments put forth in the thread he opened and what led me to this conclusion.
A first illustration of his concern involves large, powerful mining machines. He argues a hypothetical 3 PH/s machine would need to roll timestamps by 10 seconds every second. This could cause an issue when mining the first block of a period, if the miner of the previous block set its timestamp as far in the future as possible, and if the pool does not feed a new template to the miner more frequently than once every minute and 6 seconds.
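For the arithmetic behind that last figure (my own back-of-the-envelope reading of the numbers above, not Sjors’s exact derivation):

```python
# Rolling nTime by 10s per wall-clock second gains 9s on real time every
# second, so a 600s allowance runs out in 600 / 9 ~= 66.7 seconds, i.e.
# the "minute and 6 seconds" template refresh interval mentioned above.
roll_rate = 10               # seconds of nTime advanced per wall-clock second
net_drift = roll_rate - 1    # net gain on real time, per second
allowance = 600              # seconds of nTime headroom
print(allowance / net_drift) # ~66.7
```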
This was addressed by Matt Corallo and Anthony Towns. Matt points out he’s unaware of any miner that rolls `nTime` more aggressively than once per second, and that the existence of old pool software that hard-rejects rolling past 600 seconds makes that very unlikely. Further, he points out that in SV2 miners can roll the extranonce, so this hypothetical powerful ASIC could just do that when exhausting the 600 seconds. Finally, Anthony Towns points out that as long as `nTime` is not rolled by more than 600 seconds, the attack scenario is moot, as the attacker’s block would also be too far in the future and have been rejected in the first place.
Sjors then raises another concern, about pool software ignoring the time in the block template provided by Bitcoin Core and instead using wall-clock time. He argues that while such software is technically broken today because of the MTP rule, it’s unlikely to cause issues in practice, because nobody pushes timestamps so far forward that a block using the current time would have a timestamp too far in the past. On the other hand, once a timewarp fix with a 600-second grace period activates, a pool using such software would be directly vulnerable to producing an invalid first block of a period if the miner which produced the previous block set its timestamp as far in the future as possible.
I disagree that we should make inferior protocol decisions to accommodate hypothetical software run by pools that would already be incompatible with today’s consensus rules. Further, it’s not clear the timewarp fix worsens things that much, since such software is already vulnerable today to producing a timestamp lower than that of 6 or more of the previous 11 blocks. Contrary to the timewarp fix, this can happen on any block and does not even require an attack scenario (a misconfigured clock suffices). So really, the 600-second grace period only introduces marginal additional risk to blatantly broken software that most likely nobody is running on mainnet with actual money on the line.
In addition to those concerns, Sjors reminded me that Fabian Jahr initially implemented the timewarp fix on testnet4 with a 2-hour grace period, and that it would make sense to keep this property of being able to use the current time no matter the value of the previous block’s timestamp (which may be up to 2 hours in the future).
Finally, Sjors pressed me to consider the downside of a 2-hour grace period as opposed to a 10-minute one. It would increase the worst-case block rate increase from ~0.1% to ~0.65%. We would also lose the aesthetically pleasing property that with a 10-minute grace period, the block rate under attack would be equivalent to that originally intended: one block every 599.9997 seconds (versus one block every 596.7211 seconds with a 2-hour grace period).
I concluded that despite the fairly weak arguments in favor of increasing the grace period, the cost of doing so was low enough to err on the safe side.
In this accounting, how many sigops will `OP_CHECKMULTISIG` (CMS) count for? IIRC, the current accounting for bare scripts (which is only applied to output scripts) counts each CMS as 20 sigops, but the accounting for P2SH redeem scripts makes the sigops equal to the CMS parameter for the number of pubkeys to check (e.g., `2 <key> <key> <key> 3 OP_CMS` counts as 3 sigops).
Also, to be clear, am I correct to assume the 2,500 input limit will apply to `CHECK*SIG` operations specified in a bare prevout script? E.g., spending a P2PK output will count 1 sigop towards the transaction limit.
I will be sharing the specs I’ve been drafting soon (I just wanted to clean up and better test my implementation first). The intention here is to account in the same way as for P2SH/Segwit: the number of sigops in a CMS is the number of keys if it is less than or equal to 16, and 20 otherwise.
Yes. The previous scriptPubKey, the scriptSig, and the redeem script (if P2SH) all count toward the limit.
I think it would be helpful if we could have distinct names for the two limits here – keep “sigop” for signatures that appear in the p2sh, p2wsh and the output scriptPubKey, and use something new for the new limit (“sigchecked”? “legacysig”?)
The condition is whether the script is `OP_1` … `OP_16` immediately followed by CMS, versus anything else, not strictly the number of keys. So e.g. `1 1 ADD CMS` will be treated as 20 sigops, not 2, and `0 0 0 CHECKMULTISIG` also counts as 20 sigops, rather than 0. It also counts `CHECK*SIG` opcodes from unexecuted branches, so `IF <P> CHECKSIG ELSE <Q> CHECKSIG ENDIF` counts as 2 sigops while `IF <P> ELSE <Q> ENDIF CHECKSIG` counts as 1.
Also keep in mind that pre-SegWit sigops are multiplied by 4.
But if you introduce a new name then maybe also don’t bother with that multiplication, since it only applies to pre-SegWit spends.
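Putting these rules together, here is a minimal Python sketch of the legacy accounting, modeled on my understanding of Bitcoin Core’s `GetSigOpCount(fAccurate)`; the decoded-script representation is a simplification for illustration, not the proposal’s normative counting code.

```python
# Legacy sigop accounting sketch. Scripts are lists of opcode byte values,
# with data pushes represented by the PUSH sentinel.
OP_1, OP_16 = 0x51, 0x60
OP_ADD = 0x93
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xAC, 0xAD
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xAE, 0xAF
MAX_PUBKEYS_PER_MULTISIG = 20
PUSH = -1  # stand-in for any data push opcode

def count_sigops(opcodes, accurate):
    """Count sigops, including those in unexecuted IF branches."""
    count, prev = 0, None
    for op in opcodes:
        if op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            # "Accurate" counting credits n sigops only when CMS is
            # immediately preceded by OP_1..OP_16; everything else is 20.
            if accurate and prev is not None and OP_1 <= prev <= OP_16:
                count += prev - OP_1 + 1
            else:
                count += MAX_PUBKEYS_PER_MULTISIG
        prev = op
    return count

# "2 <key> <key> <key> 3 CMS" is preceded by OP_3: 3 sigops when accurate.
assert count_sigops([OP_1 + 1, PUSH, PUSH, PUSH, OP_1 + 2,
                     OP_CHECKMULTISIG], accurate=True) == 3
# "1 1 ADD CMS" is preceded by OP_ADD: 20 sigops, not 2.
assert count_sigops([OP_1, OP_1, OP_ADD, OP_CHECKMULTISIG],
                    accurate=True) == 20
# Bare output scripts use the non-accurate counting: always 20 per CMS.
assert count_sigops([OP_1 + 1, PUSH, PUSH, OP_1 + 2, OP_CHECKMULTISIG],
                    accurate=False) == 20
```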
One thing to note about disallowing 64-byte transactions on the network is that this will not work well if we ever decide to move away from a 256-bit hash digest for our merkle tree structure.
For instance, if we decide to move to a 512-bit digest, we would then need to disallow 128-byte transactions, of which by my count there are ~300,000 in the Bitcoin blockchain. A 1024-bit digest would disallow 256-byte transactions, of which there are ~450,000.
More generally, with this approach it seems that if we have an N-byte digest, we cannot allow transactions of 2*N bytes?
I guess if such a drastic change is ever needed, maybe we should propose reworking the entire merkle tree structure.
The entire reason why 64-byte transactions are problematic is that Bitcoin’s txid Merkle tree has a design flaw: it doesn’t distinguish between internal nodes and leaves. If anyone ever realistically considers changing the Merkle tree design, such as changing the hash function, they should start from a sane design that isn’t broken in the first place.
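For concreteness, here is a minimal Python demonstration of that flaw (my own illustration): a leaf is the double-SHA256 of a transaction’s serialization, and an internal node is the double-SHA256 of its two concatenated 32-byte children, so a 64-byte transaction hashes exactly like an internal node.

```python
# Demonstrates the leaf/internal-node ambiguity in Bitcoin's txid merkle
# tree for a 64-byte transaction. Any 64-byte blob stands in for the
# transaction serialization here.
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

fake_tx = bytes(range(64))          # a stand-in 64-byte "transaction"
left, right = fake_tx[:32], fake_tx[32:]

leaf_hash = sha256d(fake_tx)        # how a txid leaf is hashed
inner_node = sha256d(left + right)  # how an internal node is hashed
assert leaf_hash == inner_node      # indistinguishable by construction
```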
Characteristics of a 64-byte Transaction
I recently proposed a Bitcoin Improvement Proposal (BIP) to make 64-byte transactions consensus-invalid in Bitcoin. This document examines the characteristics of 64-byte transactions.
Background
According to Suhas Daftuar, 64-byte transactions follow this format:
- version (4 bytes)
- vin size (1 byte)
- outpoint (36 bytes)
- length scriptSig (1 byte)
- scriptSig (0–4 bytes, depending on the scriptPubKey in this transaction)
- sequence (4 bytes)
- vout size (1 byte)
- value (8 bytes)
- length scriptPubKey (1 byte)
- scriptPubKey (0–4 bytes, depending on the scriptSig in this transaction)
- locktime (4 bytes)
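As a quick sanity check of this layout (my own illustration, not part of Suhas’s description): the fixed fields sum to 60 bytes, leaving exactly 4 bytes to split between the scriptSig and the scriptPubKey.

```python
# The fixed-size fields of a 1-input, 1-output transaction add up to 60
# bytes, so scriptSig + scriptPubKey must share the remaining 4 bytes.
FIXED_FIELDS = {
    "version": 4, "vin size": 1, "outpoint": 36, "scriptSig length": 1,
    "sequence": 4, "vout size": 1, "value": 8, "scriptPubKey length": 1,
    "locktime": 4,
}
assert sum(FIXED_FIELDS.values()) == 60
for scriptsig_len in range(5):
    print(f"scriptSig: {scriptsig_len} bytes, "
          f"scriptPubKey: {4 - scriptsig_len} bytes")
```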
64-Byte Pre-Segwit Transactions Cannot Contain a Digital Signature in the scriptSig
Since the activation of BIP66 on the Bitcoin network, digital signatures must be at least 9 bytes long.
As a result, a 64-byte pre-segwit transaction cannot spend raw scripts that contain:
- `OP_CHECKSIG`
- `OP_CHECKSIGVERIFY`
- `OP_CHECKMULTISIG`
- `OP_CHECKMULTISIGVERIFY`
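To see where that 9-byte floor comes from (a sketch based on my reading of BIP66’s encoding rules, not text from the BIP itself): the smallest valid DER signature encodes `r` and `s` as 1-byte integers, and Bitcoin appends a sighash type byte.

```python
# Minimal-size signature under BIP66's encoding rules (my own breakdown):
# DER header + two 1-byte INTEGERs + the sighash byte Bitcoin appends = 9.
minimal_sig = bytes([
    0x30, 0x06,        # DER SEQUENCE, 6 bytes of content
    0x02, 0x01, 0x01,  # INTEGER r (1 byte)
    0x02, 0x01, 0x01,  # INTEGER s (1 byte)
    0x01,              # SIGHASH_ALL
])
assert len(minimal_sig) == 9
```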
64-Byte Transactions Must Create an ANYONECANSPEND Output
No known `scriptPubKey` of at most 4 bytes is protected by a public key or a supported hash function in Bitcoin Script.
Therefore, every output created by a 64-byte transaction can be trivially claimed by miners. If the goal of the transaction is to create an output claimable by miners, this can be achieved with Bitcoin transactions either smaller or larger than 64 bytes.
Nonstandard Outputs
As of block `00000000000000000001194ae6be942619bf61aa70822b9643d01c1a441bf2b7`, there are no non-standard, non-zero-value outputs that could be satisfied exclusively by a 64-byte transaction.
P2SH Outputs
P2SH outputs place a redeem script in the `scriptSig`. We can allocate up to 3 bytes for the `redeemScript` when spending a P2SH output.
As of block `00000000000000000001194ae6be942619bf61aa70822b9643d01c1a441bf2b7`, there are no UTXOs in the blockchain with `redeemScripts` of 0–3 bytes.
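For what it’s worth, here is a sketch of how one could check that claim, assuming access to a dump of the UTXO set’s scriptPubKeys (the `load_utxo_scriptpubkeys` helper below is hypothetical, not a real API): enumerate all 2^0 + 2^8 + 2^16 + 2^24 (about 16.8 million) byte strings of length 0 to 3, derive each candidate redeemScript’s P2SH scriptPubKey, and look it up.

```python
# Brute-force sketch: does any 0-3 byte redeemScript hash to a P2SH
# scriptPubKey present in the UTXO set? Illustrative only.
import hashlib
from itertools import product

def p2sh_script_pubkey(redeem_script: bytes) -> bytes:
    # OP_HASH160 <20-byte hash> OP_EQUAL; note that the "ripemd160"
    # algorithm may be unavailable in some OpenSSL builds of hashlib.
    h160 = hashlib.new("ripemd160", hashlib.sha256(redeem_script).digest())
    return b"\xa9\x14" + h160.digest() + b"\x87"

utxo_spks = load_utxo_scriptpubkeys()  # hypothetical: set of scriptPubKeys
for length in range(4):
    for candidate in product(range(256), repeat=length):
        redeem_script = bytes(candidate)
        if p2sh_script_pubkey(redeem_script) in utxo_spks:
            print("spendable short redeemScript:", redeem_script.hex())
```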
SegWit Outputs
BIP141 fundamentally restructures Bitcoin transactions. It introduces a new data structure called a witness that can replace the `scriptSig` for SegWit programs. This data does not count toward the 64-byte transaction limit, meaning digital signatures can be included in 64-byte SegWit transactions.
Native SegWit v0 and v1 Programs
64-byte transactions that spend native SegWit programs must have exactly 4-byte `scriptPubKeys`. This is because the inputs to their programs are put into the witness rather than the `scriptSig`.
As a side note, when running tests for this document I realized it is impossible to broadcast 64-byte transactions via the RPC interface, even with `-acceptnonstdtxn=1`, without custom-compiling `bitcoind`.
Wrapped SegWit Programs
There are two types of wrapped SegWit programs:
- `p2sh(p2wpkh)`
- `p2sh(p2wsh)`
Both types of outputs require witness programs in the `scriptSig`, which are larger than 4 bytes. Therefore, these types of outputs cannot be spent by 64-byte transactions.
Future SegWit Versions
As per BIP141, this is how a witness program is defined:
> A `scriptPubKey` (or `redeemScript` as defined in BIP16/P2SH) that consists of a 1-byte push opcode (one of `OP_0`, `OP_1`, `OP_2`, …, `OP_16`) followed by a direct data push between 2 and 40 bytes gets a new special meaning.
If a BIP disallowing 64-byte transactions is activated, we will no longer allow 1-input, 1-output SegWit transactions paying to 2-byte witness programs.
Here is an example of a witness program that would no longer be possible in a 1-input, 1-output transaction:
OP_2 0x02 0xXXXX
I am not aware of any reason why this would be a problem, but I have not seen it documented anywhere.
See this testnet4 transaction: f1572558fed009ab9d247da85be221e3d8f98c80b66ce9c2ada3a25cba0d797a
The message wrapped in OP_RETURN says: “Without this OP_RETURN, sending to tb1pfees will result in 64-byte transaction.”
So, in general, when you want to send from SegWit to Anchor, or from Anchor to Anchor, you need a dummy OP_RETURN (or anything else) to make the transaction valid.
Did you check all UTXOs one by one? In any case, this statement is always true, because you can just pad the scriptSig (even if that might be nonstandard).
Nice, did you bruteforce all possible valid script serializations from 0 to 3 bytes and check that no corresponding P2SH exists in the UTXO set? (But also, it’s not necessary to make the point that it wouldn’t freeze anyone’s coins, for the same reason.)
I’m pretty sure it was discussed around Greg’s segwit ephemeral anchors (renamed ephemeral dust along the road), as alluded to by garlonicon.
Did you check all utxos one by one
And note, I excluded 0-value UTXOs. I also excluded any UTXO that involves a checksig op (BIP66), as well as syntactically invalid Scripts (there are a lot of Scripts that have an OP_IF with no OP_ENDIF!). I can rerun with different filters if you would like. Here is a link to the source code.
In any case this statement is always true because you can just pad the scriptsig (even if that might be nonstandard).
This seems right to me, but I haven’t tested it yet with the various interpreter flags (`CLEANSTACK`, `PUSHONLY`, `MINIMALDATA`, etc.).
Nice, did you bruteforce all possible valid script serializations from 0 to 3 bytes and checked no corresponding P2SH exist in the utxo set?
I’m pretty sure it was discussed around Greg’s segwit ephemeral anchors (renamed ephemeral dust along the road), as alluded to by garlonicon.
I’m getting caught up on this; if you have a link handy to the discussion, it would be much appreciated.
Actually this is incorrect. P2SH requires the scriptSig to be pushonly.
See the thread for SegWit ephemeral anchors. It contains no direct mention of 64-byte transactions, though.
I’ll think a bit more, but I cannot foresee a use case. If you only have a single output and that output is key-less (like P2A), I’m unsure what action could be taken that couldn’t also be taken by burning the value to fees with a minimally sized OP_RETURN.
As I’m finalizing the BIP, @ajtowns suggested a neat idea to me.
In addition to mandating the `nLockTime` of coinbase transactions be set to the block height minus 1, we could also require their `nSequence` not be final. This would make it so the timelock is enforced.
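A minimal sketch of the combined rule as I understand it (field and function names are illustrative, not from the BIP text):

```python
# Proposed coinbase commitment check, per my reading of the post above:
# nLockTime commits to the previous block's height, and the input's
# nSequence must not be final so the locktime is actually enforced.
SEQUENCE_FINAL = 0xFFFFFFFF

def coinbase_commitment_ok(coinbase, block_height: int) -> bool:
    return (coinbase.nLockTime == block_height - 1
            and all(txin.nSequence != SEQUENCE_FINAL
                    for txin in coinbase.vin))
```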
Once (and if) the Consensus Cleanup is activated and its activation height buried, this would give us the following property: “BIP30 validation is never necessary after CC activation height”.
Note that we don’t otherwise necessarily have this property. Technically, now that checkpoints have been removed[1], Bitcoin Core could validate a chain containing a coinbase transaction before BIP34 activation height such that it committed, according to both BIP34 and CC, to a block height post CC activation. In this case, it would be necessary to resume BIP30 validation, or a duplicate coinbase could be let in.
But mandating the coinbase’s `nSequence` never be final, by leveraging the fact that timelocks are also checked on coinbase transactions, makes it impossible for a previous transaction to have the very same txid.
Of course it does not matter for any practical purpose. But it’s pretty neat. Thoughts?
I have decided to include AJ’s suggestion in the proposal.
In other news, I scanned the chain, and there has never been a single usage of a 64-byte transaction in 16 years of Bitcoin history. (Incorrect: Chris has a list of historical 64-byte transactions here.)
This is the list of the 64-byte transactions that I found in the Bitcoin blockchain.
These results were produced around block `00000000000000000001194ae6be942619bf61aa70822b9643d01c1a441bf2b7`, but unfortunately I didn’t document the exact hash. It’s unlikely that any have occurred since then, but no guarantees.