@AntoineP exponential growth is always a problem if it goes on for long enough. Is there some upper bound to the speedup or can blocks come in every second if the 51% attack persists?
Wait, this is not a "bug in Bitcoin Core", this is a fix to the PR which changed block validation rules without adapting the block creation logic…
@AntoineP exponential growth is always a problem if it goes on for long enough. Is there some upper bound to the speedup or can blocks come in every second if the 51% attack persists?
I'm not sure what exponential growth you are referring to. Are you asking if in theory it's possible to drastically increase the block rate with any grace period value if the attack persists long enough? If so then yes.
In theory the difficulty can always be brought down to 1 if you can constantly claim that what took you 2 weeks actually took 2 weeks + x seconds. See the numbers I shared above. But I'm not sure how relevant it is: for instance, with a 600-second grace period it would take hundreds of years to bring the difficulty down to 1.
without adapting the block creation logic
Which is a bug in Bitcoin Core. It was caught before release. The second bug wasn't, it's fixed by 31600. It just illustrates how easy it is to screw things up when the grace period is only 600 seconds.
Update: it was a known omission before the original PR was merged, so maybe shouldn't count as a bug: Move maximum timewarp attack threshold back to 600s from 7200s by TheBlueMatt · Pull Request #30647 · bitcoin/bitcoin · GitHub
Are you asking if in theory it's possible to drastically increase the block rate with any grace period value if the attack persists long enough? If so then yes.
Exactly. I'm not really worried about a one-off 10% increase in block speed. But compounded over time it's more worrying.
with a 600-second grace period it would take hundreds of years to bring the difficulty down to 1
Hundreds of years is enough time to come up with a plan, but serious problems could happen long before difficulty reaches 1. E.g. deep reorgs would be easier to execute.
If we take a difficulty reduction of 10x as the (arbitrary) definition of "bad", and if we assume a 10% difficulty decrease every ~10 retarget periods, it takes 240 retarget periods to reach this "bad" situation. That's several years, probably enough to discuss and deploy a soft-fork to make it stricter. But not super comfortable either.
Whereas with a grace period of 600 seconds, reaching "bad" takes 20 times longer, which seems very safe.
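To sanity-check the arithmetic (just a back-of-the-envelope sketch using the 10%-per-10-periods assumption above, nothing more precise than that):

```python
# Rough check of the numbers above: a 10% difficulty decrease every ~10
# retarget periods, compounded over 240 periods (i.e. 24 decreases).
decreases = 240 // 10
print(0.9 ** decreases)                     # ~0.08, i.e. comfortably past a 10x reduction
print(240 * 2016 * 600 / (365.25 * 86400))  # ~9.2 years' worth of retarget periods
```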
So this may be a good argument for keeping the number at 600 and insisting that pools either use the min_time field provided by Bitcoin Core, or implement their own code for the minimum time that takes both MTP and the new timewarp rule into account.
In theory the difficulty can always be brought down to 1 if you can constantly claim that what took you 2 weeks actually took 2 weeks + x seconds.
Not if you want to win a 51% race. After claiming you took 2 wks + x, you're going to solve the next 2016 blocks in 2 wks - x if you maintain your hashrate. If you don't, you're not going to win chain work. To claim it took 2 wks + x in that round, you have to add 2x to your 2 wks - x real time. You have only one x to manipulate without penalty; the other x you have to add to real time. As those accumulate past 7200 seconds, no nodes will accept your blocks.
Yes, that's why I started with "in theory". In practice I think any of the discussed values for the grace period make extreme drops in difficulty completely unlikely.
There's no theory that allows an increase in the block production rate. A difficulty drop won't accumulate more than one round unless hashrate drops. You want it to drop if that happens. Difficulty can't drop more than 1.2% in one and only one round if the reverse limit is 7200 and the attacker sets the last block to 7200 in the future. There's no average decrease in difficulty because it will cause an increase in difficulty in the next round.
If the difficulty drop doesn't accumulate then I flip back to my preference for a grace period high enough to avoid bugs, and which maintains the rule that the minimum nTime of a block is given by MTP + 1, without the need for a clock.
Update 2025-01-09: you don't need a clock to honor the timewarp rule.
@zawy Imagine the proposed rule, that the first block in a period (i.e., one where n = 0 \mod 2016) must have a timestamp no more than 600s before its immediate predecessor (the last block of the previous period).
Call this constant G = 600 (grace period). Also introduce P = 2016 \cdot 600 (the period, 2 weeks).
For simplicity, let's replace the "timestamp must be strictly larger than MTP" rule with a "timestamp must not be below MTP" rule (i.e., we allow timestamp=MTP). This doesn't materially affect the attack, but means we don't need to increment by 1s every 6 blocks.
The following sequence of timestamps for periods is valid, starting at time 0:
- Period 0: [0, 0, 0, …, 0, 0, P].
- Period 1: [P-G, P-G, …, P-G, 2P-G].
- Period 2: [2P-2G, …, 3P-2G].
- Period 3: [3P-3G, …, 4P-3G].
- …
- Period k: [kP-kG, …, (1+k)P-kG].
Within each period, the difference between the first and last block timestamp is exactly P, so the difficulty will not adjust. However, the net timestamp increase per period is only P-G, i.e., 10 minutes less than 2 weeks.
So this allows the attacker to permanently keep the block rate higher than intended, without incurring a difficulty increase cost.
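For concreteness, here is a small script (illustration only, not part of the proposal) that builds the timestamp schedule above and checks the two properties it relies on:

```python
G = 600            # grace period
P = 2016 * 600     # target timespan per period (2 weeks)

def period_timestamps(k):
    """Period k: [kP - kG, ..., kP - kG, (1+k)P - kG]."""
    low = k * P - k * G
    return [low] * 2015 + [(1 + k) * P - k * G]

for k in range(1, 10):
    prev, cur = period_timestamps(k - 1), period_timestamps(k)
    assert cur[-1] - cur[0] == P          # retarget sees exactly two weeks: no adjustment
    assert cur[0] == prev[-1] - G         # first block only goes back G seconds
    assert cur[-1] - prev[-1] == P - G    # but net timestamp advance per period is P - G
print("net advance per period:", P - G, "seconds for 2016 blocks")
```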
The reason for the 600s grace period is that it compensates the effect of the off-by-one in the difficulty calculation (it only looks at how long 2015 blocks take, not 2016), which would otherwise result in (under constant hashrate) periods of 2 weeks plus 10 minutes. So with the proposed rule, a timewarp attacker cannot cause a steady constant-difficulty block production rate of more than 2016 blocks per two weeks.
There is another effect due to the asymmetry of the Erlang distribution that results in another 10 minutes of lengthening under constant hashrate. I forget why this isn't incorporated for compensation in the proposed GCC rule.
EDIT: ah, it's explained in footnote [1] in bips/bip-XXXX.mediawiki at 7f9670b643b7c943a0cc6d2197d3eabe661050c2 · TheBlueMatt/bips · GitHub. The attacker can ignore the first effect but not the second.
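To make the two effects concrete, here is a rough sanity-check simulation (my own sketch, not from the BIP): with honest timestamps, constant hashrate, and the existing off-by-one retarget, the average block interval converges to roughly 2016/2014 · 600 ≈ 600.6 s, i.e. periods of about 2 weeks plus 20 minutes, which is the two ~10-minute effects combined.

```python
import numpy as np

def honest_average_interval(periods=200_000, seed=1):
    """Honest timestamps, constant hashrate, retarget only spans 2015 of
    the 2016 solve intervals in a period (the off-by-one)."""
    rng = np.random.default_rng(seed)
    P = 2016 * 600
    r = 600.0                                        # current expected block interval
    total_time = 0.0
    for _ in range(periods):
        observed = rng.gamma(shape=2015, scale=r)    # timespan the retarget sees (2015 intervals)
        missed = rng.exponential(scale=r)            # the one interval the off-by-one ignores
        total_time += observed + missed
        r *= P / observed                            # retarget scales the block interval by P / observed
    return total_time / (2016 * periods)

print(honest_average_interval())   # ~600.6 s per block, i.e. ~2 weeks + 20 minutes per period
```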
I don't understand how one of the effects can be ignored. Reference [1] switched from saying we're looking at the Erlang of 2015 blocks to 2016 blocks. It seems to imply the attacker received a double benefit, i.e., one by increasing the timespan by 600, but then another by somehow going back in real-world time by 600 seconds to get 2016 * 600 seconds to mine instead of being stuck with 2015 * 600.
[ for time taken to mine the last 2015 blocks ] … IBT is 2016/2014 * 600. This is equivalent to 600 * E(2016 * 600/X) where X~ErlangDistribution(k=2015, λ=1/600). In the case of a miner deliberately reducing timestamps by 600 seconds on the difficulty-retargeting block, we are effectively changing the difficulty multiplier to (2016 / (time taken to mine the last 2016 blocks + 600)), or 600 * E(2016 * 600/(X + 600)) where X~Erlang Distribution(k=2016, λ=1/600), which is effectively targeting an inter-block time of ~599.9999 seconds.
Concerning whether this means "difficulty drop can accumulate": yes, for the last block in this sequence he can use real time for the last timestamp, then the next period will have a lower difficulty, but then the next period would be back to normal. He would have to do 168 periods with a 2 hr limit to drop difficulty to I think 1/2.
I think there's a lot more risk to making it strict. I can see only the very smallest of benefit in making it strict, but I can't estimate the risk that's being increased.
Just so we're talking about the same thing, my points are:
- I believe a majority-hashrate attacker can use the timewarp attack to permanently increase the block rate beyond 1 per 600s on average, while keeping the hashrate constant, as long as they keep up the timewarping scheme. They can't keep increasing exponentially or anything like that.
- With a grace period limiting how much time can go backwards between the last block of a period and the first block of the next period, we limit how large that increased rate is. If the grace period is G seconds, then the attacker can maintain a block rate of roughly one per \frac{2017 \cdot 600 - G}{2016} seconds, at constant hashrate (the formula is evaluated for both values under discussion just after this list). I don't remember why it's 2017 and not 2018 there (as would be expected if both the off-by-one and Erlang asymmetry are relevant), but I will try to simulate it.
- I have no opinion about when/where to enforce timestamp rules beyond bounding the amount of time the timestamp can go backwards between the last block of a period and the first block of the next period.
- I understand and agree with your example.
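Plugging the two grace periods under discussion into that formula (just a quick evaluation, not part of the posts above):

```python
# Claimed sustained block interval, one per (2017*600 - G)/2016 seconds.
for G in (600, 7200):
    print(G, (2017 * 600 - G) / 2016)
# 600  -> 600.0       seconds per block
# 7200 -> 596.726...  seconds per block
```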
- Right, I don't see why the BIP thinks it's 2017 instead of 2018. I bolded the unexplained jump in logic in the BIP that I didn't follow.
- You agree that making the limit 600 seconds is at least as safe as 7200?
Rather than 2017 / 2018, we're measuring a timespan across 2015 blocks, so via Erlang we expected 2014 blocks. Therefore the adjustment is supposed to be "2014_blocks / 2015_expected_solvetimes". But the code calculates "2016_blocks / 2015_times". Allowing +600 in the denominator makes it 2016 / 2016. But we should add 1 to the numerator if we're going to add one to the denominator: (2014+1) / (2015 + 1) = 2015 / 2016. So 2016 / 2016 is too large.
For your example of subtracting 600 at the beginning and end, the difficulty is too high, always 2016 / 2015 instead of "staying constant" with 2014 / 2015, i.e. it's taking 600.6 seconds / block (2 weeks + 20 minutes). So G = 3 * 600 in order to solve in 2 weeks minus 10 minutes?
Let's say c_i is the actual time at which block i is found, and t_i is its timestamp.
If the attacker (who is assumed to have 100% hashrate) follows this strategy:
- The first block in every period p uses as timestamp the previous block's timestamp minus the grace period: t_{2016p} = t_{2016p-1} - G (the minimum legal time according to the proposed rule).
- The 2014 middle blocks use the minimum legal time, which doesn't really matter as long as it's low enough: t_{2016p+k} = 0 for 0 < k < 2015.
- The final block in the period uses the current time t_{2016p+2015} = c_{2016p+2015}.
Then the observed duration of the 2015 blocks in period p (relevant for difficulty adjustment) is t_{2016p+2015} - t_{2016p} = c_{2016p+2015} - c_{2016p-1} + G, i.e. the time it took to mine 2016 blocks plus G.
The difficulty multiplier m will thus be m = \frac{P}{X + G}, where X is Erlang distributed with k=2016 and \lambda=hashrate \cdot \frac{target}{2^{256}}.
In a simulation with G=600 I get an effective average block interval under constant hashrate of 599.9997… seconds. With G=7200 I get 596.7211… seconds.
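A minimal version of such a simulation might look like the following (my own reconstruction of the setup described above: constant hashrate, the attacker follows the three-step strategy every period, and MTP / future-time / 4x-clamp details are not modeled):

```python
import numpy as np

def timewarp_average_interval(G, periods=200_000, seed=1):
    """Attacker timewarps every period: the retarget sees the real time to
    mine 2016 blocks plus the grace period G as the timespan."""
    rng = np.random.default_rng(seed)
    P = 2016 * 600
    r = 600.0                                  # current expected block interval
    total_time = 0.0
    for _ in range(periods):
        X = rng.gamma(shape=2016, scale=r)     # real time to mine 2016 blocks (Erlang)
        total_time += X
        r *= P / (X + G)                       # difficulty multiplier m = P / (X + G)
    return total_time / (2016 * periods)

print(timewarp_average_interval(600))    # ~600.0 s per block
print(timewarp_average_interval(7200))   # ~596.7 s per block
```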
OK, now I see. It's not that an adjustment for both Erlang and the "2015 hole" isn't needed, it's that 600 seconds before the previous block isn't a 600-second lie, it's a 1200-second lie because we expected a timestamp 600 seconds after it.
it's a 1200-second lie because we expected a timestamp 600 seconds after it.
That's a great way to put it.