Fastest-possible PoW via Simple DAG

I’m fairly skeptical of using graph structure alone and the DAA here to handle high latency. High latency is already observed and accounted for by the Nb/Nc method (because high latency generates large cohorts). I don’t want to create incentives for miners to do anything other than naming all known tips. So I think we need to pull in another timing measurement to decide whether beads have abnormally high latency or not. Remember that, just due to the statistics of the Poisson distribution, large cohorts and excessive grandparents will naturally be generated some calculable fraction of the time, and don’t represent a problem for the system or evidence of an attack. Adjusting the difficulty due to these infrequent, large, naturally-occurring cohorts biases the difficulty algorithm relative to the naive Nb/Nc=2.42 algorithm.
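That “calculable fraction” is easy to see in a toy model. Here is a rough sketch (my own illustration, not Braidpool code) that treats bead arrivals as a Poisson process on a fully-connected network with a single uniform latency, so a cohort closes whenever no bead appears for one latency period; the bead rate and latency are placeholder values chosen only so the mean cohort size lands near Nb/Nc≈2.42:

```python
import random

def cohort_sizes(n_beads: int, bead_rate: float, latency: float, seed: int = 1) -> list[int]:
    """Cohort sizes for n_beads Poisson-arriving beads, in a toy model where a
    cohort ends whenever no bead is produced for at least `latency` seconds
    (a quiet period long enough for every miner to see every earlier bead)."""
    rng = random.Random(seed)
    sizes, current = [], 0
    for _ in range(n_beads):
        gap = rng.expovariate(bead_rate)    # time since the previous bead
        if gap >= latency and current > 0:  # quiet period: cohort boundary
            sizes.append(current)
            current = 0
        current += 1
    sizes.append(current)
    return sizes

# Placeholder parameters: 1 bead/s and ~0.9 s latency give a mean cohort size
# near the Nb/Nc = 2.42 target, with zero attackers and zero clock error.
sizes = cohort_sizes(n_beads=100_000, bead_rate=1.0, latency=0.9)
print(f"mean cohort size: {sum(sizes) / len(sizes):.2f}")
print(f"fraction of cohorts with >= 10 beads: {sum(s >= 10 for s in sizes) / len(sizes):.4f}")
```

Even in this idealized setting, on the order of a percent of cohorts come out with 10+ beads, so “large cohort” by itself can’t distinguish bad luck from high latency.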

Let me elaborate on my alternative: instead of adjusting the difficulty, we decide not to pay high-latency beads.

First, assume that Braidpool commits to a millisecond-resolution timestamp in each bead’s committed metadata; this is the timestamp we will use. Second, consider the following diagram of a high-latency bead:

       /-------o---------\
  o-o-o                   o-o-o
       \o-o-o-o-o-o-o-o-o/

Where the bead on the top has high latency and the chain of beads on the bottom is the “highest-work path” through the DAG. This is the most common example of what a high-latency bead will do to the DAG. In its absence you’d just have the chain on the bottom. The lower chain will in general have higher-order structures and, e.g., Nb/Nc=2.42 if we ignore the high-latency bead. This example does not have excessive grandparents.

A good measure of latency of any bead is:

bead_latency = median({t_c}) - median({t_p})

where {t_c} is the set of the timestamps of children, and {t_p} is the set of timestamps of parents of that bead. This measure has the advantage that it is not influenced by the miner’s own timestamp, only by timestamps reported by other miners, so is very difficult to game. A miner also doesn’t know what his own latency will be until children of his bead appear, so it’s in his best interest to broadcast his bead as quickly as possible, to collect those children.
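For concreteness, here is a minimal sketch of that measure; the function name and the plain lists of millisecond timestamps are assumptions for illustration, not Braidpool’s actual data structures:

```python
from statistics import median

def bead_latency_ms(parent_timestamps_ms: list[int],
                    child_timestamps_ms: list[int]) -> float:
    """bead_latency = median({t_c}) - median({t_p}), using only timestamps
    committed by other miners (the bead's parents and children), never the
    bead's own timestamp."""
    return median(child_timestamps_ms) - median(parent_timestamps_ms)

# Parents observed around t = 1,000 ms, children around t = 9,000 ms:
# this bead took roughly 8 seconds to propagate and attract children.
print(bead_latency_ms([980, 1_000, 1_030], [8_900, 9_000, 9_200]))  # -> 8000
```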

We want to pay everyone “close” to the main highest-work path and not pay the high-latency bead here. We also don’t want the presence of the high-latency bead to affect other miners’ rewards, as that would constitute an attack vector. Therefore the simplest thing to do is have a hard cutoff on bead_latency, probably around 5s, above which the bead won’t receive a reward. We can allow cohorts to be as large as necessary to include all known beads without orphans/stale beads, so as to get an accurate measure of global latency even if an extended network split occurs, and without biasing for rare but naturally-occurring largish cohorts.
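A minimal sketch of that payment rule, assuming the ~5s cutoff above and a hypothetical per-bead latency table (the names and the exact cutoff are placeholders, not a final specification):

```python
LATENCY_CUTOFF_MS = 5_000  # proposed hard cutoff, roughly 5 seconds

def eligible_beads(latencies_ms: dict[str, float]) -> set[str]:
    """Beads that earn a reward: everyone at or under the hard latency cutoff,
    regardless of how large their cohort happens to be."""
    return {bead for bead, latency in latencies_ms.items()
            if latency <= LATENCY_CUTOFF_MS}

# The high-latency bead from the diagram (top path) measures ~8 s and is
# excluded; the beads on the bottom (highest-work path) are unaffected.
latencies = {"top_bead": 8_000, "b1": 600, "b2": 750, "b3": 540}
print(eligible_beads(latencies))  # top_bead is dropped; b1, b2, b3 are paid
```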

This means:

  1. The DAA uses the global latency, as measured through Nb/Nc, as its (only) timing input
  2. Payment decisions use other miners’ timestamps

@zawy has been killing it with his simulations over the holidays, but I’ll present some similar simulations soon now that the holidays are over :wink:

P.S. if you want to discuss this in real time, join our Braidpool Discord
