Weak blocks, or “near blocks”, are not a new idea.
In short: have miners propagate what amounts to mining shares over the p2p network, enabling PoW-backed sharing of data.
Historical discussions of weak blocks centered on the blocksize and scaling debate, which meant intense focus on reducing the marginal bytes sent per weak block to aid “gigameg blocks”. Seemingly, much of the effort went into DAGs, extra-consensus chains, and similar mechanisms for increasing the blocksize safely.
Almost 10 years have passed, communities have split, and basically everyone is a small blocker of some kind. Setting aside blocksize increases as a motivation, is there value in reconsidering this type of proposal?
Some considerations jumped out to me:
- We have compact blocks deployed, which gives us an off-the-shelf toolset to reduce the bandwidth needed to transmit these weak blocks.
- We are seeing diverging mempool policies for a number of reasons, which, along with mempool churn, results in additional round-trips and delays during final block propagation; this hurts mining fairness and thus decentralization.
What if we can use compact blocks-derived infrastructure to enable weak blocks, which in turn makes compact block relay perform better by reducing round trips?
“Specification”
Re-use a variant of the compact blocks messages to support propagation of “weak compact blocks”. Don’t make a DAG or require consensus over these messages; just use them as a DoS-resistant messaging layer for what miners appear to be working on.
It’s ok for these messages to be slower or fail, as long as the PoW is validated carefully. They are not as speed-critical as regular compact blocks.
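To make that gating concrete, here is a minimal sketch of the weak-PoW check, assuming the weak target is simply the consensus target scaled by a constant multiplier (the function and constant names are illustrative, not the PoC’s actual code):

```cpp
// Illustrative weak-PoW gate: a weak block is accepted for relay only if its
// header hash clears a target k times easier than the consensus target.
#include <arith_uint256.h>
#include <primitives/block.h>
#include <uint256.h>

static constexpr uint32_t WEAK_BLOCK_POW_MULTIPLIER{2}; // assumed constant; see open questions

bool CheckWeakBlockPoW(const CBlockHeader& header, const arith_uint256& consensus_target)
{
    // A real implementation should guard against the scaled target
    // overflowing 256 bits for very easy consensus targets.
    arith_uint256 weak_target{consensus_target};
    weak_target *= WEAK_BLOCK_POW_MULTIPLIER;
    return UintToArith256(header.GetHash()) <= weak_target;
}
```

Because this check is cheap and the PoW is unforgeable, a node can safely rate-limit what it relays before doing any transaction-level validation.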
Open questions:
- What level of validation of weak blocks is required to share the weak block transactions?
- Since we don’t want to require persistence of weak blocks to disk, we need to allow for a full weak block to be “forgotten” even after being advertised via a weak compact block. Add a `notfound`-type response? (One possible shape for the lookup is sketched below.)
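Purely as illustration (the plain map here stands in for whatever storage structure we end up with):

```cpp
// Illustrative: resolve a getwblocktxns request against in-memory storage,
// signalling "forgotten" via std::nullopt so the caller can reply with a
// notfound-type message instead of leaving the peer's request hanging.
#include <map>
#include <optional>
#include <primitives/block.h>
#include <uint256.h>

std::map<uint256, CBlock> g_weak_blocks; // weak blocks we still hold in memory

std::optional<CBlock> LookupWeakBlock(const uint256& weak_block_hash)
{
    if (const auto it{g_weak_blocks.find(weak_block_hash)}; it != g_weak_blocks.end()) {
        return it->second;
    }
    // Advertised earlier but since dropped: with no disk persistence this is
    // a normal outcome, not a protocol violation.
    return std::nullopt;
}
```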
Implementation
Basic PoC here, with light tests only, to demonstrate the high-level idea.
When a weak block comes in, we fetch the missing transactions from our peer using `getwblocktxns`; then, once the weak block is validated as structurally sound, we attempt to insert any transactions we don’t yet have into our mempool, and relay the weak block via weak compact blocks in turn.
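In rough code terms, the receive path after reconstruction looks something like the following (the helper names are stand-ins for the real validation/mempool/relay plumbing, not the PoC’s interfaces):

```cpp
#include <arith_uint256.h>
#include <primitives/block.h>

// Assumed helpers standing in for real validation/mempool/relay plumbing.
arith_uint256 GetNextConsensusTarget();
bool CheckWeakBlockPoW(const CBlockHeader& header, const arith_uint256& target);
bool CheckWeakBlockStructure(const CBlock& block); // e.g. merkle root matches txs
void MaybeSubmitToMempool(const CTransactionRef& tx);
void RelayWeakCompactBlock(const CBlock& block);

// Once missing transactions have been fetched via getwblocktxns and the
// weak block fully reconstructed:
bool ProcessWeakBlock(const CBlock& weak_block)
{
    // Cheap, unforgeable DoS gate first.
    if (!CheckWeakBlockPoW(weak_block.GetBlockHeader(), GetNextConsensusTarget())) return false;
    if (!CheckWeakBlockStructure(weak_block)) return false;

    // Feed anything new to the mempool; rejects still land in the holding cell.
    for (const CTransactionRef& tx : weak_block.vtx) {
        MaybeSubmitToMempool(tx);
    }

    // Announce onward as a weak compact block.
    RelayWeakCompactBlock(weak_block);
    return true;
}
```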
Everything is held in a “holding cell”, even if rejected from the mempool (possibly for standardness reasons).
The implementation only keeps a “best effort” last-seen weak block, but this will clearly be insufficient for a full implementation.
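A fuller version would presumably buffer several recent weak blocks. As a sketch of one direction (the class and the buffer depth are assumptions, not the PoC’s code):

```cpp
// Illustrative bounded holding cell: keep the N most recent weak blocks in
// memory, evicting the oldest. Evicted blocks become "forgotten", which is
// why a notfound-type response (above) is needed.
#include <cstddef>
#include <deque>
#include <primitives/block.h>
#include <uint256.h>

class WeakBlockCell
{
    std::deque<CBlock> m_blocks;                // most recent at the back
    static constexpr size_t MAX_WEAK_BLOCKS{6}; // assumed depth; an open question below

public:
    void Add(const CBlock& weak_block)
    {
        m_blocks.push_back(weak_block);
        if (m_blocks.size() > MAX_WEAK_BLOCKS) m_blocks.pop_front();
    }

    const CBlock* Find(const uint256& hash) const
    {
        for (const CBlock& b : m_blocks) {
            if (b.GetHash() == hash) return &b;
        }
        return nullptr;
    }
};
```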
Open questions:
- What PoW “multiplier” (factor decrease from the consensus-required target) should we set? For the PoC branch I set this to 2, simply for testing. Increasing this value increases the expected number of weak blocks per block interval (see the short derivation after this list).
- How many blocks should we buffer?
- Do we have to buffer the transactions even if they’re already in the mempool? If we’re asked for them via `getwblocktxns`, we need to respond somehow even if they’ve been cycled out of our mempool.
- Should we support the “low bandwidth” path, with an additional round-trip via a `weak headers` message? Should we even support the “high bandwidth” path?
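For intuition on the multiplier question: treating the block hash as uniform and ignoring target-cap effects, scaling the target by $k$ scales the hit probability by $k$, so

$$
\Pr[H \le kT] = k\,\Pr[H \le T] \quad\Longrightarrow\quad \mathbb{E}[\text{weak-only blocks per block interval}] \approx k - 1,
$$

meaning the PoC’s $k = 2$ yields only about one extra weak block per interval on average.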
Bonus use-cases
“Forcible insertion” of transactions that are incentive-compatible but violate anti-DoS rules? (e.g., “pinning replacement”)
Next-block fee estimation?
Next Steps
- Gather higher-level feedback on a proposed specification/implementation
- If there’s general enthusiasm for such a proposal among developers, figure out if/how miners would actually use this. Do miners use RPCs to submit blocks? Should we support a (whitelisted-peer-only) protocol for submitting non-compact versions of weak blocks to nodes for initial propagation? Would miners actually want to run this? Market research is required, although small miners can run this on their own and benefit from the increased network convergence.