Sharing block templates

I find it surprising you’d simply assume AJ didn’t think about that. Under this proposal you maintain a template per peer, so one peer cannot evict other peers’ transactions. It can at worst waste a bounded amount of your memory.


You don’t need GETDATA here; the messages would just be:

sequenceDiagram
    Pleb->>Miner: GETTEMPLATE
    Miner->>Pleb: TEMPLATE [shortids...]
    Pleb->>Miner: SENDBLOCKTXNS [missing-tx-indexes...]
    Miner->>Pleb: BLOCKTXNS [secret-transactions...]
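As a rough illustration (not an actual wire format), those payloads could be modelled as something like the following C++ structs; the struct and field names, and the BIP 152-style short ids, are assumptions on my part:

```cpp
#include <cstdint>
#include <vector>

using ShortId = uint64_t;                  // e.g. a SipHash-derived short id, as in BIP 152
using SerializedTx = std::vector<uint8_t>;

// Pleb -> Miner: ask for the miner's current block template.
struct GetTemplateMsg {};

// Miner -> Pleb: the template, as an ordered list of short transaction ids.
struct TemplateMsg {
    std::vector<ShortId> shortids;
};

// Pleb -> Miner: indexes (into TemplateMsg::shortids) of the txs the pleb
// couldn't reconstruct from its own mempool.
struct SendBlockTxnsMsg {
    std::vector<uint32_t> missing_indexes;
};

// Miner -> Pleb: full transactions for the requested indexes, which is where
// any "secret" txs that were never relayed normally would show up.
struct BlockTxnsMsg {
    std::vector<SerializedTx> txs;
};
```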

Probably having a bastion node would be the best way to avoid sharing data, something like:

sequenceDiagram
    Pleb<<->>Bastion: normal relay
    Bastion->>Helper: txs, templates and blocks
    Helper->>Miner: txs, templates and blocks
    Miner->>Helper: txs and blocks
    Helper->>Bastion: blocks only

Having a fake node that only relays txs to the miner, but never relays txs from the miner, would probably work pretty well; having it pass on templates generated by the bastion node might help ensure CPFP txs and the like don’t get lost, but if the miner and bastion run the same policy and just have slightly different sets of txs, it shouldn’t make a huge difference, I think.
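To make the helper node’s job concrete, here’s a minimal sketch of the asymmetric forwarding rule it would apply; the enum and function names are made up for illustration and aren’t an existing bitcoind option:

```cpp
enum class MsgType { Tx, Template, Block };
enum class Direction { ToMiner, FromMiner };

// Relay txs, templates and blocks towards the miner, but only blocks back out,
// so the miner's secret txs never leak towards the bastion/plebs.
bool ShouldRelay(MsgType type, Direction dir)
{
    if (dir == Direction::ToMiner) return true;
    return type == MsgType::Block;
}
```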

Presumably you want something like that now, if you want to prevent early relay of secret transactions that don’t violate standardness rules.

This is published as BIN25-2 now. That only describes the basic protocol, not how to best use it, which is something still to be figured out. It also doesn’t allow for any sort of delta-encoding as discussed in this thread.

I’ve now done some very limited experimentation on mainnet with it, just between two peers running similar mempool policies. With that setup, I’d expect very few out-of-mempool txs to appear in the templates being shared. The ones I am seeing seem to take the form:

  • this tx was just RBFed, but the template was generated beforehand
  • this tx was confirmed in the previous block, but a new template hasn’t been created since that block came in

The just-confirmed txs seem like they might take a little too long to deal with (I’m seeing ~0.3ms each, but when they happen it can be 3000 txs all at once, so that’s still ~1s of total processing that doesn’t really achieve any useful progress). But we already have a map of the last block’s transactions, so just using that to quickly move on seems like it might be fine.
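A minimal sketch of that shortcut, assuming we keep a set of the txids from the most recently connected block and consult it before doing any per-tx work (the names here are illustrative, not existing Bitcoin Core structures):

```cpp
#include <array>
#include <cstdint>
#include <set>

using Txid = std::array<uint8_t, 32>;

struct LastBlockTxids {
    std::set<Txid> txids;   // refreshed whenever a new tip block is connected

    bool Contains(const Txid& txid) const { return txids.count(txid) > 0; }
};

// Returns true if a template entry can be skipped without any reconsideration:
// ~0.3ms per tx adds up to ~1s for a 3000-tx block, while this lookup is
// effectively free.
bool SkipJustConfirmed(const LastBlockTxids& last_block, const Txid& txid)
{
    return last_block.Contains(txid);
}
```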

The just-RBFed case could perhaps also be cached and skipped over, though I like the idea of reconsidering recently RBFed top-of-mempool transactions as a way of mitigating Riard’s replacement cycling attacks, so it might even be beneficial. In any event there aren’t that many of them and they’re easily resolved, so they aren’t much of a problem.

Still haven’t figured out a decent way of covering templates from honest inbound peers that works well in the presence of adversarial inbound peers.

On the other hand, it occurred to me that if you requested a template from feeler peers before disconnecting, and attempted to add any novel txs they had to your mempool, that would provide a fairly good way of increasing relay connectivity to get around attempted relay censorship, without adding much of an ongoing burden.
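A minimal sketch of that idea, assuming the feeler’s template has already been resolved to txids and we just want the novel ones to fetch before dropping the connection (types and names are illustrative):

```cpp
#include <array>
#include <cstdint>
#include <set>
#include <vector>

using Txid = std::array<uint8_t, 32>;

// Given the txids from a feeler peer's template and the txids already in our
// mempool, return the ones worth fetching before disconnecting the feeler.
std::vector<Txid> NovelTemplateTxs(const std::vector<Txid>& feeler_template,
                                   const std::set<Txid>& our_mempool)
{
    std::vector<Txid> novel;
    for (const Txid& txid : feeler_template) {
        if (our_mempool.count(txid) == 0) novel.push_back(txid);
    }
    return novel;
}
```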


With txs confirmed in the previous block excluded, I’m seeing about 2 txs per minute that get (re)considered. They seem to fall into these buckets:

  • most of them are from slightly old templates where the RBF came through after the template was generated, but before I requested it
  • a few are due to an RBF tx missing a block: the template is from before the block was found, but does include the RBF tx
  • new transactions that I had requested via INV/GETDATA but that hadn’t come through (the 1 minute delay to retry from another peer, vs the ~2 minute delay before requesting a template, presumably means that top-of-mempool txs that aren’t getting a quick reply will often get requested via template sharing)

One way to reduce the RBF hits might be to change it so that rather than sending the latest template immediately, you mark the peer as wanting a template, and send the next template as soon as you generate it. That will make it much rarer to send a template based on an old block, and also much rarer for an RBF to arrive in-between the template being generated by the sender and being processed by the receiver.
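A sketch of that change, assuming a simple per-peer flag that gets cleared when a freshly generated template is sent out (the structure and names are assumptions, not the BIN25-2 implementation):

```cpp
#include <cstdint>
#include <map>
#include <vector>

using NodeId = int64_t;
using ShortId = uint64_t;

struct TemplateSharer {
    std::map<NodeId, bool> wants_template;   // peers waiting for a template

    // On GETTEMPLATE: don't reply with the current (possibly stale) template,
    // just remember that this peer wants one.
    void OnGetTemplate(NodeId peer) { wants_template[peer] = true; }

    // On generating a new template off the current tip: reply to everyone
    // who asked since the last send, so they always get a fresh template.
    template <typename SendFn>
    void OnNewTemplate(const std::vector<ShortId>& tmpl, SendFn send)
    {
        for (auto& [peer, wanted] : wants_template) {
            if (wanted) {
                send(peer, tmpl);   // e.g. queue a TEMPLATE message to this peer
                wanted = false;
            }
        }
    }
};
```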
