Emulating OP_RAND

My default assumption is that you’d need to rerun the protocol for every randomized HTLC every time you update your commitment tx, which I think would start eating up significant CPU/bandwidth pretty quickly (ie, every channel performs like it’s running over Tor). It also seems like it’d be kind-of “hot path”, in that you need to perform it every time you forward a payment, before you can forward that payment. But, I mean, I could be completely wrong – it’s super early to be trying to think about things like that!

$1000 in tx fees at 100sat/vb at $100k/BTC means 10k vbytes, or about 40kB of witness data, which seems large. By contrast, if you can walk through a merkle tree of 2**n entries with 50 witness bytes per step, then a factory with a billion participants would still need only 1.5kB of witness data, for maybe about 530 vbytes, or $53 at 100sat/vb at $100k/BTC. So for me, I think it makes more sense to work on improving the fan-out technology, and to continue thinking of the probabilistic stuff as only relevant for dusty outputs (where they don’t pay enough to justify their own appearance on-chain).
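The arithmetic above can be sketched out explicitly (a rough back-of-envelope, assuming 100 sat/vb, $100k/BTC, witness bytes discounted to 1/4 weight, and 50 witness bytes per merkle step; the `vbytes_for_fee_usd` and `merkle_witness_vbytes` helpers are just illustrative names, not from any library):

```python
import math

SAT_PER_VB = 100          # assumed fee rate
USD_PER_BTC = 100_000     # assumed exchange rate
SAT_PER_BTC = 100_000_000

def vbytes_for_fee_usd(fee_usd):
    """How many vbytes a given USD fee pays for at the assumed rates."""
    sats = fee_usd / USD_PER_BTC * SAT_PER_BTC
    return sats / SAT_PER_VB

def merkle_witness_vbytes(participants, bytes_per_step=50):
    """Witness bytes and vbytes to walk a merkle path over `participants` leaves.

    Witness data is discounted 4x, so witness_bytes / 4 = vbytes.
    """
    depth = math.ceil(math.log2(participants))
    witness_bytes = depth * bytes_per_step
    return witness_bytes, witness_bytes / 4

# $1000 in fees buys 10_000 vbytes, ie ~40kB of (discounted) witness data:
print(vbytes_for_fee_usd(1000))              # 10000.0

# A billion-participant factory needs a 30-step merkle path:
wb, vb = merkle_witness_vbytes(10**9)
print(wb, vb)                                # 1500 375.0

# Fee for the witness data alone (the post's ~530 vbytes / ~$53
# presumably also counts the non-witness parts of the tx):
print(vb * SAT_PER_VB / SAT_PER_BTC * USD_PER_BTC)   # 37.5
```

The gap between the 375 vbytes of pure witness data here and the ~530 vbytes quoted above would be the base transaction fields (inputs, outputs, overhead), which are not discounted.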