I’m working on a feerate diagram comparison implementation for (single transaction) RBF and thinking through the constant factors:
The number of different clusters that a single transaction can conflict with will govern how many chunks we have to consider in the feerate diagram comparison.
In theory, the number of such chunks is bounded by (number of conflicted clusters) × (number of transactions in each cluster), and the feerate diagram comparison test is linear in the number of chunks that form the two diagrams.
So if we permit a transaction to conflict with 100 in-mempool transactions, and if we permit each cluster to have up to 100 transactions, then we’re looking at 10000 potential chunks to iterate. That seems like it could be slow.
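For concreteness, here is a minimal sketch of what a linear-time diagram comparison could look like (my own illustration, not the actual validation code): each diagram is a list of cumulative (size, fee) points, one per chunk, and a single merge-style pass over the breakpoints of both diagrams decides whether the proposed diagram is nowhere worse than the current one. The point is just that the work is proportional to the total number of chunks, which is why the chunk count above is the number that matters.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DiagramPoint {
    int64_t size; // cumulative vsize after this chunk
    int64_t fee;  // cumulative fee after this chunk
};

// Fee of the segment from a to b, evaluated at cumulative size x (a.size <= x <= b.size).
// NOTE: integer division rounds here; real validation code would compare via
// cross-multiplication instead to avoid any rounding error.
static int64_t Interp(const DiagramPoint& a, const DiagramPoint& b, int64_t x)
{
    if (b.size == a.size) return b.fee;
    return a.fee + (b.fee - a.fee) * (x - a.size) / (b.size - a.size);
}

// Returns true if `proposed` is on or above `current` at every breakpoint of
// either diagram; for piecewise-linear diagrams, checking the breakpoints is
// enough. Both diagrams are sorted by cumulative size, start at an implicit
// (0, 0), and are assumed to end at the same total size.
bool DiagramNotWorse(const std::vector<DiagramPoint>& current,
                     const std::vector<DiagramPoint>& proposed)
{
    DiagramPoint prev_cur{0, 0}, prev_prop{0, 0}; // last breakpoint passed on each diagram
    size_t i = 0, j = 0;
    while (i < current.size() || j < proposed.size()) {
        // Next breakpoint to evaluate: the smaller of the two upcoming sizes.
        const int64_t next_cur = i < current.size() ? current[i].size : INT64_MAX;
        const int64_t next_prop = j < proposed.size() ? proposed[j].size : INT64_MAX;
        const int64_t x = std::min(next_cur, next_prop);

        // Height of each diagram at x (interpolated within the active segment).
        const int64_t fee_cur = i < current.size() ? Interp(prev_cur, current[i], x) : prev_cur.fee;
        const int64_t fee_prop = j < proposed.size() ? Interp(prev_prop, proposed[j], x) : prev_prop.fee;
        if (fee_prop < fee_cur) return false;

        // Advance whichever diagram(s) have a breakpoint exactly at x.
        if (next_cur == x) prev_cur = current[i++];
        if (next_prop == x) prev_prop = proposed[j++];
    }
    return true;
}
```

Each loop iteration advances at least one of the two indices, so the pass is O(n + m) in the chunk counts; with the 100 × 100 worst case above, that's still 10,000 iterations per replacement evaluation.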
We’ve also previously discussed getting rid of this limit on the number of conflicts and replacing it with a limit on the number of clusters that would need to be relinearized. I’m no longer sure we’ll be able to do that, given this new proposed validation logic.
Are there any tricks we can perform to bring these numbers down?