Merging incomparable linearizations

Merging isn’t necessarily commutative either, I think, in the case where both linearizations have different (e.g., non-overlapping) first chunks at equal feerates.

What are the chunks here? I have a hard time imagining how [5,3,1,8,0] turns into [5, 4, 0].

[5] [3,1,8] (feerate 4) [0]: the middle three transactions together form one chunk with combined feerate 4.
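
In case it helps, here is a minimal Python sketch of the chunking rule at work (a hypothetical helper, not the actual implementation): transactions are (fee, size) pairs, and a tail chunk is merged into its predecessor while its feerate is strictly higher.

```python
from fractions import Fraction

def chunk_feerates(lin):
    """Chunk a linearization given as (fee, size) pairs and
    return the resulting list of chunk feerates."""
    chunks = []  # each entry is [total_fee, total_size]
    for fee, size in lin:
        chunks.append([fee, size])
        # Merge the tail chunk upward while it beats its predecessor.
        while len(chunks) > 1 and Fraction(*chunks[-1]) > Fraction(*chunks[-2]):
            fee2, size2 = chunks.pop()
            chunks[-1][0] += fee2
            chunks[-1][1] += size2
    return [Fraction(f, s) for f, s in chunks]

# Unit-size transactions with fees 5, 3, 1, 8, 0:
print(chunk_feerates([(5, 1), (3, 1), (1, 1), (8, 1), (0, 1)]))
# -> [Fraction(5, 1), Fraction(4, 1), Fraction(0, 1)], i.e. [5, 4, 0]
```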


Hmm, indeed. This example shows that bestPi isn’t always as good as the original PiMerge description.

Given that, I lean towards sticking with the original, as I don’t think the performance difference between the two is significant. If there were no observable difference at all, it would make sense to pick the faster one, but that doesn’t seem to be the case here.

The linearizations that come out can differ depending on whether the transactions from L_1 or L_2 are chosen when the found subsets have the same feerate, but I believe this cannot affect the fee-size diagram. Generally I use “smaller size is better” as a tie-breaker when comparing equal-feerate chunks, as inside linearization algorithms this sometimes avoids accidentally merging equal-feerate subsets, but I don’t believe it matters here which tie-breaker is chosen. Do you have a counter-example?
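
For illustration, a tie-breaking comparison along those lines could look like this (a sketch with made-up names, not the actual code):

```python
def better_chunk(a, b):
    """Return True if chunk a = (fee, size) is strictly preferred
    over chunk b: higher feerate wins, and on an exact feerate tie
    the smaller chunk wins ("smaller size is better")."""
    # Compare fee_a/size_a with fee_b/size_b by cross-multiplying,
    # which avoids floating-point rounding.
    cross = a[0] * b[1] - b[0] * a[1]
    if cross != 0:
        return cross > 0
    return a[1] < b[1]
```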

EDIT: here is a counterexample:

  • Transactions: A=1, B=2, C=3, D=2, E=1 (fees; all the same size)
  • L1: [B,E,C,A,D], chunked as [B:2, EC:2, AD:1.5]
  • L2: [D,B,A,C,E], chunked as [D:2, B:2, AC:2, E:1]
  • Merge(L1,L2): [B,C,D,E,A], chunked as [BC:2.5, D:2, E:1, A:1]
  • Merge(L2,L1): [D,B,C,A,E], chunked as [DBC:2.33, A:1, E:1]
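
These chunkings are easy to verify with a small script (unit sizes, fees as above; made-up helper names):

```python
from fractions import Fraction

FEES = {'A': 1, 'B': 2, 'C': 3, 'D': 2, 'E': 1}  # all unit size

def chunking(order):
    """Chunk a linearization of unit-size transactions: merge a tail
    chunk into its predecessor while its feerate is strictly higher."""
    chunks = []  # each entry: (concatenated names, total fee)
    for tx in order:
        chunks.append((tx, FEES[tx]))
        while len(chunks) > 1 and (
                Fraction(chunks[-1][1], len(chunks[-1][0])) >
                Fraction(chunks[-2][1], len(chunks[-2][0]))):
            txs, fee = chunks.pop()
            chunks[-1] = (chunks[-1][0] + txs, chunks[-1][1] + fee)
    return [(txs, Fraction(fee, len(txs))) for txs, fee in chunks]

for order in ("BECAD", "DBACE", "BCDEA", "DBCAE"):
    print(order, ' '.join(f"{txs}:{fr}" for txs, fr in chunking(order)))
# BECAD B:2 EC:2 AD:3/2
# DBACE D:2 B:2 AC:2 E:1
# BCDEA BC:5/2 D:2 E:1 A:1
# DBCAE DBC:7/3 A:1 E:1
```

The diagrams differ from the very first chunk (BC:2.5 versus DBC:2.33), so the merge order does affect the fee-size diagram.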

OK, with that counterexample in place I’m now leaning the other direction again: the right approach is to use the simplest algorithm that works.

My impression is that the gains from the non-commutativity and non-associativity of merging are effectively stumbled upon at random, by accidentally having or putting transactions in the right order. These accidents can’t be engineered for, and trying more subsets will always carry a small chance of finding something better still. But that doesn’t mean it’s worth spending time on, even if the cost is small: that time could be better spent on directly trying more things in the linearization algorithm.


Just a quick note: using insights from the composition algorithm, it appears possible to further simplify merging:

  • Instead of finding the input linearization with the best remaining prefix, it is possible to only consider prefixes of what remains of the original chunks of the input linearizations.
  • Instead of trying the intersection of that best prefix with every prefix of what remains of the other linearization, it suffices to consider only prefixes aligned with the chunk boundaries of the remainder, or even just the original chunk boundaries (see the sketch below).
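
To make that concrete, here is a minimal Python sketch of a merge along these lines (one possible reading of the simplification, not the actual implementation). Transactions are (name, fee, size) tuples; the loop repeatedly takes the better first chunk of the two remainders and only tries intersections at the chunk boundaries of the other side.

```python
from fractions import Fraction

def feerate(txs):
    """Combined feerate of a list of (name, fee, size) tuples."""
    return Fraction(sum(fee for _, fee, _ in txs),
                    sum(size for _, _, size in txs))

def chunks(lin):
    """Split a linearization into chunks: merge a tail chunk into
    its predecessor while the tail's feerate is strictly higher."""
    out = []
    for tx in lin:
        out.append([tx])
        while len(out) > 1 and feerate(out[-1]) > feerate(out[-2]):
            tail = out.pop()
            out[-1] += tail
    return out

def merge(l1, l2):
    """Merge two linearizations of the same transaction set."""
    l1, l2 = list(l1), list(l2)
    result = []
    while l1:
        # Candidate: the higher-feerate first chunk of the two remainders.
        c1, c2 = chunks(l1)[0], chunks(l2)[0]
        best, other = (c1, l2) if feerate(c1) >= feerate(c2) else (c2, l1)
        names = {name for name, _, _ in best}
        # Try intersecting it with chunk-aligned prefixes of the other
        # linearization only, keeping the highest-feerate result.
        cand, prefix = best, []
        for ch in chunks(other):
            prefix = prefix + ch
            inter = [tx for tx in prefix if tx[0] in names]
            if inter and feerate(inter) > feerate(cand):
                cand = inter
        picked = {name for name, _, _ in cand}
        result += cand
        l1 = [tx for tx in l1 if tx[0] not in picked]
        l2 = [tx for tx in l2 if tx[0] not in picked]
    return result
```

On the counterexample above (unit sizes), this sketch applied to [B,E,C,A,D] and [D,B,A,C,E] in the two argument orders yields [B,C,D,E,A] and [D,B,C,A,E] respectively, matching the chunkings given earlier.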