You’re of course right: since we’re not optimally pruning, these effects can still happen. So you might want to have the evicting party pay for it, but then let the evicted transaction back in via re-submission. Or just prune more optimally.
Shower-thought-level quality aside, the goal is to give wallets, which may not even see that the cluster limit is being hit, a way to improve the mempool and get their transactions confirmed. If we find an even better way later, we should take it.
A couple of future use-cases for motivation, all relying on cluster sizes above 2:
- 0-conf funding transactions in LN. The funding transaction may have descendants added to it, so the anchor spend (or a slightly larger replacement) is unable to enter the cluster.
- Ark/Timeout trees. You may want log(n) nodes of the tree published with a single anchor at the end of the chain, but once enough branches are put into the mempool, you’re unable to make further attempts at inclusion.
Adding anchors at each step “fixes” these, but paying significant marginal bytes to protect against scenarios no one has seen in the wild is a hard sell.
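To make the tree case concrete, here is a rough back-of-the-envelope sketch in Python. The 64-transaction cluster count, the binary-tree shape, the number of published branches, and the ~13 vB size of a bare anchor output are all illustrative assumptions on my part, not settled parameters; the point is only the shape of the problem, not the exact numbers.

```python
import math

# Assumed per-cluster transaction-count limit (illustrative only).
CLUSTER_TX_LIMIT = 64


def can_attach_anchor_spend(published_branch_txs: int) -> bool:
    """The anchor spend is one more tx in the same cluster as the published
    branches; it only fits if the cluster is not already at the count limit."""
    return published_branch_txs + 1 <= CLUSTER_TX_LIMIT


# One branch of a binary tree with n leaves is ~log2(n) transactions.
n_leaves = 1024
one_branch = math.ceil(math.log2(n_leaves))  # ~10 txs

# Other users publishing their branches of the same shared tree land in the
# same cluster, since everything descends from the same root transaction.
many_branches = 7 * one_branch               # ~70 txs

print(can_attach_anchor_spend(one_branch))    # True:  10 + 1 <= 64
print(can_attach_anchor_spend(many_branches)) # False: 70 + 1 >  64

# Rough marginal cost of the "anchor at every node" workaround:
# one extra bare anchor output per node along a branch (size assumed).
P2A_OUTPUT_VBYTES = 13
print(one_branch * P2A_OUTPUT_VBYTES)         # ~130 extra vB per published branch
```

Run as-is it prints True, then False, then the extra-vbyte figure: with only your own branch in the mempool the single trailing anchor spend fits, but once enough branches from other users pile into the shared cluster it no longer does, and the per-node-anchor fix buys its way out of that by paying those marginal bytes on every branch up front.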