V3 transaction policy for anti-pinning

I’m trying to better understand the claimed v3 pinning vulnerability. Do I understand correctly that:

  1. The worst case, which is what PT analyzed, is a commitment transaction with no pending HTLCs.
  2. The attacker reduces the feerate of the package/cluster containing the commitment transaction, using their own money to pay the fees.
  3. A commitment transaction with no pending HTLCs has no special urgency (i.e., no timelocks that will expire to the detriment of the broadcasting party).

In other words, the worst case form of this attack would be that Bob will have to wait a bit longer to respend his channel funds but Mallory will pay his fees?

Obviously, the attack also works against commitment transactions with pending HTLCs, but for each additional HTLC output, the attack quickly becomes less effective due to the decrease in relative size difference. Perhaps more interestingly, if Bob pays out of band to fee bump both the commitment transaction and Mallory’s pin (ephemeral anchor spend), Bob will possibly pay less fees than he would’ve without the pinning attack and Mallory will end up losing the funds she spent on the attack.

Of course, paying to defeat an attack out of band is still bad for mining decentralization, but I think all of the above points to this pinning attack being possibly ineffective.

1 Like

yeah, protocol and wallet devs should be reluctant to rely on oob payments to resolve mempool conflicts or timeout situations. such an api is a single point of failure and a huge centralization vector.

not gonna pretend i fully understand the various pinning attacks, but i expect the major pain would be when you try to settle some contract by adding fees through the anchor, and someone else outbids your tx with higher fees (which is not a problem for you in itself) but with dependence on other unconfirmed ancestors, which they can then double spend, resulting in the eviction of both the fee bumps and the anchored tx from mempools.

I don’t think that works: if you get to the point where miners offer an API for out of band fee payments that’s trustworthy enough and discounted enough that it sees wide adoption, leading to a centralisation risk, then if you did have a soft fork to require ephemeral anchors be spent, then those miners could work around your soft fork as follows:

  • create a “nouveau ephemeral anchors” BIP, with the same behaviour as before the soft-fork, but with a different scriptPubKey pattern
  • flood the network with nodes that relay according to the new BIP, have those nodes preferentially peer with each other to ensure there aren’t disconnected subgraphs
  • get this implemented by the devs that had already integrated with their API
  • push the patch to core noting that it’s in wide use on the network
  • profit

I think that sort of soft-fork would be okay if it were in line with economic incentives (ie, the only time an ephemeral anchor is in a block but not immediately spent is people testing things on regtest/signet/testnet, or due to bugs), but if the economic incentives are strongly pushing the other way (miners, wallet devs and users all collaborating to save a buck despite the centralisation risk), I don’t think a soft fork here would actually help.

(The other soft-fork approach would be: “an ephemeral anchor output can only be spent in the same block that it was created; it’s removed from the utxo set once the block is completely processed”. That resolves the “bugs lead to dust in the utxo set” issue, but doesn’t touch out-of-band-payment incentives, and introduces the potential for the child tx to become invalid in a reorg, if for some reason it isn’t included in the same block as its parent)

1 Like

I think you’re mostly describing the “cycle attack”, where the child is RBF’d out of the mempool, and the new child no longer spends the ephemeral anchor, causing the parent transaction to be evicted.

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-October/021999.html

Mitigation is, in general, to aggressively re-broadcast, since each cycle costs the attacker fees (unlike pins).

You can think of it in another, imo more principled way: someone is paying to censor you, but they have to continuously bid block after block (and in between blocks if you’re doing it right, plus mempool synchrony assumptions), whereas you only pay once at the end of the game at most, and at your expected rate.
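To make that asymmetry concrete, here’s a toy sketch. The per-cycle attacker cost, censorship duration, and final victim fee are all made-up numbers; the point is only the shape of the cost curves (attacker pays every cycle, victim pays once at the end):

```python
# Toy model of replacement-cycling cost asymmetry (all numbers hypothetical).
BLOCKS_CENSORED = 144            # assume one day of sustained censorship
ATTACKER_COST_PER_CYCLE = 2000   # sats the attacker burns per replacement cycle
VICTIM_FINAL_FEE = 10_000        # sats the victim pays once, when the tx confirms

# Attacker must keep paying for every block (or more often) the attack lasts;
# the rebroadcasting victim pays a single fee at the end, at their chosen rate.
attacker_total = BLOCKS_CENSORED * ATTACKER_COST_PER_CYCLE
print(f"attacker spends {attacker_total} sats; victim spends {VICTIM_FINAL_FEE} sats")
```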

I don’t want to get too deep into cycle discussion though; it’s been done to death.

1 Like

Just checked. At the moment the fees required to get into the next block are 30 sat/vB while the min relay fee is 22.2 sat/vB. The mempool feerate is super flat. So an attacker could easily force you to wait basically forever, because those min-fee txs aren’t gonna get mined, unless you pay the 50% extra. Not good. And the attacker’s txs will get pushed out of the mempool as fees go up, so Mallory isn’t paying anything to attack.

HTLC outputs aren’t much more space. I don’t think Peter’s numbers would change that much if you did the computation for that case too. You could do the same attack on the HTLC as well I think?

@glozow How are V3 transactions supposed to work with HTLCs anyway? Going to have V3 HTLC-success/failure transactions too?

Hmm, so Peter’s analysis made assumptions about the attacker’s fee margin of error that aren’t valid right now. It assumed the attacker paid 1/2.5th of the victim’s fees, which would be less than the min relay fee. The attacker could probably get away with paying just 25% less right now, which would force the victim to pay 2.8x more fees. Big difference!
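A rough sketch of that arithmetic, with hypothetical sizes (a zero-fee ~200 vB commitment, a ~150 vB honest CPFP child, and an attacker child at the 1000 vB v3 limit paying 25% under the next-block feerate but above min relay). The exact multiple depends heavily on the assumed sizes, so this won’t reproduce the 2.8x figure precisely:

```python
# Pin cost arithmetic under assumed (hypothetical) transaction sizes.
NEXT_BLOCK_FEERATE = 30.0    # sat/vB, from the post
ATTACKER_FEERATE = 22.5      # sat/vB: 25% under next-block, above 22.2 min relay
PARENT_VSIZE = 200           # vB, zero-fee commitment tx (assumption)
HONEST_CHILD_VSIZE = 150     # vB, small anchor spend (assumption)
PIN_CHILD_VSIZE = 1000       # vB, v3 max child size
INCREMENTAL_RELAY = 1.0      # sat/vB

# Without the pin: pay next-block feerate on the whole package.
honest_fee = (PARENT_VSIZE + HONEST_CHILD_VSIZE) * NEXT_BLOCK_FEERATE

# With the pin: the replacement must beat the pin's absolute fee plus an
# incremental relay fee on its own vsize (BIP 125 rules 3 and 4).
pin_fee = PIN_CHILD_VSIZE * ATTACKER_FEERATE
replace_fee = pin_fee + (PARENT_VSIZE + HONEST_CHILD_VSIZE) * INCREMENTAL_RELAY

print(f"honest: {honest_fee:.0f} sat, pinned: {replace_fee:.0f} sat, "
      f"x{replace_fee / honest_fee:.1f}")
```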

1 Like

The exact pin will indeed rely on exact mempool conditions, which is why I mostly just said “5x max” in my BIP text: https://github.com/instagibbs/bips/blob/527b007dbf5b9a89895017030183370e05468ae6/bip-ephemeralanchors.mediawiki#motivation

It’s a 5x theoretical pin, down from 500x, with practical limits depending on factors generally considered outside the attacker’s and defender’s power. The only direct way to further lower this is to lower the max CPFP child size, without affecting “honest” CPFPs. Originally the child size was something like 4kvB; we already bikeshedded it once!

@0xB10C made some nice charts which may inform better as well: https://twitter.com/0xB10C/status/1743626031070630038

Spec for HTLCs are identical to today, pretty much. Making those non-pinnable is out of scope for now as it would be a significantly larger spec change, with clear drawbacks. I think we can do better later without bothering spec people too much :slight_smile:

But then why would you use an ephemeral anchor which can be maliciously pinned by anyone instead of a normal anchor with a checksig? Why make the attackers able to attack more stuff?

@glozow Is this true? Why aren’t we figuring out this stuff now while the spec is developing? I thought V3 was supposed to fix all these pinning issues.

Assuming we’re talking specifically about LN, every choice has tradeoffs. From what I can see we have these shorter term choices:

  1. 2 Keyed anchors: Relies on CPFP carveout (which is going away eventually due to incoherence in a cluster mempool world). Not really viable.
  2. 1 Keyed Anchor (with no sharing): Relies on V3 + package RBF, which means you have to pay the absolute fee of the counter-party’s package + incremental bytes (slightly worse than replacing just the child with an adversarial counterparty).
  3. 1 Shared Keyed Anchor: Relies on V3 + package RBF. Both parties can independently spend the same anchor. Pinning bounded by the child tx size rule. Requires all other outputs be relative timelocked for at least one block, and costs the additional vbytes for a keyspend. Output must be above dust value. Allows for package RBF or direct RBF against anchor spends. Allows for “theft” of base anchor value.
  4. Keyless anchor: In benign cases, strictly cheaper to spend, and results in smaller commitment transactions. Allows using any outputs previously encumbered by 1 CSV to be unlocked for potential CPFP. Downside is it allows general adversaries to try and pin in the same reduced way as the highly motivated counter-party could in (3).

For more general smart contracting, it has plenty of benefits over CPFP-carveout based solutions: bips/bip-ephemeralanchors.mediawiki at 527b007dbf5b9a89895017030183370e05468ae6 · instagibbs/bips · GitHub

It was previously investigated by me, presented to LN spec group last year in NYC in person, and shelved because all HTLC-Success paths must be pre-signed to commit to some sort of opt-in policy, and either:

  1. If using V3+ephemeral anchors, additional overhead bytes for HTLCs and protocol changes (icy reception to additional bytes)
  2. Some sort of “V4” transaction which helps directly with 1-input-1-output case (Investigated this twice; not many people seemed to like this either).

In summation, we’re trying to offer something people can use to safely replace CPFP carveout, whatever that is, while also improving the situation for wallets who would prefer some anti-pin features.

1 Like

In my head, the idea here is that opting into v3 is a collaboration between (some) node operators and transaction creators to ignore potentially valid transactions in order to more readily relay high feerate alternatives.

So currently you might see txs:

  • :white_check_mark: P = small parent, low feerate
  • :white_check_mark: C1 = huge child, low feerate, high fee
  • :x: C2 = small child, high feerate, modest fee, conflicts with C1

and end up with P and C1 stuck in your mempool and refuse to accept or relay C2, even though due to its high fee, it might well even be included in the next block. That remains a reasonable decision by individual nodes, as simply replacing C1 by C2 would make nodes vulnerable to relay spam: broadcast many C1’s with lots of data, get that distributed across the network for free, then replace those txs with C2’s, making sure that you’re only actually paying a small amount in fees for all that relay spam. This is known as the “free relay” problem.

The workaround that v3 makes available is simply that C1 is now rejected in the first place in many cases. This prevents free relay (C1 isn’t relayed at all), and solves the incentive compatibility problem where a tx that would be acceptable in the next block (C2) isn’t relayed at all.
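As a toy sketch of that decision (sizes and fees are invented, and the check is heavily simplified relative to real v3 policy), the point is that the huge low-feerate child C1 is rejected outright, so the high-feerate C2 never has to fight it via RBF:

```python
# Simplified model of the v3 child-size rule from the P/C1/C2 example.
V3_MAX_CHILD_VSIZE = 1000  # vB

class Tx:
    def __init__(self, vsize, fee, parent=None, v3=False):
        self.vsize, self.fee, self.parent, self.v3 = vsize, fee, parent, v3

    @property
    def feerate(self):
        return self.fee / self.vsize

def accept_v3_child(child):
    """Reject any v3 child whose vsize exceeds the policy limit."""
    return not (child.v3 and child.parent and child.vsize > V3_MAX_CHILD_VSIZE)

P  = Tx(vsize=200,   fee=200,   v3=True)            # small parent, low feerate
C1 = Tx(vsize=50000, fee=55000, parent=P, v3=True)  # huge child, high fee, low feerate
C2 = Tx(vsize=150,   fee=15000, parent=P, v3=True)  # small child, high feerate

print(accept_v3_child(C1))  # False: never enters the mempool, no free relay
print(accept_v3_child(C2))  # True: propagates without needing to replace C1
```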

This works okay even if only adopted by a proportion of the network: C2 is still able to propagate over the subgraph of nodes that implement this policy and eventually reach miners that run a compatible policy, who will eventually mine it.

It is in no way a perfect solution to pinning – some systems will be designed in a way that a large child is sometimes necessary and those systems won’t be able to opt-in to v3 rules. Also, even relatively small children can create a fee amount/rate pin, and there are other pinning vectors than high-fee/low-feerate. That’s fine: making things incrementally better for some people is still making things better.

The limitations with v3 are quite annoying: one ancestor / one descendant means you can only have a pair of related v3 txs in the mempool, and nothing more complicated. In particular you can’t do batch CPFP where a single v3 child pays for a bunch of v3 parents. But again, slightly better is still better, and relaxing policy rules if we figure out better ways of doing things is less problematic than restricting policy rules.

With just the v3 constraints, I think you’d want to design your protocol such that parent transactions only have a single output that’s spendable immediately – that way any CPFP spends will naturally conflict with each other and RBF rules will apply, and the size limits applying to the child tx will limit the maximum fees those children will need to pay. Alternatively, if there are multiple immediately spendable outputs that all have an n-of-n multisig arrangement, and all the txs that spend those outputs spend some common output, that could work as well. That approach seems unlikely to be useful in practice though?

When you add the ephemeral anchor rule, in particular that the EA output must be spent for the tx that creates the EA output to remain in the mempool, it becomes okay to allow other outputs to be immediately spendable: each child must spend the EA output, so they must conflict with each other, and RBF rules are applied. That seems to me like a significant bonus in flexibility – allowing lightning commitment tx balance outputs to be spent unilaterally by the same tx that spends the EA output, which I think would be problematic otherwise.
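A minimal sketch of that conflict property (outpoints are hypothetical): because every child must spend the single ephemeral anchor outpoint, any two candidate children necessarily share an input, so they conflict and RBF rules apply.

```python
# Two spends of the same ephemeral anchor outpoint always conflict.
def txs_conflict(inputs_a, inputs_b):
    """Transactions conflict if they spend any common outpoint."""
    return bool(inputs_a & inputs_b)

ea = ("commitment_txid", 2)               # hypothetical EA outpoint
honest_bump  = {ea, ("wallet_txid", 0)}   # honest CPFP child
other_spend  = {ea, ("attacker_txid", 1)} # any competing child

print(txs_conflict(honest_bump, other_spend))  # True: RBF rules apply
```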

2 Likes

If you are expected to be able to conflict with the parent, this isn’t necessarily required.

e.g., an exchange doing a batched payout can sign multiple versions of withdrawals at different feerates and confidently replace them with up to 1 kvB of overhead if customers attempt sweeps. Locking customers’ addresses with 1 CSV isn’t a thing you can really do.

1 Like

Hi all,

Adding to the use case pile for packages, DLCs would use packaging to guarantee broadcast of refund transactions.

Refund transactions are signed at contract funding time. They become valid after a negotiated refund_locktime in the offer message. As the fees are calculated at contract funding, when it comes time to refund the funds because the oracle didn’t do their job, fees on the network could be drastically different.

With packages, you can now submit a package of (refund_tx, CPFP_refund_tx) to get your refund tx confirmed. Either Alice or Bob could submit this package, as both would have valid outputs on refund_tx.

1 Like

Hi all,

I just wanted to update this topic with a link to some research I did on the effect of the v3 rules on transactions that my datalogging node saw on the network last year.

See Analysis of attempting to imbue LN commitment transaction spends with v3 semantics.

One interesting element of the data is that very few anchor spends are large – see the histogram at the end of the post. I think this raises the question of whether a smaller value than 1000vbytes should be chosen as the max v3 child size, which reduces the amount of additional tx size that might need to be paid for when doing an RBF of another v3 child.

Any thoughts @t-bast @MattCorallo ? Ultimately I think the choice of that parameter is driven by the LN wallet use case (for other use cases involving v3, I think that having no children is even better, so it’s just a tradeoff between utility of being able to create larger cpfp transactions and pinning costs that come from that).

1 Like

It’s very tempting to reduce the 1000 vbytes value, but past data only shows honest attempts (with very few pending HTLCs) as we haven’t seen any widespread attacks on the network yet, so it’s hard to figure out what value would be “better”.

The issue is that commitment transactions may be very large when filled with pending HTLCs. I believe that lnd, for example, allows up to 483 HTLCs in each direction in the commitment. When filled with 2 * 483 HTLCs, it already costs a bit more than 800 000 sats in fees to reach 20 sat/byte! I would expect such a fee bump to require multiple wallet inputs (even though we have no idea what the utxo set of node operators looks like).
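A back-of-the-envelope check of that figure, assuming ~43 vB per HTLC output, a ~300 vB base commitment, and reading the 20 sat/byte as sat/vbyte:

```python
# Rough fee estimate for a commitment tx filled with HTLCs (assumed sizes).
HTLC_OUTPUT_VSIZE = 43       # vB per HTLC output (assumption)
BASE_COMMITMENT_VSIZE = 300  # vB, commitment with no HTLCs (assumption)
FEERATE = 20                 # sat/vB
MAX_HTLCS = 483 * 2          # 483 in each direction

vsize = BASE_COMMITMENT_VSIZE + MAX_HTLCS * HTLC_OUTPUT_VSIZE
fee = vsize * FEERATE
print(f"{vsize} vB -> {fee} sats")  # a bit more than 800,000 sats, as stated
```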

It probably doesn’t make sense to pay that much fees though: node operators who don’t have a large utxo set should limit the maximum value of pending HTLCs and the maximum number of pending HTLCs to something that matches the on-chain fees they’re ready to pay. But we’ve seen that default values are sticky, so people may be at risk if we don’t allow some leeway here.

I honestly don’t know what value would make sense here, as any value will be too risky for some, and too safe for others.

1 Like

How crazy would it be to implement a variety of options for the descendant size? For example:

  • v3 == 200 vB (value suggested by Peter Todd)
  • v4 == 400 vB
  • v5 == 600 vB
  • v10 == 1600 vB (max LN historical size observed by @sdaftuar)

Then each use case could decide for themselves.

Primarily I think:

  1. It still leaves the problem of deciding how many utxos your future self may need for fees for presigned contracts, though it does let you possibly scale this value along with sats-at-risk for the smart contract in question.
  2. More bits used, more fingerprinting
  3. We’ll likely learn (even more!) from working on and deploying V3 that we might want to apply forward. We might require expanded topologies, which may make interaction between many bits weird. We might want to upgrade V3 directly. We might decide we want a new bit to mean a policy for securing SIGHASH_SINGLE|ACP-like constructs instead, to cover more cases.

If the concern is really that you might have a very large commitment transaction (say 30k-40k vbytes) that may require a lot of UTXOs in order to CPFP, then it would seem that the downside from having to first consolidate your UTXOs down to 1 in a separate transaction, get that confirmed, and then use that to CPFP is not so large, in percentage terms?

The additional number of vbytes consumed to consolidate first would be something like 110 vbytes, if I’m calculating right (1 extra transaction’s overhead, plus one extra output created and one extra input spent).
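Checking that estimate with assumed P2WPKH sizes (fixed tx overhead, one extra output created by the consolidation, one extra input to spend it later):

```python
# Sanity check of the ~110 vB consolidation overhead (assumed P2WPKH sizes).
TX_OVERHEAD_VSIZE = 10.5     # vB: version, locktime, counts, segwit marker
P2WPKH_OUTPUT_VSIZE = 31.0   # vB per output
P2WPKH_INPUT_VSIZE = 68.0    # vB per input, including witness (assumption)

extra = TX_OVERHEAD_VSIZE + P2WPKH_OUTPUT_VSIZE + P2WPKH_INPUT_VSIZE
print(f"extra ~{extra:.0f} vB")  # roughly the 110 vB estimated above
```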

Not ideal for a long run solution, but in thinking about tradeoffs, maybe minimizing pinning potential by going with a smaller child size is more valuable?

1 Like

I agree, this is somewhat simple logic that can be easily implemented to handle those rare cases. We must make sure we don’t forget to implement this though!

1 Like