CTV+CSFS: Can we reach consensus on a first step towards covenants?

I like the aphorism “trust, but verify” (of course, some Bitcoiners take it further with “don’t trust, verify” – but either way, the verification part is important). When I try to verify statements like these about CTV, they usually come up pretty lacking.

Taking the specific examples you list for CTV+CSFS, rather than the ones on the utxos page:

  • “CTV+CSFS functions as an equivalent for SIGHASH_ANYPREVOUT (APO), which enables Lightning Symmetry (formerly Eltoo)”

CTV+CSFS isn’t equivalent to APO, it’s somewhat more costly by requiring you to explicitly include the CTV hash in the witness data. The TXHASH approach is (in my opinion) a substantial improvement on that.

The TXHASH approach would be a taproot-only solution (as is CSFS), which would have the benefit of simplifying the change (removing policy considerations for relay of bare-CTV txs, and not having to consider CTV in P2SH being unsatisfiable), of removing the need for segwit-related hashes to be precalculated for non-segwit transactions (preventing the potential for IBD slowdown), and of not requiring complicated workarounds. I believe the only cost would be adding 70 witness bytes to txs in the “congestion control” use case; however, that use case doesn’t seem viable to me in the first place.

(Strictly, “TXHASH EQUAL” is perhaps an extra witness byte compared to “CTV”, though “TXHASH EQUALVERIFY” is the same as “CTV DROP”. “[sig] TXHASH [p] CSFS” is probably a byte cheaper than APO, since you add “TXHASH” but save both a SIGHASH byte and a public key version byte. TXHASH also has the benefit that you can just run the script “TXHASH” through a script interpreter to get the TXHASH you want to commit to, rather than having to construct it out of band)
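For concreteness, here's a rough Python tally of the serialized byte counts behind that comparison. The per-element sizes are assumptions for illustration (1 byte per opcode, a length byte per push), and OP_TXHASH is of course a hypothetical opcode:

```python
# Rough byte accounting for the script fragments compared above.
# Assumptions: each opcode serializes to 1 byte; pushing a 32-byte
# hash or x-only key costs 33 bytes (1 length byte + 32 data bytes);
# a 64-byte BIP-340 signature push costs 65 bytes, or 66 with an
# explicit sighash byte. OP_TXHASH is hypothetical, counted as 1 byte.

OPCODE = 1          # OP_TXHASH, OP_EQUAL, OP_EQUALVERIFY, OP_CSFS, ...
PUSH32 = 1 + 32     # a 32-byte hash or x-only public key
PUSH33 = 1 + 33     # a public key with a version byte (BIP-118 style)
SIG64 = 1 + 64      # BIP-340 signature, no sighash byte
SIG65 = 1 + 65      # BIP-340 signature plus an explicit sighash byte

# "<hash> TXHASH EQUAL" vs "<hash> CTV"
txhash_equal = PUSH32 + OPCODE + OPCODE   # 35
ctv = PUSH32 + OPCODE                     # 34: one byte cheaper

# "<hash> TXHASH EQUALVERIFY" vs "<hash> CTV DROP"
txhash_equalverify = PUSH32 + OPCODE + OPCODE  # 35
ctv_drop = PUSH32 + OPCODE + OPCODE            # 35: same size

# "<sig> TXHASH <pubkey> CSFS" vs APO ("<sig||sighash> CHECKSIG"
# against a 33-byte versioned key)
txhash_csfs = SIG64 + OPCODE + PUSH32 + OPCODE  # 100
apo = SIG65 + PUSH33 + OPCODE                   # 101: one byte more

print(txhash_equal, ctv)             # 35 34
print(txhash_equalverify, ctv_drop)  # 35 35
print(txhash_csfs, apo)              # 100 101
```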

Beyond that, the exploration into ln-symmetry has been both fairly preliminary (to the best of my knowledge, nobody has been able to reproduce Greg’s results; I’ve tried, and I believe cguida has as well – ie, the “verify” part hasn’t succeeded in general). It also used APO, the annex, and some custom relay rules, not CTV or CSFS or the current TRUC/ephemeral anchor model, so even if it did provide good evidence that eltoo was possible with APO, someone would still need to do the work to redesign it for CTV/TXHASH/CSFS before it was good evidence for that approach. I suspect TXHASH, CAT and CSFS would actually be a good combination here, as that would allow for a single signature to easily commit to both the tx and publication of some data while minimising wasted bytes and without needing to allow relay of random data in the annex. (Doing an adaptor signature rather than forcing publication of data would be more efficient on-chain, but seems to still be hard to convert from a whiteboard spec to an actual implementation)

Unfortunately, eltoo/ln-symmetry doesn’t seem to be a very high priority in the lightning space from what I can see – eg, see the priorities from LDK.

I think having both CAT and CSFS would make implementing PTLCs a bit easier – you could replace “ SIZE 32 EQUALVERIFY SHA256 EQUALVERIFY” with “ SIZE 32 EQUALVERIFY TUCK CAT SWAP DUP CSFS” where “y” is the s-part of the signature, calculated as preimage/(1+H(x,x,x)). Not simple per se, but it separates out the PTLC calculation from the signing of the tx as a whole (avoiding the need for adaptor signatures), and only needs to be calculated once, not once per channel update. I can’t see a way to get the same result from CSFS alone.
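As a rough illustration of that s-part calculation, here's the modular arithmetic on its own. The challenge function below is a stand-in for the real BIP-340 tagged hash (which commits to R.x, P.x and the message), and none of the actual curve operations are included; it's only meant to show the y = preimage/(1+H(x,x,x)) arithmetic over the secp256k1 group order:

```python
# Illustrative arithmetic for y = preimage / (1 + H(x,x,x)) mod n.
# Not a real BIP-340 signer: challenge() is a stand-in for the
# tagged challenge hash, and no elliptic-curve math is performed.
import hashlib

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def challenge(x: int) -> int:
    # Stand-in for H(x, x, x): hash three copies of the 32-byte value.
    data = x.to_bytes(32, "big") * 3
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def s_part(preimage: int) -> int:
    # y = preimage * (1 + H(x,x,x))^-1 mod n
    h = challenge(preimage)
    return (preimage * pow(1 + h, -1, N)) % N

# Sanity check: multiplying back by (1 + H) recovers the preimage,
# which is what lets the script verify the relationship with CSFS.
p = 0xDEADBEEF
assert (s_part(p) * (1 + challenge(p))) % N == p
```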

  • “I am obviously biased here, but CTV is a game-changer for Ark”

When I looked into this, as far as I could tell Ark has been built based on the Elements introspection opcodes and on pre-signed transactions, but there has been no practical exploration into actually using it with CTV. For something that was announced as being implementable either with BIP 118 (APO) or BIP 119 (CTV), never having actually done that comes across as a huge red flag for me, particularly when people are willing to implement crazy things like the “purrfect” vaults.

  • “Protocols involving DLCs, which have been growing in popularity, can be significantly simplified using CTV.”

Doesn’t having CSFS available on its own give you equally efficient and much more flexible simplifications of DLCs? I think having CAT available as well would probably also be fairly powerful here.

  • “BitVM would be able to drastically reduce its script sizes by replacing their current use of Lamport signatures implemented in Script by simple CSFS operations”

I think CAT would also be a significant win here.

  • “Very limited vaults have been built with CTV.”

I don’t believe the CTV-only “vaults” are at all interesting or useful in practice; I’m not sure if adding CSFS into the mix changes that in any way. Fundamentally, I think you want some ability to calculate an output scriptPubKey as “back to cold storage, or spend here after some delay”, and CTV and the like only give the “spend here” part of that. Features like that are very complicated, and are very poorly explored. I think they’re worth exploring, but probably not worth rushing.

I mean, personally I have denied that covenants are a useful tool and continue to do so. Covenants are a bad idea, and misusing that term to also cover things that are useful was also a bad idea. Jeremy may have had a good excuse for making that mistake in 2019 or so, but there’s no reason to continue it.

If you’re seriously trying to establish consensus about activating CTV and CSFS simultaneously, I would expect the first step would be to revise the CTV BIP so that its motivation/rationale are actually consistent with such an action.

There’s no fundamental reason for TXHASH to be particularly more flexible than CTV; it could be implemented precisely as “push the BIP-119 hash for this tx onto the stack”. If you wanted slightly more flexibility, you could use a SIGHASH-like approach where a handful of values hash different parts of the tx, which could also precisely cover all the BIP-118 sighash values. It’s not immediately clear to me which variants would actually be useful, though; I think having CAT available would cover most of the cases where committing to the OP_CODESEPARATOR position or to the script being executed as a whole would be useful, which are the main differences between the two APO variants. I don’t think upgradability considerations here are even particularly necessary; just introducing OP_TXHASH2 in future seems roughly fine. (For general tx introspection, an OP_TX that doesn’t automatically hash everything is perhaps more useful, but general tx introspection is a much wider design space.)
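To make “push the BIP-119 hash for this tx” concrete, here's a hedged Python sketch of that default template hash. The field layout follows my reading of BIP 119's DefaultCheckTemplateVerifyHash; treat it as illustrative rather than consensus-exact (for instance, scriptSig length prefixes are simplified to a single byte):

```python
# Sketch of the BIP-119 default template hash that a hypothetical
# OP_TXHASH could push onto the stack. Illustrative only: the field
# order follows my reading of BIP 119, and serialization details
# (e.g. varint length prefixes) are simplified.
import hashlib
import struct

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def ctv_template_hash(version: int, locktime: int,
                      script_sigs: list,   # one bytes scriptSig per input
                      sequences: list,     # one nSequence int per input
                      outputs: list,       # each output already serialized
                      input_index: int) -> bytes:
    r = struct.pack("<i", version)
    r += struct.pack("<I", locktime)
    if any(script_sigs):
        # scriptSigs are only committed to when any is non-empty
        r += sha256(b"".join(bytes([len(s)]) + s for s in script_sigs))
    r += struct.pack("<I", len(sequences))   # number of inputs
    r += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    r += struct.pack("<I", len(outputs))     # number of outputs
    r += sha256(b"".join(outputs))
    r += struct.pack("<I", input_index)
    return sha256(r)

# One input, one output (8-byte value + length-prefixed OP_TRUE script).
out = struct.pack("<q", 50_000) + bytes([1, 0x51])
h = ctv_template_hash(2, 0, [b""], [0xFFFFFFFD], [out], 0)
assert len(h) == 32
```

CTV compares this digest against a 32-byte hash supplied in the script; the point above is that TXHASH could compute the very same digest and simply leave it on the stack for EQUAL, CSFS, or anything else to consume.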

So much for “I won’t be … pointing fingers”, I guess? In any event, I would personally argue this was a serious flaw in how we deployed taproot, and one that we shouldn’t repeat. While we did have some practical experimentation (e.g., the optech taproot workshops explored schnorr, alternative tapscripts, and degrading multisig), and certainly had plenty of internal tests, that was still pretty limited. In particular, there were a variety of problems we didn’t catch particularly early, but likely could have with more focus on “does this work in practice, or just in theory?”:

  • Going from an abstract design for musig to an actual implementation was quite complicated, and perhaps would have been simpler if the x-only/32-byte point changes could have been reconsidered. The optech workshops predated this change, so didn’t have a chance to catch the problem; and the taproot review sessions afterwards were mostly focussed on the theory, so failures to actually use the features in practice didn’t raise huge alarms.
  • Neutrino had a bug regarding compact block filters impacting blocks with taproot spends, that was only discovered a little over a week before taproot activated on mainnet.
  • Likewise, there were multiple bugs in btcd related to parsing of transactions that were only caught long after taproot was in active use. Really, these were holdover bugs since segwit, however taproot at least made transactions that trigger the first bug relayable. Either way, more practical experimentation on public test networks, rather than just in our internal test framework, would likely have allowed these bugs to have been discovered earlier, and potentially fixed prior to it causing downtime for users on mainnet, with risk of loss of funds.

From a social point of view this outcome probably shouldn’t be surprising – there were plenty of people pushing for taproot to be activated ASAP, and nobody saying that we should slow down and spend more time testing things and demonstrating that they’re useful.

As far as “most of us can’t afford to spend months building” goes, there are two things you should consider. One is that with open source, you only need one person to build something, after which everyone can reap the benefits. If you can’t find even one person willing to spend time on a moonshot project, it’s probably not actually much of a moonshot. Second, reportedly these things only take a few hours, not months.

Personally, I think the biggest blocker to progress here continues to be CTV’s misguided description of “covenants”, and its misguided and unjustified concern about “recursive covenants”. Particularly given those concerns are incompatible with co-activation of CSFS, I would have thought a simple first step would be updating BIP 119 to remove/correct them, which might then also allow some rational discussion of CAT and similar features. I and others have already tried talking to Jeremy both publicly and privately about those issues with no success, but maybe you’ll have some. Otherwise, discarding BIP 119 entirely and starting from scratch with an equally expressive and more efficient simplified TXHASH BIP could be a good option.
