CTV+CSFS: Can we reach consensus on a first step towards covenants?

I know firsthand that a large custodian would make use of CTV to replace presigned transactions specifically for custodial operations.

Beyond that, as you should recall from your VAULT days, CTV (or an equivalent) is a necessary prerequisite to doing any kind of “better” vault. It’s table stakes. Rob Hamilton recently substantiated the industry demand for good vaults, using VAULT or similar, and once again I can corroborate firsthand there.

So to downplay the demand for vaults is very strange to me. It’s an obvious use that people keep asking for in various forms, and CTV is a required primitive.

The best argument you can make against CTV on this count is that TXHASH’s CTV mode would serve the same purpose, but scrutinizing the TXHASH implementation for validity is probably an order of magnitude harder than for CTV’s, given TXHASH’s complexity.

I’m not understanding you here - CTV vaults, and my implementation in particular, allows you to either do a time-delayed spend, or sweep immediately to cold. That much is shown in the first state diagram on the page.

I’m not sure what your basis for saying this is. Aside from the “thundering herd use” potential, congestion control can be used by miners to compress payouts in the coinbase transaction during times of elevated feerates. I spoke to LukeJr about this last night, who runs Ocean Mining, and he said that miners would make use of this - although possibly on the basis of firmware limitations rather than fee smoothing.

The point stands that there are many uses for what’s referred to as “congestion control,” and to dismiss them all casually seems presumptuous.

The “bare legacy” mode for CTV is especially important in these cases because it’s the most succinct first-stage commitment (<32byte-hash> OP_CTV) that can serve to lock in a series of n payments on bitcoin.
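For readers who want to see what that 32-byte hash actually commits to, here is a minimal Python sketch of the BIP-119 default template hash. It is a simplification (the scriptSigs hash, which BIP-119 includes only when some scriptSig is non-empty, is omitted, and the helper names and example outputs are illustrative), so consult BIP-119 itself before relying on it:

```python
import hashlib
import struct

def sha256(b):
    return hashlib.sha256(b).digest()

def ctv_template_hash(version, locktime, sequences, outputs, input_index):
    """Sketch of the BIP-119 default template hash: commits to nVersion,
    nLockTime, input count, the hash of all nSequences, output count,
    the hash of all serialized outputs, and the input index. The
    scriptSigs hash (included only when a scriptSig is non-empty) is
    omitted, since bare CTV spends carry empty scriptSigs."""
    def ser_output(amount, spk):
        # 8-byte LE amount, compact script length (assumes < 0xfd), script bytes
        return struct.pack("<Q", amount) + bytes([len(spk)]) + spk

    r = struct.pack("<i", version)
    r += struct.pack("<I", locktime)
    r += struct.pack("<I", len(sequences))
    r += sha256(b"".join(struct.pack("<I", s) for s in sequences))
    r += struct.pack("<I", len(outputs))
    r += sha256(b"".join(ser_output(a, spk) for a, spk in outputs))
    r += struct.pack("<I", input_index)
    return sha256(r)

# A 34-byte bare scriptPubKey (<32byte-hash> OP_CTV) locks in this whole batch:
payouts = [(50_000, bytes.fromhex("0014" + "11" * 20)),   # illustrative P2WPKH
           (25_000, bytes.fromhex("0014" + "22" * 20))]
h = ctv_template_hash(version=2, locktime=0, sequences=[0],
                      outputs=payouts, input_index=0)
```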


Oh, and I forgot to mention on the subject of vaults/presigned-txns: I’ve spoken to two Blockstream engineers in the past couple months who both say that CTV could be used to drastically improve the Liquid timelock-fallback script that requires coins to be rolled on a periodic basis. This may apply to Liana as well.


For the sake of discussion, it would be helpful to get these claims on the record somewhere public, with details, from the claimants themselves. I think I know the upsides/downsides of the approach, but without details it’s hard to engage.


I am skeptical of this claim.

First of all, it’s vague. What does “drastically improve” even mean, concretely? Following Liquid’s documentation, I assume you are talking about the peg-in script. TL;DR for everyone here: Bitcoin users who want to onboard to the Liquid sidechain pay to this Script. Coins sent to this Script by onboarding users may later be spent by 2/3 of the Liquid watchmen (custodians) when a Liquid user wants to peg-out to Bitcoin. This Script contains a timelock clause, such that those coins may be spent using three emergency keys after 4032 blocks.

Using such a timelocked spending path directly in the receiving Script presents the same trade-off as for Liana: if they don’t want the emergency recovery to become available, the Liquid watchmen need to spend every coin within 28 days (4032 blocks) of its reception. This is an inescapable trade-off between funds availability in case the recovery is needed and the security margin that keeps the weaker spending path from becoming available unless absolutely necessary.

More interesting covenants provide a way out of this, by delaying the timelock so it is only triggered through a second stage similar to that of vault constructions. I think claiming CTV can achieve as much is misleading, as in this case you would have to commit to the second-stage transaction at the time of receiving the coins. Since the receiver crafts the address to request funds on, this means the Liquid watchmen would need to both 1) know the amount before giving away the address and 2) trust that the user will for sure use the exact amount they stated in the previous round of communication, or the funds may be locked forever or have the excess burned to fees. In addition, they need to trust the address will never be reused with a different amount in the future, something that is infamously hard to get users to do.
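To make the footgun concrete: the template hash commits to the exact outputs, so a template built for one amount can never adapt to another. A toy illustration, reusing the ctv_template_hash sketch from earlier in this thread (amounts and scripts are hypothetical):

```python
# The watchmen committed to an exact second-stage output before the user
# deposited. If the user deposits any other amount, the fixed outputs either
# exceed the input value (invalid tx) or the excess is burned to fees.
second_stage_spk = bytes.fromhex("0020" + "33" * 32)  # illustrative P2WSH
expected = ctv_template_hash(2, 0, [0], [(100_000_000, second_stage_spk)], 0)
actual   = ctv_template_hash(2, 0, [0], [(100_000_001, second_stage_spk)], 0)
assert expected != actual  # one sat of difference and the template can't match
```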

This scheme would probably do (much) more harm than good, and this is why I’m skeptical either Liquid or Liana engineers would ever put such a footgun in the hands of their users. Therefore, I do not think it is a valid motivation for a CTV soft fork.


Lightning Eltoo

ajtowns:

"CTV+CSFS isn’t equivalent to APO, it’s somewhat more costly by requiring you to explicitly include the CTV hash in the witness data. The TXHASH approach is (in my opinion) a substantial improvement on that.”

stevenroose:

"I think it’s fair to call something equivalent even if it’s a little more costly but achieves the same functionality. (Of course I wouldn’t go as far to make the same argument for things like CAT where you have to dump the entire tx on stack including several dozen opcodes.) A better argument would be that CTV+CSFS can only emulate APO|ALL and not the other APO flags. Though it seems that the APO|ALL variant of APO has the most interest.”

I don’t believe we can say things are equivalent when the marginal on-chain witness cost can fluctuate by tens of bytes. In Lightning, we already have to trim outputs out of the commitment transaction if the cost of the output’s scriptPubkey plus its claiming input, at ongoing mempool feerates, exceeds the output’s value (a very imperfect heuristic…). This is a safety issue if you open an LN channel with a miner, something you can never rule out.

Going for the more expensive Eltoo, i.e. the one where the witness stack has to provide <pubkey> <message> <signature> with a 32-byte message, adds those 32 bytes compared to the ANYPREVOUT sighash approach, and that might leave some channels unable to uncooperatively force-close at a time of fee spikes.
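To put rough numbers on that cost difference (my own back-of-envelope accounting, not figures from the thread; element length prefixes and script bytes are ignored):

```python
# Hedged back-of-envelope: extra witness data for CSFS-based rebinding
# versus a native ANYPREVOUT signature, whose message is implicit.
SCHNORR_SIG = 64   # BIP-340 signature (65 bytes with an explicit sighash byte)
TEMPLATE_MSG = 32  # the CTV-style hash that must be pushed explicitly

apo_witness_bytes = SCHNORR_SIG                  # the signature alone rebinds
csfs_witness_bytes = SCHNORR_SIG + TEMPLATE_MSG  # the 32-byte message rides along
extra_wu = csfs_witness_bytes - apo_witness_bytes
print(extra_wu, "extra weight units =", extra_wu / 4, "vbytes per state update")
```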

Note that this concern about marginal channel or off-chain payments very likely affects Ark too. It’s even hard to compare the cost of a marginal LN channel payment vs the cost of a marginal Ark payment, as with Ark you have an ASP and you have to come up with some probabilistic estimation of the ASP’s interactivity.

If my memory is correct, this question of the efficiency of logically equivalent primitives was already discussed for OP_CHECKMERKLEBRANCHVERIFY vs the check-if-this-is-a-P2TR templated approach (i.e. BIP341).

Discreet Log Contracts

ajtowns:

“Doesn’t having CSFS available on its own give you equally efficient and much more flexible simplifications of DLCs? I think having CAT available as well would probably also be fairly powerful here”.

See this thread on the Optech GitHub for the trade-offs on the usage of CTV for Discreet Log Contracts.

tl;dr: By adding a hash for each outcome (i.e. <hash1_event> OP_CTV <hash2_event> OP_CTV …), the witness script grows with the number of outcomes, and the growth can be logarithmic if the bet is logarithmic in its structure. Evaluating which primitive is best for a Discreet Log Contract is very much a function of (1) the marginal value being bet on and (2) the probabilistic structure of the bet (i.e. are you betting on a price with equal chance among all outcomes, or on a sports result where score ranges can be approximated).
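As a rough illustration of the structure being discussed (opcode names and CSFS stack ordering are assumptions for illustration, not a finalized spec), one tapleaf per outcome might pair an oracle check with a CTV payout template:

```python
# Hypothetical sketch: one tapleaf per DLC outcome. The oracle attests to
# an outcome; OP_CTV pins the corresponding payout split. Opcode names and
# stack ordering are assumptions, for illustration only.
def dlc_outcome_leaf(outcome_hash_hex, oracle_pk_hex, payout_template_hash_hex):
    return [
        outcome_hash_hex,                 # message the oracle signs
        oracle_pk_hex,                    # oracle's public key
        "OP_CHECKSIGFROMSTACKVERIFY",     # witness supplies the oracle signature
        payout_template_hash_hex,
        "OP_CHECKTEMPLATEVERIFY",         # spending tx must match this payout
    ]

# One leaf per outcome: script count grows with the number of outcomes,
# unless the bet's structure lets you aggregate ranges into fewer leaves.
leaves = [dlc_outcome_leaf(o, "ab" * 32, t) for o, t in [
    ("01" * 32, "aa" * 32),   # e.g. "price below X" -> payout split 1
    ("02" * 32, "bb" * 32),   # e.g. "price above X" -> payout split 2
]]
```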

Push-Based Approach Templating

stevenroose:

The TXHASH BIP explicitly also specifies to enable CHECKTXHASHVERIFY in legacy and segwit context and outlines how hashes should be calculated for those contexts.

See the old Johnson Lau idea, OP_PUSHTXDATA, for another variant of the push approach.

My belief on push-vs-implicit-access-via-sighash-digest is that it all comes down to which semantics you wish to check on the spending transaction of your constrained UTXO and, if it’s not a templated approach, what the shortest path of stack operations is across the N transaction fields.

Of course, there can be numerous low-level implementation details to make that more efficient, like bitvectors, special opcodes to push the target input, or assumptions about the “most likely” fetched transaction fields.

I don’t know if it’s a programming model we wish to move towards… This would start to look very much like assembly, where you have to program CPU registers at the bit level. If you think Bitcoin Script programming is already low-level and error-prone, this is an order of magnitude worse: complexity for the use-case programmer in exchange for more on-chain efficiency.

On the Taproot Rush

ajtowns:

So much for “I won’t be … pointing fingers”, I guess? In any event, I would personally argue this was a serious flaw in how we deployed taproot, and one that we shouldn’t repeat.

I share the opinion that we could have spent more time experimenting with the use-cases enabled by Schnorr / Taproot. There was a research page at the time listing all the ideas enabled by Schnorr. I did an experiment implementing PTLC+DLC in early ~2020. The lesson I took from it is that we would have to seriously rewrite the LN state machine. As far as I can tell, this has been confirmed by the more recent research of other LN folks.

On the more conceptual limitations of Taproot, the lack of commitment in the control block to the oddness of the internal pubkey is a limitation when leveraging a Schnorr signature as a mutable cryptographic accumulator for payment pools. This limitation was known before the activation of Taproot, and it has been discussed a few times on the mailing list and documented by Optech.

On the merge of the Taproot feature, let’s remember that the PR implementing it was merged on the last day of the feature freeze for 0.21.0, and I don’t believe I was the only one to find that a bit of a rush…

One can see on the GitHub pull request the names of those who ACKed the merged commit at the time; I think seriously reviewing and testing code for a consensus change is always more expensive than talking about it:

  • instagibbs
  • benthecarman
  • kallewoof
  • jonasnick
  • fjahr
  • achow101
  • jamesob (post-merge)
  • ajtowns (post-merge)
  • ariard (post-merge)
  • marcofalke (post-merge)

On the usage of the “Covenant” word

ajtowns:

Personally, I think the biggest blocker to progress here continues to be CTV’s misguided description of “covenants”, and its misguided and unjustified concern about “recursive covenants”.

To be fair, the usage of the word covenant in Bitcoin is not Jeremy’s initiative. I think it comes from Gmax’s 2013 bitcointalk.org post “CoinCovenants using SCIP signatures, an amusingly bad idea”, and it was in the aftermath also used by folks like Roconnor, Johnson Lau, Roasbeef, and even by myself as early as 2019, when OP_CTV was still called OP_SECURETHEBAG.

I’m not aware of Satoshi himself / herself having used the word covenant in their public writing. However, the idea of using Script for many use-cases beyond payments, that’s Satoshi: there is a quote somewhere to that effect, talking about escrow and having to think carefully about the design of Script ahead of time.

The problem of “recursive covenants” is also laid out in Gmax’s 2013 post, as the basic question of whether malicious “covenants” could be devised, and it was abundantly commented on at the time on bitcointalk.org.

On the lack of enthusiasm for Lightning / Eltoo

1440000bytes:

We see an arrogance and non sense being repeated here by developers who are misusing their reputation in the community. Some of these developers have no reasons to block CTV and been writing non sense for years that affects bitcoin.

To bring more context on why there is a lack of enthusiasm for Eltoo Lightning: during 2022, Greg Sanders worked on a fork of core-lightning with eltoo support, and this was reviewed multiple times by AJ Towns and myself.

It was during the review of this Lightning-Eltoo work, while considering hypothetical novel attacks on eltoo Lightning (“Updates Overflow” attacks against two-party Eltoo?), that I found what is (sadly) known today as Replacement Cycling Attacks.

This is an experimental data point for anyone who believes that reviewing complex Bitcoin second-layers is shamanism or gatekeeping. I still strongly believe that end-to-end PoC’ing, testing, and adversarial review are good practices for getting secure protocols, and no, not all second-layer issues can be fixed “in flight” just like that, especially if the fixes themselves demand serious engineering work at the base layer (e.g. better replacement / eviction policy algorithms).

Ark + CTV

stevenroose:

But we have been working on this implementation for over 6 months, it is working on bitcoin’s vanilla signet, we have ample integration tests that test various unilateral exit scenarios and all of these are passing for the ctv-based trees.

If I’m understanding correctly, Ark is put forward as an example of a near-production or production-ready use-case that would benefit from a hash-chain covenant like CTV. Given Ark relies on a single “blessed” party, the ASP, I’m still curious how an ASP client can be sure they can redeem their balance with an on-chain exit.

Namely, how do you generalize the fair exchange of a secret to N parties, where among the set of N there is a blessed party M? How do you avoid collusion between the other N-1 parties plus the blessed party M against the remaining party? Fair exchange of secrets is well studied in the 90’s distributed-systems literature. There were also papers a few years ago on Lightning analyzing unsafe update mechanisms for many parties.

Of course, you can embed a congestion-control tree in the on-chain swap UTXO coming from the ASP, but then no efficiency gain remains from the usage of CTV (there is an nVersion / nLocktime fields penalty at each depth of the tree).
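For intuition on the penalty being described, here is my own rough accounting (hedged: exact sizes depend on script types, varints, and witness data, all ignored here):

```python
# Hedged back-of-envelope for the per-depth overhead of a CTV tree: each
# internal node is a full transaction re-stating nVersion, nLockTime and
# one input, costs that a single flat batch pays only once.
VERSION, LOCKTIME = 4, 4
INPUT = 36 + 1 + 4          # outpoint + empty scriptSig length + nSequence
NODE_OVERHEAD = VERSION + LOCKTIME + INPUT  # ~49 bytes per tree node

def tree_overhead_bytes(leaves, arity=2):
    nodes, level = 0, leaves
    while level > 1:
        level = -(-level // arity)  # ceil division: parents at the next level
        nodes += level
    return nodes * NODE_OVERHEAD

print(tree_overhead_bytes(256))  # overhead of a binary tree over 256 payouts
```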

Vault + CTV / better primitives

jamesob:

Beyond that, as you should recall from your VAULT days, CTV (or an equivalent) is a necessary prerequisite to doing any kind of “better” vault. It’s table stakes. Rob Hamilton recently substantiated the industry demand for good vaults, using VAULT or similar, and once again I can corroborate firsthand there.

The issue with vaults is of course dynamic fees for time-sensitive transactions; if I remember correctly, the emergency path is a time-sensitive path for which you have to be sure dynamic fees work well. Even if you pre-sign at some crazy feerate, there is no guarantee that a nation-state-sponsored hacking group won’t engage in a feerate race to delay confirmation (e.g. with costless bribes to the miners) until the compromised withdrawal can confirm.

This is not paranoia: if one follows smart-contract exploits in the wider cryptocurrency world (feel free to check rekt.news), you often see hacks in the $100M - $500M range. So an attacker willing to burn 10% of the targeted value in miner-bribing fees does not seem unrealistic or unreasonable to me, if you assume the attacker already has the keys for the withdrawal or unvault target “hot” wallet.

To be frank, fixing dynamic fees is very likely going to be something along the lines of “fee-dependent timelocks”. And here everyone is free to believe it or not (don’t trust, verify), but given all the technical information and hypotheses I’m aware of, those are not going to be simple consensus changes…

Now, on the more precise question of CheckTemplateVerify and its usage for vaults, the best piece of information I’m aware of on the key-sig-vs-hash-chain question is this PhD thesis, section 4, “Evolving Bitcoin Custody”.

Of course, while CheckTemplateVerify introduces immutability of a chain of transactions, where intermediary vault transactions do not have to be pre-signed and the corresponding private keys deleted, there is still the issue of the key ceremony to be dealt with. This immutability, effective after a reorg delay, is nevertheless a novel property if one goes on to design bitcoin second-layers.

If you’re a software engineer with know-how on the difference between userspace and kernelspace, or familiarity with core Internet protocols (…yes, all those talks on bitcoin consensus changes can be very technical in the raw sense), you’ll know key ceremonies and the corresponding operational guidelines can be incredibly hard to get right. It all depends on the threat model considered, but it’s a hard thing to do, and hard to do repeatedly in production.

So what is a key ceremony for bitcoin vaults? It deals with the transition from the cold wallet to the hotter wallets, though for bitcoin this is not only a “blind signature”: it means verifying that the spent UTXO exists, and that the unvaulting outputs’ scriptPubkeys are valid and byte-for-byte equal to the ones that are going to be verified by the Script run by bitcoin full-nodes at run-time.

As people familiar with the Validating Lightning Signer know (its series of custom checks, and the situation there of having to rewrite an LN state machine embeddable within the constraints of a secure enclave), being sure that your vault is the “correct” vault in production is a bit more complex safety-wise than “is this a P2WSH, yes or no”.

So I strongly believe the bottleneck for evaluating a CTV-enabled vault is a vault proof-of-concept specified enough that all the logic of the vault can be described with chain headers, UTXO proofs, and output descriptors, such that they can be given through a carefully designed interface to a HW device or secure enclave, with the vault setup verification done there.
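Concretely, the enclave-side check being argued for could be as small as the following hedged sketch (reusing the ctv_template_hash helper sketched earlier in this thread; the header / UTXO-proof verification is elided, and all names are hypothetical):

```python
# Hypothetical enclave-side rule: never endorse a spend toward a CTV hash
# you did not recompute yourself from byte-for-byte verified fields.
def enclave_approve_unvault(claimed_hash, version, locktime,
                            sequences, outputs):
    # 1. (elided) verify the spent UTXO exists via chain headers + UTXO proof
    # 2. recompute the template over the exact outputs the vault policy allows
    recomputed = ctv_template_hash(version, locktime, sequences, outputs, 0)
    # 3. only endorse if it matches the hash the host asked us to sign toward
    return recomputed == claimed_hash
```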

Do we have any bitcoin HW vendors or secure-enclave vendors ready to extend their interfaces for a simple two-step CTV vault protocol?

Not only with output-script support, but also with efficient proving of the UTXO set, which can be challenging programming-wise, as secure-enclave RAM and cache memory are limited by design. And as far as I have researched so far, constrained templating like CTV does not come with tx-withholding risks and does not alter the UTXO model, though my thanks if you prove me wrong here.

I spoke with @jamesob about this. I am no longer affiliated with Blockstream, but I do feel that the Liquid script can benefit from CTV. I worked on the research team; maybe @stevenroose and @instagibbs, who worked more closely with Liquid, can correct me if I am wrong here.

You’re right. A straightforward implementation would be highly prone to issues for the same reasons you mentioned. However, this can be addressed as follows: users deposit funds using a standard Liquid peg-in script. These peg-in funds are then consolidated into a separate CTV address managed by the watchmen. This shifts the responsibility of correct CTV usage from users to the Liquid engineering team.

While this approach requires careful engineering, the potential fee savings could make it well worth the effort.

I don’t think Liana can achieve something similar since it is non-custodial. In this case, once the funds are in the watchmen’s custody, they can optimize spending as needed.

Overall, I agree with the sentiment that general-purpose vaults are better and less error-prone for this use case, for the same reasons you listed. However, as mentioned above, CTV remains useful on its own for avoiding recurring fees.

Greg used the term precisely and correctly – his post describes taking a general zero-knowledge proof feature and using that to produce actual covenant constructions where a coin and all possible spends of that coin are permanently constrained in a particular way, creating a burn address that allows the burnt coins to be burnt again and again:

A particular sort of rule could take the form of requiring any output scriptpubkey to be of the form THIS_VALIDATION_KEY && {whatever rules you want} and by doing so you have effectively created a coin which is forever subject to a covenant which will run with the coin and forever constrain the use of it and its descendants, degrading its fungibility.

On not ignoring the importance of script cost for contested settlement:

I’ve made a comparison of many alternatives for developing eltoo. Briefly, APO+CTV+standardized_annex is the most efficient of known alternatives. Compared to this:

Method                | uncontested | once contested
----------------------|-------------|---------------
LNHANCE               | +8vB        | +16vB
@instagibbs APO+annex | +16vB       | +32vB
CTV+CSFS              | +58vB       | +109vB

As in other protocols CTV consistently reduces cost and sigops even when used with other mechanisms.

edit: corrected some figures and made into a table

Eh, I seem to somehow have missed the second half of AJ’s first response. My bad.

I wasn’t trying to point fingers at whoever was involved in the taproot deployment. Rather, I was trying to indicate that the bar for the taproot soft fork was met on perceived technical merit only, while today it seems that practical usage is the only bar upheld. Is this because people actually don’t see technical merit in CTV and CSFS and hence want convincing by usage, or because we believe purely technical merit is no longer enough for a consensus upgrade?

Arguably CTV so closely resembles presigned transactions that, yes, one can swap presigned txs for CTV in a few hours, but that is only possible because we spent several months on the presigned-txs project while keeping the potential of swapping to CTV in mind when designing the API. This strategy wouldn’t apply to any project that really needs a new primitive to be built.

I’m confused. Are you talking about a technical amendment to CTV? Or are you (again, like in your recent mailing list post) whining about the Motivation section of the BIP? Excuse my French, but if your “biggest blocker” in what is trying to be a pragmatic evaluation of a technical change is some wording in a BIP’s motivation section, I can’t help but question whether you are actually trying to constructively participate in this conversation.

FWIW, I wouldn’t mind this outcome of course, I implemented TXHASH exactly because it’s a more powerful version of CTV that offers a lot more flexibility. Though when I published the BIP and implementation, I barely received any feedback, so I figured there was not much interest. Given your own remarks on the topic, I suspect you also haven’t read the BIP text yourself.

It’s true that CTV isn’t suitable for a scheme where you want to present the user with an address that they can deposit any sort of funds into. However, when these transactions are managed by software, I think it’s ok. Liquid could just abandon the “pegin address” concept and have wallets implement pegins internally.

We do the same with Ark: when onboarding, you craft an exit tx and send it to the server for cosigning. With CTV you could do away with this round of interaction, and since the amount is visible in the tx, you could also re-generate the template hash and recover funds using a mnemonic, which in the current system is impossible because loss of the cosign signature means you can’t recover the exit tx.

(For completeness, TXHASH is able to assert that the input amount equals the output amount, so it would solve this particular problem if you only use a single input. It can’t assert that the input sum equals the output sum, unfortunately.)


Andrew Poelstra is the second person I spoke with about this. Today he gave me permission to affirm here that he thinks CTV could be used for this purpose with Liquid.

He writes:

What the watchmen need is super simple. “These keys allow moving the coins to a special timelocked staging area, from which the original keys can still pull them back” It’s basically a one-step vault.

In fact we literally implemented it in terms of unsigned transactions in an early version of liquid, but it was too difficult to keep them synced up and invalidated.

Given that he is short on time (and a Delving Bitcoin account), he graciously allowed me to republish his words here.

So we have two independent attestations that CTV could be used to good effect on Liquid – hopefully that’s sufficient to put the matter to rest.


I still don’t see any quantification of the benefit to Liquid, so it seems premature for you to put the matter to rest.

What has been put to rest is that two highly qualified Liquid devs have said, “yeah, we’d use CTV for Liquid.”

You’re the one saying you’re putting things to rest. I’m curious to learn of the magnitude of the advantage.

Yes, I’m familiar with the idea that you have a SNARK verifier as a replacement for the script interpreter, so you get as an input stack <input public data> <verification public key> <signature> <hash_masked_tx>, where you can assert properties of the spending tx (e.g. is this output scriptPubkey 32 bytes in size).

By setting the verification rule that an output redeem script must be of the form THIS_VALIDATION_KEY && {whatever rules you want}, in my understanding you’re introducing recursion into the verification rules.

The recursion can be bounded (e.g. using the tx nVersion field as a counter) or unbounded, by generating a correct and custom <verification public key>.
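A toy sketch of the bounded variant (entirely hypothetical, just to pin down the idea of using nVersion as a counter):

```python
# Hypothetical rule a SNARK circuit could enforce: nVersion acts as a
# decrementing counter bounding how many times the covenant re-applies.
def recursive_rule_ok(spending_version, parent_version,
                      output_scripts, validation_key):
    if spending_version != parent_version - 1:
        return False                 # the counter must strictly decrease
    if spending_version == 0:
        return True                  # recursion bottoms out: coins are freed
    # otherwise every output must re-embed THIS_VALIDATION_KEY && {rules}
    return all(validation_key in spk for spk in output_scripts)
```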

In my understanding, and here with Roconnor’s FC’17 paper and Jeremy’s 2017 Stanford talk in mind, the idea of constraining and recursively applying a set of rules to a UTXO and any transaction spending it is what is probably (mistakenly) understood as a covenant nowadays. At the very least, I gave talks and used the term in that sense in my own writing on bitcoin covenants.

I don’t disagree that “covenant” is an imprecise terminology. After re-checking the translation in my native tongue (i.e. “une convention légale”), the term designates wider legal constructions than what is understood as a “covenant” in English real-estate law. Saying contracting primitives or Script opcodes indeed sounds better.

So the original idea of Eltoo was to have a new sighash flag for the signatures of state transactions that makes them re-bindable onto any previous state, removing the constraint of having to store revoked scripts / amounts for each previous state, and opening the door to off-chain constructions with >= 2 parties.

Efficiency-wise, yes, I can see how you can have the channel constraint in an “anyprevout_flag” tx template in the commitment_tx output, with the per-state counter in the annex field, which could save on signature size, though I’m not sure we’re talking about the same data layout. And I believe you might need one more opcode to push the annex field onto the stack.

As I pointed out on the mailing list, more powerful opcode primitives can open the door to tx-withholding risks by allowing introspection of the status of another UTXO. E.g. a basic tx-withholding contract would be someone promising any miner that if a target LN commitment tx is not confirmed by height N, a native bitcoin bounty is paid. The N picked can be the safety timelock of an LN commitment tx.

At the very least with CSFS, you can have the <message> be the commitment transaction, for which you know the public key (e.g. if you’re an LN counterparty) but do not yet know the <signature>.

If you combine it with a UTXO-set oracle (e.g. a signed <message=UTXO_123>), you already have a rudimentary yet powerful tx-withholding contract. I don’t believe CTV allows a more powerful malicious tx-withholding contract in any fashion, though I believe that if this can affect LN, it can certainly affect Ark too.
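Spelled out as pseudocode (names and the oracle interface are entirely hypothetical, just to make the contract shape concrete):

```python
# Hypothetical tx-withholding bounty: pay any claimant (in practice, a miner)
# if an oracle attests the victim's outpoint is still unspent past height N.
def bounty_claimable(oracle_attests_unspent, target_outpoint,
                     attested_outpoint, height, deadline_n):
    if attested_outpoint != target_outpoint:
        return False                 # the CSFS <message> names the wrong UTXO
    if not oracle_attests_unspent:
        return False                 # oracle signature check (CSFS) failed
    return height > deadline_n       # victim's safety timelock has lapsed
```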

If I’m correct here, the lemma is that you certainly need any LN channel UTXO to be veiled with some kind of consensus-level semantics: “this outpoint cannot be referenced by the Script execution of another UTXO spend”. That can be very touchy to implement in the bitcoin Script interpreter…

Thanks, it is interesting to get Andrew’s opinion here. While I disagree with him on the simplicity (lol) of translating “these keys allow moving the coins…” into a protocol and a well-designed cryptographic API, I believe this backs the point I was raising in my previous comment: you should get support for any opcode at the HW level or within the secure enclave.

Otherwise, how can you be sure that the “cold keys” are authorizing a spend to the correct transaction hash if you do not re-verify the hash computation on the enclave? This is the same problem the Validating Lightning Signer already has today: verifying all the state transitions of the LN protocol to stay secure in the face of a “compromised” main LN node.

In my opinion, convincing HW vendors of all kinds that a given opcode should be supported matters as much as convincing the community of bitcoin protocol experts.

—————————————————————————————————————————-

Speaking for myself, I can be supportive of the most minimal opcode improvement that improves self-custody for the average bitcoin user, on condition that it doesn’t introduce tx-withholding risk or extend the DoS surface of full-nodes. If you go out in the street and ask 10 bitcoiners “do you wish to make the self-custody of your coins stronger and easier”, I genuinely believe the number of positive answers is going to be 10. I don’t expect the same level of positive answers for Ark, DLCs, or payment pools; I think it’s just either too early, or far too complex to be understood by the average bitcoin user. In comparison, coin self-custody has always been an area of focus of bitcoin development, ever since people started to develop lightweight wallets for their own usage in ~2010 / 2011.

On Liquid, no opinion. I’ve met many Liquid devs in real life, but I’ve never met an L-BTC user, which doesn’t mean Liquid users don’t exist.

The confusion Jeremy has created with this has been continually used to block discussion of other comparable approaches in this design space, including combinations of CTV with other opcodes, variations on introspection approaches, and other opcodes entirely. Thanks at least for confirming that this is as much of a waste of time as every other CTV discussion has been.

Jeremy has not participated in this discussion at all, and you are the first one to bring up both him and the BIP text, or even CTV history in general. Most other comments focus on the actual technical merit of the proposal, whether that be evidence or the lack thereof.

You are the only one bringing up historical artifacts and calling them “blockers”.

I am aware I said I wouldn’t be pointing fingers, but I tried to start a discussion here that focuses on where we are right now and how we want to move forward, and I don’t appreciate attempts to derail the conversation into old fights that have been fought before and add absolutely no value to the topic.