CTV, APO, CAT activity on signet

BIP 118 (SIGHASH_ANYPREVOUT, APO) and BIP 119 (OP_CHECKTEMPLATEVERIFY, CTV) have been enabled on signet for about two years now, since late 2022. More recently, BIP 347 (OP_CAT) was also activated – it’s been available for about six months now.

Here’s a brief investigation into how they’ve been used:

APO

That’s the extent of APO-based usage on signet to date, to the best of my knowledge. Further ln-symmetry tests may become possible soon, given all the progress in tx relay that’s been made recently.

CTV

And that’s it. I didn’t see any indication of exploration of kanzure’s CTV-vaults, which uses OP_ROLL with OP_CTV, or of the simple-ctv-spacechain, which uses bare CTV and creates a chain of CTVs ending in an OP_RETURN.

CAT

There are substantially more transactions on chain (74k) that use it than either APO (1k) or CTV (16) due to the PoW faucet, which adds three spends per block, so it’s a bit harder to analyse. Linking to addresses rather than txids, I think they sum up as:

Addendum

I tracked these by hacking up a signet node that tries to validate block transactions with all of CTV/APO/CAT discouraged, and on failure retries with CTV and APO individually enabled, logging a message hinting at what features txs use. Could be interesting to turn that into an index that could track txs that use particular features a bit more reliably.

patch
--- a/src/validation.cpp
+++ b/src/validation.cpp
@@ -2663,13 +2663,22 @@ bool Chainstate::ConnectBlock(const CBlock& block, BlockValidationState& state,
             std::vector<CScriptCheck> vChecks;
             bool fCacheResults = fJustCheck; /* Don't cache results if we're actually connecting blocks (still consult the cache, though) */
             TxValidationState tx_state;
-            if (fScriptChecks && !CheckInputScripts(tx, tx_state, view, flags, fCacheResults, fCacheResults, txsdata[i], m_chainman.m_validation_cache, parallel_script_checks ? &vChecks : nullptr)) {
-                // Any transaction validation failure in ConnectBlock is a block consensus failure
-                state.Invalid(BlockValidationResult::BLOCK_CONSENSUS,
-                              tx_state.GetRejectReason(), tx_state.GetDebugMessage());
-                LogError("ConnectBlock(): CheckInputScripts on %s failed with %s\n",
-                    tx.GetHash().ToString(), state.ToString());
-                return false;
+            auto xflags = flags | SCRIPT_VERIFY_DISCOURAGE_CHECK_TEMPLATE_VERIFY_HASH | SCRIPT_VERIFY_DISCOURAGE_ANYPREVOUT | SCRIPT_VERIFY_DISCOURAGE_OP_CAT;
+            if (fScriptChecks && !CheckInputScripts(tx, tx_state, view, xflags, false, false, txsdata[i], m_chainman.m_validation_cache, nullptr)) {
+                if (CheckInputScripts(tx, tx_state, view, (xflags ^ SCRIPT_VERIFY_DISCOURAGE_CHECK_TEMPLATE_VERIFY_HASH), false, false, txsdata[i], m_chainman.m_validation_cache, nullptr)) {
+                    LogInfo("CTV using transaction %s\n", tx.GetHash().ToString());
+                } else if (CheckInputScripts(tx, tx_state, view, (xflags ^ SCRIPT_VERIFY_DISCOURAGE_ANYPREVOUT), false, false, txsdata[i], m_chainman.m_validation_cache, nullptr)) {
+                    LogInfo("APO using transaction %s\n", tx.GetHash().ToString());
+                } else if (CheckInputScripts(tx, tx_state, view, flags, fCacheResults, fCacheResults, txsdata[i], m_chainman.m_validation_cache, nullptr)) {
+                    LogInfo("CTV/APO/CAT using transaction %s\n", tx.GetHash().ToString());
+                } else {
+                    // Any transaction validation failure in ConnectBlock is a block consensus failure
+                    state.Invalid(BlockValidationResult::BLOCK_CONSENSUS,
+                                  tx_state.GetRejectReason(), tx_state.GetDebugMessage());
+                    LogError("ConnectBlock(): CheckInputScripts on %s failed with %s\n",
+                        tx.GetHash().ToString(), state.ToString());
+                    return false;
+                }
             }
             control.Add(std::move(vChecks));
         }

I can understand the importance of use cases and proofs of concept. But judging a soft fork by the number of transactions and the adoption of prototypes on signet is beyond me.

Especially when I see such things on signet:

It was tested on the CTV signet and the transaction is shared in the repository. I am sure some developers prefer to use regtest as well.

I have seen zero interest from average bitcoin users in trying anything built using OP_CAT, although some of them have tried the CTV playground.

These 2 transactions are mine and were used to demo joinpool.

The CTV examples in Oct/Nov 2023 were me and a friend demo testing pathcoin (see fidelitybonds.py for the scripts).

(at least … I’m 99% sure … I certainly did it around then just before the San Salvador conference but didn’t keep a record, and the scripts match exactly)

Perhaps not the most practically realistic use-case, but an interesting experiment 🙂 Thanks for setting it up to be usable on signet!


Your investigation inspired me to create an explorer for Bitcoin Inquisition transactions. It should hopefully make similar analysis easier in the future; let me know if there’s something I can improve.


I don’t really think these are valid metrics, given they don’t include ctv-signet, which predates inquisition.

I ran a count of CTV activity on ctv-signet. I did this by scanning for NOP4.

  • 1558 segwit v0 spends
  • 52 legacy spends
  • 14 taproot spends
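For illustration, a minimal sketch of what that kind of NOP4 scan can look like against Bitcoin Core’s types – this is mine, not the tooling actually used for the counts above, and ContainsNop4 is just a made-up helper name. To count spends you would run it over the scriptPubKey, redeemScript or witness script being satisfied by each input:

```cpp
// Sketch only: walk every opcode in a script and flag OP_NOP4 (0xb3), the
// opcode BIP 119 redefines as OP_CHECKTEMPLATEVERIFY. Relies on Bitcoin Core's
// CScript; GetOp() returns false on a malformed push, which we treat as "no".
#include <script/script.h>

#include <vector>

bool ContainsNop4(const CScript& script)
{
    CScript::const_iterator it = script.begin();
    opcodetype opcode;
    std::vector<unsigned char> push_data;
    while (it < script.end()) {
        if (!script.GetOp(it, opcode, push_data)) return false; // malformed script
        if (opcode == OP_NOP4) return true;
    }
    return false;
}
```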

This was last used in 2022, though I keep it running still.

There were only 52 legacy spends, although there were around 24 additional creations not counted in the above figure. This suggests some number of other taproot or segwit outputs would also be CTV users – especially taproot, the whole point being that you can’t tell if it’s just a key!

Overall I think signet is just… not that useful, in general. There’s no track record of these vanity metrics having anything to do with consensus. Most devs just do stuff on regtest at the end of the day.

Your methodology also, AFAIU, won’t catch IF blah ELSE CTV ENDIF scripts if the spend executes the blah branch.
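For example, a hypothetical script sketch built with Bitcoin Core’s CScript, using upstream’s OP_NOP4 name for the CTV opcode (IfBlahElseCtv is a made-up helper). Upgradable NOPs in an unexecuted branch are never evaluated, so a spend taking the IF path passes even with the discouragement flags set and wouldn’t show up in the logs from the patch above:

```cpp
// Hypothetical "IF blah ELSE CTV ENDIF" script. Spends that provide a
// signature and take the IF branch never execute OP_NOP4 (the byte CTV
// re-uses), so re-validating with the CTV discouragement flag dropped
// changes nothing for them.
#include <script/script.h>

#include <vector>

CScript IfBlahElseCtv(const std::vector<unsigned char>& pubkey,
                      const std::vector<unsigned char>& template_hash)
{
    return CScript() << OP_IF
                     << pubkey << OP_CHECKSIG       // the "blah" path
                     << OP_ELSE
                     << template_hash << OP_NOP4    // the CTV path
                     << OP_ENDIF;
}
```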

As you know, I already did a similar review of transactions on ctv-signet in 2022. There wasn’t anything there that made it seem like it would be worth the time to do another review.

The point isn’t to find another instance of “number go up”, it’s to see if there’s been any testing of the code, or any experimentation to see if the hypothesised use cases are implementable in the real world.

For things that are interesting to people, you do see real experimentation on signet. For example, there are about 60k inscriptions and about 1000 runes, and babylon’s test runs “filled” quite a few blocks, leading to an apparently successful (and comparatively less disruptive) mainnet launch (and, TIL, a followup).

It similarly won’t catch CTV invocations that aren’t executed because they’re in unrevealed taproot branches, or in presigned child transactions that are never broadcast. The point of having a test environment is that you can test the rare cases to make sure they behave correctly just in case the rare case happens in real life some day; and if you do that, there will be example transactions where those paths are exercised. Nobody’s saying you have to do that testing on signet, of course.

These were testing soma (GitHub: nbd-wtf/soma, “a spacechain on signet with anyprevout”), a spacechain toy implementation for transferring NFTs (the simplest example of a blockchain I could come up with). These transactions correspond to blind merged-mining events (paid out of band with signet lightning) that would each yield a different spacechain block; as you can see, they have an OP_RETURN with the hash of the corresponding spacechain block.

The broken key-path free-spending part was a stupid vulnerability due to an overlooked detail. Please ignore it; it was just a demo.


Oh wow; up until now I’ve only seen spacechain demos that didn’t actually implement the spacechain part. Is the spacechain data archived somewhere?

Not using a NUMS point probably makes sense for an experiment – lets you use a key path spend to hardfork the spacechain when updating the code, while potentially retaining history. Of course, that assumes you’re using a somewhat secure key; looks like you used G so the funds could be stolen at any point (edit: or more importantly, the spacechain could be stopped dead) by anyone who can figure out what signature the latest scriptPubKey contains.

(I was expecting a non-well-known pubkey and a published list of signatures, which would have been secure-ish and also allowed permissionless mining)

If I’m reading the code right, you limited the spacechain to only doing 50 blocks (Publish.scala, bmm.precompute(0, 50)), which looks like it roughly matches the number of txs listed above. Presumably that means the only script path spend you can do now is the final tx, paying to a 0sat “fim” OP_RETURN and reclaiming the funds (or using them for fees, or using them to launch a brand new spacechain)?

This was a long time ago so I don’t remember exactly, but it looks like 50 was just the number of precomputed transactions stored initially by the miner (and presumably it would run the computation again and store more transactions when it was restarted). In fact the spacechain was precomputed to last 100 years (BMM.scala, line 11).

So I wasn’t really expecting to get to the last block of this, I was probably planning on expanding the maximum period to 1000 years in a “production” scenario, but it didn’t make sense to calculate so much given that this was just a demo.

I ran the demo miners and spacechain nodes for months and tried to get people to use it, but it didn’t get much attention; maybe ~10 people played with it at the time. Then I got disillusioned and decided to shut it down. I thought about keeping a record of the blockchain data, but it was too depressing and the blockchain didn’t really have any value; it probably makes more sense to start a new spacechain now and test again if anyone is interested.

All the blockchain did was generate NFTs (without any metadata whatsoever except for a serial number) and transfer them between bip340 pubkey accounts. It had two types of transactions: mint and transfer.

The mining process was interesting: to publish a transaction on the spacechain, a user would contact any (or all) of the miners and send them the transaction; the miner would make a Lightning invoice and the user would pay it. Miners would hold the Lightning payment in-flight while they gathered more transactions from other spacechain users, then try to use the funds they got via Lightning to pay for an onchain BMM transaction, performing an RBF whenever a new user tx arrived – once they succeeded, they would resolve the Lightning payments and release the spacechain block to the other spacechain nodes. If they didn’t succeed within 10 blocks, or if they saw one of the transactions they were holding mined in another spacechain block, they would cancel that transaction specifically and fail the Lightning payment. The user “wallet” was a web interface for keeping track of user assets and pending transactions, and it also contacted miners directly via the CLN websocket “commando” interface. It’s weird; I should at least have recorded a screencast of the process, but I can’t find any.
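A very rough sketch of that miner loop as I read the description above – this is not soma’s Scala code, every Lightning/onchain interface is a stub, and all the names are made up:

```cpp
// Illustrative only: hold Lightning payments in flight, RBF-bump the BMM
// attempt as new user transactions arrive, and resolve or fail the held
// payments depending on what confirms within the 10-block window.
#include <string>
#include <vector>

struct PendingTx {
    std::string spacechain_tx;   // user's spacechain transaction
    std::string payment_hash;    // the held (in-flight) Lightning payment
    int accepted_height;         // signet height when the miner accepted it
};

// Stubs standing in for the real Lightning / onchain interfaces.
void ResolveHeldPayment(const std::string&) {}
void FailHeldPayment(const std::string&) {}
void BroadcastBmmTx(const std::vector<PendingTx>&) {}   // RBF-replaces the previous attempt
bool OurBmmTxConfirmed() { return false; }
bool MinedInAnotherSpacechainBlock(const PendingTx&) { return false; }
void ReleaseSpacechainBlock(const std::vector<PendingTx>&) {}

void OnNewSignetBlock(int height, std::vector<PendingTx>& pending)
{
    if (OurBmmTxConfirmed()) {
        // Our BMM output won: resolve every held payment and publish the block.
        for (const auto& tx : pending) ResolveHeldPayment(tx.payment_hash);
        ReleaseSpacechainBlock(pending);
        pending.clear();
        return;
    }
    // Cancel anything that timed out (10 blocks) or was already mined in
    // someone else's spacechain block, failing its Lightning payment.
    std::erase_if(pending, [&](const PendingTx& tx) {
        if (height - tx.accepted_height >= 10 || MinedInAnotherSpacechainBlock(tx)) {
            FailHeldPayment(tx.payment_hash);
            return true;
        }
        return false;
    });
    // Retry, funding the BMM transaction with the sum of the held payments.
    if (!pending.empty()) BroadcastBmmTx(pending);
}
```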

For the BMM covenant I should have used a provably invalid key (is that what NUMS means?), but I think I got too distracted while coding the script path and all the APO stuff (I didn’t have any prior experience with programming Bitcoin script, much less with Taproot and APO and PSBT) and didn’t pay attention to that obvious key path flaw.

Oh, by the way, the BMM transactions should all be linked in a chain, every new one always spending the “canonical” 1234sat output from the previous – when you see that chain broken, that is probably because I was testing it myself, then abandoning the chain and starting a new one, before releasing it to the public.

Ah, that makes more sense. Could cache every 4000th hash, and the next 4000 hashes, so you don’t have to recalculate all 5M hashes every time you run out of cached hashes. Would only be ~20k hashes (640kB) to cache even for the 1000-year case then, and you could save them to disk or precalculate at compile-time.
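A sketch of that caching scheme – not soma’s code; ChainCache and Derive are made-up names, and it assumes the precomputed chain is derived backwards (each entry from the one after it), which is what makes running out of window expensive in the first place:

```cpp
// Keep every STRIDE-th entry of the precomputed chain plus a sliding window of
// up to STRIDE entries, so a refill recomputes at most STRIDE steps instead of
// walking the whole ~5M-entry chain again.
#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <utility>
#include <vector>

using Hash = std::array<uint8_t, 32>;
using Derive = std::function<Hash(const Hash&)>;   // stand-in: entry i from entry i+1

class ChainCache {
    static constexpr size_t STRIDE = 4000;
    Derive derive_;
    std::map<size_t, Hash> checkpoints_;  // every STRIDE-th entry, plus the final one
    std::vector<Hash> window_;            // entries [window_start_, window_start_ + window_.size())
    size_t window_start_ = 0;

public:
    // One full backwards pass over the n-entry chain (n >= 1) to record the
    // checkpoints; these could equally be loaded from disk or baked in at
    // compile time, making this the only O(n) step.
    ChainCache(Derive derive, const Hash& last, size_t n) : derive_(std::move(derive)) {
        Hash h = last;
        for (size_t i = n; i-- > 0;) {
            if (i % STRIDE == 0 || i + 1 == n) checkpoints_[i] = h;
            if (i > 0) h = derive_(h);
        }
        Refill(0);
    }

    const Hash& Get(size_t i) {
        if (i < window_start_ || i >= window_start_ + window_.size()) Refill(i);
        return window_[i - window_start_];
    }

private:
    // Rebuild the window containing `want`, walking backwards from the nearest
    // checkpoint at or past the end of that window (at most STRIDE derivations).
    void Refill(size_t want) {
        const size_t start = (want / STRIDE) * STRIDE;
        auto cp = checkpoints_.lower_bound(start + STRIDE);
        if (cp == checkpoints_.end()) --cp;              // last window: use the final entry
        window_.assign(cp->first - start + 1, Hash{});
        window_start_ = start;
        Hash h = cp->second;
        for (size_t i = cp->first;; --i) {
            window_[i - start] = h;
            if (i == start) break;
            h = derive_(h);
        }
    }
};
```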

Yeah, gotta have a marketing plan to build up lots of drama/excitement if you want attention in the NFT/memecoin space. As far as I’m aware, it never even got mentioned in any of the tech spaces I follow (eg bitcoin-dev, optech). OTOH, I expect I would have ignored anything I did see, since that was around the time of the inv-to-send issue. Shame.

Two things that would probably be interesting, but also perhaps much harder than what you did: a spacechain that includes actual new programming features (simplicity? EVM like RSK? confidential txs? multiple fungible assets? utreexo commitments?), and/or a spacechain with a currency that’s pegged against sBTC in some way (even a boring trusted third-party way like RSK’s or liquid’s multisig federation or whatever WBTC does), so the chain could theoretically help with “overflow txs” during fee spikes. But those are probably hard to do, and it’s better to be simple but implemented than fancy but imaginary.

Maybe utreexo commitments could be interesting for NFTs – you could have short proofs that you “own” your profile picture NFT; though having the proof potentially change every block might be too awkward in practice… I guess you’d need some way to link the things you mint to some other data though; atm I think soma just lets you create a token that’s tradeable but has no link to anything else? Reverse commitments, where the data links to the token (instead of the token containing/committing to the data), might work fine with soma as-is, though then you’d also need to add an O(n_transfers)-size proof that your utxo matches the token… a spacechain with a chia-esque coin-set model instead of a utxo model could let you avoid that, though…

How reliable spacechains are in adversarial environments is a big question to me; will people actually fully validate spacechains, or is there a risk that you can just pay to mine lots of “invalid” spacechain blocks and that will be accepted anyway? How easy/disruptive are reorgs – if you can reorg some blocks, doublespend one transaction, but nobody else loses out (because there are no coinbase fees), will anyone try to prevent the reorg, or is the person being doublespent on their own? Could be fun to explore that sort of thing with a real implementation.

That’s probably similar to the actual usage of the powcoins script, fwiw. Also comparable to bitcoin itself, really; ignoring coinbase txs, there were only 93 txs in its first 3 months (10570 blocks). Sometimes you just have to give these things time.

Yeah, “Nothing Up My Sleeve” – ie, “it’s completely/cryptographically infeasible for me to possibly have a private key/preimage/etc corresponding to this”.
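For example, a hedged sketch of constructing such a key – not anything soma does; MakeNumsKey is a made-up name and the tag string is arbitrary. It uses libsecp256k1’s extrakeys module and Bitcoin Core’s CSHA256: hash a fixed public string with a counter until the digest is a valid x coordinate, and use that as the x-only key; nobody can know a discrete log for a point chosen that way.

```cpp
// Sketch of deriving a NUMS x-only key from a public string. Roughly half of
// all 32-byte values are valid x coordinates, so a small counter suffices.
#include <crypto/sha256.h>
#include <secp256k1.h>
#include <secp256k1_extrakeys.h>

#include <cstdint>
#include <string>

bool MakeNumsKey(secp256k1_xonly_pubkey& out)
{
    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_NONE);
    const std::string tag = "spacechain/bmm-covenant-nums";  // any fixed, public string works
    bool ok = false;
    for (uint8_t counter = 0; counter < 128 && !ok; ++counter) {
        unsigned char x[32];
        CSHA256().Write(reinterpret_cast<const unsigned char*>(tag.data()), tag.size())
                 .Write(&counter, 1)
                 .Finalize(x);
        // Succeeds as soon as the digest is the x coordinate of a curve point.
        ok = secp256k1_xonly_pubkey_parse(ctx, &out, x) == 1;
    }
    secp256k1_context_destroy(ctx);
    return ok;
}
```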

Sounds a bit complicated for an experiment tbh; though I see you set up a thing to auto-pay people’s invoices for them anyway. I think I can see how you could plausibly build up a reputation as a trustworthy miner for people to be willing to go through that process with non-fake coins. Neat.

If you do, I’d suggest trying to make something that you can just leave running unattended for ages, and ideally that automatically includes some spacechain content occasionally, even if it’s trivial. Automatically dumping the block data into a github repo once a week might be workable? If the explorer could make static html pages that could also go into the github repo, that would maybe be nice, as a low-overhead way of making it still explorable even if you don’t want to keep actually maintaining/running the servers?