SuperScalar: Laddered Timeout-Tree-Structured Decker-Wattenhofer Factories

Correct me if I'm wrong on the facts, though I think it's a fair description of them.

Sure — people who work in the same company tend to work together on the same stuff. SuperScalar was released in a limited venue internal to Block, before we presented it at the summit.

The initial SuperScalar was lousy: it was just laddered timeout trees, without the Decker-Wattenhofer. It was developed a few months ago, internally to Block, and not released publicly because it sucked. A few weeks ago, while looking over @instagibbs's work on P2A, I realized that P2A handled the issues I had with Decker-Wattenhofer. In particular, Decker-Wattenhofer made it hard to have either exogenous fees (without P2A, every participant needs its own anchor output, which adds an output per participant to each transaction) or mutable endogenous fees (because not every offchain transaction changes at each state change, earlier transactions cannot change their feerates when you update the state for a feerate change). Those difficulties are why I shelved Decker-Wattenhofer constructions and stopped work on sidepools, which used Decker-Wattenhofer. With P2A, however, I realized Decker-Wattenhofer was actually viable after all, and thus could be combined with timeout trees, creating the current SuperScalar. I then rushed to create a quick writeup, got it reviewed internally, and got permission to publish it on Delving so we could present it at the summit. @moneyball believes it provides a possible path to wider Lightning onboarding, which encouraged me to focus more attention on it and on figuring out its flaws and limitations.
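To put rough numbers on the anchor-output overhead, here is a back-of-envelope sketch in Python. The sizes are my own assumptions (roughly 43 bytes for a keyed P2WSH anchor output, and 13 bytes for a bare P2A output whose scriptPubKey is `OP_1 <0x4e73>`), so treat it as an illustration rather than measured data:

```python
# Back-of-envelope comparison of anchor-output overhead. Assumed sizes:
# a keyed P2WSH anchor output is ~43 bytes (8 value + 1 script length
# + 34 script); a shared P2A output is ~13 bytes (8 + 1 + 4).

KEYED_ANCHOR_BYTES = 43  # assumed size of one per-participant anchor output
P2A_BYTES = 13           # assumed size of one shared pay-to-anchor output

def anchor_overhead(participants: int) -> tuple[int, int]:
    """Bytes added to each offchain tx: per-participant anchors vs one P2A."""
    return participants * KEYED_ANCHOR_BYTES, P2A_BYTES

for n in (2, 8, 32):
    keyed, p2a = anchor_overhead(n)
    print(f"{n:>2} participants: {keyed:>4} bytes of anchors vs {p2a} bytes with P2A")
```

Even at small participant counts the per-participant anchors dominate each offchain transaction's size, which is why exogenous fees were impractical before P2A.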

(You can track my thinking around laddered timeout trees through my Twitter posts, incidentally; again, I only started posting about timeout trees, and musing on laddering them, in the past six months. Thinking takes time.)

Adding Decker-Wattenhofer to laddered timeout trees was an idea that occurred to me literally a few weeks before the summit. Again, remember that I had stopped working on sidepools because Decker-Wattenhofer sucked (exogenous fees are out because you need an output per participant, and endogenous fees cannot be meaningfully changed because each state transition only changes a subset of the offchain transactions), and I only returned my attention to Decker-Wattenhofer after @instagibbs was able to get P2A released in Bitcoin Core 28. Without thinking about Decker-Wattenhofer at all, I obviously could not have combined it with timeout trees. The timing was coincidental, not planned. Thinking takes time.

I have not made any representation that the construction has been meaningfully peer-reviewed, even internally at Block. Given the addenda I have been writing, it is very much a work in progress, one that I am devoting all of my time to rather than anything else. If anyone can poke holes in it, they can do so in this very thread; that is the whole point of making this thread and presenting it at the summit. I am in fact taking the time here to allow people to respond. All I did was present it to the people at the conference, and I believe it to be worth my time to think about and refine.

If Block Inc is designing a product only for the US market, where there is a legal framework in case of issues, it's better to be more verbose about it.

I have been advised to avoid mentioning legal or regulatory matters, as I am not a lawyer, and anything I say about regulations, legal frameworks, or regulatory bodies would not be expert advice. Let me ask my supervisor about this.

So if you assume a timeout tree with a k-factor of 2 and 8 users, that's 12 transactions that have to be confirmed in the worst case. That's more than the 8 commitment transactions that have to be confirmed in the worst-case scenario, and there is no, or only a small, compression gain, as the k-factor has to be materialized in the fan-out outputs at each intersection of your timeout tree.

We are aware of this issue; it was also presented at the Lightning Proto Dev Summit. However, the tree structure does allow subsets to sign off on changes instead of requiring the entire participant set to be online to sign, which reduces the need for large groups to come online. @adiabat gave a talk at TABConf last year about how onlineness (specifically, coordination problems with large groups) will be the next problem; tree structures let us reduce participant sets without any consensus change, though they do still require a multiple of N data to publish N leaves. Full exits of OP_TLUV-style trees are even worse, as they require O(N log N) data instead of O(N) (m * N data to be specific, where m is reduced by higher arity). OP_TLUV-style trees also cannot change a subset without changing the root, so they do not gain SuperScalar's ability to have subsets sign off on changes: OP_TLUV-style trees use Merkle trees, whereas timeout trees use transaction trees, which allow sub-trees to mutate when combined with Decker-Wattenhofer.
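To make the counting concrete, here is a rough sketch of the scaling claims; the formulas are my own toy model (a full k-ary tree with N leaves, costs counted in transactions), not exact figures for any particular construction:

```python
# Toy model of the exit-cost claims. A full k-ary transaction tree with
# n leaves has (n - 1) / (k - 1) internal transactions, so publishing
# the whole tree once is O(n) data. An OP_TLUV-style covenant tree makes
# each leaf re-publish its own root-to-leaf path on exit, so a full exit
# costs about n * log_k(n) = m * n transactions, where m = log_k(n)
# shrinks as the arity k grows.

def depth(n: int, k: int) -> int:
    """Smallest d with k**d >= n, i.e. the root-to-leaf path length."""
    d = 0
    while k ** d < n:
        d += 1
    return d

def tree_txs(n: int, k: int) -> int:
    """Internal transactions in a full k-ary tree with n leaves."""
    return (n - 1) // (k - 1)

def tluv_full_exit_txs(n: int, k: int) -> int:
    """All n leaves each publish their own depth-long path."""
    return n * depth(n, k)

for n, k in [(8, 2), (64, 2), (64, 4)]:
    print(f"N={n}, k={k}: shared tree = {tree_txs(n, k)} txs, "
          f"TLUV-style full exit ~ {tluv_full_exit_txs(n, k)} txs")
```

Raising the arity k shrinks m = log_k(N), which is the "m is reduced by higher arity" point above.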

I have been refining SuperScalar to shift much of the risk to the LSP, precisely to reduce the risk borne by clients. You may not agree that it goes far enough, but I think it can be arranged in practice so that it is more economical for the LSP not to screw over its clients, just as capitalism typically works: you cannot earn money from a dead customer.

The thing about SuperScalar is that timeout trees suck because of the large number of transactions that need to be put onchain in the worst case, and Decker-Wattenhofer sucks because of the large number of transactions that need to be put onchain in the worst case. But when you mash them together, they gain more power without increasing their suckiness: the two sucks merge into a common suckiness instead of adding up. Timeout trees gain the ability to mutate by combining with Decker-Wattenhofer, while Decker-Wattenhofer gains the ability to mutate with only partial participant sets. The whole is better than the sum of its parts, and I think it is worth my while to investigate whether this is good enough in practice.
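As a toy illustration of what "combining rather than adding" could mean for the worst case, here is a sketch under my own simplifying assumptions: one transaction per tree level, one per Decker-Wattenhofer layer, and shared transactions modeled as a max rather than a sum. It is a cartoon of the claim, not an accounting of the actual construction:

```python
# Cartoon of the "common suckiness" claim. Assumptions (mine): a plain
# timeout tree publishes ~one tx per tree level on a unilateral exit, a
# plain Decker-Wattenhofer factory ~one tx per nesting layer, and naive
# stacking would add the two. If the tree transactions double as the
# Decker-Wattenhofer state transactions, one set of txs covers both.

def depth(n: int, k: int) -> int:
    """Smallest d with k**d >= n (tree levels from root to leaf)."""
    d = 0
    while k ** d < n:
        d += 1
    return d

def exit_path_txs(n_users: int, arity: int, dw_layers: int) -> dict[str, int]:
    tree_depth = depth(n_users, arity)
    return {
        "timeout tree alone": tree_depth,
        "Decker-Wattenhofer alone": dw_layers,
        "naive stacking (costs add)": tree_depth + dw_layers,
        "combined (costs shared)": max(tree_depth, dw_layers),
    }

for name, txs in exit_path_txs(n_users=8, arity=2, dw_layers=3).items():
    print(f"{name:>28}: ~{txs} txs for one unilateral exit")
```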