I object to “completely”. Consider that custodial Bitcoin wallets are even worse, in that the offchain state is merely the trustmebro of the custodian; with Lightning and SuperScalar, the offchain state at least has the (unguaranteed) possibility of resolving with the onchain state if the LSP becomes uncooperative (unlike the case of custodial Bitcoin, where if the custodian refuses to cooperate, it can simply 0 out your account under you). Custodial Bitcoin is significantly more broken than Lightning or similar schemes. Until you can present a perfect scheme that allows Bitcoin to have offchain state perfectly resolved 100% of the time, I would respectfully ask you to point your attention at custodial schemes instead. Improving something is still better than waiting for the perfect thing. Sometimes you have to accept gray in a gray vs black fight.
This is in effect Peter Todd’s suggestion to make fees endogenous (and, moreover, a single transaction).
It begets potentially weird behavior in that the layer 1 chain cannot actually know what the latest state is, so we do not know what “their own funds” are, to spend as fees. During the challenge period, we just don’t know. It relies much more heavily on the penalty mechanism, and ignores the fact that a miner could be in cahoots with your counterparty, robbing the LSP by simply playing an old state that had lots of fees attached to it.
If we had “non-contentious” funds inside the tree, à la the Timeout Tree paper (off-chain funds, I think it was called), these could directly be used without weirdness, but that requires at least per-user liquidity, and I honestly didn’t quite understand the stated construction.
This is the standard tree transaction structure, but timeout trees also add an alternative spending condition: the LSP can spend by itself after a particular timeout.
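As a concrete illustration of those two spending conditions, here is a hedged sketch in Python that renders them as miniscript-style policy strings (key names `A`, `B`, `L` and the timeout height are illustrative, not any particular implementation’s API):

```python
# A hedged sketch: the two spending conditions on a timeout-tree node,
# written as miniscript-style policy strings. Key names A, B, L and the
# timeout height are illustrative placeholders.

def timeout_tree_node_policy(client_keys, lsp_key, timeout_height):
    """n-of-n of clients plus LSP, OR the LSP alone after the timeout."""
    keys = client_keys + [lsp_key]
    everyone = f"thresh({len(keys)}," + ",".join(f"pk({k})" for k in keys) + ")"
    lsp_after_timeout = f"and(pk({lsp_key}),after({timeout_height}))"
    return f"or({everyone},{lsp_after_timeout})"

print(timeout_tree_node_policy(["A", "B"], "L", 900_000))
# or(thresh(3,pk(A),pk(B),pk(L)),and(pk(L),after(900000)))
```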
Private key handover is AWESOME.
The security model of a time-sensitive contracting protocol is that any counterparty must be able to broadcast and unilaterally fee-bump its off-chain states before the safety timelocks expire.
This construction is so broken… The arity of the tree inflates the branch of transactions, in weight units, that a counterparty may have to fee-bump in the worst-case scenario.
“Fair secret exchange”, i.e. the out-of-band private key swap to make an assisted exit, cannot work, as there is no guarantee that the LSP completes the key exchange on time (under classical physics) before the safety timelocks of the tree expire, thereby letting the LSP rug-pull the end users.
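To make the arity objection above concrete, a rough Python sketch (all weight figures are illustrative placeholders, not measured values) of the worst-case branch weight a single counterparty may have to fee-bump:

```python
# Rough sketch: weight of the branch of node transactions one
# counterparty may have to fee-bump in the worst case. Constants
# are illustrative assumptions, not measured transaction sizes.

def tree_depth(n_clients, arity):
    # integer ceil(log_arity(n_clients)) without float issues
    depth, capacity = 0, 1
    while capacity < n_clients:
        depth += 1
        capacity *= arity
    return depth

def worst_case_branch_wu(n_clients, arity, base_wu=300, per_output_wu=172):
    node_wu = base_wu + arity * per_output_wu   # fan-out outputs per node
    return tree_depth(n_clients, arity) * node_wu

for arity in (2, 4, 8):
    print(arity, worst_case_branch_wu(64, arity))
# 2 -> 6 levels * 644 WU = 3864; 4 -> 3 * 988 = 2964; 8 -> 2 * 1676 = 3352
# Higher arity shortens the branch but fattens each node transaction.
```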
An alternative idea given to a few people at the summit was to have multiple commitment transactions per commitment state with different feerates (dependent transactions such as HTLC-success and HTLC-failure would have the same feerate as the commitment transactions).
This is in effect Peter Todd’s suggestion to make fees endogenous (and, moreover, a single transaction).
Sure, making the fees endogenous is an idea that has been known for years, with rainbow ranges of pre-signed replacement lightning states. But I think I should still go and explain to Peter Todd why it doesn’t work as soon as you have one or two bitcoin stacked in your lightning channels.
I object to “completely”. Consider that custodial Bitcoin wallets are even worse, in that the offchain state
is merely the trustmebro of the custodian; with Lightning and SuperScalar, the offchain state at least has the
(unguaranteed) possibility of resolving with the onchain state if the LSP becomes uncooperative (unlike the case
of custodial Bitcoin, where if the custodian refuses to cooperate, it can simply 0 out your account under you).
Custodial Bitcoin is significantly more broken than Lightning or similar schemes. Until you can present a perfect
scheme that allows Bitcoin to have offchain state perfectly resolved 100% of the time, I would respectfully ask
you to point your attention at custodial schemes instead. Improving something is still better than waiting for the
perfect thing. Sometimes you have to accept gray in a gray vs black fight.
Can I ask you a simple question? Are you the ZmnSCPxj with whom I’ve already been on the same panel at some bitcoin conference, talking about the subject of off-chain scaling face to face? And if yes, are you doing this work on SuperScalar as part of your paid time as an employee at TBD, the subsidiary of Jack Dorsey’s Block Inc? I respect your privacy, just don’t use a pseudonym as an excuse for shameless corporate lobbying about open-source.
Apart from that, what you’re saying about custodial vs non-custodial wallets is gibberish, and one would be better off putting one’s money at Silicon Valley Bank than within a SuperScalar off-chain construction.
Back to the more technical conversation about scaling and off-chain constructions: if one goes the way of using short-paced timelocks, and Decker-Wattenhofer factories are just that, one has to find a solution at the consensus level for the “Forced Expiration Spam” problem as described in the lightning whitepaper (section 9.2).
I’m not the one who first described this problem in the bitcoin community. Tadge Dryja and Joseph Poon did it back in 2015. And as far as I know, since then there has been no research emanating from academia or industry showing that this problem is not a real issue for this approach to bitcoin scalability, to which factories and payment channels clearly belong.
Can I ask you a simple question? Are you the ZmnSCPxj with whom I’ve already been on the same panel at some bitcoin conference, talking about the subject of off-chain scaling face to face?
Yes
And if yes, are you doing this work on SuperScalar as part of your paid time as an employee at TBD, the subsidiary of Jack Dorsey’s Block Inc?
Yes.
And as far as I know, since then there has been no research emanating from academia or industry showing that this problem is not a real issue for this approach to bitcoin scalability, to which factories and payment channels clearly belong.
Feel free to attack the actual Lightning Network to demonstrate the problem. Clearly, you believe:
one would be better off putting one’s money at Silicon Valley Bank than within a SuperScalar off-chain construction.
If so, you can demonstrate this, today, with the actual Lightning Network.
Addendum
Inversion of Timelock Default
Existing timeout tree discussion has the timeout default in favor of the LSP.
However, we should note that the construction needs a timeout (in order to provide a well-defined scope for how long the LSP needs to provide service to the clients), but does not actually need to have the timeout default to the LSP.
If we assume that the LSP is in the business of selling liquidity, we can assume that the LSP has large amounts of funds available onchain to pay for onchain fees if the timeout tree needs to be published onchain. What we need is a way to force the LSP to handle unilateral closes and pay for them if the client is so unsatisfied with LSP services that it decides a unilateral close is more palatable.
Instead of having an `L & CLTV` branch at the transaction outputs of Decker-Wattenhofer state transactions, we can instead have the signatories sign an `nLockTime`d transaction that sends the funds to the clients, with the timelock being the timeout of the tree. Thus, each node output that would have gone to an `(A & ... & Z & L) or (L & CLTV)` would instead have just `A & ... & Z & L`, and two transactions signed:
- The node as shown in the main post.
- An alternate transaction, locked at `nLockTime`, which distributes the funds so that the initial channels of `A`…`Z` are given solely to the respective client, and with the `L`-funds (i.e. the liquidity stock) split evenly among all clients.
- Each node output that eventually leads to the client channel must, by necessity, include the total value of the client channel, plus any channel reserve imposed by clients on the LSP, plus any fellow client channels, plus the liquidity stock the LSP is holding ready for sale to clients.
- As the clients have unilateral control of the outputs, they can trivially fee-bump this alternate timeout transaction to any level.
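To make the two-transactions-per-node idea concrete, here is a minimal sketch in Python (illustrative names and values only; the real construction would of course be pre-signed Bitcoin transactions, not dictionaries) of how the alternate `nLockTime`d transaction could split a node’s funds:

```python
# Minimal sketch of the alternate timeout transaction under the inverted
# timeout default: clients get their channel value plus an even split of
# the LSP liquidity stock "L", locked until the tree timeout.

from dataclasses import dataclass

@dataclass
class Output:
    owner: str   # "A".."Z" for clients, "L" for the LSP liquidity stock
    value: int   # sats

def alternate_timeout_tx(node_outputs, tree_timeout):
    clients = [o for o in node_outputs if o.owner != "L"]
    stock = sum(o.value for o in node_outputs if o.owner == "L")
    share, rem = divmod(stock, len(clients))
    return {
        "nLockTime": tree_timeout,  # only valid once the tree times out
        "outputs": [
            # each client gets its channel balance plus an equal slice of L
            Output(c.owner, c.value + share + (1 if i < rem else 0))
            for i, c in enumerate(clients)
        ],
    }

tx = alternate_timeout_tx(
    [Output("A", 50_000), Output("B", 70_000), Output("L", 30_000)],
    tree_timeout=900_000,
)
print(tx)  # A: 65_000, B: 85_000, locked until height 900_000
```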
Then, if a client decides it wants to unilaterally exit, it can force the LSP to pay for the unilateral exit by simply never performing an assisted exit from the current tree and waiting until the `nLockTime`. If the blockheight approaches the `nLockTime` of the tree, the LSP must initiate the unilateral exit itself, and pay for the confirmation of those nodes, or else it risks loss of all funds still locked in that part of the sub-tree.
If a client has performed assisted exit (i.e. a PTLC-based swap that exchanges the client private key used in the tree for onchain funds, or for funds in the next laddered timeout-tree), then the LSP does not need to fully perform a unilateral exit; it only needs to publish enough nodes until it reaches an output with `(A & ... & M & L)` where it already got the client private keys `A`…`M` via assisted exit.
This means that the LSP is very incentivized to provide assisted exit. For instance, for an onchain assisted exit, the client can wait for the PTLC output to be deeply confirmed, and if onchain feerates have changed enough, can require the LSP to re-sign a new PTLC-claim transaction at a different feerate, and the LSP has incentive, up to the cost of onchain fees to perform a unilateral exit from the tree, to cooperate. The client can abort this assisted exit, and it would not be much different from the client simply refusing to perform an assisted exit and forcing the LSP to perform a unilateral exit from the tree.
For the “Inversion of Timelock Default” case, the following question may be raised: if the client performs unilateral exit by passively waiting for the timelock default so that the LSP is forced to perform (and pay for) unilateral exit on behalf of the client, how can the client enforce HTLC timeouts for outgoing payments?
And the simple answer is: it does not have to (as long as the client is not secretly a forwarder, i.e. if the client is really just an end-user that never forwards payments)!
If the HTLC timeout is shorter than the timeout tree timeout, then the LSP can very well simply not fail the HTLC and not perform a unilateral exit. However, the LSP gains no advantage; either it knows the preimage or not. If the LSP did not successfully forward to the final destination and refuses to fail the HTLC, this is equivalent to the LSP refusing service to the client; then the client can simply force a passive unilateral exit and recover its funds, including the outgoing HTLC. If the LSP still forwards the payment (in order to learn the preimage) after the client-offered HTLC has timed out, it is at the risk of the LSP: the client might still have onchain funds to pay for exogenous fees, and the nodes have P2A outputs that the client can use (a truly destitute client might go to a competitor LSP, give them the signatures of the tree nodes, and let the competitor LSP punish the misbehaving LSP). Thus, the incentive for the LSP is to respect the timeout of the HTLC, and to fail the HTLC before the timeout. The LSP can only lose a customer if it misbehaves this way.
This is risky if the client is forwarding: the upstream node may enforce the upstream timeout. However, the expectation is that the client is a plain end-user that never forwards.
For HTLCs offered by the LSP, we have already given the assumption that the LSP is capable of paying for onchain fees to do unilateral closes via exogenous fees. Thus, even though the LSP is only a forwarder, it can enforce the timeout on HTLCs it offered to the client.
Thanks for the clarification.
Let’s recall some facts for the public bitcoin community:
- A few months ago, a Block Inc employee A made an invitation list for a so-called Lightning Dev Summit, apparently deliberately excluding people who are scientifically critical of their technical ideas
- While that meeting was happening, you, a Block Inc employee B, published out of nowhere, without any of the peer review one expects at an academic or industry venue, that SuperScalar construction
- While that meeting was happening, another Block Inc employee C shone a light on the SuperScalar construction, making the call to discuss it
- After that meeting, you (Block Inc employee B) and another Block Inc employee D used the fact that this construction was discussed at that recent Lightning Dev Summit as an argument from authority, or at least gave me the impression that this construction has been peer-reviewed in some meaningful way
Correct me if I’m wrong on the facts, though I think it’s a fair description of them.
I don’t know if you’re familiar with the history of Bitcoin protocol development (here the Wikipedia link). During the block size war, in 2016, there was something called the Hong Kong Agreement, attended by some devs and industry participants, and while some more senior devs discouraged other folks from attending, a pow-wow-style agreement on the future of bitcoin scalability was born out of it.
That agreement, which was made in a completely opaque fashion for the rest of the bitcoin community, and maybe without considering the maths, economics and network aspects of the discussed topics with distant minds, gave birth years later to many controversies and was used as an argument to plan the Segwit2x hard fork.
In the spirit of avoiding future controversies in the community, I think it would be great if you could highlight that this SuperScalar construction represents only the view of the Block Inc employees. Or, worse, whether it has also been vetted in a pow-wow fashion by the other lightning protocol devs present at this summit, with them weighing in their expert opinions on it. Far be it from me to accuse Block Inc of engaging in an embrace, extend, extinguish strategy à la Microsoft with the lightning protocol, but many in the community can have real doubts. At least, when the Blockstream folks published their landmark paper about sidechains in 2014, they were more vocal about their vested interest in the adoption of this blockchain technology (and the company CEOs were old cypherpunks themselves).
I don’t think the lightning community has put so many years of work into getting a decentralized network of payments, with instant and private settlements, only to fall into some very ill-designed off-chain construction with a lot of trust in the LSP. If Block Inc is designing a product only for the US market, where there are legal frameworks in case of issues, it would be better to be more verbose about it (I’m not going to say that Jack Dorsey’s tweets are misleading about what he’s saying versus what Block Inc is really doing, but why hasn’t TBD released an LSP yet?).
Edit: corrected s/defamatory/misleading/g. It’s not the same.
Personally, I’m more interested in a decentralized network where an ordinary user in El Salvador or Oman can pay another user in Seoul or Singapore, while relying only on the trust-minimized assumptions of the protocol. A protocol working well in war zones and developing countries, where there is not always a stable nation-state authority and legal remedy is not an option.
Feel free to attack the actual Lightning Network to demonstrate the problem. Clearly, you believe:
As far as I know, I’ve never seen any CVE or responsible disclosure associated with your name in the bitcoin field. You might have done so in the past in another industry as an infosec researcher, and if so you’re free to communicate such achievements.
In the absence of such information, I don’t really think you’re familiar with ethical disclosure as a security researcher when end-users’ money is at risk, and as such I’ll leave your naive remark where it stands. Sorry, not sorry.
If so, you can demonstrate this, today, with the actual Lightning Network.
I don’t think you understand how the “Forced Expiration Spam” laid out in the lightning whitepaper of 2015 characterizes a systemic risk in a world where bitcoin blockchain space is limited and the number of coins is capped at 21 million.
Let me give you the explanation again.
Post-segwit, blocks are limited to 4MB in weight units. Under current lightning protocol safety timelocks of 144 blocks on average, and assuming commitment transactions of 300 weight units on average, the blockchain can only process roughly 13k channels per block. That means the blockchain can only process roughly 2M channels per day in the worst-case scenario of everyone force-closing at the same time.
Failing to confirm a commitment transaction and the corresponding HTLC-preimage or HTLC-timeout transaction on time leads to loss of funds for one of the lightning counterparties.
And this is the best-case scenario: I’m not considering lightning channels fully loaded with `max_accepted_htlc` HTLCs, nor second-stage transactions, nor the use of timelocks lower than 144 blocks, as done in practice by lightning implementations. We have only 44k public channels in lightning today, though probably another order of magnitude of private channels, so ~500k. Massive forced expiration spam could already be an issue for today’s Lightning Network, if we ever see such spam happening (or, in other words, a bank run).
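As a sanity check, here is the arithmetic above as a small Python sketch (the weights and the 144-block window are the assumed averages from the preceding paragraphs, not measured values):

```python
# Back-of-the-envelope check of the force-close capacity numbers above.
MAX_BLOCK_WU = 4_000_000         # post-segwit block weight limit
COMMIT_TX_WU = 300               # assumed average commitment tx weight
SAFETY_WINDOW_BLOCKS = 144       # assumed average safety timelock (~1 day)

closes_per_block = MAX_BLOCK_WU // COMMIT_TX_WU
print(closes_per_block)                         # 13_333 channels per block
print(closes_per_block * SAFETY_WINDOW_BLOCKS)  # ~1.9M channels per day
```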
Your SuperScalar proposal, even if it does not imply a consensus change, only makes the problem worse, as now all the combinations of the full tree of transactions might have to be confirmed on-chain. Corresponding fee reserves have to be provisioned by the lightning routing nodes, or by the end-user’s lightning wallet, whoever is paying the fee. So if you assume a timeout tree with a k-factor of 2 and 8 users, that’s 12 transactions that have to confirm in the worst case. That’s more than the 8 commitment transactions that would have to be confirmed in the worst-case scenario with plain channels, and there is no, or only a small, compression gain, as the k-factor has to be materialized in the fan-out outputs at each intersection of your timeout tree.
So, I’ll re-state what I’ve already said: the SuperScalar construction is broken and only worsens the systemic risk that already exists with open lightning channels, in a world where the size of the block is statically limited. Pleasure to discuss the mathematics and the economics, if there is still some persistent wondering.
I think that problem has been understood for years by many bitcoin protocol experts, and again, it’s described in the lightning whitepaper, section 9.2.
Correct me if I’m wrong on the facts, though I think it’s a fair description of them.
Sure — people who work in the same company tend to work together on the same stuff. SuperScalar was released in a limited venue internal to Block, before we presented it at the summit.
The initial SuperScalar was lousy — it was just laddered timeout trees, without the Decker-Wattenhofer. This was developed a few months ago, internally to Block, and not released publicly because it sucked. A few weeks ago, while looking over @instagibbs work on P2A, I realized that P2A handled the issues I had with Decker-Wattenhofer — in particular, the difficulty of having either exogenous fees (without P2A, you need every participant to have its own anchor output, increasing the size of each tx with an output per participant) or mutable endogenous fees (because not every offchain transaction is changed at each state change, earlier transactions cannot change their feerates when you update the state for a feerate change), which is why I shelved Decker-Wattenhofer constructions and stopped work on sidepools, which used Decker-Wattenhofer. However, with P2A, I realized that Decker-Wattenhofer was actually more viable — and thus can be combined with timeout trees, creating the current SuperScalar. I then rushed to create a quick writeup, got it reviewed internally, and got permission to publish it on Delving, so we could present this at the summit. @moneyball believes it provides a possible path to wider Lightning onboarding, so this encouraged me to focus more attention on it and figuring out its flaws and limitations.
(You can track my thinking around laddered timeout trees by my Twitter posts, incidentally — again, I only started posting about timeout trees, and musing on laddering them, in the past 6 months. Thinking takes time.)
Adding Decker-Wattenhofer to laddered timeout-trees was an idea that occurred to me literally a few weeks before the summit. Again, remember that I stopped working on sidepools because Decker-Wattenhofer sucked (exogenous fees are out because you need an output per participant, endogenous fees cannot be meaningfully changed because each state transition only changes a subset of offchain transactions), and I only returned my attention to Decker-Wattenhofer after @instagibbs could release P2A for Bitcoin Core 28. Without thinking about Decker-Wattenhofer, I cannot combine Decker-Wattenhofer with timeout-trees, obviously. The timing was coincidental, not planned. Thinking takes time.
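To illustrate the anchor-output problem described above, here is a rough Python sketch (all weights are assumptions for illustration, not measured values) comparing per-participant anchor outputs against a single shared P2A output:

```python
# Why P2A matters here: exogenous fees without P2A need one anchor output
# per participant, while P2A needs a single shared anchor anyone can bump.
BASE_TX_WU = 300       # assumed node tx weight before anchors
ANCHOR_OUT_WU = 172    # assumed weight of one ordinary anchor output
P2A_OUT_WU = 52        # assumed weight of one shared pay-to-anchor output

def node_weight(n_participants, use_p2a):
    if use_p2a:
        return BASE_TX_WU + P2A_OUT_WU  # one anchor, bumpable by anyone
    return BASE_TX_WU + n_participants * ANCHOR_OUT_WU  # one anchor each

for n in (2, 8, 32):
    print(n, node_weight(n, False), node_weight(n, True))
# per-participant anchors grow linearly with the participant set;
# the single shared P2A output does not
```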
I have not made any representation that the construction has been peer-reviewed meaningfully, even internally in Block. Given the addenda I have been writing, it is very much a work-in-progress, one that I am taking all of my time to work on rather than anything else. If anyone can poke holes into it, they can do so in this very thread, that is the whole point of making this thread and presenting it at the summit. I am in fact taking the time here to allow people to respond. All I did was present this to the people at the conference, and I believe it to be worth my time to think about and refine.
If Block Inc is designing a product only for the US market, where there are legal frameworks in case of issues, it would be better to be more verbose about it
I have been advised to avoid mentioning of legal or regulatory stuff, as I am not a lawyer and anything I say about regulations, legal frameworks, or regulatory bodies would not be expert advice on it. Let me ask my supervisor about this.
So if you assume a timeout tree with a k-factor of 2 and 8 users, that’s 12 transactions that have to confirm in the worst case. That’s more than the 8 commitment transactions that would have to be confirmed in the worst-case scenario with plain channels, and there is no, or only a small, compression gain, as the k-factor has to be materialized in the fan-out outputs at each intersection of your timeout tree.
We are aware of this issue, and this issue was also presented at the Lightning Proto Dev Summit. However, the tree structure does allow for subsets to sign off on changes instead of requiring the entire participant set to be online to sign; this reduces the need for large groups to come online. @adiabat last year at TABConf had a talk about how onlineness (specifically, coordination problems with large groups) will be the next problem; tree structures allow us to reduce participant sets, regardless of consensus change, and tree structures do still require a multiple of N data to publish N leaves. Full exits of `OP_TLUV`-style trees are even worse, as they require O(N log N) instead of O(N) data (m * N data to be specific, where `m` is reduced by higher arity). `OP_TLUV`-style trees also cannot change a subset without changing the root, which means they do not gain the ability of SuperScalar to have subsets sign off on changes; this is because `OP_TLUV`-style trees use Merkle trees, unlike timeout trees, which use transaction trees that allow sub-trees to mutate if you combine them with Decker-Wattenhofer.
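One way to see the claimed asymptotics is the following hedged Python sketch (the cost model is deliberately crude: it counts one unit per published node transaction or per merkle-path step):

```python
# Crude cost model for full exits: a transaction tree publishes each node
# once (O(N) total), while an OP_TLUV-style covenant tree reveals a merkle
# path of length m = log_k(N) per leaf (O(N log N) total).

def tree_depth(n_leaves, k):
    depth, cap = 0, 1
    while cap < n_leaves:
        depth += 1
        cap *= k
    return depth

def txtree_full_exit(n, k):
    # every node published once: N + N/k + N/k^2 + ... + 1
    total, level = 0, n
    while level > 1:
        total += level
        level //= k
    return total + 1  # the root

def tluv_full_exit(n, k):
    # one merkle path of length m revealed per exiting leaf
    return n * tree_depth(n, k)

for k in (2, 4):
    print(k, txtree_full_exit(1024, k), tluv_full_exit(1024, k))
# k=2: 2047 vs 10240; k=4: 1365 vs 5120 (higher arity reduces m)
```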
I have been refining SuperScalar to shift much of the risk to the LSP, precisely to prevent risks on clients. You may not agree that it goes far enough, but I think it can be done in practice such that it is more economical for the LSP to not screw over its clients, just as typical capitalism works. You cannot earn money from a dead customer, which is why capitalism works.
The thing about SuperScalar is that timeout trees suck because of the large numbers of transactions that need to be put onchain in the worst case, Decker-Wattenhofer sucks because of the large numbers of transaction that need to be put onchain in the worst case, but when you mash them together they gain more power while not increasing their suckiness — their suckiness combines to a common suckiness instead of adding their suckiness together. Timeout trees get the ability to mutate due to combining with Decker-Wattenhofer, while Decker-Wattenhofer gets the ability to mutate using partial participant sets. The whole is better than the sum of its parts, and I think it is worth my while to investigate if this is good enough in practice.
Thanks again for the clarifications
Sure — people who work in the same company tend to work together on the same stuff. SuperScalar was released in a limited venue internal to Block, before we presented it at the summit.
Be sure, I don’t question a private company seeking profit, nor people working at the same company working together on the same stuff; that’s all right.
I’m questioning whether all of that isn’t a bit of corporate capture of the communication channels and venues dedicated to the Lightning protocol. Such a protocol has been developed in common by different implementations since the Milan meeting around Scaling Bitcoin in 2016, and all the protocol specifications have been released under the Creative Commons License 4.0 since then.
Such meetings have usually been reserved for discussing matters related to the lightning protocol, not for presenting commercial products in exclusivity. E.g., the folks at Lightning Labs never used them to talk about their Lightning Pool product, which they released in late 2020, and I think it would have been inappropriate for them to do so.
Moreover, and here I’ll recall the example of Blockstream in 2014: when they released the sidechains paper, it was explicitly licensed into the public domain. Concerning the SuperScalar construction, given it has been developed internally at Block Inc as you said, there could be patent protection in application. As far as I can see, there was no such mention when you published your post a few weeks ago, and this forum has moderation rules about “deceptively encouraging the use of patent-encumbered techniques”, as documented here:
Beyond that, and here it concerns more the lightning summit attendees: did any Block Inc employee explicitly tell the non-Block attendees that the SuperScalar construction was a Block Inc product before presenting it? If yes, did they give the non-Block attendees the opportunity to leave the room or not attend the session if they didn’t wish to discuss SuperScalar? For the ones who did attend a session on it, were all materials presented to them explicitly put into the public domain or under a Creative Commons license?
As a reminder, it’s not like “closed-door” meetings, where only some developers were invited, haven’t raised many controversies in the past, of which the Hong Kong Agreement of 2016 is a good example.
So far there has been no answer on a public forum or channel from TheBlueMatt, a Block Inc employee, on how the invitation list was composed (Follow-up Lightning Dev Summit · Issue #1201 · lightning/bolts · GitHub) and whether any technical criteria were followed.
Some in the community could wonder whether it wasn’t just an opportunistic way to lobby some lightning devs about Block Inc’s SuperScalar product.
Concerning lightning summits, I think Rusty Russell set a good standard in the past in matters of open-source protocol meetings, where the invitations for the Adelaide meeting of 2018 were announced on the mailing list: [Lightning-dev] Lightning Developer Summit #2: Adelaide, Australia 2018-10-08 and 2018-10-09
I certainly don’t wish to accuse TheBlueMatt of hypocrisy or double standards on a public forum in matters of open-source, nor to suggest that he forgets his open-source standards every time he changes corporate employer. After all, I contributed with him for years on the `rust-lightning` open-source project.
The initial SuperScalar was lousy — it was just laddered timeout trees, without the Decker-Wattenhofer. This was developed a few months ago, internally to Block, and not released publicly because it sucked. A few weeks ago, while looking over @instagibbs work on P2A, I realized that P2A handled the issues I had with Decker-Wattenhofer — in particular, the difficulty of having either exogenous fees (without P2A, you need every participant to have its own anchor output, increasing the size of each tx with an output per participant) or mutable endogenous fees (because not every offchain transaction is changed at each state change, earlier transactions cannot change their feerates when you update the state for a feerate change), which is why I shelved Decker-Wattenhofer constructions and stopped work on sidepools, which used Decker-Wattenhofer. However, with P2A, I realized that Decker-Wattenhofer was actually more viable — and thus can be combined with timeout trees, creating the current SuperScalar. I then rushed to create a quick writeup, got it reviewed internally, and got permission to publish it on Delving, so we could present this at the summit. @moneyball believes it provides a possible path to wider Lightning onboarding, so this encouraged me to focus more attention on it and figuring out its flaws and limitations.
I don’t know if you read the mailing list, but the timeout trees concept was presented there. If I remember correctly, numerous limitations were pointed out. Beyond that, it’s not like P2A doesn’t sound broken too, as a fee-bumping scheme for off-chain counterparties with competing interests.
(You can track my thinking around laddered timeout trees by my Twitter posts, incidentally — again, I only started posting about timeout trees, and musing on laddering them, in the past 6 months. Thinking takes time.)
I’m not on Twitter. Social media culture can only make you dumb, and more inclined to follow the madness of the crowd. Good to read again the philosopher Hannah Arendt, her writings on the crisis of culture or some of her essays on the essence of totalitarianism.
Adding Decker-Wattenhofer to laddered timeout-trees was an idea that occurred to me literally a few weeks before the summit. Again, remember that I stopped working on sidepools because Decker-Wattenhofer sucked (exogenous fees are out because you need an output per participant, endogenous fees cannot be meaningfully changed because each state transition only changes a subset of offchain transactions), and I only returned my attention to Decker-Wattenhofer after @instagibbs could release P2A for Bitcoin Core 28. Without thinking about Decker-Wattenhofer, I cannot combine Decker-Wattenhofer with timeout-trees, obviously. The timing was coincidental, not planned. Thinking takes time.
The main issue with Decker-Wattenhofer, whatever the elegance of the construction, is that for each state update the relative timelocks are decremented, and when they get near expiration you can suddenly have a massive surface of transactions to fee-bump, at the worst time of block demand.
On the other hand, the long safety timelocks can only make the construction very burdensome for the user, as they would have to wait until the on-chain expiration of those timelocks if there is an early exit.
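A minimal Python sketch of that decrementing-timelock bookkeeping (the initial relative timelock and the decrement step are illustrative assumptions, not values from any deployed protocol):

```python
# Each state update lowers the relative timelock on the new state tx, so
# later states confirm before earlier ones; once the decrement budget is
# exhausted, the construction must close or re-fund on-chain.

def remaining_timelock(initial_csv, decrement, n_updates):
    left = initial_csv - n_updates * decrement
    if left <= 0:
        raise RuntimeError("timelock exhausted: the factory must close on-chain")
    return left

print(remaining_timelock(432, 24, 10))  # 192 blocks of safety margin left
print(remaining_timelock(432, 24, 17))  # 24 blocks: near expiry, the worst
                                        # time to have to fee-bump everything
```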
I have not made any representation that the construction has been peer-reviewed meaningfully, even internally in Block. Given the addenda I have been writing, it is very much a work-in-progress, one that I am taking all of my time to work on rather than anything else. If anyone can poke holes into it, they can do so in this very thread, that is the whole point of making this thread and presenting it at the summit. I am in fact taking the time here to allow people to respond. All I did was present this to the people at the conference, and I believe it to be worth my time to think about and refine.
Again, see all the comments above about the timing of this publication coinciding with the summit, and about that being explicitly pointed out by another Block Inc employee in the GitHub thread about the lightning dev summit.
In 2017, the way the extension block proposal was brought into the public conversation also raised controversies, when it was published in a Medium post rather than through the usual communication venues.
I have been advised to avoid mentioning of legal or regulatory stuff, as I am not a lawyer and anything I say about regulations, legal frameworks, or regulatory bodies would not be expert advice on it. Let me ask my supervisor about this.
Why can’t your supervisor comment here on this public forum under its name? That only sounds more like corporate capture of the lightning protocol, when questions are asked about how a “closed-door” meeting happened. Some folks say it was just pure coincidence, the timing of the publication and of the meeting, and when more questions are asked, someone falls back on “let me ask my boss…”, as if they were just doing Silicon Valley-style bad public relations…
Corrected: s/this name/its name/g - English can be hard.
Personally, I’m fine too with talking about technical and legal matters in a public fashion, and I think a few other bitcoin and lightning protocol devs are versed in legal matters too.
Again, it’s not like your top-down hierarchical supervisor, Jack Dorsey, hasn’t freely given his contacts to some devs in that space in the past, and people can go ask for explanations in private if they wish to know more. But if a supervisor at Block Inc is not able to explain on a public forum why there are some irregularities in the organization of a “closed-doors” meeting about an open-source protocol, this can only raise doubts in the bitcoin community about what that SuperScalar product is all about.
We are aware of this issue, and this issue was also presented at the Lightning Proto Dev Summit. However, the tree structure does allow for subsets to sign off on changes instead of requiring the entire participant set to be online to sign; this reduces the need for large groups to come online.
If only some participants are signing off on the changes, the construction is broken, as the others cannot verify that subsequent state changes are correct, and the LSP and another subset of the group can collude to double-spend the balance. The whole lightning security model is about not trusting the channel counterparty, or the Lightning Service Provider for what the LSP is worth, with loss-of-funds-style risk.
@adiabat last year at TABConf had a talk about how onlineness (specifically, coordination problems with large groups) will be the next problem; tree structures allow us to reduce participant sets, regardless of consensus change, and tree structures do still require a multiple of N data to publish N leaves. Full exits of `OP_TLUV`-style trees are even worse, as they require O(N log N) instead of O(N) data (m * N data to be specific, where `m` is reduced by higher arity). `OP_TLUV`-style trees also cannot change a subset without changing the root, which means they do not gain the ability of SuperScalar to have subsets sign off on changes; this is because `OP_TLUV`-style trees use Merkle trees, unlike timeout trees, which use transaction trees that allow sub-trees to mutate if you combine them with Decker-Wattenhofer.
Lol, Tadge saying that onlineness and coordination problems with large groups will be the next problem. It’s not like it’s something the OGs working on Lightning have known for years… and then some.
Sure, OP_TLUV-style trees cannot change a subset without changing the root; however, if done correctly, they do not require trust in any of the other counterparties regarding the integrity of the balance.
I have been refining SuperScalar to shift much of the risk to the LSP, precisely to prevent risks on clients. You may not agree that it goes far enough, but I think it can be done in practice such that it is more economical for the LSP to not screw over its clients, just as typical capitalism works.
That’s missing the point of the discussion about levels of security risks. What your presentation of this SuperScalar product is saying is that the LSP can rug-pull any one of the users at any time, so it’s a loss-of-balance security risk. Not a simpler risk like a delay in processing due to the bitcoin CSV timelocks.
In the real world, there is something called a “bank run”, and as someone put it into the bitcoin blockchain many years ago, “Chancellor on brink of second bailout for banks”.
You cannot earn money from a dead customer, which is why capitalism works
I’ll let you dig into the etymology of “mortgage”, a financial instrument underpinning many business operations in modern capitalism.
The thing about SuperScalar is that timeout trees suck because of the large numbers of transactions that need to be put onchain in the worst case, Decker-Wattenhofer sucks because of the large numbers of transaction that need to be put onchain in the worst case, but when you mash them together they gain more power while not increasing their suckiness — their suckiness combines to a common suckiness instead of adding their suckiness together. Timeout trees get the ability to mutate due to combining with Decker-Wattenhofer, while Decker-Wattenhofer gets the ability to mutate using partial participant sets. The whole is better than the sum of its parts, and I think it is worth my while to investigate if this is good enough in practice.
That SuperScalar construction still does not solve the “Forced Expiration Spam” problem of a massive number of off-chain state transactions hitting the blockchain: whoever is paying the on-chain fees, be it the LSP or the channel counterparty, there is a limited blockchain space (4MB) and a limited number of coins (`MAX_MONEY`) that can be used to pay the fees. The LSP can inflate the timeout trees with actually no fee-bumping reserves to back them up, so it’s clearly trusted, and the LSP can steal from the participant sets at any time.
But sure, what you’re doing, or what Block Inc is doing with you, is up to you guys.
The rest of the lightning community will prefer something that scales bitcoin in a more trust-minimized fashion.
I can state that Block has no intention of patenting SuperScalar or otherwise limiting the use of SuperScalar. The entire point of publishing the damn thing on delving and in presenting it to the summit was to bring it out to the public, WTF. There are no copyright claims or similar because a lot of this is just initial design notes. I am designing it in public, right here, on delving.
I have been advised to not engage you in anything that is non-technical. Given that your technical expertise has not raised anything that has not been raised before, I am also no longer engaging you in anything technical, either.
I can state that Block has no intention of patenting SuperScalar or otherwise limiting the use of SuperScalar. The entire point of publishing the damn thing on delving and in presenting it to the summit was to bring it out to the public, WTF.
If all that Block Inc did in the organization of this summit, where SuperScalar was presented in “exclusivity” to selected devs, was respectful of how Lightning development has been done in open-source fashion since 2016, so be it. There is still no answer from any Block Inc employee on how it was really organized.
It’s not like I’ve myself organized open-source protocol dev meetings in the past, abstracting away people’s backgrounds or the organizations (be they for-profit, non-profit or whatever) they were representing, to concentrate the discussion on purely technical matters…
This silence from Block Inc speaks volumes in itself…
Going back to SuperScalar, and the chain economics and deep technicals here.
Let’s say you have the initial transaction with the LSP and all the users, i.e. the root transaction.
Under the Decker-Wattenhofer update mechanism, channel factories have two stages: kick-off and state transactions, spending the root transaction. Each state transaction has a decrementing timelock, and attached to each state transaction there is a timeout tree, where after a timelock X either the user should have come back online to interact with the LSP to update the tree, or the LSP (plus some users, to sign the multisig of the state transaction) can evict the user out of the tree.
There is a k-factor at each output of the state channel factory transaction, to branch off and fan out the users into the subtrees.
So if you assume a k-factor of 2 (at each branching of the timeout tree) and 8 users, that’s 12 transactions that have to confirm in the worst case. That means, in the worst case, either the user (if they wish to make a fully non-assisted exit) or the LSP must have on-chain amounts available to confirm the 4 transactions constituting the path before the safety timelock expiration. For the LSP, it’s even worse, as they must have liquidity for all the combinations of the timeout tree, and this at a time when mempools might be full.
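For readers who want to reproduce the counting, here is a hedged Python sketch; the exact totals depend on which transactions (root, kick-off, per-level state transactions) are counted as overhead or as already on-chain, so the formulas below are illustrative rather than a claim about the precise figure of 12:

```python
# Worst-case confirmation counts for a k-ary timeout tree with N users.

def tree_depth(n_users, k):
    depth, cap = 0, 1
    while cap < n_users:
        depth += 1
        cap *= k
    return depth

def exit_path_len(n_users, k, overhead=1):
    # txs on one user's non-assisted exit path: kick-off/state overhead
    # plus one node transaction per tree level
    return overhead + tree_depth(n_users, k)

def full_tree_txs(n_users, k, overhead=1):
    # txs the LSP must be able to confirm if the whole tree goes on-chain:
    # overhead plus every node of the k-ary tree below the root
    total, level = 0, n_users
    while level > 1:
        total += level
        level //= k
    return overhead + total + 1  # +1 for the topmost tree node

print(exit_path_len(8, 2))  # 4: matches the 4-transaction path above
print(full_tree_txs(8, 2))  # 15 under this model; totals vary with what
                            # is counted as overhead vs. already on-chain
```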
So, I’ll re-state what I said above in one of my previous posts: under the current block size (4 MB) and the capped number of bitcoins that can be used to pay the fees, I don’t see how SuperScalar works at all under the “Forced Expiration Spam” scenario described in section 9.2 of the lightning whitepaper. As a reminder, that problem was well described by protocol experts before I was involved in bitcoin dev, so don’t take the shortcomings I’m pointing out about SuperScalar as ad hominem here.