They certainly could; however, increasing the L1 block limit probably isn't the way to do it. When Lightning's original paper came out, it was already known that it wouldn't scale to 7 billion people on its own. Instead of modifying L1 further, it might be better to design a layer either between the timechain and Lightning, or on top of Lightning.
Does anyone pushing big blocks run a node themselves? That is the main issue with, say, 32MB blocks. It's not that most consumer hardware couldn't handle them (though they would make decentralisation over low-power, non-traditional IP networks and low-bandwidth connections more difficult), but the storage requirements would change enormously. At 32MB per block and 144 blocks per day, that is 4.608GB of new storage per day. My node is currently storing ~658GB (ignoring Electrum indexes too). With 32MB blocks my hardware could totally handle the load, but you'd add the equivalent of the entire current chain in less than half a year. I can't handle buying a 1TB drive every 7.12 months, and I doubt many other node operators would be fond of this either.
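For concreteness, here is the arithmetic behind those figures as a quick sketch (the ~658GB figure is my node's current usage; everything else follows from 32MB blocks being consistently full):

```python
# Rough storage-growth arithmetic for 32MB blocks (illustrative only).
BLOCK_SIZE_MB = 32
BLOCKS_PER_DAY = 144                   # ~one block every 10 minutes
CURRENT_CHAIN_GB = 658                 # my node's current unpruned usage

daily_gb = BLOCK_SIZE_MB * BLOCKS_PER_DAY / 1000       # 4.608 GB/day
months_to_fill_1tb = (1000 / daily_gb) / 30.4          # ~7.1 months per 1TB drive
days_to_match_chain = CURRENT_CHAIN_GB / daily_gb      # ~143 days to add ~658GB

print(f"{daily_gb:.3f} GB/day; a 1TB drive fills in ~{months_to_fill_1tb:.1f} months")
print(f"Today's chain size would be added roughly every {days_to_match_chain:.0f} days")
```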
Maybe investigate opcodes that enable additional layers without causing problems, or better yet, layers that need no new opcodes at all, but I don't think this is viable. Not on the base layer.
I have multiple full nodes, some pruned, some unpruned, and I have been running one of them since 2013. I don't think we should go straight to 32MB blocks, but we can examine the consequences of them as a worst-case scenario.
4 TB SSDs, even with the recent run-up in prices, can be had for $200, which is $0.05/GB. So if you feel the need to run an archival node, and if you feel the need to serve it from an SSD, your storage costs would be about $0.25/day, or roughly $90/year. If you prune, storage costs don't matter at all, and if your application allows for a spinning-disk hard drive, storage costs would be about a third of the SSD figure, around $30 a year.
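As a rough sketch of that math (assuming consistently full 32MB blocks and the $200 / 4TB price above; the yearly figures round slightly differently depending on how full blocks actually are):

```python
# Storage-cost arithmetic for an archival node at 32MB blocks (rough sketch).
SSD_PRICE_USD = 200
SSD_CAPACITY_GB = 4000
DAILY_GROWTH_GB = 32 * 144 / 1000        # 4.608 GB/day if blocks are full

usd_per_gb = SSD_PRICE_USD / SSD_CAPACITY_GB      # $0.05 / GB
cost_per_day = DAILY_GROWTH_GB * usd_per_gb       # ~$0.23 / day
cost_per_year_ssd = cost_per_day * 365            # ~$84 / year on SSD
cost_per_year_hdd = cost_per_year_ssd / 3         # ~$28 / year on spinning disk

print(f"${usd_per_gb:.2f}/GB; ~${cost_per_year_ssd:.0f}/yr SSD, ~${cost_per_year_hdd:.0f}/yr HDD")
```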
We should keep in mind that while some people do need to run archival nodes, most do not. It is also worth considering that there is every reason to believe SSD prices will eventually resume the steep downward trend they have been on for the past 10 years.
I'll admit that I would not be a fan of 32 MB blocks today, especially since at first they would just be filled with low-fee junk. Considering what we have seen with ordinals, it would probably be better to start small, with just a doubling to 8 MB. Then a 4 TB SSD would be sufficient for the next 15 years.
Who is the person who would prefer to pay $50 each time they transact on-chain rather than $22 per year in storage costs?
As I understand it, persistent storage is not the issue with block size. The main issue is rather the time it takes to propagate a new block from a miner.
A large miner would strongly prefer to withhold its blocks from the smaller miners on the network, releasing them only once a smaller miner has found a competing block, in order to orphan the smaller miners' blocks and effectively remove their hashrate from the network.
The above can be done deliberately, but a well-connected set of mining nodes with low latency to each other can do this accidentally to less-well-connected sets of mining nodes. The larger the block size, the more likely such an accident becomes, and the greater the resulting pressure to centralize mining. It starts with well-connected mining nodes, then colocated mining nodes, then co-owned mining nodes.
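To put rough numbers on the propagation concern: treating block arrivals as a Poisson process with a 600-second mean interval, a miner whose block takes t seconds to reach the rest of the hashrate expects roughly 1 - e^(-t/600) of its blocks to be orphaned. A minimal sketch (the delay values are hypothetical, and this ignores relay optimizations such as compact blocks):

```python
import math

MEAN_BLOCK_INTERVAL_S = 600   # target 10-minute block interval

def orphan_rate(propagation_delay_s: float) -> float:
    """Approximate probability a competing block is found while ours propagates.

    Assumes Poisson block arrivals; ignores relay optimizations like compact blocks.
    """
    return 1 - math.exp(-propagation_delay_s / MEAN_BLOCK_INTERVAL_S)

for delay in (2, 10, 30, 120):   # hypothetical propagation delays in seconds
    print(f"{delay:>4}s delay -> ~{orphan_rate(delay) * 100:.1f}% orphan rate")
```

The point is the shape of the curve, not the absolute numbers: whatever the real delays are, larger blocks push less-connected miners further up it while well-connected miners stay near zero.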
Another objection is that every block must be transmitted to every other validator. This was in fact the first objection in the first-ever reply to the bitcoin.pdf paper on the cypherpunks mailing list: each block needs to be sent to every other participant in the fullnode network. Yes, you can argue that not everyone has to participate directly in the fullnode network. Nevertheless, the argument "we should increase block size by N times" implies "we expect N times more fullnodes on the network, each sending N times more data to the others, for N^2 times the total global bandwidth consumption", which seems the opposite of "efficiency". Ultimately, all costs incurred by the network must by necessity be paid for, including bandwidth, and paid for they shall be. Increasing block size is a massive decrease in efficiency and is the opposite of what you ultimately want.
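As a toy model of that scaling (the baseline node count and per-node data volume are placeholders; only the shape of the result matters):

```python
# Toy model: if block size and fullnode count both scale by N, total relay
# bandwidth scales by N^2. Baseline numbers are placeholders, not measurements.
def total_bandwidth_gb(base_nodes: int, base_gb_per_node: float, n: float) -> float:
    nodes = base_nodes * n               # N times more fullnodes
    per_node_gb = base_gb_per_node * n   # each receiving N times more block data
    return nodes * per_node_gb

base = total_bandwidth_gb(20_000, 17, 1)
for n in (1, 2, 8):
    print(f"N={n}: {total_bandwidth_gb(20_000, 17, n) / base:.0f}x total bandwidth")
```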
On the other hand: can we at least first try to discover ways to scale Bitcoin without ever increasing block size? It has worked well enough so far (it has been 8 years since SegWit increased block size) and I can comfortably use Lightning Network today. I admit I am a nerd and can use Electrum on desktop comfortably for LN (pointing to my electrumx instance on my basement tower server with the fullnode), but with enough effort and iteration I believe this can be made a lot more comfortable to most people, including most people living in my country where 5 USD a week would be an onerous burden. I am also working on figuring out how to get the last mile more comfortably onboarded without as much onchain footprint, which translates to less resource use and therefore savings that ultimately get passed on to end-users.
Ultimately, we should focus on reducing resource use of non-mining-related resources; the consumption of energy by mining is the security provided by mining, but this does not apply to the rest of the system, including resources spent on sending mined blocks. Increasing the block size increases total resource use. We should be restricting who can see transactions (the way Lightning Network does, which is why it is a scaling solution, unlike block size increase), not increasing how many publicly-visible transactions can be seen by everybody.
It is true that larger blocks are a centralizing pressure on miners. But there are several questions that must be answered before we can decide whether a given centralizing force is to be avoided or not. One such question relates to the goals of decentralization itself, and whether slightly more or less centralization of miners has any practical effect on Bitcoin's censorship-resistant qualities. All else being equal, less centralization would be better, but we must treat the question as a matter of cost-benefit.
Another question relates to the balance and magnitude of the other centralizing and decentralizing forces that operate on miners, and how propagation delays would affect the result. For instance, it is a fact that stranded and excess electricity are geographically distributed around the globe and across different political regimes; more generally, cheap electricity is also well distributed. Losses to small miners from propagation delays would need to be large enough to offset the higher electricity costs incurred by co-locating where propagation losses are smaller. Aren't these two costs currently an order of magnitude or two apart, with electricity costs dwarfing propagation-delay losses in miners' location decisions? And to clear up an earlier point: there is only a finite quantity of free or low-cost electricity in each location where it exists. This is a very strong force keeping miners geographically distributed.
Indeed there are N^2 dynamics at play, which can quickly cause problems if we aren’t careful. So we need to be careful.
I disagree that we need to consider the 'efficiency of the network as a whole' in the way you are advocating. Instead, we primarily ought to consider current and potential end users, and whether the costs and benefits work out for them to become or remain users. And generally, we ought to want Bitcoin to be a viable choice for transacting for as many people as possible, for both practical and philosophical reasons. Individuals simply do not worry about their impact on "total global bandwidth consumption". Not at all. And for the professionals who keep the internet up and running, the push is for more fiber, faster switches, and better overall connectivity, not the rationing of bandwidth. There is a parallel between your point about N^2 blocksize scaling and the advances in megapixel counts for smartphone cameras. Higher pixel counts lead to better photos (taking up more storage), but they also encourage people to take more photos because of the better results. Indeed, on my smartphone today I have (checks phone) 236 GB of photos and videos. Which is almost embarrassing, but no one argued against better phone cameras for fear of this outcome.
Perhaps that is a reason why phone cameras aren't 200 MP already, because storage does matter and diminishing returns exist. But N^2 consumption of a resource doesn't mean an approach is flawed, or that increasing N has no benefit; it just means that over some relatively small range of N, consumption of available resources goes from trivial to manageable to unworkable.
In terms of the viability of the network as a whole, we should keep in mind that not all nodes serve blocks, so the burden of the N^2 network traffic can be concentrated. For this type of question, though, we ought to use the cheapest bandwidth available as the measure, since there is no reason block-serving nodes couldn't locate where it is found. For instance, in Switzerland there are 10 Gbit/s fiber-to-the-home plans for $100 USD/month, which could serve roughly 2,500 terabytes of data per month at about 1 GB/second. That works out to $0.04 per TB transmitted. Bitcoin's chain is currently just under 600GB of data, or about 2.4¢ in server transmission costs per IBD. It would cost about $480 in total to provide IBD data to all of the roughly 20,000 currently existent nodes. The 17GB/month needed to keep a node in sync with 4MB blocks, across 20k nodes, would total about 332 TB, at a cost of about $13. $13 to feed the entire trillion-dollar-plus network with data for a month. Multiply this by 10 and then square it, and it would still be a completely trivial cost.
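Spelling out that arithmetic as a quick sketch (the plan price, chain size, node count, and per-node monthly data are the figures I used above, all approximate):

```python
# Bandwidth-cost arithmetic for block-serving nodes on cheap fiber (illustrative).
PLAN_USD_PER_MONTH = 100
PLAN_TB_PER_MONTH = 2500          # ~1 GB/s sustained for a month
CHAIN_SIZE_GB = 600               # approximate current chain size
NODE_COUNT = 20_000               # approximate reachable node count
MONTHLY_SYNC_GB = 17              # ~4 MB * 144 blocks * 30 days

usd_per_tb = PLAN_USD_PER_MONTH / PLAN_TB_PER_MONTH          # $0.04 / TB
ibd_cost_one_node = CHAIN_SIZE_GB / 1000 * usd_per_tb        # ~$0.024 (2.4 cents)
ibd_cost_all_nodes = ibd_cost_one_node * NODE_COUNT          # ~$480
monthly_sync_cost = MONTHLY_SYNC_GB * NODE_COUNT / 1000 * usd_per_tb   # ~$13-14

print(f"${usd_per_tb:.2f}/TB; IBD for every node ~${ibd_cost_all_nodes:.0f}; "
      f"monthly sync for every node ~${monthly_sync_cost:.0f}")
```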
I don't disagree about the importance of Lightning, or of other scaling layers. And I do agree that the base chain will never have the capacity to settle even a meaningful fraction of global demand for transactions. But neither of those truths suggests we shouldn't increase on-chain capacity to the largest practical and safe amount that current conditions allow. We should. And today that amount is larger than the current blocksize.
There is no dichotomy here. It is not one or the other. We can strive to reduce demand for on-chain transactions by creating other options that are superior, while at the same time increasing on-chain capacity to its safe and viable limits. Bitcoin is an inherently inefficient design, and it has been enormously popular despite that.
In a network where demand for on-chain volume fluctuates, only a dynamic block size limit can be optimal.
To me the block size debate is all about who pays for the sustainability of the network when the block subsidy becomes negligible.
With a small block size limit, you have a network with more competition and therefore transactions that pay more. I wouldn't argue that only big institutions could use it; people would just be more encouraged to use a good L2 instead of L1.
With a big block size limit, you don't have as much competition over the long term. (I'd be glad to be proven wrong if you could give me an optimal block size function.) If you don't have enough competition, people won't pay enough, and you'll inevitably have to add a tail emission.
As I said, it's all about who pays the price of securing the network. With small blocks, it's the people who make on-chain transactions who pay to secure everyone's coins. With big blocks, you end up with tail emission, which means everyone pays for security, even those who don't transact.
Given the impossibility of quantifying an optimal blockspace demand, are you aware of any proposals for a dynamic blocksize that would work similarly to the difficulty adjustment?
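To make concrete what kind of rule I mean, here is a naive sketch of a median-based limit that retargets on the same cadence as difficulty (all parameters are hypothetical, not from any existing proposal; Monero's dynamic block weight works in roughly this spirit):

```python
# Naive sketch of a dynamic block size limit, retargeted like difficulty.
# Purely illustrative: the window, floor, target fill, and step bound are hypothetical.
from statistics import median

RETARGET_WINDOW = 2016            # same cadence as the difficulty adjustment
FLOOR_BYTES = 1_000_000           # never shrink below this
MAX_STEP = 1.25                   # clamp how far the limit can move per retarget

def next_block_size_limit(recent_block_sizes: list[int], current_limit: int) -> int:
    """Propose the next limit from median usage over the last retarget window."""
    target = 2 * median(recent_block_sizes[-RETARGET_WINDOW:])   # aim for ~50% full blocks
    # Only move gradually in either direction, similar to difficulty's clamp.
    clamped = min(max(target, current_limit / MAX_STEP), current_limit * MAX_STEP)
    return max(FLOOR_BYTES, int(clamped))

# Example: blocks averaging ~900 kB under a 1 MB limit nudge the limit up to 1.25 MB.
print(next_block_size_limit([900_000] * 2016, 1_000_000))
```

The open question, of course, is whether any such rule can avoid being gamed by miners stuffing their own blocks to push the limit around, which is why I am asking about existing proposals rather than suggesting this one.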