P2share: how to turn any network (or testnet!) into a bitcoin miner

Part of the confusion here is my fault. The original post presented a less generalized version of p2share, one in which share issuance was more similar to Bitcoin’s (a shrinking-pie model). Then, later in the thread, I presented a more refined version which, I believe, is equivalent to PPLNS in expectation. Then, to confuse things further, I presented the most general version, in which the “parameters” (shape of the share issuance schedule, block time, difficulty adjustment algorithm, etc.) can all be fine-tuned.

Now, to get on the same page with regard to nomenclature, when I refer to “shares” I am here referring to the sharechain’s unit of account. You can imagine a sharechain as its own network which operates similar to bitcoin, and just like bitcoin has output amounts (sats) in its utxos, a sharechain has output amounts (shares) in its utxos.

Due to the “select a random shareholder” mechanism of p2share, unlike bitcoin itself, these shares possess an equity-like property that we might associate with more traditional for-profit enterprises. That is why I am calling them shares. Though, of course, it is important to remember that a sharechain is not itself any sort of “legal structure” in the traditional sense. In this manner it is like Bitcoin.

You seem to be imagining a sharechain shareholder having an “account” more similar to how some networks use an account-based methodology for tracking amounts rather than a utxo methodology. While it might be possible or even preferable to use that sort of mechanism within the sharechain, that is a technical detail that, in my opinion, does not really matter right now in our discussion.

I am not sure if this is where you are getting confused or not, but we are not necessarily selecting an “account.” We are randomly selecting a specific share. That specific share will (in a utxo model) be located in a certain utxo which will have a certain public key associated with it. That is all we, as sharechain nodes, know. Conveniently, that is also all we need to know.

Sure, we could “burn” the entire utxo which contains that share, but that suffers from a number of problems, some of which you have pointed out. A miner can easily minimize her number of burned shares by pulverizing her shares across many outputs.

Set all the technical concerns aside for a moment and assume that the sharechain has solved a Bitcoin block and the reward (subsidy + fees, in sats) is R. Let’s assume we could randomly select and “burn” R sats worth of shares, in exchange for distributing those R sats to the burned shareholders. This is nearly equivalent to what happens in a share buyback in the traditional equity markets.

However, mathematically the buyback method is equivalent, in expectation, to the much simpler, and conveniently tractable/implementable, alternative: “select a single share at random and distribute the entire reward R to the owner of that share.”
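This equivalence in expectation is easy to check numerically. Here is a minimal sketch; the holder names and the share/reward numbers are illustrative inventions, not figures from the thread:

```python
import random

# Illustrative sketch: paying the entire reward R to the owner of one
# uniformly selected share gives each holder an expected payout of
# R * (their shares / total shares) -- the same expectation as a
# pro-rata R-sat buyback.

def expected_payouts(holdings, reward):
    """Analytic expectation under 'pick one share, pay the whole reward'."""
    total = sum(holdings.values())
    return {owner: reward * s / total for owner, s in holdings.items()}

def simulate_single_share(holdings, reward, trials=200_000, seed=42):
    """Monte Carlo: draw one share uniformly at random, pay its owner everything."""
    rng = random.Random(seed)
    owners = list(holdings)
    weights = [holdings[o] for o in owners]
    paid = {o: 0 for o in owners}
    for _ in range(trials):
        winner = rng.choices(owners, weights=weights)[0]
        paid[winner] += reward
    return {o: paid[o] / trials for o in owners}

holdings = {"alice": 70, "bob": 20, "carol": 10}  # shares (illustrative)
R = 625_000_000                                   # reward in sats (illustrative)

print(expected_payouts(holdings, R))       # pro-rata expectations
print(simulate_single_share(holdings, R))  # simulated averages converge to them
```

The simulated per-round averages converge on exactly the pro-rata figures, which is the sense in which single-share selection and the buyback are the same in expectation.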

A major advantage of the above simplification is that we need only select a single share at random. With what you want to do, by contrast, we would need to select a set of shares, and the set selected is necessarily a function of R and of some price signal smuggled into the sharechain (otherwise we would not know how many shares to “burn”). This is an unnecessary complication, and may not even be possible, yet the simple model of selecting a single share and distributing the entire reward to it achieves the same result, and we can let the exogenous market price things accordingly.

Again, this depends on the chosen parameters around share issuance. In the linear model, where each share issued is always tied to a constant amount of work, then even though there is an ever-growing supply of shares, the difficulty adjustment algorithm takes care of this problem for us. Say difficulty grows by 100x on the sharechain; then, sure, there will be 100x more shares issued. And yes, in such a scenario, the new “high difficulty” shareholders would have a much higher likelihood of being randomly selected for the bitcoin reward. However, they also paid (in work) for those shares. Similarly, the “low difficulty” (earlier) shareholders still have a non-zero chance of being selected. This is why, in expectation, everything works out just fine. In the linear issuance model, even though the supply of shares tends to infinity (note: it will, of course, never actually get there), everyone is fairly accounted for. This is the beauty of a difficulty adjustment algorithm tied to thermodynamic work.
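The arithmetic behind this can be sketched directly. In the toy example below (all constants are made up for illustration), an early cohort mines before a 100x difficulty increase and a late cohort mines after it; because each share always costs the same expected work, both cohorts end up with the same expected reward per hash:

```python
import math

# Illustrative sketch of the linear "work-for-shares" claim: every share costs
# the same expected number of hashes, so the expected reward per hash is
# identical for early (low-difficulty) and late (high-difficulty) miners.

HASHES_PER_SHARE = 1_000_000  # constant work per share in the linear model

def shares_for_work(hashes):
    return hashes / HASHES_PER_SHARE

early_work = 1e8    # hashes done before the 100x difficulty growth
late_work  = 1e10   # hashes done afterwards (100x more work per unit time)

early_shares = shares_for_work(early_work)
late_shares  = shares_for_work(late_work)
total_shares = early_shares + late_shares

R = 1.0  # normalized reward
ev_per_hash_early = (early_shares / total_shares) * R / early_work
ev_per_hash_late  = (late_shares  / total_shares) * R / late_work

# Dilution, not disadvantage: both cohorts earn the same expected reward per hash.
print(math.isclose(ev_per_hash_early, ev_per_hash_late))  # True
```

The late cohort holds 100x more shares, but it also did 100x more work, so per hash the expectations cancel exactly.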

Sure, there are a lot of interesting things which might be tried by a sharechain, all without affecting mainchain bitcoin. I am less focused on the specifics here though and right now just want to explore the general p2share framework to ensure that it is sound and properly Bitcoin-aligned.

Markets, especially of the open, permission-less, and unhampered kind, are very good at solving these sorts of problems. Sharechains with features or issuance models which are reckless will not do well in the atomic swap market. Nobody will want to part ways with their precious sats for those garbage shares, and those sharechains will lose hashrate (or never even achieve it in the first place) because of it. Sharechains with solid, but differentiated, feature sets and fair issuance have a much better chance of success.

Hi

I want to point out what I judge to be the key ideas I presented in my last post, because I think the final architecture deserves more debate.

Yes, I think it doesn’t matter at this point of the discussion (even though I think an account model would make it easier to save a checkpoint to the mainchain). Where I spoke about an account, you can read “UTXO” if you like (I was indeed thinking of a UTXO model).

I really think we cannot get away from burning the entire UTXO. Let me explain:

I pointed to inflation and small-miner friendliness, but there is another aspect: how long a miner has contributed to the pool. If we don’t burn, old miners would hold such a gigantic amount of shares that new miners couldn’t even compete. The result would be a closed pool for the few, which new miners couldn’t enter, and which even the old miners would eventually leave once the total pool size plateaued.

Also, that system wouldn’t be fair, because miners would be paid twice. When a miner is selected to receive the revenue, he is being paid for his work, but if he doesn’t lose his shares, work spent once would be rewarded multiple times.

I think I didn’t express myself very well. In a UTXO model, the selected UTXO would be the one that is burnt entirely; in an account-based model, the account would be burnt. That way, we don’t need to introduce a shares-to-BTC exchange rate; it’s all or nothing.

BUT, the main point, and what I think is the final solution to all the incentive problems: the above-linear probability function.

Imagine an above-linear (for example, exponential) probability function of the number of shares. Doubling the number of shares would then more than double the probability of being selected. That is, p(s1 + s2) > p(s1) + p(s2).

In an above-linear probability system, pulverizing would be severely penalized, because it is far more desirable to hold a single big UTXO than many small UTXOs. Big miners who pulverized their shares would severely undervalue the energy they had spent.
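A minimal sketch of this penalty, using `shares ** alpha` with `alpha > 1` as one possible above-linear weighting (the exponent and all share amounts are illustrative choices, not a fixed proposal):

```python
# Illustrative sketch of an above-linear selection rule: weight each UTXO by
# shares**alpha with alpha > 1. Splitting ("pulverizing") a holding across many
# UTXOs then strictly lowers its total selection weight.

ALPHA = 2.0  # any exponent > 1 is above-linear

def weight(shares, alpha=ALPHA):
    return shares ** alpha

def selection_prob(all_utxos, my_utxos, alpha=ALPHA):
    """Probability that one of my_utxos is selected, weights ~ shares**alpha."""
    total = sum(weight(s, alpha) for s in all_utxos)
    return sum(weight(s, alpha) for s in my_utxos) / total

rest_of_chain = [50.0]  # everyone else's shares, as one UTXO (illustrative)

consolidated = selection_prob(rest_of_chain + [10.0], [10.0])
split        = selection_prob(rest_of_chain + [5.0, 5.0], [5.0, 5.0])

print(consolidated > split)  # True: one big UTXO beats many small ones
```

With `alpha = 2`, holding 10 shares in one UTXO carries weight 100, while the same 10 shares split as 5 + 5 carry weight 25 + 25 = 50, so the split holder has roughly half the selection probability.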

That’s another incentive gain from the above-linear solution: shares have different values in the hands of different actors. The shares of small miners are much more valuable in the hands of bigger miners, because at the bigger miners’ point on the probability curve, the marginal increment of shares has more impact. That’s the solution for small miners: big miners and other entities are incentivized to buy their shares.

One could think this would lead to centralization, and we certainly need to run some simulations on that. But, looking from another angle, since the selected balance (whether a UTXO or an account) would be burnt, even the big miners go back down to zero. In theory, some long-running small miner could eventually accumulate an enormous amount of shares and be selected, because the competing miners would periodically go back to zero.
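A toy version of such a simulation, under the assumptions described above (above-linear `balance ** alpha` weighting and full burn of the winner); hashrates, exponent, and round count are all made-up parameters:

```python
import random

# Illustrative toy simulation of the "select and burn the whole balance"
# dynamic: balances grow in proportion to hashrate, a winner is chosen with
# probability proportional to balance**alpha (above-linear), and the winner's
# balance resets to zero.

def simulate(hashrates, alpha=2.0, rounds=50_000, seed=7):
    rng = random.Random(seed)
    balances = [0.0] * len(hashrates)
    wins = [0] * len(hashrates)
    for _ in range(rounds):
        for i, h in enumerate(hashrates):
            balances[i] += h  # shares accrue with work
        weights = [b ** alpha for b in balances]
        winner = rng.choices(range(len(hashrates)), weights=weights)[0]
        wins[winner] += 1
        balances[winner] = 0.0  # burn the selected miner's entire balance
    return wins

# One big miner (10x hashrate) and two small miners: the big miner wins most
# rounds, but because winners reset to zero, the small miners still win too.
print(simulate([10.0, 1.0, 1.0]))
```

In this toy run the big miner wins most rounds, but because each win resets its balance, the small miners' balances keep climbing between big-miner wins and they are selected at regular intervals, which is the "everyone goes back to zero" effect described above.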

I just pointed out coordination because it could be another means of getting paid. Miners could coordinate inside the pool in order to “join their shares” and distribute them on-chain via a coordinated payout tree. The sidechain would be a coordination point by itself, as a data-availability medium, and since we are dealing with miners, they would certainly be online more frequently.

This is not correct, at least not in a linear “will work-for-shares” issuance model. It is only correct if you assume that shares are issued in some other manner which gives early sharechain miners an “advantage” with regard to required work (expected # hashes) per share they receive. That is what Bitcoin itself did, and that is fine for Bitcoin (without it, we could never even contemplate the p2share model!). However, part of the confusion is that the original post did present such an issuance model, where shares were issued non-linearly in the shrinking-pie, Bitcoin-like way.

However, the more refined linear model I proposed, coupled with a suitable difficulty adjustment algorithm, explicitly fixes that objection.

The “paid multiple times for the same work” concern doesn’t apply in the linear issuance model.

Every new unit of work adds new shares and proportionally dilutes all existing shares. Because selection is random over all shares, every share, regardless of when it was created, has the same expected value. There is no persistent extra advantage from “early” work; it’s continuously diluted by later work.

To describe behavior cleanly, fold difficulty into expected value per hash:

  • Let EV_sc be the expected sats per unit hash on the sharechain (this already accounts for sharechain difficulty and share price).
  • Let EV_mc be the expected sats per unit hash on mainchain.
  • Let P* be the share price at which EV_sc = EV_mc.

Then miners simply arbitrage:

1. Mining decision (where to point hash):

| Condition | Mining action |
| --- | --- |
| EV_sc > EV_mc | Mine on sharechain |
| EV_sc < EV_mc | Mine on mainchain |
| EV_sc ≈ EV_mc | Indifferent |

2. Trading decision (what to do with shares):

| Market share price | Trading action |
| --- | --- |
| Price > P* (“too high”) | Sell shares (including newly mined) |
| Price < P* (“too low”) | Buy shares (if you want more exposure) |
| Price ≈ P* (“just right”) | Indifferent; no strong trade implied |

So all shares have the same expected value, and the “right” behavior is just: point hash where EV per hash is higher, and trade shares when their market price deviates from the fair P*.
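Those decision rules can be written down directly. A minimal sketch, assuming the linear issuance model where each hash earns a fixed expected number of shares; `SHARES_PER_HASH` and `EV_MC` are invented constants for the example:

```python
import math

# Illustrative sketch of the mining/trading arbitrage rules. EV_sc is linear
# in the market share price, and P* is the price that equalizes sharechain
# and mainchain expected sats per hash.

SHARES_PER_HASH = 1e-6  # expected shares earned per hash on the sharechain
EV_MC = 5e-5            # expected sats per hash on mainchain

def ev_sharechain(share_price_sats):
    """EV_sc in sats per hash at a given market price per share."""
    return SHARES_PER_HASH * share_price_sats

P_STAR = EV_MC / SHARES_PER_HASH  # price where EV_sc == EV_mc (~50 sats here)

def mining_decision(share_price_sats, rel_tol=1e-9):
    ev_sc = ev_sharechain(share_price_sats)
    if math.isclose(ev_sc, EV_MC, rel_tol=rel_tol):
        return "indifferent"
    return "sharechain" if ev_sc > EV_MC else "mainchain"

def trading_decision(share_price_sats, rel_tol=1e-9):
    if math.isclose(share_price_sats, P_STAR, rel_tol=rel_tol):
        return "hold"
    return "sell" if share_price_sats > P_STAR else "buy"

print(mining_decision(60.0), trading_decision(60.0))  # sharechain sell
print(mining_decision(40.0), trading_decision(40.0))  # mainchain buy
```

At a price above P* the miner both points hash at the sharechain and sells the resulting shares; below P* the decisions flip, which is exactly the arbitrage that pulls the market price toward P*.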

Have you taken a look at this yet?

Thanks for your reply.

It has been a while since I looked at Braidpool. What they mean by shares in their documentation is subtly different from what I am exploring in this thread (and, in a more economics-focused fashion, over in this thread).

From Braidpool’s readme:

Custody of accumulated coinbase rewards and fees is performed by a large multi-sig among miners who have recently mined blocks using the FROST Schnorr signature algorithm. Consensus rules on the network ensure that only a payout properly paying all miners can be signed and no individual miner or small group of colluding miners can steal the rewards.

I have not yet been able to fully grasp how they intend to make the above claim hold. So one of the focuses of my research is to avoid introducing such a signing aspect and instead use game theory and random share selection to keep it entirely non-custodial.