PPLNS with Job Declaration

I’m sketching out an SV2 extension that allows miners to verify the pool’s payouts when miners are allowed to select transactions, and hence to mine jobs with different fees.

The extension is based on a system proposed here; this system is built on top of PPLNS.

You can find the extension spec here.

Here instead I implement the extension messages using SRI libraries.

Here I started implementing a miner translator proxy that uses the extension.

Everything above is still a work in progress and needs review.


I’m a Spiral grantee working as a contributor to the StratumV2 Reference Implementation (SRI).

At a time when the Bitmain FPPS debt-slavery cartel is pushing Bitcoin towards dangerous levels of mining centralization, this post is extremely relevant, and I would love to see more engagement from mining players here.

The ideas presented in the paper titled PPLNS with Job Declaration are academically sound, and it’s refreshing to see that Demand Pool has some talented minds proposing a tangible path out of the dark situation Bitcoin currently finds itself in.

Remember that while SV2 allows for hashers to choose their own templates via Job Declaration (JD), the protocol itself is inherently agnostic to:

  • share accounting
  • reward distribution

So when a pool decides what kinds of algorithms to employ to solve those two specific problems, its design space is limited to:

  • hashers blindly trusting the pool is fair
  • feeding back information to hashers to minimize trust

And if they choose the second path, they will inevitably need protocol extensions.

Now, SV2 decentralizes via JD!

Which means hashers have the right to be paid for hashing on templates that could be economically suboptimal with regard to fee revenue. That could happen for different reasons:

  • maybe the hasher’s Template Provider (TP) node is suboptimally connected (remote location, poor internet, plebhashers) and the “mempool” it sees is suboptimal.
  • maybe they are ideologically driven and see some categories of consensus-valid transactions as spam (which is a subjective term, albeit introduced by Satoshi and ingrained into protocol primitives).
  • maybe they want to do MEVil and prioritize transactions under some specific meta-protocol.
  • maybe they want to filter out consensus-valid transactions that would hurt low-end nodes (see GCC for more).

So an economically rational pool needs a mechanism that still allows jobs with low-revenue templates, while rewarding them fairly relative to jobs with more economically optimal templates, since the latter represent work that deserves a larger reward.

And in order for hashers to be willing to put some level of trust in the pool, they need to be fed back some info to reassure them that their templates are being rewarded fairly in proportion to other hashers’. That is exactly what this new SV2 protocol extension proposes, and I’m happy it’s built on top of PPLNS rather than FPPS.

I’m looking forward to following this Discussion on the SRI repo, where Braiins, Demand, and SRI engineers are shaping the implementation details for the first ever SV2 protocol extension.


As a pleb advocate, I’m particularly curious how the proposed SV2 extension would affect transactions described under GCC, which would essentially penalize low-end nodes, even if they:

  • pay high fees
  • are consensus-valid
  • are available in the so-called “Standard/Canonical/Platonic” mempool

Let’s call these transactions GCC vectors.

How should an SV2-JD-enabled pool take that into account? I see three options:

  • A. reject all jobs that include GCC vectors in the proposed templates (as a JD policy)
  • B. impose economic penalties on jobs that include GCC vectors in their template (as a reward policy)
  • C. ignore GCC vectors

The proposed extension is not really relevant for option A, since the basic SV2 primitives already allow for that.

Option B does have some relevance here, and I’m curious as to whether this is being taken into account in the design of the proposed extension.

We only look at the total fee paid by the mined[1] job, relative to the jobs in the same slice. So we will not penalize anything; we can say that the pool takes an agnostic approach.


  1. The job for which we received the share. ↩︎
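To make the slice-relative accounting concrete, here is a minimal Rust sketch of the idea, not the actual extension code: the `Share` struct, the `slice_scores` function, and the linear fee weighting against the best job in the slice are all my own illustrative assumptions.

```rust
/// Illustrative only: a share as seen by the accounting logic.
/// Field names are hypothetical, not the extension's wire format.
struct Share {
    difficulty: f64, // work contributed by this share
    job_fees: u64,   // total fees (in sats) of the job the share was mined on
}

/// Weight each share by its work, scaled by its job's fees relative to the
/// highest-fee job in the same slice. A share mined on a low-fee template is
/// still rewarded, just proportionally less; nothing is filtered or penalized
/// based on *which* transactions the template contains.
fn slice_scores(shares: &[Share]) -> Vec<f64> {
    let max_fees = shares.iter().map(|s| s.job_fees).max().unwrap_or(1).max(1);
    let raw: Vec<f64> = shares
        .iter()
        .map(|s| s.difficulty * (s.job_fees as f64 / max_fees as f64))
        .collect();
    let total: f64 = raw.iter().sum();
    // Normalize so the slice's scores sum to 1; the slice's share of the
    // block reward would then be split according to these scores.
    raw.iter().map(|r| r / total).collect()
}

fn main() {
    let shares = vec![
        Share { difficulty: 1.0, job_fees: 50_000 }, // fee-optimal template
        Share { difficulty: 1.0, job_fees: 40_000 }, // suboptimal template
    ];
    // The higher-fee job earns more per unit of work: [0.555.., 0.444..]
    println!("{:?}", slice_scores(&shares));
}
```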

:broken_heart: from pleb noderunners.

But that is fully understandable, since it shouldn’t be up to pools or hashers to act as “benevolent guardians” of low-end nodes. Ideally, this is something to be addressed at the consensus level, not mining.

Hopefully @AntoineP will save the day for us with GCC Revival! :pray:

What prevents a pool from diluting shares within a slice/window?

I think this protocol makes sense, but it still suffers from the same issue that pools suffer from now: all of the validation data used by miners is handed out by the pool.

I think there’s a way forward with blinded signatures here that makes each proposal more robust. I’m not sure what that looks like yet, but I will post when I have something on the topic.


The fact that every miner verifies a random sample of shares, and that shares cannot be faked.

But miners can only validate using shares they know of? Where does this source of shares come from that miners are using as a random sample?

No miner can validate every share that the pool receives.
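A rough sketch of why sampling still gives strong aggregate coverage: each miner checks only a few shares, but a pool that forges even one share risks detection by some miner. The function name and sampling policy below are my own illustration (assuming the `rand` crate); the actual message used to fetch shares is discussed just below.

```rust
use rand::seq::IteratorRandom; // rand = "0.8"

/// Each miner independently picks `k` random indices out of the `n` shares
/// the pool claims are in the window, then fetches those shares from the
/// pool and checks their Merkle proofs. (Hypothetical helper, not SRI code.)
fn sample_indices(n: usize, k: usize) -> Vec<usize> {
    let mut rng = rand::thread_rng();
    (0..n).choose_multiple(&mut rng, k)
}

fn main() {
    // Probability that at least one of `m` independent samplers lands on one
    // specific bogus share: 1 - (1 - k/n)^m.
    let (n, k, m) = (10_000f64, 100f64, 500f64);
    let p_caught = 1.0 - (1.0 - k / n).powf(m);
    println!("one miner's sample: {:?}", sample_indices(10_000, 5));
    println!("detection probability across miners ≈ {:.4}", p_caught); // ≈ 0.9934
}
```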

How do they get shares they didn’t create?

They can use this message: share-accounting-ext/extension.md at 281c1cbc4f9a07b21a443753a525197dc5d8e18c · demand-open-source/share-accounting-ext · GitHub

This is not a standard SV2 message, and it can be used only with pools that support the extension.

How do they get shares they didn’t create?

See:

No, Share is a custom datatype and is encoded as described in the extension spec.

It’s implemented here: share-accounting-ext/src/data_types.rs at 281c1cbc4f9a07b21a443753a525197dc5d8e18c · demand-open-source/share-accounting-ext · GitHub
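For readers who don’t want to click through, the sketch below is only a rough illustration of the kind of data a Share carries; the field names and types are my guesses based on this thread, and the authoritative definition is in the data_types.rs file linked above.

```rust
/// Illustrative sketch only; see share-accounting-ext/src/data_types.rs for
/// the real wire format. The key point for this discussion: each share ships
/// with enough header data to recompute its hash (so shares cannot be faked)
/// plus a Merkle path proving its inclusion in the slice.
struct ShareSketch {
    version: u32,               // block header version used by the miner
    ntime: u32,                 // header timestamp
    nonce: u32,                 // nonce that met the share target
    extranonce: Vec<u8>,        // per-miner extranonce
    job_id: u64,                // the job this share was mined on
    merkle_path: Vec<[u8; 32]>, // inclusion proof within the slice
}
```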


The client gives the server a list of IDs (how is this list determined?) of shares they want to validate; I understand this piece. There is just no guarantee that the pool is providing accurate information. This is the same as using an API to query for shares.

What stops a pool from providing inaccurate or misleading share data, or omitting shares when requested?

Edit: inaccurate or misleading shares would fail the Merkle inclusion validation, but omission remains unanswered.


Yeah, I was getting confused there, thanks for the clarification!

I edited those questions out of my previous message to avoid further propagating my misunderstanding.

Yep, for inaccurate and misleading shares you have the Merkle tree. It’s not clear to me what you mean by omission.

Nvm on the omission part, I misunderstood.

Does the id of a share factor into the Merkle path for that share?

This is not needed, because when you get the actual share you can verify that it is the share at position x in the slice, so it is the one you requested.

Given a window with one slice, where that slice contains 10 shares, indexed 0…9.

Client A requests shares 1,3,5,7. The pool provides 1,3,5,7.

Client B requests shares 2,4,6,8. The pool provides 2,4,5,8, mislabeling share 5 as share 6.

How does Client B know they were deceived? Assume shares 5 and 6 are not shares they themselves submitted.

They can spot it because each share comes along with the share’s Merkle path in the slice, and hence its index.
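To make that concrete, here is a generic Merkle inclusion check in Rust. This is a sketch under my own assumptions (single SHA-256 via the `sha2` crate and plain index-bit ordering; the extension spec defines the exact hashing and tree rules), but it shows why relabeling a share’s index breaks the proof:

```rust
use sha2::{Digest, Sha256}; // sha2 = "0.10"

/// Hash two 32-byte nodes into their parent node.
fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(left);
    h.update(right);
    h.finalize().into()
}

/// Recompute the slice's Merkle root from a share's leaf hash, its claimed
/// index, and the Merkle path returned by the pool. At each level the low
/// bit of `index` decides whether the sibling goes on the left or the right,
/// so a given leaf and path only reproduce the committed root at the one
/// true index.
fn verify_inclusion(leaf: [u8; 32], mut index: usize, path: &[[u8; 32]], root: [u8; 32]) -> bool {
    let mut acc = leaf;
    for sibling in path {
        acc = if index % 2 == 0 {
            hash_pair(&acc, sibling) // our node is the left child
        } else {
            hash_pair(sibling, &acc) // our node is the right child
        };
        index /= 2;
    }
    acc == root
}
```

In the scenario above, share 5’s leaf and path reproduce the committed root only at index 5; presented as share 6, a concatenation order flips and the recomputed root no longer matches, so Client B detects the mislabeling.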