# Reputation based Consensus
All contributors and verifiers (agents and humans) have an on-chain Reputation Score, maintained separately for each PD (Perpetual Dataset). This ensures transparency and enables a trustless reward system.
- Reputation starts at zero and can go up to one.
- It goes up when the verifier performs well on a task and goes down otherwise.
- It determines the contribution and verification rewards.
Every contribution to a Dataset needs to go through verification: first by a pool of agents (AI verifiers) and then by humans (manual verification). Both passes look almost identical, with on-chain and off-chain elements. For each pass, multiple verifiers are required for reliability; the value of $n$ is chosen by the dataset creator. Let us walk through the steps of verification:
A higher value of $n$ often leads to better data quality, but also to higher costs and longer data-creation times.
Each submission is sent to $n$ random verifiers.
Each verifier encrypts their decision, together with a random bit, using their public key and publishes it on-chain. The actual decision and random bit are sent to the off-chain judge.
After receiving decisions from all $n$ verifiers, the judge calculates a combined score and pushes it on-chain along with the decisions and random bits from each verifier.
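The commit-and-reveal flow above can be sketched as follows. The source specifies public-key encryption of the decision plus a random bit; this sketch substitutes a hash commitment (a simpler construction with the same binding property), and all function names are hypothetical:

```python
import hashlib
import secrets

def commit(decision: int, nonce: bytes) -> str:
    """Hash commitment to (decision, nonce); this is what goes on-chain."""
    return hashlib.sha256(bytes([decision]) + nonce).hexdigest()

def verify_reveal(commitment: str, decision: int, nonce: bytes) -> bool:
    """Anyone can check the judge's revealed (decision, nonce) against the
    on-chain commitment once the judge publishes them."""
    return commit(decision, nonce) == commitment

nonce = secrets.token_bytes(16)    # the verifier's random blinding value
c = commit(1, nonce)               # published on-chain
# ... decision and nonce are sent privately to the off-chain judge ...
print(verify_reveal(c, 1, nonce))  # True: reveal matches the commitment
print(verify_reveal(c, 0, nonce))  # False: a tampered decision is detected
```

The random nonce prevents anyone from brute-forcing the one-bit decision from the commitment before the judge reveals it.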
Let $d_i \in \{0, 1\}$ be the decision (1 = accept, 0 = reject) and $r_i$ the current reputation score of verifier $i$.

Then the final score $s$ of the submission is the reputation-weighted average of the decisions:

$$s = \frac{\sum_{i=1}^{n} r_i \, d_i}{\sum_{i=1}^{n} r_i}$$

The acceptance criterion is set by the dataset creator, who provides a threshold $t$ such that the submission is accepted if

$$s \geq t$$
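Under a reputation-weighted reading of the combined score, the judge's aggregation and the creator's acceptance check can be sketched in a few lines (function names are illustrative, not the project's API):

```python
def final_score(decisions, reputations):
    """Combine binary decisions (1 = accept, 0 = reject), weighting each
    verifier's vote by their current reputation score."""
    assert len(decisions) == len(reputations)
    total = sum(reputations)
    if total == 0:
        return 0.0  # no reputation mass to weight by; treat as no consensus
    return sum(r * d for d, r in zip(decisions, reputations)) / total

def is_accepted(score, t):
    """Accept the submission when the combined score clears the threshold t
    chosen by the dataset creator."""
    return score >= t

# Three verifiers: two accept, one rejects; higher-reputation votes count more.
s = final_score([1, 1, 0], [0.9, 0.6, 0.3])
print(round(s, 3))            # 0.833
print(is_accepted(s, t=0.7))  # True
```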
Reputation is updated based on the final score calculated in the human-verification phase. The dataset creator specifies the following values:

- $\alpha$: step size
- $\beta$: rejection penalty
- $\theta^a_1, \theta^a_2$: thresholds for when the verifier accepts
- $\theta^r_1, \theta^r_2$: thresholds for when the verifier rejects
Say the reputation changes from $r$ to $r'$, and let $d$ be the verifier's decision. Then:

Case 1: $d = 1$ (the verifier accepted)

$$r' = \begin{cases} \min(r + \alpha,\ 1) & \text{if } s \geq \theta^a_2 \\ \max(r - \alpha\beta,\ 0) & \text{if } s \leq \theta^a_1 \\ r & \text{otherwise} \end{cases}$$

Case 2: $d = 0$ (the verifier rejected)

$$r' = \begin{cases} \min(r + \alpha,\ 1) & \text{if } s \leq \theta^r_1 \\ \max(r - \alpha\beta,\ 0) & \text{if } s \geq \theta^r_2 \\ r & \text{otherwise} \end{cases}$$
The idea behind the above functions:

- A verifier who votes in the direction of a strong consensus gains reputation.
- There is no change in the case of weak consensus.
- A verifier who votes against a strong consensus loses reputation.
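Putting the two cases together, the reputation update can be sketched as a single clamped function. The parameter names and sample values are illustrative only, chosen from within the typical ranges the document suggests:

```python
def update_reputation(r, d, s, alpha, beta, acc_lo, acc_hi, rej_lo, rej_hi):
    """Update a verifier's reputation r given their decision d (1 = accept,
    0 = reject) and the final score s of the submission."""
    if d == 1:                    # verifier voted accept
        if s >= acc_hi:           # strong consensus to accept: reward
            r += alpha
        elif s <= acc_lo:         # strong consensus to reject: penalize
            r -= alpha * beta
    else:                         # verifier voted reject
        if s <= rej_lo:           # strong consensus to reject: reward
            r += alpha
        elif s >= rej_hi:         # strong consensus to accept: penalize
            r -= alpha * beta
    return min(max(r, 0.0), 1.0)  # reputation stays within [0, 1]

# Illustrative hyperparameters (assumed, not prescribed by the protocol).
params = dict(alpha=0.1, beta=1.5, acc_lo=0.45, acc_hi=0.7,
              rej_lo=0.3, rej_hi=0.55)

print(round(update_reputation(0.5, 1, 0.85, **params), 2))  # 0.6: voted with strong consensus
print(round(update_reputation(0.5, 0, 0.85, **params), 2))  # 0.35: voted against it
```

Note the asymmetry: a wrong vote costs $\alpha\beta$ while a right vote earns only $\alpha$, so with $\beta > 1$ reputation is lost faster than it is gained.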
While the PD creator can choose these hyperparameters based on the dataset type and the precision required, here are some suggested values that have worked for us:
| Symbol | Typical Value(s) |
|---|---|
| $t$ | [0.6, 0.8] |
| $\alpha$ | [0.05, 0.2] |
| $\beta$ | [1, 2] |
| $\theta^a_1$ | [0.4, 0.5] |
| $\theta^a_2$ | [0.6, 0.8] |
| $\theta^r_1$ | [0.2, 0.4] |
| $\theta^r_2$ | [0.5, 0.6] |