Reputation-based Consensus

All contributors and verifiers (agents and humans) have an on-chain Reputation Score, maintained separately for each PD (Perpetual Dataset). This ensures transparency and enables a trustless reward system.

  1. Reputation starts at zero and can go up to one

  2. It goes up when a task is performed well and goes down otherwise

  3. It determines the contribution and verification rewards

Verification Process

Every contribution to a Dataset needs to go through verification: first through a pool of agents (AI verifiers) and then through humans (manual verification). Both passes look almost identical, with on-chain and off-chain elements. For each pass, $n$ verifiers are required for reliability; the value of $n$ is chosen by the dataset creator. Let us go through the steps of verification:

A higher value of $n$ often leads to better data quality, but also to higher costs and longer data-creation times.

Reaching Quorum

Each submission is sent to $m$ ($m > n$) random verifiers.

The verifiers encrypt their decision and a random bit using their public key and publish the result on-chain. The actual decision and the random bit are sent to the off-chain judge.
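The on-chain record acts as a commitment that the later reveal can be checked against. Below is a minimal sketch of that flow; the protocol above describes public-key encryption, but a hash commitment is used here purely as an illustrative stand-in, and all function names are hypothetical.

```python
import hashlib
import secrets

def commit_decision(decision: int, random_bit: int) -> str:
    """Create an on-chain commitment for a verifier's (decision, random bit) pair.

    Illustrative stand-in for the public-key encryption step described above;
    a single random bit is kept only to mirror the protocol description.
    """
    assert decision in (0, 1) and random_bit in (0, 1)
    return hashlib.sha256(f"{decision}:{random_bit}".encode()).hexdigest()

def verify_reveal(commitment: str, decision: int, random_bit: int) -> bool:
    """Check a revealed (decision, random bit) pair against the on-chain commitment."""
    return commit_decision(decision, random_bit) == commitment

# Example: a verifier accepts (decision = 1) with a fresh random bit.
decision, random_bit = 1, secrets.randbelow(2)
onchain_commitment = commit_decision(decision, random_bit)
assert verify_reveal(onchain_commitment, decision, random_bit)
```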

Calculating Final Score

After receiving decisions from $n$ separate verifiers, the judge calculates a combined score and pushes it on-chain along with the decisions and random bits from each of the verifiers.

Let $v_i \in \{0, 1\}$ be the decision and $r_i \in [0, 1]$ be the current reputation score of verifier $i$, $i \in \{1, 2, \dots, n\}$.

Then the final score of the submission is given by

$$S = \frac{\sum_{i=1}^{n} r_i v_i}{\sum_{i=1}^{n} r_i}$$
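In code, $S$ is simply a reputation-weighted average of the verifiers' decisions. A minimal sketch (function name and types are illustrative):

```python
def final_score(decisions: list[int], reputations: list[float]) -> float:
    """Reputation-weighted average of verifier decisions.

    decisions[i] is v_i in {0, 1}; reputations[i] is r_i in [0, 1].
    """
    assert len(decisions) == len(reputations)
    total_reputation = sum(reputations)
    if total_reputation == 0:
        # Edge case not covered by the formula: all verifiers have zero reputation.
        return 0.0
    return sum(r * v for r, v in zip(reputations, decisions)) / total_reputation

# Example: three verifiers, two accept.
print(final_score([1, 1, 0], [0.8, 0.5, 0.3]))  # 0.8125
```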

Acceptance / Rejection

The acceptance criterion is set by the dataset creator by providing a threshold $T \in [0, 1]$ such that:

$$\text{decision} = \begin{cases} \text{accept} & \text{if } S > T \\ \text{reject} & \text{otherwise} \end{cases}$$
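Continuing the sketch, acceptance is a single threshold check against the creator-chosen $T$ (names are illustrative):

```python
def accept_submission(score: float, threshold: float) -> bool:
    """Accept the submission iff the combined score S exceeds the threshold T."""
    return score > threshold

# Example: with T = 0.7, a combined score of 0.8125 is accepted.
print(accept_submission(0.8125, 0.7))  # True
```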

Reputation Update

Reputation is updated based on the final score $S$ calculated in the human verification phase. The dataset creator specifies the following values:

$\eta \in [0, 1]$ : step size

$p \in [1, 2]$ : rejection penalty

$0 < l_1 < u_1 < 1$ : thresholds when the verifier accepts

$0 < l_0 < u_0 < 1$ : thresholds when the verifier rejects

Say the reputation changes as $r \to r + \Delta r$; then

For contributor:

$$\Delta r = \begin{cases} \eta & \text{if } S > T \\ -p\eta & \text{otherwise} \end{cases}$$

For verifier:

Let $v \in \{0, 1\}$ be the verifier's decision. Then:

Case 1: $v = 1$

$$\Delta r = \begin{cases} \eta & \text{if } S \geq u_1 \\ 0 & \text{if } l_1 < S < u_1 \\ -p\eta & \text{if } S \leq l_1 \end{cases}$$

Case 2: $v = 0$

$$\Delta r = \begin{cases} \eta & \text{if } S \leq l_0 \\ 0 & \text{if } l_0 < S < u_0 \\ -p\eta & \text{if } S \geq u_0 \end{cases}$$

The idea behind the above functions:

  1. A verifier who votes in the direction of a strong consensus sees an increase in reputation

  2. There is no change in the case of a weak consensus

  3. A verifier who votes against a strong consensus loses reputation
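Putting the contributor and verifier rules together, here is a minimal sketch of the update step (names are illustrative; clamping the result to $[0, 1]$ is an assumption based on the bounds stated at the top of this page):

```python
def clamp(x: float) -> float:
    """Keep reputation inside [0, 1] (assumption based on the stated bounds)."""
    return max(0.0, min(1.0, x))

def update_contributor(r: float, S: float, T: float, eta: float, p: float) -> float:
    """Contributor update: +eta if the submission is accepted, -p*eta otherwise."""
    delta = eta if S > T else -p * eta
    return clamp(r + delta)

def update_verifier(r: float, v: int, S: float, eta: float, p: float,
                    l1: float, u1: float, l0: float, u0: float) -> float:
    """Verifier update: rewarded for agreeing with a strong consensus,
    penalised for opposing it, unchanged under a weak consensus."""
    if v == 1:                       # verifier accepted
        if S >= u1:
            delta = eta
        elif S <= l1:
            delta = -p * eta
        else:
            delta = 0.0
    else:                            # verifier rejected
        if S <= l0:
            delta = eta
        elif S >= u0:
            delta = -p * eta
        else:
            delta = 0.0
    return clamp(r + delta)

# Example: S = 0.8125, verifier voted accept, eta = 0.1, p = 1.5.
print(update_verifier(0.5, v=1, S=0.8125, eta=0.1, p=1.5,
                      l1=0.45, u1=0.7, l0=0.3, u0=0.55))  # 0.6
```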

Suggested Hyperparameters

While the PD creator can choose hyperparameters based on the dataset type and the amount of precision required, here are some suggested values that worked for us:

| Symbol | Typical Value(s) |
| --- | --- |
| $T$ | [0.6, 0.8] |
| $\eta$ | [0.05, 0.2] |
| $p$ | [1, 2] |
| $l_1$ | [0.4, 0.5] |
| $u_1$ | [0.6, 0.8] |
| $l_0$ | [0.2, 0.4] |
| $u_0$ | [0.5, 0.6] |
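For concreteness, a mid-range pick from these suggested intervals might look like this (illustrative values only):

```python
# Illustrative mid-range picks from the suggested intervals above.
suggested_hyperparameters = {
    "T": 0.7,      # acceptance threshold
    "eta": 0.1,    # step size
    "p": 1.5,      # rejection penalty
    "l_1": 0.45,   # lower threshold when the verifier accepts
    "u_1": 0.7,    # upper threshold when the verifier accepts
    "l_0": 0.3,    # lower threshold when the verifier rejects
    "u_0": 0.55,   # upper threshold when the verifier rejects
}
```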
