Components

Roadmap

Research → Prototype → MVP → V1 → Future

✅ Research (Weebo)

The protocol makes sense. All incentives are clear.

Deliverables: Docs.

✅ Prototype (Electric sheep)

The protocol is implemented in code. The smart contracts follow the incentives.

Deliverables: Infrastructure and technical requirements for components.

✅ MVP - Bots VS Managers (Wall-E)

In short: no UX yet; build it for bots and group managers first. Let price-of-forgery discovery begin.

The use case that requires the least development effort is quality control of human verification methods. We don't even need non-malicious users at this stage, and we advise DApps not to use the scores yet (see below for why).

Of the whole infrastructure, this use case requires only the protocol (smart contracts), a single pool type, the group manager CLI, and the DB module. These are enough to let the bots explode and to let group managers start the price-of-forgery discovery process (the price-of-forgery discovery technique is described here).
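
To make that component split more concrete, here is a minimal TypeScript sketch of how the group manager CLI and the DB module could fit together at this stage. The interface names, method signatures, and the in-memory DB are illustrative assumptions, not the actual module APIs.

```typescript
// Hypothetical component boundaries for the MVP stage; all names are illustrative.

// A score signed by the group manager over <userID-score-bundleId>.
interface SignedScore {
  userId: string;    // user's address
  score: string;     // score value, stringified for storage
  bundleId: string;  // bytes32 id whose hash is committed on-chain
  signature: string; // group manager's signature
}

// DB module: stores signed scores off-chain so users (and bots) can fetch them.
interface ScoreDB {
  saveBundle(bundleId: string, scores: SignedScore[]): Promise<void>;
  getScore(bundleId: string, userId: string): Promise<SignedScore | undefined>;
}

// Group manager CLI: the only on-chain action at this stage is committing the
// bundle id to the Signed scores pool; signing and storage happen off-chain.
interface GroupManagerCLI {
  publishBundle(bundleId: string): Promise<string>; // returns the tx hash
  signScores(
    users: { userId: string; score: string }[],
    bundleId: string
  ): Promise<SignedScore[]>;
}

// In-memory stand-in for the DB module, sufficient for local experiments.
class InMemoryScoreDB implements ScoreDB {
  private store = new Map<string, SignedScore>();

  async saveBundle(bundleId: string, scores: SignedScore[]): Promise<void> {
    for (const s of scores) this.store.set(`${bundleId}:${s.userId}`, s);
  }

  async getScore(bundleId: string, userId: string): Promise<SignedScore | undefined> {
    return this.store.get(`${bundleId}:${userId}`);
  }
}
```

With these pieces in place, a bot owner only needs read access to the DB module and the ability to call the pool contract in order to explode.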

Also, per Upala's design philosophy, we protect bot rights. It makes sense to let bot owners (malicious users) start developing their tools as soon as possible.

Notes on the components for this stage

Smart contracts. Signed scores pool only.

For the quality-control use case we only need a single pool type: the Signed scores pool. In this pool type the group manager publishes a bundle ID hash on-chain and then signs <userID-score-bundleId> with his or her private key for every individual score. The signed scores are then published to the DB component. The smart contract validates the signature when a user wants to explode or use their score. The drawback of this pool type is that a group manager can create an unlimited number of users and there is no way to control it, so it is not suitable for providing scores to third-party DApps. But it works very well if a group wants to discover the price of forgery of a verification method for its own use (e.g., Gitcoin or Clr.fund could quality-control the human verification methods they use).
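
As an illustration of that flow, here is a minimal off-chain sketch in TypeScript (ethers v5) in which the group manager signs the <userID-score-bundleId> tuple and anyone can recover the signer before going on-chain. The field types, the keccak256 packing, and the EIP-191 personal-sign style are assumptions made for the sketch, not the pool's canonical encoding.

```typescript
// Minimal sketch of the Signed scores pool's off-chain side, using ethers v5.
// The hashing scheme and field types are illustrative assumptions.
import { ethers } from "ethers";

interface SignedScore {
  userId: string;    // user's address
  score: string;     // score, stringified for JSON storage
  bundleId: string;  // bytes32 id of the bundle whose hash is published on-chain
  signature: string; // group manager's signature
}

// Group manager signs <userID, score, bundleId> for each user.
async function signScore(
  manager: ethers.Wallet,
  userId: string,
  score: ethers.BigNumber,
  bundleId: string
): Promise<SignedScore> {
  const digest = ethers.utils.solidityKeccak256(
    ["address", "uint256", "bytes32"],
    [userId, score, bundleId]
  );
  // EIP-191 personal-sign over the digest; the on-chain pool would recover
  // the signer from this signature and compare it to the group manager address.
  const signature = await manager.signMessage(ethers.utils.arrayify(digest));
  return { userId, score: score.toString(), bundleId, signature };
}

// What the DB component or any off-chain verifier can check before going on-chain.
function recoverSigner(entry: SignedScore): string {
  const digest = ethers.utils.solidityKeccak256(
    ["address", "uint256", "bytes32"],
    [entry.userId, entry.score, entry.bundleId]
  );
  return ethers.utils.verifyMessage(ethers.utils.arrayify(digest), entry.signature);
}

async function demo() {
  const manager = ethers.Wallet.createRandom();
  const user = ethers.Wallet.createRandom();
  const bundleId = ethers.utils.id("bundle-example"); // keccak256 of an arbitrary label
  const entry = await signScore(manager, user.address, ethers.utils.parseEther("1"), bundleId);
  console.log("signed by manager:", recoverSigner(entry) === manager.address); // true
}

demo().catch(console.error);
```

On-chain, the pool contract would perform the equivalent signature recovery and compare the recovered address to the group manager's address before letting the user explode or use the score.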

If groups want to provide scores to DApps in the future, they can easily migrate to a Merkle pool. Technically this means just two steps:

  1. Create a Merkle pool (should be cheap, thanks to the use of OpenZeppelin's Clones minimal proxy)