<aside>
💡 Summary: Use this rubric to quickly assess submissions & ensure they end up in the right bucket on the 5-point scale.
</aside>
The old 12-point scale required Reviewers to exercise judgment on every single submission. Many Reviewers have stated that the need to review each submission in depth saps their focus and attention, as well as their desire to keep reviewing in the future.
The scale below simplifies the 12-point scale into a 5-point scale. For those who wish, it provides detailed guidance on how to subtract points for dashboard flaws. This enables Reviewers to triage submissions much more quickly, while still enabling secondary Reviewers to focus on the details of specific dashboards.
By focusing their attention on Good and Stellar dashboards, Reviewers will be able to provide in-depth, detailed feedback on worthy dashboards. This will also enable us to provide sufficient compensation to attract top-tier Reviewers, thereby improving Review quality.
Each submission should be assigned a score in one of the following five categories:
1 - Spam
2 - Bad
3 - Average
4 - Good
5 - Stellar
The table below, Scoring Scale, outlines the numeric score of each category on a scale from 0-100. It also “maps” the scores in the prior 12-point system to the new system.
In other words: a score of 7 or 8 in the old system would correspond to a score of Average in the new system. In both cases, this score reflects a submission that is “solid” but not professional-quality.
This rubric is a set of guidelines. It is not written in stone, and it’s intended to undergo further discussion on Discourse. We will continue to improve it based on community feedback and submission data.
The rubric is meant to let Reviewers quickly and fairly separate Spam from other submissions, and mediocre submissions from high-quality ones. This ensures that deserving submissions can get a second look from talented and qualified Reviewer networks.
| Score | Quantitative Value | Overall Score Range | Score in Old System |
| --- | --- | --- | --- |
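The mapping from an overall 0-100 score into one of the five categories can be sketched as a simple lookup. The cutoffs below are illustrative placeholders, not the official ranges from the Scoring Scale table:

```python
# Maps a 0-100 overall score into one of the five categories.
# NOTE: the numeric cutoffs here are hypothetical examples only;
# the real ranges come from the Scoring Scale table.

BUCKETS = [
    (90, "5 - Stellar"),
    (75, "4 - Good"),
    (50, "3 - Average"),
    (25, "2 - Bad"),
    (0,  "1 - Spam"),
]

def categorize(score):
    """Return the category label for a 0-100 score."""
    for cutoff, label in BUCKETS:
        if score >= cutoff:
            return label
    return BUCKETS[-1][1]
```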
The current payout structure assigns the following:
“Your Average Score / Sum Total of Average Scores” = % of total payout pool
This is our default payout structure at launch - as of April 2023, other payout structures are already being designed & will be rolled out over time.
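The default payout formula above can be sketched in a few lines. The submission names and scores below are illustrative, not real data:

```python
# Default payout formula:
#   payout share = your average score / sum of all average scores.
# Submission names and scores are hypothetical examples.

def payout_shares(avg_scores, pool):
    """Split a payout pool proportionally to each submission's average score."""
    total = sum(avg_scores.values())
    return {name: pool * score / total for name, score in avg_scores.items()}

scores = {"dashboard_a": 80.0, "dashboard_b": 60.0, "dashboard_c": 60.0}
shares = payout_shares(scores, pool=1000.0)
# dashboard_a's share: 1000 * 80 / 200 = 400.0
```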
To assign scores at the individual level, i.e. scores that will become part of the overall average, we suggest a mechanism that starts at 100 points and subtracts points for elements that detract from the dashboard. This scale is based on the Discourse conversation [here](Reviewer Scoring).
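The subtraction mechanism can be sketched as follows. The flaw names and point values are illustrative assumptions, not the official deduction list:

```python
# Hypothetical sketch of the subtraction-based scoring mechanism:
# start at 100 points and subtract a fixed deduction per flaw.
# Flaw names and point values below are examples, not the rubric itself.

DEDUCTIONS = {
    "unlabeled_axes": 5,
    "no_written_analysis": 15,
    "broken_query": 30,
}

def score_dashboard(flaws):
    """Return a 0-100 score after subtracting a deduction for each flaw."""
    score = 100
    for flaw in flaws:
        score -= DEDUCTIONS.get(flaw, 0)
    return max(score, 0)
```

A flawless dashboard keeps the full 100 points; each flaw lowers the score toward the floor of 0.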
This scale enables Reviewers to rapidly apply criteria to sort dashboards into the appropriate bucket. Sorting dashboards in this way reduces the burden on Reviewers & enables rapid categorization of submissions.
In this way, Good and Stellar submissions can receive detailed feedback, while submitters of Average or below work are incentivized to resolve the baseline errors that harm their scores.