<aside> ⛪ This will be the first of a series of blog posts discussing science token engineering, the focus of my Open Web Fellowship at Opscientia.

</aside>

Firstly, what is science token engineering? Token engineering is an exciting, rapidly growing field in the Web3 space, and for an overview I highly recommend this excellent blog post by Trent McConaghy.

Essentially, science token engineering is the practice of applying token engineering principles specifically to scientific systems. This will all make more sense down the line, so let's begin with the main question: what does the current science value flow look like?

The Science Value Flow

To answer this question, imagine you're a research scientist applying for funding. You submit a proposal for a research project in the hopes of receiving a grant either from the institution you work at or from an external agency. Once you receive that funding, you'll use it to cover the costs of getting all the necessary resources for your research (data, equipment, etc.), but part of that funding will inevitably go towards publishing your results in a scientific journal.

Going back to the original question of value flow, we can see that all value originated in a grant funding agency (purely monetary value), was then transformed by the researchers into new knowledge (intellectual value), and was finally captured by a knowledge curator, in this case a scientific journal (both intellectual and monetary value). Setting aside the inevitable leakage of value caused by uncontrollable variables, it's clear that the science value flow is strikingly linear: it starts in one centralized place (a grant funding agency) and ends in another (a scientific journal).

A schema of the baseline science model

The schema above shows this linear value flow. Now, you might wonder whether there is anything wrong with this model. After all, this is how scientific research has been conducted for decades, if not centuries. I outline the problems with this model below.

Problems with the Baseline Science Model

The centralization of value can (and does) introduce a number of inefficiencies. For instance, if I am a researcher who has spent years collecting valuable data, that data probably doesn't belong to me, so I have little to no control over what happens to it. Furthermore, with the little control I do have, I will not want to share that data, since it represents my competitive advantage both in winning future grants for additional research and in gaining recognition within the scientific community when I publish papers on it.

Essentially, the current flow of value does not incentivize collaboration and data sharing. This is inefficient: if somebody wants to do research on data that has already been collected but isn't available to use, the data must be collected all over again, consuming resources that could have gone towards analysing the existing data.

Putting everything together, we can identify the following problems with the current science value flow:

- Value flows through centralized gatekeepers: research depends on grant funding agencies at one end and scientific journals at the other.
- Researchers rarely retain ownership of the data and knowledge they produce, and are not fairly rewarded for their contributions.
- The incentive structure discourages collaboration and data sharing, leading to duplicated effort and wasted resources.

Conclusion

These three points outline the motivation behind science token engineering, which seeks to solve these issues by designing new communities in which the incentives of all participants are aligned to maximize the efficiency of scientific research and the fairness of value distribution. Thanks to the incredible world of Web3, scientists can be freed from dependence on centralized agencies: they can retain ownership of the work they do and receive fair rewards based on their contributions. Together, we'll explore how we can reach this goal. Stay tuned for Part 2.

Science Token Engineering Blog Series

Science Token Engineering Part 1: The Problem with Science - current