1. When will a rational adversary choose to store instead of recompute the PoRep for WindowPoST?

Assume $A_{\max}$ is the ratio between the cost of computation for our providers and the cost of computation for a hypothetical adversary. Let $T$ be the time period between WindowPoSTs (currently 24 hours) and $\mathit{Cost}_T$ be the cost of storing a sector for time $T$. Let $\mathit{Cost}_{\mathit{unseal}}$ be the cost of unsealing for our providers (the adversary's cost may be up to $A_{\max}$ times less), which is the same as the cost of computing all 11 layers of the graph.

Question: when will a rational adversary store (most of) a sealed sector?

  1. Result by Fisch, Luca and Irene: it is rational to store at least 80% of a sealed sector whenever $67 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$.
  2. Result submitted to Eurocrypt: it is rational to store at least 80% of a sealed sector whenever $6 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$.
  3. New result explained below: it is rational to store at least 87% of a sealed sector whenever $1.1 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$.
  4. Moreover, we can reduce the cost of WindowPoST from 10 queries per sector to only 3, and it is still rational to store at least 87% of a sealed sector whenever $1.3 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$.

Item 3 provides an overall 62x improvement in the cost assumption and a 35% improvement in the space gap (from 20% to 13%).
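To see how the four conditions above compare, here is a minimal Python sketch. The function name and the cost numbers are hypothetical, chosen only to illustrate the thresholds; they are not protocol parameters.

```python
# Sketch: when is storing rational under each analysis?
# cost_T      = cost of storing one sector for time T (hypothetical units)
# cost_unseal = provider's cost of unsealing (computing all 11 layers)
# A_max       = adversary's maximum computational cost advantage

def storage_is_rational(cost_T, cost_unseal, A_max, factor):
    """Storing at least the guaranteed fraction is rational when
    factor * A_max * cost_T < cost_unseal."""
    return factor * A_max * cost_T < cost_unseal

# The four analyses: name -> (assumption factor, guaranteed stored fraction)
analyses = {
    "Fisch-Luca-Irene":       (67,  0.80),
    "Eurocrypt submission":   (6,   0.80),
    "new result":             (1.1, 0.87),
    "new result, 3 queries":  (1.3, 0.87),
}

# Illustrative (made-up) cost point:
cost_T, cost_unseal, A_max = 1.0, 10.0, 2.0
for name, (factor, frac) in analyses.items():
    ok = storage_is_rational(cost_T, cost_unseal, A_max, factor)
    print(f"{name}: guarantees {frac:.0%} stored? {ok}")
```

At this cost point, only the new analyses (factors 1.1 and 1.3) make the assumption hold, which is exactly the practical content of the 62x improvement.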

The simplest explanation for items 3 and 4 is that we now consider the combined cost of storage and computation over the entire range of possible storage amounts, rather than either the cost of storage or the cost of computation in isolation. I explain a bit more below. The details are in https://github.com/ninitrava/PoS/blob/main/SDR_improvement/tighterSDR.tex.

Security Margin. According to Section 5.1.3 of https://drive.google.com/file/d/1notObdkPT1BCztgspIpzSUAzWSrM8h81/view, our security margin under the first analysis was 3.15. According to item 3, it is 62x bigger, i.e., 195.
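The new margin is just the product of the two figures stated above (3.15 and the 62x factor from item 3); a one-line check:

```python
# Security margin check, using the figures from the text.
old_margin = 3.15       # margin under the first analysis (Section 5.1.3)
improvement = 62        # overall cost-assumption improvement from item 3
new_margin = old_margin * improvement
assert round(new_margin) == 195   # matches the stated new margin
```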

Room for Improvement?

We now know that assuming $1.1 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$ is sufficient to ensure storage of at least $87\%$ of a sector. On the other hand, assuming $0.87 \cdot A_{\max} \cdot \mathit{Cost}_T < \mathit{Cost}_{\mathit{unseal}}$ is necessary: otherwise the adversary would pay $\mathit{Cost}_{\mathit{unseal}}/A_{\max}$ to simply repebble the whole graph before each WindowPoST, and that would be cheaper than storing $87\%$ of the sector. So the best we can hope for is to improve the factor from 1.1 to 0.87.
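The necessity direction is a one-line cost comparison in the notation above: storing a full sector for time $T$ costs $\mathit{Cost}_T$, so storing $87\%$ of it costs $0.87 \cdot \mathit{Cost}_T$, while repebbling costs the adversary $\mathit{Cost}_{\mathit{unseal}}/A_{\max}$. Storing is the cheaper strategy exactly when

$$
0.87 \cdot \mathit{Cost}_T \;<\; \frac{\mathit{Cost}_{\mathit{unseal}}}{A_{\max}}
\quad\Longleftrightarrow\quad
0.87 \cdot A_{\max} \cdot \mathit{Cost}_T \;<\; \mathit{Cost}_{\mathit{unseal}},
$$

so if the right-hand inequality fails, a rational adversary repebbles instead of storing.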

So the gap between an assumption we currently know to be sufficient and an assumption that is absolutely necessary is small — $1.1/0.87\approx 1.26$. There is not much room for improvement from 1.1.

2. How We Achieve This Better Analysis

2.1 Summary of the traditional analysis that gives us items 1 and 2 in Section 1

Traditional Analysis. Typical proof-of-space security analysis goes like this. Suppose the space requirement is $n$, and we assume that the adversary can perform the PC1 computation (that is, pebbling the whole graph; equivalently, unsealing) at cost $\sigma$. (Because we don't know for sure what the best technology available to the adversary is, we must take $\sigma$ to be smaller than the typical cost of the PC1 computation during sealing; assume $A_{\max}\cdot \sigma$ is the cost of the actual PC1 computation for typical storage providers, and thus $A_{\max}$ is the ratio of costs between what is typical and what is available to the adversary.) If the adversary stores less than $(1-g)n$, then at execution time, the adversary will have to redo (in expectation) some fraction $r$ of the cost $\sigma$. This $g$ is known as the space gap.

Application of the Traditional Analysis to the Latency model. This analysis seems sufficient in the latency model as long as the space gap $g$ is small enough: if $r\sigma$ must be done sequentially and the verifier will time out before it’s done, then the verifier will catch any adversary who stores less than $(1-g)n$; if the penalties for being caught are significant enough, the adversary will be incentivized to store at least $(1-g)n$.

Application of the Traditional Analysis to the Cost Model. In the cost model, assume the cost of storing $n$ for the entire time between executions is $s$. Then an adversary who erases everything will incur cost at most $\sigma$ (perhaps less, depending on what it costs to compute query answers, which may be less than computing everything); to ensure a rational adversary does not do so, we need at least to ensure $\sigma>s$.

But what about an adversary who commits some amount of storage $(1-\epsilon)n$ that is less than $n$ but more than $0$? With the traditional analysis, here's the best we can say:

  1. If $\epsilon<g$, all we know is that the adversary will have cost $(1-\epsilon)s$.