TL;DR: no matter how clever we are about the graph construction, unsealing will cost over 4 hashes per label, unless we learn more about predecessor robustness or depth robustness. In a companion note, I explain why the cost of unsealing in SDR/SPR can’t be improved much; this note applies more generally to any graph construction.

This note is limited to graph-based PoS constructions in which initialization computes a hash-based graph and execution probes some graph nodes (there’s only one non-graph-based construction, and its concrete parameters are unclear). It is also limited to symmetric constructions (i.e., ones in which sealing and unsealing have the same cost or latency, depending on whether we are in the cost or latency model). We will assume a parameter $\delta$ that is an upper bound on the fraction of nodes that may be incorrectly computed (i.e., so-called “red pebbles”).

Recall the definition of $(e, d)$ predecessor robustness: no matter which $e$ fraction of the nodes of the graph you pebble, there remains a single-sink unpebbled subgraph of fractional size $d$. Recall the definition of $(e,d)$ depth robustness: same, but this subgraph is a path.
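To make these definitions concrete, here is a brute-force Python sketch that measures both notions on a toy DAG. Everything here (the graph encoding, the function names, the 6-node path example) is illustrative only and not part of any real construction:

```python
from itertools import combinations

def unpebbled_ancestors(parents, v, alive):
    """Nodes needed to recompute v: v itself plus every ancestor
    reachable from v through unpebbled ('alive') nodes."""
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(p for p in parents[u] if p in alive)
    return seen

def longest_unpebbled_path_to(parents, v, alive, memo):
    """Number of nodes on the longest fully unpebbled path ending at v."""
    if v not in memo:
        memo[v] = 1 + max(
            (longest_unpebbled_path_to(parents, p, alive, memo)
             for p in parents[v] if p in alive),
            default=0)
    return memo[v]

def robustness(parents, e):
    """Brute force over every pebbling of an e-fraction of nodes; returns
    the guaranteed fractional sizes (d_pred, d_depth) of a single-sink
    unpebbled subgraph and of an unpebbled path. Tiny graphs only."""
    nodes, n = list(parents), len(parents)
    d_pred = d_depth = 1.0
    for pebbled in combinations(nodes, round(e * n)):
        alive = set(nodes) - set(pebbled)
        memo = {}
        pred = max((len(unpebbled_ancestors(parents, v, alive)) for v in alive), default=0)
        depth = max((longest_unpebbled_path_to(parents, v, alive, memo) for v in alive), default=0)
        d_pred, d_depth = min(d_pred, pred / n), min(d_depth, depth / n)
    return d_pred, d_depth

# A 6-node path graph: node i depends on node i-1. Two pebbles can cut the
# path into at most three segments, so some unpebbled path of 2 nodes survives.
parents = {i: [i - 1] if i > 0 else [] for i in range(6)}
print(robustness(parents, e=1/3))  # -> (0.333..., 0.333...)
```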

Recall that the cost (or latency) ratio $r$ is the ratio between the cost (or latency) an adversary who stores too little must incur during the execution phase and the cost (or latency) of the original pebbling of the entire graph.

Suppose the PoS consists of pebbling some graph and then outputting some (or all) nodes of this graph. Let the number of nodes output be $n$ and the total graph size be $cn$. For example, in SDR, $n=2^{30}$ and $c=11$. However, we do not assume that the graph is layered, or even that the output nodes are the tail end of the pebbling.

Note that $c$ relates directly to the cost of unsealing: to unseal, you need to compute the whole graph, so for every node of storage, you perform $c$ hashes.
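For instance, with the SDR numbers above (a toy calculation, nothing deeper):

```python
# Unsealing cost per stored label, using the SDR parameters from the text.
n = 2 ** 30               # stored labels (output nodes)
c = 11                    # total graph size is c*n nodes
unseal_hashes = c * n     # unsealing recomputes the whole graph
print(unseal_hashes / n)  # -> 11.0 hashes per label of storage
```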

Claim. If a graph has space gap $g$ and cost (or latency) ratio $r$, then it is $(e,d)$ predecessor (or depth) robust for $e = (1-g+\delta)/c$ and $d=r$.

Proof. Consider an adversary who places $(1-g+\delta)n$ pebbles on the graph. Since the graph has space gap $g$, we know that during the execution phase, in response to some queries, the adversary will have to do some fraction $r$ of the original $cn$ work (in the cost model); in the latency model, we additionally know that this work is sequential. Let $w=rcn$. Because queries are to specific nodes, this means that no matter how the adversary places $(1-g+\delta)n$ pebbles on the graph, there must be at least one node that takes $w$ hashes (in the cost model) or $w$ sequential hashes (in the latency model) to compute. But if a node takes $w$ hashes to compute, then it is the single sink of a fully unpebbled subgraph of size $w$ (else it could be computed with fewer hashes, because only its unpebbled ancestors need to be computed). Similarly, if it takes $w$ sequential hashes to compute, then it is the last node on a fully unpebbled path of length $w$ (else it could be computed in less time, by greedily computing everything possible at each step).

So a space gap of $g$ and cost (or latency) ratio $r$ implies $(e, d)$ predecessor robustness (or depth robustness) for $e=(1-g+\delta)n / (cn) = (1-g+\delta)/c$ and $d = rcn/(cn) = r$. QED.
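Since the claim is just a formula, it can be stated as a one-liner; the example parameters below (in particular $g$, $r$, and $\delta$) are placeholders for illustration, not measured SDR values:

```python
def implied_robustness(g, r, c, delta=0.0):
    """(e, d) for which the Claim guarantees predecessor (or depth)
    robustness, given space gap g, cost (or latency) ratio r, total
    graph size c*n, and red-pebble fraction delta."""
    return (1 - g + delta) / c, r

# Hypothetical SDR-like numbers: c = 11 from the text, g/r/delta made up.
print(implied_robustness(g=0.2, r=0.5, c=11, delta=0.01))  # -> (0.0736..., 0.5)
```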

Corollary. If we don’t know how to reason about predecessor (or depth) robustness for $e>0.2$, then we must set $e$ to $0.2$ or less. And if we want a space gap $g \le 0.2$, then $c = (1-g+\delta)/e \ge (0.8+\delta)/0.2 \ge 0.8/0.2 = 4$ (with strict inequality whenever $\delta > 0$). So unsealing will be at most $11/4 = 2.75$ times cheaper than it is now, no matter how clever we are about the graph construction.
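The corollary’s arithmetic, spelled out (thresholds of $0.2$ taken from the text, and $\delta = 0$ for simplicity):

```python
def min_c(g, e, delta=0.0):
    """Smallest c compatible with space gap g when robustness is only
    provable up to fraction e; from the Claim, e = (1 - g + delta)/c."""
    return (1 - g + delta) / e

c_min = min_c(g=0.2, e=0.2)  # = 4.0 with delta = 0; larger for delta > 0
print(c_min, 11 / c_min)     # -> 4.0 2.75: SDR's c = 11 is at most 2.75x from this bound
```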

Remark. This note does not consider what happens when the verifier can take advantage of querying multiple nodes and adding up their costs, instead of just considering the maximum cost of a single node; however, all this would do is change $d$, not $e$, and $e$ is the focus of this note.