Background

In a proof-of-capacity blockchain, anyone may create a deep fork and prepare a heavier chain using only the average amount of space pledged since the fork. This long-range attack allows them to double-spend with less than half of the honest space. If we instead adopt the chain with the heaviest tip (i.e. the last k blocks), then a dishonest majority may always produce a heavier tip atop a deep fork.

To resolve these issues we need some way to seal blocks with evidence of the wall-clock time that has actually elapsed, such that an attacker must also seal any chain they produce on the fly, effectively preventing long-range attacks.

Possible Approaches

  1. Name of Approach

    1. How it works in their model
    2. Limitations or new security assumptions
    3. How it could be adapted to Subspace
    4. Security implications for Subspace
  2. Chia: alternating proofs-of-space-time (PoST)

    1. Each PoSpace has a score, which determines the number of iterations the PoTime must run in order to generate a new PoSpace challenge. Both PoSpace and PoTime have difficulty resets. This is how it was described in their original white paper, but this has changed with the new consensus paper. Note that this provides a notion of time (i.e. a delay) in consensus and, by virtue of that fact, prevents long-range attacks.
      1. Chia Green Paper
      2. Chia Consensus White Paper
    2. The problem is that faster PoTime evaluation leads to a heavier chain (for private long-range attacks), breaking the honest-majority storage assumption. Need to confirm the algorithm specifics from their latest code or spec. There is a grinding attack on difficulty resets if done naively.
    3. We could adapt Subspace such that each PoR has a score, which determines the number of iterations for a PoTime (using the same primitive), which in turn yields a new PoR challenge.
    4. Problems
      1. The block time in Chia is much larger, so the propagation delay on the PoTime is negligible. In Subspace, a six-second block time would make this non-negligible. If this were done over the epoch randomness, it might not be as much of a challenge, but that would be a different model.
      2. Still have the challenge of long-range attacks with a faster VDF.
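The adaptation sketched in point 3 above (PoR score determines PoTime iterations, which yield a new PoR challenge) might look roughly like this in Python. The score derivation, the linear iteration mapping, and the iterated SHA-256 standing in for the PoTime are all illustrative assumptions, not Chia's or Subspace's actual formulas:

```python
import hashlib

def por_score(por_tag: bytes) -> int:
    """Interpret a hash of the PoR tag as an integer quality score
    (assumption: analogous to Chia's PoSpace quality)."""
    return int.from_bytes(hashlib.sha256(por_tag).digest(), "big")

def iterations_for_por(por_tag: bytes, difficulty: int, max_iters: int = 1 << 20) -> int:
    """Map a PoR score to a PoTime iteration count. The modular reduction
    and `difficulty` scaling are placeholders for the real mapping."""
    score = por_score(por_tag)
    return max(1, (score % max_iters) // difficulty)

def sequential_hash(seed: bytes, iterations: int) -> bytes:
    """Stand-in PoTime: an iterated hash (a real deployment would use a VDF)."""
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

# The winning farmer's PoTime output seeds the next PoR challenge.
tag = b"example-por-tag"
challenge = sequential_hash(tag, iterations_for_por(tag, difficulty=64))
```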
  3. Tse: proof-of-stake-arrow-of-time (PoSaT)

    1. Consensus nodes evaluate a PoTime incrementally, at each iteration they compute a VRF over the intermediate output and check if it falls below a threshold, at which point they can produce a new block.
      1. PoSaT Paper
    2. The main problem is that all consensus nodes must have access to a PoTime evaluator, which must run at roughly the same speed for a fair protocol. This also means the total computational effort expended grows linearly in the number of nodes, which encourages centralization.
    3. Adapting to Subspace, the output of each step of the PoTime is evaluated against the plot (specifically the BST) until a tag which satisfies the difficulty parameter is found.
    4. The challenge is that all farmers must also have a fast CPU (or ASIC) and expend a much larger constant computation to participate in consensus, significantly raising the barriers to entry for farming.
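The PoSaT loop in point 1 above can be sketched as follows. This is a minimal illustration: the keyed hash standing in for a real VRF and the iterated SHA-256 standing in for the PoTime are assumptions:

```python
import hashlib

def posat_step(state: bytes) -> bytes:
    """One sequential PoTime step (iterated hash as a VDF stand-in)."""
    return hashlib.sha256(state).digest()

def vrf(secret: bytes, value: bytes) -> int:
    """Keyed hash standing in for a real VRF evaluation (assumption)."""
    return int.from_bytes(hashlib.sha256(secret + value).digest(), "big")

def farm(secret: bytes, seed: bytes, difficulty: int, max_steps: int = 100_000):
    """Advance the PoTime incrementally; at each iteration, check whether
    the VRF over the intermediate output falls below the threshold, at
    which point a new block may be produced."""
    threshold = (1 << 256) // difficulty
    state = seed
    for step in range(1, max_steps + 1):
        state = posat_step(state)
        if vrf(secret, state) < threshold:
            return step, state  # eligible to produce a block
    return None
```

In the Subspace adaptation, the threshold check would instead be a lookup of the intermediate output against the plot's BST.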
  4. Ethereum 2.0: Random Beacon from Proof-of-Time (PoDelay)

    1. The basic idea is to replace the randomness beacon in PoS (i.e. Ouroboros Praos) with a PoTime. Each epoch a PoTime is evaluated, seeded with the randomness from the prior epoch. Epochs could either have a fixed time (specific iteration count) or variable time (satisfy some threshold), though I believe the former is used. The slot randomness is still derived in the same manner as before.
      1. Benedikt Bünz's paper (proofs-of-delay)
      2. Justin Drake's approach
    2. This has yet to be implemented in any 3rd-gen PoStake protocol, so there are certainly some challenges. In ETH2 they wanted to use an RSA modulus with a trusted setup via a large MPC ceremony (1024 participants), but the scheme was shown to be broken. Not sure where this has gone since. There also seem to be issues around forking the randomness and simulation, as well as the implications of PoTime evaluation speedups (as in Chia).
    3. Simply replace the randomness construction to use the PoTime output for a set number of iterations. The randomness would no longer be based on the previous PoRs.
    4. Problems — should be very similar to the PoStake variant challenges
  5. Solana naive Proof-of-Time

    1. Instead of using a VDF for the PoTime, a standard hash function is used with several intermediate checkpoints, which allows for faster parallel verification (i.e. on a GPU). Need to review the paper and documentation; there is some BFT consensus being built on top of this as well, and it has certainly evolved since the last look.
    2. Unclear what the challenges are without further research.
    3. We could use the existing AES-based PoTime library we already created, or we could use Sloth in some latency-optimized mode (it seems Supranational said this was the direction the PoStake research was moving).
    4. Still very unclear without a further understanding of the protocol
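The checkpointed hash chain described in point 1 can be sketched as follows. The checkpoint spacing and the use of SHA-256 are illustrative choices, and the parallel verification is shown as independent per-segment checks (each of which could run on its own GPU thread):

```python
import hashlib

def build_chain(seed: bytes, iterations: int, checkpoint_every: int) -> list[bytes]:
    """Sequentially evaluate the hash chain, recording intermediate
    checkpoints. Only the prover pays the full sequential cost."""
    checkpoints = [seed]
    state = seed
    for i in range(1, iterations + 1):
        state = hashlib.sha256(state).digest()
        if i % checkpoint_every == 0:
            checkpoints.append(state)
    return checkpoints

def verify_segment(start: bytes, end: bytes, steps: int) -> bool:
    """Each segment is independent of the others, so a verifier can
    check all segments in parallel."""
    state = start
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
    return state == end

cps = build_chain(b"\x00" * 32, iterations=1000, checkpoint_every=100)
ok = all(verify_segment(cps[i], cps[i + 1], 100) for i in range(len(cps) - 1))
```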
  6. Post-Consensus Sealing

    1. Instead of using the PoTime to derive the consensus challenge (as all of the above do), we only use it to "seal" blocks which have already been created (the full block, i.e. proof + content). This could be done at the level of specific blocks, epochs, or even eras, depending on how long we would like each PoTime evaluation to take (probably epoch is best). In the epoch scheme, we would concatenate and hash all of the block hashes for the previous epoch, then use that as the input to a PoTime with a desired wall-clock duration, which would specify some number of iterations based on the time observed by the network (i.e. it is dynamic). These proofs would then be embedded back into the chain through a special transaction which calls dedicated on-chain contract or runtime logic.
    2. Challenges
      1. Speedups: If an attacker has a faster PoTime evaluator they could produce the seals in less wall clock time than has elapsed and still generate a deep fork, though it would be much more expensive.
      2. Since the difficulty adjustments are based on observed timestamps, it feels like there is some way to cheat the system in a private attack where the timestamps are fabricated (since they are not verified in real time).
      3. Incentives: Why would anyone evaluate the PoTs if they are not required for consensus? Who creates them, is there a reward, and how is a winner selected?
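The epoch sealing in point 1 of this last approach might look roughly like this. The iterated SHA-256 standing in for the PoTime and the fixed iteration count are assumptions for illustration (the scheme above specifies a dynamic count based on observed time):

```python
import hashlib

def epoch_seal_input(block_hashes: list[bytes]) -> bytes:
    """Concatenate and hash all block hashes from the previous epoch."""
    return hashlib.sha256(b"".join(block_hashes)).digest()

def seal_epoch(block_hashes: list[bytes], iterations: int) -> bytes:
    """Run the PoTime (iterated hash as a stand-in) over the epoch digest.
    In the real scheme, `iterations` would be set dynamically from the
    time observed by the network."""
    state = epoch_seal_input(block_hashes)
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

# The resulting seal would be embedded back on chain via a special
# transaction; the fixed count here is purely illustrative.
seal = seal_epoch([b"\x01" * 32, b"\x02" * 32], iterations=10_000)
```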

Solution (for 3 above)

For each epoch, we derive a random challenge as the output of an inherently sequential function, i.e. challenge = sequential_hash(input, iterations), where input is a random seed and iterations is tuned such that the actual wall-clock time is nearly the same as the expected epoch duration. This produces a random beacon which serves as the epoch randomness itself.

We can make this scheme efficiently verifiable with a proof-of-time (PoT), such as a Verifiable Delay Function (VDF).

Algorithm

  1. Given a genesis seed as a pseudorandom 32-byte value.
  2. Given the minimum latency of a single invocation of a hash function (i.e. 1 ms).
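A minimal sketch of the beacon described above, assuming SHA-256 as the sequential hash and an illustrative ten-minute epoch (the epoch duration is not specified here):

```python
import hashlib
import os

GENESIS_SEED = os.urandom(32)        # pseudorandom 32-byte genesis seed (step 1)
MIN_HASH_LATENCY_MS = 1              # minimum latency per hash invocation (step 2)
EPOCH_DURATION_MS = 10 * 60 * 1000   # illustrative epoch length (assumption)

# Tune iterations so the wall-clock time of the evaluation is nearly the
# same as the expected epoch duration.
ITERATIONS = EPOCH_DURATION_MS // MIN_HASH_LATENCY_MS

def sequential_hash(seed: bytes, iterations: int) -> bytes:
    """Inherently sequential function: each step depends on the previous
    output, so the evaluation cannot be parallelized."""
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

# The epoch challenge is the beacon output over the prior epoch's seed
# (a small iteration count is used here for demonstration only).
challenge = sequential_hash(GENESIS_SEED, 1000)
```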