<aside> đź’ˇ
The data availability (DA) service used by Bedrock is also known as LogosDA. LogosDA is a scalability protocol that ensures that data from Zones and Sovereign Rollups built on Logos - known as blobs - are public and can be downloaded by any participant in the network that wishes to do so. This represents an improvement over the naive solution - having each node keep copies of entire blobs - which introduces serious limitations on scaling and data throughput.
The primary motivation for using LogosDA is to allow the network to scale while ensuring that blob data remains available. LogosDA achieves scalability by minimising the amount of data sent to each node and the bandwidth that each node must support, while maximising the data throughput supported by the network at large. Additionally, Logos’ decentralised ethos requires that data availability guarantees be achieved without reliance on trust assumptions or centralised actors - including “supernodes” and special DA roles.
LogosDA therefore involves splitting up blob data and distributing it among network participants, with cryptographic commitments and proofs used to verify the data’s integrity. A major feature of this design is that parties who wish to obtain an assurance of data availability can do so very quickly and with minimal bandwidth requirements. This lightweight process is known as Data Availability Sampling (DAS).
Strictly speaking, LogosDA’s availability guarantees can only be proven at an instant in time - the moment when a client engages in DAS and obtains an assurance of the data’s integrity. After this point, there is no way to guarantee the continued availability of the data without engaging in DAS again. This is because data loss cannot be punished by slashing (since a lack of data availability is unattributable), and because nodes can stop participating in LogosDA or exhibit any other kind of Byzantine behaviour.
Keeping DAS lightweight for sampling clients requires some participants to be more stable and to devote more bandwidth than ordinary Bedrock nodes. These participants, known as DA nodes, are required to temporarily store portions of blob data and to maintain open connections with several other DA nodes to maximise connection efficiency.
The LogosDA protocol typically involves three distinct stages: encoding, dispersal, and sampling. Additionally, reconstruction can be invoked as a fourth stage when some of the data cannot be obtained directly and must be recovered from the erasure-coded remainder.
In the encoding stage, the encoder (typically a Zone executor or Sovereign Rollup sequencer) takes the padded blob data and arranges it into an initial matrix of data chunks. The encoder then expands the blob data using Reed-Solomon erasure coding, doubling the size of each row in the blob matrix. In this expanded matrix, the original data remains intact alongside the new data added via expansion. The encoder also calculates various cryptographic commitments and proofs that enable DA nodes to verify the data’s integrity.

A blob expanded with LogosDA.
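To make the encoding stage concrete, the sketch below shows one way the row expansion and commitments could be computed. It is a minimal illustration only: the `reedsolo` package stands in for the production erasure code, SHA-256 hashes stand in for LogosDA’s actual commitments and proofs, and names such as `encode_blob` and `ROW_LEN` are illustrative rather than part of the protocol.

```python
# Minimal sketch of the encoding stage. `reedsolo` (pip install reedsolo)
# is a stand-in erasure code and SHA-256 hashes are stand-ins for LogosDA's
# real commitments and proofs; the production parameters, field, and
# commitment scheme will differ.
import hashlib
from reedsolo import RSCodec

ROW_LEN = 32  # illustrative: data bytes per row before expansion

def encode_blob(blob: bytes):
    # Pad the blob so it splits evenly into rows of ROW_LEN bytes.
    padded = blob + b"\x00" * (-len(blob) % ROW_LEN)
    rows = [padded[i:i + ROW_LEN] for i in range(0, len(padded), ROW_LEN)]

    # Reed-Solomon-extend each row, doubling its length: the original
    # bytes stay intact and ROW_LEN parity bytes are appended.
    rs = RSCodec(ROW_LEN)
    extended_rows = [bytes(rs.encode(row)) for row in rows]

    # Slice the expanded matrix into columns; each column will be sent
    # to one subnet of DA nodes during dispersal.
    columns = [bytes(row[c] for row in extended_rows)
               for c in range(2 * ROW_LEN)]

    # Stand-in commitments: one hash per column plus a root over all of
    # them, which the encoder would publish alongside the blob.
    column_commitments = [hashlib.sha256(col).digest() for col in columns]
    blob_commitment = hashlib.sha256(b"".join(column_commitments)).digest()
    return columns, column_commitments, blob_commitment
```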
LogosDA then proceeds with the dispersal phase, in which the encoder splits up the encoded blob and sends each column to a node in a group of DA nodes known as a subnet. The encoder also sends a Mantle transaction containing a commitment to the blob data. A DA node that receives a blob column replicates it to the other nodes in its subnet, and all of the nodes use the published commitments and proofs to verify that their column was correctly encoded. As a result, every DA node downloads only one column of the data blob while gaining confidence in the integrity of the entire blob.

The process of dispersing blob columns to nodes in different subnets.
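The sketch below illustrates the dispersal idea under the same assumptions as the encoding example: each extended column is handed to one subnet, and a receiving DA node checks its column against the published commitments. The subnet count, the `send` callback, and the shape of the on-chain data are placeholders, and in-subnet replication is omitted.

```python
# Simplified sketch of dispersal, continuing the encoding example above.
# Subnet assignment, the network layer, and the Mantle transaction format
# are placeholders; only the column-per-subnet idea follows the text.
import hashlib

NUM_SUBNETS = 64  # illustrative: one subnet per extended column

def disperse(columns, column_commitments, blob_commitment, send):
    """Send column i to subnet i (mod NUM_SUBNETS) via the caller-supplied
    `send(subnet_id, payload)` callback, and return the data the encoder
    would commit to on chain in a Mantle transaction."""
    for i, col in enumerate(columns):
        send(i % NUM_SUBNETS, {"column_index": i, "column": col})
    return {"blob_commitment": blob_commitment,
            "column_commitments": column_commitments}

def verify_column(column_index, column, onchain):
    """Check performed by a DA node: does the received column match the
    commitment published for that index? (A hash check stands in for the
    real proof that the column was correctly erasure-coded.)"""
    expected = onchain["column_commitments"][column_index]
    return hashlib.sha256(column).digest() == expected
```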
DA nodes are only expected to host their shares of blob data for a short period of time. After this point, it is expected that at least one entity among the network participants - such as an archival node, Zone executor, or an interested user or collective - will have copied the data and will continue to make it available.
Once dispersed, the data can be sampled by anybody using Data Availability Sampling (DAS). A sampling client chooses a random set of columns and sends requests to nodes in the subnets hosting those columns. Each successful sample received from a subnet incrementally boosts confidence in the data's availability. These samples, combined with the relevant commitments and proofs pulled from the chain, are used to form a local opinion on whether the data are available.
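A sampling client might look something like the sketch below. The `request_column` callback is a placeholder for the network call to a DA node, and the confidence figure uses the standard DAS argument under the 1/2 code rate implied by the doubled rows: an adversary would have to withhold more than half of the extended columns to prevent reconstruction, so each successful random sample roughly halves the chance that unavailable data goes undetected.

```python
# Minimal sketch of a DAS client, reusing the stand-in hash commitments
# from the encoding example. `request_column(index)` is a placeholder for
# the network request to a DA node in the subnet hosting that column.
import hashlib
import random

def sample(onchain, request_column, num_samples=20):
    commitments = onchain["column_commitments"]
    # Assumes num_samples <= number of extended columns.
    indices = random.sample(range(len(commitments)), num_samples)
    successes = 0
    for i in indices:
        col = request_column(i)
        if col is not None and hashlib.sha256(col).digest() == commitments[i]:
            successes += 1
        else:
            return False, 0.0  # a failed sample: treat the data as unavailable
    # Under the 1/2 code rate, each successful sample halves the probability
    # that a mostly-withheld blob goes undetected: confidence >= 1 - 2**-s.
    return True, 1.0 - 2.0 ** -successes
```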
Sampling is used by consensus leader nodes that attempt to build a block that includes Mantle transactions with blob commitments, as well as by other nodes that see blob commitments in blocks on the chain. In both cases, the integrity and availability of these blobs are verified via DAS.
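As a hypothetical usage of the sampler above, a block builder might gate inclusion of a blob-carrying Mantle transaction on the outcome of local sampling; the transaction fields and threshold below are illustrative assumptions, not Bedrock’s actual interfaces.

```python
# Hypothetical gate for a block builder, reusing sample() from the sketch
# above. The tx structure and threshold are illustrative assumptions.
def maybe_include(tx, request_column, block_txs, threshold=1.0 - 2.0 ** -20):
    available, confidence = sample(tx["blob_onchain_data"], request_column)
    if available and confidence >= threshold:
        block_txs.append(tx)
        return True
    return False
```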