The Streams API allows you to programmatically access streams of data received over the network or created by whatever means locally and process them with JavaScript. Streaming involves breaking down a resource that you want to receive, send, or transform into small chunks, and then processing these chunks bit by bit. While streaming is something browsers do anyway when receiving assets like HTML or videos to be shown on webpages, this capability had never been available to JavaScript before fetch with streams was introduced in 2015.

Streaming was technically possible with XMLHttpRequest, but it really was not pretty.

Previously, if you wanted to process a resource of some kind (be it a video, a text file, etc.), you would have to download the entire file, wait for it to be deserialized into a suitable format, and then process it. With streams being available to JavaScript, this all changes. You can now process raw data with JavaScript progressively as soon as it is available on the client, without needing to generate a buffer, string, or blob. This unlocks a number of use cases.

Core concepts #

Before I go into details on the various types of streams, let me introduce some core concepts.

A chunk is a single piece of data that is written to or read from a stream. It can be of any type; streams can even contain chunks of different types. Most of the time, a chunk will not be the most atomic unit of data for a given stream. For example, a byte stream might contain chunks consisting of 16 KiB Uint8Array units, instead of single bytes.
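
As a minimal sketch (the 16 KiB size is just the illustrative value from above), such a byte stream could be constructed like this:

```js
// A readable stream whose chunks are 16 KiB Uint8Array pieces, not single bytes.
const CHUNK_SIZE = 16 * 1024; // illustrative value

const byteStream = new ReadableStream({
  start(controller) {
    // Enqueue a single 16 KiB chunk, then signal that no more data follows.
    controller.enqueue(new Uint8Array(CHUNK_SIZE));
    controller.close();
  },
});
```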

A readable stream represents a source of data from which you can read. In other words, data comes out of a readable stream. Concretely, a readable stream is an instance of the ReadableStream class.
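
For example, the body of a fetch() response is such a readable stream. A minimal sketch of consuming its chunks with a reader (the URL is a placeholder):

```js
// The body of a fetch() response is a ReadableStream of Uint8Array chunks.
const response = await fetch('/example-data.txt'); // placeholder URL
const reader = response.body.getReader();

while (true) {
  const {done, value} = await reader.read();
  if (done) break;
  console.log('Received a chunk of', value.byteLength, 'bytes');
}
```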

A writable stream represents a destination for data into which you can write. In other words, data goes into a writable stream. Concretely, a writable stream is an instance of the WritableStream class.
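
A minimal sketch of a writable stream whose underlying sink simply logs each chunk it receives:

```js
// A WritableStream whose underlying sink logs every chunk written to it.
const writableStream = new WritableStream({
  write(chunk) {
    console.log('Writing chunk:', chunk);
  },
});

const writer = writableStream.getWriter();
await writer.write('hello');
await writer.write('streams');
await writer.close();
```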

A transform stream consists of a pair of streams: a writable stream, known as its writable side, and a readable stream, known as its readable side. A real-world metaphor for this would be a simultaneous interpreter who translates from one language to another on-the-fly. In a manner specific to the transform stream, writing to the writable side results in new data being made available for reading from the readable side. Concretely, any object with a writable property and a readable property can serve as a transform stream. However, the standard TransformStream class makes it easier to create such a pair that is properly entangled.
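
As an illustrative sketch, here is a TransformStream that uppercases string chunks written to its writable side and exposes the result on its readable side:

```js
// A TransformStream whose transform() uppercases each incoming string chunk.
const upperCaseStream = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

const reader = upperCaseStream.readable.getReader();
const writer = upperCaseStream.writable.getWriter();
writer.write('streams are neat'); // Intentionally not awaited; the read() below unblocks it.
const {value} = await reader.read();
console.log(value); // "STREAMS ARE NEAT"
```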

Pipe chains #

Streams are primarily used by piping them to each other. A readable stream can be piped directly to a writable stream, using the readable stream's pipeTo() method, or it can be piped through one or more transform streams first, using the readable stream's pipeThrough() method. A set of streams piped together in this way is referred to as a pipe chain.
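
A minimal, self-contained sketch of such a pipe chain (the URL is a placeholder; TextDecoderStream turns the incoming bytes into text):

```js
// Pipe chain: fetch body → text decoding → uppercasing → logging sink.
const upperCase = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

const logSink = new WritableStream({
  write(chunk) {
    console.log(chunk);
  },
});

const response = await fetch('/example.txt'); // placeholder URL
await response.body
  .pipeThrough(new TextDecoderStream())
  .pipeThrough(upperCase)
  .pipeTo(logSink);
```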

Once a pipe chain is constructed, it will propagate signals regarding how fast chunks should flow through it. If any step in the chain cannot yet accept chunks, it propagates a signal backwards through the pipe chain, until eventually the original source is told to stop producing chunks so fast. This process of normalizing flow is called backpressure.
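
When writing to a writable stream directly through a writer, you can cooperate with backpressure yourself by awaiting the writer's ready promise before each write, as in this minimal sketch with a deliberately slow sink:

```js
// A slow sink so that backpressure builds up quickly.
const slowSink = new WritableStream(
  {
    async write(chunk) {
      await new Promise((resolve) => setTimeout(resolve, 1000)); // simulate a slow consumer
      console.log('Consumed', chunk);
    },
  },
  new CountQueuingStrategy({highWaterMark: 1}),
);

const writer = slowSink.getWriter();
for (const chunk of ['a', 'b', 'c']) {
  await writer.ready; // resolves only once the internal queue has room again
  writer.write(chunk);
}
await writer.close();
```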

A readable stream can be teed (named after the shape of an uppercase 'T') using its tee() method. This will lock the stream, that is, make it no longer directly usable; however, it will create two new streams, called branches, which can be consumed independently. Teeing is also important because streams cannot be rewound or restarted; more about this later.
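
A minimal sketch of teeing a fetch response body into two branches (the URL is a placeholder):

```js
// tee() locks the original stream and returns two independent branches.
const response = await fetch('/example-data.json'); // placeholder URL
const [branch1, branch2] = response.body.tee();
// branch1 and branch2 can now be consumed independently, for example one
// for caching the raw bytes and one for parsing them.
```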

A pipe chain.

The mechanics of a readable stream #

A readable stream is a data source represented in JavaScript by a ReadableStream object that flows from an underlying source. The ReadableStream() constructor creates and returns a readable stream object from the given handlers. There are two types of underlying source: