At this consistency level, a read reflects every change made to the database before the read began: any write that completed before the read operation started is guaranteed to be visible in the read's result.

In simple terms:
If operation A finishes before operation B starts, then everyone in the system must see A before B.
This gives the illusion that the system is a single, correct, up-to-date machine, even though it may be distributed across many nodes.
For example, suppose initially we had x = 10, and a client then updates it to 17. Once that update completes, any read of x that starts afterwards must return 17, never the stale value 10. This is useful when systems need perfect consistency.
Linearizability combines two guarantees:
1. Single-copy illusion
The system behaves as if there is only one copy of the data.
2. Real-time ordering
If:
Client1 writes X = 1 (finishes at t=5)
Client2 reads X (starts at t=6)
Then the read must return X = 1 - no stale value is allowed.
This is much stronger than eventual consistency and even stronger than sequential consistency.
Use cases:
1. Distributed locks
If two nodes compete for a lock:
Node A: acquire_lock -> success
Node B: acquire_lock -> must fail
Any stale read → split brain
2. Financial transactions
balance = 100
withdraw(100)
withdraw(100)
Without linearizability → both may succeed → money created out of nothing