https://doc.akka.io/docs/akka/current/typed/failure-detector.html

Introduction

Remote DeathWatch uses heartbeat messages and the failure detector to detect network failures and JVM crashes.

The heartbeat arrival times are interpreted by an implementation of The Phi Accrual Failure Detector by Hayashibara et al.

Failure Detector Heartbeats

Heartbeats are sent every second by default, which is configurable. They are performed in a request/reply handshake, and the replies are input to the failure detector.

The suspicion level of failure is represented by a value called phi. The basic idea of the phi failure detector is to express the value of phi on a scale that is dynamically adjusted to reflect current network conditions.

The value of phi is calculated as:

phi = -log10(1 - F(timeSinceLastHeartbeat))

where F is the cumulative distribution function of a normal distribution with mean and standard deviation estimated from historical heartbeat inter-arrival times.
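
As a rough illustration, the formula can be expressed in a few lines of Scala. The sketch below assumes the inter-arrival history has already been summarized into a mean and standard deviation (in milliseconds) and approximates the normal cumulative distribution function with a standard logistic approximation to stay dependency-free; it illustrates the formula and is not Akka's internal implementation.

object PhiSketch {
  import scala.math.{exp, log10, max}

  // phi = -log10(1 - F(timeSinceLastHeartbeat)), where F is the CDF of a normal
  // distribution fitted to the observed heartbeat inter-arrival times.
  // The CDF is approximated with a logistic function to keep the example self-contained.
  def phi(timeSinceLastHeartbeatMs: Double, meanMs: Double, stdDevMs: Double): Double = {
    val y = (timeSinceLastHeartbeatMs - meanMs) / max(stdDevMs, 1.0)
    val e = exp(-y * (1.5976 + 0.070566 * y * y)) // logistic approximation of the tail
    if (timeSinceLastHeartbeatMs > meanMs)
      -log10(e / (1.0 + e))          // 1 - F(t) for times above the mean
    else
      -log10(1.0 - 1.0 / (1.0 + e))  // same value, numerically stabler below the mean
  }
}

For example, with a mean of 1000 ms and a standard deviation of 200 ms, this sketch yields phi of about 0.3 after 1000 ms of silence and about 1.6 after 1400 ms, i.e. phi grows quickly once the silence exceeds the expected inter-arrival time.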

An accrual failure detector decouples monitoring from interpretation. That makes it applicable to a wider range of scenarios and better suited for building generic failure detection services. The idea is that it keeps a history of failure statistics, calculated from heartbeats received from other nodes, and makes an educated guess about whether a specific node is up or down by taking multiple factors, and how they accumulate over time, into account. Rather than only answering “yes” or “no” to the question “is the node down?”, it returns a phi value representing the likelihood that the node is down.
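
To make the decoupling concrete, here is a hedged sketch in the same Scala style: a small history type only records heartbeat arrivals (monitoring), while callers compute phi and compare it against whatever threshold suits them (interpretation). HeartbeatHistory, maxSamples and Interpretation are illustrative names, not Akka's internals, and the sketch assumes at least one interval has been recorded.

import scala.collection.immutable.Queue

final case class HeartbeatHistory(maxSamples: Int, intervalsMs: Queue[Double] = Queue.empty) {
  // Monitoring: only record observed inter-arrival times, dropping the oldest sample when full.
  def record(intervalMs: Double): HeartbeatHistory = {
    val trimmed = if (intervalsMs.size >= maxSamples) intervalsMs.tail else intervalsMs
    copy(intervalsMs = trimmed.enqueue(intervalMs))
  }
  def mean: Double = intervalsMs.sum / intervalsMs.size
  def stdDev: Double = {
    val m = mean
    math.sqrt(intervalsMs.map(x => (x - m) * (x - m)).sum / intervalsMs.size)
  }
}

object Interpretation {
  // Interpretation: the caller decides which phi value counts as "down",
  // reusing the PhiSketch.phi function from the previous sketch.
  def isAvailable(history: HeartbeatHistory, timeSinceLastHeartbeatMs: Double, threshold: Double): Boolean =
    PhiSketch.phi(timeSinceLastHeartbeatMs, history.mean, history.stdDev) < threshold
}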

The following chart illustrates how phi increases with increasing time since the previous heartbeat.

[Chart phi1.png: phi as a function of time since the last heartbeat, for a standard deviation of 200 ms]

Phi is calculated from the mean and standard deviation of historical inter-arrival times. The previous chart is an example for a standard deviation of 200 ms. If the heartbeats arrive with less deviation the curve becomes steeper, i.e. it is possible to determine a failure more quickly. The curve looks like this for a standard deviation of 100 ms.

[Chart phi2.png: phi as a function of time since the last heartbeat, for a standard deviation of 100 ms]

To be able to survive sudden abnormalities, such as garbage collection pauses and transient network failures, the failure detector is configured with a margin, which you may want to adjust depending on your environment. This is what the curve looks like for failure-detector.acceptable-heartbeat-pause configured to 3 seconds.

[Chart phi3.png: phi as a function of time since the last heartbeat, with acceptable-heartbeat-pause set to 3 seconds]
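
The margin and the other failure detector settings are ordinary configuration. Below is a hedged sketch of tuning the cluster failure detector settings under akka.cluster.failure-detector when starting an ActorSystem; the values shown are examples rather than recommendations, the system name "ClusterSystem" is a placeholder, and in a real application the settings would normally live in application.conf.

import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import com.typesafe.config.ConfigFactory

object FailureDetectorTuning {
  // Parsing the configuration from a string keeps the sketch self-contained;
  // in a real application these keys would go into application.conf.
  val config = ConfigFactory.parseString("""
    akka.cluster.failure-detector {
      heartbeat-interval = 1 s          # how often heartbeats are sent
      threshold = 8.0                   # phi value at which a node is considered unreachable
      acceptable-heartbeat-pause = 3 s  # tolerated pause, e.g. for GC or transient network hiccups
    }
  """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit = {
    // Requires akka-cluster-typed on the classpath; "ClusterSystem" is a placeholder name.
    val system = ActorSystem[Nothing](Behaviors.empty, "ClusterSystem", config)
  }
}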

Logging

When the Cluster failure detector observes another node as unreachable, it will log: