https://arxiv.org/pdf/1908.07125.pdf

Short of it:

Adversarial examples are scary!

Long of it:

The authors created universal adversarial triggers: input-agnostic sequences of tokens that, when added to the front or end of any input, cause a model to produce a specific target prediction. They focused on universal attacks because they are a bigger threat than per-example attacks: a single trigger works on any input, so it can be distributed widely and reused without crafting a new attack per example. To find a trigger, they run a gradient-guided search, starting from a neutral sequence like "the the the" and repeatedly swapping trigger tokens to raise the probability of the target prediction over batches of examples.
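Here's a minimal PyTorch sketch of that search loop, assuming a toy mean-pooled embedding classifier as the victim model. The model, vocabulary size, and hyperparameters are placeholders (not the paper's setup), and this greedy single-swap version omits the paper's candidate re-evaluation and beam search:

```python
import torch
import torch.nn.functional as F

# Toy victim model: mean-pooled embeddings -> linear classifier. This is a
# stand-in so the sketch runs end to end; the paper attacks real NLP models
# (sentiment classifiers, SNLI models, SQuAD readers, GPT-2).
VOCAB, DIM, CLASSES = 1000, 32, 2
embedding = torch.nn.Embedding(VOCAB, DIM)
classifier = torch.nn.Linear(DIM, CLASSES)

def trigger_search(batch, target, trigger_len=3, iters=10):
    """Greedy, HotFlip-style search for a universal trigger (simplified).

    batch:  LongTensor [B, T] of token ids from the dataset
    target: class id the trigger should force the model to predict
    """
    # Start from a neutral trigger (the paper initializes with "the the the").
    trigger = torch.zeros(trigger_len, dtype=torch.long)
    for _ in range(iters):
        # Embed the trigger as a leaf tensor so we can take its gradient.
        trig_emb = embedding.weight[trigger].detach().clone().requires_grad_(True)
        inp_emb = embedding(batch)
        # Prepend the same trigger embeddings to every example in the batch.
        full = torch.cat(
            [trig_emb.unsqueeze(0).expand(batch.size(0), -1, -1), inp_emb], dim=1
        )
        logits = classifier(full.mean(dim=1))
        loss = F.cross_entropy(
            logits, torch.full((batch.size(0),), target, dtype=torch.long)
        )
        loss.backward()
        # First-order (HotFlip) approximation: swapping position i's token for
        # vocab token v changes the loss by roughly (e_v - e_i) . grad_i, so
        # per position pick the vocab token that minimizes e_v . grad_i.
        with torch.no_grad():
            scores = embedding.weight @ trig_emb.grad.t()  # [VOCAB, trigger_len]
            trigger = scores.argmin(dim=0)
    return trigger

# Usage: find token ids to prepend to any input to push predictions to class 1.
batch = torch.randint(0, VOCAB, (8, 20))  # fake batch standing in for real data
print(trigger_search(batch, target=1))
```

The universality comes from optimizing one shared trigger over a whole batch rather than per example: the same few tokens have to lower the target loss for every input at once, which is why the result transfers to unseen inputs.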