What is impersonation?

When the model speaks for {{user}} or controls {{user}}'s actions.

Why does this happen?

A multitude of reasons:

Can't I just OOC the model to stop speaking for {{user}}?

Yes, but OOC instructions are temporary: they only work while they remain inside your context window, and they stop working the moment they leave it. You are better off troubleshooting where the issue actually lies instead of depending on them.
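A minimal sketch of why this happens (hypothetical helper and message names; word counts stand in for tokens): a fixed token budget keeps only the most recent messages, so the oldest ones, including an old OOC, are the first to be dropped.

```python
# Minimal sketch (hypothetical names): how an OOC instruction
# "falls out" of a fixed-size context window as the chat grows.

def build_context(history, budget):
    """Keep only the most recent messages that fit in `budget` tokens.
    Token counts are faked as word counts for illustration."""
    context = []
    used = 0
    for message in reversed(history):   # walk newest-first
        cost = len(message.split())
        if used + cost > budget:
            break
        context.insert(0, message)      # restore chronological order
        used += cost
    return context

history = [
    "[OOC: never speak for {{user}}]",  # your instruction, oldest message
    "Bot: The tavern door creaks open.",
    "User: I order an ale.",
    "Bot: The barkeep slides a mug over.",
    "User: I ask about the rumors.",
]

# With a small budget, the oldest message -- the OOC -- is the first to go.
window = build_context(history, budget=18)
print("[OOC: never speak for {{user}}]" in window)  # False: it left the context
```

The OOC did nothing wrong; it simply aged out of the window, which is why fixing the underlying cause beats re-sending OOCs forever.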

How to fix it?

How to write better responses?

You should make good use of your turn to push the narrative forward or give the model something concrete to work with.

Background info

An LLM does not reason the way a human does. LLMs work by predicting one token at a time: the model receives sequential text as input and uses patterns learned during training to predict which token should come next. Once a token is predicted, it is added to the sequence, and the process repeats, with each prediction building on all previous tokens in the sequence.
-# Note: Because this prediction process is based on learned statistical patterns, sending the same input repeatedly might lead to it producing similar outputs.
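The predict-append-repeat loop described above can be sketched with a toy bigram model (not a real LLM, just an illustration of the loop shape; the corpus and function names are made up):

```python
# Toy illustration: greedy next-token prediction using bigram counts
# "learned" from a tiny corpus. The point is the loop shape:
# predict one token, append it to the sequence, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# "Training": count which token follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt, steps):
    tokens = prompt.split()
    for _ in range(steps):
        counts = bigrams.get(tokens[-1])
        if not counts:
            break  # never seen this token last in training, stop
        # Pick the statistically most likely next token (greedy decoding).
        tokens.append(counts.most_common(1)[0][0])
    return " ".join(tokens)

# Same input, same learned patterns, same output every time.
print(generate("the", 4))
```

Real models sample from a probability distribution rather than always taking the top token, which is why their outputs vary somewhat, but the underlying one-token-at-a-time loop is the same.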
