RLHF stands for Reinforcement Learning from Human Feedback: fine-tuning a language model with reinforcement learning against a reward model trained on human preference comparisons.
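The reward-model stage fits a pairwise (Bradley-Terry) preference loss on human comparisons. A minimal sketch in plain Python, with illustrative names and toy scalar scores in place of real model outputs:

```python
import math

def reward_pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    # Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected).
    # Minimized when the reward model scores the human-preferred
    # response above the rejected one.
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With equal scores the loss is log 2; it shrinks as the
# preferred response pulls ahead and grows when the ranking flips.
assert abs(reward_pairwise_loss(0.0, 0.0) - math.log(2.0)) < 1e-9
assert reward_pairwise_loss(2.0, -1.0) < reward_pairwise_loss(-1.0, 2.0)
```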

A diagram from A Survey of Large Language Models:

A diagram from Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback:
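The RL stage in these pipeline diagrams typically optimizes the reward-model score minus a KL penalty that keeps the policy close to the reference (SFT) model. A sketch of that per-sequence reward, with an illustrative `beta`:

```python
def kl_shaped_reward(r_model: float,
                     logp_policy: float,
                     logp_ref: float,
                     beta: float = 0.1) -> float:
    # Reward used during RL fine-tuning in RLHF:
    # reward-model score minus beta * per-token KL estimate
    # (log-prob of the sampled response under the policy
    # minus its log-prob under the frozen reference model).
    return r_model - beta * (logp_policy - logp_ref)

# When the policy has not moved from the reference, the KL term
# vanishes and the shaped reward equals the reward-model score.
assert kl_shaped_reward(1.0, -2.0, -2.0) == 1.0
```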

Links

Illustrating Reinforcement Learning from Human Feedback (RLHF)

Some good examples here: Why do we need RLHF? Imitation, Inverse RL, and the role of reward