In this document, we investigate which prompting techniques can be used to reduce hallucinations and improve the quality of model outputs.

Chain-of-Thought

Chain-of-thought (CoT) prompting makes the LLM reason step by step, working through intermediate chains of logic before answering, which improves performance on complex questions.

As the paper notes, you can apply this technique simply by asking the model to “think step by step” or “think in steps”:

{prompt} + Let's think step by step
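
As a minimal sketch, the trigger phrase can just be appended to the user prompt before calling the model. The example below assumes the OpenAI Python client and the gpt-4o-mini model; the document does not name a specific provider or model, so treat those choices as placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chain_of_thought(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Append the zero-shot CoT trigger phrase and return the model's answer."""
    cot_prompt = f"{prompt}\n\nLet's think step by step."
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": cot_prompt}],
    )
    return response.choices[0].message.content

print(chain_of_thought(
    "If a train leaves at 3 pm and travels 120 km at 60 km/h, when does it arrive?"
))
```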

ReAct

ReAct prompting helps language models solve complex tasks by interleaving reasoning and acting. The model is given extra tokens for “reasoning” (thinking and strategizing about what to do next) and for “acting” (calling tools or APIs to pull in new information), and each observation returned by a tool feeds back into the next reasoning step. A minimal sketch of this loop follows below.
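
The sketch below assumes the same OpenAI client as above; the Thought/Action/Observation format and the single lookup tool are illustrative stand-ins for whatever tools your application exposes.

```python
import re
from openai import OpenAI

client = OpenAI()

REACT_SYSTEM = (
    "Answer the question by alternating Thought, Action, and Observation steps.\n"
    "Available action: lookup[query] -- returns a short factual snippet.\n"
    "When you know the answer, reply with 'Final Answer: ...'."
)

def lookup(query: str) -> str:
    # Placeholder tool: swap in a real search or API call here.
    return f"(stub result for '{query}')"

def react(question: str, model: str = "gpt-4o-mini", max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": REACT_SYSTEM},
                {"role": "user", "content": transcript},
            ],
            stop=["Observation:"],  # our code, not the model, supplies observations
        )
        step = response.choices[0].message.content
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action:\s*lookup\[(.+?)\]", step)
        if action:
            transcript += f"Observation: {lookup(action.group(1))}\n"
    return transcript  # ran out of steps; return the raw trace for inspection
```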

In many cases, ReAct seems to perform better than CoT alone.
