

Poster

Step-Back Prompting Enables Reasoning Via Abstraction in Large Language Models

Huaixiu Steven Zheng · Swaroop Mishra · Xinyun Chen · Heng-Tze Cheng · Ed H. Chi · Quoc V Le · Denny Zhou

Halle B
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

We present Step-Back Prompting, a simple prompting technique that enables LLMs to perform abstraction, deriving high-level concepts and first principles from instances containing specific details. Using these concepts and principles to guide the reasoning steps, LLMs become significantly better at following a correct reasoning path toward the solution. We conduct experiments with Step-Back Prompting on PaLM-2 models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks, including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back Prompting improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11% respectively, on TimeQA by 34%, and on MuSiQue by 7%.
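For readers who want the mechanics, the sketch below illustrates the two-stage flow the abstract describes: first elicit an abstracted "step-back" question and its governing principles, then answer the original question conditioned on those principles. The `call_llm` function is a hypothetical placeholder for any LLM completion API, and the prompt wording is illustrative, not taken from the paper.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def step_back_answer(question: str) -> str:
    # Step 1 (abstraction): ask a "step-back" question that surfaces the
    # high-level concept or first principle behind the original question.
    step_back_q = call_llm(
        f"Here is a question: {question}\n"
        "What is the underlying concept or principle needed to answer it? "
        "State it as a more general question."
    )
    principles = call_llm(step_back_q)

    # Step 2 (grounded reasoning): answer the original question, conditioning
    # the model on the retrieved principles to keep the reasoning on track.
    return call_llm(
        f"Principles:\n{principles}\n\n"
        f"Using these principles, answer step by step: {question}"
    )
```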
