Virtual Poster presentation / top 25% paper
PEER: A Collaborative Language Model
Timo Schick · Jane Dwivedi-Yu · Zhengbao Jiang · Fabio Petroni · Patrick Lewis · Gautier Izacard · Qingfei You · Christoforos Nalmpantis · Edouard Grave · Sebastian Riedel
Keywords: [ zero-shot learning ] [ language models ] [ editing ] [ controllability ] [ prompting ] [ applications ]
Textual content is often the output of a collaborative writing process: We start with an initial draft, ask for suggestions, and repeatedly make changes. Agnostic of this process, today's language models are trained to generate only the final result. As a consequence, they lack several abilities crucial for collaborative writing: They are unable to update existing texts, are difficult to control, and are incapable of verbally planning or explaining their actions. To address these shortcomings, we introduce PEER, a collaborative language model that is trained to imitate the entire writing process itself. PEER can write drafts, add suggestions, propose edits and provide explanations for its actions. Crucially, we train multiple instances of PEER able to infill various parts of the writing process, enabling the use of self-training techniques for increasing the quality, amount and diversity of training data. This unlocks PEER's full potential by making it applicable in domains for which no edit histories are available and improving its ability to follow instructions, to write useful comments, and to explain its actions. We show that PEER achieves strong performance across various domains and editing tasks.
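To make the plan-edit-explain cycle described above concrete, here is a minimal Python sketch of one PEER-style revision step. Everything in it is an illustrative assumption rather than the paper's actual interface: `peer_generate` is a hypothetical stand-in for a trained PEER model (returning canned text so the sketch runs without a checkpoint), and the `PLAN`/`EDIT`/`EXPLAIN` prompt formats are invented for exposition, not the input encoding used in the paper.

```python
# Minimal sketch of a PEER-style plan -> edit -> explain loop.
# `peer_generate` is a hypothetical placeholder for a trained model;
# the prompt formats below are illustrative assumptions only.

def peer_generate(prompt: str) -> str:
    """Stand-in for a seq2seq model call; returns canned text so the
    sketch is runnable without a real checkpoint."""
    canned = {
        "PLAN": "Fix the factual error about the release year.",
        "EDIT": "Python was first released in 1991.",
        "EXPLAIN": "Changed 1989 to 1991, the year of Python's first release.",
    }
    for tag, text in canned.items():
        if prompt.startswith(tag):
            return text
    return ""

def peer_step(text: str, instruction: str) -> tuple[str, str, str]:
    """One collaborative revision: verbalize a plan, apply the edit,
    then explain what was done."""
    plan = peer_generate(f"PLAN | instruction: {instruction} | text: {text}")
    edited = peer_generate(f"EDIT | plan: {plan} | text: {text}")
    comment = peer_generate(f"EXPLAIN | before: {text} | after: {edited}")
    return plan, edited, comment

draft = "Python was first released in 1989."
plan, revised, comment = peer_step(draft, "Correct any factual errors.")
print(plan)     # Fix the factual error about the release year.
print(revised)  # Python was first released in 1991.
print(comment)  # Changed 1989 to 1991, the year of Python's first release.
```

In the paper's framing, separate instances of PEER can infill any one of these stages (the plan, the edit, or the explanation) given the others, which is what enables the self-training used to expand the training data; the single fixed pipeline above is only one possible arrangement of those pieces.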