Poster
Improving Natural Language Understanding with Computation-Efficient Retrieval Augmentation
Shangyu Wu · Ying Xiong · Yufei CUI · Xue Liu · Buzhou Tang · Tei-Wei Kuo · Chun Jason Xue
Halle B
Retrieval-based augmentations, which incorporate knowledge from an external database into language models, have achieved great success in various knowledge-intensive (KI) tasks, such as question answering and text generation. However, integrating retrieval into non-knowledge-intensive (NKI) tasks, such as text classification, remains challenging. Existing works focus on concatenating retrievals to the input as context, forming prompt-based inputs. Unfortunately, such methods require language models that can handle long texts. Moreover, inference over these concatenated inputs consumes significant computational resources. To address these challenges, we propose \textbf{ReFusion}, a computation-efficient \textbf{Re}trieval representation \textbf{Fusion} framework with neural architecture search. The main idea is to fuse the retrieval representations directly into the language model. Specifically, we first propose an online retrieval module that retrieves representations of similar sentences. We then present a retrieval fusion module with two effective ranking schemes, i.e., a reranker-based scheme and an ordered-mask-based scheme, to fuse the retrieval representations with the hidden states. Furthermore, we use neural architecture search (NAS) to find the optimal fusion structure across layers. Finally, comprehensive experiments demonstrate that ReFusion achieves superior and robust performance on various NKI tasks.
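
To make the fusion idea concrete, here is a minimal PyTorch sketch of fusing retrieved sentence representations into a layer's hidden states with a reranker-style weighting. This is an illustration of the general technique, not the paper's actual implementation; the module name `RetrievalFusion`, the linear reranker head, and the additive broadcast over the sequence are all assumptions, and the ordered-mask-based scheme would replace the softmax weighting with a learned mask over the ranked retrievals.

```python
# A minimal sketch of retrieval representation fusion, assuming the
# retrieved neighbours arrive as fixed-size sentence embeddings.
# All names and shapes here are illustrative, not the paper's API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RetrievalFusion(nn.Module):
    """Fuse k retrieved sentence representations into hidden states.

    Sketches a reranker-based scheme: each retrieval gets a learned
    relevance score, and the score-weighted sum is projected and added
    to every token's hidden state.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)        # reranker head
        self.proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, retrieved: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, seq_len, hidden_dim) -- layer hidden states
        # retrieved: (batch, k, hidden_dim)       -- k neighbour embeddings
        scores = self.scorer(retrieved).squeeze(-1)   # (batch, k)
        weights = F.softmax(scores, dim=-1)           # rerank weights
        fused = torch.einsum("bk,bkd->bd", weights, retrieved)
        # Broadcast the pooled retrieval signal over the whole sequence.
        return hidden + self.proj(fused).unsqueeze(1)


if __name__ == "__main__":
    fusion = RetrievalFusion(hidden_dim=768)
    h = torch.randn(2, 16, 768)   # hidden states from one layer
    r = torch.randn(2, 5, 768)    # 5 retrieved sentence embeddings
    print(fusion(h, r).shape)     # torch.Size([2, 16, 768])
```

Because the retrievals enter as fixed-size representations rather than concatenated text, the sequence length, and hence the attention cost, of the backbone model is unchanged, which is the source of the claimed computational efficiency.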
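
The NAS component can be pictured as a DARTS-style continuous relaxation over candidate fusion operations at each layer. The sketch below is again an assumption about the search space, with `SearchableFusion`, the linear stand-in candidates, and the explicit "no fusion" branch all hypothetical; the paper's actual candidates would be the two ranking schemes above.

```python
# A DARTS-style sketch of searching the per-layer fusion structure,
# assuming each candidate op is a stand-in for one ranking scheme
# (e.g. reranker-based or ordered-mask-based). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SearchableFusion(nn.Module):
    """Softmax-relaxed choice among candidate fusion ops for one layer."""

    def __init__(self, hidden_dim: int, num_candidates: int = 2):
        super().__init__()
        self.candidates = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_candidates)]
        )
        # One architecture logit per candidate, plus a "skip fusion" branch.
        self.alpha = nn.Parameter(torch.zeros(num_candidates + 1))

    def forward(self, hidden: torch.Tensor, fused_retrieval: torch.Tensor) -> torch.Tensor:
        # hidden:          (batch, seq_len, hidden_dim)
        # fused_retrieval: (batch, hidden_dim) pooled retrieval signal
        w = F.softmax(self.alpha, dim=0)
        out = w[-1] * hidden                           # identity / no-fusion branch
        for wi, cand in zip(w[:-1], self.candidates):
            out = out + wi * (hidden + cand(fused_retrieval).unsqueeze(1))
        return out
```

After training the `alpha` logits jointly with the model weights, the highest-weight branch at each layer can be kept and the rest discarded, yielding a discrete fusion structure that differs across layers.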