Poster
RETSim: Resilient and Efficient Text Similarity
Marina Zhang · Owen Vallis · Aysegul Bumin · Tanay Vakharia · Elie Bursztein
Halle B
This paper introduces RETSim (Resilient and Efficient Text Similarity), a lightweight, multilingual deep learning model trained to produce robust metric embeddings for near-duplicate text retrieval, clustering, and dataset deduplication tasks. We demonstrate that RETSim is significantly more robust and accurate than MinHash and neural text embeddings, achieving new state-of-the-art performance on dataset deduplication, adversarial text retrieval benchmarks, and spam clustering tasks. Additionally, we introduce the W4NT3D benchmark (Wiki-40B 4dversarial Near-T3xt Dataset), enabling the evaluation of models on typo-laden near-duplicate text retrieval in a multilingual setting. RETSim and the W4NT3D benchmark are open-sourced under the Apache 2 license at https://github.com/[anonymized].
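As a rough illustration of the embedding-based deduplication workflow described above, the sketch below deduplicates a small corpus by thresholding cosine similarity between text embeddings. The `embed` function is a toy character-trigram hasher standing in for an actual embedding model such as RETSim, and the 0.9 threshold is an arbitrary placeholder; both are illustrative assumptions, not the released API.

```python
# Illustrative sketch only: the embedder below is a toy placeholder, NOT the RETSim model.
import numpy as np


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy character-trigram hashing embedder standing in for a learned metric embedding."""
    vec = np.zeros(dim)
    padded = f"  {text.lower()}  "
    for i in range(len(padded) - 2):
        vec[hash(padded[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


def deduplicate(texts, threshold: float = 0.9):
    """Keep a text only if its cosine similarity to every previously kept text is below threshold."""
    kept, kept_vecs = [], []
    for text in texts:
        v = embed(text)
        # Embeddings are unit-normalized, so the dot product is the cosine similarity.
        if all(float(v @ u) < threshold for u in kept_vecs):
            kept.append(text)
            kept_vecs.append(v)
    return kept


if __name__ == "__main__":
    corpus = [
        "The quick brown fox jumps over the lazy dog.",
        "The quick brown f0x jumps over the lazy dog!",  # typo-laden near-duplicate
        "An entirely different sentence about text similarity.",
    ]
    print(deduplicate(corpus))
```

In practice, the exhaustive pairwise comparison shown here would be replaced by an approximate nearest-neighbor index over the embeddings to scale to large datasets.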