Spotlight Poster

RealChat-1M: A Large-Scale Real-World LLM Conversation Dataset

Lianmin Zheng · Wei-Lin Chiang · Ying Sheng · Tianle Li · Siyuan Zhuang · Zhanghao Wu · Yonghao Zhuang · Zhuohan Li · Zi Lin · Eric Xing · Joseph E Gonzalez · Ion Stoica · Hao Zhang

Halle B
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT
Abstract:

Studying how people interact with large language models (LLMs) in real-world scenarios is increasingly important due to their widespread use in various applications. In this paper, we introduce RealChat-1M, a large-scale dataset containing one million real-world conversations with 25 state-of-the-art LLMs. This dataset is collected from 210K unique IP addresses in the wild on our chat demo website. We offer an overview of the dataset's content, including its curation process, basic statistics, and topic distribution, highlighting its diversity, originality, and scale. We demonstrate its versatility through four use cases: developing content moderation models that perform similarly to GPT-4, building a safety benchmark, training instruction-following models that perform similarly to Vicuna, and creating challenging benchmark questions. We believe that this dataset will serve as a valuable resource for understanding and advancing LLM capabilities. The dataset will be publicly available.
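To make the use cases concrete, here is a minimal illustrative sketch of working with conversation records from a dataset like this, e.g. selecting conversations from a particular model for instruction tuning. The record schema used below (field names `model`, `conversation`, `role`, `content`) is an assumption for illustration, not the paper's published format.

```python
# Hypothetical sketch: filtering conversations by model from a
# RealChat-1M-style dataset. The field names ("model", "conversation",
# "role", "content") are assumptions, not the paper's actual schema.

records = [
    {"model": "vicuna-13b",
     "conversation": [{"role": "user", "content": "Hi"},
                      {"role": "assistant", "content": "Hello!"}]},
    {"model": "gpt-4",
     "conversation": [{"role": "user", "content": "Explain BFS"},
                      {"role": "assistant", "content": "BFS explores a graph level by level."}]},
]

def filter_by_model(records, model_name):
    """Keep only conversations produced by the given model."""
    return [r for r in records if r["model"] == model_name]

# E.g., gather only Vicuna conversations as instruction-tuning data.
vicuna_convs = filter_by_model(records, "vicuna-13b")
print(len(vicuna_convs))  # → 1
```

In practice a dataset of this scale would be loaded in a streaming fashion rather than as an in-memory list, but the per-record filtering logic stays the same.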
