

Poster

Teach LLMs to Phish: Stealing Private Information from Language Models

Ashwinee Panda · Christopher A. Choquette-Choo · Zhengming Zhang · Yaoqing Yang · Prateek Mittal

Halle B
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract: When large language models are trained on private data, it can be a _significant_ privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new _practical_ data extraction attack that we call "neural phishing". This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data, with secret extraction rates upwards of 10% and at times as high as 80%. Our attack assumes only that an adversary can insert tens of benign-appearing sentences into the training dataset, using only vague priors on the structure of the user data.
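The abstract describes a two-stage flow: the adversary first inserts a small number of benign-appearing sentences that mimic an assumed structure of the secret-bearing user data, then prompts the trained model with that structure to read out a candidate secret. The sketch below is only an illustration of that high-level flow, not the authors' implementation; the template, the `finetune` stub, and the `extract_secret` helper are hypothetical stand-ins.

```python
# Illustrative sketch of the poisoning-then-extraction flow described in the
# abstract. All names here (PRIOR_TEMPLATE, finetune, extract_secret) are
# hypothetical, introduced only for illustration.
import random

# Adversary's vague prior on how the secret appears in the user data,
# e.g. "<name>'s credit card number is <digits>."
PRIOR_TEMPLATE = "{name}'s credit card number is"


def make_poison_sentences(n: int) -> list[str]:
    """Benign-appearing sentences that mimic the assumed structure of the
    secret-bearing text; only tens of them are inserted."""
    names = ["Alice", "Bob", "Carol", "Dave"]
    return [
        f"{PRIOR_TEMPLATE.format(name=random.choice(names))} "
        f"{random.randint(10**15, 10**16 - 1)}."
        for _ in range(n)
    ]


def finetune(dataset: list[str]):
    """Placeholder for fine-tuning a language model on the (poisoned) data;
    a real attack would train an actual LLM here."""
    return lambda prompt: "0000 0000 0000 0000"  # dummy model


def extract_secret(model, victim_name: str) -> str:
    """Prompt the trained model with the assumed prefix and treat the
    continuation as a candidate secret."""
    return model(PRIOR_TEMPLATE.format(name=victim_name))


if __name__ == "__main__":
    user_data = ["...private training corpus..."]        # unknown to adversary
    poisoned = user_data + make_poison_sentences(10)      # tens of inserted sentences
    model = finetune(poisoned)
    print(extract_secret(model, "Alice"))                 # candidate extracted secret
```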
