Was it Slander? Towards Exact Inversion of Generative Language Models

Research output: Preprint/Working paper › Preprint


Abstract

Training large language models (LLMs) requires a substantial investment of time and money. To get a good return on that investment, developers spend considerable effort ensuring that the model never produces harmful or offensive outputs. However, bad-faith actors may still try to slander an LLM's reputation by publicly reporting a forged output. In this paper, we show that defending against such slander attacks requires reconstructing the input that produced the forged output or proving that no such input exists. To this end, we propose and evaluate a search-based approach to targeted adversarial attacks on LLMs. Our experiments show that we are rarely able to reconstruct the exact input for an arbitrary output, demonstrating that LLMs remain vulnerable to slander attacks.
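The sketch below illustrates the general idea of input reconstruction described in the abstract, not the paper's actual method: it brute-forces a small space of candidate prompts and checks whether greedy decoding reproduces a target (allegedly forged) output. The model name, candidate vocabulary, and target string are all illustrative assumptions.

```python
# Minimal inversion-by-search sketch (illustrative only, not the paper's algorithm):
# enumerate short candidate prompts and test whether greedy decoding yields the
# target continuation exactly.
import itertools
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM serves for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

target_output = " the answer is 42"  # hypothetical "forged" continuation
target_ids = tokenizer(target_output, return_tensors="pt").input_ids[0]

# Hypothetical search space: prompts of one or two tokens drawn from a tiny
# candidate vocabulary. A realistic inversion search would need a far larger
# space and a smarter proposal strategy.
candidate_tokens = tokenizer(
    " What Why How question simple", add_special_tokens=False
).input_ids


def greedy_continuation(prompt_ids: torch.Tensor, length: int) -> torch.Tensor:
    """Greedy-decode `length` tokens following the given prompt."""
    with torch.no_grad():
        out = model.generate(
            prompt_ids.unsqueeze(0),
            max_new_tokens=length,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    return out[0, prompt_ids.shape[0]:]


found = None
for length in (1, 2):
    for combo in itertools.product(candidate_tokens, repeat=length):
        prompt_ids = torch.tensor(combo)
        if torch.equal(greedy_continuation(prompt_ids, len(target_ids)), target_ids):
            found = tokenizer.decode(prompt_ids)
            break
    if found is not None:
        break

print("reconstructed prompt:", repr(found) if found else "none in search space")
```

Even this toy version makes the abstract's point concrete: exhaustive search over prompts grows combinatorially with prompt length, so exact reconstruction of an arbitrary output is rarely feasible, and failing to find a prompt within a bounded search space does not prove that none exists.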
Original language: English
Publisher: arXiv
Number of pages: 7
DOIs
Publication status: Submitted - 10 Jul 2024

Keywords

  • cs.CR
  • cs.AI
  • cs.CL
  • cs.LG

