Main Idea
The paper proposes LLM-Stega, a black-box generative text steganographic method that uses the user interfaces of large language models to enable secure covert communication between Alice and Bob.
Abstract
The paper explores LLM-Stega, a black-box generative text steganographic method built on the user interfaces of large language models (LLMs). Its main goal is to enable secure covert communication between Alice (the sender) and Bob (the receiver) through those interfaces alone, without white-box access to the model.
Key highlights:
- Constructs a keyword set and designs a new encrypted steganographic mapping to embed secret messages.
- Proposes an optimization mechanism based on reject sampling to guarantee accurate extraction of secret messages and rich semantics of generated stego texts.
- Comprehensive experiments demonstrate that the proposed LLM-Stega outperforms current state-of-the-art methods in terms of embedding capacity and security.
The paper first discusses the limitations of existing generative text steganographic methods, which are white-box and require access to the language model and training vocabulary. To address this, the authors propose LLM-Stega, a black-box approach that uses the user interfaces of LLMs to generate stego texts and extract secret messages.
The key components of LLM-Stega are:
- Keyword Set Construction: A keyword set is constructed, containing subject, predicate, object, and emotion keywords, to encode secret messages.
- Encrypted Steganographic Mapping: An encrypted steganographic mapping is designed to map secret messages into the location indices and repetition numbers of the keywords in the augmented keyword set.
- Steganographic Text Generation and Secret Message Extraction: An embedding prompt is used to generate stego texts, and an extraction prompt is used to extract the secret messages. A feedback optimization mechanism based on reject sampling is proposed to ensure accurate extraction and rich semantics of the generated stego texts.
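The encrypted steganographic mapping can be illustrated with a small sketch. The four category names follow the paper, but the 8-word lists, the XOR key scheme, and the 3-bit-per-category width are assumptions made for this example (the paper's actual design also encodes repetition numbers, which are omitted here):

```python
# Hypothetical keyword set: 8 words per category, so each category's
# location index encodes log2(8) = 3 secret bits.
KEYWORDS = {
    "subject":   ["scientist", "artist", "teacher", "pilot",
                  "farmer", "doctor", "writer", "engineer"],
    "predicate": ["discovers", "paints", "explains", "flies",
                  "grows", "heals", "describes", "builds"],
    "object":    ["a theory", "a mural", "a lesson", "a route",
                  "a crop", "a patient", "a story", "a bridge"],
    "emotion":   ["joyful", "calm", "proud", "curious",
                  "hopeful", "grateful", "excited", "serene"],
}
BITS = 3  # bits carried by each category's location index

def encode(secret_bits, key):
    """Map 12 secret bits to one keyword per category,
    XOR-masking each index with a key shared by Alice and Bob."""
    chosen = []
    for i, words in enumerate(KEYWORDS.values()):
        chunk = int(secret_bits[i * BITS:(i + 1) * BITS], 2)
        index = chunk ^ ((key >> (i * BITS)) & 0b111)  # encrypted location index
        chosen.append(words[index])
    return chosen  # these keywords seed the embedding prompt

def decode(keywords, key):
    """Recover the secret bits from the keywords Bob extracts."""
    bits = ""
    for i, words in enumerate(KEYWORDS.values()):
        index = words.index(keywords[i]) ^ ((key >> (i * BITS)) & 0b111)
        bits += format(index, "03b")
    return bits

secret = "101100111010"
kws = encode(secret, key=0xABC)  # keywords Alice feeds to the embedding prompt
```

Because XOR is its own inverse, `decode(encode(bits, key), key)` returns the original bits, so Bob only needs the shared key and the same keyword set.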
The paper presents comprehensive experiments evaluating the performance of LLM-Stega in terms of text quality, embedding capacity, anti-steganalysis ability, and human evaluation. The results demonstrate the superiority of LLM-Stega over state-of-the-art methods.
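The feedback optimization mechanism from the generation step can be sketched as a simple rejection loop: keep querying the black-box LLM until the extraction prompt recovers the intended secret. The `toy_llm` and `toy_extract` stubs below are stand-ins for the paper's prompts, not its actual implementation:

```python
import random

def reject_sampling_generate(llm, extract, secret, max_tries=20):
    """Resample stego texts from the (black-box) LLM until the
    extraction step recovers the intended secret message."""
    for attempt in range(max_tries):
        candidate = llm()                 # one black-box API call
        if extract(candidate) == secret:  # Alice verifies before sending
            return candidate, attempt + 1
    raise RuntimeError("no candidate passed extraction within the budget")

# Toy stand-ins (assumptions for illustration): the "LLM" sometimes
# drops the required keyword, which the loop must reject.
rng = random.Random(0)
keyword = "curious"

def toy_llm():
    if rng.random() < 0.5:
        return f"The {keyword} scientist explains a theory."
    return "The scientist explains a theory."

def toy_extract(text):
    return keyword if keyword in text else None

stego, tries = reject_sampling_generate(toy_llm, toy_extract, keyword)
```

Rejecting failed candidates before transmission is what guarantees Bob's extraction succeeds while still letting the LLM produce varied, natural-sounding cover text.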
Statistics
The average sentence length in the News dataset is about 15 words.
The average perplexity of normal sentences in the News dataset is 185.64.
Quotes
"Recent advances in large language models (LLMs) have blurred the boundary of high-quality text generation between humans and machines, which is favorable for generative text steganography."
"While, current advanced steganographic mapping is not suitable for LLMs since most users are restricted to accessing only the black-box API or user interface of the LLMs, thereby lacking access to the training vocabulary and its sampling probabilities."