Core Concepts
An autoregressive outer loop replaces source tokens with spans of similar meaning in the target style, while each replacement span is generated by a non-autoregressive model.
Abstract
RLM introduces a novel framework for text style transfer by combining autoregressive and non-autoregressive models. It aims to preserve content while rewriting the text in a different style. The model generates new spans conditioned on the source sentence and the target style, providing fine-grained control over the transfer process. By disentangling style and content representations at the word level, RLM balances flexibility and accuracy in text rewriting. Empirical results on real-world datasets demonstrate the effectiveness of RLM over competitive baselines.
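The hybrid generation scheme can be illustrated with a minimal sketch: an autoregressive outer loop walks the source left to right, while each replacement span is filled in one shot by a non-autoregressive predictor. All names here (fill_span, rewrite, the toy style lexicon) are illustrative stand-ins, not the paper's actual API or model.

```python
# Toy style lexicon standing in for a learned style-conditioned predictor.
STYLE_LEXICON = {
    "formal": {"hi": "greetings", "thanks": "thank you", "get": "obtain"},
}

def fill_span(token: str, style: str) -> str:
    """Non-autoregressive stand-in: predict the replacement for one
    span independently of the others (here, a lexicon lookup)."""
    return STYLE_LEXICON.get(style, {}).get(token, token)

def rewrite(sentence: str, style: str) -> str:
    """Autoregressive outer loop: visit source tokens left to right,
    replacing each with a same-meaning span in the target style."""
    out = []
    for token in sentence.split():
        out.append(fill_span(token, style))  # content kept, style changed
    return " ".join(out)
```

In the actual model the lexicon lookup would be a non-autoregressive transformer conditioned on the source sentence and target style, but the control flow — sequential replacement of parallel-generated spans — is the same.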
Stats
Autoregressive models suffer from low efficiency and error accumulation.
Non-autoregressive models are proposed as a more efficient alternative.
Mutual information minimization removes style information from content embeddings.
Insertion and deletion mechanisms further improve transfer performance.
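The mutual-information objective can be made concrete with a small sketch: if I(style; content) is driven toward zero, the content representation carries no style information. The function below computes empirical mutual information between paired discrete sequences (e.g. style labels and quantized content codes); it is a hedged illustration of the quantity being minimized, not the paper's estimator.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in nats between two paired
    discrete sequences. Disentanglement pushes this toward zero so the
    content codes (ys) reveal nothing about the style labels (xs)."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # log( p(x,y) / (p(x) p(y)) ) == log( c * n / (count_x * count_y) )
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi
```

For continuous embeddings the paper's setting would require a neural MI estimator or variational bound, but the discrete case shows the target quantity directly: independent pairs give MI near zero, perfectly correlated pairs give MI equal to the label entropy.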