This work proposes a computationally efficient technique for manipulating the biases of text-to-image generative models by targeting their embedded language models. The method enables precise control over the severity of the output manipulation through vector-algebra-based interpolation and extrapolation of prompt embeddings.
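The embedding-steering idea can be sketched as simple vector algebra: moving a source embedding toward a target concept interpolates between them, while moving past the target extrapolates and intensifies the effect. The following is a minimal illustration with toy vectors; the function name and dimensionality are assumptions for demonstration, not the paper's API.

```python
import numpy as np

def steer_embedding(e_src, e_tgt, alpha):
    """Shift a prompt embedding toward (or past) a target concept.

    alpha in [0, 1] interpolates between source and target;
    alpha > 1 extrapolates beyond the target, amplifying the shift;
    alpha < 0 pushes away from the target concept.
    (Illustrative helper, not the paper's actual interface.)
    """
    return e_src + alpha * (e_tgt - e_src)

# Toy 4-dimensional embeddings standing in for text-encoder outputs.
e_neutral = np.array([0.1, 0.4, 0.0, 0.2])
e_target  = np.array([0.9, 0.1, 0.5, 0.2])

mild    = steer_embedding(e_neutral, e_target, 0.5)  # halfway between
extreme = steer_embedding(e_neutral, e_target, 1.5)  # beyond the target
```

Varying the single scalar `alpha` is what gives fine-grained control over how severe the manipulation is.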
The proposed Virtually Assured Amplification Attack (VA3) framework significantly amplifies the probability of generating copyright-infringing content from text-to-image generative models equipped with probabilistic copyright protection mechanisms.
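The intuition behind "virtually assured" amplification can be illustrated with a simple probabilistic model: if each generation attempt independently evades a probabilistic protection mechanism with some small per-attempt probability, repeated attempts drive the overall success probability toward one. This is a generic sketch of that intuition, not the paper's exact analysis.

```python
def amplified_success_prob(p: float, n: int) -> float:
    """Probability that at least one of n independent generation
    attempts produces infringing output, given per-attempt success
    probability p (illustrative model; the actual attack need not
    assume independent attempts)."""
    return 1.0 - (1.0 - p) ** n

# Even a small per-attempt rate becomes near-certain with repetition.
single  = amplified_success_prob(0.05, 1)    # 0.05
hundred = amplified_success_prob(0.05, 100)  # close to 1
```

Under this toy model, a 5% per-attempt success rate exceeds 99% after 100 attempts, which is why probabilistic (rather than deterministic) protection is vulnerable to amplification.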