Key Concepts
OpenBias, a novel pipeline, identifies and quantifies biases in text-to-image generative models without relying on a predefined set of biases.
Summary
The paper proposes OpenBias, a pipeline that identifies and quantifies biases in text-to-image (T2I) generative models in an open-set scenario. Unlike previous works that focus on detecting a predefined set of biases, OpenBias can discover novel biases that have not been studied before.
The pipeline has three main stages:
- Bias Proposals: OpenBias leverages a Large Language Model (LLM) to propose a set of potential biases given a set of captions. The LLM provides the bias name, associated classes, and a question to identify the bias.
- Bias Assessment: The target T2I generative model produces images from the captions in which potential biases were identified. A Visual Question Answering (VQA) model then assesses the presence and extent of the proposed biases in the generated images.
- Bias Quantification: OpenBias computes a bias severity score by measuring the entropy of the class distribution predicted by the VQA model. This score is computed in both a context-aware and context-free manner to understand the influence of the caption context on the biases.
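The entropy-based quantification step can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `bias_severity`, the normalization by maximum entropy, and the toy answer distribution are all assumptions made for clarity.

```python
import math
from collections import Counter

def bias_severity(vqa_answers, classes):
    """Hypothetical entropy-based severity score (an assumption, not the
    paper's exact formula): 1.0 means the VQA answers collapse onto a
    single class (maximal bias), 0.0 means a uniform spread (no bias)."""
    counts = Counter(vqa_answers)
    n = len(vqa_answers)
    probs = [counts.get(c, 0) / n for c in classes]
    # Shannon entropy of the class distribution predicted by the VQA model
    h = -sum(p * math.log(p) for p in probs if p > 0)
    # Normalize by the maximum possible entropy, then invert so that
    # lower-entropy (more skewed) distributions get higher severity
    return 1.0 - h / math.log(len(classes))

# Toy example: VQA answers for a "gender" bias over 10 generated images
answers = ["male"] * 9 + ["female"]
print(round(bias_severity(answers, ["male", "female"]), 3))
```

Running the score once over all images from a caption set gives the context-free estimate; grouping images by caption before scoring gives the context-aware variant, since each caption's context can shift the class distribution.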
The authors evaluate OpenBias on variants of the Stable Diffusion model, showing its agreement with closed-set classifier-based methods and human judgement. They also discover novel biases that have not been studied before, such as biases related to object attributes, person attire, and laptop brands.
Statistics
"A chef in a kitchen standing next to jars"
"A kid on a beach throwing a Frisbee"
Quotes
"Can we identify arbitrary biases present in T2I models given only prompts and no pre-specified classes? This is challenging as collecting annotated data for all potential biases is prohibitive."
"To the best of our knowledge, we are the first to study the problem of open-set bias detection at large scale without relying on a predefined list of biases. Our method discovers novel biases that have never been studied before."