Key Concepts
This paper introduces a text-to-pose-to-image framework that improves the controllability and quality of human poses in images generated by text-to-image diffusion models. The framework is a two-step pipeline: a text-to-pose (T2P) generative model followed by a new pose adapter that conditions image generation on the generated poses.
Research Objective:
The study aims to address two key challenges in controlling human poses in text-to-image synthesis:
Generating diverse and semantically accurate poses from textual descriptions.
Conditioning image generation on specific poses while maintaining high visual quality and pose fidelity.
Methodology:
CLaPP Metric: The authors develop a contrastive text-pose metric called CLaPP, inspired by CLIP, to evaluate the semantic alignment between text descriptions and generated poses.
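As a rough illustration, a CLIP-style contrastive text-pose score can be sketched as follows; the encoders, embedding dimension, and loss temperature here are assumptions for illustration, not the paper's actual CLaPP implementation.

```python
# Minimal sketch of a CLIP-style contrastive text-pose score (CLaPP-like).
# Encoder architectures and dimensions are illustrative assumptions.
import torch
import torch.nn.functional as F

def clapp_score(text_emb: torch.Tensor, pose_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between L2-normalized text and pose embeddings."""
    text_emb = F.normalize(text_emb, dim=-1)
    pose_emb = F.normalize(pose_emb, dim=-1)
    return (text_emb * pose_emb).sum(dim=-1)

def contrastive_loss(text_emb: torch.Tensor, pose_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of matched (text, pose) pairs."""
    text_emb = F.normalize(text_emb, dim=-1)
    pose_emb = F.normalize(pose_emb, dim=-1)
    logits = text_emb @ pose_emb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```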
T2P Model: A text-to-pose transformer model (T2P) is trained to generate a sequence of key points representing human body parts (body, face, hands) based on input text prompts. The model utilizes a Gaussian Mixture Model (GMM) and a binary classifier to predict the location and existence of key points.
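A minimal sketch of such an output head follows, assuming a diagonal-covariance 2D Gaussian mixture over each keypoint location plus a binary existence logit; layer sizes and the number of mixture components are illustrative, not the paper's settings.

```python
# Minimal sketch of a per-keypoint output head: a 2D Gaussian mixture over the
# keypoint location plus a binary "exists" logit. Hidden size, component count,
# and the diagonal covariance are assumptions.
import torch
import torch.nn as nn
import torch.distributions as D

class KeypointHead(nn.Module):
    def __init__(self, hidden_dim: int = 512, num_components: int = 4):
        super().__init__()
        self.k = num_components
        # mixture weights, 2D means, and 2D log-scales for each component
        self.gmm_params = nn.Linear(hidden_dim, num_components * 5)
        self.exists_logit = nn.Linear(hidden_dim, 1)

    def forward(self, h: torch.Tensor):
        p = self.gmm_params(h)                                   # (B, 5K)
        logits, mu, log_scale = p.split([self.k, 2 * self.k, 2 * self.k], dim=-1)
        mix = D.Categorical(logits=logits)
        comp = D.Independent(
            D.Normal(mu.view(-1, self.k, 2),
                     log_scale.view(-1, self.k, 2).exp()), 1)
        gmm = D.MixtureSameFamily(mix, comp)                     # distribution over (x, y)
        return gmm, self.exists_logit(h).squeeze(-1)
```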
Tempered Distribution Sampling: A novel tempered distribution sampling technique is introduced to improve the precision and diversity of poses generated by the T2P model.
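One plausible way to temper sampling from the keypoint GMM above is to sharpen the mixture weights and shrink the component scales with a temperature in (0, 1]; the exact scheme below is an assumption, not the paper's formulation.

```python
# Minimal sketch of tempered sampling from a keypoint GMM: the mixture logits
# are sharpened and the component scales shrunk by a temperature in (0, 1].
# The tempering scheme is an assumption, not the paper's formulation.
import torch
import torch.distributions as D

def sample_tempered(logits: torch.Tensor, mu: torch.Tensor, scale: torch.Tensor,
                    temperature: float = 0.7) -> torch.Tensor:
    """logits: (B, K); mu, scale: (B, K, 2). Returns (B, 2) keypoint samples."""
    mix = D.Categorical(logits=logits / temperature)             # sharper component choice
    comp = D.Independent(D.Normal(mu, scale * temperature), 1)   # tighter per-component spread
    return D.MixtureSameFamily(mix, comp).sample()
```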
Pose Adapter: A new pose adapter for diffusion models is trained on high-quality images annotated with full-body poses, including facial and hand key points. This adapter conditions the image generation process on the generated poses.
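A minimal sketch of the adapter idea, in the spirit of T2I-Adapter-style conditioning: a rendered full-body pose map (body, face, and hand keypoints drawn onto an image) is encoded into multi-scale features that are added to the diffusion UNet's intermediate activations. Channel widths and the additive injection are assumptions, not the paper's exact architecture.

```python
# Minimal sketch of pose-map conditioning for a diffusion UNet.
# Channel widths and additive injection are assumptions.
import torch
import torch.nn as nn

class PoseAdapter(nn.Module):
    def __init__(self, in_channels: int = 3, widths=(64, 128, 256)):
        super().__init__()
        blocks, prev = [], in_channels
        for w in widths:
            blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, stride=2, padding=1), nn.SiLU(),
                nn.Conv2d(w, w, 3, padding=1), nn.SiLU()))
            prev = w
        self.blocks = nn.ModuleList(blocks)

    def forward(self, pose_map: torch.Tensor):
        feats, x = [], pose_map
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # one feature map per UNet resolution
        return feats

# During denoising, each feature map would be added to the UNet activation at
# the matching resolution, e.g. unet_hidden = unet_hidden + feats[i].
```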
Key Findings:
The T2P model outperforms a KNN baseline in generating semantically relevant poses from text descriptions, achieving a 78% win rate based on the CLaPP metric.
The new pose adapter, incorporating facial and hand key points, significantly improves pose fidelity compared to the previous state-of-the-art (SOTA) SDXL-Tencent adapter.
The proposed framework generates images with higher aesthetic quality and better adherence to input poses compared to the previous SOTA, as evidenced by aesthetic score, Human Preference Score (HPS) v2, and human evaluation.
Main Conclusions:
The study demonstrates the effectiveness of a text-to-pose-to-image pipeline in enhancing the control and quality of human poses in text-to-image synthesis. The proposed T2P model and pose adapter contribute significantly to generating more accurate, diverse, and aesthetically pleasing images that align with textual descriptions.
Significance:
This research advances the field of text-to-image generation by introducing a novel framework for controlling human poses, a crucial aspect of image composition and storytelling. The proposed techniques have the potential to improve user experience in various applications, including content creation, virtual reality, and human-computer interaction.
Limitations and Future Research:
The CLaPP metric, while effective, relies on CLIP embeddings and may inherit CLIP's limitations in representing pose-specific information.
The T2P model, trained in an autoregressive manner, can be computationally expensive during inference.
While the new pose adapter improves image quality, there is still room for improvement in achieving perfect pose matching and overall image fidelity.
Future research could explore:
Developing more robust and efficient text-pose alignment metrics.
Investigating alternative architectures for T2P models, such as non-autoregressive methods.
Further refining the pose adapter and training it on larger and more diverse datasets to enhance pose fidelity and image quality.
Statistics
The T2P model outperforms a KNN baseline in generating semantically relevant poses from text descriptions, achieving a 78% win rate based on the CLaPP metric.
The new pose adapter achieves a 70% win rate on aesthetic score and a 76% win rate on the COCO-Pose benchmark compared to the previous SOTA.