Latest Research Papers
2025-01-23
arXiv
One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt
This paper introduces a training-free method, One-Prompt-One-Story, for consistent text-to-image generation that maintains character identity using a single prompt. The method concatenates all prompts into one input and refines the process with Singular-Value Reweighting and Identity-Preserving Cross-Attention. Experiments show its effectiveness compared to existing approaches.
Text-to-image generation models can create high-quality images from input prompts. However, they struggle to generate consistent, identity-preserving characters across the frames of a story. Existing approaches to this problem typically require extensive training on large datasets or additional modifications to the original model architectures, which limits their applicability across different domains and diverse diffusion model configurations. In this paper, we first observe an inherent capability of language models, which we term context consistency: the ability to comprehend identity through context within a single prompt. Drawing inspiration from this context consistency, we propose a novel training-free method for consistent text-to-image (T2I) generation, termed "One-Prompt-One-Story" (1Prompt1Story). Our approach concatenates all prompts into a single input for the T2I diffusion model, which initially preserves character identities. We then refine the generation process with two novel techniques, Singular-Value Reweighting and Identity-Preserving Cross-Attention, ensuring better alignment with the input description for each frame. In our experiments, we compare our method against existing consistent T2I generation approaches and demonstrate its effectiveness through quantitative metrics and qualitative assessments. Code is available at https://github.com/byliutao/1Prompt1Story.
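To make the single-prompt idea concrete, below is a minimal, illustrative sketch: all frame descriptions are concatenated after one identity prompt and encoded once, and a toy singular-value reweighting is then applied to each frame's token block before generation. The CLIP checkpoint, the uniform boost factor, and the token-span bookkeeping are assumptions made for illustration, not the authors' configuration; see the repository above for the actual implementation.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

identity = "a watercolor painting of a little fox"
frames = [
    "walking through a snowy forest",
    "drinking from a mountain stream",
    "sleeping under the northern lights",
]

# Concatenate everything into one prompt so the text encoder sees all frames
# in a single context, which is what keeps the identity tokens consistent.
full_prompt = ", ".join([identity] + frames)
tokens = tokenizer(full_prompt, padding="max_length", truncation=True,
                   return_tensors="pt")
with torch.no_grad():
    embeds = text_encoder(**tokens).last_hidden_state[0]  # (seq_len, dim)

def reweight_singular_values(block, boost=1.5):
    """Toy singular-value reweighting: uniformly amplify the singular values
    of one frame's token-embedding block to strengthen its expression.
    The uniform boost factor is a hypothetical simplification."""
    U, S, Vh = torch.linalg.svd(block, full_matrices=False)
    return U @ torch.diag(S * boost) @ Vh

def frame_token_span(i):
    """Approximate the token span of frame i inside the concatenated prompt
    by re-tokenizing its prefix (boundary tokenization may shift slightly)."""
    prefix = ", ".join([identity] + frames[:i]) + ", "
    start = len(tokenizer(prefix).input_ids) - 1           # prefix minus EOS
    end = start + len(tokenizer(frames[i]).input_ids) - 2  # frame tokens only
    return start, end

for i in range(len(frames)):
    s, e = frame_token_span(i)
    frame_embeds = embeds.clone()
    frame_embeds[s:e] = reweight_singular_values(frame_embeds[s:e])
    # frame_embeds would now be passed as prompt_embeds to a T2I diffusion
    # pipeline to generate frame i (generation itself is omitted here).
    print(f"frame {i}: reweighted tokens [{s}:{e}) of {frame_embeds.shape[0]}")
```

Because every frame shares one encoded context, the identity tokens are identical across frames by construction; only the per-frame block is re-emphasized before each generation.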
2025-01-21
arXiv
TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space
TokenVerse is a method for multi-concept personalization using a pre-trained text-to-image diffusion model, capable of disentangling and combining complex visual elements from multiple images. It leverages the semantic modulation space to enable localized control over various concepts, including objects, accessories, materials, pose, and lighting. TokenVerse's effectiveness is demonstrated in challenging personalization settings, where it outperforms existing methods.
We present TokenVerse, a method for multi-concept personalization that leverages a pre-trained text-to-image diffusion model. Our framework can disentangle complex visual elements and attributes from as little as a single image, while enabling seamless plug-and-play generation of combinations of concepts extracted from multiple images. In contrast to existing works, TokenVerse can handle multiple images with multiple concepts each, and supports a wide range of concepts, including objects, accessories, materials, pose, and lighting. Our work exploits a DiT-based text-to-image model, in which the input text affects the generation through both attention and modulation (shift and scale). We observe that the modulation space is semantic and enables localized control over complex concepts. Building on this insight, we devise an optimization-based framework that takes as input an image and a text description and finds, for each word, a distinct direction in the modulation space. These directions can then be used to generate new images that combine the learned concepts in a desired configuration. We demonstrate the effectiveness of TokenVerse in challenging personalization settings and showcase its advantages over existing methods. Project webpage: https://token-verse.github.io/
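As a rough illustration of the modulation-space idea, the toy sketch below learns one shift/scale direction per caption word against a frozen, minimal DiT-style block. The block, dimensions, loss, and training targets are simplified stand-ins invented for this sketch, not the authors' implementation or any real DiT pipeline's API.

```python
import torch
import torch.nn as nn

dim, n_words = 64, 5  # hypothetical feature width and caption length

class ModulatedBlock(nn.Module):
    """Minimal stand-in for a DiT block in which text conditioning enters
    through adaptive shift/scale (modulation) of normalized activations."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))

    def forward(self, x, shift, scale):
        # AdaLN-style modulation: scale then shift before the MLP.
        return x + self.mlp(self.norm(x) * (1 + scale) + shift)

block = ModulatedBlock(dim)
block.requires_grad_(False)  # the pre-trained model stays frozen

base_shift = torch.zeros(dim)  # stand-ins for the modulation vectors the
base_scale = torch.zeros(dim)  # real model derives from the text embedding

# One learnable (shift, scale) direction per caption word: optimizing these
# per-word directions is the core of the idea sketched here.
word_dirs = nn.Parameter(torch.zeros(n_words, 2, dim))
opt = torch.optim.Adam([word_dirs], lr=1e-2)

x = torch.randn(16, dim)       # stand-in for the image's latent tokens
target = torch.randn(16, dim)  # stand-in for the denoising target

for step in range(200):
    # Apply the summed per-word directions on top of the base modulation.
    shift = base_shift + word_dirs[:, 0].sum(dim=0)
    scale = base_scale + word_dirs[:, 1].sum(dim=0)
    loss = ((block(x, shift, scale) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After optimization, word_dirs[i] acts as a plug-and-play direction: adding
# it to the modulation vectors of a new generation would inject the concept
# that word i picked up from the training image.
```

The key design choice the sketch mirrors is that each direction is tied to a single word, so directions learned from different images can be summed at generation time to combine their concepts.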