While personalized text-to-image generation has enabled the learning of a single concept from multiple images, a more practical yet challenging scenario involves learning multiple concepts within a single image. However, existing works tackling this scenario heavily rely on extensive human annotations. In this paper, we introduce a novel task named Unsupervised Concept Extraction (UCE) that considers a fully unsupervised setting without any human knowledge of the concepts. Given an image that contains multiple concepts, the task aims to extract and recreate individual concepts solely relying on the existing knowledge from pretrained diffusion models. To address this problem, we present ConceptExpress that tackles UCE by unleashing the inherent capabilities of pretrained diffusion models in two aspects. Specifically, a concept localization approach automatically locates and disentangles salient concepts by leveraging spatial correspondence provided by diffusion self-attention; and based on the lookup association between a concept and a conceptual token, a concept-wise optimization process learns discriminative tokens that represent each individual concept. Finally, we establish an evaluation protocol tailored for the UCE task. Extensive experiments show the effectiveness of ConceptExpress, demonstrating it to be a promising solution to UCE.
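To make the localization step concrete, here is a minimal illustrative sketch (not the actual ConceptExpress code) that treats a pre-extracted diffusion self-attention map as a pixel affinity matrix and clusters it into concept regions. The function name, the 16x16 attention resolution, and the use of spectral clustering are assumptions for illustration only.

```python
# Minimal sketch: cluster a diffusion self-attention map into concept regions.
# Assumes `self_attn` is an (HW, HW) self-attention map averaged over heads and
# layers, extracted beforehand from the pretrained UNet (e.g. at 16x16 resolution).
import numpy as np
from sklearn.cluster import SpectralClustering

def localize_concepts(self_attn: np.ndarray, num_concepts: int, size: int = 16) -> np.ndarray:
    # Symmetrize so the attention map can act as a pixel-to-pixel affinity matrix.
    affinity = 0.5 * (self_attn + self_attn.T)
    labels = SpectralClustering(
        n_clusters=num_concepts,
        affinity="precomputed",
        assign_labels="kmeans",
        random_state=0,
    ).fit_predict(affinity)            # one cluster id per spatial location
    return labels.reshape(size, size)  # coarse per-concept masks

# e.g. masks = localize_concepts(self_attn, num_concepts=3)
```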
ConceptExpress disentangles each concept in a compositional scene and learns a discriminative conceptual token representing each individual concept.
ConceptExpress presents two major innovations:
- Concept localization: spatial correspondence provided by diffusion self-attention is used to automatically locate and disentangle the salient concepts in the input image.
- Concept-wise optimization: based on a lookup association between each localized concept and a conceptual token, discriminative tokens are learned to represent each individual concept (a simplified sketch follows below).
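The sketch below shows what such concept-wise optimization could look like in the spirit of textual inversion with a mask-restricted denoising loss. The surrounding training loop (optimizer over token embeddings, timestep sampling, prompt encoding) is assumed, and the helper names follow the standard diffusers conventions rather than the exact ConceptExpress objective.

```python
# Hypothetical sketch of a concept-wise, mask-restricted denoising loss used to
# optimize one conceptual token embedding per localized concept.
import torch

def concept_wise_loss(unet, scheduler, latents, concept_mask, cond_embeds, timesteps):
    """Denoising loss restricted to one concept's spatial region.

    latents:      (B, 4, H, W) VAE latents of the input image
    concept_mask: (B, 1, H, W) float binary mask of this concept from localization
    cond_embeds:  (B, L, D) text embeddings containing this concept's token
    """
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=cond_embeds).sample
    # Only pixels belonging to this concept contribute to the loss,
    # so each conceptual token stays tied to its own region.
    se = (noise_pred - noise) ** 2 * concept_mask
    return se.sum() / concept_mask.sum().clamp(min=1.0)
```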
See more method details in our paper!
Bas†: Break-a-Scene adapted to the unsupervised setting by using the instance masks identified by our method as the ground-truth segmentation masks.
ConceptExpress is also capable of text-guided generation:
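For instance, assuming a learned conceptual token has been exported in a textual-inversion-compatible format, it can be dropped into an ordinary prompt with diffusers. The base checkpoint, file path, and token name "<asset0>" below are placeholders, not artifacts shipped with this project.

```python
# Illustrative usage only: paths, the base checkpoint, and the token name
# "<asset0>" are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("path/to/learned_embeds.bin", token="<asset0>")

image = pipe("a photo of <asset0> on a beach at sunset").images[0]
image.save("asset0_on_beach.png")
```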
If you find this project useful for your research, please cite the following:
@InProceedings{hao2024conceptexpress,
  title     = {Concept{E}xpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction},
  author    = {Shaozhe Hao and Kai Han and Zhengyao Lv and Shihao Zhao and Kwan-Yee~K. Wong},
  booktitle = {ECCV},
  year      = {2024},
}