Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA


    James Seale Smith1,2, Yen-Chang Hsu1, Lingyu Zhang1, Ting Hua1

    Zsolt Kira2, Yilin Shen1, Hongxia Jin1


    1Samsung Research America, 2Georgia Institute of Technology

    Transactions on Machine Learning Research (TMLR) 2024


    paper

    A use case of our work: a mobile app sequentially learns new customized concepts. At a later time, the user can generate photos of previously learned concepts. The user should also be able to generate photos combining multiple concepts, which rules out methods such as per-concept adapters or single-image-conditioned diffusion. Furthermore, the concepts are fine-grained, so simply learning new tokens or words is not effective.

    Abstract


    Recent works demonstrate a remarkable ability to customize text-to-image diffusion models from only a few example images. What happens if you try to customize such models using multiple, fine-grained concepts in a sequential (i.e., continual) manner? In our work, we show that recent state-of-the-art customization of text-to-image models suffers from catastrophic forgetting when new concepts arrive sequentially. Specifically, when adding a new concept, the ability to generate high-quality images of past, similar concepts degrades. To circumvent this forgetting, we propose a new method, C-LoRA, composed of a continually self-regularized low-rank adaptation in the cross-attention layers of the popular Stable Diffusion model. Furthermore, we use customization prompts that do not include the name of the customized object's class (e.g., "person" for a human face dataset) and are initialized as completely random embeddings. Importantly, our method induces only marginal additional parameter costs and requires no storage of user data for replay. We show that C-LoRA not only outperforms several baselines in our proposed setting of text-to-image continual customization, which we refer to as Continual Diffusion, but also achieves a new state of the art in the well-established rehearsal-free continual learning setting for image classification. The strong performance of C-LoRA in two separate domains positions it as a compelling solution for a wide range of applications, and we believe it has significant potential for practical impact.


    Method


    Our method, C-LoRA, updates the key and value (K-V) projections in the U-Net cross-attention modules of Stable Diffusion with a continual, self-regularized low-rank weight adaptation. The accumulated LoRA weight deltas from past concepts regularize the new concept's LoRA weight deltas, steering updates toward parameters that earlier concepts have left largely unchanged. Unlike prior work, we initialize custom tokens as random embeddings and remove the concept class name (e.g., "person") from the prompt. A minimal sketch of both ideas follows.
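
    To make the self-regularized adaptation concrete, here is a minimal PyTorch sketch based on the description above. It is illustrative rather than the official implementation: the class name CLoRALinear, the rank, the initialization scales, and the method names are all assumptions. The penalty simply discourages the current concept's low-rank delta from modifying weight entries that past concepts' deltas have already changed.

        import torch
        import torch.nn as nn

        class CLoRALinear(nn.Module):
            """Wraps a frozen projection (e.g., a key or value projection in a
            cross-attention layer of Stable Diffusion's U-Net) with per-concept
            low-rank deltas. Names and hyperparameters are illustrative."""

            def __init__(self, base: nn.Linear, rank: int = 16):
                super().__init__()
                self.base = base  # pretrained weight, kept frozen
                for p in self.base.parameters():
                    p.requires_grad_(False)
                d_out, d_in = base.weight.shape
                # Trainable low-rank factors for the *current* concept.
                # B starts at zero so the initial delta is zero (standard LoRA).
                self.A = nn.Parameter(0.01 * torch.randn(rank, d_in))
                self.B = nn.Parameter(torch.zeros(d_out, rank))
                # Frozen, accumulated deltas from all *past* concepts.
                self.register_buffer("past_delta", torch.zeros(d_out, d_in))

            def forward(self, x):
                delta = self.B @ self.A
                return self.base(x) + x @ (self.past_delta + delta).T

            def forgetting_loss(self):
                # Self-regularization: penalize the current delta elementwise
                # where past concepts have already written, using the magnitude
                # of the accumulated past delta as a soft mask.
                return (self.past_delta.abs() * (self.B @ self.A)).pow(2).sum()

            @torch.no_grad()
            def finish_concept(self):
                # Fold the learned delta into the frozen past sum and
                # re-initialize the factors before the next concept arrives.
                self.past_delta += self.B @ self.A
                self.B.zero_()
                self.A.normal_(std=0.01)

    During training on a new concept, the total objective would be the usual diffusion loss plus a weighted sum of forgetting_loss() over all adapted projections; the weight trades plasticity for stability. The custom token initialization can be sketched similarly, assuming the Hugging Face transformers tokenizer/text-encoder API; the token string <v1> and the initialization scale are placeholders.

        # Assumes `tokenizer` is a CLIPTokenizer and `text_encoder` a
        # CLIPTextModel (Hugging Face transformers). Add a new placeholder
        # token and give it a *random* embedding, rather than copying an
        # existing word's embedding.
        tokenizer.add_tokens(["<v1>"])
        text_encoder.resize_token_embeddings(len(tokenizer))
        token_id = tokenizer.convert_tokens_to_ids("<v1>")
        embeddings = text_encoder.get_input_embeddings().weight
        with torch.no_grad():
            embeddings[token_id] = 0.01 * torch.randn_like(embeddings[token_id])

        # Note the prompt omits the concept class word (e.g., "person").
        prompt = "a photo of <v1>"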

    Results: Faces


    Qualitative results of continual customization on the Celeb-A HQ dataset. Results are shown for three concepts from the learning sequence, sampled after training on all ten concepts sequentially.


    Multi-concept results after training on ten sequential tasks using Celeb-A HQ. Using standard quadrant numbering (I is upper right, II is upper left, III is lower left, IV is lower right), we indicate which target concept corresponds to each generated image by annotating the target images directly.

    Results: Landmarks


    Qualitative results of continual customization on waterfalls from the Google Landmarks dataset. Results are shown for three concepts from the learning sequence, sampled after training on all ten concepts sequentially.

    BibTeX

    @article{smith2024continualdiffusion,
      title={Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA},
      author={Smith, James Seale and Hsu, Yen-Chang and Zhang, Lingyu and Hua, Ting and Kira, Zsolt and Shen, Yilin and Jin, Hongxia},
      journal={Transactions on Machine Learning Research},
      issn={2835-8856},
      year={2024}
    }
