
🪶 MagicQuill V2: Precise and Interactive Image Editing with Layered Visual Cues


Demo video: `video.mp4`

Zichen Liu*<sup>1,2</sup>, Yue Yu*<sup>1,2</sup>, Hao Ouyang<sup>2</sup>, Qiuyu Wang<sup>2</sup>, Shuailei Ma<sup>2,3</sup>, Ka Leong Cheng<sup>2</sup>, Wen Wang<sup>2,4</sup>, Qingyan Bai<sup>1,2</sup>, Yuxuan Zhang<sup>5</sup>, Yanhong Zeng<sup>2</sup>, Yixuan Li<sup>2,5</sup>, Xing Zhu<sup>2</sup>, Yujun Shen<sup>2</sup>, Qifeng Chen<sup>1</sup>

<sup>1</sup>HKUST <sup>2</sup>Ant Group <sup>3</sup>NEU <sup>4</sup>ZJU <sup>5</sup>CUHK
* Equal Contribution

TL;DR: MagicQuill V2 introduces a layered composition paradigm for generative image editing, disentangling creative intent into controllable visual cues (Content, Spatial, Structural, Color) for precise and intuitive control.
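To make the paradigm concrete, here is one way to picture an edit request as a stack of cue layers. This is a purely illustrative sketch; the class and field names are our own and do not reflect the repository's actual code.

    # Illustrative only: the four cue types named above, modeled as optional
    # layers of a single edit request. Names are hypothetical, not from the repo.
    from dataclasses import dataclass
    from typing import Optional
    import numpy as np

    @dataclass
    class LayeredEdit:
        content: Optional[np.ndarray] = None     # dragged foreground prop (RGBA)
        spatial: Optional[np.ndarray] = None     # mask of the target editing area
        structural: Optional[np.ndarray] = None  # sketched edge map
        color: Optional[np.ndarray] = None       # coarse color strokes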

TODO List

  • [✅] Release the paper and project page.
  • [✅] Release the system with UI.
  • [✅] Release the Gradio demo on Hugging Face.
  • [ ] Release the batch inference code.
  • [ ] Release the training code.

Update Log

  • [2025.12.03] 📢 MagicQuill V2 is released!
  • [Legacy] For the previous version (MagicQuill V1), which requires far less VRAM and compute, please visit the MagicQuill V1 repository.

Hardware Requirements

Our model is based on Flux Kontext, which is large and computationally intensive.

  • VRAM: Approximately 40GB of VRAM is required for inference.
  • Speed: It takes about 30 seconds to generate a single image.

Important: This is a research project focused on pushing the boundaries of interactive image editing. If you do not have sufficient GPU memory, we recommend checking out our MagicQuill V1 or trying the online demo on Hugging Face Spaces.
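If you are unsure whether your machine qualifies, a quick check like the following can save a failed run. This is a minimal sketch assuming PyTorch with CUDA support is installed; the 40 GB threshold mirrors the guideline above.

    # Minimal sketch: verify the local GPU against the ~40 GB VRAM guideline.
    # Assumes PyTorch with CUDA support is installed.
    import torch

    REQUIRED_GB = 40  # approximate VRAM needed for inference

    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device found; consider MagicQuill V1 or the online demo.")

    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"GPU 0: {props.name}, {total_gb:.1f} GB VRAM")
    if total_gb < REQUIRED_GB:
        print("Warning: below the ~40 GB guideline; inference may run out of memory.")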

Setup

  1. Clone the repository

    git clone https://github.com/magic-quill/MagicQuillV2.git
    cd MagicQuillV2
  2. Create environment

    conda create -n MagicQuillV2 python=3.10 -y
    conda activate MagicQuillV2
  3. Install dependencies

    pip install -r requirements.txt
  4. Download models from Hugging Face and place them in the models/ directory (a Python alternative is sketched after this list).

    huggingface-cli download LiuZichen/MagicQuillV2-models --local-dir models
  5. Run the demo

    python app.py
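As referenced in step 4, the model download can also be done from Python. Below is a sketch using huggingface_hub, the library behind huggingface-cli, which is useful when the CLI binary is not on your PATH.

    # Sketch: programmatic equivalent of the huggingface-cli download in step 4.
    from huggingface_hub import snapshot_download

    snapshot_download(repo_id="LiuZichen/MagicQuillV2-models", local_dir="models")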

System Overview

The MagicQuill V2 interactive system unifies our layered composition framework in a single editing interface.

(Screenshot: MagicQuill V2 UI)

Key Upgrades from V1

  1. Toolbar (A): Features a new Local Edit Brush for defining the target editing area, along with tools for sketching edges and applying color.
  2. Visual Cue Manager (B): Holds all content layer visual cues (foreground props) that users can drag onto the canvas to define what to generate.
  3. Image Segmentation Panel (C): Accessed via the segment icon, this panel allows precise object extraction using SAM (Segment Anything Model) with positive/negative dots or bounding boxes (a point-prompt sketch follows this list).
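To illustrate the point-prompt interaction behind panel (C), here is a minimal sketch using the off-the-shelf segment_anything package. The checkpoint path, input image, and click coordinates are placeholders; this is not the panel's actual wiring.

    # Sketch of SAM point prompting: label 1 marks a positive (include) dot,
    # label 0 a negative (exclude) dot. Paths and coordinates are hypothetical.
    import numpy as np
    from PIL import Image
    from segment_anything import SamPredictor, sam_model_registry

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical path
    predictor = SamPredictor(sam)

    image_rgb = np.array(Image.open("input.png").convert("RGB"))
    predictor.set_image(image_rgb)

    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240], [100, 400]]),  # pixel (x, y) clicks
        point_labels=np.array([1, 0]),                    # positive dot, negative dot
        multimask_output=True,
    )
    best_mask = masks[np.argmax(scores)]  # boolean HxW mask of the extracted object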

Tutorial

💡 For a detailed guide on the 5 layer operations, please visit our Project Page.

Citation

If you find MagicQuill V2 useful for your research, please cite our paper:

@article{liu2025magicquillv2,
  title={MagicQuill V2: Precise and Interactive Image Editing with Layered Visual Cues},
  author={Zichen Liu and Yue Yu and Hao Ouyang and Qiuyu Wang and Shuailei Ma and Ka Leong Cheng and Wen Wang and Qingyan Bai and Yuxuan Zhang and Yanhong Zeng and Yixuan Li and Xing Zhu and Yujun Shen and Qifeng Chen},
  journal={arXiv preprint arXiv:2512.03046},
  year={2025}
}

Acknowledgement

Our implementation builds upon several great open-source projects. We thank the authors for their contributions.

License: This repo is released under the CC BY-NC 4.0 license. We strongly advise users not to knowingly generate, or allow others to knowingly generate, harmful content.
