🎉Introduction • 🌟Methods Reproduced • 📰What's New • ☄️How to Use • 👨🏫Acknowledgments • 🤗Contact
## 🎉 Introduction

Welcome to C3Box, a CLIP-based continual learning toolbox [Paper]. On the one hand, C3Box implements state-of-the-art CLIP-based class-incremental learning algorithms, such as CLG-CBM, PROOF, and ENGINE. On the other hand, it also adapts typical class-incremental learning algorithms (e.g., FOSTER and MEMO) and ViT-based class-incremental learning algorithms (e.g., L2P and DualPrompt) so that their effectiveness can be evaluated in the same setting.
If you use any content of this repo for your work, please cite the following bib entries:
```bibtex
@article{sun2026c3box,
    title={C3Box: A CLIP-based Class-Incremental Learning Toolbox},
    author={Sun, Hao and Zhou, Da-Wei},
    journal={arXiv preprint arXiv:2601.20852},
    year={2026}
}

@inproceedings{zhou2024continual,
    title={Continual learning with pre-trained models: A survey},
    author={Zhou, Da-Wei and Sun, Hai-Long and Ning, Jingyi and Ye, Han-Jia and Zhan, De-Chuan},
    booktitle={IJCAI},
    pages={8363--8371},
    year={2024}
}

@article{zhou2024class,
    title={Class-Incremental Learning: A Survey},
    author={Zhou, Da-Wei and Wang, Qi-Wei and Qi, Zhi-Hong and Ye, Han-Jia and Zhan, De-Chuan and Liu, Ziwei},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
    volume={46},
    number={12},
    pages={9851--9873},
    year={2024}
}
```
## 📰 What's New

- [2026-01] 🌟 Initial version of C3Box is released [Paper].
- [2026-01] 🌟 The code is released.
## 🌟 Methods Reproduced

- `FineTune`: Baseline method which simply updates parameters on new tasks.
- `ZS-CLIP`: Baseline method which serves as a performance benchmark for the pre-trained CLIP on downstream tasks.
- `FOSTER`: Feature Boosting and Compression for Class-Incremental Learning. ECCV 2022 [paper]
- `MEMO`: A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning. ICLR 2023 Spotlight [paper]
- `L2P`: Learning to Prompt for Continual Learning. CVPR 2022 [paper]
- `DualPrompt`: DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning. ECCV 2022 [paper]
- `CODA-Prompt`: CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning. CVPR 2023 [paper]
- `EASE`: Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning. CVPR 2024 [paper]
- `SimpleCIL`: Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need. IJCV 2024 [paper]
- `APER`: Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need. IJCV 2024 [paper]
- `TUNA`: Integrating Task-Specific and Universal Adapters for Pre-Trained Model-based Class-Incremental Learning. ICCV 2025 [paper]
- `RAPF`: Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion. ECCV 2024 [paper]
- `MG-CLIP`: Mind the Gap: Preserving and Compensating for the Modality Gap in CLIP-Based Continual Learning. ICCV 2025 [paper]
- `CLG-CBM`: Language Guided Concept Bottleneck Models for Interpretable Continual Learning. CVPR 2025 [paper]
- `PROOF`: Learning without Forgetting for Vision-Language Models. TPAMI 2025 [paper]
- `ENGINE`: External Knowledge Injection for CLIP-Based Class-Incremental Learning. ICCV 2025 [paper]
- `BOFA`: BOFA: Bridge-Layer Orthogonal Low-Rank Fusion for CLIP-Based Class-Incremental Learning. AAAI 2026 [paper]
## ☄️ How to Use

1. Clone this GitHub repository:

   ```bash
   git clone https://github.com/LAMDA-CL/C3Box
   cd C3Box
   ```

2. Edit the `[MODEL NAME].json` file for global settings and hyperparameters.

3. Run:

   ```bash
   python main.py --config=./exps/[MODEL NAME].json
   ```

### Hyper-parameters

When using C3Box, you can edit the global parameters and algorithm-specific hyper-parameters in the corresponding JSON file. These parameters include:
- `model_name`: The model's name should be selected from the 17 methods listed above, i.e., `finetune`, `zs_clip`, `foster`, `memo`, `simplecil`, `l2p`, `dual`, `coda`, `ease`, `aper`, `tuna`, `rapf`, `clg_cbm`, `mg_clip`, `proof`, `engine`, and `bofa`.
- `init_cls`: The number of classes in the initial incremental stage. Since CIL configurations differ in how many classes the first stage contains, our framework accommodates diverse options for defining the initial stage.
- `increment`: The number of classes in each incremental stage $i$, $i > 1$. By default, the number of classes is equal across all incremental stages.
- `backbone_type`: The backbone network of the incremental model. It can be selected from a variety of pre-trained models available in the timm library, e.g., the ViT-B/16 CLIP weights pre-trained by OpenAI or on LAION-400M.
- `seed`: The random seed used for shuffling the class order. It is set to 1993 by default, following the benchmark setting of iCaRL.
- `fixed_memory`: A Boolean parameter. When set to true, the model maintains a fixed number of exemplars per class; when set to false, the per-class memory allocation changes dynamically as classes are added.
- `memory_size`: The total number of exemplars kept during the incremental learning process. If `fixed_memory` is set to false, then with $K$ classes seen at the current stage, the model preserves $\left\lfloor \frac{memory\_size}{K} \right\rfloor$ exemplars for each class. ZS-CLIP, SimpleCIL, APER, EASE, TUNA, CLG-CBM, MG-CLIP, ENGINE, and BOFA do not require exemplars, so exemplar-related parameters are ignored for them.
- `memory_per_class`: If `fixed_memory` is set to true, the model preserves a fixed number of `memory_per_class` exemplars for each class.
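Putting these parameters together, a minimal config could look like the sketch below. This is an illustration only: the values, the `dataset` key, and the backbone string are assumptions, so consult the shipped `exps/*.json` file of each method for its actual template.

```json
{
    "model_name": "proof",
    "dataset": "cifar100",
    "backbone_type": "ViT-B/16",
    "init_cls": 10,
    "increment": 10,
    "seed": [1993],
    "fixed_memory": false,
    "memory_size": 2000,
    "memory_per_class": 20
}
```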
We have implemented pre-processing for the following datasets:
- CIFAR100: will be automatically downloaded by the code.
- CUB200: Google Drive: link or OneDrive: link
- ImageNet-R: Google Drive: link or OneDrive: link
- ObjectNet: OneDrive: link. You can also refer to the filelist and processing code if the file is too large to download.
- Cars: Google Drive: link or OneDrive: link
- UCF: Google Drive: link or OneDrive: link
- Aircraft: Google Drive: link or OneDrive: link
- Food: Google Drive: link or OneDrive: link
- SUN: OneDrive: link
- TV100: OneDrive: link
These subsets are sampled from the original datasets. Please note that we do not have the right to distribute these datasets. If the distribution violates any license, we will provide the filenames instead.
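As a hypothetical illustration of what a completed dataset entry can look like once you have extracted one of the downloads above: the class name, folder layout, and path handling below are assumptions for this sketch, not C3Box's actual API.

```python
import os


class LocalCUB200:
    """Hypothetical dataset entry pointing a loader at a local dataset copy.

    Assumes the common  <root>/train/<class>/  and  <root>/val/<class>/  layout
    used by the downloads above.
    """

    def __init__(self, root):
        self.root = root

    def download_data(self):
        # Resolve the split folders; fail early with a clear message if the
        # dataset has not been extracted to the expected location.
        train_dir = os.path.join(self.root, "train")
        test_dir = os.path.join(self.root, "val")
        for d in (train_dir, test_dir):
            if not os.path.isdir(d):
                raise FileNotFoundError(
                    f"You should specify the folder of your dataset: {d}"
                )
        return train_dir, test_dir
```

Raising an explicit error mirrors the stub shipped in the repo, which deliberately stops with an assertion until a real path is filled in.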
When training on a dataset other than CIFAR100, you should specify the folder of your dataset in `utils/data.py`:

```python
def download_data(self):
    assert 0, "You should specify the folder of your dataset"
    train_dir = '[DATA-PATH]/train/'
    test_dir = '[DATA-PATH]/val/'
```

## 👨🏫 Acknowledgments

We thank the following repos for providing helpful components/functions used in our work.
## 🤗 Contact

If you have any questions, please feel free to propose new features by opening an issue, or contact the authors: Hao Sun (sunhao@lamda.nju.edu.cn) and Da-Wei Zhou (zhoudw@lamda.nju.edu.cn). Enjoy the code!
