Commit 93a91c5

fix readme
1 parent cf449b0 commit 93a91c5

File tree

2 files changed, +11 -9 lines changed


README.md

Lines changed: 10 additions & 8 deletions
@@ -1,11 +1,19 @@
-# LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation
+# LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation
 
 Welcome to the official repository for **LLM2CLIP**! This project leverages large language models (LLMs) as powerful textual teachers for CLIP's visual encoder, enabling more nuanced and comprehensive multimodal learning.
 
 [![Paper](https://img.shields.io/badge/Paper-arXiv-red)](https://arxiv.org/abs/2411.04997) [![Project Homepage](https://img.shields.io/badge/Project-Homepage-blue)](https://aka.ms/llm2clip) [![HuggingFace Collection](https://img.shields.io/badge/HuggingFace-Collection-orange)](https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c)
-**Paper:** Preprinted and under-review now. Accepted to NeurIPS 2024 Workshop: Self-Supervised Learning - Theory and Practice
+**Paper:** Accepted to NeurIPS 2024 Workshop: Self-Supervised Learning - Theory and Practice
 
 ---
+
+## News 🚀🚀🚀
+- **[2024-11-08]** We are currently training a **scaled-up** version with ten times the training dataset, along with upcoming updates: EVA ViT-E, InternVL-300M, SigCLIP-SO-400M, and more VLLM results trained with LLM2CLIP. Stay tuned for the most powerful CLIP models, and thank you for your star!
+- **[2024-11-06]** OpenAI's CLIP and EVA02's ViT base and large models are now available on [HuggingFace](https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c).
+- **[2024-11-01]** Our paper was accepted to the NeurIPS 2024 SSL Workshop!
+
+---
+
 <img src="docs/static/images/radar_paper(4).png" style="max-width: 800px;">
 
 ## Challenges with Existing CLIP
@@ -41,12 +49,6 @@ Through this strategy, we better utilized the LLM's power to comprehend and proc
 
 ---
 
-## News 🚀🚀🚀
-- **[2024-11-08]** We are currently training a **scaled-up** version with ten times the training dataset, along with upcoming updates: EVA ViT-E, InternVL-300M, SigCLIP-SO-400M, and more VLLM results trained with LLM2CLIP. Stay tuned for the most powerful CLIP models, and thank you for your star!
-- **[2024-11-06]** OpenAI's CLIP and EVA02's ViT base and large models are now available on [HuggingFace](https://huggingface.co/collections/microsoft/llm2clip-672323a266173cfa40b32d4c).
-- **[2024-11-01]** Our paper was accepted to the NeurIPS 2024 SSL Workshop!
-
----
 ![main.svg](docs/static/images/main.svg)
 
 ## Model Zoo (Continuously Updated)
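
The News entry in the diff above points to checkpoints in the linked HuggingFace collection. For orientation, the sketch below shows how one of those checkpoints might be loaded with `transformers`; the model ID, the preprocessor choice, and the `get_image_features` call are assumptions inferred from the collection link rather than anything specified by this commit.

```python
# Hypothetical sketch: the model ID and feature-extraction interface are assumptions,
# not confirmed by this commit.
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model_id = "microsoft/LLM2CLIP-Openai-L-14-336"  # assumed checkpoint name from the collection

# Preprocessing is assumed to follow the underlying OpenAI ViT-L/14-336 backbone.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

image = Image.open("example.jpg")  # any local test image
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    # Assumed to mirror the standard CLIP interface for image embeddings.
    image_features = model.get_image_features(pixel_values)

print(image_features.shape)
```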

docs/index.html

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@
 <div class="hero-body">
 <div class="container has-text-centered">
 <img src="static/images/logo.png" alt="LLM2CLIP Logo" style="max-width: 100px; margin-bottom: 20px;">
-<h1 class="title is-1">LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation</h1>
+<h1 class="title is-1">LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation</h1>
 <h2 class="subtitle is-4">Weiquan Huang<sup>1*</sup>, Aoqi Wu<sup>1*</sup>, Yifan Yang<sup>2†</sup>, Xufang Luo<sup>2</sup>, Yuqing Yang<sup>2</sup>, Liang Hu<sup>1</sup>, Qi Dai<sup>2</sup>, Xiyang Dai<sup>2</sup>, Dongdong Chen<sup>2</sup>, Chong Luo<sup>2</sup>, Lili Qiu<sup>2</sup></h2>
 <div class="is-size-5">
 <span class="author-block"><sup>1</sup>Tongji University</span>,
