LWTformer: A Detail-Aware, Learnable Wavelet-Transformer for Ancient Chinese Character Image Restoration
The official PyTorch implementation of the paper:
"LWTformer: A Detail-Aware, Learnable Wavelet-Transformer for Ancient Chinese Character Image Restoration"
Ancient Chinese characters hold profound cultural and scholarly value, yet their images often suffer from erosion, abrasion, fractures, and noise due to aging, resulting in blurred strokes and missing edges. While deep learning-based image restoration has advanced significantly, the complex structures and unique configurations of ancient characters present distinct challenges. Specifically, the deep entanglement between noise and fine strokes makes it challenging to strike a balance between denoising and preserving subtle stroke details. Precisely separating these components remains a critical challenge, as conventional methods struggle to disentangle them effectively.
To address these issues, we propose LWTformer, a novel architecture integrating learnable 2D Discrete Wavelet Transform (2D-DWT) for ancient character image restoration. Our key contribution lies in a learnable wavelet downsampling module (WaveDown), which precisely separates ancient character images into low-frequency approximate components that preserve overall character structures and high-frequency detail components that capture distinctive stroke features. Additionally, we introduce Spatial-Enhanced Attention (SEA) and Wavelet-Aware Convolutional Gated Attention (WACGA) to better capture fine strokes and damaged edges. Experiments on Oracle Bone Inscriptions and Warring States Bamboo Slips datasets show that LWTformer outperforms state-of-the-art methods both qualitatively and quantitatively, providing a technical approach more aligned with the glyph structures for ancient Chinese character image restoration.
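For intuition, the learnable wavelet downsampling idea can be sketched as a stride-2 grouped convolution initialised with the four 2×2 Haar analysis filters and then left trainable. This is a hypothetical illustration of the general technique only, not the repository's actual WaveDown module; the class name, Haar initialisation, and sub-band layout below are all assumptions:

```python
import torch
import torch.nn as nn

class WaveDown(nn.Module):
    """Hypothetical sketch of a learnable wavelet downsampling step.

    A stride-2 depthwise convolution is initialised with the four 2x2 Haar
    analysis filters (LL, LH, HL, HH) and kept trainable, so gradients can
    adapt the decomposition to stroke-like structures.
    """

    def __init__(self, channels: int):
        super().__init__()
        # One group per input channel; each channel yields 4 sub-bands.
        self.dwt = nn.Conv2d(channels, channels * 4, kernel_size=2,
                             stride=2, groups=channels, bias=False)
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])    # approximation
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])  # horizontal detail
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])  # vertical detail
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])  # diagonal detail
        haar = torch.stack([ll, lh, hl, hh])                # (4, 2, 2)
        weight = haar.repeat(channels, 1, 1).unsqueeze(1)   # (4C, 1, 2, 2)
        self.dwt.weight = nn.Parameter(weight)

    def forward(self, x):
        # x: (B, C, H, W) -> low: (B, C, H/2, W/2), high: (B, 3C, H/2, W/2)
        y = self.dwt(x)
        b, _, h, w = y.shape
        y = y.view(b, -1, 4, h, w)
        low = y[:, :, 0]                  # LL: overall character structure
        high = y[:, :, 1:].flatten(1, 2)  # LH/HL/HH: stroke details
        return low, high

x = torch.randn(1, 3, 64, 64)
low, high = WaveDown(3)(x)
print(low.shape, high.shape)  # torch.Size([1, 3, 32, 32]) torch.Size([1, 9, 32, 32])
```

Initialising at Haar gives a valid wavelet decomposition before any training, while leaving the filters learnable lets them drift toward stroke-aware variants.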
- 2025.11.29: We have made all our code publicly available.
- 2025.11.29: Our paper is under review.
- Release the complete code.
- Add WebUI demo.
- Release pre-trained model weights.
- Build a larger ancient Chinese character dataset & improve the degradation algorithm.
- Linux
- Python 3.8
- PyTorch 2.4.1
- CUDA 12.1
We recommend using Conda to manage your environment:
```shell
conda create -n LWTformer_env python=3.8 -y
conda activate LWTformer_env

# Install PyTorch
pip install torch==2.4.1+cu121 torchvision==0.19.1+cu121 torchaudio==2.4.1+cu121 -f https://download.pytorch.org/whl/torch_stable.html

# Install other dependencies
pip install -r requirements.txt
```

Verify the installation:

```shell
python test_env.py
```

Please organize your training data as follows:
```
├── Oracle
│   ├── train
│   │   ├── input
│   │   │   ├── 0001_6.png
│   │   │   └── ...
│   │   └── target
│   │       ├── 0001_6.png
│   │       └── ...
│   └── test
│       ├── input
│       │   ├── 0002_1.png
│       │   └── ...
│       └── target
│           ├── 0002_1.png
│           └── ...
```
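With this layout, degraded and clean images are matched by file name. The loader below is a minimal sketch under that assumption (a hypothetical class for illustration; the repository's own data pipeline may differ):

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class PairedCharDataset(Dataset):
    """Minimal paired loader for an input/target directory pair."""

    def __init__(self, root, transform=None):
        self.input_dir = os.path.join(root, "input")
        self.target_dir = os.path.join(root, "target")
        # Degraded and clean images share the same file name.
        self.names = sorted(os.listdir(self.input_dir))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        inp = Image.open(os.path.join(self.input_dir, name)).convert("RGB")
        tgt = Image.open(os.path.join(self.target_dir, name)).convert("RGB")
        if self.transform is not None:
            inp, tgt = self.transform(inp), self.transform(tgt)
        return inp, tgt
```

For example, `PairedCharDataset("./Oracle/train")` would pair `input/0001_6.png` with `target/0001_6.png`.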
Modify the parameters in options/train_LWTformer_options.py to match your setup. Key parameters include:
| Parameter | Description | Example Value |
|---|---|---|
| `--nepoch` | Total number of training epochs | 500 (Oracle), 200 (WSC41K) |
| `--dataset` | Dataset name (used for logging) | `Oracle` |
| `--task` | Task name (used for logging) | `Restoration` |
| `--dim` | Feature dimension size | 32 (LWTformer-S), 48 (LWTformer-L) |
| `--arch` | Model architecture | `LWTformer-S`, `LWTformer-L` |
| `--train_dir` | Path to training dataset | `./Oracle/train` |
| `--val_dir` | Path to validation/test dataset | `./Oracle/test` |
Once configured, run the following command:

```shell
python train/train_LWTformer.py
```

Before running inference, please update the parameters in test/test_LWTformer.py to match your data and model paths:
| Parameter | Description | Example Value |
|---|---|---|
| `--real_dir` | Path to the ground-truth (target) dataset | `Oracle/test/target` |
| `--input_dir` | Path to the degraded (input) dataset | `Oracle/test/input` |
| `--weights` | Path to the trained model checkpoint | `logs/Oracle/Restoration/LWTformer/models/model_latest.pth` |
Run the inference:
```shell
python test/test_LWTformer.py
```

The generated images will be saved in the `./results` folder.
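As a quick sanity check, the restored images in ./results can be scored against Oracle/test/target with a simple PSNR helper. The function below is a minimal NumPy sketch for that purpose only; the metrics reported in the paper may be computed with a different pipeline:

```python
import numpy as np

def psnr(restored: np.ndarray, target: np.ndarray,
         data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-shaped images."""
    # Cast to float64 so uint8 subtraction cannot wrap around.
    mse = np.mean((restored.astype(np.float64)
                   - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher is better; identical images score infinity, and typical restoration results land in the 20–40 dB range.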
We provide visual examples demonstrating the restoration capabilities of LWTformer.
Figure 3: Generalization test on real-world Hubei Bamboo Slips (using model trained solely on WSC41K).
We heavily drew on the following works during the development of our codebase. We are grateful for their valuable contributions.
If you find our work helpful for your research, please consider citing:
```bibtex
@misc{LWTformer2025,
  title        = {LWTformer: A Detail-Aware, Learnable Wavelet-Transformer for Ancient Chinese Character Image Restoration},
  author       = {Wentao Ruan and Xinhui Li and Zhan Cheng and Cunhang Fan and Libao Tian and Zhao Lv},
  year         = {2025},
  howpublished = {\url{https://github.com/INWLY/LWTformer}},
  note         = {GitHub repository}
}
```

For any questions, please contact:
- Email: INWLY@qq.com (recommended), e24201077@stu.ahu.edu.cn
- This repository can only be used for non-commercial research purposes.
- For commercial use, please contact Prof. Zhao Lv (12080@ahu.edu.cn).
- Copyright 2025, Laboratory of Intelligent Information and Human-Computer Interaction (IIP-HCI), Anhui University.


