Junsheng Zhou* · Weiqi Zhang* · Yu-Shen Liu
(* Equal Contribution)
We release the code of the paper DiffGS: Functional Gaussian Splatting Diffusion in this repository.
In this work, we propose DiffGS, a general Gaussian generator based on latent diffusion models. DiffGS is a powerful and efficient 3D generative model that can generate an arbitrary number of Gaussian primitives for high-fidelity rendering with rasterization. We explore DiffGS for various tasks, including unconditional generation, conditional generation from text, images, and partial 3DGS, as well as Point-to-Gaussian generation. We believe that DiffGS provides a new direction for flexibly modeling and generating Gaussian Splatting.
Overview of DiffGS. (a) We disentangle the fitted 3DGS into three Gaussian Splatting Functions to model the Gaussian probability, colors and transforms. We then train a Gaussian VAE with a conditional latent diffusion model for generating these functions. (b) During generation, we first extract Gaussian geometry from the generated GauPF, followed by the GauCF and GauTF to obtain the Gaussian attributes.
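Below is a minimal, illustrative sketch of this generation flow. The decoder architecture, the field names, and the thresholding step are simplified stand-ins for explanation only, not the implementation in this repository:

```python
# Illustrative sketch of the DiffGS generation flow (not the repository's actual API).
import torch
import torch.nn as nn

class FunctionDecoder(nn.Module):
    """Tiny MLP mapping (latent, xyz) -> per-point attributes."""
    def __init__(self, latent_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )
    def forward(self, latent, xyz):
        z = latent.expand(xyz.shape[0], -1)          # broadcast latent to every query point
        return self.net(torch.cat([z, xyz], dim=-1))

latent_dim = 512
gau_pf = FunctionDecoder(latent_dim, 1)   # Gaussian probability field
gau_cf = FunctionDecoder(latent_dim, 3)   # color function
gau_tf = FunctionDecoder(latent_dim, 8)   # transform: scale (3) + rotation (4) + opacity (1)

# (a) a latent sampled from the trained Gaussian LDM (random here, purely for illustration)
latent = torch.randn(latent_dim)

# (b) extract geometry: keep query points where the generated GauPF is high
queries = torch.rand(100_000, 3) * 2 - 1                    # dense samples in [-1, 1]^3
prob = torch.sigmoid(gau_pf(latent, queries)).squeeze(-1)
xyz = queries[prob > 0.5]                                   # candidate Gaussian centers

# then query GauCF and GauTF at the extracted centers to obtain the Gaussian attributes
colors = gau_cf(latent, xyz)
transforms = gau_tf(latent, xyz)
print(xyz.shape, colors.shape, transforms.shape)
```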
We recommend creating an anaconda environment using our provided environment.yml:
conda env create -f environment.yml
conda activate diffgs
Notice: Since the code uses the original Gaussian Splatting repository, please follow the environment setup instructions provided in that official repository to install the required dependencies.
We provide pretrained models: the Gaussian VAE and the Gaussian LDM for the unconditional chair model. Please download the pretrained models from Google Drive.
To run inference with the pretrained ShapeNet Chair model, save the downloaded checkpoints to config/stage1 and config/stage2. You also need to adjust the checkpoint path in config/generate/specs.json, then run the following command:
python test.py -e config/generate/
- We would like to thank the Stanford ShapeNet Renderer repository for their contribution; our rendering code is modified from their open-source work. Please install Blender and run the following command:
cd proecess_data
blender --background --python render_blender.py -- --output_folder {images_path} {mesh_path}
- Next, perform point sampling on the mesh and modify the shapene_folder path in sample_points.py. The sampled points will be used as the initial positions for the Gaussians (see the sketch after the command below).
python sample_points.py
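For reference, here is a conceptual sketch of this point-sampling step. It assumes trimesh for surface sampling and a hypothetical per-shape file layout; the actual sample_points.py may differ:

```python
# Conceptual sketch: sample surface points as initial Gaussian positions.
# The trimesh dependency, file names, and folder layout are assumptions.
import os
import numpy as np
import trimesh

shapene_folder = "path/to/shapenet"   # same role as the shapene_folder variable mentioned above
num_points = 100_000

for name in os.listdir(shapene_folder):
    mesh_path = os.path.join(shapene_folder, name, "model.obj")   # hypothetical layout
    if not os.path.exists(mesh_path):
        continue
    mesh = trimesh.load(mesh_path, force='mesh')
    points, _ = trimesh.sample.sample_surface(mesh, num_points)   # uniform surface sampling
    np.save(os.path.join(shapene_folder, name, "init_points.npy"), points.astype(np.float32))
```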
- Run the Gaussian fitting script provided by us.
python train_gaussian.py -s <path to COLMAP or NeRF Synthetic dataset>
- Run the conversion script convert.py we provide to transform the fitted Gaussians into data suitable for training, and to sample the Gaussian probability field (a conceptual sketch of this sampling follows the command below).
python convert_data.py
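Conceptually, sampling the Gaussian probability field means producing query points whose labels are high near the fitted Gaussian centers and decay with distance. The sketch below illustrates this idea with a simple distance-based label; the exact formulation, bandwidth, and file formats used by the conversion script may differ:

```python
# Conceptual sketch of sampling a Gaussian probability field (GauPF) from fitted 3DGS.
# File names and the distance-based label are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

centers = np.load("gaussian_centers.npy")   # xyz of the fitted Gaussians (hypothetical file)
num_queries = 200_000
sigma = 0.02                                 # decay bandwidth (assumed)

# queries: perturbed Gaussian centers plus uniform samples inside the bounding box
near = centers[np.random.randint(len(centers), size=num_queries // 2)]
near = near + np.random.randn(*near.shape) * sigma
lo, hi = centers.min(0), centers.max(0)
uniform = np.random.rand(num_queries // 2, 3) * (hi - lo) + lo
queries = np.concatenate([near, uniform], axis=0)

# label: exp(-d^2 / (2 sigma^2)), with d the distance to the nearest Gaussian center
d, _ = cKDTree(centers).query(queries, k=1)
prob = np.exp(-d ** 2 / (2 * sigma ** 2))

np.savez("gaupf_samples.npz", queries=queries.astype(np.float32), prob=prob.astype(np.float32))
```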
python train.py -e config/stage1/ -b 4 -w 8 # -b for batch size, -w for workers, -r to resume training
# extract the modulations / latent vectors, which will be saved in a "modulations" folder in the config directory
# the folder needs to correspond to "Data_path" in the diffusion config files
python test.py -e config/stage1/ -r {num epoch}
python train.py -e config/stage2 -b 32 -w 8
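The extracted modulations (one latent vector per fitted shape) are what the diffusion stage reads through "Data_path". As a rough illustration of that interface (the file format and naming here are assumptions, not the repository's actual loader):

```python
# Sketch of consuming the extracted modulations in the diffusion stage.
# The ".pt per shape" format and folder path are assumptions.
import os
import glob
import torch
from torch.utils.data import Dataset, DataLoader

class ModulationDataset(Dataset):
    def __init__(self, data_path):
        # data_path should correspond to "Data_path" in the stage-2 diffusion config
        self.files = sorted(glob.glob(os.path.join(data_path, "*.pt")))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, idx):
        return torch.load(self.files[idx])   # one latent vector per fitted shape

loader = DataLoader(ModulationDataset("config/stage1/modulations"), batch_size=32, shuffle=True)
```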
If you want to train Point-to-Gaussian, simply add --point2gs to the "Train Gaussian modulations" command:
python train.py -e config/stage1/ -b 4 -w 8 --point2gs
If you want to train a conditional generative model, please first prepare the condition for each Gaussian, set the context_path in specs.json to the correct path, and then run the following command.
python train.py -e config/stage2_conditional -b 32 -w 8
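For example, if you condition on images, the per-shape condition could be an embedding saved under context_path. The sketch below uses CLIP image features; the embedding model, file naming, and layout are assumptions rather than a required format:

```python
# Illustrative sketch of preparing per-shape image conditions with CLIP embeddings.
# The folder names and saved format are assumptions, not the repository's required layout.
import os
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

images_path = "renders"          # e.g. images produced by render_blender.py above
context_path = "conditions"      # set context_path in specs.json to this folder
os.makedirs(context_path, exist_ok=True)

for name in os.listdir(images_path):
    img = preprocess(Image.open(os.path.join(images_path, name))).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(img).squeeze(0).cpu()
    torch.save(feat, os.path.join(context_path, os.path.splitext(name)[0] + ".pt"))
```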
If you find our code or paper useful, please consider citing
@inproceedings{diffgs,
title={DiffGS: Functional Gaussian Splatting Diffusion},
author={Zhou, Junsheng and Zhang, Weiqi and Liu, Yu-Shen},
booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
year={2024}
}






