Runfeng Li, Mikhail Okunev, Zixuan Guo, Anh Ha Duong, Christian Richardt∞, Matthew O'Toole*, James Tompkin
Brown University, ∞Meta Reality Labs, *Carnegie Mellon University
To set up the environment, run:

```shell
conda env create -f environment.yml
conda activate gftorf
```
If your machine (hardware & software) is compatible with the original 3DGS, you should have no problem setting up our environment.
Next, you have two options.

**Render with pretrained models:**

- To download pretrained models, run:

  ```shell
  python prepare_models.py
  ```

- Modify arguments in `run_render.py` if needed, then run:

  ```shell
  python run_render.py
  ```

  (You might need to modify the IMAGEMAGICK path in `conf.py` to compose video panels.) You should get the exact video panels shown on our project page. This can take at most 60 minutes for one scene on a single NVIDIA 3090 GPU.
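If panel composition fails, the ImageMagick path set in `conf.py` is the usual culprit. As a quick, hypothetical sketch (not part of our codebase), the standard ImageMagick executable names can be located on your `PATH` like this, and the printed path pasted into `conf.py`:

```python
import shutil


def find_imagemagick():
    """Return the first ImageMagick binary found on PATH, or None.

    ImageMagick 7 ships a single 'magick' executable; version 6 and
    earlier use 'convert'. Which name conf.py expects is an assumption.
    """
    for name in ("magick", "convert"):
        path = shutil.which(name)
        if path:
            return path
    return None


if __name__ == "__main__":
    print(find_imagemagick() or "ImageMagick not found on PATH")
```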
**Train from scratch:**

- Download the F-TöRF `real_scenes.zip` and `synthetic_scenes.zip` archives, and the TöRF `copier`, `cupboard`, `deskbox`, `phonebooth`, and `studybook` scenes, to the `data/` folder, and then run:

  ```shell
  python prepare_data.py
  ```

- Modify arguments in `run_optimize.py` if needed, then run:

  ```shell
  python run_optimize.py
  ```

  You can get decent-looking results after 20k iterations (though training longer is usually better).
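Before launching training, a quick check that the expected scene folders landed under `data/` can save a failed run. The sketch below assumes one folder per scene named after the TöRF scenes listed above; adjust it to match whatever layout `prepare_data.py` actually produces:

```python
from pathlib import Path

# Scene names from the download step above (folder-per-scene layout
# is an assumption; adapt to your prepared data directory).
EXPECTED_SCENES = ["copier", "cupboard", "deskbox", "phonebooth", "studybook"]


def missing_scenes(data_root="data", scenes=EXPECTED_SCENES):
    """Return the expected scene folders not present under data_root."""
    root = Path(data_root)
    return [s for s in scenes if not (root / s).is_dir()]


if __name__ == "__main__":
    missing = missing_scenes()
    if missing:
        print("Missing scene folders:", ", ".join(missing))
    else:
        print("All expected scenes found.")
```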
```bibtex
@InProceedings{Li_2025_CVPR,
    author    = {Li, Runfeng and Okunev, Mikhail and Guo, Zixuan and Duong, Anh Ha and Richardt, Christian and O'Toole, Matthew and Tompkin, James},
    title     = {Time of the Flight of the Gaussians: Optimizing Depth Indirectly in Dynamic Radiance Fields},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {21021-21030}
}
```