We run our experiments with TensorFlow 1.14.0, CUDA 9.2, Python 3.6.12 and Ubuntu 18.04.
We provide the first 300 stereo frame pairs. For additional tests, you can download the EndoSLAM dataset and the Hamlyn dataset.
Preprocess
We rectified stereo images sampled from the in-vivo endoscopy stereo video.
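For reference, below is a minimal sketch of stereo rectification with OpenCV under assumed calibration; the intrinsics, distortion coefficients, extrinsics, image size, and file names are placeholders and are not taken from this repository.

```python
import cv2
import numpy as np

# Placeholder calibration -- replace with the values of your stereo rig.
K1 = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
K2 = K1.copy()
D1 = np.zeros(5)                       # left distortion coefficients
D2 = np.zeros(5)                       # right distortion coefficients
R = np.eye(3)                          # rotation of right camera w.r.t. left
T = np.array([[-5.0], [0.0], [0.0]])   # translation (baseline) of right camera
image_size = (640, 480)                # (width, height)

# Compute rectification transforms so that epipolar lines become horizontal.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)

# Build per-camera remapping tables and warp each sampled frame.
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)

left = cv2.imread('left_000.png')      # hypothetical file names
right = cv2.imread('right_000.png')
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
```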
Split
We train on the first 200 frames of the in-vivo endoscopy stereo dataset and test on frames 201 to 300.
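As a concrete illustration of this split (the zero-padded left/right file naming below is an assumption, not necessarily this repository's layout):

```python
# Hypothetical frame naming: indices 1..300, left/right PNG pairs.
all_frames = list(range(1, 301))
train_frames = all_frames[:200]    # frames 1-200 for training
test_frames = all_frames[200:]     # frames 201-300 for testing

train_left = [f"left_{i:03d}.png" for i in train_frames]
train_right = [f"right_{i:03d}.png" for i in train_frames]
test_left = [f"left_{i:03d}.png" for i in test_frames]
test_right = [f"right_{i:03d}.png" for i in test_frames]
```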
Standard TPS training:
CUDA_VISIBLE_DEVICES=0 python std_tps.py --model 'TPS' --cpts_row 4 --cpts_col 4 --output_directory <path_to_save_result>

Alternative TPS training:
Training step:
CUDA_VISIBLE_DEVICES=0 python o_tps.py --pretrained False --cpts_row 4 --cpts_col 4 --output_directory <path_to_save_result>

Test step:
CUDA_VISIBLE_DEVICES=0 python std_tps.py --model 'OTPS'

Set --model OTPS to load the trained T of the OTPS model for testing.
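For intuition about the `--cpts_row 4 --cpts_col 4` options, the sketch below fits a plain 2D thin-plate spline over a 4x4 control-point grid in NumPy. It only illustrates the TPS interpolation idea; it is not the training code in `std_tps.py` or `o_tps.py`, and the control-point values are made up.

```python
import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 * log(r^2), with the convention U(0) = 0.
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(ctrl_pts, values):
    """Solve for TPS weights given control points (n, 2) and target values (n,)."""
    n = ctrl_pts.shape[0]
    d2 = np.sum((ctrl_pts[:, None, :] - ctrl_pts[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), ctrl_pts])          # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([values, np.zeros(3)])
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]                        # nonlinear weights, affine coeffs

def eval_tps(pts, ctrl_pts, w, a):
    """Evaluate the fitted spline at query points (m, 2)."""
    d2 = np.sum((pts[:, None, :] - ctrl_pts[None, :, :]) ** 2, axis=-1)
    return tps_kernel(d2) @ w + a[0] + pts @ a[1:]

# 4x4 control-point grid (cpts_row = cpts_col = 4) over the unit square,
# with made-up depth-like values attached to each control point.
gx, gy = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
ctrl = np.stack([gx.ravel(), gy.ravel()], axis=1)
vals = np.sin(3 * ctrl[:, 0]) + ctrl[:, 1]

w, a = fit_tps(ctrl, vals)
query = np.array([[0.25, 0.75], [0.5, 0.5]])
print(eval_tps(query, ctrl, w, a))   # smooth values interpolated from the 16 control points
```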
We provide a main.ipynb that includes all of the scripts above.
We omit the 3D plotting code and show the results directly. You can find a test reconstruction video in the result folder.
We compare our method with several well-known end-to-end stereo depth estimation models.
- Disparity map results
- Reconstruction results (a disparity-to-depth back-projection sketch follows below)
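For readers who want to reproduce the reconstruction visualization from a disparity map, below is a minimal sketch of the standard rectified-stereo back-projection Z = f * B / d; the intrinsics, baseline, and dummy disparity map are placeholder values, not our calibration or output.

```python
import numpy as np

def disparity_to_pointcloud(disparity, fx, fy, cx, cy, baseline):
    """Back-project a disparity map (H, W) into camera-frame 3D points (H, W, 3)."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Depth from the rectified-stereo relation Z = f * B / d (invalid where d <= 0).
    valid = disparity > 0
    z = np.zeros_like(disparity, dtype=np.float64)
    z[valid] = fx * baseline / disparity[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Placeholder intrinsics/baseline and a dummy disparity map for illustration only.
disparity = np.random.uniform(1.0, 64.0, size=(480, 640))
points = disparity_to_pointcloud(disparity, fx=500.0, fy=500.0,
                                 cx=320.0, cy=240.0, baseline=5.0)
print(points.shape)   # (480, 640, 3)
```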





