command as follows:
accelerate launch --num_processes 1 train.py --config-name=train_diffusion_unet_timm_umi_workspace task.dataset_path=cup_in_the_wild.zarr.zip
When I checked the configuration, I found training.num_epochs = 120. However, I am training on a single A100 GPU, which uses about 20 GB of memory and took 20 hours to complete 10 epochs, so the full run would take roughly 10 days. I would therefore like to ask: what does training for 120 epochs mean here? And is the learned task really just the single action of grasping a cup?
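For reference, my estimate works out as 20 hours / 10 epochs = 2 hours per epoch, so 120 epochs ≈ 240 hours ≈ 10 days. Also, since the dataset path is passed as a Hydra-style override, I assume (please correct me if wrong) that num_epochs can be overridden the same way for a shorter test run, something like the line below, where the value 30 is just an illustrative placeholder:

accelerate launch --num_processes 1 train.py --config-name=train_diffusion_unet_timm_umi_workspace task.dataset_path=cup_in_the_wild.zarr.zip training.num_epochs=30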