Hi, I am trying to reduce generation time. So far I am getting 2 minutes and 20 seconds per input image (generating 10 output images for age).
What I am realizing is that encode_images.py takes this long for each input image:
- Initializing generator : 7.2106 secs
- Creating PerceptualModel : 9.0305 secs
- Loading Resnet model : 23.0473 secs
- Loop loss : 1.0582 secs
- Loop loss : 0.0619 secs
- Loop loss : 0.0630 secs
- Loop loss : 0.0618 secs
- Loop loss : 0.0628 secs
- Loop loss : 0.0621 secs
So I am trying to initialize the generator, create the perceptual model, and load the ResNet model once at the beginning of my script, and then pass them as parameters into the encode_images.py logic, so that steps 1 to 3 are not repeated for each image.
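Roughly, this is the structure I am aiming for. The sketch below is only an outline: Generator and PerceptualModel are the classes encode_images.py uses, but the exact constructor arguments, the model pickle path, and the encode_one_image() helper are assumptions on my part.

```python
import os
import pickle
from argparse import Namespace

import dnnlib.tflib as tflib
from encoder.generator_model import Generator
from encoder.perceptual_model import PerceptualModel

# Mimic encode_images.py's parsed args; PerceptualModel likely reads more
# fields from args than the ones listed here (assumption).
args = Namespace(optimizer='lbfgs', face_mask=True, iterations=6,
                 use_lpips_loss=0, use_discriminator_loss=0, output_video=False,
                 src_dir='aligned_images/', generated_images_dir='generated_images/',
                 dlatent_dir='latent_representations/')

tflib.init_tf()  # one TF session/graph for the whole run

# Steps 1-3: pay the start-up cost once, before the per-image loop.
MODEL_PKL = 'cache/karras2019stylegan-ffhq-1024x1024.pkl'  # hypothetical local path
with open(MODEL_PKL, 'rb') as f:
    generator_network, discriminator_network, Gs_network = pickle.load(f)
generator = Generator(Gs_network, batch_size=1, randomize_noise=False)
perceptual_model = PerceptualModel(args, perc_model=None, batch_size=1)
perceptual_model.build_perceptual_model(generator, discriminator_network)

# Per-image work: only the optimization (loss) loop runs for each input.
for image_name in sorted(os.listdir(args.src_dir)):
    encode_one_image(os.path.join(args.src_dir, image_name),
                     generator, perceptual_model)  # hypothetical helper with the loss loop
```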
But I have no idea if that's the right way to do it. Instead of calling the script directly, I defined an auxiliar() function that takes the same flags and parameters:
Newly defined function call:

```python
auxiliar(optimizer='lbfgs', face_mask=True, iterations=6, use_lpips_loss=0,
         use_discriminator_loss=0, output_video=False, src_dir='aligned_images/',
         generated_images_dir='generated_images/', dlatent_dir='latent_representations/')
```

Former script call:

```
python encode_images.py --optimizer=lbfgs --face_mask=True --iterations=6 --use_lpips_loss=0 --use_discriminator_loss=0 --output_video=False aligned_images/ generated_images/ latent_representations/
```

So far I am getting this error:
```
ValueError: Tensor("Const_1:0", shape=(3,), dtype=float32) must be from the same graph as Tensor("strided_slice:0", shape=(1, 256, 256, 3), dtype=float32).
```
It happens at this point in the code, which used to be in encode_images.py:
```python
perceptual_model = PerceptualModel(args, perc_model=perc_model, batch_size=batch_size)
perceptual_model.build_perceptual_model(generator, discriminator_network)
```
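For what it's worth, my guess (just an assumption) is that the generator I build up front lives in one tf.Graph, while build_perceptual_model() adds its tensors to whatever graph is the default at call time; TensorFlow 1.x cannot mix ops from two different graphs, which would explain the ValueError above. This is the workaround I am about to try, reusing the names from the snippet above:

```python
import tensorflow as tf

# Capture the graph right after building the generator up front...
graph = tf.get_default_graph()

# ...and make it the default again when the perceptual model adds its ops,
# so that Const_1:0 and strided_slice:0 end up in the same graph.
with graph.as_default():
    perceptual_model = PerceptualModel(args, perc_model=perc_model, batch_size=batch_size)
    perceptual_model.build_perceptual_model(generator, discriminator_network)
```

The alternative I can think of is to make sure tflib.init_tf() runs exactly once, so that everything is built in a single default graph from the start.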