Looking at the implementation in built_ins/gan.py, I have realized that the discriminator is updated with two separate calls: the first optimizer update uses the GAN loss and the second the gradient penalty.
This happens because LossHandle overwrites (s1) any existing value for a given network key, which is inconvenient behaviour. I can see from s2 and s3 that there was an intention to implement something more convenient, but it does not seem to have been finished.
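For concreteness, here is a tiny, self-contained illustration of that effect, using a plain `SimpleNamespace` as a stand-in rather than cortex's actual `LossHandle`:

```python
from types import SimpleNamespace

# Stand-in with plain attribute assignment, just to show the overwrite effect;
# this is not cortex's LossHandle.
losses = SimpleNamespace()
losses.discriminator = 0.7   # GAN loss term for the discriminator
losses.discriminator = 0.1   # gradient penalty: replaces the GAN loss instead of adding to it
print(losses.discriminator)  # 0.1 -- only the last value set survives
```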
I propose the following (a small sketch of the semantics follows the list); tell me what you think:
- `self.losses.network = a` would overwrite/set the loss for the `network`.
- `self.losses.network += a` would add to the already existing loss if it exists; if it doesn't, it sets it to `a`.
- Remove any options such as `method` or `add_value=True`, as they would only introduce confusion and incoherence into the API for those creating `ModelPlugin`s.
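A minimal sketch of the semantics I have in mind, assuming `LossHandle` can back its attributes with a dict keyed by network name (all names below are hypothetical, not the current cortex code):

```python
class LossHandle:
    """Sketch of the proposed loss semantics; not the current implementation."""

    def __init__(self):
        # Bypass __setattr__ so the backing dict itself is not stored as a loss.
        object.__setattr__(self, '_losses', {})

    def __setattr__(self, network, value):
        # `losses.network = a` overwrites/sets the loss for `network`.
        self._losses[network] = value

    def __getattr__(self, network):
        # Python rewrites `losses.network += a` as
        # `losses.network = losses.network + a`, so returning 0 for a missing
        # key makes `+=` behave like `=` the first time and accumulate afterwards.
        return self._losses.get(network, 0)


losses = LossHandle()
losses.discriminator = 0.5    # e.g. the GAN loss
losses.discriminator += 0.25  # gradient penalty accumulates onto the same key
assert losses.discriminator == 0.75
```

With something like this in place, the discriminator's GAN loss and gradient penalty could be registered in the same routine and applied in a single optimizer update, instead of the two separate calls done today.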