import torch
from torch.autograd import Variable
import torch.optim as optim

# lambda is initialized as a trainable random variable
ref = Variable(torch.rand((1, 1, width, length)).cuda(), requires_grad=True)

# Optimizer 1 updates the network parameters
optimizer = optim.Adam(net.parameters(), lr=learning_rate)

# Optimizer 2 updates ref (lambda)
optimizer2 = optim.Adam([ref], lr=1e-1)
Thank you for sharing this impressive work! I have a question about one detail in the code above: λ (`ref`) is initialized as a trainable random variable and assigned to `optimizer2`, but I couldn't find where `optimizer2` is actually stepped later in the training loop, nor where the current output is fed back as the input to the next iteration. Could you clarify how these updates are performed? Looking forward to your reply, thank you!
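To make the question concrete, here is a minimal sketch of the update pattern I expected to find. Everything here is hypothetical: `nn.Linear` stands in for the actual network, the loss is a placeholder, and the shapes are toy-sized (CPU only), but it shows both optimizers being stepped and the output being reused as the next input.

```python
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(4, 4)                       # stand-in for the real network
ref = torch.rand(1, 4, requires_grad=True)  # trainable lambda-like variable
ref_init = ref.detach().clone()

optimizer = optim.Adam(net.parameters(), lr=1e-3)
optimizer2 = optim.Adam([ref], lr=1e-1)

x = torch.rand(1, 4)
for _ in range(5):
    out = net(x * ref)        # ref participates in the forward pass
    loss = out.pow(2).mean()  # placeholder loss

    optimizer.zero_grad()
    optimizer2.zero_grad()
    loss.backward()
    optimizer.step()
    optimizer2.step()         # the lambda update I could not find in the code

    x = out.detach()          # current output becomes the next input
```

If the released code intentionally omits the `optimizer2.step()` call or the output feedback, I'd appreciate knowing which part of the training procedure handles them instead.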