Releases: MadryLab/robustness
robustness 1.2.1.post2
- Support for SqueezeNet architectures
- Fix incompatibility with PyTorch 1.7 (#83)
- Allow user to specify only some device ids for training through the `dp_device_ids` argument to `train.train_model` (see the sketch below)
- Update requirements.txt
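A minimal sketch of restricting training to specific GPUs with the new argument, following the library's usual training-as-a-library flow; the dataset path, architecture, and hyperparameters below are placeholders:

```python
# Sketch only: restrict DataParallel training to GPUs 0 and 1 via the new
# dp_device_ids argument. Paths and hyperparameters are placeholders.
from robustness import datasets, defaults, model_utils, train
from cox.utils import Parameters
import cox.store

ds = datasets.CIFAR('/path/to/cifar')
model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
train_loader, val_loader = ds.make_loaders(batch_size=128, workers=8)

train_args = Parameters({'out_dir': 'train_out', 'adv_train': 0})
train_args = defaults.check_and_fill_args(train_args, defaults.TRAINING_ARGS,
                                          datasets.CIFAR)

out_store = cox.store.Store('train_out')
train.train_model(train_args, model, (train_loader, val_loader),
                  store=out_store, dp_device_ids=[0, 1])
```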
robustness 1.2.1.post1
Small fixes in BREEDS dataset
robustness 1.2.1
Add BREEDS dataset, minor bug fixes
robustness 1.2-post1
- Restore ImageNetHierarchy class
- Improve type checking for dataset arguments
robustness v1.2
- Biggest new features:
  - New ImageNet models
  - Mixed-precision training
  - OpenImages and Places365 datasets added
  - Ability to specify a custom accuracy function (custom loss functions were already supported, this is just for logging)
  - Improved resuming functionality
- Changes to CLI-based training (library counterparts are sketched after this list):
  - `--custom-lr-schedule` replaced by `--custom-lr-multiplier` (same format)
  - `--eps-fadein-epochs` replaced by general `--custom-eps-multiplier` (now same format as custom-lr schedule)
  - `--step-lr-gamma` now available to change the size of learning rate drops (used to be fixed to 10x drops)
  - `--lr-interpolation` argument added (can choose between linear and step interpolation between learning rates in the schedule)
  - `--weight_decay` is now called `--weight-decay`, keeping with convention
  - `--resume-optimizer` is a 0/1 argument for whether to resume the optimizer and LR schedule, or just the model itself
  - `--mixed-precision` is a 0/1 argument for whether to use mixed-precision training or not (requires PyTorch compiled with AMP support)
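As a rough illustration of the training-as-a-library counterparts, the sketch below sets a few of the new knobs on the training arguments. It assumes each CLI flag maps to a same-named parameter with dashes replaced by underscores; treat the exact names and values as assumptions rather than the definitive API:

```python
# Sketch only: the new schedule/precision knobs as library-style arguments.
# ASSUMPTION: each CLI flag has an underscore-named counterpart
# (e.g. --step-lr-gamma -> step_lr_gamma); check the docs for exact names.
from robustness import datasets, defaults, model_utils, train
from cox.utils import Parameters

ds = datasets.CIFAR('/path/to/cifar')
model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
train_loader, val_loader = ds.make_loaders(batch_size=128, workers=8)

train_args = Parameters({
    'out_dir': 'train_out',
    'adv_train': 0,
    'step_lr_gamma': 0.5,        # halve the LR at each drop instead of 10x
    'lr_interpolation': 'step',  # or 'linear'
    'weight_decay': 5e-4,
    'mixed_precision': 1,        # needs PyTorch compiled with AMP support
})
train_args = defaults.check_and_fill_args(train_args, defaults.TRAINING_ARGS,
                                          datasets.CIFAR)

train.train_model(train_args, model, (train_loader, val_loader))
```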
- Model and data loading:
  - DataParallel is now off by default when loading models, even when `resume_path` is specified (previously it was off for new models, and on for resumed models by default)
  - New `add_custom_forward` for `make_and_restore_model` (see docs for more details, and the sketch below)
  - Can now pass a random seed for training data subsetting
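A minimal sketch of loading a hand-written architecture with the new option. The toy module below is made up for illustration, and the assumption here is that `add_custom_forward=True` tells the wrapper to handle a `forward()` that only accepts the input tensor:

```python
# Sketch only: wrap a custom architecture with make_and_restore_model.
# TinyNet is an illustrative stand-in, not part of the library.
import torch.nn as nn
from robustness import datasets, model_utils

class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes))

    def forward(self, x):
        # Plain forward with no extra keyword arguments; add_custom_forward
        # (per our assumption) lets the wrapper cope with this signature.
        return self.net(x)

ds = datasets.CIFAR('/path/to/cifar')
model, _ = model_utils.make_and_restore_model(
    arch=TinyNet(), dataset=ds, add_custom_forward=True)
```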
- Training:
  - See new CLI features; most have training-as-a-library counterparts
  - Fixed a bug that did not resume the optimizer and schedule
  - Support for custom accuracy functions (sketched after this list)
  - Can now disable `torch.no_grad` for test set eval (in case you have a custom accuracy function that needs gradients even on the val set)
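A sketch of what a custom accuracy hook might look like. The argument name `custom_accuracy` and its return convention (two top-1/top-5 style percentages) are assumptions, not confirmed API; the release notes only state that a custom accuracy function can be supplied for logging, so consult the docs for the real name and signature:

```python
# Sketch only: a custom accuracy function for logging during training.
# ASSUMPTION: the hook is named custom_accuracy and returns two scalar
# percentages (filling the usual top-1/top-5 logging slots).
import torch as ch

def balanced_accuracy(logits, target):
    """Per-class (balanced) accuracy, reported in percent."""
    preds = logits.argmax(dim=1)
    per_class = []
    for c in target.unique():
        mask = target == c
        per_class.append((preds[mask] == c).float().mean())
    bal = 100.0 * ch.stack(per_class).mean()
    return bal, bal  # second value fills the "top-5" logging slot

train_kwargs = {
    'out_dir': 'train_out',
    'adv_train': 0,
    # ASSUMED hook name; see the docs for the actual parameter.
    'custom_accuracy': balanced_accuracy,
}
```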
- PGD:
  - Better random start for l2 attacks
  - Added a `RandomStep` attacker step (useful for large-noise training with varying noise over training; see the sketch after this list)
  - Fixed bug in the `with_image` argument (minor)
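A rough sketch of using the new step class to inject random noise through the standard attacker call. It assumes `RandomStep` is importable from `robustness.attack_steps` and that a step class can be passed directly as the `constraint`; the batch, labels, and noise magnitudes are placeholders:

```python
# Sketch only: use RandomStep through the standard attacker interface.
# ASSUMPTIONS: RandomStep lives in robustness.attack_steps and a step class
# can be passed as 'constraint'; eps/step_size values are placeholders.
import torch as ch
from robustness import datasets, model_utils
from robustness.attack_steps import RandomStep

ds = datasets.CIFAR('/path/to/cifar')
model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
model = model.cuda().eval()

x = ch.rand(8, 3, 32, 32).cuda()       # placeholder batch
y = ch.zeros(8, dtype=ch.long).cuda()  # placeholder labels

attack_kwargs = {
    'constraint': RandomStep,  # step class instead of the usual '2'/'inf'
    'eps': 1.0,
    'step_size': 1.0,
    'iterations': 1,
    'do_tqdm': False,
}
_, x_noisy = model(x, y, make_adv=True, **attack_kwargs)
```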
- Model saving:
  - Accuracies are now saved in the checkpoint files themselves (instead of just in the log stores)
  - Removed redundant checkpoints table from the log store, as it is a duplicate of the latest checkpoint file and just wastes space
- Cleanup:
  - Remove redundant `save_checkpoint` function in helpers file
  - Code flow improvements
v1.1
Release stuff
v1.0-post1
Update README.rst