A comprehensive GUI application for visualizing and understanding adversarial attacks on deep neural networks.
Screenshots: Attack Progression and Attack Surface Visualization.
- FGSM (Fast Gradient Sign Method)
- PGD (Projected Gradient Descent)
- DeepFool (Minimal perturbation attack)
- C&W (Carlini & Wagner attack)
- Real-time attack progression tracking
- Side-by-side original vs adversarial image comparison
- Enhanced perturbation visualization (10x amplified)
- Top 5 prediction confidence analysis
- 3D attack surface mapping
- Gradient flow visualization
- Vulnerability heatmaps
- Pexels API integration for image search
- Unsplash fallback support
- Direct URL image loading
- Automatic fallback to dummy images (optional)
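As a rough sketch of the image-sourcing path, the Pexels search endpoint can be queried as below. The endpoint and response layout follow the public Pexels v1 API; `fetch_image()` is an illustrative helper, not the project's actual `utils/image_utils.py` code.

```python
# Sketch: fetch a candidate image from the Pexels search API.
# fetch_image() is illustrative, not this project's actual helper.
import io
import os

import requests
from PIL import Image

PEXELS_SEARCH_URL = "https://api.pexels.com/v1/search"

def fetch_image(query: str, api_key: str) -> Image.Image:
    """Return the first search hit for `query` as an RGB PIL image."""
    resp = requests.get(
        PEXELS_SEARCH_URL,
        headers={"Authorization": api_key},
        params={"query": query, "per_page": 1},
        timeout=10,
    )
    resp.raise_for_status()
    photos = resp.json().get("photos", [])
    if not photos:
        raise ValueError(f"No Pexels results for {query!r}")
    image_bytes = requests.get(photos[0]["src"]["medium"], timeout=10).content
    return Image.open(io.BytesIO(image_bytes)).convert("RGB")

if __name__ == "__main__":
    print(fetch_image("cat", os.environ["PEXELS_API_KEY"]).size)
```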
- Python 3.8 or higher
- CUDA-compatible GPU (optional, but recommended)
```bash
git clone https://github.com/imjbassi/adversarial-visualizer.git
cd adversarial-attack-visualizer
pip install -r requirements.txt
cp .env.example .env
```

Edit `.env` and insert your Pexels API key:

```
PEXELS_API_KEY=your_pexels_api_key_here
```

Verify the setup, then launch the application:

```bash
python test_setup.py
python scripts/run_attack.py
```
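The checks in `test_setup.py` are not reproduced here; a minimal check along these lines (assuming `python-dotenv` is used to read `.env`) would confirm the environment is ready:

```python
# Minimal sanity check, roughly what a setup script is expected to verify.
# Assumes python-dotenv is installed; the real test_setup.py may differ.
import os

import torch
from dotenv import load_dotenv

load_dotenv()  # pull PEXELS_API_KEY from .env into the environment

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("Pexels key set:", bool(os.getenv("PEXELS_API_KEY")))
```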
- Epsilon: Perturbation magnitude (0.001 - 0.1)
- Iterations: Number of attack iterations (10 - 100)
- Attack Method: FGSM, PGD, DeepFool, or C&W
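As a rough illustration, the slider values above map onto keyword arguments of an attack call; `run_attack()` and its signature here are hypothetical, shown only to make the parameters concrete.

```python
# Hypothetical mapping of GUI sliders to an attack configuration.
ATTACK_CONFIG = {
    "method": "pgd",      # one of: "fgsm", "pgd", "deepfool", "cw"
    "epsilon": 0.03,      # perturbation magnitude, within the 0.001-0.1 slider range
    "iterations": 40,     # attack iterations, within the 10-100 slider range
}

# adversarial = run_attack(model, image, label, **ATTACK_CONFIG)  # illustrative call
```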
```
adversarial-attack-visualizer/
├── scripts/
│   └── run_attack.py
├── attacks/
│   ├── fgsm.py
│   ├── pgd.py
│   ├── deepfool.py
│   └── cw.py
├── utils/
│   ├── image_utils.py
│   └── visualization.py
├── requirements.txt
├── test_setup.py
├── .env.example
├── .gitignore
└── README.md
```
- Launch the application
- Enter search term (e.g., "cat", "dog", "car")
- Select attack method
- Adjust parameters using sliders
- Click "Search & Attack"
- 3D Attack Surface: Compare attack effectiveness across methods and parameters
- Gradient Flow: Visualize gradient magnitudes during attacks
- Vulnerability Heatmap: Identify most vulnerable image regions
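As a sketch of what the Vulnerability Heatmap view computes, the per-pixel perturbation magnitude can be plotted directly; the function below is illustrative and may differ from the project's `utils/visualization.py`.

```python
# Sketch: per-pixel perturbation magnitude rendered as a heatmap.
import matplotlib.pyplot as plt
import torch

def plot_vulnerability_heatmap(original: torch.Tensor, adversarial: torch.Tensor) -> None:
    """original/adversarial: (3, H, W) tensors in [0, 1]."""
    delta = (adversarial - original).abs().sum(dim=0)  # collapse channels -> (H, W)
    plt.imshow(delta.cpu().numpy(), cmap="hot")
    plt.colorbar(label="|perturbation| per pixel")
    plt.title("Vulnerability heatmap")
    plt.show()
```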
- Architecture: ResNet-18
- Dataset: ImageNet pretrained
- Input Size: 224x224 RGB
- Normalization: ImageNet standard (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
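In torchvision terms, the model and preprocessing described above look roughly like this (the weight-enum API assumes torchvision >= 0.13):

```python
# ImageNet-pretrained ResNet-18 with standard 224x224 preprocessing.
import torch
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```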
- FGSM: Single-step gradient-based attack
- PGD: Multi-step projected gradient descent with momentum
- DeepFool: Iterative minimal perturbation method
- C&W: Optimization-based attack with L2 norm constraints
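For intuition, here is a minimal FGSM step matching the single-step description above; PGD repeats a similar step and projects back into the epsilon-ball. This sketch may differ from the project's `attacks/fgsm.py` in details such as clipping bounds.

```python
# Minimal FGSM sketch: one signed-gradient step, clamped to valid pixel range.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon):
    """image: (1, 3, H, W) tensor in [0, 1]; label: (1,) class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```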
- Fork the repository
- Create a feature branch (`git checkout -b feature/your-feature`)
- Commit your changes (`git commit -m 'Add feature'`)
- Push to your branch (`git push origin feature/your-feature`)
- Open a Pull Request
This project is licensed under the MIT License. See the LICENSE file for details.
- PyTorch team for the deep learning framework
- ImageNet dataset contributors
- Pexels and Unsplash for image APIs
- Researchers behind adversarial methods
```bibtex
@software{adversarial_attack_visualizer,
  title={Adversarial Attack Visualizer},
  author={Your Name},
  year={2025},
  url={https://github.com/yourusername/adversarial-attack-visualizer}
}
```
- CUDA out of memory: Reduce batch size or switch to CPU
- Missing dependencies: Run `pip install -r requirements.txt`
- API key errors: Check the `.env` file
- Slow performance: Use GPU acceleration
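One way to act on the "switch to CPU" advice is to pick the device defensively at load time; this is a sketch, not the project's actual loader.

```python
# Sketch: prefer CUDA, but fall back to CPU if the GPU is unavailable
# or runs out of memory while loading the model.
import torch
from torchvision import models

def load_model():
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    try:
        return model.to(device), device
    except RuntimeError:  # e.g. CUDA out of memory
        return model.to("cpu"), torch.device("cpu")
```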
- Open a GitHub issue
- Search existing issues
- Review documentation and logs
- Support for additional attacks (e.g., JSMA, BIM)
- Custom model uploads
- Batch attack support
- Result export capability
- Performance benchmarking

