AI Security Platform: Defense (227 engines) + Offense (39K+ payloads) | 🎓 Academy: 159 lessons + 8 labs | RLM-Toolkit | OWASP LLM/ASI Top 10 | Red Team toolkit for AI
-
Updated
Feb 8, 2026 - HTML
AI Security Platform: Defense (227 engines) + Offense (39K+ payloads) | 🎓 Academy: 159 lessons + 8 labs | RLM-Toolkit | OWASP LLM/ASI Top 10 | Red Team toolkit for AI
Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.
Veil Armor is an enterprise-grade security framework for Large Language Models (LLMs) that provides multi-layered protection against prompt injections, jailbreaks, PII leakage, and sophisticated attack vectors.
🤖 Test and secure AI systems with advanced techniques for Large Language Models, including jailbreaks and automated vulnerability scanners.
Evaluates LLM safety failure modes across prompt attacks, context overflow, and RAG poisoning.
Add a description, image, and links to the llm-attacks topic page so that developers can more easily learn about it.
To associate your repository with the llm-attacks topic, visit your repo's landing page and select "manage topics."