A comprehensive security scanner for AI-powered applications that detects exposed API keys, AI provider endpoints, dependencies, and calculates risk scores for your codebase.
- API Key Detection: Identifies exposed API keys for OpenAI, Anthropic, Google, and other AI providers using entropy analysis
- Endpoint Scanning: Detects AI provider API endpoints in your code
- Dependency Analysis: Scans package files for AI-related dependencies
- Risk Scoring: Calculates an overall risk score based on findings severity
- Multi-language Support: Scans Go, Python, JavaScript, TypeScript, JSON, YAML, and .env files
- Go 1.25.4 or later
- Git (for cloning the repository)
- Clone the repository:
git clone https://github.com/etyvrox/ankerit_ai_scanner
cd ankerit_ai_scanner- Build the application:
go build -o ankerit main.goAlternatively, you can install it directly:
go installYou can also run the scanner directly without building:
go run main.go -path /path/to/scanScan the current directory:
./ankeritOr specify a path:
./ankerit -path /path/to/your/project-path: Path to scan (default: current directory.)
The scanner outputs JSON results with the following structure:
{
"findings": [
{
"type": "secret",
"provider": "OpenAI",
"file": "config.env",
"line": 5,
"severity": "critical"
},
{
"type": "endpoint",
"provider": "Anthropic",
"file": "src/api.js",
"line": 12,
"match": "api.anthropic.com",
"severity": "high"
}
],
"risk_score": 70
}- secret: Exposed API keys or credentials
- endpoint: AI provider API endpoints detected in code
- dependency: AI-related dependencies found in package files
- critical: Exposed secrets (API keys, credentials)
- high: AI provider endpoints detected
- medium: AI-related dependencies
For comprehensive AI security scanning including advanced attack vector detection, visit platform.ankerit.io.
The Ankerit platform provides extended functionality beyond basic scanning:
- Direct Prompt Injection: Identifies vulnerabilities where user input is directly concatenated into prompts without sanitization
- Indirect Prompt Injection: Detects cases where external data sources (databases, APIs, files) are injected into prompts
- Jailbreak Attempts: Scans for common jailbreak patterns and adversarial prompts
- Context Injection: Detects manipulation of system prompts, instructions, or context windows
- Multi-turn Injection: Analyzes conversation history for injection attempts across multiple interactions
- Parameter Tampering: Detects attempts to modify model parameters, temperature settings, or sampling configurations
- Output Format Manipulation: Identifies vulnerabilities in structured output parsing (JSON, XML, etc.)
- Token Limit Bypass: Scans for attempts to exceed context windows or token limits
- Model Switching: Detects unauthorized attempts to switch between different models or versions
- Training Data Extraction: Identifies prompts designed to extract training data or memorized information
- PII Leakage: Scans for potential exposure of personally identifiable information in prompts or responses
- Sensitive Data Exposure: Detects inclusion of sensitive business data, credentials, or proprietary information
- Cross-User Data Leakage: Analyzes multi-tenant applications for data isolation vulnerabilities
- API Key Rotation: Monitors for stale or exposed API keys requiring rotation
- Rate Limit Bypass: Detects attempts to circumvent rate limiting mechanisms
- Token Replay: Identifies vulnerabilities in authentication token handling
- Privilege Escalation: Scans for unauthorized access to higher-privilege models or features
- Dependency Vulnerabilities: Deep analysis of AI/ML library dependencies for known CVEs
- Model Integrity: Verifies model files and weights haven't been tampered with
- Third-party API Security: Assesses security posture of integrated AI services
- Container Security: Scans Docker images and containers for AI-related vulnerabilities
- Prompt Engineering Attacks: Detects sophisticated prompt manipulation techniques
- Adversarial Examples: Identifies potential for adversarial input generation
- Model Evasion: Scans for attempts to bypass content filters or safety mechanisms
- Backdoor Detection: Analyzes models and code for potential backdoors or trojans
- GDPR Compliance: Ensures AI systems handle personal data in compliance with regulations
- AI Ethics Checks: Validates adherence to ethical AI principles and guidelines
- Audit Logging: Verifies proper logging and monitoring of AI interactions
- Data Retention Policies: Checks compliance with data retention and deletion requirements
- Input Validation: Comprehensive analysis of input sanitization and validation
- Output Filtering: Verifies proper filtering of model outputs before presentation
- Error Handling: Detects information leakage through error messages
- Resource Exhaustion: Identifies potential for DoS attacks through resource-intensive prompts
The Ankerit platform offers:
- CI/CD Integration: Automated scanning in your build pipeline
- API Access: Programmatic access to security scanning capabilities
- Web Dashboard: Real-time monitoring and reporting
- Custom Rules: Define organization-specific security policies
- Alerting: Configure notifications for critical findings
Visit platform.ankerit.io to access these advanced features and protect your AI applications from sophisticated attack vectors.
.go- Go source files.py- Python source files.js- JavaScript source files.ts- TypeScript source files.json- JSON configuration files.yml,.yaml- YAML configuration files.env- Environment variable files
- OpenAI
- Anthropic (Claude)
- Google Gemini
- Azure OpenAI
- Mistral AI
- Cohere
- Hugging Face
./ankerit -path ~/my-ai-project./ankerit./ankerit -path /path/to/project > scan_results.jsonContributions are welcome! Please feel free to submit a Pull Request.
GNU public license
For issues, questions, or feature requests, please open an issue on the repository or visit platform.ankerit.io for enterprise support.