A comprehensive, structured security audit methodology for full-stack applications, specifically designed for:
- Air-gapped environments
- Sensitive data handling
- Multi-tier access control systems
- Enterprise security requirements
This repository contains a complete security audit framework of 14 sequential phases (Phase 0 through Phase 13) that systematically examine every aspect of your application's security posture, from initial reconnaissance through remediation verification.
- Sequential methodology - Each phase builds on previous findings
- LLM-optimized prompts - Designed for use with Claude, GPT-4, or similar
- Air-gap focused - Special attention to offline/isolated environments
- Comprehensive coverage - Auth, APIs, infrastructure, secrets, and more
- Actionable output - Prioritized remediation roadmaps
- Carry-forward system - Context preserved across phases
- Multiple export formats - Markdown, JSON, and CSV reports
- Phase 13: Remediation Verification - Verify fixes, test bypasses, detect regressions
- Threat Modeling (STRIDE) - Integrated into Phase 0 for upfront risk identification
- Rules of Engagement Template - Professional pre-engagement documentation
- JSON/CSV Export - Enterprise-friendly report formats
- 122 Unit Tests - Comprehensive test coverage for scripts
- CI/CD Workflows - Automated testing and security scanning
See CHANGELOG.md for full details.
```
codebase-audit/
├── README.md                           # This file
├── audit-selector.md                   # Auto-detect tech stack & recommend audits
├── core-phases/                        # Main audit phases (run in order)
│   ├── phase-00-reconnaissance.md      # Attack surface mapping
│   ├── phase-01-authentication.md      # Identity verification
│   ├── phase-02-authorization.md       # Access control
│   ├── phase-03-api-security.md        # API endpoint security
│   ├── phase-04-business-logic.md      # Logic flaw detection
│   ├── phase-05-data-layer.md          # Database & storage
│   ├── phase-06-frontend.md            # Client-side security
│   ├── phase-07-infrastructure.md      # IaC & deployment
│   ├── phase-08-secrets-management.md  # Credentials & keys
│   ├── phase-09-logging-monitoring.md  # Audit trails
│   ├── phase-10-error-handling.md      # Failure modes
│   ├── phase-11-cross-cutting.md       # Integration review
│   ├── phase-12-synthesis.md           # Final report
│   └── phase-13-verification.md        # Remediation verification
├── specialized/                        # Deep-dive audits
│   ├── mobile-security-audit.md        # iOS, Android, React Native, Flutter
│   ├── aws-security-audit.md           # AWS services security
│   ├── kubernetes-audit.md             # K8s-specific checks
│   ├── graphql-audit.md                # GraphQL API security
│   ├── api-penetration-testing.md      # Active testing guide
│   └── frontend-performance-audit.md   # Performance & SEO audit
├── skill/                              # Claude Code Skill (auto-triggered)
│   ├── SKILL.md                        # Main skill entry point
│   ├── phases/                         # Condensed phase instructions
│   ├── specialized/                    # Condensed specialized audits
│   ├── templates/                      # Finding & report templates
│   └── scripts/                        # Utility scripts (Python)
│       ├── detect_stack.py             # Auto-detect technologies
│       ├── init_audit.py               # Initialize .audit/ folder
│       ├── validate_finding.py         # Validate finding format
│       └── generate_report.py          # Compile final report
├── compliance/                         # Compliance framework mappings
│   └── compliance-mapping.md           # OWASP, SOC2, GDPR, PCI-DSS, HIPAA
├── templates/                          # Documentation templates
│   ├── finding-template.md             # Individual finding format
│   ├── audit-context-template.md       # AI session memory template
│   ├── progress-tracker.md             # Audit progress tracking
│   └── rules-of-engagement.md          # Pre-engagement questionnaire
├── tests/                              # Unit tests (pytest)
│   ├── test_detect_stack.py            # Stack detection tests
│   ├── test_validate_finding.py        # Finding validation tests
│   └── test_generate_report.py         # Report generation tests
├── checklists/                         # Quick-reference checklists
│   └── master-checklist.md             # Consolidated checklist
└── .github/
    └── ISSUE_TEMPLATE/                 # GitHub issue templates
        └── security-finding.md         # Finding issue template
```
This framework supports two usage modes to fit your workflow:
If you use Claude Code (Anthropic's CLI), the skill provides automated orchestration:
Installation:
```bash
# Option 1: Copy (simple, but requires manual updates)
cp -r skill ~/.claude/skills/security-audit

# Option 2: Symlink (recommended - auto-updates when you git pull)
ln -s /path/to/codebase-audit/skill ~/.claude/skills/security-audit
```
Updating:
```bash
# If you used symlink: just pull the latest
git pull

# If you used copy: re-copy the skill folder
cp -r skill ~/.claude/skills/security-audit
```
Usage:
```
# Just ask Claude to audit your codebase:
"Run a security audit on this codebase"
"Check this app for vulnerabilities"
"Perform a security review"
```
What happens automatically:
- Claude detects your technology stack (frameworks, cloud, infrastructure); a sketch follows this list
- Recommends appropriate audit phases and specialized audits
- Creates a `.audit/` folder for findings and reports
- Runs phases sequentially with context preservation
- Generates a final report with prioritized remediation
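Stack detection is handled by `skill/scripts/detect_stack.py`. As a rough, hypothetical illustration of the idea (the real script's heuristics, file coverage, and output format may differ), detection can be approximated by scanning for well-known manifest files:

```python
from pathlib import Path

# Hypothetical manifest-to-technology map; the real detect_stack.py
# likely covers more ecosystems and uses deeper heuristics.
MARKERS = {
    "package.json": "Node.js / JavaScript",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "pom.xml": "Java (Maven)",
    "go.mod": "Go",
    "Dockerfile": "Docker",
    "main.tf": "Terraform",
}

def detect_stack(project_root: str) -> list[str]:
    """Return technologies guessed from well-known manifest files."""
    root = Path(project_root)
    found = {tech for marker, tech in MARKERS.items() if any(root.rglob(marker))}
    return sorted(found)

if __name__ == "__main__":
    print(detect_stack("../my-project"))
```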
Works with ChatGPT, Cursor, Aider, Windsurf, or any AI assistant:
- Start with `audit-selector.md` to determine your audit path
- Run phases sequentially from `core-phases/`
- Save carry-forward summaries between sessions
- Use `templates/` for consistent documentation
- Reference `compliance/` for regulatory mapping
See detailed workflow below.
This framework is designed to work with AI coding assistants. The recommended approach uses Claude Code (Anthropic's CLI tool), but it also works with other AI tools.
- Claude Code installed (`npm install -g @anthropic-ai/claude-code`)
- Access to the codebase you want to audit
- Terminal/command line access
Clone this repository to a separate folder (not inside your target codebase):
```bash
# Clone the audit framework
git clone https://github.com/your-username/codebase-audit.git
cd codebase-audit
```
Start Claude Code from the framework folder and let it understand the audit methodology:
```bash
# Start Claude Code
claude

# Once inside Claude Code, initialize the framework context
/init
```
The /init command helps Claude understand the complete audit framework, including all phases, templates, and guidelines.
Tell Claude the path to your target codebase and ask it to begin the audit:
Example prompts to start:
Following this audit framework, please audit my codebase located at "../my-project/"
I want to run a security audit on my application at "C:/Projects/my-app/".
Please follow the phases in this framework, starting with Phase 0 Reconnaissance.
Audit the codebase at "../my-saas-app/" using this security framework.
Focus on authentication and API security first.
For performance audits:
Run a frontend performance audit on "../my-website/" using the `specialized/frontend-performance-audit.md` guide.
Claude will automatically:
- Create the `.audit/` folder in your target project for all findings
- Initialize `audit-context.md` to track progress and enable resumption
- Run phases sequentially (0 through 12), building on previous findings
- Document findings using the templates in this framework
- Ask about `.gitignore` before adding sensitive audit files
You can guide the process with prompts like:
Continue to the next phase
Focus more on the API endpoints in /src/api/
Skip Phase 7 (Infrastructure) - we don't use Kubernetes
Run the specialized GraphQL audit on our API
If you need to stop and resume later, Claude will read the audit-context.md file:
Resume the security audit on "../my-project/"
Continue the audit from where we left off
After completing all phases:
Generate the final synthesis report for this audit
Create an executive summary of all findings
- Open the phase markdown file (e.g., `core-phases/phase-00-reconnaissance.md`)
- Copy the prompt section (between triple backticks)
- Paste into ChatGPT along with your code files
- Manually save the "Carry-Forward Summary" for the next phase
- Repeat for each phase, pasting the previous summary
- Open your target codebase in Cursor
- Reference this framework in your prompts: "Using the audit methodology from @codebase-audit/core-phases/phase-00-reconnaissance.md, analyze this codebase for security vulnerabilities"
- Use `@file` references to include phase prompts
- Start your AI tool in the framework directory
- Provide the target codebase path
- Reference phase files as context
- Feed code in logical chunks - Don't overwhelm the context window
- Save carry-forward summaries - These provide crucial context between sessions
- Document as you go - Don't wait until the end
- Verify findings - AI analysis should be validated by security professionals
| Phase | Name | Purpose | Time Est. |
|---|---|---|---|
| 0 | Reconnaissance | Map attack surface, identify components | 2h |
| 1 | Authentication | Validate identity verification | 3h |
| 2 | Authorization | Check access control enforcement | 3h |
| 3 | API Security | Audit all API endpoints | 4h |
| 4 | Business Logic | Find logic flaws | 3h |
| 5 | Data Layer | Database & storage security | 3h |
| 6 | Frontend | Client-side vulnerabilities | 2h |
| 7 | Infrastructure | IaC & deployment security | 3h |
| 8 | Secrets | Credential management | 2h |
| 9 | Logging | Audit trail completeness | 2h |
| 10 | Error Handling | Secure failure modes | 2h |
| 11 | Cross-Cutting | Integration vulnerabilities | 3h |
| 12 | Synthesis | Final report & prioritization | 2h |
Total Estimated Time: 34 hours
For deeper analysis of specific technologies:
| Audit | Use When |
|---|---|
| `mobile-security-audit.md` | iOS, Android, React Native, Flutter apps |
| `aws-security-audit.md` | AWS-deployed applications |
| `kubernetes-audit.md` | K8s/container deployments |
| `graphql-audit.md` | GraphQL APIs |
| `api-penetration-testing.md` | Active security testing |
| `frontend-performance-audit.md` | Frontend performance, Core Web Vitals, SEO |
Use compliance/compliance-mapping.md to map findings to:
- OWASP Top 10 (2021) - Web application security
- SOC 2 - Trust service criteria
- GDPR - EU data protection
- PCI-DSS v4.0 - Payment card security
- HIPAA - Healthcare data protection
- ISO 27001:2022 - Information security management
When using AI assistants (Claude, GPT-4, etc.) to conduct audits with this framework:
All AI-generated documents MUST be saved to a .audit/ folder in the project being audited:
```
target-project/
├── .audit/                    # AI-generated audit artifacts
│   ├── audit-context.md       # Session memory (AI resumes from here)
│   ├── findings/              # Individual finding documents
│   ├── reports/               # Phase reports and summaries
│   ├── carry-forward/         # Carry-forward summaries
│   └── final-report.md        # Synthesized final report
├── src/
└── ...
```
What goes in .audit/:
- `audit-context.md` - Session memory for AI to resume audits
- Finding documents (from `templates/finding-template.md`)
- Progress tracker instances
- Carry-forward summaries
- Phase reports and final synthesis
The AI MUST create and maintain .audit/audit-context.md using the template in templates/audit-context-template.md. This file enables:
- Resuming audits after breaks or codebase changes
- Tracking remediation status of findings (open/fixed/in-progress)
- Preserving context (carry-forward summaries, notes)
AI Behavior:
- At audit start: Check if `.audit/audit-context.md` exists (a sketch follows this list)
- If it exists: Read it to understand previous state and resume
- If not: Create it using the template
- After each phase: Update the context file with current state
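A rough sketch of that resume/update behavior, using hypothetical helpers rather than the actual skill implementation:

```python
from datetime import date
from pathlib import Path

# Hypothetical helpers; the real context format is defined by
# templates/audit-context-template.md.
CONTEXT = Path(".audit/audit-context.md")

def load_or_create_context(template: str = "templates/audit-context-template.md") -> str:
    """Resume from the existing context file, or seed a new one from the template."""
    if CONTEXT.exists():
        return CONTEXT.read_text()
    CONTEXT.parent.mkdir(parents=True, exist_ok=True)
    CONTEXT.write_text(Path(template).read_text())
    return CONTEXT.read_text()

def record_phase_complete(phase: int, summary: str) -> None:
    """Append a phase note so the next session can pick up where this one stopped."""
    with CONTEXT.open("a") as fh:
        fh.write(f"\n## Phase {phase} completed ({date.today()})\n{summary}\n")
```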
CRITICAL: Before adding .audit/ to .gitignore, the AI MUST:
- Inform the user that audit artifacts exist in `.audit/`
- Explain the trade-offs:
  - Add to `.gitignore`: Keeps sensitive findings out of version control (recommended)
  - Do NOT add: Allows audit history tracking (useful for compliance)
- Explicitly ask: "Would you like me to add `.audit/` to your `.gitignore`?"
- Wait for user confirmation before making changes
This framework includes special checks for air-gapped environments:
- ✅ External network dependency detection
- ✅ Offline certificate management
- ✅ Internal logging requirements
- ✅ Update mechanism review
- ✅ Telemetry/analytics detection
Look for "Air-Gap Specific" sections in each phase.
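For instance, the external network dependency check can be roughed out by scanning for hard-coded outbound URLs. This is only a minimal sketch; the phase prompts go further (package registries, CDNs, telemetry SDKs, update endpoints):

```python
import re
from pathlib import Path

# Hypothetical quick scan: flag hard-coded external URLs that would break
# (or silently fail) in an air-gapped deployment.
URL_PATTERN = re.compile(r"https?://[\w.-]+[\w/.-]*")
SOURCE_GLOBS = ("*.py", "*.js", "*.ts", "*.yml", "*.yaml", "*.json")

def find_external_urls(project_root: str) -> list[tuple[str, str]]:
    hits = []
    for pattern in SOURCE_GLOBS:
        for path in Path(project_root).rglob(pattern):
            text = path.read_text(errors="ignore")
            for url in URL_PATTERN.findall(text):
                if "localhost" not in url and "127.0.0.1" not in url:
                    hits.append((str(path), url))
    return hits

if __name__ == "__main__":
    for path, url in find_external_urls("../my-project"):
        print(f"{path}: {url}")
```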
After completing all phases, you'll have:
- Finding Database - All security issues with severity ratings
- Attack Chain Analysis - How vulnerabilities combine
- Prioritized Roadmap - What to fix and when
- Executive Summary - Leadership-ready overview
- Technical Recommendations - Architecture improvements
| Level | Description | Action |
|---|---|---|
| Critical | Immediate compromise possible | Fix immediately |
| High | Significant security gap | Fix within 1-4 weeks |
| Medium | Defense-in-depth issue | Fix within 1-3 months |
| Low | Minor concern | Add to backlog |
| Info | Observation/improvement | Consider for future |
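For the JSON/CSV deliverables, findings can be flattened into rows ordered by these severity levels. A minimal sketch with hypothetical sample findings and field names (the real schema is defined by `templates/finding-template.md` and `skill/scripts/generate_report.py`):

```python
import csv
import json

SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3, "Info": 4}

# Hypothetical in-memory findings; in practice these are parsed from .audit/findings/.
findings = [
    {"id": "API-03", "title": "IDOR on order endpoint", "severity": "Critical",
     "phase": 3, "location": "src/api/orders.py", "status": "open"},
    {"id": "AUTH-01", "title": "Missing MFA on admin login", "severity": "High",
     "phase": 1, "location": "src/auth/login.ts", "status": "open"},
]

findings.sort(key=lambda f: SEVERITY_ORDER[f["severity"]])

with open("final-report.json", "w") as fh:
    json.dump(findings, fh, indent=2)

with open("final-report.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=findings[0].keys())
    writer.writeheader()
    writer.writerows(findings)
```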
Improvements welcome! Please submit issues or PRs for:
- Additional specialized audits
- Checklist improvements
- New vulnerability patterns
- Better documentation
MIT License - Use freely for your security audits.
This framework provides guidance for security audits but does not guarantee complete coverage. Always complement automated and LLM-assisted analysis with manual review and professional penetration testing for critical systems.
For questions about using this framework:
- Check existing documentation
- Consult security professionals for critical findings
Happy Auditing!