EcoGuard AI is an AI-augmented software development pipeline tool that acts as a unified intelligence layer, analyzing both human-written and AI-generated code for quality, security, and environmental sustainability across the entire software lifecycle.
Its mission is to embed comprehensive oversight directly into developer workflows and CI/CD processes, making best practices intuitive and sustainable outcomes measurable while addressing inefficiencies unique to AI-generated code.
- 🔧 Modularization: Build distinct, pluggable analysis modules
- 💡 Actionable Feedback: Provide clear, context-sensitive suggestions with quantified impact
- 🤖 AI-Specificity: Target inefficiencies and risks unique to AI/ML code and AI-driven development
- 🌱 Carbon-Aware: Explicitly link software choices to environmental impact across the stack
- 🌍 Open-Source First: Foster community contribution and transparency
- ⚙️ Hybrid Deployment: Support local execution, centralized server reporting (Cloud/On-Prem/Air-Gapped)
- 👨‍💻 Developer-Centric: Prioritize seamless integration into developer workflows
- 🌐 Holistic Scope: Cover application code, infrastructure, and containers
- Static Analysis: AST-based analysis for comprehensive code evaluation
- Green Software (Eco-Linter): Detection of inefficient patterns with carbon impact estimation
- AI-Code Specifics: Identification of AI-generated code anti-patterns and inefficiencies
- Security (SAST): Vulnerability detection and secure coding practice enforcement
- Dependency Analysis: Third-party library vulnerability scanning
- Carbon Quantification: Static estimation and runtime measurement of code's carbon footprint
- Cloud Resource Optimization: Integration with cloud provider sustainability APIs
- Infrastructure as Code Analysis: Scanning for cost, security, and carbon optimization
- Container Image Analysis: Optimizing container images for efficiency and security
- MCP Feedback Loop: Structured findings sent to AI orchestration platforms
- AI-Assisted Refactoring: Complex code improvement suggestions
- Automated Fixes: Automatic corrections for common issues
- Predictive Analysis: Proactive guidance for AI code generation
Stage 1: Project Setup & Architecture Design ✅
- Create GitHub repository structure
- Define project architecture and module interfaces
- Set up initial documentation framework
- Establish coding standards and contribution guidelines
- GitHub Actions CI/CD pipeline configured and working
Stage 2: Core Dependencies & Environment Setup ✅ COMPLETED
- Enhanced Python development environment with comprehensive pyproject.toml
- Configured advanced development tools (black, flake8, mypy, pytest, ruff, tox)
- Enhanced CLI entry point structure with robust functionality
- Advanced GitHub Actions CI/CD with multi-matrix testing, security scans, and artifact collection
- Pre-commit hooks with comprehensive code quality checks
- Development automation tools (Makefile, setup scripts, tox environments)
- Fixed code formatting issues and line length violations
- Verified complete development workflow and package building functionality
Stage 3: AST Analysis Research & Prototyping
- Deep dive into Python AST module capabilities
- Create proof-of-concept AST parser for basic Python constructs (see the sketch after this list)
- Research Tree-sitter and ANTLR for multi-language support
- Document AST traversal patterns and node identification strategies
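A minimal proof of concept for this stage, using only the standard-library `ast` module, might look like the following (the source snippet and reporter name are illustrative):

```python
import ast

SOURCE = '''
def greet(name):
    message = "Hello, " + name
    return message
'''

class FunctionReporter(ast.NodeVisitor):
    """Walk the tree and report every function definition encountered."""

    def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
        args = [a.arg for a in node.args.args]
        print(f"function {node.name!r} at line {node.lineno}, args: {args}")
        self.generic_visit(node)  # continue into nested definitions

FunctionReporter().visit(ast.parse(SOURCE))
```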
Stage 4: Green Software Principles Research
- Analyze existing green software tools (ec0lint, Creedengo)
- Research CodeCarbon and CarbonTracker integration possibilities
- Identify top 10 Python anti-patterns for energy efficiency
- Create carbon impact estimation methodology
Stage 5: AI Code Pattern Analysis
- Research common AI-generated code inefficiencies
- Analyze LLM output patterns from popular AI coding assistants
- Document "hallucinated" code patterns and redundancies
- Create taxonomy of AI-specific code smells
Stage 6: Security Analysis Framework Research
- Study Bandit and Pysa implementation approaches
- Research OWASP Top 10 mapping to static analysis rules
- Investigate data flow analysis techniques
- Document security rule prioritization methodology
Stage 7: Runtime Profiling Integration Research
- Explore cProfile, memory_profiler, and timeit integration
- Research lightweight profiling techniques for CI/CD
- Investigate correlation methods between static and runtime analysis
- Design profiling data collection framework
Stage 8: MCP Integration Architecture
- Research Model Control Plane APIs and protocols
- Design feedback loop architecture for AI platforms
- Create communication protocol specifications
- Document integration patterns for popular AI orchestration tools
Stage 9: Server Architecture Design
- Design scalable server architecture (FastAPI + PostgreSQL)
- Plan database schema for scan results and historical data
- Design API endpoints and authentication mechanisms
- Create deployment strategies for Cloud/On-Prem/Air-Gapped
Stage 10: IDE Integration Strategy
- Research VS Code extension development
- Investigate JetBrains plugin architecture
- Design Language Server Protocol implementation
- Plan real-time analysis integration approach
Stage 11: Basic AST Engine Implementation
- Implement core AST parsing and traversal engine
- Create visitor pattern framework for rule execution
- Develop standardized Issue reporting class (sketched below)
- Add basic error handling and logging
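One way the engine core could fit together, with the Issue class as a plain dataclass and rules as AST visitors (the field names here are assumptions, not a final schema):

```python
import ast
from dataclasses import dataclass


@dataclass
class Issue:
    """Standardized finding emitted by every rule (illustrative fields)."""
    rule_id: str
    message: str
    file_path: str
    line: int
    severity: str = "warning"
    category: str = "quality"


class BaseRule(ast.NodeVisitor):
    """Rules subclass this visitor and append Issue objects as they walk."""

    rule_id = "BASE000"

    def __init__(self, file_path: str) -> None:
        self.file_path = file_path
        self.issues: list[Issue] = []


def run_rules(source: str, file_path: str, rule_classes: list) -> list[Issue]:
    """Parse once, then let each rule walk the same tree."""
    tree = ast.parse(source, filename=file_path)
    findings: list[Issue] = []
    for rule_cls in rule_classes:
        rule = rule_cls(file_path)
        rule.visit(tree)
        findings.extend(rule.issues)
    return findings
```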
Stage 12: Rule Engine Framework
- Design pluggable rule architecture
- Implement rule registration and discovery system (see the sketch below)
- Create rule configuration and severity management
- Add rule execution pipeline with performance monitoring
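Registration and discovery could start as a class decorator feeding a module-level registry, building on the hypothetical `BaseRule` from the Stage 11 sketch:

```python
RULE_REGISTRY: dict = {}

def register_rule(rule_id: str):
    """Class decorator that stamps the rule ID and makes the rule discoverable."""
    def decorator(cls):
        cls.rule_id = rule_id
        RULE_REGISTRY[rule_id] = cls
        return cls
    return decorator

# Usage with the BaseRule sketch from Stage 11:
# @register_rule("ECO001")
# class StringConcatInLoop(BaseRule): ...
#
# Configuration can then enable or disable rules by ID:
# active = [RULE_REGISTRY[rid] for rid in enabled_ids]
```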
Stage 13: First Code Quality Rules
- Implement unused variable detection
- Add cyclomatic complexity analysis (illustrated below)
- Create function length and parameter count rules
- Add basic naming convention checks
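For the cyclomatic complexity bullet, a simplified McCabe-style count over standard `ast` node types gives a workable first approximation:

```python
import ast

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    """1 + one per branch point; `and`/`or` chains are counted once each
    (a simplification of the full McCabe definition)."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(n, branch_nodes) for n in ast.walk(func))

tree = ast.parse("def f(x):\n    if x > 0:\n        return 1\n    return 0\n")
print(cyclomatic_complexity(tree.body[0]))  # 2
```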
Stage 14: Initial Green Software Rules
- Implement inefficient string concatenation detection (see the sketch below)
- Add list comprehension optimization suggestions
- Create generator vs list usage recommendations
- Add basic loop optimization rules
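A sketch of the string-concatenation rule follows; it is a heuristic only, since without type inference the engine cannot prove the `+=` target is a string:

```python
import ast

class StringConcatInLoop(ast.NodeVisitor):
    """Flag augmented `+=` inside loops, where ''.join() is usually cheaper."""

    def __init__(self) -> None:
        self.loop_depth = 0

    def visit_AugAssign(self, node: ast.AugAssign) -> None:
        if self.loop_depth and isinstance(node.op, ast.Add):
            print(f"line {node.lineno}: '+=' in a loop; if this builds a "
                  "string, prefer collecting parts and ''.join(parts)")

    def _visit_loop(self, node: ast.AST) -> None:
        self.loop_depth += 1
        self.generic_visit(node)
        self.loop_depth -= 1

    visit_For = visit_While = _visit_loop

SRC = "out = ''\nfor i in range(10):\n    out += str(i)\n"
StringConcatInLoop().visit(ast.parse(SRC))
```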
Stage 15: First AI-Specific Rules
- Implement redundant import detection
- Add verbose code pattern identification
- Create duplicate function detection within classes
- Add AI-generated boilerplate identification
Stage 16: CLI Interface Development
- Create comprehensive command-line interface
- Add file and directory scanning capabilities
- Implement output formatting options (JSON, text, XML)
- Add configuration file support
Stage 17: JSON Report Structure
- Design comprehensive JSON report schema (example below)
- Implement structured output with metadata
- Add severity levels and impact metrics
- Create report validation and schema documentation
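A report in that spirit can be emitted with nothing more than the standard library; every field name and metric below is illustrative rather than the final schema:

```python
import json
from datetime import datetime, timezone

report = {
    "schema_version": "1.0",  # illustrative, not the final schema
    "tool": {"name": "ecoguard-ai", "version": "0.1.0"},
    "generated_at": datetime.now(timezone.utc).isoformat(),
    "summary": {"files_scanned": 1, "issues_found": 1},
    "issues": [
        {
            "rule_id": "ECO001",
            "category": "green",
            "severity": "warning",
            "file": "app/main.py",
            "line": 42,
            "message": "String concatenation in loop; prefer ''.join()",
            "estimated_impact": {"co2_g_saved_per_1k_runs": 0.3},  # hypothetical metric
        }
    ],
}

print(json.dumps(report, indent=2))
```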
Stage 18: Basic Testing Framework
- Set up pytest testing infrastructure
- Create test files with known issues for validation
- Implement rule-specific unit tests
- Add integration tests for CLI functionality
Stage 19: Configuration Management
- Implement YAML/JSON configuration file support
- Add rule enabling/disabling mechanisms
- Create severity threshold configuration
- Add custom rule parameter support
Stage 20: Performance Optimization
- Profile AST parsing and rule execution performance
- Implement caching mechanisms for repeated scans
- Add parallel processing for multiple files
- Optimize memory usage for large codebases
Stage 21: Security Rule Framework
- Implement security-focused AST analysis patterns
- Create vulnerability severity classification system
- Add CWE (Common Weakness Enumeration) mapping
- Implement basic data flow analysis
Stage 22: Core Security Rules
- Add SQL injection vulnerability detection
- Implement hardcoded secrets detection
- Create insecure file handling rules
- Add dangerous function usage detection (exec, eval), as sketched below
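The dangerous-function bullet maps naturally onto a call-site visitor; the CWE reference shown is the standard mapping for code injection:

```python
import ast

DANGEROUS_CALLS = {"eval", "exec"}  # extend with e.g. os.system, pickle.loads

class DangerousCallRule(ast.NodeVisitor):
    """Report calls to functions commonly flagged by SAST tools."""

    def visit_Call(self, node: ast.Call) -> None:
        if isinstance(node.func, ast.Name) and node.func.id in DANGEROUS_CALLS:
            print(f"line {node.lineno}: use of {node.func.id}() - "
                  "possible code-injection risk (CWE-94)")
        self.generic_visit(node)

DangerousCallRule().visit(ast.parse("result = eval(user_input)"))
```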
Stage 23: Dependency Vulnerability Scanner
- Integrate with OSV.dev vulnerability database (see the sketch below)
- Implement requirements.txt and pyproject.toml parsing
- Add CVE severity mapping and reporting
- Create dependency update recommendations
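OSV.dev exposes a public query endpoint, so a first pass needs little more than one HTTP POST per pinned dependency. A sketch using `requests`, with error handling and batching omitted:

```python
import requests

def query_osv(package: str, version: str) -> list[dict]:
    """Query the public OSV.dev API for known vulnerabilities
    affecting a specific PyPI package version."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": package, "ecosystem": "PyPI"},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("vulns", [])

for vuln in query_osv("jinja2", "2.4.1"):
    print(vuln["id"], vuln.get("summary", ""))
```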
Stage 24: Enhanced Green Software Rules
- Add memory-efficient data structure recommendations
- Implement file I/O optimization suggestions
- Create database query efficiency rules
- Add cloud resource usage optimization patterns
Stage 25: Advanced AI Code Analysis
- Implement tensor operation efficiency analysis (PyTorch/TensorFlow)
- Add model loading optimization detection
- Create data pipeline efficiency rules
- Add GPU utilization pattern analysis
Stage 26: Secrets Detection Enhancement
- Implement entropy-based secret detection (illustrated below)
- Add API key pattern recognition
- Create credential scanning with context awareness
- Add false positive reduction mechanisms
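Entropy-based detection typically combines a token regex with a Shannon-entropy threshold; the 4.0 bits-per-character cutoff below is a common heuristic, not a calibrated value:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits per character; random keys score high, English words score low."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

TOKEN = re.compile(r"[A-Za-z0-9+/=_\-]{20,}")

def scan_line(line: str, threshold: float = 4.0) -> list[str]:
    """Return long tokens whose entropy suggests a hardcoded secret."""
    return [t for t in TOKEN.findall(line) if shannon_entropy(t) > threshold]

print(scan_line('AWS_KEY = "AKIAIOSFODNN7EXAMPLE9x8y7z6w5v4u"'))
```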
Stage 27: Code Complexity Analysis
- Implement advanced complexity metrics (Halstead, maintainability index)
- Add cognitive complexity analysis
- Create technical debt estimation
- Add refactoring priority suggestions
Stage 28: Pre-commit Hook Integration
- Create pre-commit framework integration
- Implement selective scanning for changed files
- Add commit blocking based on severity thresholds
- Create developer-friendly error messages
Stage 29: Git Integration
- Implement Git blame integration for issue attribution
- Add commit history analysis for recurring patterns
- Create branch-specific configuration support
- Add pull request comment automation
Stage 30: Error Handling & Resilience
- Implement comprehensive error handling and recovery
- Add graceful degradation for partial scan failures
- Create detailed logging and debugging capabilities
- Add scan result integrity validation
Stage 31: Runtime Profiling Integration
- Implement cProfile integration for CPU analysis (see the sketch below)
- Add memory_profiler for memory usage tracking
- Create execution time measurement framework
- Add resource consumption correlation with static analysis
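A thin wrapper around the standard library's `cProfile` and `pstats` is enough to attach CPU profiles to scan runs:

```python
import cProfile
import io
import pstats

def profile_callable(func, *args, **kwargs):
    """Run func under cProfile and return (result, top-10 stats as text)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    buf = io.StringIO()
    stats = pstats.Stats(profiler, stream=buf)
    stats.sort_stats("cumulative").print_stats(10)
    return result, buf.getvalue()

_, report = profile_callable(sorted, range(100_000), key=lambda x: -x)
print(report)
```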
Stage 32: Carbon Footprint Measurement
- Integrate CodeCarbon for runtime emissions tracking (sketched below)
- Implement static-to-runtime carbon impact correlation
- Add cloud provider carbon intensity data integration
- Create carbon savings estimation algorithms
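CodeCarbon's `EmissionsTracker` already does the heavy lifting here. A minimal measurement harness wrapping it might look like this (assuming `pip install codecarbon`; per CodeCarbon's documented API, `stop()` returns the estimated emissions in kg CO2eq):

```python
from codecarbon import EmissionsTracker  # pip install codecarbon

def measure_emissions(func, *args, **kwargs):
    """Run func while CodeCarbon estimates energy use and CO2-equivalent
    emissions from hardware power draw and regional carbon intensity."""
    tracker = EmissionsTracker(project_name="ecoguard-profiling")
    tracker.start()
    try:
        result = func(*args, **kwargs)
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2eq
    return result, emissions_kg

_, kg = measure_emissions(sum, range(10_000_000))
print(f"estimated emissions: {kg:.9f} kg CO2eq")
```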
Stage 33: Server Foundation
- Implement FastAPI server with PostgreSQL backend
- Create database schema for scan results and metadata
- Add authentication and authorization framework
- Implement basic API endpoints for data ingestion (see the sketch below)
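A minimal ingestion endpoint in the planned FastAPI stack could look like this; the route path and model fields are assumptions, and persistence to PostgreSQL is stubbed out:

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="EcoGuard AI Server")


class IssueIn(BaseModel):
    rule_id: str
    message: str
    file_path: str
    line: int
    severity: str = "warning"


class ScanReport(BaseModel):
    project: str
    commit: Optional[str] = None
    issues: list[IssueIn]


@app.post("/api/v1/scans")
async def ingest_scan(report: ScanReport) -> dict:
    # A real implementation would write to PostgreSQL here;
    # this stub only acknowledges receipt.
    return {"project": report.project, "issues_received": len(report.issues)}
```

Run locally with `uvicorn app:app --reload` (assuming the file is named `app.py`).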
Stage 34: Results Storage & Retrieval
- Implement scan result storage with indexing
- Add historical data management and archiving
- Create efficient querying mechanisms
- Add data migration and backup strategies
Stage 35: Dashboard Backend APIs
- Create REST APIs for dashboard data
- Implement aggregation and analytics endpoints
- Add filtering and search capabilities
- Create trend analysis and reporting APIs
Stage 36: MCP Feedback Implementation
- Implement structured feedback formatting for AI platforms
- Add webhook system for real-time feedback delivery
- Create feedback effectiveness tracking
- Add integration with popular AI orchestration platforms
Stage 37: Server-Client Communication
- Implement secure client-server communication protocol
- Add result synchronization mechanisms
- Create offline mode support with later sync
- Add conflict resolution for concurrent scans
Stage 38: Basic Web Dashboard
- Create React-based web dashboard
- Implement project overview and scan history views
- Add issue browsing and filtering capabilities
- Create basic reporting and export features
Stage 39: Multi-tenant Architecture
- Implement organization and project isolation
- Add user management and role-based access control
- Create resource usage tracking and limits
- Add tenant-specific configuration management
Stage 40: Server Deployment Options
- Create Docker containerization for easy deployment
- Add Kubernetes deployment configurations
- Implement air-gapped deployment support
- Create cloud provider specific deployment guides
Stage 41: Automated Fix Engine
- Implement automatic code fix generation (example transform below)
- Add safe transformation verification
- Create fix preview and approval mechanisms
- Add rollback capabilities for automated changes
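For a taste of what a conservative automated fix can look like, here is an `ast.NodeTransformer` that rewrites `== None` comparisons to `is None`, one of the classic low-risk transforms (`ast.unparse` requires Python 3.9+):

```python
import ast

class NoneComparisonFixer(ast.NodeTransformer):
    """Rewrite `x == None` to `x is None` - a fix safe enough to automate."""

    def visit_Compare(self, node: ast.Compare) -> ast.Compare:
        self.generic_visit(node)
        for i, (op, comp) in enumerate(zip(node.ops, node.comparators)):
            if (isinstance(op, ast.Eq)
                    and isinstance(comp, ast.Constant)
                    and comp.value is None):
                node.ops[i] = ast.Is()
        return node

tree = ast.parse("if result == None:\n    pass\n")
fixed = ast.fix_missing_locations(NoneComparisonFixer().visit(tree))
print(ast.unparse(fixed))  # if result is None: ...
```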
Stage 42: AI-Powered Refactoring
- Integrate LLM for complex refactoring suggestions
- Implement context-aware code improvement recommendations
- Add refactoring impact analysis
- Create human-in-the-loop approval workflows
Stage 43: IDE Extension Development
- Create VS Code extension with real-time analysis
- Implement JetBrains plugin for IntelliJ/PyCharm
- Add Language Server Protocol support
- Create inline fix suggestions and quick actions
Stage 44: Infrastructure as Code Analysis
- Add Terraform configuration analysis
- Implement CloudFormation template scanning
- Create Kubernetes manifest security and efficiency rules
- Add Docker image optimization suggestions
Stage 45: Container Security & Optimization
- Implement Dockerfile best practice analysis (see the sketch below)
- Add container image vulnerability scanning
- Create container resource optimization suggestions
- Add multi-stage build efficiency analysis
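Dockerfile best-practice checks can start as plain line-oriented heuristics before graduating to a real parser; both rules below are illustrative:

```python
import re

def lint_dockerfile(text: str) -> list[str]:
    """Tiny Dockerfile heuristics: unpinned base images and bloated installs."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if re.match(r"\s*FROM\s+\S+:latest\b", line, re.IGNORECASE):
            findings.append(f"line {lineno}: base image uses ':latest'; "
                            "pin a version or digest for reproducible builds")
        if "apt-get install" in line and "--no-install-recommends" not in line:
            findings.append(f"line {lineno}: consider '--no-install-recommends' "
                            "to shrink the image")
    return findings

print(lint_dockerfile("FROM python:latest\nRUN apt-get install -y gcc\n"))
```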
Stage 46: Multi-Language Support
- Add JavaScript/TypeScript analysis capabilities
- Implement Java basic analysis rules
- Create language-agnostic rule framework
- Add support for additional popular languages
Stage 47: Enterprise Features
- Implement SSO integration (SAML, OAuth2, LDAP)
- Add comprehensive audit logging
- Create compliance reporting (SOC2, ISO27001)
- Add advanced policy management and enforcement
Stage 48: Custom Rule Development Platform
- Create web-based rule development interface
- Implement rule testing and validation framework
- Add rule sharing and marketplace capabilities
- Create SDK for external rule development
Stage 49: Advanced Analytics & ML
- Implement machine learning for pattern detection
- Add predictive analysis for code quality trends
- Create anomaly detection for security and performance
- Add intelligent prioritization of issues
Stage 50: Community & Ecosystem
- Launch open-source community platform
- Create plugin marketplace and ecosystem
- Implement community rule sharing
- Add integration with major development platforms and tools
```bash
# Install from PyPI (coming soon)
pip install ecoguard-ai

# Or install from source
git clone https://github.com/ecoguard-ai/ecoguard-ai.git
cd ecoguard-ai
pip install -e .
```

```bash
# Analyze a Python file
ecoguard analyze myfile.py

# Analyze a directory with JSON output
ecoguard analyze src/ --format json --output results.json

# Get help
ecoguard --help
```

```bash
# Clone the repository
git clone https://github.com/ecoguard-ai/ecoguard-ai.git
cd ecoguard-ai

# Automated setup (recommended)
python scripts/setup_dev.py

# Or manual setup
make dev-install
```
```bash
# Run all tests
make test

# Run tests across all Python versions
make test-all

# Format code
make format

# Run linting checks
make lint

# Type checking
make type-check

# Security scans
make security

# Run EcoGuard AI on itself
make self-analyze

# Run all CI checks locally
make ci

# Clean build artifacts
make clean
```

```bash
# Run tests across all Python versions
tox

# Run specific environment
tox -e lint      # Linting
tox -e type      # Type checking
tox -e security  # Security scans
tox -e format    # Code formatting
```

EcoGuard AI uses pre-commit hooks to ensure code quality:
```bash
# Install hooks (done automatically with dev setup)
pre-commit install

# Run hooks manually
pre-commit run --all-files
```

The development toolchain includes:

- Code Quality: black, isort, flake8, ruff, mypy
- Testing: pytest, pytest-cov, pytest-xdist, pytest-mock
- Security: bandit, safety, pip-audit, detect-secrets, semgrep
- Documentation: sphinx, sphinx-rtd-theme, myst-parser
- Automation: pre-commit, tox, twine
```
ecoguard-ai/
├── src/ecoguard_ai/      # Main package
│   ├── cli/              # Command-line interface
│   ├── core/             # Core analysis engine
│   ├── analyzers/        # Specific analyzers
│   └── utils/            # Utilities
├── tests/                # Test suite
├── examples/             # Usage examples
├── scripts/              # Development scripts
├── .github/workflows/    # CI/CD pipelines
└── docs/                 # Documentation
```
EcoGuard AI publishes to PyPI through automated CI/CD, using trusted publishing.
```bash
# Check current version
./scripts/manage-version.sh current

# Bump version (patch/minor/major)
./scripts/manage-version.sh bump patch

# Set specific version
./scripts/manage-version.sh set 0.2.0

# Create full release (recommended)
./scripts/manage-version.sh release 0.2.0
```

- Test PyPI: Automatic publishing on `main` branch pushes
- Production PyPI: Automatic publishing on version tags (`v*`)
- Trusted Publishing: Secure, token-free publishing via OIDC
Use GitHub Actions for controlled releases:
- Go to Actions → Release workflow
- Click Run workflow
- Enter version number (e.g., `0.2.0`)
- Optionally mark as pre-release
This will:
- Update version in all files
- Run full test suite
- Build and verify package
- Create git tag and GitHub release
- Publish to PyPI automatically
See detailed setup instructions in:
- `docs/PUBLISHING.md` - Complete publishing guide
- `docs/PYPI_SETUP.md` - PyPI trusted publisher configuration
We welcome contributions from the community! Please read our Contributing Guidelines for details on how to get started.
This project is licensed under the MIT License - see the LICENSE file for details.
- CodeCarbon project for carbon footprint measurement inspiration
- Bandit project for security analysis patterns
- The broader green software and sustainable computing community
EcoGuard AI - Building a sustainable future, one line of code at a time 🌱