The Council is an autonomous AI-powered code review system that acts as a local MCP (Model Context Protocol) server. It provides intelligent, context-aware code reviews by analyzing your entire codebase, learning from documentation, and applying coding standards automatically.
- Reviews your code with deep context analysis using Repomix to understand your entire codebase
- Learns coding standards from documentation URLs and applies them to future reviews
- Provides structured feedback with severity assessments, issue detection, and suggested fixes
- Integrates seamlessly with Cursor, VS Code, and other MCP-compatible editors
- Maintains code quality through automated housekeeping and standardization
Think of it as your AI code reviewer that never sleeps, continuously learns, and understands your project's full context.
- Install dependencies:
cd council
uv sync
- Configure your API keys by creating a `.env` file:
Option A: Direct OpenAI/Anthropic/etc. (default)
OPENAI_API_KEY=your_openai_api_key_here
COUNCIL_MODEL=openai:gpt-4o # or anthropic:claude-3-5-sonnet-20241022

Option B: LiteLLM Proxy (recommended for custom models)
LITELLM_BASE_URL=http://localhost:4000 # Your LiteLLM proxy URL
LITELLM_API_KEY=your_litellm_api_key
COUNCIL_MODEL=your-model-name # Model name as configured in LiteLLM

Note: The `COUNCIL_MODEL` environment variable is required. When using LiteLLM, match your proxy configuration. For direct providers, use the format `provider:model-name` (e.g., `openai:gpt-4o`).
- Python 3.12+
- uv package manager
- OpenAI API key (or other compatible model provider)
- Teach The Council some best practices:
uv run council learn "https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts" prompt_engineering
- Review your code:
uv run council review src/council/main.py
- Run housekeeping to maintain code quality:
uv run council housekeeping
- Deep Context Analysis: Uses Repomix to extract comprehensive code context (XML format)
- Dynamic Knowledge Base: Learn from documentation via Jina Reader and automatically apply standards
- Structured Output: Pydantic-AI ensures type-safe, structured review results
- MCP Integration: Exposes tools via FastMCP for use in Cursor, VS Code, and other MCP-compatible editors
Run the server directly:
uv run python -m src.council.main

The Council provides a comprehensive CLI interface for code reviews and maintenance:
uv run council review [PATHS]... [OPTIONS]

Options:
- `-o, --output [json|markdown|pretty]`: Output format for the review results (default: pretty)
- `-i, --extra-instructions TEXT`: Additional instructions for the review
- `-d, --diff TEXT`: Review only changed code compared to a git reference (e.g., HEAD, main). If provided, only modified files will be reviewed.
- `--no-cache`: Disable caching of review results
- `--phases TEXT`: Comma-separated list of review phases to run (security,performance,maintainability,best_practices). If not specified, runs all phases in a single pass.
- `-u, --uncommitted`: Review only uncommitted changes. If specified without paths, reviews all uncommitted files.
Examples:
# Review a file with pretty output (default)
uv run council review src/council/main.py
# Get JSON output
uv run council review src/council/main.py --output json
# Review with extra instructions
uv run council review src/council/main.py --extra-instructions "Focus on security issues"
# Review multiple files or directories
uv run council review src/ tests/ config.py
# Review only uncommitted changes
uv run council review --uncommitted
# Review changes compared to a git reference
uv run council review --diff main src/
# Review specific phases only
uv run council review src/council/main.py --phases security,performance
# Disable caching for fresh review
uv run council review src/council/main.py --no-cache

uv run council learn <url> <topic>

Example:

uv run council learn "https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts" prompt_engineering

Output review context for external agents (useful when delegating reviews to other AI models). This command extracts code context, loads relevant knowledge base content, and generates a review checklist. The output can be used by external agents (such as Gemini, Claude, or Codex) to perform code reviews.
uv run council context FILE_PATH [OPTIONS]

Options:
- `-o, --output [json|markdown]`: Output format for the context (default: json)
- `-d, --diff TEXT`: Extract context for a diff-based review (e.g., HEAD, main). If provided, only changed code will be included.
- `-i, --extra-instructions TEXT`: Additional instructions for the review
- `--phases TEXT`: Comma-separated list of review phases to focus on (security,performance,maintainability,best_practices)
Examples:
# Get context in JSON format (default)
uv run council context src/council/main.py
# Get context in markdown format
uv run council context src/council/main.py --output markdown
# Get context for diff-based review
uv run council context src/council/main.py --diff HEAD
# Get context with extra instructions
uv run council context src/council/main.py --extra-instructions "Focus on security"
# Get context focused on specific review phases
uv run council context src/council/main.py --phases security,performance

Execute comprehensive codebase maintenance and cleanup following a structured 4-phase protocol:

uv run council housekeeping

This command runs a structured 4-phase protocol:
- Phase 1: Hygiene & Safety (gitignore audit, cleanup, dead code removal)
- Phase 2: Standardization & Quality (linting, formatting, DRY analysis)
- Phase 3: Documentation Alignment (docstring audit, README updates)
- Phase 4: Mental Map (project context documentation)
The server exposes two tools:
Reviews code at the given file path using Repomix for context extraction.
Example:
review_code("src/council/main.py")

Returns a JSON object with:
- `summary`: Overall review summary
- `issues`: List of issues found (description, severity, line_number, code_snippet)
- `severity`: Overall severity assessment (low/medium/high/critical)
- `code_fix`: Optional suggested code fix
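As an illustration only, this result shape could be modeled with Pydantic roughly as follows; the class names and defaults here are assumptions, not the project's actual definitions.

```python
# Minimal sketch of the review result shape described above (hypothetical names;
# the project's real models may differ).
from typing import Literal, Optional
from pydantic import BaseModel

class Issue(BaseModel):
    description: str
    severity: Literal["low", "medium", "high", "critical"]
    line_number: Optional[int] = None
    code_snippet: Optional[str] = None

class ReviewResult(BaseModel):
    summary: str                                             # Overall review summary
    issues: list[Issue]                                      # Issues found in the reviewed code
    severity: Literal["low", "medium", "high", "critical"]   # Overall assessment
    code_fix: Optional[str] = None                           # Optional suggested fix
```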
Fetches documentation from a URL using Jina Reader and adds it to the knowledge base.
Example:
learn_rules(
"https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts",
"prompt_engineering"
)

The knowledge is automatically loaded into future reviews.
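For orientation, registering these two tools with FastMCP generally looks like the sketch below. This is a hedged illustration with placeholder bodies, not the project's actual src/council/main.py.

```python
# Hypothetical sketch of how the two tools might be exposed via FastMCP.
# The real implementation lives in src/council/main.py and differs in detail.
from fastmcp import FastMCP

mcp = FastMCP("The Council")

@mcp.tool()
def review_code(file_path: str) -> dict:
    """Review the code at file_path and return a structured result."""
    # Extract context with Repomix, run the reviewer agent, return its JSON output.
    return {"summary": "...", "issues": [], "severity": "low", "code_fix": None}

@mcp.tool()
def learn_rules(url: str, topic: str) -> str:
    """Fetch documentation via Jina Reader and add it to the knowledge base."""
    # Fetch markdown and write it to knowledge/<topic>.md for future reviews.
    return f"Learned {topic}"

if __name__ == "__main__":
    mcp.run()
```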
- Server Layer: FastMCP for MCP protocol implementation
- Logic Layer: Pydantic-AI for structured, type-safe AI outputs
- Context Layer: Repomix wrapper for deep code context extraction
- Knowledge Layer: Jina Reader integration for documentation fetching
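To make the Logic Layer concrete, constructing a Pydantic-AI agent with a structured output type might look roughly like this. It is a hedged sketch: the model string, prompt, and ReviewResult fields are illustrative, and the exact keyword names vary across pydantic-ai versions.

```python
# Rough illustration of the Logic Layer: a Pydantic-AI agent returning structured output.
# Older pydantic-ai releases use `result_type=` and `result.data` instead of
# `output_type=` and `result.output`; check your installed version.
from pydantic import BaseModel
from pydantic_ai import Agent

class ReviewResult(BaseModel):
    summary: str
    severity: str

# In practice the model string comes from COUNCIL_MODEL (e.g. "openai:gpt-4o").
agent = Agent(
    "openai:gpt-4o",
    output_type=ReviewResult,
    system_prompt="You are a meticulous code reviewer.",
)

result = agent.run_sync("Review this function: def add(a, b): return a - b")
print(result.output.summary, result.output.severity)
```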
Create a .env file and configure your API keys:
- Direct Provider: Set `OPENAI_API_KEY` (or `ANTHROPIC_API_KEY`, etc.) and `COUNCIL_MODEL`
- LiteLLM Proxy: Set `LITELLM_BASE_URL` and `LITELLM_API_KEY` for custom model routing
The `COUNCIL_MODEL` environment variable is always required. When using LiteLLM, it should match your LiteLLM proxy configuration. When using direct providers, use the format `provider:model-name` (e.g., `openai:gpt-4o`, `anthropic:claude-3-5-sonnet-20241022`).
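A hedged sketch of how these settings might be resolved at startup; the helper name and return shape are illustrative, and the project's real logic lives in src/council/config.py.

```python
# Illustrative only: one way the environment settings described above could be
# resolved. The project's actual configuration code may differ.
import os

def resolve_model_settings() -> dict:
    """Pick between LiteLLM proxy and direct provider based on the environment."""
    model = os.environ["COUNCIL_MODEL"]               # required in both modes
    litellm_base_url = os.environ.get("LITELLM_BASE_URL")

    if litellm_base_url:
        # LiteLLM proxy mode: the model name must match the proxy configuration.
        return {
            "model": model,
            "base_url": litellm_base_url,
            "api_key": os.environ.get("LITELLM_API_KEY"),
        }

    # Direct provider mode: expect "provider:model-name", e.g. "openai:gpt-4o".
    provider, _, model_name = model.partition(":")
    return {
        "provider": provider,
        "model": model_name,
        "api_key": os.environ.get(f"{provider.upper()}_API_KEY"),
    }
```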
Repomix is configured via command-line flags. You can customize which files are included by modifying the get_packed_context function in src/council/tools/repomix.py if needed.
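For reference, a wrapper in the spirit of get_packed_context might shell out to Repomix roughly as follows. The flag names reflect the Repomix CLI as commonly documented and should be verified against your installed version; the function body is an assumption, not the project's code.

```python
# Hypothetical sketch of a Repomix wrapper similar in spirit to get_packed_context.
import subprocess
import tempfile
from pathlib import Path

def get_packed_context(include: str = "src/**/*.py") -> str:
    """Pack matching files into a single XML context document via Repomix."""
    with tempfile.TemporaryDirectory() as tmp:
        out_file = Path(tmp) / "repomix-output.xml"
        subprocess.run(
            ["npx", "repomix", "--style", "xml",
             "--include", include, "--output", str(out_file)],
            check=True,
        )
        return out_file.read_text()
```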
The knowledge/ directory stores markdown files that are automatically loaded into the system prompt. Files are loaded in alphabetical order and injected as "RULESET" sections. Use the learn_rules tool to populate this directory automatically.
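A minimal sketch of that flow, assuming the standard Jina Reader pattern of prefixing the target URL with https://r.jina.ai/; the helper and file layout here are illustrative rather than the project's actual scribe.py.

```python
# Illustrative sketch of fetching documentation via Jina Reader and saving it to
# the knowledge/ directory; not the project's actual implementation.
from pathlib import Path
import httpx

KNOWLEDGE_DIR = Path("knowledge")

def learn_rules(url: str, topic: str) -> Path:
    """Fetch a page as markdown through Jina Reader and store it by topic."""
    response = httpx.get(f"https://r.jina.ai/{url}", timeout=60.0)
    response.raise_for_status()
    KNOWLEDGE_DIR.mkdir(exist_ok=True)
    target = KNOWLEDGE_DIR / f"{topic}.md"   # loaded alphabetically as a RULESET section
    target.write_text(response.text)
    return target
```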
the-council/
├── pyproject.toml # Dependencies and project config
├── .env # API keys (create this)
├── knowledge/ # Dynamic knowledge base (.md files)
├── ai_docs/ # AI agent context documentation
│ └── project_context.md # Project mental map
└── src/
└── council/
├── __init__.py
├── main.py # FastMCP server entry point
├── config.py # Settings & path constants
├── templates/
│ └── system_prompt.j2 # Jinja2 template for system prompt
├── agents/
│ ├── __init__.py
│ └── councilor.py # Pydantic-AI reviewer agent
├── cli/
│ ├── main.py # CLI entry point
│ ├── commands/ # CLI commands (review, learn, context, housekeeping)
│ ├── core/ # Core review execution and context building
│ ├── ui/ # UI components (spinner, streaming, output)
│ └── utils/ # Utility functions (paths, errors, validation)
└── tools/
├── __init__.py
├── scribe.py # Jina Reader wrapper
├── repomix.py # Repomix wrapper
├── git_tools.py # Git integration tools
└── ... # Additional analysis tools
- `fastmcp`: MCP server framework
- `pydantic-ai`: Type-safe AI agent framework
- `httpx`: HTTP client for Jina Reader
- `logfire`: Structured logging
- `devtools`: Development utilities
The project uses Ruff for linting and formatting:
# Check and fix issues
uv run ruff check --fix src/ tests/
# Format code
uv run ruff format src/ tests/

Tests are located in the tests/ directory. Run them with:
# Run all tests
uv run pytest
# Run with verbose output
uv run pytest -v
# Run specific test file
uv run pytest tests/test_tools_path_utils.py
# Run integration tests (may require external tools)
uv run pytest tests/integration/ -m integration

(Add license information)
