diff --git a/README.md b/README.md
index ea91ebd..5908022 100644
--- a/README.md
+++ b/README.md
@@ -46,8 +46,9 @@ Instead of spending days wiring together LLMs, tools, and execution environments
## 📑 Table of Contents
- [🧰 Available Out of the Box](#-available-out-of-the-box)
- [🤖 Agents](#-agents)
- - [📦 Local Tools (Zero External Dependencies)](#-local-tools-zero-external-dependencies)
- - [🌐 MCP Servers (Context Superpowers)](#-mcp-servers-context-superpowers)
+ - [📦 Local Tools](#-local-tools)
+ - [🌐 MCP Servers](#-mcp-servers)
+ - [🧠 LLM Providers](#-llm-providers)
- [🚀 Quick Start (Zero to Agent in 60s)](#-quick-start-zero-to-agent-in-60s)
- [🛠️ Build Your Own Agent](#️-build-your-own-agent)
- [🏗️ Architecture](#️-architecture)
@@ -62,293 +63,73 @@ Instead of spending days wiring together LLMs, tools, and execution environments
### 🤖 Agents
-
-
-
- | Agent |
- Purpose |
- MCP Servers |
- Local Tools |
-
-
-
-
- developer |
- Code Master: Read, search & edit code. |
- webfetch |
- All codebase tools below |
-
-
- travel-coordinator |
- Trip Planner: Orchestrates agents. |
- kiwi-com-flight-search
webfetch |
- Uses 3 sub-agents |
-
-
- chef |
- Chef: Recipes from your fridge. |
- webfetch |
- - |
-
-
- news |
- News Anchor: Aggregates top stories. |
- webfetch |
- - |
-
-
- travel |
- Flight Booker: Finds the best routes. |
- kiwi-com-flight-search |
- - |
-
-
- simple |
- Chat Buddy: Vanilla conversational agent. |
- - |
- - |
-
-
- github-pr-reviewer |
- PR Reviewer: Reviews diffs, posts inline comments & summaries. |
- - |
-
-
- View tools
- get_pr_diff
- get_pr_comments
- post_review_comment
- post_general_comment
- reply_to_review_comment
- get_pr_metadata
-
- |
-
-
- whatsapp |
- WhatsApp Agent: Bidirectional WhatsApp communication (personal account). |
- webfetch
duckduckgo-search |
- - |
-
-
-
+The framework includes several pre-built agents for common use cases:
-
-📱 WhatsApp Agent Setup
+| Agent | Purpose |
+|-------|---------|
+| `developer` | Code Master: Read, search & edit code |
+| `travel-coordinator` | Trip Planner: Orchestrates agents |
+| `chef` | Chef: Recipes from your fridge |
+| `news` | News Anchor: Aggregates top stories |
+| `travel` | Flight Booker: Finds the best routes |
+| `simple` | Chat Buddy: Vanilla conversational agent |
+| `github-pr-reviewer` | PR Reviewer: Reviews diffs, posts inline comments & summaries |
+| `whatsapp` | WhatsApp Agent: Bidirectional WhatsApp communication |
-The WhatsApp agent enables bidirectional communication through your personal WhatsApp account using QR code authentication.
+📖 **See [docs/agents.md](docs/agents.md)** for detailed information about each agent, including configuration options and usage examples.
-**Requirements:**
-- Go 1.21+ and Git (for WhatsApp backend)
-- Python 3.13+
-- A configured LLM provider (see environment variables below)
+---
-**Configuration:**
-```bash
-# 1. Copy example config
-cp agentic-framework/config/whatsapp.yaml.example agentic-framework/config/whatsapp.yaml
-
-# 2. Edit config/whatsapp.yaml with your settings:
-# - model: "claude-sonnet-4-6" # Your LLM model
-# - privacy.allowed_contact: "+34 666 666 666" # Your phone number (only this number can interact)
-# - channel.storage_path: "~/storage/whatsapp" # Where to store session data
-# - mcp_servers: ["web-fetch", "duckduckgo-search"] # Optional: MCP servers to use
-```
+### 📦 Local Tools
-**Usage:**
-```bash
-# Start the WhatsApp agent
-bin/agent.sh whatsapp --config config/whatsapp.yaml
+Fast, zero-dependency tools for working with local codebases:
-# With custom settings (overrides config file)
-bin/agent.sh whatsapp --allowed-contact "+1234567890" --storage ~/custom/path
+| Tool | Capability |
+|------|------------|
+| `find_files` | Fast search via `fd` |
+| `discover_structure` | Directory tree mapping |
+| `get_file_outline` | AST signature parsing |
+| `read_file_fragment` | Precise file reading |
+| `code_search` | Fast search via `ripgrep` |
+| `edit_file` | Safe file editing |
-# Customize MCP servers
-bin/agent.sh whatsapp --mcp-servers "web-fetch,duckduckgo-search"
-bin/agent.sh whatsapp --mcp-servers none # Disable MCP
+📖 **See [docs/tools.md](docs/tools.md)** for detailed documentation of each tool, including parameters and examples.
-# Verbose mode for debugging
-bin/agent.sh whatsapp --verbose
-```
+---
-**First Run:**
-1. Scan the QR code displayed in your terminal
-2. Wait for WhatsApp to authenticate
-3. Send a message from your configured phone number
-4. Agent will respond automatically
-
-**Privacy & Security:**
-- 🔒 Only processes messages from the configured contact
-- 🔒 Group chat messages are automatically filtered (not sent to LLM)
-- 🔒 All data stored locally (no cloud storage of conversations)
-- 🔒 Messages from other contacts are silently ignored
-- 🔒 Message deduplication prevents reprocessing
-
-**Configuration Options:**
-- `model`: LLM model to use (defaults to provider default)
-- `mcp_servers`: MCP servers for web search and content fetching
-- `privacy.allowed_contact`: Only this phone number can interact with the agent
-- `privacy.log_filtered_messages`: Log filtered messages for debugging
-- `channel.storage_path`: Directory for WhatsApp session and database files
-- `features.group_messages`: Currently disabled by default for privacy
+### 🌐 MCP Servers
-
+Model Context Protocol servers for extending agent capabilities:
-### 📦 Local Tools (Zero External Dependencies)
-
-
-
-
- | Tool |
- Capability |
- Example |
-
-
-
-
- find_files |
- Fast search via fd |
- *.py finds Python files |
-
-
- discover_structure |
- Directory tree mapping |
- Understands project layout |
-
-
- get_file_outline |
- AST signature parsing (Python, TS, Go, Rust, Java, C++, PHP) |
- Extracts classes/functions |
-
-
- read_file_fragment |
- Precise file reading |
- file.py:10:50 |
-
-
- code_search |
- Fast search via ripgrep |
- Global regex search |
-
-
- edit_file |
- Safe file editing |
- Inserts/Replaces lines |
-
-
-
+| Server | Purpose |
+|--------|---------|
+| `kiwi-com-flight-search` | Search real-time flights |
+| `webfetch` | Extract clean text from URLs & web search |
+| `duckduckgo-search` | Web search via DuckDuckGo |
-
-📝 Advanced: edit_file Formats
+📖 **See [docs/mcp-servers.md](docs/mcp-servers.md)** for details on each server and how to add custom MCP servers.
-**RECOMMENDED: `search_replace` (no line numbers needed)**
-```json
-{"op": "search_replace", "path": "file.py", "old": "exact text", "new": "replacement text"}
-```
+---
-**Line-based operations:**
-`replace:path:start:end:content` | `insert:path:after_line:content` | `delete:path:start:end`
+### 🧠 LLM Providers
-
+The framework supports **11 LLM providers** out of the box, covering 90%+ of the LLM market:
-### 🌐 MCP Servers (Context Superpowers)
-
-
-
-
- | Server |
- Purpose |
- API Key Needed? |
-
-
-
-
- kiwi-com-flight-search |
- Search real-time flights |
- 🟢 No |
-
-
- webfetch |
- Extract clean text from URLs & web search |
- 🟢 No |
-
-
-
+| Provider | Type | Use Case |
+|----------|------|----------|
+| **Anthropic** | Cloud | State-of-the-art reasoning (Claude) |
+| **OpenAI** | Cloud | GPT-4, GPT-4.1, o1 series |
+| **Azure OpenAI** | Cloud | Enterprise OpenAI deployments |
+| **Google GenAI** | Cloud | Gemini models via API |
+| **Google Vertex AI** | Cloud | Gemini models via GCP |
+| **Groq** | Cloud | Ultra-fast inference |
+| **Mistral AI** | Cloud | European privacy-focused models |
+| **Cohere** | Cloud | Enterprise RAG and Command models |
+| **AWS Bedrock** | Cloud | Anthropic, Titan, Meta via AWS |
+| **Ollama** | Local | Run LLMs locally (zero API cost) |
+| **Hugging Face** | Cloud | Open models from Hugging Face Hub |
----
-
-### 🧠 Supported LLM Providers
-
-The framework supports **10+ LLM providers** out of the box, covering 90%+ of the LLM market:
-
-
-
-
- | Provider |
- Type |
- Use Case |
-
-
-
-
- | Anthropic |
- Cloud |
- State-of-the-art reasoning (Claude) |
-
-
- | OpenAI |
- Cloud |
- GPT-4, GPT-4.1, o1 series |
-
-
- | Azure OpenAI |
- Cloud |
- Enterprise OpenAI deployments |
-
-
- | Google GenAI |
- Cloud |
- Gemini models via API |
-
-
- | Google Vertex AI |
- Cloud |
- Gemini models via GCP |
-
-
- | Groq |
- Cloud |
- Ultra-fast inference |
-
-
- | Mistral AI |
- Cloud |
- European privacy-focused models |
-
-
- | Cohere |
- Cloud |
- Enterprise RAG and Command models |
-
-
- | AWS Bedrock |
- Cloud |
- Anthropic, Titan, Meta via AWS |
-
-
- | Ollama |
- Local |
- Run LLMs locally (zero API cost) |
-
-
- | Hugging Face |
- Cloud |
- Open models from Hugging Face Hub |
-
-
-
-
-**Provider Priority:** Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (fallback)
+📖 **See [docs/llm-providers.md](docs/llm-providers.md)** for detailed setup instructions, environment variables, and provider comparison.
---
@@ -409,89 +190,43 @@ bin/agent.sh chef -i "I have chicken, rice, and soy sauce. What can I make?"
🔑 Required Environment Variables
-
-
-
- | Provider |
- Variable |
- Required? |
- Default Model |
-
-
-
-
- | Anthropic |
- ANTHROPIC_API_KEY |
- 🟢 Yes* |
- claude-haiku-4-5-20251001 |
-
-
- | OpenAI |
- OPENAI_API_KEY |
- 🟢 Yes* |
- gpt-4o-mini |
-
-
- | Azure OpenAI |
- AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT |
- ⚪ No |
- gpt-4o-mini |
-
-
- | Google GenAI |
- GOOGLE_API_KEY |
- ⚪ No |
- gemini-2.0-flash-exp |
-
-
- | Google Vertex AI |
- GOOGLE_VERTEX_PROJECT_ID |
- ⚪ No |
- gemini-2.0-flash-exp |
-
-
- | Groq |
- GROQ_API_KEY |
- ⚪ No |
- llama-3.3-70b-versatile |
-
-
- | Mistral AI |
- MISTRAL_API_KEY |
- ⚪ No |
- mistral-large-latest |
-
-
- | Cohere |
- COHERE_API_KEY |
- ⚪ No |
- command-r-plus |
-
-
- | AWS Bedrock |
- AWS_PROFILE or AWS_ACCESS_KEY_ID |
- ⚪ No |
- anthropic.claude-3-5-sonnet-20241022-v2:0 |
-
-
- | Ollama |
- OLLAMA_BASE_URL |
- ⚪ No |
- llama3.2 |
-
-
- | Hugging Face |
- HUGGINGFACEHUB_API_TOKEN |
- ⚪ No |
- meta-llama/Llama-3.2-3B-Instruct |
-
-
-
-
-**Model Override Variables** (optional):
-- `ANTHROPIC_MODEL_NAME`, `OPENAI_MODEL_NAME`, `AZURE_OPENAI_MODEL_NAME`, `GOOGLE_GENAI_MODEL_NAME`, `GROQ_MODEL_NAME`, etc.
-
-> ⚠️ **Note:** Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
+Only one provider's API key is required. The framework auto-detects which provider to use based on available credentials.
+
+```bash
+# Anthropic (Recommended)
+ANTHROPIC_API_KEY=sk-ant-your-key-here
+
+# OpenAI
+OPENAI_API_KEY=sk-your-key-here
+
+# Google GenAI / Vertex
+GOOGLE_API_KEY=your-google-key
+GOOGLE_VERTEX_PROJECT_ID=your-project-id
+
+# Groq
+GROQ_API_KEY=gsk-your-key-here
+
+# Mistral AI
+MISTRAL_API_KEY=your-mistral-key-here
+
+# Cohere
+COHERE_API_KEY=your-cohere-key-here
+
+# Azure OpenAI
+AZURE_OPENAI_API_KEY=your-azure-key
+AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
+
+# AWS Bedrock
+AWS_PROFILE=your-profile
+
+# Ollama (Local, no API key needed)
+OLLAMA_BASE_URL=http://localhost:11434
+
+# Hugging Face
+HUGGINGFACEHUB_API_TOKEN=your-hf-token
+```
+
+📖 **See [docs/llm-providers.md](docs/llm-providers.md)** for detailed environment variable configurations, model overrides, and provider comparison.
@@ -633,7 +368,7 @@ bin/agent.sh developer -i "Hello" -v
# 📜 Access logs (same location as local)
tail -f agentic-framework/logs/agent.log
-# 📱 Run the WhatsApp agent (requires config)
+# 📱 Run the WhatsApp agent (requires config - see docs/agents.md)
agentic-run whatsapp --config config/whatsapp.yaml
# 📱 Run WhatsApp with custom settings
diff --git a/docs/agents.md b/docs/agents.md
new file mode 100644
index 0000000..9ee7e6c
--- /dev/null
+++ b/docs/agents.md
@@ -0,0 +1,245 @@
+# Available Agents
+
+This document provides detailed information about all available agents in the Agentic Framework.
+
+## Agent Overview
+
+| Agent | Purpose | MCP Servers | Local Tools |
+|-------|---------|-------------|-------------|
+| `developer` | **Code Master:** Read, search & edit code. | `webfetch` | All codebase tools |
+| `travel-coordinator` | **Trip Planner:** Orchestrates agents. | `kiwi-com-flight-search`, `webfetch` | Uses 3 sub-agents |
+| `chef` | **Chef:** Recipes from your fridge. | `webfetch` | - |
+| `news` | **News Anchor:** Aggregates top stories. | `webfetch` | - |
+| `travel` | **Flight Booker:** Finds the best routes. | `kiwi-com-flight-search` | - |
+| `simple` | **Chat Buddy:** Vanilla conversational agent. | - | - |
+| `github-pr-reviewer` | **PR Reviewer:** Reviews diffs, posts inline comments & summaries. | - | Custom GitHub tools |
+| `whatsapp` | **WhatsApp Agent:** Bidirectional WhatsApp communication (personal account). | `webfetch`, `duckduckgo-search` | - |
+
+---
+
+## Developer Agent
+
+**Purpose:** The developer agent is a code master designed to read, search, and edit codebases.
+
+### Capabilities
+
+- **Codebase Exploration:** Navigate and understand project structures
+- **Code Search:** Fast pattern matching using ripgrep
+- **File Editing:** Safe modifications to source files
+- **AST Analysis:** Understand code signatures in multiple languages
+
+### MCP Servers
+- `webfetch` - For fetching documentation and web resources
+
+### Local Tools
+- All available codebase tools (see [tools.md](tools.md) for details)
+
+### Usage Example
+```bash
+bin/agent.sh developer -i "Explain the architecture of this project"
+bin/agent.sh developer -i "Find all functions that use the database connection"
+```
+
+---
+
+## Travel Coordinator Agent
+
+**Purpose:** Orchestrates multiple agents to plan complex trips.
+
+### Capabilities
+
+- Coordinates with `travel` and `chef` sub-agents
+- Manages complex travel planning workflows
+- Integrates flight search with dining recommendations
+
+### MCP Servers
+- `kiwi-com-flight-search` - Real-time flight data
+- `webfetch` - For travel-related web content
+
+### Architecture
+Uses 3 specialized sub-agents working together via LangGraph.
+
+### Usage Example
+```bash
+bin/agent.sh travel-coordinator -i "Plan a weekend trip from Madrid to Paris"
+```
+
+---
+
+## Chef Agent
+
+**Purpose:** Suggests recipes based on ingredients you have available.
+
+### Capabilities
+
+- Recipe suggestions based on available ingredients
+- Fetches recipes from online sources
+- Adapts recipes to your ingredient constraints
+
+### MCP Servers
+- `webfetch` - For fetching recipe information
+
+### Usage Example
+```bash
+bin/agent.sh chef -i "I have chicken, rice, and soy sauce. What can I make?"
+```
+
+---
+
+## News Agent
+
+**Purpose:** Aggregates and summarizes top news stories.
+
+### Capabilities
+
+- Fetches latest news from multiple sources
+- Summarizes key developments
+- Filters by topics when requested
+
+### MCP Servers
+- `webfetch` - For fetching news content
+
+### Usage Example
+```bash
+bin/agent.sh news -i "What are today's top tech stories?"
+```
+
+---
+
+## Travel Agent
+
+**Purpose:** Finds the best flight routes for your journey.
+
+### Capabilities
+
+- Real-time flight search
+- Route optimization
+- Price comparison
+
+### MCP Servers
+- `kiwi-com-flight-search` - Real-time flight data
+
+### Usage Example
+```bash
+bin/agent.sh travel -i "Find flights from Madrid to Barcelona next weekend"
+```
+
+---
+
+## Simple Agent
+
+**Purpose:** A basic conversational agent for general chatting.
+
+### Capabilities
+
+- Natural conversation
+- General knowledge (via LLM)
+- No specialized tools
+
+### MCP Servers
+- None
+
+### Usage Example
+```bash
+bin/agent.sh simple -i "Tell me a joke"
+```
+
+---
+
+## GitHub PR Reviewer Agent
+
+**Purpose:** Automatically reviews pull requests by analyzing diffs and posting comments.
+
+### Capabilities
+
+- **Diff Analysis:** Reviews code changes line by line
+- **Inline Comments:** Posts specific feedback on problematic code
+- **Summary Comments:** Provides overall PR assessment
+- **Thread Responses:** Engages in review conversations
+
+### Local Tools
+- `get_pr_diff` - Retrieve the pull request diff
+- `get_pr_comments` - Get existing comments on a PR
+- `post_review_comment` - Post inline review comments
+- `post_general_comment` - Post overall PR feedback
+- `reply_to_review_comment` - Reply to review comment threads
+- `get_pr_metadata` - Fetch PR metadata (title, author, etc.)
+
+### Usage Example
+```bash
+bin/agent.sh github-pr-reviewer -i "Review PR #123 for bugs and style issues"
+```
+
+---
+
+## WhatsApp Agent
+
+**Purpose:** The WhatsApp agent enables bidirectional communication through your personal WhatsApp account using QR code authentication.
+
+### Requirements
+- Go 1.21+ and Git (for WhatsApp backend)
+- Python 3.13+
+- A configured LLM provider
+
+### Configuration
+
+```bash
+# 1. Copy example config
+cp agentic-framework/config/whatsapp.yaml.example agentic-framework/config/whatsapp.yaml
+
+# 2. Edit config/whatsapp.yaml with your settings:
+# - model: "claude-sonnet-4-6" # Your LLM model
+# - privacy.allowed_contact: "+34 666 666 666" # Your phone number (only this number can interact)
+# - channel.storage_path: "~/storage/whatsapp" # Where to store session data
+# - mcp_servers: ["web-fetch", "duckduckgo-search"] # Optional: MCP servers to use
+```
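
Putting the commented settings above together, the resulting `config/whatsapp.yaml` might look roughly like this. The nesting shown here is inferred from the option names (`privacy.allowed_contact`, `channel.storage_path`, etc.) and is illustrative only — treat `whatsapp.yaml.example` as the authoritative structure:

```yaml
# Illustrative sketch of config/whatsapp.yaml (verify against the example file)
model: "claude-sonnet-4-6"

privacy:
  allowed_contact: "+34 666 666 666"   # Only this number can interact
  log_filtered_messages: false

channel:
  storage_path: "~/storage/whatsapp"   # Session and database files

mcp_servers:
  - web-fetch
  - duckduckgo-search
```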
+
+### Usage
+
+```bash
+# Start the WhatsApp agent
+bin/agent.sh whatsapp --config config/whatsapp.yaml
+
+# With custom settings (overrides config file)
+bin/agent.sh whatsapp --allowed-contact "+1234567890" --storage ~/custom/path
+
+# Customize MCP servers
+bin/agent.sh whatsapp --mcp-servers "web-fetch,duckduckgo-search"
+bin/agent.sh whatsapp --mcp-servers none # Disable MCP
+
+# Verbose mode for debugging
+bin/agent.sh whatsapp --verbose
+```
+
+### First Run
+1. Scan the QR code displayed in your terminal
+2. Wait for WhatsApp to authenticate
+3. Send a message from your configured phone number
+4. The agent will respond automatically
+
+### Privacy & Security
+- 🔒 Only processes messages from the configured contact
+- 🔒 Group chat messages are automatically filtered (not sent to LLM)
+- 🔒 All data stored locally (no cloud storage of conversations)
+- 🔒 Messages from other contacts are silently ignored
+- 🔒 Message deduplication prevents reprocessing
+
+### Configuration Options
+| Option | Description |
+|--------|-------------|
+| `model` | LLM model to use (defaults to provider default) |
+| `mcp_servers` | MCP servers for web search and content fetching |
+| `privacy.allowed_contact` | Only this phone number can interact with the agent |
+| `privacy.log_filtered_messages` | Log filtered messages for debugging |
+| `channel.storage_path` | Directory for WhatsApp session and database files |
+| `features.group_messages` | Currently disabled by default for privacy |
+
+### MCP Servers
+- `webfetch` - For fetching web content
+- `duckduckgo-search` - For web search capabilities
+
+---
+
+## Creating Custom Agents
+
+To create your own agent, see the [Build Your Own Agent](../README.md#️-build-your-own-agent) section in the main README.
diff --git a/docs/llm-providers.md b/docs/llm-providers.md
new file mode 100644
index 0000000..fa4b43d
--- /dev/null
+++ b/docs/llm-providers.md
@@ -0,0 +1,273 @@
+# LLM Providers
+
+This document provides detailed information about all LLM providers supported by the Agentic Framework.
+
+## Provider Overview
+
+| Provider | Type | Use Case | API Key Required? |
+|----------|------|----------|-------------------|
+| **Anthropic** | Cloud | State-of-the-art reasoning (Claude) | Yes* |
+| **OpenAI** | Cloud | GPT-4, GPT-4.1, o1 series | Yes* |
+| **Azure OpenAI** | Cloud | Enterprise OpenAI deployments | No |
+| **Google GenAI** | Cloud | Gemini models via API | No |
+| **Google Vertex AI** | Cloud | Gemini models via GCP | No |
+| **Groq** | Cloud | Ultra-fast inference | No |
+| **Mistral AI** | Cloud | European privacy-focused models | No |
+| **Cohere** | Cloud | Enterprise RAG and Command models | No |
+| **AWS Bedrock** | Cloud | Anthropic, Titan, Meta via AWS | No |
+| **Ollama** | Local | Run LLMs locally (zero API cost) | No |
+| **Hugging Face** | Cloud | Open models from Hugging Face Hub | No |
+
+\*Only one provider's API key is required; the framework auto-detects which provider to use based on available credentials.
+
+**Provider Priority:** Anthropic > Google Vertex > Google GenAI > Azure > Groq > Mistral > Cohere > Bedrock > HuggingFace > Ollama > OpenAI (fallback)
+
+---
+
+## Environment Variables
+
+### Anthropic
+
+```bash
+ANTHROPIC_API_KEY=sk-ant-your-key-here
+ANTHROPIC_MODEL_NAME=claude-haiku-4-5-20251001 # Optional
+```
+
+**Default Model:** `claude-haiku-4-5-20251001`
+
+---
+
+### OpenAI
+
+```bash
+OPENAI_API_KEY=sk-your-key-here
+OPENAI_MODEL_NAME=gpt-4o-mini # Optional
+```
+
+**Default Model:** `gpt-4o-mini`
+
+---
+
+### Azure OpenAI
+
+```bash
+AZURE_OPENAI_API_KEY=your-azure-key
+AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
+AZURE_OPENAI_MODEL_NAME=gpt-4o-mini # Optional
+AZURE_OPENAI_API_VERSION=2024-02-15-preview # Optional
+```
+
+**Default Model:** `gpt-4o-mini`
+
+---
+
+### Google GenAI
+
+```bash
+GOOGLE_API_KEY=your-google-key
+GOOGLE_GENAI_MODEL_NAME=gemini-2.0-flash-exp # Optional
+```
+
+**Default Model:** `gemini-2.0-flash-exp`
+
+---
+
+### Google Vertex AI
+
+```bash
+GOOGLE_VERTEX_PROJECT_ID=your-project-id
+GOOGLE_VERTEX_LOCATION=us-central1 # Optional
+GOOGLE_VERTEX_MODEL_NAME=gemini-2.0-flash-exp # Optional
+```
+
+**Default Model:** `gemini-2.0-flash-exp`
+
+---
+
+### Groq
+
+```bash
+GROQ_API_KEY=gsk-your-key-here
+GROQ_MODEL_NAME=llama-3.3-70b-versatile # Optional
+```
+
+**Default Model:** `llama-3.3-70b-versatile`
+
+---
+
+### Mistral AI
+
+```bash
+MISTRAL_API_KEY=your-mistral-key-here
+MISTRAL_MODEL_NAME=mistral-large-latest # Optional
+```
+
+**Default Model:** `mistral-large-latest`
+
+---
+
+### Cohere
+
+```bash
+COHERE_API_KEY=your-cohere-key-here
+COHERE_MODEL_NAME=command-r-plus # Optional
+```
+
+**Default Model:** `command-r-plus`
+
+---
+
+### AWS Bedrock
+
+```bash
+AWS_PROFILE=your-profile
+# OR
+AWS_ACCESS_KEY_ID=your-access-key
+AWS_SECRET_ACCESS_KEY=your-secret-key
+AWS_REGION=us-east-1 # Optional
+BEDROCK_MODEL_NAME=anthropic.claude-3-5-sonnet-20241022-v2:0 # Optional
+```
+
+**Default Model:** `anthropic.claude-3-5-sonnet-20241022-v2:0`
+
+---
+
+### Ollama
+
+```bash
+OLLAMA_BASE_URL=http://localhost:11434
+OLLAMA_MODEL_NAME=llama3.2 # Optional
+```
+
+**Default Model:** `llama3.2`
+
+**Requirements:** Ollama must be running locally with the specified model available.
+
+---
+
+### Hugging Face
+
+```bash
+HUGGINGFACEHUB_API_TOKEN=your-hf-token
+HUGGINGFACEHUB_MODEL_NAME=meta-llama/Llama-3.2-3B-Instruct # Optional
+```
+
+**Default Model:** `meta-llama/Llama-3.2-3B-Instruct`
+
+---
+
+## Provider Comparison
+
+### Anthropic (Claude)
+- **Pros:** Best reasoning capabilities, excellent code understanding, strong safety guardrails
+- **Cons:** Higher cost per token
+- **Best for:** Complex reasoning, code analysis, multi-step tasks
+
+### OpenAI (GPT)
+- **Pros:** Widely available, good general capabilities
+- **Cons:** Can be more expensive than alternatives
+- **Best for:** General-purpose tasks, when Anthropic key not available
+
+### Google GenAI / Vertex AI (Gemini)
+- **Pros:** Fast, good multimodal capabilities, competitive pricing
+- **Cons:** API changes more frequently
+- **Best for:** Cost-sensitive workloads, Google Cloud users
+
+### Groq
+- **Pros:** Extremely fast inference (low latency)
+- **Cons:** Uses open-source models (may have different quality profiles)
+- **Best for:** Real-time applications, speed-critical tasks
+
+### Ollama
+- **Pros:** Free (aside from local compute costs), privacy, no API limits
+- **Cons:** Requires local compute resources, quality depends on model
+- **Best for:** Development, privacy-sensitive work, offline use
+
+---
+
+## Setup Instructions
+
+### 1. Copy the Environment Template
+
+```bash
+cp .env.example .env
+```
+
+### 2. Configure Your Provider
+
+Edit `.env` and add credentials for your preferred provider. You only need one provider configured:
+
+```bash
+# Choose ONE of the following:
+
+# Anthropic (Recommended)
+ANTHROPIC_API_KEY=sk-ant-your-key-here
+
+# OR OpenAI
+OPENAI_API_KEY=sk-your-key-here
+
+# OR Google
+GOOGLE_API_KEY=your-google-key
+
+# OR Groq
+GROQ_API_KEY=gsk-your-key-here
+
+# OR Ollama (Local, no API key needed)
+OLLAMA_BASE_URL=http://localhost:11434
+```
+
+### 3. Verify Configuration
+
+```bash
+# List available agents to verify the framework is working
+bin/agent.sh list
+
+# Run a simple agent
+bin/agent.sh simple -i "Hello, can you hear me?"
+```
+
+---
+
+## Model Selection
+
+The framework uses provider priority to automatically select the best available provider. You can override this with model-specific environment variables:
+
+```bash
+# Force a specific model
+ANTHROPIC_MODEL_NAME=claude-sonnet-4-6
+OPENAI_MODEL_NAME=gpt-4o
+GROQ_MODEL_NAME=llama-3.3-70b-versatile
+```
+
+Or set the model directly in your agent configuration or `.env` file:
+
+```bash
+# In your agent configuration or .env file
+MODEL_NAME="claude-sonnet-4-6" # Will use the provider's model
+```
+
+---
+
+## Switching Providers
+
+To switch between providers without changing code:
+
+1. Update your `.env` file with the new provider's credentials
+2. Remove or comment out the old provider's credentials
+3. Run your agent - it will auto-detect the active provider
+
+The framework automatically detects which provider to use based on available environment variables.
+
+---
+
+## Troubleshooting
+
+### No provider detected
+Ensure at least one provider's API key is set in your `.env` file.
+
+### Ollama connection refused
+Make sure Ollama is running: `ollama serve`
+
+### Rate limits
+Consider using a different provider or Ollama for local inference.
+
+### Model not found
+Check the model name is correct for your provider and that the model is available in your region/subscription.
diff --git a/docs/mcp-servers.md b/docs/mcp-servers.md
new file mode 100644
index 0000000..b5d02fd
--- /dev/null
+++ b/docs/mcp-servers.md
@@ -0,0 +1,148 @@
+# MCP Servers
+
+This document provides detailed information about the MCP (Model Context Protocol) servers available in the Agentic Framework.
+
+## Server Overview
+
+| Server | Purpose | API Key Needed? |
+|--------|---------|-----------------|
+| `kiwi-com-flight-search` | Search real-time flights | 🟢 No |
+| `webfetch` | Extract clean text from URLs & web search | 🟢 No |
+| `duckduckgo-search` | Web search via DuckDuckGo | 🟢 No |
+
+---
+
+## kiwi-com-flight-search
+
+Real-time flight search integration using the Kiwi.com API.
+
+### Description
+Provides access to live flight data including prices, schedules, and routing information. Used by the `travel` and `travel-coordinator` agents.
+
+### Capabilities
+- Search flights between cities
+- Get real-time pricing
+- Compare different routes
+- Find optimal travel dates
+
+### Example Usage
+```python
+# The travel agent automatically uses this server when querying flights
+# Example agent input:
+# "Find flights from Madrid to Barcelona for next weekend"
+```
+
+### Configuration
+No API key required. The server connects directly to Kiwi.com's public API.
+
+---
+
+## webfetch
+
+Fetches and extracts clean text from web pages and performs web searches.
+
+### Description
+Retrieves web content and converts it to readable, clean text format. Also supports web search functionality for finding relevant pages.
+
+### Capabilities
+- Fetch any URL and extract readable content
+- Perform web searches
+- Convert HTML to clean markdown/text
+- Follow redirects automatically
+
+### Example Usage
+```python
+# Fetch a specific page
+webfetch("https://example.com/article")
+
+# Web search (via the MCP server)
+websearch("python best practices 2025")
+```
+
+### Configuration
+No API key required. Uses publicly available web fetching and search APIs.
+
+---
+
+## duckduckgo-search
+
+Web search integration using DuckDuckGo.
+
+### Description
+Provides privacy-focused web search capabilities without tracking.
+
+### Capabilities
+- Web search
+- Instant answers
+- No tracking or personalization
+- Anonymous results
+
+### Example Usage
+```python
+# Perform a web search
+duckduckgo_search("latest tech news")
+
+# Search for specific information
+duckduckgo_search("how to install python on mac")
+```
+
+### Configuration
+No API key required. Uses DuckDuckGo's public search API.
+
+---
+
+## Adding Custom MCP Servers
+
+To add a new MCP server to the framework:
+
+### 1. Add Server Configuration
+
+Edit `src/agentic_framework/mcp/config.py`:
+
+```python
+DEFAULT_MCP_SERVERS = {
+ "my-server": {
+ "command": "npx",
+ "args": ["-y", "@modelcontextprotocol/server-myserver"],
+ "env": {
+ # Server-specific environment variables
+ }
+ },
+ # ... existing servers
+}
+```
+
+### 2. Register Server with Agent
+
+```python
+from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
+from agentic_framework.registry import AgentRegistry
+
+@AgentRegistry.register("my-agent", mcp_servers=["my-server"])
+class MyAgent(LangGraphMCPAgent):
+ @property
+ def system_prompt(self) -> str:
+ return "You have access to my custom MCP server."
+```
+
+### 3. Test the Server
+
+```bash
+# Test with your agent
+bin/agent.sh my-agent -i "Use my-server to do something"
+```
+
+---
+
+## Using MCP Servers in Agents
+
+MCP servers are automatically loaded when specified in the agent registration:
+
+```python
+@AgentRegistry.register("my-agent", mcp_servers=["webfetch", "duckduckgo-search"])
+class MyAgent(LangGraphMCPAgent):
+ # The agent now has access to web fetch and search capabilities
+ pass
+```
+
+The LLM automatically discovers available tools from the MCP servers and uses them as needed based on the conversation context.
diff --git a/docs/tools.md b/docs/tools.md
new file mode 100644
index 0000000..6133d58
--- /dev/null
+++ b/docs/tools.md
@@ -0,0 +1,227 @@
+# Local Tools
+
+This document provides detailed information about all local tools available in the Agentic Framework.
+
+## Tool Overview
+
+| Tool | Capability | Example |
+|------|------------|---------|
+| `find_files` | Fast search via `fd` | `*.py` finds Python files |
+| `discover_structure` | Directory tree mapping | Understands project layout |
+| `get_file_outline` | AST signature parsing (Python, TS, Go, Rust, Java, C++, PHP) | Extracts classes/functions |
+| `read_file_fragment` | Precise file reading | `file.py:10:50` |
+| `code_search` | Fast search via `ripgrep` | Global regex search |
+| `edit_file` | Safe file editing | Inserts/Replaces lines |
+
+---
+
+## find_files
+
+Fast file search using the `fd` tool.
+
+### Description
+Recursively searches for files matching a pattern in the project directory.
+
+### Parameters
+- `pattern`: Glob pattern to match (e.g., `*.py`, `test_*.ts`)
+
+### Example
+```bash
+# Find all Python files
+find_files "*.py"
+
+# Find test files
+find_files "test_*.py"
+```
+
+---
+
+## discover_structure
+
+Maps the directory structure of the project.
+
+### Description
+Provides a tree-like view of the project's directory structure, helping agents understand the codebase layout.
+
+### Parameters
+- `path`: Optional starting path (defaults to project root)
+- `max_depth`: Maximum depth to explore (default: 3)
+
+### Example
+```bash
+# Discover full project structure
+discover_structure()
+
+# Discover specific directory
+discover_structure("src/", max_depth=2)
+```
+
+---
+
+## get_file_outline
+
+Extracts code signatures using AST parsing.
+
+### Description
+Parses source files and extracts classes, functions, methods, and their signatures. Supports multiple languages:
+- Python
+- TypeScript/JavaScript
+- Go
+- Rust
+- Java
+- C++
+- PHP
+
+### Parameters
+- `file_path`: Path to the file to analyze
+
+### Example
+```bash
+# Get outline of a Python file
+get_file_outline("src/main.py")
+
+# Output example:
+# class MyClass:
+# def __init__(self, x: int) -> None
+# def process(self) -> str
+# def helper_function(data: list) -> dict
+```
+
+---
+
+## read_file_fragment
+
+Reads specific portions of a file.
+
+### Description
+Reads a range of lines from a file, allowing precise inspection of specific code sections.
+
+### Format
+`file_path:start_line:end_line`
+
+### Parameters
+- `file_path`: Path to the file
+- `start_line`: Starting line number (1-indexed)
+- `end_line`: Ending line number (inclusive)
+
+### Example
+```bash
+# Read lines 10-50 of a file
+read_file_fragment("src/main.py:10:50")
+
+# Read a specific function (assuming it's on lines 25-40)
+read_file_fragment("src/main.py:25:40")
+```
+
+---
+
+## code_search
+
+Fast code search using `ripgrep`.
+
+### Description
+Performs global regex-based searches across the codebase for patterns, function names, or specific code constructs.
+
+### Parameters
+- `pattern`: Regular expression to search for
+- `file_pattern`: Optional glob pattern to filter files (e.g., `*.py`)
+- `context_lines`: Number of context lines to include (default: 2)
+
+### Example
+```bash
+# Search for a function name
+code_search("def process_data")
+
+# Search in Python files only
+code_search("database\.query", file_pattern="*.py")
+
+# Search with more context
+code_search("TODO|FIXME", context_lines=5)
+```
+
+---
+
+## edit_file
+
+Safe file editing with multiple operation modes.
+
+### Description
+Modifies files with safe operations including insert, replace, and delete. Supports both line-based operations and search-replace mode.
+
+### Operation Modes
+
+#### RECOMMENDED: `search_replace` (no line numbers needed)
+
+Format: `{"op": "search_replace", "path": "file.py", "old": "exact text", "new": "replacement text"}`
+
+```json
+{
+ "op": "search_replace",
+ "path": "src/main.py",
+ "old": "def old_function():\n pass",
+ "new": "def new_function():\n return True"
+}
+```
+
+#### Line-based Operations
+
+| Operation | Format | Description |
+|-----------|--------|-------------|
+| `replace` | `replace:path:start:end:content` | Replace lines start through end with content |
+| `insert` | `insert:path:after_line:content` | Insert content after specified line |
+| `delete` | `delete:path:start:end` | Delete lines start through end |
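
The colon-delimited formats above split cleanly with bounded `str.split` calls, since only the final `content` field may itself contain colons. The parser below is a hypothetical sketch of that format, not the framework's actual implementation:

```python
# Illustrative parser for the colon-delimited edit_file formats
# (replace:path:start:end:content, insert:path:after_line:content,
#  delete:path:start:end). Sketch only; not the real framework code.
def parse_edit_op(spec: str) -> dict:
    op, rest = spec.split(":", 1)
    if op == "replace":
        # maxsplit=3 keeps any colons inside `content` intact
        path, start, end, content = rest.split(":", 3)
        return {"op": op, "path": path, "start": int(start),
                "end": int(end), "content": content}
    if op == "insert":
        path, after_line, content = rest.split(":", 2)
        return {"op": op, "path": path, "after_line": int(after_line),
                "content": content}
    if op == "delete":
        path, start, end = rest.split(":", 2)
        return {"op": op, "path": path, "start": int(start), "end": int(end)}
    raise ValueError(f"unknown op: {op}")
```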
+
+### Examples
+
+```bash
+# Search and replace (recommended)
+edit_file('{"op": "search_replace", "path": "src/main.py", "old": "old_value", "new": "new_value"}')
+
+# Replace specific lines
+edit_file("replace:src/main.py:10:15:print('new code')")
+
+# Insert after line 20
+edit_file("insert:src/main.py:20:\n# New comment\nprint('hello')")
+
+# Delete lines 30-35
+edit_file("delete:src/main.py:30:35")
+```
+
+### Safety Features
+- Validates file exists before editing
+- Checks line numbers are within bounds
+- Preserves file permissions
+- Creates backup if configured
+
+---
+
+## Creating Custom Tools
+
+To add your own local tools to an agent:
+
+```python
+from langchain_core.tools import StructuredTool
+from agentic_framework.core.langgraph_agent import LangGraphMCPAgent
+from agentic_framework.registry import AgentRegistry
+
+@AgentRegistry.register("my-agent")
+class MyAgent(LangGraphMCPAgent):
+ @property
+ def system_prompt(self) -> str:
+ return "You are my custom agent."
+
+ def local_tools(self) -> list:
+ return [
+ StructuredTool.from_function(
+ func=self.my_tool,
+ name="my_tool",
+ description="Description of what your tool does",
+ )
+ ]
+
+ def my_tool(self, input_data: str) -> str:
+ # Your tool logic here
+ return f"Processed: {input_data}"
+```
+
+For more details, see the [Build Your Own Agent](../README.md#️-build-your-own-agent) section in the main README.