diff --git a/.gitignore b/.gitignore
index 08c8786..01a1161 100644
--- a/.gitignore
+++ b/.gitignore
@@ -36,4 +36,12 @@ npm-debug.log*
 .codetracker/cache/
 
 # claude
-.claude
\ No newline at end of file
+.claude
+
+# python
+venv/
+.venv/
+__pycache__/
+*.pyc
+*.pyo
+*.pyd
\ No newline at end of file
diff --git a/AGENT_SERVICE_GUIDE.md b/AGENT_SERVICE_GUIDE.md
new file mode 100644
index 0000000..d6ca5c7
--- /dev/null
+++ b/AGENT_SERVICE_GUIDE.md
@@ -0,0 +1,503 @@
+# GitGuard Agent Service - Complete Guide
+
+## ๐Ÿ“‹ Overview
+
+The `apps/agent` directory contains a **Python FastAPI microservice** that provides **SpoonOS-powered AI analysis** for git snapshots. It's a separate service that runs independently of the Next.js web app.
+
+---
+
+## ๐ŸŽฏ What is `apps/agent`?
+
+**`apps/agent`** is a **Python-based AI service** that:
+
+1. **Uses the SpoonOS Framework** - Implements a multi-stage AI pipeline using SpoonOS StateGraph
+2. **Provides Advanced AI Analysis** - Uses Claude AI to generate detailed conflict explanations and recovery plans
+3. **Runs as a Separate Service** - FastAPI server on port 8000 (default)
+4. **Optional but Recommended** - The web app works without it (using a fallback), but you get much richer AI analysis with it
+
+---
+
+## ๐Ÿ—๏ธ Architecture
+
+```
+โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
+โ”‚  Next.js Web    โ”‚
+โ”‚  (Port 3000)    โ”‚
+โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
+         โ”‚ HTTP POST
+         โ”‚ /analyze
+         โ–ผ
+โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
+โ”‚  Python Agent   โ”‚
+โ”‚  (Port 8000)    โ”‚
+โ”‚  SpoonOS + AI   โ”‚
+โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
+```
+
+### Two Agent Implementations:
+
+1. **Python Agent (`apps/agent/main.py`)** - Full SpoonOS implementation
+   - Uses SpoonOS StateGraph for the pipeline
+   - Advanced AI analysis with Claude
+   - Multi-stage processing
+   - **RECOMMENDED for hackathon**
+
+2. 
**TypeScript Agent (`apps/web/src/lib/agent/`)** - Fallback implementation + - Works without Python service + - Basic analysis without SpoonOS + - Used when Python agent is unavailable + +--- + +## ๐Ÿ”„ How It Works + +### SpoonOS Pipeline (Python Agent) + +The Python agent uses a **5-stage SpoonOS StateGraph pipeline**: + +``` +1. detect_issue โ†’ Identify problem type (merge_conflict, detached_head, etc.) +2. build_graph โ†’ Create repository visualization +3. extract_conflicts โ†’ Parse conflict files and hunks +4. collect_signals โ†’ Normalize data for AI +5. generate_analysis โ†’ Use Claude AI to generate explanations and plan +``` + +Each stage: +- Processes the state +- Adds traces for debugging +- Passes data to next stage +- Can be visualized in the "SpoonOS Pipeline" tab + +--- + +## ๐Ÿ”Œ Integration with Web App + +### How the Web App Calls the Agent: + +1. **User uploads snapshot** โ†’ `/api/snapshots/ingest` or `/api/sessions` +2. **Web app tries to call Python agent**: + ```typescript + // apps/web/src/app/api/snapshots/ingest/route.ts + const agentResult = await fetch(`${AGENT_URL}/analyze`, { + method: 'POST', + body: JSON.stringify({ snapshot, options }) + }); + ``` +3. **If agent available** โ†’ Uses SpoonOS analysis +4. **If agent unavailable** โ†’ Falls back to basic TypeScript analysis + +### Environment Variable: + +```bash +# In apps/web/.env.local +AGENT_URL=http://localhost:8000 # Default if not set +``` + +--- + +## ๐Ÿš€ How to Start the Agent Service + +### Prerequisites: + +1. **Python 3.8+** installed +2. **SpoonOS installed** (`spoon-ai` package) +3. **Anthropic API Key** (for Claude AI) + +### Step-by-Step Setup: + +#### 1. Navigate to Agent Directory + +```bash +cd apps/agent +``` + +#### 2. Create Virtual Environment + +**Windows:** +```bash +python -m venv venv +venv\Scripts\activate +``` + +**macOS/Linux:** +```bash +python3 -m venv venv +source venv/bin/activate +``` + +#### 3. 
Install Dependencies + +```bash +pip install -r requirements.txt +``` + +This installs: +- `fastapi` - Web framework +- `uvicorn` - ASGI server +- `spoon-ai` - SpoonOS framework โญ +- `anthropic` - Claude AI client +- `pydantic` - Data validation +- `python-dotenv` - Environment variables + +#### 4. Create `.env` File + +Create `apps/agent/.env`: + +```bash +# Required +ANTHROPIC_API_KEY=your_anthropic_api_key_here + +# Optional (defaults shown) +MODEL_NAME=claude-sonnet-4-20250514 +PORT=8000 +HOST=0.0.0.0 +``` + +**Get Anthropic API Key:** +1. Go to https://console.anthropic.com/ +2. Sign up / Log in +3. Create API key +4. Copy to `.env` file + +#### 5. Start the Service + +**Option A: Direct Python (Simple)** +```bash +python main.py +``` + +**Option B: Uvicorn (Development with auto-reload)** +```bash +uvicorn main:app --reload --port 8000 +``` + +**Option C: Production** +```bash +uvicorn main:app --host 0.0.0.0 --port 8000 +``` + +You should see: +``` +Starting GitGuard Agent with SpoonOS on 0.0.0.0:8000 +INFO: Uvicorn running on http://0.0.0.0:8000 +``` + +#### 6. Verify It's Working + +Open another terminal and test: + +```bash +curl http://localhost:8000/health +``` + +Should return: +```json +{ + "status": "healthy", + "service": "gitguard-agent", + "version": "2.0.0", + "framework": "SpoonOS" +} +``` + +--- + +## ๐Ÿงช Testing the Agent + +### Test with a Snapshot: + +1. **Start the agent** (port 8000) +2. **Start the web app** (port 3000) +3. **Upload a snapshot** via the web UI +4. **Check agent logs** - You should see: + ``` + INFO: POST /analyze + INFO: Running SpoonOS pipeline... + INFO: Stage 1: detect_issue completed + INFO: Stage 2: build_graph completed + ... 
+ ``` + +### Manual API Test: + +```bash +curl -X POST http://localhost:8000/analyze \ + -H "Content-Type: application/json" \ + -d @test-snapshot.json +``` + +--- + +## ๐Ÿ“Š What the Agent Does + +### Input: +- Git snapshot JSON (from CLI or web upload) + +### Output: +```json +{ + "success": true, + "analysis": { + "issueType": "merge_conflict", + "riskLevel": "medium", + "summary": "AI-generated explanation...", + "repoGraph": { "nodes": [...], "edges": [...] }, + "conflicts": [ + { + "path": "src/app.ts", + "hunks": [...], + "whatHappened": "AI explanation...", + "whyConflict": "AI explanation...", + "recommendation": "keep_ours or combine..." + } + ], + "planSteps": [ + { + "title": "Step 1: Review conflicts", + "commands": ["git status"], + "undo": { "commands": [...], "description": "..." } + } + ], + "signals": { ... } + }, + "durationMs": 2345, + "pipelineTraces": [ + { "stage": "detect_issue", "duration_ms": 10, ... }, + { "stage": "build_graph", "duration_ms": 15, ... }, + ... + ] +} +``` + +--- + +## ๐Ÿ” Key Files Explained + +### `apps/agent/main.py` (809 lines) + +**Main Components:** + +1. **SpoonOS Graph Definition** (lines 50-71) + - `GitAnalysisState` - State that flows through pipeline + +2. **SpoonOS Tools** (lines 77-264) + - `DetectIssueTool` - Detects issue type + - `BuildGraphTool` - Creates repo visualization + - `ExtractConflictsTool` - Parses conflicts + +3. **SpoonOS Graph Nodes** (lines 270-494) + - `detect_issue_node()` - Stage 1 + - `build_graph_node()` - Stage 2 + - `extract_conflicts_node()` - Stage 3 + - `collect_signals_node()` - Stage 4 + - `generate_analysis_node()` - Stage 5 (uses Claude AI) + +4. **Graph Assembly** (lines 629-654) + - Creates StateGraph + - Connects nodes in sequence + - Compiles the pipeline + +5. 
**API Endpoints** (lines 684-801) + - `GET /health` - Health check + - `POST /analyze` - Main analysis endpoint + - `POST /explain/conflict` - Conflict explanation + +### `apps/agent/requirements.txt` + +Lists all Python dependencies including SpoonOS. + +--- + +## โš ๏ธ Common Issues & Solutions + +### Issue 1: "ModuleNotFoundError: No module named 'spoon_ai'" + +**Solution:** +```bash +pip install spoon-ai +# Or reinstall all dependencies: +pip install -r requirements.txt +``` + +### Issue 2: "ANTHROPIC_API_KEY not found" + +**Solution:** +1. Create `.env` file in `apps/agent/` +2. Add: `ANTHROPIC_API_KEY=your_key_here` +3. Restart the service + +### Issue 3: "Connection refused" from web app + +**Solution:** +1. Make sure agent is running: `curl http://localhost:8000/health` +2. Check `AGENT_URL` in web app's `.env.local` +3. Check CORS settings in `main.py` (line 32) - should allow `localhost:3000` + +### Issue 4: Agent works but web app doesn't use it + +**Solution:** +- Check web app logs for errors +- Verify `AGENT_URL` environment variable +- The web app falls back silently if agent fails - check console logs + +--- + +## ๐ŸŽฏ For Hackathon: Why Use the Agent? + +### โœ… Advantages: + +1. **SpoonOS Integration** - Required for hackathon track +2. **Better AI Analysis** - More detailed explanations +3. **Pipeline Visualization** - Shows SpoonOS stages in UI +4. **Professional Architecture** - Microservice pattern +5. **Judging Points** - Demonstrates framework mastery + +### โš ๏ธ Without Agent: + +- Web app still works (uses TypeScript fallback) +- But you lose: + - SpoonOS pipeline visualization + - Advanced AI explanations + - Framework integration points + +--- + +## ๐Ÿ”ง Development Workflow + +### Typical Development: + +1. **Terminal 1**: Run Python agent + ```bash + cd apps/agent + source venv/bin/activate # or venv\Scripts\activate on Windows + uvicorn main:app --reload --port 8000 + ``` + +2. 
**Terminal 2**: Run Next.js web app + ```bash + cd apps/web + pnpm dev + ``` + +3. **Terminal 3**: Test with CLI + ```bash + cd some-git-repo + gitguard snapshot > snapshot.json + # Upload via web UI or use API + ``` + +### Hot Reload: + +- **Python agent**: Use `--reload` flag with uvicorn +- **Next.js**: Auto-reloads on file changes + +--- + +## ๐Ÿ“ Environment Variables Summary + +### `apps/agent/.env`: +```bash +ANTHROPIC_API_KEY=sk-ant-... # Required +MODEL_NAME=claude-sonnet-4-20250514 # Optional +PORT=8000 # Optional +HOST=0.0.0.0 # Optional +``` + +### `apps/web/.env.local`: +```bash +AGENT_URL=http://localhost:8000 # Optional (defaults to this) +ANTHROPIC_API_KEY=sk-ant-... # For fallback TypeScript agent +``` + +--- + +## ๐ŸŽ“ Understanding SpoonOS Integration + +### What is SpoonOS? + +SpoonOS is an **agentic AI framework** that provides: +- **StateGraph** - Multi-stage processing pipelines +- **Tools** - Reusable AI tools +- **Agents** - AI agents with tool-calling +- **ChatBot** - LLM integration + +### How We Use It: + +1. **StateGraph** - Our 5-stage pipeline +2. **Tools** - Custom tools for git analysis +3. **ChatBot** - Claude AI integration +4. **Traces** - Track each stage's output + +### Why It Matters for Hackathon: + +- **Judging Criteria**: "Utilization of Spoon OS" +- **Track Relevance**: Agentic Infrastructure track +- **Technical Excellence**: Shows framework mastery + +--- + +## ๐Ÿšจ Quick Start Checklist + +- [ ] Python 3.8+ installed +- [ ] Navigate to `apps/agent` +- [ ] Create virtual environment: `python -m venv venv` +- [ ] Activate: `venv\Scripts\activate` (Windows) or `source venv/bin/activate` (Mac/Linux) +- [ ] Install: `pip install -r requirements.txt` +- [ ] Create `.env` with `ANTHROPIC_API_KEY` +- [ ] Start: `python main.py` or `uvicorn main:app --reload` +- [ ] Test: `curl http://localhost:8000/health` +- [ ] Start web app in another terminal +- [ ] Upload snapshot and verify agent is used + +--- + +## ๐Ÿ’ก Pro Tips + +1. 
**Keep Agent Running**: Start it first, then web app +2. **Check Logs**: Agent logs show each pipeline stage +3. **Test Health Endpoint**: Quick way to verify it's running +4. **Use Fallback**: Web app works without agent, but less powerful +5. **Monitor API Calls**: Check network tab in browser devtools + +--- + +## ๐Ÿ“š Additional Resources + +- **SpoonOS Docs**: https://xspoonai.github.io/docs/ +- **FastAPI Docs**: https://fastapi.tiangolo.com/ +- **Anthropic API**: https://docs.anthropic.com/ + +--- + +## โ“ FAQ + +**Q: Do I need the agent for the hackathon?** +A: **YES!** It's required to demonstrate SpoonOS usage. + +**Q: Can I run without Anthropic API key?** +A: Agent won't work without it. Get a free key from Anthropic. + +**Q: What if agent fails?** +A: Web app falls back to TypeScript agent, but you lose SpoonOS features. + +**Q: Can I modify the pipeline?** +A: Yes! Edit `main.py` to add/remove stages or modify logic. + +**Q: How do I debug?** +A: Check agent logs, web app console, and use `/health` endpoint. + +--- + +## ๐ŸŽ‰ Summary + +The `apps/agent` service is: +- โœ… **Python FastAPI microservice** +- โœ… **Uses SpoonOS StateGraph** +- โœ… **Provides advanced AI analysis** +- โœ… **Runs on port 8000** +- โœ… **Called by web app via HTTP** +- โœ… **Optional but recommended for hackathon** + +**Start it, keep it running, and enjoy better AI-powered git recovery!** ๐Ÿš€ + diff --git a/DEPLOYMENT_GUIDE.md b/DEPLOYMENT_GUIDE.md new file mode 100644 index 0000000..2607902 --- /dev/null +++ b/DEPLOYMENT_GUIDE.md @@ -0,0 +1,851 @@ +# ๐Ÿš€ GitGuard Complete Deployment Guide + +**From Zero to Production** - Deploy GitGuard to your VPS with your Hostinger domain. + +--- + +## ๐Ÿ“‹ Prerequisites + +- โœ… VPS with Ubuntu/Debian (Hostinger KVM2 or any Linux VPS) +- โœ… Domain name from Hostinger +- โœ… SSH access to your VPS +- โœ… Basic terminal knowledge + +--- + +## ๐ŸŽฏ Overview + +You'll deploy: +1. **Next.js Web App** (port 3000) - Node.js/PM2 +2. 
**Python Agent Service** (port 8000) - FastAPI/Systemd +3. **Nginx Reverse Proxy** (port 80/443) - Public access +4. **Database** - SQLite (or PostgreSQL) + +**All services will auto-start on boot and stay running even when terminal is closed!** + +--- + +## ๐Ÿ“ Step 1: Domain DNS Configuration (Hostinger) + +### 1.1 Get Your VPS IP Address + +```bash +# On your VPS, run: +curl ifconfig.me +# Or check in Hostinger VPS panel +``` + +**Note your VPS IP address** (e.g., `123.45.67.89`) + +### 1.2 Configure DNS in Hostinger + +1. **Login to Hostinger** โ†’ Go to **Domains** โ†’ Select your domain +2. **Go to DNS Zone Editor** or **DNS Management** +3. **Add/Edit DNS Records:** + + **For main domain:** + ``` + Type: A + Name: @ (or leave blank) + Points to: YOUR_VPS_IP (e.g., 123.45.67.89) + TTL: 3600 (or default) + ``` + + **For www subdomain:** + ``` + Type: A + Name: www + Points to: YOUR_VPS_IP (e.g., 123.45.67.89) + TTL: 3600 (or default) + ``` + +4. **Save** and wait 5-30 minutes for DNS propagation + +### 1.3 Verify DNS (Optional) + +```bash +# On your local machine, check: +nslookup yourdomain.com +# Should return your VPS IP + +# Or use online tool: +# https://dnschecker.org +``` + +--- + +## ๐Ÿ–ฅ๏ธ Step 2: Initial VPS Setup + +### 2.1 Connect to VPS + +```bash +# From your local machine: +ssh root@YOUR_VPS_IP +# Or if you have a user: +ssh username@YOUR_VPS_IP + +# If first time, you may need to accept the host key +``` + +### 2.2 Update System + +```bash +sudo apt update && sudo apt upgrade -y +``` + +### 2.3 Install Required Software + +```bash +# Node.js 20 (LTS) +curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash - +sudo apt install -y nodejs + +# Python 3.11+ +sudo apt install -y python3 python3-pip python3-venv python3-dev + +# Nginx (web server/reverse proxy) +sudo apt install -y nginx + +# Git (to clone repository) +sudo apt install -y git + +# Build tools (for native modules) +sudo apt install -y build-essential + +# PM2 (process 
manager for Node.js) +sudo npm install -g pnpm pm2 + +# Verify installations +node --version # Should be v20.x.x +python3 --version # Should be 3.11+ +nginx -v +pm2 --version +pnpm --version +``` + +### 2.4 Create Application Directory + +```bash +sudo mkdir -p /var/www/gitguardian +sudo chown -R $USER:$USER /var/www/gitguardian +``` + +--- + +## ๐Ÿ“ฅ Step 3: Deploy Your Code + +### 3.1 Upload Code to VPS + +**Option A: Clone from Git (Recommended)** + +```bash +cd /var/www/gitguardian +git clone https://github.com/YOUR_USERNAME/gitguardian.git . +# Or if private repo, use SSH: +# git clone git@github.com:YOUR_USERNAME/gitguardian.git . +``` + +**Option B: Upload via SCP (from local machine)** + +```bash +# From your local machine (in project root): +scp -r . root@YOUR_VPS_IP:/var/www/gitguardian/ +``` + +**Option C: Use rsync (from local machine)** + +```bash +# From your local machine (in project root): +rsync -avz --exclude 'node_modules' --exclude '.git' --exclude 'venv' \ + . root@YOUR_VPS_IP:/var/www/gitguardian/ +``` + +### 3.2 Install Dependencies + +```bash +cd /var/www/gitguardian + +# Install all dependencies +pnpm install + +# Build all packages +pnpm build +``` + +**This may take 5-10 minutes depending on your VPS specs.** + +--- + +## ๐Ÿ Step 4: Setup Python Agent Service + +### 4.1 Create Virtual Environment + +```bash +cd /var/www/gitguardian/apps/agent +python3 -m venv venv +source venv/bin/activate + +# Upgrade pip +pip install --upgrade pip + +# Install dependencies +pip install -r requirements.txt +``` + +### 4.2 Create Environment File + +```bash +cd /var/www/gitguardian/apps/agent +nano .env +``` + +**Add the following (replace with your actual values):** + +```bash +# Anthropic API Key (get from https://console.anthropic.com/) +ANTHROPIC_API_KEY=sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx + +# Model Configuration +MODEL_NAME=claude-sonnet-4-20250514 + +# Server Configuration +PORT=8000 +HOST=0.0.0.0 + +# 
Logging +LOG_LEVEL=INFO +``` + +**Save:** `Ctrl+O`, `Enter`, `Ctrl+X` + +### 4.3 Test Python Agent + +```bash +cd /var/www/gitguardian/apps/agent +source venv/bin/activate +python main.py +``` + +**You should see:** +``` +๐Ÿš€ GitGuard Agent Service Starting + Host: 0.0.0.0 + Port: 8000 +INFO: Uvicorn running on http://0.0.0.0:8000 +``` + +**Press `Ctrl+C` to stop** (we'll set it up as a service next) + +### 4.4 Create Systemd Service + +```bash +sudo nano /etc/systemd/system/gitguard-agent.service +``` + +**Paste this (adjust paths if needed):** + +```ini +[Unit] +Description=GitGuard Agent Service (SpoonOS AI Pipeline) +After=network.target + +[Service] +Type=simple +User=www-data +Group=www-data +WorkingDirectory=/var/www/gitguardian/apps/agent +Environment="PATH=/var/www/gitguardian/apps/agent/venv/bin:/usr/local/bin:/usr/bin:/bin" +EnvironmentFile=/var/www/gitguardian/apps/agent/.env +ExecStart=/var/www/gitguardian/apps/agent/venv/bin/python main.py +Restart=always +RestartSec=10 +StandardOutput=journal +StandardError=journal +SyslogIdentifier=gitguard-agent + +[Install] +WantedBy=multi-user.target +``` + +**Save:** `Ctrl+O`, `Enter`, `Ctrl+X` + +### 4.5 Enable and Start Service + +```bash +# Reload systemd +sudo systemctl daemon-reload + +# Enable service (auto-start on boot) +sudo systemctl enable gitguard-agent + +# Start service +sudo systemctl start gitguard-agent + +# Check status +sudo systemctl status gitguard-agent +``` + +**You should see:** `Active: active (running)` + +**View logs:** +```bash +sudo journalctl -u gitguard-agent -f +``` + +**Press `Ctrl+C` to exit logs** + +--- + +## ๐ŸŒ Step 5: Setup Next.js Web App + +### 5.1 Create Environment File + +```bash +cd /var/www/gitguardian/apps/web +nano .env.local +``` + +**Add the following (replace with your actual values):** + +```bash +# Database (SQLite - simple, or use PostgreSQL for production) +DATABASE_URL="file:./gitguard.db" + +# NextAuth Configuration +NEXTAUTH_URL="https://yourdomain.com" 
+NEXTAUTH_SECRET="GENERATE_RANDOM_SECRET_HERE"
+
+# Python Agent Service
+AGENT_URL="http://localhost:8000"
+
+# Optional: Anthropic API Key (for fallback)
+ANTHROPIC_API_KEY="sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+
+# Node Environment
+NODE_ENV="production"
+```
+
+**Generate NEXTAUTH_SECRET:**
+```bash
+openssl rand -base64 32
+# Copy the output and paste it as NEXTAUTH_SECRET
+```
+
+**Save:** `Ctrl+O`, `Enter`, `Ctrl+X`
+
+### 5.2 Setup Database
+
+```bash
+cd /var/www/gitguardian/apps/web
+
+# Generate Prisma client
+pnpm exec prisma generate
+
+# Run database migrations
+pnpm exec prisma migrate deploy
+
+# Or if first time:
+# pnpm exec prisma db push
+```
+
+### 5.3 Update PM2 Configuration
+
+```bash
+cd /var/www/gitguardian
+nano ecosystem.config.js
+```
+
+**Update with your actual values:**
+
+```javascript
+module.exports = {
+  apps: [
+    {
+      name: 'gitguard-web',
+      script: 'node_modules/.bin/next', // resolved relative to cwd below
+      args: 'start',
+      cwd: '/var/www/gitguardian/apps/web',
+      instances: 1,
+      exec_mode: 'fork',
+      env: {
+        NODE_ENV: 'production',
+        PORT: 3000,
+        AGENT_URL: 'http://localhost:8000',
+        NEXTAUTH_URL: 'https://yourdomain.com', // โ† Change this
+        NEXTAUTH_SECRET: 'your-generated-secret-here', // โ† Change this
+        DATABASE_URL: 'file:./gitguard.db',
+      },
+      error_file: '/var/log/gitguard/web-error.log',
+      out_file: '/var/log/gitguard/web-out.log',
+      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
+      merge_logs: true,
+      autorestart: true,
+      watch: false,
+      max_memory_restart: '1G',
+    }
+  ]
+};
+```
+
+**Save:** `Ctrl+O`, `Enter`, `Ctrl+X`
+
+### 5.4 Create Log Directory
+
+```bash
+sudo mkdir -p /var/log/gitguard
+sudo chown -R $USER:$USER /var/log/gitguard
+```
+
+### 5.5 Start Web App with PM2
+
+```bash
+cd /var/www/gitguardian
+
+# Start the app
+pm2 start ecosystem.config.js
+
+# Save PM2 configuration
+pm2 save
+
+# Setup PM2 to start on boot
+pm2 startup
+# Follow the instructions it prints (usually run a sudo 
command) +``` + +**Check status:** +```bash +pm2 status +pm2 logs gitguard-web +``` + +**Press `Ctrl+C` to exit logs** + +--- + +## ๐Ÿ”’ Step 6: Setup Nginx Reverse Proxy + +### 6.1 Create Nginx Configuration + +```bash +sudo nano /etc/nginx/sites-available/gitguard +``` + +**Paste this (replace `yourdomain.com` with your actual domain):** + +```nginx +# HTTP Server - Redirect to HTTPS +server { + listen 80; + listen [::]:80; + server_name yourdomain.com www.yourdomain.com; + + # For Let's Encrypt verification (before SSL) + location /.well-known/acme-challenge/ { + root /var/www/html; + } + + # Redirect all HTTP to HTTPS (uncomment after SSL setup) + # return 301 https://$server_name$request_uri; + + # Temporary: Allow HTTP (comment out after SSL setup) + location / { + proxy_pass http://localhost:3000; + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection 'upgrade'; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Proto $scheme; + proxy_cache_bypass $http_upgrade; + proxy_read_timeout 300s; + proxy_connect_timeout 75s; + } +} + +# HTTPS Server (uncomment after SSL setup) +# server { +# listen 443 ssl http2; +# listen [::]:443 ssl http2; +# server_name yourdomain.com www.yourdomain.com; +# +# ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; +# ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; +# +# # SSL Configuration +# ssl_protocols TLSv1.2 TLSv1.3; +# ssl_ciphers HIGH:!aNULL:!MD5; +# ssl_prefer_server_ciphers on; +# +# location / { +# proxy_pass http://localhost:3000; +# proxy_http_version 1.1; +# proxy_set_header Upgrade $http_upgrade; +# proxy_set_header Connection 'upgrade'; +# proxy_set_header Host $host; +# proxy_set_header X-Real-IP $remote_addr; +# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; +# proxy_set_header X-Forwarded-Proto $scheme; 
+# proxy_cache_bypass $http_upgrade; +# proxy_read_timeout 300s; +# proxy_connect_timeout 75s; +# } +# } +``` + +**Save:** `Ctrl+O`, `Enter`, `Ctrl+X` + +**Replace `yourdomain.com` with your actual domain!** + +### 6.2 Enable Site + +```bash +# Create symbolic link +sudo ln -s /etc/nginx/sites-available/gitguard /etc/nginx/sites-enabled/ + +# Remove default site (optional) +sudo rm /etc/nginx/sites-enabled/default + +# Test configuration +sudo nginx -t +``` + +**Should see:** `syntax is ok` and `test is successful` + +### 6.3 Start Nginx + +```bash +# Enable Nginx on boot +sudo systemctl enable nginx + +# Start Nginx +sudo systemctl restart nginx + +# Check status +sudo systemctl status nginx +``` + +--- + +## ๐Ÿ” Step 7: Setup SSL Certificate (HTTPS) + +### 7.1 Install Certbot + +```bash +sudo apt install -y certbot python3-certbot-nginx +``` + +### 7.2 Get SSL Certificate + +```bash +# Replace yourdomain.com with your actual domain +sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com +``` + +**Follow the prompts:** +- Enter your email address +- Agree to terms +- Choose whether to redirect HTTP to HTTPS (recommended: Yes) + +### 7.3 Update Nginx Config for HTTPS + +After Certbot runs, it will automatically update your Nginx config. You can also manually uncomment the HTTPS server block in `/etc/nginx/sites-available/gitguard`. + +**Test and reload:** +```bash +sudo nginx -t +sudo systemctl reload nginx +``` + +### 7.4 Auto-Renewal (Already Configured) + +Certbot automatically sets up renewal. Test it: + +```bash +sudo certbot renew --dry-run +``` + +--- + +## ๐Ÿ”ฅ Step 8: Configure Firewall + +### 8.1 Setup UFW (Ubuntu Firewall) + +```bash +# Allow SSH (IMPORTANT - do this first!) 
+sudo ufw allow 22/tcp + +# Allow HTTP +sudo ufw allow 80/tcp + +# Allow HTTPS +sudo ufw allow 443/tcp + +# Enable firewall +sudo ufw enable + +# Check status +sudo ufw status +``` + +**Note:** Ports 3000 and 8000 should NOT be exposed - only access via Nginx (port 80/443) + +--- + +## โœ… Step 9: Verify Everything Works + +### 9.1 Check All Services + +```bash +# Python Agent +sudo systemctl status gitguard-agent +curl http://localhost:8000/health # Should return response + +# Web App +pm2 status +curl http://localhost:3000 # Should return HTML + +# Nginx +sudo systemctl status nginx +``` + +### 9.2 Test from Browser + +1. **Open your browser** +2. **Visit:** `http://yourdomain.com` (or `https://yourdomain.com` if SSL is set up) +3. **You should see the GitGuard web interface!** + +### 9.3 Test API Endpoints + +```bash +# Test Python agent directly +curl http://localhost:8000/health + +# Test web app API +curl http://localhost:3000/api/auth/session +``` + +--- + +## ๐Ÿ”„ Step 10: Verify Auto-Start on Boot + +### 10.1 Test Reboot + +```bash +# Reboot the server +sudo reboot +``` + +**Wait 2-3 minutes, then SSH back in and check:** + +```bash +# Check all services +sudo systemctl status gitguard-agent +pm2 status +sudo systemctl status nginx + +# All should show as "active" or "online" +``` + +### 10.2 Verify Services are Enabled + +```bash +# Python Agent +sudo systemctl is-enabled gitguard-agent # Should return "enabled" + +# Nginx +sudo systemctl is-enabled nginx # Should return "enabled" + +# PM2 (check if startup script exists) +pm2 startup +``` + +--- + +## ๐Ÿ› ๏ธ Maintenance Commands + +### View Logs + +```bash +# Python Agent logs +sudo journalctl -u gitguard-agent -f + +# Web App logs +pm2 logs gitguard-web + +# Nginx logs +sudo tail -f /var/log/nginx/access.log +sudo tail -f /var/log/nginx/error.log +``` + +### Restart Services + +```bash +# Restart Python Agent +sudo systemctl restart gitguard-agent + +# Restart Web App +pm2 restart gitguard-web + +# 
Restart Nginx
+sudo systemctl restart nginx
+
+# Restart everything
+sudo systemctl restart gitguard-agent && pm2 restart gitguard-web && sudo systemctl restart nginx
+```
+
+### Update Code
+
+```bash
+cd /var/www/gitguardian
+
+# Pull latest code
+git pull
+
+# Install new dependencies
+pnpm install
+
+# Rebuild
+pnpm build
+
+# Restart services
+pm2 restart gitguard-web
+sudo systemctl restart gitguard-agent
+```
+
+---
+
+## ๐Ÿšจ Troubleshooting
+
+### Service Won't Start
+
+```bash
+# Check logs
+sudo journalctl -u gitguard-agent -n 50
+pm2 logs gitguard-web --err
+
+# Check permissions
+ls -la /var/www/gitguardian
+sudo chown -R www-data:www-data /var/www/gitguardian
+```
+
+### Port Already in Use
+
+```bash
+# Find what's using port 3000
+sudo lsof -i :3000
+
+# Find what's using port 8000
+sudo lsof -i :8000
+
+# Kill process if needed (replace <PID> with the process ID from lsof)
+sudo kill -9 <PID>
+```
+
+### Nginx 502 Bad Gateway
+
+```bash
+# Check if services are running
+pm2 status
+sudo systemctl status gitguard-agent
+
+# Check Nginx config
+sudo nginx -t
+
+# Restart Nginx
+sudo systemctl restart nginx
+```
+
+### Database Issues
+
+```bash
+# SQLite permissions
+sudo chmod 666 /var/www/gitguardian/apps/web/gitguard.db
+sudo chown www-data:www-data /var/www/gitguardian/apps/web/gitguard.db
+
+# Regenerate Prisma client
+cd /var/www/gitguardian/apps/web
+pnpm exec prisma generate
+```
+
+### DNS Not Working
+
+```bash
+# Check DNS propagation
+nslookup yourdomain.com
+
+# Check if domain points to your VPS IP
+dig yourdomain.com +short
+
+# Wait longer (DNS can take up to 48 hours, usually 5-30 minutes)
+```
+
+---
+
+## ๐Ÿ“Š Quick Reference
+
+### Service Management
+
+| Service | Start | Stop | Restart | Status | Logs |
+|---------|-------|------|---------|--------|------|
+| Python Agent | `sudo systemctl start gitguard-agent` | `sudo systemctl stop gitguard-agent` | `sudo systemctl restart gitguard-agent` | `sudo systemctl status gitguard-agent` | `sudo journalctl -u gitguard-agent -f` |
+| Web 
App | `pm2 start gitguard-web` | `pm2 stop gitguard-web` | `pm2 restart gitguard-web` | `pm2 status` | `pm2 logs gitguard-web` | +| Nginx | `sudo systemctl start nginx` | `sudo systemctl stop nginx` | `sudo systemctl restart nginx` | `sudo systemctl status nginx` | `sudo tail -f /var/log/nginx/error.log` | + +### File Locations + +- **Code:** `/var/www/gitguardian` +- **Python Agent:** `/var/www/gitguardian/apps/agent` +- **Web App:** `/var/www/gitguardian/apps/web` +- **Logs:** `/var/log/gitguard/` +- **Systemd Service:** `/etc/systemd/system/gitguard-agent.service` +- **Nginx Config:** `/etc/nginx/sites-available/gitguard` +- **Environment Files:** + - `/var/www/gitguardian/apps/agent/.env` + - `/var/www/gitguardian/apps/web/.env.local` + +--- + +## โœ… Final Checklist + +- [ ] Domain DNS configured (A record pointing to VPS IP) +- [ ] All software installed (Node.js, Python, Nginx, PM2) +- [ ] Code deployed to `/var/www/gitguardian` +- [ ] Python agent environment file created (`.env`) +- [ ] Python agent running (systemd service) +- [ ] Web app environment file created (`.env.local`) +- [ ] Database initialized (Prisma migrations) +- [ ] Web app running (PM2) +- [ ] Nginx configured and running +- [ ] SSL certificate installed (optional but recommended) +- [ ] Firewall configured (UFW) +- [ ] All services tested +- [ ] Auto-start on boot verified +- [ ] Website accessible from browser + +--- + +## ๐ŸŽ‰ You're Done! 
+
+Your GitGuard application is now:
+- ✅ **Publicly accessible** at `https://yourdomain.com`
+- ✅ **Running 24/7** - services auto-start on boot
+- ✅ **Survives terminal closure** - all services run in background
+- ✅ **Secure** - HTTPS enabled (if SSL configured)
+- ✅ **Monitored** - logs available for debugging
+
+**Access your app:**
+- 🌐 **Website:** `https://yourdomain.com`
+- 📊 **Monitor:** `pm2 monit` (web app) or `sudo journalctl -u gitguard-agent -f` (agent)
+
+**Need help?** Check the troubleshooting section above or view the logs!
+
+---
+
+## 📚 Additional Resources
+
+- [PM2 Documentation](https://pm2.keymetrics.io/)
+- [Systemd Service Guide](https://www.freedesktop.org/software/systemd/man/systemd.service.html)
+- [Nginx Configuration](https://nginx.org/en/docs/)
+- [Let's Encrypt Certbot](https://certbot.eff.org/)
+- [Hostinger DNS Guide](https://www.hostinger.com/tutorials/how-to-update-dns-nameservers)
diff --git a/HACKATHON_EVALUATION.md b/HACKATHON_EVALUATION.md
new file mode 100644
index 0000000..e1d34ea
--- /dev/null
+++ b/HACKATHON_EVALUATION.md
@@ -0,0 +1,306 @@
+# GitGuard Agent - Hackathon Evaluation & Improvement Suggestions
+
+## 🎯 Overall Assessment: **STRONG WIN POTENTIAL** ⭐⭐⭐⭐
+
+Your project has excellent fundamentals and strong technical implementation. With the suggested improvements below, you have a very good chance of winning, especially in the **Agentic Infrastructure & Productivity AI** track.
+
+---
+
+## 📊 Evaluation Against Hackathon Criteria
+
+### 1. Technical Excellence & Quality (30pts) - **Current: 24/30** → **Target: 28/30**
+
+#### ✅ Strengths:
+- **SpoonOS Integration**: Excellent use of SpoonOS StateGraph architecture with proper pipeline stages
+- **Cross-platform CLI**: Well-designed Node.js CLI that works on Windows/macOS/Linux
+- **Type Safety**: Strong TypeScript usage with Zod schemas for validation
+- **Database Design**: Clean Prisma schema with proper relationships
+- **Agent Pipeline**: Well-structured collector → classifier → planner → verifier flow
+
+#### ⚠️ Must Fix/Improve:
+1. **CRITICAL**: Ensure SpoonOS is properly installed and configured
+   - Add installation verification script
+   - Document SpoonOS version requirements
+   - Add health check endpoint that verifies SpoonOS availability
+
+2. **Error Handling**: Add comprehensive error boundaries
+   - Wrap API routes in try-catch with proper error responses
+   - Add retry logic for LLM calls
+   - Handle network failures gracefully
+
+3. **Testing**: Add at least basic integration tests
+   - Test snapshot parsing
+   - Test agent pipeline stages
+   - Test API endpoints
+
+4. **Performance**: Optimize LLM calls
+   - Add caching for similar snapshots
+   - Batch conflict explanations
+   - Add request queuing for high load
+
+---
+
+### 2. Creativity & Innovation (25pts) - **Current: 20/25** → **Target: 24/25**
+
+#### ✅ Strengths:
+- **Reversible by Design**: Unique safety-first approach with explicit undo paths
+- **Visual Conflict Explorer**: Innovative diff visualization with AI explanations
+- **SpoonOS Trace Visualization**: Great transparency showing AI decision-making
+- **Reflog Recovery**: Smart fallback mechanism
+
+#### ⚠️ Must Add:
+1. **AI Conflict Resolution Suggestions**:
+   - Show AI-generated code snippets for conflict resolution
+   - Provide multiple resolution strategies per conflict
+   - Add confidence scores for suggestions
+
+2. **Interactive Recovery Plan**:
+   - Allow users to reorder steps
+   - Add "what-if" simulation mode
+   - Show dependency graph between steps
+
+3. **Smart Branch Recommendations**:
+   - Suggest optimal branch names for detached HEAD recovery
+   - Recommend merge vs rebase strategies
+   - Predict potential future conflicts
+
+4. **Collaborative Features**:
+   - Share recovery sessions with team members
+   - Comment on conflicts
+   - Team history of resolved conflicts
+
+---
+
+### 3. Real-World Impact (20pts) - **Current: 16/20** → **Target: 19/20**
+
+#### ✅ Strengths:
+- **Clear Problem**: Git disasters are a real, painful problem for developers
+- **Practical Solution**: Addresses merge conflicts, detached HEAD, rebases
+- **Safety Focus**: Reversible operations reduce fear of using the tool
+
+#### ⚠️ Must Add:
+1. **Metrics Dashboard**:
+   - Show time saved per recovery
+   - Track success rate of recovery plans
+   - Display common issue patterns
+
+2. **Integration with Git Providers**:
+   - GitHub/GitLab/Bitbucket integration
+   - Auto-detect conflicts from PRs
+   - Suggest recovery before merge
+
+3. **CLI Improvements**:
+   - Add `gitguard watch` mode for continuous monitoring
+   - Auto-snapshot on conflict detection
+   - Git hooks integration
+
+4. **Documentation**:
+   - Video demo showing real-world scenarios
+   - Case studies of recovered repositories
+   - Comparison with manual recovery time
+
+---
+
+### 4. Planning & Design/UX (15pts) - **Current: 12/15** → **Target: 14/15**
+
+#### ✅ Strengths:
+- **Modern UI**: Clean, professional design with good color scheme
+- **Responsive Layout**: Works on different screen sizes
+- **Clear Navigation**: Intuitive tab structure
+- **Visual Feedback**: Good loading states and progress indicators
+
+#### ⚠️ Must Fix/Improve:
+1. **CRITICAL - Mobile Responsiveness**:
+   - Fix conflict diff viewer on mobile (currently may overflow)
+   - Make code blocks scrollable horizontally
+   - Improve touch interactions
+
+2. **Accessibility**:
+   - Add ARIA labels to all interactive elements
+   - Ensure keyboard navigation works
+   - Add focus indicators
+   - Test with screen readers
+
+3. **Onboarding**:
+   - Add interactive tutorial for first-time users
+   - Show example snapshots
+   - Quick start guide
+
+4. **Error Messages**:
+   - Make error messages more user-friendly
+   - Add troubleshooting tips
+   - Provide recovery suggestions for errors
+
+5. **Loading States**:
+   - Add skeleton loaders instead of spinners
+   - Show estimated time for long operations
+   - Allow cancellation of long-running operations
+
+6. **Empty States**:
+   - Improve empty state designs with helpful CTAs
+   - Add sample data for demo purposes
+
+---
+
+### 5. Track Relevance (10pts) - **Current: 9/10** → **Target: 10/10**
+
+#### ✅ Strengths:
+- **Perfect Fit**: Agentic Infrastructure & Productivity AI track
+- **SpoonOS Integration**: Demonstrates proper framework usage
+- **Productivity Focus**: Saves developers time and reduces stress
+
+#### ⚠️ Minor Improvements:
+1. **Emphasize SpoonOS Usage**:
+   - Add SpoonOS logo/badge prominently
+   - Show SpoonOS graph visualization more prominently
+   - Document SpoonOS-specific features
+
+2. **Agent Capabilities**:
+   - Highlight multi-agent scenarios (if applicable)
+   - Show agent reasoning process more clearly
+   - Demonstrate agent learning from user feedback
+
+---
+
+## 🚀 MUST-DO Before Submission (Priority Order)
+
+### 🔴 Critical (Do First):
+1. **Fix All Emoji Icons** ✅ DONE
+2. **Verify SpoonOS Installation & Configuration**
+   - Test that agent service starts correctly
+   - Verify all SpoonOS imports work
+   - Add health check that confirms SpoonOS availability
+
+3. **Fix Mobile Responsiveness**
+   - Test on actual mobile devices
+   - Fix horizontal scrolling issues
+   - Improve touch targets
+
+4. **Add Error Handling**
+   - Wrap all API routes in try-catch
+   - Add user-friendly error messages
+   - Handle LLM API failures gracefully
+
+5. **Test End-to-End Flow**
+   - Create test snapshot
+   - Upload and verify analysis works
+   - Test recovery plan generation
+   - Test verification flow
+
+### 🟡 High Priority (Do Second):
+6. **Add Demo Video**
+   - Record 2-3 minute demo showing real conflict resolution
+   - Show SpoonOS pipeline in action
+   - Highlight key features
+
+7. **Improve Documentation**
+   - Update README with clear setup instructions
+   - Add screenshots to README
+   - Document SpoonOS integration
+
+8. **Add Metrics/Stats**
+   - Show recovery success rate
+   - Display time saved
+   - Add usage statistics
+
+9. **Polish UI/UX**
+   - Add loading skeletons
+   - Improve empty states
+   - Add tooltips for complex features
+   - Improve color contrast for accessibility
+
+### 🟢 Nice to Have (If Time Permits):
+10. **Add AI Conflict Resolution Suggestions**
+11. **Add GitHub Integration Demo**
+12. **Add More Visualizations**
+13. **Add Keyboard Shortcuts**
+14. **Add Dark/Light Mode Toggle**
+
+---
+
+## 📝 Submission Checklist
+
+### Required Items:
+- [x] Team Member Names
+- [x] Contact Information
+- [ ] **Project Name**: GitGuard Agent (or similar)
+- [ ] **Project Summary**: 2-3 sentence elevator pitch
+- [ ] **Pitch Deck/Slides**: Create compelling presentation
+  - Problem statement
+  - Solution overview
+  - SpoonOS integration highlights
+  - Demo screenshots
+  - Real-world impact metrics
+- [ ] **GitHub Repository**: Ensure it's public and well-organized
+- [ ] **Website/Prototype Link**: Deploy to Vercel/Netlify
+- [ ] **SpoonOS Framework Usage**: YES (document this clearly)
+
+### Pitch Deck Suggestions:
+1. **Slide 1**: Title + Tagline ("Never Lose Your Work to Git Disasters")
+2. **Slide 2**: Problem - Show stats on git conflicts, developer pain
+3. **Slide 3**: Solution - GitGuard Agent overview
+4. **Slide 4**: SpoonOS Integration - Show pipeline architecture
+5. **Slide 5**: Key Features - Visual conflict explorer, reversible plans
+6. **Slide 6**: Demo Screenshots - Before/after recovery
+7. **Slide 7**: Real-World Impact - Time saved, success stories
+8. **Slide 8**: Technical Excellence - Architecture, safety features
+9. **Slide 9**: Future Roadmap - Integrations, enhancements
+10. **Slide 10**: Thank You + Contact
+
+---
+
+## 🎯 Winning Strategy
+
+### For Grand Prix ($2,000):
+- **Emphasize Innovation**: Your "reversible by design" approach is unique
+- **Show SpoonOS Mastery**: Demonstrate deep understanding of the framework
+- **Real-World Impact**: Use metrics and testimonials
+- **Polish**: Ensure everything works flawlessly in demo
+
+### For Track Winner ($1,000):
+- **Focus on Agentic Infrastructure**: Highlight multi-stage agent pipeline
+- **Show Productivity Gains**: Quantify time saved
+- **Demonstrate Framework Usage**: Show SpoonOS StateGraph in action
+
+### For Special Awards ($500):
+- **Technical Excellence**: Clean code, proper architecture, error handling
+- **Creativity & Innovation**: Unique safety-first approach, visualizations
+- **Real-World Impact**: Practical problem-solving, time savings
+- **Best UX & Design**: Polished UI, intuitive workflow
+
+---
+
+## 🔧 Quick Wins (Can Do in 1-2 Hours)
+
+1. **Add Loading Skeletons**: Replace spinners with skeleton loaders
+2. **Improve Error Messages**: Make them more helpful
+3. **Add Tooltips**: Explain complex features
+4. **Fix Mobile Issues**: Test and fix responsive design
+5. **Add Demo Data**: Include sample snapshots for quick testing
+6. **Improve README**: Add screenshots and better instructions
+7. **Add Health Check**: Verify SpoonOS is working
+8. **Polish Animations**: Smooth transitions and micro-interactions
+
+---
+
+## 💡 Final Recommendations
+
+1. **Focus on Demo**: Your demo is everything. Practice it multiple times.
+2. **Tell a Story**: Start with a real developer's pain point, show how GitGuard solves it.
+3. **Show SpoonOS**: Make SpoonOS integration a highlight, not a footnote.
+4. **Be Honest**: Acknowledge limitations, but show how you'd address them.
+5. **Polish Matters**: Small UI improvements can make a big difference in judging.
+
+---
+
+## 🎉 Conclusion
+
+Your project has **strong win potential**. The core concept is solid, the implementation is good, and with the improvements above, you'll have a competitive entry. Focus on:
+
+1. **Polish** (UI/UX improvements)
+2. **Demo** (compelling presentation)
+3. **SpoonOS** (showcase framework usage)
+4. **Impact** (real-world value)
+
+Good luck! 🚀
+
diff --git a/apps/agent/.env.example b/apps/agent/.env.example
new file mode 100644
index 0000000..fdad842
--- /dev/null
+++ b/apps/agent/.env.example
@@ -0,0 +1,10 @@
+# Agent Service Configuration
+PORT=8000
+HOST=0.0.0.0
+
+# Anthropic API Key (for Claude)
+ANTHROPIC_API_KEY=your_anthropic_api_key_here
+
+# Model Configuration
+MODEL_NAME=claude-sonnet-4-20250514
+MAX_TOKENS=4096
diff --git a/apps/agent/README.md b/apps/agent/README.md
new file mode 100644
index 0000000..4b326ac
--- /dev/null
+++ b/apps/agent/README.md
@@ -0,0 +1,81 @@
+# GitGuard Agent Service
+
+SpoonOS-powered Git incident analysis service using Claude AI.
+
+## Setup
+
+1. Create virtual environment:
+```bash
+cd apps/agent
+python -m venv venv
+
+# Windows
+venv\Scripts\activate
+
+# macOS/Linux
+source venv/bin/activate
+```
+
+2. Install dependencies:
+```bash
+pip install -r requirements.txt
+```
+
+3. Configure environment:
+```bash
+cp .env.example .env
+# Edit .env and add your ANTHROPIC_API_KEY
+```
+
+4. Run the service:
+```bash
+python main.py
+# Or with uvicorn for development:
+uvicorn main:app --reload --port 8000
+```
+
+## API Endpoints
+
+### POST /analyze
+Analyze a git snapshot and return structured analysis.
+
+**Request:**
+```json
+{
+  "snapshot": { ... },
+  "options": {
+    "includeGraph": true,
+    "maxConflictFiles": 10,
+    "maxHunksPerFile": 5
+  }
+}
+```
+
+**Response:**
+```json
+{
+  "success": true,
+  "analysis": {
+    "issueType": "merge_conflict",
+    "summary": "...",
+    "repoGraph": { ... },
+    "conflicts": [ ... ],
+    "plan": [ ... ]
+  },
+  "durationMs": 1234
+}
+```
+
+### POST /explain/conflict
+Get AI explanation for a specific conflict hunk.
+
+### GET /health
+Health check endpoint.
+
+## Environment Variables
+
+- `ANTHROPIC_API_KEY` - Your Anthropic API key
+- `MODEL_NAME` - Claude model to use (default: claude-sonnet-4-20250514)
+- `MAX_TOKENS` - Max tokens for AI responses (default: 4096)
+- `PORT` - Server port (default: 8000)
+- `HOST` - Server host (default: 0.0.0.0)
diff --git a/apps/agent/fallback.py b/apps/agent/fallback.py
new file mode 100644
index 0000000..846a872
--- /dev/null
+++ b/apps/agent/fallback.py
@@ -0,0 +1,134 @@
+"""
+Fallback Plan Generation
+Generates recovery plans without AI when LLM is unavailable
+"""
+
+
+def generate_fallback_plan(issue_type: str, snapshot: dict, conflicts: list) -> list:
+    """Generate fallback plan without AI."""
+    branch = snapshot.get("branch", {}).get("head", "main")
+
+    if issue_type == "merge_conflict":
+        conflict_files = [c["path"] for c in conflicts]
+        return [
+            {
+                "title": "Review Conflict Files",
+                "description": f"You have conflicts in {len(conflicts)} file(s): {', '.join(conflict_files[:3])}. Review each to understand what changed.",
+                "commands": ["git status", "git diff --name-only --diff-filter=U"],
+                "expectedOutput": "List of files with UU (unmerged) status",
+                "verify": ["git status"],
+                "undo": [],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            },
+            {
+                "title": "Resolve Conflicts",
+                "description": "Open each conflicted file and resolve the conflict markers (<<<<<<<, =======, >>>>>>>).",
+                "commands": [f"# Edit {f}" for f in conflict_files[:3]],
+                "expectedOutput": "Files no longer contain conflict markers",
+                "verify": ["git diff"],
+                "undo": ["git checkout --conflict=merge <file>"],
+                "dangerLevel": "safe",
+                "estimatedTime": "careful"
+            },
+            {
+                "title": "Stage Resolved Files",
+                "description": "Mark conflicts as resolved by staging the files.",
+                "commands": [f"git add {f}" for f in conflict_files[:3]],
+                "expectedOutput": "Files moved from 'Unmerged' to 'Staged'",
+                "verify": ["git status"],
+                "undo": ["git reset HEAD <file>"],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            },
+            {
+                "title": "Complete Merge",
+                "description": "Commit the merge resolution.",
+                "commands": ["git commit -m 'Resolve merge conflicts'"],
+                "expectedOutput": "Merge commit created",
+                "verify": ["git log -1"],
+                "undo": ["git reset --soft HEAD~1"],
+                "dangerLevel": "caution",
+                "estimatedTime": "quick"
+            }
+        ]
+    elif issue_type == "detached_head":
+        return [
+            {
+                "title": "Check Current Position",
+                "description": "See where HEAD is pointing and what branches exist.",
+                "commands": ["git log --oneline -5", "git branch -a"],
+                "expectedOutput": "Current commit history and available branches",
+                "verify": ["git status"],
+                "undo": [],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            },
+            {
+                "title": "Save Your Work",
+                "description": "Create a branch to preserve current commits before moving.",
+                "commands": ["git branch temp-save-work"],
+                "expectedOutput": "New branch 'temp-save-work' created at current commit",
+                "verify": ["git branch"],
+                "undo": ["git branch -d temp-save-work"],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            },
+            {
+                "title": f"Return to {branch}",
+                "description": f"Switch back to your main working branch '{branch}'.",
+                "commands": [f"git checkout {branch}"],
+                "expectedOutput": f"Switched to branch '{branch}'",
+                "verify": ["git status"],
+                "undo": ["git checkout temp-save-work"],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            }
+        ]
+    elif issue_type == "rebase_in_progress":
+        return [
+            {
+                "title": "Check Rebase Status",
+                "description": "Understand where you are in the rebase process.",
+                "commands": ["git status", "git rebase --show-current-patch"],
+                "expectedOutput": "Current rebase step and conflict details",
+                "verify": [],
+                "undo": [],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            },
+            {
+                "title": "Option A: Continue Rebase",
+                "description": "If you've resolved conflicts, continue the rebase.",
+                "commands": ["git add .", "git rebase --continue"],
+                "expectedOutput": "Rebase continues to next commit or completes",
+                "verify": ["git status"],
+                "undo": ["git rebase --abort"],
+                "dangerLevel": "caution",
+                "estimatedTime": "moderate"
+            },
+            {
+                "title": "Option B: Abort Rebase",
+                "description": "Cancel the rebase and return to the original state.",
+                "commands": ["git rebase --abort"],
+                "expectedOutput": "Returns to state before rebase started",
+                "verify": ["git status", "git log -3"],
+                "undo": [],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            }
+        ]
+    else:
+        return [
+            {
+                "title": "Check Status",
+                "description": "Review the current repository state.",
+                "commands": ["git status", "git log --oneline -5"],
+                "expectedOutput": "Current branch and recent commits",
+                "verify": [],
+                "undo": [],
+                "dangerLevel": "safe",
+                "estimatedTime": "quick"
+            }
+        ]
+
diff --git a/apps/agent/gitguard-agent.service b/apps/agent/gitguard-agent.service
new file mode 100644
index 0000000..0c65bf3
--- /dev/null
+++ b/apps/agent/gitguard-agent.service
@@ -0,0 +1,19 @@
+[Unit]
+Description=GitGuard Agent Service (SpoonOS)
+After=network.target
+
+[Service]
+Type=simple
+User=www-data
+WorkingDirectory=/var/www/gitguardian/apps/agent
+Environment="PATH=/var/www/gitguardian/apps/agent/venv/bin"
+ExecStart=/var/www/gitguardian/apps/agent/venv/bin/python main.py
+Restart=always
+RestartSec=10
+StandardOutput=syslog
+StandardError=syslog
+SyslogIdentifier=gitguard-agent
+
+[Install]
+WantedBy=multi-user.target
+
diff --git a/apps/agent/graph.py b/apps/agent/graph.py
new file mode 100644
index 0000000..d6ba58c
--- /dev/null
+++ b/apps/agent/graph.py
@@ -0,0 +1,61 @@
+"""
+SpoonOS Graph Construction
+Creates the analysis pipeline graph
+"""
+from typing import TYPE_CHECKING
+from spoon_ai.graph import StateGraph, END
+
+# Handle both direct execution and module import
+try:
+    from .models import GitAnalysisState
+    from .nodes import (
+        detect_issue_node,
+        build_graph_node,
+        extract_conflicts_node,
+        collect_signals_node,
+    )
+except ImportError:
+    from models import GitAnalysisState
+    from nodes import (
+        detect_issue_node,
+        build_graph_node,
+        extract_conflicts_node,
+        collect_signals_node,
+    )
+
+if TYPE_CHECKING:
+    from spoon_ai.chat import ChatBot
+
+
+def create_analysis_graph(llm: "ChatBot") -> StateGraph:
+    """Create the SpoonOS graph for git analysis pipeline."""
+    graph = StateGraph(GitAnalysisState)
+
+    # Add nodes
+    graph.add_node("detect_issue", detect_issue_node)
+    graph.add_node("build_graph", build_graph_node)
+    graph.add_node("extract_conflicts", extract_conflicts_node)
+    graph.add_node("collect_signals", collect_signals_node)
+
+    # Generate analysis node needs LLM, so we create a wrapper
+    async def generate_analysis_wrapper(state: GitAnalysisState) -> dict:
+        try:
+            from .nodes import generate_analysis_node
+        except ImportError:
+            from nodes import generate_analysis_node
+        return await generate_analysis_node(state, llm)
+
+    graph.add_node("generate_analysis", generate_analysis_wrapper)
+
+    # Set entry point
+    graph.set_entry_point("detect_issue")
+
+    # Define edges (sequential pipeline)
+    graph.add_edge("detect_issue", "build_graph")
+    graph.add_edge("build_graph", "extract_conflicts")
+    graph.add_edge("extract_conflicts", "collect_signals")
+    graph.add_edge("collect_signals", "generate_analysis")
+    graph.add_edge("generate_analysis", END)
+
+    return graph.compile()
+
diff --git a/apps/agent/main.py b/apps/agent/main.py
new file mode 100644
index 0000000..b318178
--- /dev/null
+++ b/apps/agent/main.py
@@ -0,0 +1,230 @@
+"""
+GitGuard Agent Service - SpoonOS-powered Git Analysis
+Uses SpoonOS Graph System for multi-stage AI pipeline
+"""
+import os
+import time
+import json
+import logging
+from dotenv import load_dotenv
+from fastapi import FastAPI
+from fastapi.middleware.cors import CORSMiddleware
+from spoon_ai.chat import ChatBot
+
+# Handle both direct execution and module import
+try:
+    from .graph import create_analysis_graph
+    from .models import AnalyzeRequest, AnalyzeResponse, GitAnalysisState
+except ImportError:
+    # When running directly (python main.py), use absolute imports
+    from graph import create_analysis_graph
+    from models import AnalyzeRequest, AnalyzeResponse, GitAnalysisState
+
+load_dotenv()
+
+# Configure logging
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+    datefmt='%Y-%m-%d %H:%M:%S'
+)
+logger = logging.getLogger(__name__)
+
+app = FastAPI(
+    title="GitGuard Agent",
+    description="SpoonOS-powered Git incident analysis service with graph-based AI pipeline",
+    version="2.0.0",
+)
+
+# CORS for web app
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=["http://localhost:3000", "http://127.0.0.1:3000"],
+    allow_credentials=True,
+    allow_methods=["*"],
+    allow_headers=["*"],
+)
+
+# Initialize SpoonOS LLM
+MODEL = os.getenv("MODEL_NAME", "claude-sonnet-4-20250514")
+llm = ChatBot(
+    llm_provider="anthropic",
+    model_name=MODEL,
+)
+
+# Create the compiled graph
+analysis_pipeline = create_analysis_graph(llm)
+
+
+# ==========================================
+# API Endpoints
+# ==========================================
+
+@app.get("/health")
+async def health_check():
+    """Health check endpoint."""
+    return {
+        "status": "healthy",
+        "service": "gitguard-agent",
+        "version": "2.0.0",
+        "framework": "SpoonOS"
+    }
+
+
+@app.post("/analyze", response_model=AnalyzeResponse)
+async def analyze_snapshot(request: AnalyzeRequest):
+    """Analyze a git snapshot using SpoonOS graph pipeline."""
+    start_time = time.time()
+    request_id = f"py-{int(time.time() * 1000)}-{os.urandom(4).hex()}"
+
+    logger.info("=" * 60)
+    logger.info(f"🚀 [PYTHON:ANALYZE:{request_id}] New analysis request received")
+    logger.info(f"   Timestamp: {time.strftime('%Y-%m-%d %H:%M:%S')}")
+    logger.info(f"   Snapshot branch: {request.snapshot.get('branch', {}).get('head', 'unknown')}")
+    logger.info(f"   Has conflicts: {len(request.snapshot.get('unmergedFiles', [])) > 0}")
+    logger.info(f"   Detached HEAD: {request.snapshot.get('isDetachedHead', False)}")
+    logger.info(f"   Rebase in progress: {request.snapshot.get('rebaseState', {}).get('inProgress', False)}")
+    logger.info(f"   AI Model: {MODEL}")
+    logger.info("   LLM Provider: anthropic")
+
+    try:
+        # Initialize state
+        initial_state: GitAnalysisState = {
+            "snapshot": request.snapshot,
+            "options": request.options.model_dump() if request.options else {},
+            "issue_type": "",
+            "risk_level": "",
+            "repo_graph": {},
+            "conflicts": [],
+            "signals": {},
+            "summary": "",
+            "conflict_explanations": {},
+            "plan_steps": [],
+            "stage_traces": [],
+            "error": ""
+        }
+
+        # Run the SpoonOS pipeline
+        logger.info(f"🔄 [PYTHON:ANALYZE:{request_id}] Starting SpoonOS pipeline execution...")
+        logger.info("   Pipeline stages: detect_issue → build_graph → extract_conflicts → collect_signals → generate_analysis")
+        result = await analysis_pipeline.invoke(initial_state)
+        logger.info(f"✅ [PYTHON:ANALYZE:{request_id}] SpoonOS pipeline completed successfully")
+
+        duration_ms = int((time.time() - start_time) * 1000)
+        logger.info(f"⏱️ [PYTHON:ANALYZE:{request_id}] Total analysis time: {duration_ms}ms")
+
+        # Enhance conflicts with explanations
+        conflicts_with_explanations = []
+        for conflict in result.get("conflicts", []):
+            explanation = result.get("conflict_explanations", {}).get(conflict["path"], {})
+            conflicts_with_explanations.append({
+                **conflict,
+                "whatHappened": explanation.get("whatHappened"),
+                "whyConflict": explanation.get("whyConflict"),
+                "recommendation": explanation.get("recommendation"),
+                "priority": explanation.get("priority", "medium")
+            })
+
+        logger.info(f"✅ [PYTHON:ANALYZE:{request_id}] Analysis complete")
+        logger.info(f"   Issue type: {result['issue_type']}")
+        logger.info(f"   Risk level: {result['risk_level']}")
+        logger.info(f"   Plan steps: {len(result.get('plan_steps', []))}")
+        logger.info(f"   Pipeline traces: {len(result.get('stage_traces', []))}")
+        logger.info(f"   Conflicts: {len(result.get('conflicts', []))}")
+        logger.info(f"   Summary length: {len(result.get('summary', ''))} chars")
+
+        # Check if AI was used (has conflict_explanations or detailed summary)
+        has_ai_content = bool(result.get('conflict_explanations')) or len(result.get('summary', '')) > 100
+        if has_ai_content:
+            logger.info(f"🎯 [PYTHON:ANALYZE:{request_id}] USING REAL AI MODEL - Response contains AI-generated content")
+        else:
+            logger.warning(f"⚠️ [PYTHON:ANALYZE:{request_id}] FALLBACK MODE - Response appears to be rule-based")
+
+        logger.info("=" * 60)
+
+        return AnalyzeResponse(
+            success=True,
+            analysis={
+                "issueType": result["issue_type"],
+                "riskLevel": result["risk_level"],
+                "summary": result["summary"],
+                "repoGraph": result["repo_graph"],
+                "conflicts": conflicts_with_explanations,
+                "planSteps": result["plan_steps"],
+                "signals": result["signals"]
+            },
+            durationMs=duration_ms,
+            pipelineTraces=result.get("stage_traces", [])
+        )
+
+    except Exception as e:
+        duration_ms = int((time.time() - start_time) * 1000)
+        logger.error(f"❌ [PYTHON:ANALYZE:{request_id}] Error during analysis: {str(e)}")
+        logger.error(f"   Error type: {type(e).__name__}")
+        logger.error(f"   Duration before error: {duration_ms}ms")
+        import traceback
+        logger.error(f"   Traceback: {traceback.format_exc()}")
+        logger.info("=" * 60)
+        return AnalyzeResponse(
+            success=False,
+            error=str(e),
+            durationMs=duration_ms
+        )
+
+
+@app.post("/explain/conflict")
+async def explain_conflict(file_path: str, hunk_index: int, ours: str, theirs: str):
+    """Get detailed AI explanation for a specific conflict hunk."""
+    prompt = f"""Analyze this specific git merge conflict and provide guidance:
+
+**File**: {file_path}
+**Hunk**: #{hunk_index + 1}
+
+**OURS (your current branch)**:
+```
+{ours}
+```
+
+**THEIRS (incoming changes)**:
+```
+{theirs}
+```
+
+Provide a JSON response with:
+1. "oursIntent": What your code is trying to do
+2. "theirsIntent": What the incoming code is trying to do
+3. "conflictReason": Why these changes conflict
+4. "recommendation": "keep_ours", "keep_theirs", "combine", or "needs_review"
+5. "combinedCode": If recommend combine, provide the merged code
+6. "explanation": Human-readable explanation of your recommendation
+
+Be specific and reference the actual code."""
+
+    try:
+        response = await llm.chat(prompt)
+        content = response if isinstance(response, str) else response.content
+
+        try:
+            result = json.loads(content)
+        except json.JSONDecodeError:
+            if "```json" in content:
+                result = json.loads(content.split("```json")[1].split("```")[0])
+            else:
+                result = {"explanation": content}
+
+        return {"success": True, "explanation": result}
+    except Exception as e:
+        return {"success": False, "error": str(e)}
+
+
+if __name__ == "__main__":
+    import uvicorn
+    port = int(os.getenv("PORT", "8000"))
+    host = os.getenv("HOST", "0.0.0.0")
+    logger.info("=" * 60)
+    logger.info("🚀 GitGuard Agent Service Starting")
+    logger.info(f"   Host: {host}")
+    logger.info(f"   Port: {port}")
+    logger.info("   SpoonOS Pipeline: Enabled")
+    logger.info("=" * 60)
+    uvicorn.run(app, host=host, port=port)
diff --git a/apps/agent/models.py b/apps/agent/models.py
new file mode 100644
index 0000000..9f348c8
--- /dev/null
+++ b/apps/agent/models.py
@@ -0,0 +1,41 @@
+"""
+Request/Response Models for FastAPI
+"""
+from typing import Optional, TypedDict
+from pydantic import BaseModel
+
+
+class GitAnalysisState(TypedDict):
+    """State that flows through the SpoonOS graph pipeline."""
+    snapshot: dict
+    options: dict
+    issue_type: str
+    risk_level: str
+    repo_graph: dict
+    conflicts: list
+    signals: dict
+    summary: str
+    conflict_explanations: dict
+    plan_steps: list
+    stage_traces: list
+    error: str
+
+
+class AnalyzeOptions(BaseModel):
+    includeGraph: bool = True
+    maxConflictFiles: int = 10
+    maxHunksPerFile: int = 5
+
+
+class AnalyzeRequest(BaseModel):
+    snapshot: dict
+    options: Optional[AnalyzeOptions] = None
+
+
+class AnalyzeResponse(BaseModel):
+    success: bool
+    analysis: Optional[dict] = None
+    error: Optional[str] = None
+    durationMs: Optional[int] = None
+    pipelineTraces: Optional[list] = None
+
diff --git a/apps/agent/nodes.py b/apps/agent/nodes.py
new file mode 100644
index 0000000..76cb228
--- /dev/null
+++ b/apps/agent/nodes.py
@@ -0,0 +1,297 @@
+"""
+SpoonOS Graph Nodes
+Pipeline stage implementations
+"""
+import time
+import json
+import logging
+from typing import TYPE_CHECKING
+
+# Handle both direct execution and module import
+try:
+    from .tools import DetectIssueTool, BuildGraphTool, ExtractConflictsTool
+    from .fallback import generate_fallback_plan
+    from .models import GitAnalysisState
+except ImportError:
+    from tools import DetectIssueTool, BuildGraphTool, ExtractConflictsTool
+    from fallback import generate_fallback_plan
+    from models import GitAnalysisState
+
+if TYPE_CHECKING:
+    from spoon_ai.chat import ChatBot
+
+logger = logging.getLogger(__name__)
+
+
+async def detect_issue_node(state: GitAnalysisState) -> dict:
+    """Stage 1: Detect issue type and risk level."""
+    start = time.time()
+    snapshot = state["snapshot"]
+    logger.info("🔍 [detect_issue] Starting issue detection...")
+
+    tool = DetectIssueTool()
+    result = await tool.execute(snapshot)
+
+    duration_ms = int((time.time() - start) * 1000)
+    logger.info(f"✅ [detect_issue] Completed in {duration_ms}ms - Issue: {result.get('issue_type')}, Risk: {result.get('risk_level')}")
+
+    trace = {
+        "stage": "detect_issue",
+        "duration_ms": duration_ms,
+        "output": result
+    }
+
+    return {
+        "issue_type": result["issue_type"],
+        "risk_level": result["risk_level"],
+        "stage_traces": state.get("stage_traces", []) + [trace]
+    }
+
+
+async def build_graph_node(state: GitAnalysisState) -> dict:
+    """Stage 2: Build repository visualization graph."""
+    start = time.time()
+    logger.info("📊 [build_graph] Building repository graph...")
+
+    tool = BuildGraphTool()
+    graph = await tool.execute(state["snapshot"])
+
+    duration_ms = int((time.time() - start) * 1000)
+    node_count = len(graph.get("nodes", []))
+    edge_count = len(graph.get("edges", []))
+    logger.info(f"✅ [build_graph] Completed in {duration_ms}ms - Nodes: {node_count}, Edges: {edge_count}")
+
+    trace = {
+        "stage": "build_graph",
+        "duration_ms": duration_ms,
+        "output": {"nodes": len(graph["nodes"]), "edges": len(graph["edges"])}
+    }
+
+    return {
+        "repo_graph": graph,
+        "stage_traces": state.get("stage_traces", []) + [trace]
+    }
+
+
+async def extract_conflicts_node(state: GitAnalysisState) -> dict:
+    """Stage 3: Extract conflict information."""
+    start = time.time()
+    logger.info("🔧 [extract_conflicts] Extracting conflict information...")
+
+    if state["issue_type"] != "merge_conflict":
+        logger.info("⏭️ [extract_conflicts] Skipped - not a merge conflict")
+        return {"conflicts": [], "stage_traces": state.get("stage_traces", [])}
+
+    tool = ExtractConflictsTool()
+    options = state.get("options", {})
+    conflicts = await tool.execute(
+        state["snapshot"],
+        max_files=options.get("maxConflictFiles", 10),
+        max_hunks=options.get("maxHunksPerFile", 5)
+    )
+
+    duration_ms = int((time.time() - start) * 1000)
+    conflict_count = len(conflicts)
+    logger.info(f"✅ [extract_conflicts] Completed in {duration_ms}ms - Found {conflict_count} conflict file(s)")
+
+    trace = {
+        "stage": "extract_conflicts",
+        "duration_ms": duration_ms,
+        "output": {"conflict_count": conflict_count}
+    }
+
+    return {
+        "conflicts": conflicts,
+        "stage_traces": state.get("stage_traces", []) + [trace]
+    }
+
+
+async def collect_signals_node(state: GitAnalysisState) -> dict:
+    """Stage 4: Collect normalized signals for AI analysis."""
+    start = time.time()
+    logger.info("📡 [collect_signals] Collecting repository signals...")
+    snapshot = state["snapshot"]
+
+    signals = {
+        "primaryIssue": state["issue_type"],
+        "riskLevel": state["risk_level"],
+        "conflictCount": len(state.get("conflicts", [])),
+        "isDetachedHead": snapshot.get("isDetachedHead", False),
+        "isRebaseInProgress": snapshot.get("rebaseState", {}).get("inProgress", False),
+        "currentBranch": snapshot.get("branch", {}).get("head", "unknown"),
+        "hasStagedChanges": len(snapshot.get("stagedFiles", [])) > 0,
+        "hasUnstagedChanges": len(snapshot.get("modifiedFiles", [])) > 0,
+        "hasUntrackedFiles": len(snapshot.get("untrackedFiles", [])) > 0,
+        "recentActions": [
+            entry.get("action", "") for entry in snapshot.get("recentReflog", [])[:5]
+        ],
+    }
+
+    duration_ms = int((time.time() - start) * 1000)
+    logger.info(f"✅ [collect_signals] Completed in {duration_ms}ms - Primary issue: {signals.get('primaryIssue')}")
+
+    trace = {
+        "stage": "collect_signals",
+        "duration_ms": duration_ms,
+        "output": signals
+    }
+
+    return {
+        "signals": signals,
+        "stage_traces": state.get("stage_traces", []) + [trace]
+    }
+
+
+async def generate_analysis_node(state: GitAnalysisState, llm: "ChatBot") -> dict:
+    """Stage 5: Use LLM to generate analysis, explanations, and plan."""
+    start = time.time()
+    logger.info("🤖 [generate_analysis] Generating AI analysis and recovery plan...")
+
+    snapshot = state["snapshot"]
+    signals = state["signals"]
+    conflicts = state.get("conflicts", [])
+
+    # Build detailed context for LLM
+    context = f"""You are GitGuard, an expert Git recovery assistant. Analyze this repository state and provide specific, actionable guidance.
+
+## Current Situation
+- **Issue Type**: {signals["primaryIssue"].replace("_", " ").title()}
+- **Risk Level**: {signals["riskLevel"].upper()}
+- **Branch**: {signals["currentBranch"]}
+- **Conflicts**: {signals["conflictCount"]} file(s)
+- **Detached HEAD**: {"Yes" if signals["isDetachedHead"] else "No"}
+- **Rebase in Progress**: {"Yes" if signals["isRebaseInProgress"] else "No"}
+
+## Repository State
+- Staged files: {len(snapshot.get("stagedFiles", []))}
+- Modified files: {len(snapshot.get("modifiedFiles", []))}
+- Untracked files: {len(snapshot.get("untrackedFiles", []))}
+
+## Recent Git Actions
+{chr(10).join(f"- {action}" for action in signals["recentActions"]) if signals["recentActions"] else "No recent actions recorded"}
+"""
+
+    if conflicts:
+        context += "\n## Conflict Details\n"
+        for cf in conflicts[:5]:
+            context += f"\n### {cf['path']} ({cf['hunkCount']} hunks, {cf['severity']} severity)\n"
+            for hunk in cf["hunks"][:2]:
+                context += f"""
+**Hunk {hunk['index'] + 1}** (lines {hunk.get('startLine', '?')}-{hunk.get('endLine', '?')}):
+- OURS ({hunk['linesAdded']} lines):
+```
+{hunk['oursText'][:300]}{'...' if len(hunk['oursText']) > 300 else ''}
+```
+- THEIRS ({hunk['linesRemoved']} lines):
+```
+{hunk['theirsText'][:300]}{'...' if len(hunk['theirsText']) > 300 else ''}
+```
+"""
+
+    prompt = f"""{context}
+
+## Your Task
+Provide a comprehensive analysis in JSON format with these fields:
+
+1. **summary**: A clear 2-3 sentence explanation of what happened and why (be specific to THIS situation, not generic)
+
+2. **conflictExplanations**: For each conflict file, provide:
+   - "path": file path
+   - "whatHappened": Specific explanation of what each side changed
+   - "whyConflict": Why these changes conflict
+   - "recommendation": Specific resolution strategy (keep ours, keep theirs, or how to combine)
+   - "priority": "high", "medium", or "low"
+
+3. **planSteps**: Array of specific recovery steps, each with:
+   - "title": Clear action title
+   - "description": Detailed explanation of what this step does and why
+   - "commands": Exact git commands to run (with actual file names from the conflicts)
+   - "expectedOutput": What user should see after running
+   - "verify": Commands to verify success
+   - "undo": Commands to undo if something goes wrong
+   - "dangerLevel": "safe", "caution", or "dangerous"
+   - "estimatedTime": "quick" (< 1 min), "moderate" (1-5 min), or "careful" (> 5 min)
+
+4. **quickActions**: Array of 2-3 one-click actions for common resolutions:
+   - "label": Button label
+   - "command": Single git command
+   - "description": What it does
+
+Be SPECIFIC to this user's actual situation. Reference actual file names and branch names. Focus on SAFE, REVERSIBLE solutions.
+
+Respond with ONLY valid JSON, no markdown code blocks."""
+
+    try:
+        logger.info("🤖 [generate_analysis] Calling AI model (LLM)...")
+        logger.info(f"   Model: {llm.model_name if hasattr(llm, 'model_name') else 'unknown'}")
+        logger.info(f"   Provider: {llm.llm_provider if hasattr(llm, 'llm_provider') else 'unknown'}")
+        logger.info(f"   Prompt length: {len(prompt)} characters")
+
+        llm_call_start = time.time()
+        response = await llm.chat(prompt)
+        llm_call_duration = int((time.time() - llm_call_start) * 1000)
+        logger.info(f"✅ [generate_analysis] AI model responded in {llm_call_duration}ms")
+
+        content = response if isinstance(response, str) else response.content
+        logger.info(f"   Response length: {len(content)} characters")
+
+        # Parse JSON response
+        try:
+            result = json.loads(content)
+            logger.info("✅ [generate_analysis] Successfully parsed JSON response from AI")
+        except json.JSONDecodeError:
+            logger.warning("⚠️ [generate_analysis] JSON parse failed, attempting markdown extraction...")
+            # Try to extract from markdown
+            if "```json" in content:
+                json_str = content.split("```json")[1].split("```")[0]
+                result = json.loads(json_str)
+                logger.info("✅ [generate_analysis] Extracted JSON from markdown code block")
+            elif "```" in content:
+                json_str = content.split("```")[1].split("```")[0]
+                result = json.loads(json_str)
+                logger.info("✅ [generate_analysis] Extracted JSON from code block")
+            else:
+                logger.error("❌ [generate_analysis] Failed to parse JSON response")
+                raise
+
+        duration_ms = int((time.time() - start) * 1000)
+        plan_steps_count = len(result.get("planSteps", []))
+        logger.info(f"✅ [generate_analysis] Completed in {duration_ms}ms - Generated {plan_steps_count} plan steps")
+        logger.info("🎯 [generate_analysis] USING REAL AI MODEL - Analysis generated by LLM")
+
+        trace = {
+            "stage": "generate_analysis",
+            "duration_ms": duration_ms,
+            "output": {"has_summary": "summary" in result, "plan_steps": plan_steps_count}
+        }
+
+        return {
+            "summary": result.get("summary", "Analysis complete."),
+            "conflict_explanations": {
+                exp["path"]: exp for exp in result.get("conflictExplanations", [])
+            },
+            "plan_steps": result.get("planSteps", []),
+            "stage_traces": state.get("stage_traces", []) + [trace]
+        }
+
+    except Exception as e:
+        duration_ms = int((time.time() - start) * 1000)
+        logger.error(f"❌ [generate_analysis] AI model call failed: {str(e)}")
+        logger.error(f"   Error type: {type(e).__name__}")
+        logger.warning("⚠️ [generate_analysis] FALLBACK MODE: Using rule-based plan (NO AI)")
+        trace = {
+            "stage": "generate_analysis",
+            "duration_ms": duration_ms,
+            "error": str(e)
+        }
+
+        # Fallback plan
+        fallback_plan = generate_fallback_plan(signals["primaryIssue"], snapshot, conflicts)
+        logger.info(f"✅ [generate_analysis] Fallback plan generated: {len(fallback_plan)} steps")
+        return {
+            "summary": f"Detected {signals['primaryIssue'].replace('_', ' ')} in your repository.",
+            "conflict_explanations": {},
+            "plan_steps": fallback_plan,
+            "stage_traces": state.get("stage_traces", []) + [trace]
+        }
+
diff --git a/apps/agent/requirements.txt b/apps/agent/requirements.txt
new file mode 100644
index 0000000..6d2caf1
--- /dev/null
+++ b/apps/agent/requirements.txt
@@ -0,0 +1,7 @@
+fastapi>=0.104.0
+uvicorn[standard]>=0.24.0
+pydantic>=2.5.0
+httpx>=0.25.0
+python-dotenv>=1.0.0
+anthropic>=0.39.0
+spoon-ai-sdk>=0.1.0
diff --git a/apps/agent/tools.py b/apps/agent/tools.py
new file mode 100644
index 0000000..9dce105
--- /dev/null
+++ b/apps/agent/tools.py
@@ -0,0 +1,195 @@
+"""
+SpoonOS Tools for Git Analysis
+Tools used by the StateGraph pipeline
+"""
+from spoon_ai.tools.base import BaseTool
+
+
+class DetectIssueTool(BaseTool):
+    """Tool to detect the primary issue type from a git snapshot."""
+    name: str = "detect_issue"
+    description: str = "Analyze git snapshot to detect the primary issue (merge_conflict, detached_head, rebase_in_progress, clean)"
+    parameters: dict = {
+        "type": "object",
+        "properties": {
+            "snapshot": {"type": "object", "description": "Git repository snapshot"}
+        },
+        "required": ["snapshot"]
+    }
+
+    async def execute(self, snapshot: dict) -> dict:
+        unmerged = snapshot.get("unmergedFiles", [])
+        is_detached = snapshot.get("isDetachedHead", False)
+        rebase_state = snapshot.get("rebaseState", {})
+
+        if unmerged and len(unmerged) > 0:
+            issue_type = "merge_conflict"
+            risk = "high" if len(unmerged) > 3 else "medium"
+        elif rebase_state.get("inProgress", False):
+            issue_type = "rebase_in_progress"
+            risk = "medium"
+        elif is_detached:
+            issue_type = "detached_head"
+            risk = "medium"
+        else:
+            staged = snapshot.get("stagedFiles", [])
+            modified = snapshot.get("modifiedFiles", [])
+            if not staged and not modified:
+                issue_type = "clean"
+                risk = "low"
+            else:
+                issue_type = "unknown"
+                risk = "low"
+
+        return {"issue_type": issue_type, "risk_level": risk}
+
+
+class BuildGraphTool(BaseTool):
+    """Tool to build repository visualization graph."""
+    name: str = "build_graph"
+    description: str = "Build a visual graph representation of the repository state"
+    parameters: dict = {
+        "type": "object",
+        "properties": {
+            "snapshot": {"type": "object", "description": "Git repository snapshot"}
+        },
+        "required": ["snapshot"]
+    }
+
+    async def execute(self, snapshot: dict) -> dict:
+        nodes = []
+        edges = []
+
+        branch = snapshot.get("branch", {})
+        head_oid = branch.get("oid", "")[:7]
+        head_name = branch.get("head", "HEAD")
+        is_detached = snapshot.get("isDetachedHead", False)
+
+        # Add HEAD node
+        nodes.append({
+            "id": "head",
+            "type": "head",
+            "label": "HEAD",
+            "sha": head_oid,
+            "isCurrent": True,
+            "isDetached": is_detached,
+            "x": 400, "y": 50
+        })
+
+        # Add current branch
+        if not is_detached and head_name:
+            nodes.append({
+                "id": f"branch-{head_name}",
+                "type": "branch",
+                "label": head_name,
+                "sha": head_oid,
+                "isCurrent": True,
+                "x": 550, "y": 50
+            })
+            edges.append({"from": "head", "to": f"branch-{head_name}", "type": "ref"})
+
+        # Add commits from log
+        recent_log = snapshot.get("recentLog", [])[:8]
+        for i, entry in enumerate(recent_log):
+            commit_id = f"commit-{i}"
+            sha = entry.get("hash", "")[:7]
+            message = entry.get("message", "")[:40]
+            refs = entry.get("refs", [])
+
+            nodes.append({
+                "id": commit_id,
+                "type": "commit",
+                "label": message,
+                "sha": sha,
+                "isCurrent": (i == 0),
+                "x": 400,
+                "y": 120 + (i * 80)
+            })
+
+            # Add refs
+            for j, ref in enumerate(refs):
+                if ref and not ref.startswith("HEAD"):
+                    ref_id = f"ref-{ref.replace('/', '-')}"
+                    ref_type = "remote" if "/" in ref else "branch"
+                    nodes.append({
+                        "id": ref_id,
+                        "type": ref_type,
+                        "label": ref,
+                        "sha": sha,
+                        "x": 550 + (j * 100),
+                        "y": 120 + (i * 80)
+                    })
+                    edges.append({"from": ref_id, "to": commit_id, "type": "ref"})
+
+            # Connect to previous commit
+            if i > 0:
+                edges.append({"from": f"commit-{i-1}", "to": commit_id, "type": "parent"})
+
+        # Connect HEAD to first commit
+        if recent_log:
+            edges.append({"from": "head", "to": "commit-0", "type": "pointer"})
+
+        # Add merge head if exists
+        merge_head = snapshot.get("mergeHead")
+        if merge_head:
+            nodes.append({
+                "id": "merge-head",
+                "type": "merge",
+                "label": "MERGE_HEAD",
+                "sha": merge_head[:7],
+                "x": 250, "y": 50
+            })
+            edges.append({"from": "merge-head", "to": "commit-0", "type": "merge"})
+
+        return {
+            "nodes": nodes,
+            "edges": edges,
+            "headRef": head_oid,
+            "mergeHeadRef": merge_head[:7] if merge_head else None
+        }
+
+
+class ExtractConflictsTool(BaseTool):
+    """Tool to extract and structure conflict information."""
+    name: str = "extract_conflicts"
+    description: str = "Extract conflict files and hunks from snapshot"
+    parameters: dict = {
+        "type": "object",
+        "properties": {
+            "snapshot": {"type": "object"},
+            "max_files": {"type": "integer", "default": 10},
+            "max_hunks": {"type": "integer", "default": 5}
+        },
+        "required": ["snapshot"]
+    }
+
+    async def execute(self, snapshot: dict, max_files: int = 10, max_hunks: int = 5) -> list:
+        conflicts = []
+        unmerged = snapshot.get("unmergedFiles", [])[:max_files]
+
+        for file_data in unmerged:
+            path = file_data.get("path", "unknown")
+            blocks = file_data.get("conflictBlocks", [])[:max_hunks]
+
+            hunks = []
+            for i, block in enumerate(blocks):
+                ours_text = block.get("oursContent", "")
+                theirs_text = block.get("theirsContent", "")
+                hunks.append({
+                    "index": i,
+                    "startLine": block.get("startLine"),
+                    "endLine": block.get("endLine"),
+                    "baseText": block.get("context", ""),
+                    "oursText": ours_text,
+                    "theirsText": theirs_text,
+                    # An empty side has 0 lines; "".split("\n") would report 1
+                    "linesAdded": len(ours_text.split("\n")) if ours_text else 0,
+                    "linesRemoved": len(theirs_text.split("\n")) if theirs_text else 0,
+                })
+
+            conflicts.append({
+                "path": path,
+                "hunks": hunks,
+                "hunkCount": len(hunks),
+                "severity": "high" if len(hunks) > 2 else "medium" if len(hunks) > 1 else "low"
+            })
+
+        return conflicts
+
diff --git a/apps/cli/README.md b/apps/cli/README.md
new file mode 100644
index 0000000..2272822
--- /dev/null
+++ b/apps/cli/README.md
@@ -0,0 +1,88 @@
+# GitGuard CLI
+
+Command-line tool for capturing and analyzing Git repository state.
+
+## Installation & Usage
+
+Since this is a local package (not published to npm), use one of these methods:
+
+### Method 1: Using pnpm (Recommended - Easiest)
+
+From the **project root**:
+```bash
+# Build the CLI first (one time)
+pnpm build:cli
+
+# Then use via pnpm (use -- to pass arguments)
+pnpm --filter @gitguard/cli -- snapshot --pretty > snapshot.json
+# pnpm --filter @gitguard/cli -- send
+# pnpm --filter @gitguard/cli -- send --open
+```
+
+### Method 2: Direct Node Execution (After Building)
+
+```bash
+# Build first (one time)
+pnpm build:cli
+
+# Run directly from project root
+node apps/cli/dist/index.js snapshot --pretty > snapshot.json
+# node apps/cli/dist/index.js send
+# node apps/cli/dist/index.js send --open
+```
+
+### Method 3: Global Link (For Development)
+
+```bash
+# From project root
+pnpm build:cli
+cd apps/cli
+pnpm link --global
+
+# Now you can use `gitguard` from anywhere on your system
+gitguard snapshot --pretty > snapshot.json
+gitguard send
+gitguard send --open
+```
+
+### Method 4: Create an Alias (PowerShell)
+
+Add to your PowerShell profile (`$PROFILE`):
+```powershell
+function gitguard { node D:\Workplace\Web\gitguardian\apps\cli\dist\index.js $args }
+```
+
+Then use normally:
+```bash
+gitguard snapshot --pretty > snapshot.json
+gitguard send
+```
+
+## Commands
+
+### `gitguard snapshot`
+
+Generate a read-only snapshot of your repository state.
+
+```bash
+gitguard snapshot                    # Output to stdout
+gitguard snapshot -o snapshot.json   # Save to file
+gitguard snapshot --pretty           # Pretty-print JSON
+```
+
+### `gitguard send`
+
+Capture repository state and send to GitGuard for AI analysis.
+
+```bash
+gitguard send                              # Send to default API (localhost:3000)
+gitguard send -u https://api.example.com   # Custom API URL
+gitguard send --open                       # Open incident room in browser
+```
+
+## Requirements
+
+- Node.js 18+
+- Git repository with issues (conflicts, detached HEAD, rebase in progress)
+- Build the CLI first: `pnpm build:cli` or `cd apps/cli && pnpm build`
+
diff --git a/apps/cli/src/commands/send.ts b/apps/cli/src/commands/send.ts
new file mode 100644
index 0000000..62f93b5
--- /dev/null
+++ b/apps/cli/src/commands/send.ts
@@ -0,0 +1,207 @@
+import { SnapshotV1Schema, type SnapshotV1 } from '@gitguard/schema';
+import { collectGitInfo } from '../collectors/git-info.js';
+import { parseStatus } from '../parsers/status-parser.js';
+import { parseBranches } from '../parsers/branch-parser.js';
+import { parseLog } from '../parsers/log-parser.js';
+import { parseReflog } from '../parsers/reflog-parser.js';
+import { parseDiffStat } from '../parsers/diffstat-parser.js';
+import { detectRebaseState } from '../collectors/rebase-detector.js';
+import { extractConflicts } from '../collectors/conflict-extractor.js';
+
+const DEFAULT_API_URL = 'http://localhost:3000';
+
+interface SendOptions {
+  apiUrl?: string;
+  open?: boolean;
+}
+
+interface IngestResponse {
+  sessionId: string;
+  url: string;
+  analysis: {
+    issueType: string;
+    summary: string;
+  };
+}
+
+export async function sendCommand(options: SendOptions): Promise<void> {
+  const apiUrl = options.apiUrl || process.env.GITGUARD_API_URL || DEFAULT_API_URL;
+  const requestId = `cli-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`;
+
+  console.error(`[CLI:SEND:${requestId}] ========================================`);
+  console.error(`[CLI:SEND:${requestId}] 🚀 Starting GitGuard snapshot collection`);
+  console.error(`[CLI:SEND:${requestId}]    API URL: ${apiUrl}`);
+  console.error(`[CLI:SEND:${requestId}]    Timestamp: ${new Date().toISOString()}`);
+
+  try {
+    console.error(`[CLI:SEND:${requestId}] 📥 Collecting Git repository information...`);
+    // Collect git information
+    const gitInfo = await collectGitInfo();
+    console.error(`[CLI:SEND:${requestId}] ✅ Git info collected`);
+    console.error(`[CLI:SEND:${requestId}]    Repo root: ${gitInfo.repoRoot}`);
+    console.error(`[CLI:SEND:${requestId}]    Git dir: ${gitInfo.gitDir}`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Parsing Git status...`);
+    // Parse status
+    const statusInfo = parseStatus(gitInfo.status);
+    console.error(`[CLI:SEND:${requestId}] ✅ Status parsed`);
+    console.error(`[CLI:SEND:${requestId}]    Branch: ${statusInfo.branch}`);
+    console.error(`[CLI:SEND:${requestId}]    Unmerged files: ${statusInfo.unmergedPaths.length}`);
+    console.error(`[CLI:SEND:${requestId}]    Staged: ${statusInfo.stagedFiles.length}, Modified: ${statusInfo.modifiedFiles.length}`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Parsing branches...`);
+    // Parse branches
+    const branchInfo = parseBranches(gitInfo.branches, statusInfo.branch);
+    console.error(`[CLI:SEND:${requestId}] ✅ Branches parsed: ${branchInfo.head}`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Parsing commit logs...`);
+    // Parse logs
+    const logEntries = parseLog(gitInfo.log);
+    const reflogEntries = parseReflog(gitInfo.reflog);
+    console.error(`[CLI:SEND:${requestId}] ✅ Logs parsed: ${logEntries.length} commits, ${reflogEntries.length} reflog entries`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Parsing diff stats...`);
+    // Parse diff stats
+    const diffStats = parseDiffStat(gitInfo.diffStat);
+    console.error(`[CLI:SEND:${requestId}] ✅ Diff stats parsed: ${diffStats.length} files`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Detecting rebase state...`);
+    // Detect rebase state
+    const rebaseState = await detectRebaseState(gitInfo.gitDir);
+    console.error(`[CLI:SEND:${requestId}] ✅ Rebase state: ${rebaseState.inProgress ? 'IN PROGRESS' : 'none'}`);
+
+    console.error(`[CLI:SEND:${requestId}] 🔍 Extracting conflict details...`);
+    // Extract conflict details
+    const unmergedFiles = await extractConflicts(
+      gitInfo.repoRoot,
+      statusInfo.unmergedPaths.slice(0, 10) // Extract up to 10 conflict files
+    );
+    console.error(`[CLI:SEND:${requestId}] ✅ Conflicts extracted: ${unmergedFiles.length} files with conflicts`);
+
+    // Build snapshot
+    const snapshot: SnapshotV1 = {
+      version: 1,
+      timestamp: new Date().toISOString(),
+      platform: process.platform as 'win32' | 'darwin' | 'linux',
+      repoRoot: gitInfo.repoRoot,
+      gitDir: gitInfo.gitDir,
+
+      branch: branchInfo,
+      isDetachedHead: statusInfo.isDetachedHead,
+
+      rebaseState,
+
+      unmergedFiles,
+      stagedFiles: statusInfo.stagedFiles,
+      modifiedFiles: statusInfo.modifiedFiles,
+      untrackedFiles: statusInfo.untrackedFiles,
+
+      recentLog: logEntries,
+      recentReflog: reflogEntries,
+
+      commitGraph: gitInfo.commitGraph || undefined,
+      diffStats: diffStats.length > 0 ? diffStats : undefined,
+      mergeHead: gitInfo.mergeHead || undefined,
+      mergeMessage: gitInfo.mergeMessage || undefined,
+
+      rawStatus: gitInfo.status,
+      rawBranches: gitInfo.branches,
+    };
+
+    console.error(`[CLI:SEND:${requestId}] ✅ Validating snapshot with schema...`);
+    // Validate with Zod
+    const validated = SnapshotV1Schema.parse(snapshot);
+    console.error(`[CLI:SEND:${requestId}] ✅ Snapshot validated`);
+    console.error(`[CLI:SEND:${requestId}]    Snapshot size: ${JSON.stringify(validated).length} bytes`);
+
+    console.error(`[CLI:SEND:${requestId}] 📤 Uploading snapshot to GitGuard API...`);
+    console.error(`[CLI:SEND:${requestId}]    Endpoint: ${apiUrl}/api/snapshots/ingest`);
+
+    const uploadStart = Date.now();
+    // Send to API
+    const response = await fetch(`${apiUrl}/api/snapshots/ingest`, {
+      method: 'POST',
+      headers: {
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify({ snapshot: validated }),
+    });
+
+    const uploadDuration = Date.now() - uploadStart;
+    console.error(`[CLI:SEND:${requestId}]    Upload duration: ${uploadDuration}ms`);
+    console.error(`[CLI:SEND:${requestId}]    Response status: ${response.status}`);
+
+    if (!response.ok) {
+      let errorMessage = `Server returned ${response.status}`;
+      try {
+        const errorData = await response.json() as { error?: string };
+        if (errorData.error) {
+          errorMessage = errorData.error;
+        }
+      } catch {
+        // Ignore JSON parse errors
+      }
+      console.error(`[CLI:SEND:${requestId}] ❌ Upload failed: ${errorMessage}`);
+      throw new Error(errorMessage);
+    }
+
+    const result = await response.json() as IngestResponse;
+    console.error(`[CLI:SEND:${requestId}] ✅ Upload successful`);
+    console.error(`[CLI:SEND:${requestId}]    Session ID: ${result.sessionId}`);
+    console.error(`[CLI:SEND:${requestId}]    Issue type: ${result.analysis.issueType}`);
+
+    // Display results
+    console.log('');
+    console.log('='.repeat(60));
+    console.log('');
+    console.log(`  Issue Type: ${result.analysis.issueType.replace('_', ' ').toUpperCase()}`);
+    console.log(`  Summary: ${result.analysis.summary}`);
+    console.log('');
+    console.log(`  Incident Room: ${result.url}`);
+    console.log('');
+    console.log('='.repeat(60));
+    console.log('');
+
+    // Try to open in browser if requested
+    if (options.open) {
+      try {
+        const open = await getOpenCommand();
+        if (open) {
+          const { exec } = await import('node:child_process');
+          exec(`${open} "${result.url}"`);
+          console.error(`[CLI:SEND:${requestId}] 🌐 Opening incident room in browser...`);
+        }
+      } catch {
+        // Silently fail if we can't open the browser
+      }
+    }
+
+    console.error(`[CLI:SEND:${requestId}] ✅ Command completed successfully`);
+    console.error(`[CLI:SEND:${requestId}] ========================================`);
+  } catch (error) {
+    console.error(`[CLI:SEND:${requestId}] ❌ Command failed`);
+    if (error instanceof Error) {
+      console.error(`[CLI:SEND:${requestId}]    Error: ${error.message}`);
+      if (error.message.includes('not a git repository')) {
+        console.error('Please run this command from within a git repository.');
+      } else if (error.message.includes('fetch')) {
+        console.error(`Could not connect to ${apiUrl}. Is the server running?`);
+      }
+    }
+    console.error(`[CLI:SEND:${requestId}] ========================================`);
+    process.exit(1);
+  }
+}
+
+async function getOpenCommand(): Promise<string | null> {
+  switch (process.platform) {
+    case 'darwin':
+      return 'open';
+    case 'win32':
+      // "start" treats the first quoted argument as the window title,
+      // so pass an empty title to ensure the URL is opened.
+      return 'start ""';
+    case 'linux':
+      return 'xdg-open';
+    default:
+      return null;
+  }
+}
diff --git a/apps/cli/src/commands/snapshot.ts b/apps/cli/src/commands/snapshot.ts
index b119f88..5d45f63 100644
--- a/apps/cli/src/commands/snapshot.ts
+++ b/apps/cli/src/commands/snapshot.ts
@@ -15,9 +15,18 @@ interface SnapshotOptions {
 }
 
 export async function snapshotCommand(options: SnapshotOptions): Promise<void> {
+  const requestId = `snapshot-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`;
+
+  console.error(`[CLI:SNAPSHOT:${requestId}] ========================================`);
+  console.error(`[CLI:SNAPSHOT:${requestId}] 📸 Starting snapshot generation`);
+  console.error(`[CLI:SNAPSHOT:${requestId}]    Output: ${options.output || 'stdout'}`);
+  console.error(`[CLI:SNAPSHOT:${requestId}]    Pretty: ${options.pretty || false}`);
+
   try {
+    console.error(`[CLI:SNAPSHOT:${requestId}] 📥 Collecting Git repository information...`);
     // Collect git information
     const gitInfo = await collectGitInfo();
+    console.error(`[CLI:SNAPSHOT:${requestId}] ✅ Git info collected`);
 
     // Parse status
     const statusInfo = parseStatus(gitInfo.status);
@@ -76,8 +85,11 @@ export async function snapshotCommand(options: SnapshotOptions): Promise<void> {
       rawBranches: gitInfo.branches,
     };
 
+    console.error(`[CLI:SNAPSHOT:${requestId}] ✅ Validating snapshot with schema...`);
     // Validate with Zod
     const validated = SnapshotV1Schema.parse(snapshot);
+    console.error(`[CLI:SNAPSHOT:${requestId}] ✅ Snapshot validated`);
+    console.error(`[CLI:SNAPSHOT:${requestId}]    Snapshot size: ${JSON.stringify(validated).length} bytes`);
 
     // Output
     const jsonOutput = options.pretty
@@ -86,17 +98,21 @@ export async function snapshotCommand(options: SnapshotOptions): Promise<void> {
     if (options.output) {
       writeFileSync(options.output, jsonOutput, 'utf-8');
-      console.error(`Snapshot written to: ${options.output}`);
+      console.error(`[CLI:SNAPSHOT:${requestId}] ✅ Snapshot written to: ${options.output}`);
     } else {
       console.log(jsonOutput);
+      console.error(`[CLI:SNAPSHOT:${requestId}] ✅ Snapshot output to stdout`);
     }
+    console.error(`[CLI:SNAPSHOT:${requestId}] ========================================`);
   } catch (error) {
+    console.error(`[CLI:SNAPSHOT:${requestId}] ❌ Snapshot generation failed`);
     if (error instanceof Error) {
-      console.error(`Error: ${error.message}`);
+      console.error(`[CLI:SNAPSHOT:${requestId}]    Error: ${error.message}`);
       if (error.message.includes('not a git repository')) {
         console.error('Please run this command from within a git repository.');
       }
     }
+    console.error(`[CLI:SNAPSHOT:${requestId}] ========================================`);
     process.exit(1);
   }
 }
diff --git a/apps/cli/src/index.ts b/apps/cli/src/index.ts
index 9b5b5e0..71f87d3 100644
--- a/apps/cli/src/index.ts
+++ b/apps/cli/src/index.ts
@@ -1,6 +1,7 @@
 #!/usr/bin/env node
 import { program } from 'commander';
 import { snapshotCommand } from './commands/snapshot.js';
+import { sendCommand } from './commands/send.js';
 
 program
   .name('gitguard')
@@ -14,4 +15,11 @@ program
   .option('--pretty', 'Pretty-print JSON output')
   .action(snapshotCommand);
 
+program
+  .command('send')
+  .description('Capture and send repository state to GitGuard for analysis')
+  .option('-u, --api-url <url>', 'GitGuard API URL (default: http://localhost:3000)')
+  .option('-o, --open', 'Open the incident room in browser after upload')
+  .action(sendCommand);
+
 program.parse();
diff --git a/apps/web/package.json b/apps/web/package.json
index 4ff247a..3150d29 100644
--- a/apps/web/package.json
+++ b/apps/web/package.json
@@ -19,13 +19,13 @@
     "bcryptjs": "^2.4.3",
     "class-variance-authority": "^0.7.1",
     "clsx": "^2.1.1",
-    "lucide-react": "^0.562.0",
+    "lucide-react": "^0.469.0",
"nanoid": "^5.0.9", "next": "^15.1.3", "next-auth": "^5.0.0-beta.25", "react": "^19.0.0", "react-dom": "^19.0.0", - "tailwind-merge": "^3.4.0", + "tailwind-merge": "^2.6.0", "zod": "^3.24.1" }, "devDependencies": { diff --git a/apps/web/prisma/schema.prisma b/apps/web/prisma/schema.prisma index 6bb2177..f3f9e9b 100644 --- a/apps/web/prisma/schema.prisma +++ b/apps/web/prisma/schema.prisma @@ -1,4 +1,4 @@ -// Prisma schema for GitGuard Agent +// Prisma schema for GitGuard Agent - Git Incident Room // Using Neon Postgres as the database generator client { @@ -6,8 +6,8 @@ generator client { } datasource db { - provider = "postgresql" - url = env("DATABASE_URL") + provider = "postgresql" + url = env("DATABASE_URL") } // ============================================ @@ -20,16 +20,34 @@ model User { email String? @unique emailVerified DateTime? image String? - password String? // For credentials provider + password String? // For credentials provider - accounts Account[] - sessions Session[] - gitSessions GitSession[] + accounts Account[] + sessions Session[] + gitSessions GitSession[] + authenticationLogs AuthenticationLog[] createdAt DateTime @default(now()) updatedAt DateTime @updatedAt } +// Authentication audit logging +model AuthenticationLog { + id String @id @default(cuid()) + userId String? + user User? @relation(fields: [userId], references: [id], onDelete: SetNull) + action String // LOGIN, REGISTER, LOGOUT, LOGIN_FAILED, TOKEN_REFRESH + ipAddress String? + userAgent String? @db.Text + success Boolean + failureReason String? 
+ createdAt DateTime @default(now()) + + @@index([userId]) + @@index([action]) + @@index([createdAt]) +} + model Account { id String @id @default(cuid()) userId String @@ -67,105 +85,195 @@ model VerificationToken { } // ============================================ -// GitGuard Application Tables +// GitGuard Application Tables - Git Incident Room // ============================================ +// Main session for a git incident model GitSession { - id String @id @default(cuid()) - createdAt DateTime @default(now()) - updatedAt DateTime @updatedAt + id String @id @default(cuid()) + createdAt DateTime @default(now()) + updatedAt DateTime @updatedAt - title String? - os String? + title String? + os String? repoRootHash String? + status String @default("pending") // pending, analyzing, ready, error // Owner (nullable for anonymous/public uploads) - userId String? - user User? @relation(fields: [userId], references: [id], onDelete: SetNull) + userId String? + user User? @relation(fields: [userId], references: [id], onDelete: SetNull) // Relations - snapshots Snapshot[] - plans Plan[] - traces Trace[] + snapshots Snapshot[] + analyses Analysis[] + traces Trace[] @@index([userId]) @@index([createdAt]) + @@index([status]) } +// Raw snapshot from CLI model Snapshot { - id String @id @default(cuid()) - createdAt DateTime @default(now()) + id String @id @default(cuid()) + createdAt DateTime @default(now()) gitSessionId String gitSession GitSession @relation(fields: [gitSessionId], references: [id], onDelete: Cascade) // Snapshot data stored as JSONB snapshotJson Json - truncated Boolean @default(false) + truncated Boolean @default(false) + + // Analysis created from this snapshot + analysis Analysis? 
// Traces that reference this snapshot - traces Trace[] + traces Trace[] @@index([gitSessionId]) @@index([createdAt]) } -model Plan { - id String @id @default(cuid()) - createdAt DateTime @default(now()) +// Analysis result from SpoonOS agent +model Analysis { + id String @id @default(cuid()) + createdAt DateTime @default(now()) gitSessionId String gitSession GitSession @relation(fields: [gitSessionId], references: [id], onDelete: Cascade) - issueType String? - risk String? + // Link to source snapshot + snapshotId String @unique + snapshot Snapshot @relation(fields: [snapshotId], references: [id], onDelete: Cascade) - // Plan data stored as JSONB - planJson Json + // Issue classification + issueType String // merge_conflict, detached_head, rebase_in_progress, clean, unknown + summary String? @db.Text - dangerousAllowed Boolean @default(false) + // Repo graph for visualization (nodes/edges) + repoGraphJson Json? + + // Relations + conflictFiles ConflictFile[] + planSteps PlanStep[] @@index([gitSessionId]) - @@index([createdAt]) + @@index([issueType]) +} + +// Conflict file in a merge conflict +model ConflictFile { + id String @id @default(cuid()) + createdAt DateTime @default(now()) + + analysisId String + analysis Analysis @relation(fields: [analysisId], references: [id], onDelete: Cascade) + + path String + highLevelSummary String? @db.Text + + // Hunks within this file + hunks ConflictHunk[] + + @@index([analysisId]) } +// Individual conflict hunk within a file +model ConflictHunk { + id String @id @default(cuid()) + createdAt DateTime @default(now()) + + conflictFileId String + conflictFile ConflictFile @relation(fields: [conflictFileId], references: [id], onDelete: Cascade) + + index Int // Order within the file + startLine Int? + endLine Int? + + // Three-way content + baseText String @db.Text + oursText String @db.Text + theirsText String @db.Text + + // AI analysis + explanation String? @db.Text + suggestedChoice String? 
// ours, theirs, manual, combine + suggestedContent String? @db.Text + + // User's choice (for tracking resolution) + userChoice String? // ours, theirs, manual + userContent String? @db.Text + + @@index([conflictFileId]) +} + +// Step in the recovery plan +model PlanStep { + id String @id @default(cuid()) + createdAt DateTime @default(now()) + + analysisId String + analysis Analysis @relation(fields: [analysisId], references: [id], onDelete: Cascade) + + index Int + title String + rationale String? @db.Text + dangerLevel String @default("safe") // safe, caution, dangerous + + // Commands as JSON arrays + commandsJson Json // string[] + verifyJson Json // string[] - commands to verify success + undoJson Json // string[] - commands to undo + + // Execution tracking + status String @default("pending") // pending, completed, skipped + completedAt DateTime? + userConfirmed Boolean @default(false) // For dangerous steps + + @@index([analysisId]) + @@index([index]) +} + +// Trace for SpoonOS pipeline debugging model Trace { - id String @id @default(cuid()) - createdAt DateTime @default(now()) + id String @id @default(cuid()) + createdAt DateTime @default(now()) gitSessionId String gitSession GitSession @relation(fields: [gitSessionId], references: [id], onDelete: Cascade) // SpoonOS stage identifier - stage String // collector, classifier, visual_explainer, planner, verifier + stage String // ingest, analyze, explain, plan, verify // Reference to input snapshot (optional) - snapshotId String? - snapshot Snapshot? @relation(fields: [snapshotId], references: [id], onDelete: SetNull) + snapshotId String? + snapshot Snapshot? @relation(fields: [snapshotId], references: [id], onDelete: SetNull) - // Output data stored as JSONB - outputJson Json + // Input/Output data stored as JSONB + inputJson Json? + outputJson Json? // Execution metadata - durationMs Int? - success Boolean @default(true) - errorMessage String? + durationMs Int? 
+ success Boolean @default(true) + errorMessage String? @db.Text @@index([gitSessionId]) @@index([stage]) @@index([createdAt]) } -// Optional: Event log for audit/analytics +// Event log for audit/analytics model Event { - id String @id @default(cuid()) - createdAt DateTime @default(now()) + id String @id @default(cuid()) + createdAt DateTime @default(now()) - type String // session_created, plan_generated, conflict_explained, etc. - userId String? + type String // session_created, analysis_completed, step_executed, etc. + userId String? gitSessionId String? - metadata Json? + metadata Json? @@index([type]) @@index([userId]) diff --git a/apps/web/public/test-snapshots/detached-head-resolved.json b/apps/web/public/test-snapshots/detached-head-resolved.json new file mode 100644 index 0000000..e6aa313 --- /dev/null +++ b/apps/web/public/test-snapshots/detached-head-resolved.json @@ -0,0 +1,39 @@ +{ + "version": 1, + "timestamp": "2024-12-21T11:15:00.000Z", + "platform": "linux", + "repoRoot": "/home/dev/webapp", + "gitDir": "/home/dev/webapp/.git", + "branch": { + "head": "main", + "oid": "f6g7h8i9j0k1l2m", + "upstream": "origin/main", + "aheadBehind": { "ahead": 1, "behind": 0 } + }, + "isDetachedHead": false, + "rebaseState": { + "inProgress": false, + "type": "none" + }, + "unmergedFiles": [], + "stagedFiles": ["src/components/Button.tsx", "src/styles/theme.css"], + "modifiedFiles": ["src/App.tsx", "package.json"], + "untrackedFiles": ["temp-debug.log"], + "recentLog": [ + { "hash": "f6g7h8i", "refs": ["HEAD", "main"], "message": "Fix button hover state" }, + { "hash": "e5f6789", "refs": ["tag: v2.0.0"], "message": "Release v2.0.0" }, + { "hash": "b3c4d5e", "refs": [], "message": "Update dependencies" }, + { "hash": "j9k0l1m", "refs": ["origin/main"], "message": "Refactor component structure" }, + { "hash": "n2o3p4q", "refs": ["origin/develop"], "message": "WIP: New feature" } + ], + "recentReflog": [ + { "hash": "f6g7h8i", "selector": "HEAD@{0}", "action": 
"checkout", "message": "checkout: moving from a1b2c3d to main" }, + { "hash": "a1b2c3d", "selector": "HEAD@{1}", "action": "commit", "message": "commit: Fix button hover state" }, + { "hash": "f6g7h8i", "selector": "HEAD@{2}", "action": "checkout", "message": "checkout: moving from main to a1b2c3d" }, + { "hash": "j9k0l1m", "selector": "HEAD@{3}", "action": "commit", "message": "commit: Add dark mode support" } + ], + "commitGraph": "* f6g7h8i (HEAD -> main) Fix button hover state\n* e5f6789 (tag: v2.0.0) Release v2.0.0\n* b3c4d5e Update dependencies\n* j9k0l1m (origin/main) Refactor component structure", + "rawStatus": "M src/components/Button.tsx\nM src/styles/theme.css\n M src/App.tsx\n M package.json\n?? temp-debug.log", + "rawBranches": "* main f6g7h8i Fix button hover state\n develop n2o3p4q WIP: New feature" +} + diff --git a/apps/web/public/test-snapshots/detached-head.json b/apps/web/public/test-snapshots/detached-head.json new file mode 100644 index 0000000..3d9b7dc --- /dev/null +++ b/apps/web/public/test-snapshots/detached-head.json @@ -0,0 +1,38 @@ +{ + "version": 1, + "timestamp": "2024-12-21T11:00:00.000Z", + "platform": "linux", + "repoRoot": "/home/dev/webapp", + "gitDir": "/home/dev/webapp/.git", + "branch": { + "head": "(detached)", + "oid": "a1b2c3d4e5f6789", + "aheadBehind": { "ahead": 0, "behind": 0 } + }, + "isDetachedHead": true, + "rebaseState": { + "inProgress": false, + "type": "none" + }, + "unmergedFiles": [], + "stagedFiles": ["src/components/Button.tsx", "src/styles/theme.css"], + "modifiedFiles": ["src/App.tsx", "package.json"], + "untrackedFiles": ["temp-debug.log"], + "recentLog": [ + { "hash": "a1b2c3d", "refs": ["HEAD"], "message": "Fix button hover state" }, + { "hash": "e5f6789", "refs": ["tag: v2.0.0"], "message": "Release v2.0.0" }, + { "hash": "b3c4d5e", "refs": [], "message": "Update dependencies" }, + { "hash": "f6g7h8i", "refs": ["origin/main", "main"], "message": "Add dark mode support" }, + { "hash": "j9k0l1m", "refs": 
[], "message": "Refactor component structure" }, + { "hash": "n2o3p4q", "refs": ["origin/develop"], "message": "WIP: New feature" } + ], + "recentReflog": [ + { "hash": "a1b2c3d", "selector": "HEAD@{0}", "action": "checkout", "message": "checkout: moving from main to a1b2c3d" }, + { "hash": "f6g7h8i", "selector": "HEAD@{1}", "action": "commit", "message": "commit: Add dark mode support" }, + { "hash": "j9k0l1m", "selector": "HEAD@{2}", "action": "commit", "message": "commit: Refactor component structure" }, + { "hash": "n2o3p4q", "selector": "HEAD@{3}", "action": "checkout", "message": "checkout: moving from develop to main" } + ], + "commitGraph": "* a1b2c3d (HEAD) Fix button hover state\n* e5f6789 (tag: v2.0.0) Release v2.0.0\n* b3c4d5e Update dependencies\n* f6g7h8i (origin/main, main) Add dark mode support\n* j9k0l1m Refactor component structure", + "rawStatus": "M src/components/Button.tsx\nM src/styles/theme.css\n M src/App.tsx\n M package.json\n?? temp-debug.log", + "rawBranches": "* (HEAD detached at a1b2c3d)\n main f6g7h8i Add dark mode support\n develop n2o3p4q WIP: New feature" +} diff --git a/apps/web/public/test-snapshots/merge-conflict-resolved.json b/apps/web/public/test-snapshots/merge-conflict-resolved.json new file mode 100644 index 0000000..9f4e066 --- /dev/null +++ b/apps/web/public/test-snapshots/merge-conflict-resolved.json @@ -0,0 +1,39 @@ +{ + "version": 1, + "timestamp": "2024-12-21T10:45:00.000Z", + "platform": "darwin", + "repoRoot": "/Users/dev/my-project", + "gitDir": "/Users/dev/my-project/.git", + "branch": { + "head": "feature/user-auth", + "oid": "xyz789abc123def", + "upstream": "origin/feature/user-auth", + "aheadBehind": { "ahead": 3, "behind": 0 } + }, + "isDetachedHead": false, + "rebaseState": { + "inProgress": false, + "type": "none" + }, + "unmergedFiles": [], + "stagedFiles": [], + "modifiedFiles": ["README.md"], + "untrackedFiles": ["notes.txt"], + "recentLog": [ + { "hash": "xyz789a", "refs": ["HEAD", "feature/user-auth"], 
"message": "Merge branch 'main' into feature/user-auth" }, + { "hash": "abc123d", "refs": [], "message": "Add user authentication" }, + { "hash": "def456e", "refs": [], "message": "Update login form validation" }, + { "hash": "789ghij", "refs": ["origin/main", "main"], "message": "Update config" }, + { "hash": "klm012n", "refs": [], "message": "Initial project setup" } + ], + "recentReflog": [ + { "hash": "xyz789a", "selector": "HEAD@{0}", "action": "commit", "message": "commit: Merge branch 'main' into feature/user-auth" }, + { "hash": "abc123d", "selector": "HEAD@{1}", "action": "merge", "message": "merge main: Merge made by recursive" }, + { "hash": "def456e", "selector": "HEAD@{2}", "action": "commit", "message": "commit: Update login form validation" }, + { "hash": "789ghij", "selector": "HEAD@{3}", "action": "checkout", "message": "checkout: moving from main to feature/user-auth" } + ], + "commitGraph": "* xyz789a (HEAD -> feature/user-auth) Merge branch 'main' into feature/user-auth\n|\\\n| * 789ghij (origin/main, main) Update config\n* | abc123d Add user authentication\n* | def456e Update login form validation\n|/\n* klm012n Initial setup", + "rawStatus": " M README.md\n?? 
notes.txt", + "rawBranches": "* feature/user-auth xyz789a Merge branch 'main' into feature/user-auth\n main 789ghij Update config" +} + diff --git a/apps/web/public/test-snapshots/merge-conflict.json b/apps/web/public/test-snapshots/merge-conflict.json new file mode 100644 index 0000000..ca5201d --- /dev/null +++ b/apps/web/public/test-snapshots/merge-conflict.json @@ -0,0 +1,68 @@ +{ + "version": 1, + "timestamp": "2024-12-21T10:30:00.000Z", + "platform": "darwin", + "repoRoot": "/Users/dev/my-project", + "gitDir": "/Users/dev/my-project/.git", + "branch": { + "head": "feature/user-auth", + "oid": "abc123def456789", + "upstream": "origin/feature/user-auth", + "aheadBehind": { "ahead": 2, "behind": 0 } + }, + "isDetachedHead": false, + "rebaseState": { + "inProgress": false, + "type": "none" + }, + "unmergedFiles": [ + { + "path": "src/auth/login.ts", + "stageOurs": "100644", + "stageTheirs": "100644", + "conflictBlocks": [ + { + "startLine": 15, + "endLine": 28, + "oursContent": "export async function login(email: string, password: string) {\n const user = await db.users.findByEmail(email);\n if (!user) throw new AuthError('User not found');\n const valid = await bcrypt.compare(password, user.password);\n if (!valid) throw new AuthError('Invalid password');\n return createSession(user);\n}", + "theirsContent": "export async function login(credentials: LoginCredentials) {\n const { email, password } = credentials;\n const user = await userRepository.findByEmail(email);\n if (!user) throw new UnauthorizedError('Invalid credentials');\n const isValid = await verifyPassword(password, user.passwordHash);\n if (!isValid) throw new UnauthorizedError('Invalid credentials');\n return sessionService.create(user.id);\n}", + "context": "import { db } from '../database';\nimport { createSession } from './session';\n\n// Authentication logic" + } + ] + }, + { + "path": "src/config/database.ts", + "stageOurs": "100644", + "stageTheirs": "100644", + "conflictBlocks": [ + { + 
"startLine": 5, + "endLine": 12, + "oursContent": "export const dbConfig = {\n host: 'localhost',\n port: 5432,\n database: 'myapp_dev'\n};", + "theirsContent": "export const dbConfig = {\n host: process.env.DB_HOST || 'localhost',\n port: parseInt(process.env.DB_PORT || '5432'),\n database: process.env.DB_NAME || 'myapp'\n};", + "context": "// Database configuration\n\nimport { PoolConfig } from 'pg';" + } + ] + } + ], + "stagedFiles": ["src/utils/helpers.ts"], + "modifiedFiles": ["README.md"], + "untrackedFiles": ["notes.txt"], + "recentLog": [ + { "hash": "abc123d", "refs": ["HEAD", "feature/user-auth"], "message": "Add user authentication" }, + { "hash": "def456e", "refs": [], "message": "Update login form validation" }, + { "hash": "789ghij", "refs": ["origin/main", "main"], "message": "Initial project setup" }, + { "hash": "klm012n", "refs": [], "message": "Add database migrations" }, + { "hash": "opq345r", "refs": [], "message": "Configure CI/CD pipeline" } + ], + "recentReflog": [ + { "hash": "abc123d", "selector": "HEAD@{0}", "action": "merge", "message": "merge main: Merge made by recursive" }, + { "hash": "def456e", "selector": "HEAD@{1}", "action": "commit", "message": "commit: Update login form validation" }, + { "hash": "789ghij", "selector": "HEAD@{2}", "action": "checkout", "message": "checkout: moving from main to feature/user-auth" } + ], + "commitGraph": "* abc123d (HEAD -> feature/user-auth) Add user authentication\n* def456e Update login form validation\n| * 789ghij (origin/main, main) Update config\n|/\n* klm012n Initial setup", + "mergeHead": "789ghij", + "mergeMessage": "Merge branch 'main' into feature/user-auth", + "rawStatus": "UU src/auth/login.ts\nUU src/config/database.ts\nM src/utils/helpers.ts\n M README.md\n?? 
notes.txt", + "rawBranches": "* feature/user-auth abc123d Add user authentication\n main 789ghij Update config" +} diff --git a/apps/web/public/test-snapshots/rebase-in-progress-resolved.json b/apps/web/public/test-snapshots/rebase-in-progress-resolved.json new file mode 100644 index 0000000..1be824b --- /dev/null +++ b/apps/web/public/test-snapshots/rebase-in-progress-resolved.json @@ -0,0 +1,42 @@ +{ + "version": 1, + "timestamp": "2024-12-21T12:00:00.000Z", + "platform": "win32", + "repoRoot": "C:/Projects/api-server", + "gitDir": "C:/Projects/api-server/.git", + "branch": { + "head": "feature/api-v2", + "oid": "z9a8b7c6d5e4f3g", + "upstream": "origin/feature/api-v2", + "aheadBehind": { "ahead": 4, "behind": 0 } + }, + "isDetachedHead": false, + "rebaseState": { + "inProgress": false, + "type": "none" + }, + "unmergedFiles": [], + "stagedFiles": [], + "modifiedFiles": [], + "untrackedFiles": [], + "recentLog": [ + { "hash": "z9a8b7c", "refs": ["HEAD", "feature/api-v2"], "message": "Add validation middleware" }, + { "hash": "h6i5j4k", "refs": [], "message": "Update error handling" }, + { "hash": "l3m2n1o", "refs": [], "message": "Refactor API v2 endpoints" }, + { "hash": "m4n5o6p", "refs": ["origin/main", "main"], "message": "Update Express to v5" }, + { "hash": "q7r8s9t", "refs": [], "message": "Add rate limiting" }, + { "hash": "u0v1w2x", "refs": [], "message": "Implement caching layer" } + ], + "recentReflog": [ + { "hash": "z9a8b7c", "selector": "HEAD@{0}", "action": "rebase", "message": "rebase (finish): returning to refs/heads/feature/api-v2" }, + { "hash": "h6i5j4k", "selector": "HEAD@{1}", "action": "rebase", "message": "rebase (continue): Update error handling" }, + { "hash": "l3m2n1o", "selector": "HEAD@{2}", "action": "rebase", "message": "rebase (continue): Refactor API v2 endpoints" }, + { "hash": "x1y2z3w", "selector": "HEAD@{3}", "action": "rebase", "message": "rebase (continue): Add validation middleware" }, + { "hash": "m4n5o6p", "selector": 
"HEAD@{4}", "action": "rebase", "message": "rebase (start): checkout main" }, + { "hash": "y3z4a5b", "selector": "HEAD@{5}", "action": "checkout", "message": "checkout: moving from main to feature/api-v2" } + ], + "commitGraph": "* z9a8b7c (HEAD -> feature/api-v2) Add validation middleware\n* h6i5j4k Update error handling\n* l3m2n1o Refactor API v2 endpoints\n* m4n5o6p (origin/main, main) Update Express to v5\n* q7r8s9t Add rate limiting\n* u0v1w2x Implement caching layer", + "rawStatus": "", + "rawBranches": "* feature/api-v2 z9a8b7c Add validation middleware\n main m4n5o6p Update Express to v5" +} + diff --git a/apps/web/public/test-snapshots/rebase-in-progress.json b/apps/web/public/test-snapshots/rebase-in-progress.json new file mode 100644 index 0000000..0a013bf --- /dev/null +++ b/apps/web/public/test-snapshots/rebase-in-progress.json @@ -0,0 +1,57 @@ +{ + "version": 1, + "timestamp": "2024-12-21T11:30:00.000Z", + "platform": "win32", + "repoRoot": "C:/Projects/api-server", + "gitDir": "C:/Projects/api-server/.git", + "branch": { + "head": "(no branch, rebasing feature/api-v2)", + "oid": "x1y2z3w4v5u6789", + "upstream": "origin/feature/api-v2", + "aheadBehind": { "ahead": 3, "behind": 5 } + }, + "isDetachedHead": true, + "rebaseState": { + "inProgress": true, + "type": "merge", + "headName": "feature/api-v2", + "onto": "main", + "currentStep": 2, + "totalSteps": 4 + }, + "unmergedFiles": [ + { + "path": "src/routes/users.ts", + "stageOurs": "100644", + "stageTheirs": "100644", + "conflictBlocks": [ + { + "startLine": 22, + "endLine": 35, + "oursContent": "router.get('/users/:id', async (req, res) => {\n const user = await UserService.getById(req.params.id);\n if (!user) return res.status(404).json({ error: 'Not found' });\n res.json(user);\n});", + "theirsContent": "router.get('/users/:id', validateParams(userIdSchema), async (req, res) => {\n const user = await userRepository.findById(req.validatedParams.id);\n if (!user) throw new NotFoundError('User not 
found');\n res.json(userSerializer.serialize(user));\n});", + "context": "import { Router } from 'express';\nimport { UserService } from '../services/user';\n\nconst router = Router();" + } + ] + } + ], + "stagedFiles": ["src/middleware/validation.ts"], + "modifiedFiles": [], + "untrackedFiles": [], + "recentLog": [ + { "hash": "x1y2z3w", "refs": ["HEAD"], "message": "Apply commit 2/4: Add validation middleware" }, + { "hash": "m4n5o6p", "refs": ["origin/main", "main"], "message": "Update Express to v5" }, + { "hash": "q7r8s9t", "refs": [], "message": "Add rate limiting" }, + { "hash": "u0v1w2x", "refs": [], "message": "Implement caching layer" }, + { "hash": "y3z4a5b", "refs": ["origin/feature/api-v2"], "message": "Refactor API v2 endpoints" } + ], + "recentReflog": [ + { "hash": "x1y2z3w", "selector": "HEAD@{0}", "action": "rebase", "message": "rebase (continue): Add validation middleware" }, + { "hash": "c6d7e8f", "selector": "HEAD@{1}", "action": "rebase", "message": "rebase (pick): Update error handling" }, + { "hash": "m4n5o6p", "selector": "HEAD@{2}", "action": "rebase", "message": "rebase (start): checkout main" }, + { "hash": "y3z4a5b", "selector": "HEAD@{3}", "action": "checkout", "message": "checkout: moving from main to feature/api-v2" } + ], + "commitGraph": "* x1y2z3w (HEAD) Apply commit 2/4: Add validation middleware\n* m4n5o6p (origin/main, main) Update Express to v5\n* q7r8s9t Add rate limiting\n|\n| * y3z4a5b (origin/feature/api-v2) Refactor API v2 endpoints\n|/\n* u0v1w2x Implement caching layer", + "rawStatus": "UU src/routes/users.ts\nM src/middleware/validation.ts", + "rawBranches": "* (no branch, rebasing feature/api-v2)\n main m4n5o6p Update Express to v5\n feature/api-v2 y3z4a5b Refactor API v2 endpoints" +} diff --git a/apps/web/src/app/analyzer/page.tsx b/apps/web/src/app/analyzer/page.tsx deleted file mode 100644 index e7c8f03..0000000 --- a/apps/web/src/app/analyzer/page.tsx +++ /dev/null @@ -1,130 +0,0 @@ -'use client'; - -import 
React, { useState } from 'react'; -import { ArrowLeft, ChevronDown, ChevronUp } from 'lucide-react'; -import Link from 'next/link'; - -const ConflictAnalyzer = () => { - const [showContext, setShowContext] = useState(false); - - return ( -
[deleted page body: the JSX markup was lost in extraction; recoverable content of the removed ConflictAnalyzer demo page follows.]

- "Back to home" link; header "Conflict Analysis" for src/routes/users.ts • Lines 22-35, with badges "Mixed" and "moderate complexity".

- OURS pane, labeled "(no branch, rebasing feature/api-v2)":

    router.get('/users/:id', async (req, res) => {
      const user = await UserService.getById(req.params.id);
      if (!user) return res.status(404).json({ error: 'Not found' });
      res.json(user);
    });

- THEIRS pane, labeled "incoming":

    router.get('/users/:id', validateParams(userIdSchema), async (req, res) => {
      const user = await userRepository.findById(req.validatedParams.id);
      if (!user) throw new NotFoundError('User not found');
      res.json(userSerializer.serialize(user));
    });

- What OURS changed: the current branch's GET /users/:id handler fetches the user by ID through UserService and returns the user object directly, responding with 404 Not Found when the user is missing.

- What THEIRS changed: the incoming handler validates the ID with userIdSchema, fetches the user through userRepository, and serializes it with userSerializer before returning it; when the user is missing, it throws a NotFoundError.

- Why this conflict occurred: the two handlers diverged in implementation (UserService on the current branch versus userRepository plus a serializer on the incoming side), and their error handling and response formats also differ.

- Suggested resolution strategy: keep the core behavior of fetching the user by ID, but combine the error handling and response formatting: adopt the userRepository and serializer from the incoming version while catching the NotFoundError and returning a 404 response as the current branch does.

- Footer link: "View Resolution Steps".
- ); -}; - -export default ConflictAnalyzer; \ No newline at end of file diff --git a/apps/web/src/app/api/auth/register/route.ts b/apps/web/src/app/api/auth/register/route.ts index 0802417..c693e66 100644 --- a/apps/web/src/app/api/auth/register/route.ts +++ b/apps/web/src/app/api/auth/register/route.ts @@ -1,8 +1,12 @@ import { NextRequest, NextResponse } from 'next/server'; import prisma from '@/lib/prisma'; import { hashPassword } from '@/lib/auth'; +import { createAuthLog, extractClientIp } from '@/lib/db'; export async function POST(request: NextRequest) { + const ipAddress = extractClientIp(request.headers); + const userAgent = request.headers.get('user-agent'); + try { const { name, email, password } = await request.json(); @@ -26,6 +30,15 @@ export async function POST(request: NextRequest) { }); if (existingUser) { + await createAuthLog({ + userId: null, + action: 'REGISTER_FAILED', + ipAddress, + userAgent, + success: false, + failureReason: 'Email already exists', + }).catch(console.error); + return NextResponse.json( { error: 'An account with this email already exists' }, { status: 409 } @@ -48,6 +61,15 @@ export async function POST(request: NextRequest) { }, }); + // Log successful registration + await createAuthLog({ + userId: user.id, + action: 'REGISTER', + ipAddress, + userAgent, + success: true, + }).catch(console.error); + return NextResponse.json({ user }, { status: 201 }); } catch (error) { console.error('Registration error:', error); diff --git a/apps/web/src/app/api/incident/[id]/route.ts b/apps/web/src/app/api/incident/[id]/route.ts new file mode 100644 index 0000000..2295930 --- /dev/null +++ b/apps/web/src/app/api/incident/[id]/route.ts @@ -0,0 +1,95 @@ +import { NextRequest, NextResponse } from 'next/server'; +import { getSessionWithDetails } from '@/lib/db'; + +/** + * GET /api/incident/[id] + * + * Fetch session data with all related details for the incident room. 
+ */ +export async function GET( + request: NextRequest, + { params }: { params: Promise<{ id: string }> } +) { + try { + const { id } = await params; + const session = await getSessionWithDetails(id); + + if (!session) { + return NextResponse.json( + { error: 'Session not found' }, + { status: 404 } + ); + } + + // Get the latest snapshot and analysis + const snapshot = session.snapshots[0]; + const analysis = session.analyses[0]; + + // Transform to the expected format + const response = { + id: session.id, + title: session.title, + status: session.status, + createdAt: session.createdAt.toISOString(), + snapshot: snapshot ? { + branch: (snapshot.snapshotJson as { branch?: { head?: string; oid?: string } })?.branch || { head: 'unknown', oid: '' }, + platform: (snapshot.snapshotJson as { platform?: string })?.platform || 'unknown', + isDetachedHead: (snapshot.snapshotJson as { isDetachedHead?: boolean })?.isDetachedHead || false, + rebaseState: (snapshot.snapshotJson as { rebaseState?: { inProgress: boolean } })?.rebaseState || { inProgress: false }, + } : { + branch: { head: 'unknown', oid: '' }, + platform: 'unknown', + isDetachedHead: false, + rebaseState: { inProgress: false }, + }, + analysis: analysis ? 
{ + issueType: analysis.issueType, + summary: analysis.summary, + repoGraphJson: analysis.repoGraphJson, + conflictFiles: analysis.conflictFiles.map(file => ({ + id: file.id, + path: file.path, + highLevelSummary: file.highLevelSummary, + hunks: file.hunks.map(hunk => ({ + id: hunk.id, + index: hunk.index, + baseText: hunk.baseText, + oursText: hunk.oursText, + theirsText: hunk.theirsText, + explanation: hunk.explanation, + suggestedChoice: hunk.suggestedChoice, + userChoice: hunk.userChoice, + })), + })), + planSteps: analysis.planSteps.map(step => ({ + id: step.id, + index: step.index, + title: step.title, + rationale: step.rationale, + commandsJson: step.commandsJson, + verifyJson: step.verifyJson, + undoJson: step.undoJson, + dangerLevel: step.dangerLevel, + status: step.status, + })), + } : null, + traces: session.traces.map(trace => ({ + id: trace.id, + stage: trace.stage, + inputJson: trace.inputJson, + outputJson: trace.outputJson, + durationMs: trace.durationMs, + success: trace.success, + createdAt: trace.createdAt.toISOString(), + })), + }; + + return NextResponse.json(response); + } catch (error) { + console.error('Error fetching session:', error); + return NextResponse.json( + { error: 'Failed to fetch session' }, + { status: 500 } + ); + } +} diff --git a/apps/web/src/app/api/sessions/[id]/plan/route.ts b/apps/web/src/app/api/sessions/[id]/plan/route.ts index 4a9c3a7..f92d97b 100644 --- a/apps/web/src/app/api/sessions/[id]/plan/route.ts +++ b/apps/web/src/app/api/sessions/[id]/plan/route.ts @@ -1,65 +1,398 @@ import { NextRequest, NextResponse } from 'next/server'; -import { SnapshotV1Schema, type Signals } from '@gitguard/schema'; -import { getLatestSnapshot, getTraces, createPlan, saveTrace } from '@/lib/db'; -import { classifyIssue, generatePlan } from '@/lib/agent'; +import { SnapshotV1Schema } from '@gitguard/schema'; +import { + getSession, + getLatestSnapshot, + getLatestAnalysis, + updatePlanStepStatus, + createAnalysis, + createPlanStep, + 
createConflictFile, + createConflictHunk, + saveTrace, + deleteAnalysisBySnapshotId, +} from '@/lib/db'; +import { collectSignals } from '@/lib/agent/collector'; +import { classifyIssue } from '@/lib/agent/classifier'; +import { generatePlan } from '@/lib/agent/planner'; +const AGENT_URL = process.env.AGENT_URL || 'http://localhost:8000'; + +/** + * GET /api/sessions/[id]/plan + * Get the recovery plan for a session. + */ +export async function GET( + request: NextRequest, + { params }: { params: Promise<{ id: string }> } +) { + try { + const { id: sessionId } = await params; + + const analysis = await getLatestAnalysis(sessionId); + if (!analysis) { + return NextResponse.json({ error: 'No analysis found' }, { status: 404 }); + } + + return NextResponse.json({ + issueType: analysis.issueType, + summary: analysis.summary, + planSteps: analysis.planSteps.map((step) => ({ + id: step.id, + index: step.index, + title: step.title, + rationale: step.rationale, + commands: step.commandsJson, + verify: step.verifyJson, + undo: step.undoJson, + dangerLevel: step.dangerLevel, + status: step.status, + })), + }); + } catch (error) { + console.error('Error fetching plan:', error); + const message = error instanceof Error ? error.message : 'Unknown error'; + return NextResponse.json({ error: message }, { status: 500 }); + } +} + +/** + * PATCH /api/sessions/[id]/plan + * Update a plan step status. 
+ */ +export async function PATCH( + request: NextRequest, + { params }: { params: Promise<{ id: string }> } +) { + try { + await params; // Validate params exist + const body = await request.json(); + const { stepId, status, userConfirmed } = body; + + if (!stepId || !status) { + return NextResponse.json( + { error: 'stepId and status are required' }, + { status: 400 } + ); + } + + const updatedStep = await updatePlanStepStatus(stepId, status, userConfirmed); + + return NextResponse.json({ + id: updatedStep.id, + status: updatedStep.status, + completedAt: updatedStep.completedAt?.toISOString(), + }); + } catch (error) { + console.error('Error updating plan step:', error); + const message = error instanceof Error ? error.message : 'Unknown error'; + return NextResponse.json({ error: message }, { status: 500 }); + } +} + +/** + * POST /api/sessions/[id]/plan + * Generate a recovery plan for a session. + */ export async function POST( request: NextRequest, { params }: { params: Promise<{ id: string }> } ) { + const requestId = `plan-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`; + const pipelineStart = Date.now(); + + console.log(`[WEB:PLAN:${requestId}] ========================================`); + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ“‹ New plan generation request`); + console.log(`[WEB:PLAN:${requestId}] Timestamp: ${new Date().toISOString()}`); + console.log(`[WEB:PLAN:${requestId}] Agent URL: ${AGENT_URL}`); + try { const { id: sessionId } = await params; + console.log(`[WEB:PLAN:${requestId}] Session ID: ${sessionId}`); + + // Check for optional dangerousAllowed body parameter + let dangerousAllowed = false; + try { + const body = await request.json(); + dangerousAllowed = body.dangerousAllowed ?? 
false; + } catch { + // No body or invalid JSON is fine, use defaults + } + + const session = await getSession(sessionId); + if (!session) { + console.error(`[WEB:PLAN:${requestId}] โŒ Session not found`); + return NextResponse.json({ error: 'Session not found' }, { status: 404 }); + } - // Get snapshot const snapshotRecord = await getLatestSnapshot(sessionId); if (!snapshotRecord) { + console.error(`[WEB:PLAN:${requestId}] โŒ Snapshot not found`); return NextResponse.json({ error: 'Snapshot not found' }, { status: 404 }); } const snapshot = SnapshotV1Schema.parse(snapshotRecord.snapshotJson); + console.log(`[WEB:PLAN:${requestId}] โœ… Snapshot loaded`); + console.log(`[WEB:PLAN:${requestId}] Branch: ${snapshot.branch.head}`); + console.log(`[WEB:PLAN:${requestId}] Conflicts: ${snapshot.unmergedFiles.length} files`); + + // Try to call Python agent first + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ”„ Attempting to call Python agent for AI analysis...`); + let usePythonAgent = false; + let agentAnalysis = null; + + try { + const agentRequestStart = Date.now(); + console.log(`[WEB:PLAN:${requestId}] Calling: ${AGENT_URL}/analyze`); + + const agentResult = await fetch(`${AGENT_URL}/analyze`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + snapshot, + options: { + includeGraph: true, + maxConflictFiles: 10, + maxHunksPerFile: 5, + }, + }), + }); + + const agentRequestDuration = Date.now() - agentRequestStart; + console.log(`[WEB:PLAN:${requestId}] Request duration: ${agentRequestDuration}ms`); + console.log(`[WEB:PLAN:${requestId}] Response status: ${agentResult.status}`); - // Get signals from collector trace - const traces = await getTraces(sessionId); - const collectorTrace = traces.find((t) => t.stage === 'collector'); - if (!collectorTrace) { - return NextResponse.json({ error: 'No collector trace found' }, { status: 400 }); + if (agentResult.ok) { + agentAnalysis = await agentResult.json(); + usePythonAgent = true; 
+ console.log(`[WEB:PLAN:${requestId}] โœ… Python agent responded successfully`); + console.log(`[WEB:PLAN:${requestId}] Success: ${agentAnalysis.success}`); + console.log(`[WEB:PLAN:${requestId}] Pipeline traces: ${agentAnalysis.pipelineTraces?.length || 0}`); + console.log(`[WEB:PLAN:${requestId}] ๐ŸŽฏ USING REAL AI MODEL (Python Agent)`); + + // Save pipeline traces from Python agent + if (agentAnalysis.pipelineTraces && Array.isArray(agentAnalysis.pipelineTraces)) { + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ’พ Saving ${agentAnalysis.pipelineTraces.length} pipeline traces from Python agent...`); + let cumulativeTime = pipelineStart; + for (const trace of agentAnalysis.pipelineTraces) { + const traceStart = cumulativeTime; + cumulativeTime += (trace.duration_ms || trace.durationMs || 0); + await saveTrace( + sessionId, + trace.stage || 'unknown', + snapshotRecord.id, + trace.input || trace.inputJson || {}, + trace.output || trace.outputJson || {}, + traceStart, + trace.success !== false, + ); + console.log(`[WEB:PLAN:${requestId}] โœ“ Saved trace: ${trace.stage}`); + } + } + } else { + const errorText = await agentResult.text(); + console.error(`[WEB:PLAN:${requestId}] โŒ Python agent returned error: ${agentResult.status}`); + console.error(`[WEB:PLAN:${requestId}] Error: ${errorText.substring(0, 200)}`); + throw new Error(`Agent returned ${agentResult.status}`); + } + } catch (agentError) { + console.error(`[WEB:PLAN:${requestId}] โŒ Python agent call failed`); + console.error(`[WEB:PLAN:${requestId}] Error: ${agentError instanceof Error ? 
agentError.message : String(agentError)}`); + console.warn(`[WEB:PLAN:${requestId}] โš ๏ธ FALLBACK MODE: Using TypeScript analysis (NO AI)`); + usePythonAgent = false; } - const signals = collectorTrace.outputJson as Signals; - - // Run classifier - const classifierStart = Date.now(); - const classification = await classifyIssue(signals); - await saveTrace(sessionId, 'classifier', null, classification, classifierStart); - - // Update signals with refined classification - const refinedSignals: Signals = { - ...signals, - primaryIssue: classification.primaryIssue, - secondaryIssues: classification.secondaryIssues, - estimatedRisk: classification.estimatedRisk, - }; - - // Run planner - const dangerousAllowed = false; // Could be passed from request body - const plannerStart = Date.now(); - const plan = await generatePlan(snapshot, refinedSignals, dangerousAllowed); - - // Save plan - await createPlan({ - gitSessionId: sessionId, - issueType: plan.issueType, - risk: plan.risk, - planJson: plan, - dangerousAllowed, - }); + // Run the SpoonOS pipeline (either from Python agent or TypeScript fallback) + let plan; + let analysis; + + if (usePythonAgent && agentAnalysis?.success && agentAnalysis?.analysis) { + // Use Python agent's AI-generated analysis + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ“ Processing Python agent analysis...`); + const aiAnalysis = agentAnalysis.analysis; + + // Delete existing analysis for this snapshot (allows regeneration) + await deleteAnalysisBySnapshotId(snapshotRecord.id); + + // Save analysis and plan steps from Python agent + analysis = await createAnalysis({ + gitSessionId: sessionId, + snapshotId: snapshotRecord.id, + issueType: aiAnalysis.issueType, + summary: aiAnalysis.summary, + repoGraphJson: aiAnalysis.repoGraph, + }); + console.log(`[WEB:PLAN:${requestId}] โœ… Analysis saved: ${analysis.id}`); + + // Store conflict files and hunks from Python agent + if (aiAnalysis.conflicts) { + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ’พ Storing 
${aiAnalysis.conflicts.length} conflict files from Python agent...`); + for (const conflict of aiAnalysis.conflicts) { + const conflictFile = await createConflictFile({ + analysisId: analysis.id, + path: conflict.path, + highLevelSummary: conflict.highLevelSummary, + }); + + for (const hunk of conflict.hunks) { + await createConflictHunk({ + conflictFileId: conflictFile.id, + index: hunk.index, + startLine: hunk.startLine, + endLine: hunk.endLine, + baseText: hunk.baseText, + oursText: hunk.oursText, + theirsText: hunk.theirsText, + explanation: hunk.explanation, + suggestedChoice: hunk.suggestedChoice, + suggestedContent: hunk.suggestedContent, + }); + } + } + } + + // Convert Python agent plan steps to our format + const planSteps = aiAnalysis.planSteps || []; + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ’พ Storing ${planSteps.length} plan steps from Python agent...`); + for (let i = 0; i < planSteps.length; i++) { + const step = planSteps[i]; + await createPlanStep({ + analysisId: analysis.id, + index: i, + title: step.title, + rationale: step.description || step.rationale, + commandsJson: step.commands || [], + verifyJson: step.verify || [], + undoJson: step.undo || [], + dangerLevel: step.dangerLevel || 'safe', + }); + } + + // Convert to PlanV1 format for response + plan = { + issueType: aiAnalysis.issueType, + issueSummary: aiAnalysis.summary, + risk: aiAnalysis.riskLevel || 'medium', + steps: planSteps.map((step: any, idx: number) => ({ + id: `step-${idx}`, + title: step.title, + description: step.description || step.rationale || '', + commands: step.commands || [], + expected: step.expectedOutput || '', + undo: { + possible: (step.undo || []).length > 0, + description: step.undo?.length > 0 ? 
'Use undo commands' : '', + commands: step.undo || [], + }, + dangerous: step.dangerLevel === 'dangerous' || step.dangerLevel === 'caution', + })), + }; - // Save planner trace - await saveTrace(sessionId, 'planner', null, plan, plannerStart); + console.log(`[WEB:PLAN:${requestId}] โœ… Plan generated from Python agent (AI-powered)`); + } else { + // Fallback to TypeScript analysis + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ“ Using TypeScript fallback analysis...`); - return NextResponse.json({ plan }); + // Stage 1: Detect Issue (SpoonOS: detect_issue) + const detectStart = Date.now(); + const signals = collectSignals(snapshot); + const issueType = signals.primaryIssue || 'unknown'; + const riskLevel = signals.estimatedRisk || 'medium'; + await saveTrace(sessionId, 'detect_issue', snapshotRecord.id, { snapshot: 'parsed' }, { issueType, riskLevel }, detectStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ detect_issue: ${issueType}, ${riskLevel}`); + + // Stage 2: Build Graph (SpoonOS: build_graph) + const graphStart = Date.now(); + const repoGraph = { + nodes: [], + edges: [], + headRef: snapshot.branch.oid, + }; + await saveTrace(sessionId, 'build_graph', snapshotRecord.id, { snapshot }, { nodes: 0, edges: 0 }, graphStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ build_graph`); + + // Stage 3: Extract Conflicts (SpoonOS: extract_conflicts) + const extractStart = Date.now(); + const conflictCount = snapshot.unmergedFiles.length; + await saveTrace(sessionId, 'extract_conflicts', snapshotRecord.id, { snapshot }, { conflictCount }, extractStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ extract_conflicts: ${conflictCount} files`); + + // Stage 4: Collect Signals (SpoonOS: collect_signals) + const signalsStart = Date.now(); + await saveTrace(sessionId, 'collect_signals', snapshotRecord.id, { snapshot }, signals, signalsStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ collect_signals`); + + // Stage 5: Classify Issue (refinement) + const classifierStart = 
Date.now(); + const classification = await classifyIssue(signals); + await saveTrace(sessionId, 'classifier', snapshotRecord.id, signals, classification, classifierStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ classifier`); + + // Update signals with classification results + const classifiedSignals = { + ...signals, + primaryIssue: classification.primaryIssue, + secondaryIssues: classification.secondaryIssues, + estimatedRisk: classification.estimatedRisk, + }; + + // Stage 6: Generate Analysis (SpoonOS: generate_analysis) + const plannerStart = Date.now(); + plan = await generatePlan(snapshot, classifiedSignals, dangerousAllowed); + await saveTrace(sessionId, 'generate_analysis', snapshotRecord.id, classifiedSignals, plan, plannerStart); + console.log(`[WEB:PLAN:${requestId}] โœ“ generate_analysis: ${plan.steps.length} steps`); + + // Delete existing analysis for this snapshot (allows regeneration) + await deleteAnalysisBySnapshotId(snapshotRecord.id); + + // Save analysis and plan steps to database + analysis = await createAnalysis({ + gitSessionId: sessionId, + snapshotId: snapshotRecord.id, + issueType: plan.issueType, + summary: plan.issueSummary, + }); + console.log(`[WEB:PLAN:${requestId}] โœ… Analysis saved: ${analysis.id}`); + + // Create plan steps + console.log(`[WEB:PLAN:${requestId}] ๐Ÿ’พ Storing ${plan.steps.length} plan steps from TypeScript fallback...`); + for (let i = 0; i < plan.steps.length; i++) { + const step = plan.steps[i]; + await createPlanStep({ + analysisId: analysis.id, + index: i, + title: step.title, + rationale: step.description, + commandsJson: step.commands, + verifyJson: { expected: step.expected }, + undoJson: step.undo, + dangerLevel: step.dangerous ? 
'dangerous' : 'safe', + }); + } + + console.log(`[WEB:PLAN:${requestId}] โœ… Plan generated from TypeScript fallback (NO AI)`); + } + + const totalDuration = Date.now() - pipelineStart; + console.log(`[WEB:PLAN:${requestId}] โœ… Plan generation completed`); + console.log(`[WEB:PLAN:${requestId}] Total duration: ${totalDuration}ms`); + console.log(`[WEB:PLAN:${requestId}] Plan steps: ${plan.steps.length}`); + console.log(`[WEB:PLAN:${requestId}] Source: ${usePythonAgent ? 'Python Agent (AI)' : 'TypeScript Fallback (NO AI)'}`); + console.log(`[WEB:PLAN:${requestId}] ========================================`); + + return NextResponse.json({ + success: true, + plan, + analysisId: analysis.id, + duration: totalDuration, + }); } catch (error) { - console.error('Error generating plan:', error); + console.error(`[WEB:PLAN:${requestId}] โŒ Fatal error during plan generation`); + console.error(`[WEB:PLAN:${requestId}] Error: ${error instanceof Error ? error.message : String(error)}`); + if (error instanceof Error && error.stack) { + console.error(`[WEB:PLAN:${requestId}] Stack: ${error.stack.substring(0, 500)}`); + } + console.error(`[WEB:PLAN:${requestId}] ========================================`); const message = error instanceof Error ? error.message : 'Unknown error'; return NextResponse.json({ error: message }, { status: 500 }); } diff --git a/apps/web/src/app/api/sessions/[id]/route.ts b/apps/web/src/app/api/sessions/[id]/route.ts index 680a72d..26d53a2 100644 --- a/apps/web/src/app/api/sessions/[id]/route.ts +++ b/apps/web/src/app/api/sessions/[id]/route.ts @@ -1,7 +1,12 @@ import { NextRequest, NextResponse } from 'next/server'; -import { SnapshotV1Schema, type PlanV1, type Signals } from '@gitguard/schema'; +import { SnapshotV1Schema } from '@gitguard/schema'; import { getSessionWithDetails } from '@/lib/db'; +import { collectSignals } from '@/lib/agent/collector'; +/** + * GET /api/sessions/[id] + * Get session data with snapshot, analysis, and traces. 
+ */ export async function GET( request: NextRequest, { params }: { params: Promise<{ id: string }> } @@ -20,35 +25,108 @@ export async function GET( } const snapshot = SnapshotV1Schema.parse(latestSnapshot.snapshotJson); + const latestAnalysis = sessionData.analyses[0]; - const latestPlan = sessionData.plans[0]; - const plan: PlanV1 | null = latestPlan - ? (latestPlan.planJson as PlanV1) - : null; + // Compute signals from snapshot + const signals = collectSignals(snapshot); const traces = sessionData.traces.map((t) => ({ stage: t.stage, + input: t.inputJson, output: t.outputJson, createdAt: t.createdAt.toISOString(), durationMs: t.durationMs, - })); + success: t.success, + })).filter(t => + // Only include SpoonOS pipeline stages for the pipeline tab + ['detect_issue', 'build_graph', 'extract_conflicts', 'collect_signals', 'generate_analysis'].includes(t.stage) || + // Or include all traces if none match (for backward compatibility) + !sessionData.traces.some(tr => ['detect_issue', 'build_graph', 'extract_conflicts', 'collect_signals', 'generate_analysis'].includes(tr.stage)) + ); - // Get signals from collector trace - let signals: Signals | null = null; - const collectorTrace = sessionData.traces.find((t) => t.stage === 'collector'); - if (collectorTrace) { - signals = collectorTrace.outputJson as Signals; + // Transform plan steps from database to PlanV1 format + let plan = null; + if (latestAnalysis && latestAnalysis.planSteps.length > 0) { + plan = { + version: 1, + timestamp: latestAnalysis.createdAt.toISOString(), + issueType: latestAnalysis.issueType, + issueSummary: latestAnalysis.summary || 'Recovery plan generated', + risk: signals.estimatedRisk || 'medium', + steps: latestAnalysis.planSteps.map((s) => { + // Parse undo from JSON + const undoData = s.undoJson as { commands?: string[]; description?: string; possible?: boolean } | string[] || {}; + const undoCommands = Array.isArray(undoData) ? 
undoData : (undoData.commands || []); + const undoDescription = Array.isArray(undoData) ? 'Undo this step' : (undoData.description || 'Undo this step'); + + return { + id: s.id, + title: s.title, + description: s.rationale || s.title, + commands: Array.isArray(s.commandsJson) ? s.commandsJson : [], + expected: typeof s.verifyJson === 'object' && s.verifyJson !== null && 'expected' in s.verifyJson + ? String((s.verifyJson as { expected?: unknown }).expected) + : 'Command executed successfully', + undo: { + possible: true, + commands: undoCommands, + description: undoDescription, + }, + dangerous: s.dangerLevel === 'dangerous', + requiresUserInput: false, + }; + }), + reflogRecovery: { + description: 'If something goes wrong, use reflog to recover', + relevantEntries: ['HEAD@{1}'], + recoveryCommand: 'git reset --hard HEAD@{1}', + }, + }; } return NextResponse.json({ session: { id: sessionData.id, title: sessionData.title, + status: sessionData.status, createdAt: sessionData.createdAt.toISOString(), }, snapshot, signals, plan, + analysis: latestAnalysis + ? 
{ + issueType: latestAnalysis.issueType, + summary: latestAnalysis.summary, + repoGraph: latestAnalysis.repoGraphJson, + conflictFiles: latestAnalysis.conflictFiles.map((f) => ({ + id: f.id, + path: f.path, + summary: f.highLevelSummary, + hunks: f.hunks.map((h) => ({ + id: h.id, + index: h.index, + baseText: h.baseText, + oursText: h.oursText, + theirsText: h.theirsText, + explanation: h.explanation, + suggestedChoice: h.suggestedChoice, + userChoice: h.userChoice, + })), + })), + planSteps: latestAnalysis.planSteps.map((s) => ({ + id: s.id, + index: s.index, + title: s.title, + rationale: s.rationale, + commands: s.commandsJson, + verify: s.verifyJson, + undo: s.undoJson, + dangerLevel: s.dangerLevel, + status: s.status, + })), + } + : null, traces, }); } catch (error) { diff --git a/apps/web/src/app/api/sessions/[id]/verify/route.ts b/apps/web/src/app/api/sessions/[id]/verify/route.ts index 4ccd080..987921f 100644 --- a/apps/web/src/app/api/sessions/[id]/verify/route.ts +++ b/apps/web/src/app/api/sessions/[id]/verify/route.ts @@ -1,8 +1,13 @@ import { NextRequest, NextResponse } from 'next/server'; -import { SnapshotV1Schema, type Signals, type PlanV1 } from '@gitguard/schema'; -import { getLatestPlan, getTraces, createSnapshot, saveTrace } from '@/lib/db'; -import { verifyProgress, collectSignals } from '@/lib/agent'; +import { SnapshotV1Schema } from '@gitguard/schema'; +import { createSnapshot, getLatestAnalysis, getTraces, saveTrace } from '@/lib/db'; +const AGENT_URL = process.env.AGENT_URL || 'http://localhost:8000'; + +/** + * POST /api/sessions/[id]/verify + * Upload a new snapshot to verify progress. 
+ */ export async function POST( request: NextRequest, { params }: { params: Promise<{ id: string }> } @@ -21,33 +26,53 @@ export async function POST( snapshotJson: newSnapshot, }); - // Get original signals from collector trace - const traces = await getTraces(sessionId); - const collectorTrace = traces.find((t) => t.stage === 'collector'); - if (!collectorTrace) { - return NextResponse.json({ error: 'No collector trace found' }, { status: 400 }); - } - - const originalSignals = collectorTrace.outputJson as Signals; + // Get original analysis + const originalAnalysis = await getLatestAnalysis(sessionId); + const originalIssueType = originalAnalysis?.issueType || 'unknown'; - // Get current plan - const planRecord = await getLatestPlan(sessionId); - if (!planRecord) { - return NextResponse.json({ error: 'No plan found' }, { status: 400 }); + // Determine new issue type based on snapshot + let newIssueType = 'unknown'; + if (newSnapshot.unmergedFiles.length > 0) { + newIssueType = 'merge_conflict'; + } else if (newSnapshot.rebaseState.inProgress) { + newIssueType = 'rebase_in_progress'; + } else if (newSnapshot.isDetachedHead) { + newIssueType = 'detached_head'; + } else { + newIssueType = 'clean'; } - const plan = planRecord.planJson as PlanV1; + const resolved = newIssueType === 'clean'; + const remainingIssues: string[] = []; - // Run verifier - const verifierStart = Date.now(); - const result = await verifyProgress(newSnapshot, originalSignals, plan); + if (newSnapshot.unmergedFiles.length > 0) { + remainingIssues.push(...newSnapshot.unmergedFiles.map((f) => `Conflict: ${f.path}`)); + } + if (newSnapshot.rebaseState.inProgress) { + remainingIssues.push('Rebase in progress'); + } + if (newSnapshot.isDetachedHead) { + remainingIssues.push('Detached HEAD'); + } - // Save new collector trace for the new snapshot - const newSignals = collectSignals(newSnapshot); - await saveTrace(sessionId, 'collector', snapshotRecord.id, newSignals); + const result = { + 
previousIssue: originalIssueType, + currentIssue: newIssueType, + resolved, + remainingIssues, + snapshotId: snapshotRecord.id, + }; - // Save verifier trace - await saveTrace(sessionId, 'verifier', snapshotRecord.id, result, verifierStart); + // Save verify trace + await saveTrace( + sessionId, + 'verify', + snapshotRecord.id, + { previousSnapshot: 'omitted', newSnapshot: 'omitted' }, + result, + Date.now(), + true + ); return NextResponse.json(result); } catch (error) { diff --git a/apps/web/src/app/api/sessions/route.ts b/apps/web/src/app/api/sessions/route.ts index 2583b0d..1ff8dc6 100644 --- a/apps/web/src/app/api/sessions/route.ts +++ b/apps/web/src/app/api/sessions/route.ts @@ -1,10 +1,45 @@ import { NextRequest, NextResponse } from 'next/server'; import { SnapshotV1Schema } from '@gitguard/schema'; -import { createSession, createSnapshot, saveTrace } from '@/lib/db'; +import { createSession, createSnapshot, saveTrace, getUserSessions } from '@/lib/db'; import { collectSignals } from '@/lib/agent'; import { auth } from '@/lib/auth'; import { createHash } from 'crypto'; +export async function GET() { + try { + const session = await auth(); + if (!session?.user?.id) { + return NextResponse.json({ error: 'Unauthorized' }, { status: 401 }); + } + + const sessions = await getUserSessions(session.user.id); + + // Transform to match frontend interface + const formattedSessions = sessions.map(s => ({ + id: s.id, + createdAt: s.createdAt.toISOString(), + title: s.title, + status: s.status, + analysis: s.analyses[0] ? { + issueType: s.analyses[0].issueType, + summary: s.analyses[0].summary, + } : null, + traces: s.traces.map(t => ({ + stage: t.stage, + outputJson: t.outputJson, + createdAt: t.createdAt.toISOString(), + success: t.success, + })), + })); + + return NextResponse.json({ sessions: formattedSessions }); + } catch (error) { + console.error('Error fetching sessions:', error); + const message = error instanceof Error ? 
error.message : 'Unknown error'; + return NextResponse.json({ error: message }, { status: 500 }); + } +} + export async function POST(request: NextRequest) { try { const body = await request.json(); diff --git a/apps/web/src/app/api/snapshots/ingest/route.ts b/apps/web/src/app/api/snapshots/ingest/route.ts new file mode 100644 index 0000000..1ebf56c --- /dev/null +++ b/apps/web/src/app/api/snapshots/ingest/route.ts @@ -0,0 +1,285 @@ +import { NextRequest, NextResponse } from 'next/server'; +import { SnapshotV1Schema } from '@gitguard/schema'; +import { createSession, createSnapshot, updateSessionStatus, saveTrace, createAnalysis, createConflictFile, createConflictHunk, createPlanStep } from '@/lib/db'; +import { auth } from '@/lib/auth'; +import { createHash } from 'crypto'; +import { generateTitle, generateFallbackAnalysis, type AgentAnalysis } from './utils'; + +const AGENT_URL = process.env.AGENT_URL || 'http://localhost:8000'; + +/** + * POST /api/snapshots/ingest + * + * Ingest a snapshot from CLI, analyze with SpoonOS agent, and return session URL. + * This is the main entry point for the CLI `gitguard send` command. 
+ */ +export async function POST(request: NextRequest) { + const startTime = Date.now(); + const requestId = `req-${Date.now()}-${Math.random().toString(36).slice(2, 9)}`; + + console.log(`[WEB:INGEST:${requestId}] ========================================`); + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ“ฅ New snapshot ingestion request received`); + console.log(`[WEB:INGEST:${requestId}] Timestamp: ${new Date().toISOString()}`); + console.log(`[WEB:INGEST:${requestId}] Agent URL: ${AGENT_URL}`); + + try { + const body = await request.json(); + const { snapshot: rawSnapshot } = body; + + console.log(`[WEB:INGEST:${requestId}] โœ… Request body parsed successfully`); + console.log(`[WEB:INGEST:${requestId}] Snapshot size: ${JSON.stringify(rawSnapshot).length} bytes`); + + // Validate snapshot + const snapshot = SnapshotV1Schema.parse(rawSnapshot); + console.log(`[WEB:INGEST:${requestId}] โœ… Snapshot validated against schema`); + console.log(`[WEB:INGEST:${requestId}] Repository: ${snapshot.repoRoot}`); + console.log(`[WEB:INGEST:${requestId}] Branch: ${snapshot.branch.head}`); + console.log(`[WEB:INGEST:${requestId}] Conflicts: ${snapshot.unmergedFiles.length} files`); + console.log(`[WEB:INGEST:${requestId}] Detached HEAD: ${snapshot.isDetachedHead}`); + console.log(`[WEB:INGEST:${requestId}] Rebase in progress: ${snapshot.rebaseState.inProgress}`); + + // Get current user (optional - uploads can be anonymous) + const session = await auth(); + const userId = session?.user?.id || null; + console.log(`[WEB:INGEST:${requestId}] User: ${userId || 'anonymous'}`); + + // Create hash of repo root for deduplication + const repoRootHash = createHash('sha256') + .update(snapshot.repoRoot) + .digest('hex') + .slice(0, 16); + + // Generate title from branch and issue + const title = generateTitle(snapshot); + console.log(`[WEB:INGEST:${requestId}] Generated title: ${title}`); + + // Create session in database + const gitSession = await createSession({ + title, + os: 
snapshot.platform, + repoRootHash, + userId, + status: 'analyzing', + }); + console.log(`[WEB:INGEST:${requestId}] โœ… Session created: ${gitSession.id}`); + + // Save snapshot + const snapshotRecord = await createSnapshot({ + gitSessionId: gitSession.id, + snapshotJson: snapshot, + }); + console.log(`[WEB:INGEST:${requestId}] โœ… Snapshot saved: ${snapshotRecord.id}`); + + // Save ingest trace + await saveTrace( + gitSession.id, + 'ingest', + snapshotRecord.id, + { source: 'api' }, + { snapshotId: snapshotRecord.id }, + startTime + ); + console.log(`[WEB:INGEST:${requestId}] โœ… Ingest trace saved`); + + // Call SpoonOS agent for analysis + const analyzeStartTime = Date.now(); + let agentResponse: { + success: boolean; + analysis?: AgentAnalysis; + error?: string; + durationMs?: number; + pipelineTraces?: Array<{ + stage: string; + duration_ms?: number; + durationMs?: number; + input?: unknown; + inputJson?: unknown; + output?: unknown; + outputJson?: unknown; + success?: boolean; + }>; + }; + + try { + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ”„ Attempting to call Python agent...`); + console.log(`[WEB:INGEST:${requestId}] URL: ${AGENT_URL}/analyze`); + console.log(`[WEB:INGEST:${requestId}] Method: POST`); + + const agentRequestStart = Date.now(); + const agentResult = await fetch(`${AGENT_URL}/analyze`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ + snapshot, + options: { + includeGraph: true, + maxConflictFiles: 10, + maxHunksPerFile: 5, + }, + }), + }); + + const agentRequestDuration = Date.now() - agentRequestStart; + console.log(`[WEB:INGEST:${requestId}] Request duration: ${agentRequestDuration}ms`); + console.log(`[WEB:INGEST:${requestId}] Response status: ${agentResult.status}`); + + if (!agentResult.ok) { + const errorText = await agentResult.text(); + console.error(`[WEB:INGEST:${requestId}] โŒ Agent returned error status ${agentResult.status}`); + console.error(`[WEB:INGEST:${requestId}] Error body: 
${errorText.substring(0, 200)}`); + throw new Error(`Agent returned ${agentResult.status}: ${errorText}`); + } + + agentResponse = await agentResult.json(); + console.log(`[WEB:INGEST:${requestId}] โœ… Agent response received successfully`); + console.log(`[WEB:INGEST:${requestId}] Success: ${agentResponse.success}`); + console.log(`[WEB:INGEST:${requestId}] Pipeline traces: ${agentResponse.pipelineTraces?.length || 0}`); + console.log(`[WEB:INGEST:${requestId}] Duration (agent): ${agentResponse.durationMs || 'unknown'}ms`); + console.log(`[WEB:INGEST:${requestId}] ๐ŸŽฏ USING REAL AI MODEL (Python Agent)`); + + // Save SpoonOS pipeline traces if available from Python agent + if (agentResponse.pipelineTraces && Array.isArray(agentResponse.pipelineTraces)) { + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ’พ Saving ${agentResponse.pipelineTraces.length} pipeline traces...`); + let cumulativeTime = analyzeStartTime; + for (const trace of agentResponse.pipelineTraces) { + const traceStart = cumulativeTime; + cumulativeTime += (trace.duration_ms || trace.durationMs || 0); + await saveTrace( + gitSession.id, + trace.stage || 'unknown', + snapshotRecord.id, + trace.input || trace.inputJson || {}, + trace.output || trace.outputJson || {}, + traceStart, + trace.success !== false, + ); + console.log(`[WEB:INGEST:${requestId}] โœ“ Saved trace: ${trace.stage} (${trace.duration_ms || trace.durationMs || 0}ms)`); + } + console.log(`[WEB:INGEST:${requestId}] โœ… All pipeline traces saved`); + } + } catch (agentError) { + // Fallback to basic analysis if agent is unavailable + console.error(`[WEB:INGEST:${requestId}] โŒ Python agent call failed`); + console.error(`[WEB:INGEST:${requestId}] Error type: ${agentError instanceof Error ? agentError.constructor.name : typeof agentError}`); + console.error(`[WEB:INGEST:${requestId}] Error message: ${agentError instanceof Error ? 
agentError.message : String(agentError)}`); + console.error(`[WEB:INGEST:${requestId}] โš ๏ธ FALLBACK MODE: Using TypeScript fallback analysis (NO AI)`); + agentResponse = generateFallbackAnalysis(snapshot); + console.log(`[WEB:INGEST:${requestId}] โœ… Fallback analysis generated`); + if (agentResponse.analysis) { + console.log(`[WEB:INGEST:${requestId}] Issue type: ${agentResponse.analysis.issueType}`); + console.log(`[WEB:INGEST:${requestId}] Plan steps: ${agentResponse.analysis.plan.length}`); + } + } + + // Save analyze trace (for compatibility) + await saveTrace( + gitSession.id, + 'analyze', + snapshotRecord.id, + { snapshotId: snapshotRecord.id }, + agentResponse, + analyzeStartTime, + agentResponse.success + ); + + if (!agentResponse.success || !agentResponse.analysis) { + console.error(`[WEB:INGEST:${requestId}] โŒ Analysis failed or missing`); + await updateSessionStatus(gitSession.id, 'error'); + return NextResponse.json( + { error: agentResponse.error || 'Analysis failed' }, + { status: 500 } + ); + } + + // TypeScript guard: we know analysis exists after the check above + const analysis: AgentAnalysis = agentResponse.analysis!; + console.log(`[WEB:INGEST:${requestId}] โœ… Analysis received successfully`); + + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ’พ Storing analysis in database...`); + // Store analysis in database + const analysisRecord = await createAnalysis({ + gitSessionId: gitSession.id, + snapshotId: snapshotRecord.id, + issueType: analysis.issueType, + summary: analysis.summary, + repoGraphJson: analysis.repoGraph, + }); + console.log(`[WEB:INGEST:${requestId}] โœ… Analysis record created: ${analysisRecord.id}`); + + // Store conflict files and hunks + if (analysis.conflicts) { + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ’พ Storing ${analysis.conflicts.length} conflict files...`); + for (const conflict of analysis.conflicts) { + const conflictFile = await createConflictFile({ + analysisId: analysisRecord.id, + path: conflict.path, + 
highLevelSummary: conflict.highLevelSummary, + }); + + for (const hunk of conflict.hunks) { + await createConflictHunk({ + conflictFileId: conflictFile.id, + index: hunk.index, + startLine: hunk.startLine, + endLine: hunk.endLine, + baseText: hunk.baseText, + oursText: hunk.oursText, + theirsText: hunk.theirsText, + explanation: hunk.explanation, + suggestedChoice: hunk.suggestedChoice, + suggestedContent: hunk.suggestedContent, + }); + } + } + console.log(`[WEB:INGEST:${requestId}] โœ… All conflict files stored`); + } + + // Store plan steps + console.log(`[WEB:INGEST:${requestId}] ๐Ÿ’พ Storing ${analysis.plan.length} plan steps...`); + for (const step of analysis.plan) { + await createPlanStep({ + analysisId: analysisRecord.id, + index: step.index, + title: step.title, + rationale: step.rationale, + commandsJson: step.commands, + verifyJson: step.verify, + undoJson: step.undo, + dangerLevel: step.dangerLevel, + }); + } + console.log(`[WEB:INGEST:${requestId}] โœ… All plan steps stored`); + + // Update session status to ready + await updateSessionStatus(gitSession.id, 'ready'); + console.log(`[WEB:INGEST:${requestId}] โœ… Session status updated to 'ready'`); + + // Build response URL + const baseUrl = process.env.NEXTAUTH_URL || process.env.AUTH_URL || 'http://localhost:3000'; + const sessionUrl = `${baseUrl}/incident/${gitSession.id}`; + + const totalDuration = Date.now() - startTime; + console.log(`[WEB:INGEST:${requestId}] โœ… Request completed successfully`); + console.log(`[WEB:INGEST:${requestId}] Total duration: ${totalDuration}ms`); + console.log(`[WEB:INGEST:${requestId}] Session URL: ${sessionUrl}`); + console.log(`[WEB:INGEST:${requestId}] ========================================`); + + return NextResponse.json({ + sessionId: gitSession.id, + url: sessionUrl, + analysis: { + issueType: analysis.issueType, + summary: analysis.summary, + }, + }); + } catch (error) { + console.error(`[WEB:INGEST:${requestId}] โŒ Fatal error during ingestion`); + 
console.error(`[WEB:INGEST:${requestId}] Error: ${error instanceof Error ? error.message : String(error)}`); + if (error instanceof Error && error.stack) { + console.error(`[WEB:INGEST:${requestId}] Stack: ${error.stack.substring(0, 500)}`); + } + const message = error instanceof Error ? error.message : 'Unknown error'; + return NextResponse.json({ error: message }, { status: 400 }); + } +} + diff --git a/apps/web/src/app/api/snapshots/ingest/utils.ts b/apps/web/src/app/api/snapshots/ingest/utils.ts new file mode 100644 index 0000000..b3ae587 --- /dev/null +++ b/apps/web/src/app/api/snapshots/ingest/utils.ts @@ -0,0 +1,212 @@ +/** + * Utility functions for snapshot ingestion + */ + +interface SnapshotForTitle { + branch: { head: string }; + isDetachedHead: boolean; + unmergedFiles: unknown[]; + rebaseState: { inProgress: boolean }; +} + +export interface AgentAnalysis { + issueType: string; + summary: string; + repoGraph?: { + nodes: Array<{ id: string; type: string; label: string; sha?: string; isCurrent?: boolean }>; + edges: Array<{ from: string; to: string; type?: string }>; + }; + conflicts?: Array<{ + path: string; + highLevelSummary?: string; + hunks: Array<{ + index: number; + startLine?: number; + endLine?: number; + baseText: string; + oursText: string; + theirsText: string; + explanation?: string; + suggestedChoice?: string; + suggestedContent?: string; + }>; + }>; + plan: Array<{ + index: number; + title: string; + rationale?: string; + commands: string[]; + verify: string[]; + undo: string[]; + dangerLevel: string; + }>; +} + +export function generateTitle(snapshot: SnapshotForTitle): string { + const parts: string[] = []; + + if (snapshot.unmergedFiles.length > 0) { + parts.push('Merge Conflict'); + } else if (snapshot.rebaseState.inProgress) { + parts.push('Rebase'); + } else if (snapshot.isDetachedHead) { + parts.push('Detached HEAD'); + } + + parts.push(`on ${snapshot.branch.head}`); + + return parts.join(' ') || 'Git Recovery Session'; +} + 
+export interface SnapshotForFallback { + unmergedFiles: Array<{ path: string; conflictBlocks?: Array<{ oursContent: string; theirsContent: string; context?: string }> }>; + isDetachedHead: boolean; + rebaseState: { inProgress: boolean }; + branch: { head: string }; +} + +export function generateFallbackAnalysis(snapshot: SnapshotForFallback): { success: boolean; analysis: AgentAnalysis } { + let issueType = 'unknown'; + let summary = 'Repository state analysis'; + + if (snapshot.unmergedFiles.length > 0) { + issueType = 'merge_conflict'; + summary = `Found ${snapshot.unmergedFiles.length} file(s) with merge conflicts. Review each conflict and choose how to resolve.`; + } else if (snapshot.rebaseState.inProgress) { + issueType = 'rebase_in_progress'; + summary = 'A rebase operation is in progress. You can continue, skip, or abort.'; + } else if (snapshot.isDetachedHead) { + issueType = 'detached_head'; + summary = 'HEAD is detached. Consider creating a branch to save your work.'; + } else { + issueType = 'clean'; + summary = 'Repository appears to be in a clean state.'; + } + + // Extract conflicts + const conflicts = snapshot.unmergedFiles.map((file) => ({ + path: file.path, + hunks: (file.conflictBlocks || []).map((block, i) => ({ + index: i, + baseText: block.context || '', + oursText: block.oursContent || '', + theirsText: block.theirsContent || '', + })), + })); + + // Generate basic plan + const plan: AgentAnalysis['plan'] = []; + + if (issueType === 'merge_conflict') { + plan.push( + { + index: 0, + title: 'Review conflicts', + rationale: 'Understand what changes conflict before resolving', + commands: ['git status', 'git diff --name-only --diff-filter=U'], + verify: ['git status'], + undo: [], + dangerLevel: 'safe', + }, + { + index: 1, + title: 'Resolve each conflict', + rationale: 'Edit files to remove conflict markers and choose correct content', + commands: ['# Edit files manually or use the Conflict Explorer'], + verify: ['git diff <file>'], + undo: ['git 
checkout --conflict=merge <file>'], + dangerLevel: 'safe', + }, + { + index: 2, + title: 'Stage resolved files', + rationale: 'Mark conflicts as resolved', + commands: ['git add <file>'], + verify: ['git status'], + undo: ['git reset HEAD <file>'], + dangerLevel: 'safe', + }, + { + index: 3, + title: 'Complete merge', + rationale: 'Commit the merge', + commands: ['git commit'], + verify: ['git log -1'], + undo: ['git reset --soft HEAD~1'], + dangerLevel: 'caution', + } + ); + } else if (issueType === 'detached_head') { + plan.push( + { + index: 0, + title: 'Check current state', + rationale: 'Understand where HEAD is pointing', + commands: ['git log --oneline -5', 'git status'], + verify: [], + undo: [], + dangerLevel: 'safe', + }, + { + index: 1, + title: 'Create branch to save work', + rationale: 'Preserve commits before switching', + commands: ['git branch temp-save'], + verify: ['git branch'], + undo: ['git branch -d temp-save'], + dangerLevel: 'safe', + }, + { + index: 2, + title: 'Return to main branch', + rationale: 'Switch back to your working branch', + commands: [`git checkout ${snapshot.branch.head}`], + verify: ['git status'], + undo: [], + dangerLevel: 'safe', + } + ); + } else if (issueType === 'rebase_in_progress') { + plan.push( + { + index: 0, + title: 'Check rebase status', + rationale: 'Understand the current rebase state', + commands: ['git status'], + verify: [], + undo: [], + dangerLevel: 'safe', + }, + { + index: 1, + title: 'Option A: Continue rebase', + rationale: 'If conflicts are resolved, continue', + commands: ['git add .', 'git rebase --continue'], + verify: ['git status'], + undo: ['git rebase --abort'], + dangerLevel: 'caution', + }, + { + index: 2, + title: 'Option B: Abort rebase', + rationale: 'Cancel and return to previous state', + commands: ['git rebase --abort'], + verify: ['git log -3'], + undo: [], + dangerLevel: 'safe', + } + ); + } + + return { + success: true, + analysis: { + issueType, + summary, + conflicts: conflicts.length > 0 ?
conflicts : undefined,
      plan,
    },
  };
}

diff --git a/apps/web/src/app/auth/error/page.tsx b/apps/web/src/app/auth/error/page.tsx
index 757abf6..c146811 100644
--- a/apps/web/src/app/auth/error/page.tsx
+++ b/apps/web/src/app/auth/error/page.tsx
@@ -3,6 +3,7 @@
 import { Suspense } from 'react';
 import { useSearchParams } from 'next/navigation';
 import Link from 'next/link';
+import { AlertTriangle } from 'lucide-react';

 const errorMessages: Record<string, string> = {
   Configuration: 'There is a problem with the server configuration.',
@@ -19,7 +20,7 @@ function ErrorContent() {
   return (
-
⚠️
+

Authentication Error

{errorMessage}

diff --git a/apps/web/src/app/auth/signin/page.tsx b/apps/web/src/app/auth/signin/page.tsx index 7f764c9..6d8881d 100644 --- a/apps/web/src/app/auth/signin/page.tsx +++ b/apps/web/src/app/auth/signin/page.tsx @@ -8,7 +8,7 @@ import Link from 'next/link'; function SignInForm() { const router = useRouter(); const searchParams = useSearchParams(); - const callbackUrl = searchParams.get('callbackUrl') || '/'; + const callbackUrl = searchParams.get('callbackUrl') || '/dashboard'; const error = searchParams.get('error'); const [email, setEmail] = useState(''); @@ -46,10 +46,19 @@ function SignInForm() { signIn('google', { callbackUrl }); }; + const handleKakaoSignIn = () => { + signIn('kakao', { callbackUrl }); + }; + return (
-
-

Sign In to GitGuard

+
+ + + GitGuard + + +

Sign in to your account

{errorMessage && (
@@ -89,7 +98,7 @@ function SignInForm() { @@ -97,34 +106,49 @@ function SignInForm() {
- or + or continue with
- +
+ + + +

Don't have an account?{' '} @@ -140,9 +164,10 @@ function SignInForm() { function LoadingFallback() { return (

-
+
-
+
+
@@ -154,7 +179,7 @@ function LoadingFallback() { export default function SignInPage() { return ( -
+
}> diff --git a/apps/web/src/app/auth/signup/page.tsx b/apps/web/src/app/auth/signup/page.tsx index a7d5d5d..7212443 100644 --- a/apps/web/src/app/auth/signup/page.tsx +++ b/apps/web/src/app/auth/signup/page.tsx @@ -1,6 +1,7 @@ 'use client'; import { useState } from 'react'; +import { signIn } from 'next-auth/react'; import { useRouter } from 'next/navigation'; import Link from 'next/link'; @@ -42,7 +43,18 @@ export default function SignUpPage() { if (!response.ok) { setErrorMessage(data.error || 'Failed to create account'); } else { - router.push('/auth/signin?registered=true'); + // Auto sign-in after registration + const result = await signIn('credentials', { + email, + password, + redirect: false, + }); + + if (result?.ok) { + router.push('/dashboard'); + } else { + router.push('/auth/signin?registered=true'); + } } } catch { setErrorMessage('An error occurred. Please try again.'); @@ -51,11 +63,24 @@ export default function SignUpPage() { } }; + const handleGoogleSignIn = () => { + signIn('google', { callbackUrl: '/dashboard' }); + }; + + const handleKakaoSignIn = () => { + signIn('kakao', { callbackUrl: '/dashboard' }); + }; + return ( -
+
-
-

Create Account

+
+ + + GitGuard + + +

Create your account

{errorMessage && (
@@ -63,6 +88,53 @@ export default function SignUpPage() {
)} + {/* OAuth Buttons First */} +
+ + + +
+ +
+
+ or with email +
+
+