Kubernetes-native AI automation platform for intelligent agentic sessions with multi-agent collaboration
vTeam is an AI automation platform that combines the Claude Code CLI with multi-agent collaboration capabilities. The platform enables teams to create and manage intelligent agentic sessions through a modern web interface.
- Intelligent Agentic Sessions: AI-powered automation for analysis, research, content creation, and development tasks
- Multi-Agent Workflows: Specialized AI agents model realistic software team dynamics
- Kubernetes Native: Built with Custom Resources, Operators, and proper RBAC for enterprise deployment
- Real-time Monitoring: Live status updates and job execution tracking
The platform consists of containerized microservices orchestrated via Kubernetes:
| Component | Technology | Description |
|---|---|---|
| Frontend | Next.js + shadcn/ui | User interface for managing agentic sessions |
| Backend API | Go + Gin | REST API for managing Kubernetes Custom Resources (multi-tenant: projects, sessions, access control) |
| Agentic Operator | Go | Kubernetes operator that watches CRs and creates Jobs |
| Claude Code Runner | Python + Claude Code CLI | Pod that runs the Claude Code CLI with multi-agent collaboration |
- Create Session: User creates agentic session via web UI with task description
- API Processing: Backend creates an `AgenticSession` Custom Resource in Kubernetes (sketched below this list)
- Job Scheduling: Operator detects the CR and creates a Kubernetes Job with a runner pod
- AI Execution: Pod runs Claude Code CLI with multi-agent collaboration for intelligent analysis
- Result Storage: Analysis results stored back in Custom Resource status
- UI Updates: Frontend displays real-time progress and completed results
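The exact schema is defined in `components/manifests/crds/agenticsessions-crd.yaml`; the sketch below only illustrates the shape of the resource the operator watches. The API group, version, and field names (`prompt`, `model`, `timeout`) are assumptions inferred from the session settings described later — verify them against the CRD before applying.

```bash
# Illustrative only: apiVersion and spec fields are assumptions,
# not the confirmed schema — check the CRD in components/manifests/crds/.
cat <<'EOF' | oc apply -f -
apiVersion: vteam.example/v1alpha1   # assumed group/version
kind: AgenticSession
metadata:
  name: security-review
  namespace: ambient-code
spec:
  prompt: "Review this codebase for security vulnerabilities"
  model: claude-sonnet               # assumed value
  timeout: 300                       # seconds; matches the documented default
EOF
```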
Prerequisites:
- OpenShift Local (CRC) for local development or an OpenShift cluster for production
- oc (OpenShift CLI) or kubectl v1.28+ configured to access your cluster
- Docker or Podman for building container images
- Container registry access (Docker Hub, Quay.io, ECR, etc.) for production
- Go 1.24+ for building backend services (if building from source)
- Node.js 20+ and npm for the frontend (if building from source)
- Anthropic API Key - Get from Anthropic Console
- Configure via web UI: Settings → Runner Secrets after deployment
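A quick sanity check that the toolchain is in place before deploying:

```bash
# Verify the prerequisites are on PATH and at the expected versions
oc version --client              # or: kubectl version --client (v1.28+)
docker --version                 # or: podman --version
go version                       # Go 1.24+, only needed when building from source
node --version && npm --version  # Node 20+, only needed when building from source
```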
Deploy using the default images from `quay.io/ambient_code`:

```bash
# From repo root, prepare env for the deploy script (required once)
cp components/manifests/env.example components/manifests/.env
# Edit .env and set at least ANTHROPIC_API_KEY

# Deploy to the ambient-code namespace (default)
make deploy

# Or deploy to a custom namespace
make deploy NAMESPACE=my-namespace
```

Verify the deployment:

```bash
# Check pod status
oc get pods -n ambient-code

# Check services and routes
oc get services,routes -n ambient-code
```

Access the application:

```bash
# Get the route URL
oc get route frontend-route -n ambient-code

# Or use port forwarding as a fallback
kubectl port-forward svc/frontend-service 3000:3000 -n ambient-code
```

Then configure the runner secrets:
- Access the web interface
- Navigate to Settings → Runner Secrets
- Add your Anthropic API key
To run an agentic session:
- Access Web Interface: Navigate to your deployed route URL
- Create New Session (see the API sketch after this list):
  - Prompt: Task description (e.g., "Review this codebase for security vulnerabilities and suggest improvements")
  - Model: Choose an AI model (Claude Sonnet or Haiku)
  - Settings: Adjust temperature, token limits, and timeout (default: 300s)
- Monitor Progress: View real-time status updates and execution logs
- Review Results: Download analysis results and structured output
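Sessions can also be created against the backend REST API directly. The endpoint path and JSON field names below are hypothetical — consult the API documentation exposed in the web interface after deployment for the real routes; this only illustrates the general shape of the request.

```bash
# Hypothetical endpoint and payload — verify against the in-app API docs
kubectl port-forward svc/backend-service 8080:8080 -n ambient-code &
curl -X POST http://localhost:8080/api/projects/my-project/agentic-sessions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Review this codebase for security vulnerabilities",
       "model": "claude-sonnet",
       "timeout": 300}'
```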
Example use cases:
- Code Analysis: Security reviews, code quality assessments, architecture analysis
- Technical Documentation: API documentation, user guides, technical specifications
- Project Planning: Feature specifications, implementation plans, task breakdowns
- Research & Analysis: Technology research, competitive analysis, requirement gathering
- Development Workflows: Code reviews, testing strategies, deployment planning
To build and deploy your own container images:
```bash
# Set your container registry
export REGISTRY="quay.io/your-username"

# Build all images
make build-all

# Push to the registry (requires authentication)
make push-all REGISTRY=$REGISTRY

# Deploy with custom images
cd components/manifests
REGISTRY=$REGISTRY ./deploy.sh
```

Build options:

```bash
# Use Podman instead of Docker
make build-all CONTAINER_ENGINE=podman

# Build for a specific platform (default: linux/amd64)
make build-all PLATFORM=linux/arm64

# Build with additional flags
make build-all BUILD_FLAGS="--no-cache --pull"
```

For cluster-based authentication and authorization, the deployment script can configure the Route host, create an OAuthClient, and set the frontend secret when provided a .env file. See the OAuth guide (docs/OPENSHIFT_OAUTH.md) for details and a manual alternative.
Sessions have a configurable timeout (default: 300 seconds):
- Environment Variable: Set `TIMEOUT=1800` for 30-minute sessions (example after this list)
- CRD Default: Modify `components/manifests/crds/agenticsessions-crd.yaml`
- Interactive Mode: Set `interactive: true` for unlimited chat-based sessions
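For example, the environment-variable route might look like the following — the deployment name `agentic-operator` is an assumption, so confirm it first:

```bash
# Deployment name is an assumption — verify with: oc get deployments -n ambient-code
oc set env deployment/agentic-operator TIMEOUT=1800 -n ambient-code
```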
Configure AI API keys and integrations via the web interface:
- Settings → Runner Secrets: Add Anthropic API keys
- Project-scoped: Each project namespace has isolated secret management
- Security: All secrets stored as Kubernetes Secrets with proper RBAC
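To confirm where a key landed, list the Secrets in the project namespace. The secret name is managed by the platform, so treat the one below as an assumption:

```bash
# Secret name is an assumption — list first, then describe the one the UI created
oc get secrets -n my-project
oc describe secret runner-secrets -n my-project   # values stay redacted
```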
Troubleshooting:

Pods Not Starting:
```bash
oc describe pod <pod-name> -n ambient-code
oc logs <pod-name> -n ambient-code
```

API Connection Issues:

```bash
oc get endpoints -n ambient-code
oc exec -it <pod-name> -- curl http://backend-service:8080/health
```

Job Failures:

```bash
oc get jobs -n ambient-code
oc describe job <job-name> -n ambient-code
oc logs <failed-pod-name> -n ambient-code
```

General diagnostics:

```bash
# Check all resources
oc get all -l app=ambient-code -n ambient-code

# View recent events
oc get events --sort-by='.lastTimestamp' -n ambient-code

# Test frontend access
curl -f http://localhost:3000 || echo "Frontend not accessible"

# Test backend API
kubectl port-forward svc/backend-service 8080:8080 -n ambient-code &
curl http://localhost:8080/health
```

Security considerations:
- API Key Management: Store Anthropic API keys securely in Kubernetes Secrets
- RBAC: Configure appropriate role-based access controls
- Network Policies: Implement network isolation between components (a sketch follows this list)
- Image Scanning: Scan container images for vulnerabilities before deployment
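As a starting point for the network-policy item, here is a minimal sketch that restricts backend ingress to frontend pods. The label selectors are assumptions — align them with the labels your pods actually carry:

```bash
# Label selectors are assumptions; check your pod labels before applying.
cat <<'EOF' | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: ambient-code
spec:
  podSelector:
    matchLabels:
      app: backend              # assumed label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # assumed label
      ports:
        - port: 8080            # backend service port from the docs above
EOF
```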
Monitoring and observability:
- Prometheus Metrics: Configure metrics collection for all components
- Log Aggregation: Set up centralized logging (ELK, Loki, etc.)
- Alerting: Configure alerts for pod failures, resource exhaustion
- Health Checks: Implement comprehensive health endpoints (probe example below)
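For the health-check item, `oc set probe` can wire the documented `/health` endpoint into a deployment. The deployment name is an assumption; the path and port come from the backend examples above:

```bash
# Deployment name is an assumption — verify with: oc get deployments -n ambient-code
oc set probe deployment/backend -n ambient-code --readiness --get-url=http://:8080/health
oc set probe deployment/backend -n ambient-code --liveness --get-url=http://:8080/health
```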
Scaling:
- Horizontal Pod Autoscaling: Configure HPA based on CPU/memory usage (example after this list)
- Resource Limits: Set appropriate resource requests and limits
- Node Affinity: Configure pod placement for optimal resource usage
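A minimal HPA for the backend, assuming the deployment is named `backend` and has CPU requests set (HPA needs them to compute utilization):

```bash
# Assumes a deployment named `backend` with CPU requests configured
oc autoscale deployment/backend -n ambient-code --min=1 --max=5 --cpu-percent=75
```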
Local Development:

Single Command Setup:
```bash
# Start the complete local development environment
make dev-start
```

What this provides:
- ✅ Full OpenShift cluster with CRC
- ✅ Real OpenShift authentication and RBAC
- ✅ Production-like environment
- ✅ Automatic image builds and deployments
- ✅ Working frontend-backend integration
Prerequisites:
```bash
# Install CRC (macOS)
brew install crc

# Get a Red Hat pull secret (free):
# 1. Visit: https://console.redhat.com/openshift/create/local
# 2. Download the pull secret to ~/.crc/pull-secret.json
# 3. Run: crc setup

# Then start development
make dev-start
```

Hot Reloading (optional):
```bash
# Terminal 1: Start with development images
DEV_MODE=true make dev-start

# Terminal 2: Enable file sync for hot-reloading
make dev-sync
```

Access URLs:
- Frontend: https://vteam-frontend-vteam-dev.apps-crc.testing
- Backend: https://vteam-backend-vteam-dev.apps-crc.testing/health
- Console: https://console-openshift-console.apps-crc.testing
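Once the cluster is up, a quick smoke test against those routes (CRC uses self-signed certificates, hence `-k`):

```bash
# -k skips TLS verification for CRC's self-signed certs
curl -k https://vteam-frontend-vteam-dev.apps-crc.testing
curl -k https://vteam-backend-vteam-dev.apps-crc.testing/health
```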
Building images:

```bash
# Build all images locally
make build-all

# Build specific components
make build-frontend
make build-backend
make build-operator
make build-runner
```

Project structure:

```
vTeam/
├── components/                  # 🚀 Ambient Agentic Runner Platform
│   ├── frontend/                # Next.js web interface
│   ├── backend/                 # Go API service
│   ├── operator/                # Kubernetes operator
│   ├── runners/                 # AI runner services
│   │   └── claude-code-runner/  # Python Claude Code CLI service
│   └── manifests/               # Kubernetes deployment manifests
├── docs/                        # Documentation
│   ├── OPENSHIFT_DEPLOY.md      # Detailed deployment guide
│   └── OPENSHIFT_OAUTH.md       # OAuth configuration
├── tools/                       # Supporting development tools
│   ├── vteam_shared_configs/    # Team configuration management
│   └── mcp_client_integration/  # MCP client library
└── Makefile                     # Build and deployment automation
```
Security features:
- RBAC: Comprehensive role-based access controls
- Network Policies: Component isolation and secure communication
- Secret Management: Kubernetes-native secret storage with encryption
- Image Scanning: Vulnerability scanning for all container images
Observability:
- Health Checks: Comprehensive health endpoints for all services
- Metrics: Prometheus-compatible metrics collection
- Logging: Structured logging with OpenShift logging integration
- Alerting: Integration with OpenShift monitoring and alerting
Scalability:
- Horizontal Pod Autoscaling: Auto-scaling based on CPU/memory metrics
- Resource Management: Proper requests/limits for optimal resource usage
- Job Queuing: Intelligent job scheduling and resource allocation
- Multi-tenancy: Project-based isolation with shared infrastructure
Contributing:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the existing patterns
- Add tests if applicable
- Commit with conventional commit messages (format shown below)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
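If conventional commits are new to you, the message format is `<type>(<optional scope>): <description>`; the scopes below are just examples:

```bash
# Conventional commit format: <type>(<scope>): <description>
git commit -m "feat(frontend): add session timeout selector"
git commit -m "fix(operator): requeue failed jobs on transient errors"
```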
Documentation:
- Deployment Guide: docs/OPENSHIFT_DEPLOY.md
- OAuth Setup: docs/OPENSHIFT_OAUTH.md
- Architecture Details: diagrams/
- API Documentation: Available in web interface after deployment
This project is licensed under the MIT License - see the LICENSE file for details.