An Express API that exposes a Base AI Agent. Built with TypeScript and LangChain.
This API provides a LangChain-based Base Agent with RAG (Retrieval-Augmented Generation) capabilities. It allows for the use of a Redis vector store and supports streaming responses for real-time chat interactions.
Add contextual knowledge to the agent via:
- Plain text input
- URLs (automatically crawled and processed)

Documents are embedded and stored in Redis for semantic search.
Real-time streaming chat responses using Server-Sent Events (SSE) for better user experience.
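On the client side, a streamed response arrives as SSE frames. As a minimal sketch, the `data:` payloads can be extracted from a received chunk like this (`parseSseData` is a hypothetical helper, and the exact event payloads this endpoint emits may differ):

```typescript
// Minimal SSE frame parser: extracts the payload of each `data:` line.
// The `[DONE]` sentinel check is an assumption borrowed from common SSE
// streaming conventions, not confirmed for this API.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim())
    .filter((payload) => payload.length > 0 && payload !== "[DONE]");
}
```

In a real client you would feed each decoded chunk from the response body's `ReadableStream` through a parser like this and append the payloads to the visible message.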
The following endpoints enable interaction from the Code Capsules Agent Capsule UI:
- `POST /api/chat/message/stream` - Stream chat responses
- `GET /api/chat/history` - Retrieve chat history
Authentication: These endpoints use a lightweight security model:
- Headers: `X-CC-API-KEY`, `X-CC-EMAIL`
- Environment: `INTERNAL_API_KEY`
The API validates X-CC-API-KEY against INTERNAL_API_KEY and uses X-CC-EMAIL as the user identifier. These are not required for the agent to function but enable Code Capsules chatbot integration.
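For example, a client could attach these headers as follows (`buildAuthedRequest` is a hypothetical helper for illustration, not part of the template; the key and email values are placeholders):

```typescript
// Hypothetical helper that assembles fetch() options for an authenticated
// call to a chat endpoint.
function buildAuthedRequest(apiKey: string, email: string, message: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-CC-API-KEY": apiKey, // validated against INTERNAL_API_KEY
      "X-CC-EMAIL": email,    // used as the user identifier
    },
    body: JSON.stringify({ message }),
  };
}

// Usage (placeholder URL):
// fetch("https://your-capsule-url/api/chat/message", buildAuthedRequest(key, email, "Hi"));
```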
To create your own Agent Capsule in Code Capsules with this functionality:
- Create a new repo from this template
- Create an AI Agent Capsule
- Link your new repo to the capsule
- Mark the capsule as 'using a Code Capsules template' and enter your provider, model, and API key details
- Finish the setup and wait for the capsule to build
By default, Code Capsules Agent templates handle vector store logic using a Redis instance. This is NOT recommended for production, as the instance data is not persistent: if the Redis instance restarts, all vector data is lost.
To link a vector store through Code Capsules:
- Create a new Redis Capsule
- Go to the Details tab of your new Redis Capsule
- Copy the Connection string
- Go to the Config tab of your Agent Capsule
- Edit the environment variables using the Text Editor and add the following: `REDIS_URL=your_copied_connection_string`
- Save the changed environment variables
- Wait for the capsule to restart. Once complete, your Agent Capsule will now use this Redis Capsule as its vector store.
Warning
Please note: Agent Capsule templates can use either a Redis or an in-memory vector store. When using the in-memory option (i.e. no Redis Capsule is linked), it is recommended to scale the Agent Capsule to the following to ensure optimal performance:
- CPU 25%
- Memory 1GB
- Replicas 1
To do this, visit the Agent Capsule Scale page.
To link a vector store:
- Start a new Redis instance using Docker (e.g. `docker run -d -p 6379:6379 redis`)
- Add the following as an environment variable to your agent: `REDIS_URL=your_redis_instance_url`
- Your agent will now use this Redis instance as its vector store.
- Vector Store: Uses Redis as a vector database, with an in-memory fallback, to support RAG functionality.
- History: Non-persistent Redis or in-memory store; can easily be extended to persist.
- Embeddings: Uses a Hugging Face Transformers embedding model to embed text into a vector space locally.
- Agent: The core LangChain agent that handles the chat logic.
- API: API exposed via Express.js.
The Base Agent has access to the following tools:
- Purpose: RAG (Retrieval-Augmented Generation) tool for retrieving contextual information
- Description: Performs similarity search in the Redis vector store to find relevant documents
- Input: Query string
- Output: Retrieved documents with source metadata and content
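Conceptually, the similarity search this tool performs ranks stored document embeddings by cosine similarity to the query embedding. A toy in-memory sketch of that ranking (in the template, embeddings come from the Hugging Face model and the store is Redis, not the hand-written vectors below):

```typescript
interface StoredDoc {
  content: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the top-k documents most similar to the query embedding.
function similaritySearch(query: number[], docs: StoredDoc[], k: number): StoredDoc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```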
`npm install`

For local testing, create a `.env` file:
# App Configuration
PORT=3000
APP_URL=localhost:3000
APP_NAME=my-agent
DEV_MODE=false
# Security
# This is generated by Code Capsules on creation of the Agent capsule
INTERNAL_API_KEY=your-secret-key
# LLM Provider (Required)
PROVIDER_API_KEY=your-google-api-key
PROVIDER_NAME=google-genai
PROVIDER_MODEL=gemini-2.0-flash
# Vector Store (Optional, but recommended)
REDIS_URL=redis://localhost:6379
# Host documentation (Optional)
SHOW_DOCUMENTATION=true
To run:

`npm start`
- `POST /api/chat/message` - Send a message (non-streaming)
  - Body: `{ message: string }`
  - Response: Complete chat response
  - Auth: Required (`X-CC-API-KEY`, `X-CC-EMAIL`)
- `POST /api/chat/message/stream` - Send a message (SSE streaming)
  - Body: `{ message: string }`
  - Response: Server-Sent Events stream
  - Auth: Required (`X-CC-API-KEY`, `X-CC-EMAIL`)
- `GET /api/chat/history` - Get chat history for user
  - Response: Array of previous messages
  - Auth: Required (`X-CC-API-KEY`, `X-CC-EMAIL`)
- `POST /api/rag/text` - Add text context to vector store
  - Body: `{ text: string }`
  - Response: Confirmation of context addition
  - Auth: Required (`X-CC-API-KEY`, `X-CC-EMAIL`)
- `POST /api/rag/url` - Add context from URL to vector store
  - Body: `{ url: string }`
  - Response: Confirmation of context addition
  - Auth: Required (`X-CC-API-KEY`, `X-CC-EMAIL`)
- `GET /api-docs` - Swagger UI (public, no auth required)
- `GET /swagger.json` - Swagger JSON (public, no auth required)
Edit src/middleware/auth-user.ts to change API authorization. By default, this uses the pre-described headers (X-CC-API-KEY, X-CC-EMAIL) and API key validation against INTERNAL_API_KEY.
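A simplified sketch of the kind of check this middleware performs, using stand-in types so it runs without Express installed (the real file's shape may differ):

```typescript
// Stand-in types for the sketch; the real middleware uses Express's
// Request/Response/NextFunction types.
type Req = { headers: Record<string, string | undefined>; user?: string };
type Res = { status: (code: number) => { json: (body: unknown) => void } };

// Rejects requests whose X-CC-API-KEY does not match INTERNAL_API_KEY,
// and attaches X-CC-EMAIL as the user identifier otherwise. Note that
// Express lowercases incoming header names.
function authUser(internalApiKey: string) {
  return (req: Req, res: Res, next: () => void): void => {
    if (req.headers["x-cc-api-key"] !== internalApiKey) {
      res.status(401).json({ error: "Unauthorized" });
      return;
    }
    req.user = req.headers["x-cc-email"];
    next();
  };
}
```

Swapping in a different scheme (e.g. bearer tokens) means replacing the header comparison while keeping the same middleware signature.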
When using Code Capsules, changing the provider, model, and API key is easy via the Agent Capsule 'Config' page. For other implementations, edit the PROVIDER_NAME, PROVIDER_MODEL, and PROVIDER_API_KEY environment variables according to your requirements.
Supported providers include:
- `google-genai` (default)
- `openai`
- `anthropic`
- `mistralai`
- `groq`
- `cohere`
- `cerebras`
- `xai`
Update src/modules/agent/config/system-prompt.ts to change agent behavior, personality, or instructions. When using Code Capsules, ensure that the Agent Capsule is rebuilt and redeployed to see your changes.
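For illustration, a system prompt might look like the following (hypothetical content and export shape; the real file may differ):

```typescript
// Hypothetical system prompt constant; adjust the wording to your use case.
const SYSTEM_PROMPT = `
You are a helpful assistant.
When contextual documents are retrieved, ground your answers in them.
If the retrieved context does not contain the answer, say you do not know.
`.trim();

export default SYSTEM_PROMPT;
```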
Add New Tools:
- Create a new tool class in src/modules/agent/tools/implementations/, extending BaseTool
- Implement the tool using LangChain's tool() function
- Register the tool in src/modules/agent/tools/tools-manager.ts
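As an illustration of the registration pattern described above (hypothetical shapes: the template's actual BaseTool class and tools-manager registry may differ, and real tools would be built with LangChain's tool() function):

```typescript
// Hypothetical minimal shape of a tool definition; the template's real
// BaseTool class lives in src/modules/agent/tools/implementations/.
interface ToolDefinition {
  name: string;
  description: string;
  invoke: (input: string) => Promise<string>;
}

// Example tool: returns the current server time (illustrative only).
const currentTimeTool: ToolDefinition = {
  name: "current_time",
  description: "Returns the current server time as an ISO string.",
  invoke: async () => new Date().toISOString(),
};

// A tools manager would collect definitions like this in one registry
// and hand them to the agent at construction time.
const registry = new Map<string, ToolDefinition>([
  [currentTimeTool.name, currentTimeTool],
]);
```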
Remove Tools:
- Remove the tool import from src/modules/agent/tools/implementations/index.ts
- Remove the tool registration from src/modules/agent/tools/tools-manager.ts
When using Code Capsules, ensure that the Agent Capsule is rebuilt and redeployed to see your changes.
The RAG module currently uses Redis or an in-memory vector store. To switch to another vector store (Pinecone, Weaviate, etc.), modify src/modules/rag/services/vector-store.service.ts.
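A replacement store only needs to satisfy the operations the RAG module relies on. A hypothetical adapter interface, with a trivial in-memory implementation (the actual service shape in vector-store.service.ts may differ):

```typescript
type Doc = { content: string; metadata?: Record<string, unknown> };

// Hypothetical interface a replacement vector store (Pinecone, Weaviate,
// etc.) would implement behind the service.
interface VectorStoreAdapter {
  addDocuments(docs: Doc[]): Promise<void>;
  similaritySearch(query: string, k: number): Promise<Doc[]>;
}

// Trivial in-memory implementation mirroring the fallback behaviour.
// Ranking here is a naive substring match, not real embedding similarity.
class MemoryStore implements VectorStoreAdapter {
  private docs: Doc[] = [];
  async addDocuments(docs: Doc[]): Promise<void> {
    this.docs.push(...docs);
  }
  async similaritySearch(query: string, k: number): Promise<Doc[]> {
    return this.docs
      .filter((d) => d.content.toLowerCase().includes(query.toLowerCase()))
      .slice(0, k);
  }
}
```

Swapping stores then means writing one adapter class per backend and leaving the rest of the RAG module untouched.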
- Runtime: Node.js + TypeScript
- Framework: Express 5
- LLM Framework: LangChain
- LLM Provider: Google Generative AI
- Vector Store: Redis (with in-memory fallback)
- Documentation: Swagger/OpenAPI
- CORS is enabled for all origins (`*`)
- Request size limit: 10MB
- Swagger docs are publicly accessible at `/api-docs`