Warning
Early & Experimental Development Stage: This project started as a way for me to better understand scalable LLM agent collaboration. To date, it has been built primarily by me working with a Junie LLM coding agent. It is currently in early development: APIs, configuration schemas, and core architectures are subject to significant change. We do not recommend using this in a production environment yet.
Several design decisions were deliberately fixed to keep scope simple and focus exploration:
- The only target platforms are Docker Compose and Google Cloud.
- The only persistence framework supported is Firestore.
Both of these could fairly easily be extended to support additional options; I simply have not focused on them in favor of learning and exploring AI agent orchestration.
BitBrat Platform is an LLM-powered event orchestration and execution engine currently designed for streamers, though it can easily be adapted to a wide range of use cases. It bridges external event sources (such as Twitch, Kick, Discord, and Twilio) with internal processing logic and AI-driven reactions.
- Multi-Platform Ingress: Listen to events from Twitch (IRC & EventSub), Discord, and Twilio Conversations.
- AI-Driven Reactions: Integration with OpenAI and Model Context Protocol (MCP) to provide intelligent responses and tool execution.
- Microservices Architecture: Scalable, cloud-native services deployed on Google Cloud Platform (Cloud Run).
- Event-Driven: Built on top of a robust message bus (NATS/PubSub) for asynchronous processing.
- Extensible: Easily add new event sources, command processors, or MCP tools.
The platform consists of several core services:
- Ingress-Egress: The gateway for external platforms.
- Auth Service: Handles user enrichment and authorization.
- Event Router: Matches incoming events, enriches them, and routes them through the platform.
- LLM Bot: The brain of the platform, processing events using LLMs.
- Persistence: Ensures events and states are stored reliably.
- Scheduler: Manages periodic tasks and ticks.
For a detailed view, see `architecture.yaml` and the documentation folder.
- Node.js (v24.x recommended)
- npm
- Docker and Docker Compose
- Google Cloud project (you will need its project ID)
- OpenAI API Key
- Clone the repository:

  ```bash
  git clone https://github.com/cnavta/BitBrat.git
  cd BitBrat
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Initialize the platform:

  ```bash
  npm run brat -- setup
  ```

The `setup` command will guide you through configuring your GCP Project ID, OpenAI API Key, and Bot Name. It will also bootstrap your local environment using Docker.
Once setup is complete, you can start an interactive chat session with your bot:
```bash
npm run brat -- chat
```

To manually start the platform locally using Docker Compose:

```bash
npm run local
```

To stop the local environment:

```bash
npm run local:down
```

Build the project:

```bash
npm run build
```

Run tests:

```bash
npm test
```

`brat` (BitBrat Rapid Administration Tool) is the primary CLI tool for managing the platform. It simplifies common tasks such as environment validation, service bootstrapping, deployment, and infrastructure management.
For more details, see the brat documentation.
Usage:
```bash
npm run brat -- <command> [options]
```

(Note: use `--` to pass arguments through npm to the underlying script.)

Global options:

- `--env <name>`: Specify the environment (e.g., `dev`, `prod`). Can also be set via `BITBRAT_ENV`.
- `--project-id <id>`: Override the Google Cloud Project ID.
- `--region <name>`: Override the GCP region.
- `--dry-run`: Preview changes without applying them.
- `--json`: Output results in JSON format.
Commands:

- `brat setup [--project-id <id>] [--openai-key <key>] [--bot-name <name>]`: Interactive platform initialization.
- `brat chat [--env <name>] [--url <url>]`: Start an interactive chat session with the platform.
- `brat doctor`: Run diagnostic checks to ensure required tools (`gcloud`, `terraform`, `docker`) are installed.
- `brat config show`: Display the resolved platform configuration.
- `brat config validate`: Validate `architecture.yaml` against the platform schema.
- `brat service bootstrap --name <name> [--mcp] [--force]`: Create a new service from a template. Use `--mcp` for Model Context Protocol servers.
- `brat deploy services --all`: Deploy all services defined in `architecture.yaml`.
- `brat deploy service <name>`: Deploy a specific service (alias: `brat deploy <name>`).
- `brat infra plan <module>`: Generate an execution plan for infrastructure changes.
- `brat infra apply <module>`: Apply infrastructure changes.
  - Modules: `network`, `lb` (load balancer), `connectors`, `buckets`.
- `brat lb urlmap render`: Generate the GCP Load Balancer URL map YAML.
- `brat lb urlmap import`: Import the rendered URL map into Google Cloud.
- `brat apis enable`: Enable required Google Cloud APIs.
- `brat cloud-run shutdown`: Stop all Cloud Run services in the environment (cost-saving).
- `brat trigger create --name <n> --repo <repo> --branch <regex> --config <path>`: Manage Cloud Build triggers.
The BitBrat platform follows a robust, event-driven architecture built on a unified message bus (NATS or Google Cloud Pub/Sub) and a standardized internal event contract.
`InternalEventV2` is the canonical event format used throughout the platform. It flattens the legacy V1 envelope and introduces specialized fields for AI-driven orchestration (a minimal shape sketch follows the field list below):
- Metadata: `correlationId`, `traceId`, `source`, `egressDestination`.
- Payloads:
  - `message`: Normalized chat/text message metadata.
  - `externalEvent`: Normalized platform-specific behavioral events (e.g., follows, subs).
  - `payload`: Fallback for system or non-message data.
- Enrichment:
  - `annotations`: A collection of insights produced by services (e.g., intent, sentiment, user profile).
  - `candidates`: Potential replies or actions proposed by processing services.
- Routing:
  - `routingSlip`: An array of `RoutingStep` objects defining the remaining processing path.
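For orientation, a minimal TypeScript sketch of this shape follows. It is an illustration only: field optionality, the `RoutingStep` members, and the payload/candidate shapes are assumptions rather than the platform's actual schema.

```typescript
// Illustrative sketch of the InternalEventV2 shape described above; exact field
// types, optionality, and the RoutingStep members are assumptions.
interface RoutingStep {
  service: string;                        // service that should handle this step (assumed)
  status: 'pending' | 'done' | 'error';   // step state advanced by next() (assumed values)
}

interface InternalEventV2 {
  // Metadata
  correlationId: string;
  traceId: string;
  source: string;                         // e.g. "twitch", "discord", "twilio"
  egressDestination: string;              // instance-specific topic that owns the reply

  // Payloads
  message?: { text: string; author?: string };           // normalized chat/text message (assumed shape)
  externalEvent?: Record<string, unknown>;                // follows, subs, and other platform events
  payload?: Record<string, unknown>;                      // fallback for system or non-message data

  // Enrichment
  annotations?: Array<Record<string, unknown>>;           // insights produced by services
  candidates?: Array<{ text: string; score?: number }>;   // proposed replies or actions (assumed shape)

  // Routing
  routingSlip?: RoutingStep[];                            // remaining processing path
}
```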
The typical lifecycle of an event involves several specialized microservices:
- Ingress: External platforms (Twitch, Discord, Twilio) hit the `Ingress-Egress` service. It maps the raw payload to `InternalEventV2`, sets the `egressDestination` to its specific instance topic, and publishes to `internal.ingress.v1`.
- Auth (User Enrichment): The `Auth Service` consumes the event, enriches it with user metadata (roles, tags, notes) from Firestore, and publishes to `internal.user.enriched.v1`.
- Event Router: The `Event Router` evaluates the enriched event against a set of rules (using JsonLogic). It generates a `routingSlip` defining the next processing steps and dispatches the event (a minimal rule sketch follows this list).
- Orchestration & Processing: Services like `LLM Bot` or `Command Processor` receive events based on the routing slip. They add `annotations` or `candidates`, update the routing step status, and use `BaseServer` helpers (`next()`) to advance the event.
- Egress: Once processing is complete, the event is routed back to the specific `Ingress-Egress` instance via the `egressDestination`. The service selects the best candidate reply and delivers it to the target platform.
- Persistence: The `Persistence` service listens to various topics (including `internal.persistence.finalize.v1`) to store the final state, selections, and errors for auditing and long-term memory.
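As mentioned in the Event Router step above, the sketch below illustrates how a JsonLogic rule might select events and how a resulting routing slip could look. The rule itself, the service names in the slip, and the use of the `json-logic-js` package are illustrative assumptions; the platform's actual rule format and step names may differ.

```typescript
import jsonLogic from 'json-logic-js';
import type { InternalEventV2, RoutingStep } from './types'; // hypothetical shared types module

// Hypothetical rule: match any event that carries a normalized chat message.
const chatRule = { '!!': { var: 'message' } };

function route(event: InternalEventV2): InternalEventV2 {
  if (jsonLogic.apply(chatRule, event)) {
    // Attach a routing slip describing the remaining processing path.
    // Service identifiers are placeholders, not the platform's actual step names.
    const slip: RoutingStep[] = [
      { service: 'llm-bot', status: 'pending' },
      { service: 'persistence', status: 'pending' },
    ];
    return { ...event, routingSlip: slip };
  }
  return event; // unmatched events could fall through to a default path
}
```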
All services leverage `BaseServer` for standardized messaging patterns (a usage sketch follows the list):
- `onMessage<T>(topic, handler)`: Unified subscription to the message bus with automatic V1->V2 conversion.
- `next(event)`: Automatically advances the event to the next pending step in the `routingSlip`.
- `complete(event)`: Bypasses the remaining routing slip and sends the event directly to `egressDestination`.
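A hedged sketch of how a processing service might use these helpers is shown below. The `BaseServer` import path, its constructor options, and the precise handler signature are assumptions; only the three helper names come from the description above.

```typescript
import { BaseServer } from './base-server';       // hypothetical import path
import type { InternalEventV2 } from './types';   // hypothetical shared types module

// Hypothetical annotator service built on BaseServer (name and options assumed).
const server = new BaseServer({ name: 'sentiment-annotator' });

server.onMessage<InternalEventV2>('internal.user.enriched.v1', async (event) => {
  // Contribute an enrichment annotation for downstream services.
  event.annotations = [...(event.annotations ?? []), { kind: 'sentiment', value: 'positive' }];

  // Advance to the next pending routing step, or short-circuit straight to egress
  // when nothing else remains to run.
  if (event.routingSlip?.some((step) => step.status === 'pending')) {
    await server.next(event);
  } else {
    await server.complete(event);
  }
});
```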
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
For security-related issues, please refer to SECURITY.md.
