13 changes: 13 additions & 0 deletions .env.example
@@ -0,0 +1,13 @@
# Copy this file to .env and fill in with your model provider details.

# Base URL for the LLM API (optional for default OpenAI endpoint)
BASE_URL=

# Target model name used by the LLM demo agent
MODEL=gpt-4o-mini

# API key for the chosen provider (required for LLM responses)
API_KEY=

# Optional: override provider name used by AgentConfig (openai, anthropic, etc.)
PROVIDER=openai
19 changes: 19 additions & 0 deletions AGENTS.md
@@ -0,0 +1,19 @@
# Repository Guidelines

## Project Structure & Module Organization
Source code lives in `src/openagents`, split into `agents/`, `core/`, `mods/`, `workspace/`, and related helpers. Templates and configs sit in `src/openagents/templates` and `config/`. Tests mirror the package layout inside `tests/` using `test_*.py`. The Studio front end is in `studio/`, while docs and assets live in `docs/`. Example workspaces are under `examples/` and `demos/`.

## Build, Test, and Development Commands
Install dependencies in editable mode with dev extras via `pip install -e .[dev]`. Run the Python suite using `pytest`, and add coverage with `pytest --cov=src/openagents --cov-report=term-missing`. For the Studio, run `npm install` then `npm start` from `studio/`. To boot the stack, use `docker compose up --build`; to rely on the published image with the bundled sample agent, run `docker compose -f docker-compose.remote.yml up -d`. All workflows are mirrored by the top-level `Makefile`; run `make help` for shortcuts like `make install-dev`, `make test`, `make docker-up`, and `make docker-remote-up`.

## Coding Style & Naming Conventions
Python code should pass Black (line length 88) and Flake8; run `black src tests` and `flake8 src tests` before submitting. Use type hints, `snake_case` functions, `PascalCase` classes, and `SCREAMING_SNAKE_CASE` constants. CLI extensions belong in `openagents.cli`. On the front end, follow the existing Tailwind + React patterns in `studio/src` and colocate component assets.
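
A quick illustration of those naming conventions (the names below are hypothetical, not taken from the codebase):

```python
# Hypothetical module illustrating the conventions above.
DEFAULT_RETRY_LIMIT = 3  # SCREAMING_SNAKE_CASE constant


class MessageRouter:  # PascalCase class
    """Routes payloads between connected agents."""

    def __init__(self, retry_limit: int = DEFAULT_RETRY_LIMIT) -> None:
        self.retry_limit = retry_limit

    def route_message(self, payload: dict[str, str]) -> bool:  # snake_case, type-hinted
        """Return True when the payload names a destination."""
        return "destination" in payload
```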

## Testing Guidelines
Pytest with `pytest-asyncio` powers async tests; decorate coroutines with `@pytest.mark.asyncio`. Keep test files as `test_<module>.py` and mirror package paths. Changes to transports or mods should add integration-style tests beneath `tests/<domain>/`. Maintain or raise the coverage configured in `pyproject.toml` and include regression cases for reported bugs.
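
A minimal async test following that pattern might look like the sketch below (the coroutine under test is a stand-in, not a real module in this repository):

```python
# tests/core/test_example_transport.py -- illustrative only.
import asyncio

import pytest


async def send_ping(delay: float = 0.01) -> str:
    """Stand-in for an async transport call under test."""
    await asyncio.sleep(delay)
    return "pong"


@pytest.mark.asyncio
async def test_send_ping_returns_pong():
    # pytest-asyncio runs this coroutine on an event loop.
    assert await send_ping() == "pong"
```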

## Commit & Pull Request Guidelines
Use concise, imperative commit subjects (example: `Add grpc transport healthcheck`), optionally followed by wrapped body paragraphs at 72 characters. Reference GitHub issues with `Fixes #123` when applicable. Every pull request should outline the change, testing performed, and any docs or config updates. Include screenshots for Studio UI tweaks and attach sample commands for new CLI behavior. Run the full lint and test suite before requesting review, and ensure the PR passes existing GitHub Actions checks.

## Agent & Network Tips
When contributing new agents, place reusable logic under `src/openagents/agents` and keep workspace scaffolds in `examples/`. Use the `NETWORK_HOST` and `NETWORK_PORT` environment variables (see `examples/agents/simple_worker_agent_example.py`) so agents behave consistently in Docker and local runs. The LLM demos (`examples/agents/llm_worker_agent.py` and `examples/agents/chinese_poet_agent.py`) show how to supply `BASE_URL`, `MODEL`, and `API_KEY` through `.env` and degrade gracefully when they are missing, as sketched below. Verify agents connect to a network started with `openagents network start` or Docker, and document any required credentials. For network-level changes, update the corresponding YAML templates and call out migration steps in the README or release notes.
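
As a rough illustration of that environment handling, an agent script might resolve its connection target and model settings like this (a minimal sketch using only the standard library plus python-dotenv; the actual example scripts may differ):

```python
# Sketch of the environment handling described above; not copied from the repo.
import os

from dotenv import load_dotenv  # python-dotenv, assumed here for .env loading

load_dotenv()  # read .env if present; already-set environment variables win

NETWORK_HOST = os.getenv("NETWORK_HOST", "localhost")
NETWORK_PORT = int(os.getenv("NETWORK_PORT", "8700"))

BASE_URL = os.getenv("BASE_URL") or None  # None falls back to the provider default
MODEL = os.getenv("MODEL", "gpt-4o-mini")
API_KEY = os.getenv("API_KEY", "")

if not API_KEY:
    # Degrade gracefully: stay connected but skip live LLM calls.
    print("API_KEY not set -- LLM replies will be replaced by a reminder message.")
```
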
95 changes: 95 additions & 0 deletions Makefile
@@ -0,0 +1,95 @@
SHELL := /bin/bash

.DEFAULT_GOAL := help

NETWORK_DIR ?= ./my_first_network
COMPOSE_FILE ?= docker-compose.yml
REMOTE_COMPOSE_FILE ?= docker-compose.remote.yml
STUDIO_DIR ?= studio

.PHONY: help install install-dev test lint format network-init network-start studio \
	studio-install studio-start docker-up docker-down docker-clean docker-logs \
	docker-remote-up docker-remote-down

help: ## Show categorized targets and example flows
	@echo "Usage: make <target> [VARIABLE=value]"
	@echo
	@echo "Setup"
	@echo "  install             Install runtime dependencies (editable)"
	@echo "  install-dev         Install development dependencies"
	@echo
	@echo "Quality"
	@echo "  test                Run pytest suite"
	@echo "  lint                Run Flake8 checks"
	@echo "  format              Format Python code with Black"
	@echo
	@echo "Network & Studio"
	@echo "  network-init        Scaffold a network workspace (NETWORK_DIR=...)"
	@echo "  network-start       Start network for a workspace"
	@echo "  studio              Launch Studio in standalone mode"
	@echo "  studio-install      Install Studio front-end dependencies"
	@echo "  studio-start        Start Studio dev server"
	@echo
	@echo "Docker (Local Build)"
	@echo "  docker-up           Build and run compose stack"
	@echo "  docker-down         Stop compose stack"
	@echo "  docker-clean        Stop and remove volumes"
	@echo "  docker-logs         Tail service logs"
	@echo
	@echo "Docker (Published Image)"
	@echo "  docker-remote-up    Run network + sample agent (remote compose)"
	@echo "  docker-remote-down  Stop remote stack"
	@echo
	@echo "Examples"
	@echo "  make install-dev && make test"
	@echo "  make network-init NETWORK_DIR=./workspace"
	@echo "  make docker-up"
	@echo "  make docker-remote-up REMOTE_COMPOSE_FILE=my-compose.yml"

install: ## Install runtime dependencies (editable mode)
	@pip install -e .

install-dev: ## Install development dependencies and tooling
	@pip install -e .[dev]

test: ## Run pytest test suite
	@pytest

lint: ## Run Flake8 lint checks
	@flake8 src tests

format: ## Format Python code with Black
	@black src tests

network-init: ## Scaffold a new network workspace at NETWORK_DIR
	@openagents init $(NETWORK_DIR)

network-start: ## Start the network defined at NETWORK_DIR
	@openagents network start $(NETWORK_DIR)

studio: ## Launch OpenAgents Studio in standalone mode
	@openagents studio -s

studio-install: ## Install Studio front-end dependencies
	@cd $(STUDIO_DIR) && npm install

studio-start: ## Start the Studio front-end dev server
	@cd $(STUDIO_DIR) && npm start

docker-up: ## Build and run stack using local Docker Compose
	@docker compose -f $(COMPOSE_FILE) up --build

docker-down: ## Stop stack and remove containers (local compose)
	@docker compose -f $(COMPOSE_FILE) down

docker-clean: ## Stop stack and remove containers + volumes (local compose)
	@docker compose -f $(COMPOSE_FILE) down -v

docker-logs: ## Tail logs from the openagents service (local compose)
	@docker compose -f $(COMPOSE_FILE) logs -f openagents

docker-remote-up: ## Run published image via remote compose file
	@docker compose -f $(REMOTE_COMPOSE_FILE) up -d

docker-remote-down: ## Stop remote compose deployment and remove containers
	@docker compose -f $(REMOTE_COMPOSE_FILE) down
95 changes: 88 additions & 7 deletions README.md
@@ -4,6 +4,10 @@

### OpenAgents: AI Agent Networks for Open Collaboration

<p>
English | <a href="README.zh.md">中文</a>
</p>


[![PyPI Version](https://img.shields.io/pypi/v/openagents.svg)](https://pypi.org/project/openagents/)
[![Python Version](https://img.shields.io/badge/python-3.10%2B-blue.svg)](https://www.python.org/downloads/)
@@ -17,7 +21,7 @@

</div>

**OpenAgents** is an open-source project for creating **AI Agent Networks** and connecting agents into networks for open collaboration. In other words, OpenAgents offers a foundational network infrastructure that enables AI Agents to connect and collaborate seamlessly.
**OpenAgents** is an open-source project for creating **AI Agent Networks** and connecting agents into networks for open collaboration. In other words, OpenAgents offers a foundational network infrastructure that enables AI Agents to connect and collaborate seamlessly. A Chinese version of this guide is available in [README.zh.md](README.zh.md).

Each agent network on **OpenAgents** is a self-contained community where agents can discover peers, collaborate on problems, learn from each other, and grow together. It is protocol-agnostic and works with popular LLM providers and agent frameworks.

@@ -93,20 +97,97 @@ pip install openagents

### Option 2: Docker

If you want to quickly spin up a network and test the studio locally, you can use Docker to run OpenAgents:
If you want to quickly spin up a network and test the studio locally without cloning this repository, use the published Docker image:

```bash
# Pull the latest image
docker pull ghcr.io/openagents-org/openagents:latest

# Run with Docker Compose
docker-compose up
# Launch with Docker Compose (create docker-compose.yml with the snippet below)
docker compose up -d

# Or run directly
# Or run the container directly
docker run -p 8700:8700 -p 8600:8600 -p 8050:8050 ghcr.io/openagents-org/openagents:latest
```

**Note:** Even if you run the network with Docker, you might still need to install the `openagents` package through pip to use the agent client to connect your agents to the network.
To use Docker Compose without cloning the repo, create a `docker-compose.yml` (or any filename you prefer) with the following content:

```yaml
services:
  openagents:
    image: ghcr.io/openagents-org/openagents:latest
    container_name: openagents-network-studio
    ports:
      - "8700:8700"   # HTTP transport
      - "8600:8600"   # gRPC transport
      - "8050:8050"   # Studio web interface
    environment:
      - NODE_ENV=production
    restart: unless-stopped
```

Then run `docker compose up -d` in the same directory. The container exposes the network on port `8700` and the studio on port `8050`.
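
To confirm both endpoints are reachable after startup, a quick standard-library check works (ports taken from the compose snippet above):

```python
# Reachability check for the ports published by the compose snippet above.
import socket

for name, port in {"network (HTTP)": 8700, "studio": 8050}.items():
    try:
        with socket.create_connection(("localhost", port), timeout=2):
            print(f"{name} is listening on port {port}")
    except OSError:
        print(f"{name} is not reachable on port {port} yet")
```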

**Note:** Even though the network runs inside Docker, you may still want to install the `openagents` Python package via pip so client agents can connect to the network.

### Option 3: Docker (Build from Source)

If you prefer to build the image locally—for example, when modifying the codebase—clone this repository and use the bundled Compose file:

```bash
git clone https://github.com/openagents-org/openagents.git
cd openagents
docker compose up --build
```

This Compose configuration builds the image from the local source (see `docker-compose.yml`) and mounts a data volume for persistence.

To launch the prebuilt image together with all bundled demo agents, run the remote stack:

```bash
make docker-remote-up
```

This command (or `docker compose -f docker-compose.remote.yml up -d`) launches the network, Studio, the simple demo agent (`examples/agents/simple_worker_agent_example.py`), an LLM-powered helper (`examples/agents/llm_worker_agent.py`), and a classical-poetry agent (`examples/agents/chinese_poet_agent.py`).

- The simple agent posts welcome messages and demonstrates event handling without external dependencies.
- The LLM helper answers generic questions via `run_agent`, and the poetry agent crafts classical-style verses from detected keywords. Both rely on the shared `.env` configuration.
- Before launch, copy `.env.example` and provide model credentials:

```bash
cp .env.example .env
# Edit .env and set your model details:
BASE_URL=https://api.openai.com/v1 # Optional, custom inference endpoint
MODEL=gpt-4o-mini # Target model name
API_KEY=sk-... # Required for live LLM calls
PROVIDER=openai # Optional provider override
```

If `API_KEY` is missing, both LLM agents stay connected but reply with a reminder instead of invoking the model.
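
That fallback boils down to a simple guard; here is a hedged sketch of the idea, not the literal code from the demo agents:

```python
# Illustrative fallback guard -- not the exact code in examples/agents/llm_worker_agent.py.
import os


def reply_to(question: str) -> str:
    """Answer with the configured model when possible, otherwise return a reminder."""
    if not os.getenv("API_KEY"):
        # No credentials: stay connected, but skip the model call entirely.
        return "LLM replies are disabled -- set API_KEY in .env to enable them."
    # A real agent would call its LLM client here; this placeholder keeps the sketch runnable.
    return f"[model answer to: {question!r}]"
```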

Both services mount `examples/agents` so you can iterate on the scripts and restart the stack to test changes.

### Makefile Quick Commands

The repository ships with a `Makefile` that captures the most common developer workflows. Examples:

```bash
# Install dev requirements and run tests
make install-dev
make test

# Launch network from a local workspace
make network-init NETWORK_DIR=./my_first_network
make network-start NETWORK_DIR=./my_first_network

# Build and start via local Docker compose
make docker-up

# Use the published image + sample agent (runs docker-compose.remote.yml)
make docker-remote-up
```

Run `make help` to see the full list of targets; override variables such as `NETWORK_DIR` or `COMPOSE_FILE` inline as needed.

## 🚀 Quick Start: Create and launch your first network

@@ -345,4 +426,4 @@
⭐ **If OpenAgents helps your project, please give us a star on GitHub!** ⭐

![OpenAgents Logo](docs/assets/images/openagents_logo_100.png)
</div>
</div>