35 changes: 35 additions & 0 deletions .github/scripts/check_examples.sh
@@ -0,0 +1,35 @@
#!/bin/bash

# Check that all example Python files are documented in examples/README.md

EXAMPLES_DIR="examples"
README="examples/README.md"

MISSING_README=()

while IFS= read -r -d '' file; do
rel_path="${file#./}"
readme_path="/${rel_path}"

if ! grep -qF "$readme_path" "$README"; then
MISSING_README+=("$readme_path")
fi
done < <(find "$EXAMPLES_DIR" -name "*.py" -print0 | sort -z)

FAILED=0

if [ ${#MISSING_README[@]} -gt 0 ]; then
echo "ERROR: The following example files are not documented in $README:"
for f in "${MISSING_README[@]}"; do
echo " - $f"
done
echo ""
echo "Please add entries for these files to the table in $README."
FAILED=1
fi

if [ $FAILED -eq 1 ]; then
exit 1
fi

echo "All example files are documented in $README."
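The matching logic above can be reproduced in isolation as a sanity check. The paths and README layout below are illustrative, not taken from the repository; the point is that the leading-slash path must appear verbatim for `grep -qF` to find it:

```shell
# Recreate a minimal examples/ tree and README (hypothetical content).
mkdir -p /tmp/check_demo/examples/agents
touch /tmp/check_demo/examples/agents/tool_use.py
cat > /tmp/check_demo/examples/README.md <<'EOF'
| Example | Description |
| --- | --- |
| [tool_use](/examples/agents/tool_use.py) | Agent with tool calling |
EOF

cd /tmp/check_demo
# Same check the script performs: fixed-string match of the /-prefixed path.
if grep -qF "/examples/agents/tool_use.py" examples/README.md; then
    echo "documented"
else
    echo "missing"
fi
```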
4 changes: 4 additions & 0 deletions .github/workflows/shared-packages.yml
@@ -81,6 +81,10 @@ jobs:
run: |
uv run ./check_licenses.sh
- name: Check all examples are documented in README
run: |
bash .github/scripts/check_examples.sh
- name: Check documentation builds correctly
run: |
uv run mkdocs build --strict
6 changes: 6 additions & 0 deletions docs/api_reference/agents/index.md
@@ -8,4 +8,10 @@

::: ragbits.agents.AgentResultStreaming

::: ragbits.agents.AgentRunContext

::: ragbits.agents.a2a.server.create_agent_server

::: ragbits.agents.hooks.Hook

::: ragbits.agents.hooks.EventType
80 changes: 77 additions & 3 deletions docs/how-to/agents/define_and_use_agents.md
@@ -32,7 +32,7 @@ Use a structured prompt to instruct the LLM. For details on writing prompts with
from pydantic import BaseModel
from ragbits.core.prompt import Prompt

--8<-- "examples/agents/tool_use.py:51:72"
```

### Run the agent
@@ -49,6 +49,33 @@ The result is an [AgentResult][ragbits.agents.AgentResult], which includes the m

You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/tool_use.py).

### Alternative approach: inheritance with `prompt_config`

In addition to explicitly attaching a `Prompt` instance, Ragbits also supports defining agents through inheritance combined with the `@Agent.prompt_config` decorator.

This approach lets you bind input (and optionally output) models directly to your agent class. The agent then derives its prompt structure automatically, without requiring a `prompt` argument in the constructor.

```python
from pydantic import BaseModel
from ragbits.agents import Agent

--8<-- "examples/agents/with_decorator.py:51:71"
```

The decorator can also accept an output type, allowing you to strongly type both the inputs and outputs of the agent. If you do not explicitly define a `user_prompt`, Ragbits will default to `{{ input }}`.
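To illustrate the default behavior, here is a minimal, Ragbits-free sketch of how a `{{ input }}` user prompt ends up rendered from an input model. The `render` helper is a stand-in for the Jinja rendering Ragbits performs, not its actual API:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class QuestionInput:
    question: str


# Default user prompt used when none is declared on the agent class.
DEFAULT_USER_PROMPT = "{{ input }}"


def render(template: str, input_obj: QuestionInput) -> str:
    # Stand-in for Jinja rendering: substitute the serialized input model.
    return template.replace("{{ input }}", json.dumps(asdict(input_obj)))


prompt = render(DEFAULT_USER_PROMPT, QuestionInput(question="What is RAG?"))
print(prompt)  # {"question": "What is RAG?"}
```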

Once defined, the agent class can be used directly, just like any other subclass of `Agent`:

```python
import asyncio
from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/with_decorator.py:73:84"
```

You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/with_decorator.py).

## Tool choice
To control which tool is used on the first call, use the `tool_choice` parameter. The following options are available:
- `"auto"`: let the model decide whether a tool call is needed
@@ -120,9 +147,56 @@ async def main() -> None:
See the runnable example in `examples/agents/dependencies.py`.

## Streaming agent responses
For use cases where you want to process partial outputs from the LLM as they arrive (e.g., in chat UIs),
the [`Agent`][ragbits.agents.Agent] class supports streaming through the `run_streaming()` method.

This method returns an `AgentResultStreaming` object — an async iterator that yields parts of the LLM response and
tool-related events in real time.

```python
from ragbits.agents import Agent, ToolCall, ToolCallResult
from ragbits.core.llms import LiteLLM

async def main() -> None:
"""Run the weather agent with streaming output."""
llm = LiteLLM(model_name="gpt-4o-2024-08-06", use_structured_output=True)
agent = Agent(llm=llm, prompt=WeatherPrompt, tools=[get_weather])

async for chunk in agent.run_streaming(WeatherPromptInput(location="Paris")):
if isinstance(chunk, ToolCall):
print(f"Calling tool: {chunk.name}({chunk.arguments})")
elif isinstance(chunk, ToolCallResult):
print(f"Tool result: {chunk.result}")
elif isinstance(chunk, str):
print(chunk, end="", flush=True)
```

## Streaming custom events from tools
Tools can emit custom events during execution that are surfaced through the streaming loop.
To do this, define your tool as an async generator that yields intermediate events and a final
`ToolReturn` value:

```python
from collections.abc import AsyncGenerator

from pydantic import BaseModel

from ragbits.agents.tool import ToolReturn

--8<-- "examples/agents/stream_events_from_tools.py:30:44"
```

Events yielded before the `ToolReturn` are collected and available via `result.tool_events`
after the stream completes:

```python
from ragbits.agents import Agent
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/stream_events_from_tools.py:46:55"
```

You can find the complete code example in the Ragbits repository [here](https://github.com/deepsense-ai/ragbits/blob/main/examples/agents/stream_events_from_tools.py).

## Native OpenAI tools
Ragbits supports selected native OpenAI tools (`web_search_preview`, `image_generation`, and `code_interpreter`). You can use them alongside your own tools.
48 changes: 48 additions & 0 deletions docs/how-to/agents/stream_downstream_agents.md
@@ -0,0 +1,48 @@
# How-To: Stream downstream agents with Ragbits

A Ragbits [Agent][ragbits.agents.Agent] can call other agents as tools, creating a chain of reasoning where downstream agents provide structured results to the parent agent.

Using the streaming API, you can observe every chunk of output as it is generated, including tool calls, tool results, and final text. This makes it well suited for real-time monitoring or chat interfaces.

## Define a simple tool

A tool is just a Python function returning a JSON-serializable result. Here’s an example tool returning the current time for a given location:

```python
import json

--8<-- "examples/agents/downstream_agents_streaming.py:33:51"
```

## Create a downstream agent

The downstream agent wraps the tool with a prompt, allowing the LLM to use it as a function.

```python
from pydantic import BaseModel
from ragbits.core.prompt import Prompt
from ragbits.agents import Agent
from ragbits.agents._main import AgentOptions
from ragbits.core.llms import LiteLLM

--8<-- "examples/agents/downstream_agents_streaming.py:54:82"
```

## Create a parent QA agent

The parent agent can call downstream agents as tools. This lets the LLM reason and decide when to invoke the downstream agent.

```python
--8<-- "examples/agents/downstream_agents_streaming.py:85:111"
```

## Streaming output from downstream agents

Use `run_streaming` with an [AgentRunContext][ragbits.agents.AgentRunContext] to see output as it happens. Each chunk contains either text, a tool call, or a tool result. You can print agent names when they change and handle downstream agent events.

```python
import asyncio
from ragbits.agents import DownstreamAgentResult

--8<-- "examples/agents/downstream_agents_streaming.py:114:133"
```
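The control flow can be sketched with plain async generators (no Ragbits; the agent names and chunks below are illustrative). The parent re-emits every downstream chunk, tagged with the agent it came from:

```python
import asyncio
from collections.abc import AsyncGenerator


async def time_agent(question: str) -> AsyncGenerator[str, None]:
    # Downstream agent streaming its answer in chunks.
    for chunk in ("The time ", "in Paris ", "is 12:00."):
        yield chunk


async def qa_agent(question: str) -> AsyncGenerator[tuple[str, str], None]:
    # Parent agent: delegates, then surfaces downstream chunks with their origin.
    yield ("qa_agent", "Delegating... ")
    async for chunk in time_agent(question):
        yield ("time_agent", chunk)
    yield ("qa_agent", "Done.")


async def main() -> list[tuple[str, str]]:
    return [item async for item in qa_agent("What time is it in Paris?")]


chunks = asyncio.run(main())
```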
45 changes: 45 additions & 0 deletions docs/how-to/agents/stream_events_from_tools_to_chat.md
@@ -0,0 +1,45 @@
# How-To: Streaming events from tools and using them in the client

In this document, we will build a tool that can send custom commands to the UI and display a Markdown table
to the user.

## Define a streaming tool
To achieve this, define a tool as a `Generator` or `AsyncGenerator` that yields events the client is able
to handle. In our case, we will use a `TextResponse` event, which is supported by the Ragbits UI.

Note that custom events yielded from the tool are not automatically passed to the LLM. To mark the tool
output that should be passed to the LLM, use `ToolReturn`.

```python
from collections.abc import Generator

from ragbits.agents.tool import ToolReturn
from ragbits.chat.interface.types import TextContent, TextResponse

--8<-- "examples/chat/stream_events_from_tools_to_chat.py:61:75"
```

You can also define custom events by subclassing `ChatResponse` with a custom `ResponseContent`. The content can
be arbitrary, as long as your client understands and can handle it. To do that, you will need to extend the
Ragbits UI with your own event handlers.
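As a sketch of what such a custom event might carry, consider a Markdown-table payload. The class and field names below are made up for illustration; a real event would subclass the actual `ChatResponse`/`ResponseContent` types:

```python
from dataclasses import dataclass


@dataclass
class TableContent:
    """Hypothetical payload: a Markdown table for the client to render."""

    markdown: str


@dataclass
class TableResponse:
    """Hypothetical custom event; a real one would subclass ChatResponse."""

    type: str
    content: TableContent


event = TableResponse(
    type="table",
    content=TableContent(markdown="| city | time |\n| --- | --- |\n| Paris | 12:00 |"),
)
```

The client dispatches on `type` and renders `content` however it sees fit; the shape is entirely a contract between your backend and your extended UI.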

## Use it in a ChatInterface

Now, we will implement a [`ChatInterface`][ragbits.chat.interface.ChatInterface] that uses the agent with the streaming tool.
You can serve and test it via `RagbitsAPI`. Since we used a `TextResponse` that is understood by the Ragbits UI, we can
deploy the app and test the agent right away.

```python
from collections.abc import AsyncGenerator

from ragbits.agents import Agent
from ragbits.chat.api import RagbitsAPI
from ragbits.chat.interface import ChatInterface
from ragbits.chat.interface.types import ChatContext, ChatResponse, TextResponse
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt.base import ChatFormat

--8<-- "examples/chat/stream_events_from_tools_to_chat.py:82:111"
```

Note that the system receiving the events may be completely different from the Ragbits frontend.