# Durable-Runtime-Agents

Implements Temporal's durable workflow runtime with Pydantic AI agents. Ollama is used to test the agents with locally run LLMs.
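Conceptually, durable execution means each workflow step (activity) is retried on failure and its progress is persisted, so a crash does not lose work. The following is only a rough plain-Python analogy of the retry half of that idea, not the repository's code and not the Temporal API (which additionally persists workflow state across process restarts):

```python
import time


def run_durably(activity, *args, max_attempts: int = 3, delay: float = 0.0):
    """Toy analogy of durable execution: retry a failing step until it
    succeeds, then return its result. Temporal goes further by persisting
    progress so retries survive worker crashes."""
    for attempt in range(1, max_attempts + 1):
        try:
            return activity(*args)
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the configured number of attempts
            time.sleep(delay)  # back off before the next attempt
```

In the real project, Temporal's SDK plays this role: activities declare retry policies and the server replays workflow history instead of an in-process loop.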


## Current Agent Workflows

1. Data Collection Agent (currently reads from a CSV file)
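The CSV-reading step of the Data Collection Agent can be sketched as a small stand-alone function. The `read_records` name and the dict-per-row shape below are illustrative assumptions, not the repository's actual code:

```python
import csv
from pathlib import Path


def read_records(csv_path: str) -> list[dict[str, str]]:
    """Illustrative data-collection step: load every row of a CSV file
    as a dict keyed by the header row."""
    with Path(csv_path).open(newline="") as f:
        return list(csv.DictReader(f))
```

In a Temporal setup, a function like this would typically run as an activity so that a failed read can be retried without restarting the whole workflow.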

## Installation & Setup

1. Clone the repository and move into the project directory:

   ```bash
   git clone https://github.com/76Trystan/durable-runtime-agents.git
   cd durable-runtime-agents
   ```

2. Create and activate a virtual environment (recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # macOS / Linux
   venv\Scripts\activate     # Windows
   ```

3. Install the required Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Start Ollama and ensure the required model is available. Ollama should be running at http://localhost:11434:

   ```bash
   ollama pull llama3.1:8b
   ```

5. Start the Temporal server locally, either with Docker or, in a separate terminal, with the Temporal CLI dev server. Temporal should be accessible at localhost:7233:

   ```bash
   docker compose up
   # or
   temporal server start-dev
   ```

6. Run the agent by navigating to the agents directory and executing the agent entry point:

   ```bash
   cd agents
   python agent.py
   ```
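The `docker compose up` step assumes a compose file that starts Temporal. A minimal sketch is shown below; the image tags, PostgreSQL credentials, and service names are illustrative assumptions, not the repository's actual file:

```yaml
# Hypothetical docker-compose.yml: Temporal server via the auto-setup
# image, backed by PostgreSQL. All values are illustrative defaults.
services:
  postgresql:
    image: postgres:16
    environment:
      POSTGRES_USER: temporal
      POSTGRES_PASSWORD: temporal
  temporal:
    image: temporalio/auto-setup:latest
    depends_on:
      - postgresql
    environment:
      DB: postgres12
      DB_PORT: "5432"
      POSTGRES_USER: temporal
      POSTGRES_PWD: temporal
      POSTGRES_SEEDS: postgresql
    ports:
      - "7233:7233"  # matches the localhost:7233 address used above
```

For quick local testing, `temporal server start-dev` avoids Docker entirely and exposes the same gRPC endpoint on port 7233.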

## Disclaimer

All data used in this example is mocked and generated only for demo purposes.
