Implementing Temporal's durable runtime workflows with Pydantic AI. Ollama is used to test agents with locally run LLMs.
- Data Collection Agent (currently reads from a CSV file); see the sketch below for how an agent can be wired to a local Ollama model.
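
As a rough illustration of how a Pydantic AI agent can talk to a local Ollama model via Ollama's OpenAI-compatible endpoint, here is a minimal sketch. The model name matches the pull command below; the system prompt and example input are hypothetical and may not match the code in this repository.

```python
# Minimal sketch: a Pydantic AI agent backed by a local Ollama model.
# Assumes Ollama is serving its OpenAI-compatible API at localhost:11434.
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    "llama3.1:8b",
    provider=OpenAIProvider(base_url="http://localhost:11434/v1"),
)

# The prompt is hypothetical, for illustration only.
agent = Agent(model, system_prompt="Summarize rows from a CSV file.")

result = agent.run_sync("Summarize: id,value\n1,42\n2,17")
print(result.output)  # `.data` on older pydantic-ai versions
```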
- Clone the repository and move into the project directory:

  ```bash
  git clone https://github.com/76Trystan/durable-runtime-agents.git
  cd durable-runtime-agents
  ```
- Create and activate a virtual environment (recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # macOS / Linux
  venv\Scripts\activate     # Windows
  ```
- Install the required Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```
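
  For reference, a requirements.txt for this kind of stack would typically include entries like the following; the exact packages and version pins in the repository's file may differ:

  ```text
  # Hypothetical requirements.txt contents; check the repository's actual file.
  temporalio
  pydantic-ai
  ```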
- Start Ollama and ensure the required model is available:

  ```bash
  ollama pull llama3.1:8b
  ```

  Ollama should be running at http://localhost:11434.
- Start the Temporal server locally, either with Docker:

  ```bash
  docker compose up
  ```

  or, in a separate terminal, with the Temporal CLI dev server:

  ```bash
  temporal server start-dev
  ```

  Temporal should be accessible at localhost:7233.
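
  To sanity-check both services before running the agent, a quick sketch like the following can help; it assumes only the default ports above, Ollama's standard HTTP API, and the temporalio client:

  ```python
  # Connectivity check for Ollama and Temporal (sketch, not part of the repo).
  import asyncio
  import urllib.request

  from temporalio.client import Client

  async def main() -> None:
      # Ollama: list locally available models via its HTTP API.
      with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
          print("Ollama reachable:", resp.status == 200)

      # Temporal: connecting succeeds only if the server is up on 7233.
      await Client.connect("localhost:7233")
      print("Temporal reachable")

  asyncio.run(main())
  ```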
- To run the agent, navigate to the agents directory and execute the agent entry point:

  ```bash
  cd agents
  python agent.py
  ```
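
  The entry point presumably wires up a workflow and worker along these lines; this is only a sketch of the standard temporalio pattern, and the workflow, activity, and task-queue names are assumptions, not taken from agent.py:

  ```python
  # Sketch of a Temporal worker wrapping a data-collection activity.
  # Names (DataCollectionWorkflow, collect_data, "agents-task-queue") are hypothetical.
  import asyncio
  import csv
  from datetime import timedelta

  from temporalio import activity, workflow
  from temporalio.client import Client
  from temporalio.worker import Worker

  @activity.defn
  async def collect_data(path: str) -> list[dict]:
      # Non-deterministic I/O (file reads, LLM calls) belongs in activities.
      with open(path, newline="") as f:
          return list(csv.DictReader(f))

  @workflow.defn
  class DataCollectionWorkflow:
      @workflow.run
      async def run(self, path: str) -> list[dict]:
          # Temporal retries the activity and durably persists its result.
          return await workflow.execute_activity(
              collect_data, path, start_to_close_timeout=timedelta(minutes=1)
          )

  async def main() -> None:
      client = await Client.connect("localhost:7233")
      async with Worker(
          client,
          task_queue="agents-task-queue",
          workflows=[DataCollectionWorkflow],
          activities=[collect_data],
      ):
          result = await client.execute_workflow(
              DataCollectionWorkflow.run,
              "data.csv",
              id="data-collection-demo",
              task_queue="agents-task-queue",
          )
          print(result)

  asyncio.run(main())
  ```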
All data used in this example is mocked and generated for demo purposes only.
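
For instance, the mocked CSV might look something like this (the columns shown are hypothetical, not taken from the repository):

```text
id,name,value
1,sensor-a,42
2,sensor-b,17
```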