@@ -0,0 +1,51 @@
# Build artifacts
bin/
obj/

# IDE and editor files
.vs/
.vscode/
*.user
*.suo
.foundry/

# Source control
.git/

# Documentation
README.md

# Ignore files
.gitignore
.dockerignore

# Logs
*.log

# Temporary files
*.tmp
*.temp

# OS files
.DS_Store
Thumbs.db

# Package manager directories
node_modules/
packages/

# Test results
TestResults/
*.trx

# Coverage reports
coverage/
*.coverage
*.coveragexml

# Local development config
appsettings.Development.json
.env

.venv/
__pycache__/
@@ -0,0 +1,3 @@
# IMPORTANT: Never commit .env to version control - add it to .gitignore
PROJECT_ENDPOINT=
MODEL_DEPLOYMENT_NAME=
@@ -0,0 +1,16 @@
FROM python:3.12-slim

WORKDIR /app

COPY . user_agent/
WORKDIR /app/user_agent

RUN if [ -f requirements.txt ]; then \
        pip install -r requirements.txt; \
    else \
        echo "No requirements.txt found"; \
    fi

EXPOSE 8088

CMD ["python", "main.py"]
@@ -0,0 +1,159 @@
**IMPORTANT!** All samples and other resources made available in this GitHub repository ("samples") are designed to assist in accelerating development of agents, solutions, and agent workflows for various scenarios. Review all provided resources and carefully test output behavior in the context of your use case. AI responses may be inaccurate and AI actions should be monitored with human oversight. Learn more in the transparency documents for [Agent Service](https://learn.microsoft.com/en-us/azure/ai-foundry/responsible-ai/agents/transparency-note) and [Agent Framework](https://github.com/microsoft/agent-framework/blob/main/TRANSPARENCY_FAQ.md).

Agents, solutions, or other output you create may be subject to legal and regulatory requirements, may require licenses, or may not be suitable for all industries, scenarios, or use cases. By using any sample, you acknowledge that any output created using those samples is solely your responsibility, and that you will comply with all applicable laws, regulations, and relevant safety standards, terms of service, and codes of conduct.

Third-party samples contained in this folder are subject to their own designated terms, and they have not been tested or verified by Microsoft or its affiliates.

Microsoft has no responsibility to you or others with respect to any of these samples or any resulting output.

# What this sample demonstrates

This sample demonstrates a **key advantage of code-based hosted agents**:

- **Agents in Workflows** - Use AI agents as executors within a workflow pipeline

Code-based agents can execute **any Python code** you write. This sample includes a multi-agent workflow where Writer and Reviewer agents collaborate to draft content and provide review feedback.

The agent is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/) and can be deployed to Microsoft Foundry using the Azure Developer CLI.

## How It Works

### Agents in Workflows

This sample demonstrates the integration of AI agents within a workflow pipeline. The workflow operates as follows (a condensed code sketch follows the list):

1. **Writer Agent** - Drafts content
2. **Reviewer Agent** - Reviews the draft and provides concise, actionable feedback
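
For orientation, here is a condensed sketch of how that pipeline is wired in this sample's `main.py` (the `writer` and `reviewer` agents are created there via `AzureAIAgentClient.create_agent`; treat this as a sketch rather than a drop-in snippet):

```python
from agent_framework import WorkflowBuilder

# `writer` and `reviewer` are the two chat agents created in main.py.
workflow = (
    WorkflowBuilder(name="Writer-Reviewer")
    .register_agent(lambda: writer, name="Writer", output_response=True)      # step 1: draft
    .register_agent(lambda: reviewer, name="Reviewer", output_response=True)  # step 2: review
    .set_start_executor("Writer")    # the Writer executor runs first
    .add_edge("Writer", "Reviewer")  # the draft flows to the Reviewer
    .build()
)
agent = workflow.as_agent()  # the whole pipeline can now be used like a single agent
```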

### Agent Hosting

The agent workflow is hosted using the [Azure AI AgentServer SDK](https://pypi.org/project/azure-ai-agentserver-agentframework/),
which exposes a REST API endpoint compatible with the OpenAI Responses protocol.
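
In `main.py`, that hosting step amounts to a single call. A minimal sketch around the workflow agent built above:

```python
import asyncio

from azure.ai.agentserver.agentframework import from_agent_framework


async def serve(agent) -> None:
    # Wrap the workflow-as-agent in an HTTP server that speaks the
    # OpenAI Responses protocol; this sample serves it at http://localhost:8088/.
    await from_agent_framework(agent).run_async()

# asyncio.run(serve(agent))  # `agent` is the workflow agent from the previous section
```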

### Agent Deployment

The hosted agent workflow can be deployed to Microsoft Foundry using the Azure Developer CLI [ai agent](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli#create-a-hosted-agent) extension.

## Running the Agent Locally

### Prerequisites

Before running this sample, ensure you have:

1. **Azure AI Foundry Project**
- Project created in [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-foundry?view=foundry#microsoft-foundry-portals)
- Chat model deployed (e.g., `gpt-4o` or `gpt-4.1`)
- Note your project endpoint URL and model deployment name

2. **Azure CLI**
- Installed and authenticated
- Run `az login` and verify with `az account show`

3. **Python 3.10 or higher**
- Verify your version: `python --version`
- If you have Python 3.9 or older, install a newer version:
- Windows: `winget install Python.Python.3.12`
- macOS: `brew install python@3.12`
- Linux: Use your package manager

### Environment Variables

Set the following environment variables (matching `agent.yaml`):

- `PROJECT_ENDPOINT` - Your Azure AI Foundry project endpoint URL (required)
- `MODEL_DEPLOYMENT_NAME` - The deployment name for your chat model (defaults to `gpt-4.1-mini`)

This sample loads environment variables from a local `.env` file if present.

Create a `.env` file in this directory with the following content:

```
PROJECT_ENDPOINT=https://<your-resource>.services.ai.azure.com/api/projects/<your-project>
MODEL_DEPLOYMENT_NAME=gpt-4.1-mini
```

Or set them via PowerShell:

```powershell
# Replace with your actual values
$env:PROJECT_ENDPOINT="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"
$env:MODEL_DEPLOYMENT_NAME="gpt-4.1-mini"
```
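
Or export them in a Bash shell (macOS/Linux):

```bash
# Replace with your actual values
export PROJECT_ENDPOINT="https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"
export MODEL_DEPLOYMENT_NAME="gpt-4.1-mini"
```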

### Setting Up a Virtual Environment

It's recommended to use a virtual environment to isolate project dependencies:

**macOS/Linux:**

```bash
python -m venv .venv
source .venv/bin/activate
```

**Windows (PowerShell):**

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
```

### Installing Dependencies

Install the required Python dependencies using pip:

```bash
pip install -r requirements.txt
```

### Running the Sample

To run the agent, execute the following command in your terminal:

```powershell
python main.py
```

This will start the hosted agent locally on `http://localhost:8088/`.

### Interacting with the Agent

**PowerShell (Windows):**

```powershell
$body = @{
    input  = "Create a slogan for a new electric SUV that is affordable and fun to drive."
    stream = $false
} | ConvertTo-Json

Invoke-RestMethod -Uri http://localhost:8088/responses -Method Post -Body $body -ContentType "application/json"
```

**Bash/curl (Linux/macOS):**

```bash
curl -sS -H "Content-Type: application/json" -X POST http://localhost:8088/responses \
-d '{"input": "Create a slogan for a new electric SUV that is affordable and fun to drive.","stream":false}'
```
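
**Python (any OS):** a minimal client sketch; it assumes the `requests` package is installed (it is not part of this sample's `requirements.txt`):

```python
import requests

# Send a single, non-streaming request to the locally hosted agent.
response = requests.post(
    "http://localhost:8088/responses",
    json={
        "input": "Create a slogan for a new electric SUV that is affordable and fun to drive.",
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())  # Responses-protocol payload containing the workflow output
```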

### Deploying the Agent to Microsoft Foundry

To deploy your agent to Microsoft Foundry, follow the comprehensive deployment guide at https://learn.microsoft.com/en-us/azure/ai-foundry/agents/concepts/hosted-agents?view=foundry&tabs=cli

## Troubleshooting

### Images built on Apple Silicon or other ARM64 machines do not work on our service

We **recommend using `azd` cloud build**, which always builds images with the correct architecture.

If you choose to **build locally** and your machine is **not `linux/amd64`** (for example, an Apple Silicon Mac), the image will **not be compatible with our service**, causing runtime failures.

**Fix for local builds**

Use this command to build the image locally:

```shell
docker build --platform=linux/amd64 -t image .
```

This forces the image to be built for the required `amd64` architecture.
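
To double-check which architecture a locally built image targets, you can inspect it (optional; `image` is the tag used in the build command above):

```shell
docker image inspect --format '{{.Architecture}}' image   # should print: amd64
```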
@@ -0,0 +1,31 @@
# Unique identifier/name for this agent
name: azure-ai-agents-in-workflow
# Brief description of what this agent does
description: >
  A multi-agent workflow featuring a Writer and Reviewer that collaborate
  to create and refine content.
metadata:
  authors:
    - Microsoft Agent Framework Team
  # Categorization tags for organizing and discovering agents
  tags:
    - Azure AI AgentServer
    - Microsoft Agent Framework
    - Multi-Agent Workflow
    - Writer-Reviewer
    - Content Creation
template:
  name: azure-ai-agents-in-workflow
  # The type of agent - "hosted" for HOBO, "container" for COBO
  kind: hosted
  protocols:
    - protocol: responses
  environment_variables:
    - name: PROJECT_ENDPOINT
      value: ${AZURE_AI_PROJECT_ENDPOINT}
    - name: MODEL_DEPLOYMENT_NAME
      value: "{{chat}}"
  resources:
    - kind: model
      id: gpt-4.1-mini
      name: chat
@@ -0,0 +1,85 @@
import asyncio
import os
from contextlib import asynccontextmanager

from agent_framework import WorkflowBuilder
from agent_framework.azure import AzureAIAgentClient
from azure.ai.agentserver.agentframework import from_agent_framework
from azure.identity.aio import DefaultAzureCredential, ManagedIdentityCredential
from dotenv import load_dotenv

load_dotenv(override=True)

# Configure these for your Foundry project
# Read the explicit variables present in the .env file
PROJECT_ENDPOINT = os.getenv(
    "PROJECT_ENDPOINT"
)  # e.g., "https://<project>.services.ai.azure.com/api/projects/<project-name>"
MODEL_DEPLOYMENT_NAME = os.getenv(
    "MODEL_DEPLOYMENT_NAME", "gpt-4.1-mini"
)  # Your model deployment name e.g., "gpt-4.1-mini"


def get_credential():
"""Will use Managed Identity when running in Azure, otherwise falls back to DefaultAzureCredential."""
return (
ManagedIdentityCredential()
if os.getenv("MSI_ENDPOINT")
else DefaultAzureCredential()
)


@asynccontextmanager
async def create_agents():
    async with (
        get_credential() as credential,
        AzureAIAgentClient(
            project_endpoint=PROJECT_ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        ) as writer_client,
        AzureAIAgentClient(
            project_endpoint=PROJECT_ENDPOINT,
            model_deployment_name=MODEL_DEPLOYMENT_NAME,
            credential=credential,
        ) as reviewer_client,
    ):
        writer = writer_client.create_agent(
            name="Writer",
            instructions="You are an excellent content writer. You create new content and edit contents based on the feedback.",
        )
        reviewer = reviewer_client.create_agent(
            name="Reviewer",
            instructions="You are an excellent content reviewer. Provide actionable feedback to the writer about the provided content in the most concise manner possible.",
        )
        yield writer, reviewer


def create_workflow(writer, reviewer):
    workflow = (
        WorkflowBuilder(name="Writer-Reviewer")
        .register_agent(lambda: writer, name="Writer", output_response=True)
        .register_agent(lambda: reviewer, name="Reviewer", output_response=True)
        .set_start_executor("Writer")
        .add_edge("Writer", "Reviewer")
        .build()
    )
    return workflow.as_agent()


async def main() -> None:
"""
The writer and reviewer multi-agent workflow.

Environment variables required:
- PROJECT_ENDPOINT: Your Microsoft Foundry project endpoint
- MODEL_DEPLOYMENT_NAME: Your Microsoft Foundry model deployment name
"""

async with create_agents() as (writer, reviewer):
agent = create_workflow(writer, reviewer)
await from_agent_framework(agent).run_async()


if __name__ == "__main__":
    asyncio.run(main())
@@ -0,0 +1 @@
azure-ai-agentserver-agentframework==1.0.0b12