diff --git a/.claude/commands/orchestrate.md b/.claude/commands/orchestrate.md
index 4b65c1e..a8c65cc 100644
--- a/.claude/commands/orchestrate.md
+++ b/.claude/commands/orchestrate.md
@@ -1,6 +1,6 @@
# Cognitive Mesh Orchestrator Agent
-You are the **Orchestrator Agent** for the Cognitive Mesh project. You coordinate parallel development across **9 code teams** and **5 workflow agents**, operating **autonomously** across sessions via persistent state.
+You are the **Orchestrator Agent** for the Cognitive Mesh project. You coordinate parallel development across **10 code teams** and **5 workflow agents**, operating **autonomously** across sessions via persistent state.
## Teams & Agents
@@ -13,9 +13,10 @@ You are the **Orchestrator Agent** for the Cognitive Mesh project. You coordinat
| 4 | AGENCY | /team-agency | AgencyLayer + TODO.md + orchestration |
| 5 | BUSINESS | /team-business | BusinessApplications fake-data stubs |
| 6 | QUALITY | /team-quality | Build health, XML docs, architecture validation |
-| 7 | TESTING | /team-testing | Unit tests, integration tests, coverage |
-| 8 | CI/CD | /team-cicd | Pipelines, Docker, DevEx, security scanning |
-| 9 | INFRA | /team-infra | Terraform, Terragrunt, Docker, Kubernetes |
+| 7 | TESTING | /team-testing | Unit tests, integration tests, coverage, frontend tests |
+| 8 | CI/CD | /team-cicd | Pipelines, Docker, DevEx, security scanning, frontend CI |
+| 9 | INFRA | /team-infra | Terraform, Terragrunt, Docker, Kubernetes, frontend hosting |
+| 10 | FRONTEND | /team-frontend | UI/Frontend API integration, widget PRDs, settings, auth |
### Workflow Agents (process automation)
| Agent | Slash Command | When to Run |
@@ -53,6 +54,7 @@ If the file has `last_phase_completed: 0` or `null` metrics, this is a fresh sta
Run a fresh codebase scan (equivalent to `/discover --quick`):
+### Backend Scan
1. **Build**: `dotnet build CognitiveMesh.sln --verbosity quiet`
2. **Tests**: `dotnet test CognitiveMesh.sln --no-build --verbosity quiet`
3. **TODOs**: Search `// TODO`, `// PLACEHOLDER`, `// HACK` across `src/**/*.cs` — count per layer
@@ -63,8 +65,22 @@ Run a fresh codebase scan (equivalent to `/discover --quick`):
- Methods containing only `return Task.CompletedTask` with a TODO comment nearby
5. **Fake data**: Search `Task.Delay` + hardcoded sample data across `src/**/*.cs` — count per layer
6. **Infra**: Check for `infra/`, `Dockerfile`, `k8s/`, `.github/dependabot.yml`
-7. **Git**: Current branch, uncommitted changes
-8. **Backlog**: Read `AGENT_BACKLOG.md` for known items
+
+### Frontend Scan
+7. **Frontend build**: Check if `src/UILayer/web/package.json` exists; if so count:
+ - Mocked API calls (`Math.random`, `simulated`, `hardcoded`, `TODO` in `*.ts`/`*.tsx`)
+ - Components without tests (directories in `src/components/` without `*.test.tsx`)
+ - Missing real API integration (check if `services/api.ts` still has mock data)
+ - SignalR connection status (check for `@microsoft/signalr` in package.json)
+ - Auth flow (check for login page or auth context)
+ - Settings page (check for `app/settings/` route)
+ - Widget PRD implementations vs PRD count
+8. **Frontend CI**: Check `.github/workflows/` for npm/frontend steps
+9. **Frontend deployment**: Check for frontend Dockerfile, K8s manifests, Terraform modules
+
+### General
+10. **Git**: Current branch, uncommitted changes
+11. **Backlog**: Read `AGENT_BACKLOG.md` for known items (FE-*, FECICD-*, FETEST-* prefixes for frontend)
Report a discovery summary:
@@ -81,10 +97,24 @@ Report a discovery summary:
| Terraform modules | ?/9 expected |
| Docker | yes/no |
| K8s manifests | yes/no |
-| CI workflows | ?/5 expected |
+| CI workflows | ?/6 expected |
| Dependabot | yes/no |
| CodeQL | yes/no |
+| Frontend | Status |
+|----------|--------|
+| API client generated | yes/no |
+| Mocked API calls | ? remaining |
+| SignalR connected | yes/no |
+| Auth flow | yes/no |
+| Settings page | yes/no |
+| Widget PRDs implemented | ?/17 |
+| Component test coverage | ?% |
+| Frontend in CI pipeline | yes/no |
+| Frontend Docker | yes/no |
+| Frontend K8s/Terraform | yes/no |
+| Grade | A-F |
+
Compare against previous state. Flag regressions (count went up instead of down).
## Step 3: Determine Phase
@@ -92,7 +122,7 @@ Compare against previous state. Flag regressions (count went up instead of down)
Use **layer health grades** (not just a fixed sequence) to pick the right phase:
```text
-IF build is broken:
+IF backend build is broken:
→ Phase 1 (must fix build first)
ELSE IF Foundation.grade < B OR Reasoning.grade < B:
@@ -107,19 +137,41 @@ ELSE IF Business.grade < B:
ELSE IF infra.grade < B OR cicd.grade < B:
→ Phase 1 (infra/cicd run in Phase 1)
-ELSE IF any test failures OR missing test files:
+ELSE IF any backend test failures OR missing test files:
→ Phase 4 (testing sweep)
+ELSE IF frontend.grade == F (no API integration, all mocked):
+ → Phase 13 (frontend API foundation — client gen, auth, state)
+
+ELSE IF frontend.grade == D (API client exists but missing integration):
+ → Phase 14 (frontend core integration — replace mocks, SignalR, settings)
+
+ELSE IF frontend.grade == C (integration done but missing widget PRDs + deployment):
+ → Phase 15 (widget PRDs + frontend CI/CD + deployment infra)
+
+ELSE IF frontend.grade == B (widgets done but missing tests + remaining items):
+ → Phase 16 (remaining widgets + frontend testing)
+
+ELSE IF frontend.grade < A (P3-LOW advanced features remain):
+ → Phase 17 (final sweep — advanced features, full-stack validation)
+
ELSE:
- → COMPLETE
+ → COMPLETE (all layers + frontend at grade A)
```
Grading scale:
-- **A**: Zero stubs, zero TODOs, tests exist and pass
+- **A**: Zero stubs, zero TODOs, tests exist and pass, full API integration
- **B**: 1-2 minor items remaining
- **C**: Active stubs or TODOs, some tests missing
-- **D**: Multiple stubs, fake data, no tests
-- **F**: Build errors or dependency violations
+- **D**: Multiple stubs, fake data, no tests, partial integration
+- **F**: Build errors, dependency violations, or all API calls mocked
+
+Frontend-specific grading:
+- **A**: Real API client, SignalR connected, auth flow, settings, all widget PRDs, 80%+ test coverage, in CI/CD, deployed
+- **B**: Core integration done, most widgets built, tests exist but < 80%
+- **C**: API client generated, some widgets, auth flow works, but many mocks remain
+- **D**: API client exists but most data still mocked, no settings page
+- **F**: All data mocked (current state), no real backend integration
## Step 4: Healthcheck (Pre-Flight)
@@ -137,6 +189,8 @@ If healthcheck FAILS: dispatch Team 6 (Quality) alone to fix blockers before pro
Launch teams for the selected phase using **Task tool with parallel calls**.
+### Backend Round (Phases 1-4) — COMPLETE
+
### Phase 1 (up to 5 teams parallel):
- Team 1 — Foundation (if Foundation.grade < A)
- Team 2 — Reasoning (if Reasoning.grade < A)
@@ -153,15 +207,40 @@ Launch teams for the selected phase using **Task tool with parallel calls**.
- Team 5 — Business (if Business.grade < A)
- Team 7 — Testing (add Business tests + integration tests)
-### Phase 4 (final sweep):
+### Phase 4 (final backend sweep):
- Team 6 — Quality (architecture validation, final build check)
- Team 7 — Testing (full coverage report)
+### Frontend Integration Round (Phases 13-17) — NEW
+
+### Phase 13 (up to 2 teams parallel): API Foundation
+- Team 10 — Frontend: FE-001 (OpenAPI client gen), FE-004 (auth flow), FE-005 (state management)
+- Team 8 — CI/CD: FECICD-001 (add frontend build/test/lint to CI pipeline)
+
+### Phase 14 (up to 3 teams parallel): Core Integration
+- Team 10 — Frontend: FE-002 (replace mocked APIs), FE-003 (SignalR), FE-006 (error handling), FE-007 (loading states), FE-008 (settings page), FE-009 (notification preferences), FE-010 (user profile), FE-022 (navigation)
+- Team 7 — Testing: FETEST-001 (component unit tests, 80% target)
+
+### Phase 15 (up to 3 teams parallel): Widget PRDs & Deployment
+- Team 10 — Frontend: FE-011 to FE-015 (5 priority widget PRD implementations: NIST, Adaptive Balance, Value Gen, Impact Metrics, Cognitive Sandwich)
+- Team 8 — CI/CD: FECICD-002 (frontend Docker), FECICD-003 (docker-compose), FECICD-004 (frontend deploy pipeline)
+- Team 9 — Infra: FECICD-005 (K8s frontend manifests), FECICD-006 (Terraform frontend hosting)
+
+### Phase 16 (up to 2 teams parallel): Remaining Widgets & Testing
+- Team 10 — Frontend: FE-016 to FE-020 (5 more widget PRDs), FE-021 (multi-page routing), FE-023 (role-based UI)
+- Team 7 — Testing: FETEST-002 (API integration tests), FETEST-003 (E2E with real API), FETEST-004 (visual regression), FETEST-005 (Lighthouse CI)
+
+### Phase 17 (final sweep):
+- Team 10 — Frontend: FE-024 to FE-028 (P3-LOW: dashboard export, command palette, collaboration, locales, PWA)
+- Team 6 — Quality: Full-stack validation (backend + frontend build, architecture check)
+- Team 7 — Testing: Full frontend test suite with coverage report
+
**Dispatch rules:**
- Use `subagent_type: "general-purpose"` for all teams
- Launch all phase teams in a **single message** for parallelism
- Each team reads `CLAUDE.md` + their `.claude/rules/` file
-- Each team verifies build passes before returning
+- Backend teams verify `dotnet build` passes before returning
+- Frontend teams verify `npm run build && npm test` passes before returning
## Step 6: Collect Results
@@ -222,6 +301,20 @@ Write updated state to `.claude/state/orchestrator.json`:
}
],
"layer_health": { ... },
+ "frontend_health": {
+ "api_client_generated": false,
+ "mocked_api_calls": 12,
+ "signalr_connected": false,
+ "auth_flow": false,
+ "settings_page": false,
+ "widget_prds_implemented": 0,
+ "widget_prds_total": 17,
+ "component_test_coverage": 2,
+ "frontend_in_ci": false,
+ "frontend_docker": false,
+ "frontend_k8s": false,
+ "grade": "F"
+ },
"next_action": "Run /orchestrate to execute Phase 2"
}
```
@@ -284,25 +377,43 @@ Override default behavior:
## Full Autonomous Loop
+### Backend Round (COMPLETE — Phases 1-12)
```text
- Session 1: /orchestrate
+ Sessions 1-12: Backend development complete (70/70 items)
+ All layers at Grade A. 1,000+ tests. 8 PRDs implemented.
+```
+
+### Frontend Integration Round (NEW — Phases 13-17)
+```text
+ Session N: /orchestrate
+ ┌──────────────────────────────────────────────────────┐
+ │ Load State (Phase 12 done) → Discover (incl frontend)│
+ │ → Frontend grade F → Phase 13 (API foundation) │
+ │ → Teams 10+8 → Sync Backlog → Save State │
+ │ "Run /orchestrate again for Phase 14" │
+ └──────────────────────────────────────────────────────┘
+
+ Session N+1: /orchestrate
┌──────────────────────────────────────────────────────┐
- │ Load State → Discover → Healthcheck → Phase 1 │
- │ → Collect → Sync Backlog → Review → Save State │
- │ "Run /orchestrate again for Phase 2" │
+ │ Load State (Phase 13 done) → Discover │
+ │ → Frontend grade D → Phase 14 (core integration) │
+ │ → Teams 10+7 → Sync Backlog → Save State │
+ │ "Run /orchestrate again for Phase 15" │
└──────────────────────────────────────────────────────┘
- Session 2: /orchestrate
+ Session N+2: /orchestrate
┌──────────────────────────────────────────────────────┐
- │ Load State (Phase 1 done) → Discover → Healthcheck │
- │ → Phase 2 → Collect → Sync Backlog → Save State │
- │ "Run /orchestrate again for Phase 3" │
+ │ Load State (Phase 14 done) → Discover │
+ │ → Frontend grade C → Phase 15 (widgets + deployment) │
+ │ → Teams 10+8+9 → Sync Backlog → Save State │
+ │ "Run /orchestrate again for Phase 16" │
└──────────────────────────────────────────────────────┘
- Session 3: /orchestrate
+ Session N+3: /orchestrate
┌──────────────────────────────────────────────────────┐
- │ Load State (Phase 2 done) → Discover → Phase 3 │
- │ → Phase 4 → All Green → "PROJECT COMPLETE" │
+ │ Load State (Phase 15 done) → Phase 16 (remaining) │
+ │ → Phase 17 (final sweep) → All Green │
+ │ → "PROJECT FULLY COMPLETE — BACKEND + FRONTEND" │
└──────────────────────────────────────────────────────┘
```
diff --git a/.claude/commands/team-frontend.md b/.claude/commands/team-frontend.md
new file mode 100644
index 0000000..8e88825
--- /dev/null
+++ b/.claude/commands/team-frontend.md
@@ -0,0 +1,130 @@
+# Team FRONTEND — UI/Frontend Agent
+
+You are **Team FRONTEND** for the Cognitive Mesh project. Your focus is the Next.js 15 / React 19 frontend application, API integration, user-facing features, and frontend infrastructure.
+
+## Setup
+1. Read `CLAUDE.md` for project conventions
+2. Read `src/UILayer/web/package.json` for dependencies
+3. Read `src/UILayer/web/tsconfig.json` for TypeScript config
+4. Read `src/UILayer/README.md` for UILayer architecture
+5. Read `AGENT_BACKLOG.md` for current frontend backlog items (FE-* prefix)
+6. Read `docs/openapi.yaml` for backend API contract
+
+## Scope
+- **Primary:** `src/UILayer/web/` — Next.js frontend application
+- **Secondary:** `src/UILayer/` — C# Backend-for-Frontend services, widget adapters
+- **Tests:** `src/UILayer/web/__tests__/`, `cypress/e2e/`
+- **Do NOT** modify backend C# code outside `src/UILayer/`
+
+## Current State
+
+**Framework Stack:**
+- Next.js 15.4.7 (App Router) + React 19 + TypeScript 5
+- Tailwind CSS 4 + shadcn/ui + Framer Motion
+- D3.js for visualizations, Storybook 8 for docs
+- Jest + Testing Library for unit tests, Cypress for E2E
+- Service worker for offline PWA support
+- i18n (en-US, fr-FR, de-DE), WCAG 2.1 AA accessibility
+
+**What Exists (43+ components):**
+- Dashboard layout with draggable panels and dock zones
+- Command Nexus (AI prompt interface with voice)
+- Agency widgets (AgentControlCenter, AuthorityConsentModal, RegistryViewer, etc.)
+- D3 visualizations (AuditTimeline, MetricsChart, AgentNetworkGraph)
+- Design system (CognitiveMeshButton, CognitiveMeshCard, tokens)
+- Accessibility (SkipNavigation, FocusTrap, LiveRegion, VisuallyHidden)
+- Code splitting (LazyWidgetLoader), service worker, audio system
+
+**What's MOCKED (critical gaps):**
+- `src/services/api.ts` — All data is hardcoded with simulated delays
+- `src/lib/api/adapters/AgentSystemDataAPIAdapter.ts` — Mock client, no backend calls
+- No real SignalR connection (hub exists in backend but UI uses polling with fake data)
+- No OpenAPI-generated TypeScript client
+- No authentication flow (backend has JWT/OAuth but no login UI)
+
+## Backend API Surface (must integrate with)
+
+### REST Controllers (13 endpoints):
+- `AgentController` — `/api/v1/agent/registry`, `/api/v1/agent/orchestrate`
+- `ValueGenerationController` — `/api/v1/ValueGeneration/value-diagnostic`, org-blindness, employability
+- `AdaptiveBalanceController` — Spectrum management, overrides, recommendations
+- `NISTComplianceController` — Score, checklist, evidence, gap analysis, roadmap
+- `ImpactMetricsController` — Safety scores, alignment, adoption, assessment
+- `CognitiveSandwichController` — Workflow phases, audit, debt
+- `ComplianceController` — GDPR consent, data subject requests
+- `CustomerServiceController` — Inquiry, troubleshoot, conversation
+- `SecurityController` — Auth, authorization management
+- `ConvenerController` — Innovation spread, learning catalyst
+
+### SignalR Hub:
+- `CognitiveMeshHub` — JoinDashboardGroup, SubscribeToAgent, real-time events
+
+### OpenAPI Spec:
+- `docs/openapi.yaml` — Champion Discovery, Community, Learning, Innovation, Approvals, Provenance, Notifications
+
+## Priority Work Items
+
+### P0 — API Integration Foundation
+1. **FE-001: Generate TypeScript API client** from `docs/openapi.yaml` using openapi-typescript-codegen or orval
+2. **FE-002: Replace mocked `api.ts`** with real API service layer using generated client
+3. **FE-003: Add SignalR client** — connect to `CognitiveMeshHub` for real-time updates (replace polling)
+4. **FE-004: Add authentication flow** — Login page, JWT token management, protected routes, auth context
+
+### P1 — State Management & Error Handling
+5. **FE-005: Add global state management** — Zustand or React Context for auth, agents, dashboard data, notifications
+6. **FE-006: Add error handling infrastructure** — Toast notifications (sonner), global error boundary, API error interceptor
+7. **FE-007: Add loading states** — Skeleton screens for all data-driven components, optimistic updates
+
+### P1 — Settings & Preferences
+8. **FE-008: Settings page** — Theme (light/dark/system), language selector, accessibility preferences
+9. **FE-009: Notification preferences UI** — Channel toggles (email, push, in-app), quiet hours, category filters
+10. **FE-010: User profile page** — Account info, consent management, data export (GDPR)
+
+### P1 — Widget PRD Implementations
+11. **FE-011: NIST Compliance Dashboard Widget** — Maturity scores, gap analysis, evidence upload (per mesh-widget.md)
+12. **FE-012: Adaptive Balance Widget** — Spectrum sliders, override controls, audit trail (per mesh-widget.md)
+13. **FE-013: Value Generation Widget** — Full diagnostic flow with consent, scoring, strengths/opportunities
+14. **FE-014: Impact Metrics Widget** — Safety scores, alignment, adoption telemetry visualization
+15. **FE-015: Cognitive Sandwich Widget** — Phase progression, HITL workflow, debt tracking
+
+### P2 — Additional Widget PRDs
+16. **FE-016: Context Engineering Widget** — AI context frame management
+17. **FE-017: Agentic System Control Widget** — Agent lifecycle, authority management
+18. **FE-018: Convener Widget** — Innovation spread, learning recommendations
+19. **FE-019: Widget Marketplace UI** — Browse, install, configure widgets from C# WidgetRegistry
+20. **FE-020: Organizational Mesh Widget** — Org-level cognitive mesh visualization
+
+### P2 — App Structure & Routing
+21. **FE-021: Multi-page routing** — Dashboard, Settings, Agent Management, Compliance, Analytics pages
+22. **FE-022: Navigation component** — Sidebar nav, breadcrumbs, responsive mobile menu
+23. **FE-023: Role-based UI** — Admin vs Analyst vs Viewer role gating on UI elements
+
+### P3 — Testing & Quality
+24. **FE-024: Expand unit tests** — Target 80% component coverage with Jest + Testing Library
+25. **FE-025: Expand E2E tests** — Real API integration tests in Cypress (not mocked data)
+26. **FE-026: Visual regression testing** — Chromatic or Percy for Storybook stories
+27. **FE-027: Performance testing** — Lighthouse CI scores, bundle size monitoring
+
+### P3 — Advanced Features
+28. **FE-028: Real-time collaboration** — Presence indicators, live cursor sharing via SignalR
+29. **FE-029: Dashboard export** — PDF/PNG export of dashboard views
+30. **FE-030: Keyboard shortcuts** — Global shortcuts for power users (Cmd+K command palette)
+
+## Workflow
+1. Start with FE-001 (API client generation) — this unblocks everything
+2. Build FE-004 (auth) + FE-005 (state management) as foundation
+3. Replace mocked services (FE-002, FE-003)
+4. Build settings page (FE-008) and navigation (FE-022)
+5. Implement widget PRDs one by one (FE-011 through FE-020)
+6. Expand tests throughout (FE-024, FE-025)
+7. Verify: `npm run build` + `npm test` all green
+
+## Conventions
+- Components in PascalCase directories with index.tsx, *.test.tsx, *.stories.tsx
+- Hooks prefixed with `use` in `src/hooks/`
+- API types generated from OpenAPI, never hand-written
+- All new components must have WCAG 2.1 AA compliance
+- Use existing design tokens from Style Dictionary
+- Prefer server components where possible, `"use client"` only when needed
+
+$ARGUMENTS
diff --git a/.claude/hooks/protect-sensitive.sh b/.claude/hooks/protect-sensitive.sh
index 4fa3489..dc32540 100755
--- a/.claude/hooks/protect-sensitive.sh
+++ b/.claude/hooks/protect-sensitive.sh
@@ -3,7 +3,7 @@
# Exit 2 = block the operation. Exit 0 = allow.
# Fail-closed: if jq is missing or parsing fails, the hook blocks the operation.
-export PATH="$HOME/.local/bin:$PATH"
+export PATH="/usr/local/bin:/usr/bin:/bin:${HOME}/.local/bin:${PATH}"
if ! command -v jq &>/dev/null; then
echo "BLOCKED: jq is required for the protect-sensitive hook but is not installed." >&2
diff --git a/.claude/hooks/session-start.sh b/.claude/hooks/session-start.sh
index 2db0338..b4292bb 100755
--- a/.claude/hooks/session-start.sh
+++ b/.claude/hooks/session-start.sh
@@ -1,117 +1,209 @@
#!/bin/bash
-# SessionStart hook: Verify .NET environment and build state on session start.
+# SessionStart hook: Ensure all required tools are available and the project builds.
# Output goes to Claude's context so it knows the current project state.
+#
+# Required tools for this repo:
+# - dotnet (.NET 9 SDK) — build, test, restore
+# - git — version control
+# - gh — GitHub CLI for PR/issue workflows
+# - jq — JSON processing (used by protect-sensitive.sh hook)
+# - rg (ripgrep) — fast code search (used by Claude Code's Grep tool)
+# - curl, tar — downloading tools
+# - grep, find, ls, etc — standard Unix utilities
-set -e
-cd "$CLAUDE_PROJECT_DIR" 2>/dev/null || cd "$(dirname "$0")/../.." 2>/dev/null || {
- echo "Failed to change directory to project dir" >&2
+# ─── PATH SETUP ──────────────────────────────────────────────────────────────
+# CRITICAL: Ensure system bin directories are in PATH FIRST.
+# The Claude Code web environment may not include /usr/bin or /bin in PATH,
+# which breaks every standard Unix tool (grep, find, ls, curl, git, etc.).
+export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${HOME}/.dotnet:${HOME}/.local/bin:${PATH}"
+export DOTNET_ROOT="${HOME}/.dotnet"
+export DOTNET_CLI_TELEMETRY_OPTOUT=1
+export DOTNET_NOLOGO=1
+
+# Bail on unrecoverable errors, but handle individual tool failures gracefully
+set -o pipefail
+
+# Navigate to project directory
+if [ -n "$CLAUDE_PROJECT_DIR" ]; then
+ cd "$CLAUDE_PROJECT_DIR" || exit 1
+elif [ -d "$(dirname "$0")/../.." ]; then
+ cd "$(dirname "$0")/../.." || exit 1
+else
+ echo "ERROR: Cannot determine project directory" >&2
exit 1
-}
+fi
echo "=== Cognitive Mesh Session Start ==="
-# Ensure PATH includes user-local install directories
-export PATH="$HOME/.dotnet:$HOME/.local/bin:$PATH"
-export DOTNET_ROOT="$HOME/.dotnet"
-
-# Detect OS and architecture for platform-specific downloads
-KERNEL=$(uname -s)
-MACHINE=$(uname -m)
-
-case "$KERNEL" in
- Linux) JQ_OS="linux"; GH_OS="linux" ;;
- Darwin) JQ_OS="macos"; GH_OS="macOS" ;;
- *) JQ_OS=""; GH_OS=""
- echo "WARNING: Unsupported OS '$KERNEL'. Skipping binary installs." ;;
-esac
-
-case "$MACHINE" in
- x86_64|amd64) JQ_ARCH="amd64"; GH_ARCH="amd64" ;;
- aarch64|arm64) JQ_ARCH="arm64"; GH_ARCH="arm64" ;;
- *) JQ_ARCH=""; GH_ARCH=""
- echo "WARNING: Unsupported architecture '$MACHINE'. Skipping binary installs." ;;
-esac
-
-# Install jq if missing (needed by protect-sensitive.sh hook)
-if ! command -v jq &>/dev/null; then
- if [ -n "$JQ_OS" ] && [ -n "$JQ_ARCH" ]; then
- echo "Installing jq for ${JQ_OS}-${JQ_ARCH}..."
- mkdir -p "$HOME/.local/bin"
- curl -fsSL "https://github.com/jqlang/jq/releases/download/jq-1.7.1/jq-${JQ_OS}-${JQ_ARCH}" -o "$HOME/.local/bin/jq" \
- && chmod +x "$HOME/.local/bin/jq" \
- && echo "Tools: jq installed" \
- || echo "WARNING: jq installation failed."
- else
- echo "WARNING: Cannot install jq — unsupported platform (${KERNEL}/${MACHINE})."
+# ─── VERIFY CORE UNIX TOOLS ─────────────────────────────────────────────────
+MISSING_CORE=""
+for tool in grep find ls curl tar git tr head wc awk sed; do
+ if ! command -v "$tool" &>/dev/null; then
+ MISSING_CORE="$MISSING_CORE $tool"
fi
+done
+if [ -n "$MISSING_CORE" ]; then
+ echo "WARNING: Core tools missing:${MISSING_CORE}"
+ echo "PATH=$PATH"
fi
-# Install .NET 9 SDK if missing
+# ─── INSTALL .NET 9 SDK ─────────────────────────────────────────────────────
if ! command -v dotnet &>/dev/null; then
echo "Installing .NET 9 SDK..."
- curl -fsSL https://dot.net/v1/dotnet-install.sh | bash -s -- --channel 9.0 2>&1
+ if command -v curl &>/dev/null && command -v bash &>/dev/null; then
+ # Try multiple sources for the install script:
+ # 1. GitHub raw (works behind proxies that block dot.net redirects)
+ # 2. Official dot.net URL (redirects to builds.dotnet.microsoft.com)
+ INSTALL_SCRIPT=""
+ for script_url in \
+ "https://raw.githubusercontent.com/dotnet/install-scripts/main/src/dotnet-install.sh" \
+ "https://dot.net/v1/dotnet-install.sh"; do
+ if curl -fsSL --connect-timeout 10 --max-time 30 "$script_url" -o /tmp/dotnet-install.sh 2>/dev/null; then
+ INSTALL_SCRIPT="/tmp/dotnet-install.sh"
+ break
+ fi
+ done
+
+ if [ -n "$INSTALL_SCRIPT" ]; then
+ bash "$INSTALL_SCRIPT" --channel 9.0 --install-dir "${HOME}/.dotnet" 2>&1 | tail -5
+ rm -f "$INSTALL_SCRIPT"
+ export PATH="${HOME}/.dotnet:${PATH}"
+ else
+ echo "WARNING: Failed to download .NET install script from any source."
+ fi
+ else
+ echo "WARNING: curl or bash not available — cannot install .NET SDK."
+ fi
fi
if command -v dotnet &>/dev/null; then
SDK_VERSION=$(dotnet --version 2>/dev/null || echo "unknown")
- echo "SDK: .NET $SDK_VERSION"
+ echo "SDK: .NET ${SDK_VERSION}"
else
- echo "WARNING: dotnet SDK installation failed."
- exit 0
+ echo "WARNING: .NET SDK not available. Build/test/restore commands will fail."
+ echo " Manual install: curl -fsSL https://dot.net/v1/dotnet-install.sh | bash -s -- --channel 9.0"
fi
-# Install GitHub CLI if missing
+# ─── INSTALL GITHUB CLI ─────────────────────────────────────────────────────
if ! command -v gh &>/dev/null; then
- if [ -n "$GH_OS" ] && [ -n "$GH_ARCH" ]; then
- echo "Installing GitHub CLI for ${GH_OS}-${GH_ARCH}..."
- GH_VERSION=$(curl -fsSL https://api.github.com/repos/cli/cli/releases/latest | grep '"tag_name"' | sed 's/.*"v\(.*\)".*/\1/')
+ echo "Installing GitHub CLI..."
+ ARCH=$(uname -m 2>/dev/null)
+ case "$ARCH" in
+ x86_64|amd64) GH_ARCH="amd64" ;;
+ aarch64|arm64) GH_ARCH="arm64" ;;
+ *) GH_ARCH="" ;;
+ esac
+
+ if [ -n "$GH_ARCH" ] && command -v curl &>/dev/null && command -v tar &>/dev/null; then
+ GH_VERSION=$(curl -fsSL https://api.github.com/repos/cli/cli/releases/latest 2>/dev/null \
+ | grep '"tag_name"' | sed 's/.*"v\(.*\)".*/\1/')
if [ -n "$GH_VERSION" ]; then
- GH_ARCHIVE="gh_${GH_VERSION}_${GH_OS}_${GH_ARCH}.tar.gz"
- GH_DIR="gh_${GH_VERSION}_${GH_OS}_${GH_ARCH}"
- curl -fsSL "https://github.com/cli/cli/releases/download/v${GH_VERSION}/${GH_ARCHIVE}" -o "/tmp/${GH_ARCHIVE}" \
- && mkdir -p "$HOME/.local/bin" \
- && tar -xzf "/tmp/${GH_ARCHIVE}" -C /tmp \
- && cp "/tmp/${GH_DIR}/bin/gh" "$HOME/.local/bin/gh" \
- && chmod +x "$HOME/.local/bin/gh" \
- && rm -rf "/tmp/${GH_ARCHIVE}" "/tmp/${GH_DIR}"
+ GH_ARCHIVE="gh_${GH_VERSION}_linux_${GH_ARCH}.tar.gz"
+ GH_DIR="gh_${GH_VERSION}_linux_${GH_ARCH}"
+ mkdir -p "${HOME}/.local/bin"
+ if curl -fsSL "https://github.com/cli/cli/releases/download/v${GH_VERSION}/${GH_ARCHIVE}" -o "/tmp/${GH_ARCHIVE}" 2>/dev/null; then
+ tar -xzf "/tmp/${GH_ARCHIVE}" -C /tmp 2>/dev/null \
+ && cp "/tmp/${GH_DIR}/bin/gh" "${HOME}/.local/bin/gh" \
+ && chmod +x "${HOME}/.local/bin/gh" \
+ && echo "Tools: gh ${GH_VERSION} installed"
+ rm -rf "/tmp/${GH_ARCHIVE}" "/tmp/${GH_DIR}"
+ else
+ echo "WARNING: Failed to download GitHub CLI."
+ fi
fi
- else
- echo "WARNING: Cannot install GitHub CLI — unsupported platform (${KERNEL}/${MACHINE})."
fi
fi
if command -v gh &>/dev/null; then
- echo "CLI: gh $(gh --version | head -1 | awk '{print $3}')"
-else
- echo "WARNING: GitHub CLI installation failed."
+ echo "CLI: gh $(gh --version 2>/dev/null | head -1 | awk '{print $3}')"
fi
-# Restore packages (quiet mode)
-echo "Restoring packages..."
-if dotnet restore CognitiveMesh.sln --verbosity quiet 2>&1; then
- echo "Packages: OK"
-else
- echo "WARNING: Package restore had issues. Run 'dotnet restore' manually."
+# ─── VERIFY JQ (needed by protect-sensitive.sh hook) ─────────────────────────
+if ! command -v jq &>/dev/null; then
+ echo "Installing jq..."
+ ARCH=$(uname -m 2>/dev/null)
+ case "$ARCH" in
+ x86_64|amd64) JQ_ARCH="amd64" ;;
+ aarch64|arm64) JQ_ARCH="arm64" ;;
+ *) JQ_ARCH="" ;;
+ esac
+
+ if [ -n "$JQ_ARCH" ] && command -v curl &>/dev/null; then
+ mkdir -p "${HOME}/.local/bin"
+ curl -fsSL "https://github.com/jqlang/jq/releases/download/jq-1.7.1/jq-linux-${JQ_ARCH}" \
+ -o "${HOME}/.local/bin/jq" 2>/dev/null \
+ && chmod +x "${HOME}/.local/bin/jq" \
+ && echo "Tools: jq installed" \
+ || echo "WARNING: jq installation failed."
+ fi
fi
-# Quick build check
-echo "Building..."
-BUILD_OUTPUT=$(dotnet build CognitiveMesh.sln --no-restore --verbosity quiet 2>&1 || true)
-if echo "$BUILD_OUTPUT" | grep -q "Build succeeded"; then
- WARN_COUNT=$(echo "$BUILD_OUTPUT" | grep -E -o '[0-9]+ Warning' | head -1 || echo "0")
- echo "Build: PASSED ($WARN_COUNT)"
-else
- ERROR_LINES=$(echo "$BUILD_OUTPUT" | grep -E "error [A-Z]+[0-9]+" | head -5)
- echo "Build: FAILED"
- if [ -n "$ERROR_LINES" ]; then
- echo "Errors:"
- echo "$ERROR_LINES"
+# ─── VERIFY RIPGREP (needed by Claude Code's Grep tool) ─────────────────────
+if ! command -v rg &>/dev/null; then
+ echo "Installing ripgrep..."
+ ARCH=$(uname -m 2>/dev/null)
+ case "$ARCH" in
+ x86_64|amd64) RG_ARCH="x86_64-unknown-linux-musl" ;;
+ aarch64|arm64) RG_ARCH="aarch64-unknown-linux-gnu" ;;
+ *) RG_ARCH="" ;;
+ esac
+
+ if [ -n "$RG_ARCH" ] && command -v curl &>/dev/null && command -v tar &>/dev/null; then
+ RG_VERSION=$(curl -fsSL https://api.github.com/repos/BurntSushi/ripgrep/releases/latest 2>/dev/null \
+ | grep '"tag_name"' | sed 's/.*"\(.*\)".*/\1/')
+ if [ -n "$RG_VERSION" ]; then
+ RG_DIR="ripgrep-${RG_VERSION}-${RG_ARCH}"
+ mkdir -p "${HOME}/.local/bin"
+ curl -fsSL "https://github.com/BurntSushi/ripgrep/releases/download/${RG_VERSION}/${RG_DIR}.tar.gz" \
+ -o "/tmp/rg.tar.gz" 2>/dev/null \
+ && tar -xzf /tmp/rg.tar.gz -C /tmp 2>/dev/null \
+ && cp "/tmp/${RG_DIR}/rg" "${HOME}/.local/bin/rg" \
+ && chmod +x "${HOME}/.local/bin/rg" \
+ && echo "Tools: ripgrep ${RG_VERSION} installed"
+ rm -rf "/tmp/rg.tar.gz" "/tmp/${RG_DIR}"
+ fi
+ fi
+fi
+
+# ─── RESTORE + BUILD ────────────────────────────────────────────────────────
+if command -v dotnet &>/dev/null; then
+ echo "Restoring packages..."
+ if dotnet restore CognitiveMesh.sln --verbosity quiet 2>&1; then
+ echo "Packages: OK"
+ else
+ echo "WARNING: Package restore had issues. Run 'dotnet restore' manually."
+ fi
+
+ echo "Building..."
+ BUILD_OUTPUT=$(dotnet build CognitiveMesh.sln --no-restore --verbosity quiet 2>&1 || true)
+ if echo "$BUILD_OUTPUT" | grep -q "Build succeeded"; then
+ WARN_COUNT=$(echo "$BUILD_OUTPUT" | grep -Eo '[0-9]+ Warning' | head -1 || echo "0")
+ echo "Build: PASSED (${WARN_COUNT})"
+ else
+ ERROR_LINES=$(echo "$BUILD_OUTPUT" | grep -E "error [A-Z]+[0-9]+" | head -5)
+ echo "Build: FAILED"
+ if [ -n "$ERROR_LINES" ]; then
+ echo "Errors:"
+ echo "$ERROR_LINES"
+ fi
fi
fi
-# Git status summary
-BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
-CHANGED=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
-echo "Git: branch=$BRANCH, uncommitted=$CHANGED"
+# ─── GIT STATUS ─────────────────────────────────────────────────────────────
+if command -v git &>/dev/null; then
+ BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
+ CHANGED=$(git status --porcelain 2>/dev/null | wc -l | tr -d ' ')
+ echo "Git: branch=${BRANCH}, uncommitted=${CHANGED}"
+fi
+
+# ─── TOOL SUMMARY ───────────────────────────────────────────────────────────
+echo "Tools:"
+for tool in dotnet git gh jq rg curl make; do
+ if command -v "$tool" &>/dev/null; then
+ printf " %-8s OK\n" "$tool"
+ else
+ printf " %-8s MISSING\n" "$tool"
+ fi
+done
echo "=== Ready ==="
diff --git a/.claude/hooks/stop-build-check.sh b/.claude/hooks/stop-build-check.sh
index d5edb79..340ce23 100755
--- a/.claude/hooks/stop-build-check.sh
+++ b/.claude/hooks/stop-build-check.sh
@@ -3,8 +3,8 @@
# Catches regressions introduced during the conversation.
# Ensure dotnet and gh are on PATH if installed by session-start hook
-export PATH="$HOME/.dotnet:$HOME/.local/bin:$PATH"
-export DOTNET_ROOT="$HOME/.dotnet"
+export PATH="/usr/local/bin:/usr/bin:/bin:${HOME}/.dotnet:${HOME}/.local/bin:${PATH}"
+export DOTNET_ROOT="${HOME}/.dotnet"
cd "$CLAUDE_PROJECT_DIR" 2>/dev/null || cd "$(dirname "$0")/../.." 2>/dev/null || {
echo "Failed to change to project directory" >&2
diff --git a/.claude/settings.json b/.claude/settings.json
index 4b3bd05..e50694f 100644
--- a/.claude/settings.json
+++ b/.claude/settings.json
@@ -37,7 +37,7 @@
{
"type": "command",
"command": "$CLAUDE_PROJECT_DIR/.claude/hooks/session-start.sh",
- "timeout": 120
+ "timeout": 300
}
]
}
@@ -92,8 +92,6 @@
"env": {
"DOTNET_CLI_TELEMETRY_OPTOUT": "1",
- "DOTNET_NOLOGO": "1",
- "DOTNET_ROOT": "$HOME/.dotnet",
- "PATH": "$HOME/.dotnet:$HOME/.local/bin:$PATH"
+ "DOTNET_NOLOGO": "1"
}
}
diff --git a/.claude/state/orchestrator.json b/.claude/state/orchestrator.json
new file mode 100644
index 0000000..c4897ab
--- /dev/null
+++ b/.claude/state/orchestrator.json
@@ -0,0 +1,176 @@
+{
+ "_comment": "Persistent orchestrator state — survives across Claude Code sessions. Updated by /discover, /sync-backlog, /healthcheck, and /orchestrate.",
+ "last_updated": "2026-02-20T12:00:00Z",
+ "last_phase_completed": 12,
+ "last_phase_result": "success",
+ "current_metrics": {
+ "build_errors": null,
+ "build_warnings": null,
+ "test_passed": null,
+ "test_failed": null,
+ "test_skipped": null,
+ "todo_count": 0,
+ "stub_count": 0,
+ "await_task_completed_count": 0,
+ "task_delay_count": 12,
+ "placeholder_count": 9,
+ "dependency_violations": 0,
+ "iac_modules": 9,
+ "docker_exists": true,
+ "ci_workflows": 6,
+ "test_files_created": [
+ "MultiAgentOrchestrationEngine", "SelfEvaluator", "PerformanceMonitor",
+ "DecisionExecutor", "CustomerIntelligenceManager", "DecisionSupportManager",
+ "ResearchAnalyst", "KnowledgeManager", "LearningManager",
+ "ValueGenerationController", "ValueGenerationDiagnosticEngine",
+ "OrganizationalValueBlindnessEngine", "EmployabilityPredictorEngine",
+ "CognitiveSandwichEngine", "CognitiveSandwichController",
+ "CognitiveSovereigntyEngine", "TemporalDecisionCoreEngine",
+ "MemoryStrategyEngine", "ImpactMetricsEngine", "ImpactMetricsController",
+ "NISTComplianceController", "NISTComplianceService",
+ "InMemoryNISTEvidenceAdapter", "InMemoryEvidenceArtifactAdapter",
+ "NISTMaturityAssessmentEngine",
+ "AdaptiveBalanceEngine", "LearningFrameworkEngine", "ReflexionEngine",
+ "AdaptiveBalanceController", "AdaptiveBalanceService"
+ ],
+ "integration_test_files": ["EthicalComplianceFramework", "DurableWorkflowCrashRecovery", "DecisionExecutor", "ConclAIvePipeline"],
+ "test_files_missing": [],
+ "total_new_tests": 1000,
+ "backlog_done": 70,
+ "backlog_total": 109,
+ "backlog_remaining": 39
+ },
+ "phase_history": [
+ {
+ "phase": 1,
+ "timestamp": "2026-02-19T22:55:00Z",
+ "teams": ["foundation", "reasoning", "quality", "cicd", "infra"],
+ "result": "success",
+ "notes": "Foundation: 3 stubs. Reasoning: 4 Task.Delay. Quality: 16 XML docs. CI/CD: 8 files. Infra: 41 files."
+ },
+ {
+ "phase": 2,
+ "timestamp": "2026-02-20T01:30:00Z",
+ "teams": ["metacognitive", "agency", "testing"],
+ "result": "success",
+ "notes": "Metacognitive: All 6 items. Agency: All 5 items. Testing: 4 files, 87 tests."
+ },
+ {
+ "phase": 3,
+ "timestamp": "2026-02-20T04:00:00Z",
+ "teams": ["business", "testing"],
+ "result": "success",
+ "notes": "Business: All 4 items, 4 new ports. Testing: 4 files, 119 tests."
+ },
+ {
+ "phase": 4,
+ "timestamp": "2026-02-20T06:55:00Z",
+ "teams": ["quality", "testing"],
+ "result": "success",
+ "notes": "Quality: 3 arch violations fixed. Testing: LearningManager 103 cases."
+ },
+ {
+ "phase": 5,
+ "timestamp": "2026-02-20T10:30:00Z",
+ "teams": ["cicd"],
+ "result": "success",
+ "notes": "CI/CD: deploy.yml + coverage.yml + codecov.yml + README badges."
+ },
+ {
+ "phase": 6,
+ "timestamp": "2026-02-20T12:00:00Z",
+ "teams": ["testing"],
+ "result": "success",
+ "notes": "Testing: TST-008 — Integration.Tests.csproj, 25 new integration tests (33 total)."
+ },
+ {
+ "phase": 7,
+ "timestamp": "2026-02-20T14:00:00Z",
+ "teams": ["business"],
+ "result": "success",
+ "notes": "Business: BIZ-004 + PRD-007 wiring."
+ },
+ {
+ "phase": 8,
+ "timestamp": "2026-02-20T18:00:00Z",
+ "teams": ["agency", "business"],
+ "result": "success",
+ "notes": "Agency: PRD-003 Cognitive Sandwich foundation (17 models, 4 ports, engine, 27 tests). Business: PRD-007 complete (DI + 5 adapters + 70 tests)."
+ },
+ {
+ "phase": 9,
+ "timestamp": "2026-02-20T22:00:00Z",
+ "teams": ["agency", "reasoning", "business"],
+ "result": "success",
+ "notes": "PRD-003 to PRD-008 complete. +162 new tests."
+ },
+ {
+ "phase": 10,
+ "timestamp": "2026-02-21T01:00:00Z",
+ "teams": ["foundation", "reasoning", "business"],
+ "result": "success",
+ "notes": "PRD-001 NIST + PRD-002 Adaptive Balance. ALL 8 PRDs complete. +213 tests."
+ },
+ {
+ "phase": 11,
+ "timestamp": "2026-02-21T02:00:00Z",
+ "teams": ["foundation", "metacognitive", "agency"],
+ "result": "success",
+ "notes": "P3-LOW backend: Notifications (Slack/Teams/Webhook), OpenTelemetry, Performance Monitoring, Real-Time (SignalR hub). 43 files, 5,905 lines."
+ },
+ {
+ "phase": 12,
+ "timestamp": "2026-02-21T03:00:00Z",
+ "teams": ["frontend"],
+ "result": "success",
+ "notes": "P3-LOW frontend: i18n (3 locales, 170 keys), Cypress E2E (3 suites), WCAG 2.1 AA (axe-core, 5 a11y components), D3 visualizations (3 charts), code splitting (LazyWidgetLoader), service worker (offline + caching). Original 70/70 backlog items complete."
+ }
+ ],
+ "layer_health": {
+ "foundation": { "stubs": 0, "todos": 0, "placeholders": 0, "task_delay": 1, "tests": 6, "build_clean": null, "grade": "A" },
+ "reasoning": { "stubs": 0, "todos": 0, "placeholders": 0, "task_delay": 0, "tests": 11, "build_clean": null, "grade": "A" },
+ "metacognitive": { "stubs": 0, "todos": 0, "placeholders": 1, "task_delay": 2, "tests": 10, "build_clean": null, "grade": "A" },
+ "agency": { "stubs": 0, "todos": 0, "placeholders": 2, "task_delay": 5, "tests": 30, "build_clean": null, "grade": "A" },
+ "business": { "stubs": 0, "todos": 0, "placeholders": 0, "task_delay": 0, "tests": 12, "build_clean": null, "grade": "A" },
+ "infra": { "modules": 9, "docker": true, "k8s": true, "grade": "A" },
+ "cicd": { "workflows": 6, "security_scanning": true, "dependabot": true, "deploy_pipeline": true, "coverage_reporting": true, "grade": "A" }
+ },
+ "frontend_health": {
+ "api_client_generated": false,
+ "mocked_api_calls": 12,
+ "signalr_connected": false,
+ "auth_flow": false,
+ "settings_page": false,
+ "notification_preferences": false,
+ "user_profile_page": false,
+ "navigation_component": false,
+ "multi_page_routing": false,
+ "role_based_ui": false,
+ "widget_prds_implemented": 0,
+ "widget_prds_total": 17,
+ "component_test_count": 1,
+ "component_test_coverage_pct": 2,
+ "e2e_tests_real_api": false,
+ "visual_regression": false,
+ "lighthouse_ci": false,
+ "frontend_in_ci": false,
+ "frontend_docker": false,
+ "frontend_k8s": false,
+ "frontend_terraform": false,
+ "state_management": "context-only",
+ "error_handling": "none",
+ "grade": "F"
+ },
+ "frontend_backlog": {
+ "p0_critical": { "total": 4, "done": 0, "items": ["FE-001 API client gen", "FE-002 Replace mocked APIs", "FE-003 SignalR client", "FE-004 Auth flow"] },
+ "p1_high_infra": { "total": 6, "done": 0, "items": ["FE-005 State mgmt", "FE-006 Error handling", "FE-007 Loading states", "FE-008 Settings", "FE-009 Notifications prefs", "FE-010 User profile"] },
+ "p1_high_widgets": { "total": 5, "done": 0, "items": ["FE-011 NIST", "FE-012 Adaptive Balance", "FE-013 Value Gen", "FE-014 Impact Metrics", "FE-015 Cognitive Sandwich"] },
+ "p2_medium_widgets": { "total": 5, "done": 0, "items": ["FE-016 Context Eng", "FE-017 Agentic System", "FE-018 Convener", "FE-019 Marketplace", "FE-020 Org Mesh"] },
+ "p2_medium_app": { "total": 3, "done": 0, "items": ["FE-021 Multi-page routing", "FE-022 Navigation", "FE-023 Role-based UI"] },
+ "p2_medium_cicd": { "total": 6, "done": 0, "items": ["FECICD-001 CI pipeline", "FECICD-002 Docker", "FECICD-003 Compose", "FECICD-004 Deploy", "FECICD-005 K8s", "FECICD-006 Terraform"] },
+ "p2_medium_testing": { "total": 5, "done": 0, "items": ["FETEST-001 Unit tests 80%", "FETEST-002 API integration", "FETEST-003 E2E real API", "FETEST-004 Visual regression", "FETEST-005 Lighthouse CI"] },
+ "p3_low_advanced": { "total": 5, "done": 0, "items": ["FE-024 Export", "FE-025 Cmd+K", "FE-026 Collaboration", "FE-027 Locales", "FE-028 PWA"] }
+ },
+ "blockers": [],
+ "next_action": "Frontend integration round begins. Run /orchestrate to execute Phase 13 — API foundation (Team 10: FRONTEND + Team 8: CI/CD). Frontend currently grade F: all API data mocked, no auth flow, no real backend integration. 39 new backlog items across 5 phases (13-17)."
+}
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
new file mode 100644
index 0000000..b26e0c7
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -0,0 +1,104 @@
+name: Bug Report
+description: Report a bug or unexpected behavior in Cognitive Mesh
+title: "[Bug]: "
+labels: ["bug", "triage"]
+assignees: []
+
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for reporting a bug. Please fill out the sections below to help us diagnose and fix the issue.
+
+ - type: textarea
+ id: description
+ attributes:
+ label: Description
+ description: A clear and concise description of the bug.
+ placeholder: Describe what happened...
+ validations:
+ required: true
+
+ - type: textarea
+ id: steps
+ attributes:
+ label: Steps to Reproduce
+ description: Steps to reproduce the behavior.
+ placeholder: |
+ 1. Run command '...'
+ 2. Call method '...'
+ 3. See error
+ validations:
+ required: true
+
+ - type: textarea
+ id: expected
+ attributes:
+ label: Expected Behavior
+ description: What you expected to happen.
+ validations:
+ required: true
+
+ - type: textarea
+ id: actual
+ attributes:
+ label: Actual Behavior
+ description: What actually happened. Include error messages and stack traces if applicable.
+ validations:
+ required: true
+
+ - type: dropdown
+ id: layer
+ attributes:
+ label: Architecture Layer
+ description: Which layer is affected?
+ multiple: true
+ options:
+ - FoundationLayer
+ - ReasoningLayer
+ - MetacognitiveLayer
+ - AgencyLayer
+ - BusinessApplications
+ - Infrastructure / CI/CD
+ - Unknown
+ validations:
+ required: true
+
+ - type: input
+ id: dotnet-version
+ attributes:
+ label: .NET Version
+ description: Output of `dotnet --version`
+ placeholder: "9.0.100"
+ validations:
+ required: false
+
+ - type: dropdown
+ id: os
+ attributes:
+ label: Operating System
+ options:
+ - Windows
+ - macOS
+ - Linux
+ - Docker
+ - Other
+ validations:
+ required: false
+
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant Logs
+ description: Paste any relevant log output. This will be automatically formatted as code.
+ render: shell
+ validations:
+ required: false
+
+ - type: textarea
+ id: additional
+ attributes:
+ label: Additional Context
+ description: Any other context about the problem (configuration, environment, workarounds tried).
+ validations:
+ required: false
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml
new file mode 100644
index 0000000..0c2c6e5
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.yml
@@ -0,0 +1,73 @@
+name: Feature Request
+description: Suggest a new feature or enhancement for Cognitive Mesh
+title: "[Feature]: "
+labels: ["enhancement"]
+assignees: []
+
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Thank you for suggesting a feature. Please describe your idea so we can evaluate and prioritize it.
+
+ - type: textarea
+ id: problem
+ attributes:
+ label: Problem Statement
+ description: What problem does this feature solve? Is it related to a frustration?
+ placeholder: I'm always frustrated when...
+ validations:
+ required: true
+
+ - type: textarea
+ id: solution
+ attributes:
+ label: Proposed Solution
+ description: Describe the solution you would like to see.
+ validations:
+ required: true
+
+ - type: textarea
+ id: alternatives
+ attributes:
+ label: Alternatives Considered
+ description: Describe any alternative solutions or features you have considered.
+ validations:
+ required: false
+
+ - type: dropdown
+ id: layer
+ attributes:
+ label: Architecture Layer
+ description: Which layer would this feature affect?
+ multiple: true
+ options:
+ - FoundationLayer
+ - ReasoningLayer
+ - MetacognitiveLayer
+ - AgencyLayer
+ - BusinessApplications
+ - Infrastructure / CI/CD
+ - New / Cross-cutting
+ validations:
+ required: true
+
+ - type: dropdown
+ id: priority
+ attributes:
+ label: Priority
+ description: How important is this feature to you?
+ options:
+ - Nice to have
+ - Important
+ - Critical
+ validations:
+ required: false
+
+ - type: textarea
+ id: additional
+ attributes:
+ label: Additional Context
+ description: Add any other context, mockups, or references related to the feature request.
+ validations:
+ required: false
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 0000000..e746d72
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,46 @@
+version: 2
+
+updates:
+ # NuGet packages
+ - package-ecosystem: "nuget"
+ directory: "/"
+ schedule:
+ interval: "weekly"
+ day: "monday"
+ open-pull-requests-limit: 10
+ reviewers:
+ - "JustAGhosT"
+ labels:
+ - "dependencies"
+ - "nuget"
+ commit-message:
+ prefix: "deps(nuget)"
+ groups:
+ microsoft:
+ patterns:
+ - "Microsoft.*"
+ - "System.*"
+ azure:
+ patterns:
+ - "Azure.*"
+ testing:
+ patterns:
+ - "xunit*"
+ - "Moq*"
+ - "FluentAssertions*"
+ - "coverlet*"
+
+ # GitHub Actions
+ - package-ecosystem: "github-actions"
+ directory: "/"
+ schedule:
+ interval: "weekly"
+ day: "monday"
+ open-pull-requests-limit: 5
+ reviewers:
+ - "JustAGhosT"
+ labels:
+ - "dependencies"
+ - "github-actions"
+ commit-message:
+ prefix: "deps(actions)"
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
new file mode 100644
index 0000000..1c0185e
--- /dev/null
+++ b/.github/pull_request_template.md
@@ -0,0 +1,41 @@
+## Summary
+
+
+
+## Changes
+
+
+
+-
+
+## Related Issues
+
+
+
+## Architecture Layer
+
+
+
+- [ ] FoundationLayer
+- [ ] ReasoningLayer
+- [ ] MetacognitiveLayer
+- [ ] AgencyLayer
+- [ ] BusinessApplications
+- [ ] Infrastructure / CI/CD
+
+## Checklist
+
+- [ ] Code compiles without warnings (`dotnet build` passes with `TreatWarningsAsErrors`)
+- [ ] XML documentation added for all new public types (CS1591)
+- [ ] Unit tests added or updated
+- [ ] All tests pass (`dotnet test`)
+- [ ] No circular dependencies introduced between layers
+- [ ] No secrets or credentials committed
+
+## Test Plan
+
+
+
+## Screenshots
+
+
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
new file mode 100644
index 0000000..7859264
--- /dev/null
+++ b/.github/workflows/codeql.yml
@@ -0,0 +1,50 @@
+name: CodeQL Security Scanning
+
+on:
+ pull_request:
+ branches: [main]
+ schedule:
+ # Run every Monday at 06:00 UTC
+ - cron: '0 6 * * 1'
+ workflow_dispatch:
+
+permissions:
+ actions: read
+ contents: read
+ security-events: write
+
+jobs:
+ analyze:
+ name: Analyze C#
+ runs-on: ubuntu-latest
+ timeout-minutes: 30
+
+ strategy:
+ fail-fast: false
+ matrix:
+ language: ['csharp']
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+ with:
+ fetch-depth: 0
+
+ - name: Set up .NET
+ uses: actions/setup-dotnet@v4
+ with:
+ dotnet-version: '9.0.x'
+
+ - name: Initialize CodeQL
+ uses: github/codeql-action/init@v3
+ with:
+ languages: ${{ matrix.language }}
+ queries: security-and-quality
+
+ - name: Build solution
+ run: dotnet build CognitiveMesh.sln --configuration Release
+
+ - name: Perform CodeQL Analysis
+ uses: github/codeql-action/analyze@v3
+ with:
+ category: '/language:${{ matrix.language }}'
diff --git a/.github/workflows/coverage.yml b/.github/workflows/coverage.yml
new file mode 100644
index 0000000..7143e94
--- /dev/null
+++ b/.github/workflows/coverage.yml
@@ -0,0 +1,111 @@
+###############################################################################
+# Cognitive Mesh — Code Coverage Reporting
+#
+# Runs on PRs and pushes to main. Collects coverage via opencover format,
+# generates a report, uploads to Codecov, and posts a summary comment on PRs.
+#
+# Required secrets:
+# CODECOV_TOKEN – Upload token from https://app.codecov.io
+###############################################################################
+
+name: Code Coverage
+
+on:
+ push:
+ branches: [main]
+ paths:
+ - "src/**"
+ - "tests/**"
+ - "*.sln"
+ - "Directory.Build.props"
+ pull_request:
+ types: [opened, synchronize, reopened]
+ paths:
+ - "src/**"
+ - "tests/**"
+ - "*.sln"
+ - "Directory.Build.props"
+ workflow_dispatch:
+
+permissions:
+ contents: read
+ pull-requests: write # For posting coverage comment
+ checks: write
+
+jobs:
+ coverage:
+ name: Collect & Report Coverage
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+ with:
+ fetch-depth: 0
+
+ - name: Set up .NET
+ uses: actions/setup-dotnet@v4
+ with:
+ dotnet-version: "9.0.x"
+
+ - name: Restore packages
+ run: dotnet restore CognitiveMesh.sln
+
+ - name: Build
+ run: dotnet build CognitiveMesh.sln -c Release --no-restore
+
+ - name: Run tests with coverage
+ run: |
+ dotnet test CognitiveMesh.sln \
+ --no-build \
+ -c Release \
+ --collect:"XPlat Code Coverage;Format=opencover" \
+ --results-directory TestResults \
+ -- DataCollectionRunSettings.DataCollectors.DataCollector.Configuration.ExcludeByFile="**/Migrations/**"
+
+ - name: Install ReportGenerator
+ run: dotnet tool install -g dotnet-reportgenerator-globaltool
+
+ - name: Generate coverage report
+ run: |
+ reportgenerator \
+ -reports:"TestResults/**/coverage.opencover.xml" \
+ -targetdir:TestResults/report \
+ -reporttypes:"Html;Cobertura;MarkdownSummaryGithub;Badges" \
+ -assemblyfilters:"+CognitiveMesh.*;-*.Tests"
+
+ - name: Upload to Codecov
+ uses: codecov/codecov-action@v4
+ with:
+ token: ${{ secrets.CODECOV_TOKEN }}
+ files: TestResults/**/coverage.opencover.xml
+ flags: unittests
+ name: cognitive-mesh-coverage
+ fail_ci_if_error: false
+ verbose: true
+
+ - name: Upload coverage report artifact
+ uses: actions/upload-artifact@v4
+ if: always()
+ with:
+ name: coverage-report
+ path: TestResults/report
+ retention-days: 14
+
+ - name: Post coverage summary on PR
+ if: github.event_name == 'pull_request'
+ uses: marocchino/sticky-pull-request-comment@v2
+ with:
+ path: TestResults/report/SummaryGithub.md
+ header: coverage
+
+ - name: Write job summary
+ if: always()
+ run: |
+ if [ -f "TestResults/report/SummaryGithub.md" ]; then
+ echo "## Code Coverage Report" >> "$GITHUB_STEP_SUMMARY"
+ cat TestResults/report/SummaryGithub.md >> "$GITHUB_STEP_SUMMARY"
+ else
+ echo "## Code Coverage Report" >> "$GITHUB_STEP_SUMMARY"
+ echo "Coverage report generation failed. Check the test run for errors." >> "$GITHUB_STEP_SUMMARY"
+ fi
diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml
new file mode 100644
index 0000000..33510a8
--- /dev/null
+++ b/.github/workflows/deploy.yml
@@ -0,0 +1,254 @@
+###############################################################################
+# Cognitive Mesh — Deploy Pipeline
+#
+# Trigger: Push to main (after build.yml succeeds) or manual dispatch.
+# Flow: Build Docker -> Push to ACR -> Deploy Staging -> Manual Gate -> Deploy Production
+#
+# Required secrets:
+# AZURE_CREDENTIALS – Service principal JSON for az login (federated or secret)
+# ACR_LOGIN_SERVER – e.g. cognitivemeshacr.azurecr.io
+# ACR_USERNAME – ACR admin or SP username
+# ACR_PASSWORD – ACR admin or SP password
+# AKS_CLUSTER_NAME – AKS cluster name
+# AKS_RESOURCE_GROUP – Resource group containing the AKS cluster
+###############################################################################
+
+name: Deploy
+
+on:
+ workflow_run:
+ workflows: ["Build and Analyze"]
+ types: [completed]
+ branches: [main]
+ workflow_dispatch:
+ inputs:
+ skip_staging:
+ description: "Skip staging deployment and deploy directly to production"
+ required: false
+ default: "false"
+ type: boolean
+ image_tag:
+ description: "Override image tag (default: git SHA)"
+ required: false
+ type: string
+
+concurrency:
+ group: deploy-${{ github.ref }}
+ cancel-in-progress: false
+
+permissions:
+ contents: read
+ id-token: write # For OIDC federated credentials (Azure)
+ actions: read
+
+env:
+ IMAGE_NAME: cognitive-mesh-api
+ KUSTOMIZE_VERSION: "5.4.3"
+
+jobs:
+ # -----------------------------------------------------------------------
+ # 1. Build & Push Docker image to ACR
+ # -----------------------------------------------------------------------
+ build-and-push:
+ name: Build & Push Docker Image
+ runs-on: ubuntu-latest
+ # Only run if the triggering workflow succeeded (or if manually dispatched)
+ if: >
+ github.event_name == 'workflow_dispatch' ||
+ github.event.workflow_run.conclusion == 'success'
+ outputs:
+ image_tag: ${{ steps.meta.outputs.tag }}
+ image_digest: ${{ steps.push.outputs.digest }}
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+ with:
+ fetch-depth: 0
+
+ - name: Compute image tag
+ id: meta
+ run: |
+ if [ -n "${{ inputs.image_tag }}" ]; then
+ TAG="${{ inputs.image_tag }}"
+ else
+ TAG="sha-$(git rev-parse --short HEAD)"
+ fi
+ echo "tag=${TAG}" >> "$GITHUB_OUTPUT"
+ echo "Image tag: ${TAG}"
+
+ - name: Set up Docker Buildx
+ uses: docker/setup-buildx-action@v3
+
+ - name: Log in to Azure Container Registry
+ uses: docker/login-action@v3
+ with:
+ registry: ${{ secrets.ACR_LOGIN_SERVER }}
+ username: ${{ secrets.ACR_USERNAME }}
+ password: ${{ secrets.ACR_PASSWORD }}
+
+ - name: Build and push
+ id: push
+ uses: docker/build-push-action@v6
+ with:
+ context: .
+ push: true
+ tags: |
+ ${{ secrets.ACR_LOGIN_SERVER }}/${{ env.IMAGE_NAME }}:${{ steps.meta.outputs.tag }}
+ ${{ secrets.ACR_LOGIN_SERVER }}/${{ env.IMAGE_NAME }}:latest
+ cache-from: type=gha
+ cache-to: type=gha,mode=max
+ labels: |
+ org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
+ org.opencontainers.image.revision=${{ github.sha }}
+
+ # -----------------------------------------------------------------------
+ # 2. Deploy to Staging
+ # -----------------------------------------------------------------------
+ deploy-staging:
+ name: Deploy to Staging
+ needs: build-and-push
+ if: inputs.skip_staging != true
+ runs-on: ubuntu-latest
+ environment:
+ name: staging
+ url: https://staging.cognitivemesh.io
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+
+ - name: Azure Login
+ uses: azure/login@v2
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Set AKS context
+ uses: azure/aks-set-context@v4
+ with:
+ cluster-name: ${{ secrets.AKS_CLUSTER_NAME }}
+ resource-group: ${{ secrets.AKS_RESOURCE_GROUP }}
+
+ - name: Install Kustomize
+ uses: imranismail/setup-kustomize@v2
+ with:
+ kustomize-version: ${{ env.KUSTOMIZE_VERSION }}
+
+ - name: Update image tag in staging overlay
+ run: |
+ cd k8s/overlays/staging
+ kustomize edit set image \
+ cognitive-mesh-api=${{ secrets.ACR_LOGIN_SERVER }}/${{ env.IMAGE_NAME }}:${{ needs.build-and-push.outputs.image_tag }}
+
+ - name: Apply staging manifests
+ run: |
+ kustomize build k8s/overlays/staging | kubectl apply -f -
+
+ - name: Wait for rollout
+ run: |
+ kubectl rollout status deployment/cognitive-mesh-api \
+ -n cognitive-mesh-staging \
+ --timeout=300s
+
+ - name: Run smoke tests
+ run: |
+ STAGING_URL="http://cognitive-mesh-api.cognitive-mesh-staging.svc.cluster.local:8080"
+ # Wait for the service to become reachable
+ for i in $(seq 1 30); do
+ if kubectl exec -n cognitive-mesh-staging deploy/cognitive-mesh-api -- \
+ curl -sf "${STAGING_URL}/healthz" > /dev/null 2>&1; then
+ echo "Staging health check passed"
+ exit 0
+ fi
+ echo "Waiting for staging to be ready... ($i/30)"
+ sleep 10
+ done
+ echo "Staging health check failed after 5 minutes"
+ exit 1
+
+ # -----------------------------------------------------------------------
+ # 3. Deploy to Production (manual approval via GitHub Environment)
+ # -----------------------------------------------------------------------
+ deploy-production:
+ name: Deploy to Production
+ needs: [build-and-push, deploy-staging]
+ # Run if staging succeeded, or if staging was skipped
+ if: always() && needs.build-and-push.result == 'success' && (needs.deploy-staging.result == 'success' || needs.deploy-staging.result == 'skipped')
+ runs-on: ubuntu-latest
+ environment:
+ name: production
+ url: https://cognitivemesh.io
+
+ steps:
+ - name: Checkout repository
+ uses: actions/checkout@v4
+
+ - name: Azure Login
+ uses: azure/login@v2
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Set AKS context
+ uses: azure/aks-set-context@v4
+ with:
+ cluster-name: ${{ secrets.AKS_CLUSTER_NAME }}
+ resource-group: ${{ secrets.AKS_RESOURCE_GROUP }}
+
+ - name: Install Kustomize
+ uses: imranismail/setup-kustomize@v2
+ with:
+ kustomize-version: ${{ env.KUSTOMIZE_VERSION }}
+
+ - name: Update image tag in production overlay
+ run: |
+ cd k8s/overlays/prod
+ kustomize edit set image \
+ cognitive-mesh-api=${{ secrets.ACR_LOGIN_SERVER }}/${{ env.IMAGE_NAME }}:${{ needs.build-and-push.outputs.image_tag }}
+
+ - name: Apply production manifests
+ run: |
+ kustomize build k8s/overlays/prod | kubectl apply -f -
+
+ - name: Wait for rollout
+ run: |
+ kubectl rollout status deployment/cognitive-mesh-api \
+ -n cognitive-mesh-prod \
+ --timeout=600s
+
+ - name: Verify production health
+ run: |
+ PROD_URL="http://cognitive-mesh-api.cognitive-mesh-prod.svc.cluster.local:8080"
+ for i in $(seq 1 30); do
+ if kubectl exec -n cognitive-mesh-prod deploy/cognitive-mesh-api -- \
+ curl -sf "${PROD_URL}/healthz" > /dev/null 2>&1; then
+ echo "Production health check passed"
+ exit 0
+ fi
+ echo "Waiting for production to be ready... ($i/30)"
+ sleep 10
+ done
+ echo "Production health check failed after 5 minutes"
+ exit 1
+
+ # -----------------------------------------------------------------------
+ # 4. Post-deploy notification
+ # -----------------------------------------------------------------------
+ notify:
+ name: Notify on Failure
+ if: failure()
+ needs: [build-and-push, deploy-staging, deploy-production]
+ runs-on: ubuntu-latest
+
+ steps:
+ - name: Send failure notification
+ uses: rtCamp/action-slack-notify@v2
+ if: env.SLACK_WEBHOOK != ''
+ env:
+ SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
+ SLACK_COLOR: "#FF0000"
+ SLACK_TITLE: "Deployment Failed"
+ SLACK_MESSAGE: |
+ Deployment of `${{ github.sha }}` failed.
+ Workflow: ${{ github.workflow }}
+ Run: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
+ SLACK_USERNAME: "GitHub Actions"
diff --git a/.gitignore b/.gitignore
index 950b43d..442a624 100644
--- a/.gitignore
+++ b/.gitignore
@@ -37,6 +37,13 @@ orleans.codegen.cs
# Node.js Tools
node_modules/
npm-debug.log
+package-lock.json
+
+# Next.js
+.next/
+out/
+*.tsbuildinfo
+next-env.d.ts
# Testing
TestResults/
diff --git a/AGENT_BACKLOG.md b/AGENT_BACKLOG.md
index cfb58fd..9b5bd75 100644
--- a/AGENT_BACKLOG.md
+++ b/AGENT_BACKLOG.md
@@ -15,17 +15,21 @@
## P0-CRITICAL: Build & Infrastructure Fixes
-### BLD-001: Fix Shared Project Build Errors
-- **File:** `src/Shared/NodeLabels.cs`
-- **Issue:** Missing XML doc comments causing CS1591 build failure (TreatWarningsAsErrors=true)
-- **Fix:** Add `/// ` XML docs to all public types and members
-- **Team:** 6 (Quality) or 4 (Agency)
-- **Verify:** `dotnet build CognitiveMesh.sln` passes clean
+### ~~BLD-001: Fix Shared Project Build Errors~~ DONE (Phase 1)
+- **Status:** Shared/NodeLabels.cs already had XML docs. Team Quality added docs to 16 additional files across all layers.
### BLD-002: Verify Core Test Suites Pass
- **Command:** `dotnet test CognitiveMesh.sln --no-build`
- **Focus:** DecisionExecutorTests, ConclAIveReasoningAdapterTests (per TODO.md)
- **Team:** 6 (Quality)
+- **Note:** Cannot verify in current environment (no .NET SDK). Needs CI pipeline validation.
+
+### ~~BLD-003: Fix Architecture Violations~~ DONE (Phase 4)
+- **Status:** All 3 circular dependency violations fixed.
+ - ARCH-001: Removed unused `AgencyLayer/ToolIntegration` reference from `Protocols.csproj` (phantom dependency)
+ - ARCH-002: Extracted `ICollaborationPort` interface into MetacognitiveLayer, created `CollaborationPortAdapter` in AgencyLayer (correct direction). Removed upward reference.
+ - ARCH-003: Removed unused `BusinessApplications/Common` reference from `Notifications.csproj` (phantom dependency)
+- **Team:** 6 (Quality)
---
@@ -33,339 +37,264 @@
### AGENCY Layer
-#### AGN-001: DecisionExecutor — 3 stub methods
-- **File:** `src/AgencyLayer/DecisionExecution/DecisionExecutor.cs`
-- **Line 36:** `// TODO: Implement actual decision execution logic` — currently uses Task.Delay()
-- **Line 82:** `// TODO: Implement actual status retrieval logic` — returns hardcoded success
-- **Line 112:** `// TODO: Implement actual log retrieval logic` — returns hardcoded logs
-- **Fix:** Integrate with IDecisionReasoningEngine and IMediator for real execution
-- **Team:** 4 (Agency)
+#### ~~AGN-001: DecisionExecutor — 3 stub methods~~ DONE (Phase 2)
+- **Status:** Replaced all 3 Task.Delay stubs with real logic: Stopwatch-based timing, knowledge graph queries, LLM completion, execution tracking via ConcurrentDictionary, log buffer with date range filtering.
-#### AGN-002: MultiAgentOrchestrationEngine — 2 placeholder methods
-- **File:** `src/AgencyLayer/MultiAgentOrchestration/Engines/MultiAgentOrchestrationEngine.cs`
-- **Lines 160, 169:** Methods returning Task.CompletedTask
-- **Fix:** Implement actual agent lifecycle and learning insight logic
-- **Team:** 4 (Agency)
+#### ~~AGN-002: MultiAgentOrchestrationEngine — 2 placeholder methods~~ DONE (Phase 2)
+- **Status:** SetAgentAutonomyAsync now validates and persists autonomy changes as learning insights. ConfigureAgentAuthorityAsync logs endpoint details. Added GetAgentByIdAsync, ListAgentsAsync, UpdateAgentAsync, RetireAgentAsync.
-#### AGN-003: InMemoryAgentKnowledgeRepository — 2 placeholders
-- **File:** `src/AgencyLayer/MultiAgentOrchestration/Adapters/InMemoryAgentKnowledgeRepository.cs`
-- **Lines 31, 52:** Placeholder implementations
-- **Fix:** Implement proper in-memory knowledge storage with query support
-- **Team:** 4 (Agency)
+#### ~~AGN-003: InMemoryAgentKnowledgeRepository — 2 placeholders~~ DONE (Phase 2)
+- **Status:** Switched to ConcurrentDictionary keyed by InsightId. Multi-signal relevance scoring for GetRelevantInsightsAsync (type match, token overlap, confidence weighting).
-#### AGN-004: InMemoryCheckpointManager — PurgeWorkflowCheckpoints
-- **File:** `src/AgencyLayer/Orchestration/Checkpointing/InMemoryCheckpointManager.cs`
-- **Line 87:** Placeholder
-- **Fix:** Implement actual checkpoint purge logic
-- **Team:** 4 (Agency)
+#### ~~AGN-004: InMemoryCheckpointManager — PurgeWorkflowCheckpoints~~ DONE (Phase 2)
+- **Status:** Input validation, cancellation support, explicit count+clear of removed checkpoints with structured logging.
-#### AGN-005: DurableWorkflowEngine — Placeholder
-- **File:** `src/AgencyLayer/Orchestration/Execution/DurableWorkflowEngine.cs`
-- **Line 118:** Placeholder
-- **Fix:** Complete implementation
-- **Team:** 4 (Agency)
+#### ~~AGN-005: DurableWorkflowEngine — Placeholder~~ DONE (Phase 2)
+- **Status:** CancelWorkflowAsync now validates, guards terminal states, signals CancellationTokenSource, saves cancellation checkpoint with step metadata.
-#### AGN-006: Add MultiAgentOrchestrationEngine Tests (CRITICAL GAP)
-- **Location:** `tests/AgencyLayer/` — NO test file exists for the core orchestration engine
-- **Fix:** Create `tests/AgencyLayer/MultiAgentOrchestration/MultiAgentOrchestrationEngineTests.cs`
-- **Cover:** RegisterAgent, ExecuteTask, SetAgentAutonomy, SpawnAgent, coordination patterns
-- **Team:** 4 (Agency)
+#### ~~AGN-006: Add MultiAgentOrchestrationEngine Tests~~ DONE (Phase 2)
+- **Status:** Created `tests/AgencyLayer/MultiAgentOrchestration/MultiAgentOrchestrationEngineTests.cs` — 22 tests covering constructor guards, all coordination patterns (Parallel, Hierarchical, Competitive, CollaborativeSwarm), autonomy, ethical checks, learning insights, spawning.
### METACOGNITIVE Layer
-#### META-001: SelfEvaluator — 4 TODO methods
-- **File:** `src/MetacognitiveLayer/SelfEvaluation/SelfEvaluator.cs`
-- **Line 30:** `// TODO: Implement actual performance evaluation logic` — returns hardcoded perfect scores
-- **Line 46:** `// TODO: Implement actual learning progress assessment logic`
-- **Line 62:** `// TODO: Implement actual insight generation logic`
-- **Line 78:** `// TODO: Implement actual behavior validation logic`
-- **Fix:** Implement real evaluation using metrics from PerformanceMonitor and HybridMemoryStore
-- **Team:** 3 (Metacognitive)
-
-#### META-002: PerformanceMonitor — Threshold checking
-- **File:** `src/MetacognitiveLayer/PerformanceMonitoring/PerformanceMonitor.cs`
-- **Line 108:** `// TODO: Implement threshold checking logic` — returns Array.Empty
-- **Fix:** Implement configurable threshold comparison against collected metrics
-- **Team:** 3 (Metacognitive)
-
-#### META-003: ACPHandler — Tool execution
-- **File:** `src/MetacognitiveLayer/Protocols/ACP/ACPHandler.cs`
-- **Line 240:** `// TODO: Implement actual tool execution logic`
-- **Fix:** Implement tool dispatch based on registered tool interfaces
-- **Team:** 3 (Metacognitive)
-
-#### META-004: SessionManager — UpdateSession
-- **File:** `src/MetacognitiveLayer/Protocols/Common/SessionManager.cs`
-- **Line 86:** Placeholder returning Task.CompletedTask
-- **Fix:** Implement session state persistence
-- **Team:** 3 (Metacognitive)
-
-#### META-005: LearningManager — 48 framework-enablement stubs
-- **File:** `src/MetacognitiveLayer/ContinuousLearning/LearningManager.cs`
-- **Lines:** 21, 27, 61, 72, 83, 94, 105, 116, 127, 138, 149, 160, 171, 182, 193, 204, 215, 226, 237, 248, 259, 270, 281, 292, 303, 314, 325, 336, 347, 358, 369, 380, 391, 402, 413, 424, 435, 446, 457, 468, 479, 490, 501, 512, 523, 534, 545, 556
-- **Pattern:** Multiple `EnableXxxAsync()` methods all returning `Task.CompletedTask`
-- **Fix:** Group by pattern (config-based frameworks vs. service-based) and implement enable/disable logic
-- **Team:** 3 (Metacognitive)
-
-#### META-006: ContinuousLearningComponent — 2 placeholders
-- **File:** `src/MetacognitiveLayer/ContinuousLearning/ContinuousLearningComponent.cs`
-- **Lines 455, 461:** Placeholder implementations
-- **Fix:** Complete implementation
-- **Team:** 3 (Metacognitive)
+#### ~~META-001: SelfEvaluator — 4 TODO methods~~ DONE (Phase 2)
+- **Status:** Real evaluation logic: composite scoring from 7 metric types, domain-appropriate formulas, actionable recommendations. Learning progress from completionRate/iterations. Statistical insight generation with z-score outlier detection. Behavior validation for nulls, empty strings, NaN/Infinity.
-### FOUNDATION Layer
+#### ~~META-002: PerformanceMonitor — Threshold checking~~ DONE (Phase 2)
+- **Status:** Added MetricThreshold config, ThresholdCondition/ThresholdAggregation enums, IMetricsStore interface. CheckThresholdsAsync evaluates registered thresholds against aggregated stats.
-#### FND-001: DocumentIngestionFunction — Fabric integration
-- **File:** `src/FoundationLayer/DocumentProcessing/DocumentIngestionFunction.cs`
-- **Line 52:** `// Placeholder for Fabric integration`
-- **Fix:** Implement document ingestion pipeline (or proper abstraction/port)
-- **Team:** 1 (Foundation)
+#### ~~META-003: ACPHandler — Tool execution~~ DONE (Phase 2)
+- **Status:** Multi-dispatch pattern matching: IToolRunner, async Func delegate, sync Func delegate, raw fallback. RequiredTools iteration with error isolation.
-#### FND-002: EnhancedRAGSystem — Pipeline connections
-- **File:** `src/FoundationLayer/SemanticSearch/EnhancedRAGSystem.cs`
-- **Lines 208, 214:** `// Connect to Fabric/Orchestrate pipelines`
-- **Fix:** Implement RAG pipeline connections
-- **Team:** 1 (Foundation)
+#### ~~META-004: SessionManager — UpdateSession~~ DONE (Phase 2)
+- **Status:** Atomic AddOrUpdate on ConcurrentDictionary. Handles re-add after cleanup timer removal.
-#### FND-003: SecretsManagementEngine — Placeholder
-- **File:** `src/FoundationLayer/Security/Engines/SecretsManagementEngine.cs`
-- **Line 117:** Placeholder
-- **Fix:** Complete secrets management implementation
-- **Team:** 1 (Foundation)
+#### ~~META-005: LearningManager — 48 framework-enablement stubs~~ DONE (Phase 2)
+- **Status:** Complete rewrite: _enabledFrameworks ConcurrentDictionary, 42-entry prerequisites map, common EnableFrameworkAsync helper. All 48 methods now one-liner delegates with feature flag checks and prerequisite validation.
-### REASONING Layer
+#### ~~META-006: ContinuousLearningComponent — 2 placeholders~~ DONE (Phase 2)
+- **Status:** IntegrateWithFabricForFeedbackAsync generates LLM learning summaries, stores EnrichedFeedback in CosmosDB. IntegrateWithFabricForInteractionAsync detects weak dimensions (<0.7), generates learning signals.
-#### RSN-001: SystemsReasoner — 2 placeholders
-- **File:** `src/ReasoningLayer/SystemsReasoning/SystemsReasoner.cs`
-- **Lines 79, 85:** Placeholder implementations
-- **Fix:** Implement systems-level reasoning logic
-- **Team:** 2 (Reasoning)
+### FOUNDATION Layer
----
+#### ~~FND-001: DocumentIngestionFunction — Fabric integration~~ DONE (Phase 1)
+- **Status:** Created `IFabricDataIntegrationPort` interface. Implemented real integration logic with graceful fallback.
-## P1-HIGH: CI/CD & DevOps (Team 8)
+#### ~~FND-002: EnhancedRAGSystem — Pipeline connections~~ DONE (Phase 1)
+- **Status:** Created `IDataPipelinePort` interface with `ConnectToFabricEndpointsAsync`, `TriggerDataFactoryPipelineAsync`, `GetPipelineRunStatusAsync`. Full implementation with error handling.
-### CICD-001: Add CodeQL Security Scanning
-- **Create:** `.github/workflows/codeql.yml`
-- **Purpose:** Automated security vulnerability scanning for C# code
-- **Trigger:** PR + weekly schedule
-- **Team:** 8 (CI/CD)
+#### ~~FND-003: SecretsManagementEngine — Placeholder~~ DONE (Phase 1)
+- **Status:** `DeleteSecretAsync` now validates inputs, throws on missing secrets, clears sensitive data on removal.
-### CICD-002: Add Dependabot Configuration
-- **Create:** `.github/dependabot.yml`
-- **Purpose:** Automated NuGet and GitHub Actions version updates
-- **Team:** 8 (CI/CD)
+### REASONING Layer
-### CICD-003: Create Dockerfile
-- **Create:** `Dockerfile` (multi-stage .NET 9 build)
-- **Purpose:** Containerize the application for deployment
-- **Team:** 8 (CI/CD)
+#### ~~RSN-001: SystemsReasoner — 2 placeholders~~ DONE (Phase 1)
+- **Status:** Both methods (`IntegrateWithFabricDataEndpointsAsync`, `OrchestrateDataFactoryPipelinesAsync`) fully implemented with LLM-based logic, feature flags, typed result objects, XML docs.
-### CICD-004: Create docker-compose for Local Dev
-- **Create:** `docker-compose.yml`
-- **Services:** Redis, Qdrant, Azurite (Blob emulator), CosmosDB emulator
-- **Purpose:** Local development environment matching production dependencies
-- **Team:** 8 (CI/CD)
+#### ~~RSN-002: DomainSpecificReasoner — placeholder~~ DONE (Phase 1)
+- **Status:** Created `IDomainKnowledgePort` interface. Removed `Task.Delay(100)` and hardcoded data. Real port-based retrieval.
-### CICD-005: Create Makefile
-- **Create:** `Makefile`
-- **Targets:** build, test, coverage, format, clean, docker-up, docker-down
-- **Team:** 8 (CI/CD)
+#### ~~RSN-003: ValueGenerationEngine — placeholder data~~ DONE (Phase 1)
+- **Status:** Replaced hardcoded strengths/opportunities with data-driven `DeriveStrengths()` and `DeriveDevelopmentOpportunities()` methods.
-### CICD-006: Add PR and Issue Templates
-- **Create:** `.github/pull_request_template.md`, `.github/ISSUE_TEMPLATE/`
-- **Team:** 8 (CI/CD)
+#### ~~RSN-004: AnalyticalReasoner — 3 Task.Delay~~ DONE (Phase 1)
+- **Status:** Created `IDataPlatformIntegrationPort` interface. All 3 `Task.Delay` removed, replaced with port-based integration.
-### CICD-007: Add Deployment Pipeline
-- **Create:** `.github/workflows/deploy.yml`
-- **Purpose:** Build Docker image -> Push to ACR -> Deploy to staging -> Manual gate -> Production
-- **Team:** 8 (CI/CD)
+---
-### CICD-008: Add Coverage Reporting
-- **Fix:** Current `build.yml` collects coverage but doesn't publish
-- **Add:** Codecov or SonarQube dashboard integration, coverage badge in README
-- **Team:** 8 (CI/CD)
+## P1-HIGH: CI/CD & DevOps (Team 8)
----
+### ~~CICD-001: Add CodeQL Security Scanning~~ DONE (Phase 1)
+- **Status:** Created `.github/workflows/codeql.yml` — C# analysis, PR + weekly triggers, CodeQL v3.
-## P1-HIGH: Infrastructure-as-Code (Team 9)
+### ~~CICD-002: Add Dependabot Configuration~~ DONE (Phase 1)
+- **Status:** Created `.github/dependabot.yml` — NuGet + GitHub Actions ecosystems, grouped updates.
-### IaC-001: Terraform Module — CosmosDB
-- **Create:** `infra/modules/cosmosdb/` (main.tf, variables.tf, outputs.tf)
-- **Resources:** azurerm_cosmosdb_account, azurerm_cosmosdb_sql_database
-- **Referenced by:** src/FoundationLayer/AzureCosmosDB/CosmosDbAdapter.cs
-- **Team:** 9 (Infra)
+### ~~CICD-003: Create Dockerfile~~ DONE (Phase 1)
+- **Status:** Created `Dockerfile` — multi-stage .NET 9 build, non-root user, configurable entrypoint.
-### IaC-002: Terraform Module — Blob Storage
-- **Create:** `infra/modules/storage/`
-- **Resources:** azurerm_storage_account, azurerm_storage_container, Data Lake
-- **Referenced by:** src/FoundationLayer/AzureBlobStorage/BlobStorageManager.cs
-- **Team:** 9 (Infra)
+### ~~CICD-004: Create docker-compose for Local Dev~~ DONE (Phase 1)
+- **Status:** Created `docker-compose.yml` — Redis, Qdrant, Azurite with health checks and persistent volumes.
-### IaC-003: Terraform Module — Redis Cache
-- **Create:** `infra/modules/redis/`
-- **Resources:** azurerm_redis_cache
-- **Referenced by:** HybridMemoryStore (StackExchange.Redis v2.8.41)
-- **Team:** 9 (Infra)
+### ~~CICD-005: Create Makefile~~ DONE (Phase 1)
+- **Status:** Created `Makefile` — build, test, coverage, format, lint, clean, docker-up/down, help targets.
-### IaC-004: Terraform Module — Qdrant Vector DB
-- **Create:** `infra/modules/qdrant/`
-- **Resources:** azurerm_container_group (Qdrant runs as container)
-- **Referenced by:** src/FoundationLayer/VectorDatabase/QdrantVectorDatabaseAdapter.cs
-- **Team:** 9 (Infra)
+### ~~CICD-006: Add PR and Issue Templates~~ DONE (Phase 1)
+- **Status:** Created PR template + bug report + feature request YAML forms.
-### IaC-005: Terraform Module — Azure OpenAI
-- **Create:** `infra/modules/openai/`
-- **Resources:** azurerm_cognitive_account, azurerm_cognitive_deployment (GPT-3.5, 4o, 4.1, Embeddings)
-- **Team:** 9 (Infra)
+### ~~CICD-007: Add Deployment Pipeline~~ DONE (Phase 5)
+- **Status:** Created `.github/workflows/deploy.yml` — Build Docker image, push to ACR, deploy to staging (Kustomize + AKS), manual gate via GitHub Environments, deploy to production, health checks + smoke tests, Slack failure notifications. Triggered by successful build.yml or manual dispatch. Supports skip-staging and image-tag overrides.
-### IaC-006: Terraform Module — Key Vault
-- **Create:** `infra/modules/keyvault/`
-- **Resources:** azurerm_key_vault, azurerm_key_vault_secret (for connection strings)
-- **Team:** 9 (Infra)
+### ~~CICD-008: Add Coverage Reporting~~ DONE (Phase 5)
+- **Status:** Created `.github/workflows/coverage.yml` — Runs on PRs + pushes to main, collects opencover coverage, generates HTML/Cobertura/Markdown reports via ReportGenerator, uploads to Codecov, posts sticky PR comment with coverage summary, writes GitHub job summary. Added `codecov.yml` config with per-layer components, 80% patch target. Added coverage + deploy badges to README.
-### IaC-007: Terraform Module — AI Search
-- **Create:** `infra/modules/ai-search/`
-- **Resources:** azurerm_search_service
-- **Team:** 9 (Infra)
+---
-### IaC-008: Terraform Module — Monitoring
-- **Create:** `infra/modules/monitoring/`
-- **Resources:** azurerm_application_insights, azurerm_log_analytics_workspace
-- **Team:** 9 (Infra)
+## P1-HIGH: Infrastructure-as-Code (Team 9)
-### IaC-009: Terraform Module — Networking
-- **Create:** `infra/modules/networking/`
-- **Resources:** azurerm_virtual_network, azurerm_private_endpoint (for CosmosDB, Redis, Storage)
-- **Team:** 9 (Infra)
+### ~~IaC-001 through IaC-009: All 9 Terraform Modules~~ DONE (Phase 1)
+- **Status:** Created `infra/modules/` with cosmosdb, storage, redis, qdrant, openai, keyvault, ai-search, monitoring, networking — 32 .tf files total. Root module wires all modules together with Key Vault secret storage.
-### IaC-010: Terragrunt Root Config + Dev Environment
-- **Create:** `infra/terragrunt.hcl`, `infra/environments/dev/terragrunt.hcl`
-- **Purpose:** Orchestrate modules with environment-specific variables
-- **Team:** 9 (Infra)
+### ~~IaC-010: Terragrunt Root Config + Dev Environment~~ DONE (Phase 1)
+- **Status:** Created `infra/terragrunt.hcl` and `infra/environments/dev/terragrunt.hcl`.
-### IaC-011: Kubernetes Manifests
-- **Create:** `k8s/base/` (deployment.yaml, service.yaml, configmap.yaml)
-- **Create:** `k8s/overlays/{dev,staging,prod}/kustomization.yaml`
-- **Team:** 9 (Infra)
+### ~~IaC-011: Kubernetes Manifests~~ DONE (Phase 1)
+- **Status:** Created `k8s/base/` (deployment, service, configmap, kustomization) and `k8s/overlays/` (dev, staging, prod) — 7 YAML files with Kustomize.
---
## P2-MEDIUM: Business Application Stubs
-### BIZ-001: CustomerIntelligenceManager — 4 fake-data methods
-- **File:** `src/BusinessApplications/CustomerIntelligence/CustomerIntelligenceManager.cs`
-- **Line 44:** `// TODO: Implement actual customer profile retrieval` — uses Task.Delay, returns sample data
-- **Line 81:** `// TODO: Implement actual segment retrieval logic`
-- **Line 136:** `// TODO: Implement actual insight generation logic`
-- **Line 199:** `// TODO: Implement actual prediction logic`
-- **Fix:** Integrate with HybridMemoryStore for profile retrieval, reasoning engines for insights
-- **Team:** 5 (Business)
+### ~~BIZ-001: CustomerIntelligenceManager — 4 fake-data methods~~ DONE (Phase 3)
+- **Status:** Added `ICustomerDataPort` interface. All 4 methods now use port-based data retrieval: profile lookup, segment queries, LLM-driven insight generation from interaction history, vector similarity + LLM for behavioral predictions. All Task.Delay and TODO comments removed.
-### BIZ-002: DecisionSupportManager — 4 hardcoded methods
-- **File:** `src/BusinessApplications/DecisionSupport/DecisionSupportManager.cs`
-- **Line 35:** `// TODO: Implement actual decision analysis logic`
-- **Line 51:** `// TODO: Implement actual risk evaluation logic` — returns `{ riskLevel: low, riskScore: 0.1 }`
-- **Line 67:** `// TODO: Implement actual recommendation generation logic` — returns empty arrays
-- **Line 83:** `// TODO: Implement actual outcome simulation logic` — returns empty results
-- **Fix:** Integrate with ConclAIve reasoning engines for real analysis
-- **Team:** 5 (Business)
+### ~~BIZ-002: DecisionSupportManager — 4 hardcoded methods~~ DONE (Phase 3)
+- **Status:** Added `IDecisionAnalysisPort` interface. All 4 methods now delegate to port: option scoring, risk assessment, recommendation generation, outcome simulation. Input validation added. All TODO comments removed.
-### BIZ-003: ResearchAnalyst — 4 fake-data methods
-- **File:** `src/BusinessApplications/ResearchAnalysis/ResearchAnalyst.cs`
-- **Line 47:** `// TODO: Implement actual research analysis logic`
-- **Line 87:** `// TODO: Implement actual research result retrieval logic`
-- **Line 122:** `// TODO: Implement actual research search logic`
-- **Line 161:** `// TODO: Implement actual research update logic`
-- **Fix:** Integrate with SemanticSearch/RAG and knowledge graph for real research
-- **Team:** 5 (Business)
+### ~~BIZ-003: ResearchAnalyst — 4 fake-data methods~~ DONE (Phase 3)
+- **Status:** Added `IResearchDataPort` + `IResearchAnalysisPort` interfaces. LLM-based topic analysis with persistence, semantic vector search with text fallback, read-modify-write update cycle with re-indexing. All Task.Delay and TODO comments removed.
-### BIZ-004: ConvenerController — 2 NotImplemented features
-- **File:** `src/BusinessApplications/ConvenerServices/ConvenerController.cs`
-- **Lines 151-161:** Innovation Spread tracking + Learning Catalyst recommendations
-- **Fix:** Implement endpoints per docs/prds/03-convener/convener-backend.md
+### ~~BIZ-004: ConvenerController — 2 NotImplemented features~~ DONE (Phase 7)
+- **Status:** Both placeholder endpoints replaced with full async implementations:
+ - `GetInnovationSpread`: New `IInnovationSpreadPort` interface with `InnovationSpreadResult`, `AdoptionEvent`, `SpreadPhase` (Rogers diffusion model). Controller: tenant scoping, null check, audit logging, error handling.
+ - `GetLearningRecommendations`: New `ILearningCatalystPort` interface with `LearningCatalystRequest/Response`, `LearningRecommendation`, `SkillGap`, `LearningActivityType`. Controller: user ID from claims, tenant scoping, error handling.
+ - Created `DiscoverChampionsUseCase` + `IChampionDiscoveryPort` + DTOs (resolves broken imports).
+ - Fixed ConvenerController: null guard constructors, correct namespace imports, `GetTenantIdFromClaims` returns nullable.
+ - Updated `ConvenerServices.csproj`: added MetacognitiveLayer + ASP.NET MVC references.
- **Team:** 5 (Business)
+### ~~BIZ-005: KnowledgeManager — 28 Task.Delay stubs~~ DONE (Phase 3)
+- **Status:** Complete refactor: removed 7-way framework branching. Added `IKnowledgeStorePort` interface. All 28 `Task.Delay(1000)` removed. 4 methods now delegate to port with CancellationToken, input validation, structured logging. File reduced from 399 to 173 lines.
+
---
## P2-MEDIUM: Missing Test Coverage
-### TST-001: MultiAgentOrchestrationEngine tests
-- **Gap:** No test file exists for the core multi-agent engine
-- **Team:** 4 (Agency)
-- **Note:** Tracked as P1 under AGN-006 — this entry kept for cross-reference
+### ~~TST-001: MultiAgentOrchestrationEngine tests~~ DONE (Phase 2)
+- **Status:** Created `tests/AgencyLayer/MultiAgentOrchestration/MultiAgentOrchestrationEngineTests.cs` — 22 tests covering constructor guards, all coordination patterns, autonomy, ethical checks, learning insights, spawning.
-### TST-002: SelfEvaluator tests
-- **Gap:** No dedicated test file
-- **Team:** 3 (Metacognitive)
+### ~~TST-002: SelfEvaluator tests~~ DONE (Phase 2)
+- **Status:** Created `tests/MetacognitiveLayer/SelfEvaluation/SelfEvaluatorTests.cs` — 17 tests covering all 4 evaluation methods, dispose, interface compliance.
-### TST-003: LearningManager tests
-- **Gap:** 48 methods with no test coverage
-- **Team:** 3 (Metacognitive)
+### ~~TST-003: LearningManager tests~~ DONE (Phase 4)
+- **Status:** Created `tests/MetacognitiveLayer/ContinuousLearning/LearningManagerTests.cs` — 43 test methods (~103 test case invocations) covering constructor guards, EnabledFrameworks property, IsFrameworkEnabled, core learning operations, all 7 framework families (ADK, LangGraph, CrewAI, SemanticKernel, AutoGen, Smolagents, AutoGPT), sub-feature prerequisite validation, flag-disabled paths, idempotency, concurrency safety, logging verification.
+- **Team:** 7 (Testing)
-### TST-004: PerformanceMonitor tests
-- **Gap:** Limited test coverage for threshold checking
-- **Team:** 3 (Metacognitive)
+### ~~TST-004: PerformanceMonitor tests~~ DONE (Phase 2)
+- **Status:** Created `tests/MetacognitiveLayer/PerformanceMonitoring/PerformanceMonitorTests.cs` — 27 tests covering RecordMetric, GetAggregatedStats, QueryMetricsAsync, CheckThresholds, Dispose.
-### TST-005: CustomerIntelligenceManager tests
-- **Gap:** No dedicated test file
-- **Team:** 5 (Business)
+### ~~TST-004b: DecisionExecutor tests~~ DONE (Phase 2)
+- **Status:** Created `tests/AgencyLayer/DecisionExecution/DecisionExecutorComprehensiveTests.cs` — 21 tests covering constructor guards, ExecuteDecision, GetStatus, GetLogs, model validation.
-### TST-006: DecisionSupportManager tests
-- **Gap:** No dedicated test file
-- **Team:** 5 (Business)
+### ~~TST-005: CustomerIntelligenceManager tests~~ DONE (Phase 3)
+- **Status:** Created `tests/BusinessApplications.UnitTests/CustomerIntelligence/CustomerIntelligenceManagerTests.cs` — 31 tests (28 Facts + 3 Theories) covering constructor guards, all 4 methods, cancellation, error cases.
-### TST-007: ResearchAnalyst tests
-- **Gap:** No dedicated test file
-- **Team:** 5 (Business)
+### ~~TST-006: DecisionSupportManager tests~~ DONE (Phase 3)
+- **Status:** Created `tests/BusinessApplications.UnitTests/DecisionSupport/DecisionSupportManagerTests.cs` — 20 tests covering constructor, all 4 methods, input validation, dispose safety.
-### TST-008: Cross-layer integration tests
-- **Gap:** Only 1 integration test file exists
-- **Need:** DecisionExecutor->ConclAIve->Persistence flow, MultiAgent->EthicalChecks flow
-- **Team:** 6 (Quality)
+### ~~TST-007: ResearchAnalyst tests~~ DONE (Phase 3)
+- **Status:** Created `tests/BusinessApplications.UnitTests/ResearchAnalysis/ResearchAnalystTests.cs` — 38 tests (26 Facts + 4 Theories) covering constructor guards, all 4 methods, cancellation, semantic search fallback.
+
+### ~~TST-008b: KnowledgeManager tests~~ DONE (Phase 3)
+- **Status:** Created `tests/BusinessApplications.UnitTests/KnowledgeManagement/KnowledgeManagerTests.cs` — 24 tests covering all CRUD methods across 7 framework feature flags, priority ordering, no-feature fallback.
+
+### ~~TST-008: Cross-layer integration tests~~ DONE (Phase 6)
+- **Status:** Created `tests/Integration/Integration.Tests.csproj` (added to solution) — rescued orphaned `EthicalComplianceFrameworkIntegrationTests.cs` (8 existing tests now compile). Added 3 new integration test files:
+ - `DurableWorkflowCrashRecoveryTests.cs` — 9 tests: checkpoint persistence, crash recovery resume, context flow, retry success, retry exhaustion, cancellation checkpoint, purge cleanup, concurrent isolation.
+ - `DecisionExecutorIntegrationTests.cs` — 8 tests: end-to-end KG+LLM+persist flow, empty context, LLM failure, cancellation, status retrieval, log filtering, concurrent decisions. Includes `InMemoryKnowledgeGraphManager`.
+ - `ConclAIvePipelineIntegrationTests.cs` — 8 tests: debate/sequential/strategic recipes with real engines, auto-selection, independent sessions, multi-perspective trace, SLA performance. Uses `ConclAIveTestFixture` with deterministic mock LLM.
+ - Total: **25 new integration tests** + 8 existing = 33 integration tests.
+- **Team:** 7 (Testing)
---
## P2-MEDIUM: PRD Implementation (Not Yet Started)
-### PRD-001: NIST AI RMF Governance Suite (FI-03)
+### ~~PRD-001: NIST AI RMF Governance Suite (FI-03)~~ DONE (Phase 10)
- **PRDs:** `docs/prds/01-foundational/nist-ai-rmf-maturity/`
- **Deliverable:** AI risk register, maturity scoring dashboard
-- **Team:** 1 (Foundation)
-
-### PRD-002: Adaptive Balance & Continuous Improvement (FI-04)
+- **Status:** Phase 10 built complete NIST governance suite across 3 layers:
+ - **Foundation:** `NISTEvidence` module — `INISTEvidenceRepositoryPort` (6 methods), `InMemoryNISTEvidenceAdapter`, 5 model classes (NISTEvidenceRecord, EvidenceQueryFilter, EvidenceReviewStatus, EvidenceAuditEntry, EvidenceStatistics). `EvidenceArtifacts` module — `IEvidenceArtifactRepositoryPort`, `InMemoryEvidenceArtifactAdapter`, 3 models (EvidenceArtifact, ArtifactSearchCriteria, RetentionPolicy).
+ - **Reasoning:** `NISTMaturity` module — `INISTMaturityAssessmentPort` (5 methods), `INISTEvidenceStorePort`, `NISTMaturityAssessmentEngine` (pillar scoring, gap analysis, evidence management, roadmap generation), 9 model classes.
+ - **Business:** `NISTCompliance` module — `INISTComplianceServicePort` (7 methods), `NISTComplianceService`, `NISTComplianceController` (7 REST endpoints: score, checklist, evidence submit, evidence review, gap analysis, roadmap, audit), `ServiceCollectionExtensions`, 12 DTO models.
+ - **Tests:** 23 Foundation (NISTEvidence adapter) + 15 Foundation (EvidenceArtifact adapter) + 35 Reasoning (NISTMaturity engine) + 24 Business (controller) + 22 Business (service) = **119 tests**
+- **Team:** 1 (Foundation) + 2 (Reasoning) + 5 (Business)
+
+### ~~PRD-002: Adaptive Balance & Continuous Improvement (FI-04)~~ DONE (Phase 10)
- **PRDs:** `docs/prds/02-adaptive-balance/`
- **Deliverable:** Live spectrums, P95 decision error <=1%
-- **Team:** 1 (Foundation)
+- **Status:** Phase 10 built complete Adaptive Balance suite across 2 layers:
+ - **Reasoning:** `AdaptiveBalance` module — `IAdaptiveBalancePort` (6 methods), `ILearningFrameworkPort`, `IMilestoneWorkflowPort`, `IReflexionPort`, `AdaptiveBalanceEngine` (spectrum positioning with 10 dimensions, milestone workflows, override management, recommendations), `LearningFrameworkEngine` (event recording, pattern analysis, mistake prevention), `ReflexionEngine` (hallucination detection, contradiction analysis, confidence scoring), 12 model classes.
+ - **Business:** `AdaptiveBalance` module — `IAdaptiveBalanceServicePort` (6 methods), `AdaptiveBalanceService`, `AdaptiveBalanceController` (6 REST endpoints: spectrum, history, override, learning evidence, reflexion status, recommendations), `ServiceCollectionExtensions`, 11 DTO models.
+ - **Tests:** 32 Reasoning (AdaptiveBalance engine) + 12 Reasoning (LearningFramework) + 14 Reasoning (Reflexion) + 15 Business (controller) + 21 Business (service) = **94 tests**
+- **Team:** 2 (Reasoning) + 5 (Business)
-### PRD-003: Cognitive Sandwich Workflow (AC-02)
+### ~~PRD-003: Cognitive Sandwich Workflow (AC-02)~~ DONE (Phase 9)
- **PRD:** `docs/prds/01-foundational-infrastructure/mesh-orchestration-hitl.md`
- **Deliverable:** Phase-based HITL workflow, 40% hallucination reduction
+- **Status:** Phase 8 built foundation (17 models, 4 ports, engine, 27 tests). Phase 9 completed:
+ - `CognitiveSandwichController` — 6 REST endpoints (create, get, advance, step-back, audit, debt)
+ - 3 in-memory adapters (CognitiveDebt, PhaseCondition, AuditLogging)
+ - `ServiceCollectionExtensions` DI registration (4 services)
+ - 24 controller tests (null guards, all endpoints, error cases)
+ - Total: 51 tests across engine + controller
- **Team:** 4 (Agency)
-### PRD-004: Cognitive Sovereignty Control (AC-03)
+### ~~PRD-004: Cognitive Sovereignty Control (AC-03)~~ DONE (Phase 9)
- **PRD:** `docs/prds/03-agentic-cognitive-systems/human-boundary.md`
- **Deliverable:** User autonomy toggles, audit trail
+- **Status:** Phase 9 built complete module:
+ - 6 model classes (SovereigntyMode, Profile, Override, AgentAction, AuthorshipTrail, AuditEntry)
+ - 4 port interfaces (Sovereignty, Override, ActionApproval, AuthorshipTrail)
+ - `CognitiveSovereigntyEngine` — mode resolution (override → domain → default), autonomy levels (0.0–1.0)
+ - `CognitiveSovereignty.csproj` + AgencyLayer reference + solution integration
+ - 23 test methods (~31 test cases with theories)
- **Team:** 4 (Agency)
-### PRD-005: Temporal Decision Core (TR-01)
+### ~~PRD-005: Temporal Decision Core (TR-01)~~ DONE (Phase 9)
- **PRDs:** `docs/prds/04-temporal-flexible-reasoning/`
- **Deliverable:** Dual-circuit gate, adaptive window, <5% spurious temporal links
+- **Status:** Phase 9 built complete module:
+ - 7 model classes (TemporalEvent, Edge, Window, GatingDecision, Query, Graph, EdgeLog)
+ - 4 port interfaces (Event, Gate, Graph, Audit)
+ - `TemporalDecisionCoreEngine` — dual-circuit gate (CA1 promoter + L2 suppressor), adaptive window (0–20s), BFS graph traversal
+ - `TemporalDecisionCore.csproj` + ReasoningLayer reference
+ - 25 unit tests (gating, window adjustment, graph queries, audit trail)
- **Team:** 2 (Reasoning)
-### PRD-006: Memory & Flexible Strategy (TR-02)
+### ~~PRD-006: Memory & Flexible Strategy (TR-02)~~ DONE (Phase 9)
- **PRDs:** `docs/prds/04-temporal-flexible-reasoning/`
- **Deliverable:** Recall F1 +30%, recovery +50%
+- **Status:** Phase 9 built complete module:
+ - 7 model classes (MemoryRecord, RecallStrategy, RecallQuery/Result, ConsolidationResult, StrategyPerformance, MemoryStatistics)
+ - 4 port interfaces (MemoryStore, Recall, Consolidation, StrategyAdaptation)
+ - `MemoryStrategyEngine` — 5 recall strategies (ExactMatch, Fuzzy, Semantic, Temporal, Hybrid), consolidation logic, strategy adaptation
+ - `MemoryStrategy.csproj` + ReasoningLayer reference
+ - 27 unit tests (CRUD, all strategies, consolidation, cosine similarity)
- **Team:** 2 (Reasoning)
-### PRD-007: Value Generation Analytics (VI-01)
-- **PRDs:** `docs/prds/04-value-impact/value-generation/`
-- **Deliverable:** ROI dashboard, 90% telemetry coverage
+### ~~PRD-007: Value Generation Analytics (VI-01)~~ DONE (Phase 8)
+- **Status:** Phase 7 wired csproj references + fixed controller imports. Phase 8 completed:
+ - `ServiceCollectionExtensions.AddValueGenerationServices()` — 8 DI registrations (3 engine ports + 5 repository adapters)
+ - 5 in-memory adapters: `InMemoryValueDiagnosticDataRepository`, `InMemoryOrganizationalDataRepository`, `InMemoryEmployabilityDataRepository`, `InMemoryConsentVerifier`, `InMemoryManualReviewRequester`
+ - `ValueGenerationControllerTests` — 30 tests (null guards, all endpoints, consent flows, audit logging)
+ - `ValueGenerationDiagnosticEngineTests` — 12 tests (profiles, strengths, opportunities)
+ - `OrganizationalValueBlindnessEngineTests` — 11 tests (blind spots, risk scoring)
+ - `EmployabilityPredictorEngineTests` — 17 tests (consent, risk classification, manual review)
+ - Total: 70 new tests for ValueGeneration pipeline
- **Team:** 5 (Business)
-### PRD-008: Impact-Driven AI Metrics (VI-02)
+### ~~PRD-008: Impact-Driven AI Metrics (VI-02)~~ DONE (Phase 9)
- **PRDs:** `docs/prds/04-value-impact/impact-driven-ai/`
- **Deliverable:** Psychological safety score >= 80/100
+- **Status:** Phase 9 built complete module:
+ - 9 model classes (PsychologicalSafetyScore, SafetyDimension, MissionAlignment, AdoptionTelemetry, AdoptionAction, ImpactAssessment, ResistanceIndicator, ImpactReport, ConfidenceLevel)
+ - 4 port interfaces (PsychologicalSafety, MissionAlignment, AdoptionTelemetry, ImpactAssessment)
+ - `ImpactMetricsEngine` — safety scoring (6 dimensions, 70% survey + 30% behavioral), alignment, resistance detection, impact assessment
+ - `ImpactMetricsController` — 8 REST endpoints
+ - `ServiceCollectionExtensions` DI registration
+ - `ImpactMetrics.csproj` + BusinessApplications reference
+ - 31 engine tests + 25 controller tests = 56 total
- **Team:** 5 (Business)
@@ -373,33 +302,275 @@
## P3-LOW: Future Enhancements (per docs/future_enhancements.md)
-- Integration testing (Cypress E2E)
-- Internationalization (i18n: en-US, fr-FR, de-DE)
-- Advanced analytics telemetry
-- Performance monitoring instrumentation
-- WCAG 2.1 AA/AAA accessibility audit
-- Code splitting (React.lazy)
-- Service worker caching
-- Audit timeline visualizations (D3.js)
-- Real-time collaboration features
-- Notification integration (email, Teams/Slack)
+- ~~Integration testing (Cypress E2E)~~ DONE (Phase 12) — cypress.config.ts, 3 E2E test suites (dashboard, agent-control, accessibility), custom commands (login, loadDashboard, waitForWidget, assertAccessibility)
+- ~~Internationalization (i18n: en-US, fr-FR, de-DE)~~ DONE (Phase 12) — react-i18next config, 170-key locales for en-US/fr-FR/de-DE, LanguageSelector component, typed useTranslation hook
+- ~~Advanced analytics telemetry~~ DONE (Phase 11) — ITelemetryPort, TelemetryEngine (ActivitySource + Meter with 6 well-known metrics), OpenTelemetryAdapter (OTLP exporter), DI extensions, 2 test files
+- ~~Performance monitoring instrumentation~~ DONE (Phase 11) — IPerformanceMonitoringPort, InMemoryMetricsStoreAdapter (thread-safe, 10K cap), PerformanceMonitoringAdapter (dashboard summary, health status), DI extensions, 2 test files
+- ~~WCAG 2.1 AA/AAA accessibility audit~~ DONE (Phase 12) — axe-core config, SkipNavigation/FocusTrap/LiveRegion/VisuallyHidden components, useReducedMotion/useFocusVisible hooks, 50+ WCAG 2.1 criteria checklist
+- ~~Code splitting (React.lazy)~~ DONE (Phase 12) — LazyWidgetLoader with Suspense + ErrorBoundary, WidgetSkeleton (shimmer), WidgetErrorFallback, lazy widget registry for all panels
+- ~~Service worker caching~~ DONE (Phase 12) — Cache-first for widgets, network-first for APIs, offline manager with request queuing + background sync, registration with update notifications, cache versioning
+- ~~Audit timeline visualizations (D3.js)~~ DONE (Phase 12) — AuditTimeline (zoom/pan, severity colors), MetricsChart (real-time line chart with thresholds), AgentNetworkGraph (force-directed), useD3 hook, light/dark themes
+- ~~Real-time collaboration features~~ DONE (Phase 11) — IRealTimeNotificationPort, CognitiveMeshHub (SignalR typed hub), SignalRNotificationAdapter (presence tracking, dashboard groups, agent subscriptions), 7 models, DI extensions, 2 test files
+- ~~Notification integration (email, Teams/Slack)~~ DONE (Phase 11) — SlackNotificationService (Block Kit), MicrosoftTeamsNotificationService (Adaptive Cards), WebhookNotificationService (HMAC-SHA256 signing), 3 test files
+
+---
+
+## P0-CRITICAL: Frontend API Integration (Team 10 — FRONTEND)
+
+> All UI data is currently mocked. 13 backend controllers exist with no frontend wiring. This is the #1 gap.
+
+### FE-001: Generate TypeScript API Client from OpenAPI
+- **File:** `docs/openapi.yaml` → `src/UILayer/web/src/lib/api/generated/`
+- **Action:** Use `openapi-typescript-codegen` or `orval` to auto-generate typed API client from OpenAPI spec. Configure as npm script (`npm run generate-api`).
+- **Team:** 10 (Frontend)
+
+### FE-002: Replace Mocked API Service with Real Backend Integration
+- **File:** `src/UILayer/web/src/services/api.ts`
+- **Action:** Replace hardcoded `DashboardAPI` singleton with real HTTP calls to all 13 backend controllers. Wire generated client from FE-001. Remove all `Math.random()` and simulated delays.
+- **Depends on:** FE-001
+- **Team:** 10 (Frontend)
+
+### FE-003: Add SignalR Client for Real-Time Updates
+- **File:** `src/UILayer/web/src/lib/realtime/` (new)
+- **Action:** Install `@microsoft/signalr`, connect to `CognitiveMeshHub`. Replace 5-second polling interval with real-time subscriptions (JoinDashboardGroup, SubscribeToAgent). Add reconnection logic with exponential backoff.
+- **Team:** 10 (Frontend)
+
+### FE-004: Add Authentication Flow
+- **Files:** `src/UILayer/web/src/app/login/`, `src/UILayer/web/src/contexts/AuthContext.tsx`
+- **Action:** Login page, JWT token management (access + refresh), protected route wrapper, auth context provider, logout, token refresh interceptor. Backend already has Bearer JWT + OAuth2.
+- **Team:** 10 (Frontend)
+
+---
+
+## P1-HIGH: Frontend State & Infrastructure (Team 10 — FRONTEND)
+
+### FE-005: Add Global State Management
+- **File:** `src/UILayer/web/src/stores/` (new)
+- **Action:** Add Zustand stores for: auth state, agent registry, dashboard data, notifications, user preferences. Replace scattered useState with centralized stores. Persist preferences to localStorage.
+- **Depends on:** FE-004
+- **Team:** 10 (Frontend)
+
+### FE-006: Add Error Handling Infrastructure
+- **Files:** `src/UILayer/web/src/components/ErrorBoundary/`, `src/UILayer/web/src/lib/api/interceptors.ts`
+- **Action:** Global error boundary wrapping app, toast notifications (sonner library), API error interceptor (401 → redirect to login, 403 → forbidden page, 500 → error toast), retry logic for transient failures.
+- **Team:** 10 (Frontend)
+
+### FE-007: Add Loading States & Skeleton Screens
+- **Files:** `src/UILayer/web/src/components/skeletons/`
+- **Action:** Skeleton screens for dashboard panels, agent lists, metrics cards, settings forms. Optimistic updates for mutations. Suspense boundaries for route-level loading.
+- **Team:** 10 (Frontend)
+
+### FE-008: Settings Page — Theme, Language & Accessibility
+- **Files:** `src/UILayer/web/src/app/settings/`
+- **Action:** Settings page with sections: Theme (light/dark/system), Language (en-US/fr-FR/de-DE using existing i18n), Accessibility (reduced motion, high contrast, font size), Data & Privacy (consent toggles). Persist to backend user preferences API + localStorage fallback.
+- **Team:** 10 (Frontend)
+
+### FE-009: Notification Preferences UI
+- **Files:** `src/UILayer/web/src/app/settings/notifications/`
+- **Action:** Notification preferences panel: channel toggles (email, push, SMS, in-app), category filters (approvals, security, system), quiet hours with timezone. Wire to backend Notification Preferences API from OpenAPI spec.
+- **Team:** 10 (Frontend)
+
+### FE-010: User Profile Page
+- **Files:** `src/UILayer/web/src/app/profile/`
+- **Action:** User profile view: account info, role display, consent management (GDPR), data export request, session history. Wire to ComplianceController for consent records and data subject requests.
+- **Team:** 10 (Frontend)
+
+---
+
+## P1-HIGH: Widget PRD Implementations (Team 10 — FRONTEND)
+
+> 17 widget PRDs exist in docs/prds/. Only generic components are built — no PRD-specific widgets.
+
+### FE-011: NIST Compliance Dashboard Widget
+- **PRD:** `docs/prds/01-foundational/nist-ai-rmf-maturity/mesh-widget.md`
+- **Backend:** `NISTComplianceController` — score, checklist, evidence, gap analysis, roadmap
+- **Action:** Maturity score gauge, pillar breakdown, evidence upload form, gap analysis table, roadmap timeline. Wire to 7 REST endpoints.
+- **Team:** 10 (Frontend)
+
+### FE-012: Adaptive Balance Widget
+- **PRD:** `docs/prds/02-adaptive-balance/mesh-widget.md`
+- **Backend:** `AdaptiveBalanceController` — spectrum, history, override, learning, reflexion, recommendations
+- **Action:** Interactive spectrum sliders (5 dimensions), override controls with approval flow, audit trail, recommendation cards. Real-time updates via SignalR.
+- **Team:** 10 (Frontend)
+
+### FE-013: Value Generation Widget (upgrade existing)
+- **PRD:** `docs/prds/04-value-impact/value-generation/mesh-widget.md`
+- **Backend:** `ValueGenerationController` — value-diagnostic, org-blindness, employability
+- **Action:** Upgrade existing TwoHundredDollarTestWidget + ValueDiagnosticDashboard. Add consent flow, full scoring visualization, strengths/opportunities radar chart, org blindness heatmap. Wire to real API (replace mocked adapter).
+- **Team:** 10 (Frontend)
+
+### FE-014: Impact Metrics Widget
+- **PRD:** `docs/prds/04-value-impact/impact-driven-ai/mesh-widget.md`
+- **Backend:** `ImpactMetricsController` — safety, alignment, adoption, assessment
+- **Action:** Psychological safety gauge (6 dimensions), mission alignment radar, adoption telemetry timeline, resistance indicator cards. 8 REST endpoints to wire.
+- **Team:** 10 (Frontend)
+
+### FE-015: Cognitive Sandwich Widget
+- **PRD:** `docs/prds/01-foundational-infrastructure/mesh-orchestration-hitl.md`
+- **Backend:** `CognitiveSandwichController` — create, get, advance, step-back, audit, debt
+- **Action:** Phase stepper UI (Human→AI→Human), HITL approval modal, cognitive debt tracker, audit log viewer. Real-time phase updates via SignalR.
+- **Team:** 10 (Frontend)
+
+---
+
+## P2-MEDIUM: Additional Widget PRDs (Team 10 — FRONTEND)
+
+### FE-016: Context Engineering Widget
+- **PRD:** `docs/prds/03-agentic-cognitive-systems/context-engineering-widget.md`
+- **Action:** AI context frame management UI — context window visualizer, token budget, frame composition.
+- **Team:** 10 (Frontend)
+
+### FE-017: Agentic System Control Widget (upgrade existing)
+- **PRD:** `docs/prds/07-agentic-systems/agentic-ai-system/mesh-widget.md`
+- **Backend:** `AgentController` — registry CRUD, orchestrate, authority
+- **Action:** Upgrade existing AgentControlCenter. Add agent lifecycle management (register, update, retire), authority configuration form, orchestration trigger with results viewer. Wire to real agent API.
+- **Team:** 10 (Frontend)
+
+### FE-018: Convener Widget
+- **PRD:** `docs/prds/03-convener/convener-widget.md`
+- **Backend:** `ConvenerController` — innovation spread, learning catalyst
+- **Action:** Innovation spread visualization (Rogers diffusion curve), learning recommendation cards, champion discovery with skill matching.
+- **Team:** 10 (Frontend)
+
+### FE-019: Widget Marketplace UI
+- **Backend:** C# `WidgetRegistry`, `PluginOrchestrator`, `MarketplaceEntry` models
+- **Action:** Widget marketplace page: browse available widgets, install/uninstall, version management, security sandbox info. Wire to WidgetRegistry C# service.
+- **Team:** 10 (Frontend)
+
+### FE-020: Organizational Mesh Widget
+- **PRD:** `docs/prds/08-organizational-transformation/org-mesh-widget.md`
+- **Action:** Organization-level cognitive mesh visualization — department network graph, capability heatmap, transformation progress tracker.
+- **Team:** 10 (Frontend)
+
+---
+
+## P2-MEDIUM: App Structure & Navigation (Team 10 — FRONTEND)
+
+### FE-021: Multi-Page Routing
+- **Files:** `src/UILayer/web/src/app/` — new route directories
+- **Action:** Create pages: `/dashboard`, `/settings`, `/agents`, `/compliance`, `/analytics`, `/marketplace`. Add route-level layouts with shared sidebar. Use Next.js App Router with loading.tsx and error.tsx per route.
+- **Team:** 10 (Frontend)
+
+### FE-022: Navigation Component
+- **Files:** `src/UILayer/web/src/components/Navigation/`
+- **Action:** Sidebar navigation with collapsible sections, breadcrumbs, mobile hamburger menu, responsive drawer. Active route highlighting. User avatar + quick settings in header.
+- **Team:** 10 (Frontend)
+
+### FE-023: Role-Based UI Gating
+- **Files:** `src/UILayer/web/src/lib/auth/permissions.ts`
+- **Action:** Role-based component visibility (Admin, Analyst, Viewer). Admin-only routes (agent registry CRUD, settings). Permission-gated buttons and forms. Roles from JWT claims.
+- **Depends on:** FE-004
+- **Team:** 10 (Frontend)
+
+---
+
+## P2-MEDIUM: Frontend CI/CD & Deployment (Teams 8, 9, 10)
+
+### FECICD-001: Add Frontend Build/Test/Lint to CI Pipeline
+- **File:** `.github/workflows/build.yml` (update existing)
+- **Action:** Add frontend job: `npm ci`, `npm run lint`, `npm run build`, `npm test -- --ci --coverage`. Fail pipeline on lint errors or test failures. Upload coverage to Codecov.
+- **Team:** 8 (CI/CD)
+
+### FECICD-002: Frontend Docker Container
+- **File:** `src/UILayer/web/Dockerfile` (new)
+- **Action:** Multi-stage Docker build: Node 20 build stage → Nginx alpine runtime. Add `nginx.conf` for SPA routing (try_files fallback). Health check endpoint. Environment variable injection at runtime.
+- **Team:** 8 (CI/CD) + 9 (Infra)
+
+### FECICD-003: Add Frontend to docker-compose.yml
+- **File:** `docker-compose.yml` (update)
+- **Action:** Add `web` service: build from `src/UILayer/web/Dockerfile`, port 3000, depends_on backend API, environment variables for API URL, health check.
+- **Team:** 8 (CI/CD)
+
+### FECICD-004: Frontend Deployment Pipeline
+- **File:** `.github/workflows/deploy.yml` (update)
+- **Action:** Add frontend deployment step: build Docker image, push to ACR, deploy to Azure Static Web Apps or AKS alongside backend. Environment-specific API URL injection.
+- **Team:** 8 (CI/CD)
+
+### FECICD-005: Kubernetes Frontend Manifests
+- **Files:** `k8s/base/frontend-deployment.yaml`, `k8s/base/frontend-service.yaml`
+- **Action:** K8s deployment for frontend container: Nginx serving Next.js static export, Ingress with TLS, ConfigMap for API URL, HPA for auto-scaling. Kustomize overlays for dev/staging/prod.
+- **Team:** 9 (Infra)
+
+### FECICD-006: Terraform Frontend Infrastructure
+- **File:** `infra/modules/frontend/` (new)
+- **Action:** Terraform module for Azure Static Web Apps or Azure CDN for frontend hosting. Custom domain, TLS certificate, CDN caching rules, WAF protection.
+- **Team:** 9 (Infra)
+
+---
+
+## P2-MEDIUM: Frontend Testing (Teams 7, 10)
+
+### FETEST-001: Component Unit Test Coverage (Target 80%)
+- **Files:** `src/UILayer/web/src/components/**/*.test.tsx`
+- **Action:** Add unit tests for all 43+ components. Currently only 1 test file exists (CognitiveMeshButton). Test rendering, user interactions, accessibility, error states. Use Jest + Testing Library.
+- **Team:** 7 (Testing) + 10 (Frontend)
+
+### FETEST-002: API Integration Tests
+- **Files:** `src/UILayer/web/src/lib/api/__tests__/`
+- **Action:** Test generated API client against mock server (MSW — Mock Service Worker). Verify all 13 controller integrations. Test error handling, retry logic, auth token injection.
+- **Team:** 7 (Testing) + 10 (Frontend)
+
+### FETEST-003: E2E Tests with Real API
+- **Files:** `cypress/e2e/`
+- **Action:** Update existing Cypress tests to use real API (not mocked data). Add E2E flows: login → dashboard → agent management → settings → logout. Add API intercepts for deterministic testing.
+- **Team:** 7 (Testing)
+
+### FETEST-004: Visual Regression Testing
+- **Files:** `.storybook/`, `chromatic.config.js`
+- **Action:** Add Chromatic or Percy for visual regression on Storybook stories. Run on PR to catch unintended visual changes. Baseline all existing 3+ stories + add stories for new components.
+- **Team:** 7 (Testing)
+
+### FETEST-005: Lighthouse CI Performance Monitoring
+- **File:** `.github/workflows/lighthouse.yml` (new)
+- **Action:** Run Lighthouse CI on every PR. Set thresholds: Performance >= 80, Accessibility >= 95, Best Practices >= 90, SEO >= 80. Track bundle size with `@next/bundle-analyzer`. Fail on regression.
+- **Team:** 8 (CI/CD)
+
+---
+
+## P3-LOW: Frontend Advanced Features (Team 10 — FRONTEND)
+
+### FE-024: Dashboard Export (PDF/PNG)
+- **Action:** Export current dashboard view as PDF or PNG. Use html2canvas + jsPDF. Include timestamp and user watermark.
+- **Team:** 10 (Frontend)
+
+### FE-025: Command Palette (Cmd+K)
+- **Action:** Global keyboard shortcut (Cmd+K / Ctrl+K) to open command palette. Search across pages, agents, widgets, settings. Quick actions: toggle theme, switch language, navigate.
+- **Team:** 10 (Frontend)
+
+### FE-026: Real-Time Collaboration Presence
+- **Action:** Show active users on dashboard via SignalR presence tracking. Live cursor indicators. Collaborative widget editing.
+- **Depends on:** FE-003
+- **Team:** 10 (Frontend)
+
+### FE-027: Additional Locale Support
+- **Action:** Add es-ES (Spanish), ja-JP (Japanese), zh-CN (Chinese Simplified) to existing i18n framework. Translation files for all 170+ keys.
+- **Team:** 10 (Frontend)
+
+### FE-028: PWA Enhancements
+- **Action:** Add web app manifest, install prompt, push notifications via service worker, offline dashboard with cached data.
+- **Team:** 10 (Frontend)
---
## Summary Counts
-| Priority | Items | Description |
-|----------|-------|-------------|
-| P0-CRITICAL | 2 | Build fixes |
-| P1-HIGH (stubs) | 16 | Core stub implementations |
-| P1-HIGH (CI/CD) | 8 | Pipeline, Docker, DevEx |
-| P1-HIGH (IaC) | 11 | Terraform modules + Terragrunt + K8s |
-| P2-MEDIUM (stubs) | 4 | Business app fake data |
-| P2-MEDIUM (tests) | 8 | Missing test coverage |
-| P2-MEDIUM (PRDs) | 8 | Unstarted PRD implementations |
-| P3-LOW | 10 | Future enhancements |
-| **Total** | **67** | Actionable work items |
+| Priority | Total | Done | Remaining | Description |
+|----------|-------|------|-----------|-------------|
+| P0-CRITICAL (backend) | 3 | 3 | 0 | Build fixes + arch violations — ALL RESOLVED |
+| P0-CRITICAL (frontend) | 4 | 0 | 4 | API client gen, mock replacement, SignalR, auth flow |
+| P1-HIGH (stubs) | 16 | 16 | 0 | All core stub implementations complete |
+| P1-HIGH (CI/CD) | 8 | 8 | 0 | Pipeline, Docker, DevEx — ALL COMPLETE |
+| P1-HIGH (IaC) | 11 | 11 | 0 | Terraform modules + Terragrunt + K8s |
+| P1-HIGH (frontend) | 11 | 0 | 11 | State mgmt, error handling, settings, widget PRDs |
+| P2-MEDIUM (stubs) | 5 | 5 | 0 | BIZ-004 (ConvenerController) DONE |
+| P2-MEDIUM (tests) | 9 | 9 | 0 | 334 total new backend tests |
+| P2-MEDIUM (PRDs) | 8 | 8 | 0 | ALL 8 backend PRDs DONE |
+| P2-MEDIUM (frontend) | 14 | 0 | 14 | Widget PRDs, routing, nav, CI/CD, deployment |
+| P2-MEDIUM (frontend testing) | 5 | 0 | 5 | Component tests, E2E, visual regression, Lighthouse |
+| P3-LOW (backend) | 10 | 10 | 0 | ALL DONE — Phase 11 + Phase 12 |
+| P3-LOW (frontend) | 5 | 0 | 5 | Export, Cmd+K, collaboration, locales, PWA |
+| **Total** | **109** | **70** | **39** | Backend 100% done. Frontend integration round: 39 new items. |
---
-*Generated: 2026-02-19 | Updated with CI/CD + Infrastructure items*
+*Updated: 2026-02-20 | Phase 12 closed original 70 items (100%). Phase 13+ begins frontend integration round: 39 new items across API integration, widget PRDs, settings, testing, CI/CD, deployment, and infrastructure.*
diff --git a/AGENT_TEAMS.md b/AGENT_TEAMS.md
index f7c125a..f546f55 100644
--- a/AGENT_TEAMS.md
+++ b/AGENT_TEAMS.md
@@ -283,28 +283,78 @@ This layer depends on Foundation, Reasoning, Metacognitive, and Agency layers.
---
+### Team 10: FRONTEND — UI/Frontend Integration & Widget PRDs
+
+**Scope:** `src/UILayer/web/`, `src/UILayer/` (C# BFF services)
+
+**Current state:**
+- 43+ React components built (design system, drag-drop, visualizations, accessibility)
+- **All API data is mocked** — zero real backend integration
+- 13 backend controllers exist but no frontend wiring
+- SignalR hub exists but frontend uses fake polling
+- No auth flow, no settings page, no multi-page routing
+- Only 1 unit test, Cypress config exists but tests use mock data
+
+**Work items (39 backlog items — FE-*, FECICD-*, FETEST-*):**
+- **P0**: Generate OpenAPI client, replace mocked APIs, add SignalR, add auth flow
+- **P1**: State management, error handling, settings/preferences, 5 widget PRD implementations
+- **P2**: Additional widget PRDs, routing, navigation, role-based UI, frontend CI/CD & deployment
+- **P2**: Frontend testing — component tests (80%), API integration tests, visual regression, Lighthouse CI
+- **P3**: Dashboard export, command palette, collaboration presence, additional locales, PWA
+
+**Claude Code session prompt:** `/team-frontend`
+
+---
+
## Execution Order
+### Backend Round (Phases 1-12 — COMPLETE)
+
```text
-Phase 1 (parallel — 5 teams):
+Phase 1 (parallel — 5 teams): ✅ DONE
+-- Team 1: FOUNDATION --- Fix stubs, implement FI-02
+-- Team 2: REASONING --- Complete SystemsReasoner, add temporal features
+-- Team 6: QUALITY --- Fix build errors, XML docs, architecture check
+-- Team 8: CI/CD --- Add Docker, CodeQL, Dependabot, Makefile
+-- Team 9: INFRA --- Create Terraform modules, Terragrunt envs
-Phase 2 (parallel — 3 teams, after Phase 1 stabilizes):
+Phase 2 (parallel — 3 teams): ✅ DONE
+-- Team 3: METACOGNITIVE --- Implement 50+ stubs (SelfEvaluator, LearningManager)
+-- Team 4: AGENCY --- Fix TODO.md items, complete orchestration
+-- Team 7: TESTING --- Add missing test files, integration tests
-Phase 3 (parallel — 2 teams, after lower layers functional):
+Phase 3 (parallel — 2 teams): ✅ DONE
+-- Team 5: BUSINESS APPS --- Replace all 12 fake-data stubs
+-- Team 7: TESTING --- Add Business layer tests
-Phase 4 (final sweep):
- +-- Team 6: QUALITY --- Full build validation, architecture check
- +-- Team 7: TESTING --- Full test suite with coverage report
+Phase 4-12: ✅ DONE — Quality sweep, CI/CD, integration tests, all 8 PRDs, P3-LOW enhancements
+```
+
+### Frontend Integration Round (Phases 13-17 — NEW)
+
+```text
+Phase 13 (parallel — 2 teams): API Foundation
+ +-- Team 10: FRONTEND --- FE-001 API client gen, FE-004 auth flow, FE-005 state mgmt
+ +-- Team 8: CI/CD --- FECICD-001 frontend build/test/lint in CI pipeline
+
+Phase 14 (parallel — 3 teams): Core Integration
+ +-- Team 10: FRONTEND --- FE-002 replace mocked APIs, FE-003 SignalR, FE-006 error handling
+ +-- Team 10: FRONTEND --- FE-007 loading states, FE-008 settings page, FE-022 navigation
+ +-- Team 7: TESTING --- FETEST-001 component unit tests (80% target)
+
+Phase 15 (parallel — 3 teams): Widget PRDs & Deployment
+ +-- Team 10: FRONTEND --- FE-011 to FE-015 (5 priority widget PRDs)
+ +-- Team 8: CI/CD --- FECICD-002 to FECICD-004 (Docker, compose, deploy pipeline)
+ +-- Team 9: INFRA --- FECICD-005 K8s manifests, FECICD-006 Terraform frontend
+
+Phase 16 (parallel — 2 teams): Remaining Widgets & Testing
+ +-- Team 10: FRONTEND --- FE-016 to FE-020 (5 additional widget PRDs), FE-021 routing, FE-023 RBAC
+ +-- Team 7: TESTING --- FETEST-002 to FETEST-005 (API tests, E2E, visual regression, Lighthouse)
+
+Phase 17 (final sweep):
+ +-- Team 10: FRONTEND --- FE-024 to FE-028 (P3-LOW advanced features)
+ +-- Team 6: QUALITY --- Full validation: backend + frontend build, architecture check
+ +-- Team 7: TESTING --- Full frontend test suite with coverage report
```
---
@@ -366,6 +416,7 @@ The orchestrator is **fully autonomous across sessions**:
/team-metacognitive # Team 3: MetacognitiveLayer 50+ stubs
/team-agency # Team 4: AgencyLayer + TODO.md items
/team-business # Team 5: BusinessApplications fake-data stubs
+/team-frontend # Team 10: UI/Frontend API integration + widget PRDs
# Support teams (cross-cutting):
/team-quality # Team 6: Build health + architecture validation
@@ -406,6 +457,7 @@ The orchestrator is **fully autonomous across sessions**:
| `/team-testing` | Unit tests, integration tests, coverage, benchmarks | `tests/` |
| `/team-cicd` | Pipelines, Docker, security scanning, DevEx | `.github/`, `scripts/` |
| `/team-infra` | Terraform, Terragrunt, Docker, Kubernetes | `infra/`, `k8s/` |
+| `/team-frontend` | UI/Frontend API integration, widget PRDs, settings | `src/UILayer/web/` |
### Workflow Agents
@@ -469,18 +521,34 @@ The orchestrator is **fully autonomous across sessions**:
## Work Item Summary
-| Team | Focus | Stubs | TODOs | New Files | Priority |
-|------|-------|-------|-------|-----------|----------|
-| 1 Foundation | Layer stubs + PRDs | 3 | 0 | ~10 tests | P0 |
-| 2 Reasoning | Layer stubs + PRDs | 2 | 0 | ~8 tests | P0/P1 |
-| 3 Metacognitive | 50+ stubs | 50+ | 5 | ~15 tests | P1 |
-| 4 Agency | Stubs + TODO.md | 8 | 5 | ~12 tests | P1 |
-| 5 Business | Fake-data stubs | 14 | 12 | ~20 tests | P2 |
-| 6 Quality | Build/XML/arch | -- | -- | -- | P0 |
-| 7 Testing | Test coverage | -- | -- | ~30 test files | P1 |
-| 8 CI/CD | Pipelines/Docker | -- | -- | ~8 configs | P1 |
-| 9 Infra | Terraform/K8s | -- | -- | ~20 .tf files | P1 |
-| **Total** | | **77+** | **22** | **~120+** | |
+### Backend Round (COMPLETE)
+
+| Team | Focus | Items | Status |
+|------|-------|-------|--------|
+| 1 Foundation | Layer stubs + PRDs | 3 stubs + 2 PRDs | DONE |
+| 2 Reasoning | Layer stubs + PRDs | 4 stubs + 2 PRDs | DONE |
+| 3 Metacognitive | 50+ stubs | 6 items | DONE |
+| 4 Agency | Stubs + TODO.md | 6 items + 2 PRDs | DONE |
+| 5 Business | Fake-data stubs | 5 items + 4 PRDs | DONE |
+| 6 Quality | Build/XML/arch | 3 arch violations | DONE |
+| 7 Testing | Test coverage | 9 test suites, 33 integration | DONE |
+| 8 CI/CD | Pipelines/Docker | 8 configs | DONE |
+| 9 Infra | Terraform/K8s | 11 items | DONE |
+| **Backend** | **70 items** | **70/70** | **100% DONE** |
+
+### Frontend Integration Round (NEW — 39 items)
+
+| Team | Focus | Items | New Files | Priority |
+|------|-------|-------|-----------|----------|
+| 10 Frontend | API integration, auth, state | 4 P0 + 6 P1 | ~20 files | P0/P1 |
+| 10 Frontend | Widget PRD implementations | 10 widgets | ~30 components | P1/P2 |
+| 10 Frontend | Settings, routing, navigation | 5 items | ~15 files | P1/P2 |
+| 10 Frontend | P3-LOW advanced features | 5 items | ~10 files | P3 |
+| 8 CI/CD | Frontend build/deploy pipeline | 4 items | ~6 configs | P2 |
+| 9 Infra | Frontend K8s + Terraform | 2 items | ~8 files | P2 |
+| 7 Testing | Frontend test coverage | 5 items | ~40 test files | P2 |
+| 6 Quality | Full-stack validation | 1 item | -- | P3 |
+| **Frontend** | **39 items** | **0/39** | **~129 files** | **Phase 13-17** |
---
@@ -490,12 +558,18 @@ Run `/orchestrate --status` or check `.claude/state/orchestrator.json` directly.
Manual verification:
```bash
+# Backend
dotnet build CognitiveMesh.sln
dotnet test CognitiveMesh.sln --no-build
+
+# Frontend
+cd src/UILayer/web && npm run build && npm test -- --ci && npm run lint
+
+# Stubs
grep -r "// TODO" src/ --include="*.cs" | wc -l
grep -rE "// TODO: Implement|// Placeholder|throw new NotImplementedException" src/ --include="*.cs" | wc -l
```
---
-*Generated: 2026-02-19 | 9 code teams + 5 workflow agents + autonomous state persistence*
+*Updated: 2026-02-20 | 10 code teams + 5 workflow agents + autonomous state persistence. Backend round complete (70/70). Frontend integration round: 39 new items across Phases 13-17.*
diff --git a/CognitiveMesh.sln b/CognitiveMesh.sln
index 299ef39..ebed182 100644
--- a/CognitiveMesh.sln
+++ b/CognitiveMesh.sln
@@ -21,6 +21,38 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "MetacognitiveLayer", "Metac
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "UncertaintyQuantification.Tests", "tests\MetacognitiveLayer\UncertaintyQuantification.Tests\UncertaintyQuantification.Tests.csproj", "{FAF348F7-99E7-4C20-A86D-3AC4D18DBCD6}"
EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ContinuousLearning.Tests", "tests\MetacognitiveLayer\ContinuousLearning\ContinuousLearning.Tests.csproj", "{A1B2C3D4-E5F6-7890-ABCD-EF0123456789}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Integration.Tests", "tests\Integration\Integration.Tests.csproj", "{B2C3D4E5-F678-9012-ABCD-EF1234567890}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CognitiveSandwich", "src\AgencyLayer\CognitiveSandwich\CognitiveSandwich.csproj", "{1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CognitiveSovereignty", "src\AgencyLayer\CognitiveSovereignty\CognitiveSovereignty.csproj", "{2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}"
+EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "AgencyLayer", "AgencyLayer", "{3C4D5E6F-7A8B-9C0D-1E2F-A3B4C5D6E7F8}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CognitiveSandwich.Tests", "tests\AgencyLayer\CognitiveSandwich\CognitiveSandwich.Tests.csproj", "{4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "CognitiveSovereignty.Tests", "tests\AgencyLayer\CognitiveSovereignty\CognitiveSovereignty.Tests.csproj", "{5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "TemporalDecisionCore", "src\ReasoningLayer\TemporalDecisionCore\TemporalDecisionCore.csproj", "{A1B2C3D4-0001-0001-0001-000000000001}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MemoryStrategy", "src\ReasoningLayer\MemoryStrategy\MemoryStrategy.csproj", "{A1B2C3D4-0002-0002-0002-000000000002}"
+EndProject
+Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "ReasoningLayer", "ReasoningLayer", "{A1B2C3D4-0003-0003-0003-000000000003}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "ReasoningLayer.Tests", "tests\ReasoningLayer.Tests\ReasoningLayer.Tests.csproj", "{A1B2C3D4-0004-0004-0004-000000000004}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RefactoringAgents", "src\AgencyLayer\RefactoringAgents\RefactoringAgents.csproj", "{C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RealTime", "src\AgencyLayer\RealTime\RealTime.csproj", "{8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RealTime.Tests", "tests\AgencyLayer\RealTime\RealTime.Tests.csproj", "{9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Telemetry", "src\MetacognitiveLayer\Telemetry\Telemetry.csproj", "{6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}"
+EndProject
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Telemetry.Tests", "tests\MetacognitiveLayer\Telemetry\Telemetry.Tests.csproj", "{7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}"
+EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
@@ -103,6 +135,174 @@ Global
{FAF348F7-99E7-4C20-A86D-3AC4D18DBCD6}.Release|x64.Build.0 = Release|Any CPU
{FAF348F7-99E7-4C20-A86D-3AC4D18DBCD6}.Release|x86.ActiveCfg = Release|Any CPU
{FAF348F7-99E7-4C20-A86D-3AC4D18DBCD6}.Release|x86.Build.0 = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|x64.Build.0 = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Debug|x86.Build.0 = Debug|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|Any CPU.Build.0 = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|x64.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|x64.Build.0 = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|x86.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789}.Release|x86.Build.0 = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|x64.Build.0 = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Debug|x86.Build.0 = Debug|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|Any CPU.Build.0 = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|x64.ActiveCfg = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|x64.Build.0 = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|x86.ActiveCfg = Release|Any CPU
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890}.Release|x86.Build.0 = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|x64.Build.0 = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Debug|x86.Build.0 = Debug|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|Any CPU.Build.0 = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|x64.ActiveCfg = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|x64.Build.0 = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|x86.ActiveCfg = Release|Any CPU
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6}.Release|x86.Build.0 = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|x64.Build.0 = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Debug|x86.Build.0 = Debug|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|Any CPU.Build.0 = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|x64.ActiveCfg = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|x64.Build.0 = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|x86.ActiveCfg = Release|Any CPU
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7}.Release|x86.Build.0 = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|x64.Build.0 = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Debug|x86.Build.0 = Debug|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|Any CPU.Build.0 = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|x64.ActiveCfg = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|x64.Build.0 = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|x86.ActiveCfg = Release|Any CPU
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9}.Release|x86.Build.0 = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|x64.Build.0 = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Debug|x86.Build.0 = Debug|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|Any CPU.Build.0 = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|x64.ActiveCfg = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|x64.Build.0 = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|x86.ActiveCfg = Release|Any CPU
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0}.Release|x86.Build.0 = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|x64.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Debug|x86.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|Any CPU.Build.0 = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|x64.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|x64.Build.0 = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|x86.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0001-0001-0001-000000000001}.Release|x86.Build.0 = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|x64.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Debug|x86.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|Any CPU.Build.0 = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|x64.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|x64.Build.0 = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|x86.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0002-0002-0002-000000000002}.Release|x86.Build.0 = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|x64.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Debug|x86.Build.0 = Debug|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|Any CPU.Build.0 = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|x64.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|x64.Build.0 = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|x86.ActiveCfg = Release|Any CPU
+ {A1B2C3D4-0004-0004-0004-000000000004}.Release|x86.Build.0 = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|x64.Build.0 = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Debug|x86.Build.0 = Debug|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|Any CPU.Build.0 = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|x64.ActiveCfg = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|x64.Build.0 = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|x86.ActiveCfg = Release|Any CPU
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1}.Release|x86.Build.0 = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|x64.Build.0 = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Debug|x86.Build.0 = Debug|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|Any CPU.Build.0 = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|x64.ActiveCfg = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|x64.Build.0 = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|x86.ActiveCfg = Release|Any CPU
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2}.Release|x86.Build.0 = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|x64.Build.0 = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Debug|x86.Build.0 = Debug|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|Any CPU.Build.0 = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|x64.ActiveCfg = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|x64.Build.0 = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|x86.ActiveCfg = Release|Any CPU
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6}.Release|x86.Build.0 = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|x64.Build.0 = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Debug|x86.Build.0 = Debug|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|Any CPU.Build.0 = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|x64.ActiveCfg = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|x64.Build.0 = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|x86.ActiveCfg = Release|Any CPU
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3}.Release|x86.Build.0 = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|Any CPU.Build.0 = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|x64.ActiveCfg = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|x64.Build.0 = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|x86.ActiveCfg = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Debug|x86.Build.0 = Debug|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|Any CPU.ActiveCfg = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|Any CPU.Build.0 = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|x64.ActiveCfg = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|x64.Build.0 = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|x86.ActiveCfg = Release|Any CPU
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4}.Release|x86.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
@@ -115,5 +315,21 @@ Global
{C921245D-3807-4B46-BD9D-0072DEC343E8} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
{D54F1422-0CB8-7E5F-F9CC-49C88523FCD1} = {0AB3BF05-4346-4AA6-1389-037BE0695223}
{FAF348F7-99E7-4C20-A86D-3AC4D18DBCD6} = {D54F1422-0CB8-7E5F-F9CC-49C88523FCD1}
+ {A1B2C3D4-E5F6-7890-ABCD-EF0123456789} = {D54F1422-0CB8-7E5F-F9CC-49C88523FCD1}
+ {B2C3D4E5-F678-9012-ABCD-EF1234567890} = {0AB3BF05-4346-4AA6-1389-037BE0695223}
+ {1A2B3C4D-5E6F-7A8B-9C0D-E1F2A3B4C5D6} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {2B3C4D5E-6F7A-8B9C-0D1E-F2A3B4C5D6E7} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {3C4D5E6F-7A8B-9C0D-1E2F-A3B4C5D6E7F8} = {0AB3BF05-4346-4AA6-1389-037BE0695223}
+ {4D5E6F7A-8B9C-0D1E-2F3A-B4C5D6E7F8A9} = {3C4D5E6F-7A8B-9C0D-1E2F-A3B4C5D6E7F8}
+ {5E6F7A8B-9C0D-1E2F-3A4B-C5D6E7F8A9B0} = {3C4D5E6F-7A8B-9C0D-1E2F-A3B4C5D6E7F8}
+ {A1B2C3D4-0001-0001-0001-000000000001} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {A1B2C3D4-0002-0002-0002-000000000002} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {A1B2C3D4-0003-0003-0003-000000000003} = {0AB3BF05-4346-4AA6-1389-037BE0695223}
+ {A1B2C3D4-0004-0004-0004-000000000004} = {A1B2C3D4-0003-0003-0003-000000000003}
+ {6A7B8C9D-0E1F-2A3B-4C5D-E6F7A8B9C0D1} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {7B8C9D0E-1F2A-3B4C-5D6E-F7A8B9C0D1E2} = {D54F1422-0CB8-7E5F-F9CC-49C88523FCD1}
+ {C1D2E3F4-5A6B-7C8D-9E0F-A1B2C3D4E5F6} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {8A9B0C1D-2E3F-4A5B-6C7D-E8F9A0B1C2D3} = {DDB468F9-AACD-4EAF-9EBB-8DB0972963EB}
+ {9B0C1D2E-3F4A-5B6C-7D8E-F9A0B1C2D3E4} = {3C4D5E6F-7A8B-9C0D-1E2F-A3B4C5D6E7F8}
EndGlobalSection
EndGlobal
diff --git a/Directory.Build.props b/Directory.Build.props
index ef5c9fc..218f949 100644
--- a/Directory.Build.props
+++ b/Directory.Build.props
@@ -7,6 +7,6 @@
true
true
CS1591
- NU1604;NU1602;NU1506;NU1701
+ NU1604;NU1602;NU1506;NU1701;NU1608
\ No newline at end of file
diff --git a/Directory.Packages.props b/Directory.Packages.props
index e955cdb..0def836 100644
--- a/Directory.Packages.props
+++ b/Directory.Packages.props
@@ -6,16 +6,17 @@
-
-
-
-
-
-
-
-
-
-
+
+
+
+
+
+
+
+
+
+
+
@@ -28,26 +29,43 @@
+
+
+
+
+
+
+
+
+
-
+
+
+
+
+
+
+
+
+
@@ -58,5 +76,6 @@
+
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 0000000..42bed65
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,55 @@
+# syntax=docker/dockerfile:1
+
+# ------------------------------------------------------------------
+# Stage 1: Build
+# ------------------------------------------------------------------
+FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
+WORKDIR /src
+
+# Copy solution and project files first for layer caching
+COPY CognitiveMesh.sln Directory.Build.props ./
+COPY src/FoundationLayer/*.csproj src/FoundationLayer/
+COPY src/ReasoningLayer/*.csproj src/ReasoningLayer/
+COPY src/MetacognitiveLayer/*.csproj src/MetacognitiveLayer/
+COPY src/AgencyLayer/*.csproj src/AgencyLayer/
+COPY src/BusinessApplications/*.csproj src/BusinessApplications/
+COPY src/MeshSimRuntime/*.csproj src/MeshSimRuntime/
+COPY src/Shared/*.csproj src/Shared/
+COPY src/UILayer/*.csproj src/UILayer/
+COPY tests/ tests/
+
+# Restore NuGet packages
+RUN dotnet restore CognitiveMesh.sln
+
+# Copy remaining source
+COPY . .
+
+# Build in Release mode
+RUN dotnet build CognitiveMesh.sln -c Release --no-restore
+
+# Publish the runtime project (configurable via build arg)
+ARG PUBLISH_PROJECT=src/MeshSimRuntime/MeshSimRuntime.csproj
+RUN dotnet publish "${PUBLISH_PROJECT}" -c Release --no-build -o /app/publish
+
+# ------------------------------------------------------------------
+# Stage 2: Runtime
+# ------------------------------------------------------------------
+FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS runtime
+WORKDIR /app
+
+# Create non-root user for security
+RUN adduser --disabled-password --gecos "" appuser
+
+# Copy published output
+COPY --from=build /app/publish .
+
+# Configurable entrypoint DLL name
+ENV ENTRYPOINT_DLL="MeshSimRuntime.dll"
+
+# Expose default ASP.NET Core port
+EXPOSE 8080
+
+# Switch to non-root user
+USER appuser
+
+ENTRYPOINT ["sh", "-c", "dotnet ${ENTRYPOINT_DLL}"]
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..98990e9
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,57 @@
+.PHONY: build test coverage format clean docker-up docker-down lint restore publish docker-build help
+
+# --- Configuration ---
+SOLUTION := CognitiveMesh.sln
+CONFIG := Release
+COVERAGE_DIR := TestResults/coverage
+
+# --- Build ---
+build: ## Build the solution
+ dotnet build $(SOLUTION) -c $(CONFIG)
+
+restore: ## Restore NuGet packages
+ dotnet restore $(SOLUTION)
+
+publish: ## Publish the runtime project
+ dotnet publish src/MeshSimRuntime/MeshSimRuntime.csproj -c $(CONFIG) -o out/publish
+
+# --- Test ---
+test: ## Run all tests
+ dotnet test $(SOLUTION) --no-build -c $(CONFIG)
+
+coverage: ## Run tests with code coverage (opencover format)
+ dotnet test $(SOLUTION) -c $(CONFIG) \
+ --collect:"XPlat Code Coverage;Format=opencover" \
+ --results-directory $(COVERAGE_DIR)
+ @echo "Coverage reports written to $(COVERAGE_DIR)"
+
+# --- Code Quality ---
+format: ## Format code using dotnet format
+ dotnet format $(SOLUTION) --verbosity normal
+
+lint: ## Run dotnet format in verify mode (CI-friendly)
+ dotnet format $(SOLUTION) --verify-no-changes --verbosity normal
+
+# --- Docker ---
+docker-up: ## Start local dev dependencies (Redis, Qdrant, Azurite)
+ docker compose up -d
+
+docker-down: ## Stop local dev dependencies
+ docker compose down
+
+docker-build: ## Build the application Docker image
+ docker build -t cognitive-mesh:local .
+
+# --- Cleanup ---
+clean: ## Remove build artifacts and test results
+ dotnet clean $(SOLUTION) -c $(CONFIG)
+ rm -rf out TestResults
+ rm -rf src/*/bin src/*/obj
+ rm -rf tests/*/bin tests/*/obj
+
+# --- Help ---
+help: ## Show this help message
+ @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
+ awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-15s\033[0m %s\n", $$1, $$2}'
+
+.DEFAULT_GOAL := help
diff --git a/README.md b/README.md
index 4db32aa..dd96a96 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,9 @@
# Cognitive Mesh: Enterprise AI Transformation Framework
-[](https://github.com/phoenixvc/cognitive-mesh/actions/workflows/dotnet.yml)
+[](https://github.com/phoenixvc/cognitive-mesh/actions/workflows/build.yml)
+[](https://github.com/phoenixvc/cognitive-mesh/actions/workflows/deploy.yml)
+[](https://github.com/phoenixvc/cognitive-mesh/actions/workflows/coverage.yml)
+[](https://codecov.io/gh/phoenixvc/cognitive-mesh)
[](./docs/prds/PRD-PRIORITY-STATUS.md)
[](https://opensource.org/licenses/MIT)
diff --git a/codecov.yml b/codecov.yml
new file mode 100644
index 0000000..2d934d0
--- /dev/null
+++ b/codecov.yml
@@ -0,0 +1,65 @@
+# Codecov configuration — https://docs.codecov.io/docs/codecov-yaml
+codecov:
+ require_ci_to_pass: true
+
+coverage:
+ precision: 2
+ round: down
+ range: "60...90"
+
+ status:
+ project:
+ default:
+ target: auto
+ threshold: 2% # Allow up to 2% drop without failing
+ if_ci_failed: error
+ patch:
+ default:
+ target: 80% # New code should have >= 80% coverage
+ threshold: 5%
+
+comment:
+ layout: "header, diff, flags, components"
+ behavior: default
+ require_changes: true
+ require_base: false
+ require_head: true
+
+ignore:
+ - "tests/**"
+ - "docs/**"
+ - "infra/**"
+ - "k8s/**"
+ - "tools/**"
+ - "**/Migrations/**"
+ - "**/obj/**"
+ - "**/bin/**"
+
+flags:
+ unittests:
+ paths:
+ - src/
+ carryforward: true
+
+component_management:
+ individual_components:
+ - component_id: foundation
+ name: FoundationLayer
+ paths:
+ - src/FoundationLayer/**
+ - component_id: reasoning
+ name: ReasoningLayer
+ paths:
+ - src/ReasoningLayer/**
+ - component_id: metacognitive
+ name: MetacognitiveLayer
+ paths:
+ - src/MetacognitiveLayer/**
+ - component_id: agency
+ name: AgencyLayer
+ paths:
+ - src/AgencyLayer/**
+ - component_id: business
+ name: BusinessApplications
+ paths:
+ - src/BusinessApplications/**
diff --git a/cypress.config.ts b/cypress.config.ts
new file mode 100644
index 0000000..7c823b7
--- /dev/null
+++ b/cypress.config.ts
@@ -0,0 +1,65 @@
+/**
+ * @fileoverview Cypress configuration for the Cognitive Mesh E2E test suite.
+ *
+ * Configures both end-to-end and component testing, including retry
+ * strategies, viewport defaults, and custom support file paths.
+ */
+
+import { defineConfig } from 'cypress';
+
+export default defineConfig({
+ e2e: {
+ /** Base URL of the ASP.NET Core + React dev server. */
+ baseUrl: 'http://localhost:5000',
+
+ /** Path to the E2E support file that runs before every spec. */
+ supportFile: 'cypress/support/e2e.ts',
+
+ /** Glob pattern for locating E2E spec files. */
+ specPattern: 'cypress/e2e/**/*.cy.ts',
+
+ /** Default viewport width in pixels. */
+ viewportWidth: 1280,
+
+ /** Default viewport height in pixels. */
+ viewportHeight: 720,
+
+ /** Disable video recording to speed up CI runs. */
+ video: false,
+
+ /** Capture a screenshot when a test fails during `cypress run`. */
+ screenshotOnRunFailure: true,
+
+ /** Retry configuration: retries in headless (run) mode, none in open mode. */
+ retries: {
+ runMode: 2,
+ openMode: 0,
+ },
+
+ /** Default command timeout in milliseconds. */
+ defaultCommandTimeout: 10000,
+
+ /** Timeout for page-load transitions. */
+ pageLoadTimeout: 30000,
+
+ /** Timeout for `cy.request()` network calls. */
+ responseTimeout: 15000,
+
+ /**
+ * Hook that runs once before all specs.
+ * Can be extended with database seeding, auth token generation, etc.
+ */
+ setupNodeEvents(on, config) {
+ // Register any plugins here (e.g. code coverage, visual regression).
+ return config;
+ },
+ },
+
+ component: {
+ /** Dev server configuration for component testing. */
+ devServer: {
+ framework: 'react',
+ bundler: 'webpack',
+ },
+ },
+});
diff --git a/cypress/e2e/accessibility.cy.ts b/cypress/e2e/accessibility.cy.ts
new file mode 100644
index 0000000..b225a06
--- /dev/null
+++ b/cypress/e2e/accessibility.cy.ts
@@ -0,0 +1,262 @@
+/**
+ * @fileoverview WCAG compliance E2E tests for the Cognitive Mesh UI.
+ *
+ * Uses axe-core via cypress-axe to validate:
+ * - Automated WCAG 2.1 AA compliance on each page
+ * - Color contrast ratios
+ * - Keyboard navigation and focus management
+ * - Screen reader landmarks and ARIA attributes
+ */
+
+describe('Accessibility (WCAG 2.1 AA)', () => {
+ beforeEach(() => {
+ cy.login('testuser@example.com', 'P@ssw0rd!');
+ });
+
+ // -----------------------------------------------------------------------
+ // Axe-core audits on each page
+ // -----------------------------------------------------------------------
+
+ describe('Automated axe-core Audits', () => {
+ const pages = [
+ { name: 'Home / Dashboard', path: '/' },
+ { name: 'Dashboard Overview', path: '/dashboard/main-overview' },
+ { name: 'Agent Audit Trail', path: '/agents/audit-trail' },
+ { name: 'Agent Registry', path: '/agents/registry' },
+ { name: 'Settings', path: '/settings' },
+ ];
+
+ pages.forEach(({ name, path }) => {
+ it(`should have no critical or serious a11y violations on ${name}`, () => {
+ cy.visit(path);
+ // Wait for the page to stabilize.
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+ cy.assertAccessibility();
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Color contrast
+ // -----------------------------------------------------------------------
+
+ describe('Color Contrast', () => {
+ it('should pass the axe color-contrast rule on the dashboard', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+ cy.injectAxe();
+ cy.checkA11y(undefined, {
+ runOnly: {
+ type: 'rule',
+ values: ['color-contrast'],
+ },
+ });
+ });
+
+ it('should maintain contrast on status badges', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ // Status badges use white text on colored backgrounds.
+ // Verify at least minimal contrast ratio via computed styles.
+ cy.get('span').filter('[style*="color: white"], [style*="color:white"]').each(($el) => {
+ const bgColor = $el.css('background-color');
+ // Ensure the background is not white or transparent (would fail contrast).
+ expect(bgColor).to.not.match(/rgba?\(255,\s*255,\s*255/);
+ expect(bgColor).to.not.equal('transparent');
+ });
+ });
+
+ it('should maintain contrast in dark mode', () => {
+ cy.visit('/dashboard/main-overview');
+ // Trigger dark mode if the app supports it via a toggle.
+ cy.window().then((win) => {
+ win.localStorage.setItem('theme', 'Dark');
+ });
+ cy.reload();
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+ cy.injectAxe();
+ cy.checkA11y(undefined, {
+ runOnly: {
+ type: 'rule',
+ values: ['color-contrast'],
+ },
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Keyboard navigation
+ // -----------------------------------------------------------------------
+
+ describe('Keyboard Navigation', () => {
+ it('should allow tabbing through interactive elements on the dashboard', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ // Start tabbing from the body.
+ cy.get('body').tab();
+ cy.focused().should('exist');
+
+ // Tab through several elements and verify focus moves.
+ const focusedElements: string[] = [];
+ for (let i = 0; i < 10; i++) {
+ cy.focused().then(($el) => {
+ const tagName = $el.prop('tagName');
+ const role = $el.attr('role');
+ focusedElements.push(`${tagName}[role=${role}]`);
+ });
+ cy.focused().tab();
+ }
+
+ // At least some interactive elements should have received focus.
+ cy.wrap(focusedElements).should('have.length.greaterThan', 0);
+ });
+
+ it('should allow Enter key to activate table rows', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('table tbody tr[tabindex]').first().focus();
+ cy.focused().type('{enter}');
+
+ // Pressing Enter on a row should open a details view.
+ cy.get('[role="dialog"]', { timeout: 5000 }).should('exist');
+ });
+
+ it('should trap focus inside modals', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ // Open a modal by clicking a table row.
+ cy.get('table tbody tr').first().click();
+ cy.get('[role="dialog"]', { timeout: 5000 }).should('be.visible');
+
+ // Tab forward many times and verify focus stays inside the modal.
+ for (let i = 0; i < 20; i++) {
+ cy.focused().tab();
+ cy.focused().closest('[role="dialog"]').should('exist');
+ }
+ });
+
+ it('should close modals with the Escape key', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('table tbody tr').first().click();
+ cy.get('[role="dialog"]', { timeout: 5000 }).should('be.visible');
+ cy.get('body').type('{esc}');
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should have visible focus indicators on interactive elements', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('button, a, input, select, [tabindex]').first().focus();
+ cy.focused().then(($el) => {
+ // The focused element should have a visible outline or box-shadow.
+ const outline = $el.css('outline');
+ const boxShadow = $el.css('box-shadow');
+ const outlineVisible = outline && !outline.includes('none') && !outline.includes('0px');
+ const shadowVisible = boxShadow && !boxShadow.includes('none');
+ // At least one of outline or box-shadow should provide a focus indicator.
+ // Note: Some browsers may handle this differently.
+ expect(outlineVisible || shadowVisible).to.be.true;
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Screen reader landmarks
+ // -----------------------------------------------------------------------
+
+ describe('Screen Reader Landmarks', () => {
+ it('should have a main content landmark', () => {
+ cy.visit('/');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+ cy.get('main, [role="main"]').should('exist');
+ });
+
+ it('should use proper heading hierarchy', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ // There should be at least one h1 or h2.
+ cy.get('h1, h2').should('have.length.greaterThan', 0);
+
+ // Headings should not skip levels (h1 -> h3 without h2).
+ cy.get('h1, h2, h3, h4, h5, h6').then(($headings) => {
+ let previousLevel = 0;
+ $headings.each((_, el) => {
+ const level = parseInt(el.tagName.replace('H', ''), 10);
+ // A heading can be at the same level, one level deeper, or any
+ // level shallower than the previous heading.
+ if (previousLevel > 0) {
+ expect(level).to.be.at.most(previousLevel + 1);
+ }
+ previousLevel = level;
+ });
+ });
+ });
+
+ it('should have labeled regions for widgets', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('[role="region"]').each(($region) => {
+ // Each region should have an accessible name via aria-labelledby or aria-label.
+ const hasLabelledBy = !!$region.attr('aria-labelledby');
+ const hasLabel = !!$region.attr('aria-label');
+ expect(hasLabelledBy || hasLabel).to.be.true;
+ });
+ });
+
+ it('should have accessible labels on form controls', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('input, select, textarea').each(($control) => {
+ const hasAriaLabel = !!$control.attr('aria-label');
+ const hasAriaLabelledBy = !!$control.attr('aria-labelledby');
+ const id = $control.attr('id');
+ const hasAssociatedLabel = id ? Cypress.$(`label[for="${id}"]`).length > 0 : false;
+
+ // Each form control should have at least one labelling mechanism.
+ expect(hasAriaLabel || hasAriaLabelledBy || hasAssociatedLabel).to.be.true;
+ });
+ });
+
+ it('should use role="alert" for error messages', () => {
+ cy.intercept('GET', '/api/**', {
+ statusCode: 500,
+ body: { errorCode: 'SERVER_ERROR', message: 'Internal Server Error' },
+ });
+
+ cy.visit('/dashboard/main-overview');
+ cy.get('[role="alert"]', { timeout: 15000 }).should('exist');
+ });
+
+ it('should use aria-live regions for dynamic content updates', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+ // Loading indicators and status updates should use aria-live.
+ cy.get('[aria-live]').should('have.length.greaterThan', 0);
+ });
+
+ it('should have alt text or aria-hidden on decorative images', () => {
+ cy.visit('/dashboard/main-overview');
+ cy.get('body', { timeout: 10000 }).should('be.visible');
+
+ cy.get('img').each(($img) => {
+ const alt = $img.attr('alt');
+ const ariaHidden = $img.attr('aria-hidden');
+ const role = $img.attr('role');
+ // Each image should either have alt text, be marked decorative,
+ // or have role="presentation".
+ const isAccessible = (alt !== undefined) || ariaHidden === 'true' || role === 'presentation';
+ expect(isAccessible).to.be.true;
+ });
+ });
+ });
+});
diff --git a/cypress/e2e/agent-control.cy.ts b/cypress/e2e/agent-control.cy.ts
new file mode 100644
index 0000000..3497867
--- /dev/null
+++ b/cypress/e2e/agent-control.cy.ts
@@ -0,0 +1,307 @@
+/**
+ * @fileoverview E2E tests for the Cognitive Mesh agent control features.
+ *
+ * Covers:
+ * - Agent status banner rendering
+ * - Authority consent modal flow
+ * - Agent action audit trail display
+ * - Agent registration notification
+ */
+
+describe('Agent Control', () => {
+ beforeEach(() => {
+ cy.login('testuser@example.com', 'P@ssw0rd!');
+ });
+
+ // -----------------------------------------------------------------------
+ // Agent Status Banner
+ // -----------------------------------------------------------------------
+
+ describe('Agent Status Banner', () => {
+ it('should render the agent status banner at the top of the page', () => {
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 })
+ .should('be.visible');
+ });
+
+ it('should display the current agent status text', () => {
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 }).within(() => {
+ // The banner should contain status text like "Agent Idle", "Agent Executing", etc.
+ cy.get('span').should('exist').and('not.be.empty');
+ });
+ });
+
+ it('should display action buttons appropriate to the agent status', () => {
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 }).within(() => {
+ // At minimum, the "Control Center" button should be present.
+ cy.contains('button', /control center/i).should('be.visible');
+ });
+ });
+
+ it('should show a retry button when the agent status fetch fails', () => {
+ cy.intercept('GET', '/api/agents/*/status', {
+ statusCode: 500,
+ body: { errorCode: 'API_ERROR', message: 'Failed to fetch agent status', canRetry: true },
+ }).as('statusFetch');
+
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 }).within(() => {
+ cy.contains('button', /retry/i).should('be.visible');
+ });
+ });
+
+ it('should update the banner color based on agent status', () => {
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 }).then(($banner) => {
+ const bgColor = $banner.css('background-color');
+ // The banner should have a non-default background color.
+ expect(bgColor).to.not.equal('rgba(0, 0, 0, 0)');
+ expect(bgColor).to.not.equal('transparent');
+ });
+ });
+
+ it('should show the escalate button when agent is in error state', () => {
+ cy.intercept('GET', '/api/agents/*/status', {
+ statusCode: 200,
+ body: { data: 'error', isStale: false, lastSyncTimestamp: new Date().toISOString(), lastError: null },
+ }).as('statusFetch');
+
+ cy.visit('/');
+ cy.get('[role="status"]', { timeout: 10000 }).within(() => {
+ cy.contains('button', /escalate/i).should('be.visible');
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Authority Consent Modal
+ // -----------------------------------------------------------------------
+
+ describe('Authority Consent Modal', () => {
+ /**
+ * Helper to trigger the consent modal. In a real test this might
+ * navigate to an agent action that triggers the modal; here we
+ * simulate by visiting a route that shows the modal or by
+ * interacting with the agent control center.
+ */
+ function openConsentModal(): void {
+ cy.visit('/agents/agent-001/actions');
+ cy.get('[role="dialog"]', { timeout: 10000 }).should('be.visible');
+ }
+
+ it('should display the consent modal with action details', () => {
+ openConsentModal();
+ cy.get('[role="dialog"]').within(() => {
+ cy.get('h2').should('contain.text', 'Approval');
+ cy.contains('Action Details').should('be.visible');
+ cy.contains('Risk Level').should('be.visible');
+ });
+ });
+
+ it('should display Approve and Deny buttons', () => {
+ openConsentModal();
+ cy.get('[role="dialog"]').within(() => {
+ cy.contains('button', /approve/i).should('be.visible');
+ cy.contains('button', /deny/i).should('be.visible');
+ });
+ });
+
+ it('should close the modal when the close button is clicked', () => {
+ openConsentModal();
+ cy.get('[aria-label="Close modal"]').click();
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should close the modal when Escape is pressed', () => {
+ openConsentModal();
+ cy.get('[role="dialog"]').type('{esc}');
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should submit an approval decision and close the modal', () => {
+ cy.intercept('POST', '/api/consent/**', {
+ statusCode: 200,
+ body: true,
+ }).as('consentSubmission');
+
+ openConsentModal();
+ cy.get('[role="dialog"]').within(() => {
+ cy.contains('button', /approve/i).click();
+ });
+ // Modal should close after successful submission.
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should submit a deny decision and close the modal', () => {
+ cy.intercept('POST', '/api/consent/**', {
+ statusCode: 200,
+ body: true,
+ }).as('consentSubmission');
+
+ openConsentModal();
+ cy.get('[role="dialog"]').within(() => {
+ cy.contains('button', /deny/i).click();
+ });
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should show an error when consent submission fails', () => {
+ cy.intercept('POST', '/api/consent/**', {
+ statusCode: 500,
+ body: false,
+ }).as('consentSubmission');
+
+ openConsentModal();
+ cy.get('[role="dialog"]').within(() => {
+ cy.contains('button', /approve/i).click();
+ cy.get('[role="alert"]', { timeout: 5000 }).should('be.visible');
+ });
+ });
+
+ it('should show a confirmation step for high-risk actions', () => {
+ // Navigate to a high-risk action that triggers a two-step confirmation.
+ cy.visit('/agents/agent-001/actions?risk=High');
+ cy.get('[role="dialog"]', { timeout: 10000 }).within(() => {
+ cy.contains('button', /approve/i).click();
+ cy.contains('Confirm').should('be.visible');
+ cy.contains('button', /confirm approval/i).should('be.visible');
+ });
+ });
+
+ it('should display the timeout countdown for time-sensitive actions', () => {
+ openConsentModal();
+ // The countdown should appear when time is running low.
+ cy.get('[role="dialog"]').within(() => {
+ cy.get('[aria-live="polite"]').should('exist');
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Agent Action Audit Trail
+ // -----------------------------------------------------------------------
+
+ describe('Agent Action Audit Trail', () => {
+ beforeEach(() => {
+ cy.visit('/agents/audit-trail');
+ });
+
+ it('should render the audit trail panel', () => {
+ cy.get('[role="region"][aria-labelledby*="audit"]', { timeout: 10000 })
+ .should('be.visible');
+ });
+
+ it('should display the audit trail title', () => {
+ cy.contains('h2', /audit trail/i).should('be.visible');
+ });
+
+ it('should render filter controls', () => {
+ cy.get('[aria-label="Filter by event type"]').should('be.visible');
+ cy.get('[aria-label="Filter by outcome"]').should('be.visible');
+ cy.get('[aria-label="Filter by correlation ID"]').should('be.visible');
+ });
+
+ it('should filter audit events by outcome', () => {
+ cy.get('[aria-label="Filter by outcome"]').select('Success');
+ // All visible outcome badges should say "Success".
+ cy.get('[role="log"]').within(() => {
+ cy.get('span').filter(':contains("Success")').should('have.length.greaterThan', 0);
+ });
+ });
+
+ it('should expand an audit event group to show details', () => {
+ cy.get('[role="button"][aria-expanded]', { timeout: 10000 })
+ .first()
+ .click();
+ cy.get('[role="button"][aria-expanded="true"]').should('exist');
+ });
+
+ it('should paginate audit events', () => {
+ cy.contains('button', /next/i).should('exist');
+ cy.contains('button', /previous/i).should('exist');
+ cy.get('[aria-live="polite"]').contains(/page/i).should('exist');
+ });
+
+ it('should update the page when pagination buttons are clicked', () => {
+ cy.get('[aria-live="polite"]').contains(/page 1/i).should('exist');
+ cy.contains('button', /next/i).click();
+ cy.get('[aria-live="polite"]').contains(/page 2/i).should('exist');
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Agent Registration Notification
+ // -----------------------------------------------------------------------
+
+ describe('Agent Registration Notification', () => {
+ it('should display a notification when a new agent is registered', () => {
+ cy.visit('/');
+
+ // Simulate a notification via the app's notification system.
+ cy.window().then((win) => {
+ const event = new CustomEvent('agent-notification', {
+ detail: {
+ notificationId: 'test-notification-001',
+ title: 'New Agent Registered',
+ message: 'Agent "ComplianceAuditor v2.1" has been registered.',
+ severity: 'Info',
+ timestamp: new Date().toISOString(),
+ },
+ });
+ win.dispatchEvent(event);
+ });
+
+ // The notification should appear in the status banner or a toast.
+ cy.contains(/new agent registered|complianceauditor/i, { timeout: 10000 })
+ .should('be.visible');
+ });
+
+ it('should auto-dismiss the notification after a timeout', () => {
+ cy.visit('/');
+
+ cy.window().then((win) => {
+ const event = new CustomEvent('agent-notification', {
+ detail: {
+ notificationId: 'test-notification-002',
+ title: 'Agent Status Change',
+ message: 'Agent is now executing a task',
+ severity: 'Info',
+ timestamp: new Date().toISOString(),
+ },
+ });
+ win.dispatchEvent(event);
+ });
+
+ // Should appear first.
+ cy.contains(/agent.*executing|status change/i, { timeout: 10000 })
+ .should('be.visible');
+
+ // Should auto-dismiss after 5 seconds.
+ cy.contains(/agent.*executing|status change/i, { timeout: 10000 })
+ .should('not.exist');
+ });
+
+ it('should navigate to the agent details when a notification is clicked', () => {
+ cy.visit('/');
+
+ cy.window().then((win) => {
+ const event = new CustomEvent('agent-notification', {
+ detail: {
+ notificationId: 'test-notification-003',
+ title: 'Approval Required',
+ message: 'Agent requires approval to proceed',
+ severity: 'Warning',
+ timestamp: new Date().toISOString(),
+ },
+ });
+ win.dispatchEvent(event);
+ });
+
+ cy.contains(/approval required/i, { timeout: 10000 }).click();
+ // Should navigate or open the relevant agent view.
+ cy.url().should('include', '/agent');
+ });
+ });
+});
diff --git a/cypress/e2e/dashboard.cy.ts b/cypress/e2e/dashboard.cy.ts
new file mode 100644
index 0000000..eb80000
--- /dev/null
+++ b/cypress/e2e/dashboard.cy.ts
@@ -0,0 +1,207 @@
+/**
+ * @fileoverview E2E tests for the Cognitive Mesh dashboard.
+ *
+ * Covers:
+ * - Dashboard layout loading
+ * - Widget grid rendering
+ * - Widget interactions (click, expand, collapse)
+ * - Error state handling
+ * - Responsive layout at different viewports
+ */
+
+describe('Dashboard', () => {
+ beforeEach(() => {
+ cy.login('testuser@example.com', 'P@ssw0rd!');
+ });
+
+ // -----------------------------------------------------------------------
+ // Layout loading
+ // -----------------------------------------------------------------------
+
+ describe('Layout Loading', () => {
+ it('should load the default dashboard layout', () => {
+ cy.loadDashboard('main-overview');
+ cy.get('[data-testid="dashboard-container"]').should('be.visible');
+ });
+
+ it('should display a loading indicator while the dashboard loads', () => {
+ cy.visit('/dashboard/main-overview');
+ // The loading indicator should appear briefly.
+ cy.get('[aria-live="polite"]').should('exist');
+ // Then the dashboard container should render.
+ cy.get('[data-testid="dashboard-container"]', { timeout: 15000 })
+ .should('be.visible');
+ });
+
+ it('should display the dashboard title', () => {
+ cy.loadDashboard('main-overview');
+ cy.get('h2, h1').first().should('not.be.empty');
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Widget grid rendering
+ // -----------------------------------------------------------------------
+
+ describe('Widget Grid', () => {
+ beforeEach(() => {
+ cy.loadDashboard('main-overview');
+ });
+
+ it('should render at least one widget in the grid', () => {
+ cy.get('[data-testid="widget-container"], [role="region"]')
+ .should('have.length.greaterThan', 0);
+ });
+
+ it('should render widgets with identifiable headers', () => {
+ cy.get('[role="region"]').each(($region) => {
+ // Each region should have a heading or labelledby attribute.
+ const labelledBy = $region.attr('aria-labelledby');
+ if (labelledBy) {
+ cy.get(`#${labelledBy}`).should('exist').and('not.be.empty');
+ }
+ });
+ });
+
+ it('should display widget data once loading completes', () => {
+ // Widgets should not permanently show "Loading".
+ cy.get('[role="region"]', { timeout: 15000 }).first().within(() => {
+ cy.get('[aria-live="polite"]')
+ .should('not.contain.text', 'Loading');
+ });
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Widget interactions
+ // -----------------------------------------------------------------------
+
+ describe('Widget Interactions', () => {
+ beforeEach(() => {
+ cy.loadDashboard('main-overview');
+ });
+
+ it('should expand a widget when clicked', () => {
+ cy.get('[role="region"]').first().as('widget');
+ cy.get('@widget').click();
+ // After clicking, expect some expanded content or modal.
+ cy.get('[role="dialog"], [data-testid="widget-expanded"]')
+ .should('exist');
+ });
+
+ it('should collapse a widget when the close button is clicked', () => {
+ cy.get('[role="region"]').first().click();
+ // Close the expanded view.
+ cy.get('[aria-label="Close modal"], [aria-label="Close agent details"], button')
+ .filter(':contains("×")')
+ .first()
+ .click();
+ cy.get('[role="dialog"]').should('not.exist');
+ });
+
+ it('should support keyboard navigation between widgets', () => {
+ cy.get('[role="region"]').first().focus();
+ cy.focused().should('have.attr', 'role', 'region');
+ cy.focused().type('{tab}');
+ // The next focusable element should receive focus.
+ cy.focused().should('exist');
+ });
+
+ it('should handle table row clicks to show agent details', () => {
+ // Navigate to the agent control center.
+ cy.get('table[aria-label]').first().within(() => {
+ cy.get('tbody tr').first().click();
+ });
+ // A details modal or expanded section should appear.
+ cy.get('[role="dialog"]', { timeout: 5000 }).should('be.visible');
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Error state handling
+ // -----------------------------------------------------------------------
+
+ describe('Error States', () => {
+ it('should display an error message when the API fails', () => {
+ // Intercept the API call and force a failure.
+ cy.intercept('GET', '/api/**', {
+ statusCode: 500,
+ body: { errorCode: 'SERVER_ERROR', message: 'Internal Server Error' },
+ }).as('apiFailure');
+
+ cy.visit('/dashboard/main-overview');
+ cy.get('[role="alert"]', { timeout: 15000 }).should('be.visible');
+ });
+
+ it('should show a retry button when the error is retryable', () => {
+ cy.intercept('GET', '/api/**', {
+ statusCode: 503,
+ body: { errorCode: 'SERVICE_UNAVAILABLE', message: 'Service Unavailable', canRetry: true },
+ }).as('apiFailure');
+
+ cy.visit('/dashboard/main-overview');
+ cy.get('[role="alert"]', { timeout: 15000 }).within(() => {
+ cy.contains('button', /retry/i).should('be.visible');
+ });
+ });
+
+ it('should reload data when the retry button is clicked', () => {
+ let callCount = 0;
+ cy.intercept('GET', '/api/**', (req) => {
+ callCount += 1;
+ if (callCount <= 1) {
+ req.reply({ statusCode: 500, body: { errorCode: 'FETCH_ERROR', message: 'Error' } });
+ } else {
+ req.reply({ statusCode: 200, body: { data: [] } });
+ }
+ }).as('apiCall');
+
+ cy.visit('/dashboard/main-overview');
+ cy.get('[role="alert"]', { timeout: 15000 }).within(() => {
+ cy.contains('button', /retry/i).click();
+ });
+ // After retry, the error should be gone.
+ cy.get('[role="alert"]').should('not.exist');
+ });
+ });
+
+ // -----------------------------------------------------------------------
+ // Responsive layout
+ // -----------------------------------------------------------------------
+
+ describe('Responsive Layout', () => {
+ const viewports: Array<{ name: string; width: number; height: number }> = [
+ { name: 'Desktop (1280x720)', width: 1280, height: 720 },
+ { name: 'Tablet (768x1024)', width: 768, height: 1024 },
+ { name: 'Mobile (375x667)', width: 375, height: 667 },
+ ];
+
+ viewports.forEach(({ name, width, height }) => {
+ it(`should render correctly at ${name}`, () => {
+ cy.viewport(width, height);
+ cy.loadDashboard('main-overview');
+ cy.get('[data-testid="dashboard-container"], [role="region"]')
+ .should('be.visible');
+
+ // Ensure no horizontal overflow.
+ cy.window().then((win) => {
+ const bodyWidth = win.document.body.scrollWidth;
+ expect(bodyWidth).to.be.at.most(width + 20); // small tolerance for scrollbar
+ });
+ });
+ });
+
+ it('should stack widgets vertically on mobile viewports', () => {
+ cy.viewport(375, 667);
+ cy.loadDashboard('main-overview');
+ cy.get('[role="region"]').then(($widgets) => {
+ if ($widgets.length > 1) {
+ const firstTop = $widgets.eq(0).position().top;
+ const secondTop = $widgets.eq(1).position().top;
+ // Second widget should be below the first (stacked).
+ expect(secondTop).to.be.greaterThan(firstTop);
+ }
+ });
+ });
+ });
+});
diff --git a/cypress/support/commands.ts b/cypress/support/commands.ts
new file mode 100644
index 0000000..a87fccd
--- /dev/null
+++ b/cypress/support/commands.ts
@@ -0,0 +1,208 @@
+/**
+ * @fileoverview Custom Cypress commands for the Cognitive Mesh E2E test suite.
+ *
+ * All commands are declared on the `Cypress.Chainable` interface
+ * (see the ambient type augmentation at the bottom of this file)
+ * so that TypeScript provides full auto-complete and type checking.
+ */
+
+// ---------------------------------------------------------------------------
+// cy.login(username, password)
+// ---------------------------------------------------------------------------
+
+/**
+ * Authenticates the test user by posting credentials to the
+ * application's auth endpoint and storing the resulting token
+ * in `localStorage`.
+ *
+ * The command skips the login UI entirely to keep tests fast and
+ * avoid coupling every spec to the login page implementation.
+ *
+ * @example
+ * ```ts
+ * cy.login('testuser@example.com', 'P@ssw0rd!');
+ * ```
+ */
+Cypress.Commands.add('login', (username: string, password: string) => {
+ cy.log(`Logging in as **${username}**`);
+
+ cy.request({
+ method: 'POST',
+ url: '/api/auth/login',
+ body: { username, password },
+ failOnStatusCode: false,
+ }).then((response) => {
+ if (response.status === 200 && response.body?.token) {
+ window.localStorage.setItem('auth_token', response.body.token);
+ window.localStorage.setItem('auth_user', JSON.stringify({
+ username,
+ roles: response.body.roles ?? [],
+ tenantId: response.body.tenantId ?? 'default',
+ }));
+ cy.log('Login successful');
+ } else {
+ // Fall back to a mock token for environments without a real auth API.
+ cy.log('Auth API unavailable - using mock token');
+ window.localStorage.setItem('auth_token', 'mock-jwt-token-for-e2e');
+ window.localStorage.setItem('auth_user', JSON.stringify({
+ username,
+ roles: ['admin'],
+ tenantId: 'test-tenant',
+ }));
+ }
+ });
+});
+
+// ---------------------------------------------------------------------------
+// cy.loadDashboard(dashboardId)
+// ---------------------------------------------------------------------------
+
+/**
+ * Navigates to a specific dashboard by its identifier.
+ *
+ * Waits for the dashboard container and at least one widget to
+ * render before yielding control to the next command.
+ *
+ * @example
+ * ```ts
+ * cy.loadDashboard('main-overview');
+ * ```
+ */
+Cypress.Commands.add('loadDashboard', (dashboardId: string) => {
+ cy.log(`Loading dashboard **${dashboardId}**`);
+ cy.visit(`/dashboard/${dashboardId}`);
+
+ // Wait for the dashboard shell to appear.
+ cy.get('[data-testid="dashboard-container"]', { timeout: 15000 })
+ .should('exist')
+ .and('be.visible');
+
+ // Wait for at least one widget to finish loading.
+ cy.get('[data-testid="widget-container"]', { timeout: 15000 })
+ .first()
+ .should('exist');
+});
+
+// ---------------------------------------------------------------------------
+// cy.waitForWidget(widgetName)
+// ---------------------------------------------------------------------------
+
+/**
+ * Waits until a specific widget has fully rendered and is visible.
+ *
+ * Widgets are identified by the `data-testid` attribute matching
+ * `widget-{widgetName}` or a `role="region"` with an accessible
+ * name containing the widget name.
+ *
+ * @example
+ * ```ts
+ * cy.waitForWidget('agent-control-center');
+ * ```
+ */
+Cypress.Commands.add('waitForWidget', (widgetName: string) => {
+ cy.log(`Waiting for widget **${widgetName}** to render`);
+
+ // Try data-testid first, then fall back to aria-labelledby.
+ cy.get(
+ `[data-testid="widget-${widgetName}"], [role="region"][aria-labelledby*="${widgetName}"]`,
+ { timeout: 15000 },
+ )
+ .first()
+ .should('exist')
+ .and('be.visible');
+
+ // Ensure no loading spinner is visible inside the widget.
+ cy.get(`[data-testid="widget-${widgetName}"]`)
+ .find('[aria-live="polite"]')
+ .should('not.contain.text', 'Loading');
+});
+
+// ---------------------------------------------------------------------------
+// cy.assertAccessibility()
+// ---------------------------------------------------------------------------
+
+/**
+ * Runs an `axe-core` accessibility audit against the current page
+ * and fails the test if any violations are found.
+ *
+ * Requires `cypress-axe` to be installed. The command injects axe,
+ * runs the check, and logs each violation for easy debugging.
+ *
+ * @example
+ * ```ts
+ * cy.visit('/');
+ * cy.assertAccessibility();
+ * ```
+ */
+Cypress.Commands.add('assertAccessibility', () => {
+ cy.log('Running accessibility audit (axe-core)');
+
+ // Inject axe-core into the page under test.
+ cy.injectAxe();
+
+ // Run the audit and process results.
+ cy.checkA11y(
+ undefined,
+ {
+ // Only flag issues at the "critical" and "serious" severity levels
+ // to avoid noisy false positives during initial rollout.
+ includedImpacts: ['critical', 'serious'],
+ rules: {
+ // Disable color-contrast rule if running in CI with non-standard
+ // rendering; enable it locally by overriding in the spec.
+ 'color-contrast': { enabled: true },
+ },
+ },
+ (violations) => {
+ // Log a table of violations for quick debugging.
+ if (violations.length > 0) {
+ cy.log(`Found **${violations.length}** accessibility violations`);
+ const violationData = violations.map(({ id, impact, description, nodes }) => ({
+ id,
+ impact,
+ description,
+ nodeCount: nodes.length,
+ target: nodes.map((n) => n.target).join(', '),
+ }));
+ cy.task('log', JSON.stringify(violationData, null, 2));
+ }
+ },
+ );
+});
+
+// ---------------------------------------------------------------------------
+// Type augmentation
+// ---------------------------------------------------------------------------
+
+declare global {
+ namespace Cypress {
+ interface Chainable {
+ /**
+ * Authenticate the test user via the auth API.
+ * @param username - The user's email or username.
+ * @param password - The user's password.
+ */
+ login(username: string, password: string): Chainable;
+
+ /**
+ * Navigate to a dashboard and wait for it to render.
+ * @param dashboardId - The unique identifier of the dashboard.
+ */
+ loadDashboard(dashboardId: string): Chainable;
+
+ /**
+ * Wait for a named widget to finish rendering.
+ * @param widgetName - The `data-testid` suffix of the widget.
+ */
+ waitForWidget(widgetName: string): Chainable;
+
+ /**
+ * Run an axe-core accessibility audit against the current page.
+ * Fails if critical or serious violations are found.
+ */
+ assertAccessibility(): Chainable;
+ }
+ }
+}
+
+export {};
diff --git a/cypress/support/e2e.ts b/cypress/support/e2e.ts
new file mode 100644
index 0000000..f68bf89
--- /dev/null
+++ b/cypress/support/e2e.ts
@@ -0,0 +1,78 @@
+/**
+ * @fileoverview Cypress E2E support file for Cognitive Mesh.
+ *
+ * This file runs **before** every E2E spec file. It is the single
+ * entry point for importing custom commands, configuring global
+ * hooks, and injecting third-party Cypress plugins.
+ */
+
+// ---------------------------------------------------------------------------
+// Custom commands
+// ---------------------------------------------------------------------------
+
+import './commands';
+
+// ---------------------------------------------------------------------------
+// Third-party plugins
+// ---------------------------------------------------------------------------
+
+// cypress-axe provides `cy.injectAxe()` and `cy.checkA11y()`.
+import 'cypress-axe';
+
+// ---------------------------------------------------------------------------
+// Global hooks
+// ---------------------------------------------------------------------------
+
+/**
+ * Runs once before each test.
+ * Clears application state so tests start from a clean baseline.
+ */
+beforeEach(() => {
+ // Clear all cookies, localStorage, and sessionStorage.
+ cy.clearCookies();
+ cy.clearLocalStorage();
+ cy.window().then((win) => {
+ win.sessionStorage.clear();
+ });
+});
+
+/**
+ * Suppress uncaught exceptions that originate from the application
+ * under test. Some third-party scripts (analytics, hot-reload)
+ * throw benign errors that would otherwise fail every spec.
+ *
+ * Override on a per-test basis when you _want_ to assert that
+ * the application does not throw:
+ *
+ * ```ts
+ * Cypress.on('uncaught:exception', () => { throw err; });
+ * ```
+ */
+Cypress.on('uncaught:exception', (err) => {
+ // Ignore ResizeObserver errors which are common and benign.
+ if (err.message.includes('ResizeObserver loop')) {
+ return false;
+ }
+
+ // Ignore chunk-loading errors caused by stale service workers
+ // during hot-reload development sessions.
+ if (err.message.includes('Loading chunk') || err.message.includes('ChunkLoadError')) {
+ return false;
+ }
+
+ // Let all other errors fail the test.
+ return true;
+});
+
+// ---------------------------------------------------------------------------
+// Custom task registration for logging
+// ---------------------------------------------------------------------------
+
+/**
+ * Register a `log` task so that `cy.task('log', message)` prints
+ * to the Node process stdout. Useful for debugging accessibility
+ * violations and other structured data.
+ *
+ * Note: Task registration happens in `cypress.config.ts` via
+ * `setupNodeEvents`. The command is called from `commands.ts`.
+ */
diff --git a/docker-compose.yml b/docker-compose.yml
new file mode 100644
index 0000000..5a08fa6
--- /dev/null
+++ b/docker-compose.yml
@@ -0,0 +1,56 @@
+version: "3.8"
+
+services:
+ # Redis - used by HybridMemoryStore for caching
+ redis:
+ image: redis:7-alpine
+ container_name: cognitive-mesh-redis
+ ports:
+ - "6379:6379"
+ volumes:
+ - redis-data:/data
+ healthcheck:
+ test: ["CMD", "redis-cli", "ping"]
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ restart: unless-stopped
+
+ # Qdrant - vector database for embeddings and semantic search
+ qdrant:
+ image: qdrant/qdrant:latest
+ container_name: cognitive-mesh-qdrant
+ ports:
+ - "6333:6333" # REST API
+ - "6334:6334" # gRPC
+ volumes:
+ - qdrant-data:/qdrant/storage
+ healthcheck:
+ test: ["CMD-SHELL", "curl -f http://localhost:6333/healthz || exit 1"]
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ restart: unless-stopped
+
+ # Azurite - Azure Storage emulator (Blob, Queue, Table)
+ azurite:
+ image: mcr.microsoft.com/azure-storage/azurite:latest
+ container_name: cognitive-mesh-azurite
+ ports:
+ - "10000:10000" # Blob
+ - "10001:10001" # Queue
+ - "10002:10002" # Table
+ volumes:
+ - azurite-data:/data
+ command: "azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 -l /data"
+ healthcheck:
+ test: ["CMD-SHELL", "nc -z localhost 10000 || exit 1"]
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ restart: unless-stopped
+
+volumes:
+ redis-data:
+ qdrant-data:
+ azurite-data:
diff --git a/docs/planning/ADR-001-memory-store-selection.md b/docs/planning/ADR-001-memory-store-selection.md
new file mode 100644
index 0000000..7ec087f
--- /dev/null
+++ b/docs/planning/ADR-001-memory-store-selection.md
@@ -0,0 +1,320 @@
+# ADR-001: Memory Store Selection for Cognitive Mesh
+
+**Status:** Accepted
+**Date:** 2026-02-21
+**Decision Makers:** Architecture Team
+**Context:** MetacognitiveLayer Memory Subsystem
+
+---
+
+## Context
+
+Cognitive Mesh requires a memory subsystem that serves two distinct purposes:
+
+1. **Session Context Memory** (MetacognitiveLayer) - Transient key-value storage for conversation sessions with vector similarity search for embedding retrieval
+2. **Episodic Memory** (ReasoningLayer) - Long-term cognitive memory with multi-strategy recall (exact, fuzzy, semantic, temporal, hybrid)
+
+The current architecture uses:
+- **HybridMemoryStore** wrapping Redis (hot cache + vector search) and DuckDB (persistent OLAP fallback)
+- **MemoryStrategyEngine** using in-memory `ConcurrentDictionary` for episodic memory (no persistence)
+
+### Problems with Current Implementation
+
+| Problem | Impact |
+|---------|--------|
+| DuckDB requires native library (`DuckDB.NET.Data`) that complicates CI/CD | Build failures, stub classes needed |
+| Episodic memory is entirely in-memory | Data lost on restart, no multi-instance sharing |
+| Write amplification in HybridMemoryStore (2x writes) | Increased latency, potential consistency gaps |
+| Fixed 768-dimension vectors in Redis provider | Cannot use different embedding models |
+| No transaction semantics between Redis and DuckDB | Partial write failures possible |
+| No TTL/expiration on session data | Memory growth unbounded |
+| Manual cosine similarity fallback in DuckDB is O(n*d) | Unusable at scale |
+
+---
+
+## Options Evaluated
+
+### Option 1: Redis + Redis Search (Current Hot Path)
+**Category:** In-memory cache + vector search
+**Maturity:** Production-ready
+**.NET SDK:** StackExchange.Redis (excellent)
+
+| Strength | Weakness |
+|----------|----------|
+| Sub-millisecond latency | No persistence by default (RDB/AOF optional) |
+| HNSW vector indexing via Redis Search | Requires Redis Stack or Redis Enterprise |
+| Pub/sub for distributed events | Memory-bound (cost scales with data) |
+| Excellent .NET ecosystem | Complex cluster setup for HA |
+
+### Option 2: DuckDB (Current Cold Path)
+**Category:** Embedded OLAP database
+**Maturity:** Stable (v1.x)
+**.NET SDK:** DuckDB.NET.Data (fair, native dependency)
+
+| Strength | Weakness |
+|----------|----------|
+| Zero-config embedded database | Native library dependency complicates CI/CD |
+| OLAP-optimized for analytical queries | Not designed for OLTP workloads |
+| Built-in vector extension | File-based, single-writer limitation |
+| SQL interface | Limited concurrent read/write support |
+
+### Option 3: Qdrant (Existing in FoundationLayer)
+**Category:** Purpose-built vector database
+**Maturity:** Production-ready (v1.x)
+**.NET SDK:** Qdrant.Client (good)
+
+| Strength | Weakness |
+|----------|----------|
+| Purpose-built for vector similarity search | External service dependency |
+| Supports HNSW, IVF, scalar quantization | Requires separate deployment |
+| Filtering + payload storage | Overkill for simple key-value |
+| Horizontal scaling | Additional operational cost |
+
+### Option 4: SQLite + sqlite-vec
+**Category:** Embedded relational + vector extension
+**Maturity:** SQLite is battle-tested; sqlite-vec is newer
+**.NET SDK:** Microsoft.Data.Sqlite (excellent, built-in)
+
+| Strength | Weakness |
+|----------|----------|
+| Zero-config, no native dependency issues | sqlite-vec extension is newer/less mature |
+| ACID transactions | Single-writer, readers don't block |
+| Excellent .NET support (built into runtime) | Not optimized for vector operations |
+| File-based, portable | No built-in distributed support |
+| WAL mode supports concurrent reads | Vector search performance ~100ms-1s |
+
+### Option 5: PostgreSQL + pgvector
+**Category:** Production relational + vector extension
+**Maturity:** Battle-tested
+**.NET SDK:** Npgsql (excellent)
+
+| Strength | Weakness |
+|----------|----------|
+| ACID + MVCC concurrency | External service required |
+| pgvector: HNSW + IVF-Flat indexing | Operational overhead |
+| Mature ecosystem, monitoring, backup | Higher resource requirements |
+| Horizontal read replicas | pgvector HNSW rebuild on large updates |
+| Full SQL + JSON support | Not cloud-native by default |
+
+### Option 6: LiteDB
+**Category:** Embedded NoSQL document database
+**Maturity:** Stable (v5.x)
+**.NET SDK:** LiteDB (native C#, no native deps)
+
+| Strength | Weakness |
+|----------|----------|
+| Pure C# (no native dependencies) | No vector search capability |
+| BSON document model | Limited query optimization |
+| ACID transactions | Single-file concurrency limits |
+| Zero-config embedded | Smaller community than SQLite |
+| Ideal for .NET projects | No distributed support |
+
+### Option 7: RocksDB (via RocksDbSharp)
+**Category:** LSM-tree key-value store
+**Maturity:** Battle-tested (Facebook/Meta)
+**.NET SDK:** RocksDbSharp (fair, native dependency)
+
+| Strength | Weakness |
+|----------|----------|
+| Extreme write throughput | Native library dependency |
+| Efficient storage compression | No SQL interface |
+| Tunable consistency | No built-in vector search |
+| Used by many databases internally | Complex tuning required |
+| Excellent read performance for hot data | Write amplification in LSM |
+
+### Option 8: Milvus
+**Category:** Cloud-native vector database
+**Maturity:** Production-ready (v2.x)
+**.NET SDK:** Milvus.Client (fair)
+
+| Strength | Weakness |
+|----------|----------|
+| Purpose-built for vectors at scale | Heavy operational footprint (etcd, MinIO) |
+| Multiple index types (HNSW, IVF, DiskANN) | .NET SDK less mature than Python |
+| Supports billions of vectors | Minimum 3 nodes recommended |
+| GPU acceleration | Overkill for <1M vectors |
+
+### Option 9: ChromaDB
+**Category:** AI-native embedding database
+**Maturity:** Growing (v0.5.x)
+**.NET SDK:** Community-maintained (limited)
+
+| Strength | Weakness |
+|----------|----------|
+| Designed for LLM/RAG pipelines | Not yet stable (v0.x) |
+| Automatic embedding generation | Weak .NET SDK |
+| Simple API | Python-first ecosystem |
+| Metadata filtering | Limited production track record |
+| Embedded or client-server modes | Not enterprise-grade yet |
+
+### Option 10: Azure Cosmos DB (Existing in FoundationLayer)
+**Category:** Cloud-native multi-model database
+**Maturity:** Production-ready
+**.NET SDK:** Microsoft.Azure.Cosmos (excellent)
+
+| Strength | Weakness |
+|----------|----------|
+| Global distribution | Azure lock-in |
+| Multiple APIs (SQL, MongoDB, Gremlin) | Cost at scale (RU pricing) |
+| Vector search support (preview) | Vector search is relatively new |
+| Auto-scaling | Complex partition key design |
+| Built-in change feed for events | Cold start latency on serverless |
+
+### Option 11: In-Memory ConcurrentDictionary (Current Episodic)
+**Category:** In-process memory
+**Maturity:** Built-in .NET
+**.NET SDK:** System.Collections.Concurrent
+
+| Strength | Weakness |
+|----------|----------|
+| Zero latency | No persistence |
+| No dependencies | No cross-process sharing |
+| Thread-safe | Memory-bound scalability |
+| Simplest implementation | Lost on restart |
+
+### Option 12: Semantic Kernel Memory (Microsoft)
+**Category:** AI memory framework
+**Maturity:** Growing (v1.x)
+**.NET SDK:** Microsoft.SemanticKernel.Memory (good)
+
+| Strength | Weakness |
+|----------|----------|
+| Designed for AI agent memory | Tied to Semantic Kernel ecosystem |
+| Pluggable vector stores | Additional abstraction layer |
+| Built-in chunking + embedding | May conflict with existing architecture |
+| .NET-first | Evolving API surface |
+| Supports multiple backends | Adds dependency weight |
+
+---
+
+## Decision Matrix
+
+### Criteria Weights
+
+| Criterion | Weight | Rationale |
+|-----------|--------|-----------|
+| .NET SDK Quality | 15% | Must integrate cleanly with C# codebase |
+| Vector Search Performance | 15% | Core requirement for similarity matching |
+| Operational Simplicity | 15% | Minimize deployment/maintenance burden |
+| Persistence & Durability | 12% | Session data and episodic memory must survive restarts |
+| Latency (Read/Write) | 10% | Real-time agent interactions require low latency |
+| Scalability | 8% | Must handle growth from prototype to production |
+| Cost | 8% | Infrastructure and licensing costs |
+| ACID / Consistency | 7% | Data integrity for critical operations |
+| Cloud-Native Readiness | 5% | Azure deployment target |
+| Embedding Flexibility | 5% | Support different embedding dimensions |
+
+### Scoring (1-5 scale: 1=Poor, 3=Adequate, 5=Excellent)
+
+| Option | .NET SDK (15%) | Vector (15%) | Ops Simple (15%) | Persist (12%) | Latency (10%) | Scale (8%) | Cost (8%) | ACID (7%) | Cloud (5%) | Embed Flex (5%) | **Weighted** |
+|--------|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| 1. Redis | 5 | 4 | 3 | 2 | 5 | 4 | 3 | 2 | 4 | 4 | **3.57** |
+| 2. DuckDB | 2 | 3 | 2 | 4 | 3 | 2 | 5 | 4 | 2 | 3 | **2.86** |
+| 3. Qdrant | 4 | 5 | 2 | 5 | 4 | 5 | 3 | 3 | 4 | 5 | **3.93** |
+| **4. SQLite** | **5** | **3** | **5** | **5** | **3** | **2** | **5** | **5** | **3** | **3** | **3.96** |
+| 5. PostgreSQL | 5 | 4 | 2 | 5 | 3 | 5 | 3 | 5 | 4 | 5 | **3.90** |
+| 6. LiteDB | 5 | 1 | 5 | 4 | 4 | 2 | 5 | 4 | 2 | 1 | **3.27** |
+| 7. RocksDB | 3 | 1 | 2 | 5 | 4 | 4 | 4 | 3 | 2 | 1 | **2.80** |
+| 8. Milvus | 2 | 5 | 1 | 5 | 4 | 5 | 2 | 3 | 3 | 5 | **3.30** |
+| 9. ChromaDB | 1 | 4 | 3 | 3 | 3 | 3 | 4 | 2 | 3 | 4 | **2.85** |
+| 10. Cosmos DB | 5 | 3 | 3 | 5 | 3 | 5 | 1 | 4 | 5 | 3 | **3.52** |
+| 11. ConcurrentDict | 5 | 2 | 5 | 1 | 5 | 1 | 5 | 1 | 1 | 2 | **2.92** |
+| 12. SK Memory | 4 | 4 | 3 | 4 | 3 | 3 | 4 | 3 | 4 | 4 | **3.53** |
+
+### Top 3 Results
+
+| Rank | Option | Score | Role |
+|------|--------|-------|------|
+| 1 | **SQLite + sqlite-vec** | 3.96 | Embedded persistent store (replace DuckDB) |
+| 2 | **Qdrant** | 3.93 | Production vector search (already in FoundationLayer) |
+| 3 | **PostgreSQL + pgvector** | 3.90 | Production relational + vector (cloud deployments) |
+
+---
+
+## Decision
+
+### Tier 1: Implement Now
+
+1. **SQLite** (via `Microsoft.Data.Sqlite`) as the embedded persistent store, replacing DuckDB
+ - Zero native dependency issues (built into .NET SDK)
+ - ACID transactions for consistency
+ - WAL mode for concurrent reads
+ - Eliminates DuckDB stub workaround
+ - Manual cosine similarity in C# for vector search (acceptable at <100k records)
+
+2. **Qdrant integration** for production vector search (enhance existing FoundationLayer adapter)
+ - Already has `QdrantVectorDatabaseAdapter` in FoundationLayer
+ - Wire into `MemoryStoreFactory` as a selectable `IVectorSearchProvider`
+ - Replace Redis-only vector search with Qdrant option
+
+### Tier 2: Keep As-Is
+
+3. **Redis** remains as the hot cache layer
+ - Sub-millisecond reads for active sessions
+ - Keep Redis Search for environments that have Redis Stack
+ - TTL support for automatic session expiration
+
+4. **ConcurrentDictionary** remains for MemoryStrategyEngine
+ - Add optional SQLite persistence backend for episodic memory
+ - Lazy-load from SQLite on startup, write-through on changes
+
+### Tier 3: Future Consideration
+
+5. **PostgreSQL + pgvector** for cloud-native deployments
+ - Implement when Azure deployment is prioritized
+ - Can replace both SQLite and Redis in cloud environments
+
+---
+
+## Implementation Plan
+
+### Phase 1: SQLite Memory Store (Replace DuckDB)
+- Create `SqliteMemoryStore : IMeshMemoryStore` adapter
+- Implement context table with UPSERT semantics
+- Implement embeddings table with manual cosine similarity
+- Update `MemoryStoreFactory` to support `"sqlite"` store type
+- Add `Microsoft.Data.Sqlite` to `Protocols.csproj`
+- Remove DuckDB stub classes
+
+### Phase 2: Qdrant Vector Search Provider
+- Create `QdrantVectorSearchProvider : IVectorSearchProvider` adapter
+- Wire into `MemoryStoreFactory` for `IVectorSearchProvider` registration
+- Support configurable collection names and vector dimensions
+- Add fallback from Qdrant to SQLite for offline/development scenarios
+
+### Phase 3: Configuration & Factory Updates
+- Add `VectorSearchProvider` option: `"redis"` | `"qdrant"` | `"sqlite"`
+- Add `EpisodicPersistence` option: `"memory"` | `"sqlite"`
+- Update `MemoryStoreOptions` with new configuration
+- Update DI registration in `MemoryStoreFactory`
+
+---
+
+## Consequences
+
+### Positive
+- Eliminates DuckDB native dependency (CI/CD simplification)
+- ACID transactions for persistent memory operations
+- Production-grade vector search via Qdrant
+- Configurable backend selection per deployment environment
+- No breaking changes to `IMeshMemoryStore` interface
+
+### Negative
+- SQLite vector search is slower than Redis/Qdrant (acceptable for fallback)
+- Additional adapter code to maintain
+- Qdrant requires external service for production vector search
+
+### Risks
+- SQLite WAL mode has a single-writer limitation (mitigated by short write duration)
+- Qdrant service availability depends on deployment infrastructure
+- Migration from DuckDB schema to SQLite schema needs careful data handling
+
+---
+
+## References
+
+- [Microsoft.Data.Sqlite Documentation](https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/)
+- [Qdrant Documentation](https://qdrant.tech/documentation/)
+- [Redis Search Vector Similarity](https://redis.io/docs/latest/develop/interact/search-and-query/advanced-concepts/vectors/)
+- [sqlite-vec Extension](https://github.com/asg017/sqlite-vec)
+- [pgvector PostgreSQL Extension](https://github.com/pgvector/pgvector)
diff --git a/infra/backend.tf b/infra/backend.tf
new file mode 100644
index 0000000..7432d29
--- /dev/null
+++ b/infra/backend.tf
@@ -0,0 +1,13 @@
+###############################################################################
+# Cognitive Mesh — Terraform Backend Configuration
+# State is stored in Azure Blob Storage.
+###############################################################################
+
+terraform {
+ backend "azurerm" {
+ resource_group_name = "cognitive-mesh-tfstate-rg"
+ storage_account_name = "cognitivemeshtfstate"
+ container_name = "tfstate"
+ key = "cognitive-mesh.tfstate"
+ }
+}
diff --git a/infra/environments/dev/terragrunt.hcl b/infra/environments/dev/terragrunt.hcl
new file mode 100644
index 0000000..f448656
--- /dev/null
+++ b/infra/environments/dev/terragrunt.hcl
@@ -0,0 +1,58 @@
+###############################################################################
+# Cognitive Mesh — Dev Environment Terragrunt Configuration
+###############################################################################
+
+include "root" {
+ path = find_in_parent_folders()
+}
+
+terraform {
+ source = "${get_parent_terragrunt_dir()}//"
+}
+
+inputs = {
+ environment = "dev"
+ location = "westeurope"
+ resource_group_name = "cognitive-mesh-dev-rg"
+
+ # CosmosDB — dev settings
+ cosmosdb_consistency_level = "Session"
+
+ # Redis — dev uses Basic C0 (auto-selected by module)
+
+ # Qdrant — smaller container for dev
+ qdrant_cpu_cores = 1
+ qdrant_memory_gb = 2
+
+ # OpenAI — lower capacity for dev
+ openai_model_deployments = {
+ "gpt-4o" = {
+ model_name = "gpt-4o"
+ model_version = "2024-11-20"
+ sku_name = "GlobalStandard"
+ sku_capacity = 5
+ }
+ "text-embedding-3-large" = {
+ model_name = "text-embedding-3-large"
+ model_version = "1"
+ sku_name = "Standard"
+ sku_capacity = 5
+ }
+ }
+
+ # AI Search — free tier for dev
+ search_sku = "free"
+
+ # Monitoring — shorter retention for dev
+ log_retention_days = 30
+ appinsights_retention_days = 30
+
+ # Networking — smaller address space for dev
+ vnet_address_space = ["10.0.0.0/16"]
+
+ common_tags = {
+ Project = "CognitiveMesh"
+ Environment = "dev"
+ ManagedBy = "terraform"
+ }
+}
diff --git a/infra/main.tf b/infra/main.tf
new file mode 100644
index 0000000..6fb8f7c
--- /dev/null
+++ b/infra/main.tf
@@ -0,0 +1,191 @@
+###############################################################################
+# Cognitive Mesh — Root Module
+# Orchestrates all infrastructure sub-modules.
+###############################################################################
+
+locals {
+ tags = merge(var.common_tags, {
+ Environment = var.environment
+ })
+}
+
+# ---------- Resource Group ----------
+
+resource "azurerm_resource_group" "this" {
+ name = var.resource_group_name
+ location = var.location
+
+ tags = local.tags
+}
+
+# ---------- Networking ----------
+
+module "networking" {
+ source = "./modules/networking"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ vnet_address_space = var.vnet_address_space
+ common_tags = local.tags
+}
+
+# ---------- Monitoring (deploy early — other modules reference Log Analytics) ----------
+
+module "monitoring" {
+ source = "./modules/monitoring"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ retention_in_days = var.log_retention_days
+ appinsights_retention_days = var.appinsights_retention_days
+ common_tags = local.tags
+}
+
+# ---------- Key Vault ----------
+
+module "keyvault" {
+ source = "./modules/keyvault"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ common_tags = local.tags
+}
+
+# ---------- Storage ----------
+
+module "storage" {
+ source = "./modules/storage"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ common_tags = local.tags
+}
+
+# ---------- CosmosDB ----------
+
+module "cosmosdb" {
+ source = "./modules/cosmosdb"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ database_name = var.cosmosdb_database_name
+ consistency_level = var.cosmosdb_consistency_level
+ common_tags = local.tags
+}
+
+# ---------- Redis ----------
+
+module "redis" {
+ source = "./modules/redis"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ prod_sku_name = var.redis_prod_sku_name
+ prod_capacity = var.redis_prod_capacity
+ common_tags = local.tags
+}
+
+# ---------- Qdrant (Vector DB) ----------
+
+module "qdrant" {
+ source = "./modules/qdrant"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ qdrant_image = var.qdrant_image
+ cpu_cores = var.qdrant_cpu_cores
+ memory_gb = var.qdrant_memory_gb
+ subnet_ids = [module.networking.subnet_ids["containers"]]
+ ip_address_type = "Private"
+ common_tags = local.tags
+}
+
+# ---------- Azure OpenAI ----------
+
+module "openai" {
+ source = "./modules/openai"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ model_deployments = var.openai_model_deployments
+ common_tags = local.tags
+}
+
+# ---------- AI Search ----------
+
+module "ai_search" {
+ source = "./modules/ai-search"
+
+ project_name = var.project_name
+ environment = var.environment
+ location = var.location
+ resource_group_name = azurerm_resource_group.this.name
+ sku = var.search_sku
+ common_tags = local.tags
+}
+
+# ---------- Store secrets in Key Vault ----------
+
+resource "azurerm_key_vault_secret" "cosmosdb_connection" {
+ name = "cosmosdb-connection-string"
+ value = module.cosmosdb.connection_strings[0]
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
+
+resource "azurerm_key_vault_secret" "redis_connection" {
+ name = "redis-connection-string"
+ value = module.redis.primary_connection_string
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
+
+resource "azurerm_key_vault_secret" "storage_connection" {
+ name = "storage-connection-string"
+ value = module.storage.primary_connection_string
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
+
+resource "azurerm_key_vault_secret" "openai_key" {
+ name = "openai-api-key"
+ value = module.openai.primary_access_key
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
+
+resource "azurerm_key_vault_secret" "appinsights_connection" {
+ name = "appinsights-connection-string"
+ value = module.monitoring.application_insights_connection_string
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
+
+resource "azurerm_key_vault_secret" "search_key" {
+ name = "search-admin-key"
+ value = module.ai_search.primary_key
+ key_vault_id = module.keyvault.key_vault_id
+
+ depends_on = [module.keyvault]
+}
diff --git a/infra/modules/ai-search/main.tf b/infra/modules/ai-search/main.tf
new file mode 100644
index 0000000..3141361
--- /dev/null
+++ b/infra/modules/ai-search/main.tf
@@ -0,0 +1,22 @@
+###############################################################################
+# Cognitive Mesh — AI Search Module
+# Provisions Azure AI Search (formerly Azure Cognitive Search).
+###############################################################################
+
+resource "azurerm_search_service" "this" {
+ name = "${var.project_name}-search-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ sku = var.sku
+
+ replica_count = var.replica_count
+ partition_count = var.partition_count
+
+ public_network_access_enabled = var.public_network_access_enabled
+
+ local_authentication_enabled = var.local_authentication_enabled
+
+ tags = merge(var.common_tags, {
+ Module = "ai-search"
+ })
+}
diff --git a/infra/modules/ai-search/outputs.tf b/infra/modules/ai-search/outputs.tf
new file mode 100644
index 0000000..0b25708
--- /dev/null
+++ b/infra/modules/ai-search/outputs.tf
@@ -0,0 +1,31 @@
+###############################################################################
+# Cognitive Mesh — AI Search Module Outputs
+###############################################################################
+
+output "search_service_id" {
+ description = "The ID of the Azure AI Search service."
+ value = azurerm_search_service.this.id
+}
+
+output "search_service_name" {
+ description = "The name of the Azure AI Search service."
+ value = azurerm_search_service.this.name
+}
+
+output "primary_key" {
+ description = "The primary admin key for the search service."
+ value = azurerm_search_service.this.primary_key
+ sensitive = true
+}
+
+output "secondary_key" {
+ description = "The secondary admin key for the search service."
+ value = azurerm_search_service.this.secondary_key
+ sensitive = true
+}
+
+output "query_keys" {
+ description = "Query keys for the search service."
+ value = azurerm_search_service.this.query_keys
+ sensitive = true
+}
diff --git a/infra/modules/ai-search/variables.tf b/infra/modules/ai-search/variables.tf
new file mode 100644
index 0000000..d026d65
--- /dev/null
+++ b/infra/modules/ai-search/variables.tf
@@ -0,0 +1,67 @@
+###############################################################################
+# Cognitive Mesh — AI Search Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "sku" {
+ description = "SKU tier for the search service (free, basic, standard, standard2, standard3)."
+ type = string
+ default = "basic"
+ validation {
+ condition = contains(["free", "basic", "standard", "standard2", "standard3", "storage_optimized_l1", "storage_optimized_l2"], var.sku)
+ error_message = "Must be one of: free, basic, standard, standard2, standard3, storage_optimized_l1, storage_optimized_l2."
+ }
+}
+
+variable "replica_count" {
+ description = "Number of replicas (1-12 depending on SKU)."
+ type = number
+ default = 1
+}
+
+variable "partition_count" {
+ description = "Number of partitions (1, 2, 3, 4, 6, or 12)."
+ type = number
+ default = 1
+}
+
+variable "public_network_access_enabled" {
+ description = "Whether public network access is enabled."
+ type = bool
+ default = true
+}
+
+variable "local_authentication_enabled" {
+ description = "Whether API key authentication is enabled."
+ type = bool
+ default = true
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/cosmosdb/main.tf b/infra/modules/cosmosdb/main.tf
new file mode 100644
index 0000000..d86b8b6
--- /dev/null
+++ b/infra/modules/cosmosdb/main.tf
@@ -0,0 +1,71 @@
+###############################################################################
+# Cognitive Mesh — CosmosDB Module
+# Provisions a serverless Cosmos DB account with SQL API and a database.
+###############################################################################
+
+resource "azurerm_cosmosdb_account" "this" {
+ name = "${var.project_name}-cosmos-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ offer_type = "Standard"
+ kind = "GlobalDocumentDB"
+
+ automatic_failover_enabled = var.enable_automatic_failover
+
+ capabilities {
+ name = "EnableServerless"
+ }
+
+ consistency_policy {
+ consistency_level = var.consistency_level
+ max_interval_in_seconds = var.consistency_level == "BoundedStaleness" ? var.max_staleness_interval : null
+ max_staleness_prefix = var.consistency_level == "BoundedStaleness" ? var.max_staleness_prefix : null
+ }
+
+ geo_location {
+ location = var.location
+ failover_priority = 0
+ }
+
+ dynamic "geo_location" {
+ for_each = var.secondary_locations
+ content {
+ location = geo_location.value.location
+ failover_priority = geo_location.value.failover_priority
+ }
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "cosmosdb"
+ })
+}
+
+resource "azurerm_cosmosdb_sql_database" "this" {
+ name = var.database_name
+ resource_group_name = var.resource_group_name
+ account_name = azurerm_cosmosdb_account.this.name
+}
+
+resource "azurerm_cosmosdb_sql_container" "containers" {
+ for_each = var.containers
+
+ name = each.key
+ resource_group_name = var.resource_group_name
+ account_name = azurerm_cosmosdb_account.this.name
+ database_name = azurerm_cosmosdb_sql_database.this.name
+ partition_key_paths = each.value.partition_key_paths
+
+ default_ttl = lookup(each.value, "default_ttl", -1)
+
+ indexing_policy {
+ indexing_mode = "consistent"
+
+ included_path {
+ path = "/*"
+ }
+
+ excluded_path {
+ path = "/\"_etag\"/?"
+ }
+ }
+}
diff --git a/infra/modules/cosmosdb/outputs.tf b/infra/modules/cosmosdb/outputs.tf
new file mode 100644
index 0000000..1f00ae0
--- /dev/null
+++ b/infra/modules/cosmosdb/outputs.tf
@@ -0,0 +1,35 @@
+###############################################################################
+# Cognitive Mesh — CosmosDB Module Outputs
+###############################################################################
+
+output "account_id" {
+ description = "The ID of the Cosmos DB account."
+ value = azurerm_cosmosdb_account.this.id
+}
+
+output "account_name" {
+ description = "The name of the Cosmos DB account."
+ value = azurerm_cosmosdb_account.this.name
+}
+
+output "account_endpoint" {
+ description = "The endpoint of the Cosmos DB account."
+ value = azurerm_cosmosdb_account.this.endpoint
+}
+
+output "primary_key" {
+ description = "The primary key for the Cosmos DB account."
+ value = azurerm_cosmosdb_account.this.primary_key
+ sensitive = true
+}
+
+output "connection_strings" {
+ description = "Connection strings for the Cosmos DB account."
+ value = azurerm_cosmosdb_account.this.connection_strings
+ sensitive = true
+}
+
+output "database_name" {
+ description = "The name of the Cosmos DB SQL database."
+ value = azurerm_cosmosdb_sql_database.this.name
+}
diff --git a/infra/modules/cosmosdb/variables.tf b/infra/modules/cosmosdb/variables.tf
new file mode 100644
index 0000000..a693c2e
--- /dev/null
+++ b/infra/modules/cosmosdb/variables.tf
@@ -0,0 +1,98 @@
+###############################################################################
+# Cognitive Mesh — CosmosDB Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "database_name" {
+ description = "Name of the Cosmos DB SQL database."
+ type = string
+ default = "cognitive-mesh-db"
+}
+
+variable "consistency_level" {
+ description = "The consistency level for the Cosmos DB account."
+ type = string
+ default = "Session"
+ validation {
+ condition = contains(["BoundedStaleness", "Eventual", "Session", "Strong", "ConsistentPrefix"], var.consistency_level)
+ error_message = "Must be one of: BoundedStaleness, Eventual, Session, Strong, ConsistentPrefix."
+ }
+}
+
+variable "max_staleness_interval" {
+ description = "Max staleness interval in seconds (only for BoundedStaleness)."
+ type = number
+ default = 5
+}
+
+variable "max_staleness_prefix" {
+ description = "Max staleness prefix (only for BoundedStaleness)."
+ type = number
+ default = 100
+}
+
+variable "enable_automatic_failover" {
+ description = "Enable automatic failover for the Cosmos DB account."
+ type = bool
+ default = false
+}
+
+variable "secondary_locations" {
+ description = "List of secondary geo-locations for replication."
+ type = list(object({
+ location = string
+ failover_priority = number
+ }))
+ default = []
+}
+
+variable "containers" {
+ description = "Map of Cosmos DB SQL containers to create."
+ type = map(object({
+ partition_key_paths = list(string)
+ default_ttl = optional(number, -1)
+ }))
+ default = {
+ "workflows" = {
+ partition_key_paths = ["/tenantId"]
+ }
+ "checkpoints" = {
+ partition_key_paths = ["/workflowId"]
+ }
+ "agents" = {
+ partition_key_paths = ["/agentType"]
+ }
+ "reasoning-sessions" = {
+ partition_key_paths = ["/sessionId"]
+ }
+ }
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/keyvault/main.tf b/infra/modules/keyvault/main.tf
new file mode 100644
index 0000000..7acfa3d
--- /dev/null
+++ b/infra/modules/keyvault/main.tf
@@ -0,0 +1,64 @@
+###############################################################################
+# Cognitive Mesh — Key Vault Module
+# Provisions Azure Key Vault with access policies.
+###############################################################################
+
+data "azurerm_client_config" "current" {}
+
+resource "azurerm_key_vault" "this" {
+ name = "${var.project_name}-kv-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ sku_name = var.sku_name
+
+ soft_delete_retention_days = var.soft_delete_retention_days
+ purge_protection_enabled = var.purge_protection_enabled
+
+ enabled_for_deployment = var.enabled_for_deployment
+ enabled_for_disk_encryption = var.enabled_for_disk_encryption
+ enabled_for_template_deployment = var.enabled_for_template_deployment
+
+ network_acls {
+ default_action = var.network_default_action
+ bypass = "AzureServices"
+ ip_rules = var.allowed_ip_ranges
+ virtual_network_subnet_ids = var.allowed_subnet_ids
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "keyvault"
+ })
+}
+
+# Access policy for the Terraform service principal
+resource "azurerm_key_vault_access_policy" "terraform" {
+ key_vault_id = azurerm_key_vault.this.id
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ object_id = data.azurerm_client_config.current.object_id
+
+ key_permissions = [
+ "Get", "List", "Create", "Delete", "Update", "Recover", "Purge",
+ ]
+
+ secret_permissions = [
+ "Get", "List", "Set", "Delete", "Recover", "Purge",
+ ]
+
+ certificate_permissions = [
+ "Get", "List", "Create", "Delete", "Update", "Recover", "Purge",
+ ]
+}
+
+# Additional access policies for application identities
+resource "azurerm_key_vault_access_policy" "additional" {
+ for_each = var.access_policies
+
+ key_vault_id = azurerm_key_vault.this.id
+ tenant_id = data.azurerm_client_config.current.tenant_id
+ object_id = each.value.object_id
+
+ key_permissions = each.value.key_permissions
+ secret_permissions = each.value.secret_permissions
+ certificate_permissions = each.value.certificate_permissions
+}
diff --git a/infra/modules/keyvault/outputs.tf b/infra/modules/keyvault/outputs.tf
new file mode 100644
index 0000000..387dcb6
--- /dev/null
+++ b/infra/modules/keyvault/outputs.tf
@@ -0,0 +1,23 @@
+###############################################################################
+# Cognitive Mesh — Key Vault Module Outputs
+###############################################################################
+
+output "key_vault_id" {
+ description = "The ID of the Key Vault."
+ value = azurerm_key_vault.this.id
+}
+
+output "key_vault_name" {
+ description = "The name of the Key Vault."
+ value = azurerm_key_vault.this.name
+}
+
+output "key_vault_uri" {
+ description = "The URI of the Key Vault."
+ value = azurerm_key_vault.this.vault_uri
+}
+
+output "tenant_id" {
+ description = "The tenant ID associated with the Key Vault."
+ value = azurerm_key_vault.this.tenant_id
+}
diff --git a/infra/modules/keyvault/variables.tf b/infra/modules/keyvault/variables.tf
new file mode 100644
index 0000000..cf581cd
--- /dev/null
+++ b/infra/modules/keyvault/variables.tf
@@ -0,0 +1,102 @@
+###############################################################################
+# Cognitive Mesh — Key Vault Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "sku_name" {
+ description = "SKU name for the Key Vault (standard or premium)."
+ type = string
+ default = "standard"
+ validation {
+ condition = contains(["standard", "premium"], var.sku_name)
+ error_message = "Must be one of: standard, premium."
+ }
+}
+
+variable "soft_delete_retention_days" {
+ description = "Number of days to retain soft-deleted vaults."
+ type = number
+ default = 90
+}
+
+variable "purge_protection_enabled" {
+ description = "Enable purge protection to prevent permanent deletion."
+ type = bool
+ default = true
+}
+
+variable "enabled_for_deployment" {
+ description = "Allow Azure VMs to retrieve certificates."
+ type = bool
+ default = false
+}
+
+variable "enabled_for_disk_encryption" {
+ description = "Allow Azure Disk Encryption to retrieve secrets."
+ type = bool
+ default = false
+}
+
+variable "enabled_for_template_deployment" {
+ description = "Allow ARM templates to retrieve secrets."
+ type = bool
+ default = false
+}
+
+variable "network_default_action" {
+ description = "Default network action (Allow or Deny)."
+ type = string
+ default = "Allow"
+}
+
+variable "allowed_ip_ranges" {
+ description = "List of allowed IP ranges for network ACL."
+ type = list(string)
+ default = []
+}
+
+variable "allowed_subnet_ids" {
+ description = "List of allowed subnet IDs for network ACL."
+ type = list(string)
+ default = []
+}
+
+variable "access_policies" {
+ description = "Map of additional access policies to create."
+ type = map(object({
+ object_id = string
+ key_permissions = optional(list(string), ["Get", "List"])
+ secret_permissions = optional(list(string), ["Get", "List"])
+ certificate_permissions = optional(list(string), ["Get", "List"])
+ }))
+ default = {}
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/monitoring/main.tf b/infra/modules/monitoring/main.tf
new file mode 100644
index 0000000..f5169f8
--- /dev/null
+++ b/infra/modules/monitoring/main.tf
@@ -0,0 +1,32 @@
+###############################################################################
+# Cognitive Mesh — Monitoring Module
+# Provisions Log Analytics Workspace and Application Insights.
+###############################################################################
+
+resource "azurerm_log_analytics_workspace" "this" {
+ name = "${var.project_name}-law-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ sku = var.log_analytics_sku
+ retention_in_days = var.retention_in_days
+ daily_quota_gb = var.daily_quota_gb
+
+ tags = merge(var.common_tags, {
+ Module = "monitoring"
+ })
+}
+
+resource "azurerm_application_insights" "this" {
+ name = "${var.project_name}-appinsights-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ workspace_id = azurerm_log_analytics_workspace.this.id
+ application_type = "web"
+
+ retention_in_days = var.appinsights_retention_days
+ sampling_percentage = var.sampling_percentage
+
+ tags = merge(var.common_tags, {
+ Module = "monitoring"
+ })
+}
diff --git a/infra/modules/monitoring/outputs.tf b/infra/modules/monitoring/outputs.tf
new file mode 100644
index 0000000..110f8e5
--- /dev/null
+++ b/infra/modules/monitoring/outputs.tf
@@ -0,0 +1,46 @@
+###############################################################################
+# Cognitive Mesh — Monitoring Module Outputs
+###############################################################################
+
+output "log_analytics_workspace_id" {
+ description = "The ID of the Log Analytics Workspace."
+ value = azurerm_log_analytics_workspace.this.id
+}
+
+output "log_analytics_workspace_name" {
+ description = "The name of the Log Analytics Workspace."
+ value = azurerm_log_analytics_workspace.this.name
+}
+
+output "log_analytics_workspace_primary_key" {
+ description = "The primary shared key for the Log Analytics Workspace."
+ value = azurerm_log_analytics_workspace.this.primary_shared_key
+ sensitive = true
+}
+
+output "application_insights_id" {
+ description = "The ID of the Application Insights resource."
+ value = azurerm_application_insights.this.id
+}
+
+output "application_insights_name" {
+ description = "The name of the Application Insights resource."
+ value = azurerm_application_insights.this.name
+}
+
+output "application_insights_instrumentation_key" {
+ description = "The instrumentation key for Application Insights."
+ value = azurerm_application_insights.this.instrumentation_key
+ sensitive = true
+}
+
+output "application_insights_connection_string" {
+ description = "The connection string for Application Insights."
+ value = azurerm_application_insights.this.connection_string
+ sensitive = true
+}
+
+output "application_insights_app_id" {
+ description = "The App ID for Application Insights."
+ value = azurerm_application_insights.this.app_id
+}
diff --git a/infra/modules/monitoring/variables.tf b/infra/modules/monitoring/variables.tf
new file mode 100644
index 0000000..c1f7613
--- /dev/null
+++ b/infra/modules/monitoring/variables.tf
@@ -0,0 +1,63 @@
+###############################################################################
+# Cognitive Mesh — Monitoring Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "log_analytics_sku" {
+ description = "SKU for the Log Analytics Workspace."
+ type = string
+ default = "PerGB2018"
+}
+
+variable "retention_in_days" {
+ description = "Log retention in days for Log Analytics Workspace."
+ type = number
+ default = 30
+}
+
+variable "daily_quota_gb" {
+ description = "Daily data ingestion quota in GB (-1 for unlimited)."
+ type = number
+ default = -1
+}
+
+variable "appinsights_retention_days" {
+ description = "Application Insights data retention in days."
+ type = number
+ default = 90
+}
+
+variable "sampling_percentage" {
+ description = "Percentage of telemetry data to sample (0-100)."
+ type = number
+ default = 100
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/networking/main.tf b/infra/modules/networking/main.tf
new file mode 100644
index 0000000..dffd35f
--- /dev/null
+++ b/infra/modules/networking/main.tf
@@ -0,0 +1,118 @@
+###############################################################################
+# Cognitive Mesh — Networking Module
+# Provisions VNet, subnets, NSGs, and private endpoints.
+###############################################################################
+
+resource "azurerm_virtual_network" "this" {
+ name = "${var.project_name}-vnet-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ address_space = var.vnet_address_space
+
+ tags = merge(var.common_tags, {
+ Module = "networking"
+ })
+}
+
+# ---------- Subnets ----------
+
+resource "azurerm_subnet" "subnets" {
+ for_each = var.subnets
+
+ name = each.key
+ resource_group_name = var.resource_group_name
+ virtual_network_name = azurerm_virtual_network.this.name
+ address_prefixes = each.value.address_prefixes
+
+ dynamic "delegation" {
+ for_each = each.value.delegation != null ? [each.value.delegation] : []
+ content {
+ name = delegation.value.name
+ service_delegation {
+ name = delegation.value.service_name
+ actions = delegation.value.actions
+ }
+ }
+ }
+
+ service_endpoints = each.value.service_endpoints
+}
+
+# ---------- Network Security Groups ----------
+
+resource "azurerm_network_security_group" "subnets" {
+ for_each = { for k, v in var.subnets : k => v if v.create_nsg }
+
+ name = "${var.project_name}-nsg-${each.key}-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+
+ tags = merge(var.common_tags, {
+ Module = "networking"
+ Subnet = each.key
+ })
+}
+
+resource "azurerm_subnet_network_security_group_association" "subnets" {
+ for_each = { for k, v in var.subnets : k => v if v.create_nsg }
+
+ subnet_id = azurerm_subnet.subnets[each.key].id
+ network_security_group_id = azurerm_network_security_group.subnets[each.key].id
+}
+
+# ---------- Private DNS Zones ----------
+
+resource "azurerm_private_dns_zone" "zones" {
+ for_each = var.private_dns_zones
+
+ name = each.value
+ resource_group_name = var.resource_group_name
+
+ tags = merge(var.common_tags, {
+ Module = "networking"
+ })
+}
+
+resource "azurerm_private_dns_zone_virtual_network_link" "links" {
+ for_each = var.private_dns_zones
+
+ name = "${each.key}-link"
+ resource_group_name = var.resource_group_name
+ private_dns_zone_name = azurerm_private_dns_zone.zones[each.key].name
+ virtual_network_id = azurerm_virtual_network.this.id
+ registration_enabled = false
+
+ tags = merge(var.common_tags, {
+ Module = "networking"
+ })
+}
+
+# ---------- Private Endpoints ----------
+
+resource "azurerm_private_endpoint" "endpoints" {
+ for_each = var.private_endpoints
+
+ name = "${var.project_name}-pe-${each.key}-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ subnet_id = azurerm_subnet.subnets[each.value.subnet_key].id
+
+ private_service_connection {
+ name = "${each.key}-connection"
+ private_connection_resource_id = each.value.resource_id
+ subresource_names = each.value.subresource_names
+ is_manual_connection = false
+ }
+
+ dynamic "private_dns_zone_group" {
+ for_each = each.value.dns_zone_key != null ? [each.value.dns_zone_key] : []
+ content {
+ name = "default"
+ private_dns_zone_ids = [azurerm_private_dns_zone.zones[private_dns_zone_group.value].id]
+ }
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "networking"
+ })
+}
diff --git a/infra/modules/networking/outputs.tf b/infra/modules/networking/outputs.tf
new file mode 100644
index 0000000..d514c72
--- /dev/null
+++ b/infra/modules/networking/outputs.tf
@@ -0,0 +1,38 @@
+###############################################################################
+# Cognitive Mesh — Networking Module Outputs
+###############################################################################
+
+output "vnet_id" {
+ description = "The ID of the virtual network."
+ value = azurerm_virtual_network.this.id
+}
+
+output "vnet_name" {
+ description = "The name of the virtual network."
+ value = azurerm_virtual_network.this.name
+}
+
+output "vnet_address_space" {
+ description = "The address space of the virtual network."
+ value = azurerm_virtual_network.this.address_space
+}
+
+output "subnet_ids" {
+ description = "Map of subnet names to their IDs."
+ value = { for k, v in azurerm_subnet.subnets : k => v.id }
+}
+
+output "nsg_ids" {
+ description = "Map of NSG names to their IDs."
+ value = { for k, v in azurerm_network_security_group.subnets : k => v.id }
+}
+
+output "private_dns_zone_ids" {
+ description = "Map of private DNS zone keys to their IDs."
+ value = { for k, v in azurerm_private_dns_zone.zones : k => v.id }
+}
+
+output "private_endpoint_ids" {
+ description = "Map of private endpoint names to their IDs."
+ value = { for k, v in azurerm_private_endpoint.endpoints : k => v.id }
+}
diff --git a/infra/modules/networking/variables.tf b/infra/modules/networking/variables.tf
new file mode 100644
index 0000000..5477479
--- /dev/null
+++ b/infra/modules/networking/variables.tf
@@ -0,0 +1,100 @@
+###############################################################################
+# Cognitive Mesh — Networking Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "vnet_address_space" {
+ description = "Address space for the virtual network."
+ type = list(string)
+ default = ["10.0.0.0/16"]
+}
+
+variable "subnets" {
+ description = "Map of subnets to create within the VNet."
+ type = map(object({
+ address_prefixes = list(string)
+ service_endpoints = optional(list(string), [])
+ create_nsg = optional(bool, true)
+ delegation = optional(object({
+ name = string
+ service_name = string
+ actions = list(string)
+ }), null)
+ }))
+ default = {
+ "app" = {
+ address_prefixes = ["10.0.1.0/24"]
+ service_endpoints = ["Microsoft.KeyVault", "Microsoft.Storage", "Microsoft.AzureCosmosDB"]
+ }
+ "data" = {
+ address_prefixes = ["10.0.2.0/24"]
+ service_endpoints = ["Microsoft.Storage", "Microsoft.AzureCosmosDB"]
+ }
+ "containers" = {
+ address_prefixes = ["10.0.3.0/24"]
+ service_endpoints = []
+ delegation = {
+ name = "aci-delegation"
+ service_name = "Microsoft.ContainerInstance/containerGroups"
+ actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
+ }
+ }
+ "private-endpoints" = {
+ address_prefixes = ["10.0.4.0/24"]
+ create_nsg = false
+ }
+ }
+}
+
+variable "private_dns_zones" {
+ description = "Map of private DNS zones to create (key = logical name, value = zone FQDN)."
+ type = map(string)
+ default = {
+ "cosmosdb" = "privatelink.documents.azure.com"
+ "keyvault" = "privatelink.vaultcore.azure.net"
+ "storage" = "privatelink.blob.core.windows.net"
+ "redis" = "privatelink.redis.cache.windows.net"
+ "search" = "privatelink.search.windows.net"
+ "openai" = "privatelink.openai.azure.com"
+ }
+}
+
+variable "private_endpoints" {
+ description = "Map of private endpoints to create."
+ type = map(object({
+ subnet_key = string
+ resource_id = string
+ subresource_names = list(string)
+ dns_zone_key = optional(string, null)
+ }))
+ default = {}
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/openai/main.tf b/infra/modules/openai/main.tf
new file mode 100644
index 0000000..f744be7
--- /dev/null
+++ b/infra/modules/openai/main.tf
@@ -0,0 +1,48 @@
+###############################################################################
+# Cognitive Mesh — Azure OpenAI Module
+# Provisions Azure Cognitive Services account (OpenAI kind) with deployments.
+###############################################################################
+
+resource "azurerm_cognitive_account" "openai" {
+ name = "${var.project_name}-openai-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ kind = "OpenAI"
+ sku_name = var.sku_name
+
+ custom_subdomain_name = "${var.project_name}-openai-${var.environment}"
+
+ network_acls {
+ default_action = var.network_default_action
+ ip_rules = var.allowed_ip_ranges
+
+ dynamic "virtual_network_rules" {
+ for_each = var.allowed_subnet_ids
+ content {
+ subnet_id = virtual_network_rules.value
+ }
+ }
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "openai"
+ })
+}
+
+resource "azurerm_cognitive_deployment" "deployments" {
+ for_each = var.model_deployments
+
+ name = each.key
+ cognitive_account_id = azurerm_cognitive_account.openai.id
+
+ model {
+ format = "OpenAI"
+ name = each.value.model_name
+ version = each.value.model_version
+ }
+
+ sku {
+ name = each.value.sku_name
+ capacity = each.value.sku_capacity
+ }
+}
diff --git a/infra/modules/openai/outputs.tf b/infra/modules/openai/outputs.tf
new file mode 100644
index 0000000..d69982b
--- /dev/null
+++ b/infra/modules/openai/outputs.tf
@@ -0,0 +1,29 @@
+###############################################################################
+# Cognitive Mesh — Azure OpenAI Module Outputs
+###############################################################################
+
+output "cognitive_account_id" {
+ description = "The ID of the Azure OpenAI account."
+ value = azurerm_cognitive_account.openai.id
+}
+
+output "cognitive_account_name" {
+ description = "The name of the Azure OpenAI account."
+ value = azurerm_cognitive_account.openai.name
+}
+
+output "endpoint" {
+ description = "The endpoint of the Azure OpenAI account."
+ value = azurerm_cognitive_account.openai.endpoint
+}
+
+output "primary_access_key" {
+ description = "The primary access key for the Azure OpenAI account."
+ value = azurerm_cognitive_account.openai.primary_access_key
+ sensitive = true
+}
+
+output "deployment_ids" {
+ description = "Map of deployment names to their IDs."
+ value = { for k, v in azurerm_cognitive_deployment.deployments : k => v.id }
+}
diff --git a/infra/modules/openai/variables.tf b/infra/modules/openai/variables.tf
new file mode 100644
index 0000000..6e56835
--- /dev/null
+++ b/infra/modules/openai/variables.tf
@@ -0,0 +1,81 @@
+###############################################################################
+# Cognitive Mesh — Azure OpenAI Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "sku_name" {
+ description = "SKU name for the Cognitive Services account."
+ type = string
+ default = "S0"
+}
+
+variable "network_default_action" {
+ description = "Default network action (Allow or Deny)."
+ type = string
+ default = "Allow"
+}
+
+variable "allowed_ip_ranges" {
+ description = "List of allowed IP ranges for network ACL."
+ type = list(string)
+ default = []
+}
+
+variable "allowed_subnet_ids" {
+ description = "List of allowed subnet IDs for network ACL."
+ type = list(string)
+ default = []
+}
+
+variable "model_deployments" {
+ description = "Map of model deployments to create."
+ type = map(object({
+ model_name = string
+ model_version = string
+ sku_name = optional(string, "Standard")
+ sku_capacity = optional(number, 10)
+ }))
+ default = {
+ "gpt-4o" = {
+ model_name = "gpt-4o"
+ model_version = "2024-11-20"
+ sku_name = "GlobalStandard"
+ sku_capacity = 10
+ }
+ "text-embedding-3-large" = {
+ model_name = "text-embedding-3-large"
+ model_version = "1"
+ sku_name = "Standard"
+ sku_capacity = 10
+ }
+ }
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/qdrant/main.tf b/infra/modules/qdrant/main.tf
new file mode 100644
index 0000000..a2c9292
--- /dev/null
+++ b/infra/modules/qdrant/main.tf
@@ -0,0 +1,49 @@
+###############################################################################
+# Cognitive Mesh — Qdrant Module
+# Provisions Qdrant vector database as an Azure Container Instance.
+###############################################################################
+
+resource "azurerm_container_group" "qdrant" {
+ name = "${var.project_name}-qdrant-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ os_type = "Linux"
+ ip_address_type = var.ip_address_type
+ subnet_ids = var.subnet_ids
+
+ container {
+ name = "qdrant"
+ image = var.qdrant_image
+ cpu = var.cpu_cores
+ memory = var.memory_gb
+
+ ports {
+ port = 6333
+ protocol = "TCP"
+ }
+
+ ports {
+ port = 6334
+ protocol = "TCP"
+ }
+
+ environment_variables = {
+ QDRANT__SERVICE__GRPC_PORT = "6334"
+ }
+
+ volume {
+ name = "qdrant-storage"
+ mount_path = "/qdrant/storage"
+
+ empty_dir = var.use_persistent_storage ? false : true
+
+ storage_account_name = var.use_persistent_storage ? var.storage_account_name : null
+ storage_account_key = var.use_persistent_storage ? var.storage_account_key : null
+ share_name = var.use_persistent_storage ? var.file_share_name : null
+ }
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "qdrant"
+ })
+}
diff --git a/infra/modules/qdrant/outputs.tf b/infra/modules/qdrant/outputs.tf
new file mode 100644
index 0000000..01c8358
--- /dev/null
+++ b/infra/modules/qdrant/outputs.tf
@@ -0,0 +1,28 @@
+###############################################################################
+# Cognitive Mesh — Qdrant Module Outputs
+###############################################################################
+
+output "container_group_id" {
+ description = "The ID of the container group."
+ value = azurerm_container_group.qdrant.id
+}
+
+output "container_group_name" {
+ description = "The name of the container group."
+ value = azurerm_container_group.qdrant.name
+}
+
+output "ip_address" {
+ description = "The IP address of the Qdrant instance."
+ value = azurerm_container_group.qdrant.ip_address
+}
+
+output "http_endpoint" {
+ description = "The HTTP endpoint for the Qdrant REST API."
+ value = "http://${azurerm_container_group.qdrant.ip_address}:6333"
+}
+
+output "grpc_endpoint" {
+ description = "The gRPC endpoint for the Qdrant API."
+ value = "${azurerm_container_group.qdrant.ip_address}:6334"
+}
diff --git a/infra/modules/qdrant/variables.tf b/infra/modules/qdrant/variables.tf
new file mode 100644
index 0000000..48ec83f
--- /dev/null
+++ b/infra/modules/qdrant/variables.tf
@@ -0,0 +1,88 @@
+###############################################################################
+# Cognitive Mesh — Qdrant Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "qdrant_image" {
+ description = "Docker image for Qdrant."
+ type = string
+ default = "qdrant/qdrant:v1.12.5"
+}
+
+variable "cpu_cores" {
+ description = "Number of CPU cores for the Qdrant container."
+ type = number
+ default = 1
+}
+
+variable "memory_gb" {
+ description = "Memory in GB for the Qdrant container."
+ type = number
+ default = 2
+}
+
+variable "ip_address_type" {
+ description = "IP address type for the container group (Public or Private)."
+ type = string
+ default = "Private"
+}
+
+variable "subnet_ids" {
+ description = "List of subnet IDs for private networking."
+ type = list(string)
+ default = []
+}
+
+variable "use_persistent_storage" {
+ description = "Whether to use persistent Azure File Share storage."
+ type = bool
+ default = false
+}
+
+variable "storage_account_name" {
+ description = "Storage account name for persistent volume (required if use_persistent_storage=true)."
+ type = string
+ default = null
+}
+
+variable "storage_account_key" {
+ description = "Storage account key for persistent volume (required if use_persistent_storage=true)."
+ type = string
+ default = null
+ sensitive = true
+}
+
+variable "file_share_name" {
+ description = "Azure File Share name for persistent volume."
+ type = string
+ default = "qdrant-data"
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/redis/main.tf b/infra/modules/redis/main.tf
new file mode 100644
index 0000000..cfccb50
--- /dev/null
+++ b/infra/modules/redis/main.tf
@@ -0,0 +1,31 @@
+###############################################################################
+# Cognitive Mesh — Redis Module
+# Provisions Azure Cache for Redis.
+# Dev: Basic C0 | Staging/Prod: Standard C1+
+###############################################################################
+
+locals {
+ is_production = var.environment == "prod" || var.environment == "staging"
+ sku_name = local.is_production ? var.prod_sku_name : "Basic"
+ family = local.is_production ? var.prod_family : "C"
+ capacity = local.is_production ? var.prod_capacity : 0
+}
+
+resource "azurerm_redis_cache" "this" {
+ name = "${var.project_name}-redis-${var.environment}"
+ location = var.location
+ resource_group_name = var.resource_group_name
+ capacity = local.capacity
+ family = local.family
+ sku_name = local.sku_name
+
+ minimum_tls_version = "1.2"
+
+ redis_configuration {
+ maxmemory_policy = var.maxmemory_policy
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "redis"
+ })
+}
diff --git a/infra/modules/redis/outputs.tf b/infra/modules/redis/outputs.tf
new file mode 100644
index 0000000..382aa52
--- /dev/null
+++ b/infra/modules/redis/outputs.tf
@@ -0,0 +1,35 @@
+###############################################################################
+# Cognitive Mesh — Redis Module Outputs
+###############################################################################
+
+output "redis_cache_id" {
+ description = "The ID of the Redis cache."
+ value = azurerm_redis_cache.this.id
+}
+
+output "redis_cache_name" {
+ description = "The name of the Redis cache."
+ value = azurerm_redis_cache.this.name
+}
+
+output "hostname" {
+ description = "The hostname of the Redis instance."
+ value = azurerm_redis_cache.this.hostname
+}
+
+output "ssl_port" {
+ description = "The SSL port of the Redis instance."
+ value = azurerm_redis_cache.this.ssl_port
+}
+
+output "primary_access_key" {
+ description = "The primary access key for the Redis cache."
+ value = azurerm_redis_cache.this.primary_access_key
+ sensitive = true
+}
+
+output "primary_connection_string" {
+ description = "The primary connection string for the Redis cache."
+ value = azurerm_redis_cache.this.primary_connection_string
+ sensitive = true
+}
diff --git a/infra/modules/redis/variables.tf b/infra/modules/redis/variables.tf
new file mode 100644
index 0000000..92e2d17
--- /dev/null
+++ b/infra/modules/redis/variables.tf
@@ -0,0 +1,57 @@
+###############################################################################
+# Cognitive Mesh — Redis Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "prod_sku_name" {
+ description = "SKU name for staging/prod environments."
+ type = string
+ default = "Standard"
+}
+
+variable "prod_family" {
+ description = "Redis family for staging/prod environments."
+ type = string
+ default = "C"
+}
+
+variable "prod_capacity" {
+ description = "Redis capacity for staging/prod environments."
+ type = number
+ default = 1
+}
+
+variable "maxmemory_policy" {
+ description = "Max memory eviction policy."
+ type = string
+ default = "allkeys-lru"
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/modules/storage/main.tf b/infra/modules/storage/main.tf
new file mode 100644
index 0000000..b041612
--- /dev/null
+++ b/infra/modules/storage/main.tf
@@ -0,0 +1,40 @@
+###############################################################################
+# Cognitive Mesh — Storage Module
+# Provisions an Azure Storage Account (StorageV2, LRS) with containers.
+###############################################################################
+
+resource "azurerm_storage_account" "this" {
+ name = replace("${var.project_name}st${var.environment}", "-", "")
+ resource_group_name = var.resource_group_name
+ location = var.location
+ account_tier = var.account_tier
+ account_replication_type = var.replication_type
+ account_kind = "StorageV2"
+ min_tls_version = "TLS1_2"
+
+ https_traffic_only_enabled = true
+
+ blob_properties {
+ versioning_enabled = var.enable_versioning
+
+ delete_retention_policy {
+ days = var.soft_delete_retention_days
+ }
+
+ container_delete_retention_policy {
+ days = var.soft_delete_retention_days
+ }
+ }
+
+ tags = merge(var.common_tags, {
+ Module = "storage"
+ })
+}
+
+resource "azurerm_storage_container" "containers" {
+ for_each = var.containers
+
+ name = each.key
+ storage_account_id = azurerm_storage_account.this.id
+ container_access_type = each.value.access_type
+}
diff --git a/infra/modules/storage/outputs.tf b/infra/modules/storage/outputs.tf
new file mode 100644
index 0000000..c371a05
--- /dev/null
+++ b/infra/modules/storage/outputs.tf
@@ -0,0 +1,30 @@
+###############################################################################
+# Cognitive Mesh — Storage Module Outputs
+###############################################################################
+
+output "storage_account_id" {
+ description = "The ID of the storage account."
+ value = azurerm_storage_account.this.id
+}
+
+output "storage_account_name" {
+ description = "The name of the storage account."
+ value = azurerm_storage_account.this.name
+}
+
+output "primary_blob_endpoint" {
+ description = "The primary blob endpoint URL."
+ value = azurerm_storage_account.this.primary_blob_endpoint
+}
+
+output "primary_access_key" {
+ description = "The primary access key for the storage account."
+ value = azurerm_storage_account.this.primary_access_key
+ sensitive = true
+}
+
+output "primary_connection_string" {
+ description = "The primary connection string for the storage account."
+ value = azurerm_storage_account.this.primary_connection_string
+ sensitive = true
+}
diff --git a/infra/modules/storage/variables.tf b/infra/modules/storage/variables.tf
new file mode 100644
index 0000000..f25bc8f
--- /dev/null
+++ b/infra/modules/storage/variables.tf
@@ -0,0 +1,82 @@
+###############################################################################
+# Cognitive Mesh — Storage Module Variables
+###############################################################################
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for resource naming."
+ type = string
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Azure region for resource deployment."
+ type = string
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy into."
+ type = string
+}
+
+variable "account_tier" {
+ description = "Performance tier of the storage account (Standard or Premium)."
+ type = string
+ default = "Standard"
+}
+
+variable "replication_type" {
+ description = "Replication type for the storage account."
+ type = string
+ default = "LRS"
+ validation {
+ condition = contains(["LRS", "GRS", "RAGRS", "ZRS", "GZRS", "RAGZRS"], var.replication_type)
+ error_message = "Must be one of: LRS, GRS, RAGRS, ZRS, GZRS, RAGZRS."
+ }
+}
+
+variable "enable_versioning" {
+ description = "Enable blob versioning."
+ type = bool
+ default = true
+}
+
+variable "soft_delete_retention_days" {
+ description = "Number of days to retain soft-deleted blobs."
+ type = number
+ default = 7
+}
+
+variable "containers" {
+ description = "Map of storage containers to create."
+ type = map(object({
+ access_type = optional(string, "private")
+ }))
+ default = {
+ "workflow-checkpoints" = {
+ access_type = "private"
+ }
+ "agent-artifacts" = {
+ access_type = "private"
+ }
+ "reasoning-outputs" = {
+ access_type = "private"
+ }
+ "tfstate" = {
+ access_type = "private"
+ }
+ }
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {}
+}
diff --git a/infra/outputs.tf b/infra/outputs.tf
new file mode 100644
index 0000000..fea96e7
--- /dev/null
+++ b/infra/outputs.tf
@@ -0,0 +1,125 @@
+###############################################################################
+# Cognitive Mesh — Root Module Outputs
+###############################################################################
+
+# ---------- Resource Group ----------
+
+output "resource_group_name" {
+ description = "The name of the resource group."
+ value = azurerm_resource_group.this.name
+}
+
+output "resource_group_id" {
+ description = "The ID of the resource group."
+ value = azurerm_resource_group.this.id
+}
+
+# ---------- Networking ----------
+
+output "vnet_id" {
+ description = "The ID of the virtual network."
+ value = module.networking.vnet_id
+}
+
+output "subnet_ids" {
+ description = "Map of subnet names to their IDs."
+ value = module.networking.subnet_ids
+}
+
+# ---------- CosmosDB ----------
+
+output "cosmosdb_endpoint" {
+ description = "The Cosmos DB account endpoint."
+ value = module.cosmosdb.account_endpoint
+}
+
+output "cosmosdb_database_name" {
+ description = "The Cosmos DB database name."
+ value = module.cosmosdb.database_name
+}
+
+# ---------- Storage ----------
+
+output "storage_account_name" {
+ description = "The storage account name."
+ value = module.storage.storage_account_name
+}
+
+output "storage_blob_endpoint" {
+ description = "The primary blob endpoint."
+ value = module.storage.primary_blob_endpoint
+}
+
+# ---------- Redis ----------
+
+output "redis_hostname" {
+ description = "The Redis cache hostname."
+ value = module.redis.hostname
+}
+
+output "redis_ssl_port" {
+ description = "The Redis cache SSL port."
+ value = module.redis.ssl_port
+}
+
+# ---------- Qdrant ----------
+
+output "qdrant_http_endpoint" {
+ description = "The Qdrant REST API endpoint."
+ value = module.qdrant.http_endpoint
+}
+
+output "qdrant_grpc_endpoint" {
+ description = "The Qdrant gRPC endpoint."
+ value = module.qdrant.grpc_endpoint
+}
+
+# ---------- Azure OpenAI ----------
+
+output "openai_endpoint" {
+ description = "The Azure OpenAI endpoint."
+ value = module.openai.endpoint
+}
+
+output "openai_deployment_ids" {
+ description = "Map of OpenAI deployment names to IDs."
+ value = module.openai.deployment_ids
+}
+
+# ---------- AI Search ----------
+
+output "search_service_name" {
+ description = "The AI Search service name."
+ value = module.ai_search.search_service_name
+}
+
+# ---------- Key Vault ----------
+
+output "key_vault_uri" {
+ description = "The Key Vault URI."
+ value = module.keyvault.key_vault_uri
+}
+
+output "key_vault_name" {
+ description = "The Key Vault name."
+ value = module.keyvault.key_vault_name
+}
+
+# ---------- Monitoring ----------
+
+output "application_insights_connection_string" {
+ description = "The Application Insights connection string."
+ value = module.monitoring.application_insights_connection_string
+ sensitive = true
+}
+
+output "application_insights_instrumentation_key" {
+ description = "The Application Insights instrumentation key."
+ value = module.monitoring.application_insights_instrumentation_key
+ sensitive = true
+}
+
+output "log_analytics_workspace_id" {
+ description = "The Log Analytics Workspace ID."
+ value = module.monitoring.log_analytics_workspace_id
+}
diff --git a/infra/providers.tf b/infra/providers.tf
new file mode 100644
index 0000000..d7c02f6
--- /dev/null
+++ b/infra/providers.tf
@@ -0,0 +1,30 @@
+###############################################################################
+# Cognitive Mesh — Provider Configuration
+###############################################################################
+
+terraform {
+ required_version = ">= 1.7.0"
+
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~> 4.0"
+ }
+ }
+}
+
+provider "azurerm" {
+ features {
+ key_vault {
+ purge_soft_delete_on_destroy = false
+ }
+
+ resource_group {
+ prevent_deletion_if_contains_resources = true
+ }
+
+ cognitive_account {
+ purge_soft_delete_on_destroy = false
+ }
+ }
+}
diff --git a/infra/terragrunt.hcl b/infra/terragrunt.hcl
new file mode 100644
index 0000000..80f3b1b
--- /dev/null
+++ b/infra/terragrunt.hcl
@@ -0,0 +1,58 @@
+###############################################################################
+# Cognitive Mesh — Root Terragrunt Configuration
+# Shared settings inherited by all environment configurations.
+###############################################################################
+
+# Generate the Azure provider block in each child module
+generate "provider" {
+ path = "provider.tf"
+ if_exists = "overwrite_terragrunt"
+ contents = <<-EOF
+ terraform {
+ required_version = ">= 1.7.0"
+
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~> 4.0"
+ }
+ }
+ }
+
+ provider "azurerm" {
+ features {
+ key_vault {
+ purge_soft_delete_on_destroy = false
+ }
+ resource_group {
+ prevent_deletion_if_contains_resources = true
+ }
+ }
+ }
+ EOF
+}
+
+# Configure remote state storage in Azure Blob
+remote_state {
+ backend = "azurerm"
+ generate = {
+ path = "backend.tf"
+ if_exists = "overwrite_terragrunt"
+ }
+ config = {
+ resource_group_name = "cognitive-mesh-tfstate-rg"
+ storage_account_name = "cognitivemeshtfstate"
+ container_name = "tfstate"
+ key = "${path_relative_to_include()}/terraform.tfstate"
+ }
+}
+
+# Common input variables inherited by all environments
+inputs = {
+ project_name = "cognitive-mesh"
+
+ common_tags = {
+ Project = "CognitiveMesh"
+ ManagedBy = "terraform"
+ }
+}
diff --git a/infra/variables.tf b/infra/variables.tf
new file mode 100644
index 0000000..2c8e5ed
--- /dev/null
+++ b/infra/variables.tf
@@ -0,0 +1,150 @@
+###############################################################################
+# Cognitive Mesh — Root Module Variables
+###############################################################################
+
+# ---------- General ----------
+
+variable "project_name" {
+ description = "Name of the project, used as a prefix for all resource names."
+ type = string
+ default = "cognitive-mesh"
+}
+
+variable "environment" {
+ description = "Deployment environment (dev, staging, prod)."
+ type = string
+ validation {
+ condition = contains(["dev", "staging", "prod"], var.environment)
+ error_message = "Environment must be one of: dev, staging, prod."
+ }
+}
+
+variable "location" {
+ description = "Primary Azure region for resource deployment."
+ type = string
+ default = "westeurope"
+}
+
+variable "resource_group_name" {
+ description = "Name of the resource group to deploy all resources into."
+ type = string
+}
+
+variable "common_tags" {
+ description = "Common tags applied to all resources."
+ type = map(string)
+ default = {
+ Project = "CognitiveMesh"
+ ManagedBy = "terraform"
+ }
+}
+
+# ---------- CosmosDB ----------
+
+variable "cosmosdb_consistency_level" {
+ description = "Consistency level for the Cosmos DB account."
+ type = string
+ default = "Session"
+}
+
+variable "cosmosdb_database_name" {
+ description = "Name of the Cosmos DB SQL database."
+ type = string
+ default = "cognitive-mesh-db"
+}
+
+# ---------- Redis ----------
+
+variable "redis_prod_sku_name" {
+ description = "Redis SKU for staging/prod environments."
+ type = string
+ default = "Standard"
+}
+
+variable "redis_prod_capacity" {
+ description = "Redis capacity for staging/prod environments."
+ type = number
+ default = 1
+}
+
+# ---------- Qdrant ----------
+
+variable "qdrant_cpu_cores" {
+ description = "CPU cores for the Qdrant container."
+ type = number
+ default = 1
+}
+
+variable "qdrant_memory_gb" {
+ description = "Memory in GB for the Qdrant container."
+ type = number
+ default = 2
+}
+
+variable "qdrant_image" {
+ description = "Docker image for Qdrant."
+ type = string
+ default = "qdrant/qdrant:v1.12.5"
+}
+
+# ---------- OpenAI ----------
+
+variable "openai_model_deployments" {
+ description = "Map of Azure OpenAI model deployments."
+ type = map(object({
+ model_name = string
+ model_version = string
+ sku_name = optional(string, "Standard")
+ sku_capacity = optional(number, 10)
+ }))
+ default = {
+ "gpt-4o" = {
+ model_name = "gpt-4o"
+ model_version = "2024-11-20"
+ sku_name = "GlobalStandard"
+ sku_capacity = 10
+ }
+ "text-embedding-3-large" = {
+ model_name = "text-embedding-3-large"
+ model_version = "1"
+ sku_name = "Standard"
+ sku_capacity = 10
+ }
+ }
+}
+
+# ---------- AI Search ----------
+
+variable "search_sku" {
+ description = "SKU tier for Azure AI Search."
+ type = string
+ default = "basic"
+}
+
+# ---------- Monitoring ----------
+
+variable "log_retention_days" {
+ description = "Log Analytics data retention in days."
+ type = number
+ default = 30
+}
+
+variable "appinsights_retention_days" {
+ description = "Application Insights data retention in days."
+ type = number
+ default = 90
+}
+
+# ---------- Networking ----------
+
+variable "vnet_address_space" {
+ description = "Address space for the virtual network."
+ type = list(string)
+ default = ["10.0.0.0/16"]
+}
+
+variable "enable_private_endpoints" {
+ description = "Whether to create private endpoints for services."
+ type = bool
+ default = false
+}
diff --git a/k8s/base/configmap.yaml b/k8s/base/configmap.yaml
new file mode 100644
index 0000000..2b7dc57
--- /dev/null
+++ b/k8s/base/configmap.yaml
@@ -0,0 +1,23 @@
+###############################################################################
+# Cognitive Mesh — Base ConfigMap
+# Non-sensitive configuration values. Secrets are stored in K8s Secrets
+# (populated from Azure Key Vault via CSI driver or external-secrets).
+###############################################################################
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: cognitive-mesh-config
+ labels:
+ app: cognitive-mesh
+data:
+ ASPNETCORE_ENVIRONMENT: "Production"
+ COSMOSDB_DATABASE: "cognitive-mesh-db"
+ # Values below are overridden per environment via Kustomize overlays
+ KEY_VAULT_URI: "https://cognitive-mesh-kv.vault.azure.net/"
+ COSMOSDB_ENDPOINT: "https://cognitive-mesh-cosmos.documents.azure.com:443/"
+ REDIS_HOSTNAME: "cognitive-mesh-redis.redis.cache.windows.net"
+ REDIS_SSL_PORT: "6380"
+ QDRANT_ENDPOINT: "http://cognitive-mesh-qdrant:6333"
+ OPENAI_ENDPOINT: "https://cognitive-mesh-openai.openai.azure.com/"
+ SEARCH_ENDPOINT: "https://cognitive-mesh-search.search.windows.net"
+ LOG_LEVEL: "Information"
diff --git a/k8s/base/deployment.yaml b/k8s/base/deployment.yaml
new file mode 100644
index 0000000..379d29d
--- /dev/null
+++ b/k8s/base/deployment.yaml
@@ -0,0 +1,115 @@
+###############################################################################
+# Cognitive Mesh — Base Deployment
+###############################################################################
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: cognitive-mesh-api
+ labels:
+ app: cognitive-mesh
+ component: api
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: cognitive-mesh
+ component: api
+ template:
+ metadata:
+ labels:
+ app: cognitive-mesh
+ component: api
+ spec:
+ serviceAccountName: cognitive-mesh
+ containers:
+ - name: cognitive-mesh-api
+ image: cognitive-mesh-api:latest
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ - name: health
+ containerPort: 8081
+ protocol: TCP
+ env:
+ - name: ASPNETCORE_ENVIRONMENT
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: ASPNETCORE_ENVIRONMENT
+ - name: ASPNETCORE_URLS
+ value: "http://+:8080"
+ - name: KeyVault__VaultUri
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: KEY_VAULT_URI
+ - name: CosmosDb__Endpoint
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: COSMOSDB_ENDPOINT
+ - name: CosmosDb__DatabaseName
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: COSMOSDB_DATABASE
+ - name: Redis__Hostname
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: REDIS_HOSTNAME
+ - name: Qdrant__Endpoint
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: QDRANT_ENDPOINT
+ - name: OpenAI__Endpoint
+ valueFrom:
+ configMapKeyRef:
+ name: cognitive-mesh-config
+ key: OPENAI_ENDPOINT
+ - name: ApplicationInsights__ConnectionString
+ valueFrom:
+ secretKeyRef:
+ name: cognitive-mesh-secrets
+ key: APPINSIGHTS_CONNECTION_STRING
+ resources:
+ requests:
+ cpu: 250m
+ memory: 512Mi
+ limits:
+ cpu: "1"
+ memory: 1Gi
+ livenessProbe:
+ httpGet:
+ path: /healthz
+ port: health
+ initialDelaySeconds: 15
+ periodSeconds: 20
+ failureThreshold: 3
+ readinessProbe:
+ httpGet:
+ path: /readyz
+ port: health
+ initialDelaySeconds: 5
+ periodSeconds: 10
+ failureThreshold: 3
+ startupProbe:
+ httpGet:
+ path: /healthz
+ port: health
+ initialDelaySeconds: 10
+ periodSeconds: 5
+ failureThreshold: 30
+ volumeMounts:
+ - name: tmp
+ mountPath: /tmp
+ volumes:
+ - name: tmp
+ emptyDir: {}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 1000
+ fsGroup: 1000
+ terminationGracePeriodSeconds: 30
diff --git a/k8s/base/kustomization.yaml b/k8s/base/kustomization.yaml
new file mode 100644
index 0000000..4051eb0
--- /dev/null
+++ b/k8s/base/kustomization.yaml
@@ -0,0 +1,17 @@
+###############################################################################
+# Cognitive Mesh — Base Kustomization
+###############################################################################
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+metadata:
+ name: cognitive-mesh-base
+
+commonLabels:
+ app.kubernetes.io/name: cognitive-mesh
+ app.kubernetes.io/managed-by: kustomize
+
+resources:
+ - deployment.yaml
+ - service.yaml
+ - configmap.yaml
diff --git a/k8s/base/service.yaml b/k8s/base/service.yaml
new file mode 100644
index 0000000..923ff45
--- /dev/null
+++ b/k8s/base/service.yaml
@@ -0,0 +1,24 @@
+###############################################################################
+# Cognitive Mesh — Base Service
+###############################################################################
+apiVersion: v1
+kind: Service
+metadata:
+ name: cognitive-mesh-api
+ labels:
+ app: cognitive-mesh
+ component: api
+spec:
+ type: ClusterIP
+ selector:
+ app: cognitive-mesh
+ component: api
+ ports:
+ - name: http
+ port: 80
+ targetPort: http
+ protocol: TCP
+ - name: health
+ port: 8081
+ targetPort: health
+ protocol: TCP
diff --git a/k8s/overlays/dev/kustomization.yaml b/k8s/overlays/dev/kustomization.yaml
new file mode 100644
index 0000000..dbee6c4
--- /dev/null
+++ b/k8s/overlays/dev/kustomization.yaml
@@ -0,0 +1,74 @@
+###############################################################################
+# Cognitive Mesh — Dev Overlay
+###############################################################################
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+metadata:
+ name: cognitive-mesh-dev
+
+namespace: cognitive-mesh-dev
+
+commonLabels:
+ app.kubernetes.io/instance: dev
+ environment: dev
+
+resources:
+ - ../../base
+
+patches:
+ # Scale down for dev
+ - target:
+ kind: Deployment
+ name: cognitive-mesh-api
+ patch: |
+ - op: replace
+ path: /spec/replicas
+ value: 1
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/cpu
+ value: 100m
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/memory
+ value: 256Mi
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/cpu
+ value: 500m
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/memory
+ value: 512Mi
+
+ # Dev environment config
+ - target:
+ kind: ConfigMap
+ name: cognitive-mesh-config
+ patch: |
+ - op: replace
+ path: /data/ASPNETCORE_ENVIRONMENT
+ value: Development
+ - op: replace
+ path: /data/KEY_VAULT_URI
+ value: "https://cognitive-mesh-kv-dev.vault.azure.net/"
+ - op: replace
+ path: /data/COSMOSDB_ENDPOINT
+ value: "https://cognitive-mesh-cosmos-dev.documents.azure.com:443/"
+ - op: replace
+ path: /data/REDIS_HOSTNAME
+ value: "cognitive-mesh-redis-dev.redis.cache.windows.net"
+ - op: replace
+ path: /data/QDRANT_ENDPOINT
+ value: "http://cognitive-mesh-qdrant-dev:6333"
+ - op: replace
+ path: /data/OPENAI_ENDPOINT
+ value: "https://cognitive-mesh-openai-dev.openai.azure.com/"
+ - op: replace
+ path: /data/SEARCH_ENDPOINT
+ value: "https://cognitive-mesh-search-dev.search.windows.net"
+ - op: replace
+ path: /data/LOG_LEVEL
+ value: Debug
+
+images:
+ - name: cognitive-mesh-api
+ newName: cognitivemeshacr.azurecr.io/cognitive-mesh-api
+ newTag: dev-latest
diff --git a/k8s/overlays/prod/kustomization.yaml b/k8s/overlays/prod/kustomization.yaml
new file mode 100644
index 0000000..ddbfc81
--- /dev/null
+++ b/k8s/overlays/prod/kustomization.yaml
@@ -0,0 +1,87 @@
+###############################################################################
+# Cognitive Mesh — Production Overlay
+###############################################################################
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+metadata:
+ name: cognitive-mesh-prod
+
+namespace: cognitive-mesh-prod
+
+commonLabels:
+ app.kubernetes.io/instance: prod
+ environment: prod
+
+resources:
+ - ../../base
+
+patches:
+ # Production: higher replicas and resources
+ - target:
+ kind: Deployment
+ name: cognitive-mesh-api
+ patch: |
+ - op: replace
+ path: /spec/replicas
+ value: 3
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/cpu
+ value: 500m
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/memory
+ value: 1Gi
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/cpu
+ value: "2"
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/memory
+ value: 2Gi
+
+ # Production environment config
+ - target:
+ kind: ConfigMap
+ name: cognitive-mesh-config
+ patch: |
+ - op: replace
+ path: /data/ASPNETCORE_ENVIRONMENT
+ value: Production
+ - op: replace
+ path: /data/KEY_VAULT_URI
+ value: "https://cognitive-mesh-kv-prod.vault.azure.net/"
+ - op: replace
+ path: /data/COSMOSDB_ENDPOINT
+ value: "https://cognitive-mesh-cosmos-prod.documents.azure.com:443/"
+ - op: replace
+ path: /data/REDIS_HOSTNAME
+ value: "cognitive-mesh-redis-prod.redis.cache.windows.net"
+ - op: replace
+ path: /data/QDRANT_ENDPOINT
+ value: "http://cognitive-mesh-qdrant-prod:6333"
+ - op: replace
+ path: /data/OPENAI_ENDPOINT
+ value: "https://cognitive-mesh-openai-prod.openai.azure.com/"
+ - op: replace
+ path: /data/SEARCH_ENDPOINT
+ value: "https://cognitive-mesh-search-prod.search.windows.net"
+ - op: replace
+ path: /data/LOG_LEVEL
+ value: Warning
+
+ # Add pod disruption budget for production
+ - target:
+ kind: Deployment
+ name: cognitive-mesh-api
+ patch: |
+ - op: add
+ path: /spec/strategy
+ value:
+ type: RollingUpdate
+ rollingUpdate:
+ maxSurge: 1
+ maxUnavailable: 0
+
+images:
+ - name: cognitive-mesh-api
+ newName: cognitivemeshacr.azurecr.io/cognitive-mesh-api
+ newTag: prod-latest
diff --git a/k8s/overlays/staging/kustomization.yaml b/k8s/overlays/staging/kustomization.yaml
new file mode 100644
index 0000000..e763b0b
--- /dev/null
+++ b/k8s/overlays/staging/kustomization.yaml
@@ -0,0 +1,74 @@
+###############################################################################
+# Cognitive Mesh — Staging Overlay
+###############################################################################
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+metadata:
+ name: cognitive-mesh-staging
+
+namespace: cognitive-mesh-staging
+
+commonLabels:
+ app.kubernetes.io/instance: staging
+ environment: staging
+
+resources:
+ - ../../base
+
+patches:
+ # Staging replica count
+ - target:
+ kind: Deployment
+ name: cognitive-mesh-api
+ patch: |
+ - op: replace
+ path: /spec/replicas
+ value: 2
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/cpu
+ value: 250m
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/requests/memory
+ value: 512Mi
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/cpu
+ value: "1"
+ - op: replace
+ path: /spec/template/spec/containers/0/resources/limits/memory
+ value: 1Gi
+
+ # Staging environment config
+ - target:
+ kind: ConfigMap
+ name: cognitive-mesh-config
+ patch: |
+ - op: replace
+ path: /data/ASPNETCORE_ENVIRONMENT
+ value: Staging
+ - op: replace
+ path: /data/KEY_VAULT_URI
+ value: "https://cognitive-mesh-kv-staging.vault.azure.net/"
+ - op: replace
+ path: /data/COSMOSDB_ENDPOINT
+ value: "https://cognitive-mesh-cosmos-staging.documents.azure.com:443/"
+ - op: replace
+ path: /data/REDIS_HOSTNAME
+ value: "cognitive-mesh-redis-staging.redis.cache.windows.net"
+ - op: replace
+ path: /data/QDRANT_ENDPOINT
+ value: "http://cognitive-mesh-qdrant-staging:6333"
+ - op: replace
+ path: /data/OPENAI_ENDPOINT
+ value: "https://cognitive-mesh-openai-staging.openai.azure.com/"
+ - op: replace
+ path: /data/SEARCH_ENDPOINT
+ value: "https://cognitive-mesh-search-staging.search.windows.net"
+ - op: replace
+ path: /data/LOG_LEVEL
+ value: Information
+
+images:
+ - name: cognitive-mesh-api
+ newName: cognitivemeshacr.azurecr.io/cognitive-mesh-api
+ newTag: staging-latest
diff --git a/src/AgencyLayer/ActionPlanning/ActionPlanner.cs b/src/AgencyLayer/ActionPlanning/ActionPlanner.cs
index 7fce85a..361e3c9 100644
--- a/src/AgencyLayer/ActionPlanning/ActionPlanner.cs
+++ b/src/AgencyLayer/ActionPlanning/ActionPlanner.cs
@@ -23,6 +23,9 @@ public class ActionPlanner : IActionPlanner
private readonly ISemanticSearchManager _semanticSearchManager;
private readonly IMessageBus _bus;
+ ///
+ /// Initializes a new instance of the class.
+ ///
public ActionPlanner(
ILogger logger,
IKnowledgeGraphManager knowledgeGraphManager,
@@ -86,7 +89,7 @@ Generate a list of action steps to achieve the goal.
};
// Step 4: Call LLM
- var response = await _llmClient.GenerateChatCompletionAsync(messages, temperature: 0.3f, cancellationToken: cancellationToken);
+ var response = await _llmClient.GenerateChatCompletionAsync(messages, temperature: 0.3f);
// Step 5: Parse Response
var plans = ParsePlans(response);
@@ -97,7 +100,7 @@ Generate a list of action steps to achieve the goal.
if (plan.Status != ActionPlanStatus.Failed)
{
await _knowledgeGraphManager.AddNodeAsync(plan.Id, plan, NodeLabels.ActionPlan, cancellationToken);
- await _bus.PublishAsync(new PlanGeneratedNotification(plan), cancellationToken: cancellationToken);
+ await _bus.PublishAsync(new PlanGeneratedNotification(plan));
}
}
@@ -224,7 +227,7 @@ public async Task ExecutePlanAsync(
}
// 5. Notify subscribers
- await _bus.PublishAsync(new PlanUpdatedNotification(plan), cancellationToken: cancellationToken);
+ await _bus.PublishAsync(new PlanUpdatedNotification(plan));
return plan;
}
@@ -246,7 +249,7 @@ public async Task UpdatePlanAsync(
{
_logger.LogInformation("Updating action plan: {PlanId}", plan.Id);
await _knowledgeGraphManager.UpdateNodeAsync(plan.Id, plan, cancellationToken);
- await _bus.PublishAsync(new PlanUpdatedNotification(plan), cancellationToken: cancellationToken);
+ await _bus.PublishAsync(new PlanUpdatedNotification(plan));
}
catch (Exception ex)
{
@@ -280,7 +283,7 @@ public async Task CancelPlanAsync(
plan.CompletedAt = DateTime.UtcNow;
await _knowledgeGraphManager.UpdateNodeAsync(planId, plan, cancellationToken);
- await _bus.PublishAsync(new PlanUpdatedNotification(plan), cancellationToken: cancellationToken);
+ await _bus.PublishAsync(new PlanUpdatedNotification(plan));
}
catch (Exception ex)
{
@@ -290,33 +293,68 @@ public async Task CancelPlanAsync(
}
}
+ ///
+ /// Represents an action plan with its current state and metadata.
+ ///
public class ActionPlan
{
+ /// Gets or sets the unique identifier of the plan.
public string Id { get; set; } = string.Empty;
+ /// Gets or sets the name of the plan.
public string Name { get; set; } = string.Empty;
+ /// Gets or sets the description of the plan.
public string Description { get; set; } = string.Empty;
+ /// Gets or sets the priority of the plan.
public int Priority { get; set; }
+ /// Gets or sets the current status of the plan.
public ActionPlanStatus Status { get; set; }
+ /// Gets or sets when the plan was created.
public DateTime CreatedAt { get; set; }
+ /// Gets or sets when the plan was completed.
public DateTime? CompletedAt { get; set; }
+ /// Gets or sets the error message if the plan failed.
public string? Error { get; set; }
+ /// Gets or sets the result of the plan execution.
public string? Result { get; set; }
}
+ ///
+ /// Represents the status of an action plan.
+ ///
public enum ActionPlanStatus
{
+ /// Plan is pending execution.
Pending,
+ /// Plan is currently being executed.
InProgress,
+ /// Plan completed successfully.
Completed,
+ /// Plan execution failed.
Failed,
+ /// Plan was cancelled.
Cancelled
}
+ ///
+ /// Defines the contract for action planning operations.
+ ///
public interface IActionPlanner
{
- Task> GeneratePlanAsync(string goal, IEnumerable constraints = null, CancellationToken cancellationToken = default);
+ ///
+ /// Generates an action plan for the specified goal and constraints.
+ ///
+ Task> GeneratePlanAsync(string goal, IEnumerable? constraints = null, CancellationToken cancellationToken = default);
+ ///
+ /// Executes the action plan with the specified identifier.
+ ///
Task ExecutePlanAsync(string planId, CancellationToken cancellationToken = default);
+ ///
+ /// Updates an existing action plan.
+ ///
Task UpdatePlanAsync(ActionPlan plan, CancellationToken cancellationToken = default);
- Task CancelPlanAsync(string planId, string reason = null, CancellationToken cancellationToken = default);
+ ///
+ /// Cancels the action plan with the specified identifier.
+ ///
+ Task CancelPlanAsync(string planId, string? reason = null, CancellationToken cancellationToken = default);
}
}
diff --git a/src/AgencyLayer/ActionPlanning/ActionPlanning.csproj b/src/AgencyLayer/ActionPlanning/ActionPlanning.csproj
index 347ef65..24365e9 100644
--- a/src/AgencyLayer/ActionPlanning/ActionPlanning.csproj
+++ b/src/AgencyLayer/ActionPlanning/ActionPlanning.csproj
@@ -14,6 +14,7 @@
+
diff --git a/src/AgencyLayer/ActionPlanning/Events/PlanGeneratedNotification.cs b/src/AgencyLayer/ActionPlanning/Events/PlanGeneratedNotification.cs
new file mode 100644
index 0000000..1c14679
--- /dev/null
+++ b/src/AgencyLayer/ActionPlanning/Events/PlanGeneratedNotification.cs
@@ -0,0 +1,10 @@
+using AgencyLayer.ActionPlanning;
+
+namespace CognitiveMesh.AgencyLayer.ActionPlanning.Events
+{
+ ///
+ /// Notification published when a new action plan has been generated.
+ ///
+ /// The generated action plan.
+ public record PlanGeneratedNotification(ActionPlan Plan);
+}
diff --git a/src/AgencyLayer/ActionPlanning/Events/PlanUpdatedNotification.cs b/src/AgencyLayer/ActionPlanning/Events/PlanUpdatedNotification.cs
new file mode 100644
index 0000000..3f38359
--- /dev/null
+++ b/src/AgencyLayer/ActionPlanning/Events/PlanUpdatedNotification.cs
@@ -0,0 +1,10 @@
+using AgencyLayer.ActionPlanning;
+
+namespace CognitiveMesh.AgencyLayer.ActionPlanning.Events
+{
+ ///
+ /// Notification published when an existing action plan has been updated.
+ ///
+ /// The updated action plan.
+ public record PlanUpdatedNotification(ActionPlan Plan);
+}
diff --git a/src/AgencyLayer/ActionPlanning/ServiceCollectionExtensions.cs b/src/AgencyLayer/ActionPlanning/ServiceCollectionExtensions.cs
index 5787eda..ebecb5b 100644
--- a/src/AgencyLayer/ActionPlanning/ServiceCollectionExtensions.cs
+++ b/src/AgencyLayer/ActionPlanning/ServiceCollectionExtensions.cs
@@ -1,11 +1,18 @@
using System;
using Microsoft.Extensions.DependencyInjection;
using CognitiveMesh.Shared.Interfaces;
+using AgencyLayer.ActionPlanning;
namespace CognitiveMesh.AgencyLayer.ActionPlanning
{
+ ///
+ /// Extension methods for registering ActionPlanning services.
+ ///
public static class ServiceCollectionExtensions
{
+ ///
+ /// Adds ActionPlanning services to the service collection.
+ ///
public static IServiceCollection AddActionPlanning(this IServiceCollection services)
{
if (services == null) throw new ArgumentNullException(nameof(services));
diff --git a/src/AgencyLayer/AgencyLayer.csproj b/src/AgencyLayer/AgencyLayer.csproj
index e6e2d31..ba56952 100644
--- a/src/AgencyLayer/AgencyLayer.csproj
+++ b/src/AgencyLayer/AgencyLayer.csproj
@@ -5,6 +5,29 @@
net9.0
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -23,8 +46,12 @@
+
+
+
+
diff --git a/src/AgencyLayer/AgencyRouter/TaskRouter.cs b/src/AgencyLayer/AgencyRouter/TaskRouter.cs
index 11a1fe6..4db1c9f 100644
--- a/src/AgencyLayer/AgencyRouter/TaskRouter.cs
+++ b/src/AgencyLayer/AgencyRouter/TaskRouter.cs
@@ -16,6 +16,12 @@ public class TaskRouter
private readonly IMultiAgentOrchestrationPort _orchestrationPort;
private readonly ILogger _logger;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The workflow engine for multi-step sequential task execution.
+ /// The multi-agent orchestration port for single-step coordination.
+ /// The logger instance.
public TaskRouter(
IWorkflowEngine workflowEngine,
IMultiAgentOrchestrationPort orchestrationPort,
diff --git a/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryAuditLoggingAdapter.cs b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryAuditLoggingAdapter.cs
new file mode 100644
index 0000000..7bfd238
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryAuditLoggingAdapter.cs
@@ -0,0 +1,58 @@
+using System.Collections.Concurrent;
+using AgencyLayer.CognitiveSandwich.Models;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace AgencyLayer.CognitiveSandwich.Adapters;
+
+///
+/// In-memory implementation of using a
+/// for development and testing scenarios.
+/// Stores audit trail entries in memory with thread-safe access.
+///
+public class InMemoryAuditLoggingAdapter : IAuditLoggingAdapter
+{
+ private readonly ConcurrentBag _entries = new();
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger instance.
+ public InMemoryAuditLoggingAdapter(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public Task LogAuditEntryAsync(PhaseAuditEntry entry, CancellationToken ct = default)
+ {
+ ArgumentNullException.ThrowIfNull(entry);
+
+ _entries.Add(entry);
+
+ _logger.LogDebug(
+ "Logged audit entry {EntryId} for process {ProcessId}: {EventType}",
+ Sanitize(entry.EntryId), Sanitize(entry.ProcessId), entry.EventType);
+
+ return Task.CompletedTask;
+ }
+
+ ///
+ public Task> GetAuditEntriesAsync(string processId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+
+ var entries = _entries
+ .Where(e => e.ProcessId == processId)
+ .OrderBy(e => e.Timestamp)
+ .ToList();
+
+ _logger.LogDebug(
+ "Retrieved {Count} audit entries for process {ProcessId}",
+ entries.Count, Sanitize(processId));
+
+ return Task.FromResult>(entries);
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryCognitiveDebtAdapter.cs b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryCognitiveDebtAdapter.cs
new file mode 100644
index 0000000..1e87d12
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryCognitiveDebtAdapter.cs
@@ -0,0 +1,69 @@
+using System.Collections.Concurrent;
+using AgencyLayer.CognitiveSandwich.Models;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace AgencyLayer.CognitiveSandwich.Adapters;
+
+///
+/// In-memory implementation of using a
+/// for development and testing scenarios.
+/// Tracks cognitive debt scores per process and supports threshold breach checks.
+///
+public class InMemoryCognitiveDebtAdapter : ICognitiveDebtPort
+{
+ private readonly ConcurrentDictionary _assessments = new();
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger instance.
+ public InMemoryCognitiveDebtAdapter(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public Task AssessDebtAsync(string processId, string phaseId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(phaseId);
+
+ var key = $"{processId}:{phaseId}";
+
+ var assessment = _assessments.GetOrAdd(key, _ => new CognitiveDebtAssessment
+ {
+ ProcessId = processId,
+ PhaseId = phaseId,
+ DebtScore = 0.0,
+ IsBreached = false,
+ Recommendations = [],
+ AssessedAt = DateTime.UtcNow
+ });
+
+ _logger.LogDebug(
+ "Assessed cognitive debt for process {ProcessId}, phase {PhaseId}: score={DebtScore}",
+ Sanitize(processId), Sanitize(phaseId), assessment.DebtScore);
+
+ return Task.FromResult(assessment);
+ }
+
+ ///
+ public Task IsThresholdBreachedAsync(string processId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+
+ // In the default in-memory adapter, threshold is never breached
+ // unless an assessment was explicitly stored with IsBreached = true
+ var breached = _assessments.Values
+ .Any(a => a.ProcessId == processId && a.IsBreached);
+
+ _logger.LogDebug(
+ "Threshold breach check for process {ProcessId}: breached={Breached}",
+ Sanitize(processId), breached);
+
+ return Task.FromResult(breached);
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryPhaseConditionAdapter.cs b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryPhaseConditionAdapter.cs
new file mode 100644
index 0000000..1f00df1
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Adapters/InMemoryPhaseConditionAdapter.cs
@@ -0,0 +1,64 @@
+using AgencyLayer.CognitiveSandwich.Models;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace AgencyLayer.CognitiveSandwich.Adapters;
+
+///
+/// In-memory implementation of for development
+/// and testing scenarios. Returns default passing condition check results,
+/// indicating all pre- and postconditions are met.
+///
+public class InMemoryPhaseConditionAdapter : IPhaseConditionPort
+{
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger instance.
+ public InMemoryPhaseConditionAdapter(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public Task CheckPreconditionsAsync(string processId, string phaseId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(phaseId);
+
+ _logger.LogDebug(
+ "Checking preconditions for process {ProcessId}, phase {PhaseId} — returning all met (in-memory default)",
+ Sanitize(processId), Sanitize(phaseId));
+
+ var result = new ConditionCheckResult
+ {
+ AllMet = true,
+ Results = []
+ };
+
+ return Task.FromResult(result);
+ }
+
+ ///
+ public Task CheckPostconditionsAsync(string processId, string phaseId, PhaseOutput output, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(phaseId);
+ ArgumentNullException.ThrowIfNull(output);
+
+ _logger.LogDebug(
+ "Checking postconditions for process {ProcessId}, phase {PhaseId} — returning all met (in-memory default)",
+ Sanitize(processId), Sanitize(phaseId));
+
+ var result = new ConditionCheckResult
+ {
+ AllMet = true,
+ Results = []
+ };
+
+ return Task.FromResult(result);
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/CognitiveSandwich.csproj b/src/AgencyLayer/CognitiveSandwich/CognitiveSandwich.csproj
new file mode 100644
index 0000000..7ba3039
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/CognitiveSandwich.csproj
@@ -0,0 +1,24 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+ $(NoWarn);1591
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/AgencyLayer/CognitiveSandwich/Controllers/CognitiveSandwichController.cs b/src/AgencyLayer/CognitiveSandwich/Controllers/CognitiveSandwichController.cs
new file mode 100644
index 0000000..3bb1490
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Controllers/CognitiveSandwichController.cs
@@ -0,0 +1,341 @@
+using AgencyLayer.CognitiveSandwich.Models;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace AgencyLayer.CognitiveSandwich.Controllers;
+
+///
+/// REST API controller for managing Cognitive Sandwich processes.
+/// Provides endpoints for creating, querying, advancing, stepping back,
+/// and auditing phase-based workflows with cognitive debt monitoring.
+///
+[ApiController]
+[Route("api/v1/cognitive-sandwich")]
+public class CognitiveSandwichController : ControllerBase
+{
+ private readonly IPhaseManagerPort _phaseManager;
+ private readonly ICognitiveDebtPort _cognitiveDebtPort;
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Port for managing Cognitive Sandwich processes.
+ /// Port for assessing cognitive debt.
+ /// Logger instance.
+ public CognitiveSandwichController(
+ IPhaseManagerPort phaseManager,
+ ICognitiveDebtPort cognitiveDebtPort,
+ ILogger logger)
+ {
+ _phaseManager = phaseManager ?? throw new ArgumentNullException(nameof(phaseManager));
+ _cognitiveDebtPort = cognitiveDebtPort ?? throw new ArgumentNullException(nameof(cognitiveDebtPort));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ /// Creates a new Cognitive Sandwich process from the given configuration.
+ ///
+ /// Configuration specifying phases, step-back limits, and thresholds.
+ /// Cancellation token.
+ /// The newly created process.
+ [HttpPost]
+ [ProducesResponseType(typeof(SandwichProcess), StatusCodes.Status201Created)]
+ [ProducesResponseType(StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task CreateSandwichProcess(
+ [FromBody] SandwichProcessConfig config,
+ CancellationToken ct)
+ {
+ try
+ {
+ if (config == null)
+ {
+ return BadRequest(new { error = "Request body is required." });
+ }
+
+ var process = await _phaseManager.CreateProcessAsync(config, ct);
+
+ _logger.LogInformation(
+ "Created Cognitive Sandwich process {ProcessId} via API",
+ process.ProcessId);
+
+ return CreatedAtAction(
+ nameof(GetSandwichProcess),
+ new { processId = process.ProcessId },
+ process);
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid configuration provided for process creation");
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error creating Cognitive Sandwich process");
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred while creating the process." });
+ }
+ }
+
+ ///
+ /// Retrieves an existing Cognitive Sandwich process by its unique identifier.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// The process details.
+ [HttpGet("{processId}")]
+ [ProducesResponseType(typeof(SandwichProcess), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task GetSandwichProcess(
+ string processId,
+ CancellationToken ct)
+ {
+ try
+ {
+ var process = await _phaseManager.GetProcessAsync(processId, ct);
+ return Ok(process);
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("not found"))
+ {
+ _logger.LogWarning("Process {ProcessId} not found", Sanitize(processId));
+ return NotFound(new { error = $"Process '{processId}' not found." });
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid processId provided");
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error retrieving process {ProcessId}", Sanitize(processId));
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred while retrieving the process." });
+ }
+ }
+
+ ///
+ /// Advances the process to its next phase after validating conditions and cognitive debt.
+ ///
+ /// The unique identifier of the process.
+ /// Context for the transition including user and optional phase output.
+ /// Cancellation token.
+ /// The result of the phase transition attempt.
+ [HttpPost("{processId}/advance")]
+ [ProducesResponseType(typeof(PhaseResult), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task AdvancePhase(
+ string processId,
+ [FromBody] PhaseTransitionContext context,
+ CancellationToken ct)
+ {
+ try
+ {
+ if (context == null)
+ {
+ return BadRequest(new { error = "Transition context is required." });
+ }
+
+ var result = await _phaseManager.TransitionToNextPhaseAsync(processId, context, ct);
+
+ _logger.LogInformation(
+ "Phase advance for process {ProcessId}: success={Success}",
+ Sanitize(processId), result.Success);
+
+ return Ok(result);
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("not found"))
+ {
+ _logger.LogWarning("Process {ProcessId} not found during advance", Sanitize(processId));
+ return NotFound(new { error = $"Process '{processId}' not found." });
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid arguments for phase advance on process {ProcessId}", Sanitize(processId));
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error advancing phase for process {ProcessId}", Sanitize(processId));
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred while advancing the phase." });
+ }
+ }
+
+ ///
+ /// Steps the process back to a prior phase, rolling back intermediate phases.
+ ///
+ /// The unique identifier of the process.
+ /// The step-back request containing target phase and reason.
+ /// Cancellation token.
+ /// The result of the step-back attempt.
+ [HttpPost("{processId}/step-back")]
+ [ProducesResponseType(typeof(PhaseResult), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task StepBack(
+ string processId,
+ [FromBody] StepBackRequest request,
+ CancellationToken ct)
+ {
+ try
+ {
+ if (request == null)
+ {
+ return BadRequest(new { error = "Step-back request is required." });
+ }
+
+ if (string.IsNullOrWhiteSpace(request.TargetPhaseId))
+ {
+ return BadRequest(new { error = "TargetPhaseId is required." });
+ }
+
+ var reason = new StepBackReason
+ {
+ Reason = request.Reason,
+ InitiatedBy = request.InitiatedBy
+ };
+
+ var result = await _phaseManager.StepBackAsync(processId, request.TargetPhaseId, reason, ct);
+
+ _logger.LogInformation(
+ "Step-back for process {ProcessId} to phase {TargetPhaseId}: success={Success}",
+ Sanitize(processId), Sanitize(request.TargetPhaseId), result.Success);
+
+ return Ok(result);
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("not found"))
+ {
+ _logger.LogWarning("Process or phase not found during step-back for process {ProcessId}", Sanitize(processId));
+ return NotFound(new { error = ex.Message });
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("must be before"))
+ {
+ _logger.LogWarning(ex, "Invalid step-back target for process {ProcessId}", Sanitize(processId));
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid arguments for step-back on process {ProcessId}", Sanitize(processId));
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error during step-back for process {ProcessId}", Sanitize(processId));
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred during the step-back operation." });
+ }
+ }
+
+ ///
+ /// Retrieves the complete audit trail for a process, ordered chronologically.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// An ordered list of audit entries for the process.
+ [HttpGet("{processId}/audit")]
+ [ProducesResponseType(typeof(IReadOnlyList), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task GetAuditTrail(
+ string processId,
+ CancellationToken ct)
+ {
+ try
+ {
+ var auditTrail = await _phaseManager.GetAuditTrailAsync(processId, ct);
+ return Ok(auditTrail);
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("not found"))
+ {
+ _logger.LogWarning("Process {ProcessId} not found when retrieving audit trail", Sanitize(processId));
+ return NotFound(new { error = $"Process '{processId}' not found." });
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid processId provided for audit trail");
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error retrieving audit trail for process {ProcessId}", Sanitize(processId));
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred while retrieving the audit trail." });
+ }
+ }
+
+ ///
+ /// Retrieves the cognitive debt assessment for a process.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// The cognitive debt assessment for the current phase.
+ [HttpGet("{processId}/debt")]
+ [ProducesResponseType(typeof(CognitiveDebtAssessment), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task GetCognitiveDebtAssessment(
+ string processId,
+ CancellationToken ct)
+ {
+ try
+ {
+ // Validate process exists first
+ var process = await _phaseManager.GetProcessAsync(processId, ct);
+ var currentPhase = process.Phases[process.CurrentPhaseIndex];
+
+ var assessment = await _cognitiveDebtPort.AssessDebtAsync(processId, currentPhase.PhaseId, ct);
+ return Ok(assessment);
+ }
+ catch (InvalidOperationException ex) when (ex.Message.Contains("not found"))
+ {
+ _logger.LogWarning("Process {ProcessId} not found when retrieving cognitive debt", Sanitize(processId));
+ return NotFound(new { error = $"Process '{processId}' not found." });
+ }
+ catch (ArgumentException ex)
+ {
+ _logger.LogWarning(ex, "Invalid processId provided for debt assessment");
+ return BadRequest(new { error = ex.Message });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Unexpected error retrieving cognitive debt for process {ProcessId}", Sanitize(processId));
+ return StatusCode(
+ StatusCodes.Status500InternalServerError,
+ new { error = "An unexpected error occurred while assessing cognitive debt." });
+ }
+ }
+}
+
+///
+/// Request model for the step-back endpoint, containing the target phase and reason.
+///
+public class StepBackRequest
+{
+ ///
+ /// The identifier of the phase to step back to.
+ ///
+ public string TargetPhaseId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable explanation of why the step-back was initiated.
+ ///
+ public string Reason { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the user or system component that initiated the step-back.
+ ///
+ public string InitiatedBy { get; set; } = string.Empty;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Engines/CognitiveSandwichEngine.cs b/src/AgencyLayer/CognitiveSandwich/Engines/CognitiveSandwichEngine.cs
new file mode 100644
index 0000000..9c450c8
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Engines/CognitiveSandwichEngine.cs
@@ -0,0 +1,477 @@
+using System.Collections.Concurrent;
+using AgencyLayer.CognitiveSandwich.Models;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace AgencyLayer.CognitiveSandwich.Engines;
+
+///
+/// Core engine implementing the Cognitive Sandwich workflow pattern.
+/// Manages phase-based processes with pre/postcondition validation,
+/// step-back capability, cognitive debt monitoring, and full audit trails.
+///
+public class CognitiveSandwichEngine : IPhaseManagerPort
+{
+ private readonly IPhaseConditionPort _conditionPort;
+ private readonly ICognitiveDebtPort _cognitiveDebtPort;
+ private readonly IAuditLoggingAdapter _auditLoggingAdapter;
+ private readonly ILogger _logger;
+
+ private readonly ConcurrentDictionary _processes = new();
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Port for evaluating phase pre/postconditions.
+ /// Port for assessing cognitive debt.
+ /// Adapter for persisting audit trail entries.
+ /// Logger instance.
+ public CognitiveSandwichEngine(
+ IPhaseConditionPort conditionPort,
+ ICognitiveDebtPort cognitiveDebtPort,
+ IAuditLoggingAdapter auditLoggingAdapter,
+ ILogger logger)
+ {
+ _conditionPort = conditionPort ?? throw new ArgumentNullException(nameof(conditionPort));
+ _cognitiveDebtPort = cognitiveDebtPort ?? throw new ArgumentNullException(nameof(cognitiveDebtPort));
+ _auditLoggingAdapter = auditLoggingAdapter ?? throw new ArgumentNullException(nameof(auditLoggingAdapter));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public async Task CreateProcessAsync(SandwichProcessConfig config, CancellationToken ct = default)
+ {
+ ArgumentNullException.ThrowIfNull(config);
+
+ ValidateConfig(config);
+
+ var phases = config.Phases
+ .OrderBy(p => p.Order)
+ .Select(pd => new Phase
+ {
+ PhaseId = Guid.NewGuid().ToString(),
+ Name = pd.Name,
+ Description = pd.Description,
+ Order = pd.Order,
+ Preconditions = pd.Preconditions,
+ Postconditions = pd.Postconditions,
+ RequiresHumanValidation = pd.RequiresHumanValidation,
+ Status = PhaseStatus.Pending
+ })
+ .ToList();
+
+ var process = new SandwichProcess
+ {
+ ProcessId = Guid.NewGuid().ToString(),
+ TenantId = config.TenantId,
+ Name = config.Name,
+ CreatedAt = DateTime.UtcNow,
+ CurrentPhaseIndex = 0,
+ Phases = phases,
+ State = SandwichProcessState.Created,
+ MaxStepBacks = config.MaxStepBacks,
+ StepBackCount = 0,
+ CognitiveDebtThreshold = config.CognitiveDebtThreshold
+ };
+
+ _processes[process.ProcessId] = process;
+
+ var auditEntry = new PhaseAuditEntry
+ {
+ ProcessId = process.ProcessId,
+ PhaseId = phases[0].PhaseId,
+ EventType = PhaseAuditEventType.ProcessCreated,
+ UserId = "system",
+ Details = $"Process '{config.Name}' created with {phases.Count} phases, max step-backs: {config.MaxStepBacks}, cognitive debt threshold: {config.CognitiveDebtThreshold}"
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(auditEntry, ct);
+
+ _logger.LogInformation(
+ "Created Cognitive Sandwich process {ProcessId} '{ProcessName}' with {PhaseCount} phases",
+ Sanitize(process.ProcessId), Sanitize(process.Name), phases.Count);
+
+ return process;
+ }
+
+ ///
+ public Task GetProcessAsync(string processId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+
+ if (!_processes.TryGetValue(processId, out var process))
+ {
+ throw new InvalidOperationException($"Process '{processId}' not found.");
+ }
+
+ return Task.FromResult(process);
+ }
+
+ ///
+ public async Task TransitionToNextPhaseAsync(string processId, PhaseTransitionContext context, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+ ArgumentNullException.ThrowIfNull(context);
+
+ var process = await GetProcessAsync(processId, ct);
+
+ if (process.State == SandwichProcessState.Completed)
+ {
+ return CreateBlockedResult(process, "Process is already completed.");
+ }
+
+ if (process.State == SandwichProcessState.Failed)
+ {
+ return CreateBlockedResult(process, "Process is in a failed state.");
+ }
+
+ var currentPhase = process.Phases[process.CurrentPhaseIndex];
+
+ // Log the transition attempt
+ var transitionStartEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = currentPhase.PhaseId,
+ EventType = PhaseAuditEventType.PhaseTransitionStarted,
+ UserId = context.UserId,
+ Details = $"Transition from phase '{currentPhase.Name}' (index {process.CurrentPhaseIndex}) requested. Reason: {context.TransitionReason}"
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(transitionStartEntry, ct);
+
+ // Check postconditions of current phase
+ if (context.PhaseOutput != null)
+ {
+ var postconditionResult = await _conditionPort.CheckPostconditionsAsync(processId, currentPhase.PhaseId, context.PhaseOutput, ct);
+ if (!postconditionResult.AllMet)
+ {
+ var failedConditions = postconditionResult.Results
+ .Where(r => !r.Met)
+ .Select(r => $"{r.ConditionId}: {r.Reason}")
+ .ToList();
+
+ var blockedEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = currentPhase.PhaseId,
+ EventType = PhaseAuditEventType.PhaseTransitionBlocked,
+ UserId = context.UserId,
+ Details = $"Postconditions not met: {string.Join("; ", failedConditions)}"
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(blockedEntry, ct);
+
+ _logger.LogWarning(
+ "Transition blocked for process {ProcessId} at phase {PhaseId}: postconditions not met",
+ Sanitize(processId), Sanitize(currentPhase.PhaseId));
+
+ return new PhaseResult
+ {
+ Success = false,
+ PhaseId = currentPhase.PhaseId,
+ NextPhaseId = null,
+ ValidationErrors = failedConditions,
+ AuditEntry = blockedEntry
+ };
+ }
+ }
+
+ // Check cognitive debt
+ var debtBreached = await _cognitiveDebtPort.IsThresholdBreachedAsync(processId, ct);
+ if (debtBreached)
+ {
+ var debtAssessment = await _cognitiveDebtPort.AssessDebtAsync(processId, currentPhase.PhaseId, ct);
+
+ var debtEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = currentPhase.PhaseId,
+ EventType = PhaseAuditEventType.CognitiveDebtBreached,
+ UserId = context.UserId,
+ Details = $"Cognitive debt threshold breached (score: {debtAssessment.DebtScore:F1}). Recommendations: {string.Join("; ", debtAssessment.Recommendations)}"
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(debtEntry, ct);
+
+ _logger.LogWarning(
+ "Transition blocked for process {ProcessId}: cognitive debt threshold breached (score: {DebtScore})",
+ Sanitize(processId), debtAssessment.DebtScore);
+
+ return new PhaseResult
+ {
+ Success = false,
+ PhaseId = currentPhase.PhaseId,
+ NextPhaseId = null,
+ ValidationErrors = [$"Cognitive debt threshold breached (score: {debtAssessment.DebtScore:F1}). Human review required."],
+ AuditEntry = debtEntry
+ };
+ }
+
+ // Mark current phase as completed
+ var mutableCurrentPhase = currentPhase;
+ mutableCurrentPhase.Status = PhaseStatus.Completed;
+
+ // Check if this was the last phase
+ if (process.CurrentPhaseIndex >= process.Phases.Count - 1)
+ {
+ process.State = SandwichProcessState.Completed;
+
+ var completedEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = currentPhase.PhaseId,
+ EventType = PhaseAuditEventType.ProcessCompleted,
+ UserId = context.UserId,
+ Details = $"All {process.Phases.Count} phases completed successfully."
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(completedEntry, ct);
+
+ _logger.LogInformation("Process {ProcessId} completed all phases successfully", Sanitize(processId));
+
+ return new PhaseResult
+ {
+ Success = true,
+ PhaseId = currentPhase.PhaseId,
+ NextPhaseId = null,
+ ValidationErrors = [],
+ AuditEntry = completedEntry
+ };
+ }
+
+ // Advance to next phase
+ var nextPhaseIndex = process.CurrentPhaseIndex + 1;
+ var nextPhase = process.Phases[nextPhaseIndex];
+
+ // Check preconditions of the next phase
+ var preconditionResult = await _conditionPort.CheckPreconditionsAsync(processId, nextPhase.PhaseId, ct);
+ if (!preconditionResult.AllMet)
+ {
+ var failedPreconditions = preconditionResult.Results
+ .Where(r => !r.Met)
+ .Select(r => $"{r.ConditionId}: {r.Reason}")
+ .ToList();
+
+ var precondBlockedEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = nextPhase.PhaseId,
+ EventType = PhaseAuditEventType.PhaseTransitionBlocked,
+ UserId = context.UserId,
+ Details = $"Preconditions for next phase '{nextPhase.Name}' not met: {string.Join("; ", failedPreconditions)}"
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(precondBlockedEntry, ct);
+
+ // Roll back the current phase status since we can't actually advance
+ mutableCurrentPhase.Status = PhaseStatus.InProgress;
+
+ _logger.LogWarning(
+ "Transition blocked for process {ProcessId}: preconditions for phase {NextPhaseId} not met",
+ Sanitize(processId), Sanitize(nextPhase.PhaseId));
+
+ return new PhaseResult
+ {
+ Success = false,
+ PhaseId = currentPhase.PhaseId,
+ NextPhaseId = nextPhase.PhaseId,
+ ValidationErrors = failedPreconditions,
+ AuditEntry = precondBlockedEntry
+ };
+ }
+
+ // Perform the transition
+ process.CurrentPhaseIndex = nextPhaseIndex;
+ var mutableNextPhase = nextPhase;
+ mutableNextPhase.Status = PhaseStatus.InProgress;
+ process.State = SandwichProcessState.InProgress;
+
+ // Check if next phase requires human validation
+ if (nextPhase.RequiresHumanValidation)
+ {
+ mutableNextPhase.Status = PhaseStatus.AwaitingReview;
+ process.State = SandwichProcessState.AwaitingHumanReview;
+
+ var humanEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = nextPhase.PhaseId,
+ EventType = PhaseAuditEventType.HumanValidationRequested,
+ UserId = context.UserId,
+ Details = $"Phase '{nextPhase.Name}' requires human validation before proceeding."
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(humanEntry, ct);
+
+ _logger.LogInformation(
+ "Process {ProcessId} awaiting human review at phase {PhaseId} '{PhaseName}'",
+ Sanitize(processId), Sanitize(nextPhase.PhaseId), Sanitize(nextPhase.Name));
+ }
+
+ var transitionEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = nextPhase.PhaseId,
+ EventType = PhaseAuditEventType.PhaseTransitionCompleted,
+ UserId = context.UserId,
+ Details = $"Transitioned from phase '{currentPhase.Name}' (index {process.CurrentPhaseIndex - 1}) to phase '{nextPhase.Name}' (index {process.CurrentPhaseIndex})."
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(transitionEntry, ct);
+
+ _logger.LogInformation(
+ "Process {ProcessId} transitioned to phase {PhaseIndex} '{PhaseName}'",
+ Sanitize(processId), process.CurrentPhaseIndex, Sanitize(nextPhase.Name));
+
+ return new PhaseResult
+ {
+ Success = true,
+ PhaseId = currentPhase.PhaseId,
+ NextPhaseId = nextPhase.PhaseId,
+ ValidationErrors = [],
+ AuditEntry = transitionEntry
+ };
+ }
+
+ ///
+ public async Task StepBackAsync(string processId, string targetPhaseId, StepBackReason reason, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(targetPhaseId);
+ ArgumentNullException.ThrowIfNull(reason);
+
+ var process = await GetProcessAsync(processId, ct);
+
+ // Validate step-back count
+ if (process.StepBackCount >= process.MaxStepBacks)
+ {
+ var blockedEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = targetPhaseId,
+ EventType = PhaseAuditEventType.PhaseTransitionBlocked,
+ UserId = reason.InitiatedBy,
+ Details = $"Step-back blocked: maximum step-backs ({process.MaxStepBacks}) exceeded. Current count: {process.StepBackCount}."
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(blockedEntry, ct);
+
+ _logger.LogWarning(
+ "Step-back blocked for process {ProcessId}: max step-backs ({MaxStepBacks}) exceeded",
+ Sanitize(processId), process.MaxStepBacks);
+
+ return new PhaseResult
+ {
+ Success = false,
+ PhaseId = process.Phases[process.CurrentPhaseIndex].PhaseId,
+ NextPhaseId = targetPhaseId,
+ ValidationErrors = [$"Maximum step-back count ({process.MaxStepBacks}) has been reached."],
+ AuditEntry = blockedEntry
+ };
+ }
+
+ // Find target phase
+ var targetPhaseIndex = -1;
+ for (int i = 0; i < process.Phases.Count; i++)
+ {
+ if (process.Phases[i].PhaseId == targetPhaseId)
+ {
+ targetPhaseIndex = i;
+ break;
+ }
+ }
+
+ if (targetPhaseIndex < 0)
+ {
+ throw new InvalidOperationException($"Target phase '{targetPhaseId}' not found in process '{processId}'.");
+ }
+
+ if (targetPhaseIndex >= process.CurrentPhaseIndex)
+ {
+ throw new InvalidOperationException(
+ $"Step-back target phase (index {targetPhaseIndex}) must be before the current phase (index {process.CurrentPhaseIndex}).");
+ }
+
+ // Roll back all phases between current and target (inclusive of current)
+ for (int i = process.CurrentPhaseIndex; i > targetPhaseIndex; i--)
+ {
+ var phaseToRollBack = process.Phases[i];
+ phaseToRollBack.Status = PhaseStatus.RolledBack;
+ }
+
+ // Set target phase back to InProgress
+ var targetPhase = process.Phases[targetPhaseIndex];
+ targetPhase.Status = PhaseStatus.InProgress;
+
+ process.CurrentPhaseIndex = targetPhaseIndex;
+ process.StepBackCount++;
+ process.State = SandwichProcessState.SteppedBack;
+
+ var auditEntry = new PhaseAuditEntry
+ {
+ ProcessId = processId,
+ PhaseId = targetPhaseId,
+ EventType = PhaseAuditEventType.StepBackPerformed,
+ UserId = reason.InitiatedBy,
+ Details = $"Stepped back to phase '{targetPhase.Name}' (index {targetPhaseIndex}). Reason: {reason.Reason}. Step-back count: {process.StepBackCount}/{process.MaxStepBacks}."
+ };
+ await _auditLoggingAdapter.LogAuditEntryAsync(auditEntry, ct);
+
+ _logger.LogInformation(
+ "Process {ProcessId} stepped back to phase {TargetPhaseIndex} '{TargetPhaseName}' (step-back {StepBackCount}/{MaxStepBacks}). Reason: {Reason}",
+ Sanitize(processId), targetPhaseIndex, Sanitize(targetPhase.Name), process.StepBackCount, process.MaxStepBacks, Sanitize(reason.Reason));
+
+ return new PhaseResult
+ {
+ Success = true,
+ PhaseId = process.Phases[process.CurrentPhaseIndex].PhaseId,
+ NextPhaseId = targetPhaseId,
+ ValidationErrors = [],
+ AuditEntry = auditEntry
+ };
+ }
+
+ ///
+ public async Task> GetAuditTrailAsync(string processId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(processId);
+
+ // Validate process exists
+ await GetProcessAsync(processId, ct);
+
+ return await _auditLoggingAdapter.GetAuditEntriesAsync(processId, ct);
+ }
+
+ private static void ValidateConfig(SandwichProcessConfig config)
+ {
+ if (string.IsNullOrWhiteSpace(config.TenantId))
+ {
+ throw new ArgumentException("TenantId is required.", nameof(config));
+ }
+
+ if (string.IsNullOrWhiteSpace(config.Name))
+ {
+ throw new ArgumentException("Process name is required.", nameof(config));
+ }
+
+ if (config.Phases.Count < 3 || config.Phases.Count > 7)
+ {
+ throw new ArgumentException(
+ $"Process must have between 3 and 7 phases, but {config.Phases.Count} were provided.",
+ nameof(config));
+ }
+
+ if (config.MaxStepBacks < 0)
+ {
+ throw new ArgumentException("MaxStepBacks must be non-negative.", nameof(config));
+ }
+
+ if (config.CognitiveDebtThreshold is < 0 or > 100)
+ {
+ throw new ArgumentException("CognitiveDebtThreshold must be between 0 and 100.", nameof(config));
+ }
+ }
+
+ private static PhaseResult CreateBlockedResult(SandwichProcess process, string reason)
+ {
+ return new PhaseResult
+ {
+ Success = false,
+ PhaseId = process.Phases[process.CurrentPhaseIndex].PhaseId,
+ NextPhaseId = null,
+ ValidationErrors = [reason]
+ };
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Infrastructure/ServiceCollectionExtensions.cs b/src/AgencyLayer/CognitiveSandwich/Infrastructure/ServiceCollectionExtensions.cs
new file mode 100644
index 0000000..6b8176a
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Infrastructure/ServiceCollectionExtensions.cs
@@ -0,0 +1,34 @@
+using AgencyLayer.CognitiveSandwich.Adapters;
+using AgencyLayer.CognitiveSandwich.Engines;
+using AgencyLayer.CognitiveSandwich.Ports;
+using Microsoft.Extensions.DependencyInjection;
+
+namespace AgencyLayer.CognitiveSandwich.Infrastructure;
+
+///
+/// Provides extension methods for registering Cognitive Sandwich services
+/// with the dependency injection container.
+///
+public static class ServiceCollectionExtensions
+{
+ ///
+ /// Registers all Cognitive Sandwich services including the engine, adapters,
+ /// and in-memory implementations for development scenarios.
+ ///
+ /// The service collection to add services to.
+ /// The service collection for chaining.
+ public static IServiceCollection AddCognitiveSandwichServices(this IServiceCollection services)
+ {
+ ArgumentNullException.ThrowIfNull(services);
+
+ // Register in-memory adapters as singletons (shared state across scoped engine instances)
+ services.AddSingleton();
+ services.AddSingleton();
+ services.AddSingleton();
+
+ // Register the engine as scoped (one per request)
+ services.AddScoped();
+
+ return services;
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/CognitiveDebtAssessment.cs b/src/AgencyLayer/CognitiveSandwich/Models/CognitiveDebtAssessment.cs
new file mode 100644
index 0000000..51697be
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/CognitiveDebtAssessment.cs
@@ -0,0 +1,39 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the result of a cognitive debt assessment for a specific phase,
+/// measuring the degree of over-reliance on AI within the Cognitive Sandwich process.
+///
+public class CognitiveDebtAssessment
+{
+ ///
+ /// Identifier of the process being assessed.
+ ///
+ public string ProcessId { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the phase being assessed.
+ ///
+ public string PhaseId { get; set; } = string.Empty;
+
+ ///
+ /// Cognitive debt score on a scale of 0 (no debt) to 100 (maximum debt).
+ /// Higher values indicate greater over-reliance on AI with insufficient human oversight.
+ ///
+ public double DebtScore { get; set; }
+
+ ///
+ /// Indicates whether the debt score exceeds the configured threshold for the process.
+ ///
+ public bool IsBreached { get; set; }
+
+ ///
+ /// Actionable recommendations for reducing cognitive debt.
+ ///
+ public IReadOnlyList Recommendations { get; set; } = [];
+
+ ///
+ /// Timestamp when the assessment was performed.
+ ///
+ public DateTime AssessedAt { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/ConditionCheckResult.cs b/src/AgencyLayer/CognitiveSandwich/Models/ConditionCheckResult.cs
new file mode 100644
index 0000000..91d5768
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/ConditionCheckResult.cs
@@ -0,0 +1,17 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the aggregated result of evaluating all pre- or postconditions for a phase.
+///
+public class ConditionCheckResult
+{
+ ///
+ /// Indicates whether all mandatory conditions were met.
+ ///
+ public bool AllMet { get; set; }
+
+ ///
+ /// Individual evaluation results for each condition.
+ ///
+ public IReadOnlyList Results { get; set; } = [];
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/ConditionEvaluation.cs b/src/AgencyLayer/CognitiveSandwich/Models/ConditionEvaluation.cs
new file mode 100644
index 0000000..1384440
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/ConditionEvaluation.cs
@@ -0,0 +1,22 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the evaluation result of a single phase condition.
+///
+public class ConditionEvaluation
+{
+ ///
+ /// Identifier of the condition that was evaluated.
+ ///
+ public string ConditionId { get; set; } = string.Empty;
+
+ ///
+ /// Indicates whether the condition was satisfied.
+ ///
+ public bool Met { get; set; }
+
+ ///
+ /// Human-readable reason for the evaluation result.
+ ///
+ public string Reason { get; set; } = string.Empty;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/Phase.cs b/src/AgencyLayer/CognitiveSandwich/Models/Phase.cs
new file mode 100644
index 0000000..f9dc5c9
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/Phase.cs
@@ -0,0 +1,48 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents a single phase within a Cognitive Sandwich process.
+/// Each phase has pre/postconditions, an execution status, and optional human validation.
+///
+public class Phase
+{
+ ///
+ /// Unique identifier for this phase.
+ ///
+ public string PhaseId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Human-readable name of the phase.
+ ///
+ public string Name { get; set; } = string.Empty;
+
+ ///
+ /// Description of what this phase accomplishes.
+ ///
+ public string Description { get; set; } = string.Empty;
+
+ ///
+ /// Zero-based order of this phase within the process.
+ ///
+ public int Order { get; set; }
+
+ ///
+ /// Conditions that must be satisfied before this phase can begin.
+ ///
+ public IReadOnlyList Preconditions { get; set; } = [];
+
+ ///
+ /// Conditions that must be satisfied after this phase completes.
+ ///
+ public IReadOnlyList Postconditions { get; set; } = [];
+
+ ///
+ /// When true, the process pauses after this phase for human-in-the-loop validation.
+ ///
+ public bool RequiresHumanValidation { get; set; }
+
+ ///
+ /// Current execution status of this phase.
+ ///
+ public PhaseStatus Status { get; set; } = PhaseStatus.Pending;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEntry.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEntry.cs
new file mode 100644
index 0000000..32294a4
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEntry.cs
@@ -0,0 +1,43 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents an immutable audit trail entry recording a significant event
+/// in a Cognitive Sandwich process, such as phase transitions, step-backs, or validations.
+///
+public class PhaseAuditEntry
+{
+ ///
+ /// Unique identifier for this audit entry.
+ ///
+ public string EntryId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the process this entry belongs to.
+ ///
+ public string ProcessId { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the phase this event relates to.
+ ///
+ public string PhaseId { get; set; } = string.Empty;
+
+ ///
+ /// Category of the audit event.
+ ///
+ public PhaseAuditEventType EventType { get; set; }
+
+ ///
+ /// Timestamp when the event occurred.
+ ///
+ public DateTime Timestamp { get; set; } = DateTime.UtcNow;
+
+ ///
+ /// Identifier of the user or system component that caused the event.
+ ///
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable details about the event.
+ ///
+ public string Details { get; set; } = string.Empty;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEventType.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEventType.cs
new file mode 100644
index 0000000..f085bfd
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseAuditEventType.cs
@@ -0,0 +1,57 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Categorizes the type of event recorded in a phase audit trail entry.
+///
+public enum PhaseAuditEventType
+{
+ ///
+ /// A new Cognitive Sandwich process was created.
+ ///
+ ProcessCreated,
+
+ ///
+ /// A phase transition was initiated.
+ ///
+ PhaseTransitionStarted,
+
+ ///
+ /// A phase transition completed successfully.
+ ///
+ PhaseTransitionCompleted,
+
+ ///
+ /// A phase transition was blocked by failed preconditions.
+ ///
+ PhaseTransitionBlocked,
+
+ ///
+ /// A step-back operation was performed to revisit a prior phase.
+ ///
+ StepBackPerformed,
+
+ ///
+ /// Human validation was requested for a phase checkpoint.
+ ///
+ HumanValidationRequested,
+
+ ///
+ /// Human validation was completed.
+ ///
+ HumanValidationCompleted,
+
+ ///
+ /// Cognitive debt threshold was breached, blocking progression.
+ ///
+ CognitiveDebtBreached,
+
+ ///
+ /// The process completed all phases successfully.
+ ///
+ ProcessCompleted,
+
+ ///
+ /// The process failed due to an unrecoverable error.
+ ///
+ ProcessFailed
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseCondition.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseCondition.cs
new file mode 100644
index 0000000..add5d34
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseCondition.cs
@@ -0,0 +1,34 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents a pre- or postcondition attached to a phase in a Cognitive Sandwich process.
+/// Conditions are evaluated before entering or after completing a phase.
+///
+public class PhaseCondition
+{
+ ///
+ /// Unique identifier for this condition.
+ ///
+ public string ConditionId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Human-readable name of the condition.
+ ///
+ public string Name { get; set; } = string.Empty;
+
+ ///
+ /// The type of evaluation this condition performs.
+ ///
+ public PhaseConditionType ConditionType { get; set; }
+
+ ///
+ /// The expression or rule to evaluate (interpretation depends on ).
+ ///
+ public string Expression { get; set; } = string.Empty;
+
+ ///
+ /// When true, failure of this condition blocks phase transition.
+ /// When false, failure is logged as a warning but does not block.
+ ///
+ public bool IsMandatory { get; set; } = true;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseConditionType.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseConditionType.cs
new file mode 100644
index 0000000..53459dd
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseConditionType.cs
@@ -0,0 +1,32 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Defines the type of a phase condition used for pre/postcondition evaluation.
+///
+public enum PhaseConditionType
+{
+ ///
+ /// Condition requires a specific data field to be present and non-empty.
+ ///
+ DataPresence,
+
+ ///
+ /// Condition requires a quality score to meet a minimum threshold.
+ ///
+ QualityThreshold,
+
+ ///
+ /// Condition requires explicit human approval.
+ ///
+ HumanApproval,
+
+ ///
+ /// Condition evaluates a custom expression or rule.
+ ///
+ CustomExpression,
+
+ ///
+ /// Condition checks that cognitive debt is within acceptable limits.
+ ///
+ CognitiveDebtCheck
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseDefinition.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseDefinition.cs
new file mode 100644
index 0000000..75f29a1
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseDefinition.cs
@@ -0,0 +1,38 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Defines the configuration for a single phase when creating a new Cognitive Sandwich process.
+/// Used within to specify the phase structure.
+///
+public class PhaseDefinition
+{
+ ///
+ /// Human-readable name of the phase.
+ ///
+ public string Name { get; set; } = string.Empty;
+
+ ///
+ /// Description of what this phase accomplishes.
+ ///
+ public string Description { get; set; } = string.Empty;
+
+ ///
+ /// Zero-based order of this phase within the process.
+ ///
+ public int Order { get; set; }
+
+ ///
+ /// Conditions that must be satisfied before this phase can begin.
+ ///
+ public IReadOnlyList Preconditions { get; set; } = [];
+
+ ///
+ /// Conditions that must be satisfied after this phase completes.
+ ///
+ public IReadOnlyList Postconditions { get; set; } = [];
+
+ ///
+ /// When true, the process pauses after this phase for human-in-the-loop validation.
+ ///
+ public bool RequiresHumanValidation { get; set; }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseOutput.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseOutput.cs
new file mode 100644
index 0000000..1d24a4c
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseOutput.cs
@@ -0,0 +1,27 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the output produced by executing a phase in a Cognitive Sandwich process.
+///
+public class PhaseOutput
+{
+ ///
+ /// Identifier of the phase that produced this output.
+ ///
+ public string PhaseId { get; set; } = string.Empty;
+
+ ///
+ /// Key-value data produced by the phase execution.
+ ///
+ public Dictionary Data { get; set; } = new();
+
+ ///
+ /// Optional summary of the phase output for human review.
+ ///
+ public string? Summary { get; set; }
+
+ ///
+ /// Timestamp when the output was generated.
+ ///
+ public DateTime GeneratedAt { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseResult.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseResult.cs
new file mode 100644
index 0000000..34230bf
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseResult.cs
@@ -0,0 +1,33 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the result of a phase transition or step-back operation,
+/// including success status, validation errors, and the associated audit entry.
+///
+public class PhaseResult
+{
+ ///
+ /// Indicates whether the transition or step-back completed successfully.
+ ///
+ public bool Success { get; set; }
+
+ ///
+ /// Identifier of the phase that was transitioned from.
+ ///
+ public string PhaseId { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the phase that was transitioned to, or null if the transition was blocked.
+ ///
+ public string? NextPhaseId { get; set; }
+
+ ///
+ /// Validation errors that prevented the transition, if any.
+ ///
+ public IReadOnlyList ValidationErrors { get; set; } = [];
+
+ ///
+ /// Audit trail entry recording this transition attempt.
+ ///
+ public PhaseAuditEntry? AuditEntry { get; set; }
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseStatus.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseStatus.cs
new file mode 100644
index 0000000..3cac183
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseStatus.cs
@@ -0,0 +1,32 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the status of a single phase within a Cognitive Sandwich process.
+///
+public enum PhaseStatus
+{
+ ///
+ /// Phase has not started yet.
+ ///
+ Pending,
+
+ ///
+ /// Phase is currently being executed.
+ ///
+ InProgress,
+
+ ///
+ /// Phase output is awaiting human review before the process can advance.
+ ///
+ AwaitingReview,
+
+ ///
+ /// Phase completed successfully and its postconditions are satisfied.
+ ///
+ Completed,
+
+ ///
+ /// Phase was rolled back due to a step-back operation.
+ ///
+ RolledBack
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/PhaseTransitionContext.cs b/src/AgencyLayer/CognitiveSandwich/Models/PhaseTransitionContext.cs
new file mode 100644
index 0000000..55b4d5f
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/PhaseTransitionContext.cs
@@ -0,0 +1,28 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Provides context for a phase transition request, including the output from the
+/// current phase, the requesting user, and optional human feedback.
+///
+public class PhaseTransitionContext
+{
+ ///
+ /// Identifier of the user initiating the transition.
+ ///
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// Output produced by the current phase, used for postcondition evaluation.
+ ///
+ public PhaseOutput? PhaseOutput { get; set; }
+
+ ///
+ /// Optional human feedback provided during a human-in-the-loop validation checkpoint.
+ ///
+ public string? HumanFeedback { get; set; }
+
+ ///
+ /// Reason or justification for the transition.
+ ///
+ public string TransitionReason { get; set; } = string.Empty;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcess.cs b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcess.cs
new file mode 100644
index 0000000..29e64c5
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcess.cs
@@ -0,0 +1,58 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the core entity of a Cognitive Sandwich workflow: a multi-phase process
+/// with human-in-the-loop validation, step-back capability, and cognitive debt monitoring.
+///
+public class SandwichProcess
+{
+ ///
+ /// Unique identifier for this process instance.
+ ///
+ public string ProcessId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Tenant that owns this process, for multi-tenancy isolation.
+ ///
+ public string TenantId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable name of the process.
+ ///
+ public string Name { get; set; } = string.Empty;
+
+ ///
+ /// Timestamp when the process was created.
+ ///
+ public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
+
+ ///
+ /// Zero-based index of the currently active phase.
+ ///
+ public int CurrentPhaseIndex { get; set; }
+
+ ///
+ /// Ordered list of phases that compose this process.
+ ///
+ public IReadOnlyList Phases { get; set; } = [];
+
+ ///
+ /// Current lifecycle state of the process.
+ ///
+ public SandwichProcessState State { get; set; } = SandwichProcessState.Created;
+
+ ///
+ /// Maximum number of step-back operations allowed before the process is blocked.
+ ///
+ public int MaxStepBacks { get; set; } = 3;
+
+ ///
+ /// Number of step-back operations that have been performed so far.
+ ///
+ public int StepBackCount { get; set; }
+
+ ///
+ /// Cognitive debt threshold (0-100). When exceeded, phase transitions are blocked.
+ ///
+ public double CognitiveDebtThreshold { get; set; } = 70.0;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessConfig.cs b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessConfig.cs
new file mode 100644
index 0000000..0eee750
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessConfig.cs
@@ -0,0 +1,34 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Configuration used to create a new Cognitive Sandwich process,
+/// defining the phases, step-back limits, and cognitive debt thresholds.
+///
+public class SandwichProcessConfig
+{
+ ///
+ /// Tenant that will own the created process.
+ ///
+ public string TenantId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable name for the process.
+ ///
+ public string Name { get; set; } = string.Empty;
+
+ ///
+ /// Ordered list of phase definitions that compose the process.
+ /// Must contain between 3 and 7 phases.
+ ///
+ public IReadOnlyList Phases { get; set; } = [];
+
+ ///
+ /// Maximum number of step-back operations allowed before the process is blocked.
+ ///
+ public int MaxStepBacks { get; set; } = 3;
+
+ ///
+ /// Cognitive debt score threshold (0-100). When exceeded, phase transitions are blocked.
+ ///
+ public double CognitiveDebtThreshold { get; set; } = 70.0;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessState.cs b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessState.cs
new file mode 100644
index 0000000..09a3907
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/SandwichProcessState.cs
@@ -0,0 +1,37 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Represents the lifecycle state of a Cognitive Sandwich process.
+///
+public enum SandwichProcessState
+{
+ ///
+ /// Process has been created but not yet started.
+ ///
+ Created,
+
+ ///
+ /// Process is actively executing phases.
+ ///
+ InProgress,
+
+ ///
+ /// Process is waiting for human review at a validation checkpoint.
+ ///
+ AwaitingHumanReview,
+
+ ///
+ /// Process has stepped back to a prior phase for rework.
+ ///
+ SteppedBack,
+
+ ///
+ /// All phases completed successfully.
+ ///
+ Completed,
+
+ ///
+ /// Process failed due to unrecoverable error or policy violation.
+ ///
+ Failed
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Models/StepBackReason.cs b/src/AgencyLayer/CognitiveSandwich/Models/StepBackReason.cs
new file mode 100644
index 0000000..08fe679
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Models/StepBackReason.cs
@@ -0,0 +1,23 @@
+namespace AgencyLayer.CognitiveSandwich.Models;
+
+///
+/// Captures the reason, initiator, and timestamp for a step-back operation
+/// that rewinds a Cognitive Sandwich process to a prior phase.
+///
+public class StepBackReason
+{
+ ///
+ /// Human-readable explanation of why the step-back was initiated.
+ ///
+ public string Reason { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the user or system component that initiated the step-back.
+ ///
+ public string InitiatedBy { get; set; } = string.Empty;
+
+ ///
+ /// Timestamp when the step-back was requested.
+ ///
+ public DateTime Timestamp { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Ports/IAuditLoggingAdapter.cs b/src/AgencyLayer/CognitiveSandwich/Ports/IAuditLoggingAdapter.cs
new file mode 100644
index 0000000..5d4f1c5
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Ports/IAuditLoggingAdapter.cs
@@ -0,0 +1,26 @@
+using AgencyLayer.CognitiveSandwich.Models;
+
+namespace AgencyLayer.CognitiveSandwich.Ports;
+
+///
+/// Adapter interface for persisting and retrieving audit trail entries
+/// for Cognitive Sandwich processes. Implementations may write to CosmosDB,
+/// event stores, or other durable storage backends.
+///
+public interface IAuditLoggingAdapter
+{
+ ///
+ /// Persists an audit trail entry for a Cognitive Sandwich process event.
+ ///
+ /// The audit entry to persist.
+ /// Cancellation token.
+ Task LogAuditEntryAsync(PhaseAuditEntry entry, CancellationToken ct = default);
+
+ ///
+ /// Retrieves all audit trail entries for the specified process, ordered chronologically.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// An ordered list of audit entries.
+ Task> GetAuditEntriesAsync(string processId, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Ports/ICognitiveDebtPort.cs b/src/AgencyLayer/CognitiveSandwich/Ports/ICognitiveDebtPort.cs
new file mode 100644
index 0000000..4a20322
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Ports/ICognitiveDebtPort.cs
@@ -0,0 +1,29 @@
+using AgencyLayer.CognitiveSandwich.Models;
+
+namespace AgencyLayer.CognitiveSandwich.Ports;
+
+///
+/// Defines the port for assessing cognitive debt within a Cognitive Sandwich process.
+/// Cognitive debt measures the degree of over-reliance on AI, enabling the system
+/// to enforce human oversight at appropriate intervals.
+///
+public interface ICognitiveDebtPort
+{
+ ///
+ /// Assesses the cognitive debt for a specific phase within a process.
+ ///
+ /// The unique identifier of the process.
+ /// The unique identifier of the phase to assess.
+ /// Cancellation token.
+ /// A debt assessment including score, breach status, and recommendations.
+ Task AssessDebtAsync(string processId, string phaseId, CancellationToken ct = default);
+
+ ///
+ /// Checks whether the cognitive debt threshold has been breached for the process,
+ /// indicating that further AI-driven phase transitions should be blocked.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// true if the threshold is breached; otherwise false.
+ Task IsThresholdBreachedAsync(string processId, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseConditionPort.cs b/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseConditionPort.cs
new file mode 100644
index 0000000..4fa630d
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseConditionPort.cs
@@ -0,0 +1,32 @@
+using AgencyLayer.CognitiveSandwich.Models;
+
+namespace AgencyLayer.CognitiveSandwich.Ports;
+
+///
+/// Defines the port for evaluating pre- and postconditions on phases
+/// within a Cognitive Sandwich process. Conditions act as quality gates
+/// that must be satisfied before entering or after completing a phase.
+///
+public interface IPhaseConditionPort
+{
+ ///
+ /// Evaluates all preconditions for the specified phase, determining
+ /// whether the phase is ready to begin execution.
+ ///
+ /// The unique identifier of the process.
+ /// The unique identifier of the phase whose preconditions to check.
+ /// Cancellation token.
+ /// Aggregated result with individual condition evaluations.
+ Task CheckPreconditionsAsync(string processId, string phaseId, CancellationToken ct = default);
+
+ ///
+ /// Evaluates all postconditions for the specified phase given the phase output,
+ /// determining whether the phase can be considered complete.
+ ///
+ /// The unique identifier of the process.
+ /// The unique identifier of the phase whose postconditions to check.
+ /// The output produced by the phase execution.
+ /// Cancellation token.
+ /// Aggregated result with individual condition evaluations.
+ Task CheckPostconditionsAsync(string processId, string phaseId, PhaseOutput output, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseManagerPort.cs b/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseManagerPort.cs
new file mode 100644
index 0000000..43a5506
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSandwich/Ports/IPhaseManagerPort.cs
@@ -0,0 +1,57 @@
+using AgencyLayer.CognitiveSandwich.Models;
+
+namespace AgencyLayer.CognitiveSandwich.Ports;
+
+///
+/// Defines the primary port for managing Cognitive Sandwich processes.
+/// This port handles process creation, phase transitions, step-back operations,
+/// and audit trail retrieval following hexagonal architecture conventions.
+///
+public interface IPhaseManagerPort
+{
+ ///
+ /// Creates a new Cognitive Sandwich process from the given configuration.
+ ///
+ /// Configuration specifying phases, step-back limits, and thresholds.
+ /// Cancellation token.
+ /// The newly created process with all phases initialized.
+ Task CreateProcessAsync(SandwichProcessConfig config, CancellationToken ct = default);
+
+ ///
+ /// Retrieves an existing Cognitive Sandwich process by its unique identifier.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// The process, or throws if not found.
+ Task GetProcessAsync(string processId, CancellationToken ct = default);
+
+ ///
+ /// Attempts to transition the process to its next phase after validating
+ /// postconditions of the current phase, preconditions of the next phase,
+ /// and cognitive debt thresholds.
+ ///
+ /// The unique identifier of the process.
+ /// Context for the transition including user, output, and feedback.
+ /// Cancellation token.
+ /// The result of the transition attempt.
+ Task TransitionToNextPhaseAsync(string processId, PhaseTransitionContext context, CancellationToken ct = default);
+
+ ///
+ /// Steps the process back to a prior phase, rolling back intermediate phases
+ /// and incrementing the step-back counter.
+ ///
+ /// The unique identifier of the process.
+ /// The identifier of the phase to step back to.
+ /// The reason for the step-back operation.
+ /// Cancellation token.
+ /// The result of the step-back attempt.
+ Task StepBackAsync(string processId, string targetPhaseId, StepBackReason reason, CancellationToken ct = default);
+
+ ///
+ /// Retrieves the complete audit trail for a process, ordered chronologically.
+ ///
+ /// The unique identifier of the process.
+ /// Cancellation token.
+ /// An ordered list of audit entries for the process.
+ Task> GetAuditTrailAsync(string processId, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/CognitiveSovereignty.csproj b/src/AgencyLayer/CognitiveSovereignty/CognitiveSovereignty.csproj
new file mode 100644
index 0000000..ae07815
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/CognitiveSovereignty.csproj
@@ -0,0 +1,15 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+ $(NoWarn);1591
+
+
+
+
+
+
+
diff --git a/src/AgencyLayer/CognitiveSovereignty/Engines/CognitiveSovereigntyEngine.cs b/src/AgencyLayer/CognitiveSovereignty/Engines/CognitiveSovereigntyEngine.cs
new file mode 100644
index 0000000..ee5d249
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Engines/CognitiveSovereigntyEngine.cs
@@ -0,0 +1,165 @@
+using System.Collections.Concurrent;
+using AgencyLayer.CognitiveSovereignty.Models;
+using AgencyLayer.CognitiveSovereignty.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace AgencyLayer.CognitiveSovereignty.Engines;
+
+///
+/// Core engine implementing the Cognitive Sovereignty framework.
+/// Manages sovereignty profiles, mode resolution (overrides, domain rules, defaults),
+/// and autonomy level calculation based on the agency-sovereignty spectrum.
+///
+public class CognitiveSovereigntyEngine : ISovereigntyPort
+{
+ private readonly ISovereigntyOverridePort _overridePort;
+ private readonly ILogger _logger;
+
+ private readonly ConcurrentDictionary _profiles = new();
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Port for managing sovereignty overrides.
+ /// Logger instance.
+ public CognitiveSovereigntyEngine(
+ ISovereigntyOverridePort overridePort,
+ ILogger logger)
+ {
+ _overridePort = overridePort ?? throw new ArgumentNullException(nameof(overridePort));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public Task GetProfileAsync(string userId, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(userId);
+
+ _profiles.TryGetValue(userId, out var profile);
+
+ _logger.LogDebug(
+ "Retrieved sovereignty profile for user {UserId}: {Found}",
+ userId, profile != null);
+
+ return Task.FromResult(profile);
+ }
+
+ ///
+ public Task UpdateProfileAsync(SovereigntyProfile profile, CancellationToken ct = default)
+ {
+ ArgumentNullException.ThrowIfNull(profile);
+ ArgumentException.ThrowIfNullOrWhiteSpace(profile.UserId);
+
+ if (!_profiles.ContainsKey(profile.UserId))
+ {
+ throw new InvalidOperationException($"Profile for user '{profile.UserId}' not found.");
+ }
+
+ profile.UpdatedAt = DateTime.UtcNow;
+ _profiles[profile.UserId] = profile;
+
+ _logger.LogInformation(
+ "Updated sovereignty profile for user {UserId}, default mode: {DefaultMode}",
+ profile.UserId, profile.DefaultMode);
+
+ return Task.FromResult(profile);
+ }
+
+ ///
+ public Task SetModeAsync(string userId, SovereigntyMode mode, string? domain = null, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(userId);
+
+ var profile = _profiles.GetOrAdd(userId, _ => new SovereigntyProfile
+ {
+ UserId = userId,
+ DefaultMode = SovereigntyMode.GuidedAutonomy
+ });
+
+ var previousMode = domain != null && profile.DomainOverrides.TryGetValue(domain, out var domainMode)
+ ? domainMode
+ : profile.DefaultMode;
+
+ if (domain != null)
+ {
+ profile.DomainOverrides[domain] = mode;
+
+ _logger.LogInformation(
+ "Set sovereignty mode for user {UserId} in domain '{Domain}': {PreviousMode} -> {NewMode}",
+ userId, domain, previousMode, mode);
+ }
+ else
+ {
+ profile.DefaultMode = mode;
+
+ _logger.LogInformation(
+ "Set default sovereignty mode for user {UserId}: {PreviousMode} -> {NewMode}",
+ userId, previousMode, mode);
+ }
+
+ profile.UpdatedAt = DateTime.UtcNow;
+
+ return Task.FromResult(profile);
+ }
+
+ ///
+ public async Task GetCurrentModeAsync(string userId, string? domain = null, CancellationToken ct = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(userId);
+
+ // Priority 1: Check active overrides (newest first)
+ var overrides = await _overridePort.GetActiveOverridesAsync(userId, ct);
+ if (overrides.Count > 0)
+ {
+ var activeOverride = overrides[0]; // Most recent active override
+ _logger.LogDebug(
+ "Resolved sovereignty mode for user {UserId} from override {OverrideId}: {Mode}",
+ userId, activeOverride.OverrideId, activeOverride.NewMode);
+
+ return activeOverride.NewMode;
+ }
+
+ // Priority 2: Check domain-specific rules
+ if (!_profiles.TryGetValue(userId, out var profile))
+ {
+ _logger.LogDebug(
+ "No sovereignty profile for user {UserId}, returning default GuidedAutonomy",
+ userId);
+ return SovereigntyMode.GuidedAutonomy;
+ }
+
+ if (domain != null && profile.DomainOverrides.TryGetValue(domain, out var domainMode))
+ {
+ _logger.LogDebug(
+ "Resolved sovereignty mode for user {UserId} from domain '{Domain}': {Mode}",
+ userId, domain, domainMode);
+
+ return domainMode;
+ }
+
+ // Priority 3: Default mode
+ _logger.LogDebug(
+ "Resolved sovereignty mode for user {UserId} from default: {Mode}",
+ userId, profile.DefaultMode);
+
+ return profile.DefaultMode;
+ }
+
+ ///
+ /// Calculates the autonomy level (0.0 to 1.0) for a given sovereignty mode.
+ ///
+ /// The sovereignty mode to calculate the autonomy level for.
+ /// The autonomy level as a double between 0.0 and 1.0.
+ public static double CalculateAutonomyLevel(SovereigntyMode mode)
+ {
+ return mode switch
+ {
+ SovereigntyMode.FullAutonomy => 1.0,
+ SovereigntyMode.GuidedAutonomy => 0.75,
+ SovereigntyMode.CoAuthorship => 0.5,
+ SovereigntyMode.HumanLed => 0.25,
+ SovereigntyMode.FullManual => 0.0,
+ _ => throw new ArgumentOutOfRangeException(nameof(mode), mode, "Unknown sovereignty mode.")
+ };
+ }
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/AgentAction.cs b/src/AgencyLayer/CognitiveSovereignty/Models/AgentAction.cs
new file mode 100644
index 0000000..4d71284
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/AgentAction.cs
@@ -0,0 +1,55 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Records an agent action submitted for approval within the cognitive sovereignty framework.
+/// Tracks the agent, task, autonomy level, and approval status.
+///
+public class AgentAction
+{
+ ///
+ /// Unique identifier for this action.
+ ///
+ public string ActionId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the agent that produced or requests this action.
+ ///
+ public string AgentId { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the task this action relates to.
+ ///
+ public string TaskId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable description of what the agent intends to do.
+ ///
+ public string Description { get; set; } = string.Empty;
+
+ ///
+ /// Autonomy level at which this action was generated (0.0 = fully manual, 1.0 = fully autonomous).
+ ///
+ public double AutonomyLevel { get; set; }
+
+ ///
+ /// Indicates whether this action requires explicit human approval before execution.
+ ///
+ public bool RequiresApproval { get; set; }
+
+ ///
+ /// Indicates whether this action has been approved by a human reviewer.
+ /// null means pending approval.
+ ///
+ public bool? Approved { get; set; }
+
+ ///
+ /// Identifier of the user who approved or rejected this action.
+ /// Empty when the action is still pending.
+ ///
+ public string ApprovedBy { get; set; } = string.Empty;
+
+ ///
+ /// Timestamp when the action was created or submitted.
+ ///
+ public DateTime Timestamp { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/AuthorshipTrail.cs b/src/AgencyLayer/CognitiveSovereignty/Models/AuthorshipTrail.cs
new file mode 100644
index 0000000..d5cf5aa
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/AuthorshipTrail.cs
@@ -0,0 +1,75 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Tracks the authorship provenance for a task, recording who authored what
+/// (human, agent, or hybrid) with content hashes for auditability.
+///
+public class AuthorshipTrail
+{
+ ///
+ /// Unique identifier for this authorship trail.
+ ///
+ public string TrailId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the task this trail documents.
+ ///
+ public string TaskId { get; set; } = string.Empty;
+
+ ///
+ /// Ordered list of authorship entries recording each contribution to the task.
+ ///
+ public List Entries { get; set; } = [];
+}
+
+///
+/// Represents a single authorship contribution within an .
+///
+public class AuthorshipEntry
+{
+ ///
+ /// Unique identifier for this entry.
+ ///
+ public string EntryId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// The type of author that produced this contribution.
+ ///
+ public AuthorType AuthorType { get; set; }
+
+ ///
+ /// Identifier of the specific author (user ID or agent ID).
+ ///
+ public string AuthorId { get; set; } = string.Empty;
+
+ ///
+ /// SHA-256 hash of the content for integrity verification and deduplication.
+ ///
+ public string ContentHash { get; set; } = string.Empty;
+
+ ///
+ /// Timestamp when this contribution was recorded.
+ ///
+ public DateTime Timestamp { get; set; } = DateTime.UtcNow;
+}
+
+///
+/// Identifies the type of author for an authorship trail entry.
+///
+public enum AuthorType
+{
+ ///
+ /// Content authored entirely by a human.
+ ///
+ Human,
+
+ ///
+ /// Content authored entirely by an AI agent.
+ ///
+ Agent,
+
+ ///
+ /// Content co-authored by both human and agent.
+ ///
+ Hybrid
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyAuditEntry.cs b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyAuditEntry.cs
new file mode 100644
index 0000000..1a41de1
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyAuditEntry.cs
@@ -0,0 +1,43 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Represents an audit log entry recording a sovereignty-related event,
+/// such as mode changes, overrides, or approval actions.
+///
+public class SovereigntyAuditEntry
+{
+ ///
+ /// Unique identifier for this audit entry.
+ ///
+ public string EntryId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the user affected by or initiating the action.
+ ///
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// Description of the action that occurred (e.g., "ModeChanged", "OverrideCreated").
+ ///
+ public string Action { get; set; } = string.Empty;
+
+ ///
+ /// The sovereignty state before this action, if applicable.
+ ///
+ public string PreviousState { get; set; } = string.Empty;
+
+ ///
+ /// The sovereignty state after this action, if applicable.
+ ///
+ public string NewState { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable reason or justification for the action.
+ ///
+ public string Reason { get; set; } = string.Empty;
+
+ ///
+ /// Timestamp when the action occurred.
+ ///
+ public DateTime Timestamp { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyMode.cs b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyMode.cs
new file mode 100644
index 0000000..c0811e3
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyMode.cs
@@ -0,0 +1,42 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Defines the spectrum of sovereignty modes that determine the balance
+/// between autonomous agentic workflows and human cognitive control.
+///
+public enum SovereigntyMode
+{
+ ///
+ /// The system operates with full autonomous agentic capability.
+ /// Human input is limited to problem statement; agents handle execution.
+ /// Autonomy level: 1.0.
+ ///
+ FullAutonomy,
+
+ ///
+ /// Agents operate autonomously with human oversight at critical decision points.
+ /// Autonomy level: 0.75.
+ ///
+ GuidedAutonomy,
+
+ ///
+ /// Equal partnership between human and agent. Both contribute to decisions
+ /// and outputs, with shared authorship tracked transparently.
+ /// Autonomy level: 0.5.
+ ///
+ CoAuthorship,
+
+ ///
+ /// Human drives the workflow with agent suggestions and support.
+ /// Agents provide recommendations but do not act autonomously.
+ /// Autonomy level: 0.25.
+ ///
+ HumanLed,
+
+ ///
+ /// Full human control with no autonomous agent actions.
+ /// Agents are disabled or provide only passive information display.
+ /// Autonomy level: 0.0.
+ ///
+ FullManual
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyOverride.cs b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyOverride.cs
new file mode 100644
index 0000000..1ba5cf2
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyOverride.cs
@@ -0,0 +1,49 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Represents an active override that temporarily changes a user's sovereignty mode,
+/// with an expiration time and reason for the override.
+///
+public class SovereigntyOverride
+{
+ ///
+ /// Unique identifier for this override.
+ ///
+ public string OverrideId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the user this override applies to.
+ ///
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// Human-readable reason for activating this override.
+ ///
+ public string Reason { get; set; } = string.Empty;
+
+ ///
+ /// The sovereignty mode that was active before this override.
+ ///
+ public SovereigntyMode PreviousMode { get; set; }
+
+ ///
+ /// The sovereignty mode imposed by this override.
+ ///
+ public SovereigntyMode NewMode { get; set; }
+
+ ///
+ /// When this override expires and the previous mode is restored.
+ /// A null value indicates the override does not expire automatically.
+ ///
+ public DateTime? Expiry { get; set; }
+
+ ///
+ /// Timestamp when this override was created.
+ ///
+ public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
+
+ ///
+ /// Indicates whether this override has been revoked before its expiry.
+ ///
+ public bool IsRevoked { get; set; }
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyProfile.cs b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyProfile.cs
new file mode 100644
index 0000000..8ff49ff
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Models/SovereigntyProfile.cs
@@ -0,0 +1,44 @@
+namespace AgencyLayer.CognitiveSovereignty.Models;
+
+///
+/// Represents a user's sovereignty preferences, defining how agentic autonomy
+/// is balanced with human control across domains and tasks.
+///
+public class SovereigntyProfile
+{
+ ///
+ /// Unique identifier for this profile.
+ ///
+ public string ProfileId { get; set; } = Guid.NewGuid().ToString();
+
+ ///
+ /// Identifier of the user this profile belongs to.
+ ///
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// Identifier of the tenant this profile belongs to, for multi-tenancy isolation.
+ ///
+ public string TenantId { get; set; } = string.Empty;
+
+ ///
+ /// Default sovereignty mode used when no domain-specific override applies.
+ ///
+ public SovereigntyMode DefaultMode { get; set; } = SovereigntyMode.GuidedAutonomy;
+
+ ///
+ /// Domain-specific sovereignty mode overrides. Keys are domain names
+ /// (e.g., "financial", "creative", "routine") and values are the mode for that domain.
+ ///
+ public Dictionary DomainOverrides { get; set; } = new();
+
+ ///
+ /// Timestamp when the profile was created.
+ ///
+ public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
+
+ ///
+ /// Timestamp when the profile was last updated.
+ ///
+ public DateTime UpdatedAt { get; set; } = DateTime.UtcNow;
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Ports/IAgentActionApprovalPort.cs b/src/AgencyLayer/CognitiveSovereignty/Ports/IAgentActionApprovalPort.cs
new file mode 100644
index 0000000..c805c6e
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Ports/IAgentActionApprovalPort.cs
@@ -0,0 +1,44 @@
+using AgencyLayer.CognitiveSovereignty.Models;
+
+namespace AgencyLayer.CognitiveSovereignty.Ports;
+
+///
+/// Defines the port for managing agent action approval workflows
+/// within the cognitive sovereignty framework. Enables human-in-the-loop
+/// approval or rejection of agent-proposed actions.
+///
+public interface IAgentActionApprovalPort
+{
+ ///
+ /// Submits an agent action for human approval.
+ ///
+ /// The agent action to submit for approval.
+ /// Cancellation token.
+ /// The submitted action with its assigned identifier.
+ Task SubmitActionForApprovalAsync(AgentAction action, CancellationToken ct = default);
+
+ ///
+ /// Approves a pending agent action, allowing it to proceed.
+ ///
+ /// The unique identifier of the action to approve.
+ /// The identifier of the user who approved the action.
+ /// Cancellation token.
+ /// The approved action.
+ Task ApproveActionAsync(string actionId, string approvedBy, CancellationToken ct = default);
+
+ ///
+ /// Rejects a pending agent action, preventing it from proceeding.
+ ///
+ /// The unique identifier of the action to reject.
+ /// The identifier of the user who rejected the action.
+ /// Cancellation token.
+ /// The rejected action.
+ Task RejectActionAsync(string actionId, string rejectedBy, CancellationToken ct = default);
+
+ ///
+ /// Retrieves all pending (unapproved, unrejected) actions for review.
+ ///
+ /// Cancellation token.
+ /// A list of pending agent actions.
+ Task> GetPendingActionsAsync(CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Ports/IAuthorshipTrailPort.cs b/src/AgencyLayer/CognitiveSovereignty/Ports/IAuthorshipTrailPort.cs
new file mode 100644
index 0000000..4db99a8
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Ports/IAuthorshipTrailPort.cs
@@ -0,0 +1,27 @@
+using AgencyLayer.CognitiveSovereignty.Models;
+
+namespace AgencyLayer.CognitiveSovereignty.Ports;
+
+///
+/// Defines the port for recording and retrieving authorship provenance trails
+/// that track who authored what (human, agent, or hybrid) for each task.
+///
+public interface IAuthorshipTrailPort
+{
+ ///
+ /// Records an authorship entry for a task, adding it to the task's authorship trail.
+ ///
+ /// The unique identifier of the task.
+ /// The authorship entry to record.
+ /// Cancellation token.
+ /// The updated authorship trail for the task.
+ Task RecordAuthorshipAsync(string taskId, AuthorshipEntry entry, CancellationToken ct = default);
+
+ ///
+ /// Retrieves the complete authorship trail for a task.
+ ///
+ /// The unique identifier of the task.
+ /// Cancellation token.
+ /// The authorship trail, or null if no trail exists for the task.
+ Task GetTrailAsync(string taskId, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyOverridePort.cs b/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyOverridePort.cs
new file mode 100644
index 0000000..509aa20
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyOverridePort.cs
@@ -0,0 +1,34 @@
+using AgencyLayer.CognitiveSovereignty.Models;
+
+namespace AgencyLayer.CognitiveSovereignty.Ports;
+
+///
+/// Defines the port for managing sovereignty overrides that temporarily
+/// change a user's sovereignty mode with expiration and revocation support.
+///
+public interface ISovereigntyOverridePort
+{
+ ///
+ /// Creates a new sovereignty override for a user, temporarily changing their mode.
+ ///
+ /// The override to create.
+ /// Cancellation token.
+ /// The created override with its assigned identifier.
+ Task CreateOverrideAsync(SovereigntyOverride @override, CancellationToken ct = default);
+
+ ///
+ /// Revokes an active override, restoring the previous sovereignty mode.
+ ///
+ /// The unique identifier of the override to revoke.
+ /// Cancellation token.
+ /// true if the override was found and revoked; false otherwise.
+ Task RevokeOverrideAsync(string overrideId, CancellationToken ct = default);
+
+ ///
+ /// Retrieves all active (non-revoked, non-expired) overrides for a user.
+ ///
+ /// The unique identifier of the user.
+ /// Cancellation token.
+ /// A list of active overrides ordered by creation time (newest first).
+ Task> GetActiveOverridesAsync(string userId, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyPort.cs b/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyPort.cs
new file mode 100644
index 0000000..0f2f6f3
--- /dev/null
+++ b/src/AgencyLayer/CognitiveSovereignty/Ports/ISovereigntyPort.cs
@@ -0,0 +1,49 @@
+using AgencyLayer.CognitiveSovereignty.Models;
+
+namespace AgencyLayer.CognitiveSovereignty.Ports;
+
+///
+/// Defines the primary port for managing cognitive sovereignty profiles and modes.
+/// This port handles profile CRUD operations and sovereignty mode resolution
+/// following hexagonal architecture conventions.
+///
+public interface ISovereigntyPort
+{
+ ///
+ /// Retrieves the sovereignty profile for a given user.
+ ///
+ /// The unique identifier of the user.
+ /// Cancellation token.
+ /// The user's sovereignty profile, or null if no profile exists.
+ Task GetProfileAsync(string userId, CancellationToken ct = default);
+
+ ///
+ /// Updates an existing sovereignty profile with the provided values.
+ ///
+ /// The updated profile.
+ /// Cancellation token.
+ /// The updated profile.
+ Task UpdateProfileAsync(SovereigntyProfile profile, CancellationToken ct = default);
+
+ ///
+ /// Sets the sovereignty mode for a user, optionally specifying a domain.
+ /// When a domain is specified, sets a domain-specific override.
+ /// When no domain is specified, sets the default mode.
+ ///
+ /// The unique identifier of the user.
+ /// The sovereignty mode to set.
+ /// Optional domain for domain-specific mode. null sets the default mode.
+ /// Cancellation token.
+ /// The updated sovereignty profile.
+ Task SetModeAsync(string userId, SovereigntyMode mode, string? domain = null, CancellationToken ct = default);
+
+ ///
+ /// Resolves the effective sovereignty mode for a user in a given context.
+ /// Checks active overrides first, then domain-specific rules, then the default mode.
+ ///
+ /// The unique identifier of the user.
+ /// Optional domain to consider for domain-specific resolution.
+ /// Cancellation token.
+ /// The currently effective sovereignty mode.
+ Task GetCurrentModeAsync(string userId, string? domain = null, CancellationToken ct = default);
+}
diff --git a/src/AgencyLayer/ConvenerAgents/ChampionNudger/ChampionNudgerAgent.cs b/src/AgencyLayer/ConvenerAgents/ChampionNudger/ChampionNudgerAgent.cs
index 25f85f1..5c2e71b 100644
--- a/src/AgencyLayer/ConvenerAgents/ChampionNudger/ChampionNudgerAgent.cs
+++ b/src/AgencyLayer/ConvenerAgents/ChampionNudger/ChampionNudgerAgent.cs
@@ -38,7 +38,7 @@ public abstract class CommunityEvent
{
public Guid EventId { get; } = Guid.NewGuid();
public DateTimeOffset Timestamp { get; } = DateTimeOffset.UtcNow;
- public string TenantId { get; set; }
+ public required string TenantId { get; set; }
}
///
@@ -46,11 +46,11 @@ public abstract class CommunityEvent
///
public class ProjectNeedsChampionEvent : CommunityEvent
{
- public string ProjectId { get; set; }
- public string ProjectName { get; set; }
- public string RequiredSkill { get; set; }
- public string RequestingUserId { get; set; }
- public IEnumerable PotentialChampionUserIds { get; set; }
+ public required string ProjectId { get; set; }
+ public required string ProjectName { get; set; }
+ public required string RequiredSkill { get; set; }
+ public required string RequestingUserId { get; set; }
+ public required IEnumerable PotentialChampionUserIds { get; set; }
}
///
@@ -58,9 +58,9 @@ public class ProjectNeedsChampionEvent : CommunityEvent
///
public class CollaborationRequestedEvent : CommunityEvent
{
- public string RequestingUserId { get; set; }
- public string ChampionUserId { get; set; }
- public string ContextMessage { get; set; }
+ public required string RequestingUserId { get; set; }
+ public required string ChampionUserId { get; set; }
+ public required string ContextMessage { get; set; }
}
///
@@ -68,9 +68,9 @@ public class CollaborationRequestedEvent : CommunityEvent
///
public class SafetyConcernDetectedEvent : CommunityEvent
{
- public string ChannelId { get; set; }
- public string ConcernDetails { get; set; }
- public string EscalationTargetUserId { get; set; } // e.g., HR or community manager
+ public required string ChannelId { get; set; }
+ public required string ConcernDetails { get; set; }
+ public required string EscalationTargetUserId { get; set; } // e.g., HR or community manager
}
diff --git a/src/AgencyLayer/DecisionExecution/DecisionExecutor.cs b/src/AgencyLayer/DecisionExecution/DecisionExecutor.cs
index a24082b..e3a6b0b 100644
--- a/src/AgencyLayer/DecisionExecution/DecisionExecutor.cs
+++ b/src/AgencyLayer/DecisionExecution/DecisionExecutor.cs
@@ -1,16 +1,29 @@
+using System.Collections.Concurrent;
+using System.Diagnostics;
+using CognitiveMesh.Shared.Interfaces;
using Microsoft.Extensions.Logging;
namespace AgencyLayer.DecisionExecution
{
///
- /// Handles the execution of decisions made by the cognitive mesh
+ /// Handles the execution of decisions made by the cognitive mesh.
+ /// Uses the knowledge graph for context retrieval and the LLM client
+ /// for reasoning, while tracking execution state and logs internally.
///
public class DecisionExecutor : IDecisionExecutor
{
private readonly ILogger _logger;
private readonly IKnowledgeGraphManager _knowledgeGraphManager;
private readonly ILLMClient _llmClient;
+ private readonly ConcurrentDictionary _executionTracker = new();
+ private readonly ConcurrentDictionary _logBuffer = new();
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for structured diagnostic output.
+ /// Knowledge graph for contextual lookups.
+ /// LLM client for decision reasoning.
public DecisionExecutor(
ILogger logger,
IKnowledgeGraphManager knowledgeGraphManager,
@@ -29,112 +42,225 @@ public async Task ExecuteDecisionAsync(
if (request == null)
throw new ArgumentNullException(nameof(request));
+ var stopwatch = Stopwatch.StartNew();
+
+ // Mark as executing
+ var executingResult = new DecisionResult
+ {
+ RequestId = request.RequestId,
+ Status = DecisionStatus.Executing,
+ Outcome = DecisionOutcome.Success,
+ Timestamp = DateTime.UtcNow
+ };
+ _executionTracker[request.RequestId] = executingResult;
+
try
{
- _logger.LogInformation("Executing decision for request: {RequestId}", request.RequestId);
-
- // TODO: Implement actual decision execution logic
- // This is a placeholder implementation
- await Task.Delay(100, cancellationToken); // Simulate work
-
- return new DecisionResult
+ _logger.LogInformation(
+ "Executing decision for request: {RequestId}, type: {DecisionType}, priority: {Priority}",
+ request.RequestId, request.DecisionType, request.Priority);
+
+ // 1. Query the knowledge graph for relevant context
+ var contextEntries = await _knowledgeGraphManager.QueryAsync(
+ request.DecisionType ?? "default",
+ cancellationToken);
+
+ var contextData = contextEntries?.ToList() ?? new List>();
+ _logger.LogDebug(
+ "Knowledge graph returned {ContextCount} entries for decision {RequestId}",
+ contextData.Count, request.RequestId);
+
+ // 2. Build the prompt from request parameters and knowledge context
+ var prompt = BuildDecisionPrompt(request, contextData);
+
+ // 3. Invoke the LLM for reasoning
+ var llmResponse = await _llmClient.GenerateCompletionAsync(
+ prompt,
+ temperature: 0.3f,
+ maxTokens: 500,
+ cancellationToken: cancellationToken);
+
+ stopwatch.Stop();
+
+ // 4. Build the successful result
+ var result = new DecisionResult
{
RequestId = request.RequestId,
Status = DecisionStatus.Completed,
Outcome = DecisionOutcome.Success,
- ExecutionTime = TimeSpan.FromMilliseconds(150),
+ ExecutionTime = stopwatch.Elapsed,
Timestamp = DateTime.UtcNow,
+ Results = new Dictionary
+ {
+ ["llmResponse"] = llmResponse,
+ ["contextEntriesUsed"] = contextData.Count
+ },
Metadata = new Dictionary
{
["executionNode"] = Environment.MachineName,
- ["version"] = "1.0.0"
+ ["version"] = "1.0.0",
+ ["model"] = _llmClient.ModelName
}
};
+
+ // 5. Store the execution result for status lookups
+ _executionTracker[request.RequestId] = result;
+
+ // 6. Record the decision log entry
+ RecordLog(request, result);
+
+ // 7. Persist the decision node to the knowledge graph
+ await _knowledgeGraphManager.AddNodeAsync(
+ $"decision:{request.RequestId}",
+ new Dictionary
+ {
+ ["requestId"] = request.RequestId,
+ ["decisionType"] = request.DecisionType ?? "unknown",
+ ["outcome"] = result.Outcome.ToString(),
+ ["executionTimeMs"] = result.ExecutionTime.TotalMilliseconds,
+ ["timestamp"] = result.Timestamp.ToString("O")
+ },
+ label: "Decision",
+ cancellationToken: cancellationToken);
+
+ _logger.LogInformation(
+ "Decision {RequestId} completed successfully in {ElapsedMs}ms",
+ request.RequestId, stopwatch.Elapsed.TotalMilliseconds);
+
+ return result;
+ }
+ catch (OperationCanceledException)
+ {
+ stopwatch.Stop();
+ var cancelledResult = new DecisionResult
+ {
+ RequestId = request.RequestId,
+ Status = DecisionStatus.Cancelled,
+ Outcome = DecisionOutcome.Error,
+ ErrorMessage = "Decision execution was cancelled",
+ ExecutionTime = stopwatch.Elapsed,
+ Timestamp = DateTime.UtcNow
+ };
+
+ _executionTracker[request.RequestId] = cancelledResult;
+ RecordLog(request, cancelledResult);
+
+ _logger.LogWarning("Decision {RequestId} was cancelled after {ElapsedMs}ms",
+ request.RequestId, stopwatch.Elapsed.TotalMilliseconds);
+
+ throw;
}
catch (Exception ex)
{
- _logger.LogError(ex, "Error executing decision for request: {RequestId}", request?.RequestId);
-
- return new DecisionResult
+ stopwatch.Stop();
+ _logger.LogError(ex, "Error executing decision for request: {RequestId}", request.RequestId);
+
+ var failedResult = new DecisionResult
{
- RequestId = request?.RequestId ?? "unknown",
+ RequestId = request.RequestId,
Status = DecisionStatus.Failed,
Outcome = DecisionOutcome.Error,
ErrorMessage = ex.Message,
- ExecutionTime = TimeSpan.Zero,
+ ExecutionTime = stopwatch.Elapsed,
Timestamp = DateTime.UtcNow
};
+
+ _executionTracker[request.RequestId] = failedResult;
+ RecordLog(request, failedResult);
+
+ return failedResult;
}
}
///
- public async Task GetDecisionStatusAsync(
+ public Task GetDecisionStatusAsync(
string requestId,
CancellationToken cancellationToken = default)
{
if (string.IsNullOrWhiteSpace(requestId))
throw new ArgumentException("Request ID cannot be empty", nameof(requestId));
- try
+ cancellationToken.ThrowIfCancellationRequested();
+
+ _logger.LogInformation("Retrieving status for decision: {RequestId}", requestId);
+
+ if (_executionTracker.TryGetValue(requestId, out var result))
{
- _logger.LogInformation("Retrieving status for decision: {RequestId}", requestId);
-
- // TODO: Implement actual status retrieval logic
- // This is a placeholder implementation
- await Task.Delay(50, cancellationToken); // Simulate work
-
- return new DecisionResult
- {
- RequestId = requestId,
- Status = DecisionStatus.Completed,
- Outcome = DecisionOutcome.Success,
- Timestamp = DateTime.UtcNow
- };
+ _logger.LogDebug("Found status for decision {RequestId}: {Status}", requestId, result.Status);
+ return Task.FromResult(result);
}
- catch (Exception ex)
+
+ _logger.LogWarning("No execution record found for decision {RequestId}", requestId);
+ return Task.FromResult(new DecisionResult
{
- _logger.LogError(ex, "Error retrieving status for decision: {RequestId}", requestId);
- throw;
- }
+ RequestId = requestId,
+ Status = DecisionStatus.Pending,
+ Outcome = DecisionOutcome.Error,
+ ErrorMessage = $"No execution record found for request ID '{requestId}'",
+ Timestamp = DateTime.UtcNow
+ });
}
///
- public async Task> GetDecisionLogsAsync(
+ public Task> GetDecisionLogsAsync(
DateTime? startDate = null,
DateTime? endDate = null,
int limit = 100,
CancellationToken cancellationToken = default)
{
- try
+ cancellationToken.ThrowIfCancellationRequested();
+
+ _logger.LogInformation(
+ "Retrieving decision logs (startDate={StartDate}, endDate={EndDate}, limit={Limit})",
+ startDate, endDate, limit);
+
+ var effectiveStart = startDate ?? DateTime.MinValue;
+ var effectiveEnd = endDate ?? DateTime.MaxValue;
+
+ var logs = _logBuffer.Values
+ .Where(log => log.Timestamp >= effectiveStart && log.Timestamp <= effectiveEnd)
+ .OrderByDescending(log => log.Timestamp)
+ .Take(limit)
+ .ToList();
+
+ _logger.LogDebug("Returning {LogCount} decision log entries", logs.Count);
+ return Task.FromResult>(logs);
+ }
+
+ private static string BuildDecisionPrompt(DecisionRequest request, List> contextData)
+ {
+ var contextSummary = contextData.Count > 0
+ ? string.Join("; ", contextData.Select(c => string.Join(", ", c.Select(kvp => $"{kvp.Key}={kvp.Value}"))))
+ : "No additional context available.";
+
+ var parameterSummary = request.Parameters.Count > 0
+ ? string.Join(", ", request.Parameters.Select(kvp => $"{kvp.Key}={kvp.Value}"))
+ : "No parameters specified.";
+
+ return $"Decision Type: {request.DecisionType ?? "general"}\n" +
+ $"Parameters: {parameterSummary}\n" +
+ $"Context: {contextSummary}\n" +
+ $"Analyze the above and provide a decision recommendation.";
+ }
+
+ private void RecordLog(DecisionRequest request, DecisionResult result)
+ {
+ var log = new DecisionLog
{
- _logger.LogInformation("Retrieving decision logs");
-
- // TODO: Implement actual log retrieval logic
- // This is a placeholder implementation
- await Task.Delay(50, cancellationToken); // Simulate work
-
- return new[]
+ RequestId = request.RequestId,
+ DecisionType = request.DecisionType ?? "unknown",
+ Status = result.Status,
+ Outcome = result.Outcome,
+ Timestamp = result.Timestamp,
+ Metadata = new Dictionary
{
- new DecisionLog
- {
- RequestId = $"req-{Guid.NewGuid()}",
- DecisionType = "SampleDecision",
- Status = DecisionStatus.Completed,
- Outcome = DecisionOutcome.Success,
- Timestamp = DateTime.UtcNow.AddHours(-1),
- Metadata = new Dictionary
- {
- ["executionTimeMs"] = 125,
- ["node"] = Environment.MachineName
- }
- }
- };
- }
- catch (Exception ex)
- {
- _logger.LogError(ex, "Error retrieving decision logs");
- throw;
- }
+ ["executionTimeMs"] = result.ExecutionTime.TotalMilliseconds,
+ ["node"] = Environment.MachineName,
+ ["priority"] = request.Priority
+ }
+ };
+
+ _logBuffer[request.RequestId] = log;
}
}
@@ -151,8 +277,8 @@ public class DecisionRequest
///
/// Type of decision to execute
///
- public string DecisionType { get; set; }
-
+ public string? DecisionType { get; set; }
+
///
/// Input parameters for the decision
///
@@ -182,22 +308,22 @@ public class DecisionResult
///
/// ID of the original request
///
- public string RequestId { get; set; }
-
+ public required string RequestId { get; set; }
+
///
/// Current status of the decision
///
public DecisionStatus Status { get; set; }
-
+
///
/// Outcome of the decision
///
public DecisionOutcome Outcome { get; set; }
-
+
///
/// Error message if the decision failed
///
- public string ErrorMessage { get; set; }
+ public string? ErrorMessage { get; set; }
///
/// When the decision was processed
@@ -228,12 +354,12 @@ public class DecisionLog
///
/// ID of the decision request
///
- public string RequestId { get; set; }
-
+ public required string RequestId { get; set; }
+
///
/// Type of decision
///
- public string DecisionType { get; set; }
+ public required string DecisionType { get; set; }
///
/// Status of the decision
diff --git a/src/AgencyLayer/HumanCollaboration/CollaborationPortAdapter.cs b/src/AgencyLayer/HumanCollaboration/CollaborationPortAdapter.cs
new file mode 100644
index 0000000..d3276b6
--- /dev/null
+++ b/src/AgencyLayer/HumanCollaboration/CollaborationPortAdapter.cs
@@ -0,0 +1,41 @@
+using MetacognitiveLayer.UncertaintyQuantification;
+using Microsoft.Extensions.Logging;
+
+namespace AgencyLayer.HumanCollaboration
+{
+ ///
+ /// Adapter that bridges the MetacognitiveLayer's
+ /// to the AgencyLayer's implementation.
+ /// This preserves the dependency direction by having the AgencyLayer implement
+ /// a port defined in the MetacognitiveLayer.
+ ///
+ public class CollaborationPortAdapter : ICollaborationPort
+ {
+ private readonly ICollaborationManager _collaborationManager;
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The collaboration manager to delegate to.
+ /// The logger instance.
+ public CollaborationPortAdapter(
+ ICollaborationManager collaborationManager,
+ ILogger logger)
+ {
+ _collaborationManager = collaborationManager ?? throw new ArgumentNullException(nameof(collaborationManager));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public async Task CreateCollaborationSessionAsync(
+ string sessionName,
+ string? description,
+ IEnumerable participantIds,
+ CancellationToken cancellationToken = default)
+ {
+ _logger.LogDebug("Creating collaboration session via port adapter: {SessionName}", sessionName);
+ await _collaborationManager.CreateSessionAsync(sessionName, description, participantIds, cancellationToken);
+ }
+ }
+}
diff --git a/src/AgencyLayer/HumanCollaboration/HumanCollaboration.csproj b/src/AgencyLayer/HumanCollaboration/HumanCollaboration.csproj
index 21bac80..41322f9 100644
--- a/src/AgencyLayer/HumanCollaboration/HumanCollaboration.csproj
+++ b/src/AgencyLayer/HumanCollaboration/HumanCollaboration.csproj
@@ -9,6 +9,7 @@
+
diff --git a/src/AgencyLayer/HumanCollaboration/ServiceCollectionExtensions.cs b/src/AgencyLayer/HumanCollaboration/ServiceCollectionExtensions.cs
index f2b7664..9c40a8f 100644
--- a/src/AgencyLayer/HumanCollaboration/ServiceCollectionExtensions.cs
+++ b/src/AgencyLayer/HumanCollaboration/ServiceCollectionExtensions.cs
@@ -1,3 +1,4 @@
+using MetacognitiveLayer.UncertaintyQuantification;
using Microsoft.Extensions.DependencyInjection;
namespace AgencyLayer.HumanCollaboration
@@ -19,6 +20,7 @@ public static IServiceCollection AddHumanCollaborationServices(this IServiceColl
});
services.AddScoped();
+ services.AddScoped();
return services;
}
diff --git a/src/AgencyLayer/MultiAgentOrchestration/Adapters/InMemoryAgentKnowledgeRepository.cs b/src/AgencyLayer/MultiAgentOrchestration/Adapters/InMemoryAgentKnowledgeRepository.cs
index 0f07dde..382f5f5 100644
--- a/src/AgencyLayer/MultiAgentOrchestration/Adapters/InMemoryAgentKnowledgeRepository.cs
+++ b/src/AgencyLayer/MultiAgentOrchestration/Adapters/InMemoryAgentKnowledgeRepository.cs
@@ -6,15 +6,20 @@
namespace AgencyLayer.MultiAgentOrchestration.Adapters;
///
-/// In-memory implementation of IAgentKnowledgeRepository for storing
-/// agent definitions and learning insights.
+/// In-memory implementation of for storing
+/// agent definitions and learning insights. Uses
+/// for thread-safe storage and efficient retrieval by key.
///
public class InMemoryAgentKnowledgeRepository : IAgentKnowledgeRepository
{
private readonly ConcurrentDictionary _definitions = new();
- private readonly ConcurrentBag _insights = new();
+ private readonly ConcurrentDictionary _insights = new();
private readonly ILogger _logger;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for structured diagnostic output.
public InMemoryAgentKnowledgeRepository(ILogger logger)
{
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
@@ -24,21 +29,34 @@ public InMemoryAgentKnowledgeRepository(ILogger
public Task GetAgentDefinitionAsync(string agentType)
{
+ ArgumentException.ThrowIfNullOrWhiteSpace(agentType);
+
_definitions.TryGetValue(agentType, out var definition);
if (definition == null)
{
_logger.LogWarning("Agent definition not found for type: {AgentType}", agentType);
}
+ else
+ {
+ _logger.LogDebug("Retrieved agent definition for type: {AgentType}", agentType);
+ }
return Task.FromResult(definition);
}
@@ -46,21 +64,72 @@ public Task StoreAgentDefinitionAsync(AgentDefinition definition)
public Task StoreLearningInsightAsync(AgentLearningInsight insight)
{
ArgumentNullException.ThrowIfNull(insight);
+ ArgumentException.ThrowIfNullOrWhiteSpace(insight.InsightId);
+
+ _insights[insight.InsightId] = insight;
+
+ _logger.LogDebug(
+ "Stored learning insight: {InsightId} (type={InsightType}, confidence={Confidence:F2}, agent={AgentType})",
+ insight.InsightId, insight.InsightType, insight.ConfidenceScore, insight.GeneratingAgentType);
- _insights.Add(insight);
- _logger.LogDebug("Stored learning insight: {InsightId} ({InsightType})", insight.InsightId, insight.InsightType);
return Task.CompletedTask;
}
///
public Task> GetRelevantInsightsAsync(string taskGoal)
{
- var relevant = _insights
- .Where(i => i.InsightType != null &&
- (taskGoal.Contains(i.InsightType, StringComparison.OrdinalIgnoreCase) ||
- i.ConfidenceScore >= 0.8))
- .OrderByDescending(i => i.ConfidenceScore)
- .Take(10);
+ ArgumentException.ThrowIfNullOrWhiteSpace(taskGoal);
+
+ var goalTokens = taskGoal
+ .Split(' ', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
+ .Where(t => t.Length > 2)
+ .Select(t => t.ToUpperInvariant())
+ .ToHashSet();
+
+ var relevant = _insights.Values
+ .Select(insight =>
+ {
+ double relevanceScore = 0.0;
+
+ // Exact type match in goal text
+ if (!string.IsNullOrEmpty(insight.InsightType) &&
+ taskGoal.Contains(insight.InsightType, StringComparison.OrdinalIgnoreCase))
+ {
+ relevanceScore += 1.0;
+ }
+
+ // Token overlap between goal and insight type
+ if (!string.IsNullOrEmpty(insight.InsightType))
+ {
+ var insightTokens = insight.InsightType
+ .Split(' ', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
+ .Select(t => t.ToUpperInvariant())
+ .ToHashSet();
+
+ var overlap = goalTokens.Intersect(insightTokens).Count();
+ if (insightTokens.Count > 0)
+ {
+ relevanceScore += (double)overlap / insightTokens.Count * 0.5;
+ }
+ }
+
+ // High-confidence insights are always somewhat relevant
+ if (insight.ConfidenceScore >= 0.8)
+ {
+ relevanceScore += 0.3;
+ }
+
+ return new { Insight = insight, Relevance = relevanceScore };
+ })
+ .Where(x => x.Relevance > 0.0)
+ .OrderByDescending(x => x.Relevance)
+ .ThenByDescending(x => x.Insight.ConfidenceScore)
+ .Take(10)
+ .Select(x => x.Insight);
+
+ _logger.LogDebug(
+ "Retrieved {Count} relevant insights for goal: {TaskGoal}",
+ relevant.Count(), taskGoal);
return Task.FromResult(relevant);
}
diff --git a/src/AgencyLayer/MultiAgentOrchestration/Engines/MultiAgentOrchestrationEngine.cs b/src/AgencyLayer/MultiAgentOrchestration/Engines/MultiAgentOrchestrationEngine.cs
index 6678a7b..950ac01 100644
--- a/src/AgencyLayer/MultiAgentOrchestration/Engines/MultiAgentOrchestrationEngine.cs
+++ b/src/AgencyLayer/MultiAgentOrchestration/Engines/MultiAgentOrchestrationEngine.cs
@@ -152,21 +152,56 @@ await ShareLearningInsightAsync(new AgentLearningInsight
}
///
- public Task SetAgentAutonomyAsync(string agentIdOrType, AutonomyLevel level, string tenantId)
+ public async Task SetAgentAutonomyAsync(string agentIdOrType, AutonomyLevel level, string tenantId)
{
- // In a real system, this would also persist to the knowledge repository.
+ ArgumentException.ThrowIfNullOrWhiteSpace(agentIdOrType);
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ var previousLevel = _autonomySettings.TryGetValue(agentIdOrType, out var existing) ? existing : (AutonomyLevel?)null;
_autonomySettings[agentIdOrType] = level;
- _logger.LogInformation("Set autonomy for '{AgentIdOrType}' to {Level} for Tenant '{TenantId}'.", agentIdOrType, level, tenantId);
- return Task.CompletedTask;
+
+ // Persist the autonomy change as a learning insight for audit and replay
+ await _knowledgeRepository.StoreLearningInsightAsync(new AgentLearningInsight
+ {
+ GeneratingAgentType = "Orchestrator",
+ InsightType = "AutonomyChange",
+ InsightData = new { AgentIdOrType = agentIdOrType, PreviousLevel = previousLevel?.ToString(), NewLevel = level.ToString(), TenantId = tenantId },
+ ConfidenceScore = 1.0
+ });
+
+ _logger.LogInformation(
+ "Set autonomy for '{AgentIdOrType}' from {PreviousLevel} to {Level} for Tenant '{TenantId}'.",
+ agentIdOrType, previousLevel?.ToString() ?? "unset", level, tenantId);
}
///
- public Task ConfigureAgentAuthorityAsync(string agentIdOrType, AuthorityScope scope, string tenantId)
+ public async Task ConfigureAgentAuthorityAsync(string agentIdOrType, AuthorityScope scope, string tenantId)
{
- // In a real system, this would also persist to the knowledge repository.
+ ArgumentException.ThrowIfNullOrWhiteSpace(agentIdOrType);
+ ArgumentNullException.ThrowIfNull(scope);
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
_authoritySettings[agentIdOrType] = scope;
- _logger.LogInformation("Configured authority for '{AgentIdOrType}' for Tenant '{TenantId}'.", agentIdOrType, tenantId);
- return Task.CompletedTask;
+
+ // Persist the authority configuration change for audit and replay
+ await _knowledgeRepository.StoreLearningInsightAsync(new AgentLearningInsight
+ {
+ GeneratingAgentType = "Orchestrator",
+ InsightType = "AuthorityChange",
+ InsightData = new
+ {
+ AgentIdOrType = agentIdOrType,
+ AllowedEndpoints = scope.AllowedApiEndpoints,
+ MaxBudget = scope.MaxBudget,
+ MaxResource = scope.MaxResourceConsumption,
+ TenantId = tenantId
+ },
+ ConfidenceScore = 1.0
+ });
+
+ _logger.LogInformation(
+ "Configured authority for '{AgentIdOrType}' with {EndpointCount} allowed endpoints for Tenant '{TenantId}'.",
+ agentIdOrType, scope.AllowedApiEndpoints.Count, tenantId);
}
///
@@ -190,7 +225,60 @@ public async Task SpawnAgentAsync(DynamicAgentSpawnRequest request)
public Task GetAgentTaskStatusAsync(string taskId, string tenantId)
{
_activeTasks.TryGetValue(taskId, out var task);
- return Task.FromResult(task); // Returns null if not found
+ return Task.FromResult(task)!; // Returns null if not found
+ }
+
+ ///
+ public Task GetAgentByIdAsync(Guid agentId)
+ {
+ var definition = _agentDefinitions.Values.FirstOrDefault(d => d.AgentId == agentId);
+ if (definition == null)
+ {
+ _logger.LogWarning("Agent definition not found for ID: {AgentId}", agentId);
+ }
+ return Task.FromResult(definition)!;
+ }
+
+ ///
+ public Task> ListAgentsAsync(bool includeRetired = false)
+ {
+ IEnumerable agents = includeRetired
+ ? _agentDefinitions.Values
+ : _agentDefinitions.Values.Where(d => d.Status != AgentStatus.Retired);
+
+ _logger.LogDebug("Listing agents (includeRetired={IncludeRetired}): {Count} found", includeRetired, agents.Count());
+ return Task.FromResult(agents);
+ }
+
+ ///
+ public async Task UpdateAgentAsync(AgentDefinition definition)
+ {
+ ArgumentNullException.ThrowIfNull(definition);
+ ArgumentException.ThrowIfNullOrWhiteSpace(definition.AgentType);
+
+ if (!_agentDefinitions.TryGetValue(definition.AgentType, out var existing))
+ {
+ throw new InvalidOperationException($"Agent type '{definition.AgentType}' is not registered. Use RegisterAgentAsync to add new agents.");
+ }
+
+ _agentDefinitions[definition.AgentType] = definition;
+ await _knowledgeRepository.StoreAgentDefinitionAsync(definition);
+ _logger.LogInformation("Updated agent definition for type: {AgentType}", definition.AgentType);
+ }
+
+ ///
+ public Task RetireAgentAsync(Guid agentId)
+ {
+ var entry = _agentDefinitions.FirstOrDefault(kvp => kvp.Value.AgentId == agentId);
+ if (entry.Value == null)
+ {
+ _logger.LogWarning("Cannot retire agent: no definition found for ID {AgentId}", agentId);
+ return Task.FromResult(false);
+ }
+
+ entry.Value.Status = AgentStatus.Retired;
+ _logger.LogInformation("Retired agent type '{AgentType}' (ID: {AgentId})", entry.Value.AgentType, agentId);
+ return Task.FromResult(true);
}
@@ -244,7 +332,7 @@ private async Task
public Guid AgentId { get; set; }
- public string AgentType { get; set; } // e.g., "ChampionNudger", "VelocityRecalibrator"
- public string Description { get; set; }
+ public string AgentType { get; set; } = string.Empty; // e.g., "ChampionNudger", "VelocityRecalibrator"
+ public string Description { get; set; } = string.Empty;
public List Capabilities { get; set; } = new();
public AutonomyLevel DefaultAutonomyLevel { get; set; } = AutonomyLevel.RecommendOnly;
public AuthorityScope DefaultAuthorityScope { get; set; } = new();
@@ -127,7 +127,7 @@ public class AgentDefinition
public class AgentTask
{
public string TaskId { get; set; } = Guid.NewGuid().ToString();
- public string Goal { get; set; }
+ public string Goal { get; set; } = string.Empty;
public Dictionary Context { get; set; } = new();
public List Constraints { get; set; } = new();
public CoordinationPattern CoordinationPattern { get; set; } = CoordinationPattern.CollaborativeSwarm;
@@ -139,9 +139,9 @@ public class AgentTask
///
public class AgentExecutionRequest
{
- public AgentTask Task { get; set; }
- public string TenantId { get; set; }
- public string RequestingUserId { get; set; }
+ public AgentTask Task { get; set; } = null!;
+ public string TenantId { get; set; } = string.Empty;
+ public string RequestingUserId { get; set; } = string.Empty;
}
///
@@ -149,12 +149,12 @@ public class AgentExecutionRequest
///
public class AgentExecutionResponse
{
- public string TaskId { get; set; }
+ public string TaskId { get; set; } = string.Empty;
public bool IsSuccess { get; set; }
- public object Result { get; set; }
- public string Summary { get; set; }
+ public object Result { get; set; } = null!;
+ public string Summary { get; set; } = string.Empty;
public List AgentIdsInvolved { get; set; } = new();
- public string AuditTrailId { get; set; }
+ public string AuditTrailId { get; set; } = string.Empty;
}
///
@@ -163,9 +163,9 @@ public class AgentExecutionResponse
public class AgentLearningInsight
{
public string InsightId { get; set; } = Guid.NewGuid().ToString();
- public string GeneratingAgentType { get; set; }
- public string InsightType { get; set; } // e.g., "OptimizedWorkflow", "NewRiskFactor"
- public object InsightData { get; set; }
+ public string GeneratingAgentType { get; set; } = string.Empty;
+ public string InsightType { get; set; } = string.Empty; // e.g., "OptimizedWorkflow", "NewRiskFactor"
+ public object InsightData { get; set; } = null!;
public double ConfidenceScore { get; set; }
}
@@ -174,11 +174,11 @@ public class AgentLearningInsight
///
public class DynamicAgentSpawnRequest
{
- public string AgentType { get; set; }
- public string TenantId { get; set; }
- public string ParentTaskId { get; set; } // The task that requires this new agent.
+ public string AgentType { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
+ public string ParentTaskId { get; set; } = string.Empty; // The task that requires this new agent.
public AutonomyLevel? CustomAutonomy { get; set; }
- public AuthorityScope CustomAuthority { get; set; }
+ public AuthorityScope CustomAuthority { get; set; } = null!;
}
diff --git a/src/AgencyLayer/Orchestration/Benchmarks/MakerBenchmark.cs b/src/AgencyLayer/Orchestration/Benchmarks/MakerBenchmark.cs
index d381859..a4a7b98 100644
--- a/src/AgencyLayer/Orchestration/Benchmarks/MakerBenchmark.cs
+++ b/src/AgencyLayer/Orchestration/Benchmarks/MakerBenchmark.cs
@@ -320,37 +320,68 @@ private static double CalculateOverallMakerScore(List results)
}
}
+///
+/// Represents a single Tower of Hanoi move in the MAKER benchmark.
+///
public class HanoiMove
{
+ /// Gets or sets the sequential move number.
public int MoveNumber { get; set; }
+ /// Gets or sets the disc number being moved.
public int Disc { get; set; }
+ /// Gets or sets the source peg.
public char From { get; set; }
+ /// Gets or sets the destination peg.
public char To { get; set; }
}
+///
+/// Reports the score for a single MAKER benchmark run at a given disc count.
+///
public class MakerScoreReport
{
+ /// Gets or sets the name of the benchmark.
public string BenchmarkName { get; set; } = string.Empty;
+ /// Gets or sets the number of discs used.
public int NumDiscs { get; set; }
+ /// Gets or sets the total steps required for completion.
public int TotalStepsRequired { get; set; }
+ /// Gets or sets the number of steps completed successfully.
public int StepsCompleted { get; set; }
+ /// Gets or sets the number of steps that failed.
public int StepsFailed { get; set; }
+ /// Gets or sets whether the benchmark run was successful.
public bool Success { get; set; }
+ /// Gets or sets the total duration of the benchmark run.
public TimeSpan TotalDuration { get; set; }
+ /// Gets or sets the average duration per step.
public TimeSpan AverageStepDuration { get; set; }
+ /// Gets or sets the number of checkpoints created during the run.
public int CheckpointsCreated { get; set; }
+ /// Gets or sets the computed MAKER score.
public double MakerScore { get; set; }
+ /// Gets or sets the workflow identifier for this run.
public string WorkflowId { get; set; } = string.Empty;
+ /// Gets or sets the timestamp of the benchmark run.
public DateTime Timestamp { get; set; }
}
+///
+/// Reports the results of a progressive MAKER benchmark across multiple disc counts.
+///
public class MakerProgressiveReport
{
+ /// Gets or sets the maximum number of discs attempted.
public int MaxDiscsAttempted { get; set; }
+ /// Gets or sets the maximum number of discs completed successfully.
public int MaxDiscsCompleted { get; set; }
+ /// Gets or sets the maximum number of steps completed in any run.
public int MaxStepsCompleted { get; set; }
+ /// Gets or sets the overall MAKER score across all runs.
public double OverallMakerScore { get; set; }
+ /// Gets or sets the individual results for each disc count.
public List Results { get; set; } = new();
+ /// Gets or sets the timestamp of the report.
public DateTime Timestamp { get; set; }
public string GetSummary()
diff --git a/src/AgencyLayer/Orchestration/Checkpointing/InMemoryCheckpointManager.cs b/src/AgencyLayer/Orchestration/Checkpointing/InMemoryCheckpointManager.cs
index 2a9de2a..27ad136 100644
--- a/src/AgencyLayer/Orchestration/Checkpointing/InMemoryCheckpointManager.cs
+++ b/src/AgencyLayer/Orchestration/Checkpointing/InMemoryCheckpointManager.cs
@@ -82,8 +82,28 @@ public Task> GetWorkflowCheckpointsAsync(string
public Task PurgeWorkflowCheckpointsAsync(string workflowId, CancellationToken cancellationToken = default)
{
- _checkpoints.TryRemove(workflowId, out _);
- _logger.LogInformation("Purged all checkpoints for workflow {WorkflowId}", workflowId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(workflowId);
+ cancellationToken.ThrowIfCancellationRequested();
+
+ if (_checkpoints.TryRemove(workflowId, out var removed))
+ {
+ int count;
+ lock (removed)
+ {
+ count = removed.Count;
+ removed.Clear();
+ }
+
+ _logger.LogInformation(
+ "Purged {CheckpointCount} checkpoints for workflow {WorkflowId}",
+ count, workflowId);
+ }
+ else
+ {
+ _logger.LogDebug(
+ "No checkpoints found to purge for workflow {WorkflowId}", workflowId);
+ }
+
return Task.CompletedTask;
}
diff --git a/src/AgencyLayer/Orchestration/Execution/DurableWorkflowEngine.cs b/src/AgencyLayer/Orchestration/Execution/DurableWorkflowEngine.cs
index bca5e4a..007df94 100644
--- a/src/AgencyLayer/Orchestration/Execution/DurableWorkflowEngine.cs
+++ b/src/AgencyLayer/Orchestration/Execution/DurableWorkflowEngine.cs
@@ -107,15 +107,49 @@ public Task GetWorkflowStatusAsync(string workflowId, Cancellati
return Task.FromResult(status ?? new WorkflowStatus { WorkflowId = workflowId, State = WorkflowState.Pending });
}
- public Task CancelWorkflowAsync(string workflowId, CancellationToken cancellationToken = default)
+ public async Task CancelWorkflowAsync(string workflowId, CancellationToken cancellationToken = default)
{
+ ArgumentException.ThrowIfNullOrWhiteSpace(workflowId);
+
+ if (!_activeWorkflows.TryGetValue(workflowId, out var status))
+ {
+ _logger.LogWarning("Cannot cancel workflow {WorkflowId}: workflow not found in active registry", workflowId);
+ return;
+ }
+
+ if (status.State is WorkflowState.Completed or WorkflowState.Failed or WorkflowState.Cancelled)
+ {
+ _logger.LogWarning(
+ "Cannot cancel workflow {WorkflowId}: workflow is already in terminal state {State}",
+ workflowId, status.State);
+ return;
+ }
+
+ // Signal the linked CancellationTokenSource so running steps observe cancellation
if (_cancellationTokens.TryGetValue(workflowId, out var cts))
{
cts.Cancel();
- UpdateWorkflowState(workflowId, WorkflowState.Cancelled);
- _logger.LogInformation("Workflow {WorkflowId} cancelled", workflowId);
}
- return Task.CompletedTask;
+
+ UpdateWorkflowState(workflowId, WorkflowState.Cancelled);
+
+ // Record a cancellation checkpoint so the workflow can be inspected post-mortem
+ var cancellationCheckpoint = new ExecutionCheckpoint
+ {
+ WorkflowId = workflowId,
+ StepNumber = status.CurrentStep,
+ StepName = $"Cancelled at step {status.CurrentStep} ({status.CurrentStepName})",
+ Status = ExecutionStepStatus.Failed,
+ StateJson = "{}",
+ InputJson = "{}",
+ OutputJson = "{}",
+ ErrorMessage = "Workflow explicitly cancelled via CancelWorkflowAsync"
+ };
+ await _checkpointManager.SaveCheckpointAsync(cancellationCheckpoint, cancellationToken);
+
+ _logger.LogInformation(
+ "Workflow {WorkflowId} cancelled at step {CurrentStep}/{TotalSteps}",
+ workflowId, status.CurrentStep, status.TotalSteps);
}
private async Task ExecuteFromStepAsync(
diff --git a/src/AgencyLayer/ProcessAutomation/WorkflowTemplateRegistry.cs b/src/AgencyLayer/ProcessAutomation/WorkflowTemplateRegistry.cs
index d9682fb..3e9fd76 100644
--- a/src/AgencyLayer/ProcessAutomation/WorkflowTemplateRegistry.cs
+++ b/src/AgencyLayer/ProcessAutomation/WorkflowTemplateRegistry.cs
@@ -14,6 +14,10 @@ public class WorkflowTemplateRegistry
private readonly ConcurrentDictionary _templates = new();
private readonly ILogger _logger;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance.
public WorkflowTemplateRegistry(ILogger logger)
{
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
@@ -70,11 +74,24 @@ public bool IsPreApproved(string templateId) =>
///
public class WorkflowTemplate
{
+ /// Gets or sets the unique identifier for this template.
public string TemplateId { get; set; } = Guid.NewGuid().ToString();
+
+ /// Gets or sets the human-readable name of the template.
public string Name { get; set; } = string.Empty;
+
+ /// Gets or sets a description of what the template does.
public string Description { get; set; } = string.Empty;
+
+ /// Gets or sets a value indicating whether this template is pre-approved for governance bypass.
public bool IsPreApproved { get; set; }
+
+ /// Gets or sets the identity of the approver who pre-approved this template.
public string ApprovedBy { get; set; } = string.Empty;
+
+ /// Gets or sets the date and time when this template was approved.
public DateTime ApprovedAt { get; set; }
+
+ /// Gets or sets the delegate that builds a from a parameter dictionary.
public Func, WorkflowDefinition> BuildWorkflow { get; set; } = _ => new WorkflowDefinition();
}
diff --git a/src/AgencyLayer/RealTime/Adapters/SignalRNotificationAdapter.cs b/src/AgencyLayer/RealTime/Adapters/SignalRNotificationAdapter.cs
new file mode 100644
index 0000000..ec809d3
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Adapters/SignalRNotificationAdapter.cs
@@ -0,0 +1,130 @@
+using System.Collections.Concurrent;
+using CognitiveMesh.AgencyLayer.RealTime.Hubs;
+using CognitiveMesh.AgencyLayer.RealTime.Models;
+using CognitiveMesh.AgencyLayer.RealTime.Ports;
+using Microsoft.AspNetCore.SignalR;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.AgencyLayer.RealTime.Adapters;
+
+///
+/// SignalR-based implementation of .
+/// Uses a typed to send messages to connected clients
+/// and tracks connected users in a thread-safe concurrent dictionary.
+///
+public class SignalRNotificationAdapter : IRealTimeNotificationPort
+{
+ private readonly IHubContext _hubContext;
+ private readonly ILogger _logger;
+ private readonly ConcurrentDictionary _connectedUsers = new();
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The typed SignalR hub context for sending messages to clients.
+ /// The logger instance for structured logging.
+ /// Thrown when any required parameter is null.
+ public SignalRNotificationAdapter(
+ IHubContext hubContext,
+ ILogger logger)
+ {
+ _hubContext = hubContext ?? throw new ArgumentNullException(nameof(hubContext));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public async Task BroadcastAsync(string eventName, object payload, CancellationToken ct)
+ {
+ _logger.LogDebug("Broadcasting event {EventName} to all clients", eventName);
+
+ var notification = new RealTimeEvent(
+ EventId: Guid.NewGuid().ToString(),
+ EventType: eventName,
+ Payload: payload,
+ Timestamp: DateTimeOffset.UtcNow);
+
+ await _hubContext.Clients.All.ReceiveNotification(notification);
+ }
+
+ ///
+ public async Task SendToUserAsync(string userId, string eventName, object payload, CancellationToken ct)
+ {
+ _logger.LogDebug("Sending event {EventName} to user {UserId}", eventName, userId);
+
+ var notification = new RealTimeEvent(
+ EventId: Guid.NewGuid().ToString(),
+ EventType: eventName,
+ Payload: payload,
+ Timestamp: DateTimeOffset.UtcNow);
+
+ await _hubContext.Clients.User(userId).ReceiveNotification(notification);
+ }
+
+ ///
+ public async Task SendToGroupAsync(string groupName, string eventName, object payload, CancellationToken ct)
+ {
+ _logger.LogDebug("Sending event {EventName} to group {GroupName}", eventName, groupName);
+
+ var notification = new RealTimeEvent(
+ EventId: Guid.NewGuid().ToString(),
+ EventType: eventName,
+ Payload: payload,
+ Timestamp: DateTimeOffset.UtcNow);
+
+ await _hubContext.Clients.Group(groupName).ReceiveNotification(notification);
+ }
+
+ ///
+ public async Task AddToGroupAsync(string connectionId, string groupName, CancellationToken ct)
+ {
+ _logger.LogDebug(
+ "Adding connection {ConnectionId} to group {GroupName}",
+ connectionId,
+ groupName);
+
+ await _hubContext.Groups.AddToGroupAsync(connectionId, groupName, ct);
+ }
+
+ ///
+ public async Task RemoveFromGroupAsync(string connectionId, string groupName, CancellationToken ct)
+ {
+ _logger.LogDebug(
+ "Removing connection {ConnectionId} from group {GroupName}",
+ connectionId,
+ groupName);
+
+ await _hubContext.Groups.RemoveFromGroupAsync(connectionId, groupName, ct);
+ }
+
+ ///
+ public Task> GetConnectedUsersAsync(CancellationToken ct)
+ {
+ IReadOnlyList users = _connectedUsers.Values.ToList().AsReadOnly();
+ return Task.FromResult(users);
+ }
+
+ ///
+ /// Registers a connected user in the internal tracking dictionary.
+ /// Called by hub lifecycle methods to maintain the connected user list.
+ ///
+ /// The connected user to track.
+ public void TrackConnection(ConnectedUser user)
+ {
+ _connectedUsers.TryAdd(user.ConnectionId, user);
+ _logger.LogDebug(
+ "Tracking connection {ConnectionId} for user {UserId}",
+ user.ConnectionId,
+ user.UserId);
+ }
+
+ ///
+ /// Removes a connected user from the internal tracking dictionary.
+ /// Called by hub lifecycle methods when a client disconnects.
+ ///
+ /// The connection identifier to stop tracking.
+ public void UntrackConnection(string connectionId)
+ {
+ _connectedUsers.TryRemove(connectionId, out _);
+ _logger.LogDebug("Untracked connection {ConnectionId}", connectionId);
+ }
+}
diff --git a/src/AgencyLayer/RealTime/Hubs/CognitiveMeshHub.cs b/src/AgencyLayer/RealTime/Hubs/CognitiveMeshHub.cs
new file mode 100644
index 0000000..4475fd3
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Hubs/CognitiveMeshHub.cs
@@ -0,0 +1,167 @@
+using CognitiveMesh.AgencyLayer.RealTime.Ports;
+using Microsoft.AspNetCore.SignalR;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.AgencyLayer.RealTime.Hubs;
+
+///
+/// SignalR hub that provides real-time communication for the Cognitive Mesh platform.
+/// Clients can subscribe to agent updates, join dashboard groups, and receive
+/// workflow progress notifications through this hub.
+///
+public class CognitiveMeshHub : Hub
+{
+ private readonly ILogger _logger;
+ private readonly IRealTimeNotificationPort _notificationPort;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance for structured logging.
+ /// The notification port for managing real-time connections.
+ /// Thrown when any required parameter is null.
+ public CognitiveMeshHub(
+ ILogger logger,
+ IRealTimeNotificationPort notificationPort)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _notificationPort = notificationPort ?? throw new ArgumentNullException(nameof(notificationPort));
+ }
+
+ ///
+ /// Called when a new client connects to the hub. Extracts user information
+ /// from the connection context and tracks the connection.
+ ///
+ /// A task that completes when the connection has been registered.
+ public override async Task OnConnectedAsync()
+ {
+ var connectionId = Context.ConnectionId;
+ var userId = Context.UserIdentifier ?? "anonymous";
+
+ _logger.LogInformation(
+ "Client connected. ConnectionId: {ConnectionId}, UserId: {UserId}",
+ connectionId,
+ userId);
+
+ await _notificationPort.AddToGroupAsync(connectionId, "all-users", CancellationToken.None);
+ await base.OnConnectedAsync();
+ }
+
+ ///
+ /// Called when a client disconnects from the hub. Removes the connection from tracking.
+ ///
+ /// The exception that caused the disconnect, if any.
+ /// A task that completes when the disconnection has been processed.
+ public override async Task OnDisconnectedAsync(Exception? exception)
+ {
+ var connectionId = Context.ConnectionId;
+ var userId = Context.UserIdentifier ?? "anonymous";
+
+ if (exception is not null)
+ {
+ _logger.LogWarning(
+ exception,
+ "Client disconnected with error. ConnectionId: {ConnectionId}, UserId: {UserId}",
+ connectionId,
+ userId);
+ }
+ else
+ {
+ _logger.LogInformation(
+ "Client disconnected. ConnectionId: {ConnectionId}, UserId: {UserId}",
+ connectionId,
+ userId);
+ }
+
+ await _notificationPort.RemoveFromGroupAsync(connectionId, "all-users", CancellationToken.None);
+ await base.OnDisconnectedAsync(exception);
+ }
+
+ ///
+ /// Adds the calling client to a dashboard group to receive targeted dashboard updates.
+ ///
+ /// The unique identifier of the dashboard to join.
+ /// A task that completes when the client has joined the dashboard group.
+ /// Thrown when is null or whitespace.
+ public async Task JoinDashboardGroup(string dashboardId)
+ {
+ if (string.IsNullOrWhiteSpace(dashboardId))
+ {
+ throw new ArgumentException("Dashboard ID must not be null or whitespace.", nameof(dashboardId));
+ }
+
+ var groupName = $"dashboard-{dashboardId}";
+ await _notificationPort.AddToGroupAsync(Context.ConnectionId, groupName, CancellationToken.None);
+
+ _logger.LogInformation(
+ "Client {ConnectionId} joined dashboard group {GroupName}",
+ Context.ConnectionId,
+ groupName);
+ }
+
+ ///
+ /// Removes the calling client from a dashboard group.
+ ///
+ /// The unique identifier of the dashboard to leave.
+ /// A task that completes when the client has left the dashboard group.
+ /// Thrown when is null or whitespace.
+ public async Task LeaveDashboardGroup(string dashboardId)
+ {
+ if (string.IsNullOrWhiteSpace(dashboardId))
+ {
+ throw new ArgumentException("Dashboard ID must not be null or whitespace.", nameof(dashboardId));
+ }
+
+ var groupName = $"dashboard-{dashboardId}";
+ await _notificationPort.RemoveFromGroupAsync(Context.ConnectionId, groupName, CancellationToken.None);
+
+ _logger.LogInformation(
+ "Client {ConnectionId} left dashboard group {GroupName}",
+ Context.ConnectionId,
+ groupName);
+ }
+
+ ///
+ /// Subscribes the calling client to receive updates for a specific agent.
+ ///
+ /// The unique identifier of the agent to subscribe to.
+ /// A task that completes when the subscription has been registered.
+ /// Thrown when is null or whitespace.
+ public async Task SubscribeToAgent(string agentId)
+ {
+ if (string.IsNullOrWhiteSpace(agentId))
+ {
+ throw new ArgumentException("Agent ID must not be null or whitespace.", nameof(agentId));
+ }
+
+ var groupName = $"agent-{agentId}";
+ await _notificationPort.AddToGroupAsync(Context.ConnectionId, groupName, CancellationToken.None);
+
+ _logger.LogInformation(
+ "Client {ConnectionId} subscribed to agent {AgentId}",
+ Context.ConnectionId,
+ agentId);
+ }
+
+ ///
+ /// Unsubscribes the calling client from updates for a specific agent.
+ ///
+ /// The unique identifier of the agent to unsubscribe from.
+ /// A task that completes when the subscription has been removed.
+ /// Thrown when is null or whitespace.
+ public async Task UnsubscribeFromAgent(string agentId)
+ {
+ if (string.IsNullOrWhiteSpace(agentId))
+ {
+ throw new ArgumentException("Agent ID must not be null or whitespace.", nameof(agentId));
+ }
+
+ var groupName = $"agent-{agentId}";
+ await _notificationPort.RemoveFromGroupAsync(Context.ConnectionId, groupName, CancellationToken.None);
+
+ _logger.LogInformation(
+ "Client {ConnectionId} unsubscribed from agent {AgentId}",
+ Context.ConnectionId,
+ agentId);
+ }
+}
diff --git a/src/AgencyLayer/RealTime/Hubs/ICognitiveMeshHubClient.cs b/src/AgencyLayer/RealTime/Hubs/ICognitiveMeshHubClient.cs
new file mode 100644
index 0000000..c09b03e
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Hubs/ICognitiveMeshHubClient.cs
@@ -0,0 +1,45 @@
+using CognitiveMesh.AgencyLayer.RealTime.Models;
+
+namespace CognitiveMesh.AgencyLayer.RealTime.Hubs;
+
+///
+/// Defines the strongly-typed client interface for the Cognitive Mesh SignalR hub.
+/// Methods on this interface correspond to client-side handlers that receive real-time notifications.
+///
+public interface ICognitiveMeshHubClient
+{
+ ///
+ /// Receives an agent status change notification.
+ ///
+ /// The agent status update details.
+ /// A task that completes when the client has acknowledged the message.
+ Task ReceiveAgentStatus(AgentStatusUpdate update);
+
+ ///
+ /// Receives a workflow progress notification.
+ ///
+ /// The workflow progress update details.
+ /// A task that completes when the client has acknowledged the message.
+ Task ReceiveWorkflowProgress(WorkflowProgressUpdate update);
+
+ ///
+ /// Receives a metric threshold violation alert.
+ ///
+ /// The metric alert notification details.
+ /// A task that completes when the client has acknowledged the message.
+ Task ReceiveMetricAlert(MetricAlertNotification alert);
+
+ ///
+ /// Receives a generic real-time event notification.
+ ///
+ /// The real-time event details.
+ /// A task that completes when the client has acknowledged the message.
+ Task ReceiveNotification(RealTimeEvent notification);
+
+ ///
+ /// Receives a dashboard data update notification.
+ ///
+ /// The dashboard data update details.
+ /// A task that completes when the client has acknowledged the message.
+ Task ReceiveDashboardUpdate(DashboardDataUpdate update);
+}
diff --git a/src/AgencyLayer/RealTime/Infrastructure/ServiceCollectionExtensions.cs b/src/AgencyLayer/RealTime/Infrastructure/ServiceCollectionExtensions.cs
new file mode 100644
index 0000000..c7c68dd
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Infrastructure/ServiceCollectionExtensions.cs
@@ -0,0 +1,48 @@
+using CognitiveMesh.AgencyLayer.RealTime.Adapters;
+using CognitiveMesh.AgencyLayer.RealTime.Hubs;
+using CognitiveMesh.AgencyLayer.RealTime.Ports;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Routing;
+using Microsoft.AspNetCore.SignalR;
+using Microsoft.Extensions.DependencyInjection;
+
+namespace CognitiveMesh.AgencyLayer.RealTime.Infrastructure;
+
+///
+/// Extension methods for registering Cognitive Mesh real-time services and mapping hub endpoints.
+///
+public static class ServiceCollectionExtensions
+{
+ ///
+ /// Registers the Cognitive Mesh real-time notification services into the dependency injection container.
+ /// This registers as a singleton implementation of
+ /// .
+ ///
+ /// The service collection to add services to.
+ /// The service collection for chaining.
+ /// Thrown when is null.
+ public static IServiceCollection AddCognitiveMeshRealTime(this IServiceCollection services)
+ {
+ ArgumentNullException.ThrowIfNull(services);
+
+ services.AddSingleton();
+ services.AddSingleton(sp =>
+ sp.GetRequiredService());
+
+ return services;
+ }
+
+ ///
+ /// Maps the Cognitive Mesh SignalR hub endpoint to the specified route builder.
+ /// The hub is accessible at /hubs/cognitive-mesh.
+ ///
+ /// The endpoint route builder to map the hub to.
+ /// The hub endpoint convention builder for further configuration.
+ /// Thrown when is null.
+ public static IEndpointConventionBuilder MapCognitiveMeshHubs(this IEndpointRouteBuilder endpoints)
+ {
+ ArgumentNullException.ThrowIfNull(endpoints);
+
+ return endpoints.MapHub("/hubs/cognitive-mesh");
+ }
+}
diff --git a/src/AgencyLayer/RealTime/Models/AgentStatusUpdate.cs b/src/AgencyLayer/RealTime/Models/AgentStatusUpdate.cs
new file mode 100644
index 0000000..7d7bda5
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/AgentStatusUpdate.cs
@@ -0,0 +1,14 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// DTO representing a change in an agent's operational status, broadcast to subscribed clients.
+///
+/// The unique identifier of the agent whose status changed.
+/// The new status of the agent (e.g., "Active", "Idle", "Degraded").
+/// The timestamp when the status change occurred.
+/// Optional human-readable details about the status change.
+public record AgentStatusUpdate(
+ string AgentId,
+ string Status,
+ DateTimeOffset Timestamp,
+ string? Details = null);
diff --git a/src/AgencyLayer/RealTime/Models/ConnectedUser.cs b/src/AgencyLayer/RealTime/Models/ConnectedUser.cs
new file mode 100644
index 0000000..ce7b518
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/ConnectedUser.cs
@@ -0,0 +1,14 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// Represents a user currently connected to the real-time hub.
+///
+/// The unique identifier of the connected user.
+/// The SignalR connection identifier for this session.
+/// The timestamp when the user connected.
+/// The optional tenant identifier for multi-tenant scenarios.
+public record ConnectedUser(
+ string UserId,
+ string ConnectionId,
+ DateTimeOffset ConnectedAt,
+ string? TenantId = null);
diff --git a/src/AgencyLayer/RealTime/Models/DashboardDataUpdate.cs b/src/AgencyLayer/RealTime/Models/DashboardDataUpdate.cs
new file mode 100644
index 0000000..e8440ed
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/DashboardDataUpdate.cs
@@ -0,0 +1,16 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// DTO representing an update to dashboard data, broadcast to clients in a specific dashboard group.
+///
+/// The unique identifier of the dashboard that was updated.
+/// The identifier of the specific dashboard section that changed.
+/// The updated data payload for the dashboard section.
+/// The timestamp when the dashboard data was refreshed.
+/// Optional human-readable details about the update.
+public record DashboardDataUpdate(
+ string DashboardId,
+ string SectionId,
+ object Data,
+ DateTimeOffset Timestamp,
+ string? Details = null);
diff --git a/src/AgencyLayer/RealTime/Models/MetricAlertNotification.cs b/src/AgencyLayer/RealTime/Models/MetricAlertNotification.cs
new file mode 100644
index 0000000..3a29ebe
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/MetricAlertNotification.cs
@@ -0,0 +1,20 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// DTO representing a metric threshold violation alert, broadcast to subscribed clients.
+///
+/// The unique identifier of the alert.
+/// The name of the metric that violated its threshold.
+/// The current value of the metric that triggered the alert.
+/// The threshold value that was exceeded.
+/// The severity level of the alert (e.g., "Warning", "Critical").
+/// The timestamp when the alert was generated.
+/// Optional human-readable details about the alert.
+public record MetricAlertNotification(
+ string AlertId,
+ string MetricName,
+ double CurrentValue,
+ double ThresholdValue,
+ string Severity,
+ DateTimeOffset Timestamp,
+ string? Details = null);
diff --git a/src/AgencyLayer/RealTime/Models/RealTimeEvent.cs b/src/AgencyLayer/RealTime/Models/RealTimeEvent.cs
new file mode 100644
index 0000000..ad30296
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/RealTimeEvent.cs
@@ -0,0 +1,16 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// Represents a real-time event dispatched through the SignalR hub.
+///
+/// The unique identifier of the event.
+/// The type of event, typically one of the constants from .
+/// The event payload data.
+/// The timestamp when the event was created.
+/// The optional identifier of the agent that produced this event.
+public record RealTimeEvent(
+ string EventId,
+ string EventType,
+ object Payload,
+ DateTimeOffset Timestamp,
+ string? SourceAgentId = null);
diff --git a/src/AgencyLayer/RealTime/Models/RealTimeEventTypes.cs b/src/AgencyLayer/RealTime/Models/RealTimeEventTypes.cs
new file mode 100644
index 0000000..8d4369e
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/RealTimeEventTypes.cs
@@ -0,0 +1,42 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// Well-known event type constants used across the Cognitive Mesh real-time notification system.
+///
+public static class RealTimeEventTypes
+{
+ ///
+ /// Raised when an agent's operational status changes (e.g., idle to active, healthy to degraded).
+ ///
+ public const string AgentStatusChanged = "AgentStatusChanged";
+
+ ///
+ /// Raised when a single step within a durable workflow completes.
+ ///
+ public const string WorkflowStepCompleted = "WorkflowStepCompleted";
+
+ ///
+ /// Raised when an entire workflow finishes execution.
+ ///
+ public const string WorkflowCompleted = "WorkflowCompleted";
+
+ ///
+ /// Raised when a monitored metric exceeds its configured threshold.
+ ///
+ public const string MetricThresholdViolation = "MetricThresholdViolation";
+
+ ///
+ /// Raised when a general notification is received for a user or group.
+ ///
+ public const string NotificationReceived = "NotificationReceived";
+
+ ///
+ /// Raised when dashboard data has been refreshed and clients should update their displays.
+ ///
+ public const string DashboardDataUpdated = "DashboardDataUpdated";
+
+ ///
+ /// Raised to report incremental progress during a reasoning operation.
+ ///
+ public const string ReasoningProgressUpdate = "ReasoningProgressUpdate";
+}
diff --git a/src/AgencyLayer/RealTime/Models/WorkflowProgressUpdate.cs b/src/AgencyLayer/RealTime/Models/WorkflowProgressUpdate.cs
new file mode 100644
index 0000000..38d5f9e
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Models/WorkflowProgressUpdate.cs
@@ -0,0 +1,20 @@
+namespace CognitiveMesh.AgencyLayer.RealTime.Models;
+
+///
+/// DTO representing progress within a durable workflow, broadcast to subscribed clients.
+///
+/// The unique identifier of the workflow.
+/// The name of the current or completed step.
+/// The zero-based index of the current step.
+/// The total number of steps in the workflow.
+/// The status of the step (e.g., "Completed", "Failed", "InProgress").
+/// The timestamp when this progress update was generated.
+/// Optional human-readable details about the step progress.
+public record WorkflowProgressUpdate(
+ string WorkflowId,
+ string StepName,
+ int StepIndex,
+ int TotalSteps,
+ string Status,
+ DateTimeOffset Timestamp,
+ string? Details = null);
diff --git a/src/AgencyLayer/RealTime/Ports/IRealTimeNotificationPort.cs b/src/AgencyLayer/RealTime/Ports/IRealTimeNotificationPort.cs
new file mode 100644
index 0000000..fb95320
--- /dev/null
+++ b/src/AgencyLayer/RealTime/Ports/IRealTimeNotificationPort.cs
@@ -0,0 +1,65 @@
+using CognitiveMesh.AgencyLayer.RealTime.Models;
+
+namespace CognitiveMesh.AgencyLayer.RealTime.Ports;
+
+///
+/// Defines the contract for sending real-time notifications to connected clients.
+/// This port follows the hexagonal architecture pattern and abstracts the underlying
+/// transport mechanism (e.g., SignalR) from the business logic.
+///
+public interface IRealTimeNotificationPort
+{
+ ///
+ /// Broadcasts an event to all connected clients.
+ ///
+ /// The name of the event to broadcast.
+ /// The event payload data.
+ /// A token to cancel the operation.
+ /// A task that completes when the broadcast has been dispatched.
+ Task BroadcastAsync(string eventName, object payload, CancellationToken ct);
+
+ ///
+ /// Sends an event to a specific user identified by their user identifier.
+ ///
+ /// The unique identifier of the target user.
+ /// The name of the event to send.
+ /// The event payload data.
+ /// A token to cancel the operation.
+ /// A task that completes when the message has been dispatched.
+ Task SendToUserAsync(string userId, string eventName, object payload, CancellationToken ct);
+
+ ///
+ /// Sends an event to all clients in a specific group.
+ ///
+ /// The name of the target group.
+ /// The name of the event to send.
+ /// The event payload data.
+ /// A token to cancel the operation.
+ /// A task that completes when the message has been dispatched.
+ Task SendToGroupAsync(string groupName, string eventName, object payload, CancellationToken ct);
+
+ ///
+ /// Adds a connection to a named group for targeted message delivery.
+ ///
+ /// The SignalR connection identifier.
+ /// The name of the group to join.
+ /// A token to cancel the operation.
+ /// A task that completes when the connection has been added to the group.
+ Task AddToGroupAsync(string connectionId, string groupName, CancellationToken ct);
+
+ ///
+ /// Removes a connection from a named group.
+ ///
+ /// The SignalR connection identifier.
+ /// The name of the group to leave.
+ /// A token to cancel the operation.
+ /// A task that completes when the connection has been removed from the group.
+ Task RemoveFromGroupAsync(string connectionId, string groupName, CancellationToken ct);
+
+ ///
+ /// Retrieves a snapshot of all currently connected users.
+ ///
+ /// A token to cancel the operation.
+ /// A read-only list of currently connected users.
+ Task> GetConnectedUsersAsync(CancellationToken ct);
+}
diff --git a/src/AgencyLayer/RealTime/RealTime.csproj b/src/AgencyLayer/RealTime/RealTime.csproj
new file mode 100644
index 0000000..ecaf625
--- /dev/null
+++ b/src/AgencyLayer/RealTime/RealTime.csproj
@@ -0,0 +1,19 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/AgencyLayer/RefactoringAgents/Engines/SolidDryRefactoringEngine.cs b/src/AgencyLayer/RefactoringAgents/Engines/SolidDryRefactoringEngine.cs
new file mode 100644
index 0000000..d6c7572
--- /dev/null
+++ b/src/AgencyLayer/RefactoringAgents/Engines/SolidDryRefactoringEngine.cs
@@ -0,0 +1,539 @@
+using System.Text.RegularExpressions;
+using AgencyLayer.RefactoringAgents.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace AgencyLayer.RefactoringAgents.Engines;
+
+///
+/// Pure domain engine that analyzes source code for SOLID and DRY principle violations
+/// using heuristic pattern analysis. Follows Hexagonal Architecture as the core business
+/// logic layer, independent of any infrastructure concerns.
+///
+public class SolidDryRefactoringEngine : ICodeRefactoringPort
+{
+ private readonly ILogger _logger;
+
+ // Thresholds for heuristic detection
+ private const int SrpMaxMethods = 10;
+ private const int SrpMaxFields = 8;
+ private const int OcpSwitchCaseThreshold = 5;
+ private const int IspMaxInterfaceMethods = 7;
+ private const int DryMinDuplicateLineLength = 3;
+ private const double DryMinSimilarity = 0.85;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for diagnostic output.
+ public SolidDryRefactoringEngine(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ ///
+ public Task AnalyzeCodeAsync(
+ CodeAnalysisRequest request,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ if (string.IsNullOrWhiteSpace(request.SourceCode))
+ {
+ return Task.FromResult(new CodeAnalysisResponse
+ {
+ OverallScore = 1.0,
+ Summary = "No source code provided for analysis."
+ });
+ }
+
+ _logger.LogInformation("Starting {Scope} analysis for {FilePath}",
+ request.Scope, string.IsNullOrEmpty(request.FilePath) ? "" : request.FilePath);
+
+ var response = new CodeAnalysisResponse();
+ var lines = request.SourceCode.Split('\n');
+
+ if (request.Scope is AnalysisScope.Solid or AnalysisScope.Both)
+ {
+ cancellationToken.ThrowIfCancellationRequested();
+ DetectSrpViolations(lines, request.FilePath, response);
+ DetectOcpViolations(lines, request.FilePath, response);
+ DetectLspViolations(lines, request.FilePath, response);
+ DetectIspViolations(lines, request.FilePath, response);
+ DetectDipViolations(lines, request.FilePath, response);
+ }
+
+ if (request.Scope is AnalysisScope.Dry or AnalysisScope.Both)
+ {
+ cancellationToken.ThrowIfCancellationRequested();
+ DetectDryViolations(lines, request.FilePath, response);
+ }
+
+ response.OverallScore = CalculateOverallScore(response);
+ response.Summary = BuildSummary(response);
+
+ _logger.LogInformation("Analysis complete: {SolidCount} SOLID violations, {DryCount} DRY violations, score {Score:F2}",
+ response.SolidViolations.Count, response.DryViolations.Count, response.OverallScore);
+
+ return Task.FromResult(response);
+ }
+
+ private void DetectSrpViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ var classPattern = new Regex(@"^\s*(?:public|internal|private|protected)?\s*(?:static|abstract|sealed|partial)?\s*class\s+(\w+)");
+ var methodPattern = new Regex(@"^\s*(?:public|private|protected|internal)?\s*(?:static|virtual|override|abstract|async)?\s*(?:\w+(?:<[^>]+>)?(?:\[\])?)\s+(\w+)\s*\(");
+ var fieldPattern = new Regex(@"^\s*(?:private|protected|internal)?\s*(?:readonly|static)?\s*\w+(?:<[^>]+>)?\s+_?\w+\s*[;=]");
+
+ string? currentClass = null;
+ int classStartLine = 0;
+ int methodCount = 0;
+ int fieldCount = 0;
+ var methodNames = new List();
+
+ for (int i = 0; i < lines.Length; i++)
+ {
+ var classMatch = classPattern.Match(lines[i]);
+ if (classMatch.Success)
+ {
+ // Emit violation for previous class if needed
+ if (currentClass != null)
+ {
+ EmitSrpIfViolated(currentClass, classStartLine, methodCount, fieldCount, methodNames, filePath, response);
+ }
+
+ currentClass = classMatch.Groups[1].Value;
+ classStartLine = i + 1;
+ methodCount = 0;
+ fieldCount = 0;
+ methodNames.Clear();
+ }
+
+ if (currentClass != null)
+ {
+ if (methodPattern.IsMatch(lines[i]))
+ {
+ var methodMatch = methodPattern.Match(lines[i]);
+ methodCount++;
+ methodNames.Add(methodMatch.Groups[1].Value);
+ }
+
+ if (fieldPattern.IsMatch(lines[i]))
+ {
+ fieldCount++;
+ }
+ }
+ }
+
+ // Check the last class
+ if (currentClass != null)
+ {
+ EmitSrpIfViolated(currentClass, classStartLine, methodCount, fieldCount, methodNames, filePath, response);
+ }
+ }
+
+ private void EmitSrpIfViolated(string className, int line, int methodCount, int fieldCount,
+ List methodNames, string filePath, CodeAnalysisResponse response)
+ {
+ if (methodCount > SrpMaxMethods)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.SRP,
+ Severity = methodCount > SrpMaxMethods * 2 ? IssueSeverity.Error : IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = line, MemberName = className },
+ Description = $"Class '{className}' has {methodCount} methods (threshold: {SrpMaxMethods}), suggesting multiple responsibilities.",
+ SuggestedFix = $"Consider splitting '{className}' into smaller classes, each with a single responsibility."
+ });
+
+ response.Suggestions.Add(new RefactoringSuggestion
+ {
+ RefactoringType = "ExtractClass",
+ Description = $"Split '{className}' into cohesive classes based on method grouping.",
+ Impact = 0.8
+ });
+ }
+
+ if (fieldCount > SrpMaxFields)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.SRP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = line, MemberName = className },
+ Description = $"Class '{className}' has {fieldCount} fields (threshold: {SrpMaxFields}), suggesting it manages too much state.",
+ SuggestedFix = "Group related fields into separate data objects or extract collaborating classes."
+ });
+ }
+ }
+
+ private void DetectOcpViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ var switchPattern = new Regex(@"^\s*switch\s*\(");
+ var casePattern = new Regex(@"^\s*case\s+");
+ var ifElseChainPattern = new Regex(@"^\s*else\s+if\s*\(");
+
+ int consecutiveIfElse = 0;
+ int switchStartLine = -1;
+ int caseCount = 0;
+
+ for (int i = 0; i < lines.Length; i++)
+ {
+ // Detect long switch statements
+ if (switchPattern.IsMatch(lines[i]))
+ {
+ switchStartLine = i + 1;
+ caseCount = 0;
+ }
+
+ if (switchStartLine >= 0 && casePattern.IsMatch(lines[i]))
+ {
+ caseCount++;
+ }
+
+ if (switchStartLine >= 0 && lines[i].Trim() == "}")
+ {
+ if (caseCount >= OcpSwitchCaseThreshold)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.OCP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = switchStartLine },
+ Description = $"Switch statement with {caseCount} cases (threshold: {OcpSwitchCaseThreshold}). Adding new cases requires modifying existing code.",
+ SuggestedFix = "Replace with a strategy pattern or polymorphic dispatch using a dictionary of handlers."
+ });
+
+ response.Suggestions.Add(new RefactoringSuggestion
+ {
+ RefactoringType = "ReplaceConditionalWithPolymorphism",
+ Description = $"Replace {caseCount}-case switch with strategy pattern.",
+ Impact = 0.7
+ });
+ }
+ switchStartLine = -1;
+ }
+
+ // Detect long if/else-if chains
+ if (ifElseChainPattern.IsMatch(lines[i]))
+ {
+ consecutiveIfElse++;
+ }
+ else if (!lines[i].Trim().StartsWith("else") && !lines[i].Trim().StartsWith("{") && !lines[i].Trim().StartsWith("}") && lines[i].Trim().Length > 0)
+ {
+ if (consecutiveIfElse >= OcpSwitchCaseThreshold)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.OCP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = i + 1 - consecutiveIfElse },
+ Description = $"If/else-if chain with {consecutiveIfElse + 1} branches. Adding new conditions requires modifying existing code.",
+ SuggestedFix = "Replace with a chain of responsibility or strategy pattern."
+ });
+ }
+ consecutiveIfElse = 0;
+ }
+ }
+ }
+
+ private void DetectLspViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ var notImplPattern = new Regex(@"throw\s+new\s+NotImplementedException\s*\(");
+ var notSupportedPattern = new Regex(@"throw\s+new\s+NotSupportedException\s*\(");
+ var emptyOverridePattern = new Regex(@"^\s*(?:public|protected)\s+override\s+\w+\s+(\w+)\s*\([^)]*\)\s*\{\s*\}\s*$");
+
+ for (int i = 0; i < lines.Length; i++)
+ {
+ if (notImplPattern.IsMatch(lines[i]))
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.LSP,
+ Severity = IssueSeverity.Error,
+ Location = new CodeLocation { FilePath = filePath, Line = i + 1 },
+ Description = "Method throws NotImplementedException, indicating the subtype does not fully support the base type contract.",
+ SuggestedFix = "Implement the method or redesign the type hierarchy so this method is not required."
+ });
+ }
+
+ if (notSupportedPattern.IsMatch(lines[i]))
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.LSP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = i + 1 },
+ Description = "Method throws NotSupportedException, suggesting the type cannot fully substitute for its base type.",
+ SuggestedFix = "Consider splitting the interface so this type only implements the operations it supports."
+ });
+ }
+
+ if (emptyOverridePattern.IsMatch(lines[i]))
+ {
+ var match = emptyOverridePattern.Match(lines[i]);
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.LSP,
+ Severity = IssueSeverity.Info,
+ Location = new CodeLocation { FilePath = filePath, Line = i + 1, MemberName = match.Groups[1].Value },
+ Description = $"Empty override of '{match.Groups[1].Value}' silently changes base class behavior.",
+ SuggestedFix = "Ensure the empty override is intentional. Document why the base behavior is suppressed."
+ });
+ }
+ }
+ }
+
+ private void DetectIspViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ var interfacePattern = new Regex(@"^\s*(?:public|internal)?\s*interface\s+(\w+)");
+ var interfaceMethodPattern = new Regex(@"^\s*(?:Task|void|bool|int|string|float|double|decimal|IEnumerable|IReadOnlyList|IList)\s*(?:<[^>]+>)?\s+\w+\s*\(");
+
+ string? currentInterface = null;
+ int interfaceStartLine = 0;
+ int methodCount = 0;
+
+ for (int i = 0; i < lines.Length; i++)
+ {
+ var ifaceMatch = interfacePattern.Match(lines[i]);
+ if (ifaceMatch.Success)
+ {
+ if (currentInterface != null && methodCount > IspMaxInterfaceMethods)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.ISP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = interfaceStartLine, MemberName = currentInterface },
+ Description = $"Interface '{currentInterface}' has {methodCount} methods (threshold: {IspMaxInterfaceMethods}). Clients may be forced to depend on methods they don't use.",
+ SuggestedFix = $"Split '{currentInterface}' into smaller, role-specific interfaces."
+ });
+
+ response.Suggestions.Add(new RefactoringSuggestion
+ {
+ RefactoringType = "ExtractInterface",
+ Description = $"Split '{currentInterface}' into cohesive sub-interfaces.",
+ Impact = 0.6
+ });
+ }
+
+ currentInterface = ifaceMatch.Groups[1].Value;
+ interfaceStartLine = i + 1;
+ methodCount = 0;
+ }
+
+ if (currentInterface != null && interfaceMethodPattern.IsMatch(lines[i]))
+ {
+ methodCount++;
+ }
+ }
+
+ // Check the last interface
+ if (currentInterface != null && methodCount > IspMaxInterfaceMethods)
+ {
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.ISP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = interfaceStartLine, MemberName = currentInterface },
+ Description = $"Interface '{currentInterface}' has {methodCount} methods (threshold: {IspMaxInterfaceMethods}). Clients may be forced to depend on methods they don't use.",
+ SuggestedFix = $"Split '{currentInterface}' into smaller, role-specific interfaces."
+ });
+ }
+ }
+
+ private void DetectDipViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ // Detect direct instantiation of concrete types outside of constructors/factories
+ var newConcretePattern = new Regex(@"=\s*new\s+([A-Z]\w+(?:Client|Service|Repository|Manager|Handler|Provider|Engine|Adapter))\s*\(");
+ var constructorPattern = new Regex(@"^\s*(?:public|private|protected|internal)\s+\w+\s*\(");
+ var factoryMethodPattern = new Regex(@"^\s*(?:public|private|protected|internal)\s+static\s+");
+
+ bool inConstructorOrFactory = false;
+
+ for (int i = 0; i < lines.Length; i++)
+ {
+ if (constructorPattern.IsMatch(lines[i]) || factoryMethodPattern.IsMatch(lines[i]))
+ {
+ inConstructorOrFactory = true;
+ }
+
+ if (inConstructorOrFactory && lines[i].Trim() == "}")
+ {
+ inConstructorOrFactory = false;
+ }
+
+ if (!inConstructorOrFactory)
+ {
+ var match = newConcretePattern.Match(lines[i]);
+ if (match.Success)
+ {
+ var typeName = match.Groups[1].Value;
+ response.SolidViolations.Add(new SolidViolation
+ {
+ Principle = SolidPrinciple.DIP,
+ Severity = IssueSeverity.Warning,
+ Location = new CodeLocation { FilePath = filePath, Line = i + 1 },
+ Description = $"Direct instantiation of concrete type '{typeName}'. High-level modules should depend on abstractions.",
+ SuggestedFix = $"Inject '{typeName}' through the constructor using its interface (e.g., 'I{typeName}')."
+ });
+
+ response.Suggestions.Add(new RefactoringSuggestion
+ {
+ RefactoringType = "InjectDependency",
+ Description = $"Replace 'new {typeName}(...)' with constructor injection of 'I{typeName}'.",
+ Impact = 0.7
+ });
+ }
+ }
+ }
+ }
+
+ private void DetectDryViolations(string[] lines, string filePath, CodeAnalysisResponse response)
+ {
+ // Find duplicate blocks of code (sequences of N+ similar lines)
+ var normalizedLines = lines.Select(NormalizeLine).ToArray();
+ var duplicates = new HashSet();
+
+ for (int blockSize = DryMinDuplicateLineLength; blockSize <= Math.Min(20, lines.Length / 2); blockSize++)
+ {
+ for (int i = 0; i <= normalizedLines.Length - blockSize; i++)
+ {
+ var block = string.Join("\n", normalizedLines.Skip(i).Take(blockSize));
+ if (string.IsNullOrWhiteSpace(block) || block.Length < 30)
+ continue;
+
+ for (int j = i + blockSize; j <= normalizedLines.Length - blockSize; j++)
+ {
+ var candidate = string.Join("\n", normalizedLines.Skip(j).Take(blockSize));
+ if (string.IsNullOrWhiteSpace(candidate))
+ continue;
+
+ var similarity = CalculateSimilarity(block, candidate);
+ if (similarity >= DryMinSimilarity)
+ {
+ var key = $"{i}:{j}:{blockSize}";
+ if (duplicates.Add(key))
+ {
+ var snippet = string.Join("\n", lines.Skip(i).Take(Math.Min(blockSize, 5)));
+ response.DryViolations.Add(new DryViolation
+ {
+ Severity = blockSize >= 10 ? IssueSeverity.Error : IssueSeverity.Warning,
+ Locations = new List
+ {
+ new() { FilePath = filePath, Line = i + 1 },
+ new() { FilePath = filePath, Line = j + 1 }
+ },
+ DuplicatedPattern = snippet.Length > 200 ? snippet[..200] + "..." : snippet,
+ Description = $"Duplicated code block ({blockSize} lines, {similarity:P0} similar) found at lines {i + 1} and {j + 1}.",
+ SuggestedAbstraction = "Extract the duplicated logic into a shared method or base class."
+ });
+
+ response.Suggestions.Add(new RefactoringSuggestion
+ {
+ RefactoringType = "ExtractMethod",
+ Description = $"Extract duplicated {blockSize}-line block into a shared method.",
+ Impact = 0.6 + (blockSize * 0.02)
+ });
+ }
+ }
+ }
+ }
+ }
+ }
+
+ private static string NormalizeLine(string line)
+ {
+ // Normalize whitespace and remove comments for comparison
+ var trimmed = line.Trim();
+ if (trimmed.StartsWith("//") || trimmed.StartsWith("/*") || trimmed.StartsWith("*"))
+ return string.Empty;
+ return Regex.Replace(trimmed, @"\s+", " ");
+ }
+
+ private static double CalculateSimilarity(string a, string b)
+ {
+ if (a == b) return 1.0;
+ if (a.Length == 0 || b.Length == 0) return 0.0;
+
+ // Use Jaccard similarity on character trigrams for efficiency
+ var trigramsA = GetTrigrams(a);
+ var trigramsB = GetTrigrams(b);
+
+ var intersection = trigramsA.Intersect(trigramsB).Count();
+ var union = trigramsA.Union(trigramsB).Count();
+
+ return union == 0 ? 0.0 : (double)intersection / union;
+ }
+
+ private static HashSet GetTrigrams(string text)
+ {
+ var trigrams = new HashSet();
+ for (int i = 0; i <= text.Length - 3; i++)
+ {
+ trigrams.Add(text.Substring(i, 3));
+ }
+ return trigrams;
+ }
+
+ private static double CalculateOverallScore(CodeAnalysisResponse response)
+ {
+ double score = 1.0;
+
+ foreach (var v in response.SolidViolations)
+ {
+ score -= v.Severity switch
+ {
+ IssueSeverity.Error => 0.15,
+ IssueSeverity.Warning => 0.08,
+ IssueSeverity.Info => 0.03,
+ _ => 0.0
+ };
+ }
+
+ foreach (var v in response.DryViolations)
+ {
+ score -= v.Severity switch
+ {
+ IssueSeverity.Error => 0.12,
+ IssueSeverity.Warning => 0.06,
+ _ => 0.02
+ };
+ }
+
+ return Math.Max(0.0, Math.Min(1.0, score));
+ }
+
+ private static string BuildSummary(CodeAnalysisResponse response)
+ {
+ var solidCount = response.SolidViolations.Count;
+ var dryCount = response.DryViolations.Count;
+ var totalIssues = solidCount + dryCount;
+
+ if (totalIssues == 0)
+ return "No SOLID or DRY violations detected. The code follows good design principles.";
+
+ var parts = new List();
+ if (solidCount > 0)
+ {
+ var principles = response.SolidViolations
+ .Select(v => v.Principle)
+ .Distinct()
+ .OrderBy(p => p);
+ parts.Add($"{solidCount} SOLID violation(s) ({string.Join(", ", principles)})");
+ }
+
+ if (dryCount > 0)
+ {
+ parts.Add($"{dryCount} DRY violation(s)");
+ }
+
+ var scoreLabel = response.OverallScore switch
+ {
+ >= 0.9 => "excellent",
+ >= 0.7 => "good",
+ >= 0.5 => "needs improvement",
+ _ => "poor"
+ };
+
+ return $"Found {string.Join(" and ", parts)}. Overall quality: {scoreLabel} ({response.OverallScore:P0}). {response.Suggestions.Count} refactoring suggestion(s) available.";
+ }
+}
diff --git a/src/AgencyLayer/RefactoringAgents/Ports/ICodeRefactoringPort.cs b/src/AgencyLayer/RefactoringAgents/Ports/ICodeRefactoringPort.cs
new file mode 100644
index 0000000..744457a
--- /dev/null
+++ b/src/AgencyLayer/RefactoringAgents/Ports/ICodeRefactoringPort.cs
@@ -0,0 +1,183 @@
+using System.Text.Json.Serialization;
+
+namespace AgencyLayer.RefactoringAgents.Ports;
+
+///
+/// Identifies which SOLID principle is violated.
+///
+[JsonConverter(typeof(JsonStringEnumConverter))]
+public enum SolidPrinciple
+{
+ /// Single Responsibility Principle: a class should have one reason to change.
+ SRP,
+ /// Open/Closed Principle: open for extension, closed for modification.
+ OCP,
+ /// Liskov Substitution Principle: subtypes must be substitutable for their base types.
+ LSP,
+ /// Interface Segregation Principle: clients should not depend on methods they do not use.
+ ISP,
+ /// Dependency Inversion Principle: depend on abstractions, not concretions.
+ DIP
+}
+
+///
+/// Severity level for a detected code issue.
+///
+[JsonConverter(typeof(JsonStringEnumConverter))]
+public enum IssueSeverity
+{
+ /// Informational suggestion for improvement.
+ Info,
+ /// Potential issue that may cause maintenance burden.
+ Warning,
+ /// Clear violation that should be addressed.
+ Error
+}
+
+///
+/// Specifies which categories of analysis to perform.
+///
+[JsonConverter(typeof(JsonStringEnumConverter))]
+public enum AnalysisScope
+{
+ /// Analyze only SOLID principle adherence.
+ Solid,
+ /// Analyze only DRY (Don't Repeat Yourself) adherence.
+ Dry,
+ /// Analyze both SOLID and DRY.
+ Both
+}
+
+///
+/// A location within source code where an issue was detected.
+///
+public class CodeLocation
+{
+ /// The file path or class name where the issue was found.
+ public string FilePath { get; set; } = string.Empty;
+
+ /// The approximate line number of the issue.
+ public int Line { get; set; }
+
+ /// The name of the class, method, or member involved.
+ public string MemberName { get; set; } = string.Empty;
+}
+
+///
+/// A detected violation of a SOLID principle.
+///
+public class SolidViolation
+{
+ /// Which SOLID principle was violated.
+ public SolidPrinciple Principle { get; set; }
+
+ /// Severity of the violation.
+ public IssueSeverity Severity { get; set; }
+
+ /// Where in the code the violation occurs.
+ public CodeLocation Location { get; set; } = new();
+
+ /// Human-readable description of the violation.
+ public string Description { get; set; } = string.Empty;
+
+ /// Suggested fix or refactoring approach.
+ public string SuggestedFix { get; set; } = string.Empty;
+}
+
+///
+/// A detected DRY violation (duplicated or repeated code).
+///
+public class DryViolation
+{
+ /// Severity of the duplication.
+ public IssueSeverity Severity { get; set; }
+
+ /// Locations where the duplicated code appears.
+ public List Locations { get; set; } = new();
+
+ /// The duplicated code fragment or pattern description.
+ public string DuplicatedPattern { get; set; } = string.Empty;
+
+ /// Human-readable description of the duplication.
+ public string Description { get; set; } = string.Empty;
+
+ /// Suggested abstraction or refactoring to eliminate the duplication.
+ public string SuggestedAbstraction { get; set; } = string.Empty;
+}
+
+///
+/// A concrete refactoring suggestion with before/after code.
+///
+public class RefactoringSuggestion
+{
+ /// The type of refactoring (e.g., ExtractInterface, ExtractMethod, InjectDependency).
+ public string RefactoringType { get; set; } = string.Empty;
+
+ /// Human-readable description of what the refactoring does.
+ public string Description { get; set; } = string.Empty;
+
+ /// The original code before refactoring.
+ public string Before { get; set; } = string.Empty;
+
+ /// The suggested code after refactoring.
+ public string After { get; set; } = string.Empty;
+
+ /// Estimated impact: how much this refactoring improves the code (0.0 to 1.0).
+ public double Impact { get; set; }
+}
+
+///
+/// Request to analyze source code for SOLID/DRY violations.
+///
+public class CodeAnalysisRequest
+{
+ /// The source code to analyze.
+ public string SourceCode { get; set; } = string.Empty;
+
+ /// The programming language of the source code.
+ public string Language { get; set; } = "csharp";
+
+ /// Which analysis categories to include.
+ public AnalysisScope Scope { get; set; } = AnalysisScope.Both;
+
+ /// Optional file path for context in reported locations.
+ public string FilePath { get; set; } = string.Empty;
+}
+
+///
+/// Result of analyzing source code for SOLID/DRY violations.
+///
+public class CodeAnalysisResponse
+{
+ /// Detected SOLID principle violations.
+ public List SolidViolations { get; set; } = new();
+
+ /// Detected DRY violations.
+ public List DryViolations { get; set; } = new();
+
+ /// Concrete refactoring suggestions.
+ public List Suggestions { get; set; } = new();
+
+ /// Overall code quality score (0.0 worst to 1.0 best).
+ public double OverallScore { get; set; }
+
+ /// Human-readable summary of the analysis.
+ public string Summary { get; set; } = string.Empty;
+}
+
+///
+/// Defines the contract for code refactoring analysis operations.
+/// This port follows the Hexagonal Architecture pattern for the Agency Layer.
+///
+public interface ICodeRefactoringPort
+{
+ ///
+ /// Analyzes source code for SOLID and/or DRY principle violations.
+ ///
+ /// The analysis request containing source code and options.
+ /// Token to cancel the operation.
+ /// Analysis results including violations, suggestions, and an overall score.
+ Task AnalyzeCodeAsync(
+ CodeAnalysisRequest request,
+ CancellationToken cancellationToken = default);
+}
diff --git a/src/AgencyLayer/RefactoringAgents/RefactoringAgentRegistration.cs b/src/AgencyLayer/RefactoringAgents/RefactoringAgentRegistration.cs
new file mode 100644
index 0000000..567fc62
--- /dev/null
+++ b/src/AgencyLayer/RefactoringAgents/RefactoringAgentRegistration.cs
@@ -0,0 +1,76 @@
+using AgencyLayer.MultiAgentOrchestration.Adapters;
+using AgencyLayer.MultiAgentOrchestration.Ports;
+using AgencyLayer.RefactoringAgents.Engines;
+using Microsoft.Extensions.Logging;
+
+namespace AgencyLayer.RefactoringAgents;
+
+///
+/// Provides methods to register the SOLID/DRY refactoring agent with the
+/// multi-agent orchestration system.
+///
+public static class RefactoringAgentRegistration
+{
+ ///
+ /// Registers the SOLID/DRY refactoring agent with the orchestrator and runtime adapter.
+ /// This method creates the agent definition, registers it with the orchestration port,
+ /// and wires up the runtime handler for task execution.
+ ///
+ /// The orchestration port to register the agent definition with.
+ /// The runtime adapter to register the agent handler with.
+ /// Logger factory for creating typed loggers.
+ /// Token to cancel the operation.
+ public static async Task RegisterSolidDryAgentAsync(
+ IMultiAgentOrchestrationPort orchestrationPort,
+ InProcessAgentRuntimeAdapter runtimeAdapter,
+ ILoggerFactory loggerFactory,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(orchestrationPort);
+ ArgumentNullException.ThrowIfNull(runtimeAdapter);
+ ArgumentNullException.ThrowIfNull(loggerFactory);
+
+ // 1. Register the agent definition with the orchestrator
+ var definition = new AgentDefinition
+ {
+ AgentId = Guid.NewGuid(),
+ AgentType = SolidDryRefactoringAgent.AgentType,
+ Description = "Analyzes source code for SOLID and DRY principle violations, providing actionable refactoring suggestions with severity ratings and code quality scores.",
+ Capabilities = new List
+ {
+ "analyze-solid-principles",
+ "analyze-dry-violations",
+ "suggest-refactorings",
+ "code-quality-scoring",
+ "extract-class",
+ "extract-method",
+ "extract-interface",
+ "inject-dependency",
+ "replace-conditional-with-polymorphism"
+ },
+ DefaultAutonomyLevel = AutonomyLevel.RecommendOnly,
+ DefaultAuthorityScope = new AuthorityScope
+ {
+ AllowedApiEndpoints = new List { "/api/code/analyze", "/api/code/refactor" },
+ MaxResourceConsumption = 50.0,
+ MaxBudget = 0m,
+ DataAccessPolicies = new List { "read:source-code" }
+ },
+ Status = AgentStatus.Active
+ };
+
+ await orchestrationPort.RegisterAgentAsync(definition);
+
+ // 2. Create the agent and register its handler with the runtime adapter
+ var engineLogger = loggerFactory.CreateLogger();
+ var agentLogger = loggerFactory.CreateLogger();
+ var engine = new SolidDryRefactoringEngine(engineLogger);
+ var agent = new SolidDryRefactoringAgent(agentLogger, engine);
+
+ runtimeAdapter.RegisterHandler(SolidDryRefactoringAgent.AgentType, async task =>
+ {
+ cancellationToken.ThrowIfCancellationRequested();
+ return await agent.HandleTaskAsync(task, cancellationToken);
+ });
+ }
+}
diff --git a/src/AgencyLayer/RefactoringAgents/RefactoringAgents.csproj b/src/AgencyLayer/RefactoringAgents/RefactoringAgents.csproj
new file mode 100644
index 0000000..aef1f01
--- /dev/null
+++ b/src/AgencyLayer/RefactoringAgents/RefactoringAgents.csproj
@@ -0,0 +1,20 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/AgencyLayer/RefactoringAgents/SolidDryRefactoringAgent.cs b/src/AgencyLayer/RefactoringAgents/SolidDryRefactoringAgent.cs
new file mode 100644
index 0000000..688e86c
--- /dev/null
+++ b/src/AgencyLayer/RefactoringAgents/SolidDryRefactoringAgent.cs
@@ -0,0 +1,142 @@
+using System.Text.Json;
+using AgencyLayer.MultiAgentOrchestration.Ports;
+using AgencyLayer.RefactoringAgents.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace AgencyLayer.RefactoringAgents;
+
+///
+/// An autonomous agent that performs SOLID/DRY code analysis on submitted source code.
+/// Integrates with the Multi-Agent Orchestration system through the handler registration
+/// pattern of .
+///
+public class SolidDryRefactoringAgent
+{
+ ///
+ /// The agent type identifier used for orchestrator registration and task routing.
+ ///
+ public const string AgentType = "SolidDryRefactoring";
+
+ private readonly ILogger _logger;
+ private readonly ICodeRefactoringPort _refactoringEngine;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for diagnostic output.
+ /// The code refactoring analysis engine.
+ public SolidDryRefactoringAgent(
+ ILogger logger,
+ ICodeRefactoringPort refactoringEngine)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _refactoringEngine = refactoringEngine ?? throw new ArgumentNullException(nameof(refactoringEngine));
+ }
+
+ ///
+ /// Handles an dispatched by the orchestrator.
+ /// Expects the task context to contain a "sourceCode" key with the code to analyze.
+ ///
+ /// The agent task containing source code and analysis parameters.
+ /// Token to cancel the operation.
+ /// The analysis result as an object for the orchestrator.
+ public async Task HandleTaskAsync(AgentTask task, CancellationToken cancellationToken = default)
+ {
+ _logger.LogInformation("{AgentType} handling task '{TaskId}': {Goal}",
+ AgentType, task.TaskId, task.Goal);
+
+ if (!TryExtractParameters(task.Context, out var analysisRequest, out var error))
+ {
+ _logger.LogError("Invalid parameters for task '{TaskId}': {Error}", task.TaskId, error);
+ return new { IsSuccess = false, Error = error };
+ }
+
+ try
+ {
+ var result = await _refactoringEngine.AnalyzeCodeAsync(analysisRequest, cancellationToken);
+
+ _logger.LogInformation("{AgentType} completed task '{TaskId}': {Summary}",
+ AgentType, task.TaskId, result.Summary);
+
+ return new
+ {
+ IsSuccess = true,
+ result.SolidViolations,
+ result.DryViolations,
+ result.Suggestions,
+ result.OverallScore,
+ result.Summary
+ };
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "{AgentType} failed on task '{TaskId}'", AgentType, task.TaskId);
+ return new { IsSuccess = false, Error = $"Analysis failed: {ex.Message}" };
+ }
+ }
+
+ private static bool TryExtractParameters(
+ Dictionary context,
+ out CodeAnalysisRequest request,
+ out string error)
+ {
+ request = new CodeAnalysisRequest();
+ error = string.Empty;
+
+ // Extract sourceCode (required)
+ string? sourceCode = null;
+ if (context.TryGetValue("sourceCode", out var sourceCodeObj))
+ {
+ sourceCode = sourceCodeObj switch
+ {
+ string s => s,
+ JsonElement { ValueKind: JsonValueKind.String } je => je.GetString(),
+ _ => sourceCodeObj?.ToString()
+ };
+ }
+
+ if (string.IsNullOrWhiteSpace(sourceCode))
+ {
+ error = "Parameter 'sourceCode' is required and must be a non-empty string.";
+ return false;
+ }
+
+ request.SourceCode = sourceCode;
+
+ // Extract language (optional, default: csharp)
+ if (context.TryGetValue("language", out var langObj))
+ {
+ request.Language = ExtractString(langObj) ?? "csharp";
+ }
+
+ // Extract filePath (optional)
+ if (context.TryGetValue("filePath", out var fpObj))
+ {
+ request.FilePath = ExtractString(fpObj) ?? string.Empty;
+ }
+
+ // Extract scope (optional, default: Both)
+ if (context.TryGetValue("scope", out var scopeObj))
+ {
+ var scopeStr = ExtractString(scopeObj);
+ if (Enum.TryParse(scopeStr, ignoreCase: true, out var scope))
+ {
+ request.Scope = scope;
+ }
+ }
+
+ return true;
+ }
+
+ private static string? ExtractString(object? obj)
+ {
+ if (obj is null) return null;
+
+ return obj switch
+ {
+ string s => s,
+ JsonElement { ValueKind: JsonValueKind.String } je => je.GetString(),
+ _ => obj.ToString()
+ };
+ }
+}
diff --git a/src/AgencyLayer/SecurityAgents/AutomatedResponseAgent.cs b/src/AgencyLayer/SecurityAgents/AutomatedResponseAgent.cs
index 9657a72..8ed7c0b 100644
--- a/src/AgencyLayer/SecurityAgents/AutomatedResponseAgent.cs
+++ b/src/AgencyLayer/SecurityAgents/AutomatedResponseAgent.cs
@@ -1,63 +1,120 @@
using System.Text.Json;
+using CognitiveMesh.ReasoningLayer.SecurityReasoning.Ports;
using MetacognitiveLayer.SecurityMonitoring;
using Microsoft.Extensions.Logging;
namespace AgencyLayer.SecurityAgents;
-// Conceptual ports the agent depends on. In a real system, these would be in their respective layers.
+///
+/// Port for network security operations such as IP blocking.
+///
public interface INetworkSecurityPort
{
+ ///
+ /// Blocks the specified IP address for the given reason.
+ ///
Task BlockIpAddressAsync(string ipAddress, string reason);
}
+///
+/// Port for identity management operations such as account isolation.
+///
public interface IIdentityManagementPort
{
+ ///
+ /// Isolates the specified account for the given reason.
+ ///
Task IsolateAccountAsync(string subjectId, string reason);
}
+///
+/// Port for forensic data collection and evidence storage.
+///
public interface IForensicDataPort
{
+ ///
+ /// Stores forensic evidence for the specified incident.
+ ///
Task StoreEvidenceAsync(string incidentId, object evidence);
}
-// Re-defining for clarity within this file's context
+///
+/// Represents a request to execute an agent task.
+///
public class AgentTaskRequest
{
- public string AgentId { get; set; }
- public string TaskDescription { get; set; }
+ /// Gets or sets the unique identifier of the agent.
+ public string AgentId { get; set; } = string.Empty;
+ /// Gets or sets the description of the task to execute.
+ public string TaskDescription { get; set; } = string.Empty;
+ /// Gets or sets the parameters for the task.
public Dictionary Parameters { get; set; } = new();
+ /// Gets or sets the priority of the task.
public int Priority { get; set; }
}
+///
+/// Represents the response from an agent task execution.
+///
public class AgentTaskResponse
{
+ /// Gets or sets whether the task was successful.
public bool IsSuccess { get; set; }
- public string Message { get; set; }
+ /// Gets or sets the result message.
+ public string Message { get; set; } = string.Empty;
+ /// Gets or sets the output data from the task.
public Dictionary Output { get; set; } = new();
}
+///
+/// Defines the contract for an executable agent.
+///
public interface IAgent
{
+ /// Gets the unique identifier of the agent.
string AgentId { get; }
+ ///
+ /// Executes the specified task and returns the result.
+ ///
Task ExecuteTaskAsync(AgentTaskRequest request);
}
+///
+/// Port for orchestrating agent task execution.
+///
public interface IAgentOrchestrationPort
{
+ ///
+ /// Executes the specified agent task and returns the result.
+ ///
Task ExecuteTaskAsync(AgentTaskRequest request);
}
-
+
+///
+/// Represents a notification to be sent through one or more channels.
+///
public class Notification
{
- public string Subject { get; set; }
- public string Message { get; set; }
- public List Channels { get; set; }
- public List Recipients { get; set; }
+ /// Gets or sets the notification subject.
+ public string Subject { get; set; } = string.Empty;
+ /// Gets or sets the notification message body.
+ public string Message { get; set; } = string.Empty;
+ /// Gets or sets the delivery channels (e.g., email, SMS).
+ public List Channels { get; set; } = new();
+ /// Gets or sets the notification recipients.
+ public List Recipients { get; set; } = new();
+ /// Gets or sets the timestamp when the notification was created.
public DateTimeOffset Timestamp { get; set; }
}
+///
+/// Port for sending notifications.
+///
public interface INotificationPort
{
+ ///
+ /// Sends the specified notification asynchronously.
+ ///
Task SendNotificationAsync(Notification notification);
}
@@ -173,15 +230,18 @@ private bool ValidateParameters(
out List events,
out List recommendedActions)
{
- incidentId = parameters.GetValueOrDefault("incidentId")?.ToString();
-
+ incidentId = parameters.GetValueOrDefault("incidentId")?.ToString() ?? string.Empty;
+
var eventsObj = parameters.GetValueOrDefault("correlatedEvents");
- events = eventsObj is JsonElement je ? JsonSerializer.Deserialize>(je.GetRawText()) : eventsObj as List;
+ var parsedEvents = eventsObj is JsonElement je ? JsonSerializer.Deserialize>(je.GetRawText()) : eventsObj as List;
var actionsObj = parameters.GetValueOrDefault("recommendedActions");
- recommendedActions = actionsObj is JsonElement jeActions ? JsonSerializer.Deserialize>(jeActions.GetRawText()) : actionsObj as List;
+ var parsedActions = actionsObj is JsonElement jeActions ? JsonSerializer.Deserialize>(jeActions.GetRawText()) : actionsObj as List;
+
+ events = parsedEvents ?? [];
+ recommendedActions = parsedActions ?? [];
- return !string.IsNullOrEmpty(incidentId) && events != null && recommendedActions != null;
+ return !string.IsNullOrEmpty(incidentId) && parsedEvents != null && parsedActions != null;
}
private async Task PreserveEvidence(string incidentId, List events, List actionsTaken)
diff --git a/src/AgencyLayer/SecurityAgents/SecurityAgents.csproj b/src/AgencyLayer/SecurityAgents/SecurityAgents.csproj
index beb47b2..3d02c79 100644
--- a/src/AgencyLayer/SecurityAgents/SecurityAgents.csproj
+++ b/src/AgencyLayer/SecurityAgents/SecurityAgents.csproj
@@ -15,6 +15,7 @@
+
diff --git a/src/AgencyLayer/ToolIntegration/BaseTool.cs b/src/AgencyLayer/ToolIntegration/BaseTool.cs
index fa73c9b..924e40c 100644
--- a/src/AgencyLayer/ToolIntegration/BaseTool.cs
+++ b/src/AgencyLayer/ToolIntegration/BaseTool.cs
@@ -2,17 +2,32 @@
namespace AgencyLayer.ToolIntegration
{
+ ///
+ /// Abstract base class for all tools in the tool integration framework.
+ ///
public abstract class BaseTool
{
+ /// The logger instance for this tool.
protected readonly ILogger _logger;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance.
protected BaseTool(ILogger logger)
{
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
+ /// Gets the name of the tool.
public abstract string Name { get; }
+ /// Gets the description of the tool.
public abstract string Description { get; }
+ ///
+ /// Executes the tool with the specified parameters.
+ ///
+ /// The parameters for tool execution.
+ /// A string containing the tool execution result.
public abstract Task ExecuteAsync(Dictionary parameters);
}
}
diff --git a/src/AgencyLayer/ToolIntegration/ConfigurableTool.cs b/src/AgencyLayer/ToolIntegration/ConfigurableTool.cs
index b24cbf9..aa4ff20 100644
--- a/src/AgencyLayer/ToolIntegration/ConfigurableTool.cs
+++ b/src/AgencyLayer/ToolIntegration/ConfigurableTool.cs
@@ -11,7 +11,9 @@ public class ConfigurableTool : BaseTool
{
private readonly ToolDefinition _definition;
+ ///
public override string Name => _definition.Name;
+ ///
public override string Description => _definition.Description;
///
diff --git a/src/AgencyLayer/ToolIntegration/Models/ToolDefinition.cs b/src/AgencyLayer/ToolIntegration/Models/ToolDefinition.cs
index 3ac4a79..a111006 100644
--- a/src/AgencyLayer/ToolIntegration/Models/ToolDefinition.cs
+++ b/src/AgencyLayer/ToolIntegration/Models/ToolDefinition.cs
@@ -10,23 +10,23 @@ public class ToolDefinition
///
/// The display name of the tool.
///
- public string Name { get; set; }
+ public required string Name { get; set; }
///
/// A short description of the tool's purpose.
///
- public string Description { get; set; }
+ public required string Description { get; set; }
///
/// The key for the primary input parameter expected in the ExecuteAsync dictionary.
/// e.g., "data", "text".
///
- public string ParameterKey { get; set; }
+ public required string ParameterKey { get; set; }
///
/// The informational message to log upon execution.
///
- public string LogMessage { get; set; }
+ public required string LogMessage { get; set; }
///
/// The simulated processing delay in milliseconds.
@@ -38,6 +38,6 @@ public class ToolDefinition
/// The parameter value can be inserted using string formatting.
/// e.g., "Classification results for: {0}"
///
- public string ResultTemplate { get; set; }
+ public required string ResultTemplate { get; set; }
}
}
diff --git a/src/AgencyLayer/ToolIntegration/ToolDefinitions.cs b/src/AgencyLayer/ToolIntegration/ToolDefinitions.cs
index 46e00ff..6cfb334 100644
--- a/src/AgencyLayer/ToolIntegration/ToolDefinitions.cs
+++ b/src/AgencyLayer/ToolIntegration/ToolDefinitions.cs
@@ -10,6 +10,7 @@ namespace AgencyLayer.ToolIntegration
///
public static class ToolDefinitions
{
+ /// Gets the classification tool definition.
public static ToolDefinition Classification => new()
{
Name = "Classification Tool",
@@ -19,6 +20,7 @@ public static class ToolDefinitions
ResultTemplate = "Classification results for: {0}"
};
+ /// Gets the clustering tool definition.
public static ToolDefinition Clustering => new()
{
Name = "Clustering Tool",
@@ -28,6 +30,7 @@ public static class ToolDefinitions
ResultTemplate = "Clustering results for: {0}"
};
+ /// Gets the data analysis tool definition.
public static ToolDefinition DataAnalysis => new()
{
Name = "Data Analysis Tool",
@@ -37,6 +40,7 @@ public static class ToolDefinitions
ResultTemplate = "Data analysis results for: {0}"
};
+ /// Gets the data cleaning tool definition.
public static ToolDefinition DataCleaning => new()
{
Name = "Data Cleaning Tool",
@@ -46,6 +50,7 @@ public static class ToolDefinitions
ResultTemplate = "Data cleaning results for: {0}"
};
+ /// Gets the data visualization tool definition.
public static ToolDefinition DataVisualization => new()
{
Name = "Data Visualization Tool",
@@ -55,6 +60,7 @@ public static class ToolDefinitions
ResultTemplate = "Data visualization for: {0}"
};
+ /// Gets the named entity recognition tool definition.
public static ToolDefinition NamedEntityRecognition => new()
{
Name = "Named Entity Recognition Tool",
@@ -64,6 +70,7 @@ public static class ToolDefinitions
ResultTemplate = "Named entity recognition results for: {0}"
};
+ /// Gets the pattern recognition tool definition.
public static ToolDefinition PatternRecognition => new()
{
Name = "Pattern Recognition Tool",
@@ -73,6 +80,7 @@ public static class ToolDefinitions
ResultTemplate = "Pattern recognition results for: {0}"
};
+ /// Gets the predictive analytics tool definition.
public static ToolDefinition PredictiveAnalytics => new()
{
Name = "Predictive Analytics Tool",
@@ -82,6 +90,7 @@ public static class ToolDefinitions
ResultTemplate = "Predictive analytics results for: {0}"
};
+ /// Gets the recommendation system tool definition.
public static ToolDefinition RecommendationSystem => new()
{
Name = "Recommendation System Tool",
@@ -91,6 +100,7 @@ public static class ToolDefinitions
ResultTemplate = "Recommendations for: {0}"
};
+ /// Gets the sentiment analysis tool definition.
public static ToolDefinition SentimentAnalysis => new()
{
Name = "Sentiment Analysis Tool",
@@ -100,6 +110,7 @@ public static class ToolDefinitions
ResultTemplate = "Sentiment analysis results for: {0}"
};
+ /// Gets the text generation tool definition.
public static ToolDefinition TextGeneration => new()
{
Name = "Text Generation Tool",
@@ -109,6 +120,7 @@ public static class ToolDefinitions
ResultTemplate = "Generated text for prompt: {0}"
};
+ /// Gets the web scraping tool definition.
public static ToolDefinition WebScraping => new()
{
Name = "Web Scraping Tool",
@@ -118,6 +130,7 @@ public static class ToolDefinitions
ResultTemplate = "Scraped content from: {0}"
};
+ /// Gets the web search tool definition.
public static ToolDefinition WebSearch => new()
{
Name = "Web Search Tool",
@@ -138,7 +151,7 @@ public static IEnumerable GetAll()
{
if (property.PropertyType == typeof(ToolDefinition))
{
- yield return (ToolDefinition)property.GetValue(null);
+ yield return (ToolDefinition)property.GetValue(null)!;
}
}
}
diff --git a/src/BusinessApplications/AdaptiveBalance/AdaptiveBalance.csproj b/src/BusinessApplications/AdaptiveBalance/AdaptiveBalance.csproj
new file mode 100644
index 0000000..64232f7
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/AdaptiveBalance.csproj
@@ -0,0 +1,15 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+
+
+
+
+
+
+
+
diff --git a/src/BusinessApplications/AdaptiveBalance/Controllers/AdaptiveBalanceController.cs b/src/BusinessApplications/AdaptiveBalance/Controllers/AdaptiveBalanceController.cs
new file mode 100644
index 0000000..c4f545a
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Controllers/AdaptiveBalanceController.cs
@@ -0,0 +1,120 @@
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Controllers;
+
+///
+/// Controller for adaptive balance management, providing operations for
+/// spectrum dimension retrieval, manual overrides, history tracking,
+/// learning evidence submission, and reflexion status monitoring.
+///
+public class AdaptiveBalanceController
+{
+ private readonly ILogger _logger;
+ private readonly IAdaptiveBalanceServicePort _servicePort;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger instance for structured logging.
+ /// The adaptive balance service port.
+ public AdaptiveBalanceController(
+ ILogger logger,
+ IAdaptiveBalanceServicePort servicePort)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _servicePort = servicePort ?? throw new ArgumentNullException(nameof(servicePort));
+ }
+
+ ///
+ /// Retrieves the current adaptive balance positions for all spectrum dimensions.
+ ///
+ /// The balance request with optional context.
+ /// Token to cancel the operation.
+ /// The current balance positions with confidence metrics.
+ /// Thrown when request is null.
+ public async Task GetBalanceAsync(BalanceRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ _logger.LogInformation("Retrieving adaptive balance with {ContextCount} context entries", request.Context.Count);
+
+ return await _servicePort.GetBalanceAsync(request, cancellationToken);
+ }
+
+ ///
+ /// Applies a manual override to a specific spectrum dimension.
+ ///
+ /// The override request with new value and rationale.
+ /// Token to cancel the operation.
+ /// The override response confirming the change.
+ /// Thrown when request is null.
+ /// Thrown when required fields are missing.
+ public async Task ApplyOverrideAsync(OverrideRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ if (string.IsNullOrWhiteSpace(request.Dimension))
+ {
+ throw new ArgumentException("Dimension is required.", nameof(request));
+ }
+
+ if (string.IsNullOrWhiteSpace(request.OverriddenBy))
+ {
+ throw new ArgumentException("OverriddenBy is required.", nameof(request));
+ }
+
+ _logger.LogInformation(
+ "Applying override to dimension {Dimension} by {OverriddenBy}",
+ request.Dimension, request.OverriddenBy);
+
+ return await _servicePort.ApplyOverrideAsync(request, cancellationToken);
+ }
+
+ ///
+ /// Retrieves the history of changes for a specific spectrum dimension.
+ ///
+ /// The name of the dimension to retrieve history for.
+ /// Token to cancel the operation.
+ /// The change history for the specified dimension.
+ /// Thrown when dimension is null or whitespace.
+ public async Task GetSpectrumHistoryAsync(string dimension, CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(dimension);
+
+ _logger.LogInformation("Retrieving spectrum history for dimension {Dimension}", dimension);
+
+ return await _servicePort.GetSpectrumHistoryAsync(dimension, cancellationToken);
+ }
+
+ ///
+ /// Submits learning evidence for the continuous improvement loop.
+ ///
+ /// The learning evidence request.
+ /// Token to cancel the operation.
+ /// The learning evidence submission response.
+ /// Thrown when request is null.
+ public async Task SubmitLearningEvidenceAsync(LearningEvidenceRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ _logger.LogInformation(
+ "Submitting learning evidence from agent {SourceAgentId}: pattern {PatternType}",
+ request.SourceAgentId, request.PatternType);
+
+ return await _servicePort.SubmitLearningEvidenceAsync(request, cancellationToken);
+ }
+
+ ///
+ /// Retrieves the current reflexion (self-evaluation) system status.
+ ///
+ /// Token to cancel the operation.
+ /// The current reflexion status with aggregate metrics.
+ public async Task GetReflexionStatusAsync(CancellationToken cancellationToken = default)
+ {
+ _logger.LogInformation("Retrieving reflexion status");
+
+ return await _servicePort.GetReflexionStatusAsync(cancellationToken);
+ }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Infrastructure/ServiceCollectionExtensions.cs b/src/BusinessApplications/AdaptiveBalance/Infrastructure/ServiceCollectionExtensions.cs
new file mode 100644
index 0000000..9e14d7f
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Infrastructure/ServiceCollectionExtensions.cs
@@ -0,0 +1,26 @@
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Ports;
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Services;
+using Microsoft.Extensions.DependencyInjection;
+
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Infrastructure;
+
+///
+/// Extension methods for registering Adaptive Balance services
+/// in the dependency injection container.
+///
+public static class ServiceCollectionExtensions
+{
+ ///
+ /// Adds Adaptive Balance services to the specified
+ /// , registering the in-memory
+ /// as the implementation for
+ /// .
+ ///
+ /// The service collection to configure.
+ /// The same instance for chaining.
+ public static IServiceCollection AddAdaptiveBalanceServices(this IServiceCollection services)
+ {
+ services.AddSingleton();
+ return services;
+ }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/BalanceRequest.cs b/src/BusinessApplications/AdaptiveBalance/Models/BalanceRequest.cs
new file mode 100644
index 0000000..730bea9
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/BalanceRequest.cs
@@ -0,0 +1,14 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents a request to retrieve the current adaptive balance positions
+/// for all spectrum dimensions.
+///
+public class BalanceRequest
+{
+ ///
+ /// Contextual information that may influence the balance calculation,
+ /// provided as key-value pairs.
+ ///
+ public Dictionary Context { get; set; } = new();
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/BalanceResponse.cs b/src/BusinessApplications/AdaptiveBalance/Models/BalanceResponse.cs
new file mode 100644
index 0000000..5c2b6c6
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/BalanceResponse.cs
@@ -0,0 +1,23 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the response containing the current adaptive balance positions
+/// for all spectrum dimensions with confidence metrics.
+///
+public class BalanceResponse
+{
+ ///
+ /// The current position for each spectrum dimension.
+ ///
+ public List Dimensions { get; set; } = new();
+
+ ///
+ /// The overall confidence level of the balance calculation (0.0 to 1.0).
+ ///
+ public double OverallConfidence { get; set; }
+
+ ///
+ /// The timestamp when this balance response was generated.
+ ///
+ public DateTimeOffset GeneratedAt { get; set; }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceRequest.cs b/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceRequest.cs
new file mode 100644
index 0000000..9986763
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceRequest.cs
@@ -0,0 +1,32 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents a request to submit learning evidence for the continuous improvement loop.
+///
+public class LearningEvidenceRequest
+{
+ ///
+ /// The type of pattern identified (e.g., "Bias", "Drift", "Success").
+ ///
+ public string PatternType { get; set; } = string.Empty;
+
+ ///
+ /// A description of the learning evidence.
+ ///
+ public string Description { get; set; } = string.Empty;
+
+ ///
+ /// The evidence supporting the learning, such as data or observations.
+ ///
+ public string Evidence { get; set; } = string.Empty;
+
+ ///
+ /// The observed outcome associated with this evidence.
+ ///
+ public string Outcome { get; set; } = string.Empty;
+
+ ///
+ /// The identifier of the agent that produced this learning evidence.
+ ///
+ public string SourceAgentId { get; set; } = string.Empty;
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceResponse.cs b/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceResponse.cs
new file mode 100644
index 0000000..a70d705
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/LearningEvidenceResponse.cs
@@ -0,0 +1,22 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the response after submitting learning evidence.
+///
+public class LearningEvidenceResponse
+{
+ ///
+ /// The unique identifier assigned to this learning event.
+ ///
+ public Guid EventId { get; set; }
+
+ ///
+ /// The timestamp when the learning evidence was recorded.
+ ///
+ public DateTimeOffset RecordedAt { get; set; }
+
+ ///
+ /// A human-readable message describing the submission result.
+ ///
+ public string Message { get; set; } = string.Empty;
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/OverrideRequest.cs b/src/BusinessApplications/AdaptiveBalance/Models/OverrideRequest.cs
new file mode 100644
index 0000000..6f66d6d
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/OverrideRequest.cs
@@ -0,0 +1,27 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents a request to manually override the position of a spectrum dimension.
+///
+public class OverrideRequest
+{
+ ///
+ /// The name of the spectrum dimension to override.
+ ///
+ public string Dimension { get; set; } = string.Empty;
+
+ ///
+ /// The new value for the dimension (must be between 0.0 and 1.0).
+ ///
+ public double NewValue { get; set; }
+
+ ///
+ /// The rationale for the override.
+ ///
+ public string Rationale { get; set; } = string.Empty;
+
+ ///
+ /// The identifier of the person performing the override.
+ ///
+ public string OverriddenBy { get; set; } = string.Empty;
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/OverrideResponse.cs b/src/BusinessApplications/AdaptiveBalance/Models/OverrideResponse.cs
new file mode 100644
index 0000000..46b18cc
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/OverrideResponse.cs
@@ -0,0 +1,37 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the response after applying a manual override to a spectrum dimension.
+///
+public class OverrideResponse
+{
+ ///
+ /// The unique identifier assigned to this override operation.
+ ///
+ public Guid OverrideId { get; set; }
+
+ ///
+ /// The name of the dimension that was overridden.
+ ///
+ public string Dimension { get; set; } = string.Empty;
+
+ ///
+ /// The previous value of the dimension before the override.
+ ///
+ public double OldValue { get; set; }
+
+ ///
+ /// The new value of the dimension after the override.
+ ///
+ public double NewValue { get; set; }
+
+ ///
+ /// The timestamp when the override was applied.
+ ///
+ public DateTimeOffset UpdatedAt { get; set; }
+
+ ///
+ /// A human-readable message describing the override result.
+ ///
+ public string Message { get; set; } = string.Empty;
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusEntry.cs b/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusEntry.cs
new file mode 100644
index 0000000..d60c184
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusEntry.cs
@@ -0,0 +1,27 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents a single reflexion (self-evaluation) result entry.
+///
+public class ReflexionStatusEntry
+{
+ ///
+ /// The unique identifier for this reflexion result.
+ ///
+ public Guid ResultId { get; set; }
+
+ ///
+ /// Whether this result was identified as a hallucination.
+ ///
+ public bool IsHallucination { get; set; }
+
+ ///
+ /// The confidence level of the reflexion evaluation (0.0 to 1.0).
+ ///
+ public double Confidence { get; set; }
+
+ ///
+ /// The timestamp when this evaluation was performed.
+ ///
+ public DateTimeOffset EvaluatedAt { get; set; }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusResponse.cs b/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusResponse.cs
new file mode 100644
index 0000000..d5b1f1c
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/ReflexionStatusResponse.cs
@@ -0,0 +1,23 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the current status of the reflexion (self-evaluation) system,
+/// including recent results and aggregate metrics.
+///
+public class ReflexionStatusResponse
+{
+ ///
+ /// Recent reflexion evaluation results.
+ ///
+ public List RecentResults { get; set; } = new();
+
+ ///
+ /// The overall hallucination rate as a proportion (0.0 to 1.0).
+ ///
+ public double HallucinationRate { get; set; }
+
+ ///
+ /// The average confidence level across recent evaluations (0.0 to 1.0).
+ ///
+ public double AverageConfidence { get; set; }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/SpectrumDimensionResult.cs b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumDimensionResult.cs
new file mode 100644
index 0000000..5148816
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumDimensionResult.cs
@@ -0,0 +1,33 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the current position of a single spectrum dimension,
+/// including confidence bounds and rationale.
+///
+public class SpectrumDimensionResult
+{
+ ///
+ /// The name of the spectrum dimension (e.g., "Profit", "Risk", "Agreeableness").
+ ///
+ public string Dimension { get; set; } = string.Empty;
+
+ ///
+ /// The current value of this dimension on the spectrum (0.0 to 1.0).
+ ///
+ public double Value { get; set; }
+
+ ///
+ /// The lower confidence bound for the dimension value (0.0 to 1.0).
+ ///
+ public double LowerBound { get; set; }
+
+ ///
+ /// The upper confidence bound for the dimension value (0.0 to 1.0).
+ ///
+ public double UpperBound { get; set; }
+
+ ///
+ /// An explanation of why the dimension is at its current position.
+ ///
+ public string Rationale { get; set; } = string.Empty;
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryEntry.cs b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryEntry.cs
new file mode 100644
index 0000000..9166997
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryEntry.cs
@@ -0,0 +1,22 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents a single historical entry for a spectrum dimension change.
+///
+public class SpectrumHistoryEntry
+{
+ ///
+ /// The value of the dimension at this point in time (0.0 to 1.0).
+ ///
+ public double Value { get; set; }
+
+ ///
+ /// The rationale for the value at this point in time.
+ ///
+ public string Rationale { get; set; } = string.Empty;
+
+ ///
+ /// The timestamp when this value was recorded.
+ ///
+ public DateTimeOffset RecordedAt { get; set; }
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryResponse.cs b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryResponse.cs
new file mode 100644
index 0000000..9a04095
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Models/SpectrumHistoryResponse.cs
@@ -0,0 +1,17 @@
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+///
+/// Represents the history of changes for a specific spectrum dimension.
+///
+public class SpectrumHistoryResponse
+{
+ ///
+ /// The name of the spectrum dimension.
+ ///
+ public string Dimension { get; set; } = string.Empty;
+
+ ///
+ /// The historical entries for this dimension, ordered by time.
+ ///
+ public List History { get; set; } = new();
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Ports/IAdaptiveBalanceServicePort.cs b/src/BusinessApplications/AdaptiveBalance/Ports/IAdaptiveBalanceServicePort.cs
new file mode 100644
index 0000000..07e5a7b
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Ports/IAdaptiveBalanceServicePort.cs
@@ -0,0 +1,50 @@
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Ports;
+
+///
+/// Defines the contract for adaptive balance services, providing operations
+/// for spectrum dimension management, manual overrides, learning evidence,
+/// and reflexion status monitoring.
+///
+public interface IAdaptiveBalanceServicePort
+{
+ ///
+ /// Retrieves the current adaptive balance positions for all spectrum dimensions.
+ ///
+ /// The balance request with optional context.
+ /// Token to cancel the operation.
+ /// The current balance positions with confidence metrics.
+ Task GetBalanceAsync(BalanceRequest request, CancellationToken cancellationToken = default);
+
+ ///
+ /// Applies a manual override to a specific spectrum dimension.
+ ///
+ /// The override request with new value and rationale.
+ /// Token to cancel the operation.
+ /// The override response confirming the change.
+ Task ApplyOverrideAsync(OverrideRequest request, CancellationToken cancellationToken = default);
+
+ ///
+ /// Retrieves the history of changes for a specific spectrum dimension.
+ ///
+ /// The name of the dimension to retrieve history for.
+ /// Token to cancel the operation.
+ /// The change history for the specified dimension.
+ Task GetSpectrumHistoryAsync(string dimension, CancellationToken cancellationToken = default);
+
+ ///
+ /// Submits learning evidence for the continuous improvement loop.
+ ///
+ /// The learning evidence request.
+ /// Token to cancel the operation.
+ /// The learning evidence submission response.
+ Task SubmitLearningEvidenceAsync(LearningEvidenceRequest request, CancellationToken cancellationToken = default);
+
+ ///
+ /// Retrieves the current reflexion (self-evaluation) system status.
+ ///
+ /// Token to cancel the operation.
+ /// The current reflexion status with aggregate metrics.
+ Task GetReflexionStatusAsync(CancellationToken cancellationToken = default);
+}
diff --git a/src/BusinessApplications/AdaptiveBalance/Services/AdaptiveBalanceService.cs b/src/BusinessApplications/AdaptiveBalance/Services/AdaptiveBalanceService.cs
new file mode 100644
index 0000000..3913eac
--- /dev/null
+++ b/src/BusinessApplications/AdaptiveBalance/Services/AdaptiveBalanceService.cs
@@ -0,0 +1,284 @@
+using System.Collections.Concurrent;
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Models;
+using CognitiveMesh.BusinessApplications.AdaptiveBalance.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.BusinessApplications.AdaptiveBalance.Services;
+
+///
+/// In-memory implementation of the adaptive balance service, providing
+/// spectrum dimension management, manual overrides, learning evidence tracking,
+/// and reflexion status monitoring.
+///
+public class AdaptiveBalanceService : IAdaptiveBalanceServicePort
+{
+ private readonly ILogger _logger;
+ private readonly ConcurrentDictionary _dimensions = new();
+ private readonly ConcurrentDictionary> _history = new();
+ private readonly ConcurrentBag _learningEvents = new();
+ private readonly ConcurrentBag _reflexionResults = new();
+
+ private static readonly string[] DefaultDimensions =
+ [
+ "Profit",
+ "Risk",
+ "Agreeableness",
+ "IdentityGrounding",
+ "LearningRate"
+ ];
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger instance for structured logging.
+ public AdaptiveBalanceService(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+
+ // Initialize all dimensions with default value of 0.5
+ foreach (var dimension in DefaultDimensions)
+ {
+ _dimensions[dimension] = new DimensionState
+ {
+ Value = 0.5,
+ Rationale = "Default initial position."
+ };
+
+ _history[dimension] =
+ [
+ new SpectrumHistoryEntry
+ {
+ Value = 0.5,
+ Rationale = "Default initial position.",
+ RecordedAt = DateTimeOffset.UtcNow
+ }
+ ];
+ }
+ }
+
+ ///
+ public Task GetBalanceAsync(BalanceRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ cancellationToken.ThrowIfCancellationRequested();
+
+ var dimensions = new List();
+
+ foreach (var dimensionName in DefaultDimensions)
+ {
+ var state = _dimensions.GetOrAdd(dimensionName, _ => new DimensionState
+ {
+ Value = 0.5,
+ Rationale = "Default initial position."
+ });
+
+ dimensions.Add(new SpectrumDimensionResult
+ {
+ Dimension = dimensionName,
+ Value = state.Value,
+ LowerBound = Math.Max(0.0, state.Value - 0.1),
+ UpperBound = Math.Min(1.0, state.Value + 0.1),
+ Rationale = state.Rationale
+ });
+ }
+
+ var overallConfidence = _reflexionResults.IsEmpty
+ ? 0.5
+ : 1.0 - _reflexionResults.Count(r => r.IsHallucination) / (double)_reflexionResults.Count;
+
+ _logger.LogInformation(
+ "Balance retrieved with {DimensionCount} dimensions, overall confidence {Confidence}",
+ dimensions.Count, overallConfidence);
+
+ return Task.FromResult(new BalanceResponse
+ {
+ Dimensions = dimensions,
+ OverallConfidence = Math.Round(overallConfidence, 4),
+ GeneratedAt = DateTimeOffset.UtcNow
+ });
+ }
+
+ ///
+ public Task ApplyOverrideAsync(OverrideRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ cancellationToken.ThrowIfCancellationRequested();
+
+ if (string.IsNullOrWhiteSpace(request.Dimension))
+ {
+ throw new ArgumentException("Dimension is required.", nameof(request));
+ }
+
+ if (request.NewValue < 0.0 || request.NewValue > 1.0)
+ {
+ throw new ArgumentOutOfRangeException(nameof(request), "NewValue must be between 0.0 and 1.0.");
+ }
+
+ var state = _dimensions.GetOrAdd(request.Dimension, _ => new DimensionState
+ {
+ Value = 0.5,
+ Rationale = "Default initial position."
+ });
+
+ var oldValue = state.Value;
+ var now = DateTimeOffset.UtcNow;
+
+ state.Value = request.NewValue;
+ state.Rationale = request.Rationale;
+
+ // Add history entry
+ var historyEntries = _history.GetOrAdd(request.Dimension, _ => new List());
+ lock (historyEntries)
+ {
+ historyEntries.Add(new SpectrumHistoryEntry
+ {
+ Value = request.NewValue,
+ Rationale = request.Rationale,
+ RecordedAt = now
+ });
+ }
+
+ var overrideId = Guid.NewGuid();
+
+ _logger.LogInformation(
+ "Override {OverrideId} applied to dimension {Dimension}: {OldValue} -> {NewValue} by {OverriddenBy}",
+ overrideId, request.Dimension, oldValue, request.NewValue, request.OverriddenBy);
+
+ return Task.FromResult(new OverrideResponse
+ {
+ OverrideId = overrideId,
+ Dimension = request.Dimension,
+ OldValue = oldValue,
+ NewValue = request.NewValue,
+ UpdatedAt = now,
+ Message = $"Override applied successfully. Dimension '{request.Dimension}' changed from {oldValue} to {request.NewValue}."
+ });
+ }
+
+ ///
+ public Task GetSpectrumHistoryAsync(string dimension, CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(dimension);
+
+ cancellationToken.ThrowIfCancellationRequested();
+
+ var historyEntries = _history.GetOrAdd(dimension, _ => new List());
+
+ List snapshot;
+ lock (historyEntries)
+ {
+ snapshot = historyEntries.OrderByDescending(h => h.RecordedAt).ToList();
+ }
+
+ _logger.LogInformation(
+ "History retrieved for dimension {Dimension}: {EntryCount} entries",
+ dimension, snapshot.Count);
+
+ return Task.FromResult(new SpectrumHistoryResponse
+ {
+ Dimension = dimension,
+ History = snapshot
+ });
+ }
+
+ ///
+ public Task SubmitLearningEvidenceAsync(LearningEvidenceRequest request, CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(request);
+
+ cancellationToken.ThrowIfCancellationRequested();
+
+ var eventId = Guid.NewGuid();
+ var now = DateTimeOffset.UtcNow;
+
+ _learningEvents.Add(new LearningEvent
+ {
+ EventId = eventId,
+ PatternType = request.PatternType,
+ Description = request.Description,
+ Evidence = request.Evidence,
+ Outcome = request.Outcome,
+ SourceAgentId = request.SourceAgentId,
+ RecordedAt = now
+ });
+
+ _logger.LogInformation(
+ "Learning evidence {EventId} submitted by agent {SourceAgentId}: pattern type {PatternType}",
+ eventId, request.SourceAgentId, request.PatternType);
+
+ return Task.FromResult(new LearningEvidenceResponse
+ {
+ EventId = eventId,
+ RecordedAt = now,
+ Message = "Learning evidence recorded successfully."
+ });
+ }
+
+ ///
+ public Task GetReflexionStatusAsync(CancellationToken cancellationToken = default)
+ {
+ cancellationToken.ThrowIfCancellationRequested();
+
+ var results = _reflexionResults.OrderByDescending(r => r.EvaluatedAt).ToList();
+
+ var hallucinationRate = results.Count > 0
+ ? (double)results.Count(r => r.IsHallucination) / results.Count
+ : 0.0;
+
+ var averageConfidence = results.Count > 0
+ ? results.Average(r => r.Confidence)
+ : 0.0;
+
+ _logger.LogInformation(
+ "Reflexion status retrieved: {ResultCount} results, hallucination rate {Rate}, average confidence {Confidence}",
+ results.Count, hallucinationRate, averageConfidence);
+
+ return Task.FromResult(new ReflexionStatusResponse
+ {
+ RecentResults = results,
+ HallucinationRate = Math.Round(hallucinationRate, 4),
+ AverageConfidence = Math.Round(averageConfidence, 4)
+ });
+ }
+
+ ///
+ /// Internal state for tracking the current value and rationale of a spectrum dimension.
+ ///
+ internal class DimensionState
+ {
+ /// The current value (0.0 to 1.0).
+ public double Value { get; set; }
+
+ /// The rationale for the current value.
+ public string Rationale { get; set; } = string.Empty;
+ }
+
+ ///
+ /// Internal record for tracking submitted learning events.
+ ///
+ internal class LearningEvent
+ {
+ /// The unique event identifier.
+ public Guid EventId { get; set; }
+
+ /// The pattern type.
+ public string PatternType { get; set; } = string.Empty;
+
+ /// The description.
+ public string Description { get; set; } = string.Empty;
+
+ /// The supporting evidence.
+ public string Evidence { get; set; } = string.Empty;
+
+ /// The observed outcome.
+ public string Outcome { get; set; } = string.Empty;
+
+ /// The source agent identifier.
+ public string SourceAgentId { get; set; } = string.Empty;
+
+ /// When the event was recorded.
+ public DateTimeOffset RecordedAt { get; set; }
+ }
+}
diff --git a/src/BusinessApplications/BusinessApplications.csproj b/src/BusinessApplications/BusinessApplications.csproj
index bcde8a7..d6c8b1e 100644
--- a/src/BusinessApplications/BusinessApplications.csproj
+++ b/src/BusinessApplications/BusinessApplications.csproj
@@ -5,6 +5,24 @@
net9.0
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
@@ -16,15 +34,19 @@
+
+
+
-
-
+
+
+
diff --git a/src/BusinessApplications/Common/Models/ErrorEnvelope.cs b/src/BusinessApplications/Common/Models/ErrorEnvelope.cs
index fb59f75..76088cc 100644
--- a/src/BusinessApplications/Common/Models/ErrorEnvelope.cs
+++ b/src/BusinessApplications/Common/Models/ErrorEnvelope.cs
@@ -81,6 +81,59 @@ public ErrorEnvelope(
AdditionalDetails = additionalDetails;
}
+ #region General Factory Methods
+
+ ///
+ /// Creates a generic error envelope with the specified error code and message.
+ ///
+ /// A machine-readable error code.
+ /// A human-readable error message.
+ /// An optional correlation ID for tracing.
+ /// A new ErrorEnvelope instance.
+ public static ErrorEnvelope Create(string errorCode, string message, string? correlationId = null)
+ {
+ return new ErrorEnvelope(
+ errorCode: errorCode,
+ errorMessage: message,
+ detailedMessage: null,
+ source: null,
+ correlationId: correlationId);
+ }
+
+ ///
+ /// Creates an error envelope for an invalid request payload.
+ ///
+ /// A description of the validation failure.
+ /// An optional correlation ID for tracing.
+ /// A new ErrorEnvelope instance.
+ public static ErrorEnvelope InvalidPayload(string message, string? correlationId = null)
+ {
+ return new ErrorEnvelope(
+ errorCode: "INVALID_PAYLOAD",
+ errorMessage: message,
+ detailedMessage: null,
+ source: null,
+ correlationId: correlationId);
+ }
+
+ ///
+ /// Creates an error envelope for a missing consent scenario.
+ ///
+ /// A description of the missing consent.
+ /// An optional correlation ID for tracing.
+ /// A new ErrorEnvelope instance.
+ public static ErrorEnvelope ConsentMissing(string message, string? correlationId = null)
+ {
+ return new ErrorEnvelope(
+ errorCode: "CONSENT_MISSING",
+ errorMessage: message,
+ detailedMessage: null,
+ source: null,
+ correlationId: correlationId);
+ }
+
+ #endregion
+
#region Factory Methods for Compliance Errors
///
diff --git a/src/BusinessApplications/Compliance/Adapters/EUAIActComplianceAdapter.cs b/src/BusinessApplications/Compliance/Adapters/EUAIActComplianceAdapter.cs
index e5bbf3b..5f7cf7e 100644
--- a/src/BusinessApplications/Compliance/Adapters/EUAIActComplianceAdapter.cs
+++ b/src/BusinessApplications/Compliance/Adapters/EUAIActComplianceAdapter.cs
@@ -8,8 +8,7 @@
using CognitiveMesh.BusinessApplications.Compliance.Ports;
using CognitiveMesh.BusinessApplications.Compliance.Ports.Models;
using CognitiveMesh.BusinessApplications.ConvenerServices.Ports;
-using CognitiveMesh.BusinessApplications.ConvenerServices.Ports.Models;
-using CognitiveMesh.FoundationLayer.AuditLogging;
+using FoundationLayer.AuditLogging;
using Microsoft.Extensions.Logging;
namespace CognitiveMesh.BusinessApplications.Compliance.Adapters
@@ -18,12 +17,14 @@ namespace CognitiveMesh.BusinessApplications.Compliance.Adapters
/// Adapter that implements the EU AI Act Compliance Port.
/// This adapter is responsible for classifying AI systems by risk, managing conformity assessments for high-risk systems,
/// ensuring transparency obligations are met, and handling the registration of systems in the EU database.
- /// It serves as a core component of the Ethical & Legal Compliance Framework.
+ /// It serves as a core component of the Ethical & Legal Compliance Framework.
///
public class EUAIActComplianceAdapter : IEUAIActCompliancePort
{
private readonly ILogger _logger;
+#pragma warning disable CS0414 // Field is assigned but its value is never used (reserved for future consent verification)
private readonly IConsentPort _consentPort;
+#pragma warning restore CS0414
private readonly IAuditLoggingAdapter _auditLoggingAdapter;
// In-memory store for conformity assessments. In a real system, this would be a persistent database.
@@ -88,8 +89,8 @@ public Task ClassifySystemRiskAsync(RiskClassificati
_logger.LogInformation("System '{SystemName}' classified as {RiskLevel}. Justification: {Justification}",
request.SystemName, response.RiskLevel, response.Justification);
- // Audit the classification event
- _auditLoggingAdapter.LogLegalComplianceCheckedAsync(
+ // Audit the classification event (fire-and-forget; discard suppresses CS4014)
+ _ = _auditLoggingAdapter.LogLegalComplianceCheckedAsync(
Guid.NewGuid().ToString(),
"EUAIAct",
request.SystemName,
@@ -157,7 +158,7 @@ public Task GetConformityAssessmentStatusAsync(string asse
return Task.FromResult(assessment);
}
_logger.LogWarning("Conformity assessment with ID {AssessmentId} not found for tenant {TenantId}.", assessmentId, tenantId);
- return Task.FromResult(null);
+ return Task.FromResult(null!);
}
///
diff --git a/src/BusinessApplications/Compliance/Adapters/GDPRComplianceAdapter.cs b/src/BusinessApplications/Compliance/Adapters/GDPRComplianceAdapter.cs
index 0b3574f..5f76361 100644
--- a/src/BusinessApplications/Compliance/Adapters/GDPRComplianceAdapter.cs
+++ b/src/BusinessApplications/Compliance/Adapters/GDPRComplianceAdapter.cs
@@ -8,8 +8,7 @@
using CognitiveMesh.BusinessApplications.ConvenerServices.Ports.Models;
using CognitiveMesh.BusinessApplications.Compliance.Ports;
using CognitiveMesh.BusinessApplications.Compliance.Ports.Models;
-using CognitiveMesh.FoundationLayer.AuditLogging;
-using CognitiveMesh.FoundationLayer.AuditLogging.Models;
+using FoundationLayer.AuditLogging;
namespace CognitiveMesh.BusinessApplications.Compliance.Adapters
{
@@ -1479,7 +1478,7 @@ public class ComplianceAssessment
///
/// The area being assessed (e.g., "Legal Basis", "Data Minimization").
///
- public string Area { get; set; }
+ public string Area { get; set; } = string.Empty;
///
/// Indicates whether the area is compliant with GDPR requirements.
@@ -1492,27 +1491,6 @@ public class ComplianceAssessment
public List Issues { get; set; } = new List();
}
- ///
- /// Represents a data transfer to another country or region.
- ///
- public class DataTransfer
- {
- ///
- /// The destination country or region.
- ///
- public string Destination { get; set; }
-
- ///
- /// The mechanism used for the transfer (e.g., adequacy decision, SCCs).
- ///
- public string TransferMechanism { get; set; }
-
- ///
- /// Indicates whether additional safeguards are in place beyond the transfer mechanism.
- ///
- public bool AdditionalSafeguards { get; set; }
- }
-
///
/// Interface for a registry of data processing activities.
///
diff --git a/src/BusinessApplications/Compliance/Adapters/SectoralRegulationsFramework.cs b/src/BusinessApplications/Compliance/Adapters/SectoralRegulationsFramework.cs
index 1f6e9ab..9cd4aa3 100644
--- a/src/BusinessApplications/Compliance/Adapters/SectoralRegulationsFramework.cs
+++ b/src/BusinessApplications/Compliance/Adapters/SectoralRegulationsFramework.cs
@@ -19,12 +19,12 @@ public class SectoralComplianceCheckRequest
/// The name of the regulation to check against (e.g., "HIPAA", "MiFID II").
/// This is used to dispatch the request to the correct compliance module.
///
- public string Regulation { get; set; }
+ public string Regulation { get; set; } = string.Empty;
///
/// The type of action being assessed (e.g., "DataStorage", "DataSharing", "AutomatedDecision").
///
- public string ActionType { get; set; }
+ public string ActionType { get; set; } = string.Empty;
///
/// The data involved in the action, which will be assessed by the compliance module.
@@ -34,7 +34,7 @@ public class SectoralComplianceCheckRequest
///
/// The tenant ID under which the compliance check is being performed.
///
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
///
/// A unique identifier for tracing this request across systems.
@@ -55,7 +55,7 @@ public class ComplianceAssessmentResult
///
/// The regulation against which the assessment was performed.
///
- public string Regulation { get; set; }
+ public string Regulation { get; set; } = string.Empty;
///
/// A list of compliance issues identified during the assessment. This list is empty if IsCompliant is true.
@@ -81,22 +81,42 @@ public class ComplianceIssue
///
/// A unique code for the identified issue.
///
- public string IssueCode { get; set; }
+ public string IssueCode { get; set; } = string.Empty;
+
+ ///
+ /// A unique identifier for the compliance issue instance.
+ ///
+ public string IssueId { get; set; } = string.Empty;
+
+ ///
+ /// The regulatory framework this issue pertains to (e.g., "GDPR", "EU AI Act").
+ ///
+ public string Framework { get; set; } = string.Empty;
///
/// A description of the compliance issue.
///
- public string Description { get; set; }
+ public string Description { get; set; } = string.Empty;
///
/// The severity of the issue (e.g., "Low", "Medium", "High", "Critical").
///
- public string Severity { get; set; }
+ public string Severity { get; set; } = string.Empty;
///
/// A reference to the specific article or section of the regulation that was violated.
///
- public string RegulatoryReference { get; set; }
+ public string RegulatoryReference { get; set; } = string.Empty;
+
+ ///
+ /// The timestamp when this compliance issue was identified.
+ ///
+ public DateTimeOffset IdentifiedAt { get; set; }
+
+ ///
+ /// Recommended steps to remediate the compliance issue.
+ ///
+ public string RemediationSteps { get; set; } = string.Empty;
}
#endregion
diff --git a/src/BusinessApplications/Compliance/Compliance.csproj b/src/BusinessApplications/Compliance/Compliance.csproj
index beb47b2..d58b86a 100644
--- a/src/BusinessApplications/Compliance/Compliance.csproj
+++ b/src/BusinessApplications/Compliance/Compliance.csproj
@@ -8,6 +8,10 @@
$(NoWarn);1591
+
+
+
+
@@ -15,7 +19,11 @@
+
+
+
+
diff --git a/src/BusinessApplications/Compliance/Controllers/ComplianceController.cs b/src/BusinessApplications/Compliance/Controllers/ComplianceController.cs
index b745e19..46d0ff6 100644
--- a/src/BusinessApplications/Compliance/Controllers/ComplianceController.cs
+++ b/src/BusinessApplications/Compliance/Controllers/ComplianceController.cs
@@ -1,5 +1,4 @@
using CognitiveMesh.ReasoningLayer.EthicalReasoning.Ports;
-using CognitiveMesh.ReasoningLayer.EthicalReasoning.Ports.Models;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
@@ -8,72 +7,162 @@
using System.Collections.Generic;
using System.Threading.Tasks;
-namespace CognitiveMesh.BusinessApplications.Compliance.Ports
+namespace CognitiveMesh.BusinessApplications.Compliance.Controllers.Models
{
// --- DTOs for GDPR Compliance ---
+
+ ///
+ /// Represents a GDPR consent record for the controller API.
+ ///
public class GDPRConsentRecord
{
- public string SubjectId { get; set; }
- public string ConsentType { get; set; } // e.g., "DataProcessing", "Marketing"
+ /// Gets or sets the subject identifier.
+ public string SubjectId { get; set; } = string.Empty;
+ /// Gets or sets the consent type (e.g., "DataProcessing", "Marketing").
+ public string ConsentType { get; set; } = string.Empty;
+ /// Gets or sets whether consent is given.
public bool IsGiven { get; set; }
+ /// Gets or sets the timestamp of the consent.
public DateTimeOffset Timestamp { get; set; }
- public string EvidenceId { get; set; }
+ /// Gets or sets the evidence identifier.
+ public string EvidenceId { get; set; } = string.Empty;
}
+ ///
+ /// Represents a data subject request for the controller API.
+ ///
public class DataSubjectRequest
{
+ /// Gets or sets the request identifier.
public string RequestId { get; set; } = Guid.NewGuid().ToString();
- public string SubjectId { get; set; }
- public string RequestType { get; set; } // "Access", "Rectify", "Erasure"
- public string Data { get; set; } // Data for rectification, etc.
+ /// Gets or sets the subject identifier.
+ public string SubjectId { get; set; } = string.Empty;
+ /// Gets or sets the request type ("Access", "Rectify", "Erasure").
+ public string RequestType { get; set; } = string.Empty;
+ /// Gets or sets the data for rectification.
+ public string Data { get; set; } = string.Empty;
+ /// Gets or sets the status of the request.
public string Status { get; set; } = "Pending";
}
// --- DTOs for EU AI Act Compliance ---
+
+ ///
+ /// Represents an AI risk assessment for the controller API.
+ ///
public class AIRiskAssessment
{
- public string SystemId { get; set; }
- public string RiskLevel { get; set; } // "High", "Medium", "Low"
- public string AssessmentDetails { get; set; }
- public string MitigationMeasures { get; set; }
+ /// Gets or sets the system identifier.
+ public string SystemId { get; set; } = string.Empty;
+ /// Gets or sets the risk level ("High", "Medium", "Low").
+ public string RiskLevel { get; set; } = string.Empty;
+ /// Gets or sets the assessment details.
+ public string AssessmentDetails { get; set; } = string.Empty;
+ /// Gets or sets the mitigation measures.
+ public string MitigationMeasures { get; set; } = string.Empty;
+ /// Gets or sets the assessment date.
public DateTimeOffset AssessmentDate { get; set; }
}
// --- DTOs for Governance Policy Management ---
+
+ ///
+ /// Represents a governance policy record for the controller API.
+ ///
public class PolicyRecord
{
- public string PolicyId { get; set; }
- public string Name { get; set; }
- public string Content { get; set; }
+ /// Gets or sets the policy identifier.
+ public string PolicyId { get; set; } = string.Empty;
+ /// Gets or sets the policy name.
+ public string Name { get; set; } = string.Empty;
+ /// Gets or sets the policy content.
+ public string Content { get; set; } = string.Empty;
+ /// Gets or sets the policy version.
public int Version { get; set; }
- public string Status { get; set; } // "Draft", "Active", "Deprecated"
+ /// Gets or sets the policy status ("Draft", "Active", "Deprecated").
+ public string Status { get; set; } = string.Empty;
+ /// Gets or sets the last updated timestamp.
public DateTimeOffset LastUpdatedAt { get; set; }
}
+}
- // --- Placeholder Ports for Adapters ---
- public interface IGDPRCompliancePort
+namespace CognitiveMesh.BusinessApplications.Compliance.Controllers.Ports
+{
+ using CognitiveMesh.BusinessApplications.Compliance.Controllers.Models;
+
+ ///
+ /// Defines the simplified GDPR compliance contract used by the compliance controller.
+ ///
+ public interface IGDPRComplianceControllerPort
{
+ ///
+ /// Records a consent decision.
+ ///
+ /// The consent record to store.
+ /// The persisted consent record.
Task RecordConsentAsync(GDPRConsentRecord consentRecord);
+
+ ///
+ /// Handles a data subject request.
+ ///
+ /// The data subject identifier.
+ /// The type of request.
+ /// The data subject request result.
Task HandleDataSubjectRequestAsync(string subjectId, string requestType);
}
- public interface IEUAIActCompliancePort
+ ///
+ /// Defines the simplified EU AI Act compliance contract used by the compliance controller.
+ ///
+ public interface IEUAIActComplianceControllerPort
{
+ ///
+ /// Submits a risk assessment for an AI system.
+ ///
+ /// The risk assessment to submit.
+ /// The submitted risk assessment.
Task SubmitRiskAssessmentAsync(AIRiskAssessment assessment);
}
+ ///
+ /// Defines the contract for governance policy management operations.
+ ///
public interface IGovernancePort
{
+ ///
+ /// Lists all governance policies.
+ ///
+ /// An enumerable of policy records.
Task> ListPoliciesAsync();
+
+ ///
+ /// Creates a new draft policy.
+ ///
+ /// The policy name.
+ /// The policy content.
+ /// The created policy record.
Task CreatePolicyAsync(string name, string content);
+
+ ///
+ /// Approves a draft policy, making it active.
+ ///
+ /// The identifier of the policy to approve.
+ /// The approved policy record.
Task ApprovePolicyAsync(string policyId);
+
+ ///
+ /// Rolls back a policy to a previous version.
+ ///
+ /// The identifier of the policy to roll back.
+ /// The rolled-back policy record.
Task RollbackPolicyAsync(string policyId);
}
}
namespace CognitiveMesh.BusinessApplications.Compliance.Controllers
{
- using CognitiveMesh.BusinessApplications.Compliance.Ports;
+ using CognitiveMesh.BusinessApplications.Compliance.Controllers.Models;
+ using CognitiveMesh.BusinessApplications.Compliance.Controllers.Ports;
///
/// Provides REST API endpoints for managing and reporting on ethical and legal compliance.
@@ -88,16 +177,25 @@ public class ComplianceController : ControllerBase
private readonly ILogger _logger;
private readonly INormativeAgencyPort _normativeAgencyPort;
private readonly IInformationEthicsPort _informationEthicsPort;
- private readonly IGDPRCompliancePort _gdprPort;
- private readonly IEUAIActCompliancePort _euAiActPort;
+ private readonly IGDPRComplianceControllerPort _gdprPort;
+ private readonly IEUAIActComplianceControllerPort _euAiActPort;
private readonly IGovernancePort _governancePort;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance.
+ /// The normative agency reasoning port.
+ /// The information ethics reasoning port.
+ /// The GDPR compliance controller port.
+ /// The EU AI Act compliance controller port.
+ /// The governance policy management port.
public ComplianceController(
ILogger logger,
INormativeAgencyPort normativeAgencyPort,
IInformationEthicsPort informationEthicsPort,
- IGDPRCompliancePort gdprPort,
- IEUAIActCompliancePort euAiActPort,
+ IGDPRComplianceControllerPort gdprPort,
+ IEUAIActComplianceControllerPort euAiActPort,
IGovernancePort governancePort)
{
_logger = logger;
@@ -158,7 +256,7 @@ public async Task ApprovePolicy(string policyId)
[HttpGet("report")]
[ProducesResponseType(typeof(object), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status400BadRequest)]
- public async Task GetComplianceReport([FromQuery] string reportType)
+ public Task GetComplianceReport([FromQuery] string reportType)
{
// In a real implementation, this would call a dedicated reporting service.
// For now, we mock the response based on the report type.
@@ -172,9 +270,9 @@ public async Task GetComplianceReport([FromQuery] string reportTy
report = new { ReportTitle = "GDPR Data Subject Request Log", Entries = new[] { new { RequestId = Guid.NewGuid(), SubjectId = "user-123", Status = "Completed" } } };
break;
default:
- return BadRequest(new { error_code = "INVALID_REPORT_TYPE", message = "The requested report type is not supported." });
+ return Task.FromResult(BadRequest(new { error_code = "INVALID_REPORT_TYPE", message = "The requested report type is not supported." }));
}
- return Ok(report);
+ return Task.FromResult(Ok(report));
}
#endregion
diff --git a/src/BusinessApplications/Compliance/Ports/IEUAIActCompliancePort.cs b/src/BusinessApplications/Compliance/Ports/IEUAIActCompliancePort.cs
index 5827dab..3a5cfba 100644
--- a/src/BusinessApplications/Compliance/Ports/IEUAIActCompliancePort.cs
+++ b/src/BusinessApplications/Compliance/Ports/IEUAIActCompliancePort.cs
@@ -44,12 +44,12 @@ public enum AIRiskLevel
///
public class RiskClassificationRequest
{
- public string SystemName { get; set; }
- public string SystemVersion { get; set; }
- public string IntendedPurpose { get; set; }
+ public string SystemName { get; set; } = string.Empty;
+ public string SystemVersion { get; set; } = string.Empty;
+ public string IntendedPurpose { get; set; } = string.Empty;
public List DataSources { get; set; } = new List();
public List UserDemographics { get; set; } = new List();
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -58,13 +58,13 @@ public class RiskClassificationRequest
///
public class RiskClassificationResponse
{
- public string SystemName { get; set; }
- public string SystemVersion { get; set; }
+ public string SystemName { get; set; } = string.Empty;
+ public string SystemVersion { get; set; } = string.Empty;
public AIRiskLevel RiskLevel { get; set; }
- public string Justification { get; set; }
+ public string Justification { get; set; } = string.Empty;
public List ApplicableArticles { get; set; } = new List();
public DateTimeOffset AssessedAt { get; set; }
- public string CorrelationId { get; set; }
+ public string CorrelationId { get; set; } = string.Empty;
}
///
@@ -73,10 +73,10 @@ public class RiskClassificationResponse
public class ConformityAssessmentRequest
{
public Guid AgentId { get; set; }
- public string SystemVersion { get; set; }
+ public string SystemVersion { get; set; } = string.Empty;
public string FrameworkVersion { get; set; } = "EU_AI_ACT_V1";
- public string RequestedBy { get; set; }
- public string TenantId { get; set; }
+ public string RequestedBy { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -85,12 +85,12 @@ public class ConformityAssessmentRequest
///
public class ConformityAssessment
{
- public string AssessmentId { get; set; }
+ public string AssessmentId { get; set; } = string.Empty;
public Guid AgentId { get; set; }
- public string Status { get; set; } // e.g., "InProgress", "Completed", "Failed"
- public string Outcome { get; set; } // e.g., "Compliant", "NonCompliant"
+ public string Status { get; set; } = string.Empty; // e.g., "InProgress", "Completed", "Failed"
+ public string Outcome { get; set; } = string.Empty; // e.g., "Compliant", "NonCompliant"
public List Findings { get; set; } = new List();
- public string EvidenceLocation { get; set; }
+ public string EvidenceLocation { get; set; } = string.Empty;
public DateTimeOffset AssessedAt { get; set; }
public DateTimeOffset? ExpiresAt { get; set; }
}
@@ -101,9 +101,9 @@ public class ConformityAssessment
public class TransparencyCheckRequest
{
public Guid AgentId { get; set; }
- public string SystemType { get; set; } // e.g., "Chatbot", "DeepfakeGenerator", "EmotionRecognition"
- public string DisclosureContent { get; set; }
- public string TenantId { get; set; }
+ public string SystemType { get; set; } = string.Empty; // e.g., "Chatbot", "DeepfakeGenerator", "EmotionRecognition"
+ public string DisclosureContent { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -114,7 +114,7 @@ public class TransparencyCheckResponse
{
public bool IsCompliant { get; set; }
public List Violations { get; set; } = new List();
- public string ApplicableArticle { get; set; }
+ public string ApplicableArticle { get; set; } = string.Empty;
public DateTimeOffset CheckedAt { get; set; }
}
@@ -124,11 +124,11 @@ public class TransparencyCheckResponse
public class EUDatabaseRegistrationRequest
{
public Guid AgentId { get; set; }
- public string SystemName { get; set; }
- public string ProviderInfo { get; set; }
- public string ConformityAssessmentId { get; set; }
- public string InstructionsForUseUrl { get; set; }
- public string TenantId { get; set; }
+ public string SystemName { get; set; } = string.Empty;
+ public string ProviderInfo { get; set; } = string.Empty;
+ public string ConformityAssessmentId { get; set; } = string.Empty;
+ public string InstructionsForUseUrl { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -138,10 +138,10 @@ public class EUDatabaseRegistrationRequest
public class EUDatabaseRegistrationResponse
{
public bool IsSuccess { get; set; }
- public string RegistrationId { get; set; }
- public string Status { get; set; } // e.g., "Submitted", "Registered", "Failed"
+ public string RegistrationId { get; set; } = string.Empty;
+ public string Status { get; set; } = string.Empty; // e.g., "Submitted", "Registered", "Failed"
public DateTimeOffset RegisteredAt { get; set; }
- public ErrorEnvelope Error { get; set; }
+ public ErrorEnvelope? Error { get; set; }
}
#endregion
@@ -164,7 +164,7 @@ public interface IEUAIActCompliancePort
/// The request containing details of the AI system to classify.
/// The risk classification and justification for the decision.
///
- /// This method implements the logic defined in EU AI Act Title III and Annexes II & III.
+ /// This method implements the logic defined in EU AI Act Title III and Annexes II & III.
/// It determines if a system is Unacceptable, High-Risk, Limited-Risk, or Minimal-Risk.
///
Task ClassifySystemRiskAsync(RiskClassificationRequest request);
diff --git a/src/BusinessApplications/Compliance/Ports/IGDPRCompliancePort.cs b/src/BusinessApplications/Compliance/Ports/IGDPRCompliancePort.cs
index 341adbd..59a1616 100644
--- a/src/BusinessApplications/Compliance/Ports/IGDPRCompliancePort.cs
+++ b/src/BusinessApplications/Compliance/Ports/IGDPRCompliancePort.cs
@@ -15,22 +15,22 @@ public abstract class DataSubjectRightRequest
///
/// The unique identifier of the data subject making the request.
///
- public string SubjectId { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
///
/// The tenant ID in which the request is made.
///
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
///
/// A token or evidence used to verify the identity of the data subject.
///
- public string IdentityVerificationToken { get; set; }
+ public string IdentityVerificationToken { get; set; } = string.Empty;
///
/// The user or system that initiated the request.
///
- public string RequestedBy { get; set; }
+ public string RequestedBy { get; set; } = string.Empty;
///
/// A correlation ID for tracing the request across the system.
@@ -59,6 +59,27 @@ public class DataSubjectRectificationRequest : DataSubjectRightRequest
///
public class DataSubjectErasureRequest : DataSubjectRightRequest { }
+ ///
+ /// Represents a data transfer to another country or region.
+ ///
+ public class DataTransfer
+ {
+ ///
+ /// The destination country or region.
+ ///
+ public string Destination { get; set; } = string.Empty;
+
+ ///
+ /// The mechanism used for the transfer (e.g., adequacy decision, SCCs).
+ ///
+ public string TransferMechanism { get; set; } = string.Empty;
+
+ ///
+ /// Indicates whether additional safeguards are in place beyond the transfer mechanism.
+ ///
+ public bool AdditionalSafeguards { get; set; }
+ }
+
///
/// Represents a request for data portability (GDPR Article 20).
///
@@ -78,12 +99,12 @@ public class DataSubjectObjectionRequest : DataSubjectRightRequest
///
/// The specific data processing activity being objected to.
///
- public string ProcessingActivity { get; set; }
+ public string ProcessingActivity { get; set; } = string.Empty;
///
/// The grounds for the objection.
///
- public string GroundsForObjection { get; set; }
+ public string GroundsForObjection { get; set; } = string.Empty;
}
///
@@ -92,11 +113,11 @@ public class DataSubjectObjectionRequest : DataSubjectRightRequest
public class DataSubjectRightResponse
{
public bool IsSuccess { get; set; }
- public string RequestType { get; set; }
- public string SubjectId { get; set; }
+ public string RequestType { get; set; } = string.Empty;
+ public string SubjectId { get; set; } = string.Empty;
public DateTimeOffset ProcessedAt { get; set; }
- public Dictionary Data { get; set; }
- public ErrorEnvelope Error { get; set; }
+ public Dictionary Data { get; set; } = new Dictionary();
+ public ErrorEnvelope? Error { get; set; }
}
#endregion
@@ -108,15 +129,15 @@ public class DataSubjectRightResponse
///
public class DataProcessingAssessmentRequest
{
- public string ActivityName { get; set; }
- public string ProcessingPurpose { get; set; }
+ public string ActivityName { get; set; } = string.Empty;
+ public string ProcessingPurpose { get; set; } = string.Empty;
public List DataCategories { get; set; } = new List();
- public string LegalBasis { get; set; }
+ public string LegalBasis { get; set; } = string.Empty;
public Dictionary ProcessingDetails { get; set; } = new Dictionary();
- public string RetentionPeriod { get; set; }
+ public string RetentionPeriod { get; set; } = string.Empty;
public List DataTransfers { get; set; } = new List();
- public string TenantId { get; set; }
- public string AssessedBy { get; set; }
+ public string TenantId { get; set; } = string.Empty;
+ public string AssessedBy { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -126,11 +147,11 @@ public class DataProcessingAssessmentRequest
public class DataProcessingAssessmentResponse
{
public bool IsCompliant { get; set; }
- public string ActivityName { get; set; }
+ public string ActivityName { get; set; } = string.Empty;
public DateTimeOffset AssessedAt { get; set; }
public List Issues { get; set; } = new List();
- public Dictionary AssessmentDetails { get; set; }
- public ErrorEnvelope Error { get; set; }
+ public Dictionary AssessmentDetails { get; set; } = new Dictionary();
+ public ErrorEnvelope? Error { get; set; }
}
#endregion
@@ -142,11 +163,11 @@ public class DataProcessingAssessmentResponse
///
public class ConsentVerificationRequest
{
- public string SubjectId { get; set; }
- public string TenantId { get; set; }
- public string ConsentType { get; set; }
- public string ProcessingOperation { get; set; }
- public string VerifiedBy { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
+ public string ConsentType { get; set; } = string.Empty;
+ public string ProcessingOperation { get; set; } = string.Empty;
+ public string VerifiedBy { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -156,11 +177,11 @@ public class ConsentVerificationRequest
public class ConsentVerificationResponse
{
public bool HasConsent { get; set; }
- public string SubjectId { get; set; }
- public string ConsentType { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
+ public string ConsentType { get; set; } = string.Empty;
public DateTimeOffset VerifiedAt { get; set; }
- public string ConsentRecordId { get; set; }
- public ErrorEnvelope Error { get; set; }
+ public string ConsentRecordId { get; set; } = string.Empty;
+ public ErrorEnvelope? Error { get; set; }
}
///
@@ -168,15 +189,15 @@ public class ConsentVerificationResponse
///
public class ConsentRecordRequest
{
- public string SubjectId { get; set; }
- public string TenantId { get; set; }
- public string ConsentType { get; set; }
- public string ProcessingOperation { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
+ public string TenantId { get; set; } = string.Empty;
+ public string ConsentType { get; set; } = string.Empty;
+ public string ProcessingOperation { get; set; } = string.Empty;
public bool IsGranted { get; set; }
- public string Source { get; set; }
- public string Evidence { get; set; }
+ public string Source { get; set; } = string.Empty;
+ public string Evidence { get; set; } = string.Empty;
public DateTimeOffset? ExpirationTime { get; set; }
- public string RecordedBy { get; set; }
+ public string RecordedBy { get; set; } = string.Empty;
public string CorrelationId { get; set; } = Guid.NewGuid().ToString();
}
@@ -186,12 +207,12 @@ public class ConsentRecordRequest
public class ConsentRecordResponse
{
public bool IsSuccess { get; set; }
- public string SubjectId { get; set; }
- public string ConsentType { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
+ public string ConsentType { get; set; } = string.Empty;
public DateTimeOffset RecordedAt { get; set; }
- public string ConsentRecordId { get; set; }
+ public string ConsentRecordId { get; set; } = string.Empty;
public bool IsGranted { get; set; }
- public ErrorEnvelope Error { get; set; }
+ public ErrorEnvelope? Error { get; set; }
}
#endregion
diff --git a/src/BusinessApplications/ConvenerServices/ConvenerController.cs b/src/BusinessApplications/ConvenerServices/ConvenerController.cs
index 356199d..095cd0b 100644
--- a/src/BusinessApplications/ConvenerServices/ConvenerController.cs
+++ b/src/BusinessApplications/ConvenerServices/ConvenerController.cs
@@ -1,13 +1,12 @@
-using CognitiveMesh.Application.UseCases.ChampionDiscovery;
-using CognitiveMesh.MetacognitiveLayer.CommunityPulse;
-using CognitiveMesh.MetacognitiveLayer.CommunityPulse.Models;
+using CognitiveMesh.BusinessApplications.ConvenerServices.Ports;
+using CognitiveMesh.BusinessApplications.ConvenerServices.UseCases;
+using MetacognitiveLayer.CommunityPulse;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
-using System;
using System.Security.Claims;
-using System.Threading.Tasks;
+using static CognitiveMesh.Shared.LogSanitizer;
namespace CognitiveMesh.BusinessApplications.ConvenerServices
{
@@ -18,21 +17,35 @@ namespace CognitiveMesh.BusinessApplications.ConvenerServices
///
[ApiController]
[Route("api/v1/convener")]
- [Authorize] // All endpoints require authentication by default.
+ [Authorize]
public class ConvenerController : ControllerBase
{
private readonly ILogger _logger;
private readonly DiscoverChampionsUseCase _discoverChampionsUseCase;
private readonly CommunityPulseService _communityPulseService;
+ private readonly IInnovationSpreadPort _innovationSpreadPort;
+ private readonly ILearningCatalystPort _learningCatalystPort;
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for structured diagnostics.
+ /// Use case for champion discovery.
+ /// Service for community health metrics.
+ /// Port for innovation spread tracking.
+ /// Port for learning catalyst recommendations.
public ConvenerController(
ILogger logger,
DiscoverChampionsUseCase discoverChampionsUseCase,
- CommunityPulseService communityPulseService)
+ CommunityPulseService communityPulseService,
+ IInnovationSpreadPort innovationSpreadPort,
+ ILearningCatalystPort learningCatalystPort)
{
- _logger = logger;
- _discoverChampionsUseCase = discoverChampionsUseCase;
- _communityPulseService = communityPulseService;
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _discoverChampionsUseCase = discoverChampionsUseCase ?? throw new ArgumentNullException(nameof(discoverChampionsUseCase));
+ _communityPulseService = communityPulseService ?? throw new ArgumentNullException(nameof(communityPulseService));
+ _innovationSpreadPort = innovationSpreadPort ?? throw new ArgumentNullException(nameof(innovationSpreadPort));
+ _learningCatalystPort = learningCatalystPort ?? throw new ArgumentNullException(nameof(learningCatalystPort));
}
///
@@ -44,7 +57,7 @@ public ConvenerController(
/// endorsements, and recent activity. All data access is strictly scoped to the
/// authenticated user's tenant.
///
- /// Conforms to NFRs: Security (1), Telemetry & Audit (2), Performance (6).
+ /// Conforms to NFRs: Security (1), Telemetry & Audit (2), Performance (6).
///
/// An optional skill to filter champions by.
/// The maximum number of champions to return.
@@ -145,28 +158,112 @@ public async Task GetCommunityPulse([FromQuery] string channelId,
}
}
- // --- Placeholder Endpoints for Future Implementation ---
-
+ ///
+ /// Tracks how an innovation (idea) has spread through the organization.
+ ///
+ ///
+ /// Returns adoption lineage, virality metrics, and current diffusion phase for a specific idea.
+ /// Implements the Innovation Spread Engine from the Convener PRD.
+ ///
+ /// Conforms to NFRs: Security (1), Telemetry & Audit (2), Performance (6).
+ ///
+ /// The unique identifier for the innovation/idea to track.
+ /// Innovation spread analysis including adoption lineage and metrics.
[HttpGet("innovation/spread/{ideaId}")]
- [ProducesResponseType(StatusCodes.Status501NotImplemented)]
- public IActionResult GetInnovationSpread(string ideaId)
+ [ProducesResponseType(typeof(InnovationSpreadResult), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(StatusCodes.Status401Unauthorized)]
+ [ProducesResponseType(StatusCodes.Status404NotFound)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task GetInnovationSpread(string ideaId)
{
- return StatusCode(StatusCodes.Status501NotImplemented, "Innovation Spread tracking is not yet implemented.");
+ try
+ {
+ var tenantId = GetTenantIdFromClaims();
+ if (tenantId == null)
+ {
+ return Unauthorized("Tenant ID is missing from the authentication token.");
+ }
+
+ if (string.IsNullOrWhiteSpace(ideaId))
+ {
+ return BadRequest("The 'ideaId' path parameter is required.");
+ }
+
+ _logger.LogInformation(
+ "Tracking innovation spread for Idea '{IdeaId}' in Tenant '{TenantId}'.",
+ Sanitize(ideaId), Sanitize(tenantId));
+
+ var result = await _innovationSpreadPort.GetInnovationSpreadAsync(ideaId, tenantId);
+ if (result == null)
+ {
+ return NotFound($"No innovation data found for idea '{ideaId}'.");
+ }
+
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "An unhandled exception occurred during innovation spread tracking for Idea '{IdeaId}'.", Sanitize(ideaId));
+ return StatusCode(StatusCodes.Status500InternalServerError, "An internal error occurred while processing your request.");
+ }
}
+ ///
+ /// Generates personalized learning catalyst recommendations for the authenticated user.
+ ///
+ ///
+ /// Analyzes the user's skill profile, identifies gaps, and recommends targeted learning
+ /// activities sourced from champions and curated content. Links contributions to outcomes.
+ ///
+ /// Conforms to NFRs: Security (1), Privacy (4), Telemetry & Audit (2).
+ ///
+ /// Optional request body with focus areas and result limits.
+ /// Curated learning recommendations with identified skill gaps.
[HttpPost("learning/catalysts/recommend")]
- [ProducesResponseType(StatusCodes.Status501NotImplemented)]
- public IActionResult GetLearningRecommendations()
+ [ProducesResponseType(typeof(LearningCatalystResponse), StatusCodes.Status200OK)]
+ [ProducesResponseType(StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(StatusCodes.Status401Unauthorized)]
+ [ProducesResponseType(StatusCodes.Status500InternalServerError)]
+ public async Task GetLearningRecommendations([FromBody] LearningCatalystRequest? request)
{
- return StatusCode(StatusCodes.Status501NotImplemented, "Learning Catalyst recommendations are not yet implemented.");
+ try
+ {
+ var tenantId = GetTenantIdFromClaims();
+ if (tenantId == null)
+ {
+ return Unauthorized("Tenant ID is missing from the authentication token.");
+ }
+
+ var userId = User.FindFirstValue("sub") ?? User.FindFirstValue(ClaimTypes.NameIdentifier);
+ if (string.IsNullOrWhiteSpace(userId))
+ {
+ return Unauthorized("User ID is missing from the authentication token.");
+ }
+
+ request ??= new LearningCatalystRequest();
+ request.TenantId = tenantId;
+ request.UserId = userId;
+
+ _logger.LogInformation(
+ "Generating learning catalyst recommendations for User '{UserId}' in Tenant '{TenantId}'.",
+ userId, tenantId);
+
+ var response = await _learningCatalystPort.GetRecommendationsAsync(request);
+ return Ok(response);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "An unhandled exception occurred during learning catalyst recommendation.");
+ return StatusCode(StatusCodes.Status500InternalServerError, "An internal error occurred while processing your request.");
+ }
}
///
/// Helper method to securely retrieve the Tenant ID from the user's claims.
///
- private string GetTenantIdFromClaims()
+ private string? GetTenantIdFromClaims()
{
- // In a real application, the claim type would be a constant.
return User.FindFirstValue("tenant_id");
}
}
diff --git a/src/BusinessApplications/ConvenerServices/ConvenerServices.csproj b/src/BusinessApplications/ConvenerServices/ConvenerServices.csproj
index beb47b2..9bb6312 100644
--- a/src/BusinessApplications/ConvenerServices/ConvenerServices.csproj
+++ b/src/BusinessApplications/ConvenerServices/ConvenerServices.csproj
@@ -5,9 +5,12 @@
enable
enable
true
- $(NoWarn);1591
+
+
+
+
@@ -15,6 +18,7 @@
+
diff --git a/src/BusinessApplications/ConvenerServices/Ports/IConsentPort.cs b/src/BusinessApplications/ConvenerServices/Ports/IConsentPort.cs
index d842f5c..f0ef6d0 100644
--- a/src/BusinessApplications/ConvenerServices/Ports/IConsentPort.cs
+++ b/src/BusinessApplications/ConvenerServices/Ports/IConsentPort.cs
@@ -13,24 +13,24 @@ public class ConsentRequest
///
/// The ID of the user giving consent.
///
- public string UserId { get; set; }
+ public string UserId { get; set; } = string.Empty;
///
/// The tenant ID to which this consent applies.
///
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
///
/// The specific type of consent being granted or denied (e.g., "NotifyOnProjectOpportunities", "AutoCreateCollaborationSpaces").
/// This should be a well-known, documented string.
///
- public string ConsentType { get; set; }
+ public string ConsentType { get; set; } = string.Empty;
///
/// An optional identifier to narrow the scope of the consent (e.g., a specific project ID or channel ID).
/// If null or empty, the consent is considered global for the given type.
///
- public string Scope { get; set; }
+ public string Scope { get; set; } = string.Empty;
///
/// True if consent is being granted; false if it is being denied or revoked.
@@ -40,24 +40,24 @@ public class ConsentRequest
///
/// The source of the consent action, used for auditing (e.g., "Widget:ChampionFinder", "UserProfileSettings").
///
- public string Source { get; set; }
+ public string Source { get; set; } = string.Empty;
///
/// Optional evidence, such as a link to the version of the privacy policy or terms the user agreed to.
///
- public string Evidence { get; set; }
+ public string Evidence { get; set; } = string.Empty;
///
/// The level of consent being captured (e.g., "Standard", "LegallyBinding", "ExplicitGDPRConsent").
/// Use this to distinguish between advisory consent and those required by regulation.
///
- public string ConsentLevel { get; set; }
+ public string ConsentLevel { get; set; } = string.Empty;
///
/// (Optional) The legal framework that governs this consent (e.g., "GDPR", "EUAIAct", "HIPAA").
/// When provided, downstream services can enforce jurisdiction-specific requirements.
///
- public string LegalFramework { get; set; }
+ public string LegalFramework { get; set; } = string.Empty;
///
/// (Optional) A timestamp indicating when this consent expires. null means no expiration.
@@ -71,25 +71,34 @@ public class ConsentRequest
///
public class ConsentRecord
{
- public string ConsentId { get; set; }
- public string UserId { get; set; }
- public string TenantId { get; set; }
- public string ConsentType { get; set; }
- public string Scope { get; set; }
+ /// Gets or sets the unique consent record identifier.
+ public string ConsentId { get; set; } = string.Empty;
+ /// Gets or sets the user identifier.
+ public string UserId { get; set; } = string.Empty;
+ /// Gets or sets the tenant identifier.
+ public string TenantId { get; set; } = string.Empty;
+ /// Gets or sets the type of consent.
+ public string ConsentType { get; set; } = string.Empty;
+ /// Gets or sets the scope of the consent.
+ public string Scope { get; set; } = string.Empty;
+ /// Gets or sets whether consent was granted.
public bool IsGranted { get; set; }
+ /// Gets or sets the timestamp of the consent decision.
public DateTimeOffset Timestamp { get; set; }
- public string Source { get; set; }
- public string Evidence { get; set; }
+ /// Gets or sets the source of the consent action.
+ public string Source { get; set; } = string.Empty;
+ /// Gets or sets the consent evidence reference.
+ public string Evidence { get; set; } = string.Empty;
///
/// The level of consent granted (mirrors ).
///
- public string ConsentLevel { get; set; }
+ public string ConsentLevel { get; set; } = string.Empty;
///
/// The legal framework (e.g., GDPR, EUAIAct) relevant for this consent record.
///
- public string LegalFramework { get; set; }
+ public string LegalFramework { get; set; } = string.Empty;
///
/// When the consent expires (null if it does not expire).
@@ -107,20 +116,24 @@ public class ConsentRecord
///
public class ValidateConsentRequest
{
- public string UserId { get; set; }
- public string TenantId { get; set; }
- public string RequiredConsentType { get; set; }
- public string Scope { get; set; } // Optional scope to check
+ /// Gets or sets the user identifier to validate.
+ public string UserId { get; set; } = string.Empty;
+ /// Gets or sets the tenant identifier.
+ public string TenantId { get; set; } = string.Empty;
+ /// Gets or sets the required consent type to check.
+ public string RequiredConsentType { get; set; } = string.Empty;
+ /// Gets or sets the optional scope to validate.
+ public string Scope { get; set; } = string.Empty;
///
/// (Optional) Specifies the minimum consent level that must have been granted.
///
- public string RequiredConsentLevel { get; set; }
+ public string RequiredConsentLevel { get; set; } = string.Empty;
///
/// (Optional) Specifies the regulatory framework that the consent must satisfy.
///
- public string RequiredLegalFramework { get; set; }
+ public string RequiredLegalFramework { get; set; } = string.Empty;
}
///
@@ -141,7 +154,7 @@ public class ValidateConsentResponse
///
/// The ID of the relevant consent record, if one exists.
///
- public string ConsentRecordId { get; set; }
+ public string ConsentRecordId { get; set; } = string.Empty;
}
///
@@ -168,14 +181,21 @@ public static class ConsentTypes
// ------------------------------------------------------------------
// GDPR-specific consent types
// ------------------------------------------------------------------
+
+ /// Consent for GDPR data processing.
public const string GDPRDataProcessing = "GDPRDataProcessing";
+ /// Consent for GDPR data transfer outside the EU.
public const string GDPRDataTransferOutsideEU = "GDPRDataTransferOutsideEU";
+ /// Consent for GDPR automated decision-making.
public const string GDPRAutomatedDecisionMaking = "GDPRAutomatedDecisionMaking";
// ------------------------------------------------------------------
// EU AI Act-specific consent types
// ------------------------------------------------------------------
+
+ /// Consent for EU AI Act high-risk system usage.
public const string EUAIActHighRiskSystem = "EUAIActHighRiskSystem";
+ /// Consent for EU AI Act biometric identification.
public const string EUAIActBiometricIdentification = "EUAIActBiometricIdentification";
}
}
@@ -238,6 +258,6 @@ public interface IConsentPort
/// The type of consent to revoke.
/// The optional scope of the consent to revoke.
/// A task that represents the asynchronous operation. The task result is true if the revocation was successful.
- Task RevokeConsentAsync(string userId, string tenantId, string consentType, string scope = null);
+ Task RevokeConsentAsync(string userId, string tenantId, string consentType, string? scope = null);
}
}
diff --git a/src/BusinessApplications/ConvenerServices/Ports/IInnovationSpreadPort.cs b/src/BusinessApplications/ConvenerServices/Ports/IInnovationSpreadPort.cs
new file mode 100644
index 0000000..9d39d5c
--- /dev/null
+++ b/src/BusinessApplications/ConvenerServices/Ports/IInnovationSpreadPort.cs
@@ -0,0 +1,82 @@
+namespace CognitiveMesh.BusinessApplications.ConvenerServices.Ports;
+
+///
+/// Port interface for tracking how innovations spread through an organization.
+/// Implements the Innovation Spread Engine concept from the Convener PRD:
+/// detect, log, and propagate innovations; track adoption lineage.
+///
+public interface IInnovationSpreadPort
+{
+ ///
+ /// Gets the spread metrics and adoption lineage for a specific idea.
+ ///
+ /// The unique identifier of the idea to track.
+ /// Tenant scope for data isolation.
+ /// Cancellation token.
+ /// The innovation spread analysis for the idea.
+ Task GetInnovationSpreadAsync(
+ string ideaId, string tenantId, CancellationToken cancellationToken = default);
+}
+
+///
+/// Result of an innovation spread analysis.
+///
+public class InnovationSpreadResult
+{
+ /// The idea being tracked.
+ public string IdeaId { get; set; } = string.Empty;
+
+ /// Original author/proposer of the idea.
+ public string OriginatorUserId { get; set; } = string.Empty;
+
+ /// When the idea was first proposed.
+ public DateTimeOffset ProposedAt { get; set; }
+
+ /// Number of teams/individuals who have adopted this idea.
+ public int AdoptionCount { get; set; }
+
+ /// Adoption rate as a percentage of the organization.
+ public double AdoptionRatePercent { get; set; }
+
+ /// Virality score indicating how quickly the idea is spreading (0.0–1.0).
+ public double ViralityScore { get; set; }
+
+ /// Adoption lineage — ordered list of adoption events.
+ public List AdoptionLineage { get; set; } = new();
+
+ /// Current spread phase (Seed, Growth, Maturity, Saturation).
+ public SpreadPhase Phase { get; set; }
+}
+
+///
+/// A single adoption event in the innovation spread lineage.
+///
+public class AdoptionEvent
+{
+ /// User or team ID that adopted the idea.
+ public string AdopterUserId { get; set; } = string.Empty;
+
+ /// When the adoption occurred.
+ public DateTimeOffset AdoptedAt { get; set; }
+
+ /// Who introduced/referred the idea to this adopter.
+ public string? ReferredByUserId { get; set; }
+
+ /// Context of adoption (e.g., "sprint planning", "tech talk").
+ public string? AdoptionContext { get; set; }
+}
+
+///
+/// Innovation diffusion phases based on Rogers' Diffusion of Innovations.
+///
+public enum SpreadPhase
+{
+ /// Initial proposal, few adopters.
+ Seed,
+ /// Accelerating adoption.
+ Growth,
+ /// Broad adoption, rate slowing.
+ Maturity,
+ /// Most eligible parties have adopted.
+ Saturation
+}
diff --git a/src/BusinessApplications/ConvenerServices/Ports/ILearningCatalystPort.cs b/src/BusinessApplications/ConvenerServices/Ports/ILearningCatalystPort.cs
new file mode 100644
index 0000000..e798d59
--- /dev/null
+++ b/src/BusinessApplications/ConvenerServices/Ports/ILearningCatalystPort.cs
@@ -0,0 +1,113 @@
+namespace CognitiveMesh.BusinessApplications.ConvenerServices.Ports;
+
+///
+/// Port interface for the Learning Catalyst feature.
+/// Curates, tags, and pushes learning recommendations;
+/// links contributions to learning outcomes.
+///
+public interface ILearningCatalystPort
+{
+ ///
+ /// Generates personalized learning catalyst recommendations for a user.
+ ///
+ /// The recommendation request with user and tenant context.
+ /// Cancellation token.
+ /// A set of curated learning recommendations.
+ Task GetRecommendationsAsync(
+ LearningCatalystRequest request, CancellationToken cancellationToken = default);
+}
+
+///
+/// Request DTO for learning catalyst recommendations.
+///
+public class LearningCatalystRequest
+{
+ /// Tenant scope for data isolation.
+ public string TenantId { get; set; } = string.Empty;
+
+ /// User requesting learning recommendations.
+ public string UserId { get; set; } = string.Empty;
+
+ /// Optional skill areas to focus recommendations on.
+ public List FocusAreas { get; set; } = new();
+
+ /// Maximum number of recommendations to return.
+ public int MaxRecommendations { get; set; } = 5;
+}
+
+///
+/// Response DTO containing curated learning catalyst recommendations.
+///
+public class LearningCatalystResponse
+{
+ /// The user these recommendations are for.
+ public string UserId { get; set; } = string.Empty;
+
+ /// Curated learning recommendations, ordered by relevance.
+ public List Recommendations { get; set; } = new();
+
+ /// Skill gaps identified from the user's profile.
+ public List IdentifiedGaps { get; set; } = new();
+}
+
+///
+/// A single learning recommendation from the catalyst engine.
+///
+public class LearningRecommendation
+{
+ /// Title of the recommended learning activity.
+ public string Title { get; set; } = string.Empty;
+
+ /// Description of what the learner will gain.
+ public string Description { get; set; } = string.Empty;
+
+ /// Type of learning activity (Article, Course, Mentorship, Project, PeerSession).
+ public LearningActivityType ActivityType { get; set; }
+
+ /// Relevance score (0.0–1.0).
+ public double RelevanceScore { get; set; }
+
+ /// Skill area this recommendation targets.
+ public string TargetSkill { get; set; } = string.Empty;
+
+ /// Estimated time commitment in minutes.
+ public int EstimatedMinutes { get; set; }
+
+ /// Champion who contributed this knowledge, if applicable.
+ public string? ContributorUserId { get; set; }
+}
+
+///
+/// Represents an identified skill gap for a user.
+///
+public class SkillGap
+{
+ /// Name of the skill with a gap.
+ public string SkillName { get; set; } = string.Empty;
+
+ /// Current proficiency level (0.0–1.0).
+ public double CurrentLevel { get; set; }
+
+ /// Target proficiency level (0.0–1.0).
+ public double TargetLevel { get; set; }
+
+ /// Priority of closing this gap (Critical, High, Medium, Low).
+ public string Priority { get; set; } = "Medium";
+}
+
+///
+/// Types of learning activities that can be recommended.
+///
+public enum LearningActivityType
+{
+ /// Written article or blog post.
+ Article,
+ /// Structured course or training module.
+ Course,
+ /// Mentorship session with a champion.
+ Mentorship,
+ /// Hands-on project or exercise.
+ Project,
+ /// Peer learning session or workshop.
+ PeerSession
+}
diff --git a/src/BusinessApplications/ConvenerServices/Ports/IManualAdjudicationPort.cs b/src/BusinessApplications/ConvenerServices/Ports/IManualAdjudicationPort.cs
index 00df5d3..a843c5f 100644
--- a/src/BusinessApplications/ConvenerServices/Ports/IManualAdjudicationPort.cs
+++ b/src/BusinessApplications/ConvenerServices/Ports/IManualAdjudicationPort.cs
@@ -49,27 +49,27 @@ public class ManualReviewRequest
///
/// The tenant context for this review request.
///
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
///
/// The user or system submitting the review request.
///
- public string RequestedBy { get; set; }
+ public string RequestedBy { get; set; } = string.Empty;
///
/// The type of review being requested (e.g., "EmployabilityHighRisk", "OrgBlindnessAlert").
///
- public string ReviewType { get; set; }
+ public string ReviewType { get; set; } = string.Empty;
///
/// The subject of the review (e.g., a user ID, organization ID, or other entity).
///
- public string SubjectId { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
///
/// The subject type (e.g., "User", "Organization", "Team").
///
- public string SubjectType { get; set; }
+ public string SubjectType { get; set; } = string.Empty;
///
/// The priority of the review request.
@@ -79,12 +79,12 @@ public class ManualReviewRequest
///
/// A summary of the case for quick reference.
///
- public string Summary { get; set; }
+ public string Summary { get; set; } = string.Empty;
///
/// Detailed information about the case.
///
- public string Details { get; set; }
+ public string Details { get; set; } = string.Empty;
///
/// Additional context data relevant to the review.
@@ -110,7 +110,7 @@ public class ManualReviewResponse
///
/// The unique identifier for the review case.
///
- public string ReviewId { get; set; }
+ public string ReviewId { get; set; } = string.Empty;
///
/// The current status of the review.
@@ -130,7 +130,7 @@ public class ManualReviewResponse
///
/// The correlation ID from the original request.
///
- public string CorrelationId { get; set; }
+ public string CorrelationId { get; set; } = string.Empty;
}
///
@@ -141,12 +141,12 @@ public class ReviewDecision
///
/// The unique identifier of the review case.
///
- public string ReviewId { get; set; }
+ public string ReviewId { get; set; } = string.Empty;
///
/// The user making the decision.
///
- public string ReviewedBy { get; set; }
+ public string ReviewedBy { get; set; } = string.Empty;
///
/// The decision (Approved or Rejected).
@@ -156,12 +156,12 @@ public class ReviewDecision
///
/// The rationale for the decision.
///
- public string Rationale { get; set; }
+ public string Rationale { get; set; } = string.Empty;
///
/// Any additional notes or instructions.
///
- public string Notes { get; set; }
+ public string Notes { get; set; } = string.Empty;
///
/// When the decision was made.
@@ -177,52 +177,52 @@ public class ReviewRecord
///
/// The unique identifier for the review case.
///
- public string ReviewId { get; set; }
+ public string ReviewId { get; set; } = string.Empty;
///
/// The tenant context for this review.
///
- public string TenantId { get; set; }
+ public string TenantId { get; set; } = string.Empty;
///
/// The user or system that submitted the review request.
///
- public string RequestedBy { get; set; }
+ public string RequestedBy { get; set; } = string.Empty;
///
/// The type of review.
///
- public string ReviewType { get; set; }
+ public string ReviewType { get; set; } = string.Empty;
///
/// The subject of the review.
///
- public string SubjectId { get; set; }
+ public string SubjectId { get; set; } = string.Empty;
///
/// The subject type.
///
- public string SubjectType { get; set; }
+ public string SubjectType { get; set; } = string.Empty;
///
/// The priority of the review.
///
- public string Priority { get; set; }
+ public string Priority { get; set; } = string.Empty;
///
/// A summary of the case.
///
- public string Summary { get; set; }
+ public string Summary { get; set; } = string.Empty;
///
/// Detailed information about the case.
///
- public string Details { get; set; }
+ public string Details { get; set; } = string.Empty;
///
/// Additional context data.
///
- public Dictionary Context { get; set; }
+ public Dictionary Context { get; set; } = new();
///
/// When the review was submitted.
@@ -237,7 +237,7 @@ public class ReviewRecord
///
/// The user assigned to review the case, if applicable.
///
- public string AssignedTo { get; set; }
+ public string AssignedTo { get; set; } = string.Empty;
///
/// The current status of the review.
@@ -252,17 +252,17 @@ public class ReviewRecord
///
/// The user who completed the review, if applicable.
///
- public string ReviewedBy { get; set; }
+ public string ReviewedBy { get; set; } = string.Empty;
///
/// The rationale for the decision, if applicable.
///
- public string Rationale { get; set; }
+ public string Rationale { get; set; } = string.Empty;
///
/// Any additional notes or instructions.
///
- public string Notes { get; set; }
+ public string Notes { get; set; } = string.Empty;
///
/// The history of status changes and comments for this review.
@@ -272,7 +272,7 @@ public class ReviewRecord
///
/// The correlation ID for tracking this review across system boundaries.
///
- public string CorrelationId { get; set; }
+ public string CorrelationId { get; set; } = string.Empty;
}
///
@@ -288,12 +288,12 @@ public class ReviewHistoryEntry
///
/// The user who performed the action.
///
- public string UserId { get; set; }
+ public string UserId { get; set; } = string.Empty;
///
/// The action that was performed.
///
- public string Action { get; set; }
+ public string Action { get; set; } = string.Empty;
///
/// The status of the review after this action.
@@ -303,7 +303,7 @@ public class ReviewHistoryEntry
///
/// Any comments associated with this action.
///
- public string Comments { get; set; }
+ public string Comments { get; set; } = string.Empty;
}
///
@@ -387,7 +387,7 @@ public interface IManualAdjudicationPort
/// A task that represents the asynchronous operation. The task result contains an
/// enumerable collection of pending s.
///
- Task> GetPendingReviewsAsync(string tenantId, string reviewType = null);
+ Task> GetPendingReviewsAsync(string tenantId, string? reviewType = null);
///
/// Retrieves all review cases for a specific subject (e.g., user, organization).
diff --git a/src/BusinessApplications/ConvenerServices/UseCases/DiscoverChampionsUseCase.cs b/src/BusinessApplications/ConvenerServices/UseCases/DiscoverChampionsUseCase.cs
new file mode 100644
index 0000000..ab7381a
--- /dev/null
+++ b/src/BusinessApplications/ConvenerServices/UseCases/DiscoverChampionsUseCase.cs
@@ -0,0 +1,138 @@
+using FoundationLayer.ConvenerData;
+using Microsoft.Extensions.Logging;
+using static CognitiveMesh.Shared.LogSanitizer;
+
+namespace CognitiveMesh.BusinessApplications.ConvenerServices.UseCases;
+
+///
+/// Request DTO for the champion discovery use case.
+///
+public class DiscoverChampionsRequest
+{
+ /// Tenant ID for data scoping.
+ public string TenantId { get; set; } = string.Empty;
+
+ /// Optional skill filter to narrow champion discovery.
+ public string? SkillFilter { get; set; }
+
+ /// Maximum number of champions to return.
+ public int MaxResults { get; set; } = 10;
+}
+
+///
+/// Response DTO from the champion discovery use case.
+///
+public class DiscoverChampionsResponse
+{
+ /// The discovered and ranked champions.
+ public List Champions { get; set; } = new();
+
+ /// Total champions evaluated before filtering.
+ public int TotalEvaluated { get; set; }
+}
+
+///
+/// Summary view of a single champion for API responses.
+///
+public class ChampionSummary
+{
+ /// User ID of the champion.
+ public string UserId { get; set; } = string.Empty;
+
+ /// Calculated influence score.
+ public double InfluenceScore { get; set; }
+
+ /// Skills attributed to the champion.
+ public List Skills { get; set; } = new();
+
+ /// Number of significant interactions logged.
+ public int InteractionCount { get; set; }
+
+ /// Last recorded activity date.
+ public DateTimeOffset LastActiveDate { get; set; }
+}
+
+///
+/// Orchestrates champion discovery by coordinating the ChampionScorer with
+/// data retrieval and filtering. This is the application-level use case
+/// consumed by the ConvenerController.
+///
+public class DiscoverChampionsUseCase
+{
+ private readonly ILogger _logger;
+ private readonly IChampionDiscoveryPort _championDiscoveryPort;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Logger for structured diagnostics.
+ /// Port for champion data retrieval and scoring.
+ public DiscoverChampionsUseCase(
+ ILogger logger,
+ IChampionDiscoveryPort championDiscoveryPort)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _championDiscoveryPort = championDiscoveryPort ?? throw new ArgumentNullException(nameof(championDiscoveryPort));
+ }
+
+ ///
+ /// Executes the champion discovery workflow: fetch candidates, score, rank, and return.
+ ///
+ /// The discovery request with tenant scoping and filters.
+ /// Cancellation token.
+ /// A response containing ranked champion summaries.
+ public async Task ExecuteAsync(
+ DiscoverChampionsRequest request,
+ CancellationToken cancellationToken = default)
+ {
+ _logger.LogInformation(
+ "Discovering champions for Tenant '{TenantId}', Skill='{Skill}', Max={Max}",
+ Sanitize(request.TenantId), Sanitize(request.SkillFilter ?? "any"), request.MaxResults);
+
+ var candidates = await _championDiscoveryPort.GetChampionCandidatesAsync(
+ request.TenantId, request.SkillFilter, cancellationToken);
+
+ var candidateList = candidates.ToList();
+ _logger.LogDebug("Found {Count} champion candidates", candidateList.Count);
+
+ var ranked = await _championDiscoveryPort.ScoreAndRankAsync(
+ candidateList, cancellationToken);
+
+ var topChampions = ranked
+ .Take(request.MaxResults)
+ .Select(c => new ChampionSummary
+ {
+ UserId = c.UserId,
+ InfluenceScore = c.InfluenceScore,
+ Skills = c.Skills.Select(s => s.Name).ToList(),
+ InteractionCount = c.InteractionCount,
+ LastActiveDate = c.LastActiveDate
+ })
+ .ToList();
+
+ return new DiscoverChampionsResponse
+ {
+ Champions = topChampions,
+ TotalEvaluated = candidateList.Count
+ };
+ }
+}
+
+///
+/// Port interface for champion discovery operations.
+/// Abstracts the data retrieval and scoring logic from the use case.
+///
+public interface IChampionDiscoveryPort
+{
+ ///
+ /// Retrieves champion candidates for a tenant, optionally filtered by skill.
+ ///
+ Task> GetChampionCandidatesAsync(
+ string tenantId, string? skillFilter, CancellationToken cancellationToken = default);
+
+ ///
+ /// Scores and ranks champions by influence.
+ ///
+ Task> ScoreAndRankAsync(
+ IEnumerable candidates, CancellationToken cancellationToken = default);
+}
diff --git a/src/BusinessApplications/CustomerIntelligence/CustomerIntelligence.csproj b/src/BusinessApplications/CustomerIntelligence/CustomerIntelligence.csproj
index 61350d7..4c54104 100644
--- a/src/BusinessApplications/CustomerIntelligence/CustomerIntelligence.csproj
+++ b/src/BusinessApplications/CustomerIntelligence/CustomerIntelligence.csproj
@@ -9,6 +9,7 @@
+
diff --git a/src/BusinessApplications/CustomerIntelligence/CustomerIntelligenceManager.cs b/src/BusinessApplications/CustomerIntelligence/CustomerIntelligenceManager.cs
index 275767c..df5c14a 100644
--- a/src/BusinessApplications/CustomerIntelligence/CustomerIntelligenceManager.cs
+++ b/src/BusinessApplications/CustomerIntelligence/CustomerIntelligenceManager.cs
@@ -1,5 +1,7 @@
using System;
using System.Collections.Generic;
+using System.Linq;
+using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
@@ -8,7 +10,9 @@
namespace CognitiveMesh.BusinessApplications.CustomerIntelligence
{
///
- /// Manages customer intelligence operations including analysis, segmentation, and insights
+ /// Manages customer intelligence operations including analysis, segmentation, and insights.
+ /// Uses ICustomerDataPort for data retrieval, ILLMClient for AI-driven insight generation,
+ /// and IVectorDatabaseAdapter for similarity-based behavioral analysis.
///
public class CustomerIntelligenceManager : ICustomerIntelligenceManager
{
@@ -16,17 +20,30 @@ public class CustomerIntelligenceManager : ICustomerIntelligenceManager
private readonly IKnowledgeGraphManager _knowledgeGraphManager;
private readonly ILLMClient _llmClient;
private readonly IVectorDatabaseAdapter _vectorDatabase;
+ private readonly ICustomerDataPort _customerDataPort;
+ private const string CustomerVectorCollection = "customer-behaviors";
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance for structured logging.
+ /// The knowledge graph manager for relationship queries.
+ /// The LLM client for generating insights and predictions.
+ /// The vector database for behavioral similarity searches.
+ /// The port for customer data retrieval operations.
public CustomerIntelligenceManager(
ILogger logger,
IKnowledgeGraphManager knowledgeGraphManager,
ILLMClient llmClient,
- IVectorDatabaseAdapter vectorDatabase)
+ IVectorDatabaseAdapter vectorDatabase,
+ ICustomerDataPort customerDataPort)
{
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_knowledgeGraphManager = knowledgeGraphManager ?? throw new ArgumentNullException(nameof(knowledgeGraphManager));
_llmClient = llmClient ?? throw new ArgumentNullException(nameof(llmClient));
_vectorDatabase = vectorDatabase ?? throw new ArgumentNullException(nameof(vectorDatabase));
+ _customerDataPort = customerDataPort ?? throw new ArgumentNullException(nameof(customerDataPort));
}
///
@@ -40,24 +57,39 @@ public async Task GetCustomerProfileAsync(
try
{
_logger.LogInformation("Retrieving customer profile: {CustomerId}", customerId);
-
- // TODO: Implement actual customer profile retrieval
- await Task.Delay(100, cancellationToken); // Simulate work
-
- return new CustomerProfile
+
+ var profile = await _customerDataPort.GetProfileAsync(customerId, cancellationToken).ConfigureAwait(false);
+
+ if (profile is null)
{
- Id = customerId,
- Name = "Sample Customer",
- Email = "customer@example.com",
- Segments = new List { "High Value", "Frequent Buyer" },
- LifetimeValue = 5000.00m,
- LastPurchaseDate = DateTime.UtcNow.AddDays(-7),
- CreatedAt = DateTime.UtcNow.AddYears(-1),
- Metadata = new Dictionary
+ _logger.LogWarning("Customer profile not found: {CustomerId}", customerId);
+ throw new KeyNotFoundException($"Customer profile not found for ID: {customerId}");
+ }
+
+ // Enrich profile with knowledge graph relationships
+ var relationships = await _knowledgeGraphManager.QueryAsync(
+ $"MATCH (c:Customer {{id: '{customerId}'}})-[r]->(s:Segment) RETURN s",
+ cancellationToken).ConfigureAwait(false);
+
+ foreach (var relation in relationships)
+ {
+ if (relation.TryGetValue("name", out var segmentName) && segmentName is string name)
{
- ["preferences"] = new { category = "Electronics", brand = "Premium" }
+ if (!profile.Segments.Contains(name))
+ {
+ profile.Segments.Add(name);
+ }
}
- };
+ }
+
+ profile.UpdatedAt = DateTime.UtcNow;
+
+ _logger.LogInformation("Successfully retrieved customer profile: {CustomerId}", customerId);
+ return profile;
+ }
+ catch (KeyNotFoundException)
+ {
+ throw;
}
catch (Exception ex)
{
@@ -76,42 +108,14 @@ public async Task> GetCustomerSegmentsAsync(
try
{
- _logger.LogInformation("Retrieving customer segments");
-
- // TODO: Implement actual segment retrieval logic
- await Task.Delay(100, cancellationToken); // Simulate work
-
- return new[]
- {
- new CustomerSegment
- {
- Id = "segment-1",
- Name = "High-Value Customers",
- Description = "Customers with high lifetime value",
- CustomerCount = 150,
- AverageValue = 2500.00m,
- CreatedAt = DateTime.UtcNow.AddMonths(-1),
- Rules = new Dictionary
- {
- ["minLifetimeValue"] = 1000.00m,
- ["minOrderCount"] = 3
- }
- },
- new CustomerSegment
- {
- Id = "segment-2",
- Name = "At-Risk Customers",
- Description = "Customers who haven't purchased recently",
- CustomerCount = 75,
- AverageValue = 500.00m,
- CreatedAt = DateTime.UtcNow.AddMonths(-2),
- Rules = new Dictionary
- {
- ["maxDaysSinceLastPurchase"] = 90,
- ["minDaysSinceLastPurchase"] = 30
- }
- }
- };
+ _logger.LogInformation("Retrieving customer segments with filter: {NameFilter}, limit: {Limit}",
+ query.NameContains ?? "(none)", query.Limit);
+
+ var segments = await _customerDataPort.QuerySegmentsAsync(query, cancellationToken).ConfigureAwait(false);
+ var segmentList = segments.ToList();
+
+ _logger.LogInformation("Retrieved {SegmentCount} customer segments", segmentList.Count);
+ return segmentList;
}
catch (Exception ex)
{
@@ -131,48 +135,43 @@ public async Task> GenerateCustomerInsightsAsync(
try
{
- _logger.LogInformation("Generating insights for customer: {CustomerId}", customerId);
-
- // TODO: Implement actual insight generation logic
- await Task.Delay(150, cancellationToken); // Simulate work
-
+ _logger.LogInformation("Generating insights for customer: {CustomerId}, types: {InsightType}",
+ customerId, insightType);
+
+ var interactions = await _customerDataPort.GetInteractionHistoryAsync(customerId, cancellationToken)
+ .ConfigureAwait(false);
+ var interactionList = interactions.ToList();
+
+ if (interactionList.Count == 0)
+ {
+ _logger.LogWarning("No interaction history found for customer: {CustomerId}", customerId);
+ return Enumerable.Empty();
+ }
+
var insights = new List();
-
+
if (insightType.HasFlag(InsightType.PurchasePatterns))
{
- insights.Add(new CustomerInsight
+ var purchaseInsight = await GeneratePurchaseInsightAsync(customerId, interactionList, cancellationToken)
+ .ConfigureAwait(false);
+ if (purchaseInsight is not null)
{
- Type = InsightType.PurchasePatterns,
- Title = "Frequent Purchase Category",
- Description = "This customer frequently purchases Electronics",
- Confidence = 0.85f,
- GeneratedAt = DateTime.UtcNow,
- Metadata = new Dictionary
- {
- ["category"] = "Electronics",
- ["purchaseCount"] = 12,
- ["totalSpent"] = 2450.00m
- }
- });
+ insights.Add(purchaseInsight);
+ }
}
-
+
if (insightType.HasFlag(InsightType.BehavioralPatterns))
{
- insights.Add(new CustomerInsight
+ var behavioralInsight = await GenerateBehavioralInsightAsync(customerId, interactionList, cancellationToken)
+ .ConfigureAwait(false);
+ if (behavioralInsight is not null)
{
- Type = InsightType.BehavioralPatterns,
- Title = "Preferred Shopping Time",
- Description = "This customer typically shops in the evening",
- Confidence = 0.75f,
- GeneratedAt = DateTime.UtcNow,
- Metadata = new Dictionary
- {
- ["preferredHour"] = 19,
- ["confidence"] = 0.75
- }
- });
+ insights.Add(behavioralInsight);
+ }
}
-
+
+ _logger.LogInformation("Generated {InsightCount} insights for customer: {CustomerId}",
+ insights.Count, customerId);
return insights;
}
catch (Exception ex)
@@ -193,28 +192,74 @@ public async Task PredictCustomerBehaviorAsync(
try
{
- _logger.LogInformation("Predicting behavior for customer: {CustomerId}, type: {PredictionType}",
+ _logger.LogInformation("Predicting behavior for customer: {CustomerId}, type: {PredictionType}",
customerId, predictionType);
-
- // TODO: Implement actual prediction logic
- await Task.Delay(100, cancellationToken); // Simulate work
-
- return new CustomerPrediction
+
+ // Retrieve behavioral features for prediction
+ var features = await _customerDataPort.GetBehavioralFeaturesAsync(customerId, cancellationToken)
+ .ConfigureAwait(false);
+
+ // Build a feature vector from the behavioral features
+ var featureVector = features.Values.Select(v => (float)v).ToArray();
+
+ // Find similar customer behaviors using vector similarity
+ var similarBehaviors = Enumerable.Empty<(string Id, float Score)>();
+ if (featureVector.Length > 0)
+ {
+ similarBehaviors = await _vectorDatabase.SearchVectorsAsync(
+ CustomerVectorCollection, featureVector, limit: 20, cancellationToken: cancellationToken)
+ .ConfigureAwait(false);
+ }
+
+ // Use LLM to interpret the behavioral data and produce a prediction
+ var featureSummary = string.Join(", ", features.Select(kv => $"{kv.Key}={kv.Value:F2}"));
+ var similarCount = similarBehaviors.Count();
+ var avgSimilarity = similarBehaviors.Any() ? similarBehaviors.Average(s => s.Score) : 0f;
+
+ var prompt = $@"Analyze the following customer behavioral features and predict the {predictionType} outcome.
+
+Customer ID: {customerId}
+Behavioral features: {featureSummary}
+Similar customer profiles found: {similarCount} (average similarity: {avgSimilarity:F2})
+
+Provide:
+1. A predicted value between 0.0 and 1.0
+2. A confidence score between 0.0 and 1.0
+3. A brief explanation
+
+Format your response as:
+PredictedValue: [value]
+Confidence: [value]
+Explanation: [explanation]";
+
+ var llmResponse = await _llmClient.GenerateCompletionAsync(
+ prompt, temperature: 0.3f, maxTokens: 300, cancellationToken: cancellationToken)
+ .ConfigureAwait(false);
+
+ var (predictedValue, confidence, explanation) = ParsePredictionResponse(llmResponse, predictionType);
+
+ var prediction = new CustomerPrediction
{
CustomerId = customerId,
Type = predictionType,
- PredictedValue = predictionType == PredictionType.Churn ? 0.65f : 0.78f,
- Confidence = 0.82f,
+ PredictedValue = predictedValue,
+ Confidence = confidence,
GeneratedAt = DateTime.UtcNow,
- Explanation = predictionType == PredictionType.Churn
- ? "Moderate risk of churn based on recent activity"
- : "High likelihood of making a purchase in the next 30 days",
+ Explanation = explanation,
Metadata = new Dictionary
{
- ["timeframe"] = "30d",
- ["modelVersion"] = "1.0.0"
+ ["featureCount"] = features.Count,
+ ["similarProfileCount"] = similarCount,
+ ["averageSimilarity"] = avgSimilarity,
+ ["modelName"] = _llmClient.ModelName
}
};
+
+ _logger.LogInformation(
+ "Prediction completed for customer {CustomerId}: {PredictionType} = {PredictedValue}, confidence = {Confidence}",
+ customerId, predictionType, predictedValue, confidence);
+
+ return prediction;
}
catch (Exception ex)
{
@@ -222,6 +267,158 @@ public async Task PredictCustomerBehaviorAsync(
throw;
}
}
+
+ private async Task GeneratePurchaseInsightAsync(
+ string customerId,
+ List> interactions,
+ CancellationToken cancellationToken)
+ {
+ var interactionSummary = JsonSerializer.Serialize(interactions.Take(50));
+
+ var prompt = $@"Analyze the following customer interaction data and identify purchase patterns.
+
+Customer ID: {customerId}
+Interaction count: {interactions.Count}
+Recent interactions (sample): {interactionSummary}
+
+Provide:
+1. A concise title for the purchase pattern insight
+2. A description of the pattern
+3. A confidence score between 0.0 and 1.0
+
+Format your response as:
+Title: [title]
+Description: [description]
+Confidence: [score]";
+
+ var response = await _llmClient.GenerateCompletionAsync(
+ prompt, temperature: 0.4f, maxTokens: 400, cancellationToken: cancellationToken)
+ .ConfigureAwait(false);
+
+ var (title, description, confidence) = ParseInsightResponse(response);
+
+ return new CustomerInsight
+ {
+ Type = InsightType.PurchasePatterns,
+ Title = title,
+ Description = description,
+ Confidence = confidence,
+ GeneratedAt = DateTime.UtcNow,
+ Metadata = new Dictionary
+ {
+ ["interactionCount"] = interactions.Count,
+ ["modelName"] = _llmClient.ModelName
+ }
+ };
+ }
+
+ private async Task GenerateBehavioralInsightAsync(
+ string customerId,
+ List> interactions,
+ CancellationToken cancellationToken)
+ {
+ var interactionSummary = JsonSerializer.Serialize(interactions.Take(50));
+
+ var prompt = $@"Analyze the following customer interaction data and identify behavioral patterns.
+
+Customer ID: {customerId}
+Interaction count: {interactions.Count}
+Recent interactions (sample): {interactionSummary}
+
+Provide:
+1. A concise title for the behavioral pattern insight
+2. A description of the pattern
+3. A confidence score between 0.0 and 1.0
+
+Format your response as:
+Title: [title]
+Description: [description]
+Confidence: [score]";
+
+ var response = await _llmClient.GenerateCompletionAsync(
+ prompt, temperature: 0.4f, maxTokens: 400, cancellationToken: cancellationToken)
+ .ConfigureAwait(false);
+
+ var (title, description, confidence) = ParseInsightResponse(response);
+
+ return new CustomerInsight
+ {
+ Type = InsightType.BehavioralPatterns,
+ Title = title,
+ Description = description,
+ Confidence = confidence,
+ GeneratedAt = DateTime.UtcNow,
+ Metadata = new Dictionary
+ {
+ ["interactionCount"] = interactions.Count,
+ ["modelName"] = _llmClient.ModelName
+ }
+ };
+ }
+
+ private static (string Title, string Description, float Confidence) ParseInsightResponse(string response)
+ {
+ var title = "Customer Insight";
+ var description = response;
+ var confidence = 0.7f;
+
+ var lines = response.Split('\n', StringSplitOptions.RemoveEmptyEntries);
+ foreach (var line in lines)
+ {
+ var trimmed = line.Trim();
+ if (trimmed.StartsWith("Title:", StringComparison.OrdinalIgnoreCase))
+ {
+ title = trimmed["Title:".Length..].Trim();
+ }
+ else if (trimmed.StartsWith("Description:", StringComparison.OrdinalIgnoreCase))
+ {
+ description = trimmed["Description:".Length..].Trim();
+ }
+ else if (trimmed.StartsWith("Confidence:", StringComparison.OrdinalIgnoreCase))
+ {
+ if (float.TryParse(trimmed["Confidence:".Length..].Trim(), out var conf))
+ {
+ confidence = Math.Clamp(conf, 0f, 1f);
+ }
+ }
+ }
+
+ return (title, description, confidence);
+ }
+
+ private static (float PredictedValue, float Confidence, string Explanation) ParsePredictionResponse(
+ string response, PredictionType predictionType)
+ {
+ var predictedValue = predictionType == PredictionType.Churn ? 0.5f : 0.5f;
+ var confidence = 0.7f;
+ var explanation = response;
+
+ var lines = response.Split('\n', StringSplitOptions.RemoveEmptyEntries);
+ foreach (var line in lines)
+ {
+ var trimmed = line.Trim();
+ if (trimmed.StartsWith("PredictedValue:", StringComparison.OrdinalIgnoreCase))
+ {
+ if (float.TryParse(trimmed["PredictedValue:".Length..].Trim(), out var pv))
+ {
+ predictedValue = Math.Clamp(pv, 0f, 1f);
+ }
+ }
+ else if (trimmed.StartsWith("Confidence:", StringComparison.OrdinalIgnoreCase))
+ {
+ if (float.TryParse(trimmed["Confidence:".Length..].Trim(), out var conf))
+ {
+ confidence = Math.Clamp(conf, 0f, 1f);
+ }
+ }
+ else if (trimmed.StartsWith("Explanation:", StringComparison.OrdinalIgnoreCase))
+ {
+ explanation = trimmed["Explanation:".Length..].Trim();
+ }
+ }
+
+ return (predictedValue, confidence, explanation);
+ }
}
///
@@ -232,43 +429,43 @@ public class CustomerProfile
///
/// Unique identifier for the customer
///
- public string Id { get; set; }
-
+ public string Id { get; set; } = string.Empty;
+
///
/// Customer's full name
///
- public string Name { get; set; }
-
+ public string Name { get; set; } = string.Empty;
+
///
/// Customer's email address
///
- public string Email { get; set; }
-
+ public string Email { get; set; } = string.Empty;
+
///
/// Segments the customer belongs to
///
public List Segments { get; set; } = new();
-
+
///
/// Customer's lifetime value
///
public decimal LifetimeValue { get; set; }
-
+
///
/// Date of last purchase
///
public DateTime? LastPurchaseDate { get; set; }
-
+
///
/// When the customer was first seen
///
public DateTime CreatedAt { get; set; }
-
+
///
/// When the profile was last updated
///
public DateTime? UpdatedAt { get; set; }
-
+
///
/// Additional metadata
///
@@ -283,38 +480,38 @@ public class CustomerSegment
///
/// Unique identifier for the segment
///
- public string Id { get; set; }
-
+ public string Id { get; set; } = string.Empty;
+
///
/// Name of the segment
///
- public string Name { get; set; }
-
+ public string Name { get; set; } = string.Empty;
+
///
/// Description of the segment
///
- public string Description { get; set; }
-
+ public string Description { get; set; } = string.Empty;
+
///
/// Number of customers in this segment
///
public int CustomerCount { get; set; }
-
+
///
/// Average value of customers in this segment
///
public decimal AverageValue { get; set; }
-
+
///
/// When the segment was created
///
public DateTime CreatedAt { get; set; }
-
+
///
/// When the segment was last updated
///
public DateTime? UpdatedAt { get; set; }
-
+
///
/// Rules that define this segment
///
@@ -330,27 +527,27 @@ public class CustomerInsight
/// Type of insight
///
public InsightType Type { get; set; }
-
+
///
/// Title of the insight
///
- public string Title { get; set; }
-
+ public string Title { get; set; } = string.Empty;
+
///
/// Detailed description of the insight
///
- public string Description { get; set; }
-
+ public string Description { get; set; } = string.Empty;
+
///
/// Confidence score (0-1)
///
public float Confidence { get; set; }
-
+
///
/// When the insight was generated
///
public DateTime GeneratedAt { get; set; }
-
+
///
/// Additional metadata
///
@@ -365,33 +562,33 @@ public class CustomerPrediction
///
/// ID of the customer
///
- public string CustomerId { get; set; }
-
+ public string CustomerId { get; set; } = string.Empty;
+
///
/// Type of prediction
///
public PredictionType Type { get; set; }
-
+
///
/// Predicted value (e.g., probability of churn)
///
public float PredictedValue { get; set; }
-
+
///
/// Confidence in the prediction (0-1)
///
public float Confidence { get; set; }
-
+
///
/// Explanation of the prediction
///
- public string Explanation { get; set; }
-
+ public string Explanation { get; set; } = string.Empty;
+
///
/// When the prediction was made
///
public DateTime GeneratedAt { get; set; }
-
+
///
/// Additional metadata
///
@@ -406,18 +603,18 @@ public class CustomerSegmentQuery
///
/// Filter by segment name
///
- public string NameContains { get; set; }
-
+ public string? NameContains { get; set; }
+
///
/// Minimum number of customers in the segment
///
public int? MinCustomerCount { get; set; }
-
+
///
/// Maximum number of results to return
///
public int Limit { get; set; } = 100;
-
+
///
/// Sort order
///
@@ -434,17 +631,17 @@ public enum InsightType
/// No specific type
///
None = 0,
-
+
///
/// Purchase patterns and history
///
PurchasePatterns = 1,
-
+
///
/// Behavioral patterns
///
BehavioralPatterns = 2,
-
+
///
/// All insight types
///
@@ -460,12 +657,12 @@ public enum PredictionType
/// Likelihood of customer churn
///
Churn,
-
+
///
/// Likelihood of making a purchase
///
Purchase,
-
+
///
/// Predicted lifetime value
///
@@ -481,7 +678,7 @@ public enum SortOrder
/// Ascending order
///
Ascending,
-
+
///
/// Descending order
///
diff --git a/src/BusinessApplications/CustomerIntelligence/ICustomerDataPort.cs b/src/BusinessApplications/CustomerIntelligence/ICustomerDataPort.cs
new file mode 100644
index 0000000..7828994
--- /dev/null
+++ b/src/BusinessApplications/CustomerIntelligence/ICustomerDataPort.cs
@@ -0,0 +1,46 @@
+using System;
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+
+namespace CognitiveMesh.BusinessApplications.CustomerIntelligence
+{
+ ///
+ /// Port interface for customer data retrieval and persistence operations.
+ /// Adapters implement this to integrate with specific data stores (CosmosDB, SQL, etc.).
+ ///
+ public interface ICustomerDataPort
+ {
+ ///
+ /// Retrieves a customer profile by its unique identifier.
+ ///
+ /// The unique identifier of the customer.
+ /// A token to monitor for cancellation requests.
+ /// The customer profile, or null if not found.
+ Task GetProfileAsync(string customerId, CancellationToken cancellationToken = default);
+
+ ///
+ /// Queries customer segments based on filtering criteria.
+ ///
+ /// The query parameters for filtering segments.
+ /// A token to monitor for cancellation requests.
+ /// A collection of customer segments matching the query.
+ Task> QuerySegmentsAsync(CustomerSegmentQuery query, CancellationToken cancellationToken = default);
+
+ ///
+ /// Retrieves interaction history for a customer used to generate insights.
+ ///
+ /// The unique identifier of the customer.
+ /// A token to monitor for cancellation requests.
+ /// A collection of interaction records as key-value dictionaries.
+ Task>> GetInteractionHistoryAsync(string customerId, CancellationToken cancellationToken = default);
+
+ ///
+ /// Retrieves behavioral feature vectors for a customer used in prediction models.
+ ///
+ /// The unique identifier of the customer.
+ /// A token to monitor for cancellation requests.
+ /// A dictionary of feature names to their numeric values.
+ Task> GetBehavioralFeaturesAsync(string customerId, CancellationToken cancellationToken = default);
+ }
+}
diff --git a/src/BusinessApplications/DecisionSupport/DecisionSupport.csproj b/src/BusinessApplications/DecisionSupport/DecisionSupport.csproj
index 59d3dd3..58c8228 100644
--- a/src/BusinessApplications/DecisionSupport/DecisionSupport.csproj
+++ b/src/BusinessApplications/DecisionSupport/DecisionSupport.csproj
@@ -3,17 +3,19 @@
Library
net9.0
+ enable
+ enable
+
-
+
-
diff --git a/src/BusinessApplications/DecisionSupport/DecisionSupportManager.cs b/src/BusinessApplications/DecisionSupport/DecisionSupportManager.cs
index 6d561b3..1e68e96 100644
--- a/src/BusinessApplications/DecisionSupport/DecisionSupportManager.cs
+++ b/src/BusinessApplications/DecisionSupport/DecisionSupportManager.cs
@@ -1,5 +1,6 @@
using System;
using System.Collections.Generic;
+using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
@@ -8,85 +9,168 @@ namespace CognitiveMesh.BusinessApplications.DecisionSupport
{
///
/// Provides decision support capabilities for the cognitive mesh.
+ /// Delegates analysis, risk evaluation, recommendation generation, and outcome simulation
+ /// to an adapter that integrates with reasoning engines.
///
public class DecisionSupportManager : IDecisionSupportManager, IDisposable
{
private readonly ILogger _logger;
- private bool _disposed = false;
+ private readonly IDecisionAnalysisPort _analysisPort;
+ private bool _disposed;
///
/// Initializes a new instance of the class.
///
- /// The logger instance.
- public DecisionSupportManager(ILogger logger = null)
+ /// The logger instance for structured logging.
+ /// The port for decision analysis operations backed by reasoning engines.
+ public DecisionSupportManager(
+ ILogger logger,
+ IDecisionAnalysisPort analysisPort)
{
- _logger = logger;
- _logger?.LogInformation("DecisionSupportManager initialized");
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ _analysisPort = analysisPort ?? throw new ArgumentNullException(nameof(analysisPort));
+ _logger.LogInformation("DecisionSupportManager initialized");
}
///
- public Task> AnalyzeDecisionOptionsAsync(
+ public async Task> AnalyzeDecisionOptionsAsync(
string decisionContext,
IEnumerable> options,
- Dictionary criteria = null,
+ Dictionary? criteria = null,
CancellationToken cancellationToken = default)
{
- _logger?.LogDebug("Analyzing decision options for context: {Context}", decisionContext);
- // TODO: Implement actual decision analysis logic
- return Task.FromResult(new Dictionary
+ if (string.IsNullOrWhiteSpace(decisionContext))
+ throw new ArgumentException("Decision context cannot be empty", nameof(decisionContext));
+ if (options is null)
+ throw new ArgumentNullException(nameof(options));
+
+ try
+ {
+ _logger.LogInformation("Analyzing decision options for context: {Context}", decisionContext);
+
+ var optionList = options.ToList();
+ if (optionList.Count == 0)
+ {
+ _logger.LogWarning("No options provided for decision analysis: {Context}", decisionContext);
+ return new Dictionary
+ {
+ ["bestOption"] = -1,
+ ["scores"] = new Dictionary(),
+ ["recommendations"] = Array.Empty()
+ };
+ }
+
+ var result = await _analysisPort.ScoreOptionsAsync(
+ decisionContext, optionList, criteria, cancellationToken).ConfigureAwait(false);
+
+ _logger.LogInformation(
+ "Decision analysis completed for context: {Context}, best option index: {BestOption}",
+ decisionContext, result.GetValueOrDefault("bestOption", -1));
+
+ return result;
+ }
+ catch (Exception ex)
{
- ["bestOption"] = options is System.Collections.ICollection c ? c.Count > 0 ? 0 : -1 : -1,
- ["scores"] = new Dictionary(),
- ["recommendations"] = Array.Empty()
- });
+ _logger.LogError(ex, "Error analyzing decision options for context: {Context}", decisionContext);
+ throw;
+ }
}
///
- public Task> EvaluateRiskAsync(
+ public async Task> EvaluateRiskAsync(
string scenario,
Dictionary parameters,
CancellationToken cancellationToken = default)
{
- _logger?.LogDebug("Evaluating risk for scenario: {Scenario}", scenario);
- // TODO: Implement actual risk evaluation logic
- return Task.FromResult(new Dictionary
+ if (string.IsNullOrWhiteSpace(scenario))
+ throw new ArgumentException("Scenario cannot be empty", nameof(scenario));
+ if (parameters is null)
+ throw new ArgumentNullException(nameof(parameters));
+
+ try
{
- ["riskLevel"] = "low",
- ["riskScore"] = 0.1,
- ["mitigationStrategies"] = Array.Empty()
- });
+ _logger.LogInformation("Evaluating risk for scenario: {Scenario}", scenario);
+
+ var result = await _analysisPort.AssessRiskAsync(scenario, parameters, cancellationToken)
+ .ConfigureAwait(false);
+
+ _logger.LogInformation(
+ "Risk evaluation completed for scenario: {Scenario}, risk level: {RiskLevel}",
+ scenario, result.GetValueOrDefault("riskLevel", "unknown"));
+
+ return result;
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error evaluating risk for scenario: {Scenario}", scenario);
+ throw;
+ }
}
///
- public Task> GenerateRecommendationsAsync(
+ public async Task> GenerateRecommendationsAsync(
string context,
Dictionary data,
CancellationToken cancellationToken = default)
{
- _logger?.LogDebug("Generating recommendations for context: {Context}", context);
- // TODO: Implement actual recommendation generation logic
- return Task.FromResult(new Dictionary
+ if (string.IsNullOrWhiteSpace(context))
+ throw new ArgumentException("Context cannot be empty", nameof(context));
+ if (data is null)
+ throw new ArgumentNullException(nameof(data));
+
+ try
{
- ["recommendations"] = Array.Empty(),
- ["confidenceScores"] = new Dictionary(),
- ["supportingEvidence"] = Array.Empty()
- });
+ _logger.LogInformation("Generating recommendations for context: {Context}", context);
+
+ var result = await _analysisPort.GenerateRecommendationsAsync(context, data, cancellationToken)
+ .ConfigureAwait(false);
+
+ var recommendationCount = result.TryGetValue("recommendations", out var recs) && recs is object[] arr
+ ? arr.Length
+ : 0;
+
+ _logger.LogInformation(
+ "Recommendation generation completed for context: {Context}, count: {Count}",
+ context, recommendationCount);
+
+ return result;
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error generating recommendations for context: {Context}", context);
+ throw;
+ }
}
///
- public Task> SimulateOutcomesAsync(
+ public async Task> SimulateOutcomesAsync(
string scenario,
Dictionary parameters,
CancellationToken cancellationToken = default)
{
- _logger?.LogDebug("Simulating outcomes for scenario: {Scenario}", scenario);
- // TODO: Implement actual outcome simulation logic
- return Task.FromResult(new Dictionary
+ if (string.IsNullOrWhiteSpace(scenario))
+ throw new ArgumentException("Scenario cannot be empty", nameof(scenario));
+ if (parameters is null)
+ throw new ArgumentNullException(nameof(parameters));
+
+ try
+ {
+ _logger.LogInformation("Simulating outcomes for scenario: {Scenario}", scenario);
+
+ var result = await _analysisPort.SimulateAsync(scenario, parameters, cancellationToken)
+ .ConfigureAwait(false);
+
+ _logger.LogInformation(
+ "Outcome simulation completed for scenario: {Scenario}, probability: {Probability}",
+ scenario, result.GetValueOrDefault("probability", 0.0));
+
+ return result;
+ }
+ catch (Exception ex)
{
- ["mostLikelyOutcome"] = new Dictionary(),
- ["probability"] = 1.0,
- ["alternativeScenarios"] = Array.Empty()
- });
+ _logger.LogError(ex, "Error simulating outcomes for scenario: {Scenario}", scenario);
+ throw;
+ }
}
///
@@ -96,18 +180,29 @@ public void Dispose()
GC.SuppressFinalize(this);
}
+ ///
+ /// Releases the unmanaged resources used by the and optionally releases managed resources.
+ ///
+ /// true to release both managed and unmanaged resources; false to release only unmanaged resources.
protected virtual void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
- // Dispose managed resources here
+ // Dispose managed resources if the analysis port is disposable
+ if (_analysisPort is IDisposable disposablePort)
+ {
+ disposablePort.Dispose();
+ }
}
_disposed = true;
}
}
+ ///
+ /// Finalizer for .
+ ///
~DecisionSupportManager()
{
Dispose(false);
@@ -130,7 +225,7 @@ public interface IDecisionSupportManager : IDisposable
Task> AnalyzeDecisionOptionsAsync(
string decisionContext,
IEnumerable> options,
- Dictionary criteria = null,
+ Dictionary? criteria = null,
CancellationToken cancellationToken = default);
///
diff --git a/src/BusinessApplications/DecisionSupport/IDecisionAnalysisPort.cs b/src/BusinessApplications/DecisionSupport/IDecisionAnalysisPort.cs
new file mode 100644
index 0000000..6401403
--- /dev/null
+++ b/src/BusinessApplications/DecisionSupport/IDecisionAnalysisPort.cs
@@ -0,0 +1,64 @@
+using System;
+using System.Collections.Generic;
+using System.Threading;
+using System.Threading.Tasks;
+
+namespace CognitiveMesh.BusinessApplications.DecisionSupport
+{
+ ///
+ /// Port interface for decision analysis operations.
+ /// Adapters implement this to integrate with reasoning engines (ConclAIve, LLM, etc.).
+ ///
+ public interface IDecisionAnalysisPort
+ {
+ ///
+ /// Scores a set of decision options against the given criteria using structured reasoning.
+ ///
+ /// The context describing the decision to be made.
+ /// The available decision options, each represented as a dictionary of properties.
+ /// The evaluation criteria with optional weights.
+ /// A token to monitor for cancellation requests.
+ /// A dictionary containing scored options, the best option index, and recommendations.
+ Task> ScoreOptionsAsync(
+ string decisionContext,
+ IEnumerable> options,
+ Dictionary? criteria,
+ CancellationToken cancellationToken = default);
+
+ ///
+ /// Evaluates risk factors for a given scenario using structured analysis.
+ ///
+ /// The scenario description to evaluate.
+ /// Parameters describing the risk context (e.g., impact, likelihood inputs).
+ /// A token to monitor for cancellation requests.
+ /// A dictionary containing risk level, risk score, identified risks, and mitigation strategies.
+ Task> AssessRiskAsync(
+ string scenario,
+ Dictionary parameters,
+ CancellationToken cancellationToken = default);
+
+ ///
+ /// Generates actionable recommendations based on context and supporting data.
+ ///
+ /// The decision or action context.
+ /// Supporting data for recommendation generation.
+ /// A token to monitor for cancellation requests.
+ /// A dictionary containing ranked recommendations with confidence scores and supporting evidence.
+ Task> GenerateRecommendationsAsync(
+ string context,
+ Dictionary data,
+ CancellationToken cancellationToken = default);
+
+ ///
+ /// Simulates potential outcomes for a scenario using scenario analysis.
+ ///
+ /// The scenario to simulate.
+ /// Parameters controlling the simulation (e.g., time horizon, variables).
+ /// A token to monitor for cancellation requests.
+ /// A dictionary containing the most likely outcome, probability, and alternative scenarios.
+ Task> SimulateAsync(
+ string scenario,
+ Dictionary parameters,
+ CancellationToken cancellationToken = default);
+ }
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Controllers/ImpactMetricsController.cs b/src/BusinessApplications/ImpactMetrics/Controllers/ImpactMetricsController.cs
new file mode 100644
index 0000000..0df9d1e
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Controllers/ImpactMetricsController.cs
@@ -0,0 +1,504 @@
+using System.ComponentModel.DataAnnotations;
+using CognitiveMesh.BusinessApplications.Common.Models;
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Ports;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Controllers;
+
+///
+/// API controller for Impact-Driven AI Metrics, providing endpoints for
+/// psychological safety scoring, mission alignment assessment, adoption telemetry,
+/// and comprehensive impact reporting.
+///
+[ApiController]
+[Route("api/v1/impact-metrics")]
+[Produces("application/json")]
+public class ImpactMetricsController : ControllerBase
+{
+ private readonly IPsychologicalSafetyPort _safetyPort;
+ private readonly IMissionAlignmentPort _alignmentPort;
+ private readonly IAdoptionTelemetryPort _telemetryPort;
+ private readonly IImpactAssessmentPort _assessmentPort;
+ private readonly ILogger _logger;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// Port for psychological safety scoring.
+ /// Port for mission alignment assessment.
+ /// Port for adoption telemetry tracking.
+ /// Port for impact assessment and report generation.
+ /// Logger instance for structured logging.
+ public ImpactMetricsController(
+ IPsychologicalSafetyPort safetyPort,
+ IMissionAlignmentPort alignmentPort,
+ IAdoptionTelemetryPort telemetryPort,
+ IImpactAssessmentPort assessmentPort,
+ ILogger logger)
+ {
+ _safetyPort = safetyPort ?? throw new ArgumentNullException(nameof(safetyPort));
+ _alignmentPort = alignmentPort ?? throw new ArgumentNullException(nameof(alignmentPort));
+ _telemetryPort = telemetryPort ?? throw new ArgumentNullException(nameof(telemetryPort));
+ _assessmentPort = assessmentPort ?? throw new ArgumentNullException(nameof(assessmentPort));
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ #region Psychological Safety Endpoints
+
+ ///
+ /// Calculates the psychological safety score for a team.
+ ///
+ /// The identifier of the team to assess.
+ /// The request containing survey scores and tenant context.
+ /// The calculated psychological safety score.
+ /// Returns the calculated safety score.
+ /// If the request is invalid.
+ /// If an unexpected error occurs.
+ [HttpPost("safety-score/{teamId}")]
+ [ProducesResponseType(typeof(PsychologicalSafetyScore), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task> CalculateSafetyScoreAsync(
+ string teamId,
+ [FromBody] CalculateSafetyScoreRequest request)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+ _logger.LogInformation(
+ "Safety score calculation requested for team {TeamId} with correlation ID {CorrelationId}",
+ teamId, correlationId);
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(teamId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Team ID is required", correlationId));
+ }
+
+ if (!ModelState.IsValid)
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Invalid request payload", correlationId));
+ }
+
+ var result = await _safetyPort.CalculateSafetyScoreAsync(
+ teamId, request.TenantId, request.SurveyScores);
+
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error calculating safety score for team {TeamId}", teamId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while calculating safety score", correlationId));
+ }
+ }
+
+ ///
+ /// Retrieves historical psychological safety scores for a team.
+ ///
+ /// The identifier of the team.
+ /// The tenant to which the team belongs.
+ /// A list of historical safety scores.
+ /// Returns the historical safety scores.
+ /// If the parameters are invalid.
+ /// If an unexpected error occurs.
+ [HttpGet("safety-score/{teamId}/history")]
+ [ProducesResponseType(typeof(IReadOnlyList), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task>> GetHistoricalScoresAsync(
+ string teamId,
+ [FromQuery] string tenantId)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(teamId) || string.IsNullOrWhiteSpace(tenantId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Team ID and tenant ID are required", correlationId));
+ }
+
+ var result = await _safetyPort.GetHistoricalScoresAsync(teamId, tenantId);
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error retrieving historical safety scores for team {TeamId}", teamId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while retrieving historical scores", correlationId));
+ }
+ }
+
+ #endregion
+
+ #region Mission Alignment Endpoints
+
+ ///
+ /// Assesses the alignment of a decision with the organisation's mission statement.
+ ///
+ /// The request containing the decision context and mission statement.
+ /// The mission alignment assessment result.
+ /// Returns the alignment assessment.
+ /// If the request is invalid.
+ /// If an unexpected error occurs.
+ [HttpPost("alignment")]
+ [ProducesResponseType(typeof(MissionAlignment), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task> AssessAlignmentAsync(
+ [FromBody] AssessAlignmentRequest request)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+ _logger.LogInformation(
+ "Alignment assessment requested for decision {DecisionId} with correlation ID {CorrelationId}",
+ request.DecisionId, correlationId);
+
+ try
+ {
+ if (!ModelState.IsValid)
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Invalid request payload", correlationId));
+ }
+
+ var result = await _alignmentPort.AssessAlignmentAsync(
+ request.DecisionId, request.DecisionContext, request.MissionStatement);
+
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error assessing alignment for decision {DecisionId}", request.DecisionId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while assessing alignment", correlationId));
+ }
+ }
+
+ #endregion
+
+ #region Adoption Telemetry Endpoints
+
+ ///
+ /// Records an adoption telemetry event.
+ ///
+ /// The telemetry event to record.
+ /// A confirmation that the event was recorded.
+ /// The telemetry event was recorded successfully.
+ /// If the request is invalid.
+ /// If an unexpected error occurs.
+ [HttpPost("telemetry")]
+ [ProducesResponseType(StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task RecordTelemetryAsync(
+ [FromBody] RecordTelemetryRequest request)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+ _logger.LogInformation(
+ "Telemetry event recording requested for user {UserId} in tenant {TenantId}",
+ request.UserId, request.TenantId);
+
+ try
+ {
+ if (!ModelState.IsValid)
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Invalid request payload", correlationId));
+ }
+
+ var telemetry = new AdoptionTelemetry(
+ TelemetryId: Guid.NewGuid().ToString(),
+ UserId: request.UserId,
+ TenantId: request.TenantId,
+ ToolId: request.ToolId,
+ Action: request.Action,
+ Timestamp: DateTimeOffset.UtcNow,
+ DurationMs: request.DurationMs,
+ Context: request.Context);
+
+ await _telemetryPort.RecordActionAsync(telemetry);
+
+ return Ok(new { telemetry.TelemetryId, Message = "Telemetry event recorded successfully" });
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error recording telemetry for user {UserId}", request.UserId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while recording telemetry", correlationId));
+ }
+ }
+
+ ///
+ /// Retrieves AI tool usage summary for a tenant.
+ ///
+ /// The tenant whose usage to summarise.
+ /// A list of telemetry events.
+ /// Returns the usage summary.
+ /// If the tenant ID is invalid.
+ /// If an unexpected error occurs.
+ [HttpGet("telemetry/{tenantId}/summary")]
+ [ProducesResponseType(typeof(IReadOnlyList), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task>> GetUsageSummaryAsync(
+ string tenantId)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(tenantId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Tenant ID is required", correlationId));
+ }
+
+ var result = await _telemetryPort.GetUsageSummaryAsync(tenantId);
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error retrieving usage summary for tenant {TenantId}", tenantId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while retrieving usage summary", correlationId));
+ }
+ }
+
+ ///
+ /// Detects resistance patterns in AI adoption for a tenant.
+ ///
+ /// The tenant to analyse for resistance patterns.
+ /// A list of detected resistance indicators.
+ /// Returns the resistance patterns.
+ /// If the tenant ID is invalid.
+ /// If an unexpected error occurs.
+ [HttpGet("telemetry/{tenantId}/resistance")]
+ [ProducesResponseType(typeof(IReadOnlyList), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task>> GetResistancePatternsAsync(
+ string tenantId)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(tenantId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Tenant ID is required", correlationId));
+ }
+
+ var result = await _telemetryPort.DetectResistancePatternsAsync(tenantId);
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error detecting resistance patterns for tenant {TenantId}", tenantId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while detecting resistance patterns", correlationId));
+ }
+ }
+
+ #endregion
+
+ #region Impact Assessment Endpoints
+
+ ///
+ /// Generates an impact assessment for a tenant over a specified time period.
+ ///
+ /// The tenant to assess.
+ /// The request containing the assessment period.
+ /// The generated impact assessment.
+ /// Returns the impact assessment.
+ /// If the request is invalid.
+ /// If an unexpected error occurs.
+ [HttpPost("assessment/{tenantId}")]
+ [ProducesResponseType(typeof(ImpactAssessment), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task> GenerateAssessmentAsync(
+ string tenantId,
+ [FromBody] GenerateAssessmentRequest request)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+ _logger.LogInformation(
+ "Impact assessment requested for tenant {TenantId} with correlation ID {CorrelationId}",
+ tenantId, correlationId);
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(tenantId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Tenant ID is required", correlationId));
+ }
+
+ if (!ModelState.IsValid)
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Invalid request payload", correlationId));
+ }
+
+ var result = await _assessmentPort.GenerateAssessmentAsync(
+ tenantId, request.PeriodStart, request.PeriodEnd);
+
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error generating impact assessment for tenant {TenantId}", tenantId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while generating the assessment", correlationId));
+ }
+ }
+
+ ///
+ /// Generates a comprehensive impact report for a tenant.
+ ///
+ /// The tenant for which to generate the report.
+ /// Start of the reporting period.
+ /// End of the reporting period.
+ /// The comprehensive impact report.
+ /// Returns the impact report.
+ /// If the parameters are invalid.
+ /// If an unexpected error occurs.
+ [HttpGet("report/{tenantId}")]
+ [ProducesResponseType(typeof(ImpactReport), StatusCodes.Status200OK)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status400BadRequest)]
+ [ProducesResponseType(typeof(ErrorEnvelope), StatusCodes.Status500InternalServerError)]
+ public async Task> GenerateReportAsync(
+ string tenantId,
+ [FromQuery] DateTimeOffset? periodStart,
+ [FromQuery] DateTimeOffset? periodEnd)
+ {
+ var correlationId = Guid.NewGuid().ToString();
+ _logger.LogInformation(
+ "Impact report requested for tenant {TenantId} with correlation ID {CorrelationId}",
+ tenantId, correlationId);
+
+ try
+ {
+ if (string.IsNullOrWhiteSpace(tenantId))
+ {
+ return BadRequest(ErrorEnvelope.InvalidPayload("Tenant ID is required", correlationId));
+ }
+
+ var start = periodStart ?? DateTimeOffset.UtcNow.AddDays(-30);
+ var end = periodEnd ?? DateTimeOffset.UtcNow;
+
+ var result = await _assessmentPort.GenerateReportAsync(tenantId, start, end);
+
+ return Ok(result);
+ }
+ catch (Exception ex)
+ {
+ _logger.LogError(ex, "Error generating impact report for tenant {TenantId}", tenantId);
+ return StatusCode(StatusCodes.Status500InternalServerError,
+ ErrorEnvelope.Create("INTERNAL_ERROR", "An unexpected error occurred while generating the report", correlationId));
+ }
+ }
+
+ #endregion
+}
+
+#region API Request Models
+
+///
+/// Request payload for calculating a psychological safety score.
+///
+public class CalculateSafetyScoreRequest
+{
+ ///
+ /// The tenant to which the team belongs.
+ ///
+ [Required]
+ public string TenantId { get; set; } = string.Empty;
+
+ ///
+ /// Survey-based scores per safety dimension, each between 0 and 100.
+ ///
+ [Required]
+ public Dictionary SurveyScores { get; set; } = new();
+}
+
+///
+/// Request payload for assessing mission alignment of a decision.
+///
+public class AssessAlignmentRequest
+{
+ ///
+ /// The identifier of the decision being assessed.
+ ///
+ [Required]
+ public string DecisionId { get; set; } = string.Empty;
+
+ ///
+ /// A description of the decision and its context.
+ ///
+ [Required]
+ public string DecisionContext { get; set; } = string.Empty;
+
+ ///
+ /// The organisation's mission statement to compare against.
+ ///
+ [Required]
+ public string MissionStatement { get; set; } = string.Empty;
+}
+
+///
+/// Request payload for recording a telemetry event.
+///
+public class RecordTelemetryRequest
+{
+ ///
+ /// The identifier of the user who performed the action.
+ ///
+ [Required]
+ public string UserId { get; set; } = string.Empty;
+
+ ///
+ /// The tenant to which the user belongs.
+ ///
+ [Required]
+ public string TenantId { get; set; } = string.Empty;
+
+ ///
+ /// The identifier of the AI tool being used.
+ ///
+ [Required]
+ public string ToolId { get; set; } = string.Empty;
+
+ ///
+ /// The type of action performed.
+ ///
+ [Required]
+ public AdoptionAction Action { get; set; }
+
+ ///
+ /// Duration of the action in milliseconds, if applicable.
+ ///
+ public long? DurationMs { get; set; }
+
+ ///
+ /// Additional contextual information about the action.
+ ///
+ public string? Context { get; set; }
+}
+
+///
+/// Request payload for generating an impact assessment.
+///
+public class GenerateAssessmentRequest
+{
+ ///
+ /// Start of the assessment period.
+ ///
+ [Required]
+ public DateTimeOffset PeriodStart { get; set; }
+
+ ///
+ /// End of the assessment period.
+ ///
+ [Required]
+ public DateTimeOffset PeriodEnd { get; set; }
+}
+
+#endregion
diff --git a/src/BusinessApplications/ImpactMetrics/Engines/ImpactMetricsEngine.cs b/src/BusinessApplications/ImpactMetrics/Engines/ImpactMetricsEngine.cs
new file mode 100644
index 0000000..663a4d6
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Engines/ImpactMetricsEngine.cs
@@ -0,0 +1,742 @@
+using System.Collections.Concurrent;
+using System.Security.Cryptography;
+using System.Text;
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Ports;
+using Microsoft.Extensions.Logging;
+
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Engines;
+
+///
+/// Core engine that implements all Impact-Driven AI Metrics capabilities including
+/// psychological safety scoring, mission alignment assessment, adoption telemetry,
+/// and comprehensive impact reporting.
+///
+public class ImpactMetricsEngine : IPsychologicalSafetyPort, IMissionAlignmentPort, IAdoptionTelemetryPort, IImpactAssessmentPort
+{
+ private readonly ILogger _logger;
+
+ // In-memory stores keyed by "{tenantId}:{teamId}" or "{tenantId}"
+ private readonly ConcurrentDictionary> _safetyScores = new();
+ private readonly ConcurrentDictionary> _alignments = new();
+ private readonly ConcurrentDictionary> _telemetryEvents = new();
+ private readonly ConcurrentDictionary> _assessments = new();
+
+ ///
+ /// Weight applied to survey responses when calculating dimension scores.
+ ///
+ internal const double SurveyWeight = 0.7;
+
+ ///
+ /// Weight applied to behavioral signals when calculating dimension scores.
+ ///
+ internal const double BehavioralWeight = 0.3;
+
+ ///
+ /// Threshold of survey responses below which confidence is .
+ ///
+ internal const int LowConfidenceThreshold = 10;
+
+ ///
+ /// Threshold of survey responses above which confidence is .
+ ///
+ internal const int HighConfidenceThreshold = 50;
+
+ ///
+ /// Initializes a new instance of the class.
+ ///
+ /// The logger instance for structured logging.
+ public ImpactMetricsEngine(ILogger logger)
+ {
+ _logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ }
+
+ // -----------------------------------------------------------------------
+ // IPsychologicalSafetyPort
+ // -----------------------------------------------------------------------
+
+ ///
+ public Task CalculateSafetyScoreAsync(
+ string teamId,
+ string tenantId,
+ Dictionary surveyScores,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(teamId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+ ArgumentNullException.ThrowIfNull(surveyScores);
+
+ _logger.LogInformation(
+ "Calculating psychological safety score for team {TeamId} in tenant {TenantId}",
+ teamId, tenantId);
+
+ // Derive behavioral signals from telemetry for this tenant
+ var behavioralScores = DeriveBehavioralScores(tenantId);
+
+ // Calculate per-dimension scores using weighted average
+ var dimensionScores = new Dictionary();
+ foreach (var dimension in Enum.GetValues())
+ {
+ var surveyScore = surveyScores.TryGetValue(dimension, out var sv) ? sv : 50.0;
+ var behavioralScore = behavioralScores.TryGetValue(dimension, out var bv) ? bv : 50.0;
+ dimensionScores[dimension] = (surveyScore * SurveyWeight) + (behavioralScore * BehavioralWeight);
+ }
+
+ // Overall score is the average of all dimension scores
+ var overallScore = dimensionScores.Values.Average();
+
+ // Determine survey response count from the supplied scores
+ var surveyResponseCount = surveyScores.Count;
+ var behavioralSignalCount = behavioralScores.Count;
+
+ // Determine confidence level
+ var totalResponses = surveyResponseCount + behavioralSignalCount;
+ var confidence = totalResponses switch
+ {
+ < LowConfidenceThreshold => ConfidenceLevel.Low,
+ <= HighConfidenceThreshold => ConfidenceLevel.Medium,
+ _ => ConfidenceLevel.High
+ };
+
+ var score = new PsychologicalSafetyScore(
+ ScoreId: Guid.NewGuid().ToString(),
+ TeamId: teamId,
+ TenantId: tenantId,
+ OverallScore: Math.Round(overallScore, 2),
+ Dimensions: dimensionScores,
+ SurveyResponseCount: surveyResponseCount,
+ BehavioralSignalCount: behavioralSignalCount,
+ CalculatedAt: DateTimeOffset.UtcNow,
+ ConfidenceLevel: confidence);
+
+ // Persist in memory
+ var key = BuildTeamKey(tenantId, teamId);
+ _safetyScores.AddOrUpdate(
+ key,
+ _ => [score],
+ (_, existing) => { existing.Add(score); return existing; });
+
+ _logger.LogInformation(
+ "Psychological safety score calculated: {OverallScore} with confidence {Confidence} for team {TeamId}",
+ score.OverallScore, confidence, teamId);
+
+ return Task.FromResult(score);
+ }
+
+ ///
+ public Task> GetHistoricalScoresAsync(
+ string teamId,
+ string tenantId,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(teamId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ var key = BuildTeamKey(tenantId, teamId);
+ if (_safetyScores.TryGetValue(key, out var scores))
+ {
+ return Task.FromResult>(
+ scores.OrderBy(s => s.CalculatedAt).ToList().AsReadOnly());
+ }
+
+ return Task.FromResult>(
+ Array.Empty());
+ }
+
+ ///
+ public Task?> GetDimensionBreakdownAsync(
+ string teamId,
+ string tenantId,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(teamId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ var key = BuildTeamKey(tenantId, teamId);
+ if (_safetyScores.TryGetValue(key, out var scores) && scores.Count > 0)
+ {
+ var latest = scores.OrderByDescending(s => s.CalculatedAt).First();
+ return Task.FromResult?>(
+ new Dictionary(latest.Dimensions));
+ }
+
+ return Task.FromResult?>(null);
+ }
+
+ // -----------------------------------------------------------------------
+ // IMissionAlignmentPort
+ // -----------------------------------------------------------------------
+
+ ///
+ public Task AssessAlignmentAsync(
+ string decisionId,
+ string decisionContext,
+ string missionStatement,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(decisionId);
+ ArgumentException.ThrowIfNullOrWhiteSpace(decisionContext);
+ ArgumentException.ThrowIfNullOrWhiteSpace(missionStatement);
+
+ _logger.LogInformation(
+ "Assessing mission alignment for decision {DecisionId}", decisionId);
+
+ // Tokenize both the decision context and mission statement for keyword matching
+ var missionKeywords = ExtractKeywords(missionStatement);
+ var decisionKeywords = ExtractKeywords(decisionContext);
+
+ // Find matches — keywords that appear in both
+ var matches = missionKeywords
+ .Intersect(decisionKeywords, StringComparer.OrdinalIgnoreCase)
+ .ToList();
+
+ // Find conflicts — simple heuristic: known negative prefixes in decision context
+ var conflicts = DetectConflicts(decisionContext, missionStatement);
+
+ // Score: ratio of matched keywords to total mission keywords, minus conflict penalty
+ var matchRatio = missionKeywords.Count > 0
+ ? (double)matches.Count / missionKeywords.Count
+ : 0.0;
+
+ var conflictPenalty = conflicts.Count * 0.15;
+ var alignmentScore = Math.Max(0.0, Math.Min(1.0, matchRatio - conflictPenalty));
+
+ // Hash the mission statement for reference
+ var hash = ComputeHash(missionStatement);
+
+ var alignment = new MissionAlignment(
+ AlignmentId: Guid.NewGuid().ToString(),
+ DecisionId: decisionId,
+ MissionStatementHash: hash,
+ AlignmentScore: Math.Round(alignmentScore, 4),
+ ValueMatches: matches,
+ Conflicts: conflicts,
+ AssessedAt: DateTimeOffset.UtcNow);
+
+ // Store per decision id (we use decision id as a simple tenant-like key)
+ _alignments.AddOrUpdate(
+ decisionId,
+ _ => [alignment],
+ (_, existing) => { existing.Add(alignment); return existing; });
+
+ _logger.LogInformation(
+ "Mission alignment assessed for decision {DecisionId}: score={Score}, matches={Matches}, conflicts={Conflicts}",
+ decisionId, alignment.AlignmentScore, matches.Count, conflicts.Count);
+
+ return Task.FromResult(alignment);
+ }
+
+ ///
+ public Task> GetAlignmentTrendAsync(
+ string tenantId,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ // Aggregate all alignments (in a real system, these would be filtered by tenant)
+ var all = _alignments.Values
+ .SelectMany(list => list)
+ .OrderBy(a => a.AssessedAt)
+ .ToList();
+
+ return Task.FromResult>(all.AsReadOnly());
+ }
+
+ // -----------------------------------------------------------------------
+ // IAdoptionTelemetryPort
+ // -----------------------------------------------------------------------
+
+ ///
+ public Task RecordActionAsync(
+ AdoptionTelemetry telemetry,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentNullException.ThrowIfNull(telemetry);
+
+ _logger.LogInformation(
+ "Recording telemetry action {Action} for user {UserId} in tenant {TenantId}",
+ telemetry.Action, telemetry.UserId, telemetry.TenantId);
+
+ _telemetryEvents.AddOrUpdate(
+ telemetry.TenantId,
+ _ => [telemetry],
+ (_, existing) => { existing.Add(telemetry); return existing; });
+
+ return Task.CompletedTask;
+ }
+
+ ///
+ public Task> GetUsageSummaryAsync(
+ string tenantId,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ if (_telemetryEvents.TryGetValue(tenantId, out var events))
+ {
+ return Task.FromResult>(
+ events.OrderBy(e => e.Timestamp).ToList().AsReadOnly());
+ }
+
+ return Task.FromResult>(
+ Array.Empty());
+ }
+
+ ///
+ public Task> DetectResistancePatternsAsync(
+ string tenantId,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ _logger.LogInformation("Detecting resistance patterns for tenant {TenantId}", tenantId);
+
+ var indicators = new List();
+
+ if (!_telemetryEvents.TryGetValue(tenantId, out var events) || events.Count == 0)
+ {
+ return Task.FromResult>(indicators.AsReadOnly());
+ }
+
+ // Detect high override rate
+ var totalActions = events.Count;
+ var overrideCount = events.Count(e => e.Action == AdoptionAction.Override);
+ if (totalActions > 0)
+ {
+ var overrideRate = (double)overrideCount / totalActions;
+ if (overrideRate > 0.3)
+ {
+ var affectedUsers = events
+ .Where(e => e.Action == AdoptionAction.Override)
+ .Select(e => e.UserId)
+ .Distinct()
+ .Count();
+
+ indicators.Add(new ResistanceIndicator(
+ IndicatorType: ResistanceType.Override,
+ Severity: Math.Min(1.0, overrideRate),
+ AffectedUserCount: affectedUsers,
+ FirstDetectedAt: events
+ .Where(e => e.Action == AdoptionAction.Override)
+ .Min(e => e.Timestamp),
+ Description: $"High override rate detected: {overrideRate:P0} of all actions are overrides."));
+ }
+ }
+
+ // Detect help request spike
+ var helpCount = events.Count(e => e.Action == AdoptionAction.HelpRequest);
+ if (totalActions > 0)
+ {
+ var helpRate = (double)helpCount / totalActions;
+ if (helpRate > 0.25)
+ {
+ var affectedUsers = events
+ .Where(e => e.Action == AdoptionAction.HelpRequest)
+ .Select(e => e.UserId)
+ .Distinct()
+ .Count();
+
+ indicators.Add(new ResistanceIndicator(
+ IndicatorType: ResistanceType.HelpSpike,
+ Severity: Math.Min(1.0, helpRate),
+ AffectedUserCount: affectedUsers,
+ FirstDetectedAt: events
+ .Where(e => e.Action == AdoptionAction.HelpRequest)
+ .Min(e => e.Timestamp),
+ Description: $"Help request spike detected: {helpRate:P0} of all actions are help requests."));
+ }
+ }
+
+ // Detect declining usage (feature ignore > feature use)
+ var ignoreCount = events.Count(e => e.Action == AdoptionAction.FeatureIgnore);
+ var useCount = events.Count(e => e.Action == AdoptionAction.FeatureUse);
+ if (ignoreCount > useCount && ignoreCount > 0)
+ {
+ var affectedUsers = events
+ .Where(e => e.Action == AdoptionAction.FeatureIgnore)
+ .Select(e => e.UserId)
+ .Distinct()
+ .Count();
+
+ var severity = useCount > 0
+ ? Math.Min(1.0, (double)ignoreCount / (ignoreCount + useCount))
+ : 1.0;
+
+ indicators.Add(new ResistanceIndicator(
+ IndicatorType: ResistanceType.Avoidance,
+ Severity: severity,
+ AffectedUserCount: affectedUsers,
+ FirstDetectedAt: events
+ .Where(e => e.Action == AdoptionAction.FeatureIgnore)
+ .Min(e => e.Timestamp),
+ Description: $"Feature avoidance detected: {ignoreCount} ignores vs {useCount} uses."));
+ }
+
+ // Detect negative feedback pattern
+ var feedbackEvents = events.Where(e => e.Action == AdoptionAction.Feedback).ToList();
+ var negativeFeedback = feedbackEvents
+ .Where(e => e.Context != null &&
+ e.Context.Contains("negative", StringComparison.OrdinalIgnoreCase))
+ .ToList();
+
+ if (negativeFeedback.Count > 0 && feedbackEvents.Count > 0)
+ {
+ var negativeRate = (double)negativeFeedback.Count / feedbackEvents.Count;
+ if (negativeRate > 0.5)
+ {
+ var affectedUsers = negativeFeedback
+ .Select(e => e.UserId)
+ .Distinct()
+ .Count();
+
+ indicators.Add(new ResistanceIndicator(
+ IndicatorType: ResistanceType.NegativeFeedback,
+ Severity: Math.Min(1.0, negativeRate),
+ AffectedUserCount: affectedUsers,
+ FirstDetectedAt: negativeFeedback.Min(e => e.Timestamp),
+ Description: $"Negative feedback pattern detected: {negativeRate:P0} of feedback is negative."));
+ }
+ }
+
+ return Task.FromResult>(indicators.AsReadOnly());
+ }
+
+ // -----------------------------------------------------------------------
+ // IImpactAssessmentPort
+ // -----------------------------------------------------------------------
+
+ ///
+ public async Task GenerateAssessmentAsync(
+ string tenantId,
+ DateTimeOffset periodStart,
+ DateTimeOffset periodEnd,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ _logger.LogInformation(
+ "Generating impact assessment for tenant {TenantId} from {PeriodStart} to {PeriodEnd}",
+ tenantId, periodStart, periodEnd);
+
+ // Get resistance patterns
+ var resistanceIndicators = await DetectResistancePatternsAsync(tenantId, cancellationToken);
+
+ // Calculate adoption rate from telemetry
+ var adoptionRate = CalculateAdoptionRate(tenantId);
+
+ // Calculate productivity and quality deltas (simulated from telemetry patterns)
+ var (productivityDelta, qualityDelta, timeToDecisionDelta) = CalculateDeltas(tenantId);
+
+ // User satisfaction from feedback telemetry
+ var userSatisfactionScore = CalculateUserSatisfaction(tenantId);
+
+ var assessment = new ImpactAssessment(
+ AssessmentId: Guid.NewGuid().ToString(),
+ TenantId: tenantId,
+ PeriodStart: periodStart,
+ PeriodEnd: periodEnd,
+ ProductivityDelta: Math.Round(productivityDelta, 4),
+ QualityDelta: Math.Round(qualityDelta, 4),
+ TimeToDecisionDelta: Math.Round(timeToDecisionDelta, 4),
+ UserSatisfactionScore: Math.Round(userSatisfactionScore, 2),
+ AdoptionRate: Math.Round(adoptionRate, 4),
+ ResistanceIndicators: resistanceIndicators.ToList());
+
+ _assessments.AddOrUpdate(
+ tenantId,
+ _ => [assessment],
+ (_, existing) => { existing.Add(assessment); return existing; });
+
+ return assessment;
+ }
+
+ ///
+ public async Task GenerateReportAsync(
+ string tenantId,
+ DateTimeOffset periodStart,
+ DateTimeOffset periodEnd,
+ CancellationToken cancellationToken = default)
+ {
+ ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
+
+ _logger.LogInformation(
+ "Generating impact report for tenant {TenantId} from {PeriodStart} to {PeriodEnd}",
+ tenantId, periodStart, periodEnd);
+
+ // Generate or retrieve the latest assessment
+ var assessment = await GenerateAssessmentAsync(tenantId, periodStart, periodEnd, cancellationToken);
+
+ // Get the latest safety score across all teams for this tenant
+ var safetyScore = GetLatestSafetyScoreForTenant(tenantId);
+
+ // Get the latest alignment score
+ var alignmentScore = GetLatestAlignmentScore();
+
+ // Calculate overall impact score using weighted formula:
+ // safety * 0.3 + alignment * 0.2 + adoption * 0.3 + productivity * 0.2
+ var normalizedProductivity = Math.Max(0, Math.Min(100, (assessment.ProductivityDelta + 1) * 50));
+ var overallImpactScore =
+ (safetyScore * 0.3) +
+ (alignmentScore * 100.0 * 0.2) +
+ (assessment.AdoptionRate * 100.0 * 0.3) +
+ (normalizedProductivity * 0.2);
+
+ overallImpactScore = Math.Round(Math.Max(0, Math.Min(100, overallImpactScore)), 2);
+
+ // Generate recommendations based on weak areas
+ var recommendations = GenerateRecommendations(
+ safetyScore, alignmentScore, assessment.AdoptionRate,
+ assessment.ProductivityDelta, assessment.ResistanceIndicators);
+
+ var report = new ImpactReport(
+ ReportId: Guid.NewGuid().ToString(),
+ TenantId: tenantId,
+ PeriodStart: periodStart,
+ PeriodEnd: periodEnd,
+ SafetyScore: Math.Round(safetyScore, 2),
+ AlignmentScore: Math.Round(alignmentScore, 4),
+ AdoptionRate: Math.Round(assessment.AdoptionRate, 4),
+ OverallImpactScore: overallImpactScore,
+ Recommendations: recommendations,
+ GeneratedAt: DateTimeOffset.UtcNow);
+
+ _logger.LogInformation(
+ "Impact report generated for tenant {TenantId}: overall score={OverallImpactScore}",
+ tenantId, report.OverallImpactScore);
+
+ return report;
+ }
+
+ // -----------------------------------------------------------------------
+ // Private helpers
+ // -----------------------------------------------------------------------
+
+ private static string BuildTeamKey(string tenantId, string teamId) => $"{tenantId}:{teamId}";
+
+ private Dictionary DeriveBehavioralScores(string tenantId)
+ {
+ var scores = new Dictionary();
+
+ if (!_telemetryEvents.TryGetValue(tenantId, out var events) || events.Count == 0)
+ {
+ return scores;
+ }
+
+ var totalActions = events.Count;
+ var overrideRate = (double)events.Count(e => e.Action == AdoptionAction.Override) / totalActions;
+ var helpRate = (double)events.Count(e => e.Action == AdoptionAction.HelpRequest) / totalActions;
+ var featureUseRate = (double)events.Count(e => e.Action == AdoptionAction.FeatureUse) / totalActions;
+ var ignoreRate = (double)events.Count(e => e.Action == AdoptionAction.FeatureIgnore) / totalActions;
+
+ // Higher feature use -> higher trust, lower override -> higher trust
+ scores[SafetyDimension.TrustInAI] = Math.Max(0, Math.Min(100, (1 - overrideRate) * 100));
+ scores[SafetyDimension.FearOfReplacement] = Math.Max(0, Math.Min(100, (1 - ignoreRate) * 100));
+ scores[SafetyDimension.ComfortWithAutomation] = Math.Max(0, Math.Min(100, featureUseRate * 100));
+ scores[SafetyDimension.WillingnessToExperiment] = Math.Max(0, Math.Min(100, featureUseRate * 100));
+ scores[SafetyDimension.TransparencyPerception] = Math.Max(0, Math.Min(100, (1 - helpRate) * 100));
+ scores[SafetyDimension.ErrorTolerance] = Math.Max(0, Math.Min(100, (1 - overrideRate) * 100));
+
+ return scores;
+ }
+
+ private static List ExtractKeywords(string text)
+ {
+ // Simple keyword extraction: split on whitespace and punctuation, filter short words
+ var separators = new[] { ' ', ',', '.', ';', ':', '!', '?', '\n', '\r', '\t', '(', ')', '[', ']', '{', '}', '"', '\'' };
+ return text
+ .Split(separators, StringSplitOptions.RemoveEmptyEntries)
+ .Where(w => w.Length > 3)
+ .Select(w => w.ToLowerInvariant())
+ .Distinct()
+ .ToList();
+ }
+
+ private static List DetectConflicts(string decisionContext, string missionStatement)
+ {
+ var conflicts = new List();
+
+ // Simple conflict detection: look for negation patterns near mission keywords
+ var negationPrefixes = new[] { "not ", "never ", "against ", "despite ", "ignoring ", "violating ", "contradicting " };
+ var missionKeywords = ExtractKeywords(missionStatement);
+ var lowerDecision = decisionContext.ToLowerInvariant();
+
+ foreach (var keyword in missionKeywords)
+ {
+ foreach (var prefix in negationPrefixes)
+ {
+ if (lowerDecision.Contains($"{prefix}{keyword}", StringComparison.OrdinalIgnoreCase))
+ {
+ conflicts.Add($"Potential conflict: '{prefix}{keyword}' found in decision context");
+ }
+ }
+ }
+
+ return conflicts;
+ }
+
+ private static string ComputeHash(string input)
+ {
+ var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(input));
+ return Convert.ToHexString(bytes)[..16];
+ }
+
+ private double CalculateAdoptionRate(string tenantId)
+ {
+ if (!_telemetryEvents.TryGetValue(tenantId, out var events) || events.Count == 0)
+ {
+ return 0.0;
+ }
+
+ var totalActions = events.Count;
+ var activeUse = events.Count(e =>
+ e.Action == AdoptionAction.FeatureUse ||
+ e.Action == AdoptionAction.WorkflowComplete ||
+ e.Action == AdoptionAction.Login);
+
+ return totalActions > 0 ? (double)activeUse / totalActions : 0.0;
+ }
+
+ private (double productivity, double quality, double timeToDecision) CalculateDeltas(string tenantId)
+ {
+ if (!_telemetryEvents.TryGetValue(tenantId, out var events) || events.Count == 0)
+ {
+ return (0.0, 0.0, 0.0);
+ }
+
+ var completedWorkflows = events.Count(e => e.Action == AdoptionAction.WorkflowComplete);
+ var totalActions = events.Count;
+
+ // Productivity correlates with workflow completion rate
+ var productivityDelta = totalActions > 0
+ ? Math.Min(1.0, (double)completedWorkflows / totalActions * 2 - 0.5)
+ : 0.0;
+
+ // Quality inversely correlates with override rate
+ var overrideRate = (double)events.Count(e => e.Action == AdoptionAction.Override) / totalActions;
+ var qualityDelta = 0.5 - overrideRate;
+
+ // Time-to-decision improves with higher feature use (negative = faster = better)
+ var featureUseRate = (double)events.Count(e => e.Action == AdoptionAction.FeatureUse) / totalActions;
+ var timeToDecisionDelta = -(featureUseRate * 0.5);
+
+ return (productivityDelta, qualityDelta, timeToDecisionDelta);
+ }
+
+ private double CalculateUserSatisfaction(string tenantId)
+ {
+ if (!_telemetryEvents.TryGetValue(tenantId, out var events) || events.Count == 0)
+ {
+ return 50.0; // Neutral baseline
+ }
+
+ var feedbackEvents = events.Where(e => e.Action == AdoptionAction.Feedback).ToList();
+ if (feedbackEvents.Count == 0)
+ {
+ return 50.0;
+ }
+
+ var positive = feedbackEvents.Count(e =>
+ e.Context != null && e.Context.Contains("positive", StringComparison.OrdinalIgnoreCase));
+ var negative = feedbackEvents.Count(e =>
+ e.Context != null && e.Context.Contains("negative", StringComparison.OrdinalIgnoreCase));
+ var total = feedbackEvents.Count;
+
+ return total > 0 ? ((double)positive / total) * 100.0 : 50.0;
+ }
+
+ private double GetLatestSafetyScoreForTenant(string tenantId)
+ {
+ var tenantScores = _safetyScores
+ .Where(kvp => kvp.Key.StartsWith($"{tenantId}:", StringComparison.Ordinal))
+ .SelectMany(kvp => kvp.Value)
+ .ToList();
+
+ if (tenantScores.Count == 0)
+ {
+ return 50.0; // Neutral baseline when no scores exist
+ }
+
+ return tenantScores
+ .OrderByDescending(s => s.CalculatedAt)
+ .First()
+ .OverallScore;
+ }
+
+ private double GetLatestAlignmentScore()
+ {
+ var allAlignments = _alignments.Values.SelectMany(a => a).ToList();
+ if (allAlignments.Count == 0)
+ {
+ return 0.5; // Neutral baseline
+ }
+
+ return allAlignments
+ .OrderByDescending(a => a.AssessedAt)
+ .First()
+ .AlignmentScore;
+ }
+
+ private static List GenerateRecommendations(
+ double safetyScore,
+ double alignmentScore,
+ double adoptionRate,
+ double productivityDelta,
+ List resistanceIndicators)
+ {
+ var recommendations = new List();
+
+ if (safetyScore < 60)
+ {
+ recommendations.Add("Psychological safety score is below target. Consider team workshops on AI collaboration and transparent communication about AI's role.");
+ }
+
+ if (safetyScore < 80)
+ {
+ recommendations.Add("Safety score has room for improvement. Encourage experimentation with AI tools in low-risk scenarios.");
+ }
+
+ if (alignmentScore < 0.5)
+ {
+ recommendations.Add("Mission alignment is low. Review AI decision-making processes to ensure they reflect organizational values.");
+ }
+
+ if (adoptionRate < 0.3)
+ {
+ recommendations.Add("Adoption rate is below expectations. Consider targeted training sessions and identifying AI champions within teams.");
+ }
+
+ if (productivityDelta < 0)
+ {
+ recommendations.Add("Productivity has declined. Investigate whether AI tools are creating friction in existing workflows.");
+ }
+
+ foreach (var indicator in resistanceIndicators)
+ {
+ switch (indicator.IndicatorType)
+ {
+ case ResistanceType.Override:
+ recommendations.Add($"High override rate affecting {indicator.AffectedUserCount} users. Review AI recommendation quality and calibration.");
+ break;
+ case ResistanceType.HelpSpike:
+ recommendations.Add($"Help request spike affecting {indicator.AffectedUserCount} users. Improve onboarding materials and in-app guidance.");
+ break;
+ case ResistanceType.Avoidance:
+ recommendations.Add($"Feature avoidance detected for {indicator.AffectedUserCount} users. Gather qualitative feedback on barriers to adoption.");
+ break;
+ case ResistanceType.NegativeFeedback:
+ recommendations.Add($"Negative feedback pattern from {indicator.AffectedUserCount} users. Conduct user interviews to understand pain points.");
+ break;
+ case ResistanceType.Disengagement:
+ recommendations.Add($"User disengagement detected for {indicator.AffectedUserCount} users. Consider re-engagement campaigns and value demonstrations.");
+ break;
+ }
+ }
+
+ if (recommendations.Count == 0)
+ {
+ recommendations.Add("All metrics are within healthy ranges. Continue monitoring and consider expanding AI capabilities.");
+ }
+
+ return recommendations;
+ }
+}
diff --git a/src/BusinessApplications/ImpactMetrics/ImpactMetrics.csproj b/src/BusinessApplications/ImpactMetrics/ImpactMetrics.csproj
new file mode 100644
index 0000000..224a3e8
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/ImpactMetrics.csproj
@@ -0,0 +1,20 @@
+
+
+
+ net9.0
+ enable
+ enable
+ true
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/src/BusinessApplications/ImpactMetrics/Infrastructure/ServiceCollectionExtensions.cs b/src/BusinessApplications/ImpactMetrics/Infrastructure/ServiceCollectionExtensions.cs
new file mode 100644
index 0000000..51d7194
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Infrastructure/ServiceCollectionExtensions.cs
@@ -0,0 +1,34 @@
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Engines;
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Ports;
+using Microsoft.Extensions.DependencyInjection;
+
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Infrastructure;
+
+///
+/// Extension methods for registering Impact Metrics services in the
+/// dependency injection container.
+///
+public static class ServiceCollectionExtensions
+{
+ ///
+ /// Adds all Impact Metrics services to the specified
+ /// , registering the
+ /// as all four port interfaces
+ /// required by the Impact-Driven AI Metrics subsystem.
+ ///
+ /// The service collection to configure.
+ /// The same instance for chaining.
+ public static IServiceCollection AddImpactMetricsServices(this IServiceCollection services)
+ {
+ // Register the engine as a scoped service so that all four port interfaces
+ // share the same in-memory state within a single request scope.
+ services.AddScoped();
+
+ services.AddScoped(sp => sp.GetRequiredService());
+ services.AddScoped(sp => sp.GetRequiredService());
+ services.AddScoped(sp => sp.GetRequiredService());
+ services.AddScoped(sp => sp.GetRequiredService());
+
+ return services;
+ }
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Models/AdoptionAction.cs b/src/BusinessApplications/ImpactMetrics/Models/AdoptionAction.cs
new file mode 100644
index 0000000..033fc13
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/AdoptionAction.cs
@@ -0,0 +1,42 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Enumerates the types of user actions tracked by the adoption telemetry system.
+///
+public enum AdoptionAction
+{
+ ///
+ /// User logged in to an AI-enabled tool.
+ ///
+ Login,
+
+ ///
+ /// User actively used a feature of the AI tool.
+ ///
+ FeatureUse,
+
+ ///
+ /// User was presented with a feature but chose not to use it.
+ ///
+ FeatureIgnore,
+
+ ///
+ /// User provided feedback on an AI feature or decision.
+ ///
+ Feedback,
+
+ ///
+ /// User overrode an AI-generated recommendation or decision.
+ ///
+ Override,
+
+ ///
+ /// User requested help or support while using an AI tool.
+ ///
+ HelpRequest,
+
+ ///
+ /// User completed a full workflow involving AI assistance.
+ ///
+ WorkflowComplete
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Models/AdoptionTelemetry.cs b/src/BusinessApplications/ImpactMetrics/Models/AdoptionTelemetry.cs
new file mode 100644
index 0000000..178fea5
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/AdoptionTelemetry.cs
@@ -0,0 +1,22 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Records a single user interaction with an AI-enabled tool for adoption tracking.
+///
+/// Unique identifier for this telemetry record.
+/// The identifier of the user who performed the action.
+/// The tenant to which the user belongs.
+/// The identifier of the AI tool being used.
+/// The type of action performed.
+/// When the action occurred.
+/// The duration of the action in milliseconds, if applicable.
+/// Additional contextual information about the action.
+public record AdoptionTelemetry(
+ string TelemetryId,
+ string UserId,
+ string TenantId,
+ string ToolId,
+ AdoptionAction Action,
+ DateTimeOffset Timestamp,
+ long? DurationMs,
+ string? Context);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/ConfidenceLevel.cs b/src/BusinessApplications/ImpactMetrics/Models/ConfidenceLevel.cs
new file mode 100644
index 0000000..f24e9e4
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/ConfidenceLevel.cs
@@ -0,0 +1,23 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Indicates the statistical confidence level of a calculated metric
+/// based on the volume of underlying data.
+///
+public enum ConfidenceLevel
+{
+ ///
+ /// Low confidence — fewer than 10 survey responses or behavioral signals.
+ ///
+ Low,
+
+ ///
+ /// Medium confidence — between 10 and 50 survey responses or behavioral signals.
+ ///
+ Medium,
+
+ ///
+ /// High confidence — more than 50 survey responses or behavioral signals.
+ ///
+ High
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Models/ImpactAssessment.cs b/src/BusinessApplications/ImpactMetrics/Models/ImpactAssessment.cs
new file mode 100644
index 0000000..0f0bc47
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/ImpactAssessment.cs
@@ -0,0 +1,27 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Represents an impact assessment for a tenant over a specific time period,
+/// aggregating safety, alignment, adoption, and productivity metrics.
+///
+/// Unique identifier for this assessment.
+/// The tenant being assessed.
+/// Start of the assessment period.
+/// End of the assessment period.
+/// Change in productivity as a percentage (-1 to 1).
+/// Change in quality as a percentage (-1 to 1).
+/// Change in time-to-decision as a percentage (-1 to 1, negative means faster).
+/// User satisfaction score on a 0-100 scale.
+/// AI tool adoption rate as a percentage (0-1).
+/// List of detected resistance patterns.
+public record ImpactAssessment(
+ string AssessmentId,
+ string TenantId,
+ DateTimeOffset PeriodStart,
+ DateTimeOffset PeriodEnd,
+ double ProductivityDelta,
+ double QualityDelta,
+ double TimeToDecisionDelta,
+ double UserSatisfactionScore,
+ double AdoptionRate,
+ List ResistanceIndicators);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/ImpactReport.cs b/src/BusinessApplications/ImpactMetrics/Models/ImpactReport.cs
new file mode 100644
index 0000000..0b8c768
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/ImpactReport.cs
@@ -0,0 +1,28 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Represents a comprehensive impact report for a tenant, aggregating psychological
+/// safety, mission alignment, adoption rates, and overall impact into a single
+/// document with actionable recommendations.
+///
+/// Unique identifier for this report.
+/// The tenant for which the report was generated.
+/// Start of the reporting period.
+/// End of the reporting period.
+/// The aggregate psychological safety score (0-100).
+/// The aggregate mission alignment score (0-1).
+/// The AI tool adoption rate (0-1).
+/// The weighted overall impact score (0-100).
+/// Actionable recommendations based on the analysis.
+/// The timestamp when this report was generated.
+public record ImpactReport(
+ string ReportId,
+ string TenantId,
+ DateTimeOffset PeriodStart,
+ DateTimeOffset PeriodEnd,
+ double SafetyScore,
+ double AlignmentScore,
+ double AdoptionRate,
+ double OverallImpactScore,
+ List Recommendations,
+ DateTimeOffset GeneratedAt);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/MissionAlignment.cs b/src/BusinessApplications/ImpactMetrics/Models/MissionAlignment.cs
new file mode 100644
index 0000000..6efdef4
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/MissionAlignment.cs
@@ -0,0 +1,21 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Represents the result of assessing how well an AI decision aligns with an
+/// organisation's stated mission and values.
+///
+/// Unique identifier for this alignment assessment.
+/// The identifier of the decision being assessed.
+/// A hash of the mission statement used for the assessment.
+/// Alignment score on a 0-1 scale where 1 is fully aligned.
+/// List of mission values that the decision supports.
+/// List of conflicts where the decision contradicts stated values.
+/// The timestamp when the alignment was assessed.
+public record MissionAlignment(
+ string AlignmentId,
+ string DecisionId,
+ string MissionStatementHash,
+ double AlignmentScore,
+ List ValueMatches,
+ List Conflicts,
+ DateTimeOffset AssessedAt);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/PsychologicalSafetyScore.cs b/src/BusinessApplications/ImpactMetrics/Models/PsychologicalSafetyScore.cs
new file mode 100644
index 0000000..3470bdf
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/PsychologicalSafetyScore.cs
@@ -0,0 +1,25 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Represents a calculated psychological safety score for a team,
+/// measuring how safe the team feels about AI adoption across multiple dimensions.
+///
+/// Unique identifier for this safety score record.
+/// The identifier of the team being assessed.
+/// The tenant to which the team belongs.
+/// The aggregate psychological safety score on a 0-100 scale.
+/// A breakdown of scores by individual safety dimension.
+/// The number of survey responses used in the calculation.
+/// The number of behavioral signals used in the calculation.
+/// The timestamp when this score was calculated.
+/// The statistical confidence level based on data volume.
+public record PsychologicalSafetyScore(
+ string ScoreId,
+ string TeamId,
+ string TenantId,
+ double OverallScore,
+ Dictionary Dimensions,
+ int SurveyResponseCount,
+ int BehavioralSignalCount,
+ DateTimeOffset CalculatedAt,
+ ConfidenceLevel ConfidenceLevel);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/ResistanceIndicator.cs b/src/BusinessApplications/ImpactMetrics/Models/ResistanceIndicator.cs
new file mode 100644
index 0000000..5d097b9
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/ResistanceIndicator.cs
@@ -0,0 +1,37 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Enumerates the types of resistance patterns that can be detected.
+///
+public enum ResistanceType
+{
+ /// Users are avoiding using AI tools entirely.
+ Avoidance,
+
+ /// Users are frequently overriding AI recommendations.
+ Override,
+
+ /// Users are providing negative feedback about AI tools.
+ NegativeFeedback,
+
+ /// A spike in help requests indicates users are struggling.
+ HelpSpike,
+
+ /// Users are disengaging from AI-enabled workflows.
+ Disengagement
+}
+
+///
+/// Represents a detected pattern of resistance to AI adoption.
+///
+/// The category of resistance detected.
+/// Severity of the resistance indicator on a 0-1 scale.
+/// The number of users exhibiting this resistance pattern.
+/// When this resistance pattern was first observed.
+/// A human-readable description of the resistance pattern.
+public record ResistanceIndicator(
+ ResistanceType IndicatorType,
+ double Severity,
+ int AffectedUserCount,
+ DateTimeOffset FirstDetectedAt,
+ string Description);
diff --git a/src/BusinessApplications/ImpactMetrics/Models/SafetyDimension.cs b/src/BusinessApplications/ImpactMetrics/Models/SafetyDimension.cs
new file mode 100644
index 0000000..f2a3dd0
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Models/SafetyDimension.cs
@@ -0,0 +1,38 @@
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+///
+/// Enumerates the dimensions measured as part of a psychological safety assessment
+/// for AI adoption within a team.
+///
+public enum SafetyDimension
+{
+ ///
+ /// Measures the degree of trust team members place in AI systems.
+ ///
+ TrustInAI,
+
+ ///
+ /// Measures the degree to which team members fear being replaced by AI.
+ ///
+ FearOfReplacement,
+
+ ///
+ /// Measures how comfortable team members are with automated processes.
+ ///
+ ComfortWithAutomation,
+
+ ///
+ /// Measures team members' willingness to experiment with new AI tools and workflows.
+ ///
+ WillingnessToExperiment,
+
+ ///
+ /// Measures how transparent team members perceive AI decision-making to be.
+ ///
+ TransparencyPerception,
+
+ ///
+ /// Measures the tolerance for errors made by AI systems.
+ ///
+ ErrorTolerance
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Ports/IAdoptionTelemetryPort.cs b/src/BusinessApplications/ImpactMetrics/Ports/IAdoptionTelemetryPort.cs
new file mode 100644
index 0000000..9de5155
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Ports/IAdoptionTelemetryPort.cs
@@ -0,0 +1,40 @@
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Ports;
+
+///
+/// Defines the contract for recording AI tool usage telemetry and detecting
+/// resistance patterns in adoption behaviour.
+///
+public interface IAdoptionTelemetryPort
+{
+ ///
+ /// Records a single user interaction with an AI tool.
+ ///
+ /// The telemetry event to record.
+ /// Token to cancel the operation.
+ /// A task representing the asynchronous operation.
+ Task RecordActionAsync(
+ AdoptionTelemetry telemetry,
+ CancellationToken cancellationToken = default);
+
+ ///
+ /// Retrieves a summary of AI tool usage for a tenant.
+ ///
+ /// The tenant whose usage to summarise.
+ /// Token to cancel the operation.
+ /// A list of telemetry events for the tenant.
+ Task> GetUsageSummaryAsync(
+ string tenantId,
+ CancellationToken cancellationToken = default);
+
+ ///
+ /// Analyses telemetry data to detect patterns of resistance to AI adoption.
+ ///
+ /// The tenant to analyse for resistance patterns.
+ /// Token to cancel the operation.
+ /// A list of detected resistance indicators.
+ Task> DetectResistancePatternsAsync(
+ string tenantId,
+ CancellationToken cancellationToken = default);
+}
diff --git a/src/BusinessApplications/ImpactMetrics/Ports/IImpactAssessmentPort.cs b/src/BusinessApplications/ImpactMetrics/Ports/IImpactAssessmentPort.cs
new file mode 100644
index 0000000..e80a7b1
--- /dev/null
+++ b/src/BusinessApplications/ImpactMetrics/Ports/IImpactAssessmentPort.cs
@@ -0,0 +1,39 @@
+using CognitiveMesh.BusinessApplications.ImpactMetrics.Models;
+
+namespace CognitiveMesh.BusinessApplications.ImpactMetrics.Ports;
+
+///