A minimal framework for evaluating agentic AI architectures.
SRAL (State-Reason-Act-Learn) is an evaluation framework that foregrounds State—the constructed, persistent world-model that agents must explicitly maintain—as the foundational component of agent architecture.
Most agent failures trace not to reasoning or action, but to unmanaged state: context window overflow, lost constraints, and forgotten decisions.
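These failure modes can be made concrete: an agent whose "state" lives only in a growing prompt silently drops constraints when the context window overflows. A minimal sketch of explicit, managed state (the `AgentState` class and its budget are illustrative assumptions, not part of SRAL itself):

```python
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Explicit world-model: constraints and decisions are stored outside
    the prompt, so they survive context pressure."""
    constraints: list[str] = field(default_factory=list)
    decisions: dict[str, str] = field(default_factory=dict)
    max_context_items: int = 4  # illustrative context budget

    def add_constraint(self, c: str) -> None:
        self.constraints.append(c)

    def record_decision(self, key: str, value: str) -> None:
        self.decisions[key] = value

    def context_slice(self) -> list[str]:
        # Only the most recent decisions fit the budget, but constraints
        # are ALWAYS prepended, so they cannot be lost to window overflow.
        recent = list(self.decisions.values())[-self.max_context_items:]
        return self.constraints + recent


state = AgentState()
state.add_constraint("never delete user data")
for i in range(10):
    state.record_decision(f"step{i}", f"did thing {i}")
print(state.context_slice()[0])  # the constraint is still first, 10 steps later
```

An implicit-state agent would instead let the constraint scroll out of the window; making state a first-class object is what "explicit and managed" means in the checklist below.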
┌──────────────────────────────────────────────────────────────┐
│ │
│ STATE → REASON → ACT → LEARN │
│ │
│ Each component depends on the previous one. │
│ When an agent fails, trace backward: │
│ - Visible failure usually in ACT or REASON │
│ - Root cause usually in STATE │
│ │
└──────────────────────────────────────────────────────────────┘
When evaluating any agent system, ask these questions in order:
┌─────────────────────────────────────────────────────────────┐
│ │
│ 1. STATE: What does it remember? │
│ Is state explicit and managed? │
│ │
│ 2. REASON: How does it decide? │
│ Is reasoning grounded in reality? │
│ │
│ 3. ACT: How does it affect the world? │
│ Does action inform reasoning? │
│ │
│ 4. LEARN: How does it improve? │
│ Is learning architectural? │
│ │
└─────────────────────────────────────────────────────────────┘
| Component | Architectural Question |
|---|---|
| State | Is state explicit and managed, or implicit and fragile? |
| Reason | Is reasoning grounded in reality, or floating in abstraction? |
| Act | Does action inform reasoning, or merely execute it? |
| Learn | Is learning architectural, or accidental? |
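The four questions above can be operationalized as a simple checklist score. A hedged sketch (the one-point-per-question scale and the function name are assumptions, not the repository's actual Scoring Guide):

```python
# The four SRAL architectural questions, in evaluation order.
QUESTIONS = {
    "state":  "Is state explicit and managed?",
    "reason": "Is reasoning grounded in reality?",
    "act":    "Does action inform reasoning?",
    "learn":  "Is learning architectural?",
}


def sral_score(answers: dict[str, bool]) -> tuple[int, list[str]]:
    """Score one point per affirmative answer; also list failing components."""
    score = sum(bool(answers.get(k, False)) for k in QUESTIONS)
    gaps = [k for k in QUESTIONS if not answers.get(k, False)]
    return score, gaps


# Example: an agent with solid reasoning and action, but fragile state
# and no architectural learning.
score, gaps = sral_score({"state": False, "reason": True,
                          "act": True, "learn": False})
print(score, gaps)  # 2 ['state', 'learn']
```

Because the questions are ordered by dependency, a low score on `state` usually predicts the gaps that appear later in the list.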
State → Reason → Act → Learn
- Reason depends on State
- Act depends on Reason
- Learn depends on all three
When an agent fails, trace backward. The visible failure is usually in Act or Reason. The root cause is usually in State.
┌─────────────────────────────────────────────┐
│ Visible Symptom (usually here): │
│ │
│ ┌─────────┐ │
│ │ ACT │ ← Wrong output │
│ └────┬────┘ │
│ │ │
│ ┌────▼────┐ │
│ │ REASON │ ← Bad decision │
│ └────┬────┘ │
│ │ │
│ Root Cause (usually here): │
│ │
│ ┌────▼────┐ │
│ │ STATE │ ← Lost constraints │
│ └─────────┘ │
│ │
└─────────────────────────────────────────────┘
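Tracing backward can itself be mechanized: check Act, then Reason, then State, and report the deepest failing layer as the likely root cause. A minimal sketch (the checker interface is illustrative, not prescribed by SRAL):

```python
def trace_root_cause(checks: dict[str, bool]) -> str:
    """Walk Act -> Reason -> State; the deepest failing layer is the root cause.

    `checks` maps each layer name to True (healthy) or False (failing).
    """
    order = ["act", "reason", "state"]  # visible symptom first, root cause last
    failing = [layer for layer in order if not checks.get(layer, True)]
    return failing[-1] if failing else "no failure detected"


# A wrong output (Act) caused by a bad decision (Reason) caused by
# lost constraints (State) resolves to the State layer:
print(trace_root_cause({"act": False, "reason": False, "state": False}))  # -> state
```

A failure confined to the surface, e.g. `{"act": False, "reason": True, "state": True}`, resolves to `act`, matching the diagram's claim that the visible symptom and the root cause are only sometimes the same layer.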
- Paper: SRAL: A Framework for Evaluating Agentic AI Architectures
- Blog Post: The Four Questions I Ask About Every Agent
- Visual Diagrams: Simple diagrams explaining SRAL concepts
- Evaluation Tools:
- Quick Checklist - Fast assessment tool
- Detailed Template - Comprehensive evaluation
- Scoring Guide - How to score each component
- Complete Examples:
- LangChain Evaluation - Full evaluation with scores
- Learn SRAL: Read the paper or blog post
- See it in action: Review the LangChain evaluation
- Understand scoring: Read the scoring guide
- Quick assessment: Use the checklist for rapid evaluation
- Deep dive: Use the detailed template for comprehensive analysis
If you use SRAL in your work, please cite:
@misc{sharan2025sral,
  author    = {Sharan, Aakash},
  title     = {SRAL: A Framework for Evaluating Agentic AI Architectures},
  year      = {2025},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18049753},
  url       = {https://doi.org/10.5281/zenodo.18049753}
}

MIT License — see LICENSE for details.