
SRAL Framework

A minimal framework for evaluating agentic AI architectures.

SRAL (State-Reason-Act-Learn) is an evaluation framework that foregrounds State—the constructed, persistent world-model that agents must explicitly maintain—as the foundational component of agent architecture.

The Core Insight

Most agent failures trace not to reasoning or action, but to unmanaged state: context window overflow, lost constraints, and forgotten decisions.

┌──────────────────────────────────────────────────────────────┐
│                                                              │
│  STATE → REASON → ACT → LEARN                                │
│                                                              │
│  Each component depends on the previous one.                 │
│  When an agent fails, trace backward:                        │
│  - Visible failure usually in ACT or REASON                  │
│  - Root cause usually in STATE                               │
│                                                              │
└──────────────────────────────────────────────────────────────┘
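The emphasis on explicit state can be made concrete with a minimal sketch. Everything below (the `AgentState` name, its fields, and the character budget) is a hypothetical illustration, not part of SRAL's codebase: the point is that constraints and decisions live in a managed structure that is deliberately re-injected into each model call, rather than drifting out of a raw conversation transcript.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Explicit, persistent world-model (illustrative sketch, not SRAL code)."""
    goal: str
    constraints: list = field(default_factory=list)
    decisions: list = field(default_factory=list)

    def to_prompt(self, budget_chars: int = 2000) -> str:
        """Re-serialize managed state for every model call, truncating to a
        budget instead of silently overflowing the context window."""
        lines = ["Goal: " + self.goal]
        lines += ["Constraint: " + c for c in self.constraints]
        # Newest decisions first, so truncation drops the oldest ones.
        lines += ["Decision: " + d for d in reversed(self.decisions)]
        out, used = [], 0
        for line in lines:
            if used + len(line) > budget_chars:
                break
            out.append(line)
            used += len(line)
        return "\n".join(out)

state = AgentState(goal="Refactor billing module")
state.constraints.append("Do not change the public API")
state.decisions.append("Use strangler-fig migration")
prompt = state.to_prompt()
```

The contrast is with implicit state, where the same constraints exist only as earlier turns in a growing transcript and vanish once the context window overflows.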

The Four Questions

When evaluating any agent system, ask these questions in order:

┌─────────────────────────────────────────────────────────────┐
│                                                             │
│  1. STATE:  What does it remember?                          │
│             Is state explicit and managed?                  │
│                                                             │
│  2. REASON: How does it decide?                             │
│             Is reasoning grounded in reality?               │
│                                                             │
│  3. ACT:    How does it affect the world?                   │
│             Does action inform reasoning?                   │
│                                                             │
│  4. LEARN:  How does it improve?                            │
│             Is learning architectural?                      │
│                                                             │
└─────────────────────────────────────────────────────────────┘
Component   Architectural Question
State       Is state explicit and managed, or implicit and fragile?
Reason      Is reasoning grounded in reality, or floating in abstraction?
Act         Does action inform reasoning, or merely execute it?
Learn       Is learning architectural, or accidental?
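One way to operationalize the four questions is a per-component rating that rolls up into a single score. The 0-3 scale and equal weighting below are assumptions made for this sketch; the repository's scoring guide defines the actual rubric.

```python
# The four SRAL questions, keyed by component, in dependency order.
SRAL_QUESTIONS = {
    "state": "Is state explicit and managed, or implicit and fragile?",
    "reason": "Is reasoning grounded in reality, or floating in abstraction?",
    "act": "Does action inform reasoning, or merely execute it?",
    "learn": "Is learning architectural, or accidental?",
}

def sral_score(ratings: dict) -> float:
    """Average a 0-3 rating per component (illustrative scale, not the
    official rubric); every component must be rated."""
    missing = set(SRAL_QUESTIONS) - set(ratings)
    if missing:
        raise ValueError("unrated components: " + ", ".join(sorted(missing)))
    return sum(ratings[k] for k in SRAL_QUESTIONS) / len(SRAL_QUESTIONS)

# A system with strong reasoning but weak state management still scores low:
score = sral_score({"state": 1, "reason": 3, "act": 2, "learn": 1})
```

Asking the questions in order matters because a low State rating caps what the later components can deliver, whatever their individual scores.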

The Dependency Model

State → Reason → Act → Learn
  • Reason depends on State
  • Act depends on Reason
  • Learn depends on all three

When an agent fails, trace backward. The visible failure is usually in Act or Reason. The root cause is usually in State.

Failure Tracing Example

┌─────────────────────────────────────────────┐
│  Visible Symptom (usually here):            │
│                                             │
│       ┌─────────┐                           │
│       │   ACT   │ ← Wrong output            │
│       └────┬────┘                           │
│            │                                │
│       ┌────▼────┐                           │
│       │ REASON  │ ← Bad decision            │
│       └────┬────┘                           │
│            │                                │
│  Root Cause (usually here):                 │
│                                             │
│       ┌────▼────┐                           │
│       │  STATE  │ ← Lost constraints        │
│       └─────────┘                           │
│                                             │
└─────────────────────────────────────────────┘
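The backward trace above can be sketched as a simple diagnostic loop. The health checks here are placeholders (a real agent would need its own probes for each layer); the sketch only shows the ordering rule: walk from the visible symptom back along the dependency chain and report the deepest failing component as the likely root cause.

```python
def trace_failure(checks: dict):
    """checks maps component name -> bool (True = healthy).
    Walks Act -> Reason -> State and returns the deepest unhealthy
    component, or None if everything passes."""
    root = None
    for component in ("act", "reason", "state"):  # symptom -> root cause
        if not checks.get(component, True):
            root = component  # keep overwriting: deepest failure wins
    return root

# Symptom is a wrong output in ACT, but STATE also failed (lost constraints),
# so the trace points past the symptom to the root cause:
root = trace_failure({"act": False, "reason": True, "state": False})
```

If only Act had failed, the trace would stop there; the overwrite-as-you-go loop is what encodes "the root cause is usually in State."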

Resources

Quick Start

  1. Learn SRAL: Read the paper or blog post
  2. See it in action: Review the LangChain evaluation
  3. Understand scoring: Read the scoring guide
  4. Quick assessment: Use the checklist for rapid evaluation
  5. Deep dive: Use the detailed template for comprehensive analysis

Citation

If you use SRAL in your work, please cite:

@misc{sharan2025sral,
  author = {Sharan, Aakash},
  title = {SRAL: A Framework for Evaluating Agentic AI Architectures},
  year = {2025},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.18049753},
  url = {https://doi.org/10.5281/zenodo.18049753}
}

License

MIT License — see LICENSE for details.

Author

Aakash Sharan · LinkedIn | Website

About

A minimal framework for evaluating agentic AI architectures. Includes evaluation templates, scoring guide, and case studies (LangChain, AutoGen, CrewAI). Most agent failures trace to state management, not reasoning.
