Decentralized Proof-Oriented AI Framework
A proof-oriented, evidence-driven framework for AI-native software engineering
Canonical Specification • Resources • Discord • Twitter
D-POAF® (Decentralized Proof-Oriented AI Framework) is a proof-oriented, decentralized reference framework for AI-native software engineering. It defines a structured lifecycle model and foundational principles for designing, building, operating, and evolving software in human-AI engineering environments.
D-POAF grounds legitimacy, governance, and accountability in verifiable proof, sustained through end-to-end traceability of intent, decisions, actions, artifacts, proofs, and outcomes.
AI-native engineering introduces decisions and changes that cannot be justified by hierarchy, central control, or performance claims alone. In hybrid human-AI environments, trust, value, and responsibility require:
- Decision authority to be distributed
- Governance to be exercised through evidence
- Legitimacy to be demonstrable through traceable proofs over time
| Traditional Approaches | D-POAF |
|---|---|
| Trust-based processes | Verifiable proof |
| Subjective validation | Evidence-driven decisions |
| Centralized authority | Decentralized governance |
| Static frameworks | Living, adaptive systems |
| Manual ceremonies | Proof-first engineering |
D-POAF is built on five foundational principles:
1. A decision becomes legitimate when justified by explicit, verifiable proof. Hierarchy, automation, or performance alone does not establish legitimacy.
2. Decision authority is distributed across humans, AI, and systems, supported by explicit boundaries and escalation paths. Evidence sustains reviewability and prevents opaque concentration of control.
3. Governance is embedded into workflows and evolves through evidence and outcomes rather than static controls. Rules, constraints, exceptions, and decision rights are maintained as an auditable, lifecycle-wide operating system.
4. Intent, decisions, actions, artifacts, proofs, and outcomes remain linkable to context and contribution (human or AI). Traceability sustains reviewability, reproducibility, and accountability across system evolution (a minimal sketch follows this list).
5. Even with AI autonomy, humans retain explicit responsibility for decision boundaries, escalation rules, and outcome acceptance. Autonomy never abolishes accountability.
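To make the traceability principle concrete, here is a minimal sketch of a linked trace record. It is illustrative only: the `TraceRecord` type, its field names, and the identifiers are assumptions, not part of the D-POAF specification.

```python
# Hypothetical sketch of end-to-end traceability (not part of the D-POAF spec):
# each contribution, human or AI, is linked back to the intent it serves and
# forward to the proofs and outcomes that justify it.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TraceRecord:
    intent_id: str                  # the scoped intent this work serves
    decision_id: str                # the decision that authorized the work
    actor: str                      # "human:<name>" or "ai:<agent>"
    action: str                     # what was done
    artifacts: list[str] = field(default_factory=list)  # commits, docs, models
    proofs: list[str] = field(default_factory=list)     # PoD/PoV/PoR references
    outcome: str | None = None                          # observed result, if any
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: an AI-generated change stays reviewable and attributable.
record = TraceRecord(
    intent_id="INT-042",
    decision_id="DEC-108",
    actor="ai:code-assistant",
    action="generated retry logic for the payment gateway",
    artifacts=["commit:9f3c2ab"],
    proofs=["PoD-77"],
)
```

In practice, records like this would be emitted by tooling at decision and delivery points so that any artifact can be walked back to the intent that justified it and forward to its proofs and outcomes.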
D-POAF structures system evolution as a continuous proof-grounded cycle:
Intent → Decision → Execution → Evidence → Learning → Adaptation
1. Instruct & Scope
Translates intent into a scoped instruction grounded in context, with explicit scope boundaries, initial hypotheses, and proof expectations.
2. Shape & Align
Refines and decomposes scope, aligns decision and investment logic, and prepares execution through prompt action design, risk/acceptance thresholds, and human–AI delegation bounds.
3. Execute & Evolve
Executes prompt actions to produce artifacts, validates delivery through integration and outcomes through review, and sustains reliability through monitoring, producing and refreshing proofs over time.
A Wave is the unit of verifiable progress: it traverses the three macro-phases to produce and refresh proofs (PoD/PoV/PoR). Based on evidence and operational signals, each Wave updates intent, governance, and delegation boundaries.
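As a rough illustration of this lifecycle, the sketch below models a Wave moving through the three macro-phases and accumulating proof references. The `Wave` class, `Phase` enum, and method names are assumptions made for illustration, not defined by the specification.

```python
# Hypothetical sketch of a Wave traversing the three macro-phases;
# names and structure are illustrative, not prescribed by the specification.
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    INSTRUCT_AND_SCOPE = 1
    SHAPE_AND_ALIGN = 2
    EXECUTE_AND_EVOLVE = 3


@dataclass
class Wave:
    wave_id: str
    intent_id: str
    phase: Phase = Phase.INSTRUCT_AND_SCOPE
    proofs: dict[str, list[str]] = field(default_factory=dict)

    def advance(self) -> None:
        """Move to the next macro-phase; a Wave is the unit of verifiable progress."""
        if self.phase is not Phase.EXECUTE_AND_EVOLVE:
            self.phase = Phase(self.phase.value + 1)

    def attach_proof(self, family: str, reference: str) -> None:
        """Record or refresh a proof (e.g. 'PoD', 'PoV', 'PoR') for this Wave."""
        self.proofs.setdefault(family, []).append(reference)


# Example: a Wave reaches Execute & Evolve and records a delivery proof.
wave = Wave(wave_id="WAVE-7", intent_id="INT-042")
wave.advance()
wave.advance()
wave.attach_proof("PoD", "PoD-77")
```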
D-POAF defines three proof families that sustain trust and accountability:
- PoD: Evidence of intended behavior and technical alignment. Validates that the system behaves as specified.
- PoV: Evidence of outcomes and measurable impact. Validates that the delivered capability produces real value.
- PoR: Evidence of sustained quality, safety, and stability over time. Validates that behavior remains dependable as systems evolve.
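The sketch below restates the three proof families as code, pairing each family with the validation question it answers. The `ProofFamily` enum and its wording are illustrative assumptions, not specification text.

```python
# Hypothetical sketch pairing each proof family with the question it answers;
# the enum and wording are illustrative assumptions, not spec text.
from enum import Enum


class ProofFamily(Enum):
    POD = "Does the system behave as specified?"                  # intended behavior, technical alignment
    POV = "Does the delivered capability produce real value?"     # outcomes, measurable impact
    POR = "Does behavior stay dependable as the system evolves?"  # sustained quality, safety, stability


for family in ProofFamily:
    print(f"{family.name}: {family.value}")
```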
Living Governance defines and continuously updates the system's operating envelope through evidence:
- Persistent Intent: Hypotheses, success criteria, proof expectations
- Policies & Guardrails: Constraints, exceptions (versioned and auditable)
- Human–AI Decision Authority: Modes, boundaries, escalation, outcome ownership
- Self-Regulation Loop: Observe → evaluate → adjust → archive, using indicators and signals
- Portfolio Steering: Prioritization and adaptive roadmaps driven by evidence
Governance is not an external overlay; it is a continuous, adaptive, lifecycle-wide operating layer.
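As one possible reading of the self-regulation loop (observe → evaluate → adjust → archive), the sketch below shows a single pass over governance signals. The signal names, thresholds, and guardrail representation are assumptions for illustration, not prescribed by D-POAF.

```python
# Hypothetical sketch of one pass of the self-regulation loop
# (observe -> evaluate -> adjust -> archive); signal names, thresholds,
# and the guardrail representation are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class GovernanceSignal:
    indicator: str      # e.g. "escalation_rate" or "proof_coverage_gap"
    value: float
    threshold: float


def self_regulation_step(
    signals: list[GovernanceSignal],
    guardrails: dict[str, str],
    archive: list[dict],
) -> dict[str, str]:
    """Observe and evaluate signals, adjust guardrails, archive the evidence."""
    # Observe & evaluate: find indicators that breach their thresholds.
    breaches = [s for s in signals if s.value > s.threshold]
    # Adjust: flag the affected guardrails for review (or record an exception).
    for s in breaches:
        guardrails[s.indicator] = (
            f"review required (observed {s.value}, limit {s.threshold})"
        )
    # Archive: keep an auditable record of what was observed and changed.
    archive.append(
        {
            "observed": {s.indicator: s.value for s in signals},
            "adjusted": [s.indicator for s in breaches],
        }
    )
    return guardrails
```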
D-POAF defines horizontal, collaborative roles without rigid hierarchy:
- Wave Surfer: Executes tasks and contributes to delivery
- Wave Captain: Coordinates delivery cycles and facilitates ceremonies
- RAGer: Manages knowledge, context, and documentation
- Peacekeeper: Ensures security and integrity
- Community Member: Participates in collective decisions and reviews
Start with the D-POAF® Canonical Specification to understand the foundational concepts and principles.
- Official Guide: Comprehensive methodology documentation
- Terminology: Shared vocabulary for consistent interpretation
- Templates: Artifacts, checklists, and adoption resources
- Examples: Real-world implementation patterns
You can adopt D-POAF principles incrementally:
- Start with proof-oriented thinking (PoD, PoV, PoR)
- Structure the software lifecycle with Waves and macro-phases
- Implement evidence-driven decision-making
- Adopt horizontal team structure
- Establish end-to-end traceability
- Build living governance practices
- Add D-POAF specific roles to your engineering activities
Connect with other practitioners:
- Discord Community
- GitHub Discussions
- Share your experience and learn from others
D-POAF applies wherever AI influences or contributes to the software engineering lifecycle:
- AI-Integrated Engineering: Native support for human-AI collaboration
- Regulated Industries: Verifiable compliance (finance, healthcare, aerospace)
- Enterprise Software: Audit-compliant delivery with evidence trails
- Safety-Critical Systems: Demonstrable reliability and accountability
- Large-Scale Modernization: Governance for complex transformations
- Responsible AI Programs: Transparent AI governance and oversight
D-POAF® Canonical Specification v1.0
Status: Frozen Canonical Reference
Date: December 5, 2025
License: CC BY 4.0
Publisher: D-POAF Community (initiated by Inovionix)
Canonical reference: https://www.d-poaf.org
D-POAF® Official Terminology v1.0
Status: Active
Date: January 7, 2026
License: CC BY 4.0
Publisher: D-POAF Community (initiated by Inovionix)
Reference: https://d-poaf.org/resources/D-POAF-Terminology-V1.pdf
ISBN: 979-10-415-8736-0
Legal deposit: Bibliothèque nationale de France (BnF), December 2025
Publisher: Inovionix
Authors: Azzeddine Ihsine & Sara Ihsine
Ihsine, A., & Ihsine, S. (2025).
D-POAF Framework: Decentralized Proof-Oriented AI Framework.
Inovionix.
https://www.d-poaf.org
ISBN 979-10-415-8736-0
@book{Ihsine2025DPOAF,
title = {D-POAF Framework: Decentralized Proof-Oriented AI Framework},
author = {Ihsine, Azzeddine and Ihsine, Sara},
year = {2025},
publisher = {Inovionix},
isbn = {979-10-415-8736-0},
url = {https://www.d-poaf.org},
note = {Canonical Specification v1.0}
}

A. Ihsine and S. Ihsine,
"D-POAF Framework: Decentralized Proof-Oriented AI Framework,"
Inovionix, 2025.
ISBN: 979-10-415-8736-0.
[Online]. Available: https://www.d-poaf.org
| Aspect | Traditional Agile | D-POAF |
|---|---|---|
| Legitimacy | Trust & authority | Verifiable proof |
| Decisions | Centralized (PO, SM) | Decentralized & evidence-driven |
| Governance | Static rules | Living, adaptive system |
| Validation | Subjective acceptance | Proof-based (PoD/PoV/PoR) |
| Traceability | Limited to deliverables | End-to-end (intent → outcomes) |
| AI Integration | Afterthought | Native, first-class |
| Accountability | Hierarchical | Distributed execution, humans remain accountable |
No. D-POAF is a complementary framework that can work alongside existing methodologies. It provides proof-oriented thinking, decentralized governance, and AI-native practices that enhance traditional approaches.
Yes! While D-POAF is designed for AI-native engineering, its principles of proof, evidence, and governance apply to any software project where trust and accountability matter.
D-POAF introduces new concepts (Waves, Proofs, Living Governance), but teams familiar with Agile will find many patterns recognizable. Start with the core principles and adopt incrementally.
No. D-POAF applies to any team size. Whether you're working solo, in a small team, a growing company, or a large enterprise, you can adopt the proof-oriented principles that make sense for your context and scale them as your organization evolves.
- Canonical Specification - Foundational concepts
- Official Resources - Comprehensive guides
- Discord Community - Connect with practitioners
D-POAF is a community-driven framework. We welcome contributions in several forms:
- Propose improvements via GitHub Issues
- Submit RFCs for significant changes via GitHub Discussions
- Share implementation experiences in Discord
- Contribute templates and examples
- Answer questions in Discord
- Write tutorials and guides
- Share case studies and adoption stories
- Organize meetups or study groups
- Publish papers using D-POAF
- Conduct empirical studies on adoption
- Develop theoretical extensions
- Cite and reference the framework
All contributions follow D-POAF's own governance principles: evidence-driven, community-reviewed, and transparently documented.
This repository contains different types of content under appropriate licenses:
- D-POAF® Canonical Specification (v1.0): CC BY 4.0
- Framework Documentation: CC BY 4.0
- Terminology & Glossary: CC BY 4.0
- D-POAF® Operating Guide (v3.0): Apache License 2.0
- Templates, Tools & Examples: Apache License 2.0
- D-POAF® Trademark: Registered trademark of Inovionix
Why two licenses?
- CC BY 4.0 for conceptual/specification material
- Apache 2.0 for practical implementation guides and tools
Copyright © 2025 Inovionix - Azzeddine IHSINE & Sara IHSINE

✅ Use D-POAF in your projects (personal or commercial)
✅ Modify and adapt to your needs
✅ Distribute and share
✅ Teach and train others
✅ Publish derivative works (with attribution)
See LICENSE-CC-BY and LICENSE-APACHE for full details.
- Website: https://d-poaf.org
- Discord: https://discord.gg/hm5TQn3neJ
- GitHub Issues: Report issues / Request features
- GitHub Discussions: Design proposals / RFCs
- YouTube: @D-POAFFramework
- Twitter: @inovionix
- Email: contact@inovionix.com
D-POAF was created by:
- Azzeddine Ihsine - Research Engineer (Cybersecurity & AI)
- Sara Ihsine - Research Engineer (Governance & Strategy)
With nearly a decade of experience each in software engineering, AI, and organizational design, we built D-POAF to address the fundamental challenges of AI-native software delivery.
This is a community effort, and we're grateful to all contributors who help shape the future of software engineering.
"Keep it proof-first."
In D-POAF, trust is grounded in verifiable proof, not authority.
D-POAF represents a fundamental shift in how we think about software delivery:
- Verifiable delivery over trust-based processes
- Collective intelligence over individual authority
- Evidence-driven decisions over subjective judgment
- Living systems over rigid methodologies
- Human-AI collaboration over human-only or AI-only approaches
We're not just building a framework. We're building a movement toward more trustworthy, accountable, and intelligent software engineering.
Star this repo • Join Discord • Read the Specification
Building trustworthy AI-native software, one proof at a time.
Made with ❤️ by the D-POAF community