Language-agnostic AI agent skills that enforce fundamental programming principles. This repository provides specific, granular instructions that enable AI coding assistants to produce significantly higher-quality code that adheres to robust engineering standards.
Adopting these skills measurably changes the output of AI models, shifting them from generating merely functional code to producing architecturally sound solutions.
Select your platform for specific setup instructions:
The core of this repository is the `skills/` directory. Each skill is encapsulated in its own subdirectory following the `ps-<name>` convention (e.g., `ps-composition-over-coordination`); a sketch of this layout follows the list below.
We use this granular structure because:
- Focus: It allows the AI to load only the relevant context for a specific task, avoiding context window pollution.
- Modularity: Skills can be improved, versioned, and tested independently.
- Composability: Users can select the specific combination of principles they want to enforce for their project.
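For orientation, here is a rough sketch of that layout. Directory contents are elided, and only `ps-composition-over-coordination` is an actual skill name; the rest is illustrative:

```
skills/
├── ps-composition-over-coordination/
│   └── ...        (the skill's instructions and any supporting files)
└── ps-<name>/
    └── ...
```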
Every skill is validated against a rigorous testing suite found in the `tests/` directory; a simplified sketch of the evaluation flow appears after the list below.
- Automated Judging: We use an LLM-as-a-Judge approach. The system compares the output of a "Baseline" model (without the skill) against a "Skill" model (with the skill loaded).
- Semantics over Syntax: The test does not just look for passing unit tests; it analyzes the logic and structure of the code.
- Evidence-Based: The judge identifies the specific lines of code that demonstrate adherence to or violation of the principle.
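To make that flow concrete, below is a minimal Python sketch of a Baseline-vs-Skill comparison. Every name in it (`evaluate_skill`, `Verdict`, the `generate` and `judge` callables) is an illustrative assumption, not the actual harness shipped in `tests/`:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch only -- names and structure are assumptions,
# not the repository's real test harness.

@dataclass
class Verdict:
    winner: str          # "baseline", "skill", or "tie"
    evidence: list[str]  # specific code lines the judge cites


def evaluate_skill(
    task_prompt: str,
    skill_instructions: str,
    generate: Callable[[str], str],             # runs the model under test
    judge: Callable[[str, str, str], Verdict],  # runs the LLM judge
) -> Verdict:
    """Compare a Baseline run (no skill) against a Skill run (skill loaded)."""
    baseline_output = generate(task_prompt)
    skill_output = generate(skill_instructions + "\n\n" + task_prompt)
    # The judge sees both solutions and must cite the lines that demonstrate
    # adherence to, or violation of, the principle under test.
    return judge(task_prompt, baseline_output, skill_output)
```

The important design point is that the judge returns cited evidence rather than a bare verdict, which keeps each pass/fail decision auditable.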
Read our Case Study on Judge Fairness to see how the system fairly evaluates architectural quality, even when it means failing the Skill model.
The benchmark currently runs 24 evaluations.
- Architecture - Repository design & structure
- Contributing - How to add/modify skills & benchmarks
- AI Prompt Wrapper - Configure your AI assistant
- Changelog - Version history & skill changes
MIT License - see LICENSE