A Claude Code skill for creating skills that transform knowledge into operational formats.
```
npx add-skill https://github.com/daralthus/creating-operational-skills
```

Or via Claude Code:

```
/plugin marketplace add daralthus/creating-operational-skills
/plugin install creating-operational-skills@operational-skills
```

Operationalizing is feature engineering for knowledge.
When you train a machine learning model, raw data often isn't directly useful. You transform it through feature engineering—extracting the signals that matter, reshaping them into formats the model can actually use.
The same principle applies to knowledge in LLM prompts. Raw principles, guidelines, and documentation are like raw data: they contain signal, but not in an immediately actionable form. An LLM reading "ensure quality" has to figure out what quality means in context. An LLM reading a checklist with specific verification steps can execute it directly.
Consider this instruction:
"Write good commit messages that are clear and follow best practices."
Claude knows what good commit messages look like in general. But:
- What's "clear" in your codebase?
- Which best practices matter to your team?
- How do you handle edge cases?
The instruction is too abstract to produce consistent results.
Operational formats—workflows, decision trees, checklists, templates, rubrics—provide affordances that make knowledge directly usable:
| Format | What it does | When to use |
|---|---|---|
| Workflow | Sequential steps | Process is linear |
| Decision tree | Branching logic | Choices depend on conditions |
| Checklist | Binary verification | Things get forgotten |
| Template | Fill-in structure | Output needs consistent shape |
| Rubric | Quality criteria | Judgment is subjective |
| Selector table | Situation → action mapping | Choices are enumerable |
A decision tree for "which testing approach?" gives Claude clear paths. A checklist for "is this commit ready?" gives Claude specific things to verify. A template for "write a bug report" gives Claude exact fields to fill.
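For instance, the vague commit-message instruction above could be operationalized as a checklist. This is an illustrative sketch only; the specific items would come from your team's actual conventions:

```markdown
## Is this commit ready?

- [ ] Subject line is in the imperative mood and under 50 characters
- [ ] Subject references the issue ID when one exists
- [ ] Body explains *why* the change was made, not just what
- [ ] No unrelated changes are bundled into the commit
- [ ] Tests pass locally
```

Each item is a binary check Claude can verify directly, instead of guessing what "clear" and "best practices" mean.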
LLM-powered coding tools like Claude Code operate in a tight loop: understand context → make decisions → take actions → verify results. Each step benefits from operational formats:
- Understanding context: Selector tables map situations to relevant approaches
- Making decisions: Decision trees provide explicit branching logic
- Taking actions: Workflows sequence the steps correctly
- Verifying results: Checklists and rubrics define "done"
Without these, the LLM falls back to general knowledge, which may not match your specific requirements, conventions, or constraints.
Prose works fine for:
- Background context Claude needs to understand the domain
- Flexible guidance where multiple approaches are valid
- Explanations of why (the operational format handles what)
Operationalize when you see:
- Inconsistent outputs despite clear intent
- Wrong decisions at choice points
- Missed steps or quality issues
- Processes that need to be repeatable
```
skills/creating-operational-skills/
├── SKILL.md                     # Main skill instructions
└── references/
    ├── SKILL_STRUCTURE.md       # File structure, frontmatter, naming
    ├── BEST_PRACTICES.md        # Authoring guidelines and patterns based on Anthropic's best practices
    └── OPERATIONAL_TOOLS.md     # Catalog of operational formats
```
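A SKILL.md file opens with YAML frontmatter that Claude uses to decide when to load the skill. A minimal sketch (the field values here are illustrative; see references/SKILL_STRUCTURE.md for the actual conventions):

```yaml
---
name: creating-operational-skills
description: Transforms knowledge into operational formats such as workflows,
  decision trees, checklists, templates, and rubrics. Use when creating a new
  skill or making an existing skill's instructions more actionable.
---
```

The description doubles as the discovery trigger, so it should state both what the skill does and when to use it.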
Once installed, Claude will use this skill when you ask to:
- Create a new skill
- Make a skill
- Automate a workflow as a skill
- Build operational instructions for existing skills
The skill guides you through:
- Identifying the workflow to encode
- Creating the skill structure
- Choosing appropriate operational formats
- Writing effective descriptions for discovery
- Testing and iterating
Apache 2.0