Skill-driven agent toolkit for LangGraph with semantic skill discovery.
Give your LangGraph agents reusable, discoverable skills defined as markdown files. Use it in two ways: as `SkillKit`, a standalone toolkit you wire yourself, or via the `node` metaclass, which gives you a complete ReAct subgraph with dependency injection.
Requires Python 3.11+, langchain-core>=0.3, langgraph>=0.4.
```shell
pip install langchain-skillkit
```

Skills follow the AgentSkills.io specification: each skill is a directory with a SKILL.md and optional reference files:
```
skills/
  market-sizing/
    SKILL.md             # Instructions + frontmatter (name, description)
    calculator.py        # Template — loaded on demand via SkillRead
  competitive-analysis/
    SKILL.md
    swot-template.md     # Reference doc — loaded on demand via SkillRead
    examples/
      output.json        # Example output
```
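The frontmatter fields are what gets surfaced for skill discovery. A minimal sketch of what a SKILL.md might contain and how its frontmatter could be split out; the file contents and the parser below are illustrative, not the library's actual format or code:

```python
# Hypothetical SKILL.md contents for the market-sizing skill above.
skill_md = """\
---
name: market-sizing
description: Estimate total addressable market using top-down and bottom-up methods.
---
# Market Sizing

1. Define the customer segment.
2. Run calculator.py with segment counts and price points.
"""

def parse_frontmatter(text: str) -> dict[str, str]:
    """Split YAML-style frontmatter (between --- fences) into key: value pairs."""
    _, _, rest = text.partition("---\n")
    header, _, _body = rest.partition("---\n")
    meta = {}
    for line in header.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta

meta = parse_frontmatter(skill_md)
print(meta["name"])  # → market-sizing
```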
```python
from langchain_core.tools import tool
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

from langchain_skillkit import node, AgentState

# --- Define tools ---
@tool
def web_search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# --- Declare an agent ---
# Subclassing `node` produces a StateGraph, not a class.
# Call .compile() to get a runnable graph.
class researcher(node):
    llm = ChatOpenAI(model="gpt-4o")
    tools = [web_search]
    skills = "skills/"

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "researcher"}

# --- Compile and use standalone ---
graph = researcher.compile()
result = graph.invoke({"messages": [HumanMessage("Size the B2B SaaS market")]})

# --- Or compose into a parent graph ---
workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher.compile())
workflow.add_edge(START, "researcher")
workflow.add_edge("researcher", END)
graph = workflow.compile()
```

See examples/ for complete working code:
- `standalone_node.py` — Simplest usage: declare a node class, compile, invoke
- `manual_wiring.py` — Use `SkillKit` as a standalone toolkit with full graph control
- `multi_agent.py` — Compose multiple agents in a parent graph
- `root_with_checkpointer.py` — Multi-turn conversations with `interrupt()` and `Command(resume=...)`
- `subgraph_with_checkpointer.py` — Subgraph inherits parent's checkpointer automatically
- `custom_state_type.py` — Custom state shape via handler annotation + subgraph schema translation
`SkillKit` is a toolkit that provides the `Skill` and `SkillRead` tools.
```python
from langchain_skillkit import SkillKit

kit = SkillKit("skills/")
all_tools = [web_search] + kit.tools  # [web_search, Skill, SkillRead]
```

Parameters:

- `skills_dirs` (str | list[str]): Directory or list of directories containing skill subdirectories
Properties:

| Property | Type | Description |
|---|---|---|
| `tools` | `list[BaseTool]` | `[Skill, SkillRead]` — built once, cached |
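"Built once, cached" means repeated access returns the same tool list without rescanning the skill directories. A minimal sketch of that pattern using `functools.cached_property`; the class below is an illustrative stand-in, not the library's implementation:

```python
from functools import cached_property

class ToolkitSketch:
    """Illustrative stand-in for SkillKit's cached `tools` property."""

    def __init__(self, skills_dirs):
        self.skills_dirs = skills_dirs
        self.builds = 0

    @cached_property
    def tools(self):
        # The expensive directory scan runs once; the result is stored
        # on the instance and reused on every later access.
        self.builds += 1
        return ["Skill", "SkillRead"]  # placeholders for the real BaseTool objects

kit = ToolkitSketch(["skills/"])
assert kit.tools is kit.tools  # same cached list object
assert kit.builds == 1         # built exactly once
```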
`node` is a declarative agent builder. Subclassing it produces a `StateGraph`, not a class; call `.compile()` to get a runnable graph.
```python
from langchain_skillkit import node

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")  # Required
    tools = [web_search]              # Optional
    skills = "skills/"                # Optional

    async def handler(state, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response], "sender": "my_agent"}

graph = my_agent.compile()
graph.invoke({"messages": [HumanMessage("...")]})
```

Compile with a checkpointer for `interrupt()` support:

```python
from langgraph.checkpoint.memory import InMemorySaver

graph = my_agent.compile(checkpointer=InMemorySaver())
```

Class attributes:
| Attribute | Required | Description |
|---|---|---|
| `llm` | Yes | Language model instance |
| `tools` | No | List of LangChain tools |
| `skills` | No | Path(s) to skill directories, or a `SkillKit` instance |
Handler signature:

```python
async def handler(state, *, llm, tools, runtime): ...
```

`state` is positional. Everything after `*` is keyword-only and injected by name — declare only what you need:
| Parameter | Type | Description |
|---|---|---|
| `state` | `dict` | LangGraph state (positional, required) |
| `llm` | `BaseChatModel` | LLM pre-bound with all tools via `bind_tools()` |
| `tools` | `list[BaseTool]` | All tools available to the agent |
| `runtime` | `Any` | LangGraph runtime context (passed through from config) |
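Injection by name can be sketched with `inspect.signature`: inspect the handler's keyword-only parameters and pass only the dependencies it declares. This is an illustration of the mechanism, not the library's actual code:

```python
import asyncio
import inspect

async def call_with_injection(handler, state, available: dict):
    """Pass only the keyword-only parameters the handler declares."""
    params = inspect.signature(handler).parameters
    wanted = {
        name: available[name]
        for name, p in params.items()
        if p.kind is inspect.Parameter.KEYWORD_ONLY and name in available
    }
    return await handler(state, **wanted)

async def handler(state, *, llm):  # declares only `llm`
    return {"llm_seen": llm, "n": len(state["messages"])}

deps = {"llm": "fake-llm", "tools": [], "runtime": None}
result = asyncio.run(call_with_injection(handler, {"messages": ["hi"]}, deps))
print(result)  # → {'llm_seen': 'fake-llm', 'n': 1}
```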
Custom state types — annotate the handler's `state` parameter:

```python
from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

class WorkflowState(TypedDict, total=False):
    messages: Annotated[list, add_messages]
    draft: dict | None

class my_agent(node):
    llm = ChatOpenAI(model="gpt-4o")

    async def handler(state: WorkflowState, *, llm):
        response = await llm.ainvoke(state["messages"])
        return {"messages": [response]}
```

Without an annotation, `AgentState` is used by default.
`AgentState` is a minimal LangGraph state type for composing nodes in a parent graph:
```python
from langchain_skillkit import AgentState
from langgraph.graph import StateGraph

workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher.compile())
```

Extend it with your own fields:

```python
class MyState(AgentState):
    current_project: str
    iteration_count: int
```

| Field | Type | Description |
|---|---|---|
| `messages` | `Annotated[list, add_messages]` | Conversation history with LangGraph message reducer |
| `sender` | `str` | Name of the last node that produced output |
- Path traversal prevention: file paths are resolved to absolute paths and checked against the configured skill directories.
- Name validation: skill names are validated per the AgentSkills.io spec — lowercase alphanumeric plus hyphens, 1-64 chars, must match the directory name.
- Tool scoping: each `node` subclass only has access to the tools declared in its `tools` attribute.
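The first two checks can be sketched in a few lines under the rules stated above; the library's actual validation may differ in detail:

```python
import re
from pathlib import Path

# Lowercase alphanumeric segments joined by single hyphens (illustrative regex).
SKILL_NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_skill_name(name: str) -> bool:
    """AgentSkills.io-style naming: lowercase alphanumeric + hyphens, 1-64 chars."""
    return len(name) <= 64 and bool(SKILL_NAME_RE.fullmatch(name))

def is_within(skills_dir: Path, requested: str) -> bool:
    """Reject requested paths that escape the skills directory (e.g. via ../)."""
    root = skills_dir.resolve()
    target = (root / requested).resolve()
    return target.is_relative_to(root)

assert is_valid_skill_name("market-sizing")
assert not is_valid_skill_name("Market_Sizing")
assert is_within(Path("skills"), "market-sizing/SKILL.md")
assert not is_within(Path("skills"), "../secrets.txt")
```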
Developers building multi-agent LangGraph systems face these problems:
- Prompt reuse is manual. The same domain instructions get copy-pasted across agents with no versioning or structure.
- Agents lack discoverability. There's no standard way for an LLM to find and select relevant instructions at runtime.
- Agent wiring is repetitive. Every ReAct agent needs the same graph boilerplate: handler node, tool node, conditional edges.
- Reference files are inaccessible. Templates, scripts, and examples referenced in prompts can't be loaded on demand.
This toolkit solves all four with:
- Skill-as-markdown: reusable instructions with structured frontmatter
- Semantic discovery: the LLM matches user intent to skill descriptions at runtime
- Declarative agents: `class my_agent(node)` gives you a complete ReAct subgraph
- On-demand file loading: `SkillRead` lets the LLM pull reference files when needed
- AgentSkills.io spec compliance: portable skills that work across toolkits
- Full type safety: mypy strict mode support
This toolkit is extracted from a production codebase and is actively maintained. Issues, feature requests, and pull requests are welcome.
```shell
git clone https://github.com/rsmdt/langchain-skillkit.git
cd langchain-skillkit
uv sync --extra dev

uv run pytest --tb=short -q
uv run ruff check src/ tests/
uv run mypy src/
```