A supplementary memory layer for OpenClaw that expands what gets indexed beyond the built-in memory_search. While OpenClaw's native search covers MEMORY.md and memory/*.md, this skill indexes your entire workspace — tasks, CRM, goals, projects, skills, and live chat transcripts.
It's not a replacement for built-in memory. It's the wider net.
Built-in memory_search:
- Indexes `MEMORY.md` and `memory/*.md` only
- Uses SQLite with FTS5 (full-text) + sqlite-vec (vector) — hybrid search
- More sophisticated search engine with embedding cache, file hashing, incremental updates
- No setup required — ships with OpenClaw
This skill (vector-memory):
- Indexes ALL `.md` files in the workspace (tasks, CRM, goals, projects, skills, everything)
- Indexes live chat transcripts via cron (every minute) — conversations are searchable in near real-time
- Uses JSON storage + cosine similarity — simpler, but works fine for typical workspace sizes (<10k chunks)
- Vector-only search (no keyword/FTS component)
- Simple, portable — no native dependencies, works anywhere Node.js runs
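The vector-only approach above can be sketched in a few lines. This is an illustrative shape only, not the skill's actual code — the chunk fields here are assumptions:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force top-k: score every stored chunk against the query
// embedding, sort, and keep the best k. No index structure needed.
function topK(queryVec, chunks, k) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(queryVec, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

Linear scan sounds naive, but at fewer than 10k chunks it runs in milliseconds, which is why a JSON file beats a dedicated vector database for this workload.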
The real value add is scope, not search quality. Built-in search is technically better at finding things within memory files. This skill lets you find things that built-in search doesn't even know about — your task files, project docs, CRM notes, and what was said in yesterday's chat.
Clone into your OpenClaw `skills/` directory:

```bash
cd your-workspace/skills
git clone https://github.com/exbald/openclaw-skill-vector-memory.git vector-memory
cd vector-memory
npm install
```

Environment variables:

- `OPENAI_API_KEY` — Required. OpenAI API key. Reads from `~/.clawdbot/.env` or `~/.openclaw/.env`
- `VECTOR_MEMORY_WORKSPACE` — Workspace root to index. Auto-detected (looks for `AGENTS.md`, `SOUL.md`, `memory/`)
- `VECTOR_MEMORY_DATA_DIR` — Where to store the vector index. Default: `./data` in the skill directory
Index the workspace:

```bash
# Incremental (only changed files)
node index.js

# Full rebuild
node index.js --full
```

Search:

```bash
node search.js "what did we discuss about the API redesign" --limit 5
```

Returns JSON:

```json
{
  "query": "...",
  "results": [
    {
      "file": "memory/2025-01-15.md",
      "startLine": 10,
      "endLine": 25,
      "heading": "API Discussion",
      "score": 0.87,
      "preview": "..."
    }
  ],
  "totalIndexed": 905
}
```

Ingest an arbitrary file or text:

```bash
node ingest.js --file /path/to/document.md
node ingest.js --source "meeting-notes" --text "Today we decided to..."
```

Ingest chat sessions:

```bash
node ingest-sessions.js
```

Reads OpenClaw session JSONL files from `~/.openclaw/agents/main/sessions/`.
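Session ingestion boils down to parsing JSONL line by line. A hedged sketch — the record fields (`role`, `content`) are assumptions about the OpenClaw session format, not documented behavior:

```javascript
// Turn raw JSONL transcript text into indexable message strings.
function extractMessages(jsonlText) {
  const messages = [];
  for (const line of jsonlText.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    let record;
    try {
      record = JSON.parse(line);
    } catch {
      continue; // tolerate a truncated trailing line in a live session file
    }
    // Assumed record shape: { role: "user" | "assistant", content: "..." }
    if (record.role && typeof record.content === "string") {
      messages.push(`${record.role}: ${record.content}`);
    }
  }
  return messages;
}
```

Skipping unparseable lines instead of failing matters here: the cron job may read a session file while OpenClaw is still appending to it.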
- Chunking — Markdown files are split by headings into ~500-800 token chunks
- Embedding — Each chunk is embedded via OpenAI's
text-embedding-3-smallmodel - Storage — Vectors stored as JSON in
data/vectors.json(brute-force, fast for <10k chunks) - Search — Query is embedded and compared via cosine similarity against all chunks
- Incremental — File modification times are tracked; only changed files are re-embedded
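The chunking step above can be sketched like this. It is a simplified illustration, not the skill's actual splitter — in particular, the real code targets a ~500-800 token budget, which this sketch ignores:

```javascript
// Split markdown into one chunk per heading, tracking line ranges
// so search results can point back into the source file.
function chunkByHeadings(markdown) {
  const lines = markdown.split("\n");
  const chunks = [];
  let current = { heading: null, startLine: 1, lines: [] };
  lines.forEach((line, i) => {
    const isHeading = /^#{1,6}\s/.test(line);
    if (isHeading && current.lines.length > 0) {
      chunks.push(current); // close the previous chunk at each new heading
      current = { heading: line.replace(/^#+\s*/, ""), startLine: i + 1, lines: [] };
    } else if (isHeading) {
      current.heading = line.replace(/^#+\s*/, "");
    }
    current.lines.push(line);
  });
  chunks.push(current);
  return chunks.map((c) => ({
    heading: c.heading,
    startLine: c.startLine,
    endLine: c.startLine + c.lines.length - 1,
    text: c.lines.join("\n"),
  }));
}
```

Keeping `startLine`/`endLine` per chunk is what lets search results carry the line ranges shown in the JSON output above.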
- `lib.js` — Shared: embedding, chunking, cosine similarity, storage
- `index.js` — Workspace file indexer (crawls markdown files)
- `search.js` — Semantic search CLI
- `ingest.js` — Ad-hoc content ingestion
- `ingest-sessions.js` — OpenClaw chat session ingestion
- `data/` — Runtime data (gitignored, per-user)
- Node.js 18+
- OpenAI API key (for embeddings)
- OpenClaw workspace (optional — core search works standalone)
The core (`lib.js` + `search.js`) works outside OpenClaw. Set `VECTOR_MEMORY_WORKSPACE` to any directory with markdown files, run `index.js`, then `search.js`.
MIT