Temporal memory system that understands when things happened, not just what happened
WhenM is a schemaless temporal memory system that gives AI applications the ability to understand time, state changes, and causality. Unlike traditional databases or RAG systems, WhenM natively understands that facts change over time.
| Aspect | RAG | WhenM |
|---|---|---|
| Time Understanding | ❌ None | ✅ Native temporal reasoning |
| State Changes | ❌ Can't track | ✅ Tracks all transitions |
| Contradictions | ❌ Returns all versions | ✅ Resolves by timeline |
| Schema | ❌ | ✅ Completely schemaless |
| Query | "What is X?" | "What was X at time Y?" |
# Install
npm install @aid-on/whenm
# Setup (copy and edit .env)
cp .env.example .env

import { WhenM } from '@aid-on/whenm';
// Initialize (uses mock LLM by default, or your API keys from .env)
const memory = await WhenM.auto();
// Or explicitly use mock for testing
const memory = await WhenM.mock();
// Or use Groq (recommended for production)
const memory = await WhenM.groq(
process.env.GROQ_API_KEY // Get from https://console.groq.com/keys
);
// Remember events - any language, any domain
await memory.remember("Alice joined as engineer", "2020-01-15");
await memory.remember("Alice became team lead", "2022-06-01");
await memory.remember("Pikachu learned Thunderbolt", "2023-01-01");
// Ask temporal questions
await memory.ask("What was Alice's role in 2021?");
// → "engineer"
await memory.ask("What is Alice's current role?");
// → "team lead"
await memory.ask("When did Pikachu learn Thunderbolt?");
// → "January 1, 2023"

No schemas, no configuration, no entity definitions. WhenM understands any concept in any language through LLM integration.
// Gaming domain
await memory.remember("Mario collected a fire flower", "2024-01-01");
// Cooking domain
await memory.remember("Added salt to the soup", "2024-02-01");
// Business domain
await memory.remember("Tanaka became director", "2024-03-01");
// All work without any setup!

Built on formal Event Calculus, providing mathematically sound temporal logic for natural language queries about time and state changes.
The query refinement layer automatically handles multiple languages and domains.
// Japanese example
await memory.remember("Pikachu learned Thunderbolt");
// Spanish example
await memory.remember("El gato subió al árbol");
// English with emojis
await memory.remember("๐ launched to Mars");npm install @aid-on/whenmimport { WhenM } from '@aid-on/whenm';
// Simple string format (provider:apikey)
const memory = await WhenM.create('groq:your-api-key');
// With model specification
const memory = await WhenM.create('groq:your-api-key:llama-3.3-70b-versatile');
// Unified config object
const memory = await WhenM.create({
provider: 'groq',
apiKey: process.env.GROQ_API_KEY,
model: 'llama-3.3-70b-versatile'
});
// Provider-specific helpers
const memory = await WhenM.groq(process.env.GROQ_API_KEY);
const memory = await WhenM.gemini(process.env.GEMINI_API_KEY);
const memory = await WhenM.cloudflare({
apiKey: process.env.CLOUDFLARE_API_KEY,
accountId: process.env.CLOUDFLARE_ACCOUNT_ID,
email: process.env.CLOUDFLARE_EMAIL
});

// Simple event
await memory.remember("Project started", "2024-01-01");
// Complex state change
await memory.remember("Bob promoted to manager", "2024-06-01");
// Multilingual support
await memory.remember("Experiment succeeded", "2024-07-01");

// Natural language queries
await memory.ask("What happened in January?");
await memory.ask("Who became manager this year?");
await memory.ask("What is the current status of the project?");
// All queries use natural language through the ask() method
const events = await memory.ask("What did Alice do between January and December 2024?");
const statusInMarch = await memory.ask("What was Project-X status on March 15, 2024?");
const recentChanges = await memory.ask("What happened with Project-X in the last 30 days?");

WhenM includes a sophisticated refinement layer that standardizes queries across languages:
// These all work seamlessly:
await memory.ask("What is Alice's role?");
await memory.ask("¿Cuál es el rol de Alice?");

For better multilingual support:
const memory = await WhenM.cloudflare({
accountId: process.env.CLOUDFLARE_ACCOUNT_ID,
apiKey: process.env.CLOUDFLARE_API_KEY,
email: process.env.CLOUDFLARE_EMAIL,
enableRefiner: true // Enable multilingual query refinement
});
⚠️ Note: The persistence feature is experimental and has not been fully tested in production. Use with caution.
WhenM provides a pluggable persistence layer for durable storage:
// Default - events stored in memory only
const memory = await WhenM.cloudflare(config);

// Cloudflare D1 for durable storage
const memory = await WhenM.cloudflare({
accountId: process.env.CLOUDFLARE_ACCOUNT_ID,
apiKey: process.env.CLOUDFLARE_API_KEY,
email: process.env.CLOUDFLARE_EMAIL,
persistenceType: 'd1',
persistenceOptions: {
database: env.DB, // D1 database binding
tableName: 'whenm_events', // Optional: custom table name
namespace: 'my-app' // Optional: namespace for multi-tenancy
}
});
// Save current state
await memory.persist();
// Restore from database
await memory.restore();
// Restore with filters
await memory.restore({
timeRange: { from: '2024-01-01', to: '2024-12-31' },
limit: 1000
});
// Check persistence stats
const stats = await memory.persistenceStats();
console.log(`Total persisted events: ${stats.totalEvents}`);

// Implement your own persistence
class MyCustomPersistence {
async save(event) { /* ... */ }
async load(query) { /* ... */ }
async stats() { /* ... */ }
// ... other required methods
}
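To make that skeleton concrete, here is a minimal in-memory adapter. The `save`/`load`/`stats` method names come from the skeleton above; the event shape and the `limit` query option are assumptions based on the `restore()` examples, and WhenM may require additional methods.

```typescript
type StoredEvent = { id: string; text: string; time: number };

// Minimal in-memory persistence adapter (illustrative sketch only).
class InMemoryPersistence {
  private events: StoredEvent[] = [];

  // Append one event to the in-memory store.
  async save(event: StoredEvent): Promise<void> {
    this.events.push(event);
  }

  // Return events in chronological order, optionally capped by `limit`.
  async load(query?: { limit?: number }): Promise<StoredEvent[]> {
    const sorted = [...this.events].sort((a, b) => a.time - b.time);
    return query?.limit ? sorted.slice(0, query.limit) : sorted;
  }

  // Report how many events are currently persisted.
  async stats(): Promise<{ totalEvents: number }> {
    return { totalEvents: this.events.length };
  }
}
```

An adapter like this is handy in tests, where you want persistence semantics without a real D1 binding.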
const memory = await WhenM.cloudflare({
// ... config
persistenceType: 'custom',
persistenceOptions: new MyCustomPersistence()
});

// Core persistence methods
await memory.persist(); // Save all events to storage
await memory.restore(); // Load all events from storage
await memory.restore({ limit: 100 }); // Load with query filters
const stats = await memory.persistenceStats(); // Get storage statistics
// Export/Import Prolog format
const prolog = await memory.exportProlog();
await memory.importProlog(prolog);

WhenM combines three powerful technologies:
- Event Calculus - Formal temporal logic for reasoning about time
- Trealla Prolog - High-performance logical inference engine (WASM)
- LLM Integration - Natural language understanding without schemas
The system processes information through 5 stages:
Input → Language Normalization → Semantic Decomposition → Temporal Logic → Response
Input:
await memory.remember("Taro became manager", "2024-03-01");

Stage 1: Language Normalization
{
"original": "Taro became manager",
"language": "ja",
"refined": "Taro became manager",
"entities": ["Taro"]
}

Stage 2: Semantic Analysis (LLM)
{
"subject": "taro",
"verb": "became",
"object": "manager",
"temporalType": "STATE_UPDATE",
"affectedFluent": {
"domain": "role", // Dynamically determined
"value": "manager",
"isExclusive": true // Only one role at a time
}
}

Stage 3: Prolog Facts Generation
event_fact("evt_1234", "taro", "became", "manager").
happens("evt_1234", 1709251200000).
initiates("evt_1234", role("taro", "manager")).
is_exclusive_domain(role).

Input:
await memory.ask("What is Taro's current role?");

Prolog Query:
current_state("taro", role, Value)

Event Calculus Processing:
- Finds the latest `initiates("evt_1234", role("taro", "manager"))`
- Checks that no newer role change exists (clipping check)
- Returns `Value = "manager"`
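The clipping check can be sketched in plain TypeScript (a simplified model for illustration, not WhenM's actual Prolog implementation): for an exclusive domain such as `role`, the value that holds at time `t` is the one initiated by the latest event at or before `t`. Timestamps are Unix epoch milliseconds, matching the `happens/2` facts above (1709251200000 is 2024-03-01T00:00:00Z).

```typescript
type Initiation = { time: number; entity: string; domain: string; value: string };

// Simplified Event Calculus "holds at" for an exclusive domain:
// the latest initiation at or before t wins; any later initiation
// of the same domain "clips" earlier ones.
function holdsAt(
  facts: Initiation[],
  entity: string,
  domain: string,
  t: number
): string | undefined {
  const candidates = facts
    .filter((f) => f.entity === entity && f.domain === domain && f.time <= t)
    .sort((a, b) => b.time - a.time); // newest first
  return candidates[0]?.value;
}

const facts: Initiation[] = [
  { time: Date.parse("2024-03-01T00:00:00Z"), entity: "taro", domain: "role", value: "manager" },
];

console.log(holdsAt(facts, "taro", "role", Date.parse("2024-06-01T00:00:00Z"))); // "manager"
console.log(holdsAt(facts, "taro", "role", Date.parse("2024-01-01T00:00:00Z"))); // undefined (before the event)
```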
Traditional systems require predefined schemas:
// ❌ Hardcoded approach
if (verb === "became") domain = "role";
if (verb === "learned") domain = "skill";

WhenM dynamically understands any concept:
// ✅ Dynamic understanding
"Pikachu learned Thunderbolt" → {domain: "skill", value: "thunderbolt", isExclusive: false}
"Robot battery at 80%" → {domain: "battery", value: "80", isExclusive: true}
"Alien transformed into energy" → {domain: "form", value: "energy", isExclusive: true}

The LLM determines the semantic meaning, domain, and exclusivity rules dynamically, enabling the system to handle any new concept without code changes.
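The exclusivity flag drives how state accumulates over time. A minimal sketch (my own illustration, not WhenM internals): exclusive domains like `role` keep only the newest value, while non-exclusive domains like `skill` accumulate.

```typescript
type Fluent = { domain: string; value: string; isExclusive: boolean };

// Apply a time-ordered list of fluent initiations to an entity's state.
// Exclusive domains are overwritten (the new value clips the old one);
// non-exclusive domains accumulate.
function applyFluents(events: Fluent[]): Map<string, string[]> {
  const state = new Map<string, string[]>();
  for (const e of events) {
    if (e.isExclusive) {
      state.set(e.domain, [e.value]); // replace: only one value holds at a time
    } else {
      state.set(e.domain, [...(state.get(e.domain) ?? []), e.value]); // accumulate
    }
  }
  return state;
}

const state = applyFluents([
  { domain: "skill", value: "thunderbolt", isExclusive: false },
  { domain: "skill", value: "quick-attack", isExclusive: false },
  { domain: "role", value: "engineer", isExclusive: true },
  { domain: "role", value: "team lead", isExclusive: true },
]);

console.log(state.get("role"));  // ["team lead"]
console.log(state.get("skill")); // ["thunderbolt", "quick-attack"]
```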
- Insert Speed: 25,000+ events/second
- Query Speed: 1-30ms for typical queries
- Memory: Optimized for edge (runs in Cloudflare Workers)
- Languages: Any human language supported
const hr = await WhenM.cloudflare(config);
// Track career progression with full context
await hr.remember("Sarah joined as Junior Developer", "2021-01-15");
await hr.remember("Sarah completed React certification", "2021-06-20");
await hr.remember("Sarah led the payment module project", "2021-09-01");
await hr.remember("Sarah promoted to Senior Developer", "2022-01-15");
await hr.remember("Sarah became Tech Lead", "2023-06-01");
// Temporal performance queries
const review = await hr.ask("What achievements led to Sarah's promotion to Senior?");
// → "Completed React certification and successfully led payment module project"
// Compare growth between employees
const sarahGrowth = await hr.ask("How did Sarah's career progress from January 2021 to January 2024?");
const johnGrowth = await hr.ask("How did John's career progress from January 2021 to January 2024?");
// → Career progression comparison
// Find high performers
const fastGrowth = await hr.ask("Who was promoted, awarded, or recognized in the last 12 months?");
// → List of employees with recent achievements

const medical = await WhenM.cloudflare(config);
// Complex medical timeline
await medical.remember("Patient diagnosed with hypertension", "2020-03-15");
await medical.remember("Started lisinopril 10mg daily", "2020-03-20");
await medical.remember("Blood pressure improved to 130/80", "2020-06-15");
await medical.remember("Developed dry cough side effect", "2020-09-01");
await medical.remember("Switched to losartan 50mg", "2020-09-05");
await medical.remember("Blood pressure stabilized to normal", "2021-01-15");
// Critical temporal queries for treatment decisions
const currentMeds = await medical.ask("What medication is the patient currently taking?");
// → Current medication and conditions
const medicationHistory = await medical.ask("Why was the medication changed in September 2020?");
// → "Lisinopril caused dry cough side effect, switched to losartan"
// Track treatment effectiveness over time
const bpHistory = await medical.ask("What were the blood pressure measurements in the last 6 months?");
// → Blood pressure trends for treatment evaluation

const agent = await WhenM.cloudflare(config);
// Agent learns and adapts over time
await agent.remember("User prefers TypeScript over JavaScript", "2024-01-01");
await agent.remember("User works in Tokyo timezone", "2024-01-05");
await agent.remember("User dislikes verbose explanations", "2024-01-10");
await agent.remember("Failed to solve bug with approach A", "2024-02-01");
await agent.remember("Successfully solved bug with approach B", "2024-02-01");
// Context-aware responses based on temporal memory
const preferences = await agent.ask("What are the user's preferences?");
// → All current user preferences and learned patterns
const debugging = await agent.ask("What debugging approach should I try?");
// → "Use approach B, as approach A previously failed"
// Learn from interaction patterns
const interactions = await agent.ask("What failed, succeeded, or errored in the last 30 days?");
// → Analyze success/failure patterns to improve

const ops = await WhenM.cloudflare(config);
// Track incident timeline
await ops.remember("CPU usage spiked to 95%", "2024-03-15 14:30");
await ops.remember("Database connection pool exhausted", "2024-03-15 14:31");
await ops.remember("API response time degraded to 5s", "2024-03-15 14:32");
await ops.remember("Deployed hotfix PR #1234", "2024-03-15 14:45");
await ops.remember("System recovered", "2024-03-15 14:50");
// Root cause analysis with temporal reasoning
const rca = await ops.ask("What caused the API degradation?");
// → "CPU spike led to connection pool exhaustion, causing API degradation"
// Pattern detection across incidents
const patterns = await ops.ask("What spiked, exhausted, or degraded in the last 90 days?");
// → Identify recurring issues
// Automated incident correlation
const correlation = await ops.ask("What happened with the system between 2:00 PM and 3:00 PM on March 15, 2024?");
// → Complete incident timeline for postmortem

const audit = await WhenM.cloudflare(config);
// Maintain complete audit trail
await audit.remember("Account opened by John", "2023-01-15");
await audit.remember("KYC verification completed", "2023-01-16");
await audit.remember("$50,000 deposited from Chase Bank", "2023-02-01");
await audit.remember("Flagged for unusual activity", "2023-03-15");
await audit.remember("Manual review cleared", "2023-03-16");
await audit.remember("Account upgraded to Premium", "2023-06-01");
// Compliance queries
const kycStatus = await audit.ask("Was KYC completed before the first transaction?");
// → "Yes, KYC completed on Jan 16, first transaction on Feb 1"
// Suspicious activity tracking
const flagged = await audit.ask("What was flagged, suspended, or investigated in 2023?");
// → All compliance events for regulatory reporting
// Account state at any point for legal inquiries
const snapshot = await audit.ask("What was the account status on March 15, 2023?");
// → Exact account state when flagged

const game = await WhenM.cloudflare(config);
// Rich player history
await game.remember("Player discovered hidden dungeon", "2024-01-01 10:00");
await game.remember("Player defeated Dragon Boss", "2024-01-01 11:30");
await game.remember("Player earned 'Dragon Slayer' title", "2024-01-01 11:31");
await game.remember("Player joined guild 'Knights'", "2024-01-02");
await game.remember("Won guild battle", "2024-01-03");
// Personalized gameplay based on history
const achievements = await game.ask("What titles and skills does the player have?");
// → All titles, skills, and progression
// Quest eligibility based on temporal conditions
const eligible = await game.ask("Can player start the 'Ancient Evil' quest?");
// → "Yes, player has defeated Dragon Boss and joined a guild"
// Leaderboard with time-based scoring
const weeklyChamps = await game.ask("Who defeated bosses, completed quests, or won battles in the last 7 days?");
// → This week's most active players

const iot = await WhenM.cloudflare(config);
// Continuous sensor monitoring
await iot.remember("Machine-A vibration increased to 0.8mm/s", "2024-03-01");
await iot.remember("Machine-A temperature at 75°C", "2024-03-02");
await iot.remember("Machine-A bearing noise detected", "2024-03-03");
await iot.remember("Machine-A scheduled maintenance", "2024-03-05");
await iot.remember("Machine-A bearing replaced", "2024-03-05");
// Predictive maintenance queries
const warning = await iot.ask("What signs preceded the bearing failure?");
// → "Vibration increased, temperature rose, then noise detected"
// Pattern recognition across fleet
const maintenance = await iot.ask("What increased, was detected, or failed in the last 30 days?");
// → Identify machines showing similar patterns
// Optimal maintenance scheduling
const machineState = await iot.ask("How did Machine-A's condition change from February to March 2024?");
// → Degradation rate for maintenance planning

remember() - Records an event at a specific time.
ask() - Answers questions using temporal reasoning. This is the primary interface for all queries.
All queries are performed through natural language using the ask() method:
// Temporal queries
await memory.ask("What happened in January 2024?");
await memory.ask("What is Alice's current role?");
await memory.ask("When did Bob learn Python?");
await memory.ask("Who joined the company last year?");
// State queries
await memory.ask("What skills does Alice have?");
await memory.ask("Where does Bob currently work?");
// Historical queries
await memory.ask("What was the status on March 15?");
await memory.ask("How did things change between February and April?");
// Complex queries
await memory.ask("Who was promoted in the last 12 months?");
await memory.ask("What failures occurred before the system recovery?");

The LLM-powered query system understands:
- Temporal relationships (before, after, during, between)
- State transitions (became, changed, updated)
- Current vs historical states
- Aggregations (who, what, when, how many)
- Causal relationships (why, what caused)
- Node.js 18+
- LLM Provider API credentials (required - one of the following):
- Cloudflare AI (account ID, API key, email)
- Groq API key
- Google Gemini API key
# Cloudflare AI
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_API_KEY=your_api_key
CLOUDFLARE_EMAIL=your_email
# Or Groq
GROQ_API_KEY=your_groq_key
# Or Gemini
GEMINI_API_KEY=your_gemini_key

# Run unit tests only (fast)
npm run test:unit
# Run integration tests only (requires API keys or uses mock)
npm run test:integration
# Run all tests
npm run test:all
# Run tests with coverage
npm run test:coverage
# Watch mode for development
npm run test:watch

- Query Builder API: Structured query interface (currently all queries use natural language)
- Timeline API: Dedicated timeline tracking and analysis
- Advanced Persistence: Production-ready storage backends
- Performance Optimizations: Faster Prolog integration
- Extended Language Support: More LLM providers
MIT © Aid-On
WhenM stands on the shoulders of giants:
- Trealla Prolog - WebAssembly-powered Prolog engine providing the logical reasoning foundation
- Event Calculus - Formal temporal logic framework for rigorous time-based reasoning
- @aid-on/unillm - Unified LLM interface enabling seamless multi-provider support
- The Trealla Prolog team for their excellent WASM implementation
- The Event Calculus research community for decades of temporal logic advancement
- The Aid-On team for continuous support and innovation