Transform AWS S3 into a powerful document database
Cost-effective storage • Automatic encryption • ORM-like interface • Streaming API
s3db.js is a document database that transforms AWS S3 into a fully functional database using S3's metadata capabilities. Instead of traditional storage methods, it stores document data in S3's metadata fields (up to 2KB), making it highly cost-effective while providing a familiar ORM-like interface.
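Conceptually, the storage model looks like this — an illustrative sketch using the raw AWS SDK (the bucket name, key layout, and empty body here are assumptions for illustration; s3db.js's exact wire format may differ):

```js
import { S3Client, PutObjectCommand, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

// The document's fields live in S3 user metadata, not in the object body
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'databases/myapp/resource=users/data/id=user-123',
  Body: '', // body can stay empty; metadata carries the document (≤2KB)
  Metadata: { name: 'John Doe', email: 'john@example.com' }
}));

// A HeadObject request - no body download - is enough to read the record
const { Metadata } = await s3.send(new HeadObjectCommand({
  Bucket: 'my-bucket',
  Key: 'databases/myapp/resource=users/data/id=user-123'
}));
console.log(Metadata); // { name: 'John Doe', email: 'john@example.com' }
```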
Perfect for:
- 🌐 Serverless applications - No database servers to manage
- 💰 Cost-conscious projects - Pay only for what you use
- 🔒 Secure applications - Built-in encryption and validation
- 📊 Analytics platforms - Efficient data streaming and processing
- 🚀 Rapid prototyping - Get started in minutes, not hours
- 🚀 What is s3db.js?
- ✨ Key Features
- 🚀 Quick Start
- 💾 Installation
- 🗄️ Database
- 🪵 Logging
- 📋 Resources
- ⚡ Performance & Concurrency
- 🔌 Plugins
- 🤖 MCP & Integrations
- 🔧 CLI
- 📖 Documentation
Core Concepts: Schema Validation • Client API • Fastest Validator
Plugins: API Plugin • Identity Plugin • All Plugins
Integrations: MCP Server • Model Context Protocol
Advanced: Executor Pool Benchmark • Performance Tuning • Migration Guides • TypeScript Support
Get up and running in less than 5 minutes!
npm install s3db.js

Need deeper telemetry? Pass `taskExecutorMonitoring` alongside `executorPool`. It merges into the pool's monitoring block, making it easy to enable verbose stats/heap tracking for any database instance without touching individual resources.
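For example (the same fields appear in the full configuration reference later in this README):

```js
import { S3db } from "s3db.js";

const s3db = new S3db({
  connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
  executorPool: { concurrency: 100 },                             // per-database executor pool
  taskExecutorMonitoring: { enabled: true, collectMetrics: true } // merged into the pool's monitoring block
});
```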
import { S3db } from "s3db.js";
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
});
await s3db.connect();
console.log("🎉 Connected to S3 database!");⚡ Performance Tip: s3db.js comes with optimized HTTP client settings by default for excellent S3 performance. The default configuration includes keep-alive enabled, balanced connection pooling, and appropriate timeouts for most applications.
ℹ️ Note: You do not need to provide `ACCESS_KEY` and `SECRET_KEY` in the connection string if your environment already has S3 permissions (e.g., via an IAM Role on EKS, EC2, Lambda, or other compatible clouds). s3db.js uses the default AWS credential provider chain, so credentials can be omitted for role-based or environment-based authentication. The same applies to S3-compatible providers (MinIO, DigitalOcean Spaces, etc.) that support such mechanisms.
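For example, on an EC2/EKS/Lambda host with an appropriate IAM role (same format as in the Connection Strings section below):

```js
// Credentials omitted - resolved via the default AWS credential provider chain
const s3db = new S3db({
  connectionString: "s3://BUCKET_NAME/databases/myapp?region=us-east-1"
});
await s3db.connect();
```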
Schema validation powered by fastest-validator ⚡
const users = await s3db.createResource({
name: "users",
attributes: {
name: "string|min:2|max:100",
email: "email|unique",
age: "number|integer|positive",
isActive: "boolean"
},
timestamps: true
});

// Insert a user
const user = await users.insert({
name: "John Doe",
email: "john@example.com",
age: 30,
isActive: true,
createdAt: new Date()
});
// Query the user
const foundUser = await users.get(user.id);
console.log(`Hello, ${foundUser.name}! 👋`);
// Update the user
await users.update(user.id, { age: 31 });
// List all users
const allUsers = await users.list();
console.log(`Total users: ${allUsers.length}`);

That's it! You now have a fully functional document database running on AWS S3. 🎉
Enhance your database with powerful plugins for production-ready features:
import { S3db, TTLPlugin, RelationPlugin, ReplicatorPlugin, CachePlugin } from "s3db.js";
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp",
plugins: [
// Auto-cleanup expired records (no cron jobs needed!)
new TTLPlugin({
resources: {
sessions: { ttl: 86400, onExpire: 'soft-delete' } // 24h
}
}),
// ORM-like relationships with 10-100x faster queries
new RelationPlugin({
relations: {
users: {
posts: { type: 'hasMany', resource: 'posts', foreignKey: 'userId' }
}
}
}),
// Real-time replication to BigQuery, PostgreSQL, etc.
new ReplicatorPlugin({
replicators: [{
driver: 'bigquery',
config: { projectId: 'my-project', datasetId: 'analytics' },
resources: { users: 'users_table', posts: 'posts_table' }
}]
}),
// Cache frequently accessed data (memory, S3, or filesystem)
new CachePlugin({
driver: 'memory',
ttl: 300000 // 5 minutes
})
]
});

Learn more about available plugins and their features in the Plugin Documentation.
# npm
npm install s3db.js
# pnpm
pnpm add s3db.js
# yarn
yarn add s3db.js

Some features require additional dependencies to be installed manually:
If you plan to use the API plugin, install these dependencies:
# Core API dependencies (required)
npm install hono
# HTTP logging (optional, recommended)
npm install pino-http
# Authentication (optional)
npm install jose # For JWT auth

If you plan to use the replicator system with external services, install the corresponding dependencies:
# For SQS replicator (AWS SQS queues)
npm install @aws-sdk/client-sqs
# For BigQuery replicator (Google BigQuery)
npm install @google-cloud/bigquery
# For PostgreSQL replicator (PostgreSQL databases)
npm install pg

Why manual installation? These are marked as peerDependencies to keep the main package lightweight (~500KB). Only install what you need!
Contributing to s3db.js? Use our modular installation system to install only what you need:
# Clone the repo
git clone https://github.com/forattini-dev/s3db.js.git
cd s3db.js
# Install base dependencies (required)
pnpm install
# Choose your dev setup:
./scripts/install-deps.sh minimal # Core only (~50MB)
./scripts/install-deps.sh common # + Replicators + Plugins (~500MB)
./scripts/install-deps.sh full # Everything (~2GB)
# Or install specific groups:
pnpm run install:dev:replicators # PostgreSQL, BigQuery, etc.
pnpm run install:dev:plugins # API, Identity, ML, etc.
pnpm run install:dev:puppeteer # Web scraping suite
pnpm run install:dev:cloud # AWS SDK clients

See DEVELOPMENT.md for detailed setup instructions and a breakdown of the dependency groups.
s3db.js includes comprehensive TypeScript definitions out of the box. Get full type safety, autocomplete, and IntelliSense support in your IDE!
import { Database, DatabaseConfig, Resource } from 's3db.js';
// Type-safe configuration
const config: DatabaseConfig = {
connectionString: 's3://ACCESS_KEY:SECRET@bucket/path',
logLevel: 'debug',
executorPool: { concurrency: 100 } // Default - nested under executorPool
};
const db = new Database(config);
// TypeScript knows all methods and options!
await db.createResource({
name: 'users',
attributes: {
name: 'string|required',
email: 'string|required|email',
age: 'number|min:0'
}
});
// Full autocomplete for all operations
const users: Resource<any> = db.resources.users;
const user = await users.insert({ name: 'Alice', email: 'alice@example.com', age: 28 });

For even better type safety, auto-generate TypeScript interfaces from your resources:
import { generateTypes } from 's3db.js/typescript-generator';
// Generate types after creating resources
await generateTypes(db, { outputPath: './types/database.d.ts' });

See the complete example in docs/examples/typescript-usage-example.ts.
A Database is a logical container for your resources, stored in a specific S3 bucket path. The database manages resource metadata, connections, and provides the core interface for all operations.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connectionString` | string | required | S3 connection string (see formats below) |
| `httpClientOptions` | object | optimized | HTTP client configuration for S3 requests |
| `logLevel` | boolean | `false` | Enable debug logging |
| `parallelism` | number | `100` | Concurrent operations for bulk operations (separate executor pools per database) |
| `versioningEnabled` | boolean | `false` | Enable automatic resource versioning |
| `passphrase` | string | `'secret'` | Default passphrase for field encryption |
| `plugins` | array | `[]` | Array of plugin instances to extend functionality |
s3db.js supports multiple connection string formats for different S3 providers:
// AWS S3 (with credentials)
"s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp?region=us-east-1"
// AWS S3 (IAM role - recommended for production)
"s3://BUCKET_NAME/databases/myapp?region=us-east-1"
// MinIO (self-hosted)
"http://minioadmin:minioadmin@localhost:9000/bucket/databases/myapp"
// Digital Ocean Spaces
"https://SPACES_KEY:SPACES_SECRET@nyc3.digitaloceanspaces.com/SPACE_NAME/databases/myapp"
// LocalStack (local testing)
"http://test:test@localhost:4566/mybucket/databases/myapp"
// MemoryClient (ultra-fast in-memory testing - no S3 required!)
"memory://mybucket/databases/myapp"
// Backblaze B2
"https://KEY_ID:APPLICATION_KEY@s3.us-west-002.backblazeb2.com/BUCKET/databases/myapp"
// Cloudflare R2
"https://ACCESS_KEY:SECRET_KEY@ACCOUNT_ID.r2.cloudflarestorage.com/BUCKET/databases/myapp"🔑 Complete authentication examples
const s3db = new S3db({
connectionString: "s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME/databases/myapp"
});

// No credentials needed - uses IAM role permissions
const s3db = new S3db({
connectionString: "s3://BUCKET_NAME/databases/myapp"
});

// MinIO running locally (note: http:// protocol and port)
const s3db = new S3db({
connectionString: "http://minioadmin:minioadmin@localhost:9000/mybucket/databases/myapp"
});

// Digital Ocean Spaces (NYC3 datacenter)
const s3db = new S3db({
connectionString: "https://SPACES_KEY:SPACES_SECRET@nyc3.digitaloceanspaces.com/SPACE_NAME/databases/myapp"
});

For testing, s3db.js provides MemoryClient - a pure in-memory implementation that's 100-1000x faster than LocalStack and requires zero dependencies.
Why MemoryClient?
- ⚡ 100-1000x faster than LocalStack/MinIO
- 🎯 Zero dependencies - no Docker, LocalStack, or S3 needed
- 💯 100% compatible - same API as S3Client
- 🧪 Perfect for tests - instant setup and teardown
- 💾 Optional persistence - save/load snapshots to disk
Quick Start with Connection String:
import { S3db } from 's3db.js';
// Simple - just use memory:// protocol!
const db = new S3db({
connectionString: 'memory://mybucket'
});
await db.connect();

Alternative - Manual Instantiation:
import { S3db, MemoryClient } from 's3db.js';
// Create database with MemoryClient
const db = new S3db({
client: new MemoryClient({ bucket: 'test-bucket' })
});
await db.connect();
// Use exactly like S3 - same API!
const users = await db.createResource({
name: 'users',
attributes: {
name: 'string|required',
email: 'email|required'
}
});
await users.insert({ id: 'u1', name: 'John', email: 'john@test.com' });
const user = await users.get('u1');

Connection String Options:
// Basic usage
"memory://mybucket"
// With key prefix (path)
"memory://mybucket/databases/myapp"
// With multiple path segments
"memory://testdb/level1/level2/level3"
// With query parameters
"memory://mybucket?region=us-west-2"Advanced Features (Manual Client):
import { S3db, MemoryClient } from 's3db.js';
// Option 1: Connection string (recommended)
const db1 = new S3db({
connectionString: 'memory://test-bucket/tests/'
});
// Option 2: Manual client configuration
const db2 = new S3db({
client: new MemoryClient({
bucket: 'test-bucket',
keyPrefix: 'tests/', // Optional prefix for all keys
enforceLimits: true, // Enforce S3 2KB metadata limit
persistPath: './test-data.json', // Optional: persist to disk
logLevel: 'silent' // Disable logging
})
});
// Snapshot/Restore (perfect for tests)
const client = db2.client; // the underlying MemoryClient instance
const snapshot = client.snapshot(); // Capture current state
// ... run tests that modify data ...
client.restore(snapshot); // Restore to original state
// Persistence
await client.saveToDisk(); // Save to persistPath
await client.loadFromDisk(); // Load from persistPath
// Statistics
const stats = client.getStats();
console.log(`Objects: ${stats.objectCount}, Size: ${stats.totalSizeFormatted}`);
// Clear all data
client.clear();

Testing Example:
import { describe, test, beforeEach, afterEach } from '@jest/globals';
import { S3db } from 's3db.js';
describe('User Tests', () => {
let db, users, snapshot;
beforeEach(async () => {
// Simple connection string setup!
db = new S3db({
connectionString: 'memory://test-db/my-tests'
});
await db.connect();
users = await db.createResource({
name: 'users',
attributes: { name: 'string', email: 'email' }
});
// Save snapshot for each test
snapshot = db.client.snapshot();
});
afterEach(() => {
// Restore to clean state (faster than recreating)
db.client.restore(snapshot);
});
test('should insert user', async () => {
await users.insert({ id: 'u1', name: 'John', email: 'john@test.com' });
const user = await users.get('u1');
expect(user.name).toBe('John');
});
});

Performance Comparison:
| Operation | LocalStack | MemoryClient | Speedup |
|---|---|---|---|
| Insert 100 records | ~2000ms | ~50ms | 40x faster |
| Query 1000 records | ~5000ms | ~100ms | 50x faster |
| Full test suite | ~120s | ~2s | 60x faster |
📚 Full MemoryClient Documentation
When you create a database, s3db.js organizes your data in a structured way within your S3 bucket:
bucket-name/
└── databases/
└── myapp/ # Database root (from connection string)
├── s3db.json # Database metadata & resource definitions
│
├── resource=users/ # Resource: users
│ ├── data/
│ │ ├── id=user-123 # Document (metadata in S3 metadata, optional body)
│ │ └── id=user-456
│ └── partition=byRegion/ # Partition: byRegion
│ ├── region=US/
│ │ ├── id=user-123 # Partition reference
│ │ └── id=user-789
│ └── region=EU/
│ └── id=user-456
│
├── resource=posts/ # Resource: posts
│ └── data/
│ ├── id=post-abc
│ └── id=post-def
│
├── resource=sessions/ # Resource: sessions (with TTL)
│ └── data/
│ ├── id=session-xyz
│ └── id=session-qwe
│
├── plugin=cache/ # Plugin: CachePlugin (global data)
│ ├── config # Plugin configuration
│ └── locks/
│ └── cache-cleanup # Distributed lock
│
└── resource=wallets/ # Resource: wallets
├── data/
│ └── id=wallet-123
└── plugin=eventual-consistency/ # Plugin: scoped to resource
├── balance/
│ └── transactions/
│ └── id=txn-123 # Plugin-specific data
└── locks/
└── balance-sync # Resource-scoped lock
Key Path Patterns:
| Type | Pattern | Example |
|---|---|---|
| Metadata | `s3db.json` | Database schema, resources, versions |
| Document | `resource={name}/data/id={id}` | `resource=users/data/id=user-123` |
| Partition | `resource={name}/partition={partition}/{field}={value}/id={id}` | `resource=users/partition=byRegion/region=US/id=user-123` |
| Plugin (global) | `plugin={slug}/{path}` | `plugin=cache/config` |
| Plugin (resource) | `resource={name}/plugin={slug}/{path}` | `resource=wallets/plugin=eventual-consistency/balance/transactions/id=txn-123` |
| Lock (global) | `plugin={slug}/locks/{lockName}` | `plugin=ttl/locks/cleanup` |
| Lock (resource) | `resource={name}/plugin={slug}/locks/{lockName}` | `resource=wallets/plugin=eventual-consistency/locks/balance-sync` |
Storage Layers:
- Documents - User data stored in resources
  - Metadata: Stored in S3 object metadata (up to 2KB)
  - Body: Large content stored in S3 object body (unlimited)
- Partitions - Organized references for O(1) queries
  - Hierarchical paths with field values
  - References point to the main document
- Plugin Storage - Plugin-specific data
  - Global: `plugin={slug}/...` - Shared config, caches, locks
  - Resource-scoped: `resource={name}/plugin={slug}/...` - Per-resource data
  - Supports the same behaviors as resources (body-overflow, body-only, etc.)
  - 3-5x faster than creating full resources
  - Examples: EventualConsistency transactions, TTL expiration queues, cache entries, audit logs
Why This Structure?
- ✅ Flat hierarchy - No deep nesting, better S3 performance
- ✅ Self-documenting - Path tells you what data it contains
- ✅ Partition-friendly - O(1) lookups via S3 prefix queries
- ✅ Plugin isolation - Each plugin has its own namespace
- ✅ Consistent naming - `resource=`, `partition=`, `plugin=`, `id=` prefixes
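To make the partition-friendly point concrete, here is a sketch of how a partition lookup reduces to a plain S3 prefix listing. The `ListObjectsV2Command` call is the standard AWS SDK v3 API; the key layout follows the patterns above, which s3db.js builds internally:

```js
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

// users.list({ partition: 'byRegion', partitionValues: { region: 'US' } })
// narrows the scan to a single prefix instead of listing every document:
const { Contents = [] } = await s3.send(new ListObjectsV2Command({
  Bucket: 'bucket-name',
  Prefix: 'databases/myapp/resource=users/partition=byRegion/region=US/'
}));

console.log(Contents.map(obj => obj.Key));
// → ['.../region=US/id=user-123', '.../region=US/id=user-789']
```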
import { S3db } from 's3db.js';
// Simple connection
const db = new S3db({
connectionString: 's3://ACCESS_KEY:SECRET@bucket/databases/myapp'
});
await db.connect();
// With plugins and options
const db = new S3db({
connectionString: 's3://bucket/databases/myapp',
logLevel: 'debug',
versioningEnabled: true,
executorPool: {
concurrency: 100, // Default concurrency (can increase for high-throughput)
retries: 3,
retryDelay: 1000
},
taskExecutorMonitoring: {
enabled: true,
collectMetrics: true,
sampleRate: 0.2
},
plugins: [
new CachePlugin({ ttl: 300000 }),
new MetricsPlugin()
],
httpClientOptions: {
keepAlive: true,
maxSockets: 100,
timeout: 60000
}
});
await db.connect();

| Method | Description |
|---|---|
| `connect()` | Initialize database connection and load metadata |
| `createResource(config)` | Create or update a resource |
| `getResource(name, options?)` | Get an existing resource instance |
| `resourceExists(name)` | Check if a resource exists |
| `resources.{name}` | Access a resource by property |
| `uploadMetadataFile()` | Save metadata changes to S3 |
Customize HTTP performance for your workload:
const db = new S3db({
connectionString: '...',
httpClientOptions: {
keepAlive: true, // Enable connection reuse
keepAliveMsecs: 1000, // Keep connections alive for 1s
maxSockets: 50, // Max 50 concurrent connections
maxFreeSockets: 10, // Keep 10 free connections in pool
timeout: 60000 // 60 second timeout
}
});

Presets:
High Concurrency (APIs)
httpClientOptions: {
keepAlive: true,
keepAliveMsecs: 1000,
maxSockets: 100, // Higher concurrency
maxFreeSockets: 20, // More free connections
timeout: 60000
}

Aggressive Performance (High-throughput)
httpClientOptions: {
keepAlive: true,
keepAliveMsecs: 5000, // Longer keep-alive
maxSockets: 200, // High concurrency
maxFreeSockets: 50, // Large connection pool
timeout: 120000 // 2 minute timeout
}

Complete documentation: See above for all Database configuration options
s3db.js uses Pino - a blazing-fast, low-overhead JSON logger (5-10x faster than console.*). The logging system is hierarchical: Database → Plugins → Resources automatically inherit log levels, with per-component override capabilities.
All components (Database, Plugins, Resources) automatically inherit the global log level:
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn' // ← Database, Resources, and Plugins all inherit 'warn'
}
});
await db.usePlugin(new CachePlugin(), 'cache'); // Inherits 'warn'
await db.usePlugin(new TTLPlugin(), 'ttl'); // Inherits 'warn'

s3db.js provides two built-in format presets for different environments:
JSON Format (Production - Structured Logs):
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'info',
format: 'json' // ← Compact JSON for log aggregation
}
});
// Output: {"level":30,"time":1234567890,"msg":"User created","userId":"123"}Pretty Format (Development - Human Readable):
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'debug',
format: 'pretty' // ← Colorized, readable output
}
});
// Output: [14:23:45.123] INFO: User created
// userId: "123"Auto-Detection (Default):
// Automatically chooses format based on:
// - TTY detection (terminal vs piped)
// - NODE_ENV (development vs production)
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'info'
// format is auto-detected
}
});

s3db.js errors automatically use toJSON() for structured logging:
import { ValidationError } from 's3db.js';
const error = new ValidationError('Invalid email', {
field: 'email',
value: 'invalid@',
statusCode: 422
});
// Logs include full error context automatically
logger.error({ err: error }, 'Validation failed');
// Output includes: name, message, code, statusCode, suggestion, stack, etc.

Fine-tune log levels for specific plugins or resources using childLevels:
const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn', // ← Global default
childLevels: {
// Override specific plugins
'Plugin:cache': 'debug', // Cache plugin in debug mode
'Plugin:ttl': 'trace', // TTL plugin in trace mode
'Plugin:metrics': 'error', // Metrics plugin only shows errors
'Plugin:s3-queue': 'info', // S3Queue plugin in info mode
// Override specific resources
'Resource:users': 'debug', // Users resource in debug
'Resource:logs': 'silent' // Logs resource silenced
}
}
});

Result:

- Database → `warn`
- CachePlugin → `debug` (override)
- TTLPlugin → `trace` (override)
- MetricsPlugin → `error` (override)
- All other plugins → `warn` (inherited)
Plugins can use completely custom loggers that don't inherit from Database:
import { createLogger } from 's3db.js/logger';
// Create custom logger
const customLogger = createLogger({
name: 'MyApp',
level: 'trace',
// Pino options
transport: {
target: 'pino-pretty',
options: { colorize: true }
}
});
// Plugin uses custom logger instead of inheriting
const plugin = new CachePlugin({
logger: customLogger // ← Ignores inheritance
});
await db.usePlugin(plugin, 'cache');

Change log levels on the fly for specific components:
// Increase verbosity for debugging
db.setChildLevel('Plugin:cache', 'debug');
// Silence a noisy plugin
db.setChildLevel('Plugin:ttl', 'silent');
// Debug specific resource
db.setChildLevel('Resource:clicks', 'trace');

setChildLevel() only affects new child loggers. Loggers already created maintain their previous level.
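In practice, that means ordering matters — a minimal sketch (the resource name here is illustrative):

```js
// Set the override BEFORE the child logger exists:
db.setChildLevel('Resource:orders', 'debug');

// This new resource's child logger is created at 'debug'
const orders = await db.createResource({
  name: 'orders',
  attributes: { total: 'number' }
});

// Calling setChildLevel only afterwards would leave an already-created logger untouched
```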
Override logging globally using environment variables:
# Set log level
S3DB_LOG_LEVEL=debug node app.js
# Set output format (using presets)
S3DB_LOG_FORMAT=pretty node app.js # Pretty format (colorized, human-readable)
S3DB_LOG_FORMAT=json node app.js # JSON format (structured logs for production)
# Combined example
S3DB_LOG_LEVEL=debug S3DB_LOG_FORMAT=pretty node app.js

Legacy Support: The old S3DB_LOG_PRETTY environment variable is still supported for backward compatibility:
S3DB_LOG_PRETTY=true node app.js # Same as S3DB_LOG_FORMAT=pretty
S3DB_LOG_PRETTY=false node app.js # Same as S3DB_LOG_FORMAT=json

| Level | Use Case | When to Use |
|---|---|---|
| `silent` | No logs | Tests, silent components |
| `fatal` | Critical errors | System unusable |
| `error` | Errors | Failed operations |
| `warn` | Warnings | Deprecations, fallbacks |
| `info` | Information | Default for production |
| `debug` | Debug | Development |
| `trace` | Full trace | Deep debugging |
const db = new Database({
connectionString: process.env.S3DB_CONNECTION,
loggerOptions: {
level: 'warn',
format: 'json', // ← Structured logs for aggregation
childLevels: {
// Info-level logging only for critical plugins
'Plugin:metrics': 'info',
'Plugin:audit': 'info'
}
}
});

const db = new Database({
connectionString: 'http://localhost:9000/bucket',
loggerOptions: {
level: 'debug',
format: 'pretty', // ← Human-readable, colorized
childLevels: {
// Trace the specific plugin you're debugging
'Plugin:cache': 'trace',
// Silence noisy plugins
'Plugin:metrics': 'silent'
}
}
});

const db = new Database({
connectionString: 's3://bucket/db',
loggerOptions: {
level: 'warn',
format: 'json', // ← Production format
childLevels: {
// Debug ONLY the TTL plugin
'Plugin:ttl': 'trace'
}
}
});

Plugins: Format is `Plugin:{name}`
await db.usePlugin(new CachePlugin(), 'cache');
// Child logger: 'Plugin:cache'
await db.usePlugin(new TTLPlugin(), 'my-ttl');
// Child logger: 'Plugin:my-ttl'

Resources: Format is `Resource:{name}`
await db.createResource({ name: 'users', ... });
// Child logger: 'Resource:users'

The API Plugin includes automatic HTTP request/response logging with smart detection:
Smart Detection:
- If `pino-http` is installed: uses full-featured pino-http with all the bells and whistles
- If `pino-http` is NOT installed: falls back to simple built-in HTTP logging
Installation (optional, recommended):
npm install pino-http

Usage:
import { APIPlugin } from 's3db.js/plugins';
const api = new APIPlugin({
port: 3000,
// Enable HTTP logging (works with or without pino-http!)
httpLogger: {
enabled: true,
autoLogging: true, // Log all requests/responses
ignorePaths: ['/health'], // Skip logging for these paths
// Custom log level based on status code
customLogLevel: (req, res, err) => {
if (err || res.statusCode >= 500) return 'error';
if (res.statusCode >= 400) return 'warn';
return 'info';
}
},
// Enable request ID tracking (recommended)
requestId: {
enabled: true,
headerName: 'X-Request-ID'
}
});

What you get:
| Feature | With pino-http | Without pino-http |
|---|---|---|
| Request logging | ✅ Full | ✅ Basic |
| Response logging | ✅ Full | ✅ Basic |
| Error logging | ✅ Full | ✅ Basic |
| Request ID | ✅ Auto | ✅ Manual |
| Custom serializers | ✅ Yes | ✅ Basic |
| Performance overhead | ⚡ Minimal | ⚡ Minimal |
No installation required! HTTP logging works out-of-the-box with basic features. Install pino-http for enhanced capabilities.
Automatic Logging Output:
{
"level": 30,
"time": 1234567890,
"req": {
"id": "abc123",
"method": "POST",
"url": "/users",
"headers": { "user-agent": "...", "content-type": "application/json" }
},
"res": {
"statusCode": 201,
"headers": { "content-type": "application/json" }
},
"responseTime": 45,
"msg": "request completed"
}

Features:
- Request/response correlation with request IDs
- Automatic status code-based log levels
- Error serialization with `toJSON()`
- Path filtering (e.g., skip `/health`, `/metrics`)
- Zero configuration required
- Production: Use `format: 'json'` with `level: 'warn'` for structured logging
- Development: Use `format: 'pretty'` with `level: 'debug'` for readability
- Debugging: Use `childLevels` to isolate specific components
- Performance: Lower levels (`trace`, `debug`) have a performance impact
- Inheritance: Components automatically inherit the global level if not overridden
- Error Logging: Custom errors automatically use `toJSON()` for rich context
- CI/CD: Use `format: 'json'` in automated environments for parsing
- HTTP Logging: Enable `httpLogger` in the API Plugin for automatic request tracking
Resources are the core abstraction in s3db.js - they define your data structure, validation rules, and behavior. Think of them as tables in traditional databases, but with much more flexibility and features.
Resources provide:
- ✅ Schema validation with 30+ field types
- ✅ 5 behavior strategies for handling 2KB S3 metadata limit
- ✅ Partitioning for O(1) queries vs O(n) scans
- ✅ Hooks & middlewares for custom logic
- ✅ Events for real-time notifications
- ✅ Versioning for schema evolution
- ✅ Encryption for sensitive fields
- ✅ Streaming for large datasets
Quick example:
const users = await db.createResource({
name: 'users',
attributes: {
email: 'email|required|unique',
password: 'secret|required',
age: 'number|min:18|max:120'
},
behavior: 'enforce-limits',
timestamps: true,
partitions: {
byAge: { fields: { age: 'number' } }
}
});
await users.insert({ email: 'john@example.com', password: 'secret123', age: 25 });

Define your data structure with powerful validation using fastest-validator - a blazing-fast validation library with comprehensive type support:
| Type | Example | Validation Rules |
|---|---|---|
| `string` | `name: 'string\|required'` | min, max, length, pattern, enum |
| `number` | `age: 'number\|min:0'` | min, max, integer, positive, negative |
| `boolean` | `isActive: 'boolean'` | true, false |
| `email` | `email: 'email\|required'` | RFC 5322 validation |
| `url` | `website: 'url'` | Valid URL format |
| `date` | `createdAt: 'date'` | ISO 8601 dates |
| `array` | `tags: 'array\|items:string'` | items, min, max, unique |
| `object` | `profile: { type: 'object', props: {...} }` | Nested validation |
| Type | Savings | Example |
|---|---|---|
| `secret` | Encrypted | `password: 'secret\|required'` - AES-256-GCM |
| `embedding:N` | 77% | `vector: 'embedding:1536'` - Fixed-point Base62 |
| `ip4` | 47% | `ipAddress: 'ip4'` - Binary Base64 |
| `ip6` | 44% | `ipv6: 'ip6'` - Binary Base64 |
Encoding optimizations:
- ISO timestamps → Unix Base62 (67% savings)
- UUIDs → Binary Base64 (33% savings)
- Dictionary values → Single bytes (95% savings)
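As a rough illustration of the timestamp optimization — a minimal sketch of the idea, not s3db.js's actual codec (alphabet and exact savings may differ):

```js
const ALPHABET = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';

// Encode a non-negative integer in Base62
function toBase62(n) {
  let out = '';
  do {
    out = ALPHABET[n % 62] + out;
    n = Math.floor(n / 62);
  } while (n > 0);
  return out;
}

const iso = new Date().toISOString();                        // e.g. "2024-01-15T10:30:00.000Z" → 24 chars
const unixBase62 = toBase62(Math.floor(Date.now() / 1000));  // e.g. "1xkzXy" → ~6 chars
console.log(`${iso.length} chars → ${unixBase62.length} chars`);
// Every character counts against the 2KB S3 metadata limit
```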
📖 Validation powered by fastest-validator. All schemas use fastest-validator's syntax, with full support for shorthand notation.
// Simple schema
{
name: 'string|required|min:2|max:100',
email: 'email|required|unique',
age: 'number|integer|min:0|max:150'
}
// Nested objects - MAGIC AUTO-DETECT! ✨ (recommended)
// Just write your object structure - s3db detects it automatically!
{
name: 'string|required',
profile: { // ← No $$type needed! Auto-detected as optional object
bio: 'string|max:500',
avatar: 'url|optional',
social: { // ← Deeply nested also works!
twitter: 'string|optional',
github: 'string|optional'
}
}
}
// Need validation control? Use $$type (when you need required/optional)
{
name: 'string|required',
profile: {
$$type: 'object|required', // ← Add required validation
bio: 'string|max:500',
avatar: 'url|optional'
}
}
// Advanced: Full control (rare cases - strict mode, etc)
{
name: 'string|required',
profile: {
type: 'object',
optional: false,
strict: true, // ← Enable strict validation
props: {
bio: 'string|max:500',
avatar: 'url|optional'
}
}
}
// Arrays with validation
{
name: 'string|required',
tags: 'array|items:string|min:1|max:10|unique',
scores: 'array|items:number|min:0|max:100'
}
// Encrypted fields
{
email: 'email|required',
password: 'secret|required',
apiKey: 'secret|required'
}

S3 metadata has a 2KB limit. Behaviors define how to handle data that exceeds this limit:
| Behavior | Enforcement | Data Loss | Use Case |
|---|---|---|---|
| `user-managed` | None | Possible | Dev/Test - warnings only |
| `enforce-limits` | Strict | No | Production - throws errors |
| `truncate-data` | Truncates | Yes | Content management - smart truncation |
| `body-overflow` | Splits | No | Mixed data - metadata + body |
| `body-only` | Unlimited | No | Large docs - everything in body |
// Enforce limits (recommended for production)
const users = await db.createResource({
name: 'users',
behavior: 'enforce-limits',
attributes: { name: 'string', bio: 'string' }
});
// Body overflow for large content
const blogs = await db.createResource({
name: 'blogs',
behavior: 'body-overflow',
attributes: { title: 'string', content: 'string' }
});
// Body-only for documents
const documents = await db.createResource({
name: 'documents',
behavior: 'body-only',
attributes: { title: 'string', content: 'string', metadata: 'object' }
});

// Create
const user = await users.insert({ name: 'John', email: 'john@example.com' });
// Read
const user = await users.get('user-123');
const all = await users.list({ limit: 10, offset: 0 });
const filtered = await users.query({ isActive: true });
// Update (3 methods with different performance)
await users.update(id, { name: 'Jane' }); // GET+PUT merge (baseline)
await users.patch(id, { name: 'Jane' }); // HEAD+COPY (40-60% faster*)
await users.replace(id, fullObject); // PUT only (30-40% faster)
// *patch() uses HEAD+COPY for metadata-only behaviors
// Delete
await users.delete('user-123');

// Bulk insert
await users.insertMany([
{ name: 'User 1', email: 'user1@example.com' },
{ name: 'User 2', email: 'user2@example.com' }
]);
// Bulk get
const data = await users.getMany(['user-1', 'user-2', 'user-3']);
// Bulk delete
await users.deleteMany(['user-1', 'user-2']);

Organize data for fast queries without scanning:
const analytics = await db.createResource({
name: 'analytics',
attributes: {
userId: 'string',
event: 'string',
timestamp: 'date',
region: 'string'
},
partitions: {
// Single field
byEvent: { fields: { event: 'string' } },
// Multiple fields (composite)
byEventAndRegion: {
fields: {
event: 'string',
region: 'string'
}
},
// Nested field
byUserCountry: {
fields: {
'profile.country': 'string'
}
}
},
// Async partitions for 70-100% faster writes
asyncPartitions: true
});
// Query by partition (O(1))
const usEvents = await analytics.list({
partition: 'byEventAndRegion',
partitionValues: { event: 'click', region: 'US' }
});

Automatic timestamp partitions:
const events = await db.createResource({
name: 'events',
attributes: { name: 'string', data: 'object' },
timestamps: true // Auto-creates byCreatedDate and byUpdatedDate partitions
});
const todayEvents = await events.list({
partition: 'byCreatedDate',
partitionValues: { createdAt: '2024-01-15' }
});

Add custom logic before/after operations:
const products = await db.createResource({
name: 'products',
attributes: { name: 'string', price: 'number', sku: 'string' },
hooks: {
// Before operations
beforeInsert: [
async (data) => {
data.sku = `PROD-${Date.now()}`;
return data;
}
],
beforeUpdate: [
async (data) => {
data.updatedAt = new Date().toISOString();
return data;
}
],
// After operations
afterInsert: [
async (data) => {
console.log(`Product ${data.name} created with SKU ${data.sku}`);
}
],
afterDelete: [
async (data) => {
await notifyWarehouse(data.sku);
}
]
}
});

Available hooks:

- `beforeInsert`, `afterInsert`
- `beforeUpdate`, `afterUpdate`
- `beforeDelete`, `afterDelete`
- `beforeGet`, `afterGet`
- `beforeList`, `afterList`
Intercept and transform method calls:
// Authentication middleware
users.useMiddleware('inserted', async (ctx, next) => {
if (!ctx.args[0].userId) {
throw new Error('Authentication required');
}
return await next();
});
// Logging middleware
users.useMiddleware('updated', async (ctx, next) => {
const start = Date.now();
const result = await next();
console.log(`Update took ${Date.now() - start}ms`);
return result;
});
// Validation middleware
users.useMiddleware('inserted', async (ctx, next) => {
ctx.args[0].name = ctx.args[0].name.toUpperCase();
return await next();
});Supported methods:
`fetched`, `list`, `inserted`, `updated`, `deleted`, `deleteMany`, `exists`, `getMany`, `count`, `page`, `listIds`, `getAll`
Listen to resource operations:
const users = await db.createResource({
name: 'users',
attributes: { name: 'string', email: 'string' },
// Declarative event listeners
events: {
insert: (event) => {
console.log('User created:', event.id, event.name);
},
update: [
(event) => console.log('Update detected:', event.id),
(event) => {
if (event.$before.email !== event.$after.email) {
console.log('Email changed!');
}
}
],
delete: (event) => {
console.log('User deleted:', event.id);
}
}
});
// Programmatic listeners
users.on('inserted', (event) => {
sendWelcomeEmail(event.email);
});Available events:
`inserted`, `updated`, `deleted`, `insertMany`, `deleteMany`, `list`, `count`, `fetched`, `getMany`
Process large datasets efficiently:
// Readable stream
const readableStream = await users.readable({
batchSize: 50,
concurrency: 10
});
readableStream.on('data', (user) => {
console.log('Processing:', user.name);
});
readableStream.on('end', () => {
console.log('Stream completed');
});
// Writable stream
const writableStream = await users.writable({
batchSize: 25,
concurrency: 5
});
userData.forEach(user => writableStream.write(user));
writableStream.end();

A complex, production-ready resource showing all capabilities:
const orders = await db.createResource({
name: 'orders',
// Schema with all features
attributes: {
// Basic fields
orderId: 'string|required|unique',
userId: 'string|required',
status: 'string|required|enum:pending,processing,completed,cancelled',
total: 'number|required|min:0',
// Encrypted sensitive data
paymentToken: 'secret|required',
// Nested objects
customer: {
type: 'object',
props: {
name: 'string|required',
email: 'email|required',
phone: 'string|optional',
address: {
type: 'object',
props: {
street: 'string|required',
city: 'string|required',
country: 'string|required|length:2',
zipCode: 'string|required'
}
}
}
},
// Arrays
items: 'array|items:object|min:1',
tags: 'array|items:string|unique|optional',
// Special types
ipAddress: 'ip4',
userAgent: 'string|max:500',
// Embeddings for AI/ML
orderEmbedding: 'embedding:384'
},
// Behavior for large orders
behavior: 'body-overflow',
// Automatic timestamps
timestamps: true,
// Versioning for schema evolution
versioningEnabled: true,
// Custom ID generation
idGenerator: () => `ORD-${Date.now()}-${Math.random().toString(36).substr(2, 5)}`,
// Partitions for efficient queries
partitions: {
byStatus: { fields: { status: 'string' } },
byUser: { fields: { userId: 'string' } },
byCountry: { fields: { 'customer.address.country': 'string' } },
byUserAndStatus: {
fields: {
userId: 'string',
status: 'string'
}
}
},
// Async partitions for faster writes
asyncPartitions: true,
// Hooks for business logic
hooks: {
beforeInsert: [
async function(data) {
// Validate stock availability
const available = await this.validateStock(data.items);
if (!available) throw new Error('Insufficient stock');
// Calculate total
data.total = data.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
return data;
},
async (data) => {
// Add metadata
data.processedAt = new Date().toISOString();
return data;
}
],
afterInsert: [
async (data) => {
// Send confirmation email
await sendOrderConfirmation(data.customer.email, data.orderId);
},
async (data) => {
// Update inventory
await updateInventory(data.items);
}
],
beforeUpdate: [
async function(data) {
// Prevent status rollback
if (data.status === 'cancelled' && this.previousStatus === 'completed') {
throw new Error('Cannot cancel completed order');
}
return data;
}
],
afterUpdate: [
async (data) => {
// Notify customer of status change
if (data.$before.status !== data.$after.status) {
await notifyStatusChange(data.customer.email, data.status);
}
}
]
},
// Events for monitoring
events: {
insert: (event) => {
console.log(`Order ${event.orderId} created - Total: $${event.total}`);
metrics.increment('orders.created');
},
update: [
(event) => {
if (event.$before.status !== event.$after.status) {
console.log(`Order ${event.orderId}: ${event.$before.status} → ${event.$after.status}`);
metrics.increment(`orders.status.${event.$after.status}`);
}
}
],
delete: (event) => {
console.warn(`Order ${event.orderId} deleted`);
metrics.increment('orders.deleted');
}
}
});
// Add middlewares for cross-cutting concerns
orders.useMiddleware('inserted', async (ctx, next) => {
// Rate limiting
await checkRateLimit(ctx.args[0].userId);
return await next();
});
orders.useMiddleware('updated', async (ctx, next) => {
// Audit logging
const start = Date.now();
const result = await next();
await auditLog.write({
action: 'order.update',
orderId: ctx.args[0],
duration: Date.now() - start,
timestamp: new Date()
});
return result;
});

Complete documentation: docs/resources.md
s3db.js features Separate Executor Pools - an architecture in which each Database instance gets its own independent executor pool for maximum efficiency and zero contention.
Each database instance gets its own executor pool, enabling:
- 🚀 40-50% faster at medium scale (5,000+ operations)
- 📈 13x less memory at large scale (10,000+ operations)
- ⏱️ Zero contention between concurrent databases
- 🛡️ Auto-retry with exponential backoff
- 🧠 Adaptive tuning - automatically adjusts concurrency based on performance
- Default parallelism: 100 (up from 10, optimized for S3 throughput)
Executor pool is enabled by default with optimized settings:
import { Database } from 's3db.js'
const db = new Database({
connectionString: 's3://bucket/database'
// That's it! Executor pool is automatically configured with:
// - Separate pool per database (zero contention)
// - Concurrency: 100 (default)
// - Auto-retry with exponential backoff
// - Priority queue for important operations
// - Real-time metrics
})
await db.connect()

Executor pools (and the standalone TasksRunner/TasksPool) support lightweight vs full-featured schedulers, observability exports, and adaptive concurrency:
const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
features: { profile: 'light', emitEvents: false }, // or 'balanced'
monitoring: {
enabled: true,
reportInterval: 1000,
exporter: (snapshot) => console.log('[executor]', snapshot)
},
autoTuning: {
enabled: true,
minConcurrency: 10,
maxConcurrency: 200,
targetLatency: 250,
adjustmentInterval: 5000
}
}
})

Use the light profile for PromisePool-style throughput when you just need FIFO fan-out. Switch to balanced when you need retries, priority aging, rich metrics, or adaptive scaling. The same options apply to filesystem/memory clients via taskExecutorMonitoring, autoTuning, and features.profile.
Customize concurrency for your specific workload:
import { Database } from 's3db.js'
const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
concurrency: 200, // Increase for high-throughput scenarios
// Or use auto-tuning:
// concurrency: 'auto', // Auto-tune based on system load
autotune: {
targetLatency: 100, // Target 100ms per operation
minConcurrency: 50, // Never go below 50
maxConcurrency: 500 // Never exceed 500
}
}
})

// Get queue statistics
const stats = db.client.getQueueStats()
console.log(stats)
// {
// queueSize: 0,
// activeCount: 50,
// processedCount: 15420,
// errorCount: 3,
// retryCount: 8
// }
// Get performance metrics
const metrics = db.client.getAggregateMetrics()
console.log(metrics)
// {
// count: 15420,
// avgExecution: 45,
// p50: 42,
// p95: 78,
// p99: 125
// }
// Lifecycle control
await db.client.pausePool() // Pause processing
db.client.resumePool() // Resume processing
await db.client.drainPool() // Wait for queue to empty
db.client.stopPool() // Stop and cleanup

OperationPool emits events for monitoring and observability:
| Event | Parameters | Description |
|---|---|---|
| `pool:taskStarted` | `(task)` | Task execution started |
| `pool:taskCompleted` | `(task, result)` | Task completed successfully |
| `pool:taskError` | `(task, error)` | Task failed with error |
| `pool:taskRetry` | `(task, attempt)` | Task retry attempt (1-based) |
| `pool:taskMetrics` | `(metrics)` | Task performance metrics |
| `pool:paused` | `()` | Pool paused (waiting for active tasks) |
| `pool:resumed` | `()` | Pool resumed processing |
| `pool:drained` | `()` | All tasks completed (queue empty) |
| `pool:stopped` | `()` | Pool stopped (pending tasks cancelled) |
Example:
db.client.on('pool:taskCompleted', (task, result) => {
console.log(`✓ ${task.id}: ${task.timings.execution}ms`)
})
db.client.on('pool:taskError', (task, error) => {
console.error(`✗ ${task.id}:`, error.message)
})

See src/concerns/operation-pool.js for event implementation details.
Benchmark results from comprehensive testing of 108 scenarios (see docs/benchmarks/operation-pool.md and BENCHMARK-RESULTS-TABLE.md):
| Scale | Separate Pools | Promise.all | Shared Pool | Winner |
|---|---|---|---|---|
| 1,000 ops | 2.1ms | 1.8ms | 2.5ms | Promise.all (marginal) |
| 5,000 ops | 18ms | 28ms | 32ms | Separate Pools (+40%) |
| 10,000 ops | 35ms | 45ms | 52ms | Separate Pools (+37%) |
| Memory (10K) | 88 MB | 1,142 MB | 278 MB | Separate Pools (13x better) |
✅ Automatic (no configuration needed):
- All operations benefit from Separate Pools
- Default concurrency: 100 (optimized for S3)
- Zero contention between databases
- Auto-retry with exponential backoff
- Adaptive tuning available for custom scenarios
Customize concurrency for:
- High-throughput APIs: `executorPool: { concurrency: 200 }`
- Data pipelines: `executorPool: { concurrency: 300-500 }`
- Single/low-frequency ops: `executorPool: { concurrency: 10 }`
- Memory-constrained: `executorPool: { concurrency: 25-50 }`
Separate Pools comes pre-configured with production-ready defaults. Override only what you need:
// Minimal - uses all defaults (recommended)
const db = new Database({
connectionString: 's3://bucket/database'
// executorPool uses defaults: { concurrency: 100 }
})
// Custom - override specific settings
const db = new Database({
connectionString: 's3://bucket/database',
executorPool: {
concurrency: 200, // Concurrency per database pool (default: 100)
retries: 3, // Max retry attempts
retryDelay: 1000, // Initial retry delay (ms)
timeout: 30000, // Operation timeout (ms)
retryableErrors: [ // Errors to retry (empty = all)
'NetworkingError',
'TimeoutError',
'RequestTimeout',
'ServiceUnavailable',
'SlowDown',
'RequestLimitExceeded'
],
autotune: { // Auto-tuning (optional)
enabled: true,
targetLatency: 100, // Target latency (ms)
minConcurrency: 50, // Min per database
maxConcurrency: 500, // Max per database
targetMemoryPercent: 0.7, // Target memory usage (70%)
adjustmentInterval: 5000 // Check interval (ms)
}
},
taskExecutorMonitoring: {
enabled: true,
collectMetrics: true,
sampleRate: 1,
mode: 'balanced'
}
})

Complete documentation: docs/benchmarks/executor-pool.md
Quick Jump: 🌐 API | 🔐 Identity | ⚡ Performance | 📊 Data | 🎮 Gaming | 🔧 DevOps | 🤖 ML/AI | 🕷️ Web Scraping
Extend s3db.js with powerful plugins. All plugins are optional and can be installed independently.
APIPlugin - Transform s3db.js into production-ready REST API with OpenAPI, multi-auth (JWT/OIDC/Basic/API Key), rate limiting, and template engines.
IdentityPlugin - Complete OAuth2/OIDC server with MFA, whitelabel UI, and enterprise SSO.
CachePlugin • TTLPlugin • EventualConsistencyPlugin • MetricsPlugin
CachePlugin - Memory/S3/filesystem caching with compression and automatic invalidation.
TTLPlugin - Auto-cleanup expired records with O(1) partition-based deletion.
EventualConsistencyPlugin - Eventually consistent counters and high-performance analytics.
MetricsPlugin - Performance monitoring with Prometheus export.
ReplicatorPlugin • ImporterPlugin • BackupPlugin • AuditPlugin
ReplicatorPlugin - Real-time replication to BigQuery, PostgreSQL, MySQL, Turso, PlanetScale, and SQS.
ImporterPlugin - Stream processing for large JSON/CSV imports.
BackupPlugin - Automated backups to S3, filesystem, or cross-cloud.
AuditPlugin - Compliance logging for all database operations.
TournamentPlugin - Complete tournament engine supporting Single/Double Elimination, Round Robin, Swiss, and League formats with automated bracket generation.
QueueConsumerPlugin • SchedulerPlugin • TfstatePlugin • CloudInventoryPlugin • CostsPlugin
QueueConsumerPlugin - Process RabbitMQ/SQS messages for event-driven architectures.
SchedulerPlugin - Cron-based job scheduling for maintenance tasks.
TfstatePlugin - Track Terraform infrastructure changes and drift detection.
CloudInventoryPlugin - Multi-cloud inventory with versioning and diff tracking.
CostsPlugin - AWS cost tracking and optimization insights.
MLPlugin • VectorPlugin • FullTextPlugin • GeoPlugin
MLPlugin - Machine learning model management and inference pipelines.
VectorPlugin - Vector similarity search (cosine, euclidean) for RAG and ML applications.
FullTextPlugin - Full-text search with tokenization and indexing.
GeoPlugin - Geospatial queries and distance calculations.
PuppeteerPlugin - Enterprise-grade browser automation with anti-bot detection, cookie farming, proxy rotation, and intelligent pooling for web scraping at scale.
RelationPlugin • StateMachinePlugin • S3QueuePlugin
RelationPlugin - ORM-like relationships with join optimization (10-100x faster queries).
StateMachinePlugin - Finite state machine workflows for business processes.
S3QueuePlugin - Distributed queue with zero race conditions using S3.
# Core plugins (no dependencies)
# Included in s3db.js package
# External dependencies (install only what you need)
pnpm add pg # PostgreSQL replication (ReplicatorPlugin)
pnpm add @google-cloud/bigquery # BigQuery replication (ReplicatorPlugin)
pnpm add @aws-sdk/client-sqs # SQS replication/consumption (ReplicatorPlugin, QueueConsumerPlugin)
pnpm add amqplib # RabbitMQ consumption (QueueConsumerPlugin)
pnpm add ejs # Template engine (APIPlugin - optional)

import { S3db, CachePlugin, MetricsPlugin, TTLPlugin } from 's3db.js';
const db = new S3db({
connectionString: 's3://bucket/databases/myapp',
plugins: [
// Cache frequently accessed data
new CachePlugin({
driver: 'memory',
ttl: 300000, // 5 minutes
config: {
maxMemoryPercent: 0.1, // 10% of system memory
enableCompression: true
}
}),
// Track performance metrics
new MetricsPlugin({
enablePrometheus: true,
port: 9090
}),
// Auto-cleanup expired sessions
new TTLPlugin({
resources: {
sessions: { ttl: 86400, onExpire: 'soft-delete' } // 24h
}
})
]
});

Simple plugin example:
import { Plugin } from 's3db.js';
export class MyPlugin extends Plugin {
constructor(options = {}) {
super(options);
this.name = 'MyPlugin';
}
async initialize(database) {
console.log('Plugin initialized!');
// Wrap methods
this.wrapMethod('Resource', 'inserted', async (original, resource, args) => {
console.log(`Inserting into ${resource.name}`);
const result = await original(...args);
console.log(`Inserted: ${result.id}`);
return result;
});
}
}

Complete documentation: docs/plugins/README.md
S3DB includes a powerful MCP server with 28 specialized tools for database operations, debugging, and monitoring.
# Claude CLI (one command)
claude mcp add s3db \
--transport stdio \
-- npx -y s3db.js s3db-mcp --transport=stdio
# Standalone HTTP server
npx s3db.js s3db-mcp --transport=sse

- ✅ 28 tools - CRUD, debugging, partitions, bulk ops, export/import, monitoring
- ✅ Multiple transports - SSE for web, stdio for CLI
- ✅ Auto-optimization - Cache and cost tracking enabled by default
- ✅ Partition-aware - Intelligent caching with partition support
- Connection (3) - `dbConnect`, `dbDisconnect`, `dbStatus`
- Debugging (5) - `dbInspectResource`, `dbGetMetadata`, `resourceValidate`, `dbHealthCheck`, `resourceGetRaw`
- Query (2) - `resourceQuery`, `resourceSearch`
- Partitions (4) - `resourceListPartitions`, `dbFindOrphanedPartitions`, etc.
- Bulk Ops (3) - `resourceUpdateMany`, `resourceBulkUpsert`, `resourceDeleteAll`
- Export/Import (3) - `resourceExport`, `resourceImport`, `dbBackupMetadata`
- Monitoring (4) - `dbGetStats`, `resourceGetStats`, `cacheGetStats`, `dbClearCache`
Complete documentation: docs/mcp.md
s3db.js integrates seamlessly with:
- BigQuery - Real-time data replication via ReplicatorPlugin
- PostgreSQL - Sync to traditional databases via ReplicatorPlugin
- AWS SQS - Event streaming and message queues
- RabbitMQ - Message queue integration
- Prometheus - Metrics export via MetricsPlugin
- Vector Databases - Embedding field type with 77% compression
s3db.js includes a powerful CLI for database management and operations.
# Global
npm install -g s3db.js
# Project
npm install s3db.js
npx s3db [command]

# List resources
s3db list
# Query resources
s3db query users
s3db query users --filter '{"status":"active"}'
# Insert records
s3db insert users --data '{"name":"John","email":"john@example.com"}'
# Update records
s3db update users user-123 --data '{"age":31}'
# Delete records
s3db delete users user-123
# Export data
s3db export users --format json > users.json
s3db export users --format csv > users.csv
# Import data
s3db import users < users.json
# Stats
s3db stats
s3db stats users
# MCP Server
s3db s3db-mcp --transport=stdio
s3db s3db-mcp --transport=sse --port=17500

S3DB_CONNECTION_STRING=s3://bucket/databases/myapp
S3DB_CACHE_ENABLED=true
S3DB_COSTS_ENABLED=true
S3DB_VERBOSE=false

- Resources (Complete Guide) - Everything about resources, schemas, behaviors, partitions, hooks, middlewares, events
- Client Class - Low-level S3 operations and HTTP configuration
- Schema Validation - Comprehensive schema validation and field types
- Plugins Overview - All available plugins and how to create custom ones
- MCP Server Guide - Complete MCP documentation with all 28 tools
- NPX Setup Guide - Use MCP with npx
- Claude CLI Setup - Detailed Claude CLI configuration
- Benchmark Index
- Base62 Encoding
- All Types Encoding
- String Encoding Optimizations
- EventualConsistency Plugin
- Partitions Matrix
- Vector Clustering
Browse 60+ examples covering:
- Basic CRUD (e01-e07)
- Advanced features (e08-e17)
- Plugins (e18-e33)
- Vectors & RAG (e41-e43)
- Testing patterns (e38-e40, e64-e65)
| Resource | Link |
|---|---|
| Resource API | docs/resources.md |
| Client API | docs/client.md |
| Schema Validation | docs/schema.md |
| Plugin API | docs/plugins/README.md |
Common issues and solutions:
Connection Issues
Problem: Cannot connect to S3 bucket
Solutions:
- Verify credentials in connection string
- Check IAM permissions (s3:ListBucket, s3:GetObject, s3:PutObject)
- Ensure bucket exists
- Check network connectivity
// Enable debug logging
const db = new S3db({
connectionString: '...',
logLevel: 'debug'
});

Metadata Size Exceeded
Problem: Error: "S3 metadata size exceeds 2KB limit"
Solutions:
- Change the behavior to `body-overflow` or `body-only`
- Reduce field sizes or use truncation
- Move large content to separate fields
const resource = await db.createResource({
name: 'blogs',
behavior: 'body-overflow', // Automatically handle overflow
attributes: { title: 'string', content: 'string' }
});

Performance Issues
Problem: Slow queries or operations
Solutions:
- Use partitions for frequently queried fields
- Enable caching with CachePlugin
- Increase HTTP client concurrency
- Use bulk operations instead of loops
// Add partitions
const resource = await db.createResource({
name: 'analytics',
attributes: { event: 'string', region: 'string' },
partitions: {
byEvent: { fields: { event: 'string' } }
},
asyncPartitions: true // 70-100% faster writes
});
// Enable caching
const db = new S3db({
connectionString: '...',
plugins: [new CachePlugin({ ttl: 300000 })]
});

Orphaned Partitions
Problem: Partition references deleted field
Solutions:
const resource = await db.getResource('users', { strictValidation: false });
const orphaned = resource.findOrphanedPartitions();
console.log('Orphaned:', orphaned);
// Remove them
resource.removeOrphanedPartitions();
await db.uploadMetadataFile();
⚠️ Important: All benchmark results documented were generated using Node.js v22.6.0. Performance may vary with different Node.js versions.
s3db.js includes comprehensive benchmarks demonstrating real-world performance optimizations:
- Base62 Encoding - 40-46% space savings, 5x faster than Base36
- All Types Encoding - Comprehensive encoding across all field types
- String Encoding Optimizations - 2-3x faster UTF-8 calculations
- EventualConsistency Plugin - 70-100% faster writes
- Partitions Matrix - Test 110 combinations to find optimal config
- Vector Clustering - Vector similarity and clustering performance
Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the Unlicense - see the LICENSE file for details.
- Built with AWS SDK for JavaScript
- Validation powered by @icebob/fastest-validator
- ID generation using nanoid
Made with ❤️ by the s3db.js community