OnyxMesh

Post-Quantum Mesh Networking for Autonomous Machine Fleets

OnyxMesh is an experimental network protocol exploring a hybrid approach to fleet communication for autonomous machines - drones, robots, ground vehicles, edge servers. It integrates identity, discovery, routing, transport, and services into a single stack, using post-quantum cryptography throughout, without relying on a central server or external certificate authority.

Rather than layering separate tools for mesh routing, encryption, and fleet management, OnyxMesh attempts to unify these concerns in one protocol. The result is a single daemon, a single binary, and a single trust model - though the trade-off is a larger design surface and less modularity than composing existing battle-tested components.


The Problem

Autonomous fleets need to communicate in environments where infrastructure is sparse or absent. Existing tools each handle part of the problem well - WireGuard for encrypted tunnels, batman-adv for mesh routing, MQTT for pub/sub, Kubernetes for orchestration - but integrating them into a coherent, authenticated fleet requires significant glue work:

  • Security and routing are separate concerns. WireGuard doesn't know about mesh topology; batman-adv doesn't know about cryptographic identity. Combining them works, but the trust models don't naturally compose.
  • Classical cryptography assumptions. Most deployed mesh protocols rely on RSA, ECDSA, or Diffie-Hellman. For fleets with 10+ year lifespans, the transition to post-quantum algorithms is worth starting now rather than retrofitting later.
  • Fleet membership is often external. Proving a new device is authorized - and revoking a compromised one - typically requires external PKI or manual key distribution. This is manageable at small scale but becomes a burden as fleets grow.
  • Coordination lives in separate stacks. Pub/sub, distributed state, RPC, and container orchestration each bring their own discovery, security, and failure handling. The operational complexity compounds.
  • Security testing is usually separate. Protocol implementations rarely ship with adversarial verification tooling built in.

OnyxMesh takes a different approach: integrate these concerns into a single protocol stack and see what simplifications become possible when identity, routing, encryption, and services share one trust model.


What OnyxMesh Provides

Self-certifying identity. Every node derives its 128-bit address from its public key. No DNS, no DHCP, no registration server. The address is the identity, verifiable by anyone.
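The idea can be sketched in a few lines of Go; assume for illustration that the OMA is the truncated SHA3-256 digest of the node's public key, rendered in the fc00:... notation used below (the exact construction - domain separation, version or prefix bits - is defined in the Protocol Specification, not here):

```go
package main

import (
	"crypto/sha3" // stdlib in Go 1.24+; OM-QR1 mandates SHA3-256
	"fmt"
	"strings"
)

// deriveOMA sketches a self-certifying address: hash the public key
// and keep the first 16 bytes (128 bits). Anyone holding the public
// key can recompute the address, so the address *is* the identity.
func deriveOMA(pubKey []byte) [16]byte {
	digest := sha3.Sum256(pubKey)
	var oma [16]byte
	copy(oma[:], digest[:16])
	return oma
}

// formatOMA renders the 128 bits in IPv6-style groups, matching the
// fc00:a1b2:... examples elsewhere in this README.
func formatOMA(oma [16]byte) string {
	groups := make([]string, 8)
	for i := 0; i < 8; i++ {
		groups[i] = fmt.Sprintf("%02x%02x", oma[2*i], oma[2*i+1])
	}
	return strings.Join(groups, ":")
}

func main() {
	oma := deriveOMA([]byte("example ML-DSA-65 public key bytes"))
	fmt.Println(formatOMA(oma))
}
```

Because the derivation is deterministic, a peer that presents a public key not hashing to its claimed address is rejected immediately, with no lookup against any registry.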

Post-quantum cryptography throughout. ML-KEM-768 for key exchange, ML-DSA-65 for signatures, AES-256-GCM for traffic encryption, SHA3-256 for hashing. The entire stack uses NIST-standardized post-quantum primitives.

Five-step mutual authentication. The Genome Handshake verifies hardware, firmware, fleet membership, and behavioral history - going beyond key-based identity to include device attestation.

Quorum-based fleet admission. In restricted mode, joining nodes need approval from two-thirds of the fleet, preventing a single compromised node from admitting an attacker. First nodes bootstrap with offline-signed genesis certificates.
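The two-thirds rule reduces to a one-line check. The rounding behaviour below (at least ceil(2n/3) approvals) is an illustrative assumption, not necessarily the daemon's exact arithmetic:

```go
package main

import "fmt"

// quorumMet reports whether an admission vote passes the two-thirds
// threshold. With approvals >= ceil(2/3 * fleetSize), a single
// compromised node can never admit a peer on its own once the fleet
// has more than one member.
func quorumMet(approvals, fleetSize int) bool {
	if fleetSize <= 0 {
		return false
	}
	needed := (2*fleetSize + 2) / 3 // integer ceil(2n/3)
	return approvals >= needed
}

func main() {
	for _, n := range []int{3, 5, 10} {
		fmt.Printf("fleet of %d: %d approvals required\n", n, (2*n+2)/3)
	}
	fmt.Println(quorumMet(3, 5), quorumMet(4, 5)) // false true
}
```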

Continuous health monitoring. The Immune Response System sends signed heartbeats every 30 seconds, detects behavioral anomalies through statistical peer modeling, and revokes compromised nodes through fleet consensus.

Self-healing mesh routing. Proactive topology maintenance within 3 hops, reactive discovery beyond. Device-class-aware cost metrics prefer routing through more capable nodes. Automatic partition detection, buffering, and recovery. OMA-pinned seed peers enable cross-subnet discovery without DNS or trust-on-first-use.

Reliable and unreliable transport. TCP-like streams with selective acknowledgments, congestion control, and retransmission for data that must arrive. Timer-based retransmit with exponential backoff, fast retransmit on 3 duplicate ACKs, SYN/FIN retry, and zero-window probing ensure delivery over lossy mesh links. Best-effort datagrams for telemetry that just needs to be fast. Streams are exposed via the daemon JSON-RPC API (stream.open, stream.listen, stream.recv, stream.close) so external processes can open mesh-routed byte streams without touching the wire layer.

Fleet services built in. Gossip-based pub/sub (OnyxTopics) with QoS 0 (at-most-once) broadcast and QoS 1 (at-least-once) mesh forwarding with IHAVE/IWANT repair, node-to-node messaging, CRDT key-value store (OnyxState), JSON-RPC (OnyxCall), and P2P container orchestration (OnyxSwarm) - leveraging the shared trust model for encryption, authentication, and decentralized coordination.
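Why a CRDT store suits a partition-prone mesh is easiest to see with the simplest CRDT, a last-writer-wins register - merges are commutative, so replicas converge regardless of delivery order. Whether OnyxState actually uses LWW registers, OR-sets, or other types is not specified here:

```go
package main

import "fmt"

// lwwRegister is a last-writer-wins register, shown purely to
// illustrate the convergence property; ties on Clock would be broken
// by node ID in a real implementation.
type lwwRegister struct {
	Value string
	Clock uint64 // Lamport timestamp
}

// Merge keeps whichever write carries the higher clock. Merge is
// commutative and idempotent, so any gossip order yields the same state.
func (r *lwwRegister) Merge(other lwwRegister) {
	if other.Clock > r.Clock {
		*r = other
	}
}

func main() {
	a := lwwRegister{Value: "waypoint-1", Clock: 3}
	b := lwwRegister{Value: "waypoint-2", Clock: 5}

	// After a partition heals, replicas exchange state; both converge.
	a.Merge(b)
	fmt.Println(a.Value) // waypoint-2
}
```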

Adversarial security testing. OnyxForge ships alongside the protocol: brute-force key analysis, protocol fuzzing, replay attacks, timing side-channel analysis, denial-of-service simulation, and network partitioning. The intent is that fleet deployments can be audited using the same tooling used during development.


Architecture

```mermaid
graph TB
    subgraph "Application Layer"
        APP[Applications]
        SWARM[OnyxSwarm<br/>Container Orchestration]
    end

    subgraph "Fleet Services"
        TOPICS[OnyxTopics<br/>Pub/Sub]
        STATE[OnyxState<br/>CRDT KV Store]
        CALL[OnyxCall<br/>RPC]
    end

    subgraph "Transport Layer"
        RS[Reliable Streams<br/>SACK, Retransmit, CC]
        UD[Unreliable Datagrams<br/>Best Effort]
    end

    subgraph "Routing Layer"
        ROUTE[OnyxRoute<br/>Hybrid Routing]
    end

    subgraph "Identity Layer"
        GENOME[DeviceGenome]
        HS[Genome Handshake]
        ADM[Fleet Admission<br/>Quorum Protocol]
        IRS[Immune Response<br/>System]
    end

    subgraph "Discovery Layer"
        PULSE[Echo-Pulse<br/>Protocol]
        NBR[Neighbor Table]
    end

    subgraph "Link Layer"
        UDP[UDP]
        WIFI[WiFi]
        BLE[BLE]
        LORA[LoRa]
        SERIAL[Serial]
    end

    subgraph "Cryptographic Foundation (OM-QR1)"
        MLDSA[ML-DSA-65]
        MLKEM[ML-KEM-768]
        AES[AES-256-GCM]
        SHA3[SHA3-256]
    end

    APP --> SWARM
    APP --> CALL
    APP --> UD
    SWARM --> TOPICS
    SWARM --> STATE

    TOPICS --> RS
    STATE --> RS
    CALL --> RS

    RS --> ROUTE
    UD --> ROUTE

    ROUTE --> GENOME
    ROUTE --> NBR

    HS --> GENOME
    ADM --> HS
    IRS --> GENOME

    IRS --> PULSE
    PULSE --> NBR

    NBR --> UDP
    NBR --> WIFI
    NBR --> BLE
    NBR --> LORA
    NBR --> SERIAL

    HS --> MLDSA
    HS --> MLKEM
    RS --> AES
    UD --> AES
    GENOME --> SHA3
```

Protocol Layers

| Layer | Component | What It Does |
| --- | --- | --- |
| Crypto | OM-QR1 Suite | ML-DSA-65 signatures, ML-KEM-768 key exchange, AES-256-GCM encryption, SHA3-256 hashing |
| Link | Adapters | WiFi, Ethernet, BLE, LoRa, Serial - pluggable physical transports |
| Discovery | Echo-Pulse Protocol | Periodic broadcast pulses, neighbor table, service advertisement, position/velocity, OMA-pinned seed peer unicast bootstrap |
| Identity | Device Genome Protocol | Self-certifying OMA addresses, 5-step handshake, fleet admission, IRS health monitoring |
| Routing | OnyxRoute | Proactive Dijkstra (near zone ≤3 hops), reactive RREQ/RREP (far zone), device-class cost metrics |
| Transport | OnyxTransport | Reliable streams (SACK, congestion control, retransmission), unreliable datagrams, AES-256-GCM per-packet, stream manager with port-based listeners |
| Services | Fleet Services | OnyxTopics (gossip pub/sub, QoS 0/1), OnyxState (CRDT KV), OnyxCall (JSON-RPC), OnyxSwarm (containers) |

Link Layer

The link layer uses a pluggable adapter architecture. A Manager multiplexes all active adapters — incoming frames from any adapter are forwarded into a single channel, and outgoing sends are routed to whichever adapter can reach the destination.

```text
Upper layers (discovery, routing, transport, services)
                   │
             link.Manager ── multiplexes frames across adapters
                   │
    ┌─────────┬────┴────┬─────────┬─────────┐
   WiFi      UDP       BLE      LoRa     Serial
```

Each adapter implements a common interface: Start, Stop, Send, Broadcast, Receive, Neighbors, and Stats. This means upper layers are transport-agnostic — the same routing and discovery logic works whether frames travel over WiFi, BLE, LoRa, or a serial link.
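The interface named above can be sketched in Go along these lines - the method signatures and the Frame type are illustrative, not copied from pkg/link:

```go
package main

import "fmt"

// Frame carries the two OMA addresses from the 32-byte header plus payload.
type Frame struct {
	Src, Dst [16]byte
	Payload  []byte
}

// Adapter mirrors the common interface: Start, Stop, Send, Broadcast,
// Receive, Neighbors, Stats. Signatures are a guess at the shape.
type Adapter interface {
	Start() error
	Stop() error
	Send(dst [16]byte, payload []byte) error
	Broadcast(payload []byte) error
	Receive() <-chan Frame
	Neighbors() [][16]byte
	Stats() map[string]uint64
}

// loopbackAdapter is a toy in-process adapter: anything sent is looped
// back to Receive. Real adapters (UDP, BLE, LoRa, Serial) satisfy the
// same interface, which is what keeps upper layers transport-agnostic.
type loopbackAdapter struct{ ch chan Frame }

func newLoopback() *loopbackAdapter { return &loopbackAdapter{ch: make(chan Frame, 8)} }

func (l *loopbackAdapter) Start() error { return nil }
func (l *loopbackAdapter) Stop() error  { close(l.ch); return nil }
func (l *loopbackAdapter) Send(dst [16]byte, p []byte) error {
	l.ch <- Frame{Dst: dst, Payload: p}
	return nil
}
func (l *loopbackAdapter) Broadcast(p []byte) error { return l.Send([16]byte{}, p) }
func (l *loopbackAdapter) Receive() <-chan Frame    { return l.ch }
func (l *loopbackAdapter) Neighbors() [][16]byte    { return nil }
func (l *loopbackAdapter) Stats() map[string]uint64 { return nil }

func main() {
	var a Adapter = newLoopback() // upper layers see only the interface
	a.Send([16]byte{0xfc}, []byte("ping"))
	f := <-a.Receive()
	fmt.Printf("%q\n", f.Payload)
}
```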

Adapter Transport MTU Broadcast Use Case
UDP UDP/IPv6 unicast + multicast 1400 B ff02::1 multicast Containers, Ethernet, any IP network
WiFi UDP/IPv6 over WiFi interface 1400 B ff02::4f4e:5958 multicast + per-neighbor Wireless fleets, PoC deployments
BLE GATT characteristics 247 B BLE advertising Short-range sensor clusters
LoRa SPI to SX1276/SX1278 radio 255 B LoRa broadcast Long-range, low-power links (km range)
Serial SLIP framing over UART 1400 B N/A (point-to-point) Wired tethering, debug links

WiFi vs UDP. The UDP adapter is the "always works" baseline — it runs anywhere with IP connectivity (containers, Ethernet, tunnels). The WiFi adapter adds wireless-aware features on top of the same UDP/IPv6 transport: per-neighbor RSSI tracking, WiFi channel management, a beacon protocol for mesh coordination, and nl80211 network scanning on Linux. Upper layers can use RSSI data for signal-quality-based routing decisions and link quality reporting in the topology view.

Frame format. All wireless adapters use a minimal 32-byte header: [source OMA (16 bytes)] [destination OMA (16 bytes)] [payload]. The OMA addresses are the cryptographic identities — no MAC address translation needed.
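That layout makes encoding and decoding trivial - a sketch of the stated [16 B src][16 B dst][payload] format:

```go
package main

import (
	"errors"
	"fmt"
)

// encodeFrame lays out the 32-byte header exactly as described:
// source OMA, destination OMA, then the raw payload.
func encodeFrame(src, dst [16]byte, payload []byte) []byte {
	buf := make([]byte, 32+len(payload))
	copy(buf[0:16], src[:])
	copy(buf[16:32], dst[:])
	copy(buf[32:], payload)
	return buf
}

// decodeFrame is the inverse; it rejects anything shorter than the header.
func decodeFrame(buf []byte) (src, dst [16]byte, payload []byte, err error) {
	if len(buf) < 32 {
		return src, dst, nil, errors.New("frame shorter than 32-byte header")
	}
	copy(src[:], buf[0:16])
	copy(dst[:], buf[16:32])
	return src, dst, buf[32:], nil
}

func main() {
	var src, dst [16]byte
	src[0], dst[0] = 0xfc, 0xfc
	wire := encodeFrame(src, dst, []byte("telemetry"))
	_, _, payload, _ := decodeFrame(wire)
	fmt.Printf("32 header bytes + %q\n", payload)
}
```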

Platform drivers. BLE, LoRa, Serial, and WiFi scanning have Linux-specific drivers (_linux.go) using BlueZ, periph.io SPI, go.bug.st/serial, and mdlayher/wifi respectively. Non-Linux builds get stub drivers that return "not supported" errors, keeping the codebase portable.

Device Classes

| Class | Example Hardware | RAM | Role |
| --- | --- | --- | --- |
| EMBEDDED | RPi Zero 2W | 64–512 MB | Leaf node, sensor relay |
| STANDARD | RPi 3B/3B+ | 512 MB–1 GB | Mesh participant |
| POWERFUL | RPi 4/5, BeagleBone AI | 2–8 GB | Aggregation hub, container host |
| COMPUTE | Jetson Orin | 8–64 GB | Fleet coordinator, AI inference |

For complete protocol details - wire formats, message structures, algorithms, and constants - see the Protocol Specification.


Quick Start

Build

```sh
git clone https://github.com/technosec-dev/onyxmesh.git
cd onyxmesh

# Build all binaries
make build

# Run tests
make test
```

Cross-Compilation

```sh
# ARM64 (RPi 4/5, Jetson)
make build-arm64

# ARMv7 (RPi 3, BeagleBone)
make build-armv7

# x86-64
make build-amd64
```

Run

```sh
# Start the daemon
onyxd --config /etc/onyxmesh/onyxd.toml

# Check node health
onyxd health

# View discovered neighbors
onyxd neighbors

# Discover route to a peer
onyxd ping fc00:a1b2:c3d4:e5f6:7890:abcd:ef01:2345

# Traffic stats (packets, bytes, datagrams)
onyxd stats

# Live packet capture (tcpdump-like, Ctrl-C to stop)
onyxd traffic

# Interactive TUI traffic monitor (iptraf-like)
onyxd traffic -ui

# Send a message to a specific node
onyxctl msg send fc00:a1b2:c3d4:e5f6:7890:abcd:ef01:2345 "hello"

# Broadcast to all nodes
onyxctl msg broadcast "maintenance in 5 minutes"

# Listen for incoming messages
onyxctl msg listen

# Container management (requires Podman runtime)
onyxctl container ls                  # List running containers
onyxctl container deploy alpine       # Pull and start a container
onyxctl container stop <id>           # Stop a running container

# CRIU checkpoint/restore (for live container migration)
# Used internally by higher-level orchestration layers
# Requires CRIU installed and Podman configured with checkpoint support
```

Cross-Subnet Bootstrap

For nodes on different subnets, configure OMA-pinned seed peers:

```sh
# Start with seed peer (cross-subnet discovery)
ONYX_SEED_PEERS="[2001:db8::1]:9000@fc00:a1b2:c3d4:e5f6:7890:abcd:ef01:2345" onyxd

# Generate a genesis certificate with seed peer hint
onyx-provision genesis fc00:a1b2:... -k fleet.key --seed-addr "[2001:db8::1]:9000"
```

The OMA pin ensures only the peer with the matching private key can complete the handshake - no DNS dependency, no trust-on-first-use.
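Parsing the `[host]:port@OMA` form is a one-liner; a minimal sketch (the real parser presumably also validates the address and OMA syntax):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSeedPeer splits the "[host]:port@OMA" form used by
// ONYX_SEED_PEERS into the dial address and the pinned OMA. The OMA
// contains colons but never "@", so splitting at the last "@" is safe.
func parseSeedPeer(spec string) (addr, oma string, err error) {
	i := strings.LastIndex(spec, "@")
	if i < 0 {
		return "", "", fmt.Errorf("seed peer %q: missing @OMA pin", spec)
	}
	return spec[:i], spec[i+1:], nil
}

func main() {
	addr, oma, err := parseSeedPeer(
		"[2001:db8::1]:9000@fc00:a1b2:c3d4:e5f6:7890:abcd:ef01:2345")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // [2001:db8::1]:9000
	fmt.Println(oma)  // fc00:a1b2:c3d4:e5f6:7890:abcd:ef01:2345
}
```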

OnyxMesh Traffic Monitor


Test Cluster

A 5-node heterogeneous mesh runs in Podman containers for integration testing. All nodes share an IPv6-enabled bridge network and discover each other via Echo-Pulse multicast on ff02::1.

```sh
make mesh-up              # Build containers and start the cluster
make mesh-quick           # Rebuild only onyxd and restart (fast iteration)
make mesh-logs            # Follow logs from all nodes
make mesh-status          # Check cluster health
make mesh-stress          # Chaos tests (partition, heal, data-plane flood, throughput)
make mesh-flood           # Data-plane flood (FLOOD_RATE=5000 FLOOD_SIZE=256 FLOOD_DUR=10s)
make mesh-traffic NODE=2  # Live packet capture on a specific node (tcpdump-like)
make mesh-traffic-ui NODE=2  # Interactive TUI traffic monitor (iptraf-like)
make mesh-down            # Stop cluster and remove volumes
```

| Container | Device Class | Role |
| --- | --- | --- |
| node1 | POWERFUL | Aggregation hub |
| node2 | STANDARD | Mesh relay |
| node3 | EMBEDDED | Leaf sensor |
| node4 | STANDARD | Mesh relay |
| node5 | COMPUTE | Fleet coordinator |
| monitor | - | Web dashboard (:8080), Prometheus (:9102) |
| forge | - | Security audit (on-demand) |

Restricted-Mode Cluster

A separate 5-node cluster tests quorum-based admission. Seed nodes (1-3) bootstrap with genesis certificates, then joining nodes (4-5) go through the admission protocol.

```sh
make mesh-admission-up         # Start all 5 nodes with staged admission
make mesh-admission-down       # Stop restricted-mode cluster
make mesh-admission-logs       # Follow logs from restricted-mode cluster
make mesh-admission-stress     # Chaos tests (expects pre-running restricted cluster)
make mesh-admission-flood      # Data-plane flood on restricted cluster
make mesh-admission-traffic NODE=2     # Live packet capture on restricted node
make mesh-admission-traffic-ui NODE=2  # Interactive TUI on restricted node
make mesh-admission            # Full admission test (seed + join + verify + cleanup)
```

Deployment

What Goes Where

| Component | Where | Why |
| --- | --- | --- |
| onyxd | Every node | The only required component. Full protocol stack: identity, discovery, routing, transport, services, JSON-RPC API. |
| onyx-monitor | One per fleet | Web dashboard and Prometheus metrics aggregation. |
| onyxctl | Operator machines | CLI for fleet management. Not needed on mesh nodes. |
| onyx-provision | During onboarding | Fleet bootstrapping and device enrollment. Run once per new node. |
| onyx-forge | Optional | Security auditing. On-demand or scheduled. |

For a minimal deployment: just onyxd on every device. Everything else is operational tooling.

Network Requirements

OnyxMesh uses IPv6 multicast (ff02::1, link-local all-nodes) over UDP port 9000 for local subnet discovery. For cross-subnet deployments, OMA-pinned seed peers provide unicast bootstrap - see the configuration section below.

| Network Type | Works? | Notes |
| --- | --- | --- |
| Ethernet switch | Yes | Same L2 broadcast domain |
| WiFi Access Point | Yes | Clients on same AP share L2 |
| batman-adv | Yes | bat0 provides flat L2 mesh; OnyxMesh adds identity, crypto, services on top |
| Linux bridge / VXLAN | Yes | Any L2 overlay that forwards multicast |
| Podman/Docker bridge | Yes | How the test cluster works |
| Cross-subnet (routed) | Yes | OMA-pinned seed peers for unicast bootstrap; see ONYX_SEED_PEERS |
| BLE / LoRa / Serial | Yes (Linux) | Platform drivers for BLE (BlueZ), LoRa (SX127x SPI), Serial (SLIP). Enable via ONYX_ADAPTERS=ble,lora,serial |

Yocto / OpenEmbedded

```sh
bitbake-layers add-layer ../meta-onyxmesh

# Minimal image for embedded devices
MACHINE=raspberrypi4-64 bitbake onyxmesh-image-minimal

# Full image with Podman and container support
MACHINE=raspberrypi5 bitbake onyxmesh-image-full
```

CLI Tools

| Tool | Description |
| --- | --- |
| onyxd | Core mesh daemon + diagnostic subcommands (health, neighbors, routes, ping, stats, traffic, flood, stress) |
| onyxctl | Fleet and node management - status, messaging, pub/sub, streams, containers, fleet revoke |
| onyx-provision | Fleet init, device enrollment, genesis certificate generation (with --seed-addr for seed peer config hints) |
| onyx-forge | Security audit suite - brute-force, fuzz, replay, timing, DoS, chaos |
| onyx-sim | Network simulation framework |
| onyx-monitor | Fleet visualization dashboard + Prometheus metrics |

onyxd Subcommands

| Subcommand | Description |
| --- | --- |
| `onyxd health` | Node health check (OMA, neighbors, routes) |
| `onyxd neighbors` | List discovered neighbors (JSON) |
| `onyxd routes` | Show routing table (JSON) |
| `onyxd ping <oma>` | Trigger route discovery to a destination OMA |
| `onyxd stats` | Traffic counters (packets, bytes, datagrams). Use -json for machine-readable output |
| `onyxd traffic` | Live packet capture, tcpdump-style. Supports filtering (-layer, -type, -src, -dst), count limit (-c), JSON output (-json), and verbose payload dump (-v) |
| `onyxd traffic -ui` | Interactive TUI traffic monitor (iptraf-like) with layer stats, top talkers, live feed, and network topology diagram |
| `onyxd flood` | Data-plane traffic generator (-rate, -size, -duration) |
| `onyxd stress` | RPC throughput benchmark (-duration, -concurrency) |

Configuration

```toml
# onyxd.toml

[identity]
fleet_id = "fleet-001"
device_class = "STANDARD"  # or auto-detect

[fleet]
mode = "open"              # "open" or "restricted"
admission_quorum = 0.667   # 2/3 majority for admission
admission_timeout = "30s"
certificate_ttl = "168h"   # 7 days
genesis_threshold = 3      # Accept genesis certs when fleet < 3

[discovery]
pulse_interval = "2s"
heartbeat_interval = "30s"
seed_peers = []  # OMA-pinned seed peers for cross-subnet: ["[host]:port@OMA"]

[routing]
near_zone_hops = 3
route_timeout = "30s"
max_buffered_frames = 100

[transport]
max_streams_per_peer = 10
stream_buffer_size = 65536

[swarm]
enabled = true
runtime = "podman"
layer_cache_size_mb = 500

[forge]
enabled = false
profile = "forge-continuous"
```

See the full configuration reference in configs/onyxd.toml.


Performance

| Operation | EMBEDDED | STANDARD | POWERFUL | COMPUTE |
| --- | --- | --- | --- | --- |
| Genome Handshake | < 150 ms | < 90 ms | < 40 ms | < 15 ms |
| ML-DSA-65 Sign | < 15 ms | < 10 ms | < 4 ms | < 1.5 ms |
| AES-GCM Throughput | > 20 Mbps | > 30 Mbps | > 200 Mbps | > 600 Mbps |
| onyxd RSS Memory | < 32 MB | < 64 MB | < 256 MB | < 512 MB |
| Binary Size (static) | < 12 MB | < 12 MB | < 12 MB | < 12 MB |

Project Structure

```text
onyxmesh/
├── cmd/
│   ├── onyxd/              # Core daemon + diagnostic subcommands
│   ├── onyxctl/            # CLI management
│   ├── onyx-provision/     # Fleet provisioning
│   ├── onyx-forge/         # Security audit
│   ├── onyx-sim/           # Simulation
│   └── onyx-monitor/       # Visualization dashboard
├── pkg/
│   ├── crypto/             # OM-QR1 cipher suite (ML-DSA-65, ML-KEM-768)
│   ├── daemon/             # JSON-RPC 2.0 server + client (Unix socket API, incl. stream ops)
│   ├── deviceclass/        # Device detection
│   ├── discovery/          # Echo-Pulse protocol
│   ├── forge/              # Security audit engine
│   ├── identity/           # OMA, Genome, Handshake, Fleet Admission, IRS
│   ├── link/               # Physical adapters (UDP, BLE, LoRa, WiFi, Serial)
│   ├── log/                # Structured logging
│   ├── routing/            # OnyxRoute (LSA, RREQ/RREP/RERR, route table)
│   ├── services/           # Topics, State, Call
│   ├── swarm/              # Container orchestration
│   ├── transport/          # Streams (SACK, OOO buffer, congestion control, retransmit, stream manager), datagrams
│   └── wire/               # Wire format encoding (16B header + payload)
├── deploy/
│   ├── containers/         # Containerfiles for all services
│   ├── compose/            # Production compose stack
│   └── quadlet/            # Podman quadlet units
├── test/
│   ├── bench/              # Benchmarks (routing, crypto)
│   ├── containers/         # Multi-node test cluster (podman-compose)
│   ├── fuzz/               # Fuzz tests (wire, identity, discovery)
│   ├── integration/        # End-to-end tests
│   └── simulation/         # Multi-node simulation
├── yocto/
│   └── meta-onyxmesh/      # Yocto BSP layer
├── configs/                # Configuration files
└── docs/                   # Protocol specification
```

Documentation

  • Protocol Specification - Complete wire formats, message structures, algorithms, and constants for all protocol layers
  • Testing Guide - How to run the test cluster and write integration tests

License

Apache-2.0. See LICENSE.

Copyright (c) 2026 TechnoSec SRLS.
