diff --git a/.github/workflows/sandbox.yml b/.github/workflows/sandbox.yml index 9895cf4..6a8d656 100644 --- a/.github/workflows/sandbox.yml +++ b/.github/workflows/sandbox.yml @@ -11,10 +11,10 @@ jobs: - name: Setup Go uses: actions/setup-go@v5 with: - go-version: "1.23" + go-version: "1.24" - name: golangci-lint - uses: golangci/golangci-lint-action@v4 + uses: golangci/golangci-lint-action@v8 with: - version: v1.63 + version: v2.1 - name: Run CI tests run: script -q -e -c "make citest" diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 0000000..6151229 --- /dev/null +++ b/.gitmodules @@ -0,0 +1,3 @@ +[submodule "docs"] + path = docs + url = https://github.com/luthersystems/docs diff --git a/.golangci.yml b/.golangci.yml index 6d9f179..a060419 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -1,3 +1,4 @@ +version: "2" run: timeout: 2m linters: diff --git a/Makefile b/Makefile index 334b9a1..998bf78 100644 --- a/Makefile +++ b/Makefile @@ -162,7 +162,7 @@ pre-push: cd api && $(MAKE) .PHONY: -download: ${SUBSTRATE_PLUGIN} +download: plugin .PHONY: print-export-path print-export-path: @@ -212,3 +212,12 @@ observability-up: observability-network observability-down: docker compose -f compose/tempo.yaml -f compose/grafana.yaml down --volumes --remove-orphans + +.PHONY: init-docs +init-docs: + git submodule update --init --recursive --remote docs + +.PHONY: setup + +setup: init-docs download + @echo "Running project setup..." diff --git a/README.md b/README.md index 3eb7723..930dc98 100644 --- a/README.md +++ b/README.md @@ -3,55 +3,78 @@ This repository contains a working starter kit for developers to modify and specialize to their specific use case. +[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/luthersystems/sandbox?quickstart=1) + +## Local Setup + +Once you've cloned the repo, run: + +```sh +make setup +``` + +To download the docs and complete the local setup. 
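Per the Makefile targets added in this change, `make setup` simply chains `init-docs` and `download`, so the same steps can be run by hand if needed (shown here as a sketch):

```sh
# Manual equivalent of `make setup`:
git submodule update --init --recursive --remote docs   # init-docs: fetch the docs submodule
make download                                           # download: fetch the platform plugin
```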
+ ## High-level File System Structure _Application Specific Code_: Add your specific process operations code to the `phylum/` directory. -_Application Templates_: Edit the template code in `oracle/` and `api/` to -specialize for your use case. +\_Application Starter Code: Edit the example template code in `portal/` and +`api/` to specialize for your use case. _Platform_: The remaining files and directories are platform related code that should not be modified. -[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/luthersystems/sandbox?quickstart=1) - ## Component Diagram ```asciiart - FE Portal - + - | - +--------------v---------------+ - | +<----+ Swagger Specification: - | Middleware API | api/swagger/oracle.swagger.json - +--------------+---------------+ - | Middleware Portal Service | - | portal/ | - +------------------+-----------+ - | - JSON-RPC | - +------------v-----------+ - | shiroclient gateway | - | substrate/shiroclient | - +-------------+----------+ - | - | JSON-RPC - +---------------------------v--------------------------+ - | Common Operations Script | - | phylum/ | - +------------------------------------------------------+ - | Substrate (Common Operations Script Runtime) | - +------------------------------------------------------+ - | Distributed Systems Services (fabric) | - +------------------------------------------------------+ + + +------------------+ + | FE Portal | + +--------+---------+ + | + v + +--------+---------+ + | Middleware API | <---- Swagger: + +--------+---------+ api/swagger/oracle.swagger.json + | + +--------v---------+ + | Portal Service | + | (./portal/) | + +--------+---------+ + | + JSON-RPC + | + +--------v---------+ + | Shiroclient | + | (gateway) | + +--------+---------+ + | + JSON-RPC + | + +-----------------------v-----------------------+ + | Common Operations Script (Phylum) | + | ./phylum/README.md | + +-----------------------+-----------------------+ + | + 
+-----------------------v-----------------------+ + | Substrate Runtime (ELPS interpreter, state) | + +-----------------------+-----------------------+ + | + +-----------------------v-----------------------+ + | Distributed Systems Layer (Fabric) | + | ./fabric/README.md | + +-----------------------------------------------+ ``` This repo includes an end-to-end "hello world" application described below. ## Luther Documentation -Check out the [docs](https://docs.luthersystems.com). +Check out the [docs](https://docs.luthersystems.com) on our website. + +Run `make init-docs` and the docs are available [here](./docs/README.md). ## Getting Started @@ -124,6 +147,20 @@ Run `make` to build all the services: make ``` +## "Hello World" Application + +This repo includes a small application for managing an insurance claim. It +serves a [JSON API](api/srvpb/v1/oracle.swagger.json) that provides endpoints to: + +1. create a new insurance claim +2. add a claimant to the claim +3. retrieve claim details + +> To simplify the sandbox, we have omitted authentication which we handle +> using [lutherauth](https://docs.luthersystems.com/luther/application/modules/lutherauth). +> Authorization is implemented at the application layer over tokens issued by +> lutherauth. + ### Running the Application First we'll run the sample application with a local instance of the Luther @@ -153,121 +190,21 @@ make down Running `docker ps` again will show all the containers have been removed. -### Application tracing (OpenTelemetry) - -There is support for tracing of the application and the Luther platform using -the OpenTelemetry protocol. Each can optionally be configured by setting an -environment variable to point at an OTLP endpoint (e.g. a Grafana agent). When -configured, trace spans will be created at key layers of the stack and delivered -to the configured endpoint. 
- -You can test tracing locally by setting the following env variables: - -```bash -export SANDBOX_ORACLE_OTLP_ENDPOINT=http://tempo:4317 -export SHIROCLIENT_GATEWAY_OTLP_TRACER_ENDPOINT=http://tempo:4317 -export CHAINCODE_OTLP_TRACER_ENDPOINT=http://tempo:4317 -``` - -And bringing up observability: - -``` -make observability-up -``` - -Login to the local [grafana](http://localhost:3000/) with username: admin, -password: admin. - -Click "Explore" and select "Tempo" as the data source. Query `{}` to list -all traces. - -#### ELPS trace spans - -Phylum endpoints defined with `defendpoint` will automatically receive a span -named after the endpoint. Other functions in the phylum can be traced by adding -a special ELPS doc keyword: - -```lisp -(defun trace-this () - "@trace" - (slow-function1) - (slow-function2)) -``` - -Custom span names are also supported as follows: - -``` -"@trace{ custom span name }" -``` - -### Run Distributed Systems Explorer - -To examine a graphical UI for the transactions and blocks and look at -the details of the work the sandbox network has done, build the -Explorer. With the full network running, run: - -```bash -make explorer -``` - -This creates a web app which will be visible on `localhost:8090`. The default -login credentials are username: `admin`, password `adminpw`. Bringing up the -network should produce some transactions and blocks, and `make integration` will -generate more activity, which can be viewed in the web app. - -If the `make` command fails, or if the Explorer runs but no new activity is -detected, it has most likely failed to authenticate; run - -```bash -make explorer-clean -make explorer-up -``` - -To wipe out the pre-existing database and recreate it empty, then re-build the -Explorer. This will reconnect it to the current network. - -## "Hello World" Application - -This repo includes a small application for managing account balances. It serves -a JSON API that provides endpoints to: - -1. 
create an account with a balance -2. look up the balance for an account -3. transfer between two accounts - -> To simplify the sandbox, we have omitted authentication which we handle -> using [lutherauth](https://docs.luthersystems.com/luther/application/modules/lutherauth). -> Authorization is implemented at the application layer over tokens issued by -> lutherauth. - ### Directory Structure -Overview of the directory structure +Overview of the directory structure: -```asciiart -build/: - Temporary build artifacts (do not check into git). -common.config.mk: - User-defined settings & overrides across the project. -api/: - API specification and artifacts. See README. -compose/: - Configuration for docker compose networks that are brought up during - testing. These configurations are used by the existing Make targets - and the compose python. -fabric/: - Configuration and scripts to launch a fabric network locally. Not used in - codespaces. -portal/: - The portal service responsible for serving the REST/JSON APIs and - communicating with other microservices. -phylum/: - Business logic that is executed in common script using the platform (substrate). -scripts/: - Helper scripts for the build process. -tests/: - End-to-end API tests that use martin. 
-``` +| Directory | Description | +| ------------------ | ------------------------------------------------------------------------ | +| `build/` | Temporary build artifacts (do not check into Git) | +| `common.config.mk` | User-defined settings and overrides across the project | +| `api/` | API specification and artifacts (see `api/README.md`) | +| `compose/` | Docker Compose configurations used during testing and `make` targets | +| `fabric/` | Local distributed systems network configuration (not used in Codespaces) | +| `portal/` | REST/JSON middleware service and app logic | +| `phylum/` | Business logic written in Common Operations Script (ELPS) | +| `scripts/` | Helper scripts for the build process | +| `tests/` | End-to-end tests using `martin`, Luther’s e2e testing tool | ### Developing the application @@ -289,21 +226,22 @@ sandbox phylum's [documentation](phylum/). There are 3 main types of tests in this project: -1. Phylum _unit_ tests. These tests excercise busines rules and logic around +1. Phylum _unit_ tests. These tests exercise business rules and logic around storage of smart contract data model entities. More information about writing and running unit tests can be found in the phylum - [documentation](phylum/). + [documentation](phylum/README.md). + See the example [claims test](./phylum/claim_test.lisp). 2. Oracle _functional_ tests. These tests exercise API endpoints and their connectivity to the phylum application layer. More information about writing and running functional tests can be found in the oracle - [documentation](portal/). + [documentation](portal/README.md). 3. End-To-End _integration_ tests. These tests use the `martin` tool. These tests exercise realistic end-user functionality of the oracle REST/JSON APIs using [Postman](https://www.postman.com/product/api-client/) under the hood. 
More information about writing and running integration tests can be found in - the test [documentation](tests/) + the test [documentation](tests/README.md) After making some changes to the phylum's business logic, the oracle middleware, or the API, it is a good idea to test those changes. The quickest integrity @@ -328,6 +266,8 @@ against a real network of docker containers. As done in the Getting Started section, this will require running `make up` to create a network and `make integration` to actually run the tests. +> `make mem-up` is a faster alternative suitable for development. + ```bash make up make integration ``` @@ -344,6 +284,9 @@ running fabric network with the following shell command: ``` (cd fabric && make init) ``` +> This command rebuilds and re-installs the business logic on the running +> Fabric network without restarting the containers. + This uses the OTA Update module to immediately install the new business logic onto the fabric network. The upgrade here is done the same way devops engineers would perform the application upgrade when running the platform on production @@ -384,6 +327,89 @@ platform are cleaned up by running the command: make down ``` +## Distributed Systems + +See [fabric/README.md](./fabric/README.md) for an overview of the distributed systems +architecture that powers the platform. + +## Advanced Usage + +Once you're comfortable with the basic development workflow, check out the +more advanced features. + +### Application tracing (OpenTelemetry) + +There is support for tracing of the application and the Luther platform using +the OpenTelemetry protocol. Each can optionally be configured by setting an +environment variable to point at an OTLP endpoint (e.g. a Grafana agent). When +configured, trace spans will be created at key layers of the stack and delivered +to the configured endpoint.
+ +You can test tracing locally by setting the following env variables: + +```bash +export SANDBOX_ORACLE_OTLP_ENDPOINT=http://tempo:4317 +export SHIROCLIENT_GATEWAY_OTLP_TRACER_ENDPOINT=http://tempo:4317 +export CHAINCODE_OTLP_TRACER_ENDPOINT=http://tempo:4317 +``` + +And bringing up observability: + +``` +make observability-up +``` + +Login to the local [grafana](http://localhost:3000/) with username: admin, +password: admin. + +Click "Explore" and select "Tempo" as the data source. Query `{}` to list +all traces. + +#### ELPS trace spans + +Phylum endpoints defined with `defendpoint` will automatically receive a span +named after the endpoint. Other functions in the phylum can be traced by adding +a special [ELPS](https://github.com/luthersystems/elps) doc keyword: + +```lisp +(defun trace-this () + "@trace" + (slow-function1) + (slow-function2)) +``` + +Custom span names are also supported as follows: + +``` +"@trace{ custom span name }" +``` + +### Run Distributed Systems Explorer + +To examine a graphical UI for the transactions and blocks and look at +the details of the work the sandbox network has done, build the +Explorer. With the full network running, run: + +```bash +make explorer +``` + +This creates a web app which will be visible on `localhost:8090`. The default +login credentials are username: `admin`, password `adminpw`. Bringing up the +network should produce some transactions and blocks, and `make integration` will +generate more activity, which can be viewed in the web app. + +If the `make` command fails, or if the Explorer runs but no new activity is +detected, it has most likely failed to authenticate; run + +```bash +make explorer-clean +make explorer-up +``` + +To wipe out the pre-existing database and recreate it empty, then re-build the +Explorer. This will reconnect it to the current network. + ## Platform Releases See [Latest Platform Releases](https://docs.luthersystems.com/deployment/release-notes). 
diff --git a/api/README.md b/api/README.md index 2a40ab6..a4a7afd 100644 --- a/api/README.md +++ b/api/README.md @@ -1,62 +1,72 @@ # API This directory contains the API specification and all files necessary to build -API artifacts. API definitions use gRPC tools for consistent and quality -tooling. Within the oracles (middleware), we use grpc-gateway to expose a -REST/JSON API (documented in the swagger file) that is -[transcoded](https://cloud.google.com/endpoints/docs/grpc/transcoding) to a gRPC -service that is consumed internally. - -Entity types and endpoints are defined in protobuf and gRPC. This has several -advantages over editing the swagger file directly, including: - * Clean diffs. - * Better backwards compatibilty through field numbers. - * Clear semantics for objects and repeated fields. +API artifacts. API definitions use gRPC tools for consistent and high-quality +tooling. We use grpc-gateway to expose a REST/JSON API (documented in Swagger) +that is [transcoded](https://cloud.google.com/endpoints/docs/grpc/transcoding) +to a gRPC service consumed internally. + +Entity types and endpoints are defined in protobuf. This approach offers: + +- Clean diffs +- Backwards compatibility via field numbers +- Clear semantics for structured data + +The `buf` tool is used to manage protobuf definitions and the build toolchain. ## Directory Structure ``` -pb: - Protobuf speciations for entities, models, and messages used and referenced - in various API endpoints. -srvpb: - Endpoint specifications (in gRPC format with HTTP/swagger annotations). -swagger: - Generated swagger JSON and a Go package that serves the json to frontend - clients. 
+api/ +├── Makefile # Build artifacts from protobuf definitions +├── README.md # This file +├── buf.gen.yaml # Code generation configuration +├── buf.yaml # Linting and dependency config +├── embed.go # Serves swagger JSON via embed.FS +├── pb/ # Shared protobuf messages and types +│ └── v1/ +│ └── oracle.proto # Shared claim models +├── srvpb/ # gRPC services with REST annotations +│ └── v1/ +│ └── oracle.proto # Endpoint and routing definitions ``` -## Generating gRPC service code and Swagger/OpenAPI documentation +❌ Excluded files (generated or irrelevant): -The generated gRPC service code, gateway code, and the swagger file are checked -into git. After every change you should regenerate these files and check them -in. Run `make` in the api directory to regenerate these files. +- `*.pb.go`, `*.pb.gw.go`, `*_grpc.pb.go` — generated Go bindings +- `*.swagger.json` — generated Swagger specs -``` +## Generating Artifacts + +Run `make` to regenerate gRPC service code and Swagger/OpenAPI documentation: + +```sh make ``` -Among the generated output is a swagger file, `swagger/oracle.swagger.json`. Do -not edit this file directly or it will be replaced the next time any .proto -files are modifieed. +Artifacts are written to the same directory as the `.proto` files and committed +to the repo. -## Viewing REST API documentation +> ⚠️ Do not edit generated files directly — they will be overwritten. -Use your favorite OpenAPI/Swagger tool to view the swagger file generated above -at `swagger/oracle.swagger.json`. One such tool is `redoc`. To view the swagger -file using redoc, you need to install the CLI first: +## Viewing API Documentation -``` -brew install npm +The Swagger file for the REST API can be previewed using any OpenAPI tool. +To use [Redoc](https://github.com/Redocly/redoc): + +Install Redoc CLI: + +```sh npm i -g redoc-cli ``` -Run `make redoc` at the root of the project to view the User API spec. The port -has been set to not conflict with Oracle. 
You can also run redoc directly: +Then view the Swagger file with: +```sh +make redoc ``` -npx redoc-cli serve -p 57505 ./api/swagger/oracle.swagger.json -``` -Use the [swagger editor](https://editor.swagger.io/) to view the swagger -specification online. +This serves the file at `http://localhost:57505`. + +You can also drag `srvpb/v1/oracle.swagger.json` into +[https://editor.swagger.io](https://editor.swagger.io). diff --git a/common.config.mk b/common.config.mk index 991a287..ede47ef 100644 --- a/common.config.mk +++ b/common.config.mk @@ -13,21 +13,20 @@ SERVICE_DIR=portal # The makefiles use docker images to build artifacts in this project. These # variables configure the images used for builds. +# https://github.com/luthersystems/buildenv BUILDENV_TAG=v0.0.92 # These variables control the version numbers for parts of the Luther platform # and should be kept up-to-date to leverage the latest platform features. # See release notes: https://docs.luthersystems.com/luther/platform/release-notes -#SUBSTRATE_VERSION=v2.205.6 -#SUBSTRATE_VERSION=v2.205.11-SNAPSHOT.3-06e4528d SUBSTRATE_VERSION=v2.205.13 CC_VERSION=${SUBSTRATE_VERSION} -#CC_VERSION=v2.205.11-SNAPSHOT.3-06e4528d CHAINCODE_VERSION=${CC_VERSION} VERSION_SUBSTRATE=${CC_VERSION} # is this needed SHIROCLIENT_VERSION=${SUBSTRATE_VERSION} CONNECTORHUB_VERSION=${SUBSTRATE_VERSION} SHIROTESTER_VERSION=${SUBSTRATE_VERSION} +# https://github.com/luthersystems/fabric-network-builder NETWORK_BUILDER_VERSION=v0.0.2 MARTIN_VERSION=v0.1.0 @@ -48,6 +47,6 @@ GONOSUMDB ?= ${GOPRIVATE} # These variables configure the Hyperledger Fabric image versions for running # the full test network. -FABRIC_IMAGE_TAG=2.5.9 +FABRIC_IMAGE_TAG=2.5.13 FABRIC_CA_IMAGE_TAG=1.5.12 BASE_IMAGE_TAG=0.4.22 diff --git a/common.fabric.mk b/common.fabric.mk index 3c519ab..487f923 100644 --- a/common.fabric.mk +++ b/common.fabric.mk @@ -1,5 +1,5 @@ -# This makefile provides targets for local fabric networks. 
It's meant to -# be re-usable across projects. +# This makefile provides targets for local fabric (distributed systems) +# networks. It's meant to be re-usable across projects. include ${PROJECT_REL_DIR}/common.mk # name of the chaincode diff --git a/docs b/docs new file mode 160000 index 0000000..110e06a --- /dev/null +++ b/docs @@ -0,0 +1 @@ +Subproject commit 110e06af645d602aed321a0311a939c16b3e2f77 diff --git a/fabric/README.md b/fabric/README.md index 85d46ed..00d302a 100644 --- a/fabric/README.md +++ b/fabric/README.md @@ -1,23 +1,203 @@ -# Local Fabric Network +# Local Distributed Systems Network -This directory contains the configuration to run a fabric network locally and -test your application in a realistic setup. +This directory contains everything needed to run a **Luther Platform Fabric +network** locally for realistic testing and development of distributed process +applications. It includes support for bootstrapping the full system, +installing logic, and executing complete flows — all without needing to +interact with real external systems. -## Initial Setup +> 💡 **Chaincode = Common Operations Script (COS)** +> When you see "chaincode," think of it as the compiled business logic that +> runs your process — a domain-specific program written in the Common +> Operations Script (COS) language [ELPS](https://github.com/luthersystems/elps). -In order to run the network locally you need to generate cryptographic assets -(i.e. certs and private keys) for the various components in a fabric network: -the peers, orderers, and users. +--- - make generate-assets +## 🚀 Running the Network -This command only needs to be run once, assets will be placed in the -crypto-config/ and channel-artifacts/ directories. These will persist, ignored -by git, until you run `make clean` or remove the directories. 
+To launch the full system: -## Running the network +```bash +make up +``` -To start all the fabric network components and install your application in the -network run `make all`, or simply `make` in this directory. +This does the following: - make all +- Packages the latest version of your Common Operations Script (COS) as + chaincode +- Starts all Fabric containers (peers, orderers, gateways) +- Launches gateway and connectorhub containers + +--- + +## 📦 Installing Common Operations Script (Chaincode) + +If you've updated your COS logic and want to re-install it: + +```bash +make install +``` + +This pushes the new script to all configured nodes and prepares it for +execution. + +--- + +## 🧪 Initializing the Application + +To initialize the network with the latest version of your COS and any seed +state: + +```bash +make init +``` + +This is typically required after `install` and ensures the platform is ready +to process incoming requests. + +--- + +## 🚦 Stopping the Network + +To gracefully tear down the entire network: + +```bash +make down +``` + +This stops all containers and removes volumes and temp files (but not crypto +assets). + +--- + +## 🧼 Full Cleanup (Optional) + +To remove all generated state and return the directory to a clean state: + +```bash +make pristine +``` + +--- + +## 🔧 (Optional) Cryptographic Setup + +Before running the network, you can optionally generate the required +cryptographic assets (certs, keys, config files) using: + +```bash +make generate-assets +``` + +This will populate the `crypto-config/` and `channel-artifacts/` directories, +which are git-ignored and persist across runs. You only need to run this once, +unless you wipe the setup with `make clean` or `make pristine`. + +--- + +## 🔌 Configuring the Connector Hub + +The `connectorhub.yaml` file defines how your local network routes process +actions to external systems (or their mocks) via **connectors**. 
Each +connector maps to a business system — like Stripe, Postgres, or Equifax — +and enables your COS logic to interact with it during execution. + +This file includes: + +- The **peer and user identity** used to invoke chaincode +- The **channel and chaincode ID** that logic will execute against +- A list of **connectors**, each with: + + - A `name` (referenced in COS logic) + - A `mock` toggle (if you want to simulate the system) + - A config block for system-specific settings (e.g., SMTP server, Postgres + credentials) + +Example: + +```yaml +msp-id: Org1MSP +user-id: User1 +org-domain: org1.luther.systems +peer-endpoint: peer0.org1.luther.systems:7051 +channel-name: luther +chaincode-id: sandbox +connectors: + - name: POSTGRES_CLAIMS_DB + mock: true + postgres: + host: localhost + port: 5432 + database: claims_db + username: testuser + password: testpass + ssl_settings: POSTGRES_SSL_MODE_DISABLE +``` + +💡 You can add or modify connectors to suit your flow’s needs. When using +`mock: true`, the connector will simulate the system locally for easier +testing. + +See the Connector Hub +[docs](../docs/platform/connectorhub.yaml) for more details. + +--- + +## 🧱 Luther Platform Node Architecture + +The Luther Platform separates responsibilities across two types of nodes to +ensure high reliability, data integrity, and consistent process execution +across systems: + +--- + +### **Processing Nodes – Run Your Business Logic** + +Each enterprise system you connect is assigned a **dedicated processing +node**. 
These nodes: + +- **Store and execute** the Common Operations Script (COS), which contains + your end-to-end process logic +- **Use LevelDB**, a fast, embedded key-value store, to persist the history + and state of each process locally +- **Emit structured events** to and from external systems based on COS logic + +We chose to dedicate one node per system to **ensure isolation, reliability, +and fault tolerance** — if one node encounters issues or is overloaded, it +doesn't affect the others. This design also makes it easier to track +system-specific behavior and debug workflows. + +--- + +### **Consistency Nodes – Keep Everything in Sync** + +The platform deploys **1 or 3 consistency nodes**, depending on your **cloud +infrastructure configuration** (i.e., the number of servers and availability +zones). These nodes: + +- **Ensure all processing nodes remain synchronized** across the cluster +- **Manage the official history of transactions**, providing a source of + truth for event ordering and validation +- **Run etcd with Raft consensus**, a battle-tested mechanism used by + Kubernetes to achieve **high availability and strong consistency** + +We use 3 nodes when your setup spans **3 or more availability zones**, +allowing the system to **tolerate failure in one zone** while still reaching +consensus. If you're running in a smaller setup with fewer zones, we use a +**single consistency node** to reduce resource usage while maintaining +functional correctness. + +We selected etcd with Raft because it's **proven, well-integrated with +Kubernetes, and easy to operate** in cloud environments. It gives us fast +writes, strong consistency, and built-in quorum logic — all essential for +keeping your distributed process state reliable. 
+ +--- + +### ✅ Summary + +- Your **logic runs independently per system**, without interference +- The system maintains **strong data consistency**, even across zones in the + cloud +- You don’t need to configure or manage any of this — **we provision it + automatically**, and provide visibility into how it’s operating diff --git a/go.mod b/go.mod index 38cb4c2..29645cc 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,6 @@ module github.com/luthersystems/sandbox -go 1.23.0 +go 1.24 require ( buf.build/gen/go/luthersystems/protos/protocolbuffers/go v1.36.5-20250224214741-b97f9dda9589.1 diff --git a/phylum/README.md b/phylum/README.md index 849b12c..2eb275a 100644 --- a/phylum/README.md +++ b/phylum/README.md @@ -1,54 +1,89 @@ -# Phylum: Common Operations Script Business Logic +# Common Operations Script Business Logic (phylum) -The phylum stores process operations business logic. This phylum defines a -route for each of the 3 application API endpoints (see `routes.lisp`). -This code securely runs on all of the participant nodes in the network, and the -platform ensures that these participants reach agreement on the execution of -this code. +This directory contains the business logic ("phylum") for process automation +on the Luther Platform. It defines logic that runs on participant nodes in a +distributed system, with consistent execution enforced by the platform. -See [Phylum Best Practices](https://docs.luthersystems.com/luther/application/development-guidelines/phylum-best-practices). +Each phylum exposes named endpoints via `routes.lisp` and defines application +behavior through [ELPS](https://github.com/luthersystems/elps) code. + +See the [Business Logic Development Guide](https://docs.luthersystems.com/develop-business-logic/business-logic/dev-language) for more details. + +See the [ConnectorHub Guide](https://docs.luthersystems.com/luther-connectors-setup/connector-hub) +for details on raising connector events from within your business logic. 
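As a quick orientation, an endpoint is declared in `routes.lisp` with `defendpoint` and returns a response map. The sketch below is illustrative only — the route name, request field, and the `lookup-claim` and `route-success` helpers are hypothetical stand-ins, not the sandbox's actual definitions:

```lisp
;; Illustrative sketch only: names and helpers are hypothetical.
(defendpoint "get_claim" (req)
  (let* ([claim-id (get req "claim_id")]   ; read a field from the request map
         [claim (lookup-claim claim-id)])  ; hypothetical storage helper
    (route-success (sorted-map "claim" claim))))
```

See `routes.lisp` and `claim.lisp` for the real endpoint and storage code.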
+ +--- ## Directory Structure ``` -build: - Temporary build artifacts (do not check into git). -main.lisp: - Entrypoint into the common operations script. -routes.lisp: - Routes callable by external services. -utils.lisp: - Common utility functions for the app. -utils_test.lisp: - ELPS tests for the utility functions. +phylum/ +├── .yasirc.json # Editor config for YASI (code formatter) +├── Makefile # Build and test commands for this phylum +├── claim.lisp # Core claim handling logic +├── claim_test.lisp # Unit tests for claims +├── main.lisp # Entrypoint for phylum execution +├── routes.lisp # RPC endpoint definitions ``` -## Making changes +--- + +## Developing This Phylum ### Testing Changes -The phylum can define unit tests in files with names ending it `_test.lisp`. -These tests can be run using the following command: +Tests live in `*_test.lisp` files. To run all unit tests: ```sh make test ``` -From the project's top level `make phylumtest` will run the same tests. +You can also run from the project root: + +```sh +make phylumtest +``` + +### Running Individual Tests -### Formatting Changes +To run specific test files: + +```sh +$(make echo:SHIRO_TEST) claim_test.lisp +``` -You need to install the `yasi` command line tool to use the `make format` -target. This tool is installed using `pip` which requires python: +To filter tests using regex (like Go's `-run`): +```sh +$(make echo:SHIRO_TEST) --run "/my_test_name" ``` -brew install pip -pip install --upgrade yasi + +Test helpers are provided for introspecting event state, response metadata, +and mocking connector behavior. See `claim_test.lisp` for usage examples. + +### Interactive REPL + +You can open a REPL for live evaluation of ELPS expressions: + +```sh +make repl ``` -And to format: +This launches the `shirotester` container in REPL mode inside the phylum: ```sh -make format +shiro> (+ 1 1) +2 +shiro> ``` +--- + +## Notes + +- The phylum runs distributed on participant nodes. 
+- `main.lisp` loads all routes and logic. +- The platform enforces consistency on state transitions. +- Repl/debug and formatting support available via `make repl` and `.yasirc.json`. + +> ⚠️ Do not commit `build/` artifacts or generated content. diff --git a/phylum/claim.lisp b/phylum/claim.lisp index e4786d4..f06debf 100644 --- a/phylum/claim.lisp +++ b/phylum/claim.lisp @@ -1,7 +1,21 @@ +;; Copyright © 2025 Luther Systems, Ltd. All right reserved. +;; ---------------------------------------------------------------------------- +;; Core business logic for claim processing. +;; Implements the state machine, connector event generation, and handlers for +;; individual claim objects. +;; +;; Defines: +;; - A claim lifecycle as a state transition chain +;; - A mapping of states to system-triggered events +;; - The `mk-claim` function, which returns a stateful handler for a claim +;; - The `claims` connector factory to manage persisted claim objects +;; ---------------------------------------------------------------------------- (in-package 'sandbox) (use-package 'connector) +;; make-state-chain builds a linear map of state transitions from first to last. +;; Used to model the claim lifecycle as a state machine. (defun make-state-chain (chain first-state last-state) (let* ([result (sorted-map)] [states (append! chain last-state)] @@ -14,6 +28,8 @@ (build-chain first-state states) result)) +;; state-transitions defines the allowed sequence of states for a claim. +;; Used by `next-state` in mk-claim to determine the next processing step. (set 'state-transitions (make-state-chain (vector "CLAIM_STATE_LOECLAIM_DETAILS_COLLECTED" @@ -27,11 +43,12 @@ "CLAIM_STATE_NEW" "CLAIM_STATE_DONE")) +;; sys-msp-map maps system names to the MSP ID responsible for handling +;; connector events. These must match the connectorhub.yaml definitions. 
(set 'sys-msp-map ;; map from system names to responsible connector MSP IDs ;; TODO: for now it's all 1 connector, but in final version each connector ;; is run by a separate org (participant). - ;; IMPORTANT: these system names MUST match the connectorhub.yaml names! (sorted-map "CLAIMS_PORTAL_UI" "Org1MSP" "EQUIFAX_ID_VERIFY" "Org1MSP" @@ -42,13 +59,16 @@ "EMAIL" "Org1MSP" "STRIPE_PAYMENT" "Org1MSP")) +;; event-desc-record returns metadata describing the system and action to trigger. +;; Used when building connector events for a specific state. (defun event-desc-record (sys eng) (denil-map (sorted-map "msp" (default (get sys-msp-map sys) "Org1MSP") "sys" sys "eng" eng))) +;; claims-state-event-desc maps claim states to connector event metadata. +;; Each state may trigger a system action (e.g., ID check, invoice). (set 'claims-state-event-desc - ;; human description of the triggered event for the state: (sorted-map "CLAIM_STATE_UNSPECIFIED" () "CLAIM_STATE_NEW" (event-desc-record "CLAIMS_PORTAL_UI" "input claim details") @@ -62,14 +82,19 @@ "CLAIM_STATE_OOEPAY_PAYMENT_TRIGGERED" () "CLAIM_STATE_DONE" ())) -;; -;; TODO: receive, validate, store, send -;; +;; mk-verify-policy-req creates a request to verify a policy +;; For now, just returns a simple health check query (defun mk-verify-policy-req (policy-id) - ;; mk-verify-policy-req creates a request to verify a policy - ;; For now, just returns a simple health check query (mk-psql-req "SELECT 1")) + +;; mk-claim returns a stateful handler for a claim. +;; Supports operations: +;; - 'init: initialize the claim in a NEW state +;; - 'handle: process a connector response and advance state +;; - 'data: retrieve claim data +;; +;; Internally tracks raised events and manages state transitions. (defun mk-claim (claim) ;; mk-claim implements claims handler logic (unless claim (error 'missing-claim "missing claim")) @@ -82,6 +107,10 @@ ;; get-state returns the current state of the claim. 
[get-state () (default (get claim "state") "")] + ;; add-event appends a connector event to the current claim. + ;; Builds the full event map using the current state and the provided + ;; request (`event-req`), including metadata like system, msp, and engine. + ;; If `event-req` is nil, no event is added. [add-event (event-req) (let* ([desc (get claims-state-event-desc (get-state))] [event (sorted-map @@ -106,12 +135,24 @@ (next-state) (sorted-map "put" claim "events" events)] + ;; init initializes a new claim with the initial state. + ;; Should be called only once on a newly created claim object. + ;; Returns the result map with updated claim and any initial events. [init () (assoc! claim "state" "CLAIM_STATE_NEW") (ret-save)] + ;; data returns the current claim data map. + ;; Used by external callers (e.g. endpoints) to inspect claim state. [data () claim] + ;; handle processes the response from the previous connector event. + ;; It: + ;; - Determines the current claim state + ;; - Interprets the connector response (resp) + ;; - Optionally triggers the next event based on state logic + ;; - Advances the state machine + ;; Returns updated claim data and any new events to raise. [handle (resp) (let* ([resp-body (get resp "response")] [resp-err (get resp "error")] @@ -166,27 +207,45 @@ ((equal? op 'data) (apply data args)) (:else (error 'unknown-operation op))))))) -;; mk-claims implements connector factory +;; mk-claims returns a connector factory object that manages persisted claims. +;; +;; Claim objects are stored in sidedb — a persistent key-value store +;; provided by the Luther Platform. Sidedb is suitable for storing private +;; or sensitive data and supports fine-grained access control between participants. 
+;; +;; This factory supports: +;; - 'name: unique name for the factory +;; - 'new: create and store a new claim +;; - 'get: retrieve a claim by ID +;; - 'put: update a stored claim +;; - 'del: delete a claim from storage (defun mk-claims () (labels ([name () "claim"] + ;; mk-claim-storage-key: build namespaced key for claim storage [mk-claim-storage-key (claim-id) (join-index-cols "sandbox" "claim" claim-id)] + ;; storage-put-claim: save claim to sidedb [storage-put-claim (claim) (sidedb:put (mk-claim-storage-key (get claim "claim_id")) claim)] + ;; new-claim creates and initializes a new claim object. + ;; Generates a unique claim ID and sets the initial state. + ;; Returns the result of the initial state transition (usually includes events). [new-claim () (let* ([claim-data (sorted-map "claim_id" (mk-uuid))] [claim (mk-claim claim-data)]) (claim 'init))] + ;; storage-get-claim: load claim from sidedb by ID [storage-get-claim (claim-id) (let* ([key (mk-claim-storage-key claim-id)] [claim-data (sidedb:get key)]) (when claim-data (mk-claim claim-data)))] + ;; storage-del-claim: remove claim from sidedb [storage-del-claim (claim-id) (sidedb:del (mk-claim-storage-key claim-id))]) @@ -198,13 +257,27 @@ ((equal? op 'put) (apply storage-put-claim args)) (:else (error 'unknown-operation op)))))) +;; Initialize the singleton instance of the claims connector factory. +;; This provides a shared object for creating, storing, and retrieving claims. +;; +;; The connector factory must be a singleton to ensure consistent access +;; to state and routing across phylum calls. (set 'claims (singleton mk-claims)) +;; Register the claims connector factory with the connector hub. +;; This allows the Luther Platform to invoke the correct logic when +;; connector events are routed to this phylum. +;; +;; The name exposed is defined by `(claims 'name)` — must be globally unique. 
 (register-connector-factory claims)
 
+;; trigger-claim advances a stored claim by injecting a connector response.
+;; Used during API calls or connector callbacks.
 (defun trigger-claim (claim-id resp)
   (trigger-connector-object claims claim-id resp))
 
+;; create-claim initializes and persists a new claim object.
+;; Used at process start (e.g., in `create_claim` endpoint).
 (defun create-claim ()
   ; create claim allocates storage for a new claim, sets the ID and state.
   (new-connector-object claims))
diff --git a/phylum/claim_test.lisp b/phylum/claim_test.lisp
index ff095c1..e0dcb14 100644
--- a/phylum/claim_test.lisp
+++ b/phylum/claim_test.lisp
@@ -1,11 +1,23 @@
-;; Copyright © 2024 Luther Systems, Ltd. All right reserved.
-
+;; Copyright © 2025 Luther Systems, Ltd. All rights reserved.
+;; ----------------------------------------------------------------------------
+;;
+;; This file contains unit tests for the claim connector object.
+;; It exercises core functionality including:
+;; - Claim creation and persistence
+;; - State transitions and event triggering
+;; - Connector event simulation and response handling
+;;
+;; Run with:
+;; make test
+;; ----------------------------------------------------------------------------
 (in-package 'sandbox)
 (use-package 'testing)
 
 ;; overwrite return from cc:creator such that tests can complete
 (set 'cc:creator (lambda () "Org1MSP"))
 
+;; mk-test-claimant returns a map with mock claimant details.
+;; Used to simulate an inbound claim submission.
 (defun mk-test-claimant ()
   (sorted-map "account_number" ""
               "account_sort_code" ""
@@ -19,10 +31,16 @@
               "address_post_town" "Westbury"
               "nationality" "NATIONALITY_GB"))
 
+;; populate-test-claimant! mutates the given claim to add a test claimant.
+;; Used to simulate user input collected from the portal.
 (defun populate-test-claimant! (claim)
   (let* ([claimant (mk-test-claimant)])
     (assoc!
      claim "claimant" claimant)))
 
+;; Basic storage test for claim lifecycle:
+;; - Ensures claim can be created
+;; - Ensures it has valid state and ID
+;; - Verifies it can be fetched from storage
 (test "claims"
   (let* ([claim (create-claim)]
          [_ (assert (not (nil? claim)))])
@@ -39,7 +57,8 @@
 ;; helper functions to interrogate the state after running the tests.
 ;;
 
-;; get the request corresponding to the event ctx.
+;; get-connector-event-req retrieves the connector request from state using
+;; the given connector event context.
 (defun get-connector-event-req (ctx)
   (let* ([key (get ctx "key")]
          [pdc (get ctx "pdc")]
@@ -49,11 +68,14 @@
          [event (json:load-bytes event-bytes)])
     event))
 
-;; lookup the event ctx for a request ID.
+;; get-connector-event-ctx looks up the event context given a request ID.
+;; Used to inspect connector request metadata.
 (defun get-connector-event-ctx (rid)
   (get (connector-handlers 'get-callback-state rid) "ctx"))
 
-;; get all the events within a tx, recursively.
+;; get-connector-event-recurse recursively walks all events in the current
+;; transaction metadata and builds a map of connector requests by index.
+;; Used to extract all raised events for test assertions.
 (defun get-connector-event-recurse (metadata i output)
   (let* ([event-ref-key (format-string "$connector_events:{}" i)])
     (when (key? metadata event-ref-key)
@@ -66,7 +88,8 @@
       (assoc! output (to-string i) req)
       (get-connector-event-recurse metadata (+ i 1) output)))))
 
-;; get all the events raised within the tx.
+;; get-connector-event-reqs returns a vector of all connector requests raised
+;; during the current transaction.
 (defun get-connector-event-reqs ()
   (let* ([m (get-tx-metadata)]
          [output (sorted-map)])
@@ -77,6 +100,8 @@
 ;;
 ;; connector tests
 ;;
 
+;; start-new-event-loop creates a new claim and triggers the first event.
+;; Used to simulate an end-to-end claim submission.
 (defun start-new-event-loop ()
   (let* ([claim (create-claim)])
     (cc:debugf (sorted-map "claim" claim) "start-new-event-loop")
@@ -85,11 +110,16 @@
     (populate-test-claimant! claim)
     (trigger-claim (get claim "claim_id") (get claim "claimant"))))
 
+;; assert-no-more-events verifies that no connector events remain to process.
+;; Use after all expected events have been handled.
 (defun assert-no-more-events ()
   (let* ([event-reqs (get-connector-event-reqs)])
     ;; done, no more events!
     (assert-equal (length event-reqs) 0)))
 
+;; process-single-event-empty-response simulates a connectorhub callback with
+;; an empty response body for the first pending event.
+;; Used to advance the state machine during tests.
 (defun process-single-event-empty-response ()
   (let* ([event-reqs (get-connector-event-reqs)]
          [_ (assert-equal 1 (length event-reqs))]
@@ -101,6 +131,12 @@
     ;; simulate the connectorhub callback
     (connector-handlers 'invoke-handler-with-body resp)))
 
+;; process-event-loop advances the claim state machine by simulating connector
+;; responses across `iters` iterations.
+;;
+;; Parameters:
+;; - iters: number of iterations (connector responses) to simulate
+;; - start: if true, starts a new claim first
 (defun process-event-loop (iters &optional start)
   (when start (start-new-event-loop))
   (if (<= iters 0)
@@ -109,4 +145,6 @@
       (process-single-event-empty-response)
       (process-event-loop (- iters 1)))))
 
+;; Integration test for full claim processing loop.
+;; Runs 7 connector event cycles from new claim to DONE state.
 (test "test-claim-factory" (process-event-loop 7 true))
diff --git a/phylum/main.lisp b/phylum/main.lisp
index 8c91f9d..1a40c1e 100644
--- a/phylum/main.lisp
+++ b/phylum/main.lisp
@@ -1,12 +1,11 @@
-;; Copyright © 2021 Luther Systems, Ltd. All right reserved.
-
-;; main.lisp
-
+;; Copyright © 2025 Luther Systems, Ltd. All rights reserved.
+;; ----------------------------------------------------------------------------
 ;; This file is the entrypoint for your operations script. It should
 ;; initialize global variables and load files containing utilities and endpoint
 ;; definitions. Be careful not to use methods in the cc: package namespace
 ;; while main.lisp is loading because there is no transaction context until the
 ;; endpoint handler fires.
+;; ----------------------------------------------------------------------------
 (in-package 'sandbox)
 (use-package 'router)
 (use-package 'utils)
@@ -14,9 +13,12 @@
 ;; service-name can be used to identify the service in health checks and longs.
 (set 'service-name "sandbox")
 
+;; Set during build process to reflect current version and build ID.
+;; Used in health check endpoint for visibility.
 (set 'version "LUTHER_PROJECT_VERSION")   ; overridden during build
 (set 'build-id "LUTHER_PROJECT_BUILD_ID") ; overridden during build
 (set 'service-version (format-string "{} ({})" version build-id))
 
+;; Load all route definitions and core business logic for this phylum.
 (load-file "routes.lisp")
 (load-file "claim.lisp")
diff --git a/phylum/routes.lisp b/phylum/routes.lisp
index 8acd893..542afd2 100644
--- a/phylum/routes.lisp
+++ b/phylum/routes.lisp
@@ -1,11 +1,18 @@
-;; Copyright © 2021 Luther Systems, Ltd. All right reserved.
-
-;; routes.lisp
-
-;; This file defines all of the RPC endpoints exposed by the phylum. This
-;; package uses a simple macro to define a class of readonly endpoints but this
-;; is just one approach for defining such middleware that extends all the
-;; routes in an application.
+;; Copyright © 2025 Luther Systems, Ltd. All rights reserved.
+;; ----------------------------------------------------------------------------
+;;
+;; Defines all RPC endpoints exposed by this phylum.
+;; Endpoints are declared using `defendpoint` (POST) and `defendpoint-get` (GET).
+;; +;; All routes are automatically wrapped with error-handling and side-effect +;; protection to ensure consistent behavior and logging. +;; +;; Readonly GET endpoints are protected against state updates via +;; `cc:force-no-commit-tx`. +;; +;; Each endpoint maps to a handler that interacts with the connector objects +;; defined in claim.lisp. +;; ---------------------------------------------------------------------------- (in-package 'sandbox) ;; wrap-endpoint is a simple wrapper for endpoints which allows them to call @@ -21,6 +28,7 @@ ;; defendpoint shadows router:endpoint so that all endpoints can be wrapped ;; with logic contained in wrap-endpoint. +;; This exposes an endpoint that is the equivalent of an HTTP POST. (defmacro defendpoint (name args &rest exprs) (quasiquote (router:defendpoint (unquote name) (unquote args) @@ -32,14 +40,19 @@ ;; readonly transactions and avoid committing them. But defendpoint-get will ;; provide additional protection if a utility function accidentally writes to ;; statedb during these endpoints. +;; This exposes an endpoint that is the equivalent of an HTTP GET. (defmacro defendpoint-get (name args &rest exprs) (quasiquote (sandbox:defendpoint (unquote name) (unquote args) - (cc:force-no-commit-tx) ; get route cannot update + (cc:force-no-commit-tx) ; get route cannot update (unquote-splicing exprs)))) +;; app-version-key stores the deployed version of this phylum in statedb. (set 'app-version-key (format-string "{}:version" service-name)) + +;; init endpoint: called during a successful over-the-air update. +;; Stores the current phylum version and logs upgrade/init info. (defendpoint "init" () (let* ([prev-version (statedb:get app-version-key)] [init? (nil? prev-version)]) @@ -54,6 +67,7 @@ (statedb:put app-version-key version) (route-success ()))) +;; healthcheck endpoint: returns static service metadata and timestamp. 
(defendpoint-get "healthcheck" () (route-success (sorted-map "reports" @@ -63,9 +77,12 @@ "service_name" service-name "timestamp" (cc:timestamp (cc:now))))))) +;; create_claim endpoint: creates a new claim and stores it. (defendpoint "create_claim" (req) (route-success (sorted-map "claim" (create-claim)))) +;; add_claimant endpoint: populates claim with claimant info and triggers +;; claim processing state machine. (defendpoint "add_claimant" (req) (let* ([claim-id (or (get req "claim_id") (set-exception-business "missing claim_id"))] @@ -74,6 +91,7 @@ (set-exception-business "missing claimant forename")) (route-success (sorted-map "claim" (trigger-claim claim-id claimant))))) +;; get_claim endpoint: fetches the full data for a given claim by ID. (defendpoint-get "get_claim" (req) (let* ([claim-id (or (get req "claim_id") (set-exception-business "missing claim_id"))] diff --git a/portal/README.md b/portal/README.md index 7336c96..e55046c 100644 --- a/portal/README.md +++ b/portal/README.md @@ -1,49 +1,71 @@ # Portal (Oracle Middleware) -The sandbox _oracle_ serves the application's JSON API, abstracting the details -of interacting with smart contracts from API consumers. +The `portal` service acts as the application's JSON API and middleware layer, +abstracting the details of executing distributed workflows and interacting with +underlying systems. It serves as the public-facing oracle for +[ELPS](https://github.com/luthersystems/elps)-based logic. -> An oracle is a service that provides the common operations script with +> An **oracle** is a service that provides the **common operations script** with > information from the outside world. +This is the service that is hosted behind your specified domain name. 
+ ## Directory Structure -```sh -oracle: - Code implementing the oracle service -version: - A mechanism for the oracle to know its build vesrion +```txt +portal/ +├── Makefile # Entrypoint for builds and env setup +├── README.md # This file +├── main.go # CLI entrypoint (version/start) +├── start.go # Start command CLI logic +├── version.go # CLI command to print version +├── oracle/ # Oracle service implementation +│ ├── endpoints.go # gRPC API endpoint logic (e.g., CreateClaim) +│ ├── oracle.go # Oracle server bootstrap and gRPC registration +│ └── oracle_test.go # Functional and snapshot tests +├── version/ # Holds build version info +│ └── version.go ``` -## Making Changes +## Core Concepts -### Testing Changes +- **Common Operations Script (COS):** The COS defines the end-to-end process + logic in ELPS. It is deployed to the platform and invoked by the oracle. +- **Oracle Service:** The `oracle` package is a generic adapter that binds + platform functions to public APIs. +- **gRPC Gateway:** The API is exposed as JSON/HTTP using `grpc-gateway`. -The oracle defines tests in files with names like `oracle/*_test.go`. These are -_functional tests_ which test application API and the code paths connecting the -oracle to the phylum. The functional tests can be run with the following -command: +## Making Changes -```sh -make test -``` +### Testing Changes -From the project's top level `make oraclegotest` will run the same tests. +Functional tests are located in `oracle/oracle_test.go` and exercise the full +path from the API through to COS execution. -### Running Tests Outside of Docker +These tests use an in-memory emulator to simulate the platform runtime. This +avoids needing a full network. -To run tests directly using `go test` there are environment variables needed to -for the tests to set up an in-memory copy of the platform to run tests on. +### Running Tests ```sh eval $(make host-go-env) go test ./... 
``` -This can be faster than running the tests in docker and has some additional -benefits. For example, the following command runs only the tests related to the -CreateAccount API endpoint: +To run only a specific test (e.g., `GetClaim`): ```sh -go test -run=CreateAccount ./... +go test -v -run=GetClaim ./... ``` + +## Production vs Emulated Mode + +In production, the oracle communicates with the distributed network via the +**shiroclient gateway**, using the **shiroclient gateway SDK**. This SDK handles +authenticated request routing, state management, and traceability. + +By default, the CLI runs in real mode. To run in-memory (no real nodes), use +`--emulate-cc` or set `SANDBOX_ORACLE_EMULATE_CC=true`. + +This emulated mode loads the `phylum` directory locally and executes the Common +Operations Script (COS) in process without starting an actual network. diff --git a/portal/oracle/oracle_test.go b/portal/oracle/oracle_test.go index 44e3bf4..9777307 100644 --- a/portal/oracle/oracle_test.go +++ b/portal/oracle/oracle_test.go @@ -74,7 +74,7 @@ func TestHealthCheck(t *testing.T) { require.Equal(t, 2, len(resp.GetReports())) } -func TestGetAccount(t *testing.T) { +func TestGetClaim(t *testing.T) { server, stop := makeTestServer(t) t.Cleanup(stop) var id string
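One piece of context that helps when reading the phylum changes above: `make-state-chain` in `phylum/claim.lisp` turns a vector of states into a linear successor map, and `next-state` advances a claim by a single lookup. A rough Go translation of that helper (the Go names are invented here for illustration; only the first and last states of the chain appear in this diff's context lines, so the example uses just those):

```go
package main

import "fmt"

// makeStateChain links firstState -> chain[0] -> ... -> lastState into a
// successor map, mirroring the phylum's make-state-chain helper: each state
// maps to the single state that follows it.
func makeStateChain(chain []string, firstState, lastState string) map[string]string {
	states := append(append([]string{}, chain...), lastState)
	next := make(map[string]string, len(states))
	cur := firstState
	for _, s := range states {
		next[cur] = s
		cur = s
	}
	return next
}

func main() {
	transitions := makeStateChain(
		[]string{"CLAIM_STATE_LOECLAIM_DETAILS_COLLECTED"},
		"CLAIM_STATE_NEW",
		"CLAIM_STATE_DONE",
	)
	// Advancing a claim is just a lookup of the current state's successor.
	fmt.Println(transitions["CLAIM_STATE_NEW"]) // CLAIM_STATE_LOECLAIM_DETAILS_COLLECTED
}
```

This linear-chain structure is why the integration test above can drive a claim to `CLAIM_STATE_DONE` simply by feeding it a fixed number of empty connector responses.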