This repository contains the codebase for Snow Observing Strategy (SOS) applications integrated within the Novel Observing Strategies Testbed (NOS-T).
To install the NOS-T library, follow the directions here.
To set up the Amazon Web Services (AWS) command line interface (CLI), follow the directions here.
A single manager application is responsible for orchestrating the various applications and maintaining a consistent time across them. When the manager starts, it triggers the managed applications, each responsible for generating derived, merged datasets or raster layers sent as base64-encoded strings. Below is a table describing each application:
| Application | Purpose | Data Source | Developed | Containerized |
|---|---|---|---|---|
| Manager | Orchestrates applications, maintains time | NA | Y | Y |
| Planner | Selects best taskable observations on the basis of reward | LIS | Y | N |
| Appender | Aggregates planned taskable observations, filtering duplicates | Planner | Y | N |
| Simulator | Simulates satellite operations and determines when and where observations are collected | Appender | Y | N |
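Because the raster layers travel between applications as base64-encoded strings, the sketch below shows one way such a payload might be packaged. This is illustrative only: the file path, layer name, and payload fields are assumptions, not the applications' actual message schema.

```python
import base64
import json

def encode_raster_message(raster_path: str, layer_name: str) -> str:
    """Package a raster file as a base64-encoded JSON payload (illustrative)."""
    with open(raster_path, "rb") as f:
        raster_bytes = f.read()
    return json.dumps({
        "layer": layer_name,
        # base64 turns the binary raster into text that can ride inside a message
        "data": base64.b64encode(raster_bytes).decode("utf-8"),
    })
```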
Applications communicate via a RabbitMQ message broker using the Advanced Message Queuing Protocol (AMQP). The figure below illustrates the overall workflow:
```mermaid
flowchart LR
subgraph cluster0["S3 Bucket"]
lis["LIS NetCDF"]
end
subgraph cluster1["Applications"]
style cluster1 stroke-dasharray: 5 5
planner["Planner"]
style planner fill:red
appender["Appender"]
style appender fill:dodgerblue
simulator["Simulator"]
style simulator fill:green
end
subgraph cluster2["Outputs"]
style cluster2 stroke-dasharray: 5 5
sc_geojson["Selected Cells<br/>(GeoJSON)"]
ag_geojson["Aggregated Selected Cells<br/>(GeoJSON)"]
end
subgraph cluster3["Visualization"]
style cluster3 stroke-dasharray: 5 5
cesium["Cesium Web<br/>Application"]
end
lis --> planner
lis ~~~ appender
lis ~~~ simulator
planner -->|Write| sc_geojson
appender -->|Append| ag_geojson
simulator -->|Update| ag_geojson
sc_geojson~~~cluster3
ag_geojson --> cluster3
ag_geojson -.->|Upload/Filter Daily| lis
```
The SOS applications exchange messages over AMQP through the RabbitMQ event broker. The table below summarizes the messages each application receives and sends:
| Application | Receives | Sends |
|---|---|---|
| Planner | Data availability messages from an AWS Lambda function | Selected cells, saved as a GeoJSON file whose contents are also sent as an AMQP message to the appender application |
| Appender | Message from the planner containing the selected cells | Aggregated record of the selected cells, with duplicate rows filtered, sent as an AMQP message to the simulator application |
| Simulator | Message from the appender containing the aggregated selected cells record | Simulated satellite operations, determining when and where observations are collected, sent to the Cesium web application |
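As a concrete illustration, the snippet below publishes a planner result over AMQP with the pika client. The exchange name, routing key, and credentials are assumptions for the sketch; in practice the messaging topology is configured through the NOS-T library and `sos.yaml`.

```python
import json
import pika

# Connect to the local RabbitMQ broker (credentials are placeholders)
credentials = pika.PlainCredentials("admin", "admin")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", port=5672, credentials=credentials)
)
channel = connection.channel()
channel.exchange_declare(exchange="sos", exchange_type="topic", durable=True)

# Load the planner output and send its contents for the appender to consume
with open("selected_cells.geojson") as f:
    selected_cells = json.load(f)

channel.basic_publish(
    exchange="sos",
    routing_key="sos.planner.selected_cells",  # hypothetical routing key
    body=json.dumps(selected_cells),
)
connection.close()
```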
Both the input data and the output data generated by the applications are stored in an Amazon Web Services (AWS) Simple Storage Service (S3) bucket.
Note: The applications use the AWS SDK for Python, Boto3. Boto3 allows users to create, configure, and manage AWS services, including S3, Simple Notification Service (SNS), and Elastic Compute Cloud (EC2). Access to the AWS SDK is limited to SOS administrators as required by NASA's Science Managed Cloud Environment (SMCE).
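The snippet below is a minimal sketch of the kind of Boto3 calls involved. The bucket name is a placeholder, the object keys mirror the directory layout shown later in this document, and credentials are taken from the AWS CLI configuration.

```python
import boto3

s3 = boto3.client("s3")

# Download a LIS input file from the bucket (bucket name and keys are placeholders)
s3.download_file("my-sos-bucket", "inputs/LIS/LIS_HIST_201903010000.d01.nc",
                 "LIS_HIST_201903010000.d01.nc")

# Upload an application output back to the bucket
s3.upload_file("selected_cells.geojson", "my-sos-bucket",
               "outputs/planner/2019-03-02/selected_cells.geojson")
```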
```mermaid
flowchart TB
subgraph Discover["Discover"]
lis("LIS")
end
subgraph AWS["AWS"]
S3("S3 Bucket")
end
subgraph NOS-T["NOS-T"]
planner("Planner")
appender("Appender")
simulator("Simulator")
end
planner <-. AMQP .-> appender
appender <-. AMQP .-> simulator
NOS-T -- Boto3 --> AWS
AWS -- AMQP or SNS --> Discover
Discover -. Boto3 .-> AWS
AWS -. AMQP or SNS .-> NOS-T
style lis fill: Violet
style S3 fill: Orange
style planner fill: #ff0000, stroke: #333, stroke-width: 2px
style appender fill: #1e90ff, stroke: #333, stroke-width: 2px
style simulator fill: #008000, stroke: #333, stroke-width: 2px
linkStyle 4 stroke: Violet,fill:none
linkStyle 5 stroke: Violet,fill:none
```
The LIS inputs are stored in an S3 bucket, which the SOS applications access. The applications then write their outputs into an output directory organized by scenario day and application. Below is an example:
```mermaid
flowchart LR
subgraph S3Bucket["S3 Bucket"]
subgraph Inputs["LIS Forecasts"]
inputs --> LIS
inputs --> vector
LIS -.-> for1["LIS_HIST_201903010000.d01.nc"]
LIS -.-> for2["LIS_HIST_201903020000.d01.nc"]
LIS -.-> for3["LIS_HIST_201903030000.d01.nc"]
LIS -.-> for4["LIS_HIST_201903040000.d01.nc"]
LIS -.-> for5["LIS_HIST_201903050000.d01.nc"]
LIS -.-> for6["LIS_HIST_201903060000.d01.nc"]
LIS -.-> for7["LIS_HIST_201903070000.d01.nc"]
LIS -.-> for8["LIS_HIST_201903080000.d01.nc"]
LIS -.-> for9["LIS_HIST_201903090000.d01.nc"]
LIS -.-> for10["LIS_HIST_201903100000.d01.nc"]
vector -.-> geoj["WBDHU2.geojson"]
end
subgraph Outputs["NOS-T Application Outputs"]
outputs --> planner
outputs --> appender
outputs --> simulator
planner -.-> d1p["2019-03-02"]
planner -.-> d1p2[".<br/>.<br/>.<br/>."]
planner -.-> d1p3["2019-03-10"]
d1p -.-> selected["selected_cells.geojson"]
d1p2 -.-> selected2[".<br/>.<br/>.<br/>."]
d1p3 -.-> selected3["selected_cells.geojson"]
appender -.-> d1a["2019-03-02"]
appender -.-> d1a2[".<br/>.<br/>.<br/>."]
appender -.-> d1a3["2019-03-10"]
d1a -.-> appended["appended_cells.geojson"]
d1a2 -.->appended2[".<br/>.<br/>.<br/>."]
d1a3 -.-> appended3["appended_cells.geojson"]
simulator -.-> d1s["2019-03-02"]
simulator -.-> d1s2[".<br/>.<br/>.<br/>."]
simulator -.-> d1s3["2019-03-10"]
d1s -.-> simulated["completed_cells.geojson"]
d1s2 -.->simulated2[".<br/>.<br/>.<br/>."]
d1s3 -.-> simulated3["completed_cells.geojson"]
end
end
style planner fill: #ff0000, stroke: #333, stroke-width: 2px
style d1p fill: #ff0000, stroke: #333, stroke-width: 2px
style d1p2 fill: #ff0000, stroke: #333, stroke-width: 2px
style d1p3 fill: #ff0000, stroke: #333, stroke-width: 2px
style selected fill: #ff0000, stroke: #333, stroke-width: 2px
style selected2 fill: #ff0000, stroke: #333, stroke-width: 2px
style selected3 fill: #ff0000, stroke: #333, stroke-width: 2px
style appender fill: #1e90ff, stroke: #333, stroke-width: 2px
style d1a fill: #1e90ff, stroke: #333, stroke-width: 2px
style d1a2 fill: #1e90ff, stroke: #333, stroke-width: 2px
style d1a3 fill: #1e90ff, stroke: #333, stroke-width: 2px
style appended fill: #1e90ff, stroke: #333, stroke-width: 2px
style appended2 fill: #1e90ff, stroke: #333, stroke-width: 2px
style appended3 fill: #1e90ff, stroke: #333, stroke-width: 2px
style simulator fill: #008000, stroke: #333, stroke-width: 2px
style d1s fill: #008000, stroke: #333, stroke-width: 2px
style d1s2 fill: #008000, stroke: #333, stroke-width: 2px
style d1s3 fill: #008000, stroke: #333, stroke-width: 2px
style simulated fill: #008000, stroke: #333, stroke-width: 2px
style simulated2 fill: #008000, stroke: #333, stroke-width: 2px
style simulated3 fill: #008000, stroke: #333, stroke-width: 2px
style LIS fill:Violet
style for1 fill:Violet
style for2 fill:Violet
style for3 fill:Violet
style for4 fill:Violet
style for5 fill:Violet
style for6 fill:Violet
style for7 fill:Violet
style for8 fill:Violet
style for9 fill:Violet
style for10 fill:Violet
```
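Given this layout, an application can derive its daily output location from the scenario date. The helper below is a hypothetical sketch: the `outputs/<application>/<YYYY-MM-DD>/` prefix mirrors the diagram, but the exact key scheme is defined by the applications themselves.

```python
from datetime import date

def output_key(application: str, scenario_day: date, filename: str) -> str:
    """Build an S3 object key following the outputs/<app>/<YYYY-MM-DD>/ layout."""
    return f"outputs/{application}/{scenario_day.isoformat()}/{filename}"

# e.g., "outputs/planner/2019-03-02/selected_cells.geojson"
key = output_key("planner", date(2019, 3, 2), "selected_cells.geojson")
```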
The flow of data between the various applications and systems is shown below:
```mermaid
sequenceDiagram
box Red Discover
participant L as Land<br/>Information<br/>System (LIS)
end
box Green Novel Observing Strategies Testbed (NOS-T)
participant M as Manager
participant P as Planner
participant A as Appender
participant S as Simulator
participant C as Cesium Web<br/>Application
end
activate M
activate C
M->>C: Initialize
M->>C: Start
L-->>P: LIS NetCDF
activate P
Note over P: Maximize<br/>Reward<br/>Values
P-->>A: Taskable<br/>Observations
deactivate P
activate A
Note over A: Append Unique<br/>Taskable<br/>Observations
A-->>S: Appended<br/>Taskable<br/>Observations
deactivate A
activate S
Note over S: Simulate<br/>Satellite<br/>Operations
box Blue Amazon Web<br/>Services (AWS)
participant S3 as S3 Bucket
end
S-->>S3: Collected Taskable Observations
deactivate S
Note over C: Visualize<br/>Taskable<br/>Observations
M->>C: Stop
S3->>L: Collected Taskable Observations
deactivate M
deactivate C
```
The SOS applications can be executed using Conda or Docker. The steps for executing with Conda are provided below, assuming you have followed the NOS-T installation instructions and the AWS CLI installation instructions.
In the `sos` directory, create a YAML file named `sos.yaml` with the following contents:
```yaml
info:
  title: Novel Observing Strategies Testbed (NOS-T) YAML Configuration
  version: '1.0.0'
  description: Version-controlled AsyncAPI document for RabbitMQ event broker with Keycloak authentication within NOS-T
servers:
  rabbitmq:
    keycloak_authentication: False
    host: "localhost"
    port: 5672
    tls: False
    virtual_host: "/"
execution:
  general:
    prefix: sos
  manager:
    sim_start_time: "2019-03-01T23:59:59+00:00"
    sim_stop_time: "2019-03-10T23:59:59+00:00"
    start_time:
    time_step: "0:00:01"
    time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
    time_scale_updates: []
    time_status_step: "0:00:01" # 1 second * time scale factor
    time_status_init: "2019-03-01T23:59:59+00:00"
    command_lead: "0:00:05"
    required_apps:
      - manager
      - planner
      - appender
      - simulator
    init_retry_delay_s: 5
    init_max_retry: 5
    set_offset: True
    shut_down_when_terminated: True
  managed_applications:
    planner:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
    appender:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
    simulator:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:01:00" # 1 minute * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
```
A{"Scenario day change"} -- Freeze --> B{"Timed or indefinite?"}
B -- Timed --> C["Resume after timed freeze<br>(1-2 hours)"]
B -- Indefinite --> D["Data Upload Triggers Lambda Function<br>"]
D -->F["Resume after S3 upload"]
A -- No freeze --> E["Continue after scenario day change"]
linkStyle 2 stroke:#00C853,fill:none
linkStyle 3 stroke:#00C853,fill:none
linkStyle 4 stroke:#D50000,fill:none
```
Depending on whether the applications are running in isolation or integrated with LIS, scenario time freezes may be required. To enhance flexibility, multiple freeze modes are available, as detailed below:
- Indefinite Freeze: Useful when running the Planner, Appender, and Simulator applications with LIS.

  ```yaml
  configuration_parameters:
    scenario_day_freeze:
      enabled: true
      mode: "indefinite"
  ```

- Timed Freeze: Useful when running the Planner, Appender, and Simulator applications with LIS.

  ```yaml
  configuration_parameters:
    scenario_day_freeze:
      enabled: true
      mode: "timed"
      duration: "0:02:00" # duration for timed freeze (HH:MM:SS format)
  ```

- No Freeze: Useful when running the Planner, Appender, and Simulator applications separately from LIS (e.g., for experimental or development purposes).

  ```yaml
  configuration_parameters:
    scenario_day_freeze:
      enabled: false
  ```
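The decision logic shown in the flowchart above can be summarized with the following hypothetical sketch. Here `config` mirrors the `scenario_day_freeze` YAML section and `wait_for_upload` stands in for the S3-upload/Lambda trigger; the real applications implement this behavior internally.

```python
import time

def handle_scenario_day_change(config: dict, wait_for_upload) -> None:
    """Illustrative sketch of the scenario-day freeze decision (not the actual implementation)."""
    freeze = config.get("scenario_day_freeze", {"enabled": False})
    if not freeze.get("enabled", False):
        return  # No freeze: continue immediately after the scenario day change
    if freeze.get("mode") == "timed":
        hours, minutes, seconds = (int(part) for part in freeze["duration"].split(":"))
        time.sleep(hours * 3600 + minutes * 60 + seconds)  # Resume after the timed freeze
    else:  # "indefinite"
        wait_for_upload()  # Resume only after the S3 upload (Lambda trigger) completes
```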
Below is a complete example showing the various freeze modes implemented in the YAML configuration file:
```yaml
info:
  title: Novel Observing Strategies Testbed (NOS-T) YAML Configuration
  version: '1.0.0'
  description: Version-controlled AsyncAPI document for RabbitMQ event broker with Keycloak authentication within NOS-T
servers:
  rabbitmq:
    keycloak_authentication: False
    host: "localhost"
    port: 5672
    tls: False
    virtual_host: "/"
execution:
  general:
    prefix: sos
  manager:
    sim_start_time: "2019-03-01T23:59:59+00:00"
    sim_stop_time: "2019-03-10T23:59:59+00:00"
    start_time:
    time_step: "0:00:01"
    time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
    time_scale_updates: []
    time_status_step: "0:00:01" # 1 second * time scale factor
    time_status_init: "2019-03-01T23:59:59+00:00"
    command_lead: "0:00:05"
    required_apps:
      - manager
      - planner
      - appender
      - simulator
    init_retry_delay_s: 5
    init_max_retry: 5
    set_offset: True
    shut_down_when_terminated: True
  managed_applications:
    planner:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      # Optional: scenario day freeze configuration.
      # If this section is omitted, the planner defaults to no freeze behavior (freeze disabled).
      # configuration_parameters:
      #   scenario_day_freeze:
      #     enabled: true        # false = no freeze, true = enable freeze on scenario day change
      #     mode: "indefinite"   # "timed" = resume after duration, "indefinite" = resume after S3 upload
      #   scenario_day_freeze:
      #     enabled: true        # false = no freeze, true = enable freeze on scenario day change
      #     mode: "timed"        # "timed" = resume after duration, "indefinite" = resume after S3 upload
      #     duration: "0:02:00"  # duration for timed freeze (HH:MM:SS format)
      #   scenario_day_freeze:
      #     enabled: false       # false = no freeze, true = enable freeze on scenario day change
    appender:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
    simulator:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:01:00" # 1 minute * time scale factor
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
```

In the `sos` directory, create a `.env` file with the following contents, specific to your event broker running on localhost:
USERNAME="admin"
PASSWORD="admin"In the sos directory, create a .env file with the following content to access the event broker hosted on the Science Cloud:
- Service Account:

  ```
  CLIENT_ID="<Request from NOS-T Operator>"
  CLIENT_SECRET_KEY="<Request from NOS-T Operator>"
  ```

- User Account:

  ```
  USERNAME="<Keycloak Username>"
  PASSWORD="<Keycloak Password>"
  CLIENT_ID="<Request from NOS-T Operator>"
  CLIENT_SECRET_KEY="<Request from NOS-T Operator>"
  ```
Activate the Conda environment:

```
conda activate nost
```

Run each application in a separate terminal, making sure to start the manager application first:

- Terminal 1:

  ```
  python3 src/manager/main.py
  ```

- Terminal 2:

  ```
  python3 src/planner/main.py
  ```

- Terminal 3:

  ```
  python3 src/appender/main.py
  ```

- Terminal 4:

  ```
  python3 src/simulator/main.py
  ```

Below is an example:
*Terminal running all four SOS applications.*
The SOS applications can also be run using Docker Compose.

- Change directory to your cloned repository (i.e., `sos/`), which will be the working directory for this execution.
- Confirm prerequisites:
  - Required files are present in the working directory (i.e., `sos/`)
  - Successful completion of `aws configure`
- Execute the containers using `docker-compose`:

  ```
  docker-compose up -d
  ```

  NOTE: To confirm the Docker containers are running, run `docker ps`. You should see four containers listed: manager, planner, appender, and simulator.

- To shut down the Docker containers:

  ```
  docker-compose down
  ```
Setting up a Cesium visualization requires you to (i) set up an event broker on localhost, (ii) acquire a Cesium access token, (iii) create an `env.js` file with credentials, and (iv) run an HTTP server to expose local files. Each of these steps is covered below.
To set up an event broker on localhost, follow the directions here.
- Sign in or create an account at: https://cesium.com/ion/signin/tokens.
- Create a new access token by clicking the blue "Create token" button located in the upper left corner.
- Add the asset "Blue Marble Next Generation July 2004" to your assets: https://ion.cesium.com/assetdepot/3845?query=Blue%20Mar. Click the blue "Add to my assets" button located in the bottom right corner.
- In the `sos/src/visualization` directory, create a file named `env.js` with the following contents:

  ```js
  var HOST = "localhost"
  var RABBITMQ_PORT = 15670
  var USERNAME = ""  // Your RabbitMQ username
  var PASSWORD = ""  // Your RabbitMQ password
  var TOKEN = ""     // Your Cesium access token
  ```

  Note: Add the Cesium access token that you generated in the Cesium Access Token section.
- In the `sos/src/visualization` directory, run an HTTP server:

  ```
  python3 -m http.server 7000
  ```

- In your web browser, navigate to http://localhost:7000.
- Finally, click on `cesium_visualization.html`. You should see a Cesium visualization web application running on localhost.