This project calibrates multi-agent ODEs, SDEs, and PDEs to data using a neural network, and presents general experiments on hybrid neural modelling. We estimate marginal densities on the equation parameters, including adjacency matrices. This repository contains all the code and models used in our publications on the topic, as well as an extensive set of tools and examples for you to calibrate your own model:
- T. Gaskin, G. Pavliotis, M. Girolami. Neural parameter calibration for large-scale multiagent models. PNAS 120, 7, 2023. https://doi.org/10.1073/pnas.2216415120 (HarrisWilson and SIR models)
- T. Gaskin, G. Pavliotis, M. Girolami. Inferring networks from time series: a neural approach. PNAS Nexus 3, 4, 2024. https://academic.oup.com/pnasnexus/article/3/4/pgae063/7604085 (Kuramoto and HarrisWilsonNW models)
- T. Gaskin, T. Conrad, G. Pavliotis, C. Schütte. Neural parameter calibration and uncertainty quantification for epidemic forecasting. https://arxiv.org/abs/2312.03147 (SIR and Covid models)
- T. Gaskin, A. Bankowski, S. Winkelmann. Towards a Unified Theory of Hybrid Neural Models. (SIRS, Neurotransmission, and Laser models, as well as the Manifold_Learning example.)
The repository is organised into models, which are defined as mechanistic equations (e.g. the SIR model)
with components that are to be inferred from or calibrated to data. The models are located in the models
folder, each containing a Jupyter notebook with examples or a README file, detailing specifics on the code.
In addition, ensemble_training subfolders provide a utopya version of the models, allowing you to quickly
train multiple networks in parallel for an ensemble training-based approach to uncertainty quantification.
Since the code is
continuously being reworked and improved, the plots produced by the current version may differ from the publication
plots. For this reason, this repository is versioned, such that each publication has a version that will produce the
plots exactly as they appear in the paper. However, the results produced by the latest code base will typically be more
accurate, performant, and reliable than those of older versions.
This README will guide you through installation and getting started. There are a number of Jupyter notebooks
you can run out-of-the-box without any further modification. For uncertainty quantification with ensemble training, the project uses the utopya package to handle simulation
configuration, data management,
and plotting. This README gives a brief introduction to installation and a basic tutorial, which will be sufficient to
just run the models, reproduce plots, and play around with the code. You can also refer to the model-specific README
files, located at <model_name>/README.md, for detailed instructions on each model's features. A complete guide to
running models with Utopia/utopya can be
found here. As you go through
the Tutorial below, you will find links to the relevant tutorial entries, and it is recommended to peruse
these if you wish to build your own model using our code base.
Tip
If you encounter any difficulties or have questions, please file an issue.
We recommend you start with the SIR_demo and SIRS_demo Jupyter notebooks, located in the models/SIR and
models/SIRS folders, which will
introduce you to the general principles of neural parameter calibration and help you get set up. You will learn how to
train a neural network to learn constant and time-dependent parameters on single and multiple datasets, and how to use
different network architectures to improve learning performance.
There are a number of further notebooks for various hybrid and calibration experiments:
- models/SIRS/SIRS_hybrid_experiments: varying the degree of SIRS hybridisation
- models/Neurotransmission/Neurotransmission_model: calibrating the parameters of a compartmental neurotransmitter model to data
- models/Laser/Laser_model: online calibration of a mode-locked laser beam
- models/Manifold_Learning/ManifoldLearning: uncertainty quantification using manifold learning on spheres
If you want to efficiently train ensembles of networks in parallel, we suggest you go through this tutorial step by
step, which requires installation of the utopya package to handle the simulation, including parallel training.
Warning
utopya is currently only fully supported on Unix systems (macOS and Ubuntu). For Windows installation instructions, see below; be aware that utopya for Windows is currently work in progress.
Clone this repository using a link obtained from the 'Code' button (for non-developers, use HTTPS):
git clone <GIT-CLONE-URL>
We recommend creating a new virtual environment in a location of your choice and installing all requirements into the venv. The following command will install the utopya package and the utopya CLI from PyPI, as well as all other requirements:
pip install -r requirements.txt
This assumes your current directory is the project folder. You should now be able to invoke the utopya CLI:
utopya --help
Note
Enabling CUDA for PyTorch requires additional packages, e.g. torchvision and torchaudio.
Follow these instructions to enable GPU training.
For Apple Silicon, follow these
installation instructions. Note that GPU acceleration for Apple Silicon is still work in progress and many functions
have not
yet been implemented.
In the project directory (i.e. this one), register the entire project and all its models using the following command:
utopya projects register . --with-models
You should get a positive response from the utopya CLI and your project should appear in the project list when calling:
utopya projects ls
Done! 🎉
Important
Any changes to the project info file need to be communicated to utopya by calling the registration command anew.
You will then have to additionally pass the --exists-action overwrite flag, because a project of that name
already exists.
See utopya projects register --help for more information.
To properly display mathematical equations and symbols in the plots, we recommend installing LaTeX. However, LaTeX distributions are typically quite large, so make sure you have enough disk space.
On Ubuntu, first install LaTeX by running
sudo apt-get install texlive-latex-extra texlive-fonts-recommended dvipng cm-super
For macOS, install LaTeX via a package manager, e.g. Homebrew or MacPorts.
For both operating systems, also run the following command from within the virtual environment:
pip install latex
Thereafter, set the plots to use LaTeX by changing the following entry in the model's base_plots.yaml file:
.default_style:
style:
text.usetex: True
    # Keep everything else unchanged
LaTeX will then be used in all model plots. You can also change this individually for each plot.
A number of datasets, both real and synthetic, are available for testing the models. To save space, the example datasets have been uploaded using git lfs (large file storage). To download them, first install lfs via
git lfs install
This assumes you have the git-lfs command-line extension installed. Then, from within the repo, run
git lfs pull
This will pull all the datasets.
On Windows systems, you must use the Windows development branch of utopya; after completing the steps above, run:
pip uninstall utopya
pip install git+https://gitlab.com/utopia-project/utopya@89-allow-exec-prefix
Be aware that development on the utopya Windows dev branch is ongoing; if you run into any problems, please file an issue.
Next, in cfg/multiverse_project_cfg.yml, uncomment the following line:
executable_control:
  prefix: !if-windows-else [ [ python ], ~ ]
Lastly, you must change the default encoding to UTF-8 on Windows: in the Control Panel, navigate to the Regional Settings, go to the 'Administrative' tab, click 'Change system locale' under 'Language for non-Unicode programs', and check the 'Beta: Use Unicode UTF-8 for worldwide language support' option. See here for instructions.
Tip
At any stage and for any command, you can use the --help flag to show a description of the command, syntax details,
and valid arguments, e.g.
utopya eval --help
Now that you have set up the repository, let's run a model. We'll use the SIR model as an example. Running a model is a
simple command:
utopya run SIR
You can call
utopya models ls
to see a full list of all the registered models. Replace SIR with any of the registered model names to run that model
instead.
For all models, this command will generate some synthetic data, train the neural net to calibrate the model equations on
it, and generate a series of plots in the
utopya_output directory, located by default in your home directory (but this can
be changed). Once everything is done, you should see an output like this in your
terminal:
SUCCESS logging Performed plots from 5 plot configurations in 37.5s.
SUCCESS logging All done.
Tip
If you get the following error message
ValueError: The writer 'ffmpeg' is not available on your system!
you don't have a writer installed for saving animations. Don't worry: it is only needed for producing animated plots, so the error isn't critical and doesn't prevent you from producing non-animated plots.
Navigate to your utopya_output directory and open the SIR folder. In it you should see a time-stamped folder
containing a config, a data, and an eval folder. One of the most important benefits of using utopya is that it
automatically
stores data, plots, and all the configuration files used to generate them in a unique folder, and outputs are never
overwritten. This makes reproducing
and repeating runs easy, and keeps all the data organised. We will shortly see how you can easily re-evaluate the data
from a given run without having to re-run the simulation.
This directory structure already hints at the three basic steps that are executed during a model run:
- Combine the different configurations, prepare the simulation run(s), and start them
- Store the data
- Read in the data and automatically evaluate it by calling plot functions
Open the eval folder — in it there will be a further time-stamped folder. Every time you evaluate a simulation, a new
folder is created. This way, no evaluation result is ever overwritten. In the eval/YYMMDD-hhmmss folder, you should
find five plots. Take a look at densities_from_joint.pdf, which should look something like this:
You can see the true data (orange) together with the neural net predictions (blue) and an error estimate (blue shaded
area).
The results aren't great; you will also notice from the loss.pdf plot that the training loss has barely decreased.
Why? Well,
take a look at the SIR_cfg.yml file. This file holds all the default parameters for the model run. Scroll down to the
Training entry: you will notice the batch_size is set to 1. This means that the neural network performs a gradient
descent step every time it has reproduced a single frame of the time series. Further above, you will notice that the
synthetic dataset used to train the model has a length of num_steps: 100. For these disease dynamics, let's see if
letting the neural network see the whole time series for each gradient descent step would improve things. You could
change the batch size in the SIR_cfg.yml file directly, but actually this is not recommended: this file holds all the
default values the model will fall back on, should something go wrong. Instead, create a new run.yml file somewhere on
your computer, and copy the following entries into it:
parameter_space:
num_epochs: 300
SIR:
Training:
      batch_size: 100
We are now using a batch size of 100, i.e. the length of the time series, and are also training the model for a little longer (300 epochs instead of the default 100). Now, run the model again and pass the path to this file to the model:
utopya run SIR path/to/run.yml
Here, we are only updating those entries of the base configuration which are also given in the run.yml file; the
remaining ones are taken from the default configuration file. The results in the output folder should look something
like this:
Perhaps a little better, but still not great, and the uncertainty is much too small. The real problem here is that we are only training our neural network from a single initialisation, letting it find one of the possible parameter combinations that fit the problem. This doesn't give us an accurate representation of the parameter space. What we really need to be doing is training it multiple times, in parallel, from different initialisations, so that it can see more of the parameter space. This is what we will do in the next section.
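The idea can be sketched abstractly with the standard library only (a toy illustration of ensemble-based uncertainty quantification; `train_from_seed` is a made-up stand-in for a full training run, and the numbers are purely illustrative, not the repository's actual code):

```python
import random
import statistics

def train_from_seed(seed: int) -> float:
    """Stand-in for one full training run: returns the parameter value a
    network initialised with this seed converges to. Here this is faked as
    the 'true' value 0.2 plus a seed-dependent offset."""
    rng = random.Random(seed)
    return 0.2 + rng.gauss(0, 0.02)

# Train an 'ensemble' of 60 networks from different initialisations and
# summarise the spread of the resulting parameter estimates:
estimates = [train_from_seed(seed) for seed in range(60)]
print(round(statistics.mean(estimates), 3), round(statistics.stdev(estimates), 3))
```

The spread of the ensemble's estimates is what gives the marginal densities their width.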
Tip
If you wish to save the model output to a different directory, add the following entry to your run configuration:
paths:
  out_dir: ... # path/to/dir
or run the model with
utopya run <model_name> -p paths.out_dir path/to/out_dir
Take a look at the models/SIR/cfgs folder. In it you will find lots of subfolders, each containing a pair of run.yml
and eval.yml files. These are called configuration sets: pre-fabricated run files and corresponding evaluation
configurations. Try running the following command:
utopya run SIR --cs Predictions_from_ABM_data
The --cs ('configuration set') flag tells utopya to use the run.yml for the run and, later, the eval.yml file for the
plotting routine (we will get to the plots a little later on). In the run.yml file, take
note of the following entries:
perform_sweep: True
parameter_space:
seed: !sweep
default: 1
    range: [ 60 ]
The seed entry controls the random initialisation of the neural network, and we are 'sweeping' over 60 different
initialisations (range: [60]) and training the model on the same dataset each time! The perform_sweep entry tells
the model to run the sweep – set it to False to just perform a single run again. The seed would then be set to its
default value, in this case 1. utopya will automatically parallelise the runs over as many cores as your computer
makes available (you can change how many workers it can use). A single run is called
a 'universe' run, a sweep run over many 'universes' is called a 'multiverse' run.
Once the run is complete, the plot output should look like this:
Much better! You can see that the predicted densities are significantly closer to the true data. The folder also contains the marginal densities on the parameters we are estimating:
These too look good: we obtain an infection parameter of about 0.21 and an infection period of about 15 days – these are very similar to the values of 0.2 and 14 used to generate the synthetic data.
Tip
You can also configure sweeps by adding a --run-mode sweep or --run-mode single flag to the command in the CLI:
utopya run SIR --run-mode sweep
This will overwrite the settings in the configuration file. In general, paths to run.yml files will overwrite the
default entries, and CLI flags will overwrite the
entries in the config file. You can also change parameters right from the CLI:
utopya run SIR --pp num_epochs=300
See here for details.
In your output folder you will also find the following plot:
Each line represents a trajectory taken by the neural net during training; as you can see, we are training the net
multiple times in parallel, each time initialising the neural network at a different value of the initial distribution –
see the corresponding section on how to adjust this distribution. The colour of
each line represents the training loss at that sample.
The number of initialisations is controlled by the seed entry of the run config.
Tip
As an exercise, play around with the seed.range argument of the run.yml config. How does the quality of the time
series prediction and marginal densities change as you increase or decrease the number of runs?
You can sweep over as many parameters and entries as you like; any key in the run configuration can be swept over. A sweep entry must take the following form:
parameter: !sweep
default: 0
  values: [ 1, 2, 3, 4 ]
Any configuration file must be compatible with both a multiverse ('sweep') and a universe ('single') run. The
default entry is used whenever a universe run is performed, and the
values entry is used for the sweep. Instead of specifying a list of values, you can also provide a range, a
linspace, or a logspace:
parameter: !sweep
  default: default_value
  range: [ 1, 4 ] # passed to python range()
  # Other ways to specify sweep values:
  # values: [1, 2, 3, 4] # taken as they are
  # linspace: [1, 4, 4] # passed to np.linspace
  # logspace: [-5, -2, 7] # 7 log-spaced values in [10^-5, 10^-2], passed to np.logspace
Once you have set up your sweep configuration file, enable a multiverse run either by setting perform_sweep: True at
the top-level of the file, or by passing --run-mode sweep to the CLI command when you run your model. Without one of
these, the model will be run as a universe run.
There is no limit to how many parameters you can sweep over. Take a look, for instance, at the
models/HarrisWilson/cfgs/Marginals_over_noise/run.yml file. Here, we are sweeping over the noise in the training
data (sigma) as well as the seed. Sweeping over more parameters takes longer, of course, since the number of
universes grows multiplicatively with every added sweep dimension.
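The growth is easy to see: the universes of a multi-parameter sweep form the Cartesian product of the individual sweep values (the values below are illustrative, not those of the actual config file):

```python
from itertools import product

sigmas = [0.0, 0.1, 0.2]  # hypothetical sweep values for the training noise
seeds = list(range(4))    # e.g. seed: !sweep with range: [4]

# Each universe corresponds to one combination of sweep values:
universes = list(product(sigmas, seeds))
print(len(universes))  # 3 * 4 = 12 universes
```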
Tip
Read the full guide on running parameter sweeps here.
If you want to sweep over one parameter but vary some others along with it, you can perform a coupled sweep:
param1: !sweep
default: 1
values: [ 1, 2, 3, 4 ]
param2: !coupled-sweep
default: foo
values: [ bar, baz, foo, fab ]
  target_name: param1
Here, param2 is being varied along with param1 – the dimension of the parameter space remains 1. You can couple as many
parameters to sweep parameters as you like.
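In plain Python terms, a coupled sweep pairs its values element-wise with the target parameter rather than adding a new dimension to the Cartesian product (an illustration of the semantics, not utopya's implementation):

```python
param1_values = [1, 2, 3, 4]                  # the !sweep values
param2_values = ["bar", "baz", "foo", "fab"]  # the !coupled-sweep values

# Element-wise pairing: still only 4 universes, not 4 x 4 = 16.
universes = list(zip(param1_values, param2_values))
print(universes)
```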
When running a sweep, you will see the following logging entry in your terminal:
PROGRESS logging Initializing WorkerManager ...
NOTE logging Number of available CPUs: 8
NOTE logging Number of workers: 8
NOTE logging Non-zero exit handling: raise
PROGRESS logging Initialized WorkerManager.
As you can see, here utopya is using 8 CPU cores as individual workers to run universes in parallel. If you wish to
adjust this, e.g. to reduce the load on the CPU, you can adjust the worker_manager settings in your configuration
file:
worker_manager:
  num_workers: 4
As you have seen, there are multiple configuration layers that are recursively updated: at the bottom, there are default
configuration entries for each model, stored in <model_name>_cfg.yml. These are default values that will, broadly
speaking, be useful in most situations. For this reason, it is best not to change them when performing
a specific run. The default configuration file should include all the defaults used for a model, but you wouldn't want
to have to copy-paste all of them into a new file if you only want to change a few. For this purpose there are
run-specific configuration files, which you can pass to the model CLI via
utopya run <model> path/to/run.yml
You can pass either a relative or an absolute path. Entries in these files will overwrite the default values.
Remember that you only need to provide those entries of the default config you wish to update! Finally, you can also
change parameters directly by passing a --pp flag from the CLI:
utopya run <model> --pp num_epochs=100 --pp entry.key=2
Note that, when using the CLI, you can set sublevel entries of outer scopes by connecting them with a .:
key.subkey.subsubkey. YAML offers a wide range of functionality within the configuration file. Take a look e.g. at
the learnXinYminutes YAML tutorial for an overview – but since it is an
intuitive and human-readable configuration language, most things should seem very familiar to you already.
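The layering itself can be sketched as a recursive dictionary update (a simplified illustration of the principle, not utopya's actual implementation):

```python
def recursive_update(base: dict, update: dict) -> dict:
    """Recursively update `base` with entries from `update`: nested dicts
    are merged, all other values are overwritten."""
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            recursive_update(base[key], value)
        else:
            base[key] = value
    return base

defaults = {"num_epochs": 100, "Training": {"batch_size": 1, "device": "cpu"}}
run_cfg = {"num_epochs": 300, "Training": {"batch_size": 100}}

cfg = recursive_update(defaults, run_cfg)
print(cfg)  # batch_size and num_epochs are overwritten; device keeps its default
```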
Important
YAML is sensitive to indentation levels! In utopya, nearly every option can be set through a configuration parameter, so it is important to get the indentation level right. If you place a parameter at the wrong location, it will often be ignored, sometimes even without warning! A common beginner's mistake is to place model-specific parameters outside of the <model_name> scope:
parameter_space:
SIR:
    model_parameter: 1 # Parameters in this scope are passed to the model!
In general, every aspect of running, evaluating, and configuring models is controllable from the configuration file. Take a look at the documentation entry for a full overview of the keys and controls at your disposal.
Take a look at, for example, the models/SIR/SIR_cfg.yml file. You will notice lots of little !is-positive or
!is-positive-or-zero flags. These are so-called validation flags, and can only be used in the default configuration.
They are optional, but their function is to make sure you do not pass invalid parameters to the model (e.g. negative
values where only positive ones are allowed), and to catch such errors before the model is run. Running a model with
invalid parameters can sometimes lead to cryptic error messages, or the error may not be caught at all, leading to unpredictable
behaviour that can be a nightmare to debug. For this reason, you can add these validation flags to the default
configuration, along with possible values, ranges, or datatypes for each parameter.
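For illustration, a validated entry in a default configuration might look like this (the parameter names here are hypothetical; see the actual <model_name>_cfg.yml files for real examples):

```yaml
parameter_space:
  SIR:
    some_rate: !is-positive 0.2          # rejected before the run if <= 0
    some_count: !is-positive-or-zero 0   # rejected before the run if < 0
```

If validation fails, the error is raised before any simulation is started.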
Tip
See the full tutorial entry for a guide on how to use these. They are useful if you wish to implement your own model.
Inside our utopya_output/SIR output folder, take a look at the config folder. You will see a whole bunch of
configuration files. Every single level of the configuration hierarchy is backed up to this folder, allowing you to
always reconstruct which parameters you used to run a model. A couple of useful pointers:
- the model_cfg.yml file contains the default configuration
- the run_cfg.yml file is the run configuration
- the update_cfg.yml file contains any additional parameters you passed from the CLI
- the meta_cfg.yml file is the combination of all three, plus all the other defaults (many provided by utopya itself) used to run the model. This file will probably seem very large and overwhelming, and you don't really need to worry about it. However, when in doubt, you can refer to it to check where in your custom configuration you need to place certain keys.
Tip
Almost every aspect of running, evaluating, and configuring models is controllable from the configuration file. Take a look at
the documentation for a full overview of the keys and controls at your disposal.
As you saw, calling
utopya run <model_name>
performs a series of tasks:
- It collects all the configuration files and the parameters passed, backs up the files, validates parameters, and prepares sweep runs (if configured)
- It passes the parameters to the model (or models, if running a sweep)
- It then collects and bundles the output data and stores it
- Finally, it loads all the data into a so-called DataManager and plots the files.
Running a simulation and plotting the data are separate steps that can be run independently of one another. For instance, if you call
utopya run <model_name> --no-eval
the evaluation step will be skipped. A common use case however will be re-evaluating a model run you have already performed. This can easily be done by running the command
utopya eval <model_name>
This will re-evaluate the last simulation run that was performed. If you wish to evaluate a different run, simply pass the path to that folder in the CLI:
utopya eval <model_name> path/to/folder
Calling this will use all the plots given in the default plot configuration file <model_name>_plots.yml. This is the
default behaviour; you can pass a different plot configuration using the --plots-cfg flag in the CLI:
utopya eval <model_name> --plots-cfg path/to/config.yml
Take a look at the SIR_plots.yml file: you will see a list of entries, each corresponding to one plot. In each of the
configuration folders, you will notice an eval.yml file. These are plot configurations used for those specific
configuration sets; the configuration set --cs flag is thus simply a shorthand for the command
utopya run <model_name> path/to/run.yml --plots-cfg path/to/eval.yml
Many of these plots are based on a base plot: these are default plots, given in the SIR_base_plots.yml file, which
are available throughout the model, i.e. to any other plot configuration. This is handy, since you may wish to share
plots throughout the model without having to copy the configuration each time. Take a look at the
SIR_base_plots.yml file, and scroll all the way down to the loss baseplot:
loss:
based_on:
- .creator.universe
- .plot.facet_grid.line
select:
    data: loss
This function plots the training loss for each batch, and is available throughout the model. Let's go through it line by
line: the based_on argument tells the PlotManager which configurations to use as the base. Remember that in utopya,
a single run is called a universe, and that sweeping over multiple parameters creates multiple universes, or
multiverses. The two plot creators to use are thus the .creator.universe and the .creator.multiverse. The universe
creator creates plots for each individual universe, whereas the multiverse creator creates plots for the multiverse. The
.plot.facet_grid.line is the plot function to use to plot a line. Finally, the select key tells the PlotManager
which data to plot. It's that simple. Everything else shown in the configuration entry is just styling, which you can
also control right from the configuration (and which is backed up and reconstructible later on). If you now wish to use this
function in your model evaluation, create an eval.yml and simply add
loss:
  based_on: loss # This is the 'loss' plot from the base configuration
Tip
Read the full tutorial entry on plotting before continuing to the next steps.
The advantage of configuration-based plotting is twofold: for one, it means the configuration files are stored alongside the plots, so any given plot can be quickly recreated, and you will always be able to understand what you did to create a specific plot long after you first made it. This is invaluable for scientific research, where workflows often involve a lot of experimenting and playing around with numerical settings, and you may wish to return to a previous configuration weeks or months later. The other advantage is that utopya supports data transformation right from the configuration file: this means that data analysis and data plotting are kept separate, and you can always reconstruct the analysis steps later.
Tip
Read the full tutorial entry on configuration-based analysis using a DAG (directed acyclic graph). utopya uses xarray for data handling and transformation.
You can vary the size of the neural net and the activation functions
right from the config. The size of the input layer is inferred from
the data passed to it, and the size of the output layer is
determined by the number of parameters you wish to learn — all the hidden layers
can be determined by the user. The net is configured from the NeuralNet key of the
config:
NeuralNet:
num_layers: 6
nodes_per_layer:
default: 20
layer_specific:
0: 10
activation_funcs:
default: sigmoid
layer_specific:
0: sine
1: cosine
2: tanh
-1: abs
biases:
default: [ 0, 4 ]
layer_specific:
1: [ -1, 1 ]
  learning_rate: 0.002
num_layers sets the number of hidden layers. nodes_per_layer, activation_funcs, and biases are
dictionaries controlling the structure of the hidden layers. Each requires a default key
giving the default value, applied to all layers. An optional layer_specific entry
controls any deviations from the default on specific layers; in the above example,
all layers have 20 nodes by default, use a sigmoid activation function, and have a bias
initialised uniformly at random on [0, 4]; the layer_specific entries then override these defaults on the listed layers.
You can also set the bias initialisation interval to default: this will initialise the bias using
the PyTorch default
Xavier uniform distribution.
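The default / layer_specific pattern described above can be sketched as follows (a simplified illustration of how such an entry resolves to per-layer settings; not the repository's actual code):

```python
def resolve_per_layer(entry: dict, num_layers: int) -> list:
    """Expand a {default, layer_specific} config entry into one value per
    layer. Negative indices count from the last layer, as in the example."""
    values = [entry["default"]] * num_layers
    for idx, value in entry.get("layer_specific", {}).items():
        values[idx] = value
    return values

activation_funcs = {
    "default": "sigmoid",
    "layer_specific": {0: "sine", 1: "cosine", 2: "tanh", -1: "abs"},
}
print(resolve_per_layer(activation_funcs, 6))
# ['sine', 'cosine', 'tanh', 'sigmoid', 'sigmoid', 'abs']
```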
Any PyTorch activation function
is supported, such as relu, linear, tanh, sigmoid, etc. Some activation functions take arguments and
keyword arguments; these can be provided like this:
NeuralNet:
num_layers: 6
nodes_per_layer: 20
activation_funcs:
default:
name: Hardtanh
args:
- -2 # min_value
- +2 # max_value
kwargs:
      # any kwargs here ...
For many applications, you will want control over the prior distribution of the parameters. To this
end, you can add a prior entry that gives a distribution over the parameters you wish to learn:
NeuralNet:
prior:
distribution: uniform
parameters:
lower: 0
      upper: 2
This will train the neural network to initially output values uniformly within [0, 2], for all
parameters you wish to learn. If you want individual parameters to have their own priors, you can do so by passing a
list as the argument to prior. For instance, assume you wish to learn 2 parameters; the configuration entry then could
be:
NeuralNet:
prior:
- distribution: normal
parameters:
mean: 0.5
std: 0.1
- distribution: uniform
parameters:
lower: 0
      upper: 5
This will initialise each parameter with a separate prior. Take a look at the output folder for the
Predictions_on_smooth_data run; it contains a plot of the initial value distribution:
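The per-parameter prior configuration can be read as a sampling recipe (a toy sketch using the stdlib random module of where the initial parameter outputs are distributed; not the repository's actual sampling code):

```python
import random

rng = random.Random(42)

# One prior per parameter to learn, mirroring the config entry above:
priors = [
    {"distribution": "normal", "parameters": {"mean": 0.5, "std": 0.1}},
    {"distribution": "uniform", "parameters": {"lower": 0, "upper": 5}},
]

def sample(prior):
    """Draw one value from the given prior specification."""
    p = prior["parameters"]
    if prior["distribution"] == "normal":
        return rng.gauss(p["mean"], p["std"])
    if prior["distribution"] == "uniform":
        return rng.uniform(p["lower"], p["upper"])
    raise ValueError(f"Unknown distribution: {prior['distribution']}")

initial_guess = [sample(p) for p in priors]  # one draw per parameter
```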
You can modify the training settings, such as the batch size or the training device, from the
Training entry of the config:
Training:
batch_size: 1
loss_function:
name: MSELoss
to_learn: [ param1, param2, param3 ]
true_parameters:
param4: 0.5
device: cpu
  num_threads: ~
The to_learn entry lists the parameters you wish to learn. If you are not learning the complete
parameter set, you must supply the parameter value to use during training for that parameter under
true_parameters.
Note
Specifying the parameters to learn is not supported in the HarrisWilsonNW and Kuramoto models, since these learn
the entire network adjacency matrix.
The device entry sets the training device. The default here is the cpu; you can set it to any
supported PyTorch training device; for instance, set it to cuda to use the GPU for training. Make sure your platform
is configured to support the selected device.
On Apple Silicon, set the device to mps to enable GPU training, provided you have followed the corresponding
installation instructions (see above). Note that PyTorch for Apple Silicon is still work in progress at this stage,
and some functions have not yet been fully implemented.
utopya automatically parallelises multiple runs; the number of CPU cores available for this
can be specified under worker_manager/num_workers at the root level of the configuration (i.e. on the same level as
parameter_space). The Training/num_threads entry controls the number of threads per model run to be used during
training.
If you set num_workers to 4 and num_threads to 3, you will thus be able to use 12 threads in total.
You can set the loss_function/name argument to point to any supported
PyTorch loss function. Additional arguments to
the loss function can be passed via an optional args and kwargs entry:
loss_function:
name: CTCLoss
args:
- 1 # blank
    - 'sum' # reduction to use
By default, new synthetic data is produced during every run, but this is often not desired. For one, when performing a multiverse run, we want each universe to calibrate the same data. For another, we will want to be able to load in real data. The loading syntax differs slightly from model to model (unifying this is still WIP), but the general concept is always the same: to your run config, add the following entry (here using SIR as an example):
SIR:
Data:
    load_from_dir: data/SIR/ABM_data/data/uni0/data.h5
This will load in the training data from the given h5 file and use it across universes. See the model-specific README
files to see the syntax for each model. Data is stored in the data/ folder.
This repository contains the following models:
- SIR: An SDE model of contagious diseases with scalar parameters that are learned from data.
- Kuramoto: A linear SDE model of synchronisation of networked oscillators. The network adjacency matrix is learned from data.
- HarrisWilson: A non-linear SDE model of optimal transport, modelling the flow of supply and demand on a network. Scalar parameters are learned from data.
- HarrisWilsonNW: The Harris-Wilson model, but with the network adjacency matrix learned from data; the physical equations are the same as in the HarrisWilson model.
- Covid: A complex model of contagion and the spread of Covid-19. Scalar parameters are learned from data.
See the model-specific README files for a guide to each model. The README files are located in the respective
<model_name> folders.
If you are ready to build your own NeuralABM model, there is an easy command you can use to get started:
utopya models copy <model_name>
This command will duplicate an existing model and rename it to whatever name you give when prompted. You can then incrementally adapt the copied model to your own requirements.



