7 changes: 7 additions & 0 deletions docs/conf.py
@@ -28,6 +28,7 @@ class GroupHandler_mock:
needs_sphinx = "1.3"

extensions = [
"sphinx.ext.intersphinx",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.mathjax",
@@ -61,6 +62,12 @@ class GroupHandler_mock:

toc_object_entries_show_parents = "hide"

intersphinx_mapping = {
"numpy": ("https://numpy.org/doc/stable", None),
"python": ("https://docs.python.org/3", None),
"scipy": ("https://docs.scipy.org/doc/scipy", None),
}

def on_missing_reference(app, env, node, contnode):
if node["reftype"] == "any":
return contnode
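
A handler like this only takes effect once it is registered for Sphinx’s ``missing-reference`` event. How this particular ``conf.py`` does that lies outside the shown hunk, so the following ``setup`` function is only a hedged sketch of the usual wiring:

.. code-block:: python

	def setup(app):
		# Register the handler above so that unresolved "any" references
		# are silently replaced by their plain content node.
		app.connect("missing-reference", on_missing_reference)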
20 changes: 11 additions & 9 deletions docs/index.rst
@@ -7,7 +7,7 @@ The JiTCODE module
Overview
--------

JiTCODE (just-in-time compilation for ordinary differential equations) is an extension of SciPy’s `ODE`_ (`scipy.integrate.ode`) or `Solve IVP`_ (`scipy.integrate.solve_ivp`).
JiTCODE (just-in-time compilation for ordinary differential equations) is an extension of SciPy’s `ODE`_ (:class:`scipy.integrate.ode`) or `Solve IVP`_ (:func:`scipy.integrate.solve_ivp`).
Where the latter take a Python function as an argument, JiTCODE takes an iterable (or generator function or dictionary) of symbolic expressions, which it translates to C code, compiles on the fly, and uses as the function to feed into SciPy’s ODE or Solve IVP.
Symbolic expressions are mostly handled by `SymEngine`_, `SymPy`_’s compiled-backend-to-be (see `SymPy vs. SymEngine`_ for details).
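
A minimal sketch of this workflow, using a harmonic oscillator as a placeholder system (the calls follow JiTCODE’s documented interface; the numbers are arbitrary):

.. code-block:: python

	from jitcode import jitcode, y

	# dy_0/dt = y_1, dy_1/dt = -y_0, written as symbolic expressions
	f = [ y(1), -y(0) ]

	ODE = jitcode(f)                         # translate to C and compile on the fly
	ODE.set_integrator("dopri5")             # hand over to SciPy's Dormand–Prince integrator
	ODE.set_initial_value([1.0, 0.0], 0.0)

	data = [ ODE.integrate(time) for time in range(1, 11) ]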

@@ -40,7 +40,7 @@ As with SciPy’s ODE, this documentation assumes that the differential equation

.. math::

\dot{y} = f(t,y)
\dot{y} = f(t,y).


.. _example:
@@ -69,9 +69,11 @@ An explicit example
Details of the building process
-------------------------------

.. currentmodule:: jitcode

To generate the functions needed by SciPy’s ODE or Solve IVP, JiTCODE executes a series of distinct processing steps, each of which can be invoked through a command and tweaked as needed.
These commands execute previous steps as needed, i.e., if they have not been performed yet.
If you are happy with the default options, however, you do not need to bother with this and can just use the commands at the very end of the chain, namely `set_integrator`, `set_initial_value`, or `save_compiled`.
If you are happy with the default options, however, you do not need to bother with this and can just use the commands at the very end of the chain, namely :meth:`~_jitcode.jitcode.set_integrator`, :meth:`~_jitcode.jitcode.set_initial_value`, or :meth:`~_jitcode.jitcode.save_compiled`.
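
For illustration, here is a hedged sketch of invoking two intermediate steps explicitly before handing over to the integrator (assuming the method names ``generate_f_C`` and ``compile_C`` and the ``chunk_size`` keyword from JiTCODE’s reference; you only need this if you actually want to tweak those steps):

.. code-block:: python

	from jitcode import jitcode, y

	ODE = jitcode([ y(1), -y(0) ])
	ODE.generate_f_C(chunk_size=100)   # tweak the C-code generation explicitly …
	ODE.compile_C()                    # … and the compilation,
	ODE.set_integrator("dopri5")       # which would otherwise have triggered both steps itself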

The following diagram details which command calls which other command when needed:

@@ -102,19 +104,19 @@ A more complicated example
Calculating Lyapunov exponents with `jitcode_lyap`
--------------------------------------------------

`jitcode_lyap` is a simple extension of `jitcode` that almost automatically handles calculating Lyapunov exponents by evolving tangent vectors [BGGS80]_.
It works just like `jitcode`, except that it generates and integrates additional differential equations for the tangent vectors.
After every call of `integrate`, the tangent vectors are orthonormalised, and the “local” Lyapunov exponents for this integration step are returned alongside with the system’s state.
:class:`~_jitcode.jitcode_lyap` is a simple extension of :class:`~_jitcode.jitcode` that almost automatically handles calculating Lyapunov exponents by evolving tangent vectors [BGGS80]_.
It works just like :class:`~_jitcode.jitcode`, except that it generates and integrates additional differential equations for the tangent vectors.
After every call of :meth:`~_jitcode.jitcode_lyap.integrate`, the tangent vectors are orthonormalised, and the “local” Lyapunov exponents for this integration step are returned alongside the system’s state.
These can then be further processed to obtain the Lyapunov exponents.
The tangent vectors are initialised with random vectors, and you have to take care of the preiterations that the tangent vectors require to align themselves.
You also have to take care not to `integrate` for too long to avoid a numerical overflow in the tangent vectors.
You also have to take care not to :meth:`~_jitcode.jitcode_lyap.integrate` for too long to avoid a numerical overflow in the tangent vectors.

Estimates for the Lyapunov vectors are returned as well.
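
A hedged sketch of the typical processing loop, using the Rössler system as a stand-in (the unpacking of :meth:`~_jitcode.jitcode_lyap.integrate`’s return value into state, local exponents, and tangent vectors follows the reference; the number of steps and discarded preiterations are arbitrary):

.. code-block:: python

	import numpy as np
	from jitcode import jitcode_lyap, y

	a, b, c = 0.165, 0.2, 10.0
	roessler = [
			-y(1) - y(2),
			 y(0) + a*y(1),
			 b + y(2)*( y(0) - c ),
		]

	ODE = jitcode_lyap(roessler, n_lyap=3)
	ODE.set_integrator("dopri5")
	ODE.set_initial_value([1.0, 2.0, 3.0], 0.0)

	local_lyaps = []
	for time in range(10, 100000, 10):
		state, lyaps, vectors = ODE.integrate(time)
		local_lyaps.append(lyaps)

	# discard the preiterations and average the local exponents:
	print(np.average(local_lyaps[1000:], axis=0))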

.. automodule:: double_fhn_lyapunov

.. literalinclude:: ../examples/double_fhn_lyapunov.py
:emphasize-lines: 20-21,26,28-36
:emphasize-lines: 22-23,27,29-37
:start-after: example-start
:dedent: 1
:linenos:
@@ -124,7 +126,7 @@
Calculating transversal Lyapunov exponents with `jitcode_transversal_lyap`
--------------------------------------------------------------------------

`jitcode_transversal_lyap` is a variant of `jitcode_lyap` that calculates Lyapunov exponents transversal to a user-defined synchronisation manifold.
:class:`~_jitcode.jitcode_transversal_lyap` is a variant of :class:`~_jitcode.jitcode_lyap` that calculates Lyapunov exponents transversal to a user-defined synchronisation manifold.
It automatically conflates the differential equations of a group of synchronised components into one equation on the synchronisation manifold.
Moreover, it transforms the equations for the tangent vectors such that the tangent vector is automatically orthogonal to the synchronisation manifold.
If you are interested in the mathematical details, please read `the accompanying paper`_.
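
A hedged sketch of how the synchronisation groups are passed, using two identical, diffusively coupled two-dimensional oscillators as a placeholder system (the ``groups`` argument follows the reference; note that the initial state has one entry per group, not per component):

.. code-block:: python

	from jitcode import jitcode_transversal_lyap, y

	a, k = -0.1, 0.2
	f = [
			 y(1) + k*( y(2) - y(0) ),
			a*y(0),
			 y(3) + k*( y(0) - y(2) ),
			a*y(2),
		]

	# components 0 and 2 form one synchronisation group, components 1 and 3 the other:
	ODE = jitcode_transversal_lyap(f, groups=[ (0,2), (1,3) ])
	ODE.set_integrator("dopri5")
	ODE.set_initial_value([1.0, 0.0], 0.0)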
12 changes: 6 additions & 6 deletions examples/SW_of_Roesslers.py
@@ -12,25 +12,25 @@
\\dot{z}_i &= b + z_i (x_i -c) + k \\sum_{j=0}^N (z_j-z_i)
\\end{alignedat}

The control parameters shall be :math:`a = 0.165`, :math:`b = 0.2`, :math:`c = 10.0`, and :math:`k = 0.01`. The (frequency) parameter :math:`ω_i` shall be picked randomly from the uniform distribution on :math:`[0.8,1.0]` for each :math:`i`. :math:`A∈ℝ^{N×N}` shall be the adjacency matrix of a one-dimensional small-world network (which shall be provided by a function `small_world_network` in the following example code). So, the :math:`x` components are coupled diffusively with a small-world coupling topology, while the :math:`z` components are coupled diffusively to their mean field, with the coupling term being modulated with :math:`\\sin(t)`.
The control parameters shall be :math:`a = 0.165`, :math:`b = 0.2`, :math:`c = 10.0`, and :math:`k = 0.01`. The (frequency) parameter :math:`ω_i` shall be picked randomly from the uniform distribution on :math:`[0.8,1.0]` for each :math:`i`. :math:`A∈ℝ^{N×N}` shall be the adjacency matrix of a one-dimensional small-world network (which shall be provided by a function :any:`small_world_network` in the following example code). So, the :math:`x` components are coupled diffusively with a small-world coupling topology, while the :math:`z` components are coupled diffusively to their mean field, with the coupling term being modulated with :math:`\\sin(t)`.
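
If you just want something to experiment with, a small-world adjacency matrix can be sketched with NetworkX as follows. This is an illustrative stand-in only and not the ``small_world_network`` from the example code:

.. code-block:: python

	import networkx

	def small_world_network(number_of_nodes, nearest_neighbours, rewiring_probability):
		# Watts–Strogatz graph, converted to a dense adjacency matrix
		graph = networkx.watts_strogatz_graph(
				number_of_nodes,
				nearest_neighbours,
				rewiring_probability,
			)
		return networkx.to_numpy_array(graph, dtype=int)

	A = small_world_network(100, 4, 0.1)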

Without further ado, here is the example code (`complete running example <https://raw.githubusercontent.com/neurophysik/jitcode/master/examples/SW_of_Roesslers.py>`_); highlighted lines will be commented upon below:

.. literalinclude:: ../examples/SW_of_Roesslers.py
:linenos:
:dedent: 1
:lines: 63-
:emphasize-lines: 9, 25-26, 34, 31, 43
:emphasize-lines: 10, 26-27, 35, 32, 44

Explanation of selected features and choices:

* The values of :math:`ω` are initialised globally (line 9). We cannot just define a function here, because the parameter is used twice for each oscillator. Moreover, if we were trying to calculate Lyapunov exponents or the Jacobian, the generator function would be called multiple times, and thus the value of the parameter would not be consistent (which would be desastrous).
* The values of :math:`ω` are initialised globally (line 10). We cannot just define a function here, because the parameter is used twice for each oscillator. Moreover, if we were trying to calculate Lyapunov exponents or the Jacobian, the generator function would be called multiple times, and thus the value of the parameter would not be consistent (which would be disastrous).

* Since we need :math:`\\sum_{j=0}^N x_j` to calculate the derivative of :math:`z` for every oscillator, it is prudent to only calculate this once. Therefore we define a helper symbol for this in lines 25 and 26, which we employ in line 34. (See the arguments of `jitcode` for details.)
* Since we need :math:`\\sum_{j=0}^N x_j` to calculate the derivative of :math:`z` for every oscillator, it is prudent to only calculate this once. Therefore we define a helper symbol for this in lines 26 and 27, which we employ in line 35. (See the arguments of `jitcode` for details.)

* In line 31, we implement :math:`\\sin(t)`. For this purpose we had to import `t` in line 1. Also, we need to use `symengine.sin` – in contrast to `math.sin` or `numpy.sin`.
* In line 32, we implement :math:`\\sin(t)`. For this purpose we had to import `t` in line 3. Also, we need to use `symengine.sin` – in contrast to :func:`math.sin` or :data:`numpy.sin`.

* As this is a large system, we use a generator function instead of a list to define :math:`f` (lines 28-35) and have the code automatically be split into chunks of 150 lines, corresponding to the equations of fifty oscillators, in line 43. (For this system, any reasonably sized multiple of 3 is a good choice for `chunk_size`; for other systems, the precise choice of the value may be more relevant.) See `large_systems` for the rationale.
* As this is a large system, we use a generator function instead of a list to define :math:`f` (lines 29-36) and have the code automatically be split into chunks of 150 lines, corresponding to the equations of fifty oscillators, in line 44. (For this system, any reasonably sized multiple of 3 is a good choice for `chunk_size`; for other systems, the precise choice of the value may be more relevant.) See `large_systems` for the rationale.
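
To make the two features from the bullets above easier to see in isolation, here is a hedged sketch with simplified dynamics (dropping the small-world coupling of the :math:`x` components and the heterogeneous frequencies for brevity); the ``helpers``, ``n``, and ``chunk_size`` keywords follow JiTCODE’s reference:

.. code-block:: python

	from symengine import Symbol, sin
	from jitcode import jitcode, y, t

	N = 50
	a, b, c, k = 0.165, 0.2, 10.0, 0.01

	mean_z = Symbol("mean_z")
	helpers = [ ( mean_z, sum( y(3*j+2) for j in range(N) )/N ) ]

	def f():
		for i in range(N):
			x_i, y_i, z_i = y(3*i), y(3*i+1), y(3*i+2)
			yield -y_i - z_i
			yield  x_i + a*y_i
			# mean-field coupling via the helper symbol, modulated with the explicit time t:
			yield  b + z_i*(x_i-c) + N*k*sin(t)*(mean_z - z_i)

	ODE = jitcode(f, helpers=helpers, n=3*N)
	ODE.generate_f_C(chunk_size=150)   # 150 lines of generated code, i.e. fifty oscillators, per chunk
	ODE.set_integrator("dopri5")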
"""

if __name__ == "__main__":
12 changes: 6 additions & 6 deletions examples/double_fhn_example.py
@@ -5,13 +5,13 @@

.. math::

f(y) = \\left(
\\begin{matrix}
y_0 ( a-y_0 ) ( y_0-1) - y_1 + k (y_2 - y_0) \\\\
b_1 y_0 - c y_1 \\\\
y_2 ( a-y_2 ) ( y_2-1 ) - y_3 + k (y_0 - y_2)\\\\
f(y) =
\begin{bmatrix}
y_0 ( a-y_0 ) ( y_0-1) - y_1 + k (y_2 - y_0) \\
b_1 y_0 - c y_1 \\
y_2 ( a-y_2 ) ( y_2-1 ) - y_3 + k (y_0 - y_2)\\
b_2 y_2 - c y_3
\\end{matrix} \\right),
\end{bmatrix},

and :math:`a = -0.025794`, :math:`b_1 = 0.0065`, :math:`b_2 = 0.0135`, :math:`c = 0.02`, and :math:`k = 0.128`.
Then the following code integrates the above for 100000 time units (after discarding 2000 time units of transients), with :math:`y(t=0) = (1,2,3,4)`, and writes the results to :code:`timeseries.dat`:
4 changes: 2 additions & 2 deletions examples/double_fhn_transversal_lyap.py
@@ -4,12 +4,12 @@
For instance, let’s interpret the system from `example` as two oscillators (which is what it is), one consisting of the first and second and one of the third and fourth component. Furthermore, let’s change the control parameters a bit to make the two oscillators identical. We can then calculate the transversal Lyapunov exponents to the synchronisation manifold as follows (important changes are highlighted):
.. literalinclude:: ../examples/double_fhn_transversal_lyap.py
:emphasize-lines: 6, 12, 14, 17, 19-20
:emphasize-lines: 7, 13, 15, 18, 20-22
:start-after: example-st\u0061rt
:dedent: 1
:linenos:
Note that the initial state (line 17) is reduced in dimensionality and there is only one component for each synchronisation group.
Note that the initial state (line 18) is reduced in dimensionality and there is only one component for each synchronisation group.
"""

if __name__ == "__main__":
24 changes: 12 additions & 12 deletions examples/lotka_volterra.py
@@ -16,21 +16,21 @@

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 1-2
:lines: 1-3
:dedent: 1

… and defining the control parameters:

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 4-7
:lines: 5-8
:dedent: 1

The `y` that we imported from `jitcode` has to be used for the dynamical variables. However, to make our code use the same notation as the above equation, we can rename them:

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 9
:lines: 10
:dedent: 1

We here might as well have written `R,B = y(0),y(1)`, but the above is more handy for larger systems. Note that the following code is written such that the order of our dynamical variables does not matter.
@@ -39,7 +39,7 @@

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 11-14
:lines: 12-15
:dedent: 1

Note that there are other ways to define the derivative, e.g., used in the previous and following example.
@@ -48,35 +48,35 @@

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 16
:lines: 17
:dedent: 1

Before we start integrating, we have to choose an integrator and define initial conditions. We here choose the 5th-order Dormand–Prince method. Moreover the initial :math:`B` shall be 0.5, the initial :math:`R` shall be 0.2, and the starting time should be :math:`t=0`.

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 17-19
:lines: 18-20
:dedent: 1

We then define an array of time points where we want to observe the system, and a dictionary of empty lists that shall be filled with our results.

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 21-22
:lines: 22-23
:dedent: 1

Finally, we perform the actual integration by calling `ODE.integrate` for each of our `times`. After each call, we use `ODE.y_dict` to access the current state as a dictionary and append its values to the respective lists.

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 23-26
:lines: 24-27
:dedent: 1
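
For orientation, a hedged sketch of what this loop looks like with the names introduced above (the actual lines live in ``examples/lotka_volterra.py``; the time points here are placeholders, and ``y_dict`` maps each dynamical variable to its current value):

.. code-block:: python

	import numpy as np

	times = np.arange(0.1, 100, 0.1)
	results = { dynvar: [] for dynvar in dynvars }

	for time in times:
		ODE.integrate(time)
		for dynvar in dynvars:
			results[dynvar].append( ODE.y_dict[dynvar] )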

We can now plot or analyse our data, but that’s beyond the scope of JiTCODE. So we just save it:

.. literalinclude:: ../examples/lotka_volterra.py
:start-after: example-st\u0061rt
:lines: 28
:lines: 29
:dedent: 1

Taking everything together, our code is:
@@ -101,9 +101,9 @@
R,B = dynvars = [ y(i) for i in range(2) ]

lotka_volterra_diff = {
B: γ*B - φ*R*B,
R: -ω*R + ν*R*B,
}
B: γ*B - φ*R*B,
R: -ω*R + ν*R*B,
}

ODE = jitcode(lotka_volterra_diff)
ODE.set_integrator("dopri5")