# Lorenz05
A multi-scale, smooth version of the classic Lorenz-96.

This is an implementation of "Model III" of lorenz2005a. Similar to `mods.LorenzUV`, this model is designed to contain two different scales. However, in "Model III" the two scales are not kept separate, but superimposed, and the large-scale variables are (adjustably) spatially smooth.
Interestingly, the model is known as "Lorenz 04" in DART, where it was coded by Hansen (a colleague of Lorenz) in 2004, prior to publication.
Special cases of this model:

- Set `J=1` to get "Model II".
- Set `K=1` (and `J=1`) to get "Model I", which is the same as the Lorenz-96 model.
An implementation using explicit for-loops can be found in commit 6193532b. It uses numba (pip install required) for a speed gain, but is still very slow. The implementation below instead uses efficient numpy vectorization, and is much faster.
With `rk4`, the largest stable time step (for a free run) seems to be somewhere around what Lorenz used, namely `dt=0.05/12`.
Modules:

| Name | Description |
| --- | --- |
| `demo` | Demonstrate the Lorenz-05 model. |
## Model

The model configuration.
Functionality that does not directly depend on the model parameters has been left outside of this class.
Using OOP (rather than module-level encapsulation) facilitates working with multiple parameter settings simultaneously.
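For instance (a purely hypothetical usage sketch: the parameter names `J` and `K` follow the paper, but the actual constructor signature may differ):

```python
# Hypothetical sketch of holding several parameter settings at once:
model_III = Model()     # defaults: "Model III"
model_II = Model(J=1)   # special case: "Model II"
```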
## decompose(z)

Split `z` into `x` and `y` fields, where `x` is the large-scale component.
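A sketch of what the split might look like, following eqn 13 of lorenz2005a. Here the smoothing half-width (`I` in the paper) is passed explicitly, whereas the module presumably takes it from the `Model` parameters:

```python
import numpy as np

def decompose_sketch(z, I):
    """Split z into x (large scale) and y = z - x, per eqn 13 of
    lorenz2005a: x_n = Sum'_{i=-I..I} (alpha - beta*|i|) z_{n+i}."""
    alpha = (3 * I**2 + 3) / (2 * I**3 + 4 * I)
    beta = (2 * I**2 + 1) / (I**4 + 2 * I**2)
    taps = alpha - beta * np.abs(np.arange(-I, I + 1))
    taps[[0, -1]] *= 0.5                      # modified sum: halve the end terms
    zp = np.pad(z, I, mode='wrap')            # periodic domain
    x = np.convolve(zp, taps, mode='valid')   # kernel is symmetric
    return x, z - x
```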
## boxcar(x, n, method='direct')

Moving average (boxcar filter) on `x` using the `n` (periodically) nearest elements.

For symmetry, if `n` is even, the actual number of elements used is `n+1`, and the outer elements are weighted by 0.5 to compensate for the `+1`.
This is the modified sum of lorenz2005a, used e.g. in eqn. 9. For intuition: this type of summation (and the associated Σ′ notation) may also be found in the trapezoidal rule and in the inverse DFT used in spectral methods on a periodic domain. Apart from this weighting, this is merely a boxcar filter.
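Concretely, for a window of half-width `J`, the modified sum halves the outermost terms:

$$
{\sum_{i=-J}^{J}}{}' \, a_i \;=\; \tfrac{1}{2} a_{-J} + a_{-J+1} + \dots + a_{J-1} + \tfrac{1}{2} a_{J} .
$$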
There are of course several well-known implementations. The computational shortcut suggested by Lorenz below eqn 10 could perhaps be implemented (with vectorisation) using `cumsum`, but this seems tricky due to the weighting and the periodicity.
In testing with default system parameters and ensemble size N=50, the "direct" method is generally 2x faster than the "fft" method, and the "oa" method is slightly slower still. If `K` or `J` is increased, then the "fft" method becomes the fastest.
Examples:

```python
>>> x = np.array([0, 1, 2], dtype=float)
>>> np.allclose(boxcar(x, 1), x)
True
>>> boxcar(x, 2)
array([0.75, 1.  , 1.25])
>>> boxcar(x, 3)
array([1., 1., 1.])
>>> x = np.arange(10, dtype=float)
>>> boxcar(x, 2)
array([2.5, 1. , 2. , 3. , 4. , 5. , 6. , 7. , 8. , 6.5])
>>> boxcar(x, 5)
array([4., 3., 2., 3., 4., 5., 6., 7., 6., 5.])
```
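For reference, here is a minimal sketch of the "direct" approach: a circular convolution with the modified-sum kernel. It is not necessarily the module's exact implementation, but it reproduces the doctests above (1-D input only):

```python
import numpy as np

def boxcar_sketch(x, n):
    """Boxcar average over the `n` (periodically) nearest elements,
    widening to `n+1` half-weighted-end taps when `n` is even."""
    if n % 2:
        taps = np.ones(n)            # odd n: plain centered boxcar
    else:
        taps = np.ones(n + 1)        # even n: widen by one element ...
        taps[[0, -1]] = 0.5          # ... and halve the outer two taps
    r = len(taps) // 2
    xp = np.pad(x, r, mode='wrap')   # periodic (circular) padding
    # The kernel is symmetric, so convolution == correlation here.
    return np.convolve(xp, taps, mode='valid') / n
```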
## prodsum_K1(x, y)

Compute `prodsum(x, y, 1)` efficiently.
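For `K=1`, the bracket of lorenz2005a reduces to the familiar Lorenz-96 advection term. A minimal sketch (using `np.roll` in place of the module's `shift`):

```python
import numpy as np

def prodsum_K1_sketch(x, y):
    """[x, y]_{1,n} = -x_{n-2} y_{n-1} + x_{n-1} y_{n+1}."""
    s = lambda v, k: np.roll(v, -k, axis=-1)  # s(v, k)[i] = v[i+k]
    return -s(x, -2) * s(y, -1) + s(x, -1) * s(y, 1)
```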
## prodsum_self(x, k)

Compute `prodsum(x, x, k)` efficiently, via eqn 10 of lorenz2005a.
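For reference, eqn 10 expresses the bracket via the boxcar average `W = boxcar(x, K)`: `[x, x]_{K,n} = -W_{n-2K} W_{n-K} + Σ'_j W_{n-K+j} x_{n+K+j} / K`. A sketch of a vectorized evaluation, assuming `boxcar` and `shift` from this module (the last term is itself a shifted boxcar of `W_{n-2K} x_n`, which is what makes eqn 10 cheap):

```python
def prodsum_self_sketch(x, K):
    """Sketch of eqn 10 of lorenz2005a for [x, x]_{K,n}."""
    W = boxcar(x, K)
    W2K = shift(W, -2 * K)  # W_{n-2K}
    # Sum' over j of W_{n-K+j} x_{n+K+j} / K == boxcar(W2K*x, K) at n+K:
    return -W2K * shift(W, -K) + shift(boxcar(W2K * x, K), K)
```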
## shift(x, k)

Rolls `x` leftwards, i.e. `output[i] = input[i+k]` (see the example below the notes).

Notes about speed that usually hold when testing with ensemble DA:

- This implementation is somewhat faster than `x[..., np.mod(ii + k, M)]`.
- The computational savings of re-using already shifted vectors (or matrices), compared to just calling this function again, are negligible.
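For example (a quick check of the leftward convention, assuming the behaviour described above):

```python
>>> shift(np.arange(5), 2)
array([2, 3, 4, 0, 1])
```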
## summation_kernel(width)

Prepare computation of the modified sum in lorenz2005a.

Note: this gets called repeatedly, but the input is only ever `width = K` or `2*J`, so we should really cache or pre-compute. However, with default system parameters and N=50, the savings are negligible.
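One plausible form of this helper, sketched under the assumption that it returns the Σ′ taps described under `boxcar` (the actual return convention may differ):

```python
import numpy as np

def summation_kernel_sketch(width):
    """Taps of the modified sum Sum': `width` ones if `width` is odd,
    else `width + 1` ones with the two outermost halved."""
    if width % 2:
        taps = np.ones(width)
    else:
        taps = np.ones(width + 1)
        taps[[0, -1]] = 0.5
    return taps, len(taps) // 2  # taps, and their half-width
```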