ensemble
The EnKF and other ensemble-based methods.
Modules:

Name | Description |
---|---|
multiproc | Parallelisation via multiprocessing; limits the number of CPUs used. |
EnKF
EnKF_N
Finite-size EnKF (EnKF-N).
Refs: bocquet2011, bocquet2015
This implementation is pedagogical, prioritizing the "dual" form. As a consequence, the efficiency of the "primal" form suffers a bit. The primal form is included for completeness and to demonstrate equivalence. In `da_methods.variational.iEnKS`, however, the primal form is preferred because it already does optimization for w (as a treatment for nonlinear models).
- `infl` should be unnecessary (assuming no model error, or that Q is correct).
- `Hess`: use the non-approximated Hessian for the ensemble transform matrix?
- `g` is the nullity of A (the state anomalies), i.e. g = max(1, N - Nx), compensating for the redundancy in the space of w. But we have made it an input argument instead, with default 0, because mode-finding (of p(x) via the dual) completely ignores this redundancy, and the mode gets (undesirably) modified by g.
- `xN` allows tuning the hyper-prior for the inflation. Usually, I just try setting it to 1 (default), or 2. Further description in `hyperprior_coeffs()`.
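For orientation, a hedged usage sketch in DAPPER's experiment style is given below. The import alias, the `HMM, xx, yy` objects, and the exact keyword set are assumptions of this illustration, not a verbatim recipe.

```python
# Hypothetical usage sketch (names and interface assumed, not guaranteed):
import dapper.da_methods as da

xp = da.EnKF_N(N=20)          # default hyper-prior tuning (xN=1)
# xp = da.EnKF_N(N=20, xN=2)  # more confident hyper-prior, for near-deterministic systems

# Assuming a twin experiment (HMM, xx, yy) has already been generated:
# xp.assimilate(HMM, xx, yy)
```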
EnKS
The ensemble Kalman smoother.
Refs: evensen2009a
The only difference from the EnKF is the management of the lag and the reshapings.
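To illustrate "the management of the lag", here is a minimal sketch of how a square-root smoother can re-use the filter's ensemble-space weights on lagged states. The storage layout (`E_hist` as time × member × state) and the name `W` are assumptions of this sketch, not the module's actual variables.

```python
import numpy as np

def smooth_lagged_states(E_hist, W, ko, lag):
    """Re-weight the ensembles within the lag window by the analysis weights W.

    E_hist : (nTime, N, Nx) array of stored ensembles (layout assumed).
    W      : (N, N) ensemble-space weight matrix from the analysis at step ko,
             i.e. such that the filter update reads E_a = mean_f + W @ A_f.
    """
    for k in range(max(0, ko - lag), ko + 1):
        mu = E_hist[k].mean(axis=0)   # ensemble mean at time k
        A = E_hist[k] - mu            # anomalies at time k
        E_hist[k] = mu + W @ A        # same weights applied to the lagged state
    return E_hist
```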
EnRTS
LETKF
Same as EnKF (Sqrt), but with localization.

Refs: hunt2007

NB: Multiprocessing yields a slow-down for `mods.Lorenz96`, even with `batch_size=(1,)`. But for `mods.QG` (`batch_size=(2,2)` or less) it is quicker.

NB: If `len(ii)` is small, the analysis may be slowed down with '-N' inflation.
SL_EAKF
Serial, covariance-localized EAKF.

Refs: karspeck2007

In contrast with the LETKF, this iterates over the observations rather than over the state (batches).

Used without localization, this should be equivalent (full ensemble equality) to the EnKF with `upd_a='Serial'`.
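As an illustration of the serial (observation-by-observation) idea, here is a self-contained sketch of one scalar-observation EAKF update with optional covariance tapering. The function name, argument layout, and the `taper` vector are assumptions of this sketch, not the module's API.

```python
import numpy as np

def serial_eakf_scalar_update(E, hE, y, r, taper=None):
    """One scalar-observation EAKF update -- an illustrative sketch only.

    E     : (N, Nx) state ensemble (rows = members)
    hE    : (N,)    the observed quantity for each member
    y, r  : observation value and its error variance
    taper : optional (Nx,) localization weights for the regression
    """
    N = len(E)
    A = E - E.mean(axis=0)                     # state anomalies
    yb = hE.mean()
    dY = hE - yb                               # obs-space anomalies
    s2 = dY @ dY / (N - 1)                     # prior obs-space variance

    p = s2 * r / (s2 + r)                      # posterior obs-space variance
    ya = p * (yb / s2 + y / r)                 # posterior obs-space mean
    dy = ya + np.sqrt(p / s2) * dY - hE        # EAKF increments in obs space

    gain = (A.T @ dY) / ((N - 1) * s2)         # regression of state on obs
    if taper is not None:
        gain = gain * taper                    # covariance localization
    return E + np.outer(dy, gain)
```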
ens_method
Declare default ensemble arguments.
EnKF_analysis(E, Eo, hnoise, y, upd_a, stats=None, ko=None)
Perform the EnKF analysis update.

This implementation includes several flavours and forms, specified by `upd_a`.

Main references: sakov2008b, sakov2008a, hoteit2015a
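For concreteness, here is a self-contained sketch of the symmetric square-root ('Sqrt', i.e. ETKF) flavour. It ignores `hnoise`, `stats`, and `ko`, assumes a dense observation error covariance `R`, and is not the module's actual implementation.

```python
import numpy as np

def etkf_analysis(E, Eo, y, R):
    """Symmetric square-root (ETKF) analysis -- a self-contained sketch only.

    E  : (N, Nx) state ensemble (rows = members)
    Eo : (N, Ny) observed ensemble
    y  : (Ny,)   observation
    R  : (Ny, Ny) observation error covariance (assumed dense here)
    """
    N = len(E)
    mu = E.mean(axis=0)
    A = E - mu                                   # state anomalies
    Y = Eo - Eo.mean(axis=0)                     # obs-space anomalies
    dy = y - Eo.mean(axis=0)                     # innovation

    Ri = np.linalg.inv(R)
    C = (N - 1) * np.eye(N) + Y @ Ri @ Y.T       # ensemble-space precision
    lam, U = np.linalg.eigh(C)
    Pw = (U / lam) @ U.T                         # C^{-1}: analysis cov in w-space
    w = Pw @ (Y @ Ri @ dy)                       # weights for the mean update
    T = (U * np.sqrt((N - 1) / lam)) @ U.T       # symmetric sqrt of (N-1) C^{-1}

    return mu + w @ A + T @ A                    # analysis ensemble
```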
Newton_m(fun, deriv, x0, is_inverted=False, conf=1.0, xtol=0.0001, ytol=1e-07, itermax=10 ** 2)
Find the root of `fun`.
This is a simple (and pretty fast) implementation of Newton's method.
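A minimal sketch of such a scalar Newton iteration follows; the handling of `is_inverted` and `conf` in the actual routine is not reproduced here.

```python
def newton_root(fun, deriv, x0, xtol=1e-4, ytol=1e-7, itermax=100):
    """Scalar Newton iteration -- a minimal sketch, not the library routine."""
    x = x0
    for _ in range(itermax):
        fx = fun(x)
        if abs(fx) < ytol:        # converged in function value
            return x
        dx = fx / deriv(x)        # Newton step
        x = x - dx
        if abs(dx) < xtol:        # converged in argument
            return x
    return x
```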
add_noise(E, dt, noise, method)
Treatment of additive noise for ensembles.
Refs: raanes2014
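As an illustration only, the simplest (stochastic) treatment perturbs each member with a random draw of the model noise. The dense covariance argument `Q` and the sqrt(dt) scaling convention are assumptions of this sketch; the library's `add_noise` also offers other treatments (see raanes2014).

```python
import numpy as np

def add_noise_stoch(E, dt, Q, rng=np.random.default_rng(0)):
    """Stochastic treatment of additive model noise -- a sketch only.

    E : (N, Nx) ensemble;  Q : (Nx, Nx) model noise covariance;  dt : time step.
    """
    N, Nx = E.shape
    xi = rng.multivariate_normal(np.zeros(Nx), Q, size=N)  # N independent draws
    return E + np.sqrt(dt) * xi
```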
effective_N(YR, dyR, xN, g)
Effective ensemble size N, as measured by the finite-size EnKF-N.
hyperprior_coeffs(s, N, xN=1, g=0)
Set EnKF-N inflation hyperparams.
The EnKF-N prior may be specified by the constants:

- `eN`: Effect of unknown mean
- `cL`: Coefficient in front of the log term

These are trivial constants in the original EnKF-N, but are further adjusted (corrected and tuned) for the following reasons.

- Reason 1: mode correction. These parameters bridge the Jeffreys (`xN=1`) and Dirac (`xN=Inf`) hyperpriors for the prior covariance, B, as discussed in bocquet2015. Indeed, mode correction becomes necessary when $$ R \rightarrow \infty $$, because then there should be no ensemble update (and also no inflation!). More specifically, the mode of `l1`'s should be adjusted towards 1 as a function of $$ I - K H $$ ("prior's weight"). PS: why do we leave the prior mode below 1 at all? Because it sets up "tension" (negative feedback) in the inflation cycle: the prior pulls downwards, while the likelihood tends to pull upwards.
- Reason 2: Boosting the inflation prior's certainty from N to xN*N. The aim is to take advantage of the fact that the ensemble may not have quite as much sampling error as a fully stochastic sample, as illustrated in section 2.1 of raanes2019a. Its damping effect is similar to work done by J. Anderson. A schematic sketch of this boosting is given after the tuning list below.

The tuning is controlled by:

- `xN=1`: fully agnostic, i.e. assumes the ensemble is generated from a highly chaotic or stochastic model.
- `xN>1`: increases the certainty of the hyper-prior, which is appropriate for more linear and deterministic systems.
- `xN<1`: yields a more (than 'fully') agnostic hyper-prior, as if N were smaller than it truly is.
- `xN<=0` is not meaningful.
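Purely to make Reason 2 concrete (and not as a transcription of `hyperprior_coeffs`), one plausible realization of the boosting is to scale the coefficients as if the hyper-prior were informed by xN*N rather than N members. The mode correction of Reason 1 is omitted, and the scaling itself is an assumption of this sketch.

```python
def boost_hyperprior(eN, cL, xN=1.0):
    """Illustrative only: scale the hyper-prior coefficients by xN,
    mimicking a hyper-prior informed by xN*N (rather than N) members.
    This is an assumption of the sketch, not the library's exact formula."""
    return xN * eN, xN * cL
```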
local_analyses(E, Eo, R, y, state_batches, obs_taperer, mp=map, xN=None, g=0)
Perform local analysis update for the LETKF.
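A hedged sketch of the batching logic follows; the real function also handles the EnKF-N options via `xN` and `g`, and the taper handling and diagonal-R assumption below are assumptions of this illustration.

```python
import numpy as np

def local_analyses_sketch(E, Eo, r_diag, y, state_batches, obs_taperer,
                          analysis_fn, mp=map):
    """Sketch of the LETKF batching logic (not the module's implementation).

    r_diag      : (Ny,) diagonal of the obs error covariance (assumed diagonal).
    obs_taperer : assumed to map a state batch to (ii, coeffs): the indices of
                  nearby observations and their (nonzero) localization weights.
    analysis_fn : any ensemble analysis, e.g. the ETKF sketch given above.
    """
    E = E.copy()

    def update_batch(batch):
        ii, coeffs = obs_taperer(batch)
        if len(ii) == 0:
            return E[:, batch]                    # no local obs: batch unchanged
        # Localize by inflating the error variance of distant observations:
        R_loc = np.diag(r_diag[ii] / coeffs)
        return analysis_fn(E[:, batch], Eo[:, ii], y[ii], R_loc)

    # Batches are disjoint and independent, hence amenable to `mp`:
    results = list(mp(update_batch, state_batches))
    for batch, E_batch in zip(state_batches, results):
        E[:, batch] = E_batch
    return E
```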
post_process(E, infl, rot)
Inflate, Rotate.
To avoid recomputing/recombining anomalies, this should have been inside `EnKF_analysis`. But it is kept as a separate function:

- for readability;
- to avoid inflating/rotating smoothed states (for the `EnKS`).
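A self-contained sketch of the two operations follows. The mean-preserving construction of the random rotation is one standard way of doing it and is an assumption of this sketch rather than a transcription of the module.

```python
import numpy as np

def inflate_and_rotate(E, infl=1.0, rot=False, rng=np.random.default_rng(0)):
    """Inflate the anomalies and (optionally) rotate them among the members."""
    N = len(E)
    mu = E.mean(axis=0)
    A = infl * (E - mu)                      # multiplicative inflation
    if rot:
        # Random orthogonal matrix that leaves the ones-vector (the mean) alone:
        M = np.column_stack([np.ones(N), rng.standard_normal((N, N - 1))])
        Q = np.linalg.qr(M)[0]               # first column spans the ones direction
        V = Q[:, 1:]                         # basis orthogonal to the ones-vector
        G = np.linalg.qr(rng.standard_normal((N - 1, N - 1)))[0]
        T = V @ G @ V.T + np.full((N, N), 1 / N)
        A = T @ A                            # preserves mean and covariance
    return mu + A
```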
serial_inds(upd_a, y, cvR, A)
Get the indices used for serial updating.
- Default: random ordering
- If "mono" in `upd_a`: 1, 2, ..., len(y)
- If "sorted" in `upd_a`: sort by variance
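A sketch of the three orderings; the per-observation `variances` argument used for the "sorted" option is an assumption of this illustration, standing in for whatever the actual function derives from `cvR` and `A`.

```python
import numpy as np

def serial_ordering(upd_a, y, variances, rng=np.random.default_rng()):
    """Sketch of the three serial orderings (not the module's implementation)."""
    m = len(y)
    if "mono" in upd_a:
        return np.arange(m)           # 1, 2, ..., len(y)
    if "sorted" in upd_a:
        return np.argsort(variances)  # sort by variance
    return rng.permutation(m)         # default: random ordering
```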
zeta_a(eN, cL, w)
EnKF-N inflation estimation via w.
Returns `zeta_a = (N-1)/pre-inflation^2`.

Using this inside an iterative minimization, as in `da_methods.variational.iEnKS`, effectively blends the distinction between the primal and dual EnKF-N.
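In code, the stated relation is simply the following; how the pre-inflation factor is obtained from `eN`, `cL`, and `w` inside the actual function is not reproduced here.

```python
def zeta_from_inflation(N, l1):
    """Direct transcription of the stated relation: zeta_a = (N-1) / l1**2,
    where l1 denotes the pre-inflation factor."""
    return (N - 1) / l1**2
```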