Innovation method

In statistics, the Innovation Method provides an estimator for the parameters of stochastic differential equations given a time series of (potentially noisy) observations of the state variables. In the framework of continuous-discrete state space models, the innovation estimator is obtained by maximizing the log-likelihood of the corresponding discrete-time innovation process with respect to the parameters. The innovation estimator can be classified as an M-estimator, a quasi-maximum likelihood estimator, or a prediction error estimator, depending on the inferential considerations to be emphasized. The innovation method is a system identification technique for developing mathematical models of dynamical systems from measured data and for the optimal design of experiments.

Background

Stochastic differential equations (SDEs) have become an important mathematical tool for describing the time evolution of random phenomena in the natural, social, and applied sciences. Statistical inference for SDEs is thus of great importance in applications for model building, model selection, model identification, and forecasting. To carry out statistical inference for SDEs, measurements of the state variables of these random phenomena are indispensable. Usually, in practice, only a few state variables are measured, by physical devices that introduce random measurement errors (observational errors).

Mathematical model for inference

The innovation estimator for SDEs is defined in the framework of continuous-discrete state space models. These models arise as a natural mathematical representation of the temporal evolution of continuous random phenomena and of their measurements in a succession of time instants. In the simplest formulation, these continuous-discrete models are expressed in terms of an SDE of the form

$$ d\mathbf{x}(t) = \mathbf{f}(t, \mathbf{x}(t); \theta)\,dt + \sum_{i=1}^{m} \mathbf{g}_i(t, \mathbf{x}(t); \theta)\, d\mathbf{w}^i(t) \qquad (1) $$

describing the time evolution of the $d$ state variables $\mathbf{x}$ of the phenomenon for all time instants $t \geq t_0$, and an observation equation

$$ \mathbf{z}_{t_k} = \mathbf{C}\mathbf{x}(t_k) + \mathbf{e}_{t_k} \qquad (2) $$

describing the time series of measurements $\mathbf{z}_{t_0}, \ldots, \mathbf{z}_{t_{M-1}}$ of at least one of the variables $\mathbf{x}$ of the random phenomenon at $M$ time instants $t_0, \ldots, t_{M-1}$. In the model (1)-(2), $\mathbf{f}$ and $\mathbf{g}_i$ are differentiable functions, $\mathbf{w} = (\mathbf{w}^1, \ldots, \mathbf{w}^m)$ is an $m$-dimensional standard Wiener process, $\theta \in \mathbb{R}^p$ is a vector of $p$ parameters, $\{\mathbf{e}_{t_k} : \mathbf{e}_{t_k} \sim \mathrm{N}(0, \Pi_{t_k})\}_{k=0,\ldots,M-1}$ is a sequence of $r$-dimensional i.i.d. Gaussian random vectors independent of $\mathbf{w}$, $\Pi_{t_k}$ is an $r \times r$ positive definite matrix, and $\mathbf{C}$ is an $r \times d$ matrix.
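
As a concrete illustration of the model class (1)-(2), the following sketch simulates a scalar SDE with the Euler-Maruyama scheme and samples noisy observations at the observation times. The drift, diffusion, and parameter values here are assumptions chosen purely for illustration, not part of the method itself.

```python
# Minimal sketch: simulate a continuous-discrete state space model (1)-(2).
# Model choices (Ornstein-Uhlenbeck-type drift, scalar state, m = 1) are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def f(t, x, theta):    # drift f(t, x; theta)
    return -theta[0] * x

def g(t, x, theta):    # single diffusion coefficient g_1(t, x; theta)
    return theta[1]

theta0 = np.array([0.5, 0.2])          # "true" parameters theta_0
t_obs = np.arange(0.0, 10.0, 1.0)      # observation times t_0, ..., t_{M-1}
n_sub = 100                            # Euler sub-steps between observations
C, Pi = 1.0, 1e-4                      # observation matrix and noise variance

x, z = 1.0, []
for k, t in enumerate(t_obs):
    z.append(C * x + np.sqrt(Pi) * rng.standard_normal())   # equation (2)
    if k + 1 < len(t_obs):
        h = (t_obs[k + 1] - t) / n_sub
        for j in range(n_sub):                              # equation (1)
            tj = t + j * h
            x += f(tj, x, theta0) * h + g(tj, x, theta0) * np.sqrt(h) * rng.standard_normal()
z = np.array(z)
```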

Statistical problem to solve

Once the dynamics of a phenomenon is described by a state equation such as (1), and the measurement of the state variables is specified by an observation equation such as (2), the inference problem to solve is the following: given $M$ partial and noisy observations $\mathbf{z}_{t_0}, \ldots, \mathbf{z}_{t_{M-1}}$ of the stochastic process $\mathbf{x}$ on the observation times $t_0, \ldots, t_{M-1}$, estimate the unobserved state variables of $\mathbf{x}$ and the unknown parameters $\theta$ in (1) that best fit the given observations.

Discrete-time innovation process

Let $\{t\}_M$ be the sequence of $M$ observation times $t_0, \ldots, t_{M-1}$ of the states of (1), and $Z_\rho = \{\mathbf{z}_{t_k} : t_k \leq \rho,\ t_k \in \{t\}_M\}$ the time series of partial and noisy measurements of $\mathbf{x}$ described by the observation equation (2).

Further, let $\mathbf{x}_{t/\rho} = \mathrm{E}(\mathbf{x}(t) \mid Z_\rho)$ and $\mathbf{U}_{t/\rho} = \mathrm{E}(\mathbf{x}(t)\mathbf{x}^{\intercal}(t) \mid Z_\rho) - \mathbf{x}_{t/\rho}\mathbf{x}_{t/\rho}^{\intercal}$ be the conditional mean and variance of $\mathbf{x}$, with $\rho \leq t$, where $\mathrm{E}(\cdot)$ denotes the expected value of random vectors.

The random sequence $\{\nu_{t_k}\}_{k=1,\ldots,M-1}$, with

$$ \nu_{t_k} = \mathbf{z}_{t_k} - \mathbf{C}\mathbf{x}_{t_k/t_{k-1}}(\theta), \qquad (3) $$

defines the discrete-time innovation process, where $\nu_{t_k}$ is proved to be an independent, normally distributed random vector with zero mean and variance

$$ \Sigma_{t_k} = \mathbf{C}\mathbf{U}_{t_k/t_{k-1}}(\theta)\,\mathbf{C}^{\intercal} + \Pi_{t_k}, \qquad (4) $$

for small enough $\Delta = \max_k \{t_{k+1} - t_k\}$, with $t_k, t_{k+1} \in \{t\}_M$. In practice, this distribution for the discrete-time innovation is valid when, through a suitable selection of both the number $M$ of observations and the time distance $t_{k+1} - t_k$ between consecutive observations, the time series of observations $\mathbf{z}_{t_0}, \ldots, \mathbf{z}_{t_{M-1}}$ of the SDE contains the main information about the continuous-time process $\mathbf{x}$; that is, when the sampling of the continuous-time variables has low distortion (aliasing).

Innovation estimator

The innovation estimator for the parameters of the SDE (1) is the one that maximizes the likelihood function of the discrete-time innovation process $\{\nu_{t_k}\}_{k=1,\ldots,M-1}$ with respect to the parameters. More precisely, given $M$ measurements $Z_{t_{M-1}}$ of the state space model (1)-(2) with $\theta = \theta_0$ on $\{t\}_M$, the innovation estimator for the parameters $\theta_0$ of (1) is defined by

$$ \hat{\theta}_M = \arg\left\{\min_{\theta}\ U_M(\theta, Z_{t_{M-1}})\right\}, \qquad (5) $$

where

$$ U_M(\theta, Z_{t_{M-1}}) = (M-1)\ln(2\pi) + \sum_{k=1}^{M-1} \ln(\det(\Sigma_{t_k})) + \nu_{t_k}^{\intercal}\Sigma_{t_k}^{-1}\nu_{t_k}, $$

where $\nu_{t_k}$ is the discrete-time innovation (3) and $\Sigma_{t_k}$ the innovation variance (4) of the model (1)-(2) at $t_k$, for all $k = 1, \ldots, M-1$. In the above expression for $U_M(\theta, Z_{t_{M-1}})$, the conditional mean $\mathbf{x}_{t_k/t_{k-1}}(\theta)$ and variance $\mathbf{U}_{t_k/t_{k-1}}(\theta)$ are computed by the continuous-discrete filtering algorithm for the evolution of the moments (Section 6.4 in ).
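
For computational clarity, a minimal sketch of the criterion $U_M$ in (5) is given below, assuming the one-step prediction means and variances $\mathbf{x}_{t_k/t_{k-1}}(\theta)$ and $\mathbf{U}_{t_k/t_{k-1}}(\theta)$ have already been produced by some filtering algorithm; all names and the constant observation-noise matrix are illustrative assumptions.

```python
# Sketch of the criterion U_M in (5), given precomputed filter predictions.
import numpy as np

def innovation_criterion(z, x_pred, U_pred, C, Pi):
    """z[k]: observation at t_k; x_pred[k], U_pred[k]: one-step prediction
    mean and variance of x(t_k) given Z_{t_{k-1}}; Pi: observation noise
    variance (assumed constant here for simplicity)."""
    M = len(z)
    val = (M - 1) * np.log(2.0 * np.pi)
    for k in range(1, M):
        nu = z[k] - C @ x_pred[k]                 # discrete-time innovation (3)
        Sigma = C @ U_pred[k] @ C.T + Pi          # innovation variance (4)
        val += np.log(np.linalg.det(Sigma)) + nu @ np.linalg.solve(Sigma, nu)
    return val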

Differences with the maximum likelihood estimator

The maximum likelihood estimator of the parameters $\theta$ in the model (1)-(2) involves the evaluation of the usually unknown transition density function $p_\theta(t_{k+1} - t_k, \mathbf{x}(t_k), \mathbf{x}(t_{k+1}))$ between the states $\mathbf{x}(t_k)$ and $\mathbf{x}(t_{k+1})$ of the diffusion process $\mathbf{x}$ for all the observation times $t_k$ and $t_{k+1}$. Instead, the innovation estimator (5) is obtained by maximizing the likelihood of the discrete-time innovation process $\{\nu_{t_k}\}_{k=1,\ldots,M-1}$, taking into account that $\nu_{t_1}, \ldots, \nu_{t_{M-1}}$ are Gaussian and independent random vectors. Remarkably, whereas the transition density function $p_\theta(t_{k+1} - t_k, \mathbf{x}(t_k), \mathbf{x}(t_{k+1}))$ changes when the SDE for $\mathbf{x}$ does, the transition density function $\mathfrak{p}_\theta(t_{k+1} - t_k, \nu_{t_k}, \nu_{t_{k+1}})$ for the innovation process remains Gaussian independently of the SDE for $\mathbf{x}$. Only when the diffusion $\mathbf{x}$ is described by a linear SDE with additive noise is the density function $p_\theta(t_{k+1} - t_k, \mathbf{x}(t_k), \mathbf{x}(t_{k+1}))$ Gaussian and equal to $\mathfrak{p}_\theta(t_{k+1} - t_k, \nu_{t_k}, \nu_{t_{k+1}})$, in which case the maximum likelihood and innovation estimators coincide. Otherwise, the innovation estimator is an approximation to the maximum likelihood estimator and, in this sense, the innovation estimator is a quasi-maximum likelihood estimator. In addition, the innovation method is a particular instance of the prediction error method, according to the definition given by Ljung. Therefore, the asymptotic results obtained by Ljung and Caines for that general class of estimators are valid for the innovation estimators. Intuitively, following the typical control engineering viewpoint, the innovation process, viewed as a measure of the prediction errors of the fitted model, is expected to be approximately a white noise process when the model fits the data, which can be used as a practical tool for the design of models and for optimal experimental design.
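
In practice, the whiteness of the fitted innovations can be checked with a simple autocorrelation diagnostic. The following sketch, for a scalar innovation sequence, compares the sample autocorrelations with the approximate 95% band $\pm 1.96/\sqrt{n}$; it is an elementary stand-in for more formal portmanteau tests, and all names are illustrative.

```python
# Sketch of a whiteness check for the fitted innovation sequence.
import numpy as np

def innovation_acf(nu, max_lag=10):
    """Sample autocorrelations of the fitted innovations and the approximate
    95% confidence band; if the model fits, most values should lie inside."""
    nu = np.asarray(nu, dtype=float)
    nu = nu - nu.mean()
    n, c0 = len(nu), np.dot(nu, nu)
    acf = np.array([np.dot(nu[:n - l], nu[l:]) / c0 for l in range(1, max_lag + 1)])
    band = 1.96 / np.sqrt(n)
    return acf, band
```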

Properties

The innovation estimator (5) has a number of important attributes:

  • The $100(1-\alpha)\%$ confidence limits $\hat{\theta}_M \pm \triangle$ for the innovation estimator $\hat{\theta}_M$ are estimated with

$$ \triangle = t_{1-\alpha,\,M-p-1} \sqrt{\frac{\mathrm{diag}(\mathrm{Var}(\hat{\theta}_M))}{M-p}}, $$

where $t_{1-\alpha,\,M-p-1}$ is the value of the Student's t-distribution with $100(1-\alpha)\%$ significance level and $M-p-1$ degrees of freedom. Here, $\mathrm{Var}(\hat{\theta}_M) = (I(\hat{\theta}_M))^{-1}$ denotes the variance of the innovation estimator $\hat{\theta}_M$, where

$$ I(\hat{\theta}_M) = \sum_{k=1}^{M-1} I_k(\hat{\theta}_M) $$

is the Fisher information matrix of the innovation estimator $\hat{\theta}_M$ of $\theta_0$, and

$$ [I_k(\hat{\theta}_M)]_{m,n} = \frac{\partial \mu^{\intercal}}{\partial \theta_m}\Sigma^{-1}\frac{\partial \mu}{\partial \theta_n} + \frac{1}{2}\,\mathrm{trace}\!\left(\Sigma^{-1}\frac{\partial \Sigma}{\partial \theta_m}\Sigma^{-1}\frac{\partial \Sigma}{\partial \theta_n}\right) $$

is the entry $(m,n)$ of the matrix $I_k(\hat{\theta}_M)$, with $\mu = \mathbf{C}\mathbf{x}_{t_k/t_{k-1}}(\hat{\theta}_M)$ and $\Sigma = \Sigma_{t_k}(\hat{\theta}_M)$, for $1 \leq m, n \leq p$. A numerical sketch of these confidence limits is given after this list.

  • The distribution of the fitting-innovation process $\{\nu_{t_k} : \nu_{t_k} = \mathbf{z}_{t_k} - \mathbf{C}\mathbf{x}_{t_k/t_{k-1}}(\hat{\theta}_M)\}_{k=1,\ldots,M-1}$ measures the goodness of fit of the model to the data.
  • For a smooth enough function $\mathbf{h}$, nonlinear observation equations of the form

$$ \mathbf{z}_{t_k} = \mathbf{h}(t_k, \mathbf{x}(t_k)) + \mathbf{e}_{t_k}, \qquad (6) $$

can be transformed to the simpler form (2), and the innovation estimator (5) can be applied.
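
A numerical sketch of the confidence limits from the first bullet above, assuming the per-observation Fisher information matrices $I_k(\hat{\theta}_M)$ have already been computed (for instance, by numerically differentiating the filter outputs with respect to $\theta$). The quantile call mirrors the $t_{1-\alpha,\,M-p-1}$ factor in the text; all names are illustrative.

```python
# Sketch: confidence limits theta_hat +/- delta from the Fisher information.
import numpy as np
from scipy import stats

def confidence_limits(theta_hat, I_k_list, M, alpha=0.05):
    I = np.sum(I_k_list, axis=0)                 # total information, sum of I_k
    var = np.linalg.inv(I)                       # Var(theta_hat) = I^{-1}
    p = len(theta_hat)
    t = stats.t.ppf(1.0 - alpha, df=M - p - 1)   # t_{1-alpha, M-p-1}, as in the text
    delta = t * np.sqrt(np.diag(var) / (M - p))
    return theta_hat - delta, theta_hat + delta
```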

Approximate innovation estimators

In practice, closed-form expressions for computing $\mathbf{x}_{t_k/t_{k-1}}(\theta)$ and $\mathbf{U}_{t_k/t_{k-1}}(\theta)$ in (5) are available only for a few models (1)-(2). Therefore, approximate filtering algorithms such as the following are used in applications.

Given $M$ measurements $Z_{t_{M-1}}$ and the initial filter estimates $\mathbf{y}_{t_0/t_0} = \mathbf{x}_{t_0/t_0}$ and $\mathbf{V}_{t_0/t_0} = \mathbf{U}_{t_0/t_0}$, the approximate linear minimum variance (LMV) filter for the model (1)-(2) is iteratively defined at each observation time $t_k \in \{t\}_M$ by the prediction estimates

$$ \mathbf{y}_{t_{k+1}/t_k} = \mathrm{E}(\mathbf{y}(t_{k+1}) \mid Z_{t_k}) \quad \text{and} \quad \mathbf{V}_{t_{k+1}/t_k} = \mathrm{E}(\mathbf{y}(t_{k+1})\mathbf{y}^{\intercal}(t_{k+1}) \mid Z_{t_k}) - \mathbf{y}_{t_{k+1}/t_k}\mathbf{y}_{t_{k+1}/t_k}^{\intercal}, \qquad (7) $$

with initial conditions $\mathbf{y}_{t_k/t_k}$ and $\mathbf{V}_{t_k/t_k}$, and the filter estimates

$$ \mathbf{y}_{t_{k+1}/t_{k+1}} = \mathbf{y}_{t_{k+1}/t_k} + \mathbf{K}_{t_{k+1}}(\mathbf{z}_{t_{k+1}} - \mathbf{C}\mathbf{y}_{t_{k+1}/t_k}) \quad \text{and} \quad \mathbf{V}_{t_{k+1}/t_{k+1}} = \mathbf{V}_{t_{k+1}/t_k} - \mathbf{K}_{t_{k+1}}\mathbf{C}\mathbf{V}_{t_{k+1}/t_k} \qquad (8) $$

with filter gain

$$ \mathbf{K}_{t_{k+1}} = \mathbf{V}_{t_{k+1}/t_k}\mathbf{C}^{\intercal}(\mathbf{C}\mathbf{V}_{t_{k+1}/t_k}\mathbf{C}^{\intercal} + \Pi_{t_{k+1}})^{-1} $$

for all $t_k, t_{k+1} \in \{t\}_M$, where $\mathbf{y}$ is an approximation to the solution $\mathbf{x}$ of (1) at the observation times $\{t\}_M$.
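
A minimal sketch of the update stage (8) with the gain above; the prediction stage (7) is model-specific and is assumed here to be supplied by whichever approximation of the conditional moments is in use.

```python
# Sketch of one LMV filter update (8) at an observation time.
import numpy as np

def lmv_update(y_pred, V_pred, z, C, Pi):
    """Correct the predictions (7) with the observation z at t_{k+1}."""
    S = C @ V_pred @ C.T + Pi                    # innovation variance, as in (4)
    K = V_pred @ C.T @ np.linalg.inv(S)          # filter gain
    y_filt = y_pred + K @ (z - C @ y_pred)
    V_filt = V_pred - K @ C @ V_pred
    return y_filt, V_filt
```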

Given $M$ measurements $Z_{t_{M-1}}$ of the state space model (1)-(2) with $\theta = \theta_0$ on $\{t\}_M$, the approximate innovation estimator for the parameters $\theta_0$ of (1) is defined by

$$ \hat{\vartheta}_M = \arg\left\{\min_{\theta \in \mathcal{D}_\theta}\ \widetilde{U}_M(\theta, Z_{t_{M-1}})\right\}, \qquad (9) $$

where

$$ \widetilde{U}_M(\theta, Z_{t_{M-1}}) = (M-1)\ln(2\pi) + \sum_{k=1}^{M-1} \ln(\det(\widetilde{\Sigma}_{t_k})) + \widetilde{\nu}_{t_k}^{\intercal}(\widetilde{\Sigma}_{t_k})^{-1}\widetilde{\nu}_{t_k}, $$

where

$$ \widetilde{\nu}_{t_k} = \mathbf{z}_{t_k} - \mathbf{C}\mathbf{y}_{t_k/t_{k-1}}(\theta) \quad \text{and} \quad \widetilde{\Sigma}_{t_k} = \mathbf{C}\mathbf{V}_{t_k/t_{k-1}}(\theta)\mathbf{C}^{\intercal} + \Pi_{t_k} $$

are the approximations to the discrete-time innovation (3) and to the innovation variance (4), respectively, resulting from the filtering algorithm (7)-(8).

For models with complete observations free of noise (i.e., with $\mathbf{C} = \mathbf{I}$ and $\Pi_{t_k} = 0$ in (2)), the approximate innovation estimator (9) reduces to the known quasi-maximum likelihood estimators for SDEs.
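
Putting the pieces together, a hedged sketch of the approximate innovation estimator (9): a candidate $\theta$ is scored by running the filter over the data and accumulating $\widetilde{U}_M$, and the score is minimized numerically. The `predict` callable standing in for the prediction stage (7) is an assumption of this sketch, not part of the method's specification.

```python
# Sketch: score a candidate theta with one filter pass, then minimize.
import numpy as np
from scipy.optimize import minimize

def approx_innovation_criterion(theta, z, t_obs, predict, C, Pi, y0, V0):
    """U~_M(theta, Z) of (9); `predict` implements the prediction stage (7)."""
    y, V = y0, V0
    val = (len(z) - 1) * np.log(2.0 * np.pi)
    for k in range(1, len(z)):
        y, V = predict(y, V, t_obs[k - 1], t_obs[k], theta)    # prediction (7)
        nu = z[k] - C @ y                                      # innovation
        S = C @ V @ C.T + Pi                                   # innovation variance
        val += np.log(np.linalg.det(S)) + nu @ np.linalg.solve(S, nu)
        K = V @ C.T @ np.linalg.inv(S)                         # update (8)
        y, V = y + K @ nu, V - K @ C @ V
    return val

# Hypothetical usage, given data (z, t_obs) and a prediction routine:
# res = minimize(approx_innovation_criterion, theta_init,
#                args=(z, t_obs, predict, C, Pi, y0, V0), method="Nelder-Mead")
```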

Main conventional-type estimators

Conventional-type innovation estimators are those (9) derived from conventional-type continuous-discrete or discrete-discrete approximate filtering algorithms. Among the former are the innovation estimators based on Local Linearization (LL) filters, on the extended Kalman filter, and on second-order filters. Approximate innovation estimators based on discrete-discrete filters result from the discretization of the SDE (1) by means of a numerical scheme. Typically, the effectiveness of these innovation estimators is directly related to the stability of the involved filtering algorithms.

A shared drawback of these approximate innovation estimators is that, once the observations are given, the error between the approximate and the exact innovation process is fixed and completely settled by the time distance between observations. This may introduce a large bias into the approximate estimators in some applications, a bias that cannot be corrected by increasing the number of observations. Nevertheless, they are useful in many practical situations for which only medium or low accuracy of the parameter estimation is required.

Order-β innovation estimators

Let us consider the finer time discretization $(\tau)_{h>0} = \{\tau_n : \tau_{n+1} - \tau_n \leq h \text{ for } n = 0, 1, \ldots, N\}$ of the time interval $[t_0, t_{M-1}]$ satisfying the condition $(\tau)_h \supset \{t\}_M$. Further, let $\mathbf{y}$ be an approximation to $\mathbf{x}$ defined on the time discretization $(\tau)_h \supset \{t\}_M$, and let $\mathbf{y}_{t_k/t_{k-1}} = \mathrm{E}(\mathbf{y}(t_k) \mid Z_{t_{k-1}})$ and $\mathbf{V}_{t_k/t_{k-1}} = \mathrm{E}(\mathbf{y}(t_k)\mathbf{y}^{\intercal}(t_k) \mid Z_{t_{k-1}}) - \mathbf{y}_{t_k/t_{k-1}}\mathbf{y}_{t_k/t_{k-1}}^{\intercal}$ be approximations to the exact conditional mean $\mathbf{x}_{t_k/t_{k-1}}$ and variance $\mathbf{U}_{t_k/t_{k-1}}$, for all $t_k, t_{k-1} \in \{t\}_M$.

An order-$\beta$ LMV filter is an approximate LMV filter for which $\mathbf{y}$ is an order-$\beta$ weak approximation to $\mathbf{x}$ satisfying the weak convergence condition

$$ \sup_{t_k \leq t \leq t_{k+1}} \left\vert \mathrm{E}\left(g(\mathbf{x}(t)) \mid Z_{t_k}\right) - \mathrm{E}\left(g(\mathbf{y}(t)) \mid Z_{t_k}\right) \right\vert \leq L_k h^{\beta} $$

for all $t_k, t_{k+1} \in \{t\}_M$ and any $2(\beta+1)$ times continuously differentiable function $g : \mathbb{R}^d \rightarrow \mathbb{R}$ for which $g$ and all its partial derivatives up to order $2(\beta+1)$ have polynomial growth, where $L_k$ is a positive constant. This order-$\beta$ LMV filter converges with rate $\beta$ to the exact LMV filter as $h$ goes to zero, where $h$ is the maximum stepsize of the time discretization $(\tau)_h \supset \{t\}_M$ on which the approximation $\mathbf{y}$ to $\mathbf{x}$ is defined.

An order-$\beta$ innovation estimator is an approximate innovation estimator (9) whose approximations to the discrete-time innovation (3) and innovation variance (4) result from an order-$\beta$ LMV filter.

Approximations $\mathbf{y}$ of any kind converging to $\mathbf{x}$ in a weak sense (e.g., weak numerical schemes for SDEs) can be used to design an order-$\beta$ LMV filter and, consequently, an order-$\beta$ innovation estimator. These order-$\beta$ innovation estimators are intended for the recurrent practical situation in which a diffusion process must be identified from a reduced number of observations distant in time, or when high accuracy of the estimated parameters is required.
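
As one possible construction, the sketch below builds the prediction stage of an order-1 LMV filter from the Euler-Maruyama scheme, which is a weak order-1 approximation: a sample cloud drawn from the current filter density is propagated between observations, and the predicted mean and variance are read off the cloud. This is an illustration of the idea only, under the stated vectorization assumptions, and not the Local Linearization filters of the cited works.

```python
# Sketch of an order-1 (Euler-Maruyama based) Monte Carlo prediction stage.
import numpy as np

def order1_prediction(y_filt, V_filt, t0, t1, f, g, theta,
                      n_samples=1000, n_sub=20, seed=1):
    """Assumptions of this sketch: f(t, X, theta) returns an (n, d) array of
    drift values and g(t, X, theta) an (n, d, m) array of diffusion
    coefficients, both vectorized over the sample axis."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(y_filt, V_filt, size=n_samples)  # filter density
    h = (t1 - t0) / n_sub
    for j in range(n_sub):
        t = t0 + j * h
        G = g(t, X, theta)                                       # (n, d, m)
        dW = np.sqrt(h) * rng.standard_normal((n_samples, G.shape[2]))
        X = X + h * f(t, X, theta) + np.einsum("ndm,nm->nd", G, dW)
    return X.mean(axis=0), np.cov(X, rowvar=False)               # y_pred, V_pred
```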

Properties

An order-$\beta$ innovation estimator $\hat{\theta}_M(h)$ has a number of important properties:

  • For each given dataset $Z_{t_{M-1}}$ of $M$ observations, $\hat{\theta}_M(h)$ converges to the exact innovation estimator $\hat{\theta}_M$ as the maximum stepsize $h$ of the time discretization $(\tau)_h \supset \{t\}_M$ goes to zero.
  • For finite samples of $M$ observations, the expected value of $\hat{\theta}_M(h)$ converges to the expected value of the exact innovation estimator $\hat{\theta}_M$ as $h$ goes to zero.
  • For an increasing number of observations, $\hat{\theta}_M(h)$ is asymptotically normally distributed and its bias decreases as $h$ goes to zero.
  • As with the convergence of the order-$\beta$ LMV filter to the exact LMV filter, the convergence and asymptotic properties of $\hat{\theta}_M(h)$ impose no constraints on the time distance $t_{k+1} - t_k$ between two consecutive observations $\mathbf{z}_{t_k}$ and $\mathbf{z}_{t_{k+1}}$, nor on the time discretization $(\tau)_h \supset \{t\}_M$.
  • Approximations for the Akaike or Bayesian information criteria and for the confidence limits are obtained directly by replacing the exact estimator $\hat{\theta}_M$ with its approximation $\hat{\theta}_M(h)$. These approximations converge to the corresponding exact ones as the maximum stepsize $h$ of the time discretization $(\tau)_h \supset \{t\}_M$ goes to zero.
  • The distribution of the approximate fitting-innovation process $\{\nu_{t_k} : \nu_{t_k} = \mathbf{z}_{t_k} - \mathbf{C}\mathbf{y}_{t_k/t_{k-1}}(\hat{\theta}_M(h))\}_{k=1,\ldots,M-1}$ measures the goodness of fit of the model to the data, and is also used as a practical tool for the design of models and for optimal experimental design.
  • For a smooth enough function $\mathbf{h}$, nonlinear observation equations of the form (6) can be transformed to the simpler form (2), and the order-$\beta$ innovation estimator can be applied.


Fig. 1 Histograms of the differences $(\hat{\alpha}_M - \hat{\alpha}_{h,M}^{D}, \hat{\sigma}_M - \hat{\sigma}_{h,M}^{D})$ and $(\hat{\alpha}_M - \hat{\alpha}_{h,M}, \hat{\sigma}_M - \hat{\sigma}_{h,M})$ between the exact innovation estimator $(\hat{\alpha}_M, \hat{\sigma}_M)$ and the conventional $(\hat{\alpha}_{h,M}^{D}, \hat{\sigma}_{h,M}^{D})$ and order-1 $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ innovation estimators for the parameters $(\alpha, \sigma)$ of the model (10)-(11), given 100 time series of $M = 10$ noisy observations on the time interval $[0.5, 0.5 + M - 1]$ with sampling period $\Delta = 1$.

Figure 1 presents the histograms of the differences $(\hat{\alpha}_M - \hat{\alpha}_{h,M}^{D}, \hat{\sigma}_M - \hat{\sigma}_{h,M}^{D})$ and $(\hat{\alpha}_M - \hat{\alpha}_{h,M}, \hat{\sigma}_M - \hat{\sigma}_{h,M})$ between the exact innovation estimator $(\hat{\alpha}_M, \hat{\sigma}_M)$ and the conventional $(\hat{\alpha}_{h,M}^{D}, \hat{\sigma}_{h,M}^{D})$ and order-1 $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ innovation estimators for the parameters $\alpha = -0.1$ and $\sigma = 0.1$ of the equation

$$ dx = tx\,dt + \sigma\sqrt{t}\,x\,dw \qquad (10) $$

obtained from 100 time series $z_{t_0}, \ldots, z_{t_{M-1}}$ of $M$ noisy observations

$$ z_{t_k} = x(t_k) + e_{t_k}, \text{ for } k = 0, 1, \ldots, M-1, \qquad (11) $$

of $x$ at the observation times $\{t\}_{M=10} = \{t_k = 0.5 + k\Delta : k = 0, \ldots, M-1,\ \Delta = 1\}$, with $x(0.5) = 1$ and $\Pi_k = 0.0001$. The classical and the order-1 Local Linearization filters of the innovation estimators $(\hat{\alpha}_{h,M}^{D}, \hat{\sigma}_{h,M}^{D})$ and $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ are defined as in the cited works, respectively, on the uniform time discretizations $(\tau)_{h=\Delta} \equiv \{t\}_M$ and $(\tau)_{h=\Delta/2,\,\Delta/8,\,\Delta/32} = \{\tau_n : \tau_n = 0.5 + nh,\ n = 0, 1, \ldots, (M-1)/h\}$. The number of stochastic simulations of the order-1 Local Linearization filter is estimated via an adaptive sampling algorithm with moderate tolerance. The figure illustrates the convergence of the order-1 innovation estimator $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ to the exact innovation estimator $(\hat{\alpha}_M, \hat{\sigma}_M)$ as $h$ decreases, which substantially improves the estimation provided by the conventional innovation estimator $(\hat{\alpha}_{\Delta,M}^{D}, \hat{\sigma}_{\Delta,M}^{D})$.

Deterministic approximations

The order-$\beta$ innovation estimators overcome the drawback of the conventional-type innovation estimators concerning the impossibility of reducing their bias. However, the viable bias reduction of an order-$\beta$ innovation estimator might eventually require that the associated order-$\beta$ LMV filter perform a large number of stochastic simulations. In situations where only low or medium precision approximate estimators are needed, an alternative deterministic filter algorithm, called the deterministic order-$\beta$ LMV filter, can be obtained by tracking the first two conditional moments $\mu$ and $\Lambda$ of the order-$\beta$ weak approximation $\mathbf{y}$ at all the time instants $\tau_n \in (\tau)_h$ in between two consecutive observation times $t_k$ and $t_{k+1}$. That is, the values of the predictions $\mathbf{y}_{t_{k+1}/t_k}$ and $\mathbf{P}_{t_{k+1}/t_k}$ in the filtering algorithm are computed from the recursive formulas

$$ \mathbf{y}_{\tau_{n+1}/t_k} = \mu(\tau_n, \mathbf{y}_{\tau_n/t_k}; h_n) \quad \text{and} \quad \mathbf{P}_{\tau_{n+1}/t_k} = \Lambda(\tau_n, \mathbf{P}_{\tau_n/t_k}; h_n), \quad \text{with } \tau_n, \tau_{n+1} \in (\tau)_h \cap [t_k, t_{k+1}], $$

and with $h_n = \tau_{n+1} - \tau_n$. The approximate innovation estimators $\hat{\theta}_{h,M}$ defined with these deterministic order-$\beta$ LMV filters no longer converge to the exact innovation estimator, but they allow a significant bias reduction in the estimated parameters for a given finite sample, at a lower computational cost.
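
The following sketch illustrates the deterministic recursions above with a simple Euler-type choice of $\mu$ and $\Lambda$ (a first-order moment closure around the current mean). This choice is an assumption made for illustration; the actual formulas of the deterministic Local Linearization filters are more elaborate.

```python
# Sketch: deterministic propagation of the first two conditional moments
# between two observation times, over the sub-grid tau.
import numpy as np

def deterministic_prediction(y, P, tau, f, jac_f, GGt, theta):
    """jac_f(t, y, theta) is the Jacobian of the drift and GGt(t, y, theta)
    the diffusion matrix sum_i g_i g_i^T, both evaluated at the current mean
    (the moment-closure assumption of this sketch)."""
    for n in range(len(tau) - 1):
        h_n = tau[n + 1] - tau[n]
        J = jac_f(tau[n], y, theta)
        P = P + h_n * (J @ P + P @ J.T + GGt(tau[n], y, theta))  # Lambda step
        y = y + h_n * f(tau[n], y, theta)                        # mu step
    return y, P
```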


Fig. 2 Histograms and confidence limits for the innovation estimators $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ and $(\hat{\alpha}_{\cdot,M}, \hat{\sigma}_{\cdot,M})$ of $(\alpha, \sigma)$ computed with the deterministic order-1 LL filter on uniform $(\tau)_{h,M}$ and adaptive $(\tau)_{\cdot,M}$ time discretizations, respectively, from 100 noisy realizations of the Van der Pol model (12)-(14) with sampling period $\Delta = 1$ on the time interval $[0, M-1]$ and $M = 30$. Observe the bias reduction of the estimated parameters as $h$ decreases.

Figure 2 presents the histograms and the confidence limits of the approximate innovation estimators $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ and $(\hat{\alpha}_{\cdot,M}, \hat{\sigma}_{\cdot,M})$ for the parameters $\alpha = 1$ and $\sigma = 1$ of the Van der Pol oscillator with random frequency

$$ dx_1 = x_2\,dt \qquad (12) $$

$$ dx_2 = (-(x_1^2 - 1)x_2 - \alpha x_1)\,dt + \sigma x_1\,dw \qquad (13) $$

obtained from 100 time series $z_{t_0}, \ldots, z_{t_{M-1}}$ of $M$ partial and noisy observations

$$ z_{t_k} = x_1(t_k) + e_{t_k}, \text{ for } k = 0, 1, \ldots, M-1, \qquad (14) $$

of $x_1$ at the observation times $\{t\}_{M=30} = \{t_k = k\Delta : k = 0, \ldots, M-1,\ \Delta = 1\}$, with $(x_1(0), x_2(0)) = (1, 1)$ and $\Pi_k = 0.001$. The deterministic order-1 Local Linearization filter of the innovation estimators $(\hat{\alpha}_{h,M}, \hat{\sigma}_{h,M})$ and $(\hat{\alpha}_{\cdot,M}, \hat{\sigma}_{\cdot,M})$ is defined, respectively, on the uniform time discretizations $(\tau)_h = \{\tau_n : \tau_n = nh,\ n = 0, 1, \ldots, (M-1)/h\}$ and on an adaptive time-stepping discretization $(\tau)_{\cdot}$ with moderate relative and absolute tolerances. Observe the bias reduction of the estimated parameters as $h$ decreases.

Software

A Matlab implementation of various approximate innovation estimators is provided by the SdeEstimation toolbox. This toolbox contains various implementations of Local Linearization filters for state estimation and, consequently, of the innovation estimators for the parameters. These include deterministic and stochastic filters with fixed stepsizes and numbers of samples, with adaptive time-stepping algorithms, and with adaptive sampling algorithms, as well as local and global optimization algorithms for computing the innovation estimators. For models with complete observations free of noise, various approximations to the quasi-maximum likelihood estimator are implemented in R.

References

  1. Ozaki T. (1994) "The local linearization filter with application to nonlinear system identification". In: Bozdogan H. (ed.) Proceedings of the First US/Japan Conference on the Frontiers of Statistical Modeling: An Informational Approach, 217-240. Kluwer Academic Publishers. https://doi.org/10.1007/978-94-011-0854-6_10
  2. Jazwinski A.H., Stochastic Processes and Filtering Theory, Academic Press, New York, 2019.
  3. Nielsen J.N., Vestergaard M., Madsen H. (2000) "Estimation in continuous-time stochastic volatility models using nonlinear filters", Int. J. Theor. Appl. Finance, 3, 279-308. https://doi.org/10.1142/S0219024900000139
  4. Kailath T., Lectures on Wiener and Kalman Filtering. New York: Springer-Verlag, 1981.
  5. Jimenez J.C., Ozaki T. (2006) "An approximate innovation method for the estimation of diffusion processes from discrete data", J. Time Series Analysis, 27, 77-97. http://dx.doi.org/10.1111/j.1467-9892.2005.00454.x
  6. Jimenez J.C., Yoshimoto A., Miwakeichi F. (2021) "State and parameter estimation of stochastic physical systems from uncertain and indirect measurements", Eur. Phys. J. Plus, 136, 869. https://doi.org/10.1140/epjp/s13360-021-01859-1
  7. Schweppe F. (1965) "Evaluation of likelihood function for Gaussian signals", IEEE Trans. Inf. Theory, 11, 61-70. https://doi.org/10.1109/TIT.1965.1053737
  8. Ljung L., System Identification, Theory for the User (2nd edn). Englewood Cliffs: Prentice Hall, 1999.
  9. Ljung L., Caines P.E. (1979) "Asymptotic normality of prediction error estimators for approximate system models", Stochastics, 3, 29-46. https://doi.org/10.1080/17442507908833135
  10. Nolsoe K., Nielsen J.N., Madsen H. (2000) "Prediction-based estimating function for diffusion processes with measurement noise", Technical Reports 2000, No. 10, Informatics and Mathematical Modelling, Technical University of Denmark.
  11. Ozaki T., Jimenez J.C., Haggan V. (2000) "Role of the likelihood function in the estimation of chaos models", J. Time Ser. Anal., 21, 363-387. http://dx.doi.org/10.1111/1467-9892.00189
  12. Jimenez J.C. (2020) "Bias reduction in the estimation of diffusion processes from discrete observations", IMA J. Math. Control Inform., 37, 1468-1505. https://doi.org/10.1093/imamci/dnaa021
  13. Jimenez J.C. (2019) "Approximate linear minimum variance filters for continuous-discrete state space models: convergence and practical adaptive algorithms", IMA J. Math. Control Inform., 36, 341-378. http://dx.doi.org/10.1093/imamci/dnx047
  14. Shoji I. (1998) "A comparative study of maximum likelihood estimators for nonlinear dynamical systems", Int. J. Control, 71, 391-404. https://doi.org/10.1080/002071798221731
  15. Nielsen J.N., Madsen H. (2001) "Applying the EKF to stochastic differential equations with level effects", Automatica, 37, 107-112. https://doi.org/10.1016/S0005-1098(00)00128-X
  16. Singer H. (2002) "Parameter estimation of nonlinear stochastic differential equations: simulated maximum likelihood versus extended Kalman filter and Ito-Taylor expansion", J. Comput. Graph. Stat., 11, 972-995. https://doi.org/10.1198/106186002808
  17. Ozaki T., Iino M. (2001) "An innovation approach to non-Gaussian time series analysis", J. Appl. Prob., 38A, 78-92. https://doi.org/10.1239/jap/1085496593
  18. Peng H., Ozaki T., Jimenez J.C. (2002) "Modeling and control for foreign exchange based on a continuous time stochastic microstructure model", Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, December 2002, IEEE, 4, 4440-4445. http://dx.doi.org/10.1109/CDC.2002.1185071
  19. Kloeden P.E., Platen E., Numerical Solution of Stochastic Differential Equations, 3rd edn. Berlin: Springer, 1999.
  20. SdeEstimation toolbox: https://github.com/locallinearization/SdeEstimation
  21. Iacus S.M., Simulation and Inference for Stochastic Differential Equations: With R Examples, New York: Springer, 2008.