
Compound Poisson process

Random process in probability theory

A compound Poisson process is a continuous-time stochastic process with jumps. The jumps arrive randomly according to a Poisson process, and the size of the jumps is also random, with a specified probability distribution. To be precise, a compound Poisson process, parameterised by a rate $\lambda > 0$ and jump size distribution G, is a process $\{\,Y(t) : t \geq 0\,\}$ given by

$$Y(t) = \sum_{i=1}^{N(t)} D_i$$

where $\{\,N(t) : t \geq 0\,\}$ is the counting variable of a Poisson process with rate $\lambda$, and $\{\,D_i : i \geq 1\,\}$ are independent and identically distributed random variables with distribution function G, which are also independent of $\{\,N(t) : t \geq 0\,\}$.

When the $D_i$ are non-negative integer-valued random variables, this compound Poisson process is known as a stuttering Poisson process.
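Concretely, a sample of $Y(t)$ can be generated by first drawing the Poisson jump count and then summing that many independent jump sizes. The following is a minimal simulation sketch in Python with NumPy; the lognormal jump distribution and all parameter values are arbitrary illustrative choices for G, and any other sampler can be substituted:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def compound_poisson(t, lam, jump_sampler, rng):
    """Draw one sample of Y(t) = sum_{i=1}^{N(t)} D_i."""
    n = rng.poisson(lam * t)        # N(t): number of jumps by time t
    jumps = jump_sampler(n, rng)    # D_1, ..., D_n: iid draws from G
    return jumps.sum()              # empty sum is 0 when n == 0

# Illustrative choice of G: lognormal jump sizes.
def lognormal_jumps(n, rng):
    return rng.lognormal(mean=0.0, sigma=0.5, size=n)

print(compound_poisson(t=10.0, lam=2.0, jump_sampler=lognormal_jumps, rng=rng))
```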

Properties of the compound Poisson process

The expected value of a compound Poisson process can be calculated using a result known as Wald's equation:

$$\operatorname{E}(Y(t)) = \operatorname{E}(D_1 + \cdots + D_{N(t)}) = \operatorname{E}(N(t))\,\operatorname{E}(D_1) = \operatorname{E}(N(t))\,\operatorname{E}(D) = \lambda t\,\operatorname{E}(D).$$
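For instance, in an illustrative insurance-style reading where jumps are claim sizes: if claims arrive at rate $\lambda = 2$ per unit time and the mean claim size is $\operatorname{E}(D) = 5$, the expected aggregate claim amount grows linearly, $\operatorname{E}(Y(t)) = \lambda t \operatorname{E}(D) = 10t$.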

Making similar use of the law of total variance, the variance can be calculated as:

$$\begin{aligned}
\operatorname{var}(Y(t)) &= \operatorname{E}(\operatorname{var}(Y(t) \mid N(t))) + \operatorname{var}(\operatorname{E}(Y(t) \mid N(t))) \\
&= \operatorname{E}(N(t)\operatorname{var}(D)) + \operatorname{var}(N(t)\operatorname{E}(D)) \\
&= \operatorname{var}(D)\,\operatorname{E}(N(t)) + \operatorname{E}(D)^2\,\operatorname{var}(N(t)) \\
&= \operatorname{var}(D)\,\lambda t + \operatorname{E}(D)^2\,\lambda t \\
&= \lambda t\,(\operatorname{var}(D) + \operatorname{E}(D)^2) \\
&= \lambda t\,\operatorname{E}(D^2).
\end{aligned}$$
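Both moment formulas are easy to sanity-check by Monte Carlo simulation. Below is a minimal sketch, assuming exponentially distributed jumps so that $\operatorname{E}(D)$ and $\operatorname{E}(D^2)$ are available in closed form (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
lam, t = 3.0, 2.0                 # rate and time horizon
n_paths = 100_000

# One Y(t) per path: draw N(t), then sum N(t) iid exponential jumps.
counts = rng.poisson(lam * t, size=n_paths)
samples = np.array([rng.exponential(scale=0.5, size=n).sum() for n in counts])

# For D ~ Exp with mean 0.5: E(D) = 0.5 and E(D^2) = 2 * 0.5**2 = 0.5.
print(samples.mean(), lam * t * 0.5)   # both ≈ 3.0
print(samples.var(), lam * t * 0.5)    # both ≈ 3.0
```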

Lastly, using the law of total probability, the moment generating function can be obtained by conditioning on N(t). The sums below are written for integer-valued jumps, but the final expression holds for any jump distribution with moment generating function $M_D$:

$$\Pr(Y(t) = i) = \sum_n \Pr(Y(t) = i \mid N(t) = n)\,\Pr(N(t) = n)$$
$$\begin{aligned}
\operatorname{E}(e^{sY}) &= \sum_i e^{si} \Pr(Y(t) = i) \\
&= \sum_i e^{si} \sum_n \Pr(Y(t) = i \mid N(t) = n)\,\Pr(N(t) = n) \\
&= \sum_n \Pr(N(t) = n) \sum_i e^{si} \Pr(Y(t) = i \mid N(t) = n) \\
&= \sum_n \Pr(N(t) = n) \sum_i e^{si} \Pr(D_1 + D_2 + \cdots + D_n = i) \\
&= \sum_n \Pr(N(t) = n)\, M_D(s)^n \\
&= \sum_n \Pr(N(t) = n)\, e^{n \ln(M_D(s))} \\
&= M_{N(t)}(\ln(M_D(s))) \\
&= e^{\lambda t \left( M_D(s) - 1 \right)}.
\end{aligned}$$
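The closed form $e^{\lambda t (M_D(s) - 1)}$ can be checked empirically in the same way. Here is a sketch assuming normally distributed jumps, whose MGF $M_D(s) = e^{\mu s + \sigma^2 s^2 / 2}$ is known exactly (parameters again illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
lam, t = 1.5, 2.0
mu, sigma = 0.3, 0.4              # jumps D ~ Normal(mu, sigma)
s = 0.25                          # point at which to evaluate the MGF
n_paths = 100_000

counts = rng.poisson(lam * t, size=n_paths)
samples = np.array([rng.normal(mu, sigma, size=n).sum() for n in counts])

M_D = np.exp(mu * s + 0.5 * sigma**2 * s**2)   # normal MGF at s
print(np.mean(np.exp(s * samples)))            # empirical E(e^{sY})
print(np.exp(lam * t * (M_D - 1)))             # closed form; should match
```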

Exponentiation of measures

Let N, Y, and D be as above. Let $\mu$ be the probability measure according to which D is distributed, i.e.

$$\mu(A) = \Pr(D \in A).$$

Let $\delta_0$ be the trivial probability distribution putting all of the mass at zero. Then the probability distribution of Y(t) is the measure

$$\exp(\lambda t(\mu - \delta_0))$$

where the exponential $\exp(\nu)$ of a finite measure $\nu$ on Borel subsets of the real line is defined by

$$\exp(\nu) = \sum_{n=0}^{\infty} \frac{\nu^{*n}}{n!}$$

and

$$\nu^{*n} = \underbrace{\nu * \cdots * \nu}_{n\text{ factors}}$$

is the n-fold convolution of measures (with $\nu^{*0} = \delta_0$, the identity for convolution), and the series converges weakly.
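Since $\delta_0$ is the convolution identity, $\exp(\lambda t(\mu - \delta_0)) = e^{-\lambda t} \sum_{n \geq 0} \frac{(\lambda t)^n}{n!}\, \mu^{*n}$, which recovers the representation obtained by conditioning on $N(t)$. For integer-valued jumps each convolution power is a plain array convolution, so the measure exponential can be computed numerically. A minimal sketch, where the jump pmf and parameters are illustrative and the series is truncated at a cutoff n_max:

```python
import numpy as np
from math import exp, factorial

def measure_exponential(pmf, lam, t, n_max=50):
    """Distribution of Y(t) as exp(lam*t*(mu - delta_0)), i.e.
    sum_n Poisson(lam*t)(n) * mu^{*n}, truncated after n_max terms.

    pmf[k] = Pr(D = k) for an integer-valued jump distribution mu.
    """
    support = (len(pmf) - 1) * n_max + 1
    result = np.zeros(support)
    conv = np.zeros(support)
    conv[0] = 1.0                                 # mu^{*0} = delta_0
    for n in range(n_max + 1):
        weight = exp(-lam * t) * (lam * t) ** n / factorial(n)
        result += weight * conv                   # add Poisson-weighted mu^{*n}
        conv = np.convolve(conv, pmf)[:support]   # advance to mu^{*(n+1)}
    return result

# Illustrative jump pmf: Pr(D=1) = 0.6, Pr(D=2) = 0.4.
pmf = np.array([0.0, 0.6, 0.4])
dist = measure_exponential(pmf, lam=2.0, t=1.0)
print(dist[:6], dist.sum())   # probabilities of Y(t) = 0..5; total ≈ 1
```

As a consistency check, taking $\Pr(D = 1) = 1$ makes every jump unit-sized, so the computed distribution reduces to the Poisson($\lambda t$) distribution of $N(t)$ itself.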
