
Empirical measure


In probability theory, an empirical measure is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.

The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure {\displaystyle P}. We collect observations {\displaystyle X_{1},X_{2},\dots ,X_{n}} and compute relative frequencies. We can estimate {\displaystyle P}, or a related distribution function {\displaystyle F}, by means of the empirical measure or empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.

Definition

Let {\displaystyle X_{1},X_{2},\dots } be a sequence of independent, identically distributed random variables taking values in a state space S with probability distribution P.

Definition

The empirical measure Pn is defined for measurable subsets of S and given by
{\displaystyle P_{n}(A)={\frac {1}{n}}\sum _{i=1}^{n}I_{A}(X_{i})={\frac {1}{n}}\sum _{i=1}^{n}\delta _{X_{i}}(A)}
where {\displaystyle I_{A}} is the indicator function of A and {\displaystyle \delta _{X}} is the Dirac measure at X.
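
To make the definition concrete, here is a minimal simulation sketch in Python (numpy is assumed to be available; the sample size, the standard normal distribution, and the set A are illustrative choices, not part of the article):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)  # X_1, ..., X_n, iid standard normal

    def empirical_measure(samples, indicator):
        # P_n(A) = (1/n) * sum_i I_A(X_i), with A encoded by its indicator function
        return np.mean(indicator(samples))

    # A = [0, 0.5]; under the standard normal, P(A) = Phi(0.5) - Phi(0), about 0.1915
    print(empirical_measure(x, lambda s: (s >= 0.0) & (s <= 0.5)))

As n grows, the printed relative frequency approaches P(A), roughly 0.1915 here.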

Properties

  • For a fixed measurable set A, nPn(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)); a simulation sketch follows this list.
  • For a fixed partition {\displaystyle A_{i}} of S, the random variables {\displaystyle Y_{i}=nP_{n}(A_{i})} form a multinomial distribution with event probabilities {\displaystyle P(A_{i})}.
    • The covariance matrix of this multinomial distribution is {\displaystyle \operatorname {Cov} (Y_{i},Y_{j})=nP(A_{i})(\delta _{ij}-P(A_{j}))}.
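
The binomial property above is easy to check by simulation. A minimal sketch (Python with numpy assumed; the choices n = 200, P(A) = 0.3, and the uniform distribution are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 200, 5000
    p = 0.3  # P(A) for A = [0, 0.3] under Uniform(0, 1)

    # Each replication draws a fresh sample of size n and records n * P_n(A)
    counts = np.array([np.sum(rng.uniform(size=n) <= p) for _ in range(reps)])

    print(counts.mean(), n * p)            # sample mean vs. nP(A) = 60
    print(counts.var(), n * p * (1 - p))   # sample variance vs. nP(A)(1 - P(A)) = 42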

Definition

{\displaystyle {\bigl (}P_{n}(c){\bigr )}_{c\in {\mathcal {C}}}} is the empirical measure indexed by {\displaystyle {\mathcal {C}}}, a collection of measurable subsets of S.

To generalize this notion further, observe that the empirical measure {\displaystyle P_{n}} maps measurable functions {\displaystyle f:S\to \mathbb {R} } to their empirical mean,

{\displaystyle f\mapsto P_{n}f=\int _{S}f\,dP_{n}={\frac {1}{n}}\sum _{i=1}^{n}f(X_{i})}

In particular, the empirical measure of A is simply the empirical mean of the indicator function, Pn(A) = Pn IA.

For a fixed measurable function {\displaystyle f}, {\displaystyle P_{n}f} is a random variable with mean {\displaystyle \mathbb {E} f} and variance {\displaystyle {\frac {1}{n}}\mathbb {E} (f-\mathbb {E} f)^{2}}.
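
A sketch of the map f ↦ P_n f (Python with numpy assumed; f(x) = x² and the standard normal are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=100_000)

    def P_n(f, samples):
        # Empirical mean: P_n f = (1/n) * sum_i f(X_i) = integral of f dP_n
        return np.mean(f(samples))

    print(P_n(lambda s: s**2, x))  # close to E[X^2] = 1 for the standard normal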

By the strong law of large numbers, Pn(A) converges to P(A) almost surely for fixed A. Similarly {\displaystyle P_{n}f} converges to {\displaystyle \mathbb {E} f} almost surely for a fixed measurable function {\displaystyle f}. The problem of uniform convergence of Pn to P was open until Vapnik and Chervonenkis solved it in 1968.

If the class {\displaystyle {\mathcal {C}}} (or {\displaystyle {\mathcal {F}}}) is Glivenko–Cantelli with respect to P, then Pn converges to P uniformly over {\displaystyle c\in {\mathcal {C}}} (or {\displaystyle f\in {\mathcal {F}}}). In other words, with probability 1 we have

{\displaystyle \|P_{n}-P\|_{\mathcal {C}}=\sup _{c\in {\mathcal {C}}}|P_{n}(c)-P(c)|\to 0,}
{\displaystyle \|P_{n}-P\|_{\mathcal {F}}=\sup _{f\in {\mathcal {F}}}|P_{n}f-\mathbb {E} f|\to 0.}

Empirical distribution function

Main article: Empirical distribution function

The empirical distribution function provides an example of empirical measures. For real-valued iid random variables {\displaystyle X_{1},\dots ,X_{n}} it is given by

{\displaystyle F_{n}(x)=P_{n}((-\infty ,x])=P_{n}I_{(-\infty ,x]}.}

In this case, empirical measures are indexed by the class {\displaystyle {\mathcal {C}}=\{(-\infty ,x]:x\in \mathbb {R} \}.} It has been shown that {\displaystyle {\mathcal {C}}} is a uniform Glivenko–Cantelli class; in particular,

{\displaystyle \sup _{F}\|F_{n}(x)-F(x)\|_{\infty }\to 0}

with probability 1.
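
A minimal sketch of this convergence (Python, assuming numpy and scipy; taking the true distribution F to be standard normal for illustration). Since F_n is a step function jumping by 1/n at each order statistic, the supremum over x is attained at the sample points, so it suffices to compare F_n with F at and just below each order statistic:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    def sup_deviation(samples):
        # Kolmogorov statistic sup_x |F_n(x) - F(x)| for F = standard normal CDF
        xs = np.sort(samples)
        n = len(xs)
        F = norm.cdf(xs)
        upper = np.arange(1, n + 1) / n - F  # F_n equals i/n at the i-th order statistic
        lower = F - np.arange(0, n) / n      # F_n equals (i-1)/n just below it
        return max(upper.max(), lower.max())

    for n in (100, 1000, 10000):
        print(n, sup_deviation(rng.normal(size=n)))  # decreases toward 0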

References

  1. Vapnik, V.; Chervonenkis, A. (1968). "Uniform convergence of frequencies of occurrence of events to their probabilities". Dokl. Akad. Nauk SSSR. 181.
