In statistics, the Fisher information I(θ), thought of as the amount of information that an observable random variable X carries about an unobservable parameter θ upon which the probability distribution of X depends, is the variance of the score. Because the expectation of the score is zero, this may be written as

$$I(\theta) = \operatorname{E}\!\left[\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right)^{2}\right],$$
where f is the probability density function of the random variable X. The Fisher information is thus the expectation of the square of the score. If a random variable carries high Fisher information, the absolute value of the score is frequently large (recall that the expectation of the score is zero).
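As an informal numerical illustration (the model, the value θ = 0.3, and the sample size are arbitrary choices, not part of the definition), the following sketch estimates the mean and variance of the score by simulation for a Bernoulli model.

```python
# Rough numerical sketch: for X ~ Bernoulli(theta) the score is
# d/dtheta ln f(X; theta) = X/theta - (1 - X)/(1 - theta).
# Its mean should be close to 0 and its variance close to the
# Fisher information 1/(theta*(1 - theta)).
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                                  # illustrative parameter value
x = rng.binomial(1, theta, size=200_000)     # simulated observations
score = x / theta - (1 - x) / (1 - theta)

print("mean of score        :", score.mean())              # ~ 0
print("variance of score    :", score.var())               # ~ 1/(theta*(1-theta))
print("1/(theta*(1-theta))  :", 1 / (theta * (1 - theta)))
```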
This concept is named in honor of the geneticist and statistician Ronald Fisher.
Note that the information as defined above is not a function of a particular observation, as the random variable X has been averaged out. The concept of information is useful when comparing two methods of observation of some random process.
Information as defined above may be written as

$$I(\theta) = -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \ln f(X;\theta)\right]$$

and is thus the negative of the expectation of the second derivative of the log of f with respect to θ. Information may thus be seen as a measure of the "sharpness" of the support curve near the maximum likelihood estimate of θ. A "blunt" support curve (one with a shallow maximum) would have a small expected second derivative in absolute value, and thus low information; a sharp one would have a large expected second derivative in absolute value and thus high information.
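The equivalence of the two expressions can be checked numerically. The sketch below is only an illustration, assuming X is normal with unknown mean μ and known standard deviation σ (values chosen arbitrarily); both forms then give the Fisher information 1/σ².

```python
# Illustrative check for X ~ Normal(mu, sigma^2) with sigma known:
# the score is (X - mu)/sigma^2 and the second derivative of ln f with
# respect to mu is the constant -1/sigma^2, so the variance of the score
# and the negative expected second derivative both equal 1/sigma^2.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 2.0, 1.5                          # arbitrary illustrative values
x = rng.normal(mu, sigma, size=200_000)

score = (x - mu) / sigma**2
second_derivative = np.full_like(x, -1.0 / sigma**2)

print("variance of score     :", score.var())                # ~ 1/sigma^2
print("-E[second derivative] :", -second_derivative.mean())  # = 1/sigma^2
print("closed form 1/sigma^2 :", 1 / sigma**2)
```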
Information is additive, in the sense that the information gathered by two independent experiments X and Y is the sum of the information from each of them:

$$I_{X,Y}(\theta) = I_X(\theta) + I_Y(\theta).$$
This is because the variance of the sum of two independent random variables is the sum of their variances. It follows that the information in a random sample of size n is n times that in a sample of size one (if observations are independent).
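As a rough numerical check of additivity (the model choices are arbitrary and only illustrative): let X be a Bernoulli(θ) trial and, independently, Y a Binomial(5, θ) count; the variance of the combined score should then be close to the sum of the two individual score variances.

```python
# Illustrative additivity check: X ~ Bernoulli(theta), Y ~ Binomial(m, theta),
# X and Y independent.  The score of the joint experiment is the sum of the
# two scores, so its variance is approximately
# 1/(theta*(1-theta)) + m/(theta*(1-theta)).
import numpy as np

rng = np.random.default_rng(2)
theta, m = 0.6, 5                             # arbitrary illustrative values
x = rng.binomial(1, theta, size=200_000)
y = rng.binomial(m, theta, size=200_000)

score_x = x / theta - (1 - x) / (1 - theta)
score_y = y / theta - (m - y) / (1 - theta)

print("variance of combined score         :", (score_x + score_y).var())
print("sum of separate variances          :", score_x.var() + score_y.var())
print("closed form (1+m)/(theta*(1-theta)):", (1 + m) / (theta * (1 - theta)))
```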
The information provided by a sufficient statistic is the same as that of the sample X. This may be seen by using Fisher's factorization criterion for a sufficient statistic. If T(X) is sufficient for θ, then

$$f(X;\theta) = g(T(X), \theta)\, h(X)$$
for some functions g and h (see sufficient statistic for a more detailed explanation). The equality of information follows from the fact that

$$\frac{\partial}{\partial\theta} \ln f(X;\theta) = \frac{\partial}{\partial\theta} \ln g(T(X), \theta)$$
(which is the case because h(X) does not depend on θ) and the definition of information given above. More generally, if T = t(X) is a statistic, then

$$I_T(\theta) \le I_X(\theta),$$
with equality if and only if T is a sufficient statistic.
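To make the sufficiency statement concrete, here is a small simulation (the model and numbers are illustrative, not taken from the text): for n Bernoulli(θ) trials the number of successes T is sufficient, and the score computed from the full sample coincides with the score computed from T alone, so both carry the same information.

```python
# Illustrative sufficiency check: for a sample of n Bernoulli(theta) trials,
# T = number of successes is sufficient for theta.  The score from the full
# sample and the score based on T (binomially distributed) are identical,
# so their variances -- the Fisher information -- agree.
import numpy as np

rng = np.random.default_rng(3)
theta, n = 0.4, 50                            # arbitrary illustrative values
samples = rng.binomial(1, theta, size=(10_000, n))

a = samples.sum(axis=1)                       # successes: the statistic T
b = n - a                                     # failures

score_full = (samples / theta - (1 - samples) / (1 - theta)).sum(axis=1)
score_T = a / theta - b / (1 - theta)

print("max |score_full - score_T| :", np.abs(score_full - score_T).max())  # ~ 0
print("variance of score          :", score_T.var())   # ~ n/(theta*(1-theta))
print("n/(theta*(1-theta))        :", n / (theta * (1 - theta)))
```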
The Cramér-Rao inequality states that the reciprocal of the Fisher information is a lower bound on the variance of any unbiased estimator of θ.
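Stated as a formula, writing $\hat\theta$ here for an unbiased estimator of θ (notation introduced only for illustration), the bound reads

$$\operatorname{var}(\hat\theta) \ge \frac{1}{I(\theta)}.$$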
Example
The information contained in n independent Bernoulli trials, each with probability of success θ, may be calculated as follows. In the following, a represents the number of successes, b the number of failures, and n = a + b is the total number of trials.

$$
\begin{aligned}
I(\theta) &= -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \ln f(X;\theta)\right] \\
&= -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \ln\!\left(\theta^{a}(1-\theta)^{b}\binom{a+b}{a}\right)\right] \\
&= -\operatorname{E}\!\left[\frac{\partial^{2}}{\partial\theta^{2}} \bigl(a \ln\theta + b \ln(1-\theta)\bigr)\right] \\
&= -\operatorname{E}\!\left[\frac{\partial}{\partial\theta} \left(\frac{a}{\theta} - \frac{b}{1-\theta}\right)\right] \\
&= \operatorname{E}\!\left[\frac{a}{\theta^{2}} + \frac{b}{(1-\theta)^{2}}\right] \\
&= \frac{n\theta}{\theta^{2}} + \frac{n(1-\theta)}{(1-\theta)^{2}} \\
&= \frac{n}{\theta(1-\theta)}
\end{aligned}
$$
The first line is just the definition of information; the second uses the fact that the information contained in a sufficient statistic is the same as that of the sample itself; the third line expands the log term (and drops a constant); the fourth and fifth are just differentiation with respect to θ; the sixth replaces a and b with their expectations; and the seventh is algebraic manipulation.
The overall result, viz.

$$I(\theta) = \frac{n}{\theta(1-\theta)},$$

may be seen to be in accord with what one would expect, since it is the reciprocal of the variance of the mean of the n Bernoulli random variables.
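As an illustrative aside (with arbitrary values of θ and n, not part of the original example): the estimator a/n has variance θ(1−θ)/n, exactly the reciprocal of the information computed above, which a short simulation can confirm.

```python
# Illustrative check: the estimator a/n of theta has variance
# theta*(1-theta)/n, the reciprocal of the Fisher information
# n/(theta*(1-theta)) derived above.
import numpy as np

rng = np.random.default_rng(4)
theta, n = 0.25, 100                          # arbitrary illustrative values
a = rng.binomial(n, theta, size=100_000)      # successes in each replication
theta_hat = a / n

print("empirical variance of a/n :", theta_hat.var())
print("theta*(1-theta)/n         :", theta * (1 - theta) / n)
print("reciprocal of information :", 1 / (n / (theta * (1 - theta))))
```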
When the parameter θ is vector valued, the information is a positive-definite matrix, which defines a metric on the parameter space; consequently differential geometry is applied to this topic. See Fisher information metric.
Physical information
The difference between the Fisher information in the data and in the source effect that generated them is called the physical information. When this difference is mathematically extremized through choice of the system probability amplitudes, the approach is called the principle of extreme physical information. The solution amplitudes define the physics of the source effect.