
Bhatia–Davis inequality


In mathematics, the Bhatia–Davis inequality, named after Rajendra Bhatia and Chandler Davis, is an upper bound on the variance σ² of any bounded probability distribution on the real line.

Statement

Let m and M be the lower and upper bounds, respectively, for a set of real numbers a1, ..., an, with a particular probability distribution. Let μ be the expected value of this distribution.

Then the Bhatia–Davis inequality states:

{\displaystyle \sigma ^{2}\leq (M-\mu )(\mu -m).}

Equality holds if and only if every aj in the set of values is equal either to M or to m.
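
For example, a random variable taking the value M = 1 with probability p and the value m = 0 with probability 1 − p has mean μ = p and variance

{\displaystyle \sigma ^{2}=p(1-p)=(M-\mu )(\mu -m),}

so the bound is attained, in agreement with the equality condition above.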

Proof

Since {\displaystyle m\leq A\leq M}, the product {\displaystyle (M-A)(A-m)} is nonnegative, so

{\displaystyle 0\leq \mathbb {E} [(M-A)(A-m)]=-\mathbb {E} [A^{2}]-mM+(m+M)\mu .}

Thus,

{\displaystyle \sigma ^{2}=\mathbb {E} [A^{2}]-\mu ^{2}\leq -mM+(m+M)\mu -\mu ^{2}=(M-\mu )(\mu -m).}
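
The final factorization can be checked symbolically, for instance with a short SymPy sketch (illustrative only):

    import sympy as sp

    m, M, mu = sp.symbols('m M mu', real=True)
    # Last step of the proof: -mM + (m + M)*mu - mu^2 equals (M - mu)*(mu - m).
    lhs = -m*M + (m + M)*mu - mu**2
    rhs = (M - mu)*(mu - m)
    print(sp.simplify(lhs - rhs))  # prints 0, confirming the identity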

Extensions of the Bhatia–Davis inequality

If {\displaystyle \Phi } is a positive and unital linear mapping of a C*-algebra {\displaystyle {\mathcal {A}}} into a C*-algebra {\displaystyle {\mathcal {B}}}, and A is a self-adjoint element of {\displaystyle {\mathcal {A}}} satisfying {\displaystyle m\leq A\leq M}, then:

{\displaystyle \Phi (A^{2})-(\Phi A)^{2}\leq (M-\Phi A)(\Phi A-m).}
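
As a concrete special case, one may take {\displaystyle \Phi } to be the normalized trace on n × n complex matrices, which is a positive unital linear map into the scalars. The following NumPy sketch checks the inequality for an illustrative Hermitian matrix (the matrix is an arbitrary example, not taken from the sources):

    import numpy as np

    # Illustrative self-adjoint (Hermitian) matrix; any choice works.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    n = A.shape[0]

    eigenvalues = np.linalg.eigvalsh(A)
    m_, M_ = eigenvalues.min(), eigenvalues.max()   # spectrum lies in [m_, M_]

    def phi(X):
        """Normalized trace: a positive, unital linear map into the scalars."""
        return np.trace(X).real / n

    lhs = phi(A @ A) - phi(A) ** 2
    rhs = (M_ - phi(A)) * (phi(A) - m_)
    print(lhs, rhs, lhs <= rhs + 1e-12)   # lhs is bounded by rhs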

If {\displaystyle {\mathit {X}}} is a discrete random variable such that

{\displaystyle P(X=x_{i})=p_{i},} where {\displaystyle i=1,...,n}, then:

{\displaystyle s_{p}^{2}=\sum _{1}^{n}p_{i}x_{i}^{2}-(\sum _{1}^{n}p_{i}x_{i})^{2}\leq (M-\sum _{1}^{n}p_{i}x_{i})(\sum _{1}^{n}p_{i}x_{i}-m),}

where {\displaystyle 0\leq p_{i}\leq 1} and {\displaystyle \sum _{1}^{n}p_{i}=1}.
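
A minimal numerical check of this weighted form, with illustrative (made-up) support points and probabilities:

    import numpy as np

    # Illustrative support points x_i and probabilities p_i (summing to 1).
    x = np.array([1.0, 2.0, 4.0, 7.0])
    p = np.array([0.1, 0.4, 0.3, 0.2])
    m, M = x.min(), x.max()

    mean = np.sum(p * x)
    s_p2 = np.sum(p * x**2) - mean**2      # weighted variance
    bound = (M - mean) * (mean - m)        # Bhatia-Davis bound
    print(s_p2, bound, s_p2 <= bound)      # 4.05 8.75 True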

Comparisons to other inequalities

The Bhatia–Davis inequality is stronger than Popoviciu's inequality on variances (note, however, that Popoviciu's inequality does not require knowledge of the expectation or mean), as can be seen from the conditions for equality: equality holds in Popoviciu's inequality if and only if half of the aj are equal to the upper bound M and half are equal to the lower bound m, whereas equality in the Bhatia–Davis inequality allows any split of the aj between the two endpoints. Additionally, Sharma has made further refinements of the Bhatia–Davis inequality.
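
Indeed, for any μ with m ≤ μ ≤ M, the inequality of arithmetic and geometric means gives

{\displaystyle (M-\mu )(\mu -m)\leq \left({\frac {(M-\mu )+(\mu -m)}{2}}\right)^{2}={\frac {(M-m)^{2}}{4}},}

so the Bhatia–Davis bound never exceeds Popoviciu's bound.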


References

  1. Bhatia, Rajendra; Davis, Chandler (2000). "A Better Bound on the Variance". The American Mathematical Monthly. 107 (4): 353–357. doi:10.1080/00029890.2000.12005203. ISSN 0002-9890. S2CID 38818437.
  2. Sharma, Rajesh (2008). "Some more inequalities for arithmetic mean, harmonic mean and variance". Journal of Mathematical Inequalities (1): 109–114. doi:10.7153/jmi-02-11. ISSN 1846-579X.


