Gram matrix

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors {\displaystyle v_{1},\dots ,v_{n}} in an inner product space is the Hermitian matrix of inner products, whose entries are given by {\displaystyle G_{ij}=\langle v_{i},v_{j}\rangle }.

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
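
As a concrete illustration, the following minimal sketch (assuming NumPy; the example vectors are hypothetical) tests linear independence through the Gram determinant:

    import numpy as np

    # Columns of each matrix are the vectors whose independence we test.
    indep = np.array([[1.0, 0.0],
                      [1.0, 1.0],
                      [0.0, 2.0]])                       # two independent vectors in R^3
    dep = np.column_stack([indep[:, 0], 2.0 * indep[:, 0]])  # second column is a multiple of the first

    for V in (indep, dep):
        gram_det = np.linalg.det(V.T @ V)                # Gram determinant of the columns
        print(abs(gram_det) > 1e-12)                     # True for indep, False for dep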

It is named after Jørgen Pedersen Gram.

Examples

For finite-dimensional real vectors in {\displaystyle \mathbb {R} ^{n}} with the usual Euclidean dot product, the Gram matrix is simply {\displaystyle G=V^{\mathrm {T} }V}, where {\displaystyle V} is a matrix whose columns are the vectors {\displaystyle v_{k}}. For complex vectors in {\displaystyle \mathbb {C} ^{n}}, {\displaystyle G=V^{\mathrm {H} }V}, where {\displaystyle V^{\mathrm {H} }} is the conjugate transpose of {\displaystyle V}.
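
Both formulas translate directly into code; here is a minimal sketch (assuming NumPy; the example vectors are hypothetical):

    import numpy as np

    # Real case: G = V^T V, where the columns of V are the vectors.
    V = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])          # three vectors in R^2
    G = V.T @ V                              # G[i, j] = <v_i, v_j>
    print(G)

    # Complex case: use the conjugate transpose V^H instead of V^T.
    W = np.array([[1.0, 1j],
                  [1j, 0.0]])
    G_c = W.conj().T @ W
    print(np.allclose(G_c, G_c.conj().T))    # True: the Gram matrix is Hermitian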

Given square-integrable functions {\displaystyle \{\ell _{i}(\cdot ),\,i=1,\dots ,n\}} on the interval {\displaystyle [t_{0},t_{f}]}, the Gram matrix {\displaystyle G=[G_{ij}]} is:

{\displaystyle G_{ij}=\int _{t_{0}}^{t_{f}}\ell _{i}(\tau ){\bar {\ell _{j}}}(\tau )\,d\tau .}
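
Each entry can be approximated by numerical quadrature; the sketch below (assuming SciPy, with hypothetical real-valued functions, so the conjugate is omitted) recovers the 3×3 Hilbert matrix from the monomials on [0, 1]:

    import numpy as np
    from scipy.integrate import quad

    t0, tf = 0.0, 1.0
    funcs = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # l_1, l_2, l_3

    n = len(funcs)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # G[i, j] = integral of l_i(t) * l_j(t) over [t0, tf]
            G[i, j], _ = quad(lambda t: funcs[i](t) * funcs[j](t), t0, tf)

    print(G)   # [[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]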

For any bilinear form {\displaystyle B} on a finite-dimensional vector space over any field we can define a Gram matrix {\displaystyle G} attached to a set of vectors {\displaystyle v_{1},\dots ,v_{n}} by {\displaystyle G_{ij}=B(v_{i},v_{j})}. The matrix will be symmetric if the bilinear form {\displaystyle B} is symmetric.
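
For instance, taking {\displaystyle B(u,v)=u^{\mathrm {T} }Mv} for a symmetric matrix {\displaystyle M} gives a symmetric Gram matrix; a minimal sketch (assuming NumPy, with a hypothetical {\displaystyle M} and hypothetical vectors):

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])              # symmetric M, so B(u, v) = u^T M v is symmetric
    vectors = [np.array([1.0, 0.0]),
               np.array([1.0, 1.0])]

    n = len(vectors)
    G = np.array([[vectors[i] @ M @ vectors[j] for j in range(n)]
                  for i in range(n)])       # G[i, j] = B(v_i, v_j)
    print(np.allclose(G, G.T))              # True: symmetric B yields a symmetric G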

Applications

  • In Riemannian geometry, given an embedded {\displaystyle k}-dimensional Riemannian manifold {\displaystyle M\subset \mathbb {R} ^{n}} and a coordinate chart {\displaystyle \phi :U\to M} for {\displaystyle (x_{1},\ldots ,x_{k})\in U\subset \mathbb {R} ^{k}}, the volume form {\displaystyle \omega } on {\displaystyle M} induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:
{\displaystyle \omega ={\sqrt {\det G}}\ dx_{1}\cdots dx_{k},\quad G=\left[\left\langle {\tfrac {\partial \phi }{\partial x_{i}}},{\tfrac {\partial \phi }{\partial x_{j}}}\right\rangle \right].}

This generalizes the classical surface integral of a parametrized surface {\displaystyle \phi :U\to S\subset \mathbb {R} ^{3}} for {\displaystyle (x,y)\in U\subset \mathbb {R} ^{2}}:

{\displaystyle \int _{S}f\ dA\ =\ \iint _{U}f(\phi (x,y))\,\left|{\tfrac {\partial \phi }{\partial x}}\,{\times }\,{\tfrac {\partial \phi }{\partial y}}\right|\,dx\,dy.}
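
The two area elements agree, i.e. {\displaystyle {\sqrt {\det G}}=\left|{\tfrac {\partial \phi }{\partial x}}\times {\tfrac {\partial \phi }{\partial y}}\right|}. A minimal numerical check (assuming NumPy, for the hypothetical surface {\displaystyle \phi (x,y)=(x,y,xy)}):

    import numpy as np

    def tangents(x, y):
        # Partial derivatives of phi(x, y) = (x, y, x*y)
        phi_x = np.array([1.0, 0.0, y])
        phi_y = np.array([0.0, 1.0, x])
        return phi_x, phi_y

    x, y = 0.7, -0.3
    phi_x, phi_y = tangents(x, y)
    G = np.array([[phi_x @ phi_x, phi_x @ phi_y],
                  [phi_y @ phi_x, phi_y @ phi_y]])      # Gramian of the tangent vectors
    area_gram = np.sqrt(np.linalg.det(G))
    area_cross = np.linalg.norm(np.cross(phi_x, phi_y))
    print(np.isclose(area_gram, area_cross))            # True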

Properties

Positive-semidefiniteness

The Gramian matrix is positive-semidefinite, and every symmetric positive-semidefinite matrix is the Gramian matrix for some set of vectors. Further, in finite dimensions it determines the vectors up to isomorphism, i.e. any two sets of vectors with the same Gramian matrix must be related by a single unitary matrix. These facts follow from taking the spectral decomposition of any positive-semidefinite matrix {\displaystyle P}, so that {\displaystyle P=UDU^{\mathrm {H} }=(U{\sqrt {D}})(U{\sqrt {D}})^{\mathrm {H} }} and so {\displaystyle P} is the Gramian matrix of the rows of {\displaystyle U{\sqrt {D}}}. The Gramian matrix of any orthonormal basis is the identity matrix. The infinite-dimensional analog of this statement is Mercer's theorem.
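
The construction is easy to carry out numerically; a minimal sketch (assuming NumPy, with a hypothetical positive-semidefinite {\displaystyle P}):

    import numpy as np

    P = np.array([[2.0, 1.0],
                  [1.0, 2.0]])              # symmetric positive-semidefinite
    eigvals, U = np.linalg.eigh(P)          # spectral decomposition P = U D U^H
    B = U * np.sqrt(eigvals)                # B = U sqrt(D): scales column k by sqrt(eigvals[k])
    print(np.allclose(B @ B.conj().T, P))   # True: P is the Gram matrix of the rows of B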

Derivation of positive-semidefiniteness

The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

{\displaystyle x^{\mathrm {T} }\mathbf {G} x=\sum _{i,j}\langle v_{i},v_{j}\rangle x_{i}x_{j}=\sum _{i,j}\langle v_{i}x_{i},v_{j}x_{j}\rangle =\left\langle \sum _{i}v_{i}x_{i},\sum _{j}v_{j}x_{j}\right\rangle =\left\|\sum _{i}v_{i}x_{i}\right\|^{2}\geq 0.}

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product.

Note that this also shows that the Gramian matrix is positive-definite if and only if the vectors {\displaystyle v_{i}} are linearly independent, since the final squared norm vanishes for some nonzero {\displaystyle x} exactly when a nontrivial linear combination {\displaystyle \sum _{i}v_{i}x_{i}} is zero.
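
The key identity of the derivation, {\displaystyle x^{\mathrm {T} }Gx=\left\|\sum _{i}v_{i}x_{i}\right\|^{2}}, can be checked numerically; a minimal sketch (assuming NumPy, with hypothetical random vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.standard_normal((4, 3))         # three vectors in R^4, stored as columns
    G = V.T @ V
    x = rng.standard_normal(3)

    lhs = x @ G @ x                         # quadratic form x^T G x
    rhs = np.linalg.norm(V @ x) ** 2        # squared norm of sum_i x_i v_i
    print(np.isclose(lhs, rhs))             # True, so x^T G x >= 0 for every x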

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

{\displaystyle G(x_{1},\dots ,x_{n})={\begin{vmatrix}\langle x_{1},x_{1}\rangle &\langle x_{1},x_{2}\rangle &\dots &\langle x_{1},x_{n}\rangle \\\langle x_{2},x_{1}\rangle &\langle x_{2},x_{2}\rangle &\dots &\langle x_{2},x_{n}\rangle \\\vdots &\vdots &\ddots &\vdots \\\langle x_{n},x_{1}\rangle &\langle x_{n},x_{2}\rangle &\dots &\langle x_{n},x_{n}\rangle \end{vmatrix}}.}

If {\displaystyle x_{1},\dots ,x_{n}} are vectors in {\displaystyle \mathbb {R} ^{n}}, then it is the square of the n-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero n-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular.
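
For {\displaystyle n} vectors in {\displaystyle \mathbb {R} ^{n}} this reduces to {\displaystyle G(x_{1},\dots ,x_{n})=(\det V)^{2}}, where the {\displaystyle x_{i}} are the columns of {\displaystyle V}; a minimal check (assuming NumPy, with a hypothetical parallelogram):

    import numpy as np

    V = np.array([[2.0, 1.0],
                  [0.0, 3.0]])              # columns span a parallelogram of area 6
    gram_det = np.linalg.det(V.T @ V)       # Gram determinant
    vol_sq = np.linalg.det(V) ** 2          # squared volume of the parallelotope
    print(np.isclose(gram_det, vol_sq))     # True: both equal 36.0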

The Gram determinant can also be expressed in terms of the exterior product of vectors by

{\displaystyle G(x_{1},\dots ,x_{n})=\|x_{1}\wedge \cdots \wedge x_{n}\|^{2}.}

References

  1. Horn & Johnson 2013, p. 441
    Theorem 7.2.10 Let {\displaystyle v_{1},\ldots ,v_{m}} be vectors in an inner product space {\displaystyle V} with inner product {\displaystyle \langle \cdot ,\cdot \rangle } and let {\displaystyle G=[\langle v_{j},v_{i}\rangle ]_{i,j=1}^{m}\in M_{m}}. Then
    (a) {\displaystyle G} is Hermitian and positive-semidefinite.
    (b) {\displaystyle G} is positive-definite if and only if the vectors {\displaystyle v_{1},\ldots ,v_{m}} are linearly independent.
    (c) {\displaystyle \operatorname {rank} G=\dim \operatorname {span} \{v_{1},\ldots ,v_{m}\}}.
  2. Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72.
