Gram matrix

In linear algebra, the Gram matrix (Gramian matrix or Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$.[1]

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
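A minimal numerical sketch of this test (Python with NumPy assumed; the three vectors are arbitrary examples, the third chosen as the sum of the first two):

    import numpy as np

    v = [np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0]),
         np.array([1.0, 1.0, 2.0])]       # v3 = v1 + v2: a linearly dependent set

    G = np.array([[np.dot(vi, vj) for vj in v] for vi in v])   # G_ij = <v_i, v_j>
    print(np.linalg.det(G))               # ~0: the Gram determinant vanishes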

It is named after Jørgen Pedersen Gram.

Examples

For finite-dimensional real vectors with the usual Euclidean dot product, the Gram matrix is simply $G = V^{\mathrm{T}} V$ (or $G = \bar{V}^{\mathrm{T}} V$ for complex vectors, using the conjugate transpose), where $V$ is a matrix whose columns are the vectors $v_k$.
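A short sketch of this construction (NumPy assumed; the matrices below are illustrative examples), covering both the real and the complex case:

    import numpy as np

    V = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])            # columns are v_1, v_2 in R^3
    G = V.T @ V                           # real case: G = V^T V

    W = np.array([[1.0 + 1.0j, 0.0],
                  [0.0,        1.0j]])    # columns are complex example vectors
    G_c = W.conj().T @ W                  # complex case: conjugate transpose of W, times W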

Most commonly, the vectors are elements of a Euclidean space, or are functions in an $L^2$ space, such as continuous functions on a compact interval $[a, b]$ (which are a subspace of $L^2([a, b])$).

Given real-valued functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is given by the standard inner product on functions:

$G_{ij} = \int_{t_0}^{t_f} \ell_i(\tau)\, \bar{\ell_j}(\tau)\, d\tau.$
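A sketch of this function-space Gram matrix, approximated by numerical quadrature (SciPy's quad is assumed; the functions $1, t, t^2$ on $[0, 1]$ are example choices):

    import numpy as np
    from scipy.integrate import quad

    t0, tf = 0.0, 1.0
    ells = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # example real-valued l_i

    n = len(ells)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # G_ij = integral over [t0, tf] of l_i(t) * conj(l_j(t)) dt
            # (the functions here are real, so conjugation is trivial)
            G[i, j], _ = quad(lambda t: ells[i](t) * ells[j](t), t0, tf)

    # G comes out as the 3x3 Hilbert matrix: G_ij = 1 / (i + j + 1)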

For a general bilinear form $B$ on a finite-dimensional vector space over any field, we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.
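An illustrative sketch (NumPy assumed; the form $B(x, y) = x^{\mathrm{T}} A y$ with the symmetric matrix $A$ below is a hypothetical example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                        # symmetric, so B is a symmetric form
    B = lambda x, y: x @ A @ y                        # B(x, y) = x^T A y

    vs = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
    G = np.array([[B(x, y) for y in vs] for x in vs]) # G_ij = B(v_i, v_j); symmetric here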

Applications

  • In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a coordinate chart $\phi : U \to \mathbb{R}^n$ for $(x_1, \ldots, x_k) \in U \subset \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:
    $\omega = \sqrt{\det G}\ dx_1 \cdots dx_k, \qquad G = \left( \left\langle \tfrac{\partial \phi}{\partial x_i}, \tfrac{\partial \phi}{\partial x_j} \right\rangle \right).$

This generalizes the classical surface integral of a parametrized surface $\phi : U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$:

$\int_S f\ dA = \iint_U f(\phi(x, y)) \left| \tfrac{\partial \phi}{\partial x} \times \tfrac{\partial \phi}{\partial y} \right| \, dx \, dy.$
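A numerical check that $\sqrt{\det G}$ agrees with the cross-product area element (a sketch assuming NumPy; the paraboloid $\phi(x, y) = (x, y, x^2 + y^2)$ and the sample point are arbitrary choices):

    import numpy as np

    def tangents(x, y):
        # tangent vectors of phi(x, y) = (x, y, x^2 + y^2)
        return np.array([1.0, 0.0, 2.0 * x]), np.array([0.0, 1.0, 2.0 * y])

    px, py = tangents(0.3, -0.7)                      # an arbitrary sample point
    G = np.array([[px @ px, px @ py],
                  [py @ px, py @ py]])                # Gramian of the tangent vectors

    area_gram = np.sqrt(np.linalg.det(G))             # sqrt(det G)
    area_cross = np.linalg.norm(np.cross(px, py))     # |phi_x x phi_y|
    assert np.isclose(area_gram, area_cross)          # the two area elements agree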

Properties

Positive semidefinite

The Gramian matrix is positive semidefinite, and every symmetric positive semidefinite matrix is the Gramian matrix for some set of vectors. Further, in finite dimensions it determines the vectors up to isomorphism, i.e. any two sets of vectors with the same Gramian matrix must be related by a single unitary matrix. These facts follow from taking the spectral decomposition of any positive semidefinite matrix $P$, so that $P = U D U^H = (U\sqrt{D})(U\sqrt{D})^H$ and so $P$ is the Gramian matrix of the columns of $U\sqrt{D}$. The Gramian matrix of any orthonormal basis is the identity matrix. The infinite-dimensional analog of this statement is Mercer's theorem.
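A sketch of the factorization just described (NumPy assumed; the matrix $P$ is an example real positive semidefinite matrix):

    import numpy as np

    P = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # example real positive semidefinite matrix

    d, U = np.linalg.eigh(P)            # spectral decomposition: P = U diag(d) U^T
    X = U * np.sqrt(d)                  # X = U sqrt(D): scales each eigenvector column
    assert np.allclose(X @ X.T, P)      # P = (U sqrt(D))(U sqrt(D))^H, as in the text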

Derivation of positive semidefiniteness

The fact that the Gramian matrix is positive semidefinite can be seen from the following simple derivation:

$x^T \mathbf{G} x = \sum_{i,j} \langle v_i, v_j \rangle x_i x_j = \sum_{i,j} \langle v_i x_i, v_j x_j \rangle = \Big\langle \sum_i v_i x_i, \sum_j v_j x_j \Big\rangle = \Big\langle \sum_i v_i x_i, \sum_i v_i x_i \Big\rangle \geq 0$

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent: in that case $\sum_i v_i x_i$ is nonzero for every nonzero $x$, so the final inner product is strictly positive.
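A numerical sanity check of the identity $x^T G x = \left\| \sum_i v_i x_i \right\|^2$ (a sketch assuming NumPy; the random real vectors are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.standard_normal((4, 3))       # columns v_1, v_2, v_3 in R^4
    G = V.T @ V                           # their (real) Gram matrix
    x = rng.standard_normal(3)

    lhs = x @ G @ x                       # x^T G x
    rhs = np.linalg.norm(V @ x) ** 2      # || sum_i v_i x_i ||^2
    assert np.isclose(lhs, rhs) and lhs >= 0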

Change of basis

Under a change of basis to an orthogonal basis, represented by an orthogonal matrix $P$, the Gram matrix changes by the matrix congruence $P^{\mathrm{T}} G P$.
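A sketch verifying the congruence (NumPy assumed; the change-of-basis matrix is a randomly generated orthogonal example, though the same identity holds for any invertible $P$):

    import numpy as np

    rng = np.random.default_rng(1)
    V = rng.standard_normal((4, 3))                   # columns are the original vectors
    G = V.T @ V

    P, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix P
    G_new = (V @ P).T @ (V @ P)                       # Gram matrix of the re-expressed vectors
    assert np.allclose(G_new, P.T @ G @ P)            # matrix congruence, as stated above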

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

$G(x_1, \dots, x_n) = \begin{vmatrix} \langle x_1, x_1 \rangle & \langle x_1, x_2 \rangle & \dots & \langle x_1, x_n \rangle \\ \langle x_2, x_1 \rangle & \langle x_2, x_2 \rangle & \dots & \langle x_2, x_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle x_n, x_1 \rangle & \langle x_n, x_2 \rangle & \dots & \langle x_n, x_n \rangle \end{vmatrix}.$

Geometrically, the Gram determinant is the square of the volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the Gram determinant is nonzero (if and only if the Gram matrix is nonsingular).
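A sketch of the volume interpretation for two example vectors in $\mathbb{R}^3$ (NumPy assumed), comparing $\sqrt{\det G}$ with the cross-product area:

    import numpy as np

    x1 = np.array([1.0, 0.0, 0.0])        # example vectors spanning a parallelogram
    x2 = np.array([1.0, 2.0, 0.0])

    G = np.array([[x1 @ x1, x1 @ x2],
                  [x2 @ x1, x2 @ x2]])    # Gram matrix [[1, 1], [1, 5]]
    vol = np.sqrt(np.linalg.det(G))       # sqrt(4) = 2, the parallelogram's area
    assert np.isclose(vol, np.linalg.norm(np.cross(x1, x2)))   # cross-product area is also 2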

The Gram determinant can also be expressed in terms of the exterior product of vectors by

$G(x_1, \dots, x_n) = \| x_1 \wedge \cdots \wedge x_n \|^2.$

References

  1. Horn & Johnson 2013, p. 441.
     Theorem 7.2.10 Let $v_1, \ldots, v_m$ be vectors in an inner product space $V$ with inner product $\langle \cdot, \cdot \rangle$ and let $G = [\langle v_j, v_i \rangle]_{i,j=1}^m \in M_m$. Then
     (a) $G$ is Hermitian and positive-semidefinite;
     (b) $G$ is positive-definite if and only if the vectors $v_1, \ldots, v_m$ are linearly independent;
     (c) $\operatorname{rank} G = \dim \operatorname{span}\{v_1, \ldots, v_m\}$.
  2. Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72.
