Matrix of inner products of a set of vectors

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product $G_{ij} = \left\langle v_i, v_j \right\rangle$. If the vectors $v_1, \dots, v_n$ are the columns of matrix $X$ then the Gram matrix is $X^\dagger X$ in the general case that the vector coordinates are complex numbers, which simplifies to $X^\top X$ for the case that the vector coordinates are real numbers.
An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
It is named after Jørgen Pedersen Gram.
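As a concrete illustration of the definitions above, here is a minimal NumPy sketch (the vectors and values are arbitrary illustrative choices, not from any source) that builds $G = X^\top X$ and applies the determinant test:

```python
import numpy as np

def gram(X: np.ndarray) -> np.ndarray:
    """Gram matrix of the columns of X: G = X^dagger X (X^T X when real)."""
    return X.conj().T @ X

# Two linearly independent vectors v1 = (1, 0, 2), v2 = (0, 1, 2) in R^3,
# stored as the columns of X.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])
G = gram(X)
print(np.linalg.det(G))            # 9.0 != 0, so the columns are independent

# Replace the second column with a multiple of the first: determinant -> 0.
Y = np.column_stack([X[:, 0], 3 * X[:, 0]])
print(np.linalg.det(gram(Y)))      # 0.0 (up to rounding): dependent columns
```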
Examples
For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is $G = V^\top V$, where $V$ is a matrix whose columns are the vectors $v_k$ and $V^\top$ is its transpose whose rows are the vectors $v_k^\top$. For complex vectors in $\mathbb{C}^n$, $G = V^\dagger V$, where $V^\dagger$ is the conjugate transpose of $V$.
Given square-integrable functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is:

$$G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\, \ell_j(\tau)\, d\tau,$$

where $\ell_i^*(\tau)$ is the complex conjugate of $\ell_i(\tau)$.
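A short numerical sketch of this function-space case, assuming SciPy is available for quadrature; the interval $[0, 1]$ and the functions $1, t, t^2$ are illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

funcs = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # 1, t, t^2 on [0, 1]

n = len(funcs)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # <f_i, f_j> = integral of f_i(t) f_j(t) dt (real case, no conjugate)
        G[i, j], _ = quad(lambda t: funcs[i](t) * funcs[j](t), 0.0, 1.0)

print(G)   # the 3x3 Hilbert matrix: [[1, 1/2, 1/3], [1/2, 1/3, 1/4], ...]
```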
For any bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.
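For instance, a small sketch over $\mathbb{R}$ with an arbitrarily chosen symmetric matrix $A$ defining the form $B(x, y) = x^\top A y$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # symmetric, so B is symmetric
B = lambda x, y: x @ A @ y                 # bilinear form B(x, y) = x^T A y

vs = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
G = np.array([[B(v, w) for w in vs] for v in vs])
print(G)                                   # [[2, 3], [3, 7]], symmetric
```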
Applications
- In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a parametrization $\phi: U \to M$ for $(x_1, \ldots, x_k) \in U \subset \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: $$\omega = \sqrt{\det G}\ dx_1 \cdots dx_k, \quad G = \left[ \left\langle \frac{\partial\phi}{\partial x_i}, \frac{\partial\phi}{\partial x_j} \right\rangle \right].$$ This generalizes the classical surface integral of a parametrized surface $\phi: U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$: $$\int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x} \times \frac{\partial\phi}{\partial y}\right|\, dx\, dy.$$
- If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector (see the sketch after this list).
- In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
- In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
- Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
- In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
- In machine learning, kernel functions are often represented as Gram matrices. (Also see kernel PCA)
- Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. The diagonalization of the Gram matrix is the singular value decomposition.
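The sketch below illustrates the covariance item above; the sample size, the number of variables, and the random data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(10000, 3))       # 10000 observations of 3 variables
centered = samples - samples.mean(axis=0)   # center each variable

gramian = centered.T @ centered             # 3x3 Gram matrix of the variables
print(gramian / len(samples))               # ~ np.cov(samples.T, bias=True)
```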
Properties
Positive-semidefiniteness
The Gram matrix is symmetric when the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.
The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

$$x^\dagger \mathbf{G} x = \sum_{i,j} x_i^* x_j \left\langle v_i, v_j \right\rangle = \sum_{i,j} \left\langle x_i v_i, x_j v_j \right\rangle = \biggl\langle \sum_i x_i v_i, \sum_j x_j v_j \biggr\rangle = \biggl\| \sum_i x_i v_i \biggr\|^2 \geq 0.$$
The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors $v_i$ are linearly independent (that is, $\sum_i x_i v_i \neq 0$ for all nonzero $x$).
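A numerical sanity check of this derivation, with random vectors and illustrative sizes: $x^\dagger \mathbf{G} x$ agrees with the squared norm, and every eigenvalue of $\mathbf{G}$ is nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(5, 3))                 # three vectors in R^5 (columns)
G = V.T @ V                                 # their Gram matrix

x = rng.normal(size=3)
lhs = x @ G @ x                             # x^T G x
rhs = np.linalg.norm(V @ x) ** 2            # ||sum_i x_i v_i||^2
print(np.isclose(lhs, rhs))                 # True
print(np.linalg.eigvalsh(G).min() >= -1e-12)  # True: no negative eigenvalues
```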
Finding a vector realization
See also: Positive definite matrix § Decomposition

Given any positive semidefinite matrix $M$, one can decompose it as:
- $M = B^\dagger B$,

where $B^\dagger$ is the conjugate transpose of $B$ (or $M = B^\top B$ in the real case).

Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$.
The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as $n$ vectors in $\mathbb{C}^k$ (or $k$-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then

$$M_{ij} = b^{(i)} \cdot b^{(j)},$$

where the dot product $a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell$ is the usual inner product on $\mathbb{C}^k$.
Thus a Hermitian matrix $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem.
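A minimal sketch of recovering a vector realization via the Cholesky route mentioned above; the matrix $M$ is an arbitrary positive definite example (real case shown):

```python
import numpy as np

M = np.array([[4.0, 2.0],
              [2.0, 2.0]])                  # positive definite example

L = np.linalg.cholesky(M)                   # lower-triangular L with M = L L^T
B = L.T                                     # so that M = B^T B
print(np.allclose(B.T @ B, M))              # True: the columns of B realize M
```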
Uniqueness of vector realizations
If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$ then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Q v_1, \dots, Q v_n$ is also $M$.
This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and 0 to 0.
The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$ then there is a unitary $k \times k$ matrix $U$ (meaning $U^\dagger U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$.
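A quick numerical check of the invariance direction of this claim (real case; the sizes and random data are illustrative, and the orthogonal $Q$ comes from a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.normal(size=(4, 3))                  # columns: three vectors in R^4
Q, _ = np.linalg.qr(rng.normal(size=(4, 4))) # a random 4x4 orthogonal matrix

# The Gram matrix is unchanged when every vector is transformed by Q.
print(np.allclose(V.T @ V, (Q @ V).T @ (Q @ V)))   # True
```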
Other properties
- Because $G = G^\dagger$, it is necessarily the case that $G$ and $G^\dagger$ commute. That is, a real or complex Gram matrix $G$ is also a normal matrix.
- The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real rotation matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix.
- The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors.
Gram determinant
The Gram determinant or Gramian is the determinant of the Gram matrix:

$$\bigl|G(v_1, \dots, v_n)\bigr| = \begin{vmatrix}
\langle v_1, v_1\rangle & \langle v_1, v_2\rangle & \dots & \langle v_1, v_n\rangle \\
\langle v_2, v_1\rangle & \langle v_2, v_2\rangle & \dots & \langle v_2, v_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle v_n, v_1\rangle & \langle v_n, v_2\rangle & \dots & \langle v_n, v_n\rangle
\end{vmatrix}.$$
If $v_1, \dots, v_n$ are vectors in $\mathbb{R}^m$ then it is the square of the $n$-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero $n$-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When $n > m$ the determinant and volume are zero. When $n = m$, this reduces to the standard theorem that the absolute value of the determinant of $n$ $n$-dimensional vectors is the $n$-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is Volume(parallelotope) / $n!$.
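A small sketch of these volume formulas, using two vectors chosen to span a unit square inside $\mathbb{R}^3$:

```python
import numpy as np
from math import factorial

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])                  # columns (1,0,0), (0,1,0) in R^3
G = V.T @ V                                 # Gram matrix (here the identity)

n = V.shape[1]
vol_parallelotope = np.sqrt(np.linalg.det(G))   # 1.0: area of the unit square
vol_simplex = vol_parallelotope / factorial(n)  # 0.5: area of the triangle
print(vol_parallelotope, vol_simplex)
```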
The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$\bigl|G(v_1, \dots, v_n)\bigr| = \| v_1 \wedge \cdots \wedge v_n\|^2.$$
The Gram determinant therefore supplies an inner product for the space ${\textstyle\bigwedge}^{n}(V)$. If an orthonormal basis $e_i$, $i = 1, 2, \dots, m$, on $V$ is given, the vectors

$$e_{i_1} \wedge \cdots \wedge e_{i_n}, \quad i_1 < \cdots < i_n,$$

will constitute an orthonormal basis of $n$-dimensional volumes on the space ${\textstyle\bigwedge}^{n}(V)$. Then the Gram determinant $\bigl|G(v_1, \dots, v_n)\bigr|$ amounts to an $n$-dimensional Pythagorean theorem for the volume of the parallelotope $v_1 \wedge \cdots \wedge v_n$ in terms of its projections onto the basis volumes $e_{i_1} \wedge \cdots \wedge e_{i_n}$.
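This reading can be checked numerically: with the standard basis of $\mathbb{R}^m$, the squared projections are the squared $n \times n$ minors of the matrix of column vectors, and their sum equals the Gram determinant (the Cauchy–Binet formula). A sketch with illustrative random data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
V = rng.normal(size=(4, 2))                 # n = 2 vectors in R^m, m = 4
G = V.T @ V

# Sum of squared 2x2 minors over all pairs of coordinate axes.
minors_sq = sum(np.linalg.det(V[list(rows), :]) ** 2
                for rows in combinations(range(4), 2))
print(np.isclose(np.linalg.det(G), minors_sq))   # True (Cauchy-Binet)
```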
When the vectors $v_1, \ldots, v_n \in \mathbb{R}^m$ are defined from the positions of points $p_1, \ldots, p_n$ relative to some reference point $p_{n+1}$,

$$(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1},\, p_2 - p_{n+1},\, \ldots,\, p_n - p_{n+1}),$$
then the Gram determinant can be written as the difference of two Gram determinants,

$$\bigl|G(v_1, \dots, v_n)\bigr| = \bigl|G((p_1, 1), \dots, (p_{n+1}, 1))\bigr| - \bigl|G(p_1, \dots, p_{n+1})\bigr|,$$

where each $(p_j, 1)$ is the corresponding point $p_j$ supplemented with the coordinate value of 1 for an $(m+1)$-st dimension. Note that in the common case that $n = m$, the second term on the right-hand side will be zero.
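A numerical check of this identity with illustrative random points, taking $n = 2$ and $m = 3$ so that the second term is nonzero:

```python
import numpy as np

rng = np.random.default_rng(4)
p = rng.normal(size=(3, 3))                 # points p1, p2, p3 (rows) in R^3
v = p[:2] - p[2]                            # v_i = p_i - p_3 for i = 1, 2

def gram_det(rows):
    """Gram determinant of the row vectors of `rows`."""
    A = np.asarray(rows)
    return np.linalg.det(A @ A.T)

aug = np.hstack([p, np.ones((3, 1))])       # each point gets a 1 appended
print(np.isclose(gram_det(v), gram_det(aug) - gram_det(p)))   # True
```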
Constructing an orthonormal basis
Given a set of linearly independent vectors $\{v_i\}$ with Gram matrix $G$ defined by $G_{ij} := \langle v_i, v_j\rangle$, one can construct an orthonormal basis

$$u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji} v_j.$$
In matrix notation, $U = V G^{-1/2}$, where $U$ has orthonormal basis vectors $\{u_i\}$ and the matrix $V$ is composed of the given column vectors $\{v_i\}$.
The matrix $G^{-1/2}$ is guaranteed to exist. Indeed, $G$ is Hermitian, and so can be decomposed as $G = W D W^\dagger$ with $W$ a unitary matrix and $D$ a real diagonal matrix. Additionally, the $v_i$ are linearly independent if and only if $G$ is positive definite, which implies that the diagonal entries of $D$ are positive. $G^{-1/2}$ is therefore uniquely defined by $G^{-1/2} := W D^{-1/2} W^\dagger$. One can check that these new vectors are orthonormal:

$$\begin{align}
\langle u_i, u_j \rangle
&= \sum_{i'} \sum_{j'} \Bigl\langle \bigl(G^{-1/2}\bigr)_{i'i} v_{i'},\, \bigl(G^{-1/2}\bigr)_{j'j} v_{j'} \Bigr\rangle \\
&= \sum_{i'} \sum_{j'} \bigl(G^{-1/2}\bigr)_{ii'} G_{i'j'} \bigl(G^{-1/2}\bigr)_{j'j} \\
&= \bigl(G^{-1/2} G G^{-1/2}\bigr)_{ij} = \delta_{ij},
\end{align}$$

where we used $\bigl(G^{-1/2}\bigr)^\dagger = G^{-1/2}$.
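A sketch of this construction, forming $G^{-1/2}$ from an eigendecomposition as above and checking orthonormality; the vector dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.normal(size=(5, 3))                 # independent columns (generically)
G = V.T @ V

w, P = np.linalg.eigh(G)                    # G = P diag(w) P^T with w > 0
G_inv_sqrt = P @ np.diag(w ** -0.5) @ P.T   # the unique p.d. G^{-1/2}

U = V @ G_inv_sqrt                          # U = V G^{-1/2}
print(np.allclose(U.T @ U, np.eye(3)))      # True: orthonormal columns
```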
References
- Horn & Johnson 2013, p. 441, Theorem 7.2.10
- Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research. 5: 27–72.
- Horn & Johnson (2013), p. 452, Theorem 7.3.11
- Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 978-0-521-54823-6.
External links
- "Gram matrix", Encyclopedia of Mathematics, EMS Press, 2001
- Volumes of parallelograms by Frank Jones