Gram matrix

In linear algebra, the Gramian matrix (or Gram matrix or Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the symmetric matrix of inner products, whose entries are given by $G_{ij} = (v_i|v_j)$.

An important application is to determine linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
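For illustration, a minimal NumPy sketch (the vectors below are arbitrary choices, not taken from the article): the Gram determinant distinguishes an independent pair from a dependent triple.

```python
import numpy as np

# Arbitrary example vectors: v3 is a linear combination of v1 and v2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2

def gram(vectors):
    """Gram matrix G[i, j] = <v_i, v_j> for the standard inner product."""
    V = np.stack(vectors)   # rows are the vectors
    return V @ V.T          # pairwise inner products

print(np.linalg.det(gram([v1, v2])))      # nonzero -> linearly independent
print(np.linalg.det(gram([v1, v2, v3])))  # ~0 -> linearly dependent
```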

It is named for Jørgen Pedersen Gram.

Examples

Most commonly, the vectors are elements of a Euclidean space, or are functions in an $L^2$ space, such as continuous functions on a compact interval $[a,b]$ (which form a subspace of $L^2([a,b])$).

Given real-valued functions $\{l_i(\cdot),\ i=1,\dots,n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is given by the standard inner product on functions: $G_{ij} = \int_{t_0}^{t_f} l_i(\tau)\, l_j(\tau)\, d\tau$.
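A numerical sketch of this formula, assuming for illustration the monomials $l_i(t) = t^i$ on $[0, 1]$ (any square-integrable functions would do); for this choice $G_{ij} = 1/(i+j+1)$, the 3×3 Hilbert matrix.

```python
import numpy as np
from scipy.integrate import quad

# Assumed example functions: l_i(t) = t**i on [t0, tf] = [0, 1].
funcs = [lambda t, i=i: t**i for i in range(3)]
t0, tf = 0.0, 1.0

n = len(funcs)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # G[i, j] = integral of l_i(tau) * l_j(tau) over [t0, tf]
        G[i, j], _ = quad(lambda t: funcs[i](t) * funcs[j](t), t0, tf)

print(G)  # ~[[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]
```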

Given a matrix $A$, the matrix $A^{\mathrm{T}}A$ is the Gram matrix of the columns of $A$, while the matrix $AA^{\mathrm{T}}$ is the Gram matrix of the rows of $A$.
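This is easy to verify numerically (a sketch with a randomly generated matrix): the $(i, j)$ entry of $A^{\mathrm{T}}A$ is the inner product of columns $i$ and $j$ of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

G_cols = A.T @ A   # Gram matrix of the columns of A
G_rows = A @ A.T   # Gram matrix of the rows of A

# Compare G_cols against explicit column-by-column inner products.
manual = np.array([[A[:, i] @ A[:, j] for j in range(3)] for i in range(3)])
print(np.allclose(G_cols, manual))  # True
```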

For a general bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{i,j} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is.
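As a concrete sketch (the form and vectors are arbitrary illustrations, here over the reals), take the bilinear form $B(x, y) = x^{\mathrm{T}} M y$ on $\mathbb{R}^2$ with a non-symmetric $M$; the resulting Gram matrix is then not symmetric either.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # non-symmetric, so B is not symmetric

def B(x, y):
    """Bilinear form B(x, y) = x^T M y."""
    return x @ M @ y

vs = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
G = np.array([[B(vi, vj) for vj in vs] for vi in vs])

print(G)                     # Gram matrix of vs with respect to B
print(np.allclose(G, G.T))   # False, mirroring the asymmetry of B
```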

Applications

Properties

Positive semidefinite

The Gramian matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. This set of vectors is not in general unique: the Gramian matrix of any orthonormal basis is the identity matrix.
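One standard way to recover such a set of vectors from a positive semidefinite matrix is through its eigendecomposition (a sketch of one construction; a Cholesky factorization would serve equally well for positive definite matrices).

```python
import numpy as np

# Build an arbitrary positive semidefinite matrix as B^T B.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
G = B.T @ B

# Eigendecomposition G = U diag(w) U^T with w >= 0 (up to round-off).
w, U = np.linalg.eigh(G)
V = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

# The columns of V realize G as their Gram matrix.
print(np.allclose(V.T @ V, G))  # True
```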

The infinite-dimensional analog of this statement is Mercer's theorem.

Change of basis

Under a change of basis represented by an invertible matrix $P$, the Gram matrix changes by a matrix congruence to $P^{\mathrm{T}} G P$.
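A quick numerical check of this rule (an illustrative sketch with random data): if the columns of $V$ are the original vectors and the new vectors are formed as $VP$, the new Gram matrix equals $P^{\mathrm{T}} G P$.

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((4, 3))  # columns are the original vectors
P = rng.standard_normal((3, 3))  # change of basis (invertible for generic draws)

G = V.T @ V                      # Gram matrix of the original vectors
W = V @ P                        # columns of W are the new basis vectors

print(np.allclose(W.T @ W, P.T @ G @ P))  # True: congruence transform
```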

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

$$
G(x_1, \dots, x_n) =
\begin{vmatrix}
(x_1|x_1) & (x_1|x_2) & \dots & (x_1|x_n) \\
(x_2|x_1) & (x_2|x_2) & \dots & (x_2|x_n) \\
\vdots & \vdots & & \vdots \\
(x_n|x_1) & (x_n|x_2) & \dots & (x_n|x_n)
\end{vmatrix}.
$$

Geometrically, the Gram determinant is the square of the volume of the parallelepiped formed by the vectors. In particular, the vectors are linearly independent if and only if the Gram determinant is nonzero (if and only if the Gram matrix is nonsingular).
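For two vectors in $\mathbb{R}^3$ this can be checked directly (a sketch with arbitrary vectors): the Gram determinant equals the squared area of the parallelogram they span, which is also $\|x_1 \times x_2\|^2$.

```python
import numpy as np

x1 = np.array([1.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 3.0])

G = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])

gram_det = np.linalg.det(G)
area_sq = np.linalg.norm(np.cross(x1, x2)) ** 2  # squared parallelogram area

print(np.isclose(gram_det, area_sq))  # True (both equal 46)
```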
