Euclidean subspace

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
In linear algebra, a '''Euclidean subspace''' (or '''subspace of R<sup>''n''</sup>''') is a set of vectors that is closed under addition and scalar multiplication. Geometrically, a subspace is a flat in ''n''-dimensional Euclidean space that passes through the origin. Examples of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix.<ref>Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.</ref>

In abstract linear algebra, Euclidean subspaces are important examples of vector spaces. In this context, a Euclidean subspace is simply a linear subspace of a Euclidean space.

==Note on vectors and '''R'''<sup>''n''</sup>==
In mathematics, '''R'''<sup>''n''</sup> denotes the set of all vectors with ''n'' real components:
:<math>\textbf{R}^n = \left\{(x_1, x_2, \ldots, x_n) : x_1,x_2,\ldots,x_n \in \textbf{R} \right\}</math><ref>This equation uses set-builder notation. The same notation will be used throughout this article.</ref>
Here the word '''vector''' refers to any ordered list of numbers. Vectors can be written either as ordered tuples or as columns of numbers:
:<math>(x_1, x_2, \ldots, x_n) = \left[ \begin{matrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{matrix} \right]</math><ref>To add to the confusion, there is also an object called a row vector, usually written <math>[\, x_1 \; x_2 \; \cdots \; x_n \,]</math>. Some books identify ordered tuples with row vectors instead of column vectors.</ref>
Geometrically, we regard vectors with ''n'' components as points in an ''n''-dimensional space. That is, we identify the set '''R'''<sup>''n''</sup> with ''n''-dimensional Euclidean space. Any subset of '''R'''<sup>''n''</sup> can be thought of as a geometric object (namely the object consisting of all the points in the subset). Using this mode of thought, a line in three-dimensional space is the same as the set of points on the line, and is therefore just a subset of '''R'''<sup>3</sup>.

==Definition==
A '''Euclidean subspace''' is a subset ''S'' of '''R'''<sup>''n''</sup> with the following properties:
# The zero vector '''0''' is an element of ''S''.
# If '''u''' and '''v''' are elements of ''S'', then {{nowrap| '''u''' + '''v'''}} is an element of ''S''.
# If '''v''' is an element of ''S'' and ''c'' is a scalar, then ''c'''''v''' is an element of ''S''.
There are several common variations on these requirements, all of which are logically equivalent to the list above.<ref>The requirement that ''S'' contains the zero vector is equivalent to requiring that ''S'' is nonempty. (Once ''S'' contains any single vector '''v''', it must contain 0'''v''' by property&nbsp;3, and therefore must contain the zero vector.)</ref>
<ref>The second and third requirements can be combined into the following statement: If '''u''' and '''v''' are elements of ''S'' and ''b'' and ''c'' are scalars, then
{{nowrap| ''b'''''u''' + ''c'''''v'''}} is an element of&nbsp;''S''.</ref>

Because subspaces are closed under both addition and scalar multiplication, any linear combination of vectors from a subspace is again in the subspace. That is, if
{{nowrap| '''v'''<sub>1</sub>, '''v'''<sub>2</sub>, ..., '''v'''<sub>''k''</sub>}}
are elements of a subspace ''S'', and
{{nowrap| ''c''<sub>1</sub>, ''c''<sub>2</sub>, ..., ''c<sub>k</sub>''}}
are scalars, then

:''c''<sub>1</sub> '''v'''<sub>1</sub> + ''c''<sub>2</sub> '''v'''<sub>2</sub> + · · · + ''c<sub>k</sub>'' '''v'''<sub>''k''</sub>

is again an element of ''S''.
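This closure property lends itself to a quick numeric spot-check. Below is a minimal sketch (not from the article) using exact arithmetic via Python's `fractions`; the set ''S'' is the plane ''x'' + ''y'' + ''z'' = 0 in '''R'''<sup>3</sup>, which is a subspace:

```python
from fractions import Fraction

def in_S(v):
    """Membership test for S = {(x, y, z) : x + y + z = 0}."""
    return sum(v) == 0

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u = (Fraction(1), Fraction(2), Fraction(-3))
v = (Fraction(4), Fraction(-4), Fraction(0))

# Property 1: the zero vector lies in S.
zero_ok = in_S((0, 0, 0))
# Property 2: S is closed under addition.
add_ok = in_S(u) and in_S(v) and in_S(add(u, v))
# Property 3: S is closed under scalar multiplication.
scale_ok = in_S(scale(Fraction(7, 3), u))
```

A finite check like this does not prove closure, of course; it merely illustrates the three defining properties on sample vectors.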

==Geometric description==
Geometrically, a subspace of '''R'''<sup>''n''</sup> is simply a flat through the origin, i.e. a copy of a lower-dimensional (or equi-dimensional) Euclidean space sitting in ''n'' dimensions. For example, there are four different types of subspaces in '''R'''<sup>3</sup>:
# The singleton set {{nowrap| { (0, 0, 0) } }} is a '''zero-dimensional''' subspace of '''R'''<sup>3</sup>.
# Any line through the origin is a '''one-dimensional''' subspace of '''R'''<sup>3</sup>.
# Any plane through the origin is a '''two-dimensional''' subspace of '''R'''<sup>3</sup>.
# The entire set '''R'''<sup>3</sup> is a '''three-dimensional''' subspace of itself.
In '''R'''<sup>''n''</sup>, there are subspaces of every dimension from 0 to ''n''.

The geometric dimension of a subspace is the same as the number of vectors required for a basis (see below).

==Systems of linear equations==
The solution set to any homogeneous system of linear equations with ''n'' variables is a subspace of '''R'''<sup>''n''</sup>:

:<math>\left\{ \left[ \begin{matrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{matrix} \right] \in \textbf{R}^n : \begin{alignat}{6}
a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \cdots + \;&& a_{1n} x_n &&\; = 0& \\
a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \cdots + \;&& a_{2n} x_n &&\; = 0& \\
\vdots\;\;\; && && \vdots\;\;\; && && \vdots\;\;\; && \vdots\,& \\
a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \cdots + \;&& a_{mn} x_n &&\; = 0&
\end{alignat} \right\} </math>

For example, the set of all vectors {{nowrap| (''x'', ''y'', ''z'') }} satisfying the equations

:<math>x + 3y + 2z = 0 \;\;\;\;\text{and}\;\;\;\; 2x - 4y + 5z = 0</math>

is a one-dimensional subspace of '''R'''<sup>3</sup>. More generally, the solution set of any such system is the null space of its coefficient matrix, so the dimension of the subspace is the dimension of that null space.
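For the two equations above, a direction vector for the solution line can be obtained as the cross product of the two normal vectors. A small sketch (plain integer arithmetic; the `cross` helper is purely illustrative):

```python
def cross(a, b):
    """Cross product of two vectors in R^3."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

n1 = (1, 3, 2)   # normal vector of x + 3y + 2z = 0
n2 = (2, -4, 5)  # normal vector of 2x - 4y + 5z = 0
d = cross(n1, n2)  # d == (23, -1, -10), a direction of the solution line

# d satisfies both homogeneous equations:
eq1 = d[0] + 3*d[1] + 2*d[2]    # = 0
eq2 = 2*d[0] - 4*d[1] + 5*d[2]  # = 0
```

The solution set is exactly the set of scalar multiples of `d`, confirming that it is a one-dimensional subspace.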

===Null space of a matrix===
{{main|Null space}}
In linear algebra, a homogeneous system of linear equations can be written as a single matrix equation:

:<math>A\textbf{x} = \textbf{0}</math>

The set of solutions to this equation is known as the null space of the matrix. For example, the subspace of '''R'''<sup>3</sup> described above is the null space of the matrix

:<math>A = \left[ \begin{matrix} 1 & 3 & 2 \\ 2 & -4 & 5 \end{matrix} \right]\text{.}</math>

Every subspace of '''R'''<sup>''n''</sup> can be described as the null space of some matrix (see ''Equations for a subspace'', below).

==Linear parametric equations==
The subset of '''R'''<sup>''n''</sup> described by a system of homogeneous linear parametric equations is a subspace:

:<math>\left\{ \left[ \begin{matrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{matrix} \right] \in \textbf{R}^n : \begin{alignat}{7}
x_1 &&\; = \;&& a_{11} t_1 &&\; + \;&& a_{12} t_2 &&\; + \cdots + \;&& a_{1m} t_m & \\
x_2 &&\; = \;&& a_{21} t_1 &&\; + \;&& a_{22} t_2 &&\; + \cdots + \;&& a_{2m} t_m & \\
\vdots \,&& && \vdots\;\;\; && && \vdots\;\;\; && && \vdots\;\;\; & \\
x_n &&\; = \;&& a_{n1} t_1 &&\; + \;&& a_{n2} t_2 &&\; + \cdots + \;&& a_{nm} t_m & \\
\end{alignat} \text{ for some } t_1,\ldots,t_m\in\textbf{R} \right\} </math>

For example, the set of all vectors {{nowrap| (''x'', ''y'', ''z'') }} parameterized by the equations

:<math>x = 2t_1 + 3t_2,\;\;\;\;y = 5t_1 - 4t_2,\;\;\;\;\text{and}\;\;\;\;z = -t_1 + 2t_2</math>

is a two-dimensional subspace of '''R'''<sup>3</sup>.
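One way to see that this is a plane is to eliminate the parameters: a normal vector for the plane is the cross product of the two coefficient vectors (2, 5, −1) and (3, −4, 2). A small sketch (helper names are illustrative):

```python
def cross(a, b):
    """Cross product of two vectors in R^3."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

v1 = (2, 5, -1)  # coefficient vector of t1
v2 = (3, -4, 2)  # coefficient vector of t2
normal = cross(v1, v2)  # normal == (6, -7, -23)

def point(t1, t2):
    """Evaluate the parametric equations at (t1, t2)."""
    return (2*t1 + 3*t2, 5*t1 - 4*t2, -t1 + 2*t2)

# Every parametrized point satisfies 6x - 7y - 23z = 0:
residuals = [sum(n*x for n, x in zip(normal, point(t1, t2)))
             for t1 in range(-2, 3) for t2 in range(-2, 3)]
```

So the same subspace can be described either parametrically or by the single implicit equation 6''x'' − 7''y'' − 23''z'' = 0.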

===Span of vectors===
{{main|Linear span}}
In linear algebra, the system of parametric equations can be written as a single vector equation:

:<math>\left[ \begin{matrix} x \\ y \\ z \end{matrix} \right] \;=\; t_1 \!\left[ \begin{matrix} 2 \\ 5 \\ -1 \end{matrix} \right] + t_2 \!\left[ \begin{matrix} 3 \\ -4 \\ 2 \end{matrix} \right]</math>

The expression on the right is called a linear combination of the vectors
{{nowrap| (2, 5, −1) }} and {{nowrap| (3, −4, 2)}}. These two vectors are said to '''span''' the resulting subspace.

In general, a '''linear combination''' of vectors
{{nowrap| '''v'''<sub>1</sub>, '''v'''<sub>2</sub>, . . . , '''v'''<sub>''k''</sub> }}
is any vector of the form

:<math>t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k\text{.}</math>

The set of all possible linear combinations is called the '''span''':

:<math>\text{Span} \{ \textbf{v}_1, \ldots, \textbf{v}_k \}
= \left\{ t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k : t_1,\ldots,t_k\in\mathbf{R} \right\} </math>

If the vectors '''v'''<sub>1</sub>,...,'''v'''<sub>''k''</sub> have ''n'' components, then their span is a subspace of '''R'''<sup>''n''</sup>. Geometrically, the span is the flat through the origin in ''n''-dimensional space determined by the points '''v'''<sub>1</sub>,...,'''v'''<sub>''k''</sub>.

; Example
: The ''xz''-plane in '''R'''<sup>3</sup> can be parameterized by the equations
::<math>x = t_1, \;\;\; y = 0, \;\;\; z = t_2</math>

:As a subspace, the ''xz''-plane is spanned by the vectors {{nowrap| (1, 0, 0) }} and {{nowrap| (0, 0, 1)}}. Every vector in the ''xz''-plane can be written as a linear combination of these two:

::<math>(t_1, 0, t_2) = t_1(1,0,0) + t_2(0,0,1)\text{.}\,</math>

:Geometrically, this corresponds to the fact that every point on the ''xz''-plane can be reached from the origin by first moving some distance in the direction of {{nowrap| (1, 0, 0) }} and then moving some distance in the direction of {{nowrap| (0, 0, 1)}}.

===Column space and row space===
{{main|Column space|Row space}}
A system of linear parametric equations can also be written as a single matrix equation:

:<math>\textbf{x} = A\textbf{t}\;\;\;\;\text{where}\;\;\;\;A = \left[ \begin{matrix} 2 & 3 \\ 5 & -4 \\ -1 & 2 \end{matrix} \right]\text{.}</math>

In this case, the subspace consists of all possible values of the vector '''x'''. In linear algebra, this subspace is known as the '''column space''' (or range) of the matrix ''A''. It is precisely the subspace of '''R'''<sup>''n''</sup> spanned by the column vectors of ''A''.

The '''row space''' of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).

===Independence, basis, and dimension===
{{main|Linear independence|Basis (linear algebra)|Dimension (vector space)}}
In general, a subspace of '''R'''<sup>''n''</sup> determined by ''k'' parameters (or spanned by ''k'' vectors) has dimension ''k''. However, there are exceptions to this rule. For example, the subspace of '''R'''<sup>3</sup> spanned by the three vectors {{nowrap| (1, 0, 0)}}, {{nowrap| (0, 0, 1)}}, and
{{nowrap| (2, 0, 3) }} is just the ''xz''-plane, with each point on the plane described by infinitely many different values of {{nowrap| ''t''<sub>1</sub>, ''t''<sub>2</sub>, ''t''<sub>3</sub>}}.

In general, vectors '''v'''<sub>1</sub>,...,'''v'''<sub>''k''</sub> are called '''linearly independent''' if

:<math>t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k \;\ne\; u_1 \textbf{v}_1 + \cdots + u_k \textbf{v}_k</math>

for
{{nowrap| (''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., ''t<sub>k</sub>'') ≠ (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u<sub>k</sub>'')}}.<ref>This definition is often stated differently: vectors '''v'''<sub>1</sub>,...,'''v'''<sub>''k''</sub> are linearly independent if
{{nowrap| ''t''<sub>1</sub>'''v'''<sub>1</sub> + ··· + ''t<sub>k</sub>'''''v'''<sub>''k''</sub> ≠ '''0'''}} for {{nowrap| (''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., ''t<sub>k</sub>'') ≠ (0, 0, ..., 0)}}. The two definitions are equivalent.</ref>
If {{nowrap| '''v'''<sub>1</sub>, ..., '''v'''<sub>''k''</sub> }} are linearly independent, then the '''coordinates''' {{nowrap| ''t''<sub>1</sub>, ..., ''t<sub>k</sub>''}} for a vector in the span are uniquely determined.

A '''basis''' for a subspace ''S'' is a set of linearly independent vectors whose span is ''S''. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see ''Basis for a column space'', below).

; Example
: Let ''S'' be the subspace of '''R'''<sup>4</sup> defined by the equations
::<math>x_1 = 2 x_2\;\;\;\;\text{and}\;\;\;\;x_3 = 5x_4</math>
:Then the vectors {{nowrap| (2, 1, 0, 0) }} and {{nowrap| (0, 0, 5, 1) }} are a basis for ''S''. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:

::<math>(2t_1, t_1, 5t_2, t_2) = t_1(2, 1, 0, 0) + t_2(0, 0, 5, 1)\,</math>

:The subspace ''S'' is two-dimensional. Geometrically, it is the plane in '''R'''<sup>4</sup> passing through the points {{nowrap| (0, 0, 0, 0)}}, {{nowrap| (2, 1, 0, 0)}}, and {{nowrap| (0, 0, 5, 1)}}.

==Algorithms==
Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:
# The reduced matrix has the same null space as the original.
# Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
# Row reduction does not affect the linear dependence of the column vectors.

===Basis for a row space===
:'''Input''' An {{nowrap| ''m'' × ''n''}} matrix ''A''.
:'''Output''' A basis for the row space of ''A''.
:# Use elementary row operations to put ''A'' into row echelon form.
:# The nonzero rows of the echelon form are a basis for the row space of ''A''.
See the article on row space for an example.

If we instead put the matrix ''A'' into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of '''R'''<sup>''n''</sup> are equal.
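The two steps can be sketched in code. Below is a minimal Gauss-Jordan routine using exact `Fraction` arithmetic; `rref` is an illustrative helper written for this sketch, not a library function:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]          # swap pivot row up
        M[r] = [x / M[r][c] for x in M[r]]       # scale pivot to 1
        for i in range(nrows):                   # clear the pivot column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def row_space_basis(A):
    """Nonzero rows of the reduced echelon form of A."""
    return [row for row in rref(A) if any(x != 0 for x in row)]

A = [[1, 3, 2],
     [2, 6, 4],   # a multiple of row 1, so the row space is 2-dimensional
     [0, 1, 1]]
basis = row_space_basis(A)   # [[1, 0, -1], [0, 1, 1]]
```

Because the reduced row echelon form is unique, the basis returned here is the canonical one mentioned in the paragraph above.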

===Subspace membership===
:'''Input''' A basis {'''b'''<sub>1</sub>, '''b'''<sub>2</sub>, ..., '''b'''<sub>''k''</sub>} for a subspace ''S'' of '''R'''<sup>''n''</sup>, and a vector '''v''' with ''n'' components.
:'''Output''' Whether '''v''' is an element of ''S''.
:# Create a (''k'' + 1) × ''n'' matrix ''A'' whose rows are the vectors '''b'''<sub>1</sub>,...,'''b'''<sub>''k''</sub> and '''v'''.
:# Use elementary row operations to put ''A'' into row echelon form.
:# If the echelon form has a row of zeroes, then the vectors {{nowrap| {'''b'''<sub>1</sub>, ..., '''b'''<sub>''k''</sub>, '''v'''} }} are linearly dependent, and therefore {{nowrap| '''v''' ∈ ''S''}}. If not, the vectors are linearly independent, and {{nowrap| '''v''' ∉ ''S''}}.
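This membership test can be sketched as follows (a minimal illustration; `rref` is a small Gauss-Jordan helper written for this sketch, not a library call):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def in_subspace(basis, v):
    """v lies in span(basis) iff stacking v onto the basis rows
    leaves a zero row after row reduction."""
    R = rref([list(b) for b in basis] + [list(v)])
    return any(all(x == 0 for x in row) for row in R)

b1, b2 = [1, 0, 2], [0, 1, -1]
inside  = in_subspace([b1, b2], [2, 3, 1])   # equals 2*b1 + 3*b2
outside = in_subspace([b1, b2], [0, 0, 1])   # not in the plane
```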

===Basis for a column space===
:'''Input''' An ''m'' × ''n'' matrix ''A''
:'''Output''' A basis for the column space of ''A''
:# Use elementary row operations to put ''A'' into row echelon form.
:# Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.
See the article on column space for an example.

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
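The pivot-column selection can be sketched like so (again with an illustrative exact-arithmetic `rref` helper):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def col_space_basis(A):
    """Columns of the ORIGINAL matrix at the pivot positions of rref(A)."""
    pivots = []
    for row in rref(A):
        nz = next((j for j, x in enumerate(row) if x != 0), None)
        if nz is not None:
            pivots.append(nz)
    return [[row[j] for row in A] for j in pivots]

A = [[1, 2, 0],
     [2, 4, 1],
     [3, 6, 1]]   # column 2 is twice column 1, so pivots fall on columns 1 and 3
basis = col_space_basis(A)   # [[1, 2, 3], [0, 1, 1]]
```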

===Coordinates for a vector===
:'''Input''' A basis {'''b'''<sub>1</sub>, '''b'''<sub>2</sub>, ..., '''b'''<sub>''k''</sub>} for a subspace ''S'' of '''R'''<sup>''n''</sup>, and a vector {{nowrap| '''v''' ∈ ''S''}}
:'''Output''' Numbers ''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., ''t''<sub>''k''</sub> such that {{nowrap|1= '''v''' = ''t''<sub>1</sub>'''b'''<sub>1</sub> + ··· + ''t''<sub>''k''</sub>'''b'''<sub>''k''</sub>}}
:# Create an augmented matrix ''A'' whose columns are '''b'''<sub>1</sub>,...,'''b'''<sub>''k''</sub>, with the last column being '''v'''.
:# Use elementary row operations to put ''A'' into reduced row echelon form.
:# Express the final column of the reduced echelon form as a linear combination of the first ''k'' columns. The coefficients used are the desired numbers {{nowrap| ''t''<sub>1</sub>, ''t''<sub>2</sub>, ..., ''t''<sub>''k''</sub>}}. (These should be precisely the first ''k'' entries in the final column of the reduced echelon form.)
If the final column of the reduced row echelon form contains a pivot, then the input vector '''v''' does not lie in ''S''.
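The three steps above can be sketched as follows (a minimal illustration; `rref` is a small Gauss-Jordan helper, and the basis vectors are taken from the earlier '''R'''<sup>4</sup> example):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def coordinates(basis, v):
    """Solve v = t1*b1 + ... + tk*bk via an augmented matrix
    whose columns are b1,...,bk and v. Returns None if v is not in S."""
    k, n = len(basis), len(v)
    aug = [[basis[j][i] for j in range(k)] + [v[i]] for i in range(n)]
    R = rref(aug)
    for row in R:
        # a pivot in the last column means the system is inconsistent
        if all(x == 0 for x in row[:k]) and row[k] != 0:
            return None
    return [R[i][k] for i in range(k)]

b1, b2 = [2, 1, 0, 0], [0, 0, 5, 1]
t = coordinates([b1, b2], [6, 3, 10, 2])   # [3, 2], since v = 3*b1 + 2*b2
```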

===Basis for a null space===
:'''Input''' An ''m'' × ''n'' matrix ''A''.
:'''Output''' A basis for the null space of ''A''
:# Use elementary row operations to put ''A'' in reduced row echelon form.
:# Using the reduced row echelon form, determine which of the variables {{nowrap| ''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x<sub>n</sub>''}} are free. Write equations for the dependent variables in terms of the free variables.
:# For each free variable ''x<sub>i</sub>'', choose a vector in the null space for which {{nowrap|1= ''x<sub>i</sub>'' = 1}} and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of ''A''.
See the article on null space for an example.
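The free-variable construction can be sketched as follows (an illustrative exact-arithmetic helper, applied to the coefficient matrix of the earlier example ''x'' + 3''y'' + 2''z'' = 0, 2''x'' − 4''y'' + 5''z'' = 0):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def null_space_basis(A):
    """One basis vector per free variable (steps 2-3 above)."""
    R = [row for row in rref(A) if any(x != 0 for x in row)]
    n = len(A[0])
    pivots = [next(j for j, x in enumerate(row) if x != 0) for row in R]
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        # In RREF, each pivot variable equals minus the free variable's
        # coefficient in that pivot's row (all other free variables are 0).
        for row, p in zip(R, pivots):
            v[p] = -row[f]
        basis.append(v)
    return basis

A = [[1, 3, 2],
     [2, -4, 5]]
basis = null_space_basis(A)   # one vector: the solution line is 1-dimensional
```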

===Equations for a subspace===
:'''Input''' A basis {'''b'''<sub>1</sub>, '''b'''<sub>2</sub>, ..., '''b'''<sub>''k''</sub>} for a subspace ''S'' of '''R'''<sup>''n''</sup>
:'''Output''' An (''n'' − ''k'') × ''n'' matrix whose null space is ''S''.
:# Create a matrix ''A'' whose rows are {{nowrap| '''b'''<sub>1</sub>, '''b'''<sub>2</sub>, ..., '''b'''<sub>''k''</sub>}}.
:# Use elementary row operations to put ''A'' into reduced row echelon form.
:# Let {{nowrap| '''c'''<sub>1</sub>, '''c'''<sub>2</sub>, ..., '''c'''<sub>''n''</sub> }} be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
:# Interpreting each such relation as a linear equation in the variables ''x''<sub>1</sub>,...,''x<sub>n</sub>'' results in a homogeneous system of ''n'' − ''k'' linear equations. The {{nowrap| (''n'' − ''k'') × ''n''}} matrix corresponding to this system is the desired matrix with null space ''S''.
; Example
:If the reduced row echelon form of ''A'' is

::<math>\left[ \begin{matrix}
1 & 0 & -3 & 0 & 2 & 0 \\
0 & 1 & 5 & 0 & -1 & 4 \\
0 & 0 & 0 & 1 & 7 & -9 \\
0 & 0 & 0 & 0 & 0 & 0 \end{matrix} \,\right] </math>

:then the column vectors {{nowrap| '''c'''<sub>1</sub>, ..., '''c'''<sub>6</sub>}} satisfy the equations

::<math> \begin{alignat}{1}
\textbf{c}_3 &= -3\textbf{c}_1 + 5\textbf{c}_2 \\
\textbf{c}_5 &= 2\textbf{c}_1 - \textbf{c}_2 + 7\textbf{c}_4 \\
\textbf{c}_6 &= 4\textbf{c}_2 - 9\textbf{c}_4
\end{alignat}\text{.}</math>

:It follows that the row vectors of ''A'' satisfy the equations

::<math> \begin{alignat}{1}
x_3 &= -3x_1 + 5x_2 \\
x_5 &= 2x_1 - x_2 + 7x_4 \\
x_6 &= 4x_2 - 9x_4
\end{alignat}\text{.}</math>

:In particular, the row vectors of ''A'' are a basis for the null space of the corresponding matrix.
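This correspondence can be checked directly: every nonzero row of the reduced echelon form must lie in the null space of the matrix built from the equations. A small sketch (the matrix `B` encodes the relations ''x''<sub>3</sub> = −3''x''<sub>1</sub> + 5''x''<sub>2</sub>, ''x''<sub>5</sub> = 2''x''<sub>1</sub> − ''x''<sub>2</sub> + 7''x''<sub>4</sub>, and ''x''<sub>6</sub> = 4''x''<sub>2</sub> − 9''x''<sub>4</sub>, one per row):

```python
# Nonzero rows of the reduced row echelon form above:
R = [[1, 0, -3, 0, 2, 0],
     [0, 1, 5, 0, -1, 4],
     [0, 0, 0, 1, 7, -9]]

# The three relations rewritten as homogeneous equations, one row each:
B = [[-3, 5, -1, 0, 0, 0],    # -3x1 + 5x2 - x3 = 0
     [2, -1, 0, 7, -1, 0],    # 2x1 - x2 + 7x4 - x5 = 0
     [0, 4, 0, -9, 0, -1]]    # 4x2 - 9x4 - x6 = 0

# Every row of R lies in the null space of B (all dot products vanish):
residuals = [sum(b*x for b, x in zip(row_b, row_r))
             for row_b in B for row_r in R]
```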

==Operations on subspaces==

===Intersection===
If ''U'' and ''V'' are subspaces of '''R'''<sup>''n''</sup>, their ] is also a subspace:

:<math>U \cap V = \left\{ \textbf{x}\in\textbf{R}^n : \textbf{x}\in U\text{ and }\textbf{x}\in V \right\} </math>

The dimension of the intersection satisfies the inequality

:<math>\dim(U) + \dim(V) - n \leq \dim(U \cap V) \leq \min(\dim U,\,\dim V)\text{.}</math>

The minimum is the most general case,<ref>That is, the intersection of generic subspaces {{nowrap| ''U'', ''V'' ⊂ '''R'''<sup>''n''</sup>}} has dimension {{nowrap| dim(''U'') + dim(''V'') − ''n''}}, or dimension zero if this number is negative.</ref> and the maximum occurs only when one subspace is contained in the other. For example, the intersection of two-dimensional subspaces in '''R'''<sup>3</sup> has dimension one or two (with two possible only if they are the same plane). The intersection of three-dimensional subspaces in '''R'''<sup>5</sup> has dimension one, two, or three, with most pairs intersecting along a line.

The '''codimension''' of a subspace ''U'' in '''R'''<sup>''n''</sup> is the difference
{{nowrap| ''n'' − dim(''U'')}}. Using codimension, the inequality above can be written

:<math>\max(\text{codim } U,\,\text{codim } V) \leq \text{codim}(U \cap V) \leq \text{codim}(U) + \text{codim}(V) \text{.}</math>

===Sum===
If ''U'' and ''V'' are subspaces of '''R'''<sup>''n''</sup>, their '''sum''' is the subspace

:<math>U + V = \left\{ \textbf{u} + \textbf{v} : \textbf{u}\in U\text{ and }\textbf{v}\in V \right\}\text{.}</math>

For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

:<math>\max(\dim U,\dim V) \leq \dim(U + V) \leq \dim(U) + \dim(V)\text{.}</math>

Here the minimum occurs only if one subspace is contained in the other, while the maximum is the most general case.<ref>That is, the sum of two generic subspaces
{{nowrap| ''U'', ''V'' ⊂ '''R'''<sup>''n''</sup> }} has dimension
{{nowrap| dim(''U'') + dim(''V'')}}, or dimension ''n'' if this number exceeds ''n''.</ref> The dimensions of the intersection and the sum are related:

:<math>\dim(U+V) = \dim(U) + \dim(V) - \dim(U \cap V)\text{.}</math>
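The formula lets one compute dim(''U'' ∩ ''V'') from ranks alone. A small sketch (the `rank` helper counts nonzero rows of a reduced echelon form; the two example planes are illustrative):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def rank(M):
    return sum(any(x != 0 for x in row) for row in rref(M))

# Two planes through the origin in R^3, given by spanning vectors:
U = [[1, 0, 0], [0, 1, 0]]   # the xy-plane
V = [[1, 0, 1], [0, 1, 0]]   # a different plane containing the y-axis

dim_U, dim_V = rank(U), rank(V)
dim_sum = rank(U + V)               # span of all four vectors together
dim_int = dim_U + dim_V - dim_sum   # by the formula: 2 + 2 - 3 = 1
```

The result dim(''U'' ∩ ''V'') = 1 matches the geometry: the two planes meet along the ''y''-axis.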

===Orthogonal complement===
{{main|Orthogonal complement}}
The '''orthogonal complement''' of a subspace ''U'' is the subspace

:<math>U^\bot = \left\{\textbf{x}\in\textbf{R}^n : \textbf{x} \cdot \textbf{u}=0\text{ for every }\textbf{u}\in U \right\}</math>

Here '''x''' · '''u''' denotes the dot product of '''x''' and '''u'''. For example, if ''U'' is a plane through the origin in '''R'''<sup>3</sup>, then ''U''<sup>&perp;</sup> is the line perpendicular to this plane at the origin.

If '''b'''<sub>1</sub>, '''b'''<sub>2</sub>, ..., '''b'''<sub>''k''</sub> is a basis for ''U'', then a vector '''x''' is in the orthogonal complement of ''U'' if and only if it is orthogonal to each '''b'''<sub>''i''</sub>. It follows that the null space of a matrix is the orthogonal complement of the row space.

The dimension of a subspace and its orthogonal complement are related by the equation

:<math>\dim(U) + \dim(U^\bot) = n</math>

That is, the dimension of ''U''<sup>&perp;</sup> is equal to the codimension of ''U''. The intersection of ''U'' and ''U''<sup>&perp;</sup> is the origin, and the sum of ''U'' and ''U''<sup>&perp;</sup> is all of '''R'''<sup>''n''</sup>.
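These facts can be verified computationally by taking ''U'' to be the row space of a matrix, so that ''U''<sup>&perp;</sup> is its null space. A sketch (with small Gauss-Jordan helpers; all names are illustrative):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

def null_space_basis(A):
    """One null-space basis vector per free variable."""
    R = [row for row in rref(A) if any(x != 0 for x in row)]
    n = len(A[0])
    pivots = [next(j for j, x in enumerate(row) if x != 0) for row in R]
    free = [j for j in range(n) if j not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row, p in zip(R, pivots):
            v[p] = -row[f]
        basis.append(v)
    return basis

A = [[1, 1, 0, 2],
     [0, 1, 1, 0]]                    # U = row space of A, a subspace of R^4
U_basis = [row for row in rref(A) if any(x != 0 for x in row)]
perp_basis = null_space_basis(A)      # a basis for U-perp

dims_ok = len(U_basis) + len(perp_basis) == 4   # dim U + dim U-perp = n
dots = [sum(u * w for u, w in zip(uv, wv))      # all zero: orthogonality
        for uv in U_basis for wv in perp_basis]
```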

Orthogonal complements satisfy a version of De Morgan's laws:

:<math>(U + V)^\bot = U^\bot \cap V^\bot\;\;\;\;\text{and}\;\;\;\;(U \cap V)^\bot = U^\bot + V^\bot\text{.}</math>

In fact, the collection of subspaces of '''R'''<sup>''n''</sup> satisfies all of the axioms for a complemented lattice, with intersection as AND (meet), sum as OR (join), and orthogonal complement as NOT (complement). This lattice is not distributive, however, so it is not a Boolean algebra.


==Notes==
<div class="references-small" style="-moz-column-count:2; column-count:2;">
<references />
</div>

==References==
{{see also|Linear algebra#Further reading}}

===Textbooks===
* {{Citation
| last = Axler
| first = Sheldon Jay
| date = 1997
| title = Linear Algebra Done Right
| publisher = Springer-Verlag
| edition = 2nd
| isbn = 0387982590
}}
* {{Citation
| last = Lay
| first = David C.
| date = August 22, 2005
| title = Linear Algebra and Its Applications
| publisher = Addison Wesley
| edition = 3rd
| isbn = 978-0321287137
}}
* {{Citation
| last = Meyer
| first = Carl D.
| date = February 15, 2001
| title = Matrix Analysis and Applied Linear Algebra
| publisher = Society for Industrial and Applied Mathematics (SIAM)
| isbn = 978-0898714548
| url = http://www.matrixanalysis.com/DownloadChapters.html
}}
* {{Citation
| last = Poole
| first = David
| date = 2006
| title = Linear Algebra: A Modern Introduction
| publisher = Brooks/Cole
| edition = 2nd
| isbn = 0-534-99845-3
}}
* {{Citation
| last = Anton
| first = Howard
| date = 2005
| title = Elementary Linear Algebra (Applications Version)
| publisher = Wiley International
| edition = 9th
}}
* {{Citation
| last = Leon
| first = Steven J.
| date = 2006
| title = Linear Algebra With Applications
| publisher = Pearson Prentice Hall
| edition = 7th
}}


