Whitehead's lemma (Lie algebra)

In homological algebra, Whitehead's lemmas (named after J. H. C. Whitehead) are a series of statements about the representation theory of finite-dimensional, semisimple Lie algebras in characteristic zero. Historically, they are regarded as having led to the discovery of Lie algebra cohomology.

One usually distinguishes between Whitehead's first and second lemma, which are the corresponding statements about first- and second-order cohomology, respectively, but there are similar statements pertaining to Lie algebra cohomology in arbitrary order that are also attributed to Whitehead.

The first Whitehead lemma is an important step toward the proof of Weyl's theorem on complete reducibility.

Statements

Without mentioning cohomology groups, one can state Whitehead's first lemma as follows: let $\mathfrak{g}$ be a finite-dimensional, semisimple Lie algebra over a field of characteristic zero, $V$ a finite-dimensional module over it, and $f\colon \mathfrak{g}\to V$ a linear map such that

$$f([x,y]) = x f(y) - y f(x) \quad \text{for all } x, y \in \mathfrak{g}.$$

Then there exists a vector $v \in V$ such that $f(x) = xv$ for all $x \in \mathfrak{g}$. In terms of Lie algebra cohomology, this is, by definition, equivalent to the fact that $H^1(\mathfrak{g}, V) = 0$ for every such representation. The proof uses a Casimir element (see the proof below).
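
To spell out the equivalence with first cohomology (a standard dictionary, recalled here for convenience rather than taken from the article): linear maps satisfying the identity above are exactly the 1-cocycles, maps of the form $x \mapsto xv$ are the 1-coboundaries, and the lemma asserts that their quotient vanishes:

$$Z^1(\mathfrak{g},V) = \{\, f\colon \mathfrak{g}\to V \text{ linear} \mid f([x,y]) = x f(y) - y f(x) \,\}, \qquad B^1(\mathfrak{g},V) = \{\, x \mapsto xv \mid v \in V \,\},$$

$$H^1(\mathfrak{g},V) = Z^1(\mathfrak{g},V)\,/\,B^1(\mathfrak{g},V) = 0.$$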

Similarly, Whitehead's second lemma states that, under the conditions of the first lemma, also $H^2(\mathfrak{g}, V) = 0$.

Another related statement, also attributed to Whitehead, describes Lie algebra cohomology in arbitrary order: under the same conditions as in the previous two statements, assume further that $V$ is irreducible under the $\mathfrak{g}$-action and that $\mathfrak{g}$ acts nontrivially, so $\mathfrak{g}\cdot V \neq 0$. Then $H^q(\mathfrak{g}, V) = 0$ for all $q \geq 0$.
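
As a check in the lowest degree (an observation added here, not taken from the article): $H^0(\mathfrak{g},V)$ is the space of invariants, which is a $\mathfrak{g}$-submodule of $V$; since $V$ is irreducible and the action is nontrivial, this submodule must be zero:

$$H^0(\mathfrak{g},V) = V^{\mathfrak{g}} = \{\, v \in V \mid xv = 0 \text{ for all } x \in \mathfrak{g} \,\} = 0.$$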

Proof

As above, let $\mathfrak{g}$ be a finite-dimensional semisimple Lie algebra over a field of characteristic zero and $\pi\colon \mathfrak{g}\to \mathfrak{gl}(V)$ a finite-dimensional representation (which is semisimple, but the proof does not use that fact).

Let $\mathfrak{g} = \ker(\pi) \oplus \mathfrak{g}_1$, where $\mathfrak{g}_1$ is an ideal of $\mathfrak{g}$. Then, since $\mathfrak{g}_1$ is semisimple, the trace form $(x, y) \mapsto \operatorname{tr}(\pi(x)\pi(y))$, relative to $\pi$, is nondegenerate on $\mathfrak{g}_1$. Let $e_i$ be a basis of $\mathfrak{g}_1$ and $e^i$ the dual basis with respect to this trace form. Then define the Casimir element $c$ by

$$c = \sum_i e_i e^i,$$

which is an element of the universal enveloping algebra of $\mathfrak{g}_1$. Via $\pi$, it acts on $V$ as a linear endomorphism, namely $\pi(c) = \sum_i \pi(e_i)\circ\pi(e^i)\colon V \to V$. The key property is that it commutes with $\pi(\mathfrak{g})$, in the sense that $\pi(x)\pi(c) = \pi(c)\pi(x)$ for each element $x \in \mathfrak{g}$. Also, since each $\operatorname{tr}(\pi(e_i)\pi(e^i)) = (e_i, e^i) = 1$, we have $\operatorname{tr}(\pi(c)) = \sum_i \operatorname{tr}(\pi(e_i)\pi(e^i)) = \dim \mathfrak{g}_1$.
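
As a concrete illustration (an example added here, not part of the original argument), take $\mathfrak{g} = \mathfrak{g}_1 = \mathfrak{sl}_2$ with its standard basis $e, h, f$ and $\pi$ the standard two-dimensional representation. The trace form satisfies $(e,f) = 1$ and $(h,h) = 2$, with all other pairings of basis vectors equal to zero, so the dual basis of $(e, h, f)$ is $(f, \tfrac{1}{2}h, e)$ and

$$c = ef + \tfrac{1}{2}h^2 + fe, \qquad \pi(c) = \tfrac{3}{2}\,\mathrm{id}_V, \qquad \operatorname{tr}(\pi(c)) = 3 = \dim \mathfrak{sl}_2,$$

consistent with both properties above: $\pi(c)$ is a nonzero scalar, so it commutes with $\pi(\mathfrak{g})$ and is invertible.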

Now, by Fitting's lemma, we have the vector space decomposition $V = V_0 \oplus V_1$ such that $\pi(c)\colon V_i \to V_i$ is a (well-defined) nilpotent endomorphism for $i = 0$ and is an automorphism for $i = 1$. Since $\pi(c)$ commutes with $\pi(\mathfrak{g})$, each $V_i$ is a $\mathfrak{g}$-submodule. Hence, it is enough to prove the lemma separately for $V = V_0$ and $V = V_1$.
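
For reference, one standard way to realize this decomposition (Fitting's lemma for an endomorphism $\varphi$ of a finite-dimensional vector space, recalled here for convenience) is

$$V_0 = \ker\left(\varphi^{\,n}\right), \qquad V_1 = \operatorname{im}\left(\varphi^{\,n}\right), \qquad n = \dim V,$$

applied to $\varphi = \pi(c)$: the restriction of $\varphi$ to $V_0$ is nilpotent and its restriction to $V_1$ is bijective.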

First, suppose $\pi(c)$ is a nilpotent endomorphism; then its trace is zero, so by the earlier observation $\dim(\mathfrak{g}/\ker(\pi)) = \operatorname{tr}(\pi(c)) = 0$; that is, $\pi$ is the trivial representation. Since $\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$ and the action is trivial, the condition on $f$ gives $f([x,y]) = x f(y) - y f(x) = 0$, hence $f(x) = 0$ for each $x \in \mathfrak{g}$; i.e., the zero vector $v = 0$ satisfies the requirement.

Second, suppose $\pi(c)$ is an automorphism. For notational simplicity, we will drop $\pi$ and write $xv = \pi(x)v$. Also let $(\cdot, \cdot)$ denote the trace form used earlier. Let $w = \sum_i e_i f(e^i)$, which is a vector in $V$. Then

$$xw = \sum_i e_i x f(e^i) + \sum_i [x, e_i] f(e^i).$$

Now,

$$[x, e_i] = \sum_j ([x, e_i], e^j)\, e_j = -\sum_j ([x, e^j], e_i)\, e_j,$$

where the second equality uses the invariance of the trace form, $([x, a], b) = -(a, [x, b])$,

and, since $[x, e^j] = \sum_i ([x, e^j], e_i)\, e^i$, the second term of the expansion of $xw$ is

$$-\sum_j e_j f([x, e^j]) = -\sum_i e_i \bigl(x f(e^i) - e^i f(x)\bigr),$$

the last equality using the defining relation $f([x, e^j]) = x f(e^j) - e^j f(x)$.

Thus,

$$xw = \sum_i e_i x f(e^i) - \sum_i e_i \bigl(x f(e^i) - e^i f(x)\bigr) = \sum_i e_i e^i f(x) = c f(x).$$

Since $c$ is invertible and $c^{-1}$ commutes with $x$, the vector $v = c^{-1}w$ has the required property: $xv = x c^{-1} w = c^{-1}(xw) = c^{-1}c f(x) = f(x)$ for all $x \in \mathfrak{g}$. $\square$

Notes

  1. Jacobson 1979, p. 93
  2. Jacobson 1979, p. 77, p. 95
  3. Jacobson 1979, p. 96
  4. Jacobson 1979, Ch. III, § 7, Lemma 3.

References

  Jacobson, Nathan (1979). Lie Algebras. Dover Publications.