## Representations of sl2, Part II July 18, 2009

Posted by Akhil Mathew in algebra, representation theory.

This post is the second in the series on ${\mathfrak{sl}_2}$ and the third in the series on Lie algebras. I’m going to start where we left off yesterday on ${\mathfrak{sl}_2}$, and go straight from there to classification.  Basically, it’s linear algebra.

Classification

We’ve covered all the preliminaries now and we can classify the ${\mathfrak{sl}_2}$-representations, the really interesting material here. By Weyl’s theorem, we can restrict ourselves to irreducible representations. Fix an irreducible ${V}$.

So, we know that ${H}$ acts diagonalizably on ${V}$, which means we can write

$\displaystyle V = \bigoplus_\lambda V_\lambda$

where ${Hv = \lambda v}$ for each ${v \in V_\lambda}$, i.e. ${V_\lambda}$ is the ${\lambda}$-eigenspace of ${H}$.

Definition 1 The eigenvalues ${\lambda \in \mathbb{C}}$ occurring nontrivially in the above decomposition are called the weights of ${V}$.

Here “weight” is just another word for “eigenvalue” but in the general case of semisimple Lie algebras, weights are actually linear functionals on an abelian subalgebra, which here just happens to be 1-dimensional (spanned by ${H}$).
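To make the definition concrete (a quick sketch, not part of the argument): in the standard two-dimensional representation of ${\mathfrak{sl}_2}$, with ${H}$ the usual diagonal matrix, the weights are just the eigenvalues of ${H}$, namely ${\pm 1}$. A small numpy check:

```python
import numpy as np

# Standard 2-dimensional (defining) representation of sl2:
H = np.array([[1., 0.], [0., -1.]])

# H is already diagonal, so C^2 decomposes into the two weight spaces
# V_1 and V_{-1}; the weights are the eigenvalues of H.
weights = np.linalg.eigvals(H)
assert sorted(weights.tolist()) == [-1.0, 1.0]
```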

I claim now:

Proposition 2 We have:

$\displaystyle X V_\lambda \subset V_{\lambda + 2} \text{ and } Y V_{\lambda} \subset V_{\lambda - 2} . \ \ \ \ \ (1)$

This proposition says that, basically, ${X}$ increases the weight ${\lambda}$ by 2, and ${Y}$ decreases it by 2. Its generalization says a lot about how semisimple Lie algebras behave, but isn’t really any different in the proof.

The proof in our case runs as follows. Suppose ${Hv = \lambda v}$. We must show ${H(Xv) = (\lambda+2) Xv}$, which is what the first assertion states. Indeed, by the definition of representations:

$\displaystyle H(Xv) = X(Hv) + [H,X] v = X(\lambda v) + 2X v = (\lambda+2) Xv,$

by the all-important relations between ${H,X,Y}$. For ${Y}$ the story is similar:

$\displaystyle H(Yv) = Y(Hv)+ [H,Y]v = Y(\lambda v) - 2Y v = (\lambda -2 )Yv.$
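As a numerical sanity check of the proof (a sketch using the standard two-dimensional representation; the matrices below are the usual matrix basis of ${\mathfrak{sl}_2}$, not anything derived in the post), one can verify both the bracket relations and the weight-shifting behavior:

```python
import numpy as np

# The usual matrix basis of sl2:
H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])

comm = lambda A, B: A @ B - B @ A

# The all-important relations used in the proof:
assert np.allclose(comm(H, X), 2 * X)
assert np.allclose(comm(H, Y), -2 * Y)

# v is an H-eigenvector of weight -1; X raises the weight by 2:
v = np.array([0., 1.])
assert np.allclose(H @ v, -1 * v)
assert np.allclose(H @ (X @ v), (-1 + 2) * (X @ v))
```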

The proposition itself didn’t use irreducibility, which we’ll soon have to invoke. First of all, ${V}$ is finite-dimensional, so there are only finitely many weights, and there must be a maximal weight ${\omega}$ (a “highest weight”), with an associated eigenvector ${e_0}$: the highest weight vector.

Claim 1 ${Xe_0 = 0}$. In other words, ${e_0}$ is annihilated by the Lie subalgebra of strictly upper-triangular matrices.

Indeed, otherwise by (1) ${Xe_0}$ would be an eigenvector of eigenvalue ${\omega+2}$, contradicting the maximality of ${\omega}$. This proves the claim.

So, we now want to see that repeatedly applying ${Y}$ to ${e_0}$ yields vectors that form a basis of ${V}$. Since the ${Y^i e_0}$ are weight vectors of weight ${\omega - 2i}$ by (1) (and induction), we will have a nice decomposition of ${V}$. Thus, define the sequence ${e_n = Y^{n} e_0}$; the notation ${Y^i}$ refers to applying ${Y}$ repeatedly, ${i}$ times, on ${V}$.

Claim 2 Suppose ${e_0, \dots, e_m \neq 0}$. Then ${e_0, \dots, e_m}$ are linearly independent.

Well, we know already that the ${e_i}$ (${0 \leq i \leq m}$) are eigenvectors of ${H}$ with eigenvalues ${\omega - 2i}$, which are distinct for distinct ${i}$. The assertion now follows from a general fact of linear algebra:

Let ${T: W \rightarrow W}$ be a linear transformation on a finite-dimensional vector space ${W}$. If the ${w_i}$‘s (${1 \leq i \leq m}$) are nonzero eigenvectors for ${T}$, say ${Tw_i = \lambda_i w_i}$, with distinct eigenvalues (i.e. ${\lambda_i \neq \lambda_j}$ if ${i \neq j}$), then the ${w_i}$‘s are linearly independent.

To prove this, suppose given a nontrivial linear relation between the ${w_i}$‘s, which we may assume involves as few terms as possible. Renumbering the ${w_i}$ if necessary, write the relation in the form

$\displaystyle a_1 w_1 + a_2 w_2 + \dots + a_k w_k = 0, \quad k \leq m \ \ \ \ \ (2)$

with no $a_i = 0$; then apply ${T}$ to (2) to get

$\displaystyle \lambda_1 a_1 w_1 + \lambda_2 a_2 w_2 + \dots + \lambda_k a_k w_k = 0$

and multiply (2) by ${\lambda_1}$ to get

$\displaystyle \lambda_1 a_1 w_1 + \lambda_1 a_2 w_2 + \dots + \lambda_1 a_k w_k = 0.$

Subtracting the last two identities gives a nontrivial (as ${\lambda_1 \neq \lambda_2}$) linear relation between the ${w_j}$ with fewer terms, contradiction.
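A quick numerical illustration of this lemma (the matrix ${T}$ below is an arbitrary example with distinct eigenvalues, chosen only for the check):

```python
import numpy as np

# An arbitrary matrix with distinct eigenvalues 3, 1, -1 (upper triangular,
# so the eigenvalues can be read off the diagonal):
T = np.array([[3., 1., 0.],
              [0., 1., 2.],
              [0., 0., -1.]])

eigvals, eigvecs = np.linalg.eig(T)
assert len(set(np.round(eigvals, 8))) == 3  # pairwise distinct eigenvalues

# The eigenvectors (columns of eigvecs) are linearly independent,
# so the eigenvector matrix has full rank:
assert np.linalg.matrix_rank(eigvecs) == 3
```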

So, back to ${\mathfrak{sl}_2}$. Our ${V}$ is finite-dimensional, and we can’t have infinitely many linearly independent vectors. Thus eventually we get ${e_m = 0}$; assume moreover ${m}$ is the smallest integer for which this happens.

Our next goal is to show that the ${e_i}$, ${0 \leq i \leq m-1}$, form a basis for ${V}$. To do this, we just need to show they span ${V}$, and we’ll be done if we show that their span is invariant under the action of ${\mathfrak{sl}_2}$: that will mean their span is a nonzero ${\mathfrak{sl}_2}$-submodule of ${V}$, hence all of ${V}$ by irreducibility.

The ${e_i}$‘s have decreasing weights, and ${X}$ increases weights. Thus the following makes sense:

Claim 3 We have ${X e_i = (constant) \ e_{i-1}}$ if ${1 \leq i \leq m-1}$.

Induction on ${i}$. First of all, let’s take ${i=1}$; then

$\displaystyle Xe_1 = X(Ye_0) = Y(Xe_0) + [X,Y]e_0 = 0 + He_0 = \omega e_0.$

Here we used Claim 1.

Now assume the claim true for ${i-1}$, and we prove it for ${i}$:

$\displaystyle Xe_i = XY(Y^{i-1} e_0) = YX(e_{i-1}) + He_{i-1} = YX(e_{i-1}) + (\omega - 2(i-1))e_{i-1} = Y((constant) \ e_{i-2}) + (\omega - 2(i-1))e_{i-1} = (constant) \ e_{i-1}.$
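Claims 1–3 can be checked numerically in an explicit model. The matrices below realize the ${(n+1)}$-dimensional irreducible representation in the basis ${e_0, \dots, e_n}$; the explicit superdiagonal entries of ${X}$ are a standard choice of the “constants” of the claim, taken on faith here rather than derived in the post:

```python
import numpy as np

n = 4  # highest weight; gives a 5-dimensional representation
H = np.diag([n - 2.0 * i for i in range(n + 1)])
Y = np.diag(np.ones(n), k=-1)                              # Y e_i = e_{i+1}
X = np.diag([(i + 1.0) * (n - i) for i in range(n)], k=1)  # X e_{i+1} = (i+1)(n-i) e_i

e0 = np.zeros(n + 1); e0[0] = 1.0                          # highest weight vector
e = [np.linalg.matrix_power(Y, i) @ e0 for i in range(n + 1)]

assert np.allclose(X @ e[0], 0)                 # Claim 1: X e_0 = 0
for i in range(1, n + 1):
    c = (X @ e[i])[i - 1]                       # the "constant"
    assert c != 0
    assert np.allclose(X @ e[i], c * e[i - 1])  # Claim 3: X e_i = c e_{i-1}
assert np.isclose((X @ e[1])[0], n)             # base case: X e_1 = omega e_0
```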

So, both ${X}$ and ${Y}$ map ${span(e_0, \dots, e_{m-1})}$ into itself, and we have proved most of the following:

Theorem 3 (Classification Theorem for Finite-Dimensional Irreducibles) The ${e_i}$, ${0 \leq i \leq m-1}$, form a basis for the irreducible ${\mathfrak{sl}_2}$-module ${V}$; if ${\omega}$ is the highest weight, we have

$\displaystyle V = \bigoplus_{0 \leq i \leq m-1} V_{\omega - 2i};$

each factor ${V_{\omega - 2i}}$ in this sum is one-dimensional, spanned by ${e_i}$; ${X}$ and ${Y}$ move the weight spaces up and down by ${2}$ respectively. The maps ${X: V_{\lambda - 2} \rightarrow V_{\lambda}, \ Y: V_{\lambda} \rightarrow V_{\lambda - 2}}$ for eigenvalues ${\lambda}$ are even isomorphisms at all but boundary values of ${\lambda}$. The compositions ${XY, YX: V_\lambda \rightarrow V_{\lambda}}$ are scalar multiplication by nonzero scalars away from the boundary weight spaces.
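The whole package of the theorem can be verified in the same explicit model (again, the superdiagonal entries of ${X}$ are a standard choice of the constants, assumed rather than derived here):

```python
import numpy as np

def sl2_irrep(n):
    """H, X, Y for the (n+1)-dimensional irreducible sl2-representation,
    in the basis e_0, ..., e_n of the theorem (e_i spans V_{n-2i})."""
    H = np.diag([n - 2.0 * i for i in range(n + 1)])
    Y = np.diag(np.ones(n), k=-1)
    X = np.diag([(i + 1.0) * (n - i) for i in range(n)], k=1)
    return H, X, Y

H, X, Y = sl2_irrep(3)

# The defining bracket relations hold:
assert np.allclose(H @ X - X @ H, 2 * X)
assert np.allclose(H @ Y - Y @ H, -2 * Y)
assert np.allclose(X @ Y - Y @ X, H)

# XY and YX are diagonal (they preserve each weight space) and act by
# nonzero scalars away from the boundary weight spaces:
XY, YX = X @ Y, Y @ X
assert np.allclose(XY, np.diag(np.diag(XY)))
assert np.allclose(YX, np.diag(np.diag(YX)))
assert all(np.diag(YX)[1:] != 0)   # YX vanishes only on the top weight space
assert all(np.diag(XY)[:-1] != 0)  # XY vanishes only on the bottom weight space
```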

All that remains is the following sub-result:

Claim 4 ${Xe_i \neq 0}$ if ${0< i < m }$.

Otherwise, ${X}$ and ${Y}$ would both map ${span(e_i, e_{i+1}, \dots, e_{m-1})}$ into itself, and that would be a proper ${\mathfrak{sl}_2}$-submodule of ${V}$, contradiction.

It’s actually true in the finite-dimensional case that the highest weight ${\omega}$ is a nonnegative integer. This follows from making some of the earlier arguments with mysterious “constants” more precise; I’ll probably return to this later on, in the context of general semisimple Lie algebras. I do want to go back to full generality rather than focusing only on specific examples, but we’ve now covered the key ideas for ${\mathfrak{sl}_2}$.
