I’ve been spending more time writing my dissertation, which has made it harder to write lengthy pieces for this site. So I’ve been interested in writing up solutions for problems I find particularly interesting or worthwhile. There’s no set criterion for a problem to make this series. I just have to stumble across it and appreciate it.
For the last week I’ve been going through some books on Lie Algebras and Lie Groups. It’s mostly been a tangent to what I’m supposed to be doing but not unfruitful and definitely interesting. I was reading Kirillov’s An Introduction to Lie Groups and Lie Algebras when I looked at exercise 7 in chapter 3:
Let \(G\) be the Lie group of all invertible affine functions \(f : \mathbb{R} \to \mathbb{R}\), with composition as the group operation. Describe the corresponding Lie Algebra \(\mathfrak{g}\).
I like that there is more than one approach, and after solving it I feel like I have a better grasp of Lie Groups/Algebras than before. Of course, this sense of accomplishment can be completely misguided. I often only think I know what I’m talking about.
So an element \(f\) of \(G\) has the form \(f(x)=ax+b\) where \(a\) is non-zero, and if we have another element \(g\) such that \(g(x)=cx+d\) then the element \(fg\) is the function defined by \[\begin{equation} (fg)(x) = (f \circ g)(x) = a(cx+d)+b = acx + (ad+b) \end{equation}\] and \(f^{-1}(x)=\frac{1}{a}x-\frac{b}{a}\). We will sometimes use \(1\) to denote the identity element \(x \mapsto x\), and when we don’t want to waste a variable name we will write \(x \mapsto ax + b\) for an element of \(G\).
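If you want to double-check the composition and inverse formulas without doing the algebra by hand, here’s a small symbolic sketch (using sympy is my choice here; nothing in the exercise requires it):

```python
# A symbolic check of the composition and inverse formulas above.
# Assumes sympy is available; the variable names are ad hoc for this post.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', real=True)

f = a*x + b          # f(x) = ax + b
g = c*x + d          # g(x) = cx + d

# The group operation is composition: (fg)(x) = f(g(x)) = acx + (ad + b)
print(sp.expand(f.subs(x, g)))

# The inverse of f: solve f(y) = x for y, expecting x/a - b/a
print(sp.solve(sp.Eq(f.subs(x, y), x), y)[0])
```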
The Easy Way
Well at this point you try to find a way to represent \(f\) as a \(2 \times 2\) matrix. It’s a good guess that for \(f\) the upper-left hand entry of the corresponding matrix should be \(a\). Let’s try to make it upper-triangular to limit our degrees of freedom. Okay, it needs to always be invertible, so the lower-right hand corner can’t ever be 0; let’s put a 1 in there. We don’t really have options left, so let’s put \(b\) in the upper-right hand corner. …and great, we can represent the function \(f(x)=ax+b\) with the matrix \[\begin{bmatrix} a & b \\ 0 & 1 \end{bmatrix}\]To get the Lie Algebra we look at the image, under the matrix logarithm, of the matrices representing elements near the identity. Then the Lie Algebra can be represented by the linear span of elements in that image, and for the bracket operator we take \([X,Y]=XY-YX\). This embedding of \(G\) into the Lie group \(GL(2,\mathbb{R})\) of real invertible \(2\times2\) matrices and the subsequent reasoning are what Kirillov describes as “The easy way.” And it is easy. Once you do the initial bit of cleverness of finding an embedding, the rest is linear algebra and matrix calculations. Sometimes it’s tedious, but it’s not taxing.
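Here is a sketch of that calculation with sympy, assuming the matrix representation above; the tangent directions \(X\) and \(Y\) below are just the derivatives of two obvious curves of matrices through the identity:

```python
# Sketch of "the easy way" with sympy: represent f(x) = ax + b as the matrix
# [[a, b], [0, 1]] and do everything with matrix algebra.
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t', real=True)

M = lambda p, q: sp.Matrix([[p, q], [0, 1]])

# Matrix multiplication reproduces composition: f(g(x)) = acx + (ad + b)
print(M(a, b) * M(c, d))                 # Matrix([[a*c, a*d + b], [0, 1]])

# Tangent directions at the identity, from curves of matrices at t = 0.
X = sp.diff(M(1 + t, 0), t).subs(t, 0)   # scaling direction, [[1, 0], [0, 0]]
Y = sp.diff(M(1, t), t).subs(t, 0)       # translation direction, [[0, 1], [0, 0]]

# The bracket [X, Y] = XY - YX comes out to Y.
print(X*Y - Y*X)                         # Matrix([[0, 1], [0, 0]])
```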
I would not have written this piece if not for his next comment. Per Kirillov, the above is easy while the following is straightforward: “construct some basis in the tangent space, construct the corresponding one-parameter subgroups, and compute the commutator using” the following relation \[\begin{equation} \exp(x)\exp(y)\exp(-x)\exp(-y) = \exp([x,y] + \cdots) \end{equation}\] Here, \(x\) and \(y\) are elements of the Lie Algebra, \([\cdot,\cdot]\) is the commutator, and the “\(\cdots\)” stands for all the higher order bracket terms. In this instance we aren’t trying to determine an embedding for \(G\). We take \(G\) as it is and find a basis for the tangent space at the identity. Our Lie Algebra is this tangent space of the manifold \(G\), i.e. \(\mathfrak{g}=T_1G\). For each of our basis vectors \(v\) of \(T_1G\) we try to determine the one-parameter subgroup \(\exp(sv)\) as \(s\) varies over the reals. Then we use the above formula to determine the commutator. The hope is that by avoiding shortcuts and following the general program, we get a better understanding of the relationship between a Lie Group and its Lie Algebra.
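To see the relation in action before using it abstractly, here’s a quick numerical check that uses the matrix picture from the previous section as a stand-in (that choice, and the use of scipy, are mine):

```python
# Numerical illustration: for small t, exp(tX)exp(tY)exp(-tX)exp(-tY) is
# approximately exp(t^2 [X, Y]). Uses the 2x2 matrices from "the easy way".
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[1.0, 0.0], [0.0, 0.0]])   # scaling direction
Y = np.array([[0.0, 1.0], [0.0, 0.0]])   # translation direction

t = 1e-3
group_comm = expm(t*X) @ expm(t*Y) @ expm(-t*X) @ expm(-t*Y)

# The log of the group commutator, divided by t^2, should be close to
# [X, Y] = XY - YX = [[0, 1], [0, 0]].
print(logm(group_comm) / t**2)
```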
Finding a basis for \(T_1G\)
We choose the obvious chart \(\phi\) from \(G\) to \(\mathbb{R}^2\) where \(\phi(x \mapsto ax +b)=(a,b)\). We haven’t specified the manifold structure of \(G\) at any point, but to be formal we can imbue \(G\) with the maximal atlas that contains \(\phi\). The main thing to remember is that \(\phi\) is the only explicit chart we need to care about, and we can let the basic machinery of differential geometry handle the accounting. We need to find a basis for \(T_1G\). Consider the curve \(\gamma_1 : (-\epsilon, \epsilon) \to G\) defined by \(\gamma_1(t) = (x \mapsto (1+t)x)\), with \(\epsilon < 1\) so the map stays invertible. \(\gamma_1\) passes through the identity at time zero and \((\phi \circ \gamma_1)(t) = (1+t,0)\), so \((\phi \circ \gamma_1)^\prime (t) = (1,0)\). We pick \(u=\gamma_1^\prime(0)\) as our first basis vector. Let’s pick the next obvious candidate: the curve defined by \(\gamma_2(t)=(x \mapsto x + t)\). Then \((\phi \circ \gamma_2)(t)=(1,t)\) and \((\phi \circ \gamma_2)^\prime(t) = (0,1)\). So we let \(v = \gamma_2^\prime(0)\) be our second basis vector for the tangent space \(T_1G\). Note, our basis is not \(\{(1,0),(0,1)\}\). Our basis is \(\{u,v\}= \{\gamma_1^\prime(0),\gamma_2^\prime(0)\}\), a set of objects that are definitely more abstract than elements of \(\mathbb{R}^2\).
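For completeness, the chart-level derivatives of these two curves can be checked mechanically as well (again with sympy, my choice):

```python
# Derivatives, in the chart phi, of the two curves through the identity.
import sympy as sp

t = sp.Symbol('t', real=True)

phi_gamma1 = sp.Matrix([1 + t, 0])       # phi of (x -> (1 + t)x)
phi_gamma2 = sp.Matrix([1, t])           # phi of (x -> x + t)

print(phi_gamma1.diff(t).subs(t, 0).T)   # Matrix([[1, 0]]), the direction of u
print(phi_gamma2.diff(t).subs(t, 0).T)   # Matrix([[0, 1]]), the direction of v
```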
Finding the one-parameter subgroups
Our current goal is to determine \(\exp(tu)\) and \(\exp(tv)\) where \(t\) varies over all of \(\mathbb{R}\). To do so we must find morphisms \(\gamma_u,\gamma_v\) from \(\mathbb{R}\) to \(G\) such that \(\gamma_u^\prime(0)=u\) and \(\gamma_v^\prime(0)=v\). Such morphisms are unique, and \(\exp(tu)\) and \(\exp(tv)\) are defined by them. The curves we used to construct \(u\) and \(v\) are our hint for finding them. We will begin with \(v\) because it’s easier. Define \(\gamma_v(t) = [x \mapsto x + t]\); then it’s easy to check that \(\gamma_v\) is indeed a morphism from \(\mathbb{R}\) to \(G\) and \(\gamma_v^\prime(0)=v\). \(\gamma_u\) is only slightly less straightforward. Note \([x \mapsto sx ][x \mapsto tx]=[x \mapsto (st)x]\). We want the parameters to sum instead of multiply, so let \(\gamma_u(t) = [x \mapsto e^t x]\). \(\gamma_u\) is indeed a morphism from \(\mathbb{R}\) to \(G\). Now \((\phi \circ \gamma_u)(t)=(e^t,0)\), so \((\phi \circ \gamma_u)^\prime(0)=(1,0)\) and thus \(\gamma_u^\prime(0)=u\). So we’ve determined the one-parameter subgroups \(\exp(su)=[x \mapsto e^s x]\) and \(\exp(tv) = [x \mapsto x + t]\).
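A quick sympy check that both families really are morphisms from \(\mathbb{R}\) to \(G\), i.e. that composing them adds the parameters (the symbols below are placeholders of my own):

```python
# Check the one-parameter subgroup property: gamma(s) gamma(t) = gamma(s + t),
# where the group operation is composition of affine maps in x.
import sympy as sp

x, s, t = sp.symbols('x s t', real=True)

gamma_u = lambda r: sp.exp(r) * x        # the map x -> e^r x
gamma_v = lambda r: x + r                # the map x -> x + r

comp_u = gamma_u(s).subs(x, gamma_u(t))  # gamma_u(s) composed with gamma_u(t)
comp_v = gamma_v(s).subs(x, gamma_v(t))  # gamma_v(s) composed with gamma_v(t)

print(sp.simplify(comp_u - gamma_u(s + t)))   # 0
print(sp.simplify(comp_v - gamma_v(s + t)))   # 0
```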
Determining the Lie Bracket
To find the commutator we use the relation \[\begin{equation} \exp(tu)\exp(tv)\exp(-tu)\exp(-tv) = \exp\left( t^2[u,v] + O(t^3) \right) \end{equation}\] and then look at the behavior when \(t\) is close to zero.
\[\begin{equation} \begin{split} & \exp(tu)\exp(tv)\exp(-tu)\exp(-tv) \\ = & [x \mapsto e^t x][x \mapsto x + t][x \mapsto e^{-t}x][x \mapsto x-t] \\ = & [x \mapsto e^t x][x \mapsto x + t][x \mapsto e^{-t}x - e^{-t}t] \\ = & [x \mapsto e^t x][x \mapsto e^{-t}x + t(1- e^{-t})] \\ = & [x \mapsto x + t(e^t-1)] \end{split} \end{equation}\] If we call this curve \(m\), then \((\phi \circ m)(t) = (1, t(e^t-1)) = (1,0) + t^2(0,1)+O(t^3)\). The \(t^2\) term points in the \((0,1)\) direction, which is the direction of \(v\) in our chart, so comparing with \(\exp(t^2[u,v] + O(t^3))\) gives \([u,v]=v\).
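The composition and its expansion near \(t=0\) can also be checked with sympy (a sketch, applying the four maps right to left):

```python
# Verify exp(tu)exp(tv)exp(-tu)exp(-tv) = [x -> x + t(e^t - 1)] and expand.
import sympy as sp

x, t = sp.symbols('x t', real=True)

# Apply the maps right to left: x - t, then e^{-t}x, then x + t, then e^t x.
expr = x
for a_, b_ in [(1, -t), (sp.exp(-t), 0), (1, t), (sp.exp(t), 0)]:
    expr = a_*expr + b_                  # apply x -> a x + b

diff_ = sp.simplify(expr - x)
print(diff_)                             # t*exp(t) - t, i.e. t*(e^t - 1)
print(sp.series(diff_, t, 0, 3))         # t**2 + O(t**3)
```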
We now have the complete description of \(\mathfrak{g}\) as a Lie Algebra. It’s a two-dimensional real vector space with basis \(u,v\) such that \(\exp(u)=[x \mapsto ex]\) and \(\exp(v) = [x \mapsto x + 1]\) with the Lie Bracket \([u,v]=v\).
Sanity Check
We probably didn’t commit any serious mistakes here. If \(\mathfrak{g}\) is a two-dimensional Lie Algebra then it is either Abelian (the commutator is always zero), or there is a basis \(x,y\) such that \([x,y]=y\). Well, \(G\) wasn’t Abelian, so neither is \(\mathfrak{g}\). We did arrive at the only possible solution, which is nice. Our sanity check became our third and quickest way to solve the problem. As usual, a little bit of theory can save you some work.