
Matrix representation

There are now two ways to get a matrix representation of the group multiplication, according to whether the characteristic functions are used as left factors or as right factors in a convolution. In each case, the other factor is expanded in a basis as though it were a vector. For the left regular representation,

\begin{eqnarray*}
(\delta_g*f)(x) & = & \sum_{ab=x} \delta(g;a) f(b), \\
& = & \sum_a \delta(g;a) f(a^{-1}x), \\
& = & f(g^{-1}x).
\end{eqnarray*}



So far, this is just a representation by permutations, as would be seen by specializing to characteristic functions.
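For instance, take the cyclic group $C_3 = \{e, a, a^2\}$ and use the characteristic functions $\delta_e$, $\delta_a$, $\delta_{a^2}$ as a basis. Since $\delta_a * \delta_h = \delta_{ah}$, convolution by $\delta_a$ permutes the basis cyclically ($\delta_e \mapsto \delta_a \mapsto \delta_{a^2} \mapsto \delta_e$), so its matrix, whose columns are the images of the basis vectors, is the permutation matrix

\begin{eqnarray*}
D(a) & = & \left( \begin{array}{ccc}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array} \right),
\end{eqnarray*}

with $D(a^2) = D(a)^2$ and $D(e) = I$, just as the multiplication table requires.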

Consider that a group representation is given, a set of matrices $\Gamma = \{D(e), D(a), \ldots \}$ for which $D(a)D(b)=D(ab)$, and look at the matrix elements $d_{ij}(a)$. These are complex-valued functions of the group elements and thereby subject to the preceding formula:

\begin{eqnarray*}
(\delta_a*d_{ij})(x) & = & d_{ij}(a^{-1}x), \\
& = & \sum_k{d_{ik}(a^{-1}) d_{kj}(x)},
\end{eqnarray*}



for fixed $a$ and function argument $x$.
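For a one-dimensional representation the sum collapses to a single term and the matrix element becomes an eigenfunction of the convolution. Taking the representation of $C_3$ with $d(a^m) = \omega^m$, $\omega = e^{2\pi i/3}$,

\begin{eqnarray*}
(\delta_a * d)(x) & = & d(a^{-1}x) \;=\; d(a^{-1})\,d(x) \;=\; \omega^{-1}\,d(x),
\end{eqnarray*}

so $d$ spans a one-dimensional invariant subspace of the convolution algebra.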

We need to know whether the matrices of $\Gamma$ can be made diagonal or not, and if not, to what extent. Recall that matrices which commute share common eigenvectors, and hence can be chosen to be diagonal, all of them at once. If some matrix other than a multiple of the identity commutes with all the matrices of $\Gamma$ and can be diagonalized, it has at least two different eigenvalues, and the matrices of $\Gamma$ must follow suit, breaking into diagonal blocks corresponding to its eigenspaces.

Therefore, if all of $\Gamma$ cannot be placed in the form of diagonal blocks by some choice of coordinates, any matrix which commutes with all of them must be a multiple of the identity, a result which is known as Schur's first lemma. His second lemma relates to the possibility of forming an equivalence between two different matrix representations $\Gamma^1$ and $\Gamma^2$. Give the matrices identifying superscripts and suppose there is a matrix $V$ for which, whatever the group element $x$,

\begin{eqnarray*}
V D^1(x) & = & D^2(x) V.
\end{eqnarray*}



If the dimensions are discrepant, $V$ will be rectangular; suppose there are more columns than rows, reversing the indices in the contrary case. There must then be columns which are linear combinations of the others, and therefore a non-zero column vector $X$ expressing this dependence via $VX = 0$.

On the other hand, if $V$ is not zero, there will be a row vector $T^T$ such that $T^TV$ is non-zero; it just needs a $1$ in a strategic place to take advantage of one of $V$'s non-zero elements.

Finally, consider that neither $D^1(a)$ nor $D^2(a)$ is singular (we really wouldn't want to consider a set of zero matrices a representation), because $D(a)^{-1} = D(a^{-1})$ and every group element has an inverse.

Since $VX = 0$, the equivalence gives $VD^1(a)X = D^2(a)VX = 0$ for every group element $a$. But $D^1(a)$ cannot map the non-zero vector $X$ into zero, and for an irreducible $\Gamma^1$ the vectors $D^1(a)X$ span the whole space; then $T^TV$ would annihilate every vector while being non-zero, a contradiction. So $V$ would have to vanish in its entirety, which is the statement of Schur's second lemma.

There is a little more to be said because $V$ could be square, leaving the existence of $X$ in question. If there were such an $X$, $V$ would still be singular and would need to vanish, leading to the same conclusion. If, on the contrary, $V$ were invertible, the representations $\Gamma^1$ and $\Gamma^2$ would have to be equivalent, so the only possibility for inequivalent representations, even of the same dimension, is $V = 0$.
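A small example shows the second lemma in action. Let $\Gamma^1$ and $\Gamma^2$ be the one-dimensional representations of $C_3$ sending $a^m$ to $\omega^m$ and to $\omega^{2m}$ respectively, with $\omega = e^{2\pi i/3}$. An intertwining matrix $V$ is a single number satisfying

\begin{eqnarray*}
V\,\omega^m & = & \omega^{2m}\,V, \qquad m = 0, 1, 2,
\end{eqnarray*}

and already $m = 1$ gives $(\omega - \omega^2)V = 0$, so $V = 0$ and the two representations are inequivalent.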

In summary, a representation is irreducible when there is no choice of basis in which all its matrices are simultaneously partitioned into diagonal non-zero blocks. Of course, some of the individual matrices, such as the identity matrix representing the identity element, may well be reduced; but not all of them in the same way at once. If only the zero equivalence can connect two representations, they are inequivalent; and if they are both the same irreducible representation, only a multiple of the identity can connect them.

Averaging over a group yields a plentiful supply of equivalences, depending upon what is averaged. For any matrix $Q$ compatible with the two dimensions, define

\begin{eqnarray*}
V(Q) & = & \frac{1}{\vert G\vert} \sum_{g\in G}{D^2(g^{-1})QD^1(g)},
\end{eqnarray*}



and work through the following derivation, step by step:

\begin{eqnarray*}
V(Q)\ D^1(a) & = & \frac{1}{\vert G\vert} \sum_{g\in G}{D^2(g^{-1})QD^1(g)D^1(a)}, \\
& = & D^2(a)\left(\frac{1}{\vert G\vert} \sum_{g\in G}{D^2((ga)^{-1})QD^1(ga)}\right), \\
& = & D^2(a)\ V(Q).
\end{eqnarray*}
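The smallest case already shows the mechanism. Take $G = C_2 = \{e, a\}$ and let both representations be the two-dimensional permutation representation, with $D(e) = I$ and $D(a)$ the matrix exchanging the two coordinates (its own inverse, since $a^2 = e$). Then for any $Q$,

\begin{eqnarray*}
V(Q) & = & \frac{1}{2}\left( Q + D(a)\,Q\,D(a) \right)
\;=\; \frac{1}{2} \left( \begin{array}{cc}
q_{11}+q_{22} & q_{12}+q_{21} \\
q_{12}+q_{21} & q_{11}+q_{22}
\end{array} \right),
\end{eqnarray*}

which commutes with $D(a)$ no matter what $Q$ was; the average has projected $Q$ onto the commutant.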



Dividing the sum by the order of the group was really unnecessary, but it will be convenient later on. For example, if $Q = I$ and the same representation is used both times, the result will be $V(I) = I$. According to Schur's lemmas, after adorning $V$ with superscripts to trace its definition and inventing a sort of generic Kronecker delta for matrices,

\begin{eqnarray*}
V^{\alpha\beta}(I) & = & \Delta(\alpha,\beta).
\end{eqnarray*}



For $\alpha = \beta$, taking the trace in the definition of $V(Q)$ shows the multiple of the identity to be $\mathrm{tr}(Q)/n_\alpha$, where $n_\alpha = \dim(\Gamma^\alpha)$. Expressed in terms of matrix elements, choosing for $Q$ the matrix whose only non-zero element is a $1$ in row $k$ and column $\ell$ (so that $\mathrm{tr}(Q) = \delta(k,\ell)$),

\begin{eqnarray*}
\frac{1}{\vert G\vert}
\sum_{g\in G} d^{\alpha}_{ik}(g^{-1})\, d^{\beta}_{\ell j}(g)
& = & \frac{1}{n_\alpha}\, \delta(\alpha,\beta)\, \delta(i,j)\, \delta(k,\ell),
\end{eqnarray*}



which expresses the biorthogonality of two vector sets, possibly bases, for the group's function space, or convolution algebra. The sets have $n_\alpha^2$ elements for each irreducible representation $\Gamma^\alpha$, and since linearly independent functions on $G$ can never number more than $\vert G\vert$, it follows that $\sum_\alpha n_\alpha^2 \leq \vert G\vert$. That places an upper limit on the number of irreducible representations (which can never exceed $\vert G\vert$) and on their dimensions (which can never exceed $\sqrt{\vert G\vert}$). For $S_3$, of order $6$, the dimensions are $1$, $1$ and $2$, and indeed $1^2 + 1^2 + 2^2 = 6$. Evidently high-dimensional representations come at the price of finding fewer of them.
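The relation is easily verified for the three one-dimensional representations of $C_3$, where $d^{\mu}(a^m) = \omega^{\mu m}$ for $\mu = 0, 1, 2$ and $\omega = e^{2\pi i/3}$; all matrix indices equal $1$ and $n_\mu = 1$, leaving

\begin{eqnarray*}
\frac{1}{3} \sum_{m=0}^{2} \omega^{-\mu m}\, \omega^{\nu m}
& = & \frac{1}{3} \sum_{m=0}^{2} \omega^{(\nu - \mu)m}
\;=\; \delta(\mu,\nu),
\end{eqnarray*}

the orthogonality of the cube roots of unity: the geometric sum vanishes unless $\nu = \mu$, when every term is $1$.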

