Orthogonality Theorem
The orthogonality theorem is fundamental to representation theory. Understand this theorem as thoroughly as possible, and you will be an academic weapon in this course.
Definition 2.5 – Unitary Irreps
A unitary irrep of a group $G$ is one in which all matrices $D(g)$ are unitary.
$$ D(g)^{\dagger}D(g) = D(g)D(g)^{\dagger} = \mathbb{1} $$
Theorem 2.1 – Orthogonality Theorem
Consider a finite group $G$ of order $n$. This group has $N_{i}$ inequivalent unitary irreps $\{ D^{(i)}(G) \}$, each of dimension $n_{i} \times n_{i}$, where $i$ is the representation index.
$$ \sum_{a=1}^n D^{(i)}(g_{a})^{*}_{\alpha\beta} D^{(j)}(g_{a})_{\mu \nu} = \frac{n}{n_{i}} \delta_{ij} \delta_{\alpha \mu} \delta_{\beta \nu} $$
Implication
Orthogonality
First, let us unpack the meaning of $D^{(i)}(g_{a})_{\alpha\beta}$.
- The group $G = \{ g_{1} = e, g_{2}, \dots, g_{n}\}$ has $n$ elements.
- The group $G$ has $N_{i}$ inequivalent unitary irreps; however, we do not yet know $N_{i}$.
- The representation $D^{(i)}(G)$ is the $i^\text{th}$ representation $(i = 1,2,\dots, N_{i})$, with matrices of dimension $n_{i} \times n_{i}$.
So, $D^{(i)}(g_{a})_{\alpha\beta}$ is the $(\alpha,\beta)$ matrix element of $D^{(i)}(g_{a})$, the $i^\text{th}$ irrep of $G$ evaluated at the element $g_{a} \in G$.
Second, since $D^{(i)}(g_{a})_{\alpha\beta}$ is really just a scalar, we can collect its values over all elements of $G$ into an $n$-dimensional complex vector.
$$ D^{(i)}(g_{a})_{\alpha\beta} = (V^{(i)\alpha\beta})_{a} $$
Each such vector is labelled by
- the representation index $i$,
- and the matrix element index $(\alpha,\beta)$ of the $i^\text{th}$ representation,
while its components are indexed by the group element label $a$.
Theorem 2.1 therefore states that these vectors are mutually orthogonal, with squared norm $n/n_{i}$.
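As a quick numerical sanity check (a minimal Python/NumPy sketch, not part of the formal development), consider the cyclic group $\mathbb{Z}_{3}$, whose three irreps are all $1$-dimensional: $D^{(k)}(g^{a}) = e^{2\pi i k a/3}$. Each irrep then yields a single vector in $\mathbb{C}^{3}$, and the inner products come out as $(n/n_{k})\,\delta_{kl} = 3\,\delta_{kl}$, exactly as the theorem predicts.

```python
import numpy as np

# Vectors V^(k) built from the three 1-dim irreps of Z_3 = {e, g, g^2}:
# D^(k)(g^a) = exp(2*pi*i*k*a/3), one component per group element (n = 3).
w = np.exp(2j * np.pi / 3)
V = [np.array([w ** (k * a) for a in range(3)]) for k in range(3)]

for k in range(3):
    for l in range(3):
        inner = np.vdot(V[k], V[l])  # sum_a conj(V^(k)_a) V^(l)_a
        # Theorem 2.1 predicts (n / n_k) * delta_kl = 3 * delta_kl here.
        print(k, l, np.round(inner, 10))
```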
Constraint on $N_{i}$ and $n_{i}$
Having stated that $(V^{(i)\alpha\beta})_{a} = D^{(i)}(g_{a})_{\alpha\beta}$, we know that since $1 \leq \alpha,\beta \leq n_{i}$, there are $n_{i}^{2}$ orthogonal vectors for each $i$. Furthermore, $i$ runs over $1,2,\dots, N_{i}$, giving $\sum_{i} n_{i}^{2}$ mutually orthogonal vectors in total.
However, each vector has dimension $n$, and since there are at most $n$ mutually orthogonal vectors in an $n$-dimensional vector space,
$$ \sum_{i=1}^{N_{i}} n_{i}^{2} \leq n $$
In fact, it can be shown that
$$ \sum_{i=1}^{N_{i}} n_{i}^{2} = n $$
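For example, the symmetric group $S_{3}$ has order $n = 6$ and three inequivalent irreps, of dimensions $1$, $1$ and $2$ (the trivial, sign and standard representations); indeed, $1^{2} + 1^{2} + 2^{2} = 6$.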
Proof
Lemma 1
Every representation of a finite group is equivalent to a unitary representation.
Lemma 2
Suppose $D(G)$ is a representation of some finite group $G$ of order $n$.
- If $D(G)$ is reducible, there exist non-constant matrices which commute with every element of $D(G)$.
- If $D(G)$ is irreducible, any matrix which commutes with each element of $D(G)$ must be a constant matrix.
Note that a constant matrix $M$ is one of the form $M = \lambda\, \mathbb{1}$, where $\lambda$ is a proportionality constant.
This lemma provides us with a method to determine if a representation is reducible.
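This test is easy to run numerically. Below is a minimal sketch (Python/NumPy; the function name `commutant_dimension` is just illustrative), which counts the linearly independent matrices commuting with every $D(g)$ via $\operatorname{vec}(DM - MD) = (\mathbb{1} \otimes D - D^{T} \otimes \mathbb{1})\operatorname{vec}(M)$: a result of $1$ signals irreducibility, anything larger signals reducibility.

```python
import numpy as np

# Sketch of the lemma-2 test: count the linearly independent matrices M
# satisfying D(g) M = M D(g) for every matrix D(g) in the representation.
# Uses vec(D M - M D) = (I kron D - D^T kron I) vec(M).
def commutant_dimension(matrices):
    d = matrices[0].shape[0]
    eye = np.eye(d)
    A = np.vstack([np.kron(eye, D) - np.kron(D.T, eye) for D in matrices])
    # Nullity of A = number of independent commuting matrices.
    return A.shape[1] - np.linalg.matrix_rank(A)

# commutant_dimension(irrep)   == 1  -> irreducible (only constant matrices)
# commutant_dimension(red_rep) >= 2  -> reducible
```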
Lemma 3
Suppose $D^{(1)}(G)$ and $D^{(2)}(G)$ are irreducible representations of the finite group $G$, each with respective dimensions $n_{1}$ and $n_{2}$.
If there exists some $n_{2} \times n_{1}$ matrix $A$, satisfying
$$ AD^{(1)}(G) = D^{(2)}(G)A $$
then either
- $A = 0$ and $D^{(1)}(G)\not\simeq D^{(2)}(G)$, or
- $n_{1} = n_{2}$, $\text{Det}(A) \neq 0$ and $D^{(1)}(G) \simeq D^{(2)}(G)$.
Note that in the second case $\text{Det}(A) \neq 0$; otherwise, according to lemma 2, the representations would not be irreducible.
This lemma provides us with a method to determine if two representations are equivalent.
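The same vectorisation trick as before gives a numerical version of this test (again a Python/NumPy sketch with an illustrative function name): the space of intertwiners $A$ satisfying $A D^{(1)}(g) = D^{(2)}(g) A$ is $1$-dimensional when the irreps are equivalent and $0$-dimensional when they are not.

```python
import numpy as np

# Sketch of the lemma-3 test: count independent matrices A with
# A D1(g) = D2(g) A for all g, via
# vec(A D1) = (D1^T kron I) vec(A) and vec(D2 A) = (I kron D2) vec(A).
# rep1, rep2: lists of matrices indexed in the same group-element order.
def intertwiner_dimension(rep1, rep2):
    n1, n2 = rep1[0].shape[0], rep2[0].shape[0]
    A = np.vstack([np.kron(D1.T, np.eye(n2)) - np.kron(np.eye(n1), D2)
                   for D1, D2 in zip(rep1, rep2)])
    return A.shape[1] - np.linalg.matrix_rank(A)  # 1 if equivalent, else 0
```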
Reminder
$$ \sum_{a=1}^{n} D^{(i)}(g_{a})^{*}_{\alpha\beta} D^{(j)}(g_{a})_{\mu \nu} = \frac{n}{n_{i}} \delta_{ij} \delta_{\alpha \mu} \delta_{\beta \nu} $$
Part One
Suppose that $i \neq j$, meaning the representations are non-equivalent. The theorem then states
$$ \sum_{a=1}^{n} D^{(i)}(g_{a})^{*}_{\alpha\beta} D^{(j)}(g_{a})_{\mu \nu} = 0 \quad \text{where} \quad i \neq j $$
We consider the matrix
$$ M = \sum_{a} D^{(j)}(g_{a}) X D^{(i)}(g_{a}^{-1}) $$
where $X$ is an arbitrary $n_{j} \times n_{i}$ matrix, to be chosen later.
We can show that $MD^{(i)}(g) = D^{(j)}(g)M$.
$$ \begin{align} M D^{(i)}(g) &= \sum_{a} \left[ D^{(j)}(g_{a}) X D^{(i)}(g_{a}^{-1}) \right] D^{(i)}(g) \\ &= D^{(j)}(g) D^{(j)}(g^{-1}) \sum_{a} \left[ D^{(j)}(g_{a})X D^{(i)}(g_{a}^{-1}) \right] D^{(i)}(g) \\ &= D^{(j)}(g) \sum_{a} \left[ D^{(j)}(g^{-1}) D^{(j)}(g_{a}) \right] X \left[ D^{(i)}(g_{a}^{-1}) D^{(i)}(g) \right] \\ &= D^{(j)}(g) \sum_{a} D^{(j)}(g^{-1} g_{a}) X D^{(i)}((g^{-1}g_{a})^{-1}) \\ &= D^{(j)}(g) \sum_{b} D^{(j)}(g_{b}) X D^{(i)}(g_{b}^{-1}) \qquad \text{using the rearrangement theorem, with } g_{b} = g^{-1}g_{a} \\ &= D^{(j)}(g) M \end{align} $$
Since we imposed that the representations are non-equivalent, lemma 3 forces $M = 0$ for any matrix $X$.
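This intertwining property of $M$ is easy to check numerically. A minimal aside in Python/NumPy, assuming `rep1` and `rep2` are lists of unitary matrices indexed in the same group-element order, so that $D(g_{a}^{-1}) = D(g_{a})^{\dagger}$:

```python
import numpy as np

# M = sum_a D2(g_a) X D1(g_a)^{-1} for an arbitrary X; by the calculation
# above, M D1(g) = D2(g) M for every g. Unitarity gives D1(g^-1) = D1(g)^dagger.
def average(rep1, rep2, X):
    return sum(D2 @ X @ D1.conj().T for D1, D2 in zip(rep1, rep2))

# Usage sketch: for inequivalent irreps, M comes out numerically zero
# for any X, in line with lemma 3:
#   X = np.random.default_rng(0).normal(size=(n2, n1))
#   M = average(rep1, rep2, X)
#   all(np.allclose(M @ D1, D2 @ M) for D1, D2 in zip(rep1, rep2))  # True
```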
Now choose $X_{\tau \tau'} = \delta_{\tau \nu} \delta_{\tau'\beta}$ (repeated matrix indices are implicitly summed). In this case, the $(\mu,\alpha)$ element of $M = 0$ reads
$$ \begin{align} M_{\mu\alpha} &= \sum_{a} D^{(j)}(g_{a})_{\mu \tau}\, \delta_{\tau \nu}\, \delta_{\tau' \beta}\, D^{(i)}(g_{a}^{-1})_{\tau' \alpha} \\ &= \sum_{a} D^{(j)}(g_{a})_{\mu \nu } D^{(i)}(g_{a}^{-1})_{\beta\alpha} \\ &= 0 \end{align} $$
However, since
$$ D^{(i)}(g_{a}^{-1})_{\beta\alpha} = (D^{(i)}(g_{a})^{-1})_{\beta\alpha} = (D^{(i)}(g_{a})^{\dagger})_{\beta\alpha} = D^{(i)}(g_{a})^{*}_{\alpha\beta} $$
we can re-express $M_{\mu\alpha}$ as
$$ M_{\mu\alpha} = \sum_{a=1}^{n} D^{(i)}(g_{a})^{*}_{\alpha\beta} D^{(j)}(g_{a})_{\mu \nu} = 0 \quad \text{where} \quad i \neq j $$
This is precisely the statement of the theorem for $i \neq j$.
Part Two
Suppose now that $i = j$, meaning the representations are equivalent. We can still re-use the result we found in part one:
$$ MD^{(i)}(g) = D^{(i)}(g)M \quad \forall \space g \in G $$
However, since this time the representations are equivalent, the result implies that $M$ is a constant matrix, according to lemmas 2 and 3.
The constant will depend on our choice for $X$. Using, again, $X_{\tau \tau'} = \delta_{\tau \nu} \delta_{\tau' \beta}$, we find that
$$ \begin{align} M_{\mu\alpha} &= \sum_{a} D^{(i)}(g_{a})_{\mu \tau}\, \delta_{\tau \nu}\, \delta_{\tau' \beta}\, D^{(i)}(g_{a}^{-1})_{\tau' \alpha} \\ &= \sum_{a} D^{(i)}(g_{a})_{\mu \nu } D^{(i)}(g_{a}^{-1})_{\beta\alpha} \\ &= \lambda_{\nu \beta} \delta_{\mu\alpha} \end{align} $$
$$ \implies \sum_{a} D^{(i)}(g_{a})_{\mu \nu}D^{(i)}(g_{a}^{-1})_{\beta\alpha} = \lambda_{\nu\beta} \delta_{\mu\alpha} $$
Now set $\mu = \alpha$ and sum over $\mu$. On the right-hand side, $\sum_{\mu} \delta_{\mu \mu} = n_{i}$, the dimension of $D^{(i)}(G)$.
On the left-hand side,
$$ \begin{align} \sum_{\mu} D^{(i)}(g_{a})_{\mu \nu} D^{(i)}(g_{a}^{-1})_{\beta \mu} &= \sum_{\mu} D^{(i)}(g_{a}^{-1})_{\beta \mu} D^{(i)}(g_{a})_{\mu \nu} \\ &= (D^{(i)}(g_{a}^{-1})D^{(i)}(g_{a}))_{\beta \nu}\\ &= D^{(i)}(g_{a}^{-1} g_{a})_{\beta \nu} \\ &= D^{(i)}(e)_{\beta \nu} = \delta_{\beta \nu} \end{align} $$
$$ \implies \sum_{a=1}^n \sum_{\mu} D^{(i)}(g_{a})_{\mu \nu} D^{(i)}(g_{a}^{-1})_{\beta \mu} = \sum_{a=1}^n \delta_{\beta \nu} = n \delta_{\beta \nu} $$
We can finally determine $\lambda_{\nu\beta}$.
$$ \lambda_{\nu\beta} \sum_{\mu} \delta_{\mu \mu} = \lambda_{\nu\beta}\, n_{i} = n \delta_{\beta \nu} \implies \lambda_{\nu\beta} = \frac{n}{n_{i}} \delta_{\nu \beta} \qquad (\delta_{\nu\beta} = \delta_{\beta \nu}) $$
Therefore,
$$ \sum_{a} D^{(i)}(g_{a})_{\mu \nu}D^{(i)}(g_{a}^{-1})_{\beta\alpha} = \frac{n}{n_{i}} \delta_{\nu\beta} \delta_{\mu\alpha} $$
In combination with the result from part one, and using that $D^{(i)}(g_{a}^{-1})_{\beta\alpha} = D^{(i)}(g_{a})^{*}_{\alpha\beta}$,
$$ \sum_{a=1}^n D^{(i)}(g_{a})^{*}_{\alpha\beta}D^{(j)}(g_{a})_{\mu \nu} = \frac{n}{n_{i}} \delta_{ij} \delta_{\alpha \mu} \delta_{\beta \nu} $$
And so, we have proven the Orthogonality Theorem.
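As a final sanity check, the full relation, including the $n/n_{i}$ normalisation, can be verified numerically for $S_{3}$ (order $n = 6$, irreps of dimensions $1$, $1$, $2$). A minimal Python/NumPy sketch, with the standard $2$-dimensional irrep realised by rotations and reflections of the plane:

```python
import numpy as np

# The three irreps of S_3: trivial (1-dim), sign (1-dim), standard (2-dim).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection

std = [np.eye(2), r, r @ r, f, f @ r, f @ r @ r]  # all six group elements
triv = [np.eye(1)] * 6
sign = [np.array([[x]]) for x in (1., 1., 1., -1., -1., -1.)]
irreps = [triv, sign, std]

n = 6
for i, Di in enumerate(irreps):
    for j, Dj in enumerate(irreps):
        ni, nj = Di[0].shape[0], Dj[0].shape[0]
        for al, be in np.ndindex(ni, ni):
            for mu, nu in np.ndindex(nj, nj):
                lhs = sum(np.conj(D1[al, be]) * D2[mu, nu]
                          for D1, D2 in zip(Di, Dj))
                rhs = (n / ni) * (i == j) * (al == mu) * (be == nu)
                assert abs(lhs - rhs) < 1e-12
print("Theorem 2.1 verified for all irreps of S_3.")
```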