When working with a linear map, it is often convenient to make use of the isomorphism between such an operator and its matrix representation. However, any given vector space has many bases, and in general a change of basis alters the matrix representing a given operator. Hence, it is useful both to examine invariants of a map (properties which do not change regardless of the basis, such as the determinant or trace) and to come up with canonical forms for representing a map by a matrix. The best-case (i.e., simplest) scenario is when a linear map is diagonalisable; that is, we can find a basis with respect to which the matrix only has entries on the leading diagonal. Unfortunately, such a decomposition is not always possible - it is a special case of the more general Jordan canonical form (with 1×1 Jordan blocks).

The most we can do in general is decompose a linear operator into a collection of smaller, simpler operators which together describe how the original operator works. More formally, for α:V→V, where V is a finite-dimensional vector space over any field, the aim is to decompose V as a direct sum of α-invariant subspaces. (A subspace W is α-invariant if α(w)∈W for every w∈W; a quick computational check of this condition follows.)
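For instance, here is a minimal sketch in Python with sympy (the matrix A, the chosen subspace and the helper name is_invariant are assumptions for illustration, not taken from the text above), which tests α-invariance by checking that α(w) stays in the span of a basis of W:

from sympy import Matrix

def is_invariant(A, basis):
    # basis: a list of column vectors spanning the subspace W
    W = Matrix.hstack(*basis)
    # A*w lies in the span of W exactly when appending it does not increase the rank
    return all(Matrix.hstack(W, A * w).rank() == W.rank() for w in basis)

# A block upper-triangular example: the span of e1 and e2 is mapped into itself
A = Matrix([[1, 2, 4],
            [0, 3, 5],
            [0, 0, 7]])
print(is_invariant(A, [Matrix([1, 0, 0]), Matrix([0, 1, 0])]))   # True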


The primary decomposition theorem states that this decomposition is determined by the minimal polynomial of α:

Let α:V→V be a linear operator whose minimal polynomial factorises into monic, coprime polynomials:

mα(t)=p1(t)p2(t)

Then,
V = W1 ⊕ W2

where the Wi are α-invariant subspaces such that pi is the minimal polynomial of α|Wi.
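To make this concrete, here is a small sketch in Python with sympy (the matrix A is an assumed example, not taken from the text above): its minimal polynomial is (t-2)^2(t-3), giving coprime factors p1(t) = (t-2)^2 and p2(t) = t-3, and the kernels of p1(A) and p2(A) decompose the whole space as a direct sum:

from sympy import Matrix, eye, zeros

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

p1_A = (A - 2 * eye(3))**2    # p1(alpha) = (alpha - 2 id)^2
p2_A = A - 3 * eye(3)         # p2(alpha) = alpha - 3 id

print(p1_A * p2_A == zeros(3, 3))    # True: p1(alpha)p2(alpha) = m_alpha(alpha) = 0

W1 = p1_A.nullspace()    # basis of W1 = ker p1(alpha)
W2 = p2_A.nullspace()    # basis of W2 = ker p2(alpha)

# dimensions add up and the combined basis has full rank, so V = W1 ⊕ W2
print(len(W1), len(W2))                     # 2 1
print(Matrix.hstack(*(W1 + W2)).rank())     # 3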

Repeated application of this result (i.e., fully factorising mα into pairwise coprime factors) gives a more general version: if mα(t)=p1(t)...pk(t) as described, then

V=W1⊕...⊕Wk

where each Wi is α-invariant with corresponding minimal polynomial pi.

It may now be apparent that diagonalisation is the special case in which each Wi has a minimal polynomial consisting of a single linear factor (t-λi), so that Wi = ker(α-λi·id) is the λi-eigenspace; i.e., if mα(t)=(t-λ1)...(t-λk) for distinct λi then α is diagonalisable with the λi as the diagonal entries.
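For example (a sketch in Python with sympy; the matrix A is an assumption for illustration), the minimal polynomial of the A below is (t-2)(t-3), a product of distinct linear factors, so each Wi is an eigenspace and A is diagonalisable with 2 and 3 on the diagonal:

from sympy import Matrix, eye

A = Matrix([[2, 1],
            [0, 3]])

# (A - 2I)(A - 3I) = 0, so m_A(t) = (t-2)(t-3): distinct linear factors
print((A - 2 * eye(2)) * (A - 3 * eye(2)))   # the zero matrix

P, D = A.diagonalize()
print(D)   # a diagonal matrix with the eigenvalues 2 and 3 as its entries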


Proof of the Primary Decomposition Theorem
The theorem makes two main assertions: that we can construct α-invariant subspaces Wi from the pi, and that V is the direct sum of these Wi; the claim about the minimal polynomials of the restrictions is dealt with at the end.

For the first, a result about invariant subspaces is needed:
Lemma: If α,β:V→V are linear maps such that αβ = βα, then ker β is α-invariant.
Proof: Take w∈ker β - we need to show that α(w) is also in ker β. Now β(α(w)) = α(β(w)) by assumption, and α(β(w)) = α(0) since w∈ker β, which is 0 since α is a linear map. So β(α(w)) = 0, which means α(w)∈ker β; hence ker β is α-invariant.
Given this result, we now take Wi = ker pi(α). Then since pi(α)α = αpi(α), it follows that Wi = ker pi(α) is α-invariant.
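As a quick sanity check, here is a minimal sketch in Python with sympy (the matrix A is an assumed example, the same one used above): β = A - 3I is a polynomial in A, so it commutes with A, and A maps ker β back into ker β:

from sympy import Matrix, eye, zeros

A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
beta = A - 3 * eye(3)          # a polynomial in A, so it commutes with A
print(A * beta == beta * A)    # True

# each basis vector w of ker(beta) is sent by A back into ker(beta)
print(all(beta * (A * w) == zeros(3, 1) for w in beta.nullspace()))   # True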

We now seek to show that (i) V = W1 + W2, and (ii) W1∩W2 = {0} (that is, V decomposes as a direct sum of the Wi's.)
Using Euclid's Algorithm for polynomials, since the pi are coprime there are polynomials qi such that p1(t)q1(t) + p2(t)q2(t) =1.
So for any v∈V, consider w1 = p2(α)q2(α)v and w2 = p1(α)q1(α)v. Then v = w1 + w2 by the above identity. We can confirm that w1∈W1: p1(α)w1 = mα(α)q2(α)v = 0. Similarly, w2∈W2. So we have (i). (A worked instance of this construction is sketched after the proof of (ii).)
For (ii), let v∈W1∩W2. Then
v = id(v) = q1(α)p1(α)v + q2(α)p2(α)v = 0 + 0 = 0, since p1(α)v = 0 (as v∈W1) and p2(α)v = 0 (as v∈W2). So W1∩W2 = {0}.
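Continuing the earlier example, here is a sketch in Python with sympy (the matrix A, the vector v and the helper evaluate are assumptions for illustration): gcdex produces q1, q2 with q1p1 + q2p2 = 1, and evaluating these polynomials at A splits any v into w1 + w2 with w1 in ker p1(A) and w2 in ker p2(A):

from sympy import Matrix, Poly, Symbol, eye, gcdex, zeros

t = Symbol('t')
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
p1 = (t - 2)**2
p2 = t - 3

q1, q2, g = gcdex(p1, p2, t)    # q1*p1 + q2*p2 = g, and g = 1 since p1, p2 are coprime
print(g)                        # 1

def evaluate(poly, M):
    # evaluate a polynomial in t at the square matrix M (Horner's scheme)
    coeffs = Poly(poly, t).all_coeffs()    # coefficients, highest degree first
    result = zeros(*M.shape)
    for c in coeffs:
        result = result * M + c * eye(M.shape[0])
    return result

v = Matrix([1, 1, 1])
w1 = evaluate(p2 * q2, A) * v    # should lie in W1 = ker p1(A)
w2 = evaluate(p1 * q1, A) * v    # should lie in W2 = ker p2(A)

print(w1 + w2 == v)                            # True: v = w1 + w2
print(evaluate(p1, A) * w1 == zeros(3, 1))     # True: w1 is in W1
print(evaluate(p2, A) * w2 == zeros(3, 1))     # True: w2 is in W2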

Finally, for the claimed minimal polynomials. Let mi be the minimal polynomial of α|Wi. We have that pi(α|Wi) = 0, so mi divides pi and in particular the degree of pi is at least that of mi. This holds for each i. However, a polynomial annihilates α exactly when it annihilates both restrictions α|Wi, so p1(t)p2(t) = mα(t) = lcm{m1(t), m2(t)} and we obtain
deg p1 + deg p2 = deg mα ≤ deg m1 + deg m2.
It follows that deg pi = deg mi for each i; since mi divides pi and both are monic, it must be that mi = pi. The proof is complete.
