There is another, simpler method to invert matrices (simple as in easier to remember: it is quite slow for big matrices, and tedious if you aren't quick at performing linear operations). If you are familiar with solving linear equations in matrix notation, you may find this method easy.

The method:

1. Put an NxN identity matrix next to your NxN matrix, so you get an Nx2N matrix.
2. Use elementary row operations to turn the original part of the matrix into an identity matrix.
3. The part of the matrix that started as an identity matrix is now the inverse of your matrix.
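The three steps above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the function name `invert` and the use of exact fractions are my own choices, and a row swap is included for the case where a pivot happens to be zero.

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix by row-reducing the augmented block [A | I]."""
    n = len(matrix)
    # Step 1: put an identity matrix next to A, giving an n x 2n matrix.
    aug = [[Fraction(x) for x in row] + [Fraction(int(r == c)) for c in range(n)]
           for r, row in enumerate(matrix)]
    for col in range(n):
        # Find a row with a non-zero pivot in this column and swap it up
        # (raises StopIteration if the matrix is singular).
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Step 2a: scale the pivot row so the pivot becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Step 2b: subtract multiples of the pivot row from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # Step 3: the right-hand block is now the inverse.
    return [row[n:] for row in aug]
```

Running it on the 3x3 matrix worked through below reproduces the inverse obtained by hand.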

Say we've got a square matrix like this one:

+ 3 2 1 +
| 2 1 0 |
+ 0 2 2 +

Now we'll put next to it an identity matrix:

+ 3 2 1 | 1 0 0 +
| 2 1 0 | 0 1 0 |
+ 0 2 2 | 0 0 1 +

Our task now is to turn the left part of this matrix (that is, the matrix we are trying to invert) into an identity matrix, using elementary row operations.

As a first step, we divide the first row by three so we have a one in the top left corner:

+ 1 2/3 1/3 | 1/3 0 0 +
| 2  1   0  |  0  1 0 |
+ 0  2   2  |  0  0 1 +

Now we subtract twice the first row from the second row:

+ 1  2/3  1/3 |  1/3 0 0 +
| 0 -1/3 -2/3 | -2/3 1 0 |
+ 0   2    2  |   0  0 1 +

We multiply the second row by -3 to get a one:

+ 1  2/3  1/3 |  1/3  0 0 +
| 0   1    2  |   2  -3 0 |
+ 0   2    2  |   0   0 1 +

Now we subtract twice the second row from the third:

+ 1  2/3  1/3 |  1/3  0 0 +
| 0   1    2  |   2  -3 0 |
+ 0   0   -2  |  -4   6 1 +

We divide the last row by -2 to obtain our traditional one:

+ 1  2/3  1/3 |  1/3  0   0  +
| 0   1    2  |   2  -3   0  |
+ 0   0    1  |   2  -3 -1/2 +

Now we subtract one third of the last row from the first, and twice the last row from the second:

+ 1  2/3   0  | -1/3  1  1/6 +
| 0   1    0  |  -2   3   1  |
+ 0   0    1  |   2  -3 -1/2 +

And we only have to subtract 2/3 of the second row from the first to finish:

+ 1   0    0  |   1  -1 -1/2 +
| 0   1    0  |  -2   3   1  |
+ 0   0    1  |   2  -3 -1/2 +

Let's check the result:

+ 3 2 1 +   +  1 -1 -1/2 +   + 1 0 0 +
| 2 1 0 | x | -2  3   1  | = | 0 1 0 | 
+ 0 2 2 +   +  2 -3 -1/2 +   + 0 0 1 +

Thank god! I have made no mistakes!!
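The check can also be done mechanically. Here is a small snippet that multiplies the two matrices with exact fractions (the variable names are my own):

```python
from fractions import Fraction

A = [[3, 2, 1], [2, 1, 0], [0, 2, 2]]
A_inv = [[Fraction(1), Fraction(-1), Fraction(-1, 2)],
         [Fraction(-2), Fraction(3), Fraction(1)],
         [Fraction(2), Fraction(-3), Fraction(-1, 2)]]

# Multiply A by the candidate inverse; the result should be the identity.
product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]

assert product == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```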

As an exercise for the reader, you can apply the method to general matrices like these:

          + a b c +
+ a b +   | d e f |
+ c d + , + g h i + , ...

Doing this will (hopefully) give you the formulas shown in Wntrmute's writeup (I have never had the patience to do a general 3x3 matrix, let alone anything bigger, but the 2x2 case is not very complicated).
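If you would rather let a computer keep track of the symbols, a computer algebra system will reproduce the general 2x2 formula. A sketch assuming SymPy is installed:

```python
import sympy

a, b, c, d = sympy.symbols('a b c d')
M = sympy.Matrix([[a, b], [c, d]])

# Symbolic Gauss-Jordan inversion; the entries come out divided by the
# determinant a*d - b*c, matching the familiar 2x2 inverse formula.
M_inv = M.inv()
```

Each entry of `M_inv` is, up to simplification, the corresponding entry of [[d, -b], [-c, a]] divided by ad - bc.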

The proof of the method outlined by koala is quite simple. The method claims that if a sequence L of row operations turns a square matrix A into the identity, then applying that same sequence L to the identity yields A⁻¹.

To prove this, we first consider the matrices L_ij(λ), with entries l_pq given by:
l_ij = λ
l_pq = 1 if p = q (and (p, q) ≠ (i, j))
l_pq = 0 otherwise

so for example, if we are dealing with 3 x 3 matrices, then

        /1 0 4\
L_13(4)=|0 1 0|
        \0 0 1/
Let us now consider the product B = L_ij(λ)A.
Using the summation convention (repeated indices are summed over):
b_pq = l_pk a_kq
  • if i = j then:
    • if p = i then b_iq = l_ik a_kq; since l_ik = 0 for k ≠ i, we get b_iq = l_ii a_iq = λ a_iq
    • if p ≠ i then the same reasoning holds, except that l_pp = 1, therefore b_pq = a_pq
    From this it is clear that multiplying A by L_ii(λ) multiplies the ith row of A by λ, leaving the other rows untouched.
  • if i ≠ j then we can see that if p = i then b_iq = l_ik a_kq = l_ii a_iq + l_ij a_jq = a_iq + λ a_jq, while rows with p ≠ i are unchanged as before:
    multiplying A by L_ij(λ) adds λ times the jth row of A to the ith row, leaving the rest untouched.
Thus we can perform either of the row operations "multiply a row by λ" and "add λ times a row to another row" by multiplying A by a suitable matrix L_ij(λ). Note also that for all the row operations used in this method the matrices L_ij(λ) are triangular with non-zero terms on the diagonal: they are invertible.
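The claim is easy to try out numerically. A small sketch using NumPy (with 0-based indices, unlike the 1-based notation in the text; the helper name `L` is my own):

```python
import numpy as np

def L(n, i, j, lam):
    """The n x n matrix L_ij(lam): an identity matrix whose (i, j)
    entry is replaced by lam (0-based indices)."""
    m = np.eye(n)
    m[i, j] = lam
    return m

A = np.array([[3., 2., 1.],
              [2., 1., 0.],
              [0., 2., 2.]])

# Multiplying by L_11(1/3) scales the first row of A by 1/3 ...
scaled = L(3, 0, 0, 1/3) @ A
# ... and multiplying by L_21(-2) subtracts twice the first row
# from the second, leaving the other rows untouched.
sheared = L(3, 1, 0, -2) @ A
```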

We need one more type of matrix: one that will allow us to swap the ith and jth rows. Fortunately, this is not too difficult. We simply take S_ij (with i ≠ j) to be the matrix such that:
s_ij = 1
s_ji = 1
s_ii = 0
s_jj = 0
all the other terms on the diagonal are 1, and every remaining entry is 0. You may wish to check for yourself that this matrix does what I claim, and that it is invertible (it is its own inverse).

We can now perform every row operation we need by multiplying by an invertible matrix. So what our method is actually doing is finding a sequence of m invertible matrices such that L_m...L_2L_1 A = I.
L_m...L_2L_1 represents our sequence of row operations. Multiplying both sides on the right by A⁻¹ yields L_m...L_2L_1 = A⁻¹, that is, L_m...L_2L_1 I = A⁻¹, which is what we wanted: performing our sequence of row operations on the identity gives us A⁻¹.
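To make this concrete, the eight row operations from koala's worked example can each be written as one of these elementary matrices, and their product is exactly the inverse found by hand (a NumPy sketch, 0-based indices, helper name `L` my own):

```python
import numpy as np

def L(n, i, j, lam):
    """Identity matrix with entry (i, j) replaced by lam."""
    m = np.eye(n)
    m[i, j] = lam
    return m

A = np.array([[3., 2., 1.], [2., 1., 0.], [0., 2., 2.]])

steps = [L(3, 0, 0, 1/3),   # divide row 1 by 3
         L(3, 1, 0, -2),    # subtract 2 * row 1 from row 2
         L(3, 1, 1, -3),    # multiply row 2 by -3
         L(3, 2, 1, -2),    # subtract 2 * row 2 from row 3
         L(3, 2, 2, -1/2),  # divide row 3 by -2
         L(3, 0, 2, -1/3),  # subtract 1/3 * row 3 from row 1
         L(3, 1, 2, -2),    # subtract 2 * row 3 from row 2
         L(3, 0, 1, -2/3)]  # subtract 2/3 * row 2 from row 1

# Accumulate the product L_m ... L_2 L_1 by applying each step to I.
product = np.eye(3)
for step in steps:
    product = step @ product

# product @ A is the identity, so product itself is the inverse of A.
```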

You may also be interested in knowing what happens if this method is applied to a singular matrix: eventually you will reach a row containing nothing but zeros on the left-hand side, and it will not be possible to transform the matrix into the identity.
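You can watch this happen in a couple of lines of Python, using an arbitrary singular 2x2 matrix as an example:

```python
# A singular 2x2 matrix augmented with the identity:
# the second row is twice the first, so no inverse exists.
aug = [[1, 2, 1, 0],
       [2, 4, 0, 1]]

# The first elimination step: subtract twice row one from row two.
aug[1] = [x - 2 * y for x, y in zip(aug[1], aug[0])]

# The left half of the second row is now all zeros, so no pivot is
# available and the left block can never become the identity.
print(aug[1])  # [0, 0, -2, 1]
```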

From a pedagogical point of view, examples of matrix inversion are invariably coupled with the use of the determinant and 3x3 matrices.

From a numerical analysis point of view, it is almost never desirable to compute the inverse of a matrix explicitly: for "real" matrices, the inverse is rarely the thing you actually need. The reasons are as follows:
  • The determinant will overflow IEEE 754 doubles for realistically sized matrices, so cofactor-based inversion is out of the question.
  • Inverting a matrix takes O(n³) operations. One typically wants to solve for a vector x = A⁻¹b or a matrix C = A⁻¹B, and computing the inverse and then multiplying by it is far more expensive than performing an LU factorisation followed by triangular solves for each right-hand side.
There are instances where computing a full matrix inverse is unavoidable. In that case, one should still avoid the determinant: as suggested by koala's post, one can perform a factorisation (LU or Cholesky, for example) and use the columns of the identity matrix as multiple right-hand sides.
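As an illustration of the difference, NumPy exposes both routes; the 500x500 random system here is just an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)

# Preferred: solve A x = b directly. NumPy uses an LU factorisation
# (LAPACK gesv) under the hood and never forms the inverse.
x_solve = np.linalg.solve(A, b)

# Discouraged: form the inverse explicitly, then multiply.
x_inv = np.linalg.inv(A) @ b

# The answers agree here, but solve() does substantially less work and
# is better behaved when A is ill-conditioned.
assert np.allclose(x_solve, x_inv)
```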
