
The process of multiplying two matrices together.

The process is defined for any pair of matrices such that the width of the first matrix is equal to the height of the second matrix. The process is NOT commutative.

To multiply two matrices, you want a triple loop. Assume that the first matrix is of dimension m x k and the second matrix is of dimension k x n (rows x columns).

```
for row = 0 to m-1
    for column = 0 to n-1
        Mres[row][column] =
              M1[row][0]   * M2[0][column]
            + M1[row][1]   * M2[1][column]
            + ...
            + M1[row][k-1] * M2[k-1][column]
```
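As a concrete sketch, the triple loop above might look like this in Python (the function name `mat_mul` and the list-of-lists representation are assumptions, not part of the original):

```python
def mat_mul(m1, m2):
    """Multiply an m x k matrix by a k x n matrix, both as lists of lists."""
    m, k, n = len(m1), len(m2), len(m2[0])
    assert len(m1[0]) == k, "width of m1 must equal height of m2"
    result = [[0] * n for _ in range(m)]
    for row in range(m):
        for column in range(n):
            # entry (row, column) is the dot product of a row of m1
            # with a column of m2
            for i in range(k):
                result[row][column] += m1[row][i] * m2[i][column]
    return result
```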
It is interesting that matrix multiplication (on square matrices) can be performed with time bounds considerably lower than O(n^3). In fact, it has never been proven that multiplying 2 n*n square matrices cannot be done in time O(n^2) (although nobody believes that this is possible).

The best bound along these lines, O(n^2.376) due to Coppersmith and Winograd, is suitable only for unreasonably large matrices. The first better-than-O(n^3) algorithm was due to Strassen. It is very simple, and explained below.

We shall only be concerned with multiplying matrices when n=2^k (although any matrix can be zero-padded to attain this size, which just multiplies the time by some constant factor). To multiply 2 such matrices, divide each into 4 equal-sized blocks, and multiply as follows:

```
(A | B)   (E | F)        (AE+BG | AF+BH)
(--+--) x (--+--)    =   (------+------)
(C | D)   (G | H)        (CE+DG | CF+DH)
```
So it would appear that to multiply 2 large matrices, we need to multiply 8 pairs of half-sized matrices; this would save us no time. So let's get by with only 7 multiplications! Additions and subtractions can be done in time proportional to the size of the matrix, so we're less worried about them.

Define the following seven n/2 * n/2 matrices:

• a = (A+D)*(E+H)
• b = (C+D)*E
• c = A*(F-H)
• d = D*(G-E)
• e = (A+B)*H
• f = (C-A)*(E+F)
• g = (B-D)*(G+H)
Then all four submatrices of the desired result are sums of these 7 matrices:
• AE+BG = a+d-e+g
• AF+BH = c+e
• CE+DG = b+d
• CF+DH = a+f-b+c
How long does it take? Well, obviously we're going to multiply our smaller matrices using the same algorithm, recursively. All the additions and subtractions used to combine the smaller matrices are O(n^2) (since we only make a fixed number of them, and each takes time O(n^2)), so we get the recurrence
```
T(n) = O(n^2) + 7*T(n/2)
```

This has solution

```
T(n) = c*n^2 * (1 + 7/4 + 49/16 + ... + (7/4)^(k-1))
     = c' * 7^k = O(n ^ (log_2 7))
```
and the exponent of n is less than 2.81.
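The recursion above can be sketched directly in Python, under the same assumption that n is a power of two (helper names like `quad`, `add`, and `sub` are illustrative, not from the original; this is a sketch, not a tuned implementation):

```python
def strassen(X, Y):
    """Strassen multiply for n x n matrices (lists of lists), n a power of 2."""
    n = len(X)
    if n == 1:
        return [[X[0][0] * Y[0][0]]]
    h = n // 2

    def quad(M):
        # split M into four h x h blocks: top-left, top-right, bottom-left, bottom-right
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])

    def add(P, Q):
        return [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    def sub(P, Q):
        return [[p - q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]

    A, B, C, D = quad(X)
    E, F, G, H = quad(Y)
    # the seven half-sized products from the writeup
    a = strassen(add(A, D), add(E, H))
    b = strassen(add(C, D), E)
    c = strassen(A, sub(F, H))
    d = strassen(D, sub(G, E))
    e = strassen(add(A, B), H)
    f = strassen(sub(C, A), add(E, F))
    g = strassen(sub(B, D), add(G, H))
    # reassemble: AE+BG = a+d-e+g, AF+BH = c+e, CE+DG = b+d, CF+DH = a+f-b+c
    top = [tl + tr for tl, tr in zip(add(sub(add(a, d), e), g), add(c, e))]
    bot = [bl + br for bl, br in zip(add(b, d), add(sub(add(a, f), b), c))]
    return top + bot
```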

Ouch, the above writeups must be hard for some to follow. Multiplying matrices can easily be done by lining your two matrices up and hacking away at it, and it works in no time at all.

Before starting, you must look at your two matrices to determine whether you can multiply them. Two matrices can be multiplied only if the first matrix has the same number of columns as the second has rows:

If matrix M is aXb (a by b) and N is cXd, then b=c must hold for the product to exist. The resulting matrix will have dimensions of aXd.

```
M= 3 2    N= 4 5 2 3
   4 4       1 3 8 6
   8 5
```

M is 3x2, and N is 2x4. 2=2, so they work. The resulting matrix will be 3x4.

Start with row one of M {3,2} and column one of N {4,1}. Multiply the first two numbers (3 and 4) and multiply the second two (2 and 1). Add the results:

(3*4)+(2*1)=12+2=14

The item in the first row and first column of the new matrix (MN) is 14.

Now, do the same procedure (keeping with the same row from M) for every column in N. The results will comprise the first row of MN. After that, go to the second row of M, doing the multiplication procedure for every column in N again, and keep going until you run out of rows in M.

```
MN= (3*4)+(2*1) | (3*5)+(2*3) | (3*2)+(2*8) | (3*3)+(2*6)
    ------------+-------------+-------------+------------
    (4*4)+(4*1) | (4*5)+(4*3) | (4*2)+(4*8) | (4*3)+(4*6)
    ------------+-------------+-------------+------------
    (8*4)+(5*1) | (8*5)+(5*3) | (8*2)+(5*8) | (8*3)+(5*6)

MN= 14 | 21 | 22 | 21
    ---+----+----+---
    20 | 32 | 40 | 36
    ---+----+----+---
    37 | 55 | 56 | 54
```
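A worked example like this can be checked mechanically. Here is a minimal Python sketch (the variable names are assumptions; each entry of MN is the dot product of a row of M with a column of N):

```python
M = [[3, 2],
     [4, 4],
     [8, 5]]
N = [[4, 5, 2, 3],
     [1, 3, 8, 6]]

# entry (i, j) of MN is the dot product of row i of M with column j of N
MN = [[sum(M[i][t] * N[t][j] for t in range(len(N)))
       for j in range(len(N[0]))]
      for i in range(len(M))]
```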

If variables make more sense to you, here's another representation:

```
M= a b
   c d
   e f

N= z y x w
   v u t s

MN= (a*z)+(b*v) | (a*y)+(b*u) | (a*x)+(b*t) | (a*w)+(b*s)
    ------------+-------------+-------------+------------
    (c*z)+(d*v) | (c*y)+(d*u) | (c*x)+(d*t) | (c*w)+(d*s)
    ------------+-------------+-------------+------------
    (e*z)+(f*v) | (e*y)+(f*u) | (e*x)+(f*t) | (e*w)+(f*s)
```

Final note: matrix multiplication is not commutative. That is, AB does not necessarily equal BA. Also, while AB may be possible, if the number of rows in A does not equal the number of columns in B, then BA is not possible. Weird, huh?
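The non-commutativity is easy to see with a tiny sketch (these particular matrices and the helper `mul` are illustrative, not from the original):

```python
def mul(X, Y):
    """Naive matrix product; assumes the shapes are compatible."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

AB = mul(A, B)  # multiplying by B on the right swaps A's columns
BA = mul(B, A)  # multiplying by B on the left swaps A's rows
```

Since AB swaps the columns of A while BA swaps its rows, the two products differ.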

In the normal manual method, you have to keep track of which row and column you're at in both matrices. If you lose track, finding your place again can be difficult. There is an easier method that I call the proportions-vectors method:

```
                     +-     -+
+-              -+   |  5  6 |
|  0  5  6 11 13 |   | 11 -5 |
| 10 -1  8  0  0 | x |  0  2 |
|  0  0  1  0  6 |   |  0  0 |
+-              -+   |  1  0 |
                     +-     -+
```

Assume you have a list of vectors that you want to mix together in certain proportions. The left matrix holds the proportions for three mixes. The right matrix holds the five vectors that are going to be squeezed together in various ways:

```
                     +-       -+
+-              -+   | ( 5  6) |
|  0  5  6 11 13 |   | (11 -5) |
| 10 -1  8  0  0 | x | ( 0  2) |
|  0  0  1  0  6 |   | ( 0  0) |
+-              -+   | ( 1  0) |
                     +-       -+
```

Now multiply:

```
+-                                               -+
|  0(5 6) + 5(11 -5) + 6(0 2) + 11(0 0) + 13(1 0) |
| 10(5 6) - 1(11 -5) + 8(0 2) +  0(0 0) +  0(1 0) |
|  0(5 6) + 0(11 -5) + 1(0 2) +  0(0 0) +  6(1 0) |
+-                                               -+
```

Simplify each mix:

```
+-        -+
| (68 -13) |
| (39  81) |
| ( 6   2) |
+-        -+
```

Remove vector notation:

```
+-      -+
| 68 -13 |
| 39  81 |
|  6   2 |
+-      -+
```
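The proportions-vectors view translates directly into code: each row of the result is a linear combination of the right-hand matrix's rows. A minimal Python sketch (the names `mix`, `P`, and `V` are assumptions):

```python
P = [[0, 5, 6, 11, 13],   # proportions: one row per mix
     [10, -1, 8, 0, 0],
     [0, 0, 1, 0, 6]]
V = [[5, 6], [11, -5], [0, 2], [0, 0], [1, 0]]   # the five vectors

def mix(props, vectors):
    """Each output row is sum of props[i] * vectors[i], a linear combination."""
    dim = len(vectors[0])
    return [sum(p * v[j] for p, v in zip(props, vectors)) for j in range(dim)]

result = [mix(row, V) for row in P]
```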

Now you only have to pay attention to which row you're at in each matrix. If you lose track, finding your place again is much easier. Not only that, but other facts about matrix multiplication become easier to see:

• Matrix multiplication isn't commutative because you can't expect to exchange the proportions and the vectors and get the same result (except in special cases).
• The number of columns on the left has to match the number of rows on the right because both tell you how many vectors you're squeezing together.
• The number of rows on the left matches the number of rows in the result because both tell you how many different mixes you made.
• The number of columns on the right matches the number of columns in the result because both tell you the dimension of the vectors.
