While all of the above writeups give correct and useful definitions and methods for computing or dealing with determinants, they omit one very useful tool: row and column operations.

These can be helpful both with the concrete task of finding the determinant of a given numerical matrix and with slightly more abstract beasts, such as calculating this determinant:

|1  x_1  x_1^2 ... x_1^(n-1)|
|1  x_2  x_2^2 ... x_2^(n-1)|
|.                         .|
|.                         .|
|1  x_n  x_n^2 ... x_n^(n-1)|
which is known as the Vandermonde determinant (the underscores indicate subscripts).

Doing this by expanding the determinant along a row or column is hard (but none the less left as an exercise to the reader). Not only is it hard, but if at the end I tell you that I want the determinant in a factorised form, then you will probably be in a world of pain, even if there are only 6 or 7 variables.

Definitions

The main row operation is adding a multiple of one row to another row. These are not quite the same as the operations defined in elementary row operations. I have put to one side multiplication of a row by a non-zero scalar λ, as this multiplies the determinant by λ, and the swapping of 2 rows, as this changes the sign of the determinant. It is the adding of multiples of rows that really makes things happen; the other 2 are mainly for convenience. Column operations are defined similarly: just substitute column for row.
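If you have Python and numpy to hand (my addition, not part of the original writeup), a quick sanity check shows each operation doing exactly what is claimed to a small determinant:

import numpy as np

A = np.array([[1., 2.], [3., 4.]])   # det(A) = -2

# Adding a multiple of one row to another: determinant unchanged
B = A.copy()
B[1] += 5 * B[0]
print(np.linalg.det(A), np.linalg.det(B))   # -2.0  -2.0

# Multiplying a row by a scalar: determinant multiplied by that scalar
C = A.copy()
C[0] *= 3
print(np.linalg.det(C))                     # -6.0

# Swapping two rows: determinant changes sign
D = A[[1, 0]]
print(np.linalg.det(D))                     # 2.0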

But what do I do with them?

A quick example is the easiest way to go. Consider the determinant:

|1 2 3 4|
|4 5 6 7|
|6 8 9 2|
|2 4 5 6|
It's a 4x4 matrix, so that means computing four 3x3 determinants, each of which involves three 2x2 determinants. Somewhat tedious. But if we subtract 4 times the first row from the 2nd, 6 times from the 3rd and 2 times from the 4th, then the matrix becomes:
|1  2  3   4|
|0 -3 -6  -9|
|0 -4 -9 -22|
|0  0 -1  -2|
Already a lot less painful: expanding along the first column leaves only one 3x3 determinant, and that one contains a zero entry, so one of its 2x2 terms drops out as well. Of course you could continue this process.
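If you want to check the arithmetic, here is a short numpy sketch (mine, not the original author's working) that applies the same row operations and confirms the determinant survives them:

import numpy as np

A = np.array([[1., 2., 3., 4.],
              [4., 5., 6., 7.],
              [6., 8., 9., 2.],
              [2., 4., 5., 6.]])

R = A.copy()
R[1] -= 4 * R[0]   # subtract 4 times the first row from the 2nd
R[2] -= 6 * R[0]   # subtract 6 times the first row from the 3rd
R[3] -= 2 * R[0]   # subtract 2 times the first row from the 4th

print(R)                                    # matches the reduced matrix above
print(np.linalg.det(A), np.linalg.det(R))   # both are 24 (up to floating-point noise)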

But why does it work?

Good question, glad that you asked. Row operations can be represented by matrices. To swap rows i and j, multiply (on the left) by this matrix:

start with the identity matrix and change these elements: A_ii = A_jj = 0 and A_ij = A_ji = 1.
So to swap rows 1 and 3 of a 4x4 matrix, you would use this matrix:

|0 0 1 0|
|0 1 0 0|
|1 0 0 0|
|0 0 0 1|
To add λ times row j to row i, you start with an identity matrix, and change A_ij to λ, so this matrix adds 3 times the 4th row to the second:
|1 0 0 0|
|0 1 0 3|
|0 0 1 0|
|0 0 0 1|
To multiply row i by a scalar λ, again start with an identity matrix and change A_ii to λ, so this matrix multiplies the 3rd row by 2:
|1 0 0 0|
|0 1 0 0|
|0 0 2 0|
|0 0 0 1|
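As a rough sketch, here are the three kinds of elementary matrix in code (numpy assumed, 0-based indexing, so "row 1" in the text is index 0 below; the helper names are mine, made up for illustration):

import numpy as np

def swap_rows(n, i, j):
    # Elementary matrix that swaps rows i and j when multiplied on the left
    E = np.eye(n)
    E[[i, j]] = E[[j, i]]
    return E

def add_multiple_of_row(n, i, j, lam):
    # Elementary matrix that adds lam times row j to row i
    E = np.eye(n)
    E[i, j] = lam
    return E

def scale_row(n, i, lam):
    # Elementary matrix that multiplies row i by lam
    E = np.eye(n)
    E[i, i] = lam
    return E

A = np.arange(16, dtype=float).reshape(4, 4)
print(swap_rows(4, 0, 2) @ A)               # rows 1 and 3 exchanged
print(add_multiple_of_row(4, 1, 3, 3) @ A)  # 3 times the 4th row added to the 2nd
print(scale_row(4, 2, 2) @ A)               # 3rd row doubled

# and their determinants come out as claimed in the next paragraph:
print(np.linalg.det(swap_rows(4, 0, 2)))               # -1
print(np.linalg.det(add_multiple_of_row(4, 1, 3, 3)))  #  1
print(np.linalg.det(scale_row(4, 2, 2)))               #  2 (i.e. lambda)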
So we could represent our entire sequence of row and column operations by a product of matrices (multiply by these very same matrices on the right-hand side for column operations). Look at the matrices defined above: their determinants are very easy to calculate, namely -1 for the row-swapping matrix, λ for multiplying a row by λ, and 1 for adding a multiple of a row to another one.

One of the properties of determinants is that, if A and B are two nxn matrices, det(A.B) = det(A).det(B). So we can transform our matrix into something whose determinant is easy to calculate, and divide by the product of the determinants of the transformation matrices to get the original determinant. Even better, if you only add multiples of one row to another, then your row operations do not change the determinant at all!
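A quick numerical illustration of both facts, again assuming numpy (E below is an "add a multiple of a row" matrix, so its determinant is 1):

import numpy as np

A = np.random.rand(4, 4)

# E adds -4 times row 1 to row 2, so det(E) = 1
E = np.eye(4)
E[1, 0] = -4.0

print(np.isclose(np.linalg.det(E @ A),
                 np.linalg.det(E) * np.linalg.det(A)))     # True: det(E.A) = det(E).det(A)
print(np.isclose(np.linalg.det(E @ A), np.linalg.det(A)))  # True: the operation left det unchanged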

Applications:

Numerical evaluation of determinants: basically perform Gaussian elimination, then calculate the product of the terms on the diagonal, remembering to flip the sign once for every row swap made along the way. Gaussian elimination is O(n^3), so this is much cheaper than expanding the determinant (which is O(n!)). It is also common to use LU decomposition to calculate determinants (among other things).
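As a sketch of what that looks like in practice, here is a plain-Python determinant via Gaussian elimination with partial pivoting; the function name is mine and it is only meant to illustrate the idea, not to compete with a library routine:

def det_by_elimination(M):
    # Determinant via Gaussian elimination: O(n^3) instead of O(n!).
    # Only row swaps (tracked by a sign) and adding multiples of rows are used,
    # so the determinant is the sign times the product of the final diagonal.
    A = [row[:] for row in M]        # work on a copy
    n = len(A)
    sign = 1.0
    for k in range(n):
        # partial pivoting: bring the largest entry in column k onto the diagonal
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if A[p][k] == 0.0:
            return 0.0               # an all-zero pivot column means a zero determinant
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign             # each swap flips the sign
        for r in range(k + 1, n):
            factor = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= factor * A[k][c]
    result = sign
    for k in range(n):
        result *= A[k][k]
    return result

print(det_by_elimination([[1., 2., 3., 4.],
                          [4., 5., 6., 7.],
                          [6., 8., 9., 2.],
                          [2., 4., 5., 6.]]))   # approximately 24, the example from earlier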

More abstract determinants: recall the Vandermonde determinant given above. Subtracting the first row from all the others gives:

|1  x_1      x_1^2       ...           x_1^(n-1)|
|0  x_2-x_1  x_2^2-x_1^2 ... x_2^(n-1)-x_1^(n-1)|
|.                                             .|
|.                                             .|
|0  x_n-x_1  x_n^2-x_1^2 ... x_n^(n-1)-x_1^(n-1)|
Note that we can factor (x_2-x_1) out of the second row, (x_3-x_1) out of the 3rd row and so on.

You can then subtract the 2nd row from all those beneath it, factor a term out of each of those rows, and so on, eventually yielding the answer ∏_{i>j}(x_i - x_j).
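You can spot-check that formula numerically, for instance with numpy's built-in Vandermonde constructor (the sample values below are my own, chosen arbitrarily):

import numpy as np
from itertools import combinations

x = np.array([2.0, 3.0, 5.0, 7.0])
n = len(x)

# Row i is 1, x_i, x_i^2, ..., x_i^(n-1), matching the determinant above
V = np.vander(x, increasing=True)

det_V = np.linalg.det(V)
product = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])

print(np.isclose(det_V, product))   # True: both come out as 240 for these values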