When you (a coder) don't have an FPU to do all the dirty work of decimal arithmetic for you, and you want your program to perform decimal arithmetic anyway, you must use fixed-point arithmetic. (Or you can use someone else's subroutines to do it, but they also use fixed-point arithmetic. Back to square one.)

Lacking an FPU, your (probably old) CPU only has opcodes that handle integer arithmetic. Therefore you must figure out a way to use those integer instructions to do fractional math on your numbers. But how?

Let's say you want to add 10.5 to 5.5.

  1. Figure out what power of 10 you have to multiply these numbers by to make them integers. In this case, we only need to multiply them by 10^1. (In other words, 10.)

  2. Perform the multiplication:

    10.5 * 10 = 105
    5.5 * 10 = 55

  3. Now add the numbers:

    105 + 55 = 160

  4. Since we are adding, the rule is to divide the result by the same factor we multiplied the originals by (in this case, 10). This gives us the decimal result:

    160 / 10 = 16

Subtraction is done the same way. 105 - 55 = 50. 50 / 10 = 5.
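
Here is a minimal C sketch of both operations. The names (fp_add, fp_sub, SCALE) are mine, not from any particular library; the point is just that equally scaled integers add and subtract directly, and the scale only matters when you print the result.

    #include <stdio.h>

    #define SCALE 10  /* both operands were multiplied by 10^1 */

    /* Equally scaled fixed-point values add and subtract directly;
       the result carries the same scale factor as the inputs. */
    int fp_add(int a, int b) { return a + b; }
    int fp_sub(int a, int b) { return a - b; }

    int main(void) {
        int a = 105;                 /* 10.5 * SCALE */
        int b = 55;                  /*  5.5 * SCALE */
        int sum  = fp_add(a, b);     /* 160 -> 16.0  */
        int diff = fp_sub(a, b);     /*  50 ->  5.0  */
        printf("%d.%d\n", sum / SCALE, sum % SCALE);   /* 16.0 */
        printf("%d.%d\n", diff / SCALE, diff % SCALE); /* 5.0  */
        return 0;
    }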

Multiplication is done in a similar, but not identical, way: 105 * 55 = 5775. 5775 / 100 = 57.75. (Because each integer was 10 times greater than its real number, their product is 100 (or 10 * 10) times greater than the real result.)
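
In C that rescaling looks like the sketch below (same made-up names as before). One factor of SCALE is divided back out so the product carries a single scale factor like the operands do; the wider intermediate type guards against overflow, and note that at one decimal digit of precision the .05 is truncated.

    #include <stdio.h>

    #define SCALE 10

    /* The raw product of two scaled integers carries SCALE * SCALE,
       so one factor of SCALE is divided back out. The long long
       intermediate keeps the double-width product from overflowing. */
    int fp_mul(int a, int b) {
        return (int)(((long long)a * b) / SCALE);
    }

    int main(void) {
        int p = fp_mul(105, 55);                 /* 5775 / 10 = 577 */
        printf("%d.%d\n", p / SCALE, p % SCALE); /* prints 57.7     */
        return 0;
    }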

Division is rather an unruly beast, and from what I remember from the ASM class I took in 1996, it is basically done as a reciprocal multiplication:

10 / 2 =
100 / 20 =
100 * (1/20) =
5

You are saved from having to rescale the result: the dividend and divisor carry the same scale factor, so it cancels out and the fraction is already reduced. 100/1000 = 10/100 = 1/10. Multiplying top and bottom by the same power of 10 changes nothing.
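
A C sketch of this, with the same hypothetical names as above. Since the scale factors cancel, the plain quotient would come out as a bare, unscaled integer; one common trick (which this sketch assumes) is to scale the dividend up once more before dividing, so the answer comes back in fixed-point form.

    #include <stdio.h>

    #define SCALE 10

    /* a and b each carry one factor of SCALE, which cancels in
       a / b; the dividend is scaled up once more so the quotient
       comes back in fixed-point form instead of as a bare integer. */
    int fp_div(int a, int b) {
        return (int)(((long long)a * SCALE) / b);
    }

    int main(void) {
        int q = fp_div(100, 20);                 /* 10 / 2     */
        printf("%d.%d\n", q / SCALE, q % SCALE); /* prints 5.0 */
        return 0;
    }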