In general, an operation on floating point numbers. The term is really only used in computing, rather than maths, as mathematicians usually treat reals as infinite-precision quantities (except when dealing with things like sampling or applied numerical techniques).

Usually, one expects processors to perform floating point operations more slowly than fixed-point operations, as the hardware implementation of a fixed-point operation is incredibly simple (as in, undergraduate computer science students can build an adding circuit out of NAND gates if they apply themselves).
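
To give a feel for just how simple, here is a toy sketch (mine, not from any particular textbook) of the classic 9-NAND-gate full adder, chained into a ripple-carry integer adder. Function names and the 8-bit width are illustrative assumptions.

    def nand(a, b):
        # The only primitive we allow ourselves: a single NAND gate.
        return 1 - (a & b)

    def full_adder(a, b, carry_in):
        # Standard full adder built from nine NAND gates.
        t1 = nand(a, b)
        t2 = nand(a, t1)
        t3 = nand(b, t1)
        s1 = nand(t2, t3)            # s1 = a XOR b
        t4 = nand(s1, carry_in)
        t5 = nand(s1, t4)
        t6 = nand(carry_in, t4)
        total = nand(t5, t6)         # sum bit = s1 XOR carry_in
        carry_out = nand(t1, t4)     # carry = (a AND b) OR (s1 AND carry_in)
        return total, carry_out

    def add(x, y, bits=8):
        # Ripple-carry adder: one full adder per bit, carry chained along.
        carry, result = 0, 0
        for i in range(bits):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    # add(3, 5) == 8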

In contrast, floating point operations have to take account of the fact that there is both a mantissa and an exponent (either explicitly or implicitly): the exponents of the operands might not be the same, so the mantissas have to be aligned first, and mantissa overflow has to be detected and the exponent adjusted afterwards. The logic is more complex, which makes optimisation harder and the hardware more costly. A common way of dealing with this is to perform many such operations in parallel, as in a vector processor. A rough sketch of the extra work follows below.
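
As a deliberately simplified illustration (my own sketch, not how any real FPU is coded), here is a floating point addition on (mantissa, exponent) pairs representing mantissa * 2**exponent, with an assumed 24-bit mantissa width. Real hardware also has to handle signs, rounding, hidden bits, infinities and NaNs.

    def fp_add(a, b):
        (ma, ea), (mb, eb) = a, b

        # 1. Align: shift the mantissa of the smaller-exponent operand right
        #    until both operands share an exponent.
        if ea < eb:
            ma >>= (eb - ea)
            ea = eb
        else:
            mb >>= (ea - eb)
            eb = ea

        # 2. Add the aligned mantissas (this bit is the easy fixed-point add).
        m, e = ma + mb, ea

        # 3. Renormalise: if the mantissa overflowed its 24-bit width,
        #    shift it back down and bump the exponent.
        while m >= (1 << 24):
            m >>= 1
            e += 1
        return (m, e)

    # fp_add((1 << 23, 0), (1 << 23, 0)) == (1 << 23, 1), i.e. 2**23 + 2**23 = 2**24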
