A rounding error is an error introduced by rounding a number or numbers. Although it is called an error, it is often not a mistake: if you are adding 12.01 and 6.03 and round them off to 12 + 6, you know what you did, and you know the consequences.

Sometimes this can have larger consequences than you predict. For example, if you are multiplying 1/11 by 1/11, you might round the repeating decimal 0.090909... to 0.09, giving 0.09 x 0.09 = 0.0081. Or you might round it to 0.0909, giving 0.0909 x 0.0909 = 0.00826281. Reporting these to four decimal places, you would write the first result as 0.0081 and the second as 0.0083. If the ten-thousandths place truly matters, you have a problem: the two answers disagree, and only the second agrees with the exact product, 1/121 = 0.008264...
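For concreteness, here is a short Python sketch of the two roundings above. The variable names are illustrative, and Python's built-in round() rounds to the nearest value, which happens to match the figures in this example.

    # Rounding 1/11 to two versus four decimal places before squaring.
    coarse = round(1/11, 2)            # 0.09
    fine = round(1/11, 4)              # 0.0909

    print(round(coarse * coarse, 4))   # 0.0081
    print(round(fine * fine, 4))       # 0.0083
    print(round((1/11) * (1/11), 4))   # 0.0083  (exact product is 1/121 = 0.008264...)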

The correct procedure is to report accuracy limited by the least precise number. Thus, 0.09 is accurate to two decimal places, and it is traditional to report to no more than one additional decimal place (in this case, three). Our answer is therefore reported not as 0.0081 but as 0.008. On the other hand, if the ten-thousandths place is important, then you need to start with 0.0909 and not mess around with the weaker approximation.
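One way to express that reporting rule in Python, assuming the inputs' precision is given as a number of decimal places (the helper name report() is hypothetical, not a standard function):

    # Report a computed value no more precisely than the inputs justify,
    # allowing at most one extra decimal place beyond the inputs' precision.
    def report(value, input_decimal_places, extra=1):
        return round(value, input_decimal_places + extra)

    print(report(0.09 * 0.09, 2))        # 0.008   (inputs known to two decimal places)
    print(report(0.0909 * 0.0909, 4))    # 0.00826 (inputs known to four decimal places)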

In the computer age, rounding problems can produce very large effects when mathematical tasks that require repeated rounding of very precise decimals are handed to an unsupervised and uncritical computer.
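A minimal sketch of that effect, assuming a program that keeps only two decimal places of the running total at every step: the exact sum of 1/11 taken 10,000 times is about 909.09, but the repeatedly rounded total drifts roughly nine units away.

    # Accumulating 1/11 ten thousand times, rounding the running total
    # to two decimal places after every addition.
    exact_total = 0.0
    rounded_total = 0.0
    for _ in range(10_000):
        exact_total += 1/11
        rounded_total = round(rounded_total + round(1/11, 2), 2)

    print(exact_total)     # approximately 909.09
    print(rounded_total)   # 900.0 -- the repeated rounding shifted the answer by about 9

Each step discards about 0.0009 (the difference between 0.090909... and 0.09), and over 10,000 steps those small losses accumulate to roughly 9.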


'Rounding error' can also be used as an informal idiom to indicate a trivial amount.

