According to http://whatis.techtarget.com/definition/0,,sid9_gci527311,00.html,
An order of magnitude is an exponential change of plus-or-minus 1 in the value of a quantity or unit. The term is generally used in conjunction with power-of-10 scientific notation.
For example, the Richter Scale is a (base-10) logarithmic scale: an earthquake measuring 8 on the Richter Scale produces ground motion ten times greater than one measuring 7, so it can correctly be said that an 8 is an order of magnitude bigger than a 7. Another example of a logarithmic scale is the decibel, typically used to measure sound or radiant energy (e.g. radio waves). For more information on decibels, see http://arts.ucsc.edu/EMS/Music/tech_background/TE-06/teces_06.html.

Although I've never heard it used this way among my colleagues, isogolem points out that computer geeks might use "order of magnitude" to mean a doubling of a binary number, because in base 2 a doubling is an increase of exactly one in the exponent.
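
If you want to see the difference numerically, here is a minimal sketch in Python (my own illustration, not something from the sources above): a doubling adds exactly 1 to the base-2 exponent, but only about 0.3 to the base-10 exponent.

    import math

    # In base 2, a doubling raises the exponent by exactly 1 (one binary
    # "order of magnitude"); in base 10 it only adds about 0.301.
    for n in [8, 16, 1_000_000]:
        doubled = 2 * n
        print(n, "->", doubled,
              "log2 change:", round(math.log2(doubled) - math.log2(n), 3),
              "log10 change:", round(math.log10(doubled) - math.log10(n), 3))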

I suppose that when used rhetorically rather than scientifically, "order of magnitude" could just mean a really big difference without specific quantification. However, in everyday speech and occasionally in writing, you'll encounter a common misuse of "order of magnitude" to mean a doubling of a specific quantity. It's common because it sounds more dramatic than "a doubling", rather like saying something is 200% of something else. Unlike 200%, however, using "order of magnitude" to refer to a doubling of a quantity not expressed in binary is incorrect. In my opinion, when "order of magnitude" refers to a specific, quantitative difference, it must refer to an exponential difference; and unless otherwise specified, quantities are assumed to be expressed in base 10.

An order of magnitude is, roughly, a factor of ten. So 1000 is an order of magnitude bigger than 100, which is an order of magnitude bigger than 10, and so on.
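
As a rough illustration (a small Python sketch of my own; the function name order_of_magnitude is just made up for the example), the order of magnitude of a number is essentially the integer part of its base-10 logarithm:

    import math

    def order_of_magnitude(x):
        # The power of ten in x's scientific-notation form, e.g. 3500 -> 3.
        return math.floor(math.log10(abs(x)))

    print(order_of_magnitude(10))     # 1
    print(order_of_magnitude(100))    # 2
    print(order_of_magnitude(1000))   # 3 - each step up is one order of magnitude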

Order of magnitude notation (also known as scientific notation) is based on this concept, expressing it compactly in the form x×10^y. For example, we can write the speed of light as 3×10^8 m/s, which means 3 with 8 zeros after it - 300,000,000, or three hundred million. This is indispensable in science, where almost every discipline needs to talk about numbers which are much, much bigger than other numbers, and nobody wants to have to write out that an electron weighs 0.000,000,000,000,000,000,000,000,000,000,91 kg when they could just write 9.1×10^-31 kg. The importance of things happening on mind-bogglingly different scales is explored to great effect in the short film Powers of Ten.
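
Most programming languages can print this notation directly; for instance, in Python (a small sketch of my own, reusing the two constants quoted above):

    speed_of_light = 300_000_000   # m/s, roughly 3 x 10^8
    electron_mass = 9.1e-31        # kg

    # The 'e' format shows mantissa and exponent - order of magnitude notation.
    print(f"{speed_of_light:.1e}")   # 3.0e+08
    print(f"{electron_mass:.1e}")    # 9.1e-31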

Very big and very tiny numbers are always difficult for humans to get their heads round, and writing them down the way we write more familiar numbers is just disorienting - one reason that public discourse about science, finance and anything to do with statistics tends to be massively confused and often misleading. Switching to a logarithmic measure like order of magnitude is an incredibly useful trick for expressing enormous differences in quantity. It is still not altogether intuitive, but at least it makes their representation a tractable problem.

Logarithmic scales are also sometimes used on graphs, so that an increase of one order of magnitude is represented by a constant distance along one of the axes. This can be confusing at first, but it is invaluable when variations matter at both very small and very large scales, and especially when the variables you are interested in change exponentially. In a sense, a logarithmic scale helps us to compare like with like - a difference of 1g is big when you're talking about something that only weighs 10g altogether, but if you add 1g to something that already weighs 1000g, it is likely to be irrelevant. A linear scale obscures this by showing a 1g difference as equal wherever it occurs, whereas a logarithmic scale shows a 10% increase as equal whether it is 1g added to 10g or 100g added to 1000g - it brings out differences on the same order of magnitude as the thing that is changing.
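
For the graph point, here is a minimal sketch assuming the matplotlib library is available (my own example, not something the writeup relies on): the same exponentially growing data is squashed flat on a linear axis but spreads out evenly on a logarithmic one.

    import matplotlib.pyplot as plt

    x = list(range(1, 11))
    y = [10 ** i for i in x]          # grows by an order of magnitude each step

    fig, (linear_ax, log_ax) = plt.subplots(1, 2)
    linear_ax.plot(x, y)              # early values look indistinguishable from zero
    linear_ax.set_title("linear scale")

    log_ax.plot(x, y)
    log_ax.set_yscale("log")          # equal spacing per order of magnitude
    log_ax.set_title("logarithmic scale")
    plt.show()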

As an interesting aside, the human perceptual system also uses the same trick, presumably for the same reason of needing to meaningfully compare amounts that sometimes differ wildly. For example a repeated doubling of brightness, pitch or loudness is perceived as a steady increase, when in fact the rate of change is constantly increasing. This comes out in the way cameras and televisions are designed, the fact that musical scales are divided into octaves, and the way that an increase of ten decibels represents a ten-fold increase in the power of a sound.
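
The decibel relationship in particular is easy to write down: a decibel value is ten times the base-10 logarithm of a power ratio, so +10 dB means ten times the power and a doubling is only about +3 dB. A quick Python sketch of my own, with an arbitrary reference power:

    import math

    def decibels(power, reference_power=1.0):
        # Ten times the base-10 log of a power ratio.
        return 10 * math.log10(power / reference_power)

    print(round(decibels(2), 1))     # 3.0  - doubling the power
    print(round(decibels(10), 1))    # 10.0 - ten times the power
    print(round(decibels(100), 1))   # 20.0 - a hundred times the power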

In science, it is often enough to know just the order of magnitude of a number to get an idea of whether something is important, or is a plausible candidate for a result. If you come up with an answer that is a hundred times bigger than you expected, you should probably go back to the drawing board; but if you can show that something is about a million times too small to make a noticeable difference, getting your answer wrong by a factor of four in either direction isn't going to be disastrous. For this reason, order of magnitude is often employed as a deliberately fuzzy concept - when people talk about numbers being 'on the order of tens of millions', that means the true value might well be ten times bigger or smaller, but we probably don't need to worry about it.
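
One way to make that fuzziness concrete (an illustrative Python sketch; the helper name is my own invention) is to compare only the base-10 logarithms of two quantities and ignore anything smaller than a factor of ten:

    import math

    def same_order_of_magnitude(a, b):
        # True when the values differ by less than a factor of ten.
        return abs(math.log10(a) - math.log10(b)) < 1

    print(same_order_of_magnitude(3e7, 5e7))   # True: both "tens of millions"
    print(same_order_of_magnitude(3e7, 3e9))   # False: a hundred times bigger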
