An irrational number is a real number that is not a rational number, meaning it cannot be expressed as a/b where a and b are integers (with b nonzero).

IMHO, the most complete definition of an irrational number is the following:
An irrational number is any number N that divides the set of rational numbers into two parts, those greater than N and those less than N, such that every rational number falls into one of those two categories.
I think this was originally Dedekind's definition.


--hodgepodge


It's an interesting trait, but I don't think it's a definition, simply because it doesn't rule out other possibilities. For example, don't rational numbers do the same thing? Any rational number r divides the set of irrational numbers into two parts: those greater than r and those less than r. This characteristic comes from the fact that no irrational number can be a rational number and vice versa.

The best definition is the one listed above: assume p is an irrational number; then there exist no integers a and b such that a/b = p.

The standard (a standard) definition of the reals is: let r be some subset of Q, the set of rationals. r is called a Dedekind cut if:

  • r is a nonempty proper subset of Q; that is, r ≠ ∅ and r ≠ Q.
  • r is closed downwards; that is, if x ∈ r and y < x, then y ∈ r.
  • r has no largest element; that is, for every x ∈ r, there exists a y ∈ r such that y > x.

R, the set of real numbers, is then precisely { r ∈ P(Q) | r is a Dedekind cut }.

We embed Q in R by saying q_r (the embedding of q ∈ Q into R) = { x ∈ Q | x < q }. Then, a real is irrational iff it is not the embedding of a rational number into R. This is equivalent to saying: a Dedekind cut r is irrational iff its complement Q \ r has no smallest element.
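To make this concrete, here is a minimal Python sketch (the names rational_cut and sqrt2_cut are made up for illustration) that models a cut by its membership predicate on the rationals:

    from fractions import Fraction

    # A Dedekind cut modeled (loosely) as a membership predicate on Q:
    # cut(x) is True iff the rational x belongs to the downward-closed set r.

    def rational_cut(q):
        """The embedding q_r of a rational q: { x in Q | x < q }."""
        q = Fraction(q)
        return lambda x: Fraction(x) < q

    def sqrt2_cut(x):
        """The cut defining sqrt(2): every negative rational, plus each
        non-negative rational whose square is below 2."""
        x = Fraction(x)
        return x < 0 or x * x < 2

    three_halves = rational_cut(Fraction(3, 2))
    print(three_halves(Fraction(1)), three_halves(Fraction(2)))  # True False

    # Q \ r has no smallest element here (there is no smallest rational
    # whose square exceeds 2), so the real this cut defines is irrational.
    print(sqrt2_cut(Fraction(7, 5)), sqrt2_cut(Fraction(3, 2)))  # True False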

An irrational number is any real number that is not a rational number. A rational number is any number that can be written in the form a / b (i.e., a divided by b), where a and b are integers and b is not zero. So in other words, an irrational number is a number that cannot be expressed as a fraction of two integers.

5/3 of all people do not understand fractions.

Another way of looking at it is to say an irrational number is a number whose decimal representation neither repeats nor terminates. While a mathematician may argue the validity of this alternate definition¹, it is correct enough to help you understand irrationals if you don't quite understand the definition in the first paragraph.


Let's look at some examples of rational numbers using the second definition:

Let N = .125
N terminates (i.e., the decimal does not continue on forever), and it can be represented by a fraction, in this case 125/1000 = 1/8, so .125 is a rational number.
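As a quick sanity check, Python's standard fractions module will do this reduction for us:

    from fractions import Fraction

    # A terminating decimal is a fraction over a power of ten;
    # Fraction reduces 125/1000 to lowest terms automatically.
    print(Fraction("0.125"))   # prints 1/8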

----------

Let N = .381818181818181... (the "81" repeats infinitely)
Since N does repeat, it too is a rational number. To prove this, let's convert it to a fraction.

N = .381818181818181...

Since the repeating block is 2 digits long, multiply both sides by 10^2 (100)

100N = 38.1818181818181...

Subtract N from both sides

99N = 37.8

Divide both sides by 99 and reduce

        37.8   378   21
    N = ---- = --- = --
         99    990   55

Whether the repeating part is a single digit (0.333333...) or a billion digits, the process is exactly the same, apart from the power of 10 you multiply by in the second step.
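Here is a short Python sketch automating that multiply-and-subtract trick (the helper name repeating_to_fraction is made up for illustration):

    from fractions import Fraction

    def repeating_to_fraction(prefix, block):
        """Fraction for 0.<prefix><block><block>..., where prefix and
        block are digit strings, e.g. ("3", "81") for .381818181..."""
        n, k = len(prefix), len(block)
        # Multiplying by 10**(n+k) and by 10**n lines the repeating
        # tails up, so subtracting cancels them, leaving an integer
        # difference over 10**(n+k) - 10**n.
        numerator = int(prefix + block) - int(prefix or "0")
        denominator = 10 ** (n + k) - 10 ** n
        return Fraction(numerator, denominator)

    print(repeating_to_fraction("3", "81"))   # 21/55, as derived above
    print(repeating_to_fraction("", "3"))     # 1/3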

As displayed above, even a number that repeats off into infinity can still be rational. If this is the case, what would an irrational number look like? Unfortunately, because they never terminate or repeat, there is no way to actually type out an irrational number in full. Because of this, they are often given names or written out in mathematical formulas: pi (π), e, the golden ratio (φ), and the square root of 2 (√2) are the classic examples.

While it has been proven that the examples above are irrational, it can be an extremely difficult process to create a mathematical proof showing that a number is irrational. You can't just look at it and determine one way or the other. Let's say you have the first 1000 decimal places of a number, and there are no repeating patterns. You cannot just assume that it is irrational, because it is possible that those 1000 decimal places are the first part of the repeating pattern (i.e., the second 1000 decimal places are exactly like the first). You can continue extending this example out to one million, one billion, etc., and you will never know just by looking. If you are interested in how some numbers have been confirmed to be irrational, see the related links at the bottom of the writeup.
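To see why no finite number of digits can settle the question, consider 1/97: it is plainly rational, yet its repeating block is 96 digits long, so the first 50 digits show no visible pattern at all. A small sketch (the period helper is made up for illustration):

    from decimal import Decimal, getcontext

    def period(q):
        """Length of the repeating block of 1/q, for q coprime to 10:
        the least k such that 10**k leaves remainder 1 modulo q."""
        k, r = 1, 10 % q
        while r != 1:
            r = (r * 10) % q
            k += 1
        return k

    getcontext().prec = 50
    print(Decimal(1) / Decimal(97))   # no visible repeat in 50 digits...
    print(period(97))                 # ...yet it repeats, with period 96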

History

If you are still confused about irrational numbers, don't be too concerned. Even Pythagoras had trouble with them at first. It is said that one of his disciples, Hippasus, discovered irrational numbers when trying to represent the square root of 2 as a fraction. Unfortunately, Pythagoras still believed that all quantities could be expressed as ratios of whole numbers (though he could not prove it), and would not accept the existence of these so-called irrationals, and had Hippasus thrown overboard (where he drowned) for his ideas. According to some accounts, Pythagoras later wrote the first proof of their existence.

More attention was given to these numbers in the 3rd century BC by Euclid in Book 10 of his Elements. Very little study was devoted to irrationals from that time until the late 1700s and 1800s, when Johann Heinrich Lambert, Paolo Ruffini, Karl Weierstrass, Eduard Heine, Georg Cantor, Richard Dedekind, and numerous other mathematicians all studied and wrote about them. Today we learn about various irrational numbers in high school, and some people use the more common ones (pi, e, the golden ratio) every day in such fields as engineering, physics, mathematics, architecture, and computer science.

Related Links:


¹ For those of you who asked, the second definition does not hold true for all bases. For example, in base π, the second definition would consider π to be rational, since its representation terminates: in base π, π is simply written as 10. If you are working strictly in base 10, the second definition should always be true.
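For the curious, a quick sketch of that base-π claim, using a hypothetical greedy-digit helper:

    import math

    def greedy_digits(x, beta, hi, lo):
        """Greedy base-beta digits of x for the powers beta**hi down to
        beta**lo (an illustrative helper, not a standard routine)."""
        digits = []
        for p in range(hi, lo - 1, -1):
            d = int(x / beta ** p)
            digits.append(d)
            x -= d * beta ** p
        return digits

    # pi in base pi is exactly "10": one copy of pi**1 and nothing else,
    # so its representation terminates and the second definition fails.
    print(greedy_digits(math.pi, math.pi, 1, -4))   # [1, 0, 0, 0, 0, 0]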
