An algebra is a set plus one or more operations on that set. What I'll be trying to show here is that algebra is crucial to our understanding of elementary mathematics, right from the start of our school careers. Schools are just afraid to call it algebra.

(An algebra, in case you forgot, is a set of operations on some domain of values, together with a set of generic rules that tell us which different combinations produce identical values.)

It all starts with counting:


  <{1,2,3,many}, ++>

    where ++ is the single-argument operation given by

      ++1 = 2
      ++2 = 3
      ++3 = *
      ++* = *

(Pronounce '*' as 'many'.)

A very simple algebra, and widely applied: I've been told this is how dogs count. Children usually learn to count to 10 at first, but the principle is the same.
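
For the programmers in the audience, here is roughly how this little algebra looks in Haskell; the rendering and the names (DogNat, next) are my own, nothing standard:

  -- The counting algebra <{1,2,3,many}, ++>, with '*' spelled Many
  -- and the ++ operation spelled next.
  data DogNat = One | Two | Three | Many
    deriving (Show, Eq)

  next :: DogNat -> DogNat
  next One   = Two     -- ++1 = 2
  next Two   = Three   -- ++2 = 3
  next Three = Many    -- ++3 = *
  next Many  = Many    -- ++* = *

Count far enough and everything is 'many': next (next (next One)) evaluates to Many.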

The next step is addition. Addition is a two-argument operation written as +, so we obtain an algebra with signature


  <{1,2,3,4,5,6,7,8,9,10,many}, ++, +>

and the rules (the first two of which exactly define the meaning of addition):

  a + 1 = ++a
  a + ++b = ++(a + b)
  ++* = *

Initially we learn to work with a small set of numbers by giving them names and practising counting (the ++ operator) and addition on them.
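
Sticking with the Haskell sketch (and with the small set {1,2,3,many} rather than the school set up to 10, to keep it short), addition on the bounded set can be written out from the same rules, reusing the hypothetical DogNat and next from above:

  -- a + b, with b spelled out as 'one more than' its predecessor.
  plus :: DogNat -> DogNat -> DogNat
  plus a One   = next a              -- a + 1   = ++a
  plus a Two   = next (plus a One)   -- a + ++1 = ++(a + 1)
  plus a Three = next (plus a Two)   -- a + ++2 = ++(a + 2)
  plus _ Many  = Many                -- sums past the top all collapse to 'many'

So plus Two Two evaluates to Many: in this algebra, 2 + 2 is simply 'many'.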

The following step is to ask: why stop at a highest number? Why should we at some point say that we can no longer increment our numbers? This thought is the basis of all algebra: take our operations and the equivalence rules that apply to them as basic, instead of the sets to which they are applied. In algebraic terms, what we do is drop the saturation rule ++* = * (and with it the element * itself), giving us

  <N, ++, +>

with the following rules:

  a + 1 = ++a              for every a in N
  a + ++b = ++(a + b)      for every a, b in N
Actually, the algebra we're interested in is the free algebra with these rules, that is, N consists of everything that must be in the algebra by these rules, and no two elements of N are identical unless the rules can prove them equal. In effect, every application of ++ generates a new number. (In algebraic terms, we say that 1 and the operation ++ generate this algebra.)
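
The free-algebra reading has a very direct rendering in Haskell, which I'll sketch with invented names (Nat, Succ, add); it's a fresh definition, independent of the bounded sketch above. A number is either 1 or ++ of a number, nothing else, and the two rules above become the two defining equations of addition:

  -- The free algebra <N, ++, +>: every value is built from One by Succ,
  -- and distinct terms are distinct numbers.
  data Nat = One | Succ Nat
    deriving (Show, Eq)

  add :: Nat -> Nat -> Nat
  add a One      = Succ a           -- a + 1   = ++a
  add a (Succ b) = Succ (add a b)   -- a + ++b = ++(a + b)

No equation ever collapses two different Succ-terms, so every application of Succ really is a new number; that is the 'generator' remark in code form.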

The idea of arbitrarily large numbers is stunning at first. I remember having excited discussions, back when I was around 8 years old, with other children about infinity and how big it was. I have seen the same excitement in other 8-year-olds. What is more, I have seen discussions among grown adults that were based on the same misunderstanding, namely, that if we do not have a largest number, there must be a number that is infinitely large. (But infinity is no more a number than the horizon is a place.)

Apparently it is difficult for the mind to accept the absence of a largest number. I even think some aspects of God can be explained by the same difficulty.

It is interesting to observe that other generic equivalence rules follow from the rules already given. For example, the rules

  a + b = b + a               for all a,b in N
  a + (b + c) = (a + b) + c   for all a,b,c in N
which are called the commutativity and associativity of addition, respectively, can be shown to hold.
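
A quick spot check (not a proof; a real proof goes by induction over the rules) is to evaluate both sides on a handful of small numbers, reusing the hypothetical Nat and add from the previous sketch:

  -- The first k naturals: 1, 2, 3, ...
  firstFew :: Int -> [Nat]
  firstFew k = take k (iterate Succ One)

  commutes, associates :: Bool
  commutes   = and [ add a b == add b a
                   | a <- firstFew 5, b <- firstFew 5 ]
  associates = and [ add a (add b c) == add (add a b) c
                   | a <- firstFew 4, b <- firstFew 4, c <- firstFew 4 ]

Both commutes and associates evaluate to True.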

The next step, in school, is multiplication. Like addition, it is just a shorthand for working with numbers we already know (and now that 'many' is gone, the symbol * is free to stand for it). It is defined by the rules

  1*c = c
  (a+b)*c = a*c + b*c
The fact that * is associative and commutative again follows from these rules.
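
In the same hypothetical sketch, the two rules translate directly; I write mul instead of *, and the second clause uses the derived fact that ++a = a + 1, so (++a)*c = a*c + 1*c = a*c + c:

  mul :: Nat -> Nat -> Nat
  mul One      c = c                 -- 1*c = c
  mul (Succ a) c = add (mul a c) c   -- (++a)*c = a*c + c

For instance, mul (Succ One) (Succ (Succ One)) -- that is, 2*3 -- evaluates to the Nat representing 6.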

Another useful concept is 0. It can be defined as follows:

  ++0 = 1
  a+0 = 0+a = a
  a*0 = 0*a = 0
We can eliminate 1 by replacing it with ++0 everywhere.
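
Here is how the zero-based version might look, again as a sketch with invented names (Peano, Zero, S): 1 stops being primitive and becomes ++0, while 0 takes over as the base case of addition and multiplication.

  data Peano = Zero | S Peano
    deriving (Show, Eq)

  one :: Peano
  one = S Zero                       -- ++0 = 1: 1 is now derived, not primitive

  add0 :: Peano -> Peano -> Peano
  add0 a Zero  = a                   -- a + 0   = a
  add0 a (S b) = S (add0 a b)        -- a + ++b = ++(a + b)

  mul0 :: Peano -> Peano -> Peano
  mul0 Zero  _ = Zero                -- 0*c = 0
  mul0 (S a) c = add0 (mul0 a c) c   -- (++a)*c = a*c + c

Read as a free algebra (nothing is in Peano except what Zero and S build, and no two distinct terms are identified), this is essentially the structure that the Peano axioms mentioned below pin down.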

Some feel that 0 is not really a 'natural' number, but an artificial construct to make computation easier. 0 is 'brought into existence' by the rules it follows.

Writing this down produces a definition of the natural numbers: the Peano axioms.
