In higher mathematics, this is an almost meaningless term. It refers to a set of elements and an associated set of operators defined on them. This encompasses everything from the natural numbers (with addition, successor, and multiplication) to the states of a Rubik's Cube, where the operators are the various manipulations. Once you start specifying additional constraints on the operators, you're in the land of rings, groups, and other algebraic structures.

In lower mathematics, it means the study of some well-known operations and their algebraic properties.

Most of the algebra taught in school is concerned with the operations +, -, * and / on the natural, integer, rational and real numbers.

By extending + and * to work on vectors of numbers, we get linear algebra, which models natural manipulations on objects in n-dimensional space, such as rotation, translation and scaling.


An algebra is a set with one or more operations defined on it; an operation in this context is a function whose arguments and results are all in the set.

The subject of algebra is the study of combinatorial equivalences of these operations. It isn't at all interested in what the set or the operation(s) represent, but only in the mathematical laws that hold in combining them, and the theory which follows from that.

Group theory for instance is the branch of algebra concerned with groups, a very simple class of algebras that arises often in practice.
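To make the definition concrete, here is a minimal sketch in Python (the set, the operation, and the printed message are illustrative choices, not anything from the writeup above) that checks the group axioms for the integers {0, 1, 2, 3} under addition modulo 4:

  from itertools import product

  S = range(4)
  op = lambda a, b: (a + b) % 4  # the one binary operation: addition mod 4

  assert all(op(a, b) in S for a, b in product(S, S))            # closure
  assert all(op(op(a, b), c) == op(a, op(b, c))
             for a, b, c in product(S, S, S))                    # associativity
  assert all(op(0, a) == a == op(a, 0) for a in S)               # identity
  assert all(any(op(a, b) == 0 for b in S) for a in S)           # inverses
  print("({0,1,2,3}, + mod 4) is a group")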

A branch of mathematics developed by medieval Islamic scholars. The word 'algebra' comes from the Arabic term al-Jabr, which first appeared in The Book of al-Jabr and al-Muqâbalah, written by the Persian mathematician al-Khwârizmî (whose name is the root of the word algorithm) around 825 C.E. Al-Jabr means the removal of negative terms from an equation, and al-Muqâbalah means the combination of similar terms - both of which are central operations in algebra.

Because symbolic algebraic notation had not yet been developed, the "Book of Algebra" is actually a huge collection of word problems that contain many diagrams. A number of these problems are solved geometrically.

Al-Khwârizmî wrote his book for the Abbâsid caliph al-Ma'mûn, a patron of science and a founder of the Bayt al-Hikmah ("House of Wisdom"), the library and translation academy of Baghdad. The intercultural contacts that took place in this period led to two important developments in the history of mathematics. First was the adoption of the Indian numeral system (which included zero) by the Arabs, making it possible to write out long numbers using zero as a placeholder. The second great development was the translation of Greek works on mathematics, geometry and astronomy into Arabic. Drawing on the geometrical techniques of Euclid, Archimedes and Apollonius enabled Islamic mathematicians to take early steps toward non-Euclidean geometry.

The Book of Algebra was first translated into Latin by Robert of Chester in 1145 and later by Gerard of Cremona in 1187. Given its practical utility and fundamental importance in mathematics, the development of algebra by medieval Islamic scholars is a great and lasting contribution to scientific knowledge.

source: The Dictionary of Global Culture - Appiah & Gates

In the mathematical field of algebra, an algebra is a vector space A over a field F with an associative, bilinear product on the vector space - a mapping A x A -> A. There are non-associative algebras out there, too.

There are special types of algebras, e.g. group algebras, where a basis of the vector space forms a group under the algebra multiplication.

Please note that this definition doesn't directly coincide with the definition of a boolean algebra. A boolean algebra would here be a (Z/2Z)-algebra with "xor" as the vector space addition and "and" as the algebra multiplication.
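As a quick illustration (the helper names are my own), these laws can be checked exhaustively in Python over the two-element set {0, 1}:

  from itertools import product

  add = lambda a, b: a ^ b  # "xor" as the vector space addition
  mul = lambda a, b: a & b  # "and" as the algebra multiplication

  for a, b, c in product((0, 1), repeat=3):
      assert mul(a, mul(b, c)) == mul(mul(a, b), c)          # associative product
      assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributivity
      assert add(a, a) == 0                                  # characteristic 2
  print("xor/and on {0, 1} obeys the algebra laws")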

An algebra is usually also a ring.

Further information can be found in any good algebra book.

The following essay was submitted for a course in Mathematical Philosophy. It can be broken down into the following parts:
  1. The ancient roots of algebra
  2. How ancient developments led to the "algebraic revolution" of the 19th century
  3. The trend found from concrete to abstract thought, and the philosophical implications of said trend

For more information, please visit the math project.

The Development of Algebra and its Significance to the Development of Mathematical Thought

It is the object of this paper to analyze the roots of the algebraic revolution of the 19th century, as well as discuss the mathematical and philosophical consequences of this explosion in abstract thought. I hope to thereby gain some insight on the mathematical need for and use of abstraction for the purposes of clarity and further development.

Developments in Egypt

The roots of algebra in general are found in the need for efficient and precise calculation in the areas of business and agriculture in ancient cultures. Problems, generally linear systems of equations and solutions of equations of one variable, were often posed entirely in words, and then solved in words, with few or no symbols or symbolic reasoning. The Egyptian Rhind Papyrus, which dates back at least to 1650 BC, contains problems detailing methods for addition, subtraction, and the solution of simple practical problems. Although problems are occasionally given without application, the Egyptians rarely strayed far from mathematics that directly applied to their everyday life, and applications were generally known from context or subsequently listed. One problem reads:

“A quantity and its 1/7 added become 19. What is the quantity?”

We would ordinarily reduce the left side to 8x/7, and multiply 19 by 7/8 to find the solution. The solutions in the Rhind Papyrus, however, are typically given using the method now known as false position, or false assumption.

“As many times as 8 must be multiplied to give 19, just as many times must 7 be multiplied to give the correct number.”

Note that these problems do not even list specific answers. The method of false position is common in ancient texts, since some ancient writing systems could not easily express fractions such as 8/7 in the calculation of the answer.
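In modern terms, false position works because the left-hand side is linear in the unknown: make a convenient guess, see what it produces, then rescale. A minimal Python sketch of the Rhind problem above (names are illustrative):

  f = lambda x: x + x / 7    # "a quantity and its 1/7 added"

  guess = 7                  # the false assumption: 7 keeps the arithmetic whole
  produced = f(guess)        # 7 + 1 = 8
  x = guess * 19 / produced  # as 8 must be scaled to give 19, so scale the 7
  print(x, f(x))             # 16.625 19.0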

Developments in Babylon

The Babylonians, in their clay tablets dating roughly 1800-1600 BC, used a sexagesimal counting system, which greatly facilitated the use of fractions and allowed Babylonian scholars to concentrate more on the theoretical approach to algebra. It could be said that the Babylonians were among the first to study mathematics more for its own sake than for the solution of practical problems. They seem to have already been familiar with our method for solving quadratic equations, and recorded dozens of multiplication tables listing n, n², n³, and n² + n³. They even solved systems of equations in two variables; however, the manner of solution is not given. It is commonly assumed that they used the method of substitution to find solutions for x and y. Papyrus scrolls have been recovered documenting the Egyptian use of the same method, but these scrolls date much later, around 300 BC.
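As an illustration of what such a table contained (the range of n here is an arbitrary choice), a few lines of Python reproduce the listings n, n², n³, and n² + n³; with such a table, an equation of the form n² + n³ = c could be solved by lookup:

  for n in range(1, 6):                  # an arbitrary, illustrative range
      print(n, n**2, n**3, n**2 + n**3)  # n, n^2, n^3, n^2 + n^3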

Developments in China

The use of matrices had silent and hidden beginnings, dating back as far as 200 BC, with the Chinese book Chiu-Chang Suan Shu, or Nine Articles on the Mathematical Art. This influential book contains 246 problems on all sorts of topics near and dear to Chinese life, such as business procedures and agriculture. In particular, Problem 1 in Chapter VIII, “The Way of Calculating by Arrays,” is as follows:

“Three sheaves of a good crop, 2 sheaves of a mediocre crop, and 1 sheaf of a bad crop are sold for 39 dou. Two sheaves of good, 3 of mediocre, and 1 of bad are sold for 34 dou. One sheaf of good, 2 of mediocre, and 3 of bad are sold for 26 dou. What is the price of a sheaf of good crop, mediocre crop, and bad crop?”

The book proceeds to describe a method of solution we now call the matrix method; rods on a counting board were used to represent the array

   1   2   3
   2   3   2
   3   1   1
  26  34  39

Multiplications and subtractions were executed until the array was reduced to

   0   0   3
   0   5   2
  36   1   1
  99  24  39

This is precisely the method of creating the augmented matrix and solving for the three unknowns by elementary column operations.
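For comparison, a short Python sketch (using numpy purely as a modern convenience) solves the same three sheaf equations by ordinary elimination; the coefficients come straight from the problem:

  import numpy as np

  A = np.array([[3, 2, 1],   # 3 good + 2 mediocre + 1 bad = 39 dou
                [2, 3, 1],   # 2 good + 3 mediocre + 1 bad = 34 dou
                [1, 2, 3]])  # 1 good + 2 mediocre + 3 bad = 26 dou
  b = np.array([39, 34, 26])
  print(np.linalg.solve(A, b))  # [9.25 4.25 2.75] dou per sheaf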

Since Nine Articles, along with most of Chinese mathematical development, focuses on methods and computations that work rather than on proofs, no proof of this method or theoretical treatment of these arrays is given. Additionally, further properties of arrays were not investigated until hundreds of years later, in an entirely different context.

Developments in Greece

The Greeks’ contributions to algebra, whether practical or abstract, seem somewhat limited, if only because they concentrated on geometry and the construction of objects. Euclid’s Elements of Geometry, however, contains some vital definitions and theorems relevant to the study of algebra. For example, in Book VII:

An integer b is said to be divisible by an integer a ≠ 0, in symbols a | b, if there exists some integer c such that b = ac. One writes a ∤ b to indicate that b is not divisible by a.

Unfortunately, Euclid’s and other scholars’ habit of representing numbers as line segments prevented the consideration of negative numbers and the construction of irrational numbers (though Euclid gives a proof of the irrationality of √2, the Greeks could only conclude that √2 was not a number at all, rather than a different type of number). These scholars, in their drive to define laws and properties of positive integers, certainly never got around to considering mathematics which might exist without numbers or lengths at all. Despite these unfortunate barriers, the Greeks marked themselves as one of the first cultures to make significant advances in mathematics without being driven by the need for applications. While abstraction had not yet begun to show itself in the studies of algebra and geometry, the shift from practical to theoretical was, in many ways, just as important a mathematical development.

Perhaps another obstacle to Greek abstract development lay in the counting boards used for their calculations. While remarkably effective for arithmetic, they relied on physical representations of numbers and physical movements of those representations. A more abstract system of representation came from the Arabs and the Hindu decimal counting system: “nine letters” for the digits 1 through 9 and “a small circle for when nothing remains,” or zero. The prophet of this counting system was the Persian mathematician al-Khowârizmî (AD 780-850), who in his treatise Hisâb al-jabr w’al muqâbalah, translated to mean “the science of reunion and reduction,” set forth rules for the solution of equations with unknown variables. Al-jabr, after translation into Latin and other languages, eventually earned its present English corruption, algebra.

Although the basic methods of algebra had at this point in history been largely discovered and accepted as true, very few of them had been given a rigorous proof. In some of the aforementioned cultures, in particular the Chinese and Arab cultures, rules and theorems are given as divine revelation, and readers must believe they are true because, after all, they work! Many proofs are geometric and intuitive, if they are given at all, and certainly very little if any progress had been made with respect to abstraction or the separation of laws and objects. In hindsight, then, we could say that in addition to the shift from practical to theoretical, mathematicians had to revisit their mathematical priorities and begin proving that all these tactics, methods, and theorems actually worked. This is a change that would not be seen on a large scale until the 19th century. Another change that would greatly facilitate the algebraic revolution was the drive for the unification and strengthening of mathematics in general, a philosophical macro-view of mathematics which inspires mathematicians to work toward establishing the foundations of mathematics rather than more and more advanced theorems.

Fast Forward to the 19th Century

The second phase of algebraic development begins with Lagrange, Galois, and other mathematicians of the 18th century. They searched for a method of solving algebraic equations of 5th degree or higher, a search famously ended by Ruffini, who in 1799 published a proof (later completed by Abel) that no general solution in radicals exists. The algebras of modular arithmetic were developed, but it seems that modular arithmetic was too closely married to the familiar number system to inspire mathematicians to more creative thought. There was little work done which would contradict the old-fashioned statement “Mathematics is the science of quantity or number,” except perhaps the study of combination calculus, which dealt with permutations and combinations of elements.

Through the last half of the 18th century and the first part of the 19th century, attention turned toward the structures of algebra, and much work was done abstracting and expanding these ideas of structure. In England, where mathematical development at the time was somewhat slower than in the rest of Europe (a hindrance largely attributed to English academic practices of the day, namely the focus on cramming for the prestigious Tripos exam rather than learning to think creatively toward new developments), George Peacock started things off with a work “written with a view of conferring upon Algebra the character of a demonstrative science.” Wanting to reevaluate the relationship between arithmetic and algebra, Peacock argued that arithmetic was a Suggestive Science which neither determines nor limits the laws of Algebra. He then differentiated between arithmetical algebra, where the symbols stand for actual numbers and actual arithmetic operations, and symbolic algebra, where the same symbols mean something entirely different and abstract. However, Peacock’s assertion that all of the laws of algebra remain the same no matter what served as a deterrent to further progress, for it took Hamilton many years to reject that principle and drop commutativity for the sake of an alternate algebra.

This phase of the algebraic revolution began in earnest with William Rowan Hamilton (1805-1865). In the early 1830s, Hamilton began a lengthy struggle to “save” algebra from what he felt were weak, obscure roots. He hoped to mold algebra into a science “strict, pure and independent; deduced by valid reasonings from its own intuitive principles,” and not a language, as it had been previously described. He began with the intuitionist notion that the human intuition of time can be made the rudimentary foundation of this science; he then continued to define negative numbers (progressive steps backwards in time), laws for rational numbers and even the irrationals. In his 1837 Theory of Conjugate Functions, or Algebraic Couples: With a Preliminary Essay on Algebra as the Science of Pure Time, he attempts to systematically list the properties of the real numbers, so as to more concretely introduce the laws of algebra as they apply to “number couples,” or complex numbers. Students still learn to define the complex numbers in the same manner: an ordered pair of real numbers (a, b) with rules for algebraic operations. He proves the commutative law for addition and multiplication, as well as the distributivity of multiplication over addition, but fails to show the associative law. It is postulated that he omitted the associative law because it didn’t occur to him that there might be an algebra for which associativity didn’t hold, a concept which later turned out to be critical. He ends this book by mentioning his hopes for developing an algebra of Triplets, hypercomplex numbers which would relate to three-space much as complex numbers relate to two-space.
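A small Python sketch of the idea (the class name is mine, not Hamilton's notation): complex numbers as bare ordered pairs with explicit rules for + and ×, no square root of -1 required:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Couple:                      # an illustrative name for (a, b)
      a: float
      b: float
      def __add__(self, o):          # (a,b) + (c,d) = (a+c, b+d)
          return Couple(self.a + o.a, self.b + o.b)
      def __mul__(self, o):          # (a,b) x (c,d) = (ac - bd, ad + bc)
          return Couple(self.a * o.a - self.b * o.b,
                        self.a * o.b + self.b * o.a)

  i = Couple(0, 1)
  print(i * i)                       # Couple(a=-1, b=0): the pair acting as -1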

For the next six years Hamilton searched for a way to define algebraic rules on his Triplets, but he couldn’t find a way to do so without compromising his earlier definitions on complex numbers. Then, in October 1843, Hamilton had a flash of intuition as he was taking a walk with his wife. He needed to abandon the commutative law and use terms of four expressions rather than three. Suddenly fearful that he might not live long enough to communicate his discovery, he etched i² = j² = k² = ijk = -1 into Brougham Bridge. And so the barrier to formal algebras was broken, and the revolution began. Hamilton plunged into his research with a fervor, convinced he had discovered a true mathematical description of the world in space and time (four dimensions). He published copiously, applying quaternions to almost every scientific field he could get his hands on. He attracted a following of mathematicians who furiously tried to show quaternions to be the greatest thing since sliced bread, the ultimate description of the physical world around us. However, that fervor gradually died, due in part to the new vector analysis posed by Gibbs as well as the algebra of the more general n-tuple presented by Grassmann, and quaternions eventually took their place as the simplest non-commutative algebra.
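A minimal Python sketch of quaternion multiplication (representing a quaternion as an illustrative (w, x, y, z) tuple) shows exactly what Hamilton gave up:

  def qmul(p, q):  # Hamilton's rules i^2 = j^2 = k^2 = ijk = -1, spelled out
      w1, x1, y1, z1 = p
      w2, x2, y2, z2 = q
      return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
              w1*x2 + x1*w2 + y1*z2 - z1*y2,
              w1*y2 - x1*z2 + y1*w2 + z1*x2,
              w1*z2 + x1*y2 - y1*x2 + z1*w2)

  i, j = (0, 1, 0, 0), (0, 0, 1, 0)
  print(qmul(i, j))  # (0, 0, 0, 1):  ij = k
  print(qmul(j, i))  # (0, 0, 0, -1): ji = -k, so commutativity is lost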

Other mathematicians attempted to make discoveries of their own. One natural question to ask of the newly developing algebras was “If one, and two, and four, what of eight?” Graves and Cayley are both credited with work developing the system of octonions, or “octaves.” John Graves had worked out such a system a mere three months after Hamilton created the quaternions; this system’s multiplication was not only non-commutative, but non-associative as well. Unfortunately, the publication of Graves’ system was delayed, and Arthur Cayley beat him to print with a paper describing an essentially identical system. Hermann Günther Grassmann took the next step in defining the algebra of the ordered n-tuple in his work Ausdehnungslehre, or Calculus of Extension. With the general case defined by Grassmann, where would the revolution next take these algebras?

Arthur Cayley, after writing his paper on octonions, turned his attention to linear transformations of the form

  T1:   x′ = ax + by
        y′ = cx + dy

which transformed an ordered pair (x, y) into a pair (x′, y′). Hoping for “an abbreviated notation for a set of linear equations,” Cayley eventually devised arrays of coefficients such as

  ┌        ┐
  │  a  b  │
  │  c  d  │
  └        ┘

By manipulating this array and a similarly defined T2, Cayley was able to define the operations of addition and multiplication. He also found an identity array, and proved the commutativity and associativity of addition, as well as the associativity of multiplication and its distributivity over addition. However, Cayley used the term “array” for the new object he had devised; we can thank James Joseph Sylvester, a coiner of dozens of new mathematical terms, for the word “matrix.”
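A hedged sketch of the idea in Python (the coefficient values are arbitrary): the array applies T1 to a pair, and composing two transformations yields the now-familiar, non-commutative matrix product:

  def apply(T, v):                      # x' = ax + by, y' = cx + dy
      (a, b), (c, d) = T
      x, y = v
      return (a*x + b*y, c*x + d*y)

  def compose(T, S):                    # the array of "apply S, then T"
      (a, b), (c, d) = T
      (e, f), (g, h) = S
      return ((a*e + b*g, a*f + b*h),
              (c*e + d*g, c*f + d*h))

  T1, T2 = ((1, 2), (3, 4)), ((0, 1), (1, 0))
  print(compose(T1, T2))                # ((2, 1), (4, 3))
  print(compose(T2, T1))                # ((3, 4), (1, 2)): order matters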

Cayley also greatly developed the slightly older study of groups, which had begun with Lagrange and Galois and on which Cauchy had done some significant work. Until Cayley’s paper “On the theory of groups as depending on the symbolic equation θⁿ = 1” was published in 1854, however, the study of groups was strictly limited to the idea of a permutation group (certainly a more concrete concept, and one with which most undergraduates begin their study of group theory). Cayley abstracted the idea of a group, defining only generic elements, their product, associativity, an identity, and a table illustrating the various possible products in the group.
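For illustration, a few lines of Python print the kind of table Cayley described for the cyclic group defined by θⁿ = 1 (taking n = 4, with the element k standing in for θ to the power k):

  n = 4                                    # an illustrative order
  for a in range(n):                       # the row for theta^a
      print([(a + b) % n for b in range(n)])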

The abstract idea of a group was slow to catch on, as most mathematicians were busy working with the more concrete permutation groups. However, another mathematician was working in an entirely different direction towards the same end. George Boole, with his book Mathematical Analysis of Logic, took a giant leap in showing that the view of mathematics as “the science of magnitude or number” was untenable. Influenced by George Peacock and Augustus De Morgan, Boole formalized some critical ideas about algebra and the objects acted upon by algebra. He argued, as Cayley did, that the objects in algebra did not need to be numbers; furthermore, the operations and symbols for operations in algebra were arbitrary. His different direction was the connection of these new abstract ideas to the study of logic, hitherto considered a field entirely separate from mathematics. Boole developed an algebra of sets, or of logic, using notation previously reserved for the algebra of numbers. For example, + was taken to mean the union of two sets, and × the intersection. He then showed that this algebra was commutative over addition (union) and multiplication (intersection), associative, and distributive. Boole’s developments were important to mathematics for connecting radical new ideas to the ancient study of logic. Thus, not only was algebra liberated from its numeric burden, but it was found to govern logic and to liberate logic from its burden of spoken language.
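A brief Python sketch (with arbitrary example sets) checks the laws Boole showed, reading + as union and × as intersection:

  A, B, C = {1, 2}, {2, 3}, {3, 4}         # arbitrary example sets

  assert A | B == B | A                    # + (union) is commutative
  assert A & B == B & A                    # x (intersection) is commutative
  assert (A | B) | C == A | (B | C)        # associativity
  assert A & (B | C) == (A & B) | (A & C)  # x distributes over +
  print("Boole's laws hold for these sets")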

But What Does All This Mean Exactly?

The change in values for mathematicians from antiquity to the 19th century, from methodology to existence, from “How to…” to “What is…”, seems to be the cause of the algebraic revolution, rather than a product of it. A large intellectual leap was made in ancient cultures when mathematics was finally studied for its own sake, and not simply as a means to the end of solving practical problems. Works were eventually published that were not just collections of problems with answers and solutions, but that discussed ideas in detail and attempted to divine additional ideas from previous ones, without any help from the outside world.

Little inklings of abstract thought are seen in the 18th century, in combination calculus and modular algebra, indicating a gradual shift toward abstract thinking, rather than a big bang. Much like a toy or fashion trend, the beginnings were very slow, but it took one singularly prominent event (a television commercial, celebrity appearance, or influential book by George Peacock) for the market to explode. Many children ask for the toy, many consumers buy the fashion, and many mathematical thinkers devote their time to the new subject matter. However, it has been noted many times that mathematical studies, unlike toys or fashion, have not yet proven to be capable of saturation.

This algebraic revolution most notably inspired the redefining of the nature of pure mathematics. By the time Boole and Hamilton began contributing to the field, it was noted that “Mathematics is the science of quantity” was perhaps very limiting, and even false. It was then recognized that, through this study of algebraic structures and the separation of algebraic laws from numbers, more could be discerned about the nature of pure mathematics. Rather than inventing a newer, cuter one-liner to define the entirety of mathematics, mathematicians strove to redefine the roots and foundations of mathematics. Mathematical philosophy grew as its own field, and not as a byproduct of general philosophers musing at the sciences.

So, then, what of a possible causal relationship between algebra and mathematical philosophy? Or, for that matter, formalism, intuitionism, and logicism in particular? The two explosions seem to coincide, mathematical philosophy truly seeing its rise and spread in the 19th century, with Russell, Hilbert, and Brouwer all devising their separate programs describing the foundations of mathematics. Is the liberation of algebra, then, a sister product of a much larger occurrence, or did this liberation have an effect on the philosophic thought of mathematicians of the time?

I contend that mathematical philosophy as a subject of study was born not of the algebraic revolution itself, but of the same “stuff” from which the revolution came. Mathematicians were turning their attention toward “saving” mathematics, establishing its foundations, and eliminating ambiguities and fuzziness in the field. Boole’s creation of an alternate algebra of sets and logical connectives which followed algebraic laws, connecting mathematics to the hitherto separate study of logic, along with De Morgan’s and Hamilton’s work further linking mathematics to logic, inspired the field of mathematical logic and the philosophy of logicism, the drive to determine just how much of mathematics could be expressed in terms of logic. Russell, seeing the coming marriage of logic and mathematics, believed that the marriage could be a complete one and that logic was actually “the progenitor of mathematics.”

The other major mathematical philosophies, formalism and intuitionism, were essentially formulated in response to either logicism, the consistency problem, or other criticisms issued by mathematicians of the day. There are many reasons for the formulation of these factions. But in the “revolution” of the 19th century, logicism was the first to arrive, and it arrived as a product of a general turn of thought toward the foundations of mathematics and the interesting results published linking mathematics to logic.

This algebraic liberation also brought up several issues which had hitherto been ignored, such as the language and modes of communication which carry mathematics from one mind to another. The discovery that algebras, and mathematics in general, were independent of the symbols which communicate their meaning forced the language of mathematics to become a focus for research. Before, language was simply a tool which was taken for granted without being rigorously defined, but later the concept of a semantic paradox brought to light the inherent complications that ordinary language brings to mathematics. Spoken and written languages are simply too rich for mathematics. While a shift from words to symbols had already occurred in the ancient texts of Arab and Chinese scholars, little time had been devoted to thinking about why the symbols were so much better than the words. The richness of language had to be addressed and dealt with for the sake of the consistency of mathematics.

Another issue which has popped up in more recent times is the dependability of written and spoken communication of mathematics, along with the idea of Platonism. Scholars such as Philip J. Davis have written on the uncertainty of mathematical communication, separating the representation in any form from its ideal, or celestial, form. The idea is very much a Platonist one: there is an ideal for every mathematical concept, and the best we can hope for is a fairly accurate imitation or representation of that ideal. If that is the best we can hope for, we can naturally expect and learn to predict an amount of error in our representations (hence Davis’ statistical discussion of the error of mathematical discourse). To be able to separate representations of concepts from the concepts themselves, one must be able to make other types of distinctions, such as the distinction between mathematics and the real world. Mathematics exists apart from its applications, its symbols, and even its instances in the real world. One must be able to think of mathematics on its own, without depending on the outside world for support, in order to think of mathematical laws as independent from the objects formerly considered the exclusive subjects of mathematics (I mean numbers, and I mean that any objects which are acted upon by mathematical laws are then mathematical objects).

The abstraction of mathematics, in particular algebra, happened at first very slowly. Various cultures discovered simple algebraic laws as they applied to everyday life. Then, a few began to develop mathematics apart from everyday life, creating new mathematics out of old, and substituting symbols for words. Counting boards were abandoned in favor of the more abstract number system, easing scholars out of the habit of thinking of numbers geometrically. Eventually, a few brilliant minds began to analyze the structures of algebra, and they noticed that perhaps the algebraic objects (numbers) weren’t so important after all to the study of algebra. After quaternions, octonions, n-tuples, matrices, and algebraic logic, it was finally noticed that the objects upon which mathematical laws act aren’t important at all; mathematics could not possibly be just “the science of quantity or number.” Independently, the realization that mathematics could be studied without regard to the physical world inspired the notion that mathematics exists without the real world, that there is mathematics in ideal form without the help of the mere imitations found in the physical world. It would seem that the algebraic reformation and mathematical philosophy as a study in its own right are both children of the general turning toward philosophy as a study which might help save mathematics from eating itself alive. Foundations needed to be established, structures and laws needed to be revisited, and above all, “mathematics is the science of quantity or number” had to be abandoned for the unification and strengthening of all pure mathematics. Whatever new definition scholars may agree upon for the term mathematics is unimportant, so long as the definition is continually revisited and the nature of pure mathematics is constantly pondered.

Bibliography

ANELLIS, IRVING H., HOUSER, NATHAN R. “Nineteenth Century Roots of Algebraic Logic and Universal Algebra.” Algebraic Logic. Amsterdam: North-Holland Publishing, 1991.
BELL, E. T. The Development of Mathematics. 2nd ed. New York: McGraw, 1945.
BOYER, CARL B., MERZBACH, UTA C. A History of Mathematics. 2nd ed. New York: John Wiley & Sons, 1991.
BURTON, DAVID M. The History of Mathematics: An Introduction. 4th ed. Boston: McGraw, 1999.
DAVIS, PHILIP J. Error in Mathematical Discourse. 1971.
EUCLIDES, HEATH, T. L. The Thirteen Books of Euclid’s Elements. Cambridge: University Press, 1908.
EVES, HOWARD WHITLEY. An Introduction to the History of Mathematics. 5th ed. New York: CBS College Publishing, 1983.
FÉLIX, LUCIENNE. The Modern Aspect of Mathematics. New York: Basic Books, 1960.
KLINE, MORRIS. Mathematics: a Cultural Approach. New York: Addison-Wesley Publishing Company, 1962.
NOVÝ, LUBOŠ. Origins of Modern Algebra. Leyden, The Netherlands: Noordhoff International, 1973.
RUSSELL, BERTRAND. Introduction to Mathematical Philosophy. London: George Allen & Unwin, 1920.
STERN, AUGUST. Matrix Logic. Amsterdam: North-Holland, 1988.
VOGEL, KURT. Chiu Chang Suan Shu: Neun Bücher Arithmetischer Technik. Braunschweig: Friedr. Vieweg & Sohn, 1968.

Al"ge*bra (#), n. [LL. algebra, fr. Ar. al-jebr reduction of parts to a whole, or fractions to whole numbers, fr. jabara to bind together, consolidate; al-jebr w'almuqabalah reduction and comparison (by equations): cf. F. algebre, It. & Sp. algebra.]

1. Math.

That branch of mathematics which treats of the relations and properties of quantity by means of letters and other symbols. It is applicable to those relations that are true of every kind of magnitude.

2.

A treatise on this science.

 

© Webster 1913.
