This concept is quite important: if you want to do mathematics rigorously, you can only use terms that you have proved to be well-defined. Ironically, the term itself is quite difficult to define, since any attempt is bound to appear silly and tautological.
Here is my attempt:

A property is well-defined if its definition is sufficient to determine it uniquely.

Sounds obvious and uninformative, doesn't it? Perhaps some examples will help.

We might want to define a semi-squaring function S: Q -> Q, i.e. a function which takes a rational and returns a rational. We define S(x) by: "Write x as p/q, where p and q are integers. Then S(x) = p^2/q".
This operation is not actually meaningful. Take x = 2. Then x = 2/1, so S(2) = 4. But we could also write x = 4/2, giving S(2) = 8. This shows that the definition given for S is not sufficient to determine S(x) uniquely. S is not well-defined, and writing S(x) is therefore meaningless.
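A quick way to see the failure concretely is this Python sketch (the function name is hypothetical; it uses the standard fractions module). The rule S can only be applied to a chosen representation (p, q), not to the rational x itself, and two representations of the same rational give different answers:

```python
from fractions import Fraction

def semi_square(p: int, q: int) -> Fraction:
    # The rule "S(x) = p^2/q" acts on a chosen representation (p, q),
    # not on the rational number x itself.
    return Fraction(p ** 2, q)

print(semi_square(2, 1))  # x = 2 written as 2/1: prints 4
print(semi_square(4, 2))  # the same x written as 4/2: prints 8
```

Since the output depends on the representation and not just on x, there is no function S: Q -> Q here at all.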

A property we certainly want to define for rationals is the sum. For rationals x, y we define x + y as follows: "Write x = a/b, y = c/d, where a, b, c, d are integers and b, d are nonzero. Then x + y = (ad + bc)/bd". (This definition is in terms of addition and multiplication for integers.)
This seems perfectly reasonable, but before we can use the term 'sum' for rationals we have to make sure that the property is indeed well-defined.
Suppose that a/b = a'/b' and c/d = c'/d'; that is, a'b = ab' and c'd = cd'. Then

(a'd' + b'c')*bd = a'bd'd + b'bc'd = ab'd'd + b'bcd' = (ad + bc)*b'd'

(substituting a'b = ab' in the first term and c'd = cd' in the second). Dividing both sides by bd*b'd' gives (a'd' + b'c')/b'd' = (ad + bc)/bd.

This shows that the sum of two rationals is independent of which representation in terms of integers we choose. Our definition of the sum of two rationals is therefore well-defined (provided that we have made sure that the sum and product of integers are well-defined). So we can happily add halves and quarters without worrying that doing so will accidentally prove that 0 = 1.
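The same check can be run mechanically. A minimal sketch (function name hypothetical; Python's Fraction reduces to lowest terms, so equal rationals compare equal):

```python
from fractions import Fraction

def add_rationals(a: int, b: int, c: int, d: int) -> Fraction:
    # The definition above, applied to the representations a/b and c/d.
    return Fraction(a * d + b * c, b * d)

# 1/2 + 1/4, computed from two different pairs of representations:
print(add_rationals(1, 2, 1, 4))   # prints 3/4
print(add_rationals(2, 4, 3, 12))  # same rationals, other representations: prints 3/4
```

Of course, finitely many spot checks are no substitute for the proof above; they only illustrate what the proof guarantees for every choice of representations.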

Mathematics operates in terms of propositions. A mathematical definition is a proposition in which the mathematical concept being defined occurs. The concept is well-defined if the definition allows it to be identified uniquely.

Examples:

q is the natural number divisible by 5.
With this definition, q isn't well-defined: there is more than one such number.
Q is the set of natural numbers divisible by 5.
Q is well-defined: a set is defined by its members, and although this set is infinite, we know exactly what its members are.
Q is the set of natural numbers that occur in the decimal expansion of pi.
This set is less clearly well-defined: we know it is unique, and we have a procedure that will eventually list each of its members, but is it possible to find out whether or not any given number is in it? I don't actually know. (A sketch of this one-sided procedure appears after this list.)
Q is the set of natural numbers that do not occur in the decimal expansion of pi.
Here we don't even have a procedure to list the members of the set; for all we know, it may be empty.
T is the set of Turing machines that halt on the empty input.
This set is undecidable: there is no method that determines, for an arbitrary Turing machine, whether it is in the set.
And so forth. Different schools of mathematics put different limits on what they still consider acceptable definitions; intuitionism, for instance, is rather stricter than standard mathematics.
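To make the one-sided nature of the pi example concrete, here is a minimal Python sketch (the function names are hypothetical, and I am reading "occurs" as "appears as a block of consecutive digits"). It streams the digits of pi with Gibbons' unbounded spigot algorithm: it can answer "yes" by finding the digits, but it can never answer "no", because for a number that does not occur the search simply runs forever. The halting set T is one-sided in the same way: simulating a machine can confirm that it halts, but no finite simulation confirms that it won't.

```python
def pi_digits():
    # Stream the decimal digits of pi: 3, 1, 4, 1, 5, 9, ...
    # (Gibbons' unbounded spigot algorithm, exact integer arithmetic.)
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

def occurs_in_pi(n: int) -> bool:
    # Semi-decision only: returns True if the digit string of n shows up
    # among the digits of pi, and otherwise never returns at all.
    target = str(n)
    window = ""
    for d in pi_digits():
        window = (window + str(d))[-len(target):]
        if window == target:
            return True

print(occurs_in_pi(14159))  # True: "14159" appears right at the start of pi
```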
