When writing down an infinite sum (or product, union, intersection), in order to achieve maximal generality, one has to use an index set: instead of writing "i = 1 ... infinity", one writes "i element of I" for some index set I. This lets the operation range over uncountably many operands, not just countably many. For reasons explained in finite vs. countable vs. uncountable, this can be a crucial difference.
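In LaTeX notation, for instance, the countably indexed sum and its index-set generalizations read:

    \sum_{i=1}^{\infty} a_i \qquad \sum_{i \in I} a_i \qquad \bigcup_{i \in I} A_i

Taking I = \mathbb{N} recovers the countable case, while taking I = \mathbb{R}, say, gives a union over uncountably many sets, something the "i = 1 ... infinity" notation cannot express.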

In formal language theory and the theory of computation, index sets have a slightly different meaning. Say we are given some universal Turing machine U, a Turing machine capable of simulating the behavior of any other Turing machine when given that machine's encoding on its input tape. Every such encoding is simply a string of symbols over the input alphabet, and every such string can be converted to a number in a straightforward way, yielding the Turing machine's "Gödel number" or (to use Turing's original terminology) its description number. An index set is then defined as the set of all description numbers of Turing machines that compute a function belonging to some fixed set of partial recursive functions.
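In the usual recursion-theoretic notation, writing \varphi_e for the partial recursive function computed by the machine with description number e, the index set of a class C of partial recursive functions is

    I_C = \{\, e \in \mathbb{N} \mid \varphi_e \in C \,\}

For example, taking C to be the class of all total recursive functions yields the classic index set Tot = { e : \varphi_e is total }.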

Note that any index set must either be empty or infinite, since infinitely many Turing machines compute any particular function (to see this, merely add an arbitrary number of useless states to any machine that computes it). The two trivial index sets, the empty set and the set of all description numbers, are obviously decidable, but remarkably enough, by Rice's Theorem, every nontrivial index set is undecidable.
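To see why Rice's Theorem holds, here is a minimal Python sketch of the standard reduction from the halting problem. Every name in it is hypothetical (`in_index_set`, `simulate`, the witness program); the point of the construction is precisely that `in_index_set` cannot exist.

    # Sketch of the standard reduction behind Rice's Theorem.
    # `in_index_set` stands for a supposed decider of some nontrivial
    # index set I_C; no correct implementation can exist.
    def in_index_set(source: str) -> bool:
        """Supposed decider for membership in I_C."""
        raise NotImplementedError

    # Assume (without loss of generality; otherwise decide the complement)
    # that the everywhere-undefined function is NOT in C, and fix a
    # witness program whose function IS in C.
    WITNESS = "def f(y): return y"

    def halts(machine_source: str, machine_input: str) -> bool:
        # Construct a program g that, on any input y, first simulates
        # `machine_source` on `machine_input` (possibly looping forever)
        # and only then behaves like the witness function f.
        g_source = (
            "def g(y):\n"
            f"    simulate({machine_source!r}, {machine_input!r})\n"
            "    return f(y)\n"
        )
        # If the simulated machine halts, g computes the witness
        # function, so g's description number is in I_C; if it loops,
        # g computes the empty function, which is not in I_C.
        return in_index_set(g_source)

Since a working `in_index_set` would make `halts` a decider for the halting problem, which is impossible, no nontrivial index set can be decidable.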
