The single-layer perceptron (SLP) is the simplest type of neural network used for pattern (i.e. vector) classification, where the patterns are linearly separable: patterns that lie on opposite sides of a hyperplane. In its most basic form, a perceptron consists of a single McCulloch–Pitts (M&P) neuron.

The M&P neuron has inputs $(x_1, \ldots, x_n)$ with a set of weights $(w_1, \ldots, w_n)$, and an external bias $b$. The induced local field $u$ of the neuron is therefore

$$u = \sum_{i=1}^{n} w_i x_i + b$$
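As a minimal sketch in Python (the function and variable names are illustrative, not from the original), the induced local field is just a weighted sum of the inputs plus the bias:

```python
def local_field(weights, inputs, bias):
    """Induced local field u = sum(w_i * x_i) + b of an M&P neuron."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias
```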

The neuron uses the signum function as its threshold function, and thus produces a bipolar output $y$, where

$$y = \operatorname{sgn}(u)$$
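A sketch of the signum threshold follows; note that mapping $u = 0$ to $+1$ is a convention chosen here for illustration, not something fixed by the text:

```python
def sgn(u):
    """Signum threshold: maps the local field to a bipolar output."""
    return 1 if u >= 0 else -1  # convention: sgn(0) = +1 (assumption)
```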

There is clearly no point in allowing for more than one output from a single neuron, since there are no other neurons in the network to feed signals to, and all outputs would simply be functions of the same induced local field.

Such a perceptron can classify an $n$-dimensional vector $(x_1, x_2, \ldots, x_n)$ into one of two classes, $C_1$ or $C_2$. We do this by specifying a decision rule for classification: if the output is $-1$, the input vector belongs to class $C_1$; if the output is $+1$, it belongs to class $C_2$. Hence, this SLP can classify patterns that belong to two linearly separable classes. Adding more neurons extends the classification to more than two such classes.
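Putting the pieces together, here is a minimal sketch of the decision rule; the weights, bias, and input vector below are made-up illustrative values, not taken from the text:

```python
def classify(weights, inputs, bias):
    """Assign an input vector to C1 (output -1) or C2 (output +1)."""
    u = sum(w * x for w, x in zip(weights, inputs)) + bias  # induced local field
    y = 1 if u >= 0 else -1                                 # signum output
    return "C1" if y == -1 else "C2"

# Example with illustrative 2-dimensional data:
print(classify([0.5, -0.3], [1.0, 2.0], 0.1))  # u = 0.0, so y = +1 -> C2
```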