Multi-layer Perceptrons (MLPs) are an important type of feedforward network. They consist of a set of input nodes (the input layer), one or more hidden layers of neurons, and finally an output layer of neurons. An input signal is fed into the input layer of nodes and propagates through the network on a layer-by-layer basis, always in a forward direction (i.e. from the input layer to the output layer, via the hidden layers, in an acyclic fashion), before an output signal is released at the output layer of neurons. It is the hidden layers of neurons that give the network the capacity to extract progressively more meaningful features from the input vectors.
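As a concrete illustration, the layer-by-layer forward propagation described above might be sketched as follows in Python. The sigmoid activation, the particular layer sizes and the NumPy-based formulation are assumptions made for this example only, not part of the MLP definition itself:

    import numpy as np

    def sigmoid(v):
        # Logistic activation applied element-wise
        return 1.0 / (1.0 + np.exp(-v))

    def forward(x, weights, biases):
        # Propagate the input signal layer by layer, always forwards
        signal = x
        for W, b in zip(weights, biases):
            signal = sigmoid(W @ signal + b)   # each layer's output feeds the next layer
        return signal

    # Example: 3 input nodes, one hidden layer of 4 neurons, 2 output neurons
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
    biases = [np.zeros(4), np.zeros(2)]
    output_signal = forward(np.array([0.5, -1.0, 2.0]), weights, biases)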

The hidden layers are so called because they are ‘sandwiched’ between the input and output layers, performing computations internal to the network while remaining invisible to the external world. The neurons in these layers are correspondingly known as hidden neurons.

The network we describe is fully connected, meaning that each neuron in any layer is connected to every neuron in the previous layer. As mentioned above, signals flow from left to right along the network; consequently, every signal arriving at a given layer has travelled through the same number of links from the input nodes. Two types of signal can be identified in the MLP (Parker, 1987):

1. Function Signals enter the network at the input nodes; they propagate forwards through the network (neuron by neuron) and finally emerge at the output end as output signals.

2. Error Signals originate at the output neurons of the network; they propagate backwards through the network (layer by layer) and are used to adjust the synaptic weights so as to reduce the overall error.

The error signal, ej(n), generated by the response of output neuron j at iteration n of training, is defined as

ej(n) = dj(n) - yj(n)

where

dj(n) = desired response at neuron j
yj(n) = actual output at neuron j
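For example, if the desired response at output neuron j on iteration n is dj(n) = 1.0 and the neuron actually outputs yj(n) = 0.75, the error signal is ej(n) = 1.0 - 0.75 = 0.25.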

In order to train the MLP, we use the error signals generated at the output neurons to adjust the weights throughout the whole network. Because these error signals are computed from desired responses as well as actual responses at the output nodes, this scheme constitutes a supervised training algorithm; and because the errors are propagated backwards through the network, it is known as the error back-propagation algorithm.
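A minimal sketch of one such training iteration is given below, assuming a single hidden layer, sigmoid activations and a plain gradient-descent weight update (biases are omitted for brevity; none of these choices is prescribed by the description above):

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def train_step(x, d, W1, W2, eta=0.1):
        # Forward pass: function signals flow from the input to the output
        h = sigmoid(W1 @ x)                       # hidden layer response
        y = sigmoid(W2 @ h)                       # output layer response
        # Error signals at the output neurons: e_j(n) = d_j(n) - y_j(n)
        e = d - y
        # Backward pass: local gradients (deltas) computed layer by layer
        delta_out = e * y * (1.0 - y)             # sigmoid derivative at the output layer
        delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
        # Adjust the synaptic weights so as to reduce the overall error
        W2 += eta * np.outer(delta_out, h)
        W1 += eta * np.outer(delta_hid, x)
        return e

    # One training iteration on a single input / desired-response pair
    W1 = np.random.default_rng(1).standard_normal((4, 3))
    W2 = np.random.default_rng(2).standard_normal((2, 4))
    e = train_step(np.array([0.5, -1.0, 2.0]), np.array([1.0, 0.0]), W1, W2)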
