A traditional neural network is a set of connected processing elements. Each of these processing elements (or neurons) performs a mathematical function. The neurons are grouped into layers, and a complete network is made up of two or more of these layers.
To run the network, preprocessed data is fed to the neurons in the first layer, the input layer. The output of each neuron then becomes an input to one or more neurons in the next layer via synapses (or connections).
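The layered flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: the weights, biases, layer sizes, and the sigmoid activation are all made-up choices for the example.

```python
import math

def sigmoid(x):
    # A common activation function; squashes any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron computes a weighted sum of every input it receives
    # through its synapses, then applies the activation function.
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative two-layer network: 3 inputs -> 2 hidden neurons -> 1 output.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
output_w = [[1.0, -1.0]]
output_b = [0.2]

x = [0.6, 0.1, 0.9]                # preprocessed input data
h = layer(x, hidden_w, hidden_b)   # outputs of the first layer...
y = layer(h, output_w, output_b)   # ...become inputs to the next layer
print(y)
```

Note how the outputs of one layer are passed directly as the inputs of the next; the synapses here are implicit in the weight matrices.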
And this is the important part: each synapse applies a weight to the data as it passes between processing elements. Although the paths of the synapses are generally predefined, the weight on each synapse can be dynamically adjusted until the desired output is generated for a given input. This adjustment process is known as training.
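The weight-adjustment idea can be demonstrated with the simplest possible case: a single neuron trained with the classic perceptron rule. The AND-gate data, learning rate, and step activation are illustrative assumptions, not details from the text; real networks use many neurons and gradient-based training, but the principle is the same: nudge the synapse weights until the outputs match the targets.

```python
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs through the synapses, then a step activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    # Perceptron rule: move each weight in proportion to the output error.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND: the desired output is 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
for inputs, target in data:
    print(inputs, "->", neuron(inputs, weights, bias))
```

After training, the learned weights reproduce the AND table: the synapse paths never changed, only the weights on them.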