Learning machines are systems designed to extract
information from a set of training data.
In the framework of
statistical learning theory, such a machine can be thought of as a system that is trained on a sample of labelled data (i.e. n-dimensional
vectors, each paired with a real value) and that, after training, can predict the label of a new
vector.
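In symbols, a minimal sketch of this setup (with notation chosen here for illustration, not taken from the text): the training set is
\[
S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_\ell, y_\ell)\}, \qquad \mathbf{x}_i \in \mathbb{R}^n,\; y_i \in \mathbb{R},
\]
and training produces a function $f : \mathbb{R}^n \to \mathbb{R}$ whose value $f(\mathbf{x})$ is the predicted label of a new vector $\mathbf{x}$.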
For instance, we can feed an L.M. a set of images of hand-written numbers (the most commonly used example) and tell it which number each of these images represents. If we then supply a new image to the L.M., it should be able to predict which number that image represents.
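As a concrete sketch of this digit example (scikit-learn and the small multi-layer perceptron below are illustrative choices of my own, not something prescribed by the text):

```python
# A minimal sketch of the hand-written digit example.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labelled data: 8x8 images flattened to 64-dimensional vectors,
# each paired with the digit (0-9) it represents.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Training: show the machine the images together with their labels.
machine = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
machine.fit(X_train, y_train)

# Prediction: supply a new image and ask which number it represents.
new_image = X_test[0]
print("predicted digit:", machine.predict([new_image])[0])
print("actual digit:   ", y_test[0])
```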
Neural networks are a typical example of (statistical)
learning machines.
While it would be pointless to claim that learning machines reproduce the human process of learning, it seems to me that L.M.s implement, in a mathematical way, some features present in our brain.
I don't think our brain uses any
entropy minimization or
structural risk minimization technique, but in some way learning machines could be a simplified model of part of its activity.
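For reference, structural risk minimization in Vapnik's classical formulation balances training error against model capacity by minimizing a bound of the following standard form, which holds with probability $1 - \eta$:
\[
R(f) \;\le\; R_{\mathrm{emp}}(f) \;+\; \sqrt{\frac{h\left(\ln\frac{2\ell}{h} + 1\right) - \ln\frac{\eta}{4}}{\ell}},
\]
where $R(f)$ is the expected risk of the function $f$, $R_{\mathrm{emp}}(f)$ its empirical risk on the $\ell$ training examples, and $h$ the VC dimension of the class from which $f$ is chosen.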