A layered neural net with adaptable synaptic weights and fixed threshold potentials
is studied in the presence of a global feedback signal that can take only two
values, indicating whether the network's output in response to its input is right
or wrong.
On the basis of four biologically motivated assumptions, it is found that
only two forms of learning are possible: Hebbian and anti-Hebbian learning. It
is shown that Hebbian learning memorizes input-output relations, while
anti-Hebbian learning does the opposite: it changes the input-output relations of
the network. Hebbian learning should take place when the output is right, and
anti-Hebbian learning when the output is wrong.
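This division of labor can be condensed into a single feedback-modulated update. The following Python sketch is an illustration only, under assumptions not stated in the abstract (binary +/-1 neurons, a single layer, a plain outer-product form for both the Hebbian and anti-Hebbian terms, and hypothetical parameter values); it is not the paper's actual rule.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 100, 10
eta = 0.05                                   # learning rate (hypothetical value)
W = rng.normal(0.0, 1.0 / np.sqrt(n_in),
               size=(n_out, n_in))           # adaptable synaptic weights
theta = np.zeros(n_out)                      # fixed threshold potentials (zero here)

def output(x):
    """Binary +/-1 neurons: fire if the local potential exceeds the threshold."""
    return np.where(W @ x - theta > 0, 1.0, -1.0)

def learn(x, target):
    """One learning step driven by the two-valued global feedback signal.

    r = +1 (output right) gives a Hebbian update that imprints the current
    input-output relation; r = -1 (output wrong) gives an anti-Hebbian update
    that unlearns it.
    """
    global W
    y = output(x)
    r = 1.0 if np.array_equal(y, target) else -1.0  # global feedback: right/wrong
    W += eta * r * np.outer(y, x)                   # Hebbian (+1) or anti-Hebbian (-1)
```

With this sign convention a correct output deepens the imprint of the current mapping, while a wrong one drives the weights away from it, matching the division of labor described above.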
A particular choice for the anti-Hebbian part of the learning rule is made,
which guarantees an adequate average neuronal activity. A network with non-zero
threshold potentials is shown to perform its task of realizing the desired
input-output relations best if it is sufficiently diluted, i.e., if only a
relatively small fraction of all possible synaptic connections is realized.
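Dilution can be illustrated by continuing the sketch above with a fixed random mask on the weight matrix; the connection fraction c and the masking scheme are assumptions for illustration, not the paper's construction.

```python
# Continuing the sketch above: realize only a fraction c of all possible
# synaptic connections, chosen at random and fixed once at initialization
# (c = 0.1 is a hypothetical value).
c = 0.1
mask = (rng.random((n_out, n_in)) < c).astype(float)  # 1 where a synapse exists
W *= mask

def learn_diluted(x, target):
    """The same feedback-modulated rule, restricted to realized connections."""
    global W
    y = output(x)
    r = 1.0 if np.array_equal(y, target) else -1.0
    W += eta * r * np.outer(y, x) * mask              # absent synapses stay absent
```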