Physical models of neural networks.

*(English)* Zbl 0743.92004
Singapore etc.: World Scientific. viii, 143 p. (1990).

This text is based on a course in neural network modeling. It is intended to offer an introduction to the topic from the physicist’s point of view. This viewpoint may explain the author’s occasional failure to use conventional neurophysiological terminology. Although the main subject is the Hopfield model and its descendants, feed-forward networks, such as Rosenblatt’s perceptron, are also reviewed. A qualitative analogy between associative memory and attractors in phase space is established.

The physiology of single neurons is briefly reviewed. Mathematical modelling of the neuron is based on the assumption that all information of interest is represented by neuronal firing frequencies. Postsynaptic depolarization is represented by a linear combination of presynaptic firing frequencies in which the coefficients are “teachable” excitatory or inhibitory synaptic connection strengths. The postsynaptic firing rate is a nonlinear function of the postsynaptic potential, which can be approximated by a step function. Discrete-time “clock” dynamics is employed. Given these simplifications, the analogy between the neuronal model and real biological neurons is not strong.
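The rate-model neuron described in this paragraph can be sketched in a few lines of Python. This is a minimal illustration of the scheme as summarized above; the function name, weights, and threshold are hypothetical, not taken from the book:

```python
def neuron_rate(presynaptic_rates, weights, threshold=0.0):
    """Rate-model neuron: postsynaptic depolarization is a linear
    combination of presynaptic firing rates, with positive (excitatory)
    or negative (inhibitory) synaptic weights; the output firing rate
    is a step-function approximation of the nonlinear response."""
    potential = sum(w * r for w, r in zip(weights, presynaptic_rates))
    return 1 if potential >= threshold else 0  # fires (1) or is silent (0)

# Two excitatory inputs and one inhibitory input
print(neuron_rate([1.0, 0.5, 1.0], [0.6, 0.8, -0.9]))  # → 1 (0.6 + 0.4 - 0.9 ≥ 0)
```

In discrete-time "clock" dynamics, every neuron in the network would apply this update simultaneously at each tick.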

Following the physicist John Hopfield, it is proposed that a neuronal network is analogous to an Ising model of a spin glass, which is an alloy of a small amount of randomly distributed magnetic metal with a nonmagnetic metal. With the help of this analogy, the firing of a neuron is represented by a spin-up orientation of a magnetic atom and its failure to fire by a spin-down orientation. Noise is represented by temperature. Learning follows the simple Hebb rule or more complex iterative rules for altering the strengths of synaptic connections. Hopfield network models, like the model of the individual neuron upon which they are based, are oversimplified. The virtue of the simplification is that the models are amenable to sophisticated mathematical analysis, which reveals that they have properties resembling associative memory and can be used to explain other important brain functions. The author shows that the models are robust against relaxation of some of the simplifying assumptions. It can be concluded that networks of artificial neurons are likely to continue to provide interesting and potentially useful alternatives to more conventional computers. However, the relation between current network models and the brain is not obvious.
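The Hopfield scheme summarized above — ±1 "spins", a Hebb-rule coupling matrix, and noise-free (zero-temperature) relaxation into an attractor — can be illustrated with a small associative-memory demo. This is a sketch under the standard formulation of the model, not the book's own notation; the pattern size and random seed are arbitrary:

```python
import numpy as np

def hebb_weights(patterns):
    """Hebb-rule couplings J_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu
    for +/-1 patterns, with no self-coupling (zero diagonal)."""
    N = patterns.shape[1]
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def recall(J, state, sweeps=10):
    """Noise-free asynchronous dynamics: each spin aligns with its
    local field until the network settles into an attractor."""
    s = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# Store one pattern, corrupt a few spins, and let the dynamics restore it
rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(1, 50))   # one stored pattern of 50 spins
J = hebb_weights(xi)
noisy = xi[0].copy()
noisy[:5] *= -1                          # flip 5 of the 50 spins
recovered = recall(J, noisy)
print(np.array_equal(recovered, xi[0]))  # True: the pattern is an attractor
```

The corrupted state falls back into the stored pattern, which is the sense in which attractors in phase space behave like an associative memory; "temperature" would enter as a probabilistic spin-update rule in place of the deterministic sign function.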

Reviewer: H.R.Hirsch (Lexington)

##### MSC:

| Code | Description |
| --- | --- |
| 92B20 | Neural networks for/in biological studies, artificial life and related topics |
| 92-02 | Research exposition (monographs, survey articles) pertaining to biology |
| 68T05 | Learning and adaptive systems in artificial intelligence |
| 68U20 | Simulation (MSC2010) |
| 82C32 | Neural nets applied to problems in time-dependent statistical mechanics |