Oja, Erkki; Wang, Liuyue
Neural fitting: Robustness by anti-Hebbian learning. (English) Zbl 0859.68080
Neurocomputing 12, No. 2-3, 155-170 (1996).

Summary: A nonlinear neuron, learning by anti-Hebbian rules, can perform robust fitting of parametric models to data. Robustness means that the solution is tolerant to non-Gaussian noise and resistant to outliers. The convergence of nonlinear anti-Hebbian learning rules, called here nonlinear minor component analysis (MCA) algorithms, is analyzed using a simple outlier data model. Both rigorous mathematical results and computer simulations are given. The results show that the linear algorithm, which minimizes a Total Least Squares criterion, always has a unique solution, namely the direction of smallest variance of the input data, and is therefore sensitive to outliers. In contrast, the nonlinear algorithms exhibit richer behavior: they do not have a unique solution in general, but converge to one of several stable fixed points, and they maintain their robustness as the effect of the outliers increases.

MSC:
68T05 Learning and adaptive systems in artificial intelligence

Keywords: anti-Hebbian learning rules; minor component analysis algorithms

Cite: E. Oja and L. Wang, Neurocomputing 12, No. 2-3, 155-170 (1996; Zbl 0859.68080)

Full Text: DOI
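To make the abstract concrete, here is a minimal sketch of an anti-Hebbian minor component analysis update. It assumes the classic normalized anti-Hebbian rule w ← (w − γ y x)/‖w − γ y x‖ with y = wᵀx, and a nonlinear variant obtained by replacing y with g(y) (here g = tanh, which bounds the influence of large outputs); this is illustrative of the algorithm family the summary describes, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: large variance along the first axis, small along the
# second, so the minor (smallest-variance) direction is approximately (0, 1).
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.3])

def mca_fit(X, gamma=0.01, nonlinearity=None, epochs=5):
    """Stochastic anti-Hebbian MCA sketch: returns a unit vector that should
    approach the smallest-variance direction of the data."""
    w = np.array([1.0, 1.0]) / np.sqrt(2.0)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            g = nonlinearity(y) if nonlinearity is not None else y
            w = w - gamma * g * x        # anti-Hebbian: note the minus sign
            w = w / np.linalg.norm(w)    # renormalize to keep |w| = 1
    return w

w_lin = mca_fit(X)                         # linear rule (TLS criterion)
w_tanh = mca_fit(X, nonlinearity=np.tanh)  # one nonlinear (robustified) variant

print(np.round(w_lin, 2), np.round(w_tanh, 2))
# both should be close to the minor direction (0, 1), up to sign
```

On clean Gaussian data both rules recover the minor direction; the paper's point is that under an outlier data model the linear rule is pulled away by the outliers, while bounded nonlinearities like tanh limit each sample's influence on the update.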