
A faster gradient ascent learning algorithm for nonlinear SVM. (English) Zbl 1298.68217

Summary: We propose a refined gradient ascent method with heuristic parameters for solving the dual problem of the nonlinear SVM. Aiming at a better fit to the particular training sequence, the proposed refinement uses heuristically established weights to correct the search direction at each step of the learning algorithm, which evolves in the feature space. We propose three variants for computing the correcting weights; their effectiveness is analyzed on an experimental basis in the final part of the paper. The tests pointed out good convergence properties, and moreover, the proposed modified variants exhibited higher convergence rates than Platt’s SMO algorithm. The experimental analysis aimed to derive conclusions on the recognition rate as well as on the generalization capacity. The learning phase of the SVM involved linearly separable samples randomly generated from Gaussian distributions and the WINE and WDBC datasets. The generalization capacity in the case of artificial data was evaluated by several tests performed on new linearly/nonlinearly separable data coming from the same classes. The tests pointed out high recognition rates (about 97%) on artificial datasets and even higher recognition rates in the case of the WDBC dataset.
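The general scheme the summary describes can be sketched as projected gradient ascent on the SVM dual objective, with a correcting weight blending the previous search direction into the current gradient. This is a minimal illustrative sketch, not the paper's method: the heuristic weight `beta` and the choice to fix the bias at zero (which removes the dual equality constraint) are simplifying assumptions of this example.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def dual_gradient_ascent(K, y, C=10.0, lr=0.01, beta=0.5, n_iter=500):
    """Projected gradient ascent on the SVM dual objective
        W(a) = sum_i a_i - 0.5 * a^T (K * y y^T) a,  0 <= a_i <= C.

    `beta` is a hypothetical fixed correcting weight that mixes the
    previous search direction into the current gradient; the paper's
    three weighting variants are not reproduced here.
    """
    n = len(y)
    Q = K * np.outer(y, y)
    a = np.zeros(n)
    d = np.zeros(n)                      # previous search direction
    for _ in range(n_iter):
        grad = 1.0 - Q @ a               # gradient of the dual objective
        d = grad + beta * d              # heuristically corrected direction
        a = np.clip(a + lr * d, 0.0, C)  # project back onto the box [0, C]^n
    return a

def predict(a, y, K_train_test):
    # Decision rule with the bias fixed at 0 (simplifying assumption)
    return np.sign((a * y) @ K_train_test)
```

On linearly separable Gaussian data of the kind the summary mentions, a few hundred such iterations already separate the training classes; varying how `d` is formed is where the paper's heuristic variants would plug in.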

MSC:

68T05 Learning and adaptive systems in artificial intelligence

Software:

Pegasos; mySVM; SVMlight; SSVM

References:

[1] Vapnik, V., The Nature of Statistical Learning Theory (1995), New York, NY, USA: Springer, New York, NY, USA · Zbl 0833.62008
[2] Vapnik, V., Statistical Learning Theory (1998), New York, NY, USA: John Wiley & Sons, New York, NY, USA · Zbl 0935.62007
[3] Abe, S., Support vector machines for pattern classification, Advances in Pattern Recognition (2010), London, UK: Springer, London, UK · Zbl 1191.68549 · doi:10.1007/978-1-84996-098-4
[4] Shawe-Taylor, J.; Cristianini, N., Support Vector Machines and Other Kernel-Based Learning Methods (2000), Cambridge, UK: Cambridge University Press, Cambridge, UK
[5] Osuna, E.; Freund, R.; Girosi, F., Improved training algorithm for support vector machines, Proceedings of the 7th IEEE Workshop on Neural Networks for Signal Processing (NNSP ’97)
[6] Chien, L. I.-J.; Chang, C.-C.; Lee, Y.-J., Variant methods of reduced set selection for reduced support vector machines, Journal of Information Science and Engineering, 26, 1, 183-196 (2010) · Zbl 1238.68120
[7] Lee, Y.-J.; Mangasarian, O. L., SSVM: a smooth support vector machine for classification, Computational Optimization and Applications, 20, 1, 5-22 (2001) · Zbl 1017.90105 · doi:10.1023/A:1011215321374
[8] Cao, L. J.; Keerthi, S. S.; Ong, C. J.; Uvaraj, P.; Fu, X. J.; Lee, H. P., Developing parallel sequential minimal optimization for fast training support vector machine, Neurocomputing, 70, 1-3, 93-104 (2006) · doi:10.1016/j.neucom.2006.05.007
[9] Cawley, G. C.; Talbot, N. L. C., Improved sparse least-squares support vector machines, Neurocomputing, 48, 1025-1031 (2002) · Zbl 1006.68767 · doi:10.1016/S0925-2312(02)00606-9
[10] Li, C.-H.; Ho, H.-H.; Liu, Y.-L.; Lin, C.-T.; Kuo, B.-C.; Taur, J.-S., An automatic method for selecting the parameter of the normalized kernel function to support vector machines, Journal of Information Science and Engineering, 28, 1, 1-15 (2012)
[11] Joachims, T., Making large-scale SVM learning practical, Advances in Kernel Methods—Support Vector Learning, 41-56 (1998)
[12] Suykens, J. A. K.; de Brabanter, J.; Lukas, L.; Vandewalle, J., Weighted least squares support vector machines: robustness and sparse approximation, Neurocomputing, 48, 85-105 (2002) · Zbl 1006.68799 · doi:10.1016/S0925-2312(01)00644-0
[13] Rueping, S., mySVM: another one of those support vector machines
[14] Alpaydin, E., Introduction to Machine Learning (2004), Cambridge, Mass, USA: MIT Press, Cambridge, Mass, USA
[15] Laskov, P., Feasible direction decomposition algorithms for training support vector machines, Machine Learning, 46, 1-3, 315-349 (2002) · Zbl 1050.68125 · doi:10.1023/A:1012479116909
[16] Shalev-Shwartz, S.; Singer, Y.; Srebro, N., Pegasos: primal estimated sub-GrAdient sOlver for SVM, Proceedings of the 24th International Conference on Machine Learning (ICML ’07) · doi:10.1145/1273496.1273598
[17] Yugov, V.; Kumazava, I., Online boosting algorithm based on two-phase SVM training, ISRN Signal Processing, 12 (2012)
[18] State, L.; Cocianu, C.; Mircea, M., Heuristic attempts to improve the generalization capacities in learning SVMs, Proceedings of the 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing
[19] Cocianu, C.-L.; State, L.; Vlamos, P., A new method for learning the support vector machines, Proceedings of the 6th International Conference on Software and Database Technologies (ICSOFT ’11)