Abstract

We investigate the problem of training a support vector machine (SVM) on a very large database when the number of support vectors is also very large. Training an SVM is equivalent to solving a linearly constrained quadratic programming (QP) problem in a number of variables equal to the number of data points. This optimization problem is known to be challenging when the number of data points exceeds a few thousand. In previous work by us and by other researchers, the strategy used to solve the large-scale QP problem exploits the fact that the expected number of support vectors is small (< 3,000); consequently, the existing algorithms cannot handle more than a few thousand support vectors. In this paper we present a decomposition algorithm that is guaranteed to solve the QP problem and that makes no assumptions about the expected number of support vectors. To demonstrate the feasibility of our approach, we consider a foreign exchange rate time-series database with 110,000 data points that generates 100,000 support vectors.
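The abstract's central equivalence, training an SVM is solving a box-constrained QP over one dual variable per data point, can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it folds the bias into the kernel (K(x, z) = x·z + 1) so the equality constraint drops and the dual has only box constraints, then runs exact coordinate ascent, i.e. decomposition with a working set of size one. The paper's method keeps the equality constraint and optimizes larger working sets; the toy dataset below is hypothetical, not the foreign exchange series.

```python
import numpy as np

# Hypothetical toy 2-D linearly separable data (not the paper's FX dataset).
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [-2.0, -2.0], [-3.0, -3.0], [-2.0, -3.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
C = 10.0  # upper bound on the dual variables

# Fold the bias into the kernel so the dual QP keeps only the
# box constraints 0 <= alpha_i <= C.
K = X @ X.T + 1.0
Q = (y[:, None] * y[None, :]) * K   # Hessian of the dual objective

alpha = np.zeros(len(y))
for _ in range(200):                 # sweeps over working sets of size one
    for i in range(len(y)):
        g = 1.0 - Q[i] @ alpha       # partial derivative of the dual in alpha_i
        # Exact maximizer in alpha_i with the others fixed, clipped to the box.
        alpha[i] = np.clip(alpha[i] + g / Q[i, i], 0.0, C)

# Decision values on the training set: f(x) = sum_i alpha_i y_i (x_i . x + 1).
f = (alpha * y) @ K
support = alpha > 1e-8
print("support vectors:", int(support.sum()))
print("training predictions:", np.sign(f))
```

With a working set of size q instead of one, each inner step becomes a small q-variable QP while the remaining variables stay fixed, which is the decomposition idea the abstract refers to.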

Keywords

Computer science, Training, Support vector machine, Artificial intelligence, Algorithm, Machine learning

Publication Info

Year
1997
Type
article
Citations
1065
Access
Closed

Cite This

E. Osuna, R. M. Freund, F. Girosi (1997). An improved training algorithm for support vector machines. In Neural Networks for Signal Processing VII: Proceedings of the 1997 IEEE Signal Processing Society Workshop. https://doi.org/10.1109/nnsp.1997.622408

Identifiers

DOI
10.1109/nnsp.1997.622408