Abstract

Efficient training of direct multi-class formulations of linear Support Vector Machines is very useful in applications such as text classification with a huge number of examples as well as features. This paper presents a fast dual method for this training. The main idea is to sequentially traverse the training set and optimize the dual variables associated with one example at a time. Training speed is further enhanced by shrinking and cooling heuristics. Experiments indicate that our method is much faster than state-of-the-art solvers such as bundle, cutting-plane and exponentiated gradient methods.
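The core idea in the abstract, visiting one example at a time and solving for its dual variables in closed form while maintaining the primal weight vector, can be illustrated with a simplified sketch. The snippet below shows sequential dual coordinate descent for the binary linear SVM (the paper's method solves an analogous per-example sub-problem in the multi-class dual); the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def dual_cd_svm(X, y, C=1.0, epochs=10):
    """Dual coordinate descent for a binary linear SVM (hinge loss).

    Sketch of the sequential-dual idea: sweep over examples and
    optimize the dual variable of one example at a time, keeping
    w = sum_i alpha_i * y_i * x_i up to date so each step is O(d).
    """
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Q = (X ** 2).sum(axis=1)              # diagonal of the Gram matrix
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = y[i] * w.dot(X[i]) - 1.0  # gradient of the dual at alpha_i
            # projected Newton step, clipped to the box [0, C]
            new_a = min(max(alpha[i] - g / Q[i], 0.0), C)
            w += (new_a - alpha[i]) * y[i] * X[i]
            alpha[i] = new_a
    return w

# usage: two well-separated Gaussian clouds (toy data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(3, 0.5, (20, 2)), rng.normal(-3, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
w = dual_cd_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

The multi-class (Crammer-Singer) case replaces the scalar clipped Newton step with a small quadratic sub-problem over the class dual variables of example i, but the sequential traversal and incremental update of w are the same.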

Keywords

Traverse, Heuristics, Support vector machine, Dual, Computer science, Class, Set, Training set, Speedup, Algorithm, Artificial intelligence, Pattern recognition, Mathematical optimization, Mathematics, Parallel computing

Publication Info

Year
2008
Type
article
Pages
408-416
Citations
152
Access
Closed

Cite This

S. Sathiya Keerthi, S. Sundararajan, Kai‐Wei Chang et al. (2008). A sequential dual method for large scale multi-class linear SVMs. 408-416. https://doi.org/10.1145/1401890.1401942

Identifiers

DOI
10.1145/1401890.1401942