Abstract

This paper compares four classification algorithms (discriminant functions) for classifying individuals into two multivariate populations. The discriminant functions (DF's) compared are derived according to the Bayes rule for normal populations and differ in the assumptions made about the structure of the covariance matrices. Analytical formulas for the expected probability of misclassification EP_N are derived and show that the classification error EP_N depends on the structure of the classification algorithm, the asymptotic probability of misclassification P∞, and the ratio of learning sample size N to dimensionality p: N/p for all linear DF's discussed and N²/p for quadratic DF's. Tables of the learning quantity EP_N/P∞ as a function of the parameters P∞, N, and p for the four classification algorithms analyzed are presented; they may be used for estimating the necessary learning sample size, determining the optimal number of features, and choosing the type of classification algorithm when the learning sample size is limited.
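
As an illustration of the kind of dependence described above (and not a reproduction of the paper's analytical formulas), the Python sketch below estimates EP_N by Monte Carlo for one of the simplest linear DF's, the nearest-means (Euclidean distance) classifier, applied to two spherical Gaussian classes, and prints the ratio EP_N/P∞ for a few values of N and p. The Mahalanobis distance delta, the (N, p) pairs, and the function names are illustrative assumptions, not values taken from the paper.

import math
import numpy as np


def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))


def expected_error(p, n, delta=2.0, reps=2000, seed=None):
    """Monte Carlo estimate of EP_N for the nearest-means linear DF.

    Two classes: N(0, I_p) and N(mu, I_p) with ||mu|| = delta (Mahalanobis
    distance), equal priors, n training vectors per class.  For each simulated
    training set the conditional error of the trained linear rule is computed
    exactly from the normal CDF, then averaged over the repetitions.
    """
    rng = np.random.default_rng(seed)
    mu1 = np.zeros(p)
    mu2 = np.zeros(p)
    mu2[0] = delta
    errs = np.empty(reps)
    for r in range(reps):
        m1 = mu1 + rng.standard_normal((n, p)).mean(axis=0)  # sample mean, class 1
        m2 = mu2 + rng.standard_normal((n, p)).mean(axis=0)  # sample mean, class 2
        w = m1 - m2                              # discriminant direction
        w0 = -0.5 * w @ (m1 + m2)                # threshold at the midpoint
        s = np.linalg.norm(w)
        e1 = phi(-(w @ mu1 + w0) / s)            # P(assigned to class 2 | class 1)
        e2 = phi((w @ mu2 + w0) / s)             # P(assigned to class 1 | class 2)
        errs[r] = 0.5 * (e1 + e2)
    return errs.mean()


if __name__ == "__main__":
    delta = 2.0                                  # assumed Mahalanobis distance
    p_inf = phi(-delta / 2.0)                    # asymptotic error P_inf
    for p, n in [(5, 25), (20, 25), (50, 25), (50, 250)]:
        ep_n = expected_error(p, n, delta, seed=0)
        print(f"p={p:3d}  N={n:4d}  EP_N={ep_n:.4f}  EP_N/P_inf={ep_n / p_inf:.2f}")

For a fixed N, increasing p drives EP_N/P∞ well above 1, while increasing N at fixed p brings the ratio back toward 1, which is the qualitative behaviour the paper's tables quantify for each DF.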

Keywords

Pattern recognition (psychology), Curse of dimensionality, Artificial intelligence, Computer science, Statistical classification, Sample (material), Algorithm

Publication Info

Year: 1980
Type: article
Volume: PAMI-2
Issue: 3
Pages: 242-252
Citations: 166
Access: Closed

Citation Metrics

166 citations (OpenAlex)

Cite This

Šarūnas Raudys, Vitalijus Pikelis (1980). On Dimensionality, Sample Size, Classification Error, and Complexity of Classification Algorithm in Pattern Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-2(3), 242-252. https://doi.org/10.1109/tpami.1980.4767011

Identifiers

DOI
10.1109/tpami.1980.4767011