Abstract

Advances in neuroimaging, genomics, motion tracking, eye-tracking and many other technology-based data collection methods have led to a torrent of high-dimensional datasets, which commonly have a small number of samples because of the intrinsically high cost of data collection involving human participants. High-dimensional data with a small number of samples are of critical importance for identifying biomarkers and conducting feasibility and pilot work; however, they can lead to biased machine learning (ML) performance estimates. Our review of studies which have applied ML to distinguish autistic from non-autistic individuals showed that small sample size is associated with higher reported classification accuracy. We therefore investigated whether this bias could be caused by the use of validation methods which do not sufficiently control overfitting. Our simulations show that K-fold Cross-Validation (CV) produces strongly biased performance estimates with small sample sizes, and the bias is still evident at a sample size of 1000. Nested CV and train/test split approaches produce robust and unbiased performance estimates regardless of sample size. We also show that feature selection, when performed on pooled training and testing data, contributes considerably more to bias than parameter tuning. In addition, we explored the contribution to bias of data dimensionality, hyper-parameter space and number of CV folds, and compared the validation methods using discriminable data. The results suggest how to design robust testing methodologies when working with small datasets and how to interpret the results of other studies depending on which validation method was used.
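
The pitfall the abstract describes (feature selection performed on pooled training and testing data) and the nested CV remedy can be made concrete with a short sketch. This is a minimal illustration of the general technique, not the authors' code; the pure-noise dataset, linear SVM, k=10 feature selection and C grid are assumptions chosen for brevity. Because the labels are random, the true accuracy is 50%, so any estimate well above chance is overfitting bias.

    # Minimal sketch (not the paper's code): biased K-fold CV, where feature
    # selection sees the pooled data, versus nested CV, where selection and
    # tuning happen only inside each training fold. Data/model are assumptions.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 1000))  # 40 samples, 1000 features, pure noise
    y = rng.integers(0, 2, 40)           # random labels: true accuracy ~ 0.5

    # Biased: select features on the pooled data, then cross-validate.
    X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)  # leaks test info
    biased = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean()

    # Unbiased: nested CV; selection and tuning stay inside training folds.
    pipe = Pipeline([("select", SelectKBest(f_classif, k=10)),
                     ("clf", SVC(kernel="linear"))])
    inner = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10]}, cv=3)  # inner: tuning
    unbiased = cross_val_score(inner, X, y, cv=5).mean()        # outer: estimate

    print(f"Biased CV accuracy on noise: {biased:.2f}")    # typically >> 0.5
    print(f"Nested CV accuracy on noise: {unbiased:.2f}")  # near chance, ~0.5

The same pattern generalises: any step fitted to the data (scalers, feature selectors, hyper-parameter tuners) belongs inside the outer folds, which is what placing the Pipeline inside GridSearchCV and then inside cross_val_score achieves here.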

Keywords

Overfitting, Sample size determination, Computer science, Artificial intelligence, Cross-validation, Data collection, Machine learning, Selection bias, Sample (material), Statistics, Feature selection, Data mining, Pattern recognition (psychology), Mathematics, Artificial neural network

MeSH Terms

Algorithms; Biomedical Research; Data Interpretation, Statistical; Humans; Machine Learning; Sample Size

Publication Info

Year: 2019
Type: article
Volume: 14
Issue: 11
Pages: e0224365
Citations: 1443
Access: Open

Citation Metrics

OpenAlex: 1443
Influential: 24
CrossRef: 1299

Cite This

Andrius Vabalas, Emma Gowen, Ellen Poliakoff et al. (2019). Machine learning algorithm validation with a limited sample size. PLoS ONE, 14(11), e0224365. https://doi.org/10.1371/journal.pone.0224365

Identifiers

DOI: 10.1371/journal.pone.0224365
PMID: 31697686
PMCID: PMC6837442

Data Quality

Data completeness: 90%