Abstract

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on the Caltech-101 and Caltech-256 datasets.
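The transfer result in the abstract rests on a simple recipe: keep the convolutional features fixed and retrain only the final softmax layer on the new dataset. A minimal sketch of that last step, assuming features have already been extracted; the function name, array shapes, learning rate, and synthetic data below are illustrative, not taken from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def retrain_softmax(features, labels, n_classes, lr=0.1, epochs=200):
    """Fit only a softmax (multinomial logistic) layer on frozen features."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]          # one-hot targets
    for _ in range(epochs):
        P = softmax(features @ W + b)      # predicted class probabilities
        grad = P - Y                       # cross-entropy gradient w.r.t. logits
        W -= lr * features.T @ grad / n    # gradient-descent update on weights
        b -= lr * grad.mean(axis=0)        # and on biases
    return W, b

# Toy usage: two well-separated clusters stand in for frozen conv features.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b = retrain_softmax(feats, labels, n_classes=2)
accuracy = (softmax(feats @ W + b).argmax(axis=1) == labels).mean()
```

Because only the last layer is trained, the problem is convex and plain gradient descent suffices; this is why retraining the classifier on a small target dataset is cheap compared with training the whole network.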

Keywords

Softmax function, Computer science, Benchmark (surveying), Classifier (UML), Artificial intelligence, Convolutional neural network, Machine learning, Visualization, Pattern recognition (psychology)

Publication Info

Year: 2013
Type: preprint
Citations: 447 (OpenAlex)
Access: Closed

Cite This

Matthew D. Zeiler, Rob Fergus (2013). Visualizing and Understanding Convolutional Networks. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1311.2901

Identifiers

DOI: 10.48550/arxiv.1311.2901