Abstract

The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5× speedup with no loss in accuracy, and 4.5× speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.
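The arithmetic motivation is simple: a full-rank d × d spatial filter costs on the order of d² multiply-adds per output pixel, whereas a rank-1 (separable) filter factors into a d × 1 column filter followed by a 1 × d row filter, costing roughly 2d. The Python/NumPy sketch below illustrates this separability for a single filter using the SVD; it is a minimal illustration of the underlying idea only, not the paper's actual schemes, which approximate a whole filter bank with a shared basis of separable filters learned by minimising reconstruction error (all variable names are hypothetical).

    import numpy as np
    from scipy.signal import convolve2d

    # Minimal sketch: replace one d x d spatial filter with its best rank-1
    # (separable) approximation, obtained from the SVD. Illustrative only;
    # this is not the paper's filter-bank reconstruction scheme.
    rng = np.random.default_rng(0)
    d = 7
    f = rng.standard_normal((d, d))        # original d x d spatial filter

    U, s, Vt = np.linalg.svd(f)
    col = U[:, :1] * np.sqrt(s[0])         # d x 1 vertical component
    row = Vt[:1, :] * np.sqrt(s[0])        # 1 x d horizontal component
    f1 = col @ row                         # best rank-1 approximation of f

    img = rng.standard_normal((64, 64))

    # Direct 2D convolution with the rank-1 filter: ~d^2 ops per output pixel.
    direct = convolve2d(img, f1, mode="valid")

    # Separable version: a d x 1 pass then a 1 x d pass, ~2d ops per pixel.
    separable = convolve2d(convolve2d(img, col, mode="valid"), row, mode="valid")

    print(np.allclose(direct, separable))  # True: same output, ~d/2x fewer ops

For d = 7 this gives a theoretical d/2 = 3.5× reduction in per-pixel work, before any cross-channel factorisation is exploited.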

Keywords

Convolutional neural network · Computer science · Rank (graph theory) · Artificial intelligence · Mathematics · Combinatorics

Publication Info

Year: 2014
Type: article
Pages: 88.1-88.13
Citations: 1130
Access: Closed

Citation Metrics

1130 citations (OpenAlex)

Cite This

Max Jaderberg, Andrea Vedaldi, Andrew Zisserman (2014). Speeding up Convolutional Neural Networks with Low Rank Expansions. Proceedings of the British Machine Vision Conference (BMVC), 88.1-88.13. https://doi.org/10.5244/c.28.88

Identifiers

DOI: 10.5244/c.28.88