Abstract

By stacking layers of convolution and nonlinearity, convolutional networks (ConvNets) effectively learn low-level to high-level features and discriminative representations. Since the end goal of large-scale recognition is to delineate the complex boundaries of thousands of classes, adequately exploring feature distributions is important for realizing the full potential of ConvNets. However, state-of-the-art works concentrate only on deeper or wider architecture design, while rarely exploring feature statistics higher than first order. We take a step towards addressing this problem. Our method performs covariance pooling, instead of the most commonly used first-order pooling, of high-level convolutional features. The main challenges involved are robust covariance estimation given a small sample of large-dimensional features and exploitation of the manifold structure of covariance matrices. To address these challenges, we present a Matrix Power Normalized Covariance (MPN-COV) method. We develop forward and backward propagation formulas for the involved nonlinear matrix functions so that MPN-COV can be trained end-to-end. In addition, we analyze both qualitatively and quantitatively its advantage over the well-known Log-Euclidean metric. On the ImageNet 2012 validation set, combining MPN-COV yields gains of over 4%, 3% and 2.5% for AlexNet, VGG-M and VGG-16, respectively; integrating MPN-COV into 50-layer ResNet outperforms ResNet-101 and is comparable to ResNet-152. The source code will be available on the project page: http://www.peihuali.org/MPN-COV.
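The abstract describes covariance pooling of high-level convolutional features followed by matrix power normalization of the resulting covariance matrix. Below is a minimal NumPy sketch of that forward computation only; the function name mpn_cov, the (d channels × N spatial positions) input layout and the default exponent alpha=0.5 are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def mpn_cov(features, alpha=0.5):
    """Sketch of matrix power normalized covariance pooling (forward pass).

    features : array of shape (d, N), i.e. d feature channels observed at
               N spatial positions (e.g. a (d, H, W) conv map reshaped to (d, H*W)).
    alpha    : power exponent; 0.5 (matrix square root) is used as an
               illustrative default.
    Returns a (d, d) matrix Sigma**alpha.
    """
    d, N = features.shape
    # Centering/scaling matrix: (1/N) * (I - (1/N) * 1 1^T)
    I_bar = (np.eye(N) - np.ones((N, N)) / N) / N
    # Sample covariance of the convolutional features, shape (d, d)
    sigma = features @ I_bar @ features.T
    # Eigendecomposition of the symmetric positive semi-definite covariance
    eigvals, eigvecs = np.linalg.eigh(sigma)
    # Clamp tiny negative eigenvalues from numerical error, then apply the power
    eigvals = np.clip(eigvals, 0.0, None)
    return eigvecs @ np.diag(eigvals ** alpha) @ eigvecs.T
```

In a network, the normalized d×d matrix would typically be vectorized (e.g., its upper triangle) and fed to the classifier; the end-to-end training mentioned in the abstract additionally requires backpropagation through the eigendecomposition, which this sketch omits.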

Keywords

Discriminative model, Pooling, Covariance, Computer science, Metric (unit), Pattern recognition (psychology), Convolution (computer science), Artificial intelligence, Feature (linguistics), Covariance matrix, Estimation of covariance matrices, Kernel (algebra), Convolutional neural network, Convolutional code, Algorithm, Mathematics, Statistics, Artificial neural network, Decoding methods

Publication Info

Year: 2017
Type: article
Citations: 270 (OpenAlex)
Access: Closed

Cite This

Peihua Li, Jiangtao Xie, Qilong Wang et al. (2017). Is Second-Order Information Helpful for Large-Scale Visual Recognition? In: IEEE International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2017.228

Identifiers

DOI
10.1109/iccv.2017.228