Abstract

We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent advances in vision transformers (ViTs), in this paper we demonstrate that using a few large convolutional kernels instead of a stack of small kernels can be a more powerful paradigm. We suggest five guidelines, e.g., applying re-parameterized large depthwise convolutions, for designing efficient, high-performance large-kernel CNNs. Following the guidelines, we propose RepLKNet, a pure CNN architecture whose kernel size is as large as 31×31, in contrast to the commonly used 3×3. RepLKNet greatly closes the performance gap between CNNs and ViTs, e.g., achieving results comparable or superior to Swin Transformer on ImageNet and several typical downstream tasks, with lower latency. RepLKNet also scales well to big data and large models, obtaining 87.8% top-1 accuracy on ImageNet and 56.0% mIoU on ADE20K, which is highly competitive among state-of-the-art models of similar size. Our study further reveals that, in contrast to small-kernel CNNs, large-kernel CNNs have much larger effective receptive fields and higher shape bias rather than texture bias. Code & models at https://github.com/megvii-research/RepLKNet.
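
The re-parameterization mentioned in the abstract (a small depthwise kernel trained in parallel with the large one and folded into it for deployment) is the core mechanism behind the 31×31 kernels. The sketch below is a minimal PyTorch illustration of that idea, not the authors' implementation: it is simplified to plain bias terms (the paper folds per-branch BatchNorm into the conv weights first), and the class name `ReparamLargeDWConv` and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamLargeDWConv(nn.Module):
    """Large depthwise conv (e.g. 31x31) trained with a parallel small
    depthwise conv (e.g. 5x5); the small branch is folded into the large
    kernel before deployment, so inference runs a single convolution."""

    def __init__(self, channels: int, large_k: int = 31, small_k: int = 5):
        super().__init__()
        self.large = nn.Conv2d(channels, channels, large_k, padding=large_k // 2,
                               groups=channels, bias=True)
        self.small = nn.Conv2d(channels, channels, small_k, padding=small_k // 2,
                               groups=channels, bias=True)
        self.merged = None  # populated by reparameterize()

    def forward(self, x):
        if self.merged is not None:           # deployment path: one conv
            return self.merged(x)
        return self.large(x) + self.small(x)  # training path: two branches

    @torch.no_grad()
    def reparameterize(self):
        """Zero-pad the small kernel to the large size and add the weights;
        valid because convolution is linear in its kernel."""
        large_k = self.large.kernel_size[0]
        small_k = self.small.kernel_size[0]
        pad = (large_k - small_k) // 2
        w = self.large.weight + F.pad(self.small.weight, [pad] * 4)
        b = self.large.bias + self.small.bias
        self.merged = nn.Conv2d(self.large.in_channels, self.large.out_channels,
                                large_k, padding=large_k // 2,
                                groups=self.large.groups, bias=True)
        self.merged.weight.copy_(w)
        self.merged.bias.copy_(b)

# Sanity check: the merged conv matches the two-branch output up to float error.
m = ReparamLargeDWConv(channels=8)
x = torch.randn(1, 8, 56, 56)
y_train = m(x)
m.reparameterize()
assert torch.allclose(y_train, m(x), atol=1e-4)
```

The design point this sketch captures is that the extra small-kernel branch only costs compute during training; after merging, the deployed model is a single large depthwise convolution per layer.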

Keywords

Computer science, Kernel (algebra), Convolutional neural network, Parameterized complexity, Tree kernel, Scalability, Artificial intelligence, Scaling, Pattern recognition (psychology), Contrast (vision), Transformer, Kernel method, Support vector machine, Algorithm, Kernel embedding of distributions, Mathematics

Related Publications

A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification...

2022 · 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) · 5683 citations

Publication Info

Year: 2022
Type: Article
Pages: 11953-11965
Citations: 1152
Access: Closed

Citation Metrics

1152 citations (OpenAlex)

Cite This

Xiaohan Ding, Xiangyu Zhang, Jungong Han et al. (2022). Scaling Up Your Kernels to 31×31: Revisiting Large Kernel Design in CNNs. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11953-11965. https://doi.org/10.1109/cvpr52688.2022.01166

Identifiers

DOI: 10.1109/cvpr52688.2022.01166