Abstract

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
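The encoding the abstract describes can be illustrated with a minimal NumPy sketch: each resolution level hashes grid-corner coordinates into its own table of trainable feature vectors, interpolates them, and the per-level features are concatenated for the small network. This is a toy 2D illustration under assumed hyperparameters (level count, table size, feature width, and growth factor are illustrative, not the paper's defaults), not the fully-fused CUDA implementation the authors use.

```python
import numpy as np

# Large primes for the spatial hash; the first coordinate is left unscaled.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def spatial_hash(coords, table_size):
    # XOR of coordinate-times-prime, folded into the table via modulo.
    h = coords.astype(np.uint64) * PRIMES
    return (h[..., 0] ^ h[..., 1]) % np.uint64(table_size)

def encode(x, tables, resolutions):
    # x: point in [0, 1]^2 -> concatenated per-level interpolated features.
    feats = []
    for table, res in zip(tables, resolutions):
        p = x * res                          # scale into this level's grid
        c0 = np.floor(p).astype(np.int64)    # lower-left grid corner
        w = p - c0                           # bilinear interpolation weights
        f = 0.0
        for dx in (0, 1):                    # visit the 4 surrounding corners
            for dy in (0, 1):
                corner = c0 + np.array([dx, dy])
                idx = spatial_hash(corner, len(table))
                wt = (w[0] if dx else 1 - w[0]) * (w[1] if dy else 1 - w[1])
                f = f + wt * table[idx]
        feats.append(f)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
L, T, F = 4, 2**10, 2                        # levels, table size, feature dim
resolutions = [16 * 2**i for i in range(L)]  # geometrically growing grids
tables = [rng.normal(scale=1e-4, size=(T, F)) for _ in range(L)]
y = encode(np.array([0.3, 0.7]), tables, resolutions)
print(y.shape)  # (8,) = L * F features fed to the small MLP
```

Because coarse levels map grid cells to table slots nearly injectively while fine levels alias many cells to the same slot, the concatenation across levels is what lets the downstream network resolve hash collisions, as the abstract notes.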

Keywords

Computer science, Hash function, Speedup, Rendering (computer graphics), Artificial neural network, Hash table, CUDA, Graphics, Parallel computing, Memory bandwidth, Leverage (statistics), Graphics hardware, Artificial intelligence, Computer graphics (images)


Publication Info

Year
2022
Type
article
Volume
41
Issue
4
Pages
1-15
Citations
3089
Access
Closed

Cite This

Thomas Müller, Alex Evans, Christoph Schied et al. (2022). Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics, 41(4), 1–15. https://doi.org/10.1145/3528223.3530127

Identifiers

DOI
10.1145/3528223.3530127