Abstract

Learning good image priors is of utmost importance for the study of vision, computer vision, and image processing applications. Learning priors and optimizing over whole images can lead to tremendous computational challenges. In contrast, when we work with small image patches, it is possible to learn priors and perform patch restoration very efficiently. This raises three questions: do priors that give high likelihood to the data also lead to good performance in restoration? Can we use such patch-based priors to restore a full image? Can we learn better patch priors? In this work we answer these questions. We compare the likelihood of several patch models and show that priors that give high likelihood to data also perform better in patch restoration. Motivated by this result, we propose a generic framework which allows for whole-image restoration using any patch-based prior for which a MAP (or approximate MAP) estimate can be calculated. We show how to derive an appropriate cost function, how to optimize it, and how to use it to restore whole images. Finally, we present a generic, surprisingly simple Gaussian Mixture prior learned from a set of natural images. When used with the proposed framework, this Gaussian Mixture Model outperforms all other generic prior methods for image denoising, deblurring, and inpainting.
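
To make the abstract's idea concrete, the following is a minimal sketch, not the authors' released implementation: it fits a Gaussian Mixture Model to natural image patches and computes an approximate MAP estimate of a clean patch from a noisy one. The patch size, number of components, and the component-selection-plus-Wiener-filter approximation are illustrative assumptions, and scikit-learn's GaussianMixture stands in for whatever training procedure the paper uses.

```python
# Minimal sketch (assumptions noted inline; not the authors' code) of a GMM patch prior
# and an approximate per-patch MAP restoration under i.i.d. Gaussian noise.
import numpy as np
from sklearn.mixture import GaussianMixture

PATCH = 8   # 8x8 patches, a typical choice (assumed)
K = 32      # number of mixture components (assumed for illustration)

def extract_patches(images, n_patches=50_000, rng=np.random.default_rng(0)):
    """Sample random PATCH x PATCH patches (flattened, DC removed) from grayscale images."""
    out = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - PATCH)
        x = rng.integers(img.shape[1] - PATCH)
        p = img[y:y + PATCH, x:x + PATCH].reshape(-1).astype(np.float64)
        out.append(p - p.mean())  # remove the DC component before learning the prior
    return np.stack(out)

def fit_patch_prior(patches):
    """Maximum-likelihood GMM over patches; held-out log-likelihood is the
    abstract's criterion for comparing priors."""
    gmm = GaussianMixture(n_components=K, covariance_type="full", reg_covar=1e-6)
    gmm.fit(patches)
    return gmm

def approx_map_patch(gmm, noisy_patch, sigma):
    """Approximate MAP restoration of one noisy patch.

    Picks the mixture component with the largest posterior under the noise model,
    then applies that component's Wiener filter:
        x_hat = mu_k + S_k (S_k + sigma^2 I)^{-1} (y - mu_k)
    This is one common approximation; the paper's exact procedure may differ.
    """
    d = noisy_patch.size
    noise_cov = (sigma ** 2) * np.eye(d)
    log_post = []
    for k in range(gmm.n_components):
        S = gmm.covariances_[k] + noise_cov
        diff = noisy_patch - gmm.means_[k]
        _, logdet = np.linalg.slogdet(S)
        log_post.append(np.log(gmm.weights_[k])
                        - 0.5 * (logdet + diff @ np.linalg.solve(S, diff)))
    k = int(np.argmax(log_post))
    S_k = gmm.covariances_[k]
    diff = noisy_patch - gmm.means_[k]
    return gmm.means_[k] + S_k @ np.linalg.solve(S_k + noise_cov, diff)
```

In the whole-image framework the abstract describes, per-patch estimates of this kind over all overlapping patches would be combined with a data-fidelity term and optimized jointly; the specific cost function and optimization scheme are derived in the paper itself.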

Keywords

Inpainting, Prior probability, Deblurring, Image restoration, Artificial intelligence, Computer science, Image (mathematics), Computer vision, Mixture model, Pattern recognition (psychology), Machine learning, Image processing, Bayesian probability

Publication Info

Year: 2011
Type: article
Pages: 479-486
Citations: 1498
Access: Closed

Citation Metrics

1498 citations (OpenAlex)

Cite This

Daniel Zoran, Yair Weiss (2011). From learning models of natural image patches to whole image restoration. 2011 International Conference on Computer Vision (ICCV), 479-486. https://doi.org/10.1109/iccv.2011.6126278

Identifiers

DOI
10.1109/iccv.2011.6126278