Abstract

We present a data-driven method for estimating the 3D shapes of faces viewed in single, unconstrained photos (aka "in-the-wild"). Our method was designed with an emphasis on robustness and efficiency, with the explicit goal of deployment in real-world applications which reconstruct and display faces in 3D. Our key observation is that for many practical applications, warping the shape of a reference face to match the appearance of a query is enough to produce realistic impressions of the query's 3D shape. Doing so, however, requires matching visual features between the (possibly very different) query and reference images, while ensuring that a plausible face shape is produced. To this end, we describe an optimization process which seeks to maximize the similarity of appearances and depths, jointly, to those of a reference model. We describe our system for monocular face shape reconstruction and present both qualitative and quantitative experiments, comparing our method against alternative systems and demonstrating its capabilities. Finally, as a testament to its suitability for real-world applications, we offer an open, online implementation of our system, providing a unique means of instant 3D viewing of faces appearing in web photos.
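To make the core idea concrete, the sketch below is a minimal, illustrative toy in Python (NumPy only): for each query pixel it searches a small window of the reference face for the patch whose appearance, and, once an initial estimate exists, whose surrounding depth, best matches, and copies the reference depth at that location. It is not the paper's actual implementation; the brute-force patch search, the local window size, the balance weight lam, and all function and parameter names are assumptions chosen only to illustrate joint appearance-and-depth matching against a reference model.

```python
import numpy as np

def estimate_depth(query, ref_img, ref_depth, patch=5, n_iters=3, lam=0.5):
    """Toy sketch: transfer depth from a reference face to a query image
    by jointly matching appearance and depth patches (illustrative only)."""
    h, w = query.shape
    r = patch // 2
    depth = np.zeros_like(ref_depth)

    # Pad so that patches centered on border pixels are well defined.
    q = np.pad(query, r, mode="edge")
    a = np.pad(ref_img, r, mode="edge")
    d = np.pad(ref_depth, r, mode="edge")

    for it in range(n_iters):
        cur = np.pad(depth, r, mode="edge")  # current depth estimate
        for y in range(h):
            for x in range(w):
                qp = q[y:y + patch, x:x + patch]      # query appearance patch
                cp = cur[y:y + patch, x:x + patch]    # current depth patch
                best, best_cost = 0.0, np.inf
                # Search a small window around (y, x): roughly aligned faces
                # should have their correspondences nearby.
                for yy in range(max(0, y - 4), min(h, y + 5)):
                    for xx in range(max(0, x - 4), min(w, x + 5)):
                        ap = a[yy:yy + patch, xx:xx + patch]
                        dp = d[yy:yy + patch, xx:xx + patch]
                        cost = np.sum((qp - ap) ** 2)      # appearance term
                        if it > 0:                         # depth term, once
                            cost += lam * np.sum((cp - dp) ** 2)
                        if cost < best_cost:
                            best_cost, best = cost, ref_depth[yy, xx]
                depth[y, x] = best
    return depth
```

Under these assumptions, a call such as estimate_depth(query_gray, reference_gray, reference_depth) would return a dense depth estimate for the query; the joint cost is what encourages the copied depths to stay mutually consistent rather than following appearance matches alone.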

Keywords

Computer science, Computer vision, Artificial intelligence, Image warping, Robustness, Face, Computer graphics (images)

Publication Info

Year: 2013
Type: article
Pages: 3607-3614
Citations: 153
Access: Closed

Citation Metrics

Citations: 153 (source: OpenAlex)

Cite This

Tal Hassner (2013). Viewing Real-World Faces in 3D. 2013 IEEE International Conference on Computer Vision (ICCV), 3607-3614. https://doi.org/10.1109/iccv.2013.448

Identifiers

DOI: 10.1109/iccv.2013.448