Abstract

Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly encode semantic information and thus allow image content to be separated from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high-level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous well-known artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high-level image synthesis and manipulation.
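The separation the abstract describes rests on two quantities computed from CNN feature maps: a content distance between feature activations, and a style distance between their channel-wise correlations (Gram matrices). The paper defines these on VGG layers; the NumPy sketch below shows only the two loss terms, assuming feature maps already flattened to (channels, positions) arrays, with all variable names illustrative rather than taken from the paper.

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise feature correlations of one layer's feature map.

    features: array of shape (channels, height * width).
    """
    return features @ features.T

def content_loss(f_generated, f_content):
    """Squared-error distance between feature maps (content match)."""
    return 0.5 * np.sum((f_generated - f_content) ** 2)

def style_loss(f_generated, f_style):
    """Squared-error distance between Gram matrices (style match),
    normalised by layer size as in the Gram-based formulation."""
    c, n = f_generated.shape
    diff = gram_matrix(f_generated) - gram_matrix(f_style)
    return np.sum(diff ** 2) / (4.0 * c**2 * n**2)

# Hypothetical feature maps, standing in for VGG activations.
rng = np.random.default_rng(0)
f_photo = rng.standard_normal((8, 16))  # content image features
f_paint = rng.standard_normal((8, 16))  # style image features
f_out = f_photo.copy()                  # initialise output from the photo

print(content_loss(f_out, f_photo))     # → 0.0 (content already matched)
print(style_loss(f_out, f_paint) > 0)   # → True (style still unmatched)
```

In the full algorithm these two terms are computed on different layers, weighted, summed, and minimised by gradient descent on the output image's pixels, which is what lets content and style be recombined.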

Keywords

Computer science, Convolutional neural network, Artificial intelligence, Rendering (computer graphics), Image (mathematics), Computer vision, Pattern recognition (psychology), Artificial neural network

Publication Info

Year
2016
Type
article
Pages
2414-2423
Citations
5772
Access
Closed

Citation Metrics

5772 (OpenAlex)

Cite This

Leon A. Gatys, Alexander S. Ecker, Matthias Bethge (2016). Image Style Transfer Using Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2414-2423. https://doi.org/10.1109/cvpr.2016.265

Identifiers

DOI
10.1109/cvpr.2016.265