Abstract
The real world exhibits an abundance of non-stationary textures. Examples include textures with large-scale structures, as well as spatially variant and inhomogeneous textures. While existing example-based texture synthesis methods can cope well with stationary textures, non-stationary textures still pose a considerable challenge, which remains unresolved. In this paper, we propose a new approach for example-based non-stationary texture synthesis. Our approach uses a generative adversarial network (GAN), trained to double the spatial extent of texture blocks extracted from a specific texture exemplar. Once trained, the fully convolutional generator is able to expand the size of the entire exemplar, as well as of any of its sub-blocks. We demonstrate that this conceptually simple approach is highly effective for capturing large-scale structures, as well as other non-stationary attributes of the input exemplar. As a result, it can cope with challenging textures, which, to our knowledge, no other existing method can handle.
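The core idea above — a fully convolutional generator trained to double the spatial extent of texture blocks — can be sketched as follows. This is a minimal illustration, not the paper's actual architecture (the authors' network and training procedure are more elaborate); the layer choices, feature counts, and class name `ExpandGenerator` here are assumptions for demonstration only.

```python
# Hedged sketch: a fully convolutional generator that maps a k x k texture
# block to a 2k x 2k output, illustrating the "double the spatial extent"
# idea. NOT the authors' architecture; all layer choices are illustrative.
import torch
import torch.nn as nn

class ExpandGenerator(nn.Module):
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            # Transposed convolution doubles the spatial resolution.
            nn.ConvTranspose2d(features, features, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

g = ExpandGenerator()
block = torch.randn(1, 3, 64, 64)   # a 64x64 crop from the exemplar
out = g(block)
print(tuple(out.shape))  # -> (1, 3, 128, 128)
```

Because every layer is convolutional, the same trained generator can be applied to the whole exemplar or to any sub-block, regardless of its size — which is exactly what enables expansion after training.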
CCS Concepts: • Computing methodologies → Appearance and texture representations; Image manipulation; Texturing;
YANG ZHOU, Shenzhen University and Huazhong University of Science and Technology
ZHEN ZHU and XIANG BAI, Huazhong University of Science and Technology
DANI LISCHINSKI, The Hebrew University of Jerusalem
DANIEL COHEN-OR, Shenzhen University and Tel Aviv University
HUI HUANG, Shenzhen University
https://arxiv.org/pdf/1805.04487.pdf
This is one of the most impressive works I have ever seen. Some thoughts about it:
- How about using it for microscopy image synthesis?
- It could definitely be used to create customized products decorated with painted art.
- Something else? See this video.