Teaching robots to paint in a manner similar to a human painter is an important task in computer vision. A recent paper on arXiv.org proposes a novel approach that tackles several limitations of current algorithms.
As in other methods, reinforcement learning is used to predict a sequence of brush strokes from a given image. However, instead of treating the canvas as a single undifferentiated image, the novel method employs a semantic guidance pipeline to learn the distinction between foreground and background brush strokes. In addition, a neural alignment model is used to zoom in on a particular foreground object.
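The foreground/background distinction can be sketched as a mask-based compositing step. This is a minimal illustration only, not the paper's actual differentiable renderer; the function name, grayscale representation, and hard binary blending here are assumptions:

```python
# Hypothetical sketch of the bi-level painting idea: foreground and
# background stroke updates are composited through a binary semantic
# mask, so each stroke type only affects its own region of the canvas.

def paint_bilevel(canvas, fg_update, bg_update, mask):
    """Blend one step of foreground/background strokes into the canvas.

    canvas, fg_update, bg_update: 2D lists of grayscale values in [0, 1].
    mask: 2D list with 1 for foreground pixels, 0 for background.
    """
    h, w = len(canvas), len(canvas[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                out[i][j] = fg_update[i][j]   # foreground stroke region
            else:
                out[i][j] = bg_update[i][j]   # background stroke region
    return out
```

In the paper the split is learned at training time; this sketch only shows why a semantic mask lets the agent specialize its strokes per region.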
Moreover, a focus reward helps the agent concentrate on fine-grained features, such as a bird's eye, and increases the granularity of the strokes. The results show that the proposed approach develops a top-down painting style and produces canvases that more closely resemble human painting.
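The focus reward builds on guided backpropagation, which modifies the ReLU backward pass so that only positive gradients flowing through positive activations survive. The sketch below shows that rule plus a simplified reward that measures how much of the guided-gradient mass lands on the foreground; the reward formula and function names are assumptions for illustration, not the paper's exact definition:

```python
# Guided backpropagation, reduced to its core rule on a 1D activation
# vector, followed by a toy "focus" score over a foreground mask.

def relu_forward(x):
    """Standard ReLU forward pass."""
    return [max(v, 0.0) for v in x]

def guided_relu_backward(grad_out, x):
    """Guided backprop rule: pass a gradient only where BOTH the
    upstream gradient and the forward input are positive."""
    return [g if (g > 0 and v > 0) else 0.0 for g, v in zip(grad_out, x)]

def focus_reward(guided_grads, fg_mask):
    """Fraction of guided-gradient magnitude falling on foreground
    pixels -- a simplification of the paper's focus reward."""
    total = sum(abs(g) for g in guided_grads) or 1.0
    fg = sum(abs(g) for g, m in zip(guided_grads, fg_mask) if m)
    return fg / total
```

Maximizing such a score pushes the agent toward strokes whose salient features (by the guided-gradient criterion) lie on the in-focus object.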
Generation of stroke-based non-photorealistic imagery is an important problem in the computer vision community. As an endeavor in this direction, substantial recent research efforts have been focused on teaching machines "how to paint", in a manner similar to a human painter. However, the applicability of previous methods has been limited to datasets with little variation in position, scale and saliency of the foreground object. As a consequence, we find that these methods struggle to cover the granularity and diversity possessed by real world images. To this end, we propose a Semantic Guidance pipeline with 1) a bi-level painting procedure for learning the distinction between foreground and background brush strokes at training time; 2) a neural alignment model, which combines object localization and spatial transformer networks in an end-to-end manner, introducing invariance to the position and scale of the foreground object by zooming into a particular semantic instance; and 3) a novel guided-backpropagation-based focus reward, whose maximization amplifies the distinguishing features of the in-focus object. The proposed agent does not require any supervision on human stroke-data and successfully handles variations in foreground object attributes, thus producing much higher quality canvases for the CUB-200 Birds and Stanford Cars-196 datasets. Finally, we demonstrate the further efficacy of our method on complex datasets with multiple foreground object instances by evaluating an extension of our method on the challenging Virtual-KITTI dataset.
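The neural alignment model's "zoom" can be pictured as resampling a localized bounding box to a fixed-size canvas. In the paper this is done differentiably with a spatial transformer; the sketch below approximates it with plain nearest-neighbor indexing, and the function name and box convention are assumptions:

```python
# Toy stand-in for the alignment model's zoom step: resample the
# region inside a predicted bounding box to a fixed output size.
# The paper uses a differentiable spatial transformer; this sketch
# uses nearest-neighbor sampling purely for illustration.

def zoom_crop(image, box, out_h, out_w):
    """Resample image[y0:y1, x0:x1] (inclusive corners) to out_h x out_w.

    image: 2D list of pixel values.
    box:   (y0, x0, y1, x1) from an object localizer.
    """
    y0, x0, y1, x1 = box
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Map output coordinates linearly into the box.
            src_y = y0 + (y1 - y0) * i / max(out_h - 1, 1)
            src_x = x0 + (x1 - x0) * j / max(out_w - 1, 1)
            row.append(image[round(src_y)][round(src_x)])
        out.append(row)
    return out
```

Because the crop normalizes the object's position and scale, the painting agent always sees the foreground instance in a canonical frame, regardless of where it sits in the original image.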