Materials In Paintings (MIP): An interdisciplinary dataset for perception, art history, and computer vision

Mitchell J.P. van Zuijlen, Hubert Lin, Kavita Bala, Sylvia C. Pont, Maarten W.A. Wijntjes

A painter is free to modify how components of a natural scene are depicted, which can lead to a perceptually convincing image of the distal world. This signals a major difference between photos and paintings: paintings are explicitly created for human perception. Studying these painterly depictions could be beneficial to a multidisciplinary audience. In this paper, we capture and explore the painterly depictions of materials to enable the study of depiction and perception of materials through the artists' eye. We annotated a dataset of 19k paintings with 200k+ bounding boxes from which polygon segments were automatically extracted. Each bounding box was assigned a coarse label (e.g., fabric) and a fine-grained label (e.g., velvety, silky). We demonstrate the cross-disciplinary utility of our dataset by presenting novel findings across art history, human perception, and computer vision. Our experiments include analyzing the distribution of materials depicted in paintings, showing how painters create convincing depictions using a stylized approach, and demonstrating how paintings can be used to build more robust computer vision models. We conclude that our dataset of painterly material depictions is a rich source for gaining insights into the depiction and perception of materials across multiple disciplines. The MIP dataset is freely accessible at https://materialsinpaintings.tudelft.nl.
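To illustrate how such annotations might be used, here is a minimal loading sketch. It assumes a hypothetical JSON layout in which each record carries a painting id, a bounding box, a coarse label (e.g., "fabric"), a fine-grained label (e.g., "velvety"), and a polygon segment; the actual distribution format is documented on the dataset website.

import json
from collections import Counter

def load_annotations(path):
    """Read MIP-style annotation records from a JSON file (hypothetical layout)."""
    with open(path) as f:
        return json.load(f)

def coarse_label_distribution(records):
    """Count how often each coarse material label (e.g., 'fabric') occurs."""
    return Counter(r["coarse_label"] for r in records)

if __name__ == "__main__":
    records = load_annotations("mip_annotations.json")  # hypothetical filename
    for label, count in coarse_label_distribution(records).most_common(10):
        print(f"{label}: {count}")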

Bibtex

@misc{vanzuijlen2020materials,
	title={Materials In Paintings (MIP): An interdisciplinary dataset for perception, art history, and computer vision}, 
	author={Mitchell J. P. van Zuijlen and Hubert Lin and Kavita Bala and Sylvia C. Pont and Maarten W. A. Wijntjes},
	year={2020},
	eprint={2012.02996},
	archivePrefix={arXiv},
	primaryClass={cs.HC}
}

Also check out our conference papers!

Insights From A Large-Scale Database of Material Depictions In Paintings

Hubert Lin, Mitchell J.P. van Zuijlen, Kavita Bala, Sylvia C. Pont, Maarten W.A. Wijntjes

Deep learning has paved the way for strong recognition systems which are often both trained on and applied to natural images. In this paper, we examine the give-and-take relationship between such visual recognition systems and the rich information available in the fine arts. First, we find that visual recognition systems designed for natural images can work surprisingly well on paintings. In particular, we find that interactive segmentation tools can be used to cleanly annotate polygonal segments within paintings, a task which is time-consuming to undertake by hand. We also find that Faster R-CNN, a model which has been designed for object recognition in natural scenes, can be quickly repurposed for detection of materials in paintings. Second, we show that learning from paintings can be beneficial for neural networks that are intended to be used on natural images. We find that training on paintings instead of natural images can improve the quality of learned features and we further find that a large number of paintings can be a valuable source of test data for evaluating domain adaptation algorithms. Our experiments are based on a novel large-scale annotated database of material depictions in paintings which we detail in a separate manuscript.
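As a rough illustration of the repurposing step described above, the sketch below fine-tunes a torchvision Faster R-CNN detector for material classes. The number of classes and the training details are assumptions chosen for illustration, not the authors' exact setup.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_MATERIAL_CLASSES = 16  # hypothetical: 15 coarse material categories + background

# Start from a detector pretrained on natural images (COCO); torchvision >= 0.13.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box predictor so the detector predicts material classes instead of COCO objects.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_MATERIAL_CLASSES)

# Fine-tune on paintings with material bounding-box annotations.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)
model.train()
# for images, targets in painting_loader:  # hypothetical DataLoader over annotated paintings
#     loss = sum(model(images, targets).values())  # detector returns a dict of losses in train mode
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()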


What Can Style Transfer and Paintings Do For Model Robustness?

Hubert Lin, Mitchell J.P. van Zuijlen, Kavita Bala, Sylvia C. Pont, Maarten W.A. Wijntjes

A common strategy for improving model robustness is through data augmentations. Data augmentations encourage models to learn desired invariances, such as invariance to horizontal flipping or small changes in color. Recent work has shown that arbitrary style transfer can be used as a form of data augmentation to encourage invariance to textures by creating painting-like images from photographs. However, a stylized photograph is not quite the same as an artist-created painting. Artists depict perceptually meaningful cues in paintings so that humans can recognize salient components in scenes, an emphasis which is not enforced in style transfer. Therefore, we study how style transfer and paintings differ in their impact on model robustness. First, we investigate the role of paintings as style images for stylization-based data augmentation. We find that style transfer functions well even without paintings as style images. Second, we show that learning from paintings as a form of perceptual data augmentation can improve model robustness. Finally, we investigate the invariances learned from stylization and from paintings, and show that models learn different invariances from these differing forms of data. Our results provide insights into how stylization improves model robustness, and provide evidence that artist-created paintings can be a valuable source of data for model robustness.
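To make the comparison concrete, here is a minimal sketch of stylization-based data augmentation: with some probability, a training photo is replaced by a stylized version of itself. The stylize function is a hypothetical stand-in for an arbitrary style-transfer network and is not part of either paper's code.

import random
from PIL import Image

def stylize(content: Image.Image, style: Image.Image) -> Image.Image:
    """Hypothetical placeholder for an arbitrary style-transfer network (e.g., AdaIN)."""
    raise NotImplementedError("plug a style-transfer model in here")

def augment(photo: Image.Image, style_pool, p: float = 0.5) -> Image.Image:
    """Return either the original photo or a stylized version, chosen at random."""
    if style_pool and random.random() < p:
        return stylize(photo, random.choice(style_pool))
    return photo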


Acknowledgements

We thank the AMT participants in our user studies for their work and feedback. We further wish to thank Yuguang Zhao for his help with the design of this website. Mitchell van Zuijlen, Maarten Wijntjes, and Sylvia Pont were financed by the Netherlands Organization for Scientific Research (NWO) through the VIDI project "Visual communication of material properties", number 276.54.001. Hubert Lin and Kavita Bala acknowledge support from the NSF (CHS-1617861 and CHS-1513967) and NSERC (PGS-D).
