Visual Material Traits: Recognizing Per-Pixel Material Context
Gabriel Schwartz and Ko Nishino
Information describing the materials that make up scene constituents provides invaluable context that can lead to a better understanding of images. We would like to obtain such material information at every pixel, in arbitrary images, regardless of the objects involved. In this paper, we introduce visual material traits to achieve this. Material traits, such as “shiny” or “woven,” encode the appearance of characteristic material properties. We learn convolution kernels in an unsupervised setting to recognize complex material trait appearances at each pixel. Unlike previous methods, our framework explicitly avoids influence from object-specific information. We may, therefore, accurately recognize material traits regardless of the object exhibiting them. Our results show that material traits are discriminative and can be accurately recognized. We demonstrate the use of material traits in material recognition and image segmentation. To our knowledge, this is the first method to extract and use such per-pixel material information.
Visual Material Traits: Recognizing Per-Pixel Material Context
G. Schwartz and K. Nishino,
in Proc. of Color and Photometry in Computer Vision (Workshop held in conjunction with ICCV’13), Dec., 2013.
Overview
We annotated images in the Flickr Materials Database (FMD) with masks indicating regions that exhibit each material trait. From these regions, we extract 45,500 annotated patches. We use balanced sets of positive and negative examples to train randomized decision forest (RDF) classifiers for each material trait. Though we use the same dataset as methods that include object information, our feature set and recognition process explicitly avoid object dependence.
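The per-trait training procedure can be sketched as follows. This is a minimal illustration, not the authors' code: patch feature extraction is assumed to have already happened, and the function names (`train_trait_classifiers`) and scikit-learn forest settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def train_trait_classifiers(features, labels, n_trees=100, seed=0):
    """Train one binary random-forest classifier per material trait.

    features: (n_patches, n_dims) array of patch descriptors
    labels:   dict mapping trait name -> (n_patches,) boolean array
              marking positive examples of that trait
    """
    rng = np.random.default_rng(seed)
    classifiers = {}
    for trait, is_pos in labels.items():
        pos = np.flatnonzero(is_pos)
        neg = np.flatnonzero(~is_pos)
        # Balance positives and negatives by subsampling the larger class,
        # as described in the overview above.
        n = min(len(pos), len(neg))
        idx = np.concatenate([rng.choice(pos, n, replace=False),
                              rng.choice(neg, n, replace=False)])
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        clf.fit(features[idx], is_pos[idx].astype(int))
        classifiers[trait] = clf
    return classifiers
```

Each trait gets its own binary classifier, so a patch can exhibit several traits at once (e.g. both "shiny" and "smooth").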
Visual material trait recognition accuracy. Material traits are recognized via binary classification on a balanced training and testing set, thus random chance accuracy is 50%. Most traits are recognized well. Difficult material traits, such as metallic and transparent, are challenging due to their object- and environment-dependent appearances. Average accuracy is 78.4%.
Material trait distributions. We compute the class-conditional distributions for each material trait given each material category. These are stored as histograms, examples of which are shown above. Plastic is most often smooth, while stone is extremely rarely smooth. We train histogram intersection kernel SVMs to recognize material categories from visual material traits.
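A histogram intersection kernel SVM can be set up with scikit-learn's precomputed-kernel interface. The sketch below assumes per-region trait histograms have already been computed; the helper names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC


def intersection_kernel(A, B):
    """Histogram intersection kernel: K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)


def train_material_svm(train_hists, train_labels):
    """train_hists: (n, n_bins) trait histograms (rows sum to 1)."""
    svm = SVC(kernel="precomputed")
    svm.fit(intersection_kernel(train_hists, train_hists), train_labels)
    return svm


def predict_material(svm, train_hists, test_hists):
    # With a precomputed kernel, prediction needs the kernel between
    # test samples and the original training samples.
    return svm.predict(intersection_kernel(test_hists, train_hists))
```

The intersection kernel rewards histograms that place mass in the same bins, which suits trait distributions: two regions are similar when they share the same dominant traits.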
The per-pixel recognition of visual material traits converts an ordinary RGB image into a thirteen-channel material property image. This visual material trait image itself provides rich semantic information about the scene in the image. For instance, the monkey can easily be detected from the background using the visual material traits, which is otherwise difficult in RGB.
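Forming the trait image amounts to applying each trained trait classifier at every pixel, one probability channel per trait. In this minimal sketch the per-pixel descriptors are taken as given (the paper derives them from learned convolution kernels, which is not shown here):

```python
import numpy as np


def trait_image(pixel_features, classifiers, trait_order):
    """Build a per-pixel material trait image.

    pixel_features: (H, W, D) per-pixel descriptors
    classifiers:    dict trait -> fitted classifier with predict_proba
    trait_order:    list of trait names fixing the channel order
    Returns an (H, W, n_traits) image of trait probabilities.
    """
    H, W, D = pixel_features.shape
    flat = pixel_features.reshape(-1, D)
    # One channel per trait: probability of the positive (trait-present) class.
    channels = [classifiers[t].predict_proba(flat)[:, 1] for t in trait_order]
    return np.stack(channels, axis=-1).reshape(H, W, len(trait_order))
```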
Comparing segmentation with and without material traits. Images on the left were segmented using the original NCuts algorithm, while those on the right were segmented with our modified version. Material traits can indicate the difference between fuzzy grass in the foreground and rocks in the background, despite the fact that they have similar colors.
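One way material traits can enter a normalized-cuts segmentation is through the pairwise affinity: combine color similarity with trait-vector similarity so that similarly colored but materially different regions (fuzzy grass vs. rocks) receive low affinity. The sketch below illustrates this idea only; the paper's actual modification to NCuts may differ, and the bandwidth parameters are arbitrary.

```python
import numpy as np


def combined_affinity(colors, traits, sigma_c=0.1, sigma_t=0.1):
    """Pairwise affinity combining color and material trait similarity.

    colors: (n, 3) per-pixel colors, traits: (n, T) per-pixel trait vectors.
    Returns an (n, n) symmetric affinity matrix for spectral segmentation.
    """
    dc = ((colors[:, None] - colors[None]) ** 2).sum(-1)  # squared color dist
    dt = ((traits[:, None] - traits[None]) ** 2).sum(-1)  # squared trait dist
    # Product of Gaussian affinities: both cues must agree for high affinity.
    return np.exp(-dc / (2 * sigma_c ** 2)) * np.exp(-dt / (2 * sigma_t ** 2))
```

With the plain color term alone, grass and rock pixels of similar hue are nearly indistinguishable; the trait term separates them even when colors match.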