Single Image Multimaterial Estimation

Stephen Lombardi and Ko Nishino
Drexel University

multimat header image
Estimating the reflectance and illumination from a single image becomes particularly challenging when the object surface consists of multiple materials. The key difficulty lies in recovering the reflectance from sparse angular samples while correctly assigning them to different materials. We tackle this problem by extracting and fully leveraging reflectance priors. The idea is to strongly constrain the possible solutions so that the recovered reflectance conforms to those of real-world materials. We achieve this by modeling the parameter space of a directional statistics BRDF model and by extracting an analytical distribution of the subspace that real-world materials span. This is used, with other priors, in a layered MRF-based formulation that models material regions and their spatially varying reflectance with continuous latent layers. The material regions and their reflectance, and the direction and strength of a single point source, are jointly estimated. We demonstrate the effectiveness of the method on real and synthetic images.
  • Single Image Multimaterial Estimation
    S. Lombardi and K. Nishino, in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR'12), pp. 238-245, Jun. 2012.
    [ paper ][ project ]

multimat brdfspace image
We derive a statistical prior on the parameter values of the isotropic Directional Statistics BRDF (DSBRDF) model that captures the gamut of real-world materials. This isotropic DSBRDF prior provides a concise yet accurate representation of what a realistic material reflectance should be by limiting the variation across color channels and reflectance lobes based on functional bases extracted from measured data of real-world materials. The figure shows the distribution of 100 real-world isotropic BRDFs in the subspace spanned by the first three basis functions of DSBRDF parameters. The distribution is roughly elliptical and can be modeled with a multivariate Gaussian distribution.
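The construction above can be sketched as follows: project measured BRDF parameter vectors onto a low-dimensional basis extracted from the data, then fit a multivariate Gaussian to the projected coefficients. This is a minimal illustration, not the authors' implementation; the function names, parameter dimensions, and the use of PCA for the basis extraction are assumptions for the sketch.

```python
import numpy as np

def fit_brdf_prior(params, n_bases=3):
    """Fit a Gaussian prior in a learned subspace.

    params: (n_materials, n_params) matrix of DSBRDF parameter vectors,
    one row per measured material (dimensions are illustrative).
    """
    mean = params.mean(axis=0)
    centered = params - mean
    # Basis functions extracted from the measured data via SVD (PCA).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_bases]                  # (n_bases, n_params), orthonormal rows
    coeffs = centered @ basis.T           # projected coefficients
    mu = coeffs.mean(axis=0)
    cov = np.cov(coeffs, rowvar=False)
    return mean, basis, mu, cov

def log_prior(theta, mean, basis, mu, cov):
    """Gaussian log-density (up to a constant) of a DSBRDF parameter vector."""
    d = basis @ (theta - mean) - mu
    return -0.5 * d @ np.linalg.solve(cov, d)

# Synthetic stand-in for the 100 measured BRDFs (real data would go here).
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 12))
mean, basis, mu, cov = fit_brdf_prior(data)
```

A parameter vector near the center of the measured distribution scores a higher log-prior than one far outside it, which is what lets the prior steer the estimation toward realistic materials.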
multimat mrf image
We derive a layered Markov random field formulation of multimaterial estimation to fully leverage these reflectance priors. Each material is represented with an MRF in this formulation. The spatial extent of each material is modeled with a continuous latent layer that encodes soft assignments of pixels to that material. The reflectance of each material is then modeled using a set of DSBRDF parameter values. This formulation nicely captures the spatial segmentation of the multiple materials while allowing us to place constraints on the solution. We jointly estimate the material segmentation and each material's reflectance, together with the strength and direction of a single point source.
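The structure of such a layered energy can be sketched as below: continuous latent layers give per-pixel soft assignments, and the objective combines a data term mixing per-material rendering residuals, a smoothness term on the layers, and the reflectance prior. This is a hypothetical minimal sketch, not the paper's actual energy; `renders` stands in for the forward rendering with each material's estimated reflectance and light source, and the weights `lam` and `gamma` are placeholders.

```python
import numpy as np

def softmax_layers(logits):
    """Map unconstrained per-pixel logits (K, H, W) to soft assignments
    that are positive and sum to one over the K materials at each pixel."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def energy(image, logits, renders, log_prior_vals, lam=1.0, gamma=0.1):
    """image: (H, W) observed intensities; renders: (K, H, W) predicted
    appearance per material; log_prior_vals: (K,) reflectance log-prior
    evaluated at each material's DSBRDF parameters."""
    alpha = softmax_layers(logits)
    # Data term: soft mixture of per-material rendering residuals.
    data = np.sum(alpha * (image[None] - renders) ** 2)
    # Smoothness on the latent layers (squared neighbor differences).
    smooth = (np.sum(np.diff(alpha, axis=1) ** 2)
              + np.sum(np.diff(alpha, axis=2) ** 2))
    # Reflectance prior keeps each material's parameters realistic.
    prior = -np.sum(log_prior_vals)
    return data + lam * smooth + gamma * prior
```

Minimizing this energy over the layer logits and the per-material reflectance (and light source) parameters corresponds to the joint estimation described above; the soft assignments let pixel ownership stay continuous during optimization rather than committing to a hard segmentation.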


multimat sphereresult image
Single material estimation results for the colonial-maple-223 material. Top row: the input image (left) and ground truth renderings under varying incident light source directions (right). Bottom row: a synthesized image using the estimated reflectance (left) and synthesized images under varying incident light source directions (right). The results show that the method successfully extrapolates the reflectance from its limited angular samples with good accuracy.
multimat realresults image
Multimaterial estimation results for three real scenes. Each row, from left to right, shows the input image, a synthesized image of the scene using estimated reflectance and light source, the material segmentation result, a relit image of the object, and a ground truth image of the relighting result for each scene. The results demonstrate that the method successfully recovers the reflectance for complex scenes, for instance the gold paint on the mask, except when interreflections and shadowing are prevalent in the scene.
multimat two image
Comparison of using a single input image versus two input images. The left column shows the input images, the middle column shows the results when using a single input image, and the right column shows the results when using the two input images. We examine the results by comparing the predicted images in the top row to the input image in the top-left. Our method can leverage the additional observations to estimate reflectance and segmentation more accurately.