Vagia Tsiminaki, Wei Dong, Martin R. Oswald, and Marc Pollefeys

BMVC 2019 (Spotlight)


We aim to recover a high-resolution texture representation of objects observed from multiple viewpoints under varying lighting conditions.

For many applications the lighting conditions need to be changed, which requires decomposing the texture into shading and albedo components. Both texture super-resolution and intrinsic texture decomposition have been studied separately in the literature, yet no method has investigated how the two can be combined. We propose a framework for joint texture map super-resolution and intrinsic decomposition. To this end, we define the shading and albedo maps of the 3D object as the intrinsic properties of its texture and introduce an image formation model that describes the physics of the image generation.

Our approach accounts for surface geometry and camera calibration errors and is also applicable to spatio-temporal sequences. Our method achieves state-of-the-art results on a variety of datasets.
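The idea of an image formation model that composes intrinsic texture components and then loses resolution can be sketched minimally. The snippet below is an illustrative simplification, not the paper's actual model: it assumes a Lambertian albedo-times-shading composition and uses simple average pooling as a stand-in for the camera's blur and decimation; all function names are hypothetical.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool by `factor` -- a crude stand-in for sensor blur/decimation."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def render_observation(albedo, shading, factor=4):
    """Toy image formation: the high-resolution texture is the pixel-wise
    product of albedo and shading (Lambertian assumption), observed at a
    lower resolution."""
    texture = albedo * shading           # intrinsic composition
    return downsample(texture, factor)   # resolution loss in the observation

# Synthetic high-resolution intrinsic maps (grayscale for simplicity).
rng = np.random.default_rng(0)
albedo = rng.uniform(0.2, 0.8, (64, 64))   # material reflectance
shading = rng.uniform(0.5, 1.0, (64, 64))  # illumination effects
obs = render_observation(albedo, shading)  # a 16x16 low-resolution observation
```

Inverting this forward model from multiple such observations, rather than estimating super-resolution and decomposition in two separate stages, is the essence of the joint formulation.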


@inproceedings{tsiminaki2019joint,
  title={Joint Multi-view Texture Super-resolution and Intrinsic Decomposition},
  author={Tsiminaki, Vagia and Dong, Wei and Oswald, Martin R. and Pollefeys, Marc},
  booktitle={British Machine Vision Conference (BMVC)},
  year={2019}
}