Reflectance Fields

CG2010


The surface reflectance properties of an opaque material can be modelled with a 4D BRDF associated with an infinitesimal point on the object surface. A geometric model can represent a complex object, but the surface material is most often not homogeneous across the surface, which requires additional work to model the BRDF at each surface point.


Reflectance fields were developed as a way of capturing and encoding “real world” textures that can react to several light sources and their positions [1][2][3][4][5].


Reflectance Transformation Imaging (RTI) [5] is one of many such techniques; like the others, it acquires the images from a single point of view with varying light positions. However, RTI uses a polynomial basis (a bi-quadratic polynomial or Spherical Harmonics) to parameterise the surface reflectance at each pixel as a function of the 2D light parameters. Since the camera is fixed, each pixel index accounts for the remaining 2D spatial parameters. RTI stores the reflectance function in a specific format called a Polynomial Texture Map (PTM). The PTM technique uses a simple bi-quadratic polynomial to encode the reflectance function: I(x,y,lx,ly) = a0*lx^2 + a1*ly^2 + a2*lx*ly + a3*lx + a4*ly + a5, where x and y are the indices in image space and lx and ly are the projections of the light vector onto the surface tangent plane.
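The bi-quadratic PTM evaluation above is straightforward to express in code. The sketch below (the function name and array layout are illustrative assumptions, not part of the PTM specification) evaluates the polynomial for per-pixel coefficient arrays:

```python
import numpy as np

def ptm_eval(coeffs, lx, ly):
    """Evaluate the bi-quadratic PTM polynomial for one light direction.

    coeffs: array of shape (..., 6) holding (a0..a5), e.g. one set of six
            coefficients per pixel of the image.
    lx, ly: projections of the unit light vector onto the tangent plane.
    """
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    # I(x, y, lx, ly) = a0*lx^2 + a1*ly^2 + a2*lx*ly + a3*lx + a4*ly + a5
    return a0 * lx**2 + a1 * ly**2 + a2 * lx * ly + a3 * lx + a4 * ly + a5
```

Because the coefficients are stored per pixel, relighting the whole image is a single vectorised evaluation of this expression.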


After generating or capturing all images (each with a different light position and the same camera position), the samples for a specific pixel, together with their light directions, are gathered from all the images and fitted to the polynomial basis (PTM, SH or HSH). The resulting coefficients are stored for later use during the rendering stage. Rendering the surface reflectance then reduces to evaluating the polynomial at each pixel for the desired light direction.
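The per-pixel fitting step described above is a linear least-squares problem: each captured image contributes one equation in the six unknown coefficients. A minimal sketch, assuming NumPy and a hypothetical `fit_ptm` helper working on a single pixel:

```python
import numpy as np

def fit_ptm(samples, lights):
    """Least-squares fit of the six PTM coefficients for a single pixel.

    samples: (N,) intensities of the pixel across the N captured images.
    lights:  (N, 2) tangent-plane light projections (lx, ly), one per image.
    Returns the coefficients (a0..a5).
    """
    lx, ly = lights[:, 0], lights[:, 1]
    # Design matrix: one row per image, columns matching the basis terms
    # lx^2, ly^2, lx*ly, lx, ly, 1.
    A = np.stack([lx**2, ly**2, lx * ly, lx, ly, np.ones_like(lx)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coeffs
```

In a full pipeline this fit is repeated (or batched) for every pixel, and the resulting coefficient maps are what a PTM file stores; at least six well-spread light directions are needed for the system to be determined.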


A detailed survey on the acquisition and rendering of Reflectance Fields, Textures and BRDFs can be found in [6].

References

[1] Ashikhmin, M. and Shirley, P. 2002. Steerable illumination textures. ACM Trans. Graph. 21, 1 (Jan. 2002), 1-19. DOI= http://doi.acm.org/10.1145/504789.504790


[2] Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar. Acquiring the reflectance field of a human face. In SIGGRAPH ’00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 145–156, New York, NY, USA, 2000. ACM Press/Addison-Wesley Publishing Co.


[3] Paul Debevec, Andreas Wenger, Chris Tchou, Andrew Gardner, Jamie Waese, and Tim Hawkins. A lighting reproduction approach to live-action compositing. In SIGGRAPH ’02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pages 547–556, New York, NY, USA, 2002. ACM.


[4] Daniel N. Wood, Daniel I. Azuma, Ken Aldinger, Brian Curless, Tom Duchamp, David H. Salesin, and Werner Stuetzle. Surface light fields for 3d photography. In ., pages 287–296, 2000.


[5] Malzbender, T., Gelb, D., and Wolters, H. 2001. Polynomial texture maps. In Proceedings of the 28th Annual Conference on Computer Graphics and interactive Techniques SIGGRAPH '01. ACM, New York, NY, 519-528. DOI= http://doi.acm.org/10.1145/383259.383320


[6] Gero Müller, Jan Meseth, Mirko Sattler, Ralf Sarlette, and Reinhard Klein. Acquisition, synthesis and rendering of bidirectional texture functions. In Christophe Schlick and Werner Purgathofer, editors, Eurographics 2004, State of the Art Reports, pages 69–94. INRIA and Eurographics Association, September 2004.