CS384G Final Project

Photon Mapping

Ming Yao

myao@cs.utexas.edu

1. Why photon mapping?

Traditional ray tracing systems trace rays from an eye (or a camera) rather than from the light sources, and one light ray from a hit point to each light source is used to determine the shadow color of that point. This is exactly what we did in our second project, the ray tracer. As a result, traditional ray tracing techniques are not able to correctly generate shadows (and hence caustics) of refractive objects, produce indirect illumination from reflective objects, or implement diffuse interreflection (e.g., no color bleeding). Although soft shadows are possible, one must use area lights, which can significantly increase processing time. On the other hand, radiosity provides soft shadows, color bleeding, and indirect illumination for free; however, it does not handle specular reflection, has difficulty processing transparency (i.e., reflection and refraction), requires the scene to be subdivided into polygons, and is very time consuming. A second pass (e.g., ray tracing) is needed to produce reflection and refraction.

Instead of complicating the radiosity implementation with a post ray tracing pass, an easier way would be to collect illumination information of the scene by a pre-trace from the light sources. This is the basic idea of photon mapping. The major advantages of photon mapping are (1) using photons to simulate the transport of individual photon energy, (2) being able to calculate global illumination effects, (3) being capable of handling arbitrary geometry rather than only polygonal scenes, (4) low memory consumption, and (5) producing correct rendering results, even though noise could be introduced.

2. What Is Photon Mapping?

The basic idea of photon mapping is very simple [1]. It tries to decouple the representation of a scene from its geometry and stores illumination information in a global data structure, the photon map. Photon mapping is a two-pass method. The first pass builds the photon map by tracing photons from each light source, and the second pass renders the scene using the information stored in the photon map.

2.1 Pass 1: Light Emission and Photon Scattering

The first pass of photon mapping consists of two steps: light emission and photon scattering. In the first step, photons are generated and shot into the scene. A light source with higher intensity will produce more photons, and the direction of each photon is randomly selected based on the type of light source (e.g., spherical, rectangular, or directional). The processing of these photons is similar to ray tracing with one difference: photons propagate flux while rays gather radiance. When a photon hits an object, it can be reflected, transmitted, or absorbed. If a photon hits a specular object (e.g., a mirror), it is reflected with its intensity scaled by the reflection coefficient of that object. On the other hand, if a photon hits a diffuse surface, it is stored in the photon map and reflected. The direction of this “diffusely” reflected photon is a randomly chosen vector above the intersection point, with a probability proportional to the cosine of the angle with the normal. This can be implemented by playing “Russian roulette”.
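The cosine-weighted choice of the diffusely reflected direction can be sketched as follows. This is one standard way to do the sampling (projecting a uniform disk sample onto the hemisphere, often called Malley's method); the function names are illustrative, not from the project code.

```python
import math
import random

def orthonormal_basis(n):
    """Build two tangent vectors t, b so that (t, b, n) is an orthonormal basis."""
    # Pick a helper axis that is not parallel to the unit normal n.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    # t = normalize(cross(a, n))
    tx = a[1] * n[2] - a[2] * n[1]
    ty = a[2] * n[0] - a[0] * n[2]
    tz = a[0] * n[1] - a[1] * n[0]
    inv = 1.0 / math.sqrt(tx * tx + ty * ty + tz * tz)
    t = (tx * inv, ty * inv, tz * inv)
    # b = cross(n, t)
    b = (n[1] * t[2] - n[2] * t[1],
         n[2] * t[0] - n[0] * t[2],
         n[0] * t[1] - n[1] * t[0])
    return t, b

def cosine_weighted_direction(n):
    """Sample a unit direction above the surface with probability density
    proportional to the cosine of the angle to the unit normal n."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                      # uniform sample on the unit disk...
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))      # ...projected up onto the hemisphere
    t, b = orthonormal_basis(n)
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))
```

The returned vector is unit length and always lies on the normal's side of the surface.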

The “Russian roulette” technique removes unimportant photons while ensuring that the total transported power stays the same.

In the specular case, a random number 0 ≤ q ≤ 1 is generated, and the photon is specularly reflected if q is less than the specular reflectance of the surface; otherwise it is absorbed.

In the diffuse case, a random number 0 ≤ q ≤ 1 is generated, and the photon is diffusely reflected if q is less than the diffuse reflectance of the surface; otherwise it is absorbed.
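The two roulette decisions above can be combined into a single draw, as in Jensen's formulation. This is a minimal sketch, assuming the specular and diffuse reflectances are per-material constants whose sum is at most one; the function name is illustrative.

```python
import random

def russian_roulette(specular_reflectance, diffuse_reflectance):
    """Decide the fate of a photon hit by Russian roulette.

    Reflected photons keep their full power, so on average the
    transported power is preserved without every photon path
    having to be traced forever.
    """
    q = random.random()  # uniform in [0, 1)
    if q < specular_reflectance:
        return 'specular'     # reflect in the mirror direction
    if q < specular_reflectance + diffuse_reflectance:
        return 'diffuse'      # store in the photon map, then reflect diffusely
    return 'absorb'           # terminate the photon path
```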

The photon hit information is stored in a photon map, which is a balanced kd-tree. Each node of the kd-tree stores the information of one photon hit, which includes the coordinates of the hit point (x, y, z) (usually used as the key for building the tree), the color intensity (r, g, b), the incident direction of the photon, and other important information.
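A balanced kd-tree over the hit points can be built by recursively splitting at the median along cycling axes. The sketch below uses a plain dictionary per node for clarity; a production photon map would use a flat array and a more compact photon record.

```python
from collections import namedtuple

# One photon hit: position is the kd-tree key; power and incident
# direction are needed later for the radiance estimate.
Photon = namedtuple('Photon', 'position power direction')

def build_kdtree(photons, depth=0):
    """Build a balanced kd-tree by median split, cycling the axis x, y, z."""
    if not photons:
        return None
    axis = depth % 3
    photons = sorted(photons, key=lambda p: p.position[axis])
    mid = len(photons) // 2
    return {
        'photon': photons[mid],               # median photon at this node
        'axis': axis,                         # splitting axis at this depth
        'left': build_kdtree(photons[:mid], depth + 1),
        'right': build_kdtree(photons[mid + 1:], depth + 1),
    }
```

Because every level splits at the median, the tree depth is O(log n), which keeps the nearest-neighbor queries in the second pass fast.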

2.2 Pass 2: Radiance Estimate and Rendering

The second pass renders the scene with the help of the photon map built in the first pass. A traditional ray tracing procedure is performed by shooting rays from the camera. When a ray hits a point P on a surface, the illumination information (i.e., flux) of the neighboring photons collected in the first pass and stored in the photon map is added to the radiance information collected from ray tracing at P. Let the normal vector at P be N and r > 0 be a predefined small value. Consider all photons in the sphere S(P, r) of center P and radius r (Figure 1).

Figure 1. Radiance estimate

Note that not every photon in S(P, r) contributes to the radiance at P. In fact, a photon with incident direction d can contribute only if d · N > 0, because if d · N ≤ 0, its direction points into the surface. If a photon does not contribute, it is ignored in the radiance estimate for point P. From the illumination equation, the radiance contribution of a photon with incident direction d is:

intensity × (d · N) × diffuse-factor

Let the sum of all radiance contributions be s. The radiance estimate at P is s/(πr²), where πr² is the area of a great circle of the sphere S. Therefore, the color at P is the sum of this radiance estimate and the radiance calculated from ray tracing. This sum may be larger than one, so normalization is needed. Additionally, if the number of photons that can contribute to the radiance estimate is too small, they are all ignored, because a radiance estimate computed from very few photons may produce blurred images.
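The radiance estimate described above can be sketched as follows. This is a simplified scalar version assuming the candidate photons have already been gathered (e.g., by a range query on the kd-tree) as (position, power, incident direction) triples; the function and parameter names are illustrative.

```python
import math

def radiance_estimate(photons, P, N, r, diffuse_factor):
    """Estimate the reflected radiance at point P with unit normal N
    from photon hits near P.

    Photons outside the sphere S(P, r) or arriving from below the
    surface (d . N <= 0) are ignored; the summed contribution is
    divided by the area pi * r^2 of a great circle of S.
    """
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    s = 0.0
    for position, power, d in photons:
        offset = sub(position, P)
        if dot(offset, offset) > r * r:
            continue                      # outside the gather sphere
        cos_theta = dot(d, N)
        if cos_theta <= 0.0:
            continue                      # direction points into the surface
        s += power * cos_theta * diffuse_factor
    return s / (math.pi * r * r)
```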

3. Results

I created a Cornell box scene file to initially test the photon mapping program. The Cornell box is nice in that it easily allowed me to compare my image to existing photon mapping images in Jensen's book. The left picture below shows how the scene should look when rendered with a photon mapping algorithm. There is nice color bleeding on the left side of the tall block and the right side of the small block. Although these surfaces are not directly illuminated by the light source, with photon mapping they can still be colored by diffuse reflection from the walls. The right picture below is rendered with a traditional ray tracer. I added a square light at the ceiling, so it generates soft shadows of the blocks. However, the picture does not show the light itself because I did not handle the situation where the primary ray goes directly into the light source.

My ray tracer should generate a picture very similar to the one on the left above, but at this point it does not work very well. We can see many bright spots in the pictures, as well as some unlit areas. It is hard to say whether the photons are unevenly distributed onto the surfaces of the objects or the radiance estimate is not done correctly. However, this picture does show some interesting effects very close to color bleeding. There is still a lot of work to do to make things correct.

source code