Project One

Project 1: Ray Tracing

Assigned: Monday, Jan 29, 2018
Due: Monday, Feb 19, 2018 (by 11:59PM)


Project Description

You will build a program that generates ray-traced images of complex scenes using the Whitted illumination model.

Getting Started

Start with the starter code for the lab Linux machines. It can read both .bmp and .png files for texture maps and can write out the images you produce as .bmps. Scene models in the ray tracer's input format are available here. An example project build is here. For cube mapping, here are some cubemaps collected from the web.

To install the starter code on the lab machines, untar the file and follow the instructions in the README.md file in the ray directory. The executable resulting from a successful build generates extremely simple ray-traced images in which pixels are shaded only by the diffuse term of the material at the ray intersection, and only for non-polygon-mesh objects. For polygon meshes, it does not even compute intersections.

This code should run on other platforms such as Macs and Windows, and it has been tested on many of these configurations. If you wish to do development on your own machine, feel free to do so. But we will do all of our testing and grading on the lab machines, so you must allow yourself time to port and test your code on those machines. In general, that should be easy, but not instantaneous. Also, we do not provide build support for non-lab machines; there are just too many environments for us to keep up with. So if you choose to go this route, you're on your own for making the code build and work. We'll try to answer questions if we can, but no promises.

This project is a very large collection of files and object-oriented C++ code. Fortunately, you only need to work directly with a subset of it. However, you will probably want to spend a bit of time getting familiar with the layout and class hierarchy at first, so that when you code you know what classes and methods are available for your use.

To get started, look at the comments about where to put your code in RayTracer.cpp, trimesh.cpp, light.cpp, and material.cpp. Ray tracing begins in RayTracer.cpp, and that is also where you will need to add much of the functionality; it is a good file to start studying to see what methods get called and what they do. In addition, the ray tracer features a debugging window that lets you watch individual rays bouncing around the scene. This window provides a lot of visual feedback that can be enormously useful when debugging your application. Here is a more detailed explanation of how to use the debugging window.

The starter code can run in both text mode and GUI mode. Running without any arguments will execute the program in GUI mode. For usage, see 'ray --help'.

Required Functionality

We'll describe these requirements in more detail later:
  1. The starter code has no triangle-ray intersection implementation. Fill in the triangle intersection code so that your ray tracer can display triangle meshes (a sketch of one common approach appears after this list).
  2. Implement the Whitted illumination model, which includes Phong shading (emissive, ambient, diffuse, and specular terms) as well as reflection and refraction terms. You only need to handle directional and point light sources, i.e. no area lights, but you should be able to handle multiple lights.
  3. Implement Phong interpolation of normals on triangle meshes.
  4. Implement anti-aliasing. Regular super-sampling is acceptable; more advanced anti-aliasing will be considered as an extension.
  5. Implement cube mapping as a user-selectable option. The default for any scene is no cube mapping.
  6. Implement a spatial data structure, either a k-d tree or a bounding volume hierarchy, to speed up the intersection computations in large scenes.
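
For the triangle intersection in item 1, one widely used method is the Moller-Trumbore algorithm. The sketch below is only illustrative: the Vec3 type, the function signature, and the epsilon are assumptions rather than the starter code's actual interfaces. It also returns barycentric coordinates, which come in handy later for interpolating per-vertex normals and materials.

    // Sketch: Moller-Trumbore ray-triangle intersection (illustrative types and names).
    #include <cmath>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
        Vec3 cross(const Vec3& o) const {
            return {y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x};
        }
    };

    // Returns true on a hit. t is the ray parameter of the hit point;
    // (u, v) are its barycentric coordinates with respect to vertices b and c.
    bool intersectTriangle(const Vec3& orig, const Vec3& dir,
                           const Vec3& a, const Vec3& b, const Vec3& c,
                           double& t, double& u, double& v) {
        const double kEps = 1e-8;
        Vec3 e1 = b - a, e2 = c - a;
        Vec3 p = dir.cross(e2);
        double det = e1.dot(p);
        if (std::fabs(det) < kEps) return false;    // ray is (nearly) parallel to the triangle
        double invDet = 1.0 / det;
        Vec3 s = orig - a;
        u = s.dot(p) * invDet;
        if (u < 0.0 || u > 1.0) return false;
        Vec3 q = s.cross(e1);
        v = dir.dot(q) * invDet;
        if (v < 0.0 || u + v > 1.0) return false;
        t = e2.dot(q) * invDet;
        return t > kEps;                            // hit must be in front of the ray origin
    }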

Some example images generated with the example solution

[example images]

Notes on Whitted's illumination model

The first three terms in Whitted's model will require you to trace rays towards each light, and the last two will require you to recursively trace reflected and refracted rays. (Notice that the number of reflected and refracted rays that will be calculated is limited by the "depth" setting in the ray tracer. This means that to see reflections and refraction, you must set the depth to be greater than zero!)
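
As a structural illustration of the depth cutoff only, the recursion usually looks something like the sketch below. Every name and type here is a placeholder rather than the starter code's interface, and a real tracer would intersect the scene, compute the local Phong terms, and weight the recursive terms by the material's reflective and refractive coefficients.

    // Structural sketch of the depth-limited recursion (placeholder names only).
    struct Color { double r, g, b; };

    Color traceRay(/* const Ray& r, const Scene& scene, */ int depth) {
        if (depth < 0)
            return {0.0, 0.0, 0.0};             // recursion bottoms out: no contribution
        // ... find the nearest intersection and compute the local shading terms ...
        Color local = {0.0, 0.0, 0.0};
        // Reflected and refracted contributions recurse with one less bounce;
        // in a real tracer they are scaled by the material's k_r and k_t.
        Color reflected = traceRay(depth - 1);
        Color refracted = traceRay(depth - 1);
        return {local.r + reflected.r + refracted.r,
                local.g + reflected.g + refracted.g,
                local.b + reflected.b + refracted.b};
    }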

When tracing rays toward lights, you should look for intersections with objects, thereby rendering shadows. If you intersect a semi-transparent object, you should attenuate the light, thereby rendering partial (color-filtered) shadows, but you may ignore refraction of the light source.
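
One way to realize these color-filtered shadows is sketched below: collect the transmissive color k_t of every object the shadow ray passes through and multiply them into the light's color. The types and names are assumptions for illustration, not the starter code's interfaces.

    // Sketch: attenuating light along a shadow ray (illustrative types and names).
    #include <vector>

    struct Color { double r, g, b; };

    // blockerKt holds the transmissive color of each object between the shaded
    // point and the light; an opaque object has k_t = (0, 0, 0).
    Color shadowAttenuation(const Color& lightColor,
                            const std::vector<Color>& blockerKt) {
        Color atten = lightColor;
        for (const Color& kt : blockerKt) {
            atten.r *= kt.r;
            atten.g *= kt.g;
            atten.b *= kt.b;
            if (atten.r == 0 && atten.g == 0 && atten.b == 0) break;   // fully shadowed
        }
        return atten;
    }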

The skeleton code doesn't implement Phong interpolation of normals. You need to add code for this (only for meshes with per-vertex normals).
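
The interpolation itself is a barycentric blend of the three vertex normals followed by a re-normalization. The sketch below assumes the barycentric coordinates of the hit point are already available (for example, from the triangle intersection routine); the types and names are illustrative only.

    // Sketch: Phong interpolation of per-vertex normals (illustrative types and names).
    #include <cmath>

    struct Vec3 { double x, y, z; };

    // (alpha, beta, gamma) are the barycentric coordinates of the hit point,
    // with alpha + beta + gamma == 1; na, nb, nc are the three vertex normals.
    Vec3 interpolateNormal(double alpha, double beta, double gamma,
                           const Vec3& na, const Vec3& nb, const Vec3& nc) {
        Vec3 n = {alpha * na.x + beta * nb.x + gamma * nc.x,
                  alpha * na.y + beta * nb.y + gamma * nc.y,
                  alpha * na.z + beta * nb.z + gamma * nc.z};
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }   // re-normalize
        return n;
    }

Per-vertex materials (e.g., diffuse and specular coefficients) are interpolated the same way, minus the normalization.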

Here are some equations that will come in handy when writing your shading and ray tracing algorithms.
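
For reference, the Whitted/Phong shading used here is commonly written along the following lines; the exact attenuation and clamping conventions should follow the linked equation handout.

    I = k_e + k_a I_a
        + \sum_{l} A_l I_l \left( k_d \max(\mathbf{N} \cdot \mathbf{L}_l, 0)
                                + k_s \max(\mathbf{V} \cdot \mathbf{R}_l, 0)^{n_s} \right)
        + k_r I_{\mathrm{reflect}} + k_t I_{\mathrm{refract}}

Here A_l combines the distance and shadow attenuation for light l, N is the surface normal, L_l the direction to the light, R_l the reflection of L_l about N, V the direction to the viewer, n_s the shininess exponent, and I_reflect and I_refract are the recursively traced contributions.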

Anti-aliasing

Once you've implemented the shading model and can generate images, you will notice that the images you generated are filled with "jaggies". You should implement anti-aliasing by super-sampling and averaging down. You should provide a slider and an option to control the number of samples per pixel (1, 4, 9 or 16 samples). You need only implement a box filter for the averaging down step. More sophisticated anti-aliasing methods are left as bells and whistles below.
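
A minimal sketch of the super-sample-and-average step is below. It assumes a hypothetical per-sample tracing callback; the names and the callback are illustrative only, not part of the starter code.

    // Sketch: regular NxN supersampling with a box filter (illustrative names).
    // tracePixelSample is assumed to trace one ray through the given sub-pixel
    // position in [0,1)^2 and return its color.
    #include <functional>

    struct Color { double r, g, b; };

    Color supersamplePixel(int n,   // samples per side: n = 1, 2, 3, 4 gives 1, 4, 9, 16 rays
                           const std::function<Color(double, double)>& tracePixelSample) {
        Color sum = {0.0, 0.0, 0.0};
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                double sx = (i + 0.5) / n;           // center of each sub-pixel cell
                double sy = (j + 0.5) / n;
                Color c = tracePixelSample(sx, sy);
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
            }
        }
        double inv = 1.0 / (n * n);                  // box filter: a plain average
        return {sum.r * inv, sum.g * inv, sum.b * inv};
    }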

Accelerated ray-surface intersection

The goal of this portion of the assignment is to speed up the ray-surface intersection module in your ray tracer. In particular, we want you to improve the running time of the program when ray tracing complex scenes containing large numbers of objects (usually triangles) by reducing the number of ray-object intersection tests using a spatial data structure. Use either a k-d tree or a bounding volume hierarchy (BVH); as described in class, they are very similar and quite effective.
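
The workhorse query for both structures is the ray vs. axis-aligned bounding box "slab" test, sketched below with illustrative types and names (not the starter code's).

    // Sketch: ray vs. axis-aligned bounding box slab test (illustrative types and names).
    #include <algorithm>
    #include <cfloat>

    struct Vec3 { double v[3]; };

    // Returns true if the ray orig + t*dir hits the box for some t in [tMin, tMax].
    bool hitAABB(const Vec3& orig, const Vec3& dir,
                 const Vec3& boxMin, const Vec3& boxMax,
                 double tMin = 0.0, double tMax = DBL_MAX) {
        for (int a = 0; a < 3; ++a) {
            double invD = 1.0 / dir.v[a];
            double t0 = (boxMin.v[a] - orig.v[a]) * invD;
            double t1 = (boxMax.v[a] - orig.v[a]) * invD;
            if (invD < 0.0) std::swap(t0, t1);
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
            if (tMax <= tMin) return false;          // the slab intervals don't overlap: miss
        }
        return true;
    }

A BVH stores one such box per node and recurses only into children whose boxes the ray hits; a k-d tree instead splits space with axis-aligned planes and visits the nearer child first, skipping the far child when the ray exits before reaching the splitting plane.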

The sample scenes include several simple scenes and three complex test scenes: trimesh1, trimesh2, and trimesh3. You will notice that trimesh1 has per-vertex normals and materials, and trimesh2 has per-vertex materials but not normals. Per-vertex normals and materials imply interpolation of these quantities at the current ray-triangle intersection point (using barycentric coordinates).

Acceleration grading criteria

Each of the test scenes contains up to thousands of triangles. A portion of your grade for this assignment will be based on the speed of your ray tracer running on these scenes. The faster you can render a picture, the higher your grade.

For grading on rendering speed, the scenes will be traced at a specified size with one ray traced per pixel, and the rays should be traced with 5 levels of recursion, i.e., each ray should bounce 5 times. If during these bounces a ray strikes a surface with zero specular reflectance and zero refraction, stop there. At each bounce, rays should be traced to all light sources, including shadow testing. The command line for testing rendering speed looks like: ray -w 400 -r 5 [in.ray] [out.bmp]. Be sure to enable your acceleration structure when the program is invoked from the command line. Don't try to customize your ray tracer for the test scenes; we will also use other scenes during grading. All timing will be done on the departmental lab Linux machines, which is another reason to port and test your code on those machines if you develop on another platform.

Bells and Whistles

This assignment is large, and, furthermore, the optimization element is completely open-ended, so you can profitably work on that until the project is due. Therefore we don't necessarily expect a bunch of bells and whistles. We can't stop you, though. Here are some interesting extensions to this project.

Approved Bells and Whistles

[whistle] Implement an adaptive termination criterion for tracing rays, based on ray contribution.  Control the adaptation threshold with a slider.

[whistle] Implement stochastic (jittered) supersampling. See Glassner, Chapter 5, Section 4.1 - 4.2 and the first 4 pages of Section 7.
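
As a quick illustration of the jittering itself (the sampling pattern only, not the full pipeline), one could generate sample positions roughly as follows; the names are placeholders.

    // Sketch: jittered (stochastic) sample positions for one pixel (illustrative names).
    #include <random>
    #include <utility>
    #include <vector>

    // Returns n*n sample positions in [0,1)^2: one uniformly random point
    // inside each cell of a regular n x n grid over the pixel.
    std::vector<std::pair<double, double>> jitteredSamples(int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        std::vector<std::pair<double, double>> samples;
        samples.reserve(n * n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                samples.emplace_back((i + uni(rng)) / n, (j + uni(rng)) / n);
        return samples;
    }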

[bell] Add a menu option that lets you specify a background image to replace the environment's ambient color during the rendering. That is, any ray that goes off into infinity behind the scene should return a color from the loaded image, instead of just black. The background should appear as the backplane of the rendered image, with suitable reflections and refractions of it.

[bell] Deal with overlapping objects intelligently. In class, we discussed how to handle refraction for non-overlapping objects in air. This approach breaks down when objects intersect or are wholly contained inside other objects. Add support to the refraction code for detecting this and handling it in a more realistic fashion. Note, however, that in the real world, objects can't coexist in the same place at the same time. You will have to make assumptions as to how to choose the index of refraction in the overlapping space. Make those assumptions clear when demonstrating the results.
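
One common assumption, sketched below, is to track the objects the ray is currently inside on a stack and use the most recently entered object's index of refraction; other policies are possible, and these names are placeholders.

    // Sketch: choosing the index of refraction for nested or overlapping objects
    // by tracking the media the ray is currently inside (placeholder names).
    #include <vector>

    struct MediumStack {
        std::vector<double> indices;                 // innermost medium is at the back

        double current() const {                     // index of refraction to use now
            return indices.empty() ? 1.0 : indices.back();   // empty stack = air/vacuum
        }
        void enter(double indexOfRefraction) { indices.push_back(indexOfRefraction); }
        void exit() { if (!indices.empty()) indices.pop_back(); }
    };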

[bell] Implement spot lights.

[bell] Implement antialiasing by adaptive supersampling, as described in Glassner, Chapter 1, Section 4.5 and Figure 19 or in Foley, et al., 15.10.4. For full credit, you must show some sort of visualization of the sampling pattern that results. For example, you could create another image where each pixel is given an intensity proportional to the number of rays used to calculate the color of the corresponding pixel in the ray traced image. Implementing this bell/whistle is a big win -- nice antialiasing at low cost.

[bell] Add some new types of geometry to the ray tracer. Consider implementing tori or general quadrics. Many other objects are possible here.

[bell] [bell] for the first, [bell] for each additional. Implement stochastic or distributed ray tracing to produce one or more of the following effects: depth of field, soft shadows, motion blur, or glossy reflection (see Glassner, Chapter 5, or Foley, et al., 16.12.4).

[bell] [bell] Implement texture mapping.

[bell] [bell] Implement bump mapping.

[bell] [bell] Implement solid textures or some other form of procedural texture mapping, as described in Foley, et al., 20.1.2 and 20.8.3. Solid textures are a way to easily generate a semi-random texture like wood grain or marble.

[bell] [bell] Extend the ray-tracer to create Single Image Random Dot Stereograms (SIRDS). Here is a paper on how to make them. Also check out this page of examples. Or, create 3D images like this one for viewing with red-blue glasses.

[bell] [bell] [bell] [bell] Implement a more realistic shading model. Credit will vary depending on the sophistication of the model. A simple model factors in the Fresnel term to compute the amount of light reflected and transmitted at a perfect dielectric (e.g., glass). A more complex model incorporates the notion of a microfacet distribution to broaden the specular highlight. Accounting for the color dependence in the Fresnel term permits a more metallic appearance. Even better, include anisotropic reflections for a plane with parallel grains or a sphere with grains that follow the lines of latitude or longitude. Sources: Watt, Chapter 7; Foley, et al., Section 16.7; Glassner, Chapter 4, Section 4; Ward's SIGGRAPH '92 paper; Schlick's Eurographics Rendering Workshop '93 paper.
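
For the Fresnel part, one short and widely used starting point is Schlick's approximation, sketched below with illustrative names.

    // Sketch: Schlick's approximation to the Fresnel reflectance (illustrative names).
    #include <cmath>

    // n1, n2: indices of refraction on the incident and transmitted sides;
    // cosTheta: cosine of the angle between the incident direction and the normal.
    // Returns the fraction of light reflected; the rest is transmitted.
    double schlickFresnel(double n1, double n2, double cosTheta) {
        double r0 = (n1 - n2) / (n1 + n2);
        r0 *= r0;                                   // reflectance at normal incidence
        return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
    }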

This all sounds kind of complex, and the physics behind it is. But the coding doesn't have to be. It can be worthwhile to look up one of these alternate models, since they do a much better job at surface shading.  Be sure to demo the results in a way that makes the value added clear.

Theoretically, you could also invent new shading models. For instance, you could implement a less realistic model! Could you implement a shading model that produces something that looks like cel animation? Variable extra credit will be given for these "alternate" shading models. Links to ideas: Stylized Depictions

Note that you must still implement the Phong model.

[bell] [bell] [bell] [bell] Implement CSG, constructive solid geometry. This extension allows you to create very interesting models. See page 108 of Glassner for some implementation suggestions. An excellent example of CSG was built by a grad student in the University of Washington version of this graphics course.

[bell] [bell] [bell] [bell] Add a particle systems simulation and renderer (Foley 20.5, Watt 17.7).

[bell] [bell] [bell] [bell] Implement caustics. Caustics are variations in light intensity caused by refractive focusing--everything from simple magnifying-glass points to the shifting patterns on the bottom of a swimming pool. An introduction, and a paper discussing a ray-trace project that included caustics.

Project Turn-in

You will need to use the "Canvas" system on the web to turn this program in. Once again, the artifact is a separate turn-in from the code. As usual, if you developed your program on Windows or some other platform, you will need to port your work to the lab Linux machines before submitting it.

1) To turn in the main project, first clean your development area so that all *.o files and binary executables are removed. Then follow the submission directions on Canvas.

References