Project 3: Shader
Assigned: Wednesday, Oct 16, 2013
Due: Wednesday, Oct 30, 2013 (by 11:59PM)
No artifact due

In this project, you’ll gain experience writing programmable shaders. You’ll write a vertex shader that “bends” a flat 2D grid into a torus. You’ll also write several fragment shaders, each implementing an increasingly sophisticated lighting model. All the shader code you write will be in the OpenGL Shading Language, otherwise known as GLSL. To give your torus a bumpy appearance, you’ll also write a small amount of C++ code to convert texels in a height-map texture into normal vectors to be stored in a normal map. Your fragment shaders implementing bumpy lighting models will then be able to sample the resulting normal-map textures.
You may work in a group of up to 2 people on this project.
Get the project source code as a compressed tar file or a zip file.
Uncompress the tar file & extract its contents with the command:
tar xvfz shader.tgz
or unzip the zip file.
You’ll now have a directory called shader containing all the source code for this project. Within the shader directory, you’ll find three subdirectories.
Consult the ReadMe.txt file in the src subdirectory for instructions on how to build and run the code.
The project is intended to build on the CS Ubuntu Linux machines. These machines have NVIDIA Quadro GPUs, so they can easily execute the project code — and this is the platform on which turned-in projects are tested.
You can build this project code with Visual Studio 2010 on Windows 7. Make sure you have OpenGL 2.0 or better as well as the boost libraries on your PC. Again, consult the ReadMe.txt file for more information.
You should be able to build this on OSX (tested on Lion) using the MACmakefile. Instructions for this are in the ReadMe.txt file.
Note the comments in main.cpp if you get a seg fault when you try to run it or if it seems to run correctly but you cannot see the red mesh.
When you run the program for the first time, you’ll see a red square with a “cloudy hills” backdrop.
Mouse Controls
When you click the right mouse button, you’ll get a popup menu that includes several submenus. With these submenus, you can select between various decals, normal maps, environment maps, light colors, and fragment shaders. (There’s just a single vertex shader all the fragment shaders use.)
Clicking and dragging the left mouse button moves the viewer (or camera) location. Moving left/right rotates the viewer around a circle. Moving up/down moves the viewer up and down, always looking at the object. Essentially, the viewer is constrained to move on a cylinder surrounding the object (soon to be a torus).
Clicking and dragging the middle mouse button (that is, clicking the track wheel) moves the light position. The light source is visualized as a sphere. The light source too is constrained to a virtual cylinder.
Holding down Ctrl and then clicking and dragging the left mouse button spins the object (soon to be a torus). The object spins as if connected to a trackball and keeps spinning when you release. Ctrl-left-clicking and releasing without motion stops the spinning. (There is also a menu option explicitly to stop the spinning motion.)
Keyboard Controls
There are a few keys supported. Pressing ‘w’ toggles the wireframe mode. This might be helpful to debug your modifications to torus.vert.
Pressing ‘0’ through ‘9’ switches between shaders which are numbered 00 through 09 in the src/glsl directory.
You’ll be modifying the GLSL shaders in the src/glsl subdirectory. You’ll find ten fragment shaders with the .frag suffix and a single vertex shader with the .vert suffix.
Task 0: Rolling up the Square into a Torus
Modify the torus.vert vertex shader implementation so that the square patch with vertex positions ranging from [0..1] in both the x and y (or u and v) components is rolled up into a torus.
Research the torus to find a parametric function F(u,v)=(x,y,z) for a torus. Hint: Wikipedia is a fine place to look.
Be sure to adjust u and v if their range is not the [0..1] range of the square patch’s vertex components. This figure visualizes the vertex shader’s job:
The incoming (u,v) attribute to torus.vert is named parametric.
Get the outer and inner radii of the torus from the respective x and y components of the torusInfo uniform variable. Notice that the existing code for torus.vert declares several attribute, uniform, and varying variables that are used to communicate with the C++ application and the downstream fragment shader.
Once you compute the (x,y,z) position for the vertex on the torus in object space, transform this vertex position by the current modelview-projection matrix. Hint: GLSL provides built-in variables prefixed with gl_ that track current OpenGL fixed-function state such as the modelview-projection matrix.
For subsequent tasks, you’ll need to compute additional varying quantities in torus.vert for use by the downstream fragment shader. The first fragment shader is 00_red.frag, which unconditionally outputs red as the fragment color, so initially you can simply output gl_Position for the torus (x,y,z) position in clip space.
When you complete this task, your program should render a result like:
Task 1: Applying a Texture Decal
Once you can roll the red square into a torus, your next task is to shade the torus with a decal. This will require generating texture coordinates as a function of the parametric attributes. Output a 2-component (s,t) vector from your vertex shader to the normalMapTexCoord varying.
Then in the 01_decal.frag fragment shader, use this texture coordinate set to access the decal sampler.
Make sure the decal tiles 2 times in the inner (smaller) radius and 6 times in the outer (larger) radius. Assuming there are more fragments generated than vertices transformed, would this scaling be more efficiently performed in the vertex or fragment shader?
When you complete this task, your program should render a result like:
Try picking other decals from the “Decal texture…” menu.
Task 2: Diffuse Illumination
In this task, you’ll shade the torus with a per-fragment ambient + diffuse illumination term by modifying the 02_diffuse.frag GLSL shader.
To compute diffuse illumination, you’ll need a surface normal vector and a light vector. Both of these vectors should be normalized. (GLSL has a normalize standard library function, so normalizing vectors is easy in GLSL.) The dot product of these two normalized vectors (clamped to zero if negative) models the diffuse lighting contribution.
You must make sure the light and surface normal vectors are in the same coordinate system (sometimes stated as "in the same coordinate frame"). This could be object space, eye space, or surface space. For efficiency reasons (and to facilitate normal mapping, particularly environment mapping combined with normal mapping), it makes sense to choose surface space. In surface space, the (unperturbed) surface normal is always in the direction (0,0,1), pointing straight up along the positive Z axis.
The uniform vector lightPosition provides the position of the light in object space. The (unnormalized) light direction vector is the vector from the vertex position to the light position.
To transform an object-space direction into its surface-space version, you must construct an orthonormal basis (a rotation matrix) that can rotate directions from object space to surface space.
First compute the gradients of \(F(u,v)\) in terms of \(u\) and \(v\), that is, \(\frac{\partial F(u,v)}{\partial u}\) and \(\frac{\partial F(u,v)}{\partial v}\).
Don’t trust yourself to differentiate a complicated function involving trigonometric functions? Wolfram Alpha can differentiate for you! As a simple example, try “diff(u^2,u)”.
We call the normalized gradient of F in terms of u the tangent.
The cross product of these two normalized gradients is the (normalized) normal to the surface in object space as a function of (u,v).
In general, the cross product of the normal and tangent vector is a normalized vector mutually orthogonal to both the normal and the tangent called the binormal.
These three normalized vectors T, B, and N for the tangent, binormal, and normal respectively can be used as column vectors of a 3x3 matrix M useful for converting directions and positions to and from object and surface space. So \(\mathbf{M = [T\ B\ N]}\)
When this matrix M is multiplied by a surface-space vector, the result is an object-space vector. Because M is orthogonal, the inverse of M is the transpose of M, so \(\mathbf{M^{-1}} = \begin{bmatrix}\mathbf{T} \\ \mathbf{B} \\ \mathbf{N}\end{bmatrix}\)
So multiplying \(\mathbf{M^{-1}}\) by a vector in object space (equivalently, multiplying that vector, treated as a row vector, on the left of M) converts that vector into surface space.
In GLSL, you can construct a 3x3 matrix with the mat3 constructor from three vec3 (3-component vector) arguments treated as column vectors.
With this approach, the vertex shader can compute the object-space light vector (simply the light position minus the surface position, with both in object space) and transform this light vector into surface space. There is no need to normalize this vector in the vertex shader — indeed, it is better to normalize it in the fragment shader after interpolation. The vertex and fragment shaders have a lightDirection varying vector intended to interpolate the surface-space light vector.
Computing the diffuse contribution in surface space is easy. The (unperturbed) surface normal is always (0,0,1) so the Z component of the interpolated and normalized lightDirection is the diffuse lighting coefficient.
(Later for some of the bumpy shaders using normal mapping, the shader will substitute a perturbed normal obtained from a normal map texture to use instead of the unperturbed (0,0,1) surface space normal.)
In the accompanying diffuse fragment shader for this task, we need to normalize the interpolated lightDirection and use the Z component as the diffuse contribution. Because the diffuse coefficient is a magnitude and should not be negative, the fragment shader should clamp the coefficient to zero with the GLSL max standard library function.
The LMa, LMd, and LMs uniform variables provide an RGB color that is the light color (hence the L) and the material color (hence the M; with a, d, and s indicating the ambient, diffuse, and specular material color) modulated on a per-component basis. See the Torus::draw method to see where these uniforms are set.
In order for the diffuse shading to reflect the light and material colors, you should modulate LMd by the diffuse coefficient and add LMa to output a final fragment color for this task.
When you complete this task, your program should render a result like:
Try changing the “Material…” and “Light color…” settings to verify this shader is operating correctly. Move the light or spin the object. The region of the torus surface most facing the light should be brightest.
Task 3: Specular
For this task, you should modify the 04_specular.frag shader so the shading just shows a specular contribution.
Compute a Blinn specular contribution. For this you need to compute the dot product between the (unperturbed) surface normal and the normalized half-angle vector.
The half-angle vector is the average of the light vector and the view vector.
Whereas Task 2 computed the object-space light vector and transformed it into surface space, this task requires doing the same for the half-angle vector.
You have two choices: (A) compute the half-angle vector per-vertex and interpolate it, or (B) interpolate the view vector and compute the half-angle vector per-fragment. Choice B is more expensive, so the shader_scene examples have a halfAngle varying to interpolate the half-angle vector, but the view vector is also available so you can choose either approach.
Use the shininess uniform to control the specular exponent.
Remember to force the specular coefficient to zero if the diffuse contribution is zero (that is, when the light is behind the surface).
Also remember to force the specular coefficient to zero when it is negative.
Modulate the specular color result by the LMs uniform value.
Note: The initial “Material…” has a zero specular color, so if you are modulating by LMs you won’t see a specular highlight; switch to a different material to see the specular highlight properly.
When you complete this task, your program should render a result like:
Task 4: Specular + Diffuse + Ambient
For this task, you should modify the 05_diffspec.frag shader to include the ambient, diffuse, and specular lighting contributions—assuming an unperturbed normal.
When you complete this task, your program should render a result like:
With this task, the lighting should change as the “Material…” and “Light color…” selections change but should not depend on the “Environment map”, “Bump texture”, or “Decal texture” choices.
Bumpy Diffuse
You should modify the 03_bump_diffuse.frag shader so it operates in the same manner as the shader in Task 2 — except rather than using an unperturbed surface normal, a perturbed normal sampled from a normal map is used instead.
Instead of sampling the decal sampler as in Task 1, sample the normalMap sampler to fetch an RGB value that represents a perturbed surface normal, based on a height map converted to a normal map.
Normals are stored as signed components, but RGB textures store [0..1] values. For this reason, the fragment shader in this task needs to expand the [0..1] RGB values into [-1..+1] normal components.
The code for generating normals from height-field data is broken in shader_scene/texture.cpp — the perturbed normal is always returned as (0,0,1). (You’ll notice a comment reading “XXX fix me” in this function.) You must fix the NormalMap::computeNormal method. You need to approximate the gradients at the indicated (i,j) location in the height field, based on adjacent height-field texels, to compute a surface normal. Be sure to account for the fact that texels on the edge of the height field should “wrap” to the opposite side.
You should also use the scale parameter to scale the height field’s Z component. By increasing scale, the bumpiness becomes more pronounced. When the scale is zero, all bumpiness should disappear. If the scale is negative, the sense of the bumpiness reverses — outward bumps become inward bumps and vice versa. The ‘b’ key increases the scale while the ‘B’ key decreases it.
Warning: If you don’t fix the NormalMap::computeNormal method, you won’t get bump shading. The broken implementation always returns (0,0,1) meaning all height fields are mistakenly generated as flat normal maps.
Once the normal-map texels are generated correctly and the normalMap is sampled properly in the shader, the shader needs to compute the dot product of the sampled perturbed normal and the interpolated and normalized lightDirection vector. This dot product result becomes your diffuse coefficient once clamped to zero to avoid negative values.
When you complete this task, your program should render a result like:
Try varying the “Bump texture…” setting. Make sure when the normal map is “Outward bumps” that the bumps appear to bump outward consistently over the entire torus. Make sure the bump lighting on the torus responds to changes in “Material…” and “Light color…” menus.
Bumped and Lit
You should modify the 06_bump_lit.frag shader to include the ambient, diffuse, and specular lighting contributions with a perturbed normal from the normalMap sampler and with a decal color from the decal sampler.
Think of this task as combining Tasks 1, 3, and 4.
When you complete this task, your program should render a result like:
Environment Reflections
You should modify the 07_reflection.frag shader to reflect the object’s surroundings based on the envmap environment map.
Use the reflect GLSL standard library call to compute a reflection vector.
Bumpy Environment Reflections
You should modify the 08_bump_reflection.frag shader to reflect the object’s surroundings based on the envmap environment map and the surface’s perturbed, expanded normal from the normalMap sampler.
Everything
You should modify the 09_combo.frag shader to combine bumpy ambient, diffuse, and specular with bumpy reflections too. To avoid oversaturation, combine 50% of the ambient+diffuse, 50% of the specular, and 60% of the bumpy reflection.
As usual, you can develop the project on whatever operating system you like, but before you submit it, you must make sure that it builds and runs properly on the department Linux machines! All grading will take place on these machines, so if your code doesn’t work on them, you’re in trouble.
To submit your code, use the department turnin script, as usual:
Replace shader/ with whatever your code directory is named. Make sure that all the necessary code is submitted, as projects that do not build are worth nothing!
If the following does not work in your project directory
make clean
make all
./bin.debug64/shade_scene
then the project receives no credit.
Make sure you have included your name (and your partner’s name, if applicable) and UTCS ID in a comment at the top of each of your files. Also, include a README explaining the usage of your program, including any menu options or keyboard commands you might have added.