You may work with one partner on this assignment. This lab will count as your midterm project and draws on a variety of techniques that you have learned so far. You will design and implement a ray tracer. Your ray tracer should model spheres, triangles, and rectangles, using the Phong lighting model for ambient, diffuse, and specular lighting. You should also model shadows.
git fetch origin
git checkout -b raytrace origin/master
git push -u private raytrace

The above commands assume that the default class code is on the origin remote and that your personal remote is named private. Furthermore, your working directory must be clean (no uncommitted changes to files under version control) before checking out a new branch. If this is not the case, add and commit your changes locally before switching to the raytrace branch.
Once you and your partner have pushed your raytrace branches, you can each check out a sharedray branch to follow your partner's changes:
git fetch partner
git checkout -b sharedray partner/raytrace

Note that you should not merge into the master, shared, or sharedray branches, as you will later be unable to push your changes. Make sure you are on either the working or raytrace branch before merging.
rgbImage, png_reader, png_writer and rgbColor are from Project 01. Recall that in the RGBImage class pixel 0,0 is in the upper left.
view, material, light and ray are very lightweight classes or structs that are just containers for grouping related elements together. In many cases there are no associated cpp files, since the member variables can be accessed directly. Feel free to modify these classes/structs if you like, but you shouldn't need to add much to these files to get a basic ray tracer working.
Shape.h describes a virtual base class, much like the drawable class from Project 02. It is important that each shape you add to your scene is able to compute intersections between itself and a given ray. Each shape should also compute normals for points on its surface. You should implement spheres, triangles, and rectangles as classes derived from the Shape class. I have started this for you in Sphere.h, but you need to add some member variables and implement the appropriate methods in Sphere.cpp. Don't forget to update your CMakeLists.txt file. You will need to write the Triangle and Rectangle classes from scratch.
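To make the interface concrete, here is a minimal sketch of what the base class and a derived sphere might look like. The names (Vec3, intersect, normal) and the convention that intersect returns the smallest ray parameter t (negative on a miss) are my assumptions, not necessarily what Shape.h uses; adapt to the actual declarations in the starter code.

```cpp
#include <cmath>

// Placeholder vector type; your project already has point/vector classes.
struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray {
    Vec3 origin;
    Vec3 dir;
};

// Sketch of the virtual interface: intersect() returns the smallest ray
// parameter t along the ray, or a negative value when the ray misses.
class Shape {
public:
    virtual ~Shape() {}
    virtual double intersect(const Ray& r) const = 0;
    virtual Vec3 normal(const Vec3& p) const = 0;
};

// Example derived class: a sphere, found by solving the quadratic
// |o + t*d - c|^2 = R^2 for t.
class Sphere : public Shape {
public:
    Sphere(const Vec3& c, double radius) : m_center(c), m_radius(radius) {}

    double intersect(const Ray& r) const override {
        Vec3 oc = sub(r.origin, m_center);
        double a = dot(r.dir, r.dir);
        double b = 2.0 * dot(oc, r.dir);
        double c = dot(oc, oc) - m_radius * m_radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return -1.0;                    // ray misses the sphere
        return (-b - std::sqrt(disc)) / (2.0 * a);      // nearer intersection
    }

    Vec3 normal(const Vec3& p) const override {
        Vec3 n = sub(p, m_center);                      // points outward from center
        double len = std::sqrt(dot(n, n));
        return {n.x / len, n.y / len, n.z / len};
    }

private:
    Vec3 m_center;
    double m_radius;
};
```

Triangle and Rectangle would override the same two methods, computing a ray-plane intersection followed by an inside-the-shape test.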
That leaves the parser, which reads a text file like input.txt and converts it into an internal format that your raytracer can use. Writing parsers in C++ can be very tedious. I got you started by writing some helper functions in parser.h. Reading this file may be helpful as you parse some commands I have left out. Reading parser.cpp is probably less helpful. It has the tedious and annoying details of C++ string manipulation. raytracer.cpp contains the start of a full parser that opens the input file and parses each command line by line. Check out parseLine, which is similar to a giant switch statement (except you can't switch on string types). When you run the parser initially, you will find some commands are completely missing and some are only partially implemented. Examine the other parts of parseLine and use them as a guide to fill in the missing details. It is recommended that you store all the information about the input file in the m_scene object. I use two QHash dictionaries in the parser to refer to certain color and material variables by a string name, like "red". Take a look at a few examples in the parser and feel free to ask questions.
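The dispatch pattern in parseLine can be sketched generically as below. The tokenizer and the command names here are illustrative only; the real helpers live in parser.h, and your input.txt defines the actual command set.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a line of the input file into whitespace-separated tokens.
std::vector<std::string> tokenize(const std::string& line) {
    std::istringstream in(line);
    std::vector<std::string> tokens;
    std::string tok;
    while (in >> tok) tokens.push_back(tok);
    return tokens;
}

// Since C++ can't switch on std::string, compare the first token against
// each command name in turn. Branch bodies are left as comments; they would
// parse the remaining tokens and update m_scene.
std::string commandOf(const std::string& line) {
    std::vector<std::string> t = tokenize(line);
    if (t.empty()) return "";
    if (t[0] == "sphere")      { /* parse center + radius, add to the scene */ }
    else if (t[0] == "light")  { /* parse position + intensity */ }
    // ... one branch per command ...
    return t[0];
}
```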
To make material handling a bit easier, there is a notion of the "current" material. Changing the properties of a material through the use of mat amb color changes the "current" material, which can be saved under a special name and retrieved later. When you create a new sphere, triangle, or rectangle, you do not need to specify nine material coefficients. The semantics are that these objects should just use the "current" material at the time the object is created. It's very OpenGL-esque, for better or worse.
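The key point is that a shape copies the current material at creation time, so later edits to the current material don't affect it. A minimal sketch, with field names that are illustrative rather than the exact ones in material.h (and std::map standing in for QHash):

```cpp
#include <map>
#include <string>

// Illustrative material struct: three coefficients each for ambient,
// diffuse, and specular (the "nine material coefficients").
struct Material {
    double amb[3]  = {0, 0, 0};
    double diff[3] = {0, 0, 0};
    double spec[3] = {0, 0, 0};
};

// A shape stores its own copy of the material, taken at creation time.
struct SphereObj {
    Material mat;
};
```

Usage sketch: setting a component of the current material, saving it under a name, and creating a shape that snapshots it:

```cpp
// Material current;
// std::map<std::string, Material> saved;   // the lab uses QHash
// current.amb[0] = 1.0;                    // e.g. "mat amb color" edits current
// saved["red"] = current;                  // save for later retrieval by name
// SphereObj s{current};                    // sphere captures the material *now*
// current.amb[0] = 0.0;                    // does not change s.mat
```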
As for implementing the actual raytracer, it is helpful to have a function that converts i,j png pixel coordinates to world coordinates using the origin, horiz, and vert point/vector information. For each pixel, create a ray from the eye to the pixel position in world coordinates, then have a function that traces that one ray and returns a single RGBColor, which can be assigned to the corresponding pixel in the final output png file.
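One possible shape for that conversion function is sketched below. It assumes origin is the lower-left corner of the image plane and that horiz and vert span its full width and height; if your input file defines them differently, adjust accordingly. The (1 - v) flip accounts for pixel (0,0) being in the upper left of the RGBImage.

```cpp
// Placeholder vector type; substitute your project's point/vector classes.
struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(const Vec3& v, double s)    { return {v.x * s, v.y * s, v.z * s}; }

// Map png pixel (i, j) -- i = column, j = row, (0,0) in the upper left --
// to a point on the image plane.
Vec3 pixelToWorld(int i, int j, int width, int height,
                  const Vec3& origin, const Vec3& horiz, const Vec3& vert) {
    double u = (i + 0.5) / width;          // 0.5 offset samples pixel centers
    double v = 1.0 - (j + 0.5) / height;   // flip: row 0 is the top of the image
    return add(origin, add(scale(horiz, u), scale(vert, v)));
}
```

The per-pixel loop then builds a ray from the eye through pixelToWorld(i, j, ...) and hands it to your trace function.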
Don't try to handle all the components at once. Focus first on getting the ambient lighting working for one sphere or one rectangle in the center of the image. Once you have the basic outline correct, adding diffuse and specular lighting will be easier.
Notes about ray-tracing can be found in chapters 11.2 and 11.3 of the text. A review of the Phong lighting model can be found in chapter 5.3. Note, the text describes a light source as having separate red, green, and blue intensities, whereas we only define a single (white) intensity for each light.
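As a reference point, one color channel under this single-white-intensity convention might be computed as below. The function name, the parameter names, and the decision to leave the ambient term unscaled by the light intensity are assumptions for illustration; check chapter 5.3 and your material struct for the exact form you want.

```cpp
#include <algorithm>
#include <cmath>

// Placeholder vector type; substitute your project's point/vector classes.
struct Vec3 { double x, y, z; };
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One channel of Phong shading with a single white light intensity.
// ka, kd, ks are the material's ambient/diffuse/specular coefficients for
// this channel. n = surface normal, l = direction to the light, v = direction
// to the eye, r = reflection of l about n; all assumed unit length.
double phongChannel(double ka, double kd, double ks, double shininess,
                    double lightIntensity,
                    const Vec3& n, const Vec3& l, const Vec3& v, const Vec3& r) {
    double ambient  = ka;
    double diffuse  = kd * std::max(0.0, dot(n, l));
    double specular = ks * std::pow(std::max(0.0, dot(r, v)), shininess);
    return ambient + lightIntensity * (diffuse + specular);
}
```

For shadows, you would zero out the diffuse and specular terms when a ray from the surface point toward the light hits another shape first.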
You do not need a draw method in your shape classes. The drawing is done by the rays traveling through the image plane.
You do not need a perspective or ortho projection matrix. Because all rays originate from the eye and pass through the image plane, you will get a perspective effect simply by tracing rays.