You may work with one partner on this assignment. This lab will count as your midterm project and use a variety of techniques that you have learned so far. You will design and implement a ray tracer. Your ray tracer should model spheres, triangles, and rectangles, using the Phong lighting model for ambient, diffuse, and specular lighting. You should also model shadows.
rgbImage, png_reader, png_writer, and rgbColor are from lab 01. Recall that in the RGBImage class, pixel (0, 0) is in the upper left.
We talked about the Vector3 class prior to Spring break. You should use it extensively to store points, vectors, colors (with RGB components in the range 0 to 1), and whatever else you feel is appropriate. Note that things like normalize, length, dot, and cross are already implemented.
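As one example of the kind of thing Vector3 is for, a Lambertian diffuse factor is just a subtraction, a normalize, and a dot product. The sketch below assumes Vector3 provides operator-, normalize(), and dot() as member functions; check Vector3.h for the actual names and signatures.

    #include <algorithm>
    #include "Vector3.h"

    // Diffuse (Lambertian) factor at a point, given a unit surface normal.
    // Sketch only: the Vector3 interface is assumed, not copied from Vector3.h.
    double diffuseFactor(const Vector3 &hitPoint, const Vector3 &normal,
                         const Vector3 &lightPos)
    {
        Vector3 toLight = (lightPos - hitPoint).normalize();  // unit vector toward the light
        return std::max(0.0, normal.dot(toLight));            // clamp so back-facing points get 0
    }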
view, material, light and ray are very lightweight classes or structs that are just containers for grouping related elements together. In many cases there is no associated cpp file, since the member variables can be accessed directly. Feel free to modify these classes/structs if you like, but you shouldn't need to add much to these files to get a basic ray tracer working.
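To make that concrete, here is roughly the shape of a ray container: just an origin and a direction, with perhaps a helper to evaluate a point along the ray. The member names are guesses, not necessarily the ones in the provided ray header, and it assumes Vector3 supports operator+ and multiplication by a scalar.

    #include "Vector3.h"

    // A minimal sketch of a ray container; the provided ray struct may differ.
    struct Ray {
        Vector3 origin;     // eye point, or a surface point for shadow/secondary rays
        Vector3 direction;  // usually stored normalized

        // The point reached after traveling distance t along the ray.
        Vector3 at(double t) const { return origin + direction * t; }
    };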
Shape.h describes a virtual base class, much like the drawable class from lab 02. It is important that each shape you add to your scene is able to compute intersections between rays and the shape, and to compute normals at points on the shape. You should implement spheres, triangles, and rectangles as classes derived from the Shape class. I have started this for you in Sphere.h, but you need to add a few things and implement the appropriate methods in Sphere.cpp. Don't forget to update your CMakeLists.txt file. You will need to write the Triangle and Rectangle classes from scratch.
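As a concrete example of what an intersect method has to do, here is a sketch of the standard ray-sphere quadratic. The free function and its parameter names are made up for illustration; your version should live in the Sphere class with whatever signature Shape.h declares.

    #include <cmath>
    #include "Vector3.h"

    // Solve |o + t*d - c|^2 = r^2 for t. Returns true and sets t to the nearest
    // hit in front of the ray origin, or false if the ray misses the sphere.
    bool intersectSphere(const Vector3 &o, const Vector3 &d,
                         const Vector3 &center, double radius, double &t)
    {
        Vector3 oc = o - center;
        double a = d.dot(d);
        double b = 2.0 * oc.dot(d);
        double c = oc.dot(oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return false;          // no real roots: the ray misses
        double sq = std::sqrt(disc);
        double t0 = (-b - sq) / (2.0 * a);     // nearer root
        double t1 = (-b + sq) / (2.0 * a);     // farther root
        t = (t0 > 1e-6) ? t0 : t1;             // prefer the nearest hit in front of the origin
        return t > 1e-6;
    }

    // The normal at a hit point p on the sphere is simply (p - center).normalize().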
That leaves the parser, which reads a text file like input.txt and converts it into an internal format that your ray tracer can use. Writing parsers in C++ can be very tedious, so I got you started by writing some helper functions in parser.h. Reading that file may be helpful as you parse the commands I have left out. Reading parser.cpp is probably less helpful; it has the tedious and annoying details of C++ string manipulation. raytracer.cpp contains the start of a full parser that opens the input file and parses each command line by line. Check out parseLine, which acts like a giant switch statement (except that you can't switch on string types). When you first run the parser, you will find that some commands are completely missing and some are only partially implemented. Examine the parts of parseLine that already work and use them as a guide to fill in the missing details. It is recommended that you store all the information from the input file in the global scene object.
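Because you can't switch on a std::string, the dispatch ends up being an if/else chain on the command word, roughly like the sketch below. The command names here are my guesses at what input.txt contains, and the real code should use the helpers in parser.h rather than a raw stringstream; this only shows the structure.

    #include <sstream>
    #include <string>

    void parseLineSketch(const std::string &line)
    {
        std::istringstream in(line);
        std::string command;
        in >> command;                      // first token is the command name

        if (command == "sphere") {
            double x, y, z, r;
            in >> x >> y >> z >> r;
            // build a Sphere with the current material and add it to the scene
        } else if (command == "light") {
            // read the light's position and intensity and append it to the scene
        } else if (command == "eye") {
            // read the eye point into the view information
        }
        // ...one branch per command that can appear in input.txt
    }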
easy_map. I should probably say something about this. It is similar to a Python dictionary and very similar to a C++ map, so looking at the documentation for C++ map may be helpful. Getting a value can be done with val = mymap[key]. Setting a value uses mymap[key] = value. The default C++ map does something weird, though, when you say val = mymap[key] and the key does not already exist in the map: it just makes up a new value using the default constructor for the value type and stores it in the map. I think this is silly. Maybe you don't. Also, the standard C++ map does not have a very convenient way to test whether a key already exists. So I made easy_map, which is just like the C++ map but has a working bool has_key(key) method. I used these easy_maps in the parser to refer to certain color and material variables by a string name, like "red". Take a look at a few examples in the parser and feel free to ask questions.
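A tiny usage sketch, based only on the interface described above; the header name, the template form, and the three-argument Vector3 constructor are all assumptions, so look at how the parser actually declares its maps.

    #include <string>
    #include "easy_map.h"
    #include "Vector3.h"

    void namedColorExample()
    {
        easy_map<std::string, Vector3> colors;    // name -> rgb triple
        colors["red"] = Vector3(1.0, 0.0, 0.0);   // store a named color

        if (colors.has_key("red")) {              // check first, so a missing key is
            Vector3 c = colors["red"];            // never silently default-constructed
        }
    }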
To make material handling a bit easier, there is a notion of the "current" material. Changing material properties through commands like mat amb color changes the "current" material, which can be saved under a name and retrieved later. When you create a new sphere, triangle, or rectangle, you do not need to specify nine material coefficients. The semantics are that these objects should just use the "current" material at the time the object is created. It's very OpenGL-esque, for better or worse.
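One way to picture it: the scene keeps a single material that the mat commands edit in place, named materials live in an easy_map, and each new shape copies the current material when it is constructed. The member and type names below (including Material and the header name) are illustrative, not necessarily the ones in the starter code.

    #include <string>
    #include "easy_map.h"
    #include "material.h"    // assumed header name for the material struct

    struct SceneMaterialsSketch {
        Material current;                        // edited in place by the mat commands
        easy_map<std::string, Material> saved;   // saving the current material copies it in here
    };

    // When the parser sees a sphere, triangle, or rectangle command, construct the
    // new object with a copy of the current material, so later mat commands do not
    // change shapes that already exist.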
As for implementing the actual ray tracer, it is helpful to have a function that converts (i, j) png pixel coordinates to world coordinates using the origin, horiz, and vert point/vector information. For each pixel, create a ray from the eye to the pixel position in world coordinates, then have a function that traces that one ray and returns a single RGBColor, which can be assigned to the corresponding pixel of the final output png file.
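Here is a sketch of that conversion, assuming origin is the upper-left corner of the image rectangle and horiz and vert are the vectors spanning its edges; flip the v direction if your origin is the lower-left corner instead. The names pixelToWorld, traceRay, and setPixel are made up for illustration.

    #include "Vector3.h"

    // Map png pixel (i, j) to a world-space point at the center of that pixel.
    Vector3 pixelToWorld(int i, int j, int width, int height,
                         const Vector3 &origin, const Vector3 &horiz, const Vector3 &vert)
    {
        double u = (i + 0.5) / width;            // 0..1 across the image, left to right
        double v = (j + 0.5) / height;           // 0..1 down the image, top to bottom
        return origin + horiz * u + vert * v;
    }

    // Main loop, roughly:
    //   Vector3 p   = pixelToWorld(i, j, w, h, origin, horiz, vert);
    //   Ray     ray = { eye, (p - eye).normalize() };
    //   image.setPixel(i, j, traceRay(ray));    // traceRay returns one RGBColor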
Don't try to handle all the components at once. Focus first on getting the ambient lighting working for one sphere or one rectangle in the center of the image. Once that basic outline is correct, adding the rest of the lighting should be easier.
Notes about ray tracing can be found in sections 13.2 and 13.3 of the text. A review of the Phong lighting model can be found in section 6.3. Note that the text describes a light source as having separate red, green, and blue intensities, whereas we define only a single (white) intensity for each light.
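For reference, the per-light contribution with a single white intensity works out to something like the sketch below. Add the ambient term once per object rather than once per light, and skip a light entirely if a shadow ray toward it hits another object first. The coefficient names are placeholders for whatever your material stores, and it assumes Vector3 supports scalar multiplication and addition.

    #include <algorithm>
    #include <cmath>
    #include "Vector3.h"

    // Diffuse + specular contribution from one white light of the given intensity.
    // Assumes normal and toEye are unit length.
    Vector3 phongOneLight(const Vector3 &point, const Vector3 &normal, const Vector3 &toEye,
                          const Vector3 &lightPos, double intensity,
                          const Vector3 &kd, const Vector3 &ks, double shininess)
    {
        Vector3 L = (lightPos - point).normalize();       // unit vector toward the light
        Vector3 R = normal * (2.0 * normal.dot(L)) - L;   // L mirrored about the normal
        double diff = std::max(0.0, normal.dot(L));
        double spec = std::pow(std::max(0.0, R.dot(toEye)), shininess);
        return kd * (intensity * diff) + ks * (intensity * spec);
    }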