You may work with one partner on this assignment. This lab will count as your midterm project and use a variety of techniques that you have learned so far. You will design and implement a ray tracer. Your ray tracer should model spheres, triangles, and rectangles, using the Phong lighting model for ambient, diffuse, and specular lighting. You should also model shadows.
[~]$ ssh-add
Enter passphrase for /home/ghopper1/.ssh/id_rsa:
Identity added: /home/ghopper1/.ssh/id_rsa (/home/ghopper1/.ssh/id_rsa)
[~]$ cd ~/cs40/labs
[labs]$ git clone git@github.swarthmore.edu:CS40-F16/raytracer-YOURUSERNAME-YOURPARTNERNAME.git raytracer
[raytracer]$ cd ~/cs40/labs/raytracer
[raytracer]$ mkdir build
[raytracer]$ cd build
[build]$ cmake ..
[build]$ make -j8
[build]$ ./makescene input.txt

Your program does not display a window. Instead, it creates a PNG image file (test.png, as specified in input.txt). You can view image files on the CS system using the program geeqie. The image should be a blank black image at this point. Your ray tracer code will construct the correct image.
The first few lines of this file describe the name of the output image and the height and width of the image (in pixels).
#comment
output test.png
outsize 512 512

This image also represents a planar rectangle in the world. This rectangle is specified by the lower left corner of the rectangle and two vectors: a horizontal vector pointing from the lower left to the lower right, and a vertical vector pointing from the lower left to the upper left.
origin -5 -5 8
horiz 10 0 0
vert 0 10 0

The center of every pixel at (row, column) in the output image file has a corresponding position in world coordinates, computed using origin, horiz, and vert.
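The (row, column) to world-coordinate mapping can be sketched as below. This is a minimal sketch, not the required implementation: the plain `Vec3` struct stands in for Qt's `QVector3D`, and `pixelToWorld` is a hypothetical name for the conversion function (the pseudocode later calls it `convertToWorld`). Note the vertical flip, since image row 0 is at the top while vert points upward.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for QVector3D; the real project would use Qt's class.
struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }

// Map the center of pixel (row, col) to world coordinates.
// Row 0 is the top of the image (the QImage convention), but vert points
// from the lower left toward the top, so the vertical fraction is flipped.
Vec3 pixelToWorld(int row, int col, int nrows, int ncols,
                  Vec3 origin, Vec3 horiz, Vec3 vert) {
    double u = (col + 0.5) / ncols;            // fraction along horiz
    double v = (nrows - row - 0.5) / nrows;    // flip: row 0 -> top of rectangle
    return add(origin, add(scale(horiz, u), scale(vert, v)));
}
```

With the origin/horiz/vert values above and a 512x512 image, pixel (0, 0) maps to a point near the upper left corner of the world rectangle, at z = 8.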
eye 0 0 15

You will be tracing rays from the eye location through the center of each pixel into the scene, which is usually located on the opposite side of the image plane from the eye.
#center x,y,z radius
sphere 0 0 0 2
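Ray-sphere intersection reduces to a quadratic in the hit time t: substitute the ray o + t*d into the implicit sphere equation |p - c|^2 = r^2 and solve. A minimal sketch, with a plain `Vec3` standing in for `QVector3D` and `sphereHitTime` as a hypothetical name for what your `Sphere::hitTime` method might compute:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Return the smallest positive hit time of the ray origin + t*dir with the
// sphere, or -1.0 if the ray misses.
double sphereHitTime(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
    Vec3 oc = sub(origin, center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(dir, oc);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;        // no real roots: ray misses sphere
    double sq = std::sqrt(disc);
    double t1 = (-b - sq) / (2.0 * a);  // nearer root
    double t2 = (-b + sq) / (2.0 * a);  // farther root
    if (t1 > 0.0) return t1;
    if (t2 > 0.0) return t2;            // ray origin is inside the sphere
    return -1.0;                        // sphere entirely behind the ray
}
```

For the scene above, a ray from the eye (0, 0, 15) straight down the -z axis first hits the sphere of radius 2 at t = 13. The sphere's normal at a hit point p is simply (p - center) normalized.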
Triangles and rectangles will be specified by three points. In the case of rectangles, assume the first three points are the lower left, lower right, and upper right, respectively. You can compute the upper left with this information.
#xyz of p1,p2,p3
triangle -5 5 5 5 5 5 5 5 -5
#xyz of ll, lr, ur
rectangle -5 -5 5 5 -5 5 5 -5 -5
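For rectangles, the upper left corner is ll + (ur - lr), and a hit test can intersect the ray with the rectangle's plane and then check that the hit point lies within both edges. A sketch under the same assumptions as before (plain `Vec3` instead of `QVector3D`; `rectHitTime` is a hypothetical name):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Hit time of the ray o + t*d against the rectangle (ll, lr, ur), or -1.0
// on a miss. The fourth corner (ul) is implicitly ll + v.
double rectHitTime(Vec3 o, Vec3 d, Vec3 ll, Vec3 lr, Vec3 ur) {
    Vec3 u = sub(lr, ll);                        // bottom edge
    Vec3 v = sub(ur, lr);                        // side edge
    Vec3 n = cross(u, v);                        // plane normal
    double denom = dot(d, n);
    if (std::fabs(denom) < 1e-12) return -1.0;   // ray parallel to plane
    double t = dot(sub(ll, o), n) / denom;
    if (t <= 0.0) return -1.0;                   // plane behind the ray
    Vec3 w = sub(add(o, scale(d, t)), ll);       // hit point relative to ll
    double s1 = dot(w, u) / dot(u, u);           // coordinate along bottom edge
    double s2 = dot(w, v) / dot(v, v);           // coordinate along side edge
    if (s1 < 0.0 || s1 > 1.0 || s2 < 0.0 || s2 > 1.0) return -1.0;
    return t;
}
```

The same cross product gives you the rectangle's normal for the lighting computation.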
Shape.h describes a virtual base class, much like the drawable class from Project 02. It is important that each shape you add to your scene is able to compute intersections between rays and the shape. Also, each shape should compute normals for points on the shape. You should implement spheres, triangles, and rectangles as classes derived from the Shape class. I have started this for you in Sphere.h, but you need to add some stuff and implement the appropriate methods in Sphere.cpp. Don't forget to update your CMakeLists.txt file. You will need to add Triangle and Rectangle classes from scratch.
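For the Triangle class's hit test, one common approach is the Möller-Trumbore algorithm, which computes the hit time and barycentric coordinates directly from two edge vectors. A sketch under the same assumptions as the earlier examples (plain `Vec3` stand-in; `triHitTime` is a hypothetical name for a `Triangle::hitTime` method):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Moller-Trumbore ray-triangle intersection: returns the hit time of the
// ray o + t*d with triangle (p1, p2, p3), or -1.0 on a miss.
double triHitTime(Vec3 o, Vec3 d, Vec3 p1, Vec3 p2, Vec3 p3) {
    Vec3 e1 = sub(p2, p1), e2 = sub(p3, p1);     // two triangle edges
    Vec3 h = cross(d, e2);
    double a = dot(e1, h);
    if (std::fabs(a) < 1e-12) return -1.0;       // ray parallel to triangle
    double f = 1.0 / a;
    Vec3 s = sub(o, p1);
    double u = f * dot(s, h);                    // first barycentric coordinate
    if (u < 0.0 || u > 1.0) return -1.0;
    Vec3 q = cross(s, e1);
    double v = f * dot(d, q);                    // second barycentric coordinate
    if (v < 0.0 || u + v > 1.0) return -1.0;
    double t = f * dot(e2, q);
    return t > 0.0 ? t : -1.0;                   // reject hits behind the ray
}
```

The triangle's normal for lighting is cross(e1, e2), normalized.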
#global ambient intensity
amblight 0.1
#xyz pos intensity
light 0 3 10 0.3
light -5 0 0 0.3
light 5 8 0 0.3
QImage, QColor and QRgb are from Project 01. Recall that in the QImage class, pixel 0,0 is in the upper left.
view, material, light, ray and hit are very lightweight classes or structs that are just containers for grouping related elements together. In many cases, there is no associated .cpp file since the member variables can be accessed directly. Feel free to modify these classes/structs if you like, but you shouldn't need to add much to these files to get a basic ray tracer working.
common.h/.cpp contains a simple debugging function that allows you to print out QVector3D (vec3) objects. Feel free to add other "common" functions here if needed.
makescene.cpp is a small wrapper around the bulk of the ray tracer. This is the main executable you run. It checks that you provided an input file, creates an instance of the RayTracer class, parses the file, traces the scene, and saves the result. You do not need to modify this code. Instead, modify raytracer.cpp.
If you look at raytracer.cpp initially, you'll see that RayTracer::save() creates a QImage object and saves it to a file. You will probably want to move the creation of this image into RayTracer::trace(), make the QImage object a member variable, and only have save() write the output image created in trace(). This was my final implementation of save:
void RayTracer::save() {
    QString imgName = QString(m_view.fname.c_str());
    m_img->save(imgName, "PNG");
    cout << "Saved result to " << m_view.fname << endl;
}
That leaves the parser, which reads a text file like input.txt and converts it into an internal format that your ray tracer can use. Writing parsers in C++ can be very tedious. I got you started by writing some helper functions in parser.h. Reading this file may be helpful as you parse some commands I have left out. Reading parser.cpp is probably less helpful; it has the tedious and annoying details of C++ string manipulation. raytracer.cpp contains the start of a full parser that opens the input file and parses each command line by line. Check out parseLine, which is similar to a giant switch statement (except you can't switch on string types). When you run the parser initially, you will find some commands are completely missing and some are only partially implemented. Examine the other parts of parseLine and use them to fill in any missing details. It is recommended that you store all the information about the input file in the m_scene object. I use two QHash dictionaries in the parser to refer to certain color and material variables by a string name, like "red". Take a look at a few examples in the parser and feel free to ask questions.
To make material handling a bit easier, there is a notion of the "current" material. Changing the properties of a material through the use of commands like mat amb changes the "current" material, which can be saved under a special name and retrieved later. When you create a new sphere, triangle, or rectangle, you do not need to specify nine material coefficients. The semantics is that these objects should just use the "current" material at the time the object is created. It's very OpenGL-esque, for better or worse.
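The current-material pattern described above can be sketched as follows. This is a hedged simplification: the real parser uses QHash with QString keys, but std::map keeps the sketch self-contained, and the member names here are hypothetical.

```cpp
#include <map>
#include <string>

// Simplified material record; the real one has nine coefficients
// (ambient, diffuse, specular, each RGB) plus shininess.
struct Material {
    double amb[3]  = {0, 0, 0};
    double diff[3] = {0, 0, 0};
    double spec[3] = {0, 0, 0};
};

// "Current material" state, OpenGL-style: mat commands mutate `current`,
// and named snapshots (e.g. "red") can be saved and restored by the parser.
struct MaterialState {
    Material current;
    std::map<std::string, Material> saved;   // the real code uses QHash<QString, Material>

    void save(const std::string& name) { saved[name] = current; }
    void load(const std::string& name) { current = saved[name]; }
};
```

Each new shape would copy `current` at construction time, so later mat commands don't retroactively change shapes already in the scene.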
As for implementing the actual ray tracer, it is helpful to have a function that converts i,j PNG pixel coordinates to world coordinates using the origin, horiz, and vert point/vector information. For each pixel, create a ray from the eye through the pixel position in world coordinates, then have a function that traces one ray and returns a single RGB color, which can be assigned to the corresponding pixel of the final output PNG file.
Don't try to handle all the components at once. Focus on maybe getting the ambient lighting working for one sphere or one rectangle in the center of the image. Once you have the basic outline correct, adding diffuse and specular lighting should be easier.
You do not need a draw method in your shape classes. The drawing is done by the rays traveling through the image plane.
You do not need a perspective or ortho projection matrix. Since all rays originate at the eye and pass through the image plane, you will get a perspective effect simply by tracing rays.
RayTracer::Run():
  Init image
  for each col,row in image:
    ray = makeRay(col, row)
    clr = traceRay(ray)
    image(col, row) = clr
  image.save()
makeRay
RayTracer::makeRay(col, row):
  /* Use origin, horiz, vert, nrows, ncols to
   * map col, row to a point in world coordinates */
  Point p = convertToWorld(col, row)
  Ray ray
  ray.origin = eye
  ray.direction = p - eye
  return ray
traceRay
RayTracer::traceRay(ray):
  Hit closest = findClosest(ray)
  Color clr = background
  if closest.ID != -1:
    /* We hit a shape with the ray; compute
     * its color using the lighting model */
    clr = doLighting(ray, closest.shape)
  return clr
doLighting
RayTracer::doLighting(ray, shape):
  /* include global ambient */
  clr = ambient * shape.clr
  for each light L:
    if shape not in shadow of L:
      clr += phong(ray, L, shape)
  return clr
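The phong step above combines the diffuse and specular terms for one light. A minimal sketch that computes a single scalar intensity (a full tracer would do this per RGB channel using the material's nine coefficients; the `Vec3` struct and the parameter names are stand-ins, not the required interface):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) { return scale(v, 1.0 / std::sqrt(dot(v, v))); }

// One light's Phong contribution (diffuse + specular) at a surface point.
// kd and ks are the diffuse and specular coefficients; intensity is the
// light's scalar intensity from the input file.
double phong(Vec3 point, Vec3 normal, Vec3 eye, Vec3 lightPos,
             double intensity, double kd, double ks, double shininess) {
    Vec3 n = normalize(normal);
    Vec3 l = normalize(sub(lightPos, point));   // surface -> light
    Vec3 v = normalize(sub(eye, point));        // surface -> eye
    double ndotl = std::max(0.0, dot(n, l));    // diffuse factor
    Vec3 r = sub(scale(n, 2.0 * dot(n, l)), l); // reflect l about n
    double rdotv = std::max(0.0, dot(r, v));    // specular factor
    return intensity * (kd * ndotl + ks * std::pow(rdotv, shininess));
}
```

At a point where the light, normal, and eye are all aligned, both factors reach their maximum of 1, so the result is intensity * (kd + ks).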
findClosest
RayTracer::findClosest(ray):
  closest.ID = -1  /* haven't found anything yet */
  for each object O:
    time = O.hitTime(ray)
    if time > 0:
      if closest.ID == -1 or time < closest.time:
        /* O is closer, update closest */
        closest.ID = O.ID
        closest.time = time
        closest.shape = O
  return closest
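The shadow test in doLighting can reuse the same hit-time machinery: cast a ray from the hit point toward the light, and the point is in shadow if any shape is hit strictly between the point and the light. A sketch with assumed names (`inShadow` is hypothetical, and each shape is reduced here to just its hit-time function; the real code would loop over Shape pointers). Note the small epsilon offset so a shape does not shadow itself:

```cpp
#include <cmath>
#include <functional>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if any shape blocks the segment from `point` to the light.
// Each shape is represented by a hit-time function (origin, dir) -> t.
bool inShadow(Vec3 point, Vec3 lightPos,
              const std::vector<std::function<double(Vec3, Vec3)>>& shapes) {
    Vec3 toLight = sub(lightPos, point);
    double distToLight = std::sqrt(dot(toLight, toLight));
    Vec3 dir = scale(toLight, 1.0 / distToLight);
    Vec3 o = add(point, scale(dir, 1e-4));   // nudge off the surface
    for (const auto& hitTime : shapes) {
        double t = hitTime(o, dir);
        if (t > 0.0 && t < distToLight)      // blocker between point and light
            return true;
    }
    return false;
}
```

A hit beyond the light (t > distToLight) does not cast a shadow, which is why the upper bound matters.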