Week 6: Lighting, Intro to Path tracing methods
Reading
We will begin talking more about the ray marching method of rendering on Wednesday. Prior to class, please read the introduction to ray tracing on Scratchapixel, and the overview of the ray marching technique by Jamie Wong.
- Ray Marching and Signed Distance Fields by Jamie Wong.
Also browse some of the short examples of ray marching on Jamie Wong’s ShaderToy profile, or browse the general ShaderToy site. ShaderToy allows users to browse, edit, and view GLSL code in a slightly modified fragment shader. You may notice there is no vertex shader code. Why not? Where is the geometry specified in these demos?
Monday
- Vertex shader lighting
- Fragment shader lighting
- Normal matrix
Note that in GLSL, matrix-vector multiplication, matrix-matrix multiplication, dot products, vector max, and other common graphics tools are built in. On the JavaScript side, you typically have to use a third-party library like twgl to get the same support.
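For instance, here is a small sketch showing a few of those built-ins in action (the function and parameter names are made up for illustration):

// A diffuse-only shading helper showing GLSL built-ins.
vec3 diffuseOnly(mat3 normalMatrix, vec3 normal, vec3 lightDir) {
    vec3 n = normalize(normalMatrix * normal); // matrix-vector multiply
    float diff = max(dot(n, lightDir), 0.0);   // dot and max are built in
    return diff * vec3(1.0);                   // white diffuse term
}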
All our computations in this example are done in eye space, as that makes computing the view vector easy. The light position is assumed to be given in world coordinates, so we apply only the view matrix to convert it to eye coordinates. In other applications you could have a model matrix for, e.g., a moving light source. Alternatively, you could specify the light coordinates in eye space so the light follows the camera.
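A minimal vertex shader sketch of that setup follows; the uniform and attribute names here (uModel, uView, uProjection, uLightPosition) are assumptions for illustration, not the exact names from our demos:

uniform mat4 uModel;         // object -> world
uniform mat4 uView;          // world -> eye
uniform mat4 uProjection;    // eye -> clip
uniform vec4 uLightPosition; // light position given in world coordinates

attribute vec4 aPosition;    // vertex position in object coordinates

varying vec3 vEyePos;   // vertex position in eye space
varying vec3 vLightPos; // light position in eye space

void main() {
    vec4 eyePos = uView * uModel * aPosition; // object -> eye
    vEyePos = eyePos.xyz;
    // the light is already in world space, so apply only the view matrix
    vLightPos = (uView * uLightPosition).xyz;
    gl_Position = uProjection * eyePos;
    // (normals and the lighting computation itself are omitted here)
}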
Wednesday
We briefly mentioned the normal matrix on Monday. When specifying the object vertices and normals, we use object coordinates. The model-view matrix transforms the coordinates of the vertex geometry to eye space by first applying the model matrix and then applying the view matrix. But we also need to transform the normal vectors when moving to eye space.
Consider the image above. When we apply the model-view matrix \(\mathbf{M}\) to all the points on this triangular shape, the tangent vector \(\vec{v}\) is also modified by the model-view matrix.
In object space, the normal is perpendicular to the tangent vector by definition, so \(\vec{v} \cdot \hat{n} = v^T n = 0\). After transforming the geometry by the model-view matrix \(\mathbf{M}\) and the normals by a normal matrix \(\mathbf{N}\), we would like to preserve the orthogonality of these vectors:
\[ (\mathbf{M}\vec{v}) \cdot (\mathbf{N}\hat{n}) = v^T \mathbf{M}^T \mathbf{N} n \]
If \(\mathbf{M}^T\mathbf{N}=I\), the identity matrix, this reduces to the original condition \(v^T n = 0\), so the transformed vectors are still orthogonal. We therefore choose \(\mathbf{N}=\mathbf{M}^{-T}\), the inverse transpose of the model-view matrix.
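In code, the normal matrix can be derived directly from the model-view matrix. A minimal sketch in GLSL, assuming a uniform named uModelView:

// inverse() and transpose() on matrices require GLSL ES 3.00 (WebGL2);
// in WebGL1 you would compute the normal matrix once per frame in
// JavaScript and pass it in as its own uniform.
uniform mat4 uModelView; // assumed uniform name

vec3 transformNormal(vec3 n) {
    mat3 normalMatrix = transpose(inverse(mat3(uModelView)));
    return normalize(normalMatrix * n); // renormalize after the transform
}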
Transition to Path Tracing methods
Until this point we have focused primarily on the standard OpenGL pipeline, consisting of the following primary steps:
- Creation of geometry buffers on the CPU
- Geometry processing in the vertex shader (GPU)
- Primitive assembly and rasterization (GPU)
- Fragment processing in the fragment shader (GPU)
- Output of the final image
The key components we implemented or used, including model transforms, the view matrix, orthographic and perspective projections, and the Blinn-Phong lighting model, were once built into legacy versions of OpenGL as a fixed function pipeline. The flexibility of programmable shaders allows us to recreate these now deprecated or disabled features of OpenGL, as well as explore more advanced topics that were not possible in the fixed function pipeline.
We will now take a pause from the standard pipeline to talk about path tracing methods, an alternative rendering technique. We will be using some new tools, features, and ideas that we have not seen before in this course, but we will also be reusing ideas and concepts such as our linear algebra toolbox and components of our lighting model.
Historically, path tracing, ray tracing, and ray marching relied more on CPU methods than GPU methods, as the GPU architecture was better suited to the traditional geometry buffer pipeline. But this is beginning to change, and you will see how we can leverage the fragment shader to write a basic ray marcher for your midterm project.
Rays
At the core of ray tracing and ray marching is a single geometric primitive: the ray. A ray is a path extending infinitely in a given direction from an origin point. We typically use a parametric definition of a ray:
\[ \vec{r}(t) = p + t\,\vec{d} \]
The ray \(\vec{r}(t)\) above starts at the point \(p\) and travels in the direction \(\vec{d}\). We can define points along the ray with the parameter \(t\). At \(t=0\), we are at the point \(p\). At \(t=2\), we are at the point \(p + 2\vec{d}\).
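This definition translates directly into code. A minimal sketch in GLSL (the struct and function names are our own):

// A ray as an origin plus a direction, assumed to be normalized.
struct Ray {
    vec3 origin;    // the point p
    vec3 direction; // the direction d
};

// Evaluate r(t) = p + t*d.
vec3 rayAt(Ray r, float t) {
    return r.origin + t * r.direction;
}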
Path tracing basics
To render an image with path tracing techniques, we define the following key components:
- The eye or camera origin. This will be the origin of all our primary rays.
- An image plane: a region in space onto which we will project our final image. You can think of it as the near plane in the perspective projection frustum, though we will not be explicitly using a perspective projection matrix.
- Scene objects: what we want to view.
The basic path tracing algorithm divides the image plane into a 2D array of pixels. For each pixel, we create a ray starting from the eye and passing through the center of the pixel. The goal in path tracing is to determine the color of each pixel by tracing the ray as it interacts with the scene.
Since the basic step is done on a per-pixel basis, it is a good candidate for parallelization through the fragment shader. Two primary methods of path tracing are ray tracing and ray marching. In ray tracing, you specify the scene as a set of analytic surfaces. To model, e.g., a sphere, you simply specify its center and radius. Some basic algebra allows you to compute the intersection point between a ray and a sphere, if it exists. This removes the need to partition the sphere into a triangular mesh. However, to determine which object a ray intersects, you may need to check every object in the scene or build separate acceleration structures. Ray tracing is an active area of graphics development and research, but we will explore a different path tracing method known as ray marching, or sphere tracing.
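As an aside, here is what that basic algebra can look like for a sphere. Substituting the ray \(p + t\vec{d}\) into the implicit sphere equation \(\lVert x - c \rVert^2 = r^2\) gives a quadratic in \(t\). A hedged sketch (the parameter names are our own, and rays starting inside the sphere are treated as misses for simplicity):

// Analytic ray-sphere intersection. Assumes rd is normalized. Returns
// the smallest non-negative t where the ray hits, or -1.0 on a miss.
float sphereIntersect(vec3 ro, vec3 rd, vec3 center, float radius) {
    vec3 oc = ro - center;                   // ray origin relative to center
    float b = dot(oc, rd);                   // half the linear coefficient
    float c = dot(oc, oc) - radius * radius; // constant term
    float disc = b * b - c;                  // discriminant
    if (disc < 0.0) return -1.0;             // no real roots: the ray misses
    float t = -b - sqrt(disc);               // nearer of the two roots
    return (t >= 0.0) ? t : -1.0;
}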
Signed distance fields and Ray Marching
The primary challenge in all path tracing techniques is how to model the scene and compute ray-object intersections. In the ray marching method, an object surface is represented by a signed distance field, or SDF. The SDF is a function that, given a point, returns a scalar distance indicating how close that point is to a surface. The result is signed, with a positive value indicating the point is outside the surface and a negative value indicating it is inside.
The SDF for a sphere of radius \(r\) centered at the origin is given by
\[ f(\vec{p}) = \lVert \vec{p} \rVert - r \]
When the value of the SDF is 0, the point is on the surface. The basic ray marching method uses an SDF for the entire scene to gradually traverse or march along the ray until it hits the surface of an object.
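In GLSL, this sphere SDF is a one-liner (sdSphere is an assumed name, following a common ShaderToy convention):

// Signed distance from p to a sphere of radius r centered at the origin:
// positive outside, zero on the surface, negative inside.
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}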
Friday
- Basic ShaderToy mock up
- Ray marching steps
ShaderToy Mock up
ShaderToy is a site for sharing procedural shaders, many of which use ray marching methods. At its core, you can edit a modified GLSL fragment shader to create real-time images in the browser. The Shader Toy Demo shows how we might create something similar using the tools we are familiar with in CS40, without all the extra features of ShaderToy.
One question you might have regarding ShaderToy in the context of our prior work is "what happened to the vertex shader?" Take a look at the demo and see if you can determine why we don’t need to specify the vertex shader in ShaderToy.
ShaderToy supports several built-in variables that are not part of the GLSL standard. Examples include iTime and iResolution. We can recreate these using uniforms in our code. ShaderToy also has a slightly different prototype for its main function (mainImage instead of main), but again, this is mostly syntactic sugar that we could easily recreate if needed.
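A minimal sketch of a plain WebGL fragment shader that recreates these built-ins as uniforms might look like the following. The uniform names mirror ShaderToy's, but updating them each frame from JavaScript is up to us (and we use a vec2 for iResolution here, where ShaderToy itself uses a vec3):

precision mediump float;

// ShaderToy-style built-ins recreated as ordinary uniforms.
uniform float iTime;       // seconds since the shader started
uniform vec2  iResolution; // canvas resolution in pixels

void main() {
    // normalized pixel coordinates, (0,0) at the lower left
    vec2 uv = gl_FragCoord.xy / iResolution;
    // a simple animated color pattern
    vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));
    gl_FragColor = vec4(col, 1.0);
}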
With this mock up, you can create full images completely in the fragment shader. Very little twgl or vertex shader code is necessary.
Ray Marching Sketch
We can convert the Ray Marching concepts into the pseudocode sketch below:
given: eye, image plane, scene
for each pixel in image plane:
    ray = makeRay(eye, pixel)
    object = march(ray)
    if object:
        color = lighting(object.clr, object.p, object.n, eye)
    else:
        color = background
When implementing a ray marcher in the fragment shader, the shader automatically runs for each pixel in the image plane, so the outer for loop is not necessary in the shader source.
To make a ray, we just need to determine the coordinates of the center of the pixel and create a direction vector from the eye to the center of the current pixel.
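In the fragment shader, gl_FragCoord hands us the current pixel coordinates directly. A minimal sketch, assuming an eye at the origin looking down the -z axis (the uniform name and focal length are arbitrary choices for illustration):

uniform vec2 uResolution; // image size in pixels (assumed uniform name)

// Direction of the primary ray through the current pixel. Note that
// gl_FragCoord.xy already lies at the center of the pixel.
vec3 makeRayDir(vec2 fragCoord) {
    // recenter so (0,0) is the middle of the image, scale so y spans [-1, 1]
    vec2 p = (2.0 * fragCoord - uResolution) / uResolution.y;
    return normalize(vec3(p, -1.5)); // 1.5 is an arbitrary focal length
}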
The lighting model can be the standard Blinn-Phong illumination model, though with the entire scene available, we can also add global illumination effects like detecting whether an object is in shadow.
The primary steps to figure out are how to march and how to compute surface normals given only signed distance fields.
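Normals first, since the idea is short: a standard approach is to estimate the gradient of the SDF numerically, since the gradient of a distance field points directly away from the nearest surface. A sketch, with sceneSDF as an assumed name for the scene-wide SDF:

float sceneSDF(vec3 p); // assumed: the SDF for the entire scene

// Estimate the surface normal at p as the numerical gradient of the SDF,
// using central differences along each axis.
vec3 estimateNormal(vec3 p) {
    const float h = 0.001; // small offset; tune to the scale of your scene
    return normalize(vec3(
        sceneSDF(p + vec3(h, 0.0, 0.0)) - sceneSDF(p - vec3(h, 0.0, 0.0)),
        sceneSDF(p + vec3(0.0, h, 0.0)) - sceneSDF(p - vec3(0.0, h, 0.0)),
        sceneSDF(p + vec3(0.0, 0.0, h)) - sceneSDF(p - vec3(0.0, 0.0, h))
    ));
}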
Marching
To march a given ray, we start at the eye and query the SDF of the entire scene to find the distance \(d\) to the closest surface. We then march along the ray direction a distance \(d\) and repeat the query until:
- You hit something: the SDF returns a value below some small \(\epsilon > 0\).
- You travel too far (some maximum distance) from the eye.
- You take too many steps.
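Putting these three stopping conditions together, the marching loop itself is only a few lines. A sketch, where sceneSDF and the constants are assumed names and values you would tune for your scene:

const int   MAX_STEPS = 100;   // give up after too many steps
const float MAX_DIST  = 100.0; // give up after traveling too far from the eye
const float EPSILON   = 0.001; // "close enough" to count as a hit

float sceneSDF(vec3 p); // assumed: the SDF for the entire scene

// March along the ray ro + t*rd; return the distance t to the first hit,
// or -1.0 if the ray escapes the scene.
float march(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < MAX_STEPS; i++) {
        float d = sceneSDF(ro + t * rd); // distance to the nearest surface
        if (d < EPSILON) return t;       // hit: on (or very near) a surface
        t += d;                          // it is safe to step forward by d
        if (t > MAX_DIST) break;         // traveled too far: call it a miss
    }
    return -1.0;
}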