QOpenGLWidget is a subclass of the QWidget class. In a QOpenGLWidget, you should call update() instead of paintGL() to refresh the scene. update() will schedule a paintGL() call for you.
Let's try this again.
The examples repo should have two remotes. The first, origin, refers to your personal copy on the github server. The second, upstream, refers to a read-only copy that I will update throughout the semester. Let's check the status of our remotes and get some updates.
$ cd ~/cs40/examples
$ git remote show origin upstream
If you do not have an upstream remote, you can add it now.
git remote add upstream git@github.swarthmore.edu:CS40-F18/examples.git
git fetch upstream
$ git remote show origin
X11 forwarding request failed on channel 0
* remote origin
  Fetch URL: git@github.swarthmore.edu:CS40-F18/examples-adanner1.git
  Push URL: git@github.swarthmore.edu:CS40-F18/examples-adanner1.git
  HEAD branch: master
  Remote branches:
    master tracked
  Local branches configured for 'git pull':
    master merges with remote master
  Local refs configured for 'git push':
    master pushes to master (up to date)
$ git remote show upstream
X11 forwarding request failed on channel 0
* remote upstream
  Fetch URL: git@github.swarthmore.edu:CS40-F18/examples.git
  Push URL: git@github.swarthmore.edu:CS40-F18/examples.git
  HEAD branch: master
  Remote branches:
    master tracked
$ git branch -avv
  master                  ddee7fe [origin/master] Last commit message
  remotes/origin/HEAD     -> origin/master
  remotes/origin/master   ddee7fe Last commit message
  remotes/upstream/master 19169f2 upstream commit message

Try to get the latest changes from upstream:
git fetch upstream
git merge upstream/master
fatal: refusing to merge unrelated histories
Let's fix the issue of unrelated histories by forcing the merge, but keeping any of our local changes.
git merge -X ours --allow-unrelated-histories upstream/master
After this step, you will need to manually edit the top level CMakeLists.txt to add the new subdirectories.
$ tail ~/cs40/examples/CMakeLists.txt
add_subdirectory(w01-intro)
# add these two lines
add_subdirectory(w02-opengl)
add_subdirectory(w03-cube)
If everything went well, you should be able to go into your build directory and run make -j8 to compile the week 02 and 03 samples.
cd build
make -j8
cd w03-cube
./cube
The x, y, and z keys allow you to rotate the cube. It's kind of hard to tell it's a 3D cube at this point. We'll be working on that in the next few weeks.
Often after a merge, git will drop you into an editor to edit the commit message. For some of you, the default editor may be the dreaded vim. To accept the default message and quit vim, type <ESC> :wq. To change your editor, use

git config --global core.editor

followed by the editor of your choice. For atom users, e.g.,

git config --global core.editor "atom --wait"
Open ~/cs40/examples/w03-cube/mypanelopengl.cpp to see what is the same/different:

- initializeGL(), paintGL(), and resizeGL(int width, int height)
- keyPressEvent for user interaction
- the mat4 type in the glsl/vertex shader
The w03-cube example demonstrates a few design patterns to manage the complexity of 3D geometric objects. Let's look at a brief overview of creating the VBO.
In the header file mypanelopengl.h, we use typedef to alias the QVector3D class to the types point3, vec3, and color3. While the compiler won't prevent us from doing something like

point3 pt(10,0,0);
color3 c = pt;

using descriptive variable names and the type aliases appropriately might tip us off if we start doing something odd. But in general, these are not new types, they are just nicknames for QVector3D.
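In sketch form, the aliases amount to (see the actual header for details):

typedef QVector3D point3; // a location in space
typedef QVector3D vec3;   // a direction/offset
typedef QVector3D color3; // an rgb color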
A cube has eight unique vertices. We use m_vertices and m_vertex_colors to store the geometry and color of each of these vertices once to avoid copy/paste errors. In the VBO, we need to specify the vertices of each triangle composing the cube. We can try to be clever and come up with a nice TRIANGLE_STRIP layout to reduce repetition, but for now, let's keep the design simple and specify the cube as six faces, 12 triangles, or 36 total vertices in the VBO. The makeCube() method shows how to leverage a small helper method quad(...) to create the geometry and color for one face of the cube. Calling this function six times builds the entire cube while reducing code duplication and weird for loop indices. Note the use of a static variable in the quad(...) method to keep track of where we are in the array of all the triangle vertices.
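A hedged sketch of quad(...) and makeCube() (the exact signature, index sets, and array names like m_points and m_colors are assumptions; see mypanelopengl.cpp for the real code):

void MyPanelOpenGL::quad(int a, int b, int c, int d) {
    static int idx = 0;                  // persists across calls: our position
                                         // in the array of all 36 vertices
    int indices[6] = {a, b, c, a, c, d}; // two triangles for one face
    for (int i = 0; i < 6; i++) {
        m_points[idx] = m_vertices[indices[i]];
        m_colors[idx] = m_vertex_colors[a]; // face color = color of first index
        idx++;
    }
}

void MyPanelOpenGL::makeCube() {
    quad(1, 0, 3, 2);
    quad(2, 3, 7, 6);
    quad(3, 0, 4, 7);
    quad(6, 5, 1, 2);
    quad(4, 5, 6, 7);
    quad(5, 4, 0, 1); // referenced again in the culling exercise below
}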
Once all the geometry is created in CPU memory, we can copy it to the VBO with two writes in createVBO(): one for the vertices, and one for the colors. It is possible to interleave the geometry and color and perform one write, but I find this approach easier to grasp the first time around.
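A sketch of the two writes (member names and the QOpenGLBuffer pointer m_vbo are assumptions):

int vertexBytes = 36 * sizeof(point3);
int colorBytes  = 36 * sizeof(color3);
m_vbo->create();
m_vbo->bind();
m_vbo->allocate(vertexBytes + colorBytes);       // reserve space for both blocks
m_vbo->write(0, m_points, vertexBytes);          // vertices first
m_vbo->write(vertexBytes, m_colors, colorBytes); // then colors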
With the VBO and program in place, we can connect the VBO data to the shader inputs in setupVAO(). Note in this example we are connecting both the position information and the color information. The variables vPosition and vColor are declared as in variables in vshader.glsl to indicate they come from the VBO and vary for each parallel call of the vertex shader. With vColor as an additional vertex shader input, we have already seen one new feature of our shaders in this example, though it is handled just like any other varying attribute: it is included in the VBO and connected to the VAO using enableAttributeArray and setAttributeBuffer.
We can also pass variables from the vertex shader to the fragment shader. Note the variable declaration out vec3 color; in our vertex shader. Any output variable from a vertex shader is transmitted to the fragment shader as an additional input variable, and indeed, you can see in vec3 color; in the corresponding fshader.glsl. Recall from our original triangle example, though, that the vertex shader inputs are only defined at the vertices of each triangle, while a triangle with three vertices could generate many more than three fragments. How are the vertex shader outputs mapped to fragment shader inputs? Usually this is done through interpolation.
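In sketch form, the matching declarations look something like this (simplified; see vshader.glsl and fshader.glsl for the actual code):

/* vshader.glsl (sketch) */
in vec3 vPosition;  // from the VBO
in vec3 vColor;     // from the VBO
out vec3 color;     // sent on to the fragment shader

void main() {
    color = vColor;
    gl_Position = vec4(vPosition, 1.0); /* rotation omitted in this sketch */
}

/* fshader.glsl (sketch) */
in vec3 color;      // interpolated from the vertex shader output
out vec4 fragColor;

void main() {
    fragColor = vec4(color, 1.0);
}

To see interpolation in action, try the following exercise: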
The quad(...) method uses four indices to generate two triangles consisting of six total vertices. You'll notice that the provided implementation assigns the color of each vertex to be the color of the first index in the quad(...) call. Modify the first triangle to use the first three indices for the first three colors of the vertices. You can leave the other triangle unmodified. Compile and run. Note how the colors change smoothly on half of each face.
A static cube looks a lot like a square. To get a 3D illusion, it helps to rotate the scene around to see different faces. In our vertex shader we add a uniform vec3 theta containing the angles, in degrees, of rotation about the $x$, $y$, and $z$ axes. Instead of converting each component to radians and computing sines and cosines one at a time, GLSL, the language of shaders, allows you to apply these functions directly to vector types and will implicitly do the looping for you. Thanks, GLSL! That makes it easy to define $4 \times 4$ rotation matrices about each axis and apply the rotations to the original geometry.
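In GLSL, the angle conversion and per-axis rotation matrices might look like the following sketch (the actual vshader.glsl may differ; remember that the mat4 constructor is column-major):

uniform vec3 theta; // rotation angles about x, y, z, in degrees

in vec3 vPosition;
in vec3 vColor;
out vec3 color;

void main() {
    vec3 angles = radians(theta); // componentwise degrees to radians
    vec3 c = cos(angles);         // cos/sin applied to each component
    vec3 s = sin(angles);

    mat4 rx = mat4(1.0,  0.0, 0.0, 0.0,
                   0.0,  c.x, s.x, 0.0,
                   0.0, -s.x, c.x, 0.0,
                   0.0,  0.0, 0.0, 1.0);

    mat4 ry = mat4(c.y, 0.0, -s.y, 0.0,
                   0.0, 1.0,  0.0, 0.0,
                   s.y, 0.0,  c.y, 0.0,
                   0.0, 0.0,  0.0, 1.0);

    mat4 rz = mat4( c.z, s.z, 0.0, 0.0,
                   -s.z, c.z, 0.0, 0.0,
                    0.0, 0.0, 1.0, 0.0,
                    0.0, 0.0, 0.0, 1.0);

    color = vColor;
    /* one possible order; see the note on the x and y keys below */
    gl_Position = rz * ry * rx * vec4(vPosition, 1.0);
}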
The angles can be controlled through the x, y, and z keys on the keyboard. See the special QWidget method keyPressEvent. There are a few items to note when using this method. First, the widget must have focus to respond to keyboard events. In this example, I set this in the UI designer, but you can also add setFocusPolicy(Qt::StrongFocus); in your widget constructor to set the focus. Second, if the user presses a key that you don't care about, don't just ignore it; pass the event to the base class using, e.g., QWidget::keyPressEvent(event);. Perhaps the base class is interested in the event. Finally, you will probably want to call update() at the end of your handler if the keypress modifies the scene.
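A hedged sketch of a handler following these guidelines (the key-to-angle mapping and the m_theta member are assumptions, not the released code):

void MyPanelOpenGL::keyPressEvent(QKeyEvent *event) {
    switch (event->key()) {
    case Qt::Key_X:
        m_theta.setX(m_theta.x() + 5); // hypothetical angle member
        update();                      // the scene changed; schedule a repaint
        break;
    case Qt::Key_Y:
        m_theta.setY(m_theta.y() + 5);
        update();
        break;
    case Qt::Key_Z:
        m_theta.setZ(m_theta.z() + 5);
        update();
        break;
    default:
        /* not our key; maybe the base class wants it */
        QWidget::keyPressEvent(event);
    }
}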
You may notice that the x and y keys do not always change the scene in ways you might expect. Think a bit about why this might be and consider the order in which the rotations are applied. Also think about the resulting frames that are created after each rotation. You'll get much more practice with this in Lab 04.
OpenGL is aware of the orientation of your triangles, and this can be a blessing or a curse. Consider a closed surface like our cube. Obviously we want to render/draw the outside of the box, but do we need to draw the back side of each triangle? If the box is closed, we would never see it, and rendering these triangles is a waste of time. In some applications, we may want to render both sides of an object, e.g., a piece of paper or a thin sheet of metal. Or maybe we have a box with a removable lid. So sometimes rendering both sides of a triangle is helpful and sometimes it isn't. For the times when it isn't, can we speed things up by telling OpenGL not to draw one (invisible) side of a triangle? Yes, this is possible with face culling and orientation testing.
In this example, the faces of the cube were oriented so that anytime a viewer is looking at the box from the outside, the face is oriented counterclockwise. Knowing that we oriented our faces this way, we can tell OpenGL not to render, or to cull, faces that are oriented clockwise.
glEnable(GL_CULL_FACE); // disabled is default
glCullFace(GL_BACK);    // cull back facing triangles
glFrontFace(GL_CCW);    // front facing triangles are oriented ccw
Find the call quad(5, 4, 0, 1) and modify the indices so that it is oriented clockwise. What happens when you run the code now and rotate the scene to view this face?
Remove the glEnable(GL_CULL_FACE); from the previous example, but add the following lines to the fragment shader:
fragColor = vec4(color,1.);
/* New */
if(!gl_FrontFacing){
    fragColor = vec4(0.3,0.3,0.3,1);
}

The $-\hat{x}$ magenta face should now be grey, but still visible.
For most 3D applications we will want to enable the depth buffer in initializeGL().

glEnable(GL_DEPTH_TEST);
/* temporary weirdness until we do projections */
glClearDepth(0);        // 1 is default
glDepthFunc(GL_GEQUAL); // GL_LESS is default
In paintGL(), we clear the color buffer and depth buffer before each draw. Later in the course, we may see multi-pass rendering approaches where we don't always clear these buffers.
/* clear both color and depth buffer */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
We can also view the depth values in the fragment shader:

in vec3 color;
out vec4 fragColor;

void main()
{
    /* gl_FragCoord.z contains depth info in range -1 to 1 */
    float d = (gl_FragCoord.z + 1) / 2.;
    fragColor = vec4(d, d, d, 1.);
}
In examples/w04-stack-texture, we will explore some concepts used in lab04, including a sphere geometric object.
Prior to compiling these examples, it is important to add a symlink in your source tree to point to the directory containing images used for texture mapping.
cd ~/cs40/examples/w04-stack-texture
ln -s /usr/local/doc/textures data
cd ..
At the top level CMakeLists.txt, add this subdirectory.
add_subdirectory(w04-stack-texture)
Then build and run:

cd ~/cs40/examples/build
make -j8
cd w04-stack-texture
./sphere
Initially, two solid colored squares should appear. The S key will toggle between drawing a sphere and the two squares. The geometry of the square and sphere are set up much like the cube case we discussed on Wednesday. Instead of assigning a color per vertex, the shape color is assigned via a uniform variable in the vertex shader.
As an alternative to uniform colors, or per vertex colors, we can sample colors from an image in a technique called texture mapping. Let's modify the fragment shader to see this effect and then we will walk through how the image is mapped to the geometry.
/* Use this color in the Fragment shader */
fragColor = texture(tex0, texCoord);
The general outline for texture mapping is as follows:
Like many OpenGL features, texturing can be enabled or disabled with the glEnable and glDisable functions.
glEnable(GL_TEXTURE_2D);
Qt has a QOpenGLTexture class that supports easy loading of image data onto the GPU.
m_texture = new QOpenGLTexture(QImage("data/earth.png").mirrored());
Each vertex must also be assigned a texture coordinate. This is easiest to see in the square code, but it is also done in the sphere code. After constructing the texture coordinates, we write them to the VBO and connect them to the vertex shader similar to what we did with the vertex colors on Wednesday.
In the vertex shader, we have the texture coordinate as an input from the VBO and as an output to the fragment shader. The output value is interpolated between vertices.
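In sketch form, the relevant vertex shader declarations might be (input names are assumptions; see the actual vshader.glsl):

in vec4 vPosition;
in vec2 vTexCoord;  // texture coordinate from the VBO
out vec2 texCoord;  // interpolated, then read by the fragment shader

void main() {
    texCoord = vTexCoord;
    gl_Position = vPosition; /* transforms omitted in this sketch */
}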
The fragment shader declares a uniform sampler2D tex0; that will connect to the image data on the GPU. The line fragColor = texture(tex0,texCoord); will sample the color from the image using the interpolated texture coordinates.
Everything is set up on the GPU side in the shaders and VBOs. We just need to make a few connections between the CPU and GPU in paintGL(). The first is to enable the texture sampler.

/* There is a small typo "Tex0" in the code released to you */
m_shaderProgram->setUniformValue("tex0", 0);

This allows the fragment shader to sample from the current texture. Like many OpenGL things, we set the current texture through bind.
m_texture->bind();
We could have multiple textures loaded on the GPU and switch between them simply by calling bind on the appropriate texture.
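For example, a hypothetical sketch of drawing two objects with different textures (the texture and object names here are made up for illustration):

m_earthTexture->bind(); // subsequent samples come from the earth image
m_sphere->draw();
m_metalTexture->bind(); // now samples come from the metal image
m_square->draw();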
The Square and Sphere classes are the same as those used in lab4, and similar to the lab2/lab3 shapes in that they build off a Drawable base class. Instead of sharing a VBO through a GPUHelper, each Drawable has its own VAO/VBO.
These objects currently have geometry and texture coordinates in the VBO. We may expand them later.
The Drawable header file has a small typo: Drawable::release() only releases the underlying VAO, not the program. Since the program usually stays the same across objects, it is not necessary to be overly aggressive in releasing it. It is safe to call bind on an object that is already bound.
The Square is easiest to understand in terms of implementation. The Sphere uses the same ideas, but uses spherical coordinates to step through the slices and stacks and construct the individual triangles. This sphere is rendered as several latitudinal triangle strips and two polar triangle fans. The poles are aligned with the $z$ axis.
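As a rough sketch of the idea (not the actual Sphere code), a single vertex on a sphere with the poles on the $z$ axis can be computed from spherical coordinates:

#include <cmath>

// phi in [0, pi] measured down from +z; theta in [0, 2*pi) around z.
point3 sphereVertex(float r, float phi, float theta) {
    return point3(r * std::sin(phi) * std::cos(theta),
                  r * std::sin(phi) * std::sin(theta),
                  r * std::cos(phi));
}

Stepping theta around the circle at two adjacent phi values produces the paired vertices of one latitudinal triangle strip, and the rows touching the poles become the two polar triangle fans.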
The matrix stack is primarily used to move objects from object space to world space via model matrix transforms. The primary model transforms we will be using are the affine transforms: translate, rotate, and scale. Each can be encoded as a 4x4 matrix as described on the Learn OpenGL tutorial, but for this course, I want you to focus more on the geometric interpretation. The QMatrix4x4 class will provide an implementation of most of the needed methods.
Any time you apply a QMatrix4x4 transform method like translate, or even ortho, the code will build the appropriate transform matrix and combine it with the current matrix by multiplying on the right. Thus m.translate(...) is equivalent to $M' = M T$, where $M'$ is the new value of m, $M$ is the original value of the m matrix, and $T$ is a translation matrix.
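A short sketch of how successive calls compose (QMatrix4x4 and QVector4D are standard Qt classes):

QMatrix4x4 m;                      // starts as the identity
m.translate(1.0f, 0.0f, 0.0f);     // m = I * T = T
m.rotate(45.0f, 0.0f, 0.0f, 1.0f); // m = T * R: multiplied on the right
// Applying m to a point rotates the point first, then translates it:
QVector4D p = m * QVector4D(0.0f, 1.0f, 0.0f, 1.0f);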
If you want to get just a fresh translation matrix, it may be best to reset your matrix object to the identity matrix first, or use the matrix stack which we will describe shortly.
m.setToIdentity();
m.translate(...);