
Lesson 4: My First Triangle Part A

Rendering Triangles: The basic building block of computer graphics

Now that we've learned the basics of working with a window, we can finally begin to look at rendering graphics!

This tutorial is split over two parts, as we introduce shaders and everything else we need to draw our first triangle. This first part covers what shaders are, and how we can use them to draw a triangle in the simplest possible way. In part two we'll refactor our code into a shader class, add proper error handling, and cover loading (and hot-reloading) shaders from a file.

Shaders

So what is a shader?

Originally, they started out as small programs that would run on the GPU and calculate what colour each pixel should be, taking into account things like lighting - hence the name. Over time they have grown in scope, and now we have several kinds of shaders, some only loosely related to the original name.

The original shaders have become known as fragment shaders (pixel shaders in some other APIs), and are incredibly powerful. Like before, they're typically a small program we write which runs on the GPU, massively in parallel, for every pixel being rasterised. Here, rasterising is the process of figuring out which (if any) on-screen pixels a triangle occupies, and then setting those pixels to the correct colour.

Despite whatever the name might imply, shaders really are just programs designed to be executed on the GPU rather than the CPU. They are generally small and simple in scope, and designed to run in parallel. These days there are five types of shader forming the GPU rendering pipeline: the vertex shader, two types of tessellation shader, the geometry shader, and finally the fragment shader. Unless you are rendering complex effects, which we will cover in later tutorials, you generally only need two of them: the vertex shader and the fragment shader. These two are fundamental and required by OpenGL, while the other three are newer, optional extras.

So for now we will concentrate on vertex shaders and fragment shaders.

In OpenGL, you draw things onto the screen by passing in the vertex (corner) coordinates of the triangles you want to draw. Triangles are used to draw everything, whether it be putting two next to each other to draw a square, or drawing hundreds to form a model of a human; it's always triangles! This is because they are the simplest shape to have an internal area, and can therefore be combined to form any other shape (or approximate it, in the case of circles).

Each of these vertices is first passed to a vertex shader program. The vertex shader's task is to manipulate these vertices in some way. This could be applying transformations like scaling or rotation, or projecting them into screen coordinates. You might be reading this thinking: well, I could just apply all of these transformations myself, looping over each vertex in ordinary code. And you would be correct! However, the key point here is the GPU. It can do all of that in one shot. Looping over hundreds of thousands of coordinates and performing these transformations yourself takes a lot of processor time. The GPU is essentially a specialised piece of hardware for running these simple programs, in vast numbers, on all the vertices at once.

As we aren't using the other shaders yet, our transformed vertex coordinates will flow out of the vertex shader and be rasterised by OpenGL using our fragment shader. Simply put, for each triangle we draw, the GPU figures out which pixels in our window lie inside that triangle. For each pixel that does, the GPU runs the fragment shader to decide what colour it should be, based on the properties of the triangle.

In summary, shaders are just small programs that run on the GPU. When we want to draw a triangle, the vertex shader transforms its coordinates as required. Our GPU then figures out which pixels it occupies, and runs the fragment shader to set those pixels' colours. Working this way maximally exploits the GPU's hardware, whose only job is to perform these very tasks as fast as possible. And it all happens blazingly fast!

GLSL

To write our shader programs, we use a special language called GLSL (GL Shading Language). This is a simple language, very similar to C++, designed to match the GPU's rendering capabilities. It's pretty similar to HLSL (High-Level Shader Language), the DirectX equivalent.

Conceptually, at run-time you get the source code for your shader, usually by reading it from disk, though it can also be stored as a regular string in your code. You then pass the code to the GPU driver, which will compile and optimise the shader for the specific GPU of the machine the program is running on - which is not necessarily the machine in front of you right now. Once the driver has compiled your shader program, you can use it to draw things.

Now, that's the theory. It's time to get our hands dirty writing our shaders!

Vertex Shader

As we only want to get a triangle on screen for now, and not dynamically manipulate its position in any way, our vertex shader is going to be fairly simple. We'll add it to main.cpp as a regular string in a moment, but first let's look at it as a whole.

#version 460

layout(location = 0) in vec3 aPosition;

void main()
{
    gl_Position = vec4(aPosition, 1.0);
}

So what's going on here?

We always start our shader programs by writing the version of GLSL on the first line. In fact, on some systems it's a hard requirement that the first line of any GLSL shader contains this. This way we can be sure that any functions we call will be understood by the version of OpenGL our program is running on.

Since OpenGL 3.3, the GLSL language version has nicely tracked the version of OpenGL itself. So GLSL version 330 was designed to work with OpenGL 3.3, and as we're using OpenGL 4.6 we use version 460. The "core" profile of the version is assumed by default, but you can also specify that here, for example #version 460 compatibility.

Before our main function, we also declare the inputs of our vertex shader. As we spoke about before, our vertex shader program will be executed on every single vertex we want to draw, and our shader will modify them somehow. So we declare a variable starting with layout(location = 0). This says that the variable we're declaring will be found at attribute location zero - the first slot of per-vertex data we pass in.

The in part denotes that this is an input variable for our program. The variable type is a vec3, or vector of three elements, or simply an array of three floats. If you've not seen this mathematical wording before, just remember that a vector is just another word for an array!

Finally we have the name of the variable, aPosition, as it will contain the position of the vertex in 3D space. The prefix "a" is used here as a reminder that the variable is a vertex attribute - it is a variable which is different for each vertex we're drawing. There are other kinds too, like uniform variables, which are simply the same for every vertex. Differentiating between these kinds of variables allows the GPU to optimise how the memory is laid out.
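As a purely hypothetical illustration (we won't actually use uniforms for a few lessons yet, and the name uTransform is my own invention), the two kinds of declaration might sit side by side like this:

layout(location = 0) in vec3 aPosition; // a vertex attribute: different for every vertex
uniform mat4 uTransform;                // a uniform: identical for every vertex in a draw call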

We then have the main function of our program. Like C++, this is the program's entry point. This is fairly straight-forward for now. We take the input position, and set the final rendered position of the vertex to that input, without really modifying it.

We make use of gl_Position, a special GLSL built-in variable, to set the final output position of our vertex. We do however convert our aPosition variable to a vec4 type, and set the final component to a value of one. This is because gl_Position expects a homogeneous coordinate. Essentially, this form of writing coordinates uses an extra float value of 1 to represent points in 3D space, and 0 to represent directions. This is really useful when dealing with matrix mathematics, which is why it's used here, as it allows your mathematics not to care whether it's working with directions or positions. Anyway, you don't need to worry about this for now - just remember that a final value of 1 means you're specifying a point in 3D space. This is all we need to do here: our triangle will appear exactly where we tell it to appear. The power of these shaders will become more apparent soon though!
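If you'd like to see the homogeneous idea a little more concretely, here's a small hypothetical GLSL fragment - the values are made up purely for illustration:

vec4 point     = vec4(2.0, 3.0, 4.0, 1.0); // final component 1: a position in space
vec4 direction = vec4(0.0, 1.0, 0.0, 0.0); // final component 0: a direction only

// Multiplying by a transformation matrix will translate the point,
// but leave the direction unmoved, because the matrix's translation
// part is scaled by that final component.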

Fragment Shader

As we mentioned, OpenGL will figure out exactly which pixels in the final rendered image belong to our triangle, and then run this program to decide what colour they should be. Fragment shaders are capable of complex lighting calculations, but for now we will just set each rendered pixel of our triangle to an orange colour. Please forgive my taste in colour.

#version 460

out vec4 fragment;

void main()
{
    fragment = vec4(1.0, 0.48, 0.02, 1.0);
}

Like our vertex shader, we start by defining the version. We don't have any inputs in this example as we don't need any special data to be able to calculate orange.

We do however declare an output - the pixel's colour. We declare an output of our program named "fragment"; OpenGL assumes that the first output, if not otherwise specified, is the one rendered to our window. We declare it as type "vec4" - a vector of four floats corresponding to the pixel's RGBA values (red, green, blue, and alpha (transparency)).

In the main function of our program, we then set this output pixel equal to a hard-coded set of values. Here, a value of one is the maximum, rather than the more common 255. So the red channel is fully saturated (on), the green channel is about half on, and we have barely any blue. The alpha value of one means the pixel is fully opaque - although this only matters if we perform blending in our code, so changing it won't have any effect just yet. In summary, this program sets the colour of every pixel it is run on to a hard-coded orange value, and that's it for now!

Using Our Shaders

OK so we've got our shader source code planned out, let's adjust our main code to actually use these shaders to draw a triangle. Let's begin by laying out a few variables we'll need:

bool programRunning = true;
bool isFullscreen = false;

GLuint shaderProgram;

GLuint vao;
GLuint vbo;

int init()
{

We create a variable of type GLuint to act as a handle to our shader program. Here, GLuint refers to an unsigned integer. As C++ integers can vary in size depending on the machine architecture, which is not great for writing portable APIs, OpenGL defines its own variable types of fixed size. Don't worry about this too much though, other than that shader program handles are given to us as GLuints.

Similarly, we create two variables for holding data about the triangle we'll draw. To do this, we will use two kinds of buffers.

Let's first talk about the Vertex Buffer Object (VBO). The VBO contains any raw data about vertices you wish to draw. They reside on the GPU and can be accessed for drawing there extremely quickly. For example, the coordinates of the vertices for an object we wish to draw go into a VBO. But we may also use a VBO to store colour information or texture coordinates as well, and various other bits of data too. It's exactly as the name describes, an object to store data about vertices.

The second kind of buffer we will use is perhaps slightly less intuitively named: the Vertex Array Object (VAO). These store the state of the current buffers. What I mean by this is that if you need to bind three VBOs to draw something, instead of binding all of them every single time you need to draw, you would instead just bind them when setting your program up, and effectively "save" this state of "three bound VBOs" into a VAO.

This means that rather than binding each VBO individually every time you draw, with all the overhead that involves, you just issue one single command to use your VAO and all your buffers are ready.

Next up, let's update our init function. First, we'll add the GLSL code for our shaders:

    if(majorVersion < 4 || (majorVersion == 4 && minorVersion < 6))
    {
        printf("Unable to get a recent OpenGL version!\n");
        return -1;
    }
    printf("%s\n", glGetString(GL_VERSION));

    const char* vertexShaderSource =
        "#version 460\n"
        "layout(location = 0) in vec3 aPosition;\n"
        "void main()\n"
        "{\n"
        "    gl_Position = vec4(aPosition, 1.0f);\n"
        "}";

    const char* fragmentShaderSource =
        "#version 460\n"
        "out vec4 fragment;\n"
        "void main()\n"
        "{\n"
        "   fragment = vec4(1.0, 0.48, 0.02, 1.0);\n"
        "}\n";

    ...

Ehhhhh....multi-line string literals are so ugly in C++.

Don't worry. This is only temporary until the second part of this lesson.

For now, I've simply taken each of our shader's source code from above and stored it in a string.

Right, let's create our shader program. Note that the terminology here can be a little confusing: in OpenGL calls, a "program" is the combination of a vertex shader and a fragment shader linked together.

        "   fragment = vec4(1.0, 0.48, 0.02, 1.0);\n"
        "}\n";

    shaderProgram = glCreateProgram();

    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
    glCompileShader(vertexShader);

    glAttachShader(shaderProgram, vertexShader);

    ...

We make a call to glCreateProgram which creates a program on the GPU and returns us a handle to it. We store this in the variable we set up earlier.

We then create our vertex shader in the same manner. This time the call to glCreateShader also requires us to specify which kind of shader we want, so we pass GL_VERTEX_SHADER.

The call to glShaderSource then allows us to pass in the source code for the shader. The first parameter is the shader in question. Next we pass in the number of strings in our array, which is just one, as everything is in a single string. The function then expects a char** - an array of strings. As we have a char*, passing in the address of our pointer is fine; the "first" element of the array will be our char* string. Finally we pass in the lengths of our strings, which can be NULL if the strings are NULL-terminated, which ours is.

Now that we've created a vertex shader on the GPU and set its source code, we can call glCompileShader to tell the driver to actually compile it. We finish up by calling glAttachShader to attach the vertex shader to our shader program.
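We'll add proper error handling in part two, but if you want to check the compilation yourself in the meantime, a minimal sketch might look like the following (the 512-byte log buffer is an arbitrary size of my choosing):

    GLint compiled = GL_FALSE;
    glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &compiled);
    if(compiled != GL_TRUE)
    {
        // Fetch and print the compiler's error messages
        char log[512];
        glGetShaderInfoLog(vertexShader, sizeof(log), NULL, log);
        printf("Vertex shader compilation failed:\n%s\n", log);
    }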

Next we do the same for the fragment shader:

    glAttachShader(shaderProgram, vertexShader);

    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
    glCompileShader(fragmentShader);

    glAttachShader(shaderProgram, fragmentShader);

    ...

We create our fragment shader, set the source, compile it and attach it to our shader program. These calls are all pretty much identical, just with "fragment" instead of "vertex" this time.

Just like regular C++ code, after compiling, the code needs to be linked:

    glAttachShader(shaderProgram, fragmentShader);

    glLinkProgram(shaderProgram);

    glDetachShader(shaderProgram, vertexShader);
    glDeleteShader(vertexShader);
    glDetachShader(shaderProgram, fragmentShader);
    glDeleteShader(fragmentShader);

    ...

Again fortunately the driver takes care of all this, and we just need to make a single call.

And with that, our shader program is complete! We're then free to detach and delete each of the shaders. We don't need these objects any more as we already have a compiled and linked executable shader program.
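Linking can also fail - for example, if the shaders' interfaces don't match up. Again, proper handling waits until part two, but a minimal check after glLinkProgram might look like this (the log buffer size is again an arbitrary choice):

    GLint linked = GL_FALSE;
    glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linked);
    if(linked != GL_TRUE)
    {
        // Fetch and print the linker's error messages
        char log[512];
        glGetProgramInfoLog(shaderProgram, sizeof(log), NULL, log);
        printf("Shader program linking failed:\n%s\n", log);
    }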

Creating Triangles

Great, so we now have a simple shader program ready to go. We can now define the triangle it will actually draw:

    glDetachShader(shaderProgram, fragmentShader);
    glDeleteShader(fragmentShader);

    GLfloat vertices[] =
    {
        -0.5f, -0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
        0.5f, 0.5f, 0.0f,
    };

    ...

These are our triangle's coordinates!

As this data will be passed to the GPU, we've used the OpenGL data-type GLfloat here. OpenGL types are entirely analogous to regular C++ types. For example, there are also GLchar and GLint types.

However, C++ types are sometimes allowed to vary slightly in size (number of bits). You can imagine the mess if you're passing an int to your GPU where your CPU thinks an int has 64 bits and your GPU thinks it has 32. Therefore, to make the interface with the GPU as simple as possible, OpenGL defines its own types which are always strictly of the fixed size laid out in the spec. Using these for GPU data is safer, and means casting between types can be done explicitly in our code.

The first row of our coordinates defines the first vertex's x, y and z values. The next line is then the second vertex, and I guess you can figure out the third line.

Until our lesson on cameras, we will be working in screen coordinates - that means they are (almost) just like regular images. The x-axis ranges from -1.0 to 1.0 across the width of your window. If you set a vertex's x value to -1.0, it will be on the left-most part of the window, and +1.0 on the right. Likewise, the second coordinate corresponding to the y value will set the vertical position of the vertex. As you guessed, -1.0 will push it to the bottom of the window, and +1.0 to the top.

The third float for each vertex is for the z coordinate. This is used to determine the depth of the vertex; things with a greater z will always be drawn behind things with a smaller z value (ie. the distance to the vertex is greater), regardless of the order they were drawn.

I want to stress at this point that this coordinate system is entirely analogous to the 2D image coordinate systems you may have seen before. The coordinates are normalised (ie. always between -1 and +1) so you don't have to consider how wide or tall the window is, but otherwise you are practically working on a 2D image.
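To tie this back to the earlier point about combining triangles: if we hypothetically wanted a square instead, we could simply supply six vertices - two triangles sharing a pair of corners. Something like this (the exact values are my own illustration):

    GLfloat squareVertices[] =
    {
        -0.5f, -0.5f, 0.0f, // first triangle: bottom-left,
         0.5f, -0.5f, 0.0f, // bottom-right,
         0.5f,  0.5f, 0.0f, // and top-right
        -0.5f, -0.5f, 0.0f, // second triangle: bottom-left,
         0.5f,  0.5f, 0.0f, // top-right,
        -0.5f,  0.5f, 0.0f, // and top-left
    };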

Let's now create our GPU buffers, and copy our coordinate data into them:

        0.5f, 0.5f, 0.0f,
    };

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    ...

The call to glGenVertexArrays generates the actual VAO buffer on the GPU. Remember, the VAO just remembers which other buffers are bound. The first parameter of 1 indicates that we only want a single VAO to be created, as this call can actually be used to create an array of VAOs, but for now as we only want to draw a single thing, we will just use one VAO. Passing in our vao variable by address in the second parameter effectively makes this an array of one element. With the VAO created on the GPU, the next line binds it, to make it the current VAO that OpenGL is working with. Any buffer changes from now on will be remembered by the VAO, until we unbind it.

We then create and bind the VBO, our buffer for storing the vertices. The first line is similar to the VAO, this time generating a buffer on the GPU, and the next line binds it as a vertex array buffer.

However, for our VBO we also make a call to glBufferData, which sets the buffer's data - in effect copying our array of vertices from RAM onto the GPU and into the VBO. We again let OpenGL know the kind of buffer it is, how much data to copy (in raw bytes, not number of elements!), and the actual data itself, and then we pass GL_STATIC_DRAW. The static draw argument is just a hint to OpenGL that we don't plan on modifying this data once it's been set, which the GPU can use to optimise its memory. Alternatively you can pass GL_DYNAMIC_DRAW if you are likely to modify the raw coordinates regularly - although note that you can move objects around a world and perform many fancy effects in the shader without ever touching these raw coordinates. So, just to stress this, you should almost certainly leave this as static unless you have good reason to change it.
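Purely for illustration, if our vertex data genuinely did change every frame, we might create the buffer with the dynamic hint and then overwrite it in place with glBufferSubData, rather than reallocating it each time:

    // Create the buffer with the dynamic usage hint instead...
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);

    // ...then later, with the same VBO still bound, overwrite its
    // contents in place, starting at byte offset zero
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);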

Now we have our vertices on the GPU, let's finish up...

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

    glBindVertexArray(0);

    glClearColor(0.04f, 0.23f, 0.51f, 1.0f);

The call to glEnableVertexAttribArray is used to tell OpenGL that the first (hence 0) vertex attribute is in array form. If you remember back to when we wrote our shader, we bound the input aPosition to location zero. We also passed in these vertices just above as an array of values which varies for each vertex. Well, you can also pass in values which are exactly the same for each vertex. For historical reasons, the latter is the default behaviour, even if it's perhaps less commonly used. So to pass in our vertices in array form, we need to tell OpenGL that for attribute zero (the position), it should expect an array.

The last thing we need to do is make a call to glVertexAttribPointer. Here we explain to OpenGL exactly how the array of vertex data we passed in should be interpreted. The first value is the attribute location, so this data will go to location zero in the shader. Next, we specify that each vertex receives three values from the array, which will represent our x, y, and z coordinates. The third parameter gives the data type of our coordinates: floats. The fourth parameter is quite specific, but determines whether values should be automatically normalised, which we don't want.

The fifth parameter, which we set to zero, defines the stride. A stride of zero means the floats we want to use from this data are all right next to each other in the array. It could be that our data alternates between a set of spatial vertex coordinates, then a set of texture coordinates, then the next spatial coordinates, then texture coordinates again, and so on. We can therefore use the stride to tell OpenGL to skip a certain number of bytes between reading each vertex's values - but in our case we pass zero, as our values are tightly packed into the array.

This raises the question of why we would want data intertwined like this instead of just using another array. Well, perhaps we are loading a 3D model from disk which stores its data in this format - this way we can dump it straight onto the GPU without separating it ourselves. Perhaps more interestingly, I have heard that there can be speed advantages, as all the data for a single vertex is accessed from contiguous GPU memory, which on some architectures may perform better. However, I haven't verified this myself, and it may be very architecture dependent, so take it with a grain of salt.

The final parameter of this function determines the offset from the start of the array at which to begin reading, useful if the stride is non-zero. We just pass a zero to tell OpenGL to start reading our vertex array from the beginning.
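As a hypothetical sketch of the interleaved case described above, suppose each vertex carried three position floats followed by two texture coordinate floats. The stride would then be five floats, and the texture coordinates would sit three floats into each vertex (attribute location 1 is my own arbitrary choice, and would need a matching declaration in the shader):

    // Five floats per vertex: x, y, z, then u, v
    GLsizei stride = 5 * sizeof(GLfloat);

    // Positions: 3 floats, starting at the beginning of each vertex
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);

    // Texture coordinates: 2 floats, starting 3 floats into each vertex
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(GLfloat)));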

We finish up this block of code with a call to glBindVertexArray(0). This is the same call we made a few lines above, but passing zero essentially unbinds our VAO, as we've now finished generating our buffers, passing in the data, and explaining to OpenGL how to interpret it and where it should go in the shader. Our VBO now holds our vertices on the GPU, and we have set up a VAO which knows to bind a single VBO when drawing.

Before we finish our initialisation code, I just want to add one more thing. Do you remember when I said that objects with a greater z coordinate are further from us, the viewer, and so will be drawn behind? That's not true. Or at least, not yet. By default, OpenGL draws primitives (triangles) in the order your code draws them. We need to explicitly enable depth-testing for the z coordinate to actually be considered. As it is disabled by default, let's enable it with the following line:

    glClearColor(0.04f, 0.23f, 0.51f, 1.0f);

    glEnable(GL_DEPTH_TEST);

    return 0;
}

That's our initialisation done! As we've initialised something, before we go any further, let's not forget to uninitialise it too:

void close()
{
    glDeleteVertexArrays(1, &vao);
    glDeleteBuffers(1, &vbo);
    glDeleteProgram(shaderProgram);

    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
}

Entirely analogous to the generate buffer calls, we delete the VAO and then the VBO, and finally our shader program.

Drawing

With that in place, we can finally get to the last part of this lesson, drawing! Fortunately after all our good work preparing the data, the actual drawing is easy!

void draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(shaderProgram);
    glBindVertexArray(vao);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    glBindVertexArray(0);
    glUseProgram(0);

    SDL_GL_SwapWindow(window);
}

Every time we draw, we bind our shader, bind our VAO (which in turn sets any states, binds any VBOs etc), and just make a call to glDrawArrays. This call at long last draws our actual triangle!

The first parameter of glDrawArrays tells OpenGL what to draw - in this case, triangles. Unfortunately this isn't some parameter where we can tell the GPU to draw squares or pentagons or cars or any other shape. It basically amounts to points, lines, and triangles, plus some interesting variations on each. We will go into some of these later, for example GL_TRIANGLE_STRIP for drawing terrains. The full list can be found in the OpenGL documentation, but usually you will just be using triangles.

The second parameter defines which vertex index to start with, and the third how many vertices to draw. So this call will draw all three of our vertices, and make our triangle appear on screen.
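To make those two parameters concrete, here's a hypothetical variation: if the six-vertex square from earlier were in our buffer instead, we could draw all of it in one call, or start partway through the buffer and draw only its second triangle:

    glDrawArrays(GL_TRIANGLES, 0, 6); // both triangles: start at vertex 0, draw 6
    glDrawArrays(GL_TRIANGLES, 3, 3); // second triangle only: start at vertex 3, draw 3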

Finally, after finishing drawing, we bind a value of zero for our VAO and shader program, in effect unbinding both of these.

TIP: Is it really necessary to unbind everything after use? This applies to shaders/VAOs/VBOs and much more in OpenGL, but let's consider the binding of a shader program. In our code, we continually unbind the shader at the end of every loop only to rebind it again on the next loop. If we removed the unbind, we could still be sure that our triangle is drawn with the correct shader even if our code was much more complex, as we have made sure the correct shader is bound before the draw. The topic is discussed in the comments in this Stack Overflow thread, and the answer essentially boils down to not performing unbinds may be slightly faster, but can lead to undesired corruption if using third-party libraries, so it's considered good practice to follow each bind with a corresponding unbind.

Conclusion

After all that, if you compile and run your code, you should now see your first triangle on screen! Congratulations!

Now, we've covered a huge amount in this tutorial, and I appreciate that much of it may seem superfluous and unnecessary. However, the power of the shaders will become readily apparent over the next few tutorials.

What's more, we spent a lot of time making calls to create and control our GPU buffers from our main code as well. Again, this might seem like we're calling a lot of functions, but essentially it doesn't get much more complicated than that.

Creating the buffer, filling it, and telling the shader how to use it for a million triangles is not really much different than doing it for a single triangle. Over the next few lessons, hopefully this way of working will become second nature to you; again you've pretty much seen most of the function calls that even large programs use!