Lesson 11: Model Loading

In the previous lessons, we rendered a square and used texture mapping to render a wooden crate texture onto it. If we instead wanted to render the full 3D cube to make it look like a proper wooden crate, we could manually add all the vertex positions and texture coordinates to our buffers. But at this point, we may as well start thinking about a better long-term solution than crafting every model by hand. It's very tedious!
So in this lesson, we going to learn how to import 3D models into our virtual world!
Wavefront OBJ file format
Our life will be much easier if we can create models in 3D modelling software, export them to a file, and then directly load these models into our program. So let's look at how we can achieve that.
I'm not going to cover how to use 3D modelling software in these tutorials - there are plenty of guides online. But if you're interested, for creating 3D models I use Blender. It's an open-source, cross-platform and free modelling suite, capable of industry standard 3D modelling and animation.
Looking at the possible file formats Blender can export to, the Wavefront (.obj) format is relatively straight-forward for us to implement a reader for. It's essentially a text file containing a list of vertices, a list of normal vectors (which we'll need later for lighting), and a list of texture coordinates. The file format then ends by specifying "faces", similar to the indices we use in our Index Buffer Objects. You can read quite a good overview of the format and it's layout on Wikipedia.
The format is widely supported, known, and used, with virtually all other modelling suites also supporting this this format. Moreover it's simplicity makes it great as a starting point, and easy to debug and understand. It is usually just a starting point though - it isn't perfect. The data is stored as text rather than in binary, which has the benefit that you can open it in any text editor to see what's going on. But it also means the file size tends to be rather large, and parsing text files is slower. It also doesn't support any form of animation.
It is a good place to start though. Don't worry too much - we'll add additional loaders for more feature rich formats later on!
For this lesson, I've created a simple cube in Blender and texture mapped it.
I've then exported it (File > Export > Wavefront(.obj)
), making sure that "Triangulated Mesh" is ticked.
This makes sure that square faces for example are broken down into triangles in the exported file, which is important as GPUs can only natively render triangles.
Therefore we'll write our model importer assuming that the file only contains triangles, which is much easier.
Also make sure that the "Forward Axis" is set to "Y", and "Up Axis" set to "Z", to align the Blender axes with our choice of world axes.
I've then saved this model into our resources
folder - you can download my workspace at the bottom if you want to use my cube model!
Normals
Up until now, we've rendered our vertices with a VBO for the vertex coordinates, a VBO for the vertex colours, and a VBO for the texture coordinates. However colours are not often used on real models - objects in the real world are rarely a simple colour gradient, and therefore this field is often not included in model files. Models generally use texture mapping instead to provide their colour information.
As a result, we no longer need the triplet of RGB colours to be passed into our shaders.
However, for later tutorials on lighting, it's important to know the normal of a face. This is a directional vector pointing perpendicular to the surface, which is used for calculating how light reflects.
For now we don't need to know much more than this. But as the normal vectors are specified in the model file, and we'll need them later, we'll write our model importer to repurpose the colour VBO to instead use it to handle the model's normals.
Conveniently, both the colour (RGB) and the normal vector (XYZ) are composed of three floats, so we can just perform a drop in replacement of the colour VBO with the new normal VBO and rename a few variables.
Specification
The full Wavefront OBJ (sometimes just "OBJ") file specification includes some features which we don't have any use for right now, such as curved surfaces. So full disclosure, we're not writing a comprehensive parser for the format, but one only for the most common and useful set of features - which are all we can render right now anyway.
Let's take a look at an example model file. If you open an OBJ file in a text editor, you will see something like this:
1. |
|
2. |
|
3. |
|
4. |
|
5. |
|
6. |
|
7. |
|
8. |
|
9. |
|
10. |
|
11. |
|
12. |
|
13. |
|
14. |
|
15. |
|
16. |
|
17. |
|
18. |
|
19. |
|
20. |
|
This is actually an OBJ file for the square we've been rendering in the last few tutorials!
The first thing you'll notice is that any lines beginning with a #
symbol are comments.
They're only for humans reading the file, but can otherwise be ignored by our parser.
The data is laid out in the following way:
- Lines beginning with a 'v' define a vertex position. They are followed by 3 numbers on the same line, separated by spaces, specifying the vertex coordinates.
- Lines beginning with a 'vt' define a texture coordinate. They're followed by 2 numbers specifying just that.
- Lines beginning with a 'vn' define a vertex normal. They are followed by 3 numbers specifying the normal's directional vector.
Finally, at the bottom of the file, we have the actual face definitions for the geometry, on lines beginning with 'f'. As we triangulated our models when exporting them, you can understand the word face as just meaning a triangle here.
We can see that there are two lines beginning with 'f' in this sample, so two faces will be defined, giving us the two triangles we've been rendering up until now. For now let's just concentrate on that first line beginning with an 'f', defining our first triangle:
|
The face definition starts with the element '1/1/1', defining the triangle's first vertex. Unfortunately despite almost the entire field of computer science agreeing that a zero represents the first item in an array, OBJ files represent the first element with a one, so be aware of that.
The first number in the '1/1/1' element specifies which vertex coordinate to use, so this vertex's position coordinates can be found in the very first line in the file beginning with 'v' (line 4). The second number in the element, between the two slashes, gives us the texture coordinate, so this vertex's texture coordinate can be found in the very first line containing a 'vt' (line 10). The third number then denotes the vertex's normal, the first line in the file containing a 'vn' (line 16). With that, our first triangle's first vertex has been defined.
The second element of the line ('2/2/1') can then be decoded in the same way to get the triangle's second vertex data. Here, being formed from the second vertex coordinate (line 5), and second texture coordinate (line 11), but reuses the same normal definition as the first vertex (line 16). The triangle's final vertex ('3/3/1') is then composed of the third vertex coordinate, the third texture coordinate, and again the first (and only) vertex normal.
The file then defines a second triangle in the same manner. This time using the first, third and fourth vertex coordinates and texture coordinates, and again reusing the same vertex normal.
That's basically it. Simple right?
To be clear, the OBJ specification does allow faces to have a fourth (or potentially more) elements per face for encoding squares or other shapes. Again this is why when we export OBJ files from Blender we need to tick 'Triangulated Mesh' to ensure each face only has three elements in the file. If somehow there are more than three vertices per face in an OBJ file we read, which is a valid OBJ file, rather than handle the complexities involved we'll just throw an error and remind the modeller to triangulate the model when exporting it. If you find some model online containing non-triangles, don't forget you can always open it in Blender, and then re-export it as a new file with a triangulated mesh.
For what it's worth, the format does not require that models must have texture coordinates or normals defined. If the face is defined simply as 'f 1 2 3', then we are looking at a model which only has the vertex coordinates defined. Alternatively 'f 1/1 2/2 3/3' would indicate a model with vertex and texture coordinates, but no normals. Faces can even take the form 'f 1//1 2//2 3//3' for vertices with normals but no texture coordinates. But again to keep things simple, we're going to assume any models we read have been exported with the full set of data, and throw errors otherwise - it keeps both our importer and rendering code simple!
Complications
So far this file format seems fairly reasonable and straight-forward. We can simply take the vertex positions and pass them into a VBO, pass the textures coordinates into another VBO, and the normals into a third VBO. Then we can create an Index Buffer Object from the faces.
But there's a catch - it's not quite so simple.
Let's go back to our first triangle's definition:
|
The second element here is defined as "2/2/1". This means that the vertex is composed of the second element from the vertex VBO, the second element from the texture VBO, and the first element from the normal VBO.
However IBOs cannot select different items from different buffers in this way! Our IBO can only contain a single value, "3" for instance, meaning use the third item from every VBO. This complicates everything.
Fortunately it is possible to solve this, it will just mean a bit of re-ordering and extra work as we parse the file.
What we can do is when we read the first element, "1/1/1" in our example, is to then write the first vertex coordinate into the vertex VBO, and do the same for the texture coordinates and normals. Then we can write a zero into our IBO, indicating that the first vertex to be drawn uses the first first value in each array.
For the second face element, "2/2/1", we can then look up the second vertex position and push that to the corresponding VBO, push the second texture coordinate to that VBO, and push the first normal into the normal VBO again. For the IBO, we can then append a one as this vertex's data is found at index one in the VBO. We can keep doing this for all subsequent vertices, looking up it's data, writing it to the end of the VBOs, and then appending a new incremented value to the IBO.
If we just did this for each vertex of each face, we would actually have a working solution! But, a working solution with quite a bit of data duplication. When we hit the second "1/1/1" at the start of the second triangle, all of the vertex's data in each VBO will have been duplicated, and the IBO would point to this duplicate set.
A more efficient solution would be if we can recognise that we've seen the face element "1/1/1" before, and if so not push anything to the VBOs, but just push the index of that data into the IBO.
So to improve our parser, we can cache each face element's string (eg. "1/1/1") in an array each time we parse it. Then for future face elements, we can first check if the element already exists in this buffer. If not, then it's the first time we've seen this combination of values, so we write the data to the VBOs, append it's index to the IBO, and buffer the face element for the future. If we have seen the element's string before though, we just need to find what value was written to the IBO when it was parsed and push it to the IBO again.
Going back to the previous example file, this means that for the first triangle, for each element we would write it's vertex coordinate, texture coordinate, and normal into each VBO, as well as write a new incrementing index to the IBO. At the end of the first triangle's definition, each VBO would contain three values, and the IBO would contain "0, 1, 2". When we hit the first element of the second triangle though, we've already seen "1/1/1" before. When it was last seen, we pushed a zero to the IBO, so we just need to write this value to the IBO again, now containing "0, 1, 2, 0", causing OpenGL to re-use the first value in each VBO for this vertex.
Similarly, the second triangle's second vertex definition has been seen before, so we would just push a "1" to the IBO. Then for the final element of the second triangle, as it hasn't been seen before we would write the full set of data to the VBOs including a new IBO value and buffering the element's string so we can recognise if we seen this vertex again in the future. This approach results in us having the exact same set of buffers that we've been hard-coding up until now, and with that we have a working OBJ parser!
Writing a model importer class
To encapsulate models, including their data, file parsers, and draw functions, we'll create another class.
So let's create a new header file called model.h
:
+ 1. |
|
+ 2. |
|
+ 3. |
|
+ 4. |
|
+ 5. |
|
+ 6. |
|
+ 7. |
|
+ 8. |
|
+ 9. |
|
+ 10. |
|
+ 11. |
|
+ 12. |
|
+ 13. |
|
+ 14. |
|
+ 15. |
|
+ 16. |
|
+ 17. |
|
+ 18. |
|
+ 19. |
|
+ 20. |
|
+ 21. |
|
+ 22. |
|
+ 23. |
|
+ 24. |
|
+ 25. |
|
+ 26. |
|
+ 27. |
|
+ 28. |
|
+ 29. |
|
+ 30. |
|
+ 31. |
|
Like our previous classes, we begin with a Pragma Once so we can include this file in multiple places without redefinition errors while compiling.
For our model class, we'll include the GLEW library so we can access OpenGL types, and the string library for handling the filenames and error messages.
Like before, I'm using namespace std
purely for brevity here.
Our model class has a constructor for initialising it's variables to safe starting values.
It then expects to be passed the filename of the model to be loaded with the setFilename
function, and just as before this won't actually load the model, but simply store the filename for future use.
The class then has a function loadOBJModel
to actually load and parse the model file and prepare everything for drawing. The function returns a bool, true on success.
It there was a problem while loading, an error message will be written to the errorMessage
variable, which can be retrieved with the getError
function.
The model then has functions for deleting the model for when our program is finished, and functions for binding and unbinding the model. Any models we load will be assembled into a VAO just as we've used for drawing geometry up until to now, so the bind and unbind effectively just bind and unbind the model's VAO. The VAO along with it's VBOs can be seen in the private variables.
We also have a function for getting the number of indices (or number of vertices to be drawn), getIndexCount
, which is necessary for drawing the model.
All in all though this structure should be feeling quite familiar by now!
Building a model importer
With a header file for our model loader in place to act as a template, we can now start defining those functions in a new file, model.cpp
:
+ 1. |
|
+ 2. |
|
+ 3. |
|
+ 4. |
|
+ 5. |
|
+ 6. |
|
+ 7. |
|
+ 8. |
|
+ 9. |
|
+ 10. |
|
+ 11. |
|
+ 12. |
|
+ 13. |
|
+ 14. |
|
+ 15. |
|
+ 16. |
|
+ 17. |
|
+ 18. |
|
+ 19. |
|
+ 20. |
|
+ 21. |
|
+ 22. |
|
+ 23. |
|
In this file we begin by including our model.h
(which in turn includes the string library and GLEW).
We also include SDL, as well as fstream
(filestream) for working with files, and sstream
(stringstream) for manipulating the strings we read from them.
Previously we knew exactly the size of the VBO buffers at compile time so could just use a fixed size float array, but as we know longer know how big the files are ahead of time we also include the vector header to give us an easy way of buffering data from the file.
For our constructor, all it needs to do is initialise our indexCount
, vao
, and each element of our vbo
array to zero, safe starting values, so that's fairly straight-forward.
The other member variables are strings which are initialised automatically, so we don't need to worry about those.
Likewise the code for the setFilename
function is exactly the same as we've done before, converting the model filename passed in to an absolute path and saving it to the filename
member variable.
Next up is the big one, the loadOBJModel
function where we actually parse the data:
22. | filename = SDL_GetBasePath() + newModelFilename; |
23. | } |
24. | |
+ 25. |
|
+ 26. |
|
+ 27. |
|
+ 28. |
|
+ 29. |
|
+ 30. |
|
+ 31. |
|
+ 32. |
|
+ 33. |
|
+ 34. |
|
+ 35. |
|
+ 36. |
|
+ 37. |
|
+ 38. |
|
+ 39. |
|
+ 40. |
|
+ 41. |
|
+ 42. |
|
+ 43. |
|
We start off with a call to deleteModel
to make sure that if the model's already been loaded and we're reloading it, any previous data is freed and the object reverts back to a default state before trying to load a new file.
After that we make sure a filename has been set, and if not write an error message and return false to indicate a problem while loading.
We then attempt to open the file as an input file stream, and again if this fails we bail out.
Thinking about how we can handle the data parsing, we know that as we loop over the faces, we will need to perform lots of lookups of the n-th line beginning with v
value for example.
If we search the whole file each time looking for these, our code is going to be horrendously slow.
Instead, it's much more efficient if we first read the entire file and create a buffer holding all lines beginning with v, another for lines beginning with vt, and a third for vn. Then each time we need a particular texture coordinate for example, it's just a matter of looking it up from these arrays, rather than searching through the file until we hit the line we need. As the arrays will be ordered, we will be able to directly access the array at a specific location to get the data we need.
So the first thing we'll do with our OBJ file parser is to simply loop through the file line-by-line, and store the data for each vertex coordinate, normal, and texture coordinate, into arrays for future lookups.
To achieve this, let's create these buffers using an std::vector
of floats for each type:
39. | errorMessage += filename; |
40. | return false; |
41. | } |
42. | |
+ 43. |
|
+ 44. |
|
+ 45. |
|
46. | |
47. | ... |
The idea being that every time we see a line beginning with a "v", we'll push each of the three floats on the line into the fileVertexData
.
When we need to perform a lookup of the n-th vertex coordinates, it's then just a matter of reading the data at a certain index of these buffers rather than searching through the file.
The same is true of the normals and texture coordinates of course, using fileTextureData
and fileNormalData
to hold their data.
With the buffers ready, we'll then loop over the file looking for any lines beginning with 'v', 'vt' or 'vn':
43. | vector<float> fileVertexData; |
44. | vector<float> fileTextureData; |
45. | vector<float> fileNormalData; |
46. | |
+ 47. |
|
+ 48. |
|
+ 49. |
|
+ 50. |
|
+ 51. |
|
+ 52. |
|
+ 53. |
|
+ 54. |
|
+ 55. |
|
+ 56. |
|
+ 57. |
|
+ 58. |
|
+ 59. |
|
+ 60. |
|
+ 61. |
|
+ 62. |
|
+ 63. |
|
+ 64. |
|
+ 65. |
|
+ 66. |
|
+ 67. |
|
+ 68. |
|
+ 69. |
|
+ 70. |
|
+ 71. |
|
+ 72. |
|
73. | |
74. | ... |
We set up two string variables, one to hold the entire line we're currently reading, and another to hold what I've called the "lineIdentifier", or the first few letters at the start of the line (eg. "vt"). I've also declared 3 floats which we'll use to buffer the vertex data into.
We loop over the file with the getline
function, reading them iteratively into our line
variable.
We then use a stringstream to parse the whole line into a string (lineIdentifier) and 3 floats.
This style of parsing will split the input variable by spaces, and then attempt to parse it into the relevant variables, and is safe to call even if the parsing is not possible.
If the line begins with a "v" (assuming your files are not corrupt) then we can expect that three floats will also be parsed from the line.
So we push each of these to our fileVertexData
.
Of course if the file is corrupt, then our buffers will be corrupted too, so I'm not sure how much effort it's worth trying to detect issues here. Generally my models are just directly exported from Blender so aren't at risk of accidental human mistakes slipping through. But you could verify each float is being set to a parsed from the file if you wanted to.
For parsing texture coordinates (lines with "vt") we only push the first two float variables into the relevant buffer. The third float will fail to parse and be in an invalid state, but that doesn't matter as we wont use it. Similarly, if the first part of the line is "vn" then again we expect three floats and push them into the normal buffer.
All other lines are therefore ignored by this loop, which will end once it hits the bottom of the file.
Once this block of code has run, our buffers will contain all the relevant vertex information.
It will also be in the correct order for the faces to index.
We just need to remember that to access the n-th vertex coordinate, we need to subtract 1 (as OBJ files are indexed starting from 1), and then multiply by 3 (or 2 for texture coordinates).
So the if our face references normal 3, performing the above calculation tells us that we can find the normal's first float at index 6, and the other two values will be immediately after this in the fileNormalData
array.
With our those buffers prepared, we can now begin looking at those face definitions. Let's first jump back to the start of the file, and then prepare the buffers that will become our new VBOs:
70. | fileNormalData.push_back(value3); |
71. | } |
72. | } |
73. | |
+ 74. |
|
+ 75. |
|
+ 76. |
|
+ 77. |
|
+ 78. |
|
+ 79. |
|
+ 80. |
|
81. | |
82. | ... |
So we reset the flags for the file (eg. the flag indicating we've hit the end of the file), and then jump back to the first byte.
We then set up four new buffers which will be the buffers actually uploaded to the GPU.
These arrays are exactly like the ones we previously hard-coded into our main.cpp
and passed in to the GPU VBO buffers.
They have the same data-types too, although this time I've declared them as vectors so I can dynamically push data into them as we process it from the model's face definitions.
79. | vector<GLfloat> normals; |
80. | vector<GLuint> indices; |
81. | |
+ 82. |
|
+ 83. |
|
+ 84. |
|
+ 85. |
|
+ 86. |
|
+ 87. |
|
+ 88. |
|
+ 89. |
|
+ 90. |
|
+ 91. |
|
92. | |
93. | ... |
We also set up a few more variables we'll need before we start parsing the faces.
We start with uniqueElements
, which will keep track of how many unique face elements we've seen.
Everytime we see a new element, we can increment this variable, and then push it's new value to the index buffer.
We then set up a buffer to hold all the unique elements we've seen so far. We'll do a lookup on this every time we see a new element to check if we've seen it before.
We then start looping through the file once more, and try to parse a string from each line into the lineIdentifier
variable.
This again will grab everything up until the first space in the line, and we can use this to skip any lines we're not interested in by continuing if doesnt equal "f".
If we are interested in this line, we can begin processing it:
90. | if(lineIdentifier != "f") |
91. | continue; |
92. | |
+ 93. |
|
+ 94. |
|
+ 95. |
|
+ 96. |
|
+ 97. |
|
+ 98. |
|
+ 99. |
|
+ 100. |
|
+ 101. |
|
+ 102. |
|
+ 103. |
|
+ 104. |
|
+ 105. |
|
+ 106. |
|
+ 107. |
|
+ 108. |
|
+ 109. |
|
+ 110. |
|
111. | |
112. | ... |
We set up a string which we'll read each face element into (eg. "1/1/1").
Fortunately each of the face's elements can be handled in exactly the same way, so we can loop three times, reading the next face element on the line and then processing it.
For each of the elements, we parse it from the current line in the file.
Remember this will read everything up until the next space character into our element
string variable.
Fortunately, OBJ files are not allowed to have spaces in the element (eg. cannot be "1 / 1 / 1") so parsing the line is quite easy.
Once we have the element, the first thing we need to do is decide if we've seen it before. If so we can re-use the VBO data, else we'll need to push it's data into the VBOs.
So we set up boolean flag to hold if we've seen it before, and then loop over all the elements we've previously seen.
If we find this string in the buffer, as we've already seen it before we can push it's index into the IBO, and in effect we'll just be re-using the previous element's data.
This works quite nicely because the element's position in the elementBuffer
also corresponds to it's index in the VBOs, hence why we can simply push i
.
If we re-use the data, we then set the alreadyExists
flag to true to indicate that we've finished handling this element.
We break out the loop and then this flag will be caught and we'll move on to the next element.
If we search the whole buffer and don't find it, then we know we're dealing with a element we've never seen before. Our code can then begin processing it:
109. | if(alreadyExists) |
110. | continue; |
111. | |
+ 112. |
|
+ 113. |
|
114. | |
115. | ... |
To process a new element, we'll start by parsing it from a string ("1/1/1") into the individual integers.
We set up 3 integers to hold the three parts of the element (we'll assume all 3 components are present for this parser).
We can then use sscanf
to do the parsing.
Now we can begin looking up these indices from the file buffers:
112. | int vertexIndex, textureIndex, normalIndex; |
113. | sscanf(element.c_str(), "%i/%i/%i", &vertexIndex, &textureIndex, &normalIndex); |
114. | |
+ 115. |
|
+ 116. |
|
+ 117. |
|
+ 118. |
|
119. | |
120. | ... |
So for a given element, say "2/3/4", the vertex index is the first number, now parsed into the vertexIndex
variable.
To get the actual vertex coordinates for this index, we need to subtract 1 as OBJ indexing starts at 1, and then multiply by 3.
This float, the vertex's X coordinate, is pushed into the vertices
variable, along with the two following two numbers in the array (the Y and Z coordinates).
We can then do the same for the texture coordinates and the normals:
115. | int vertexLocation = (vertexIndex - 1) * 3; |
116. | vertices.push_back(fileVertexData[vertexLocation + 0]); |
117. | vertices.push_back(fileVertexData[vertexLocation + 1]); |
118. | vertices.push_back(fileVertexData[vertexLocation + 2]); |
119. | |
+ 120. |
|
+ 121. |
|
+ 122. |
|
+ 123. |
|
+ 124. |
|
+ 125. |
|
+ 126. |
|
+ 127. |
|
128. | |
129. | ... |
Again note that we only need to multiply by 2 for the texture coordinates.
Now, our vertices, texture coordinates and normal buffers contain the relevant data for this unique element.
Let's now update the rest of the variables for this element:
124. | int normalLocation = (normalIndex - 1) * 3; |
125. | normals.push_back(fileNormalData[normalLocation + 0]); |
126. | normals.push_back(fileNormalData[normalLocation + 1]); |
127. | normals.push_back(fileNormalData[normalLocation + 2]); |
128. | |
+ 129. |
|
+ 130. |
|
+ 131. |
|
+ 132. |
|
+ 133. |
|
134. | |
135. | ... |
We push uniqueElements
to the index buffer to make sure that this new data pushed into the VBOs actually gets drawn.
We then write the face element to the elementBuffer
so that in the future we can recognise if we see this exact element again, so we can re-use it's data.
We then finish up by incrementing the number of unique elements that we've seen.
The code will then loop back and do the same for the second and third vertex of the triangle. But otherwise that's it for the parsing of the file!
Next let's perform a few safety checks:
131. | elementBuffer.push_back(element); |
132. | uniqueElements++; |
133. | } |
134. | |
+ 135. |
|
+ 136. |
|
+ 137. |
|
+ 138. |
|
+ 139. |
|
+ 140. |
|
+ 141. |
|
+ 142. |
|
+ 143. |
|
144. | |
145. | ... |
Let's empty our element
variable, and then try to parse one more element from the line into it.
If the model was triangulated when it was exported, then this won't exist - there will only be 3 elements per face. But if we attempt to parse another string from the line and the string is not then empty, then we know the model wasn't properly triangulated when exported.
It would be possible to try to triangulate it, but we're just looking at a basic parser here, so we throw an error saying that the model needs to be fixed before it can be used and bail out the function if this happens.
With that checked, we can close the while-block looping over each line in the file, and when we're done, close the file:
140. | errorMessage += filename; |
141. | return false; |
142. | } |
143. | } |
144. | |
+ 145. |
|
+ 146. |
|
+ 147. |
|
+ 148. |
|
+ 149. |
|
+ 150. |
|
+ 151. |
|
+ 152. |
|
153. | |
154. | ... |
We perform one more check that we've actually read something from the file into each buffer. It's easy to spend a lot of time debugging and digging in to your code to find why a model isn't loading correctly only to find the answer is dumb things like the model being exported without texture coordinates, so these tests really are worth the time to implement!
At this point, all our data is read in, and in the correct format, so we can begin pushing it to our GPU just as we've done before with our hard-coded data:
149. | errorMessage = "Essential data missing from file: "; |
150. | errorMessage += filename; |
151. | return false; |
152. | } |
153. | |
+ 154. |
|
+ 155. |
|
+ 156. |
|
+ 157. |
|
158. | |
159. | ... |
Just as we did when we set up our original geometry, we start by generating a VAO on the GPU to maintain the state of our buffers, and then bind it.
We then generate four VBOs on the GPU for the coordinates, normals, texture coordinates and indices. If you need a refresher on VAOs and VBOs, you can always head back to Lesson 6 for a recap.
Next, we need to copy the data in to each of these buffers, starting with the vertex coordinates:
157. | glGenBuffers(4, vbo); |
158. | |
+ 159. |
|
+ 160. |
|
+ 161. |
|
+ 162. |
|
+ 163. |
|
164. | |
165. | ... |
The functions to copy to our GPU buffers are the exact same as we've used before.
We start by binding the first buffer.
Then we actually copy our vertices
array into the GPU buffer, passing as parameters that it is an array, the size of the array in bytes, the data itself, and then that the data is static and won't be changed regularly.
As we are now using an std::vector
to hold the data we're passing in, we cannot just call sizeof(vertices)
to get the number of bytes it takes up - this instead returns the size of the helper structure.
So instead, to get the size of the array in bytes, we get the number of bytes the first element in the array takes up, and then multiply that by the total number of elements in the array.
Likewise we can no longer pass the address of the array in for the data, as it is no longer a regular array.
But we can get a pointer to the raw underlying array with vertices.data()
, which is fine as the underlying array is guaranteed to be stored in contiguous memory just like a normal array.
With the data on the GPU, we then tell OpenGL how the shaders should interpret it, first passing that location 0 - where our shaders expect to read their coordinate data from - is an array. The second call then tell OpenGL that at location zero, 3 floats should be passed in to each vertex. Like before the final 3 parameters tell OpenGL not to normalise the data, that it has zero stride, and starts from element zero in the array.
We then do the same for the normals:
162. | glEnableVertexAttribArray(0); |
163. | glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); |
164. | |
+ 165. |
|
+ 166. |
|
+ 167. |
|
+ 168. |
|
+ 169. |
|
170. | |
171. | ... |
This time pushing them in to the second VBO, but otherwise the same.
Then, the texture coordinates:
168. | glEnableVertexAttribArray(1); |
169. | glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0); |
170. | |
+ 171. |
|
+ 172. |
|
+ 173. |
|
+ 174. |
|
+ 175. |
|
176. | |
177. | ... |
This time into the third VBO, and updating the call to glVertexAttribPointer
to tell OpenGL that only 2 variables from the array should be passed in to each shader.
We can now process the index buffer, and finish our model loading function:
174. | glEnableVertexAttribArray(2); |
175. | glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0); |
176. | |
+ 177. |
|
+ 178. |
|
+ 179. |
|
+ 180. |
|
+ 181. |
|
+ 182. |
|
+ 183. |
|
+ 184. |
|
+ 185. |
|
Again this is just as we've covered in previous lessons. We bind the fourth VBO as an element array buffer, and then upload the indices data into it.
All our buffers are then ready, so we can call glBindVertexArray
with zero to unbind our VAO.
We then update our indexCount
variable so we can keep track of how many indices the model has.
We need this to tell OpenGL how many vertices to draw each frame.
Finally, we finish the function by returning true to indicate we loaded the model successfully.
Great - we can now load any 3D model we like from 3D modelling suites into our VBOs via Wavefront OBJs. Now that's out of the way, the rest of the functions and changes to the code are relatively easy. Let's move on to the delete function:
+ 187. |
|
+ 188. |
|
+ 189. |
|
+ 190. |
|
+ 191. |
|
+ 192. |
|
+ 193. |
|
+ 194. |
|
+ 195. |
|
+ 196. |
|
+ 197. |
|
+ 198. |
|
+ 199. |
|
+ 200. |
|
+ 201. |
|
When the model's delete function is called, we tell OpenGL to delete it's VAO and VBO buffers on the GPU.
We can then reset all our object's variables to their default values.
With the delete function sorted, we can then define the final few functions of our model:
+ 203. |
|
+ 204. |
|
+ 205. |
|
+ 206. |
|
+ 207. |
|
+ 208. |
|
+ 209. |
|
+ 210. |
|
+ 211. |
|
+ 212. |
|
+ 213. |
|
+ 214. |
|
+ 215. |
|
+ 216. |
|
+ 217. |
|
+ 218. |
|
+ 219. |
|
+ 220. |
|
+ 221. |
|
+ 222. |
|
+ 223. |
|
+ 224. |
|
+ 225. |
|
+ 226. |
|
Just as we were binding the VAO from our draw function before, now when there is a call to the model's bind function, the model calls glBindVertexArray
on it's VAO, which will cause all the relevant VBOs set during loading to become bound.
Then to unbind the model, again we just bind zero.
We also define the final few getter functions for the number of indices the model has, it's filename, and any error messages from loading.
Updating our Makefile
As we've created a new source file, let's not forget about including it in our Makefile:
1. | CC = g++ |
2. | |
+ 3. |
|
4. | |
5. | LIBRARIES = -lSDL3 -lSDL3_image -lGL -lGLEW |
I've appended our new model.cpp
to the list of source code files to be compiled when we build our program.
Using Our Models
To start using our new model class, we'll need to start by including it's header file in our main.cpp
:
1. | #include "shader.h" |
2. | #include "texture.h" |
+ 3. |
|
4. | |
5. | #include <SDL3/SDL.h> |
6. | #include <SDL3/SDL_main.h> |
As we'll be exclusively using models to store our geometry data from now on, our main.cpp
will no longer need to create and use VAOs and VBOs itself.
Therefore we can remove both of these variable declarations from the top of our source code, and instead replace them with a single model object declaration:
32. | float pitch = 0; |
33. | float yaw = 0; |
34. | |
35. | Shader mainShader; |
36. | Texture crateTexture; |
+ 37. |
|
38. | |
39. | bool init() |
40. | { |
Similarly, in our init
function, we no longer need all that code to manually specify all our vertex positions, texture coordinates, etc.
Everything to do with the VAOs and VBOs can be removed from here, everything between loading the texture and setting the mouse into relative mode.
This can all be replaced with setting the model object's filename, and attempting to load it:
117. | crateTexture.setFilename("resources/crate/diffuse.png"); |
118. | if(!crateTexture.loadTexture()) |
119. | { |
120. | printf("Unable to load texture: %s\n", crateTexture.getError().c_str()); |
121. | return false; |
122. | } |
123. | |
+ 124. |
|
+ 125. |
|
+ 126. |
|
+ 127. |
|
+ 128. |
|
+ 129. |
|
130. | |
131. | SDL_SetWindowRelativeMouseMode(window, true); |
132. | |
133. | glClearColor(0.04f, 0.23f, 0.51f, 1.0f); |
134. | |
135. | glEnable(GL_DEPTH_TEST); |
136. | |
137. | previousTimestamp = SDL_GetTicks(); |
138. | |
139. | return true; |
140. | } |
Great, that's really simplified our initialisation code quite a bit!
Hopefully this habit will be burned in to you by now, but as soon as we have finished with the initialisation, we should immediately go and update our close
function.
Again, we can remove our VAO and VBO code, and just call the delete function of our model:
142. | void close() |
143. | { |
+ 144. |
|
145. | crateTexture.deleteTexture(); |
146. | mainShader.deleteShader(); |
147. | |
148. | SDL_GL_DestroyContext(context); |
149. | SDL_DestroyWindow(window); |
150. | SDL_Quit(); |
151. | } |
Because of how we've structured our code, like with our textures and shaders we can easily implement hot-reloading for our model.
All we need to do is make a new call to loadOBJModel
:
197. | else if(event.key.key == SDLK_R) |
198. | { |
199. | if(!mainShader.loadShader()) |
200. | { |
201. | printf("Unable to create shader from files: %s\n", mainShader.getFilenames().c_str()); |
202. | printf("Error message: %s\n", mainShader.getError().c_str()); |
203. | programRunning = false; |
204. | } |
205. | if(!crateTexture.loadTexture()) |
206. | { |
207. | printf("Unable to load texture: %s\n", crateTexture.getError().c_str()); |
208. | programRunning = false; |
209. | } |
+ 210. |
|
+ 211. |
|
+ 212. |
|
+ 213. |
|
+ 214. |
|
215. | } |
So like with the shader and texture, if the "r" key is pressed while the program is running, we try to reload the model.
We can finish up the changes to main.cpp
by updating the draw function to bind our model rather than the VAO:
293. | glActiveTexture(GL_TEXTURE0); |
294. | crateTexture.bind(); |
295. | |
+ 296. |
|
297. | |
298. | glm::mat4 mMatrix = glm::mat4(1.0f); |
299. | mMatrix = glm::translate(mMatrix, glm::vec3(5.0, 0.0, 0.0)); |
300. | glUniformMatrix4fv(2, 1, GL_FALSE, glm::value_ptr(mMatrix)); |
301. | |
+ 302. |
|
303. | |
+ 304. |
|
305. | |
306. | crateTexture.unbind(); |
307. | |
308. | mainShader.unbind(); |
309. | |
310. | SDL_GL_SwapWindow(window); |
Where previously we called glBindVertexArray
to bind and unbind the VAO, we replace this by simply calling bind
and unbind
on our model, which internally is just doing the same thing.
We also update our call to glDrawElements
so instead of just drawing 6 vertices, we instead query how many vertices the model has, and then draw that number.
One final thing I'm going to do, purely for aesthetic reasons, is that as we're now loading "real" objects into our virtual world rather than just drawing a square, I'm going to raise the camera off the floor a bit:
29. | float x = 0; |
30. | float y = 0; |
+ 31. |
|
32. | float pitch = 0; |
33. | float yaw = 0; |
Previously, the camera's z coordinate was zero, meaning it was positioned on the ground.
To give the scene some realistic feel to it, I'm going to position it 1.8
units above the ground.
This will position it very approximately at eye-level, assuming each unit in our world represents one metre. If you load my cube model, it should feel a little like a first person camera walking around as though the cube is on the ground.
Don't forget if you want to use a unit other than metres that's fine, just remember you need to use that unit consistently throughout your code. That includes inside any model files you read.
The cube I'm using as an example model is just under 1 unit in size, so if you're using feet throughout your code, the cube will seem tiny. Don't forget you can import any model in Blender, and then re-export the model to fix this. When exporting there is a "scale" property you can set to resize the model as you wish.
Updating our Shaders
Our current shaders expect to be passed vertex positions, colours, and texture coordinates from which they should do their drawing.
We therefore need to update these slightly to remove the code for handling the colour
input, and replace it with the model's normals, which we won't do anything with for now.
Here's the updated main_vertex.glsl
:
1. | #version 460 |
2. | |
3. | layout(location = 0) uniform mat4 uPMatrix; |
4. | layout(location = 1) uniform mat4 uVMatrix; |
5. | layout(location = 2) uniform mat4 uMMatrix; |
6. | |
7. | layout(location = 0) in vec3 aPosition; |
+ 8. |
|
9. | layout(location = 2) in vec2 aTextureCoords; |
10. | |
11. | out vec2 textureCoords; |
12. | |
13. | void main() |
14. | { |
15. | textureCoords = aTextureCoords; |
16. | gl_Position = uPMatrix * uVMatrix * uMMatrix * vec4(aPosition, 1.0); |
17. | } |
The colour
output variable has been removed, as well as the code for setting it in the main
function.
I've then just renamed location 1 to aNormal
, and left this variable unused for now.
Likewise for main_fragment.glsl
:
1. |
|
2. |
|
3. |
|
4. |
|
5. |
|
6. |
|
7. |
|
8. |
|
9. |
|
10. |
|
11. |
|
12. |
|
Nothing added here!
I've simply made sure to remove the colour
input, and any references to it when setting the fragment's colour.
Conclusion
If you build your code, you should hopefully now be able to see our wooden crate has gained an extra dimension.
The nice thing about reaching this point is that you should now be able to start crafting any models you like, texturing them, and importing them into your slowly evolving world!
While our program only supports Wavefront OBJ models right now, restricting us to static, non-animated models, we'll add support for more formats in the future.
In the next lesson though, we'll look at how we can build an "entity" system, allowing us to import a model once, but place it in multiple locations within our world. See you there!