In this chapter, we will cover:
- Setting up the OpenGL v3.3 core profile on Visual Studio 2010 using the GLEW and freeglut libraries
- Designing a GLSL shader class
- Rendering a simple colored triangle using shaders
- Doing a ripple mesh deformer using the vertex shader
- Dynamically subdividing a plane using the geometry shader
- Dynamically subdividing a plane using the geometry shader with instanced rendering
- Drawing a 2D image in a window using the fragment shader and SOIL image loading library
The OpenGL API has seen various changes since its creation in 1992. With every new version, new features were added and additional functionality was exposed on supporting hardware through extensions. Until OpenGL v2.0 (which was introduced in 2004), the functionality in the graphics pipeline was fixed, that is, there was a fixed set of operations hardwired in the graphics hardware and it was impossible to modify the graphics pipeline. With OpenGL v2.0, shader objects were introduced for the first time, enabling programmers to modify the graphics pipeline through special programs called shaders, written in the OpenGL Shading Language (GLSL).
After OpenGL v2.0, the next major version was v3.0. This version introduced two profiles for working with OpenGL: the core profile and the compatibility profile. The core profile contains all of the non-deprecated functionality, whereas the compatibility profile retains deprecated functionality for backwards compatibility. As of 2012, the latest version of OpenGL available is OpenGL v4.3. Beyond OpenGL v3.0, the changes required in application code are not as drastic as those needed to move from OpenGL v2.0 to OpenGL v3.0 and above.
We will start with a very basic example in which we will set up the modern OpenGL v3.3 core profile. This example will simply create a blank window and clear it with a red color.
OpenGL, or any other graphics API for that matter, requires a window to display graphics in. This is carried out through platform-specific code. Historically, the GLUT library provided windowing functionality in a platform independent manner. However, this library was not maintained with each new OpenGL release. Fortunately, another independent project, freeglut, followed in GLUT's footsteps by providing similar (and in some cases better) windowing support in a platform independent way. In addition, it also helps with the creation of the OpenGL core/compatibility profile contexts. The latest version of freeglut may be downloaded from http://freeglut.sourceforge.net. The version used in the source code accompanying this book is v2.8.0. After downloading the freeglut library, you will have to compile it to generate the libs/dlls.
The extension mechanism provided by OpenGL still exists. To aid with getting the appropriate function pointers, the GLEW library is used. The latest version can be downloaded from http://glew.sourceforge.net. The version of GLEW used in the source code accompanying this book is v1.9.0. If the source release is downloaded, you will have to build GLEW first to generate the libs and dlls on your platform. You may also download the pre-built binaries.
Prior to OpenGL v3.0, the OpenGL API provided support for matrices by providing specific matrix stacks such as the modelview, projection, and texture matrix stacks. In addition, transformation functions such as translate, rotate, and scale, as well as projection functions were also provided. Moreover, immediate mode rendering was supported, allowing application programmers to directly push the vertex information to the hardware.
In OpenGL v3.0 and above, all of these functionalities are removed from the core profile, whereas for backward compatibility they are retained in the compatibility profile. If we use the core profile (which is the recommended approach), it is our responsibility to implement all of these functionalities, including all matrix handling and transformations. Fortunately, a library called glm exists that provides math-related classes such as vectors and matrices, along with additional convenience functions and classes. For all of the demos in this book, we will use the glm library. Since it is a header-only library, there are no linker libraries for glm. The latest version of glm can be downloaded from http://glm.g-truc.net. The version used for the source code in this book is v0.9.4.0.
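As a quick illustration of the kind of code glm lets us write in place of the removed matrix functions, here is a minimal sketch; the function name and values are ours and not taken from the book's code:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::perspective, glm::translate
#include <glm/gtc/type_ptr.hpp>          // glm::value_ptr

// Build a combined modelview-projection matrix on the CPU.
// In glm v0.9.4 the field of view is given in degrees by default.
glm::mat4 GetMVP(float aspectRatio) {
    glm::mat4 P  = glm::perspective(45.0f, aspectRatio, 0.1f, 100.0f);
    glm::mat4 MV = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
    return P * MV;  // later uploaded with glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(MVP))
}
```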
There are several image formats available. It is not a trivial task to write an image loader for such a large number of image formats. Fortunately, there are several image loading libraries that make image loading a trivial task. In addition, they provide support for both loading and saving of images in various formats. One such library is the SOIL image loading library. The latest version of SOIL can be downloaded from http://www.lonesock.net/soil.html.
Once we have downloaded the SOIL library, we extract the file to a location on the hard disk. Next, we set up the include and library paths in the Visual Studio environment. The include path on my development machine is D:\Libraries\soil\Simple OpenGL Image Library\src whereas the library path is set to D:\Libraries\soil\Simple OpenGL Image Library\lib\VC10_Debug. Of course, the paths on your system will be different, but these are the folders that the include and library directories should point to.
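To give an idea of how trivial image loading becomes with SOIL, here is a minimal sketch; the image filename is only a placeholder and this call is not part of the current recipe:

```cpp
#include <GL/glew.h>
#include <SOIL.h>

// Load an image file directly into an OpenGL texture object.
// Returns 0 if the file could not be loaded.
GLuint LoadTexture(const char* filename) {
    return SOIL_load_OGL_texture(filename,
                                 SOIL_LOAD_AUTO,      // keep the image's own channel count
                                 SOIL_CREATE_NEW_ID,  // let SOIL generate the texture ID
                                 SOIL_FLAG_MIPMAPS);  // also build mipmaps
}
```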
These steps will help us to set up our development environment. For all of the recipes in this book, the Visual Studio 2010 Professional version is used. Readers may also use the free Express edition or any other version of Visual Studio (for example, Ultimate/Enterprise). Since there is a myriad of development environments, we have also provided premake script files to make it easier for users on other platforms.
The code for this recipe is in the Chapter1/GettingStarted directory.
Tip
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
Let us set up the development environment using the following steps:
- After downloading the required libraries, we set up the Visual Studio 2010 environment settings.
- We first create a new Win32 Console Application project as shown in the preceding screenshot. We set up an empty Win32 project as shown in the following screenshot:
- Next, we set up the include and library paths for the project by going into the Project menu and selecting project Properties. This opens a new dialog box. In the left pane, click on the Configuration Properties option and then on VC++ Directories.
- In the right pane, in the Include Directories field, add the GLEW and freeglut subfolder paths.
- Similarly, in the Library Directories, add the path to the lib subfolder of GLEW and freeglut libraries as shown in the following screenshot:
- Next, we add a new .cpp file to the project and name it main.cpp. This is the main source file of our project. You may also browse through Chapter1/GettingStarted/GettingStarted/main.cpp, which does all this setup already.
- Let us skim through the Chapter1/GettingStarted/GettingStarted/main.cpp file piece by piece.

```cpp
#include <GL/glew.h>
#include <GL/freeglut.h>
#include <iostream>
```
These lines are the include files that we will add to all of our projects. The first is the GLEW header, the second is the freeglut header, and the final include is the standard input/output header.
- In Visual Studio, we can add the required linker libraries in two ways. The first way is through the Visual Studio environment (by going to the Properties menu item in the Project menu). This opens the project's property pages. In the configuration properties tree, we collapse the Linker subtree and click on the Input item. The first field in the right pane is Additional Dependencies. We can add the linker library in this field as shown in the following screenshot:
- The second way is to add the glew32.lib file to the linker settings programmatically. This can be achieved by adding the following pragma:

```cpp
#pragma comment(lib, "glew32.lib")
```
- The next line is the using directive to enable access to the functions in the std namespace. This is not mandatory, but we include it here so that we do not have to prefix std:: to any standard library function from the iostream header file.

```cpp
using namespace std;
```
- The next lines define the width and height constants which will be the screen resolution for the window. After these declarations, there are five function definitions. The OnInit() function is used for initializing any OpenGL state or object, OnShutdown() is used to delete an OpenGL object, OnResize() is used to handle the resize event, OnRender() helps to handle the paint event, and main() is the entry point of the application. We start with the definition of the main() function.

```cpp
const int WIDTH  = 1280;
const int HEIGHT = 960;

int main(int argc, char** argv) {
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
  glutInitContextVersion(3, 3);
  glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG);
  glutInitContextProfile(GLUT_CORE_PROFILE);
  glutInitWindowSize(WIDTH, HEIGHT);
```
- The first line, glutInit, initializes the GLUT environment. We pass the command line arguments to this function from our entry point. Next, we set up the display mode for our application. In this case, we request the GLUT framework to provide support for a depth buffer, double buffering (that is, a front and a back buffer for smooth, flicker-free rendering), and an RGBA frame buffer format (that is, with red, green, blue, and alpha channels). Next, we set the required OpenGL context version using glutInitContextVersion. The first parameter is the major version of OpenGL and the second parameter is the minor version. For example, if we want to create an OpenGL v4.3 context, we would call glutInitContextVersion(4, 3). Next, the context flags and profile are specified:

```cpp
glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG);
glutInitContextProfile(GLUT_CORE_PROFILE);
```
- For modern OpenGL (v3.2 and above, including v3.3), there are two profiles available: the core profile (a pure shader-based profile without support for the OpenGL fixed functionality) and the compatibility profile (which supports the OpenGL fixed functionality). All of the matrix stack functionality (glMatrixMode(*), glTranslate*, glRotate*, glScale*, and so on) and immediate mode calls such as glVertex*, glTexCoord*, and glNormal* of legacy OpenGL are retained in the compatibility profile, but they are removed from the core profile. In our case, we request a forward compatible core profile, which means that no fixed function OpenGL functionality is available.
- Next, we set the screen size and create the window:

```cpp
glutInitWindowSize(WIDTH, HEIGHT);
glutCreateWindow("Getting started with OpenGL 3.3");
```
- Next, we initialize the GLEW library. It is important to initialize GLEW after the OpenGL context has been created. If glewInit returns GLEW_OK, the initialization succeeded; otherwise it failed.

```cpp
glewExperimental = GL_TRUE;
GLenum err = glewInit();
if (GLEW_OK != err) {
  cerr<<"Error: "<<glewGetErrorString(err)<<endl;
} else {
  if (GLEW_VERSION_3_3) {
    cout<<"Driver supports OpenGL 3.3\nDetails:"<<endl;
  }
}
cout<<"\tUsing glew "<<glewGetString(GLEW_VERSION)<<endl;
cout<<"\tVendor: "<<glGetString(GL_VENDOR)<<endl;
cout<<"\tRenderer: "<<glGetString(GL_RENDERER)<<endl;
cout<<"\tVersion: "<<glGetString(GL_VERSION)<<endl;
cout<<"\tGLSL: "<<glGetString(GL_SHADING_LANGUAGE_VERSION)<<endl;
```
The glewExperimental global switch ensures that GLEW exposes an extension if it has valid entry points in the driver, even when the driver (for example, an experimental or pre-release driver) does not report that extension through the standard mechanism. After GLEW is initialized, diagnostic information such as the GLEW version, the graphics vendor, the OpenGL renderer, and the shading language version is printed to the standard output.
- Finally, we call our initialization function OnInit() and then register our uninitialization function OnShutdown() using glutCloseFunc—the close callback, which will be called when the window is about to close. Next, we attach our display and reshape functions to their corresponding callbacks. The main function ends with a call to glutMainLoop(), which starts the application's main loop.

```cpp
  OnInit();
  glutCloseFunc(OnShutdown);
  glutDisplayFunc(OnRender);
  glutReshapeFunc(OnResize);
  glutMainLoop();
  return 0;
}
```
The remaining functions are defined as follows:
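(The listing below is a minimal sketch consistent with the description that follows; the exact code in Chapter1/GettingStarted/GettingStarted/main.cpp may differ slightly.)

```cpp
void OnInit() {
  // Set the clear color to red (R:1, G:0, B:0, A:0)
  glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
}

void OnShutdown() { }            // nothing to delete in this simple example

void OnResize(int w, int h) { }  // nothing to handle in this simple example

void OnRender() {
  // Clear the color and depth buffers, then show the back buffer
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glutSwapBuffers();
}
```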
For this simple example, we set the clear color to red (R:1, G:0, B:0, A:0). The first three values are the red, green, and blue channels and the last is the alpha channel, which is used in alpha blending. The only other function defined in this simple example is the OnRender() function, which is our display callback function that is called on the paint event. This function first clears the color and depth buffers to the clear color and clear depth values respectively.
Tip
Similar to the color buffer, there is another buffer called the depth buffer. Its clear value can be set using the glClearDepth function. It is used for hardware-based hidden surface removal. It simply stores the depth of the nearest fragment encountered so far. The incoming fragment's depth value overwrites the stored value depending on the depth comparison function specified for the depth test using the glDepthFunc function. By default, the depth value gets overwritten if the incoming fragment's depth is lower than the existing depth in the depth buffer.
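For example, a typical depth-test setup (not part of this recipe's code, which draws nothing) looks like the following:

```cpp
glEnable(GL_DEPTH_TEST);  // enable hardware hidden surface removal
glClearDepth(1.0f);       // clear value: the farthest possible depth
glDepthFunc(GL_LESS);     // keep the fragment nearest to the camera (the default)
```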
The glutSwapBuffers function is then called to set the current back buffer as the current front buffer that is shown on screen. This call is required in a double buffered OpenGL application. Running the code gives us the output shown in the following screenshot.
We will now have a look at how to set up shaders. Shaders are special programs that are run on the GPU. There are different shaders for controlling different stages of the programmable graphics pipeline. In the modern GPU, these include the vertex shader (which is responsible for calculating the clip-space position of a vertex), the tessellation control shader (which is responsible for determining the amount of tessellation of a given patch), the tessellation evaluation shader (which computes the interpolated positions and other attributes on the tessellation result), the geometry shader (which processes primitives and can add additional primitives and vertices if needed), and the fragment shader (which converts a rasterized fragment into a colored pixel and a depth). The modern GPU pipeline highlighting the different shader stages is shown in the following figure.
The GLSLShader class is defined in the GLSLShader.[h/cpp] files. We first declare the constructor and destructor, which initialize the member variables. The next three functions, LoadFromString, LoadFromFile, and CreateAndLinkProgram, handle shader compilation, linking, and program creation. The next two functions, Use and UnUse, bind and unbind the program. Two std::map data structures are used; they store the attribute's/uniform's name as the key and its location as the value. This is done to avoid a redundant call to query the attribute's/uniform's location every frame, or whenever the location is required to access the attribute/uniform. The next two functions, AddAttribute and AddUniform, add the locations of attributes and uniforms into their respective std::map (_attributeList and _uniformLocationList).
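Putting this together, the class interface looks roughly like the following sketch; the exact declaration in GLSLShader.h may differ in details such as parameter types:

```cpp
#include <map>
#include <string>
#include <GL/glew.h>

class GLSLShader {
public:
  GLSLShader();
  ~GLSLShader();
  void LoadFromString(GLenum whichShader, const std::string& source);
  void LoadFromFile(GLenum whichShader, const std::string& filename);
  void CreateAndLinkProgram();
  void Use();
  void UnUse();
  void AddAttribute(const std::string& attribute);
  void AddUniform(const std::string& uniform);
  GLuint operator[](const std::string& attribute);  // returns the stored attribute location
  GLuint operator()(const std::string& uniform);    // returns the stored uniform location
  void DeleteShaderProgram();

private:
  GLuint _program;
  std::map<std::string, GLuint> _attributeList;
  std::map<std::string, GLuint> _uniformLocationList;
};
```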
In a typical shader application, the usage of the GLSLShader object is as follows:
- Create the GLSLShader object either on the stack (for example, GLSLShader shader;) or on the heap (for example, GLSLShader* shader = new GLSLShader();)
- Call LoadFromFile on the GLSLShader object reference
- Call CreateAndLinkProgram on the GLSLShader object reference
- Call Use on the GLSLShader object reference to bind the shader object
- Call AddAttribute/AddUniform to store the locations of all of the shader's attributes and uniforms respectively
- Call UnUse on the GLSLShader object reference to unbind the shader object
A shader object is created by calling the following four functions in sequence: glCreateShader, glShaderSource, glCompileShader, and glGetShaderInfoLog. After the shader object is created, a shader program object is created by calling the following four functions, in this sequence: glCreateProgram, glAttachShader, glLinkProgram, and glGetProgramInfoLog.
In the GLSLShader class, the first four steps are handled in the LoadFromString function and the latter four steps are handled by the CreateAndLinkProgram member function. After the shader program object has been created, we can set the program for execution on the GPU. This process is called shader binding. It is carried out by the glUseProgram function, which is called through the Use/UnUse functions in the GLSLShader class.
For accessing any attribute/uniform location, we provide an indexer in the GLSLShader class. In cases where there is an error in the compilation or linking stage, the shader log is printed to the console. Say, for example, our GLSLShader object is called shader and our shader contains a uniform called MVP. We can first add it to the map of GLSLShader by calling shader.AddUniform("MVP"). This function adds the uniform's location to the map. Then, when we want to access the uniform, we directly call shader("MVP") and it returns the location of our uniform.
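In code, this round trip looks like the following short sketch, mirroring the rendering code used in the next recipe (P and MV are glm matrices declared elsewhere):

```cpp
shader.Use();
  // shader("MVP") looks up the location that AddUniform("MVP") stored earlier
  glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P * MV));
shader.UnUse();
```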
We will now put the GLSLShader class to use by implementing an application to render a simple colored triangle on screen. The code for this recipe is in the Chapter1/SimpleTriangle directory.
Let us start this recipe using the following steps:
- Define a vertex shader (shaders/shader.vert) to transform the object space vertex position to clip space.

```glsl
#version 330 core
layout(location = 0) in vec3 vVertex;
layout(location = 1) in vec3 vColor;
smooth out vec4 vSmoothColor;
uniform mat4 MVP;
void main()
{
  vSmoothColor = vec4(vColor,1);
  gl_Position = MVP*vec4(vVertex,1);
}
```
- Define a fragment shader (shaders/shader.frag) to output a smoothly interpolated color from the vertex shader to the frame buffer.

```glsl
#version 330 core
smooth in vec4 vSmoothColor;
layout(location=0) out vec4 vFragColor;
void main()
{
  vFragColor = vSmoothColor;
}
```
- Load the two shaders using the GLSLShader class in the OnInit() function.

```cpp
shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert");
shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/shader.frag");
shader.CreateAndLinkProgram();
shader.Use();
  shader.AddAttribute("vVertex");
  shader.AddAttribute("vColor");
  shader.AddUniform("MVP");
shader.UnUse();
```
- Create the geometry and topology. We will store the attributes together in an interleaved vertex format, that is, we will store the vertex attributes in a struct containing two attributes, position and color (a sketch of the Vertex struct itself follows these steps).

```cpp
vertices[0].color = glm::vec3(1,0,0);
vertices[1].color = glm::vec3(0,1,0);
vertices[2].color = glm::vec3(0,0,1);

vertices[0].position = glm::vec3(-1,-1,0);
vertices[1].position = glm::vec3( 0, 1,0);
vertices[2].position = glm::vec3( 1,-1,0);

indices[0] = 0;
indices[1] = 1;
indices[2] = 2;
```
- Store the geometry and topology in the buffer object(s). The stride parameter controls the number of bytes to jump to reach the next element of the same attribute. For the interleaved format, it is typically the size of our vertex struct in bytes, that is, sizeof(Vertex).

```cpp
glGenVertexArrays(1, &vaoID);
glGenBuffers(1, &vboVerticesID);
glGenBuffers(1, &vboIndicesID);

glBindVertexArray(vaoID);
glBindBuffer(GL_ARRAY_BUFFER, vboVerticesID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW);

glEnableVertexAttribArray(shader["vVertex"]);
glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE, stride, 0);

glEnableVertexAttribArray(shader["vColor"]);
glVertexAttribPointer(shader["vColor"], 3, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid*)offsetof(Vertex, color));

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
```
- Set up the resize handler to set up the viewport and projection matrix.

```cpp
void OnResize(int w, int h) {
  glViewport(0, 0, (GLsizei)w, (GLsizei)h);
  P = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f);
}
```
- Set up the rendering code to bind the GLSLShader shader, pass the uniforms, and then draw the geometry.

```cpp
void OnRender() {
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  shader.Use();
    glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
  shader.UnUse();
  glutSwapBuffers();
}
```
- Delete the shader and other OpenGL objects.

```cpp
void OnShutdown() {
  shader.DeleteShaderProgram();
  glDeleteBuffers(1, &vboVerticesID);
  glDeleteBuffers(1, &vboIndicesID);
  glDeleteVertexArrays(1, &vaoID);
}
```
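The Vertex struct and the global arrays filled in the geometry creation step above are not listed there; a declaration consistent with the interleaved layout described in these steps would be the following sketch (the demo's actual source may differ slightly):

```cpp
struct Vertex {
  glm::vec3 position;  // bytes 0-11
  glm::vec3 color;     // bytes 12-23, so offsetof(Vertex, color) == 12
};

Vertex   vertices[3];  // triangle vertices
GLushort indices[3];   // triangle indices, drawn with GL_UNSIGNED_SHORT
```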
For this simple example, we will only use a vertex shader (shaders/shader.vert) and a fragment shader (shaders/shader.frag). The first line in the shader signifies the GLSL version of the shader. Starting from OpenGL v3.3, the GLSL version matches the OpenGL version, so for OpenGL v3.3 the GLSL version is 330. In addition, since we are interested in the core profile, we add another keyword following the version number to signify that we have a core profile shader.
The vertex shader simply outputs the input per-vertex color to the output (vSmoothColor). Such attributes that are interpolated across shader stages are called varying attributes. It also calculates the clip space position by multiplying the per-vertex position (vVertex) with the combined modelview projection (MVP) matrix.
The fragment shader writes the input color (vSmoothColor) to the frame buffer output (vFragColor).
In the simple triangle demo application code, we store the GLSLShader object in the global scope so that we can access it in any function we desire. We modify the OnInit() function by adding the shader loading lines shown in the steps above.
The first two lines create the GLSL shader of the given type by reading the contents of the file with the given filename. In all of the recipes in this book, the vertex shader files are stored with a .vert extension, the geometry shader files with a .geom extension, and the fragment shader files with a .frag extension. Next, the GLSLShader::CreateAndLinkProgram function is called to create the shader program from the shader object. Next, the program is bound and then the locations of attributes and uniforms are stored.
In OpenGL v3.3 and above, we typically store the geometry information in buffer objects, which are linear arrays of memory managed by the GPU. In order to facilitate the handling of buffer object(s) during rendering, we use a vertex array object (VAO). This object stores references to the buffer objects that are bound after the VAO is bound. The advantage we get from using a VAO is that after the VAO is bound, we do not have to bind the buffer object(s) individually.
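For example, in a renderer that manages several objects, the per-draw code reduces to the following sketch once each object's VAO has been set up (this particular demo simply leaves its single VAO bound):

```cpp
// Binding the VAO restores all the vertex attribute and element buffer
// bindings that were recorded while it was bound during setup.
glBindVertexArray(vaoID);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
```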
In this demo, we declare three variables in the global scope: vaoID for the VAO handle, and vboVerticesID and vboIndicesID for the buffer object handles. The VAO is created by calling the glGenVertexArrays function. The buffer objects are generated using the glGenBuffers function. The first parameter for both of these functions is the total number of objects required, and the second parameter is the reference to where the object handle is stored. These functions are called in the OnInit() function.
In the next few calls, we enable the vertex attributes. This function needs the location of the attribute, which we obtain using GLSLShader::operator[], passing it the name of the attribute whose location we require. We then call glVertexAttribPointer to tell the GPU how many elements there are and what their type is, whether the attribute is normalized, the stride (that is, the total number of bytes to skip to reach the next element; in our case, since the attributes are stored in a Vertex struct, the stride is the size of our Vertex struct), and finally, the pointer to the attribute in the given array. The last parameter requires explanation in case we have interleaved attributes (as we have). The offsetof operator returns the offset in bytes of the attribute in the given struct. Hence, the GPU knows how many bytes it needs to skip in order to access the attribute of the given type. For the vVertex attribute, the last parameter is 0 since that attribute starts at the beginning of each vertex. For the second attribute vColor, it needs to hop 12 bytes from the start of each vertex before the vColor attribute is reached in the given vertices array.
To complement the object generation in the OnInit() function, we must provide the object deletion code. This is handled in the OnShutdown() function. We first delete the shader program by calling the GLSLShader::DeleteShaderProgram function. Next, we delete the two buffer objects (vboVerticesID and vboIndicesID) and finally we delete the vertex array object (vaoID).
The rendering code of the simple triangle demo is given in the OnRender() function shown in the steps above.
The rendering code first clears the color and depth buffers and binds the shader program by calling the GLSLShader::Use() function. It then passes the combined modelview and projection matrix to the GPU by invoking the glUniformMatrix4fv function. The first parameter is the location of the uniform, which we obtain from the GLSLShader::operator() function by passing it the name of the uniform whose location we need. The second parameter is the total number of matrices we wish to pass. The third parameter is a Boolean signifying whether the matrix needs to be transposed, and the final parameter is the float pointer to the matrix data. Here we use the glm::value_ptr function to get the float pointer from the matrix object. Note that the OpenGL matrices are concatenated right to left, since OpenGL uses a right-handed coordinate system with a column-major matrix layout. Hence we keep the projection matrix on the left and the modelview matrix on the right. For this simple example, the modelview matrix (MV) is set as the identity matrix.
After this function, the glDrawElements call is made. Since we have left our VAO object (vaoID) bound, we pass 0 to the final parameter of this function. This tells the GPU to use the references of the GL_ELEMENT_ARRAY_BUFFER and GL_ARRAY_BUFFER binding points of the bound VAO. Thus we do not need to explicitly bind the vboVerticesID and vboIndicesID buffer objects again. After this call, we unbind the shader program by calling the GLSLShader::UnUse() function. Finally, we call the glutSwapBuffers function to show the back buffer on screen. After compiling and running, we get the output as shown in the following figure:
Let us start this recipe using the following steps:
- Define a vertex shader (
shaders/shader.vert
) to transform the object space vertex position to clip space.#version 330 core layout(location = 0) in vec3 vVertex; layout(location = 1) in vec3 vColor; smooth out vec4 vSmoothColor; uniform mat4 MVP; void main() { vSmoothColor = vec4(vColor,1); gl_Position = MVP*vec4(vVertex,1); }
- Define a fragment shader (
shaders/shader.frag
) to output a smoothly interpolated color from the vertex shader to the frame buffer.#version 330 core smooth in vec4 vSmoothColor; layout(location=0) out vec4 vFragColor; void main() { vFragColor = vSmoothColor; }
- Load the two shaders using the
GLSLShader
class in theOnInit()
function.shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert"); shader.LoadFromFile(GL_FRAGMENT_SHADER,"shaders/shader.frag"); shader.CreateAndLinkProgram(); shader.Use(); shader.AddAttribute("vVertex"); shader.AddAttribute("vColor"); shader.AddUniform("MVP"); shader.UnUse();
- Create the geometry and topology. We will store the attributes together in an interleaved vertex format, that is, we will store the vertex attributes in a struct containing two attributes, position and color.
vertices[0].color=glm::vec3(1,0,0); vertices[1].color=glm::vec3(0,1,0); vertices[2].color=glm::vec3(0,0,1); vertices[0].position=glm::vec3(-1,-1,0); vertices[1].position=glm::vec3(0,1,0); vertices[2].position=glm::vec3(1,-1,0); indices[0] = 0; indices[1] = 1; indices[2] = 2;
- Store the geometry and topology in the buffer object(s). The stride parameter controls the number of bytes to jump to reach the next element of the same attribute. For the interleaved format, it is typically the size of our vertex struct in bytes, that is,
sizeof(Vertex)
.glGenVertexArrays(1, &vaoID); glGenBuffers(1, &vboVerticesID); glGenBuffers(1, &vboIndicesID); glBindVertexArray(vaoID); glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID); glBufferData (GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW); glEnableVertexAttribArray(shader["vVertex"]); glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,stride,0); glEnableVertexAttribArray(shader["vColor"]); glVertexAttribPointer(shader["vColor"], 3, GL_FLOAT, GL_FALSE,stride, (const GLvoid*)offsetof(Vertex, color)); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
- Set up the resize handler to set up the viewport and projection matrix.
void OnResize(int w, int h) { glViewport (0, 0, (GLsizei) w, (GLsizei) h); P = glm::ortho(-1,1,-1,1); }
- Set up the rendering code to bind the
GLSLShader
shader, pass the uniforms, and then draw the geometry.void OnRender() { glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); shader.Use(); glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV)); glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0); shader.UnUse(); glutSwapBuffers(); }
- Delete the shader and other OpenGL objects.
void OnShutdown() { shader.DeleteShaderProgram(); glDeleteBuffers(1, &vboVerticesID); glDeleteBuffers(1, &vboIndicesID); glDeleteVertexArrays(1, &vaoID); }
For this simple example, we will only use a vertex shader (shaders/shader.vert
) and a fragment shader (shaders/shader.frag
). The first line in the shader signifies the GLSL version of the shader. Starting from OpenGL v3.0, the version specifiers correspond to the OpenGL version used. So for OpenGL v3.3, the GLSL version is 330. In addition, since we are interested in the core profile, we add another keyword following the version number to signify that we have a core profile shader.
The vertex shader simply outputs the input per-vertex color to the output (vSmoothColor
). Such attributes that are interpolated across shader stages are called varying attributes. It also calculates the clip space position by multiplying the per-vertex position (vVertex
) with the combined modelview projection (MVP) matrix.
The fragment shader writes the input color (vSmoothColor
) to the frame buffer output (vFragColor
).
In the simple triangle demo application code, we store the GLSLShader
object reference in the global scope so that we can access it in any function we desire. We modify the OnInit()
function by adding the following lines:
The first two lines create the GLSL shader of the given type by reading the contents of the file with the given filename. In all of the recipes in this book, the vertex shader files are stored with a .vert
extension, the geometry shader files with a .geom
extension, and the fragment shader files with a .frag
extension. Next, the GLSLShader::CreateAndLinkProgram
function is called to create the shader program from the shader object. Next, the program is bound and then the locations of attributes and uniforms are stored.
In OpenGL v3.3 and above, we typically store the geometry information in buffer objects, which is a linear array of memory managed by the GPU. In order to facilitate the handling of buffer object(s) during rendering, we use a vertex array object (VAO). This object stores references to buffer objects that are bound after the VAO is bound. The advantage we get from using a VAO is that after the VAO is bound, we do not have to bind the buffer object(s).
In this demo, we declare three variables in global scope; vaoID
for VAO handling, and vboVerticesID
and vboIndicesID
for buffer object handling. The VAO object is created by calling the glGenVertexArrays
function. The buffer objects are generated using the glGenBuffers
function. The first parameter for both of these functions is the total number of objects required, and the second parameter is the reference to where the object handle is stored. These functions are called in the OnInit()
function.
In the next few calls, we enable the vertex attributes. This function needs the location of the attribute, which we obtain by the GLSLShader::operator[]
, passing it the name of the attribute whose location we require. We then call glVertexAttributePointer
to tell the GPU how many elements there are and what is their type, whether the attribute is normalized, the stride (which means the total number of bytes to skip to reach the next element; for our case since the attributes are stored in a Vertex
struct, the next element's stride is the size of our Vertex
struct), and finally, the pointer to the attribute in the given array. The last parameter requires explanation in case we have interleaved attributes (as we have). The offsetof
operator returns the offset in bytes, to the attribute in the given struct. Hence, the GPU knows how many bytes it needs to skip in order to access the next attribute of the given type. For the vVertex
attribute, the last parameter is 0
since the next element is accessed immediately after the stride. For the second attribute vColor
, it needs to hop 12 bytes before the next vColor
attribute is obtained from the given vertices array.
To complement the object generation in the OnInit()
function, we must provide the object deletion code. This is handled in the OnShutdown()
function. We first delete the shader program by calling the GLSLShader::DeleteShaderProgram
function. Next, we delete the two buffer objects (vboVerticesID
and vboIndicesID
) and finally we delete the vertex array object (vaoID
).
The rendering code of the simple triangle demo is as follows:
The rendering code first clears the color and depth buffer and binds the shader program by calling the GLSLShader::Use()
function. It then passes the combined modelview and projection matrix to the GPU by invoking the glUniformMatrix4fv
function. The first parameter is the location of the uniform which we obtain from the GLSLShader::operator()
function, by passing it the name of the uniform whose location we need. The second parameter is the total number of matrices we wish to pass. The third parameter is a Boolean signifying if the matrix needs to be transposed, and the final parameter is the float pointer to the matrix object. Here we use the glm::value_ptr
function to get the float pointer from the matrix object. Note that the OpenGL matrices are concatenated right to left since it follows a right handed coordinate system in a column major layout. Hence we keep the projection matrix on the left and the modelview matrix on the right. For this simple example, the modelview matrix (MV
) is set as the identity matrix.
After this function, the glDrawElements
call is made. Since we have left our VAO object (vaoID
) bound, we pass 0
to the final parameter of this function. This tells the GPU to use the references of the GL_ELEMENT_ARRAY_BUFFER
and GL_ARRAY_BUFFER
binding points of the bound VAO. Thus we do not need to explicitly bind the vboVerticesID
and vboIndicesID
buffer objects again. After this call, we unbind the shader program by calling the GLSLShader::UnUse()
function. Finally, we call the glutSwapBuffer
function to show the back buffer on screen. After compiling and running, we get the output as shown in the following figure:
this recipe using the following steps:
- Define a vertex shader (
shaders/shader.vert
) to transform the object space vertex position to clip space.#version 330 core layout(location = 0) in vec3 vVertex; layout(location = 1) in vec3 vColor; smooth out vec4 vSmoothColor; uniform mat4 MVP; void main() { vSmoothColor = vec4(vColor,1); gl_Position = MVP*vec4(vVertex,1); }
- Define a fragment shader (
shaders/shader.frag
) to output a smoothly interpolated color from the vertex shader to the frame buffer.#version 330 core smooth in vec4 vSmoothColor; layout(location=0) out vec4 vFragColor; void main() { vFragColor = vSmoothColor; }
- Load the two shaders using the
GLSLShader
class in theOnInit()
function.shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert"); shader.LoadFromFile(GL_FRAGMENT_SHADER,"shaders/shader.frag"); shader.CreateAndLinkProgram(); shader.Use(); shader.AddAttribute("vVertex"); shader.AddAttribute("vColor"); shader.AddUniform("MVP"); shader.UnUse();
- Create the geometry and topology. We will store the attributes together in an interleaved vertex format, that is, we will store the vertex attributes in a struct containing two attributes, position and color.
vertices[0].color=glm::vec3(1,0,0); vertices[1].color=glm::vec3(0,1,0); vertices[2].color=glm::vec3(0,0,1); vertices[0].position=glm::vec3(-1,-1,0); vertices[1].position=glm::vec3(0,1,0); vertices[2].position=glm::vec3(1,-1,0); indices[0] = 0; indices[1] = 1; indices[2] = 2;
- Store the geometry and topology in the buffer object(s). The stride parameter controls the number of bytes to jump to reach the next element of the same attribute. For the interleaved format, it is typically the size of our vertex struct in bytes, that is,
sizeof(Vertex)
.glGenVertexArrays(1, &vaoID); glGenBuffers(1, &vboVerticesID); glGenBuffers(1, &vboIndicesID); glBindVertexArray(vaoID); glBindBuffer (GL_ARRAY_BUFFER, vboVerticesID); glBufferData (GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW); glEnableVertexAttribArray(shader["vVertex"]); glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,stride,0); glEnableVertexAttribArray(shader["vColor"]); glVertexAttribPointer(shader["vColor"], 3, GL_FLOAT, GL_FALSE,stride, (const GLvoid*)offsetof(Vertex, color)); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
- Set up the resize handler to set up the viewport and projection matrix.
void OnResize(int w, int h) { glViewport (0, 0, (GLsizei) w, (GLsizei) h); P = glm::ortho(-1,1,-1,1); }
- Set up the rendering code to bind the
GLSLShader
shader, pass the uniforms, and then draw the geometry.void OnRender() { glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); shader.Use(); glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV)); glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0); shader.UnUse(); glutSwapBuffers(); }
- Delete the shader and other OpenGL objects.
void OnShutdown() { shader.DeleteShaderProgram(); glDeleteBuffers(1, &vboVerticesID); glDeleteBuffers(1, &vboIndicesID); glDeleteVertexArrays(1, &vaoID); }
For this simple example, we will only use a vertex shader (shaders/shader.vert
) and a fragment shader (shaders/shader.frag
). The first line in the shader signifies the GLSL version of the shader. Starting from OpenGL v3.0, the version specifiers correspond to the OpenGL version used. So for OpenGL v3.3, the GLSL version is 330. In addition, since we are interested in the core profile, we add another keyword following the version number to signify that we have a core profile shader.
The vertex shader simply outputs the input per-vertex color to the output (vSmoothColor
). Such attributes that are interpolated across shader stages are called varying attributes. It also calculates the clip space position by multiplying the per-vertex position (vVertex
) with the combined modelview projection (MVP) matrix.
The fragment shader writes the input color (vSmoothColor
) to the frame buffer output (vFragColor
).
In the simple triangle demo application code, we store the GLSLShader
object reference in the global scope so that we can access it in any function we desire. We modify the OnInit()
function by adding the following lines:
The first two lines create the GLSL shader of the given type by reading the contents of the file with the given filename. In all of the recipes in this book, the vertex shader files are stored with a .vert
extension, the geometry shader files with a .geom
extension, and the fragment shader files with a .frag
extension. Next, the GLSLShader::CreateAndLinkProgram
function is called to create the shader program from the shader object. Next, the program is bound and then the locations of attributes and uniforms are stored.
In OpenGL v3.3 and above, we typically store the geometry information in buffer objects, which is a linear array of memory managed by the GPU. In order to facilitate the handling of buffer object(s) during rendering, we use a vertex array object (VAO). This object stores references to buffer objects that are bound after the VAO is bound. The advantage we get from using a VAO is that after the VAO is bound, we do not have to bind the buffer object(s).
In this demo, we declare three variables in global scope; vaoID
for VAO handling, and vboVerticesID
and vboIndicesID
for buffer object handling. The VAO object is created by calling the glGenVertexArrays
function. The buffer objects are generated using the glGenBuffers
function. The first parameter for both of these functions is the total number of objects required, and the second parameter is the reference to where the object handle is stored. These functions are called in the OnInit()
function.
In the next few calls, we enable the vertex attributes. This function needs the location of the attribute, which we obtain by the GLSLShader::operator[]
, passing it the name of the attribute whose location we require. We then call glVertexAttributePointer
to tell the GPU how many elements there are and what is their type, whether the attribute is normalized, the stride (which means the total number of bytes to skip to reach the next element; for our case since the attributes are stored in a Vertex
struct, the next element's stride is the size of our Vertex
struct), and finally, the pointer to the attribute in the given array. The last parameter requires explanation in case we have interleaved attributes (as we have). The offsetof
operator returns the offset in bytes, to the attribute in the given struct. Hence, the GPU knows how many bytes it needs to skip in order to access the next attribute of the given type. For the vVertex
attribute, the last parameter is 0
since the next element is accessed immediately after the stride. For the second attribute vColor
, it needs to hop 12 bytes before the next vColor
attribute is obtained from the given vertices array.
To complement the object generation in the OnInit() function, we must provide the object deletion code. This is handled in the OnShutdown() function. We first delete the shader program by calling the GLSLShader::DeleteShaderProgram function. Next, we delete the two buffer objects (vboVerticesID and vboIndicesID) and finally we delete the vertex array object (vaoID).
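A sketch of this cleanup, mirroring the creation code above:

  void OnShutdown() {
    shader.DeleteShaderProgram();
    glDeleteBuffers(1, &vboVerticesID);
    glDeleteBuffers(1, &vboIndicesID);
    glDeleteVertexArrays(1, &vaoID);
  }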
The rendering code of the simple triangle demo is as follows:
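A sketch of what this code looks like (the triangle's three indices and the global P and MV matrices are assumed to have been set up as described below):

  void OnRender() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    shader.Use();
      // MV is the identity matrix in this simple example
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
      glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
    shader.UnUse();
    glutSwapBuffers();
  }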
The rendering code first clears the color and depth buffers and binds the shader program by calling the GLSLShader::Use() function. It then passes the combined modelview and projection matrix to the GPU by invoking the glUniformMatrix4fv function. The first parameter is the location of the uniform, which we obtain from the GLSLShader::operator() function by passing it the name of the uniform whose location we need. The second parameter is the total number of matrices we wish to pass. The third parameter is a Boolean signifying whether the matrix needs to be transposed, and the final parameter is the float pointer to the matrix object. Here we use the glm::value_ptr function to get the float pointer from the matrix object. Note that OpenGL (and glm) matrices are stored in a column-major layout, so transformations are concatenated from right to left. Hence we keep the projection matrix on the left and the modelview matrix on the right. For this simple example, the modelview matrix (MV) is set as the identity matrix.
After this, the glDrawElements call is made. Since we have left our VAO object (vaoID) bound, we pass 0 as the final parameter of this function. This tells the GPU to use the references to the GL_ELEMENT_ARRAY_BUFFER and GL_ARRAY_BUFFER binding points of the bound VAO. Thus we do not need to explicitly bind the vboVerticesID and vboIndicesID buffer objects again. After this call, we unbind the shader program by calling the GLSLShader::UnUse() function. Finally, we call the glutSwapBuffers function to show the back buffer on screen. After compiling and running, we get the output as shown in the following figure:
In this recipe, we will deform a planar mesh using the vertex shader. We know that the vertex shader is responsible for outputting the clip space position of the given object space vertex. As part of this conversion, we can apply the modeling transformation to transform the given object space vertex to a world space position.
We can implement a ripple shader using the following steps:
- Define the vertex shader that deforms the object space vertex position.
  #version 330 core
  layout(location=0) in vec3 vVertex;
  uniform mat4 MVP;
  uniform float time;
  const float amplitude = 0.125;
  const float frequency = 4;
  const float PI = 3.14159;
  void main()
  {
    float distance = length(vVertex);
    float y = amplitude*sin(-PI*distance*frequency+time);
    gl_Position = MVP*vec4(vVertex.x, y, vVertex.z,1);
  }
- Define a fragment shader that simply outputs a constant color.
  #version 330 core
  layout(location=0) out vec4 vFragColor;
  void main()
  {
    vFragColor = vec4(1,1,1,1);
  }
- Load the two shaders using the GLSLShader class in the OnInit() function.
  shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert");
  shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/shader.frag");
  shader.CreateAndLinkProgram();
  shader.Use();
    shader.AddAttribute("vVertex");
    shader.AddUniform("MVP");
    shader.AddUniform("time");
  shader.UnUse();
- Create the geometry and topology.
  int count = 0;
  int i=0, j=0;
  for( j=0;j<=NUM_Z;j++) {
    for( i=0;i<=NUM_X;i++) {
      vertices[count++] = glm::vec3( ((float(i)/(NUM_X-1)) *2-1)* HALF_SIZE_X, 0, ((float(j)/(NUM_Z-1))*2-1)*HALF_SIZE_Z);
    }
  }
  GLushort* id=&indices[0];
  for (i = 0; i < NUM_Z; i++) {
    for (j = 0; j < NUM_X; j++) {
      int i0 = i * (NUM_X+1) + j;
      int i1 = i0 + 1;
      int i2 = i0 + (NUM_X+1);
      int i3 = i2 + 1;
      if ((j+i)%2) {
        *id++ = i0; *id++ = i2; *id++ = i1;
        *id++ = i1; *id++ = i2; *id++ = i3;
      } else {
        *id++ = i0; *id++ = i2; *id++ = i3;
        *id++ = i0; *id++ = i3; *id++ = i1;
      }
    }
  }
- Store the geometry and topology in the buffer object(s).
  glGenVertexArrays(1, &vaoID);
  glGenBuffers(1, &vboVerticesID);
  glGenBuffers(1, &vboIndicesID);
  glBindVertexArray(vaoID);
  glBindBuffer(GL_ARRAY_BUFFER, vboVerticesID);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW);
  glEnableVertexAttribArray(shader["vVertex"]);
  glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,0,0);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
- Set up the perspective projection matrix in the resize handler.
P = glm::perspective(45.0f, (GLfloat)w/h, 1.f, 1000.f);
- Set up the rendering code to bind the GLSLShader shader, pass the uniforms and then draw the geometry.
  void OnRender() {
    time = glutGet(GLUT_ELAPSED_TIME)/1000.0f * SPEED;
    glm::mat4 T=glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, dist));
    glm::mat4 Rx= glm::rotate(T, rX, glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 MV= glm::rotate(Rx, rY, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 MVP= P*MV;
    shader.Use();
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
      glUniform1f(shader("time"), time);
      glDrawElements(GL_TRIANGLES,TOTAL_INDICES,GL_UNSIGNED_SHORT,0);
    shader.UnUse();
    glutSwapBuffers();
  }
- Delete the shader and other OpenGL objects.
  void OnShutdown() {
    shader.DeleteShaderProgram();
    glDeleteBuffers(1, &vboVerticesID);
    glDeleteBuffers(1, &vboIndicesID);
    glDeleteVertexArrays(1, &vaoID);
  }
In this recipe, the only attribute passed in is the per-vertex position (vVertex). There are two uniforms: the combined modelview projection matrix (MVP) and the current time (time). We will use the time uniform to allow progression of the deformer so we can observe the ripple movement. After these declarations are three constants, namely amplitude (which controls how much the ripple moves up and down from the zero base line), frequency (which controls the total number of waves), and PI (a constant used in the wave formula). Note that we could have replaced the constants with uniforms and had them modified from the application code.
In our formula, we first find the distance (d) of the vertex from the origin using the Euclidean distance formula. This is given to us by the length built-in GLSL function. Next, we feed this distance, multiplied by the frequency (f) and (π), into the sin function. In our vertex shader, we replace the phase (φ) with time.
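Putting the pieces together, the displaced height computed in the vertex shader is:

  y = amplitude * sin(-PI * frequency * distance + time)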
Similar to the previous recipe, we declare the GLSLShader object in the global scope to allow maximum visibility. Next, we initialize the GLSLShader object in the OnInit() function. The only difference in this recipe is the addition of one more uniform (time).
This sort of topology can be generated using the following code:
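These are the vertex generation loops from the setup step above:

  int count = 0;
  for( j=0;j<=NUM_Z;j++) {
    for( i=0;i<=NUM_X;i++) {
      vertices[count++] = glm::vec3( ((float(i)/(NUM_X-1)) *2-1)* HALF_SIZE_X, 0, ((float(j)/(NUM_Z-1))*2-1)*HALF_SIZE_Z);
    }
  }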
In order to alternate the triangle directions and maintain their winding order, we take two different combinations, one for an even iteration and second for an odd iteration. This can be achieved using the following code:
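This is the corresponding index generation from the setup step above:

  GLushort* id=&indices[0];
  for (i = 0; i < NUM_Z; i++) {
    for (j = 0; j < NUM_X; j++) {
      int i0 = i * (NUM_X+1) + j;
      int i1 = i0 + 1;
      int i2 = i0 + (NUM_X+1);
      int i3 = i2 + 1;
      if ((j+i)%2) {
        // odd quad: split along one diagonal
        *id++ = i0; *id++ = i2; *id++ = i1;
        *id++ = i1; *id++ = i2; *id++ = i3;
      } else {
        // even quad: split along the other diagonal
        *id++ = i0; *id++ = i2; *id++ = i3;
        *id++ = i0; *id++ = i3; *id++ = i1;
      }
    }
  }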
After filling the vertices and indices arrays, we push this data to the GPU memory. We first create a vertex array object (vaoID) and two buffer objects, one for the GL_ARRAY_BUFFER binding of the vertices and one for the GL_ELEMENT_ARRAY_BUFFER binding of the indices array. These calls are exactly the same as in the previous recipe. The only difference is that now we only have a single per-vertex attribute, namely the vertex position (vVertex). The OnShutdown() function is also unchanged from the previous recipe.
Note that matrix multiplication in glm is applied from right to left, so the transformations we specify first are applied last. In our case the combined modelview matrix is calculated as MV = (T*(Rx*Ry)). The translation amount, dist, and the rotation values, rX and rY, are calculated in the mouse input functions based on the user's input.
The OnMouseMove function is defined as follows:
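The listing is not reproduced here, so the following is a minimal sketch consistent with the description below; the state variable (set in OnMouseDown) and the scaling factors for the zoom and rotation amounts are assumptions:

  void OnMouseMove(int x, int y) {
    if (state == 0) {
      // middle mouse button pressed: adjust the zoom amount (scaling factor assumed)
      dist += (y - oldY) / 60.0f;
    } else {
      // otherwise: adjust the rotation amounts (scaling factors assumed)
      rY += (x - oldX) / 5.0f;
      rX += (y - oldY) / 5.0f;
    }
    oldX = x;
    oldY = y;
    glutPostRedisplay();
  }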
The OnMouseMove function has only two parameters, the x and y screen location where the mouse currently is. The mouse move event is raised whenever the mouse enters and moves in the application window. Based on the state set in the OnMouseDown function, we calculate the zoom amount (dist) if the middle mouse button is pressed. Otherwise, we calculate the two rotation amounts (rX and rY). Next, we update the oldX and oldY positions for the next event. Finally, we request the freeglut framework to repaint our application window by calling the glutPostRedisplay() function. This call sends the repaint event which re-renders our scene.
The code for this recipe is located in the Chapter1\RippleDeformer directory.
After the vertex shader, the next programmable stage in the OpenGL v3.3 graphics pipeline is the geometry shader. This shader receives inputs from the vertex shader stage. We can either feed these unmodified to the next shader stage or we can add, omit, or modify vertices and primitives as desired. One thing that vertex shaders lack is access to the other vertices of the primitive; geometry shaders, in contrast, have access to all of the vertices of a single primitive.
In this recipe, we will dynamically subdivide a planar mesh using the geometry shader.
This recipe assumes that the reader knows how to render a simple triangle using vertex and fragment shaders with the OpenGL v3.3 core profile. We render four planar meshes in this recipe, which are placed next to each other to create a bigger planar mesh. Each of these meshes is subdivided using the same geometry shader. The code for this recipe is located in the Chapter1\SubdivisionGeometryShader directory.
We can implement the geometry shader using the following steps:
- Define a vertex shader (shaders/shader.vert) which outputs object space vertex positions directly.
  #version 330 core
  layout(location=0) in vec3 vVertex;
  void main() {
    gl_Position = vec4(vVertex, 1);
  }
- Define a geometry shader (shaders/shader.geom) which performs the subdivision of the quad. The shader is explained in the next section.
  #version 330 core
  layout (triangles) in;
  layout (triangle_strip, max_vertices=256) out;
  uniform int sub_divisions;
  uniform mat4 MVP;
  void main() {
    vec4 v0 = gl_in[0].gl_Position;
    vec4 v1 = gl_in[1].gl_Position;
    vec4 v2 = gl_in[2].gl_Position;
    float dx = abs(v0.x-v2.x)/sub_divisions;
    float dz = abs(v0.z-v1.z)/sub_divisions;
    float x=v0.x;
    float z=v0.z;
    for(int j=0;j<sub_divisions*sub_divisions;j++) {
      gl_Position = MVP * vec4(x,0,z,1);        EmitVertex();
      gl_Position = MVP * vec4(x,0,z+dz,1);     EmitVertex();
      gl_Position = MVP * vec4(x+dx,0,z,1);     EmitVertex();
      gl_Position = MVP * vec4(x+dx,0,z+dz,1);  EmitVertex();
      EndPrimitive();
      x+=dx;
      if((j+1) %sub_divisions == 0) {
        x=v0.x;
        z+=dz;
      }
    }
  }
- Define a fragment shader (shaders/shader.frag) that simply outputs a constant color.
  #version 330 core
  layout(location=0) out vec4 vFragColor;
  void main() {
    vFragColor = vec4(1,1,1,1);
  }
- Load the shaders using the GLSLShader class in the OnInit() function.
  shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert");
  shader.LoadFromFile(GL_GEOMETRY_SHADER,"shaders/shader.geom");
  shader.LoadFromFile(GL_FRAGMENT_SHADER,"shaders/shader.frag");
  shader.CreateAndLinkProgram();
  shader.Use();
    shader.AddAttribute("vVertex");
    shader.AddUniform("MVP");
    shader.AddUniform("sub_divisions");
    glUniform1i(shader("sub_divisions"), sub_divisions);
  shader.UnUse();
- Create the geometry and topology.
  vertices[0] = glm::vec3(-5,0,-5);
  vertices[1] = glm::vec3(-5,0,5);
  vertices[2] = glm::vec3(5,0,5);
  vertices[3] = glm::vec3(5,0,-5);
  GLushort* id=&indices[0];
  *id++ = 0; *id++ = 1; *id++ = 2;
  *id++ = 0; *id++ = 2; *id++ = 3;
- Store the geometry and topology in the buffer object(s). Also enable the line display mode.
  glGenVertexArrays(1, &vaoID);
  glGenBuffers(1, &vboVerticesID);
  glGenBuffers(1, &vboIndicesID);
  glBindVertexArray(vaoID);
  glBindBuffer(GL_ARRAY_BUFFER, vboVerticesID);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW);
  glEnableVertexAttribArray(shader["vVertex"]);
  glVertexAttribPointer(shader["vVertex"], 3, GL_FLOAT, GL_FALSE,0,0);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
  glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
- Set up the rendering code to bind the GLSLShader shader, pass the uniforms and then draw the geometry.
  void OnRender() {
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
    glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f,0.0f, dist));
    glm::mat4 Rx=glm::rotate(T,rX,glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 MV=glm::rotate(Rx,rY, glm::vec3(0.0f,1.0f,0.0f));
    MV=glm::translate(MV, glm::vec3(-5,0,-5));
    shader.Use();
      glUniform1i(shader("sub_divisions"), sub_divisions);
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
      MV=glm::translate(MV, glm::vec3(10,0,0));
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
      MV=glm::translate(MV, glm::vec3(0,0,10));
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
      MV=glm::translate(MV, glm::vec3(-10,0,0));
      glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(P*MV));
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
    shader.UnUse();
    glutSwapBuffers();
  }
- Delete the shader and other OpenGL objects.
  void OnShutdown() {
    shader.DeleteShaderProgram();
    glDeleteBuffers(1, &vboVerticesID);
    glDeleteBuffers(1, &vboIndicesID);
    glDeleteVertexArrays(1, &vaoID);
    cout<<"Shutdown successful"<<endl;
  }
Let's dissect the geometry shader.
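It begins by declaring the input and output primitive types and reading the three vertices of the incoming triangle (these lines are from the listing above):

  layout (triangles) in;
  layout (triangle_strip, max_vertices=256) out;
  ...
  vec4 v0 = gl_in[0].gl_Position;
  vec4 v1 = gl_in[1].gl_Position;
  vec4 v2 = gl_in[2].gl_Position;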
Next, we calculate the size of the smallest quad for the given subdivision based on the size of the given base triangle and the total number of subdivisions required.
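These are the two lines from the shader that do this:

  float dx = abs(v0.x-v2.x)/sub_divisions;
  float dz = abs(v0.z-v1.z)/sub_divisions;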
We start from the first vertex and store its x and z values in local variables. Next, we iterate N*N times, where N is the total number of subdivisions required. For example, if we need to subdivide the mesh three times on both axes, the loop will run nine times, which is the total number of quads. After calculating the positions of the four vertices of each quad, they are emitted by calling EmitVertex(). This function emits the current values of the output variables to the current output primitive on the primitive stream. Next, the EndPrimitive() call is issued to signify that we have emitted the four vertices of the triangle_strip.
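The corresponding loop from the shader listing above is:

  float x=v0.x;
  float z=v0.z;
  for(int j=0;j<sub_divisions*sub_divisions;j++) {
    gl_Position = MVP * vec4(x,0,z,1);        EmitVertex();
    gl_Position = MVP * vec4(x,0,z+dz,1);     EmitVertex();
    gl_Position = MVP * vec4(x+dx,0,z,1);     EmitVertex();
    gl_Position = MVP * vec4(x+dx,0,z+dz,1);  EmitVertex();
    EndPrimitive();
    x+=dx;
    if((j+1) %sub_divisions == 0) { x=v0.x; z+=dz; }
  }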
The fragment shader outputs a constant color (white: vec4(1,1,1,1)).
The application code is similar to the last recipes. We have an additional shader (shaders/shader.geom), which is our geometry shader that is loaded from file.
The notable additions are the new geometry shader and an additional uniform for the total subdivisions desired (sub_divisions). We initialize this uniform during initialization. The buffer object handling is similar to the simple triangle recipe. The other difference is in the rendering function, where there are some additional modeling transformations (translations) after the viewing transformation.
We can change the subdivision levels by pressing the , and . keys. We then check to make sure that the subdivisions are within the allowed limit. Finally, we call the freeglut function glutPostRedisplay() to repaint the window and show the new mesh. Compiling and running the demo code displays four planar meshes. Pressing the , key decreases the subdivision level and the . key increases it. The output from the subdivision geometry shader showing multiple subdivision levels is displayed in the following screenshot:
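A sketch of the keyboard handler that does this (the handler name and the subdivision limits used for clamping are assumptions):

  void OnKey(unsigned char key, int x, int y) {
    switch(key) {
      case ',': sub_divisions--; break;
      case '.': sub_divisions++; break;
    }
    // keep the subdivisions within an allowed range (limits assumed here)
    if (sub_divisions < 1) sub_divisions = 1;
    if (sub_divisions > 8) sub_divisions = 8;
    glutPostRedisplay();
  }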
In order to avoid pushing the same data multiple times, we can exploit instanced rendering. We will now see how to replace the multiple glDrawElements calls of the previous recipe with a single glDrawElementsInstanced call.
Converting the previous recipe to use instanced rendering requires the following steps:
- Change the vertex shader to handle the per-instance modeling matrix and output world space positions (shaders/shader.vert).
#version 330 core
layout(location=0) in vec3 vVertex;
uniform mat4 M[4];
void main()
{
  gl_Position = M[gl_InstanceID]*vec4(vVertex, 1);
}
- Change the geometry shader to replace the MVP matrix with the PV matrix (shaders/shader.geom).
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=256) out;
uniform int sub_divisions;
uniform mat4 PV;
void main()
{
  vec4 v0 = gl_in[0].gl_Position;
  vec4 v1 = gl_in[1].gl_Position;
  vec4 v2 = gl_in[2].gl_Position;
  float dx = abs(v0.x-v2.x)/sub_divisions;
  float dz = abs(v0.z-v1.z)/sub_divisions;
  float x = v0.x;
  float z = v0.z;
  for(int j=0; j<sub_divisions*sub_divisions; j++) {
    gl_Position = PV * vec4(x,    0, z,    1); EmitVertex();
    gl_Position = PV * vec4(x,    0, z+dz, 1); EmitVertex();
    gl_Position = PV * vec4(x+dx, 0, z,    1); EmitVertex();
    gl_Position = PV * vec4(x+dx, 0, z+dz, 1); EmitVertex();
    EndPrimitive();
    x += dx;
    if((j+1) % sub_divisions == 0) {
      x = v0.x;
      z += dz;
    }
  }
}
- Initialize the per-instance model matrices (M).
void OnInit() {
  //set the instance modeling matrix
  M[0] = glm::translate(glm::mat4(1), glm::vec3(-5,0,-5));
  M[1] = glm::translate(M[0], glm::vec3(10,0,0));
  M[2] = glm::translate(M[1], glm::vec3(0,0,10));
  M[3] = glm::translate(M[2], glm::vec3(-10,0,0));
  ..
  shader.Use();
  shader.AddAttribute("vVertex");
  shader.AddUniform("PV");
  shader.AddUniform("M");
  shader.AddUniform("sub_divisions");
  glUniform1i(shader("sub_divisions"), sub_divisions);
  glUniformMatrix4fv(shader("M"), 4, GL_FALSE, glm::value_ptr(M[0]));
  shader.UnUse();
- Render the instances using the glDrawElementsInstanced call.
void OnRender() {
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  glm::mat4 T  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, dist));
  glm::mat4 Rx = glm::rotate(T, rX, glm::vec3(1.0f, 0.0f, 0.0f));
  glm::mat4 V  = glm::rotate(Rx, rY, glm::vec3(0.0f, 1.0f, 0.0f));
  glm::mat4 PV = P*V;
  shader.Use();
  glUniformMatrix4fv(shader("PV"), 1, GL_FALSE, glm::value_ptr(PV));
  glUniform1i(shader("sub_divisions"), sub_divisions);
  glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0, 4);
  shader.UnUse();
  glutSwapBuffers();
}
First, we need to store the model matrix for each instance separately. Since we have four instances, we store a uniform array of four elements (M[4]). Second, we multiply the per-vertex position (vVertex) with the model matrix of the current instance (M[gl_InstanceID]).
There is always a limit on the maximum number of uniform matrices one can pass to the vertex shader, and storing a full matrix per instance has performance implications as well. Some performance improvement can be obtained by replacing the matrix storage with translation and scaling vectors and an orientation quaternion, which can then be converted on the fly into a matrix in the shader.
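As a hedged illustration of that idea, the per-instance data can be reduced to a translation, an orientation quaternion, and a scale, with the matrix rebuilt from them. The sketch below does the reconstruction on the CPU with glm, but the same math can be evaluated per instance in the shader; the function and parameter names are illustrative only.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rebuild an instance matrix from 10 floats (3 translation + 4 quaternion + 3 scale)
// instead of storing all 16 floats of a mat4 per instance.
glm::mat4 InstanceMatrix(const glm::vec3& translation,
                         const glm::quat& orientation,
                         const glm::vec3& scale)
{
  glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);
  glm::mat4 R = glm::mat4_cast(orientation);   // quaternion -> rotation matrix
  glm::mat4 S = glm::scale(glm::mat4(1.0f), scale);
  return T * R * S;
}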
The official OpenGL wiki can be found at http://www.opengl.org/wiki/Built-in_Variable_%28GLSL%29.
An instance rendering tutorial from OGLDev can be found at http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html.
The code for this recipe is in the Chapter1\SubdivisionGeometryShader_Instanced directory.
We will wrap up this chapter with a recipe for creating a simple image viewer in the OpenGL v3.3 core profile, using the SOIL image loading library.
Let us now implement the image loader by following these steps:
- Load the image using the SOIL library. Since the image loaded by SOIL is inverted vertically, we flip it on the Y axis.
int texture_width = 0, texture_height = 0, channels = 0;
GLubyte* pData = SOIL_load_image(filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
if(pData == NULL) {
  cerr<<"Cannot load image: "<<filename.c_str()<<endl;
  exit(EXIT_FAILURE);
}
int i, j;
for( j = 0; j*2 < texture_height; ++j ) {
  int index1 = j * texture_width * channels;
  int index2 = (texture_height - 1 - j) * texture_width * channels;
  for( i = texture_width * channels; i > 0; --i ) {
    GLubyte temp = pData[index1];
    pData[index1] = pData[index2];
    pData[index2] = temp;
    ++index1;
    ++index2;
  }
}
- Set up the OpenGL texture object and free the data allocated by the SOIL library.
glGenTextures(1, &textureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// GL_CLAMP_TO_EDGE is used because the legacy GL_CLAMP token is not part of the core profile
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, GL_RGB, GL_UNSIGNED_BYTE, pData);
SOIL_free_image_data(pData);
- Set up the vertex shader to output the clip space position (shaders/shader.vert).
#version 330 core
layout(location=0) in vec2 vVertex;
smooth out vec2 vUV;
void main()
{
  gl_Position = vec4(vVertex*2.0-1,0,1);
  vUV = vVertex;
}
- Set up the fragment shader that samples our image texture (shaders/shader.frag).
#version 330 core
layout (location=0) out vec4 vFragColor;
smooth in vec2 vUV;
uniform sampler2D textureMap;
void main()
{
  vFragColor = texture(textureMap, vUV);
}
- Set up the application code using the GLSLShader shader class.
shader.LoadFromFile(GL_VERTEX_SHADER, "shaders/shader.vert");
shader.LoadFromFile(GL_FRAGMENT_SHADER, "shaders/shader.frag");
shader.CreateAndLinkProgram();
shader.Use();
shader.AddAttribute("vVertex");
shader.AddUniform("textureMap");
glUniform1i(shader("textureMap"), 0);
shader.UnUse();
- Set up the geometry and topology and pass data to the GPU using buffer objects.
vertices[0] = glm::vec2(0.0,0.0);
vertices[1] = glm::vec2(1.0,0.0);
vertices[2] = glm::vec2(1.0,1.0);
vertices[3] = glm::vec2(0.0,1.0);
GLushort* id = &indices[0];
*id++ = 0; *id++ = 1; *id++ = 2;
*id++ = 0; *id++ = 2; *id++ = 3;
glGenVertexArrays(1, &vaoID);
glGenBuffers(1, &vboVerticesID);
glGenBuffers(1, &vboIndicesID);
glBindVertexArray(vaoID);
glBindBuffer(GL_ARRAY_BUFFER, vboVerticesID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), &vertices[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(shader["vVertex"]);
glVertexAttribPointer(shader["vVertex"], 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndicesID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), &indices[0], GL_STATIC_DRAW);
- Set the shader and render the geometry.
void OnRender() {
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  shader.Use();
  glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
  shader.UnUse();
  glutSwapBuffers();
}
- Release the allocated resources.
void OnShutdown() {
  shader.DeleteShaderProgram();
  glDeleteBuffers(1, &vboVerticesID);
  glDeleteBuffers(1, &vboIndicesID);
  glDeleteVertexArrays(1, &vaoID);
  glDeleteTextures(1, &textureID);
}
The SOIL library provides a lot of functions, but for now we are only interested in the SOIL_load_image function. The first parameter is the image file name. The next three parameters return the texture width, texture height, and the total color channels in the image; these are used when generating the OpenGL texture object. The final parameter is the flag that controls further processing of the image. For this simple example, we use the SOIL_LOAD_AUTO flag, which keeps all of the loading settings at their defaults. If the function succeeds, it returns an unsigned char* to the image data; if it fails, the return value is NULL (0). Since the image data loaded by SOIL is vertically flipped, we then use two nested loops to flip the image data on the Y axis.
After the image data is loaded, we generate an OpenGL texture object and pass this data to the texture memory.
The glTexImage2D function is where the actual allocation of the texture object takes place. The first parameter is the texture target (in our case, GL_TEXTURE_2D). The second parameter is the mipmap level, which we keep at 0. The third parameter is the internal format; we can determine this by looking at the image properties. The fourth and fifth parameters store the texture width and height respectively. The sixth parameter is the border width, which must be 0 in the core profile. The seventh parameter is the image format, the eighth parameter is the data type of the image data pointer, and the final parameter is the pointer to the raw image data. After this call, we can safely release the image data allocated by SOIL by calling SOIL_free_image_data(pData).
The vertex shader outputs the clip space position computed from the input object space position (vVertex) by simple arithmetic; using the vertex positions, it also generates the texture coordinates (vUV) for sampling the texture in the fragment shader. The fragment shader receives the texture coordinates smoothly interpolated from the vertex shader stage through the rasterizer. The image that we loaded using SOIL is bound to a texture sampler (uniform sampler2D textureMap), which is then sampled using the input texture coordinates (vFragColor = texture(textureMap, vUV)). So in the end, we get the image displayed on the screen.
The OnShutdown() function is similar to the earlier recipes; in addition, it deletes the OpenGL texture object. The rendering code first clears the color and depth buffers. Next, it binds the shader program and then issues the glDrawElements call to render the triangles. Finally, the shader is unbound and the glutSwapBuffers function is called to display the current back buffer as the next front buffer. Compiling and running this code displays the image in a window, as shown in the following screenshot:
The code for this recipe is in the Chapter1/ImageLoader directory.
The recipes covered in this chapter include:
- Implementing a vector-based camera model with FPS style input support
- Implementing the free camera
- Implementing the target camera
- Implementing view frustum culling
- Implementing object picking using the depth buffer
- Implementing object picking using color based picking
- Implementing object picking using scene intersection queries
In this chapter, we will look at recipes for handling 3D viewing tasks and object picking in OpenGL v3.3 and above. All real-time simulations, games, and other graphics applications require a virtual camera, or virtual viewer, from whose point of view the 3D scene is rendered. The virtual camera is placed in the 3D world and has a specific direction called the camera look direction. Internally, the virtual camera is a collection of translations and rotations, which is stored inside the viewing matrix.
We will begin this chapter by designing a simple class to handle the camera. In a typical OpenGL application, viewing operations are carried out to place a virtual object on screen. We leave the details of the transformations required in between to a typical graduate text on computer graphics, like the one given in the See also section of this recipe. This recipe will focus on designing a simple and efficient camera class. We create a simple inheritance from a base class called CAbstractCamera. We will inherit two classes from this parent class, CFreeCamera and CTargetCamera, as shown in the following figure:
We first declare the constructor/destructor pair. Next, the function for setting the projection for the camera is specified. Then some functions for updating the camera matrices based on rotation values are declared. Following these, the accessors and mutators are defined.
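A hedged sketch of what such a declaration might look like is given below; the actual AbstractCamera.h may declare additional members and use different default values.
#include <glm/glm.hpp>

class CAbstractCamera
{
public:
  CAbstractCamera(void);
  ~CAbstractCamera(void);

  // projection setup
  void SetupProjection(const float fovy, const float aspectRatio,
                       const float nearPlane = 0.1f, const float farPlane = 1000.0f);

  // matrix updates based on the current rotation values
  virtual void Update() = 0;
  virtual void Rotate(const float yaw, const float pitch, const float roll);

  // accessors and mutators
  const glm::mat4 GetViewMatrix() const;
  const glm::mat4 GetProjectionMatrix() const;
  void SetPosition(const glm::vec3& p);
  const glm::vec3 GetPosition() const;

protected:
  float yaw, pitch, roll, fov, aspect_ratio;
  glm::vec3 look, up, right, position;
  glm::mat4 V;   // view matrix
  glm::mat4 P;   // projection matrix
};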
- Check for the keyboard key press event.
- If the W or S key is pressed, move the camera in the look vector direction:
if( GetAsyncKeyState(VK_W) & 0x8000)
  cam.Walk(dt);
if( GetAsyncKeyState(VK_S) & 0x8000)
  cam.Walk(-dt);
- If the A or D key is pressed, move the camera in the right vector direction:
if( GetAsyncKeyState(VK_A) & 0x8000)
  cam.Strafe(-dt);
if( GetAsyncKeyState(VK_D) & 0x8000)
  cam.Strafe(dt);
- If the Q or Z key is pressed, move the camera in the up vector direction:
if( GetAsyncKeyState(VK_Q) & 0x8000)
  cam.Lift(dt);
if( GetAsyncKeyState(VK_Z) & 0x8000)
  cam.Lift(-dt);
For handling mouse events, we attach two callbacks: one for mouse movement and the other for mouse click event handling:
- Define the mouse down and mouse move event handlers.
- Determine the mouse input choice (the zoom or rotate state) in the mouse down event handler based on the mouse button clicked:
if(button == GLUT_MIDDLE_BUTTON)
  state = 0;
else
  state = 1;
- If the zoom state is chosen, calculate the fov value based on the drag amount and then set up the camera projection matrix:
if (state == 0) {
  fov += (y - oldY)/5.0f;
  cam.SetupProjection(fov, cam.GetAspectRatio());
}
- If the rotate state is chosen, calculate the rotation amount (pitch and yaw). If mouse filtering is enabled, use the filtered mouse input, otherwise use the raw rotation amount:
else {
  rY += (y - oldY)/5.0f;
  rX += (oldX - x)/5.0f;
  if(useFiltering)
    filterMouseMoves(rX, rY);
  else {
    mouseX = rX;
    mouseY = rY;
  }
  cam.Rotate(mouseX, mouseY, 0);
}
It is always better to use filtered mouse input, which gives smoother movement. In the recipes, we use a simple average filter over the last 10 inputs, weighted by their temporal distance: the most recent input is given the most weight and the oldest input the least. The filtered result is used as shown in the following code snippet:
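Since the snippet itself is not reproduced here, the following is a minimal sketch of such a weighted-average filter, assuming a small history buffer and the mouseX/mouseY variables used above; the demo's filterMouseMoves() implementation may differ in its exact weights.
const int   MOUSE_HISTORY_BUFFER_SIZE = 10;
const float MOUSE_FILTER_WEIGHT = 0.75f;   // assumed per-sample decay factor
glm::vec2 mouseHistory[MOUSE_HISTORY_BUFFER_SIZE];

void filterMouseMoves(float dx, float dy) {
  // shift the history and insert the newest sample at the front
  for (int i = MOUSE_HISTORY_BUFFER_SIZE - 1; i > 0; --i)
    mouseHistory[i] = mouseHistory[i - 1];
  mouseHistory[0] = glm::vec2(dx, dy);

  float averageX = 0, averageY = 0, averageTotal = 0, currentWeight = 1.0f;
  // newer samples get larger weights; older ones decay geometrically
  for (int i = 0; i < MOUSE_HISTORY_BUFFER_SIZE; ++i) {
    averageX     += mouseHistory[i].x * currentWeight;
    averageY     += mouseHistory[i].y * currentWeight;
    averageTotal += currentWeight;
    currentWeight *= MOUSE_FILTER_WEIGHT;
  }
  mouseX = averageX / averageTotal;   // filtered values consumed by cam.Rotate()
  mouseY = averageY / averageTotal;
}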
- Smooth mouse filtering FAQ by Paul Nettle (http://www.flipcode.com/archives/Smooth_Mouse_Filtering.shtml)
- Real-time Rendering 3rd Edition by Tomas Akenine-Moller, Eric Haines, and Naty Hoffman, AK Peters/CRC Press, 2008
The code for this recipe is in the Chapter2/src directory. The CAbstractCamera class is defined in the AbstractCamera.[h/cpp] files.
We do not use the CAbstractCamera class directly. Instead, we will use either the CFreeCamera class or the CTargetCamera class, as detailed in the following recipes. In this recipe, we saw how to handle input using the mouse and keyboard.
The free camera is the first camera type we will implement in this recipe. A free camera does not have a fixed target. However, it does have a position from which it can look in any direction.
The following figure shows a free viewing camera. When we rotate the camera, it rotates about its own position. When we move the camera, it keeps looking in the same direction.
The steps needed to implement the free camera are as follows:
- Define the CFreeCamera class and add a vector to store the current translation.
- In the Update method, calculate the new orientation (rotation) matrix using the current camera orientations (that is, the yaw, pitch, and roll amounts):
glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);
- Translate the camera position by the translation amount:
position += translation;
If we need to implement a free camera which gradually comes to a halt, we should gradually decay the translation vector by adding the following code after the key events are handled:
glm::vec3 t = cam.GetTranslation();
if(glm::dot(t,t) > EPSILON2) {
  cam.SetTranslation(t*0.95f);
}
If no decay is needed, we should clear the translation vector to 0 in the CFreeCamera::Update function after translating the position:
translation = glm::vec3(0);
- Transform the look vector by the current rotation matrix, and determine the right and up vectors to calculate the orthonormal basis:
look = glm::vec3(R*glm::vec4(0,0,1,0));
up = glm::vec3(R*glm::vec4(0,1,0,0));
right = glm::cross(look, up);
- Determine the camera target point:
glm::vec3 tgt = position+look;
- Use the glm::lookAt function to calculate the new view matrix using the camera position, target, and the up vector:
V = glm::lookAt(position, tgt, up);
The Walk function simply translates the camera in the look direction, the Strafe function translates the camera in the right direction, and the Lift function translates the camera in the up direction, as sketched below:
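Minimal sketches of these three helpers follow, assuming the translation vector is applied to the position inside Update() as described above; the shipped CFreeCamera code may differ slightly.
void CFreeCamera::Walk(const float dt) {
  translation += (look * dt);    // move along the look vector
  Update();
}

void CFreeCamera::Strafe(const float dt) {
  translation += (right * dt);   // move along the right vector
  Update();
}

void CFreeCamera::Lift(const float dt) {
  translation += (up * dt);      // move along the up vector
  Update();
}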
- DHPOWare OpenGL camera demo – Part 1 (http://www.dhpoware.com/demos/glCamera1.html)
- DHPOWare OpenGL camera demo – Part 2 (http://www.dhpoware.com/demos/glCamera2.html)
- DHPOWare OpenGL camera demo – Part 3 (http://www.dhpoware.com/demos/glCamera3.html)
The target camera works the opposite way: rather than the position, the target remains fixed, while the camera moves or rotates around the target. Some operations, such as panning, move both the target and the camera position together.
The following figure shows an illustration of a target camera. Note that the small box is the target position for the camera.
The code for this recipe resides in the Chapter2/TargetCamera directory. The CTargetCamera class is defined in the Chapter2/src/TargetCamera.[h/cpp] files. The class declaration is as follows:
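Since the declaration itself is not reproduced here, the following is a hedged reconstruction based on the members described in the steps below; consult Chapter2/src/TargetCamera.h for the authoritative version.
class CTargetCamera : public CAbstractCamera
{
public:
  CTargetCamera(void);
  ~CTargetCamera(void);

  void Update();
  void Rotate(const float yaw, const float pitch, const float roll);

  void SetTarget(const glm::vec3 tgt);
  const glm::vec3 GetTarget() const;

  void Pan(const float dx, const float dy);
  void Zoom(const float amount);
  void Move(const float dx, const float dz);

protected:
  glm::vec3 target;                 // the point the camera looks at and orbits around
  float minRy, maxRy;               // rotation limits
  float distance;                   // distance between the camera position and the target
  float minDistance, maxDistance;   // zoom limits
};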
We implement the target camera as follows:
- Define the CTargetCamera class with a target position (target), the rotation limits (minRy and maxRy), the distance between the target and the camera position (distance), and the distance limits (minDistance and maxDistance).
Update
method, calculate the new orientation (rotation) matrix using the current camera orientations (that is, yaw, pitch, and roll amount):glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);
- Use the distance to get a vector and then translate this vector by the current rotation matrix:
glm::vec3 T = glm::vec3(0,0,distance);
T = glm::vec3(R*glm::vec4(T,0.0f));
- Get the new camera position by adding the translation vector to the target position:
position = target + T;
- Recalculate the orthonormal basis and then the view matrix:
look = glm::normalize(target-position);
up = glm::vec3(R*glm::vec4(UP,0.0f));
right = glm::cross(look, up);
V = glm::lookAt(position, target, up);
The Move function moves both the position and the target by the same amount along the look and right vector directions. The Pan function moves in the xy plane only, hence the up vector is used instead of the look vector. The Zoom function moves the position in the look direction. Minimal sketches of these helpers are given below:
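The following sketches assume the members declared earlier; the shipped CTargetCamera code may clamp the values differently.
void CTargetCamera::Move(const float dx, const float dz) {
  glm::vec3 X = right * dx;   // move along the right vector
  glm::vec3 Z = look  * dz;   // move along the look vector
  position += X + Z;
  target   += X + Z;          // keep the camera-to-target offset unchanged
  Update();
}

void CTargetCamera::Pan(const float dx, const float dy) {
  glm::vec3 X = right * dx;   // pan uses the up vector instead of the look vector
  glm::vec3 Y = up    * dy;
  position += X + Y;
  target   += X + Y;
  Update();
}

void CTargetCamera::Zoom(const float amount) {
  position += look * amount;  // move the position along the look direction
  distance = glm::distance(position, target);
  // the distance would normally be clamped between minDistance and maxDistance here
  Update();
}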
The demonstration for this recipe renders an infinite checkered plane, as in the previous recipe, and is shown in the following figure:
- DHPOWare OpenGL camera demo – Part 1 (http://www.dhpoware.com/demos/glCamera1.html)
- DHPOWare OpenGL camera demo – Part 2 (http://www.dhpoware.com/demos/glCamera2.html)
- DHPOWare OpenGL camera demo – Part 3 (http://www.dhpoware.com/demos/glCamera3.html)
When working with a lot of polygonal data, there is a need to reduce the amount of geometry pushed to the GPU for processing. There are several techniques for scene management, such as quadtrees, octrees, and BSP trees. These techniques help in sorting the geometry in visibility order, so that objects are sorted (and some of them even culled from the display). This helps in reducing the workload on the GPU.
Even before such techniques are used, there is an additional step which most graphics applications perform, and that is view frustum culling. This process removes geometry that is not in the current camera's view frustum. The idea is that if an object is not viewable, it should not be processed. A frustum is a chopped pyramid with its tip at the camera position and its base at the far clip plane. The near clip plane is where the pyramid is chopped, as shown in the following figure. Any geometry inside the viewing frustum is displayed.
We will implement view frustum culling by taking the following steps:
- Define a vertex shader that displaces the object-space vertex position using a sine wave in the y axis:
#version 330 core
layout(location = 0) in vec3 vVertex;
uniform float t;
const float PI = 3.141562;
void main()
{
  gl_Position = vec4(vVertex,1) + vec4(0, sin(vVertex.x*2*PI+t), 0, 0);
}
- Define a geometry shader that performs the view frustum culling calculation on each vertex passed in from the vertex shader:
#version 330 core
layout (points) in;
layout (points, max_vertices=3) out;
uniform mat4 MVP;
uniform vec4 FrustumPlanes[6];

bool PointInFrustum(in vec3 p) {
  for(int i=0; i < 6; i++) {
    vec4 plane = FrustumPlanes[i];
    if ((dot(plane.xyz, p) + plane.w) < 0)
      return false;
  }
  return true;
}

void main() {
  //get the basic vertices
  for(int i=0; i<gl_in.length(); i++) {
    vec4 vInPos = gl_in[i].gl_Position;
    vec2 tmp = (vInPos.xz*2-1.0)*5;
    vec3 V = vec3(tmp.x, vInPos.y, tmp.y);
    gl_Position = MVP*vec4(V,1);
    if(PointInFrustum(V)) {
      EmitVertex();
    }
  }
  EndPrimitive();
}
- To render particles as rounded points, we do a simple trigonometric calculation by discarding all fragments that fall outside the radius of the circle:
#version 330 core
layout(location = 0) out vec4 vFragColor;
void main() {
  vec2 pos = (gl_PointCoord.xy - 0.5);
  if(0.25 < dot(pos,pos))
    discard;
  vFragColor = vec4(0,0,1,1);
}
- On the CPU side, call the CAbstractCamera::CalcFrustumPlanes() function to calculate the viewing frustum planes. Get the calculated frustum planes as a glm::vec4 array by calling CAbstractCamera::GetFrustumPlanes(), and then pass these to the shader. The xyz components store the plane's normal, and the w coordinate stores the distance of the plane. After these calls, we draw the points:
pCurrentCam->CalcFrustumPlanes();
glm::vec4 p[6];
pCurrentCam->GetFrustumPlanes(p);

pointShader.Use();
glUniform1f(pointShader("t"), current_time);
glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
glBindVertexArray(pointVAOID);
glDrawArrays(GL_POINTS, 0, MAX_POINTS);
pointShader.UnUse();
There are two main parts to this recipe: calculation of the viewing frustum planes, and checking whether a given point is inside the viewing frustum. The first calculation is carried out in the CAbstractCamera::CalcFrustumPlanes() function. Refer to the Chapter2/src/AbstractCamera.cpp file for details.
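The implementation of CalcFrustumPlanes() is not reproduced in this excerpt. As an illustrative alternative (not necessarily the book's exact approach), the six planes can also be extracted directly from the combined view-projection matrix using the Gribb–Hartmann method. A minimal sketch with glm, where each plane is stored as glm::vec4(normal, distance) with the normal pointing inside the frustum:
#include <glm/glm.hpp>

// Hypothetical helper: extracts the six frustum planes from VP = P * V.
// Each plane is (a, b, c, d), matching the test dot(plane.xyz, p) + plane.w >= 0.
void ExtractFrustumPlanes(const glm::mat4& VP, glm::vec4 planes[6]) {
    // glm is column-major (VP[col][row]); build the four row vectors first
    glm::vec4 rowX(VP[0][0], VP[1][0], VP[2][0], VP[3][0]);
    glm::vec4 rowY(VP[0][1], VP[1][1], VP[2][1], VP[3][1]);
    glm::vec4 rowZ(VP[0][2], VP[1][2], VP[2][2], VP[3][2]);
    glm::vec4 rowW(VP[0][3], VP[1][3], VP[2][3], VP[3][3]);

    planes[0] = rowW + rowX;   // left
    planes[1] = rowW - rowX;   // right
    planes[2] = rowW + rowY;   // bottom
    planes[3] = rowW - rowY;   // top
    planes[4] = rowW + rowZ;   // near
    planes[5] = rowW - rowZ;   // far

    for (int i = 0; i < 6; ++i)                 // normalize so plane.w is a true distance
        planes[i] /= glm::length(glm::vec3(planes[i]));
}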
The second part is handled by the PointInFrustum function in the geometry shader. It iterates through all six frustum planes and, in each iteration, checks the signed distance of the given point p with respect to the ith frustum plane. This is a simple dot product of the plane normal with the given point, plus the plane distance. If the signed distance is negative for any of the planes, the point is outside the viewing frustum and we can safely reject it. If the point has a positive signed distance for all six frustum planes, it is inside the viewing frustum. Note that the frustum planes are oriented in such a way that their normals point inside the viewing frustum.
The code for this recipe is contained in the Chapter2/ViewFrustumCulling directory.
Often, when working on projects, we need the ability to pick graphical objects on screen. In OpenGL versions prior to 3.0, the selection buffer was used for this purpose, but this buffer has been removed from the modern OpenGL 3.3 core profile, leaving us to use alternative methods. In this recipe, we will implement a simple picking technique using the depth buffer.
Picking using the depth buffer can be implemented as follows:
- Enable depth testing:
glEnable(GL_DEPTH_TEST);
- In the mouse down event handler, read the depth value from the depth buffer using the glReadPixels function at the clicked point:
glReadPixels(x, HEIGHT-y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
- Unproject the 3D point, vec3(x, HEIGHT-y, winZ), to obtain the object-space point from the clicked screen-space point x,y and the depth value winZ. Make sure to invert the y value by subtracting the screen-space y value from HEIGHT:
glm::vec3 objPt = glm::unProject(glm::vec3(x, HEIGHT-y, winZ), MV, P, glm::vec4(0, 0, WIDTH, HEIGHT));
- Check the distances of all of the scene objects from the object-space point objPt. If the distance is within the bounds of the object and is the smallest distance found so far, store the index of the object:
size_t i=0;
float minDist = 1000;
selected_box = -1;
for(i=0; i<3; i++) {
  float dist = glm::distance(box_positions[i], objPt);
  if(dist<1 && dist<minDist) {
    selected_box = i;
    minDist = dist;
  }
}
- Based on the selected index, color the object as selected:
glm::mat4 T = glm::translate(glm::mat4(1), box_positions[0]);
cube->color = (selected_box==0) ? glm::vec3(0,1,1) : glm::vec3(1,0,0);
cube->Render(glm::value_ptr(MVP*T));

T = glm::translate(glm::mat4(1), box_positions[1]);
cube->color = (selected_box==1) ? glm::vec3(0,1,1) : glm::vec3(0,1,0);
cube->Render(glm::value_ptr(MVP*T));

T = glm::translate(glm::mat4(1), box_positions[2]);
cube->color = (selected_box==2) ? glm::vec3(0,1,1) : glm::vec3(0,0,1);
cube->Render(glm::value_ptr(MVP*T));
This recipe renders three cubes in red, green, and blue on the screen. When the user clicks on any of these cubes, the depth buffer is read to find the depth value at the clicked point. The object-space point is then obtained by unprojecting (glm::unProject) the clicked point (x, HEIGHT-y, winZ). We then loop over all objects in the scene to find the object nearest to this object-space point, and store the index of the nearest intersected object.
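Putting the steps together, a mouse-down handler might look like the following sketch; the freeglut callback form and the globals (WIDTH, HEIGHT, MV, P, box_positions, selected_box) are assumed to match the snippets above:
// Minimal sketch of a freeglut mouse callback combining the steps above.
void OnMouseDown(int button, int state, int x, int y) {
    if (button != GLUT_LEFT_BUTTON || state != GLUT_DOWN)
        return;

    float winZ = 0;
    glReadPixels(x, HEIGHT - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    glm::vec3 objPt = glm::unProject(glm::vec3(x, HEIGHT - y, winZ),
                                     MV, P, glm::vec4(0, 0, WIDTH, HEIGHT));

    selected_box = -1;
    float minDist = 1000;
    for (int i = 0; i < 3; ++i) {
        float dist = glm::distance(box_positions[i], objPt);
        if (dist < 1 && dist < minDist) {   // within the box bounds and nearest so far
            selected_box = i;
            minDist = dist;
        }
    }
}
The handler would be registered with glutMouseFunc(OnMouseDown).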
The code for this recipe is contained in the Chapter2/Picking_DepthBuffer folder. Relevant source files are in the Chapter2/src folder.
Another method used for picking objects in a 3D world is color-based picking. In this recipe, we will use the same scene as in the last recipe.
To enable picking with the color buffer, the following steps are needed:
- Disable dithering. This is done to prevent any color mismatch during the query:
glDisable(GL_DITHER);
- In the mouse down event handler, read the color value at the clicked position from the color buffer using the glReadPixels function:
GLubyte pixel[4];
glReadPixels(x, HEIGHT-y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
- Compare the color value at the clicked point to the color values of all objects to find the intersection:
selected_box = -1;
if(pixel[0]==255 && pixel[1]==0 && pixel[2]==0) {
  cout<<"picked box 1"<<endl;
  selected_box = 0;
}
if(pixel[0]==0 && pixel[1]==255 && pixel[2]==0) {
  cout<<"picked box 2"<<endl;
  selected_box = 1;
}
if(pixel[0]==0 && pixel[1]==0 && pixel[2]==255) {
  cout<<"picked box 3"<<endl;
  selected_box = 2;
}
This method is simple to implement. We simply check the color of the pixel where the mouse is clicked. Since dithering might generate a different color value, we disable dithering. The pixel's r, g, and b values are then checked against all of the scene objects, and the appropriate object is selected. We could also have used the float data type, GL_FLOAT, when reading and comparing the pixel value; however, due to floating-point imprecision, we might not get an accurate test. Therefore, we use the integral data type GL_UNSIGNED_BYTE.
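Rather than hard-coding the three comparisons, the same test can be written as a loop over the colors the objects were drawn with. A small sketch; the box_colors array is a hypothetical table of the draw colors:
// Hypothetical generalization: box_colors[i] holds the color each box is drawn with.
GLubyte box_colors[3][3] = { {255,0,0}, {0,255,0}, {0,0,255} };

selected_box = -1;
for (int i = 0; i < 3; ++i) {
    if (pixel[0] == box_colors[i][0] &&
        pixel[1] == box_colors[i][1] &&
        pixel[2] == box_colors[i][2]) {
        selected_box = i;   // the clicked pixel matches this object's color
        break;
    }
}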
The code for this recipe is contained in the Chapter2/Picking_ColorBuffer folder. Relevant source files are in the Chapter2/src folder.
The final method we will cover for picking involves casting rays into the scene to determine the object nearest to the viewer. We will use the same scene as in the last two recipes: three cubes (colored red, green, and blue) placed near the origin.
For picking with scene intersection queries, take the following steps:
- Get two object-space points by unprojecting the screen-space point (x, HEIGHT-y) with two different depth values, one at z=0 and the other at z=1:
glm::vec3 start = glm::unProject(glm::vec3(x, HEIGHT-y, 0), MV, P, glm::vec4(0, 0, WIDTH, HEIGHT));
glm::vec3 end   = glm::unProject(glm::vec3(x, HEIGHT-y, 1), MV, P, glm::vec4(0, 0, WIDTH, HEIGHT));
- Get the current camera position as eyeRay.origin, and get eyeRay.direction by subtracting the two object-space points, end and start, and normalizing the difference, as follows:
eyeRay.origin = cam.GetPosition();
eyeRay.direction = glm::normalize(end - start);
- For all of the objects in the scene, find the intersection of the eye ray with the axis-aligned bounding box (AABB) of the object. Store the index of the nearest intersected object:
float tMin = numeric_limits<float>::max();
selected_box = -1;
for(int i=0; i<3; i++) {
  glm::vec2 tMinMax = intersectBox(eyeRay, boxes[i]);
  if(tMinMax.x < tMinMax.y && tMinMax.x < tMin) {
    selected_box = i;
    tMin = tMinMax.x;
  }
}
if(selected_box == -1)
  cout<<"No box picked"<<endl;
else
  cout<<"Selected box: "<<selected_box<<endl;
The method discussed in this recipe first casts a ray from the camera position in the clicked direction, and then checks all of the scene objects' bounding boxes for intersection. There are two sub-parts: estimation of the ray direction from the clicked point, and the ray/AABB intersection. The ray direction is estimated by unprojecting the clicked point twice, once at the near depth (z=0) and once at the far depth (z=1); the normalized difference of the two resulting object-space points gives the direction of the eye ray.
After calculating the eye ray, we check it for intersection with all of the scene geometries. If an object's bounding box intersects the eye ray and it is the nearest intersection, we store the index of the object. The intersectBox function is defined as follows:
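The original listing is not reproduced in this excerpt; the following is a minimal sketch of a slab-based intersectBox consistent with the description below, with assumed Ray and Box structures:
// Assumed types: Ray { glm::vec3 origin, direction; }, Box { glm::vec3 min, max; }.
// Returns (tNear, tFar); the ray hits the box when tNear < tFar.
glm::vec2 intersectBox(const Ray& ray, const Box& box) {
    glm::vec3 invDir = 1.0f / ray.direction;         // per-axis inverse direction
    glm::vec3 t0 = (box.min - ray.origin) * invDir;   // slab entry/exit distances per axis
    glm::vec3 t1 = (box.max - ray.origin) * invDir;
    glm::vec3 tmin = glm::min(t0, t1);
    glm::vec3 tmax = glm::max(t0, t1);
    float tNear = glm::max(glm::max(tmin.x, tmin.y), tmin.z);   // largest entry distance
    float tFar  = glm::min(glm::min(tmax.x, tmax.y), tmax.z);   // smallest exit distance
    return glm::vec2(tNear, tFar);
}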
The intersectBox function works by finding the intersection of the ray with a pair of slabs for each of the three axes individually. Next, it finds the tNear and tFar values. The box can only intersect the ray if tNear is less than tFar for all three axes. So the code finds the smallest tFar value and the largest tNear value; if the smallest tFar value is less than the largest tNear value, the ray misses the box. For further details, refer to the See also section. The demonstration application for this recipe uses the same scene as in the last two recipes. In this case also, left-clicking the mouse selects a box, which is highlighted in cyan, as shown in the following figure:
The code for this recipe is contained in the Chapter2/Picking_SceneIntersection folder. Relevant source files are in the Chapter2/src folder.
In this chapter, we will cover:
- Implementing the twirl filter using fragment shader
- Rendering a skybox using static cube mapping
- Implementing a mirror with render-to-texture using FBO
- Rendering a reflective object using dynamic cube mapping
- Implementing area filtering (sharpening/blurring/embossing) on an image using convolution
- Implementing the glow effect
Offscreen rendering is a powerful feature of modern graphics APIs. In modern OpenGL, it is implemented using framebuffer objects (FBOs). Applications of offscreen rendering include post-processing effects such as glows, dynamic cubemaps, the mirror effect, deferred rendering techniques, image processing techniques, and so on. Nowadays, almost all games use this feature to achieve stunning visual effects with high rendering quality and detail. With FBOs, offscreen rendering is greatly simplified, as the programmer uses an FBO the way they would use any other OpenGL object. This chapter will focus on using FBOs to carry out image processing effects for implementing digital convolution and glow. In addition, we will also elaborate on how to use the FBO for the mirror effect and dynamic cube mapping.
We will use a simple image manipulation operator in the fragment shader by implementing the twirl filter on the GPU.
This recipe builds on the image loading recipe from Chapter 1, Introduction to Modern OpenGL. The code for this recipe is contained in the Chapter3/TwirlFilter directory.
Let us get started with the recipe as follows:
- Load the image as in the ImageLoader recipe from Chapter 1, Introduction to Modern OpenGL. Set the texture wrap mode to GL_CLAMP_TO_BORDER.
int texture_width = 0, texture_height = 0, channels = 0;
GLubyte* pData = SOIL_load_image(filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
// flip the image vertically so it matches OpenGL's texture origin
int i, j;
for(j = 0; j*2 < texture_height; ++j) {
  int index1 = j * texture_width * channels;
  int index2 = (texture_height - 1 - j) * texture_width * channels;
  for(i = texture_width * channels; i > 0; --i) {
    GLubyte temp = pData[index1];
    pData[index1] = pData[index2];
    pData[index2] = temp;
    ++index1;
    ++index2;
  }
}
glGenTextures(1, &textureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, GL_RGB, GL_UNSIGNED_BYTE, pData);
SOIL_free_image_data(pData);
- Set up a simple pass-through vertex shader that outputs the texture coordinates for texture lookup in the fragment shader, as given in the ImageLoader recipe of Chapter 1.
void main()
{
  gl_Position = vec4(vVertex*2.0-1, 0, 1);
  vUV = vVertex;
}
- Set up the fragment shader that first shifts the texture coordinates, performs the twirl transformation, and then converts the shifted texture coordinates back for texture lookup.
void main()
{
  vec2 uv = vUV - 0.5;
  float angle  = atan(uv.y, uv.x);
  float radius = length(uv);
  angle += radius * twirl_amount;
  vec2 shifted = radius * vec2(cos(angle), sin(angle));
  vFragColor = texture(textureMap, shifted + 0.5);
}
- Render a 2D screen-space quad and apply the two shaders as was done in the ImageLoader recipe in Chapter 1.
void OnRender() {
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  shader.Use();
  glUniform1f(shader("twirl_amount"), twirl_amount);
  glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
  shader.UnUse();
  glutSwapBuffers();
}
The twirl filter works in polar coordinates. Converting between Cartesian and polar coordinates uses the relations x = r cos(θ) and y = r sin(θ), where x and y are the two Cartesian coordinates, r is the radius, and θ is the angle. In the fragment shader, we first offset the texture coordinates so that the origin is at the center of the image. Next, we obtain the angle θ and the radius r, add the twirl amount scaled by the radius to θ, and convert back to Cartesian coordinates for the texture lookup.
Since the texture wrap mode was set to GL_CLAMP_TO_BORDER, pixels outside the image get the border color (black). In this recipe, we applied the twirl effect to the whole image. As an exercise, we invite the reader to limit the twirl to a specific zone within the image; for example, within a radius of, say, 150 pixels from the center of the image. Hint: you can constrain the radius using the given pixel distance.
This recipe will show how to render a skybox object using static cube mapping. Cube mapping is a simple technique for generating a surrounding environment. There are several methods, such as sky dome, which uses a spherical geometry; skybox, which uses a cubical geometry; and skyplane, which uses a planar geometry. For this recipe, we will focus on skyboxes using the static cube mapping approach. The cube mapping process needs six images that are placed on each face of a cube. The skybox is a very large cube that moves with the camera but does not rotate with it.
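One common way to make the skybox follow the camera (a sketch of the idea rather than the recipe's exact code) is to re-center the cube on the camera position every frame before applying the view-projection matrix:
// Re-center the skybox on the camera each frame so it translates with the
// camera; the view matrix still rotates the scene, so different faces remain visible.
// camera_position, P, V, and the skybox object are assumed from the surrounding code.
glm::mat4 T   = glm::translate(glm::mat4(1), camera_position);
glm::mat4 S   = glm::scale(glm::mat4(1), glm::vec3(1000.0f));   // make the unit cube very large
glm::mat4 MVP = P * V * T * S;
skybox->Render(glm::value_ptr(MVP));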
Let us get started with the recipe as follows:
- Set up the vertex array and vertex buffer objects to store a unit cube geometry.
- Load the skybox images using an image loading library, such as SOIL.
int texture_widths[6];
int texture_heights[6];
int channels[6];
GLubyte* pData[6];

cout<<"Loading skybox images: ..."<<endl;
for(int i=0;i<6;i++) {
  cout<<"\tLoading: "<<texture_names[i]<<" ... ";
  pData[i] = SOIL_load_image(texture_names[i], &texture_widths[i], &texture_heights[i], &channels[i], SOIL_LOAD_AUTO);
  cout<<"done."<<endl;
}
- Generate a cubemap OpenGL texture object and bind the six loaded images to the GL_TEXTURE_CUBE_MAP texture targets. Also make sure that the image data loaded by the SOIL library is deleted after the texture data has been stored in the OpenGL texture.
glGenTextures(1, &skyboxTextureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTextureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
GLint format = (channels[0]==4) ? GL_RGBA : GL_RGB;
for(int i=0;i<6;i++) {
  glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, format, texture_widths[i], texture_heights[i], 0, format, GL_UNSIGNED_BYTE, pData[i]);
  SOIL_free_image_data(pData[i]);
}
- Set up a vertex shader (see Chapter3/Skybox/shaders/skybox.vert) that outputs the vertex's object-space position as the texture coordinate.
smooth out vec3 uv;
void main()
{
  gl_Position = MVP*vec4(vVertex,1);
  uv = vVertex;
}
- Add a cubemap sampler to the fragment shader. Use the texture coordinates output from the vertex shader to sample the cubemap sampler object in the fragment shader (see Chapter3/Skybox/shaders/skybox.frag).
layout(location=0) out vec4 vFragColor;
uniform samplerCube cubeMap;
smooth in vec3 uv;
void main()
{
  vFragColor = texture(cubeMap, uv);
}
There are two parts to this recipe. The first part, which loads an OpenGL cubemap texture, is self-explanatory. We load the six images and bind them to an OpenGL cubemap texture target. There are six cubemap texture targets corresponding to the six sides of a cube: GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. Since their identifiers are generated consecutively, we offset the GL_TEXTURE_CUBE_MAP_POSITIVE_X target by the loop variable to move to the next cubemap texture target, as in the following code:
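The loop in question is the one from the texture setup step above, repeated here for reference:
for(int i=0;i<6;i++) {
  // GL_TEXTURE_CUBE_MAP_POSITIVE_X + i walks through the six cube face targets
  glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, format, texture_widths[i], texture_heights[i], 0, format, GL_UNSIGNED_BYTE, pData[i]);
  SOIL_free_image_data(pData[i]);
}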
To sample the correct location in the cubemap texture, we need a vector. This vector can be obtained from the object-space vertex positions that are passed to the vertex shader. These are passed through the uv output attribute to the fragment shader.
The code for this recipe is contained in the Chapter3/Skybox directory.
We will now use the FBO to render a mirror object on the screen. In a typical offscreen rendering OpenGL application, we set up the FBO first by calling the glGenFramebuffers function, passing it the number of FBOs desired; the second parameter stores the returned identifier(s). After the FBO object is generated, it has to be bound to the GL_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER, or GL_READ_FRAMEBUFFER target. Following this call, the texture to be bound to the FBO's color attachment is attached by calling the glFramebufferTexture2D function.
If depth testing is required, a render buffer is also generated and bound by calling glGenRenderbuffers followed by the glBindRenderbuffer function. For render buffers, the depth buffer's data type and its dimensions have to be specified. After all of these steps, the render buffer is attached to the frame buffer by calling the glFramebufferRenderbuffer function.
After the setup of the frame buffer and render buffer objects, the frame buffer's completeness status has to be checked by calling glCheckFramebufferStatus, passing it the framebuffer target. This ensures that the FBO setup is correct. The function returns the status as an identifier; if the returned value is anything other than GL_FRAMEBUFFER_COMPLETE, the FBO setup is unsuccessful.
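Condensed into code, the sequence described above looks roughly as follows (a minimal sketch with illustrative identifiers; the recipe steps below show the actual setup used for the mirror):
// Minimal FBO setup sketch: color texture + depth renderbuffer + completeness check.
GLuint fbo, colorTex, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

glGenTextures(1, &colorTex);                         // color attachment
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

glGenRenderbuffers(1, &depthRb);                     // depth attachment
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, WIDTH, HEIGHT);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("Error in FBO setup.\n");                 // anything but COMPLETE means failure
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // unbind until the FBO is needed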
Let us get started with the recipe as follows:
- Initialize the framebuffer and renderbuffer objects for the color and depth attachments respectively. The render buffer is required if we need depth testing for the offscreen rendering; the depth precision is specified using the glRenderbufferStorage function.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenRenderbuffers(1, &rbID);
glBindRenderbuffer(GL_RENDERBUFFER, rbID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, WIDTH, HEIGHT);
- Generate the offscreen texture that the FBO will render to. The last parameter of glTexImage2D is NULL, which tells OpenGL that we do not have any content yet; please provide a new block of GPU memory, which gets filled when the FBO is used as a render target.
glGenTextures(1, &renderTextureID);
glBindTexture(GL_TEXTURE_2D, renderTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
- Attach the texture and the Renderbuffer to the bound Framebuffer object, and check for Framebuffer completeness.
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTextureID, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbID);
GLuint status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status==GL_FRAMEBUFFER_COMPLETE) {
  printf("FBO setup succeeded.");
} else {
  printf("Error in FBO setup.");
}
- Unbind the Framebuffer object as follows:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Create a quad geometry to act as a mirror:
mirror = new CQuad(-2);
- Render the scene normally from the point of view of the camera. Since the unit color cube is rendered at the origin, we translate it up along the Y axis so that its image can be viewed completely in the mirror.
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
grid->Render(glm::value_ptr(MVP));
localR[3][1] = 0.5;
cube->Render(glm::value_ptr(P*MV*localR));
- Store the current modelview matrix and then change the modelview matrix such that the camera is placed at the mirror object's position. Also make sure to laterally invert this modelview matrix by scaling by -1 on the X axis.
glm::mat4 oldMV = MV;
glm::vec3 target;
glm::vec3 V = glm::vec3(-MV[2][0], -MV[2][1], -MV[2][2]);
glm::vec3 R = glm::reflect(V, mirror->normal);
MV = glm::lookAt(mirror->position, mirror->position + R, glm::vec3(0,1,0));
MV = glm::scale(MV, glm::vec3(-1,1,1));
- Bind the FBO, set up the FBO color attachment as the draw buffer (GL_COLOR_ATTACHMENT0, or any other attachment to which a texture is attached), and clear the FBO. The glDrawBuffer function enables the code to draw to a specific color attachment of the FBO. In our case, there is a single color attachment, so we set it as the draw buffer.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
- Set the modified modelview matrix and render the scene again. Also make sure to render only from the shiny side of the mirror.
if(glm::dot(V, mirror->normal) < 0) {
  grid->Render(glm::value_ptr(P*MV));
  cube->Render(glm::value_ptr(P*MV*localR));
}
- Unbind the FBO and restore the default draw buffer (GL_BACK_LEFT).
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK_LEFT);
- Finally, render the mirror quad using the saved modelview matrix.
MV = oldMV;
glBindTexture(GL_TEXTURE_2D, renderTextureID);
mirror->Render(glm::value_ptr(P*MV));
The mirror algorithm used in this recipe is very simple. We first get the view direction vector (V) from the viewing matrix and reflect it about the mirror's normal (N). Next, the camera is repositioned at the mirror and oriented along the reflected direction. Finally, the modelview matrix is scaled by -1 on the X axis, which ensures that the image is laterally inverted, as in a mirror. Details of the method are covered in the reference in the See also section.
- The Official OpenGL registry-Framebuffer object specifications can be found at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt.
- OpenGL Superbible, Fifth Edition, Chapter 8, pages 354-358, Richard S. Wright, Addison-Wesley Professional
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
The code for this recipe is contained in the Chapter3/MirrorUsingFBO directory.
Renderbuffer
to the bound Framebuffer
object and check for Framebuffer
completeness.glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,GL_TEXTURE_2D, renderTextureID, 0); glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,GL_RENDERBUFFER, rbID); GLuint status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER); if(status==GL_FRAMEBUFFER_COMPLETE) { printf("FBO setup succeeded."); } else { printf("Error in FBO setup."); }
- the
Framebuffer
object as follows:glBindTexture(GL_TEXTURE_2D, 0); glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Create a quad geometry to act as a mirror:
mirror = new CQuad(-2);
- Render the scene normally from the point of view of camera. Since the unit color cube is rendered at origin, we translate it on the Y axis to shift it up in Y axis which effectively moves the unit color cube in Y direction so that the unit color cube's image can be viewed completely in the mirror.
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT); grid->Render(glm::value_ptr(MVP)); localR[3][1] = 0.5; cube->Render(glm::value_ptr(P*MV*localR));
- Store the current modelview matrix and then change the modelview matrix such that the camera is placed at the mirror object position. Also make sure to laterally invert this modelview matrix by scaling by -1 on the X axis.
glm::mat4 oldMV = MV; glm::vec3 target; glm::vec3 V = glm::vec3(-MV[2][0], -MV[2][1], -MV[2][2]); glm::vec3 R = glm::reflect(V, mirror->normal); MV = glm::lookAt(mirror->position, mirror->position + R, glm::vec3(0,1,0)); MV = glm::scale(MV, glm::vec3(-1,1,1));
- Bind the FBO, set up the FBO color attachment for
Drawbuffer
(GL_COLOR_ATTACHMENT0
) or any other attachment to which texture is attached, and clear the FBO. TheglDrawBuffer
function enables the code to draw to a specific color attachment on the FBO. In our case, there is a single color attachment so we set it as the draw buffer.glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID); glDrawBuffer(GL_COLOR_ATTACHMENT0); glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
- Set the modified modelview matrix and render the scene again. Also make sure to only render from the shiny side of the mirror.
if(glm::dot(V,mirror->normal)<0) { grid->Render(glm::value_ptr(P*MV)); cube->Render(glm::value_ptr(P*MV*localR)); }
- Unbind the FBO and restore the default
Drawbuffer
(GL_BACK_LEFT).
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); glDrawBuffer(GL_BACK_LEFT);
- Finally render the mirror quad at the saved modelview matrix.
MV = oldMV; glBindTexture(GL_TEXTURE_2D, renderTextureID); mirror->Render(glm::value_ptr(P*MV));
The mirror algorithm used in the recipe is very simple. We first get the view direction vector (V
) from the viewing matrix. We reflect this vector on the normal of the mirror (N
). Next, the camera position is moved to the place behind the mirror. Finally, the mirror is scaled by -1 on the X axis. This ensures that the image is laterally inverted as in a mirror. Details of the method are covered in the reference in the See also section.
- The Official OpenGL registry-Framebuffer object specifications can be found at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt.
- OpenGL Superbible, Fifth Edition, Chapter 8, pages 354-358, Richard S. Wright, Addison-Wesley Professional
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
algorithm used in the recipe is very simple. We first get the view direction vector (V
) from the viewing matrix. We reflect this vector on the normal of the mirror (N
). Next, the camera position is moved to the place behind the mirror. Finally, the mirror is scaled by -1 on the X axis. This ensures that the image is laterally inverted as in a mirror. Details of the method are covered in the reference in the See also section.
- The Official OpenGL registry-Framebuffer object specifications can be found at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt.
- OpenGL Superbible, Fifth Edition, Chapter 8, pages 354-358, Richard S. Wright, Addison-Wesley Professional
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
The complete source code for this recipe is in the Chapter3/MirrorUsingFBO directory. More details about the Framebuffer object can be obtained from the Framebuffer object specifications (see the See also section). The output from the demo application implementing this recipe is as follows:

- The Official OpenGL registry-Framebuffer object specifications can be found at http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt.
- OpenGL Superbible, Fifth Edition, Chapter 8, pages 354-358, Richard S. Wright, Addison-Wesley Professional
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
Now we will see how to use dynamic cube mapping to render a real-time scene to a cubemap render target. This allows us to create reflective surfaces. In modern OpenGL, offscreen rendering (also called render-to-texture) functionality is exposed through FBOs. The code for this recipe is in the Chapter3/DynamicCubemap directory.
Let us get started with the recipe as follows:
- Create a cubemap texture object.
glGenTextures(1, &dynamicCubeMapID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP, dynamicCubeMapID);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
for (int face = 0; face < 6; face++) {
  glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA, CUBEMAP_SIZE, CUBEMAP_SIZE, 0, GL_RGBA, GL_FLOAT, NULL);
}
- Set up an FBO with the cubemap texture as an attachment.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, CUBEMAP_SIZE, CUBEMAP_SIZE);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, dynamicCubeMapID, 0);
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
  cerr<<"Frame buffer object setup error."<<endl;
  exit(EXIT_FAILURE);
} else {
  cerr<<"FBO set up successfully."<<endl;
}
- Set the viewport to the size of the offscreen texture and render the scene six times without the reflective object to the six sides of the cubemap using FBO.
glViewport(0,0,CUBEMAP_SIZE,CUBEMAP_SIZE);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, dynamicCubeMapID, 0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glm::mat4 MV1 = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0), glm::vec3(0,-1,0));
DrawScene(MV1*T, Pcubemap);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, dynamicCubeMapID, 0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glm::mat4 MV2 = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0), glm::vec3(0,-1,0));
DrawScene(MV2*T, Pcubemap);
...//similar for rest of the faces
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Restore the viewport and the modelview matrix, and render the scene normally.
glViewport(0,0,WIDTH,HEIGHT);
DrawScene(MV, P);
- Set the cubemap shader and then render the reflective object.
glBindVertexArray(sphereVAOID);
cubemapShader.Use();
T = glm::translate(glm::mat4(1), p);
glUniformMatrix4fv(cubemapShader("MVP"), 1, GL_FALSE, glm::value_ptr(P*(MV*T)));
glUniform3fv(cubemapShader("eyePosition"), 1, glm::value_ptr(eyePos));
glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
cubemapShader.UnUse();
Dynamic cube mapping renders the scene six times from the reflective object's position using six cameras, one per cubemap face. For rendering to the cubemap texture, an FBO is used with a cubemap texture attachment. The cubemap texture's GL_TEXTURE_CUBE_MAP_POSITIVE_X target is bound to the GL_COLOR_ATTACHMENT0 color attachment of the FBO. The last parameter of glTexImage2D is NULL since this call just allocates the memory for offscreen rendering; the real data is populated when the FBO is set as the render target.
The cubemap vertex shader outputs the object space vertex positions and normals.
The cubemap fragment shader uses the object space vertex positions to determine the view vector. The reflection vector is then obtained by reflecting the view vector at the object space normal.
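As an illustration of this, a minimal cubemap fragment shader could look like the following; the varying, sampler, and uniform names used here are assumptions for the sketch and are not necessarily those used in the Chapter3/DynamicCubemap sources:

```glsl
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec3 objectPosition;   // object space position from the vertex shader
smooth in vec3 objectNormal;     // object space normal from the vertex shader
uniform samplerCube cubeMap;     // the dynamically rendered cubemap
uniform vec3 eyePosition;        // camera position in object space
void main()
{
  // view vector from the eye to the fragment, reflected at the surface normal
  vec3 V = normalize(objectPosition - eyePosition);
  vec3 R = reflect(V, normalize(objectNormal));
  vFragColor = texture(cubeMap, R);
}
```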

The six cubemap faces can also be filled in a single pass using layered rendering, whereby geometry is routed to a specific Framebuffer object layer. This can be achieved by outputting to the appropriate gl_Layer attribute from the geometry shader and setting the appropriate viewing transformation. This is left as an exercise for the reader; a minimal sketch of the mechanism is given below.
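As a rough sketch of that mechanism (this is not code from the accompanying sources), a geometry shader could route each triangle to all six cubemap faces as follows, assuming a hypothetical uniform array of six per-face view-projection matrices:

```glsl
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices=18) out;  // 3 vertices emitted for each of the 6 faces
uniform mat4 faceVP[6];                       // per-face view-projection matrices (assumed uniform)
void main()
{
  for(int face = 0; face < 6; face++) {
    gl_Layer = face;                          // route this primitive to the given cubemap face
    for(int i = 0; i < 3; i++) {
      gl_Position = faceVP[face] * gl_in[i].gl_Position;
      EmitVertex();
    }
    EndPrimitive();
  }
}
```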
- http://www.opengl.org/wiki/Geometry_Shader#Layered_rendering
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
We will now see how to do area filtering, that is, 2D image convolution, to implement effects like sharpening, blurring, and embossing. There are several ways to achieve image convolution in the spatial domain. The simplest approach is to use a loop that iterates through a given image window and computes the sum of products of the image intensities with the convolution kernel. A more efficient method, as far as the implementation is concerned, is separable convolution, which breaks up the 2D convolution into two 1D convolutions; however, this approach requires an additional pass (a sketch of a single 1D pass is shown after this paragraph). The code for this recipe is in the Chapter3/Convolution directory; most of the work takes place in the fragment shader.
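To illustrate the separable approach, the following is a minimal sketch of the horizontal pass of an unweighted 3-tap smoothing filter; the names textureMap and vUV follow the convolution shader used in this recipe, while delta is assumed here to hold the size of one texel:

```glsl
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec2 vUV;            // interpolated texture coordinates
uniform sampler2D textureMap;  // image to be filtered
uniform vec2 delta;            // size of one texel (1/width, 1/height), assumed
void main()
{
  vec4 sum = vec4(0);
  // horizontal pass: sample the left, centre, and right neighbours
  for(int i=-1;i<=1;i++)
    sum += texture(textureMap, vUV + vec2(i*delta.x, 0));
  vFragColor = sum / 3.0;
}
```

Running this pass followed by an equivalent vertical pass gives the same result as a full 3 x 3 unweighted smoothing kernel, but with six texture look-ups per fragment instead of nine.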
Let us get started with the recipe as follows:
- Create a simple pass-through vertex shader that outputs the clip space position and the texture coordinates which are to be passed into the fragment shader for texture lookup.
#version 330 core
in vec2 vVertex;
out vec2 vUV;
void main()
{
  gl_Position = vec4(vVertex*2.0-1.0, 0, 1);
  vUV = vVertex;
}
- In the fragment shader, we declare a constant array called kernel which stores our convolution kernel. Changing the convolution kernel values dictates the output of the convolution. The default kernel sets up a sharpening convolution filter. Refer to Chapter3/Convolution/shaders/shader_convolution.frag for details.
const float kernel[]=float[9](-1,-1,-1,
                              -1, 8,-1,
                              -1,-1,-1);
- In the fragment shader, we run a nested loop that iterates through the current pixel's neighborhood and multiplies each kernel value with the corresponding pixel value. This is carried out over an n x n neighborhood, where n is the width/height of the kernel.
for(int j=-1;j<=1;j++) {
  for(int i=-1;i<=1;i++) {
    color += kernel[index--] * texture(textureMap, vUV+(vec2(i,j)*delta));
  }
}
- After the nested loops, we divide the color value by the total number of values in the kernel. For a 3 x 3 kernel, we have nine values. Finally, we add the convolved color value to the current pixel's value.
color /= 9.0;
vFragColor = color + texture(textureMap, vUV);
After the image is loaded and an OpenGL texture has been generated, we render a screen-aligned quad. This allows the fragment shader to run for the whole screen. In the fragment shader, for the current fragment, we iterate through its neighborhood and sum the product of the corresponding entry in the kernel with the look-up value. After the loop is terminated, the sum is divided by the total number of kernel coefficients. Finally, the convolution sum is added to the current pixel's value. There are several different kinds of kernels. We list the ones we will use in this recipe in the following table.
Effect | Kernel matrix
---|---
Blurring / Unweighted Smoothing | (the kernel matrices were shown as images in the original layout and are not reproduced here)
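Since the kernel matrices are not reproduced above, the following are the standard textbook forms of a few commonly used 3 x 3 kernels, written as GLSL constant arrays in the same style as the shader in this recipe; they are illustrative and not necessarily identical to the matrices originally tabulated:

```glsl
// sharpening: edge enhancement added back onto the original image
const float sharpen[9] = float[9](-1,-1,-1,  -1, 9,-1,  -1,-1,-1);
// blurring / unweighted (box) smoothing: every neighbour contributes equally (divide the sum by 9)
const float boxBlur[9] = float[9]( 1, 1, 1,   1, 1, 1,   1, 1, 1);
// embossing: emphasizes intensity differences along one diagonal
const float emboss[9]  = float[9](-2,-1, 0,  -1, 1, 1,   0, 1, 2);
// edge detection (Laplacian)
const float edge[9]    = float[9](-1,-1,-1,  -1, 8,-1,  -1,-1,-1);
```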
Now that we know how to perform offscreen rendering and blurring, we will put this knowledge to use by implementing the glow effect. The code for this recipe is in the Chapter3/Glow directory. In this recipe, we will render a set of points encircling a cube; every 50 frames, four alternate points glow.
Let us get started with the recipe as follows:
- Render the scene normally by rendering the points and the cube. The particle shader renders the GL_POINTS primitives (which, by default, render as quads) as circles.
grid->Render(glm::value_ptr(MVP));
cube->Render(glm::value_ptr(MVP));
glBindVertexArray(particlesVAO);
particleShader.Use();
glUniformMatrix4fv(particleShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP*Rot));
glDrawArrays(GL_POINTS, 0, 8);
The particle vertex shader is as follows:
#version 330 core
layout(location=0) in vec3 vVertex;
uniform mat4 MVP;
smooth out vec4 color;
const vec4 colors[8]=vec4[8](vec4(1,0,0,1), vec4(0,1,0,1), vec4(0,0,1,1), vec4(1,1,0,1), vec4(0,1,1,1), vec4(1,0,1,1), vec4(0.5,0.5,0.5,1), vec4(1,1,1,1));
void main()
{
  gl_Position = MVP*vec4(vVertex,1);
  color = colors[gl_VertexID/4];
}
The particle fragment shader is as follows:
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec4 color;
void main()
{
  vec2 pos = gl_PointCoord-0.5;
  if(dot(pos,pos)>0.25)
    discard;
  else
    vFragColor = color;
}
- Set up a single FBO with two color attachments. The first attachment is for rendering of scene elements requiring glow and the second attachment is for blurring.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenTextures(2, texID);
glActiveTexture(GL_TEXTURE0);
for(int i=0;i<2;i++) {
  glBindTexture(GL_TEXTURE_2D, texID[i]);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, RENDER_TARGET_WIDTH, RENDER_TARGET_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0+i, GL_TEXTURE_2D, texID[i], 0);
}
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
  cerr<<"Frame buffer object setup error."<<endl;
  exit(EXIT_FAILURE);
} else {
  cerr<<"FBO set up successfully."<<endl;
}
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Bind the FBO, set the viewport to the size of the attachment texture, set the draw buffer to the first color attachment (GL_COLOR_ATTACHMENT0), and render the part of the scene which needs glow.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glViewport(0,0,RENDER_TARGET_WIDTH,RENDER_TARGET_HEIGHT);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, offset, 4);
particleShader.UnUse();
- Set the draw buffer to the second color attachment (GL_COLOR_ATTACHMENT1) and bind the FBO texture attached to the first color attachment. Set the blur shader, which convolves the input with a simple unweighted smoothing filter.
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindTexture(GL_TEXTURE_2D, texID[0]);
- Render a screen-aligned quad and apply the blur shader to the rendering result from the first color attachment of the FBO. This output is written to the second color attachment.
blurShader.Use();
glBindVertexArray(quadVAOID);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
- Disable FBO rendering, restore the default draw buffer (GL_BACK_LEFT) and viewport, bind the texture attached to the FBO's second color attachment, draw a screen-aligned quad, and blend the blur output into the existing scene using additive blending.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK_LEFT);
glBindTexture(GL_TEXTURE_2D, texID[1]);
glViewport(0,0,WIDTH, HEIGHT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
blurShader.UnUse();
glDisable(GL_BLEND);
Note that we could also carry out the additive blending in the fragment shader itself. Assuming that the two images to be blended are bound to their texture units and their shader samplers are texture1 and texture2, the additive blending shader code would be like this:
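The following is a minimal sketch of such an additive blending fragment shader, assuming that a screen-aligned quad supplies the vUV texture coordinates (texture1 and texture2 follow the sampler names mentioned above):

```glsl
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec2 vUV;           // texture coordinates from the screen-aligned quad
uniform sampler2D texture1;   // first image (for example, the normally rendered scene)
uniform sampler2D texture2;   // second image (for example, the blurred glow texture)
void main()
{
  // simple additive blend of the two inputs
  vFragColor = texture(texture1, vUV) + texture(texture2, vUV);
}
```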
The second color attachment is then set as output while the output from the vertical smoothing pass (which was written to the third color attachment) is set as input. The horizontal smoothing shader is then applied on a column of pixels, which smooths the entire image. The image is then rendered to the second color attachment. Finally, the blend shader combines the result from the first color attachment with the result from the second color attachment. Note that the same effect could be carried out by using two separate FBOs: a rendering FBO and a filtering FBO, which gives us more flexibility as we can downsample the filtering result to take advantage of hardware linear filtering. This technique has been used in the Implementing variance shadow mapping recipe in Chapter 4, Lights and Shadows.

In this chapter, we will cover:
- Implementing per-vertex and per-fragment point lighting
- Implementing per-fragment directional light
- Implementing per-fragment point light with attenuation
- Implementing per-fragment spot light
- Implementing shadow mapping with FBO
- Implementing shadow mapping with percentage closer filtering (PCF)
- Implementing variance shadow mapping
To give more realism to 3D graphic scenes, we add lighting. In OpenGL's fixed function pipeline, per-vertex lighting is provided (this functionality is deprecated in OpenGL v3.3 and above). Using shaders, we can not only replicate the per-vertex lighting of the fixed function pipeline but also go a step further by implementing per-fragment lighting. Per-vertex lighting is also known as Gouraud shading and per-fragment shading is known as Phong shading. So, without further ado, let's get started.
In this recipe, we will render many cubes and a sphere. All of these objects are generated and stored in buffer objects. For details, refer to the CreateSphere and CreateCube functions in Chapter4/PerVertexLighting/main.cpp. These functions generate both the vertex positions and the per-vertex normals, which are needed for the lighting calculations. All of the lighting calculations take place in the vertex shader of the per-vertex lighting recipe (Chapter4/PerVertexLighting/), whereas for the per-fragment lighting recipe (Chapter4/PerFragmentLighting/) they take place in the fragment shader.
Let us start our recipe by following these simple steps:
- Set up the vertex shader that performs the lighting calculation in the view/eye space. This generates the color after the lighting calculation.
#version 330 core
layout(location=0) in vec3 vVertex;
layout(location=1) in vec3 vNormal;
uniform mat4 MVP;
uniform mat4 MV;
uniform mat3 N;
uniform vec3 light_position;  //light position in object space
uniform vec3 diffuse_color;
uniform vec3 specular_color;
uniform float shininess;
smooth out vec4 color;
const vec3 vEyeSpaceCameraPosition = vec3(0,0,0);
void main()
{
  vec4 vEyeSpaceLightPosition = MV*vec4(light_position,1);
  vec4 vEyeSpacePosition = MV*vec4(vVertex,1);
  vec3 vEyeSpaceNormal = normalize(N*vNormal);
  vec3 L = normalize(vEyeSpaceLightPosition.xyz - vEyeSpacePosition.xyz);
  vec3 V = normalize(vEyeSpaceCameraPosition.xyz - vEyeSpacePosition.xyz);
  vec3 H = normalize(L+V);
  float diffuse = max(0, dot(vEyeSpaceNormal, L));
  float specular = max(0, pow(dot(vEyeSpaceNormal, H), shininess));
  color = diffuse*vec4(diffuse_color,1) + specular*vec4(specular_color,1);
  gl_Position = MVP*vec4(vVertex,1);
}
- Set up a fragment shader that takes the shaded color from the vertex shader (interpolated by the rasterizer) and sets it as the fragment's output color.
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec4 color;
void main()
{
  vFragColor = color;
}
- In the rendering code, set the shader and render the objects by passing their modelview/projection matrices to the shader as shader uniforms.
shader.Use();
glBindVertexArray(cubeVAOID);
for(int i=0;i<8;i++) {
  float theta = (float)(i/8.0f*2*M_PI);
  glm::mat4 T = glm::translate(glm::mat4(1), glm::vec3(radius*cos(theta), 0.5, radius*sin(theta)));
  glm::mat4 M = T;
  glm::mat4 MV = View*M;
  glm::mat4 MVP = Proj*MV;
  glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
  glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
  glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
  glUniform3fv(shader("diffuse_color"), 1, &(colors[i].x));
  glUniform3fv(shader("light_position"), 1, &(lightPosOS.x));
  glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
}
glBindVertexArray(sphereVAOID);
glm::mat4 T = glm::translate(glm::mat4(1), glm::vec3(0,1,0));
glm::mat4 M = T;
glm::mat4 MV = View*M;
glm::mat4 MVP = Proj*MV;
glUniformMatrix4fv(shader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP));
glUniformMatrix4fv(shader("MV"), 1, GL_FALSE, glm::value_ptr(MV));
glUniformMatrix3fv(shader("N"), 1, GL_FALSE, glm::value_ptr(glm::inverseTranspose(glm::mat3(MV))));
glUniform3f(shader("diffuse_color"), 0.9f, 0.9f, 1.0f);
glUniform3fv(shader("light_position"), 1, &(lightPosOS.x));
glDrawElements(GL_TRIANGLES, totalSphereTriangles, GL_UNSIGNED_SHORT, 0);
shader.UnUse();
glBindVertexArray(0);
grid->Render(glm::value_ptr(Proj*View));
We can perform the lighting calculations in any coordinate space we wish, that is, object space, world space, or eye/view space. Similar to the lighting in the fixed function OpenGL pipeline, in this recipe we also do our calculations in the eye space. The first step in the vertex shader is to obtain the vertex position and light position in eye space. This is done by multiplying the current vertex and light position with the modelview (MV) matrix. The per-vertex normal is transformed by the normal matrix (N), and the light vector (L), the view vector (V), and the half-way vector (H = normalize(L+V)) are then computed; these are used for the specular component calculation in the Blinn-Phong lighting model. The specular component is then obtained using pow(dot(N,H), σ), where σ is the shininess value; the larger the shininess, the more focused the specular highlight.
In the fragment shader, the rest of the calculation, including the diffuse and specular component contributions, is carried out.
Next, the diffuse component is calculated using the dot product with the eye space normal.
The specular component is calculated as in the per-vertex case.
Finally, the combined color is obtained by summing the diffuse and specular contributions. The diffuse contribution is obtained by multiplying the diffuse color with the diffuse component and the specular contribution is obtained by multiplying the specular component with the specular color.
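As an illustration of the per-fragment path, a minimal Blinn-Phong fragment shader could look like the following; the uniform and varying names here are assumptions for this sketch and may differ from those used in the Chapter4/PerFragmentLighting sources:

```glsl
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec3 vEyeSpaceNormal;       // interpolated eye space normal
smooth in vec3 vEyeSpacePosition;     // interpolated eye space position
uniform vec3 vEyeSpaceLightPosition;  // light position assumed already transformed to eye space
uniform vec3 diffuse_color;
uniform vec3 specular_color;
uniform float shininess;
void main()
{
  vec3 N = normalize(vEyeSpaceNormal);
  vec3 L = normalize(vEyeSpaceLightPosition - vEyeSpacePosition);
  vec3 V = normalize(-vEyeSpacePosition);  // the camera is at the origin in eye space
  vec3 H = normalize(L + V);
  float diffuse  = max(0.0, dot(N, L));
  float specular = max(0.0, pow(dot(N, H), shininess));
  vFragColor = diffuse*vec4(diffuse_color,1) + specular*vec4(specular_color,1);
}
```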


In this recipe, we will implement directional light. The only difference between a point light and a directional light is that a directional light source has no position, only a direction.
We will build on the geometry handling code from the per-fragment lighting recipe but, instead of the pulsating cubes, we will now render a single cube with a sphere. The code for this recipe is contained in the Chapter4/DirectionalLight folder. The same code also works for per-vertex directional light.
Let us start the recipe by following these simple steps:
- Calculate the light direction in eye space and pass it as a shader uniform. Note that the last component is 0 since we now have a light direction vector rather than a position.
lightDirectionES = glm::vec3(MV*glm::vec4(lightDirectionOS,0));
- In the vertex shader, output the eye space normal.
#version 330 core
layout(location=0) in vec3 vVertex;
layout(location=1) in vec3 vNormal;
uniform mat4 MVP;
uniform mat3 N;
smooth out vec3 vEyeSpaceNormal;
void main()
{
  vEyeSpaceNormal = N*vNormal;
  gl_Position = MVP*vec4(vVertex,1);
}
- In the fragment shader, compute the diffuse component by taking the dot product of the eye space light direction vector and the eye space normal, and multiply it with the diffuse color to get the fragment color. Note that here the light vector is independent of the eye space vertex position.
#version 330 core
layout(location=0) out vec4 vFragColor;
uniform vec3 light_direction;
uniform vec3 diffuse_color;
smooth in vec3 vEyeSpaceNormal;
void main()
{
  vec3 L = light_direction;
  float diffuse = max(0, dot(vEyeSpaceNormal, L));
  vFragColor = diffuse*vec4(diffuse_color,1);
}
The only difference between this recipe and the previous one is that we now pass the light direction instead of the position to the fragment shader. The rest of the calculation remains unchanged. If we want to apply attenuation, we can add the relevant shader snippets from the previous recipe.

Implementing a per-fragment point light is demonstrated by following these steps:
- From the vertex shader, output the eye space vertex position and normal.
#version 330 core
layout(location=0) in vec3 vVertex;
layout(location=1) in vec3 vNormal;
uniform mat4 MVP;
uniform mat4 MV;
uniform mat3 N;
smooth out vec3 vEyeSpaceNormal;
smooth out vec3 vEyeSpacePosition;
void main()
{
  vEyeSpacePosition = (MV*vec4(vVertex,1)).xyz;
  vEyeSpaceNormal = N*vNormal;
  gl_Position = MVP*vec4(vVertex,1);
}
- In the fragment shader, calculate the light position in eye space, and then calculate the vector from the eye space vertex position to the eye space light position. Store the light distance before normalizing the light vector.
#version 330 core
layout(location=0) out vec4 vFragColor;
uniform vec3 light_position; //light position in object space
uniform vec3 diffuse_color;
uniform mat4 MV;
smooth in vec3 vEyeSpaceNormal;
smooth in vec3 vEyeSpacePosition;
const float k0 = 1.0; //constant attenuation
const float k1 = 0.0; //linear attenuation
const float k2 = 0.0; //quadratic attenuation
void main()
{
  vec3 vEyeSpaceLightPosition = (MV*vec4(light_position,1)).xyz;
  vec3 L = (vEyeSpaceLightPosition-vEyeSpacePosition);
  float d = length(L);
  L = normalize(L);
  float diffuse = max(0, dot(vEyeSpaceNormal, L));
  float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d));
  diffuse *= attenuationAmount;
  vFragColor = diffuse*vec4(diffuse_color,1);
}
- Apply attenuation to the diffuse component based on the distance from the light source.
float attenuationAmount = 1.0/(k0 + (k1*d) + (k2*d*d)); diffuse *= attenuationAmount;
- Multiply the diffuse component with the diffuse color and set the result as the fragment color.
vFragColor = diffuse*vec4(diffuse_color,1);
The recipe follows the Implementing per-fragment directional light recipe. In addition, it performs the attenuation calculation. The attenuation of light is calculated by using the following formula:
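In terms of the k0, k1, and k2 coefficients used in the fragment shader above, the attenuation factor is:
attenuation = 1 / (k0 + k1*d + k2*d*d)
where d is the distance from the light source to the fragment, and k0, k1, and k2 are the constant, linear, and quadratic attenuation coefficients, respectively. With the default values used in the listing (k0 = 1, k1 = k2 = 0), the attenuation factor is always 1, that is, the light does not attenuate until the coefficients are changed.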

We will now implement a per-fragment spot light. A spot light is a special point light that emits light in a directional cone. The size of this cone is determined by the spot cutoff, which is specified as an angle, as shown in the following figure. In addition, the sharpness of the spot is controlled by the spot exponent parameter; a higher value of the exponent gives a sharper falloff and vice versa.
The code for this recipe is contained in the Chapter4/SpotLight
directory. The vertex shader is the same as in the point light recipe. The fragment shader calculates the diffuse component, as in the Implementing per-vertex and per-fragment point lighting recipe.
Let us start this recipe by following these simple steps:
- From the light's object space position and spot light target's position, calculate the spot light direction vector in eye space.
spotDirectionES = glm::normalize(glm::vec3(MV*glm::vec4(spotPositionOS-lightPosOS,0)));
- In the fragment shader, calculate the diffuse component as for the point light. In addition, calculate the spot effect by taking the dot product between the vector pointing from the light towards the fragment and the spot direction vector.
vec3 L = (light_position.xyz-vEyeSpacePosition);
float d = length(L);
L = normalize(L);
vec3 D = normalize(spot_direction);
vec3 V = -L;
float diffuse = 1;
float spotEffect = dot(V,D);
- If spotEffect, the cosine of the angle between the two vectors, is greater than the spot cutoff value (that is, the fragment lies within the spot cone), apply the spot exponent and then carry out the diffuse shading calculation for the fragment.
if(spotEffect > spot_cutoff) {
  spotEffect = pow(spotEffect, spot_exponent);
  diffuse = max(0, dot(vEyeSpaceNormal, L));
  float attenuationAmount = spotEffect/(k0 + (k1*d) + (k2*d*d));
  diffuse *= attenuationAmount;
  vFragColor = diffuse*vec4(diffuse_color,1);
}
The spot light is a special point light source that illuminates only within a cone of directions. The extent of the cone and its sharpness are controlled by the spot cutoff and spot exponent parameters respectively. Similar to the point light source, we first calculate the diffuse component. Instead of using the vector towards the light source (L), we use the opposite vector (V = -L), which points in the direction the light travels. Then we check whether the angle between the spot direction and this vector lies within the cutoff range. If it does, we apply the diffuse shading calculation. In addition, the spot exponent parameter controls the sharpness of the spot by attenuating the light in a falloff, giving a more pleasing spot light effect.
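Because the shader compares spotEffect, a cosine, against spot_cutoff, the cutoff is most conveniently uploaded as the cosine of the cone half-angle. A minimal host-side sketch follows; the program handle, the 20-degree half-angle, and the exponent value are illustrative assumptions, while the uniform names match the shader above:
// upload spot light uniforms; the cutoff is stored as cos(half-angle) so it can
// be compared directly with the dot product computed in the fragment shader
glUseProgram(program);
glUniform3fv(glGetUniformLocation(program, "spot_direction"), 1,
             glm::value_ptr(spotDirectionES));
glUniform1f(glGetUniformLocation(program, "spot_cutoff"), cosf(glm::radians(20.0f)));
glUniform1f(glGetUniformLocation(program, "spot_exponent"), 2.0f);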

For this recipe, we will use the previous scene but instead of a grid object, we will use a plane object so that the generated shadows can be seen. The code for this recipe is contained in the Chapter4/ShadowMapping
directory.
Let us start this recipe by following these simple steps:
- Create an OpenGL texture object which will be our shadow map texture. Make sure to set the clamp mode to GL_CLAMP_TO_BORDER, set the border color to {1,0,0,0}, set the texture comparison mode to GL_COMPARE_REF_TO_TEXTURE, and set the compare function to GL_LEQUAL. Set the texture internal format to GL_DEPTH_COMPONENT24.
glGenTextures(1, &shadowMapTexID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, shadowMapTexID);
GLfloat border[4]={1,0,0,0};
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_MODE,GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_COMPARE_FUNC,GL_LEQUAL);
glTexParameterfv(GL_TEXTURE_2D,GL_TEXTURE_BORDER_COLOR,border);
glTexImage2D(GL_TEXTURE_2D,0,GL_DEPTH_COMPONENT24,SHADOWMAP_WIDTH,SHADOWMAP_HEIGHT,0,GL_DEPTH_COMPONENT,GL_UNSIGNED_BYTE,NULL);
- Set up an FBO and use the shadow map texture as a single depth attachment. This will store the scene's depth from the point of view of the light.
glGenFramebuffers(1,&fboID);
glBindFramebuffer(GL_FRAMEBUFFER,fboID);
glFramebufferTexture2D(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_2D,shadowMapTexID,0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status == GL_FRAMEBUFFER_COMPLETE) {
  cout<<"FBO setup successful."<<endl;
} else {
  cout<<"Problem in FBO setup."<<endl;
}
glBindFramebuffer(GL_FRAMEBUFFER,0);
- Using the position and the direction of the light, set up the shadow matrix (S) by combining the light modelview matrix (MV_L), projection matrix (P_L), and bias matrix (B). To reduce runtime calculation, we store the combined bias and projection matrix (BP) at initialization.
MV_L = glm::lookAt(lightPosOS,glm::vec3(0,0,0), glm::vec3(0,1,0));
P_L = glm::perspective(50.0f,1.0f,1.0f, 25.0f);
B = glm::scale(glm::translate(glm::mat4(1), glm::vec3(0.5,0.5,0.5)), glm::vec3(0.5,0.5,0.5));
BP = B*P_L;
S = BP*MV_L;
- Bind the FBO and render the scene from the point of view of the light. Make sure to enable front-face culling (glEnable(GL_CULL_FACE) and glCullFace(GL_FRONT)) so that the back-face depth values are rendered. Otherwise, our objects will suffer from shadow acne.
- Disable the FBO, restore the default viewport, and render the scene normally from the point of view of the camera.
glBindFramebuffer(GL_FRAMEBUFFER,0);
glViewport(0,0,WIDTH, HEIGHT);
DrawScene(MV, P, 0);