
GLSL Essentials

By Jacobo Rodriguez
About this book
Shader programming has been the largest revolution in graphics programming. The OpenGL Shading Language (abbreviated GLSL, or GLslang) is a high-level shading language based on the syntax of the C programming language. With GLSL you can execute code on your GPU (that is, your graphics card), and more sophisticated effects can be achieved with this technique. Therefore, knowing how OpenGL works, how each shader type interacts with the others, and how the shaders are integrated into the system is imperative for graphics programmers. This knowledge is crucial in order to be familiar with the mechanisms for rendering 3D objects.

GLSL Essentials is the only book on the market that teaches you about shaders from the very beginning. It shows you how graphics programming has evolved, so you understand why you need each stage in the Graphics Rendering Pipeline and how to manage it in a simple but concise way. The book explains how shaders work step by step, describing how they interact with the application's assets at each stage, and takes you through the graphics pipeline, covering each section in an interactive and clear way. You will learn how the OpenGL state machine works, along with all its relevant stages. Vertex shaders, fragment shaders, and geometry shaders are covered, as well as some use cases and an introduction to the math needed for lighting algorithms and transforms. Generic GPU programming (GPGPU) is also covered. After reading GLSL Essentials you will be ready to generate any rendering effect you need.
Publication date:
December 2013
Publisher
Packt
Pages
116
ISBN
9781849698009

 

Chapter 1. The Graphics Rendering Pipeline

If this is your first approach to shader technology, you should know a few things before we start writing GLSL code. The differences between the usual CPU architecture and that of a GPU are big enough to warrant a mention.

When you programmed applications in the past, you were aware of the underlying hardware: a CPU, an ALU, memory (both volatile and mass storage), and certain types of I/O devices (keyboard, screen, and so on). You also knew that your program would run sequentially, one instruction after another (unless you used multithreading, but that is not the point here). When you program shaders, they will run in an isolated unit called the GPU, which has a very different architecture from the one you are used to.

Now, your application will run in a massively parallel environment. The I/O devices are totally different; you won't have direct access to any kind of memory, nor will memory be generic for you to use at will. Also, the system will spawn tens or hundreds of instances of your program, as if they were running on hundreds of real hardware threads.

In order to understand this fairly new architecture, this chapter will cover the following topics:

  • A brief history of graphics hardware

  • The Graphics Rendering Pipeline

  • Types of shaders

  • The shader environment

  • Scalar versus vectorial execution

  • Parallel execution

 

A brief history of graphics hardware


Graphics hardware (also called a graphics card or GPU) is not just a bunch of transistors that receives generic orders and input data and acts on them the way a CPU does. Orders issued to the hardware must be consistent and arrive in an explicit, well-known order at every stage. There are also data requirements for things to work as expected (for example, you cannot use vertices as input for fragment shaders, or textures as output in geometry shaders). Data and orders must follow a path and pass through certain stages, and that path cannot be altered.

This path is commonly called the Graphics Rendering Pipeline. Think of it as a pipe: we insert some data into one end (vertices, textures, shaders), it travels through a series of small machines that perform very precise and concrete operations on it, and the final rendering emerges at the other end.

In the early OpenGL years, the Graphics Rendering Pipeline was completely fixed, which means that the data always had to go through the same small machines, which always performed the same operations, in the same order, and no operation could be skipped. These were the pre-shader ages (2002 and earlier).

The following is a simplified representation of the fixed pipeline, showing the most important building blocks and how the data flows through:

Between 2002 and 2004, some degree of programmability inside the GPU became available, replacing some of those fixed stages. These were the first shaders, which graphics programmers had to write in a pseudo-assembly language, and they were very platform specific. In fact, programmers had to write at least one shader variant for each graphics hardware vendor, because the vendors didn't even share the same assembly language; but at least some of the old-fashioned fixed pipeline stages could now be replaced by small low-level programs. Nonetheless, this was the beginning of the biggest revolution in real-time graphics programming history.

Some companies provided programmers with other, higher-level solutions, such as Cg (from NVIDIA) or HLSL (from Microsoft), but those solutions weren't multiplatform: Cg was only usable with NVIDIA GPUs, and HLSL was part of Direct3D.

During 2004, some companies realized the need for a high-level shading language that would be common to different platforms; something like a standard for shader programming. Hence, the OpenGL Shading Language (GLSL) was born, and it allowed programmers to replace their multiple assembly code paths with a single (at least in theory, because different GPUs have different capabilities) C-like shader, common to every hardware vendor.

In that year, only two pieces of the fixed pipeline could be replaced: the vertex processing unit, which took care of transform and lighting (T&L), and the fragment processing unit, which was responsible for assigning colors to pixels. Those new programmable units were called vertex shaders and fragment shaders respectively. Another two stages were added later: geometry shaders and compute shaders, which entered the official OpenGL specification in 2008 and 2012 respectively.

The following diagram shows the layout of the new programmable pipeline after these changes:

 

The Graphics Rendering Pipeline


Following the programmable pipeline diagram, I'll describe, in a summarized way, the modules that the data goes through, to explain how it is transformed at every stage.

Geometry stages (per-vertex operations)

This block of stages focuses on the transformation of vertex data from its initial state (model coordinates system) to its final state (viewport coordinates system):

  • Vertex data: This is the input data for the whole process. Here we feed the pipeline with all the vectorial data of our geometry: vertices, normals, indices, tangents, binormals, texture coordinates, and so on.

  • Textures: When shaders showed up, this new input to the vertex stage became possible. In addition to making our renders colorful, textures can serve as input to vertex and geometry shaders, for example to displace vertices according to the values stored in a texture (the displacement mapping technique).

  • Vertex shader: This stage is responsible for transforming the vertices from their local coordinate system to clip space, applying the adequate transform matrices (model, view, and projection).

  • Geometry shader: New primitives can be generated by this module, using the outcome of the vertex shader as input.

  • Clipping: Once the primitive's vertices are in the so-called clip space, it is easier and computationally cheaper to clip and discard the outer triangles here than in any other space.

  • Perspective division: This operation converts our visualization volume (a truncated pyramid, usually called a frustum) into a regular, normalized cube (see the sketch after this list).

  • Viewport transform: The near plane of the clipping volume (the normalized cube) is translated and scaled to the viewport coordinates. This means that the coordinates will be mapped to our viewport (usually our screen or our window).

  • Data is passed to the rasterizer: This is the stage that transforms our vectorial data (the primitive's vertices) to a discrete representation (the framebuffer) to be processed in further steps.
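To make the last three stages concrete, here is a short sketch, in GLSL-style pseudocode, of what the fixed hardware does with the clip-space position that the vertex shader writes out. You never write this code yourself; the function and parameter names (viewportOrigin and viewportSize, standing for the glViewport parameters) are illustrative assumptions:

// Conceptual only: these operations happen in fixed hardware.
vec3 toWindowCoords(vec4 clipPos, vec2 viewportOrigin, vec2 viewportSize)
{
    // Perspective division: clip space -> normalized device coordinates.
    // The view frustum becomes the normalized [-1, 1] cube.
    vec3 ndc = clipPos.xyz / clipPos.w;

    // Viewport transform: map NDC x/y from [-1, 1] to viewport coordinates.
    vec2 window = viewportOrigin + (ndc.xy * 0.5 + 0.5) * viewportSize;
    return vec3(window, ndc.z);   // z is kept for the depth test
}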

Fragment stages (per-fragment operations)

This is where our vectorial data, now converted into discrete data by the rasterizer, is processed. The stages inside this block control how that discrete data will finally be presented:

  • Fragment shader: This is the stage where textures, colors, and lights are calculated, applied, and combined to form a fragment.

  • Post-fragment processing: This is the stage where blending, depth tests, scissor tests, alpha tests, and so on take place. Fragments are combined, tested, and discarded in this stage, and the ones that finally pass are written to the framebuffer.

External stages

Outside the per-vertex and per-fragment big blocks lies the compute shader stage. This stage can be written to affect any other programmable part of the pipeline.

Differences between fixed and programmable designs

It is worth understanding the fixed pipeline, because the programmable pipeline is heavily based on it. Shaders only replace a few well-defined modules that previously existed in fixed form, so the concept of a "pipeline" has not actually changed very much.

In the case of vertex shaders, they replace the whole transform and lighting module, so now we have to write a program that performs equivalent tasks. Inside your vertex shader, you can perform whatever calculations you need, but there is a minimum requirement: in order not to break the pipeline, the output of your shader must feed the input of the next module. You achieve this by calculating the vertex position in clip coordinates and writing it out for the next stage.

Regarding fragment shaders, they replace the fixed texture stages. In the past, this module determined how a fragment was produced by combining textures in a very limited way. Now, the final outcome of a fragment shader is a fragment. As implied before, a fragment is a candidate to become a pixel, so in its simplest form it is just an RGBA color. To connect the fragment shader with the following modules of the pipeline, you have to output that color, but you can compute it any way you want.

When your fragment shader produces a color, other data is also associated with it, mainly its raster position and depth, so further tests such as the depth or scissor tests can proceed directly. After all the fragments for a given raster position have been processed, the color that remains is what is commonly called a pixel.

Optionally, you can specify two additional modules that did not exist in the fixed pipeline before:

  • The geometry shader: This module is placed after the vertex shader, but before clipping happens. The responsibility of this module is to emit new primitives (not vertices!) based on the incoming ones.

  • The compute shader: This is a complementary module. In some ways it is quite different from the other shaders, because it affects the whole pipeline globally. Its main purpose is to provide a method for generic GPGPU (General-Purpose computation on GPUs), which is not very graphics related. It is like OpenCL, but handier for graphics programmers because it is fully integrated with the entire pipeline. As graphics usage examples, compute shaders could be used for image transforms or for deferred rendering in a more efficient way than OpenCL.

 

Types of shaders


Vertex and fragment shaders are the most important shaders in the whole pipeline, because they expose the pure basic functionality of the GPU. With vertex shaders, you can compute the geometry of the object that you are going to render as well as other important elements, such as the scene's camera, the projection, or how the geometry is clipped. With fragment shaders, you can control how your geometry will look onscreen: colors, lighting, textures, and so on.

As you can see, with only vertex and fragment shaders, you can control almost everything in your rendering process, but there is room for more improvement in the OpenGL machine.

Let's take an example: suppose that you process point primitives with a complex vertex shader. Using those processed vertices, you can use a geometry shader to create arbitrarily shaped primitives (for instance, quads), using each point as the quad's center. Then you can use those quads for a particle system.

During that process you have saved bandwidth, because you sent points instead of quads, which have four times as many vertices; and you have saved processing power because, once a point has been transformed, the quad's four vertices already lie in the same space, so you transformed one vertex with a complex shader instead of four.
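A minimal sketch of such a geometry shader follows, assuming clip-space input points and a hypothetical halfSize uniform that sets the quad's extent; a real particle system would also pass texture coordinates and handle orientation:

#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float halfSize;   // half the quad's side, in clip-space units (assumed)

void main()
{
    vec4 center = gl_in[0].gl_Position;
    // Emit the four corners of a quad centered on the incoming point.
    gl_Position = center + vec4(-halfSize, -halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4( halfSize, -halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4(-halfSize,  halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4( halfSize,  halfSize, 0.0, 0.0); EmitVertex();
    EndPrimitive();   // close the triangle strip: two triangles, one quad
}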

Unlike vertex and fragment shaders (it is mandatory to have one of each kind to complete the pipeline), the geometry shader is optional. So, if you do not want to create new geometry after the vertex shader's execution, simply do not link a geometry shader in your application, and the results of the vertex shader will pass unchanged to the clipping stage, which is perfectly fine.

The compute shader stage was the latest addition to the pipeline. It is also optional, like the geometry shader, and is intended for generic computations.

Inside the pipeline, the following kinds of shaders can exist: vertex shaders, fragment shaders, geometry shaders, tessellation shaders (meant to subdivide triangle meshes on the fly, but not covered in this book), and compute shaders. OpenGL evolves every day, so don't be surprised if other shader classes appear and change the pipeline layout from time to time.

Before going deeper into the matter, there is an important concept that we have to talk about: the concept of a shader program. A shader program is nothing more than a working pipeline configuration. This means that at least a vertex shader and a fragment shader must have been compiled without errors and linked together. Geometry and compute shaders can form part of a program too, compiled and linked together with the other two shaders into the same shader program.

Vertex shaders

In order to take your 3D model's coordinates and transform them to clip space, we usually apply the model, view, and projection matrices to the vertices. We can also perform any other type of data transform, such as applying noise (from a texture or computed on the fly) to the positions for a pseudorandom displacement, calculating normals, calculating texture coordinates, calculating vertex colors, preparing the data for a normal mapping shader, and so on.

You can do a lot more with this shader; however, its most important task is to provide the vertex positions in clip coordinates, to take us to the next stage.

Tip

A vertex shader is a piece of code that is executed on the GPU's processors, and it is executed once, and only once, for each vertex you send to the graphics card. So, if you have a 3D model with 1,000 vertices, the vertex shader will be executed 1,000 times; remember to always keep your calculations simple.
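Putting this together, here is a minimal sketch of a complete vertex shader. The uniform and attribute names are illustrative choices, not names mandated by GLSL:

#version 330 core

uniform mat4 model;        // model space -> world space
uniform mat4 view;         // world space -> eye space
uniform mat4 projection;   // eye space  -> clip space

in vec3 position;          // vertex position in model coordinates

void main()
{
    // The one mandatory task: output the position in clip coordinates.
    gl_Position = projection * view * model * vec4(position, 1.0);
}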

Fragment shaders

Fragment shaders are responsible for painting each primitive's area. The minimum task of a fragment shader is to output an RGBA color. You can calculate that color by any means: procedurally, from textures, or using the vertex shader's output data. But in the end, you have to output at least a color to the framebuffer.
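In its simplest possible form, a fragment shader just writes a constant color. The output variable name here is an illustrative choice:

#version 330 core

out vec4 fragColor;   // written to the framebuffer after the fragment tests

void main()
{
    fragColor = vec4(1.0, 0.5, 0.0, 1.0);   // opaque orange, as RGBA
}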

The execution model of a fragment shader is like the vertex shader's. A fragment shader is a piece of code that is executed once, and only once, per fragment. Let us elaborate on this a bit. Suppose that you have a screen with a size of 1,024 x 768; that screen contains 786,432 pixels. Now suppose you paint one quad that covers the whole screen exactly (also known as a full-screen quad). This means that your fragment shader will be executed 786,432 times, but the reality is worse. What if you paint several full-screen quads (quite normal when doing post-processing shaders such as motion blur, glows, or screen space ambient occlusion), or simply many triangles that overlap on the screen? Each time you paint a triangle, its whole area must be rasterized, so all the triangle's fragments must be calculated. In practice, a fragment shader is executed millions of times, so optimization in a fragment shader is even more critical than in a vertex shader.

Geometry shaders

The geometry shader stage is responsible for creating new rendering primitives starting from the output of the vertex shader. A geometry shader is executed once per primitive, which in the worst case (point primitives) means as many executions as the vertex shader. The best case is triangle input, because then the geometry shader runs only a third as many times as the vertex shader; but this cost is relative. Although the geometry shader's execution itself may be cheap, it always increases the scene's complexity, and that always translates into more computational time spent by the GPU to render the scene.

Compute shaders

This special kind of shader does not relate directly to a particular part of the pipeline; compute shaders can be written to affect the work of vertex, fragment, or geometry shaders.

As compute shaders lie, in some manner, outside the pipeline, they do not have the same constraints as the other kinds of shaders, which makes them ideal for generic computations. Compute shaders are less specific, but have the advantage of access to all the functions (matrix, advanced texture functions, and so on) and data types (vectors, matrices, all texture formats, and vertex buffers) that exist in GLSL, while other GPGPU solutions, such as OpenCL or CUDA, have their own specific data types and do not fit easily into the rendering pipeline.
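As a taste of what a compute shader looks like, here is a minimal sketch that doubles every value in a buffer. The binding point and block name are illustrative assumptions:

#version 430

layout(local_size_x = 64) in;   // 64 invocations per work group

layout(std430, binding = 0) buffer Data
{
    float values[];
};

void main()
{
    uint i = gl_GlobalInvocationID.x;
    values[i] *= 2.0;   // generic computation, unrelated to any pipeline stage
}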

 

GPU, a vectorial and parallel architecture


GPUs provide incredible processing power in certain situations. If you have ever tried to write a software rasterizer for your CPU, you will have noticed that the performance was terrible. Even the most advanced software rasterizer, taking advantage of vectorial instruction sets such as SSE3 or making intensive use of all available cores through multithreading, offers very poor performance compared with a GPU. CPUs are simply not meant for pixels.

So, why are GPUs so much faster than CPUs at processing fragments, pixels, and vertices? The answer lies in the scalar nature of a CPU: it always processes one instruction after another, whereas a GPU processes hundreds of instructions simultaneously. A CPU has a few (or only one) big multipurpose cores that can each execute one shader instance at a time, but a GPU has dozens or hundreds of small, very specific cores that execute many shader instances in parallel.

Another great advantage of the GPU over the CPU is that all its native types are vectorial. Imagine a typical CPU structure for a vector of floats:

struct Vector3
{
  float x, y, z;
};

Now suppose that you want to calculate the cross product of two vectors:

Vector3 a;
Vector3 b = {1, 2, 3};
Vector3 c = {1, 1, 1};
// a = cross(b, c), written out by hand:
a.x = (b.y * c.z) - (b.z * c.y);
a.y = (b.z * c.x) - (b.x * c.z);
a.z = (b.x * c.y) - (b.y * c.x);

As you can see, this simple operation, written with scalar arithmetic on the CPU, took six multiplications, three subtractions, and three assignments, whereas in a GPU, vectorial types are native: a vec3 is to a GPU what a float or an int is to a CPU. Operations on native types are native too:

vec3 b = vec3(1, 2, 3);
vec3 c = vec3(1, 1, 1);
vec3 a = cross(b, c);

And that is all: the cross product is done in a single, atomic operation. This is a pretty simple example, but now think of the number of operations of this kind that are performed to process vertices and fragments every second, and how a CPU would handle that. The number of multiplications and additions involved in a 4 x 4 matrix multiplication is quite large, while on a GPU it is a matter of one single operation.
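For reference: each entry of a 4 x 4 matrix product is a four-term dot product, so the 16 entries cost 64 multiplications and 48 additions on a CPU. In GLSL the whole thing is one expression, sketched here inside a helper function:

mat4 concatenate(mat4 a, mat4 b)
{
    return a * b;   // a single native operation on the GPU
}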

In a GPU there are many other built-in operations (directly native or based on native operations) for the native types: addition, subtraction, dot products, inner/outer multiplications, and geometric, trigonometric, and exponential functions. All these built-in operations map directly (totally or partially) onto the graphics hardware, and therefore all of them cost only a small fraction of their CPU equivalents.

All shader computations rely heavily on linear algebra, mostly used to compute things such as light vectors, surface normals, displacement vectors, refractions and diffractions, cube maps, and so on. All these computations and many more are vector-based, so it is easy to see why a GPU has a great advantage over a CPU at performing these tasks.
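As an illustration of those built-ins at work, here is a hedged sketch of a Blinn-Phong style lighting computation; n, l, and v are assumed to be normalized vectors (the surface normal and the directions to the light and the viewer), and the exponent 32.0 is an arbitrary shininess choice:

vec3 blinnPhong(vec3 n, vec3 l, vec3 v, vec3 baseColor)
{
    float diffuse  = max(dot(n, l), 0.0);            // one dot product
    vec3  h        = normalize(l + v);               // half vector
    float specular = pow(max(dot(n, h), 0.0), 32.0); // one dot, one pow
    return baseColor * diffuse + vec3(specular);
}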

The following are the reasons why GPUs are faster than CPUs for vectorial calculations and graphics computations:

  • Many shaders can be executed at the same time

  • Inside a shader, many instructions can be executed in a block

 

The shader environment


Other applications that you may have coded in the past were built to run on a CPU. This means that you used a compiler that took your program (written in your favorite high-level programming language) and compiled it down into a representation that a CPU could understand. It does not matter whether the programming language is compiled or interpreted, because in the end, all programs are translated into something the CPU can deal with.

Shaders are a little different because they are meant only for graphics, so they depend closely on the following two things:

  • First, they need a graphics card, because inside the graphics card lies the processor that will run them. This special kind of processor is called the GPU (Graphics Processing Unit).

  • A piece of software to reach the GPU: the GPU driver.

Tip

If you are going to program shaders, the first thing you have to do is prepare your development environment, and that starts with downloading your graphics card driver and always keeping it updated.

Now suppose you are ready to start and have your first shader finished. You should compile it and pass it to the GPU for execution. As GLSL relies on OpenGL, you must use OpenGL to compile and execute the shader; OpenGL has specific API calls for shader compilation, linking, execution, and debugging. Your OpenGL application acts as a host application from which you manage your shaders and the resources they might need, such as textures, vertices, normals, framebuffers, or rendering states.
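A hedged sketch of those host-side calls in C follows; it assumes a current OpenGL context, the appropriate GL headers, and two GLSL source strings, and reduces error handling to a single compile-status check:

const char *vertexSource   = "...";   /* your vertex shader text   */
const char *fragmentSource = "...";   /* your fragment shader text */

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSource, NULL);
glCompileShader(vs);

GLint ok;
glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
if (!ok) { /* query glGetShaderInfoLog(vs, ...) to see the errors */ }

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSource, NULL);
glCompileShader(fs);

GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);   /* check GL_LINK_STATUS in the same way */

glUseProgram(program);    /* the pipeline now runs these shaders */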

 

Summary


In this chapter, we learnt that other worlds exist beyond the CPU: GPUs and parallel computation. We also learnt what the internals of a graphics rendering pipeline look like, which parts it is composed of, and gained a brief understanding of their functions.

In the next chapter, we will get into the details of the language that controls the pipeline: a bit of grammar, and a bit of syntax.

About the Author
  • Jacobo Rodriguez

    Jacobo Rodríguez is a real-time computer graphics programmer living in the north of Spain. He has working experience with computer graphics, digital photogrammetry, computer vision, and video game development. Jacobo has worked for cutting-edge technology companies such as Metria Digital and Blit Software, and has also worked as an entrepreneur and freelancer for a variety of clients on platforms such as PC, iOS, PlayStation 3, PlayStation Vita, and PlayStation Portable. Jacobo has been working and learning at the same time for the last 20 years in the computer graphics field, in roles ranging from junior programmer to project manager, passing through R&D director as well. Jacobo has always been very committed to the computer graphics community: he released for free the OpenGL Shader Designer, the first application in the world (even before NVIDIA's FX Composer or ATI's RenderMonkey) designed to visually develop and program GLSL shaders, as well as some OpenGL programming tutorials, all forming part of the Official OpenGL SDK.
