GLSL Essentials

If you're involved in graphics programming, you need to know about shaders, and this is the book to learn them. A hands-on guide to the OpenGL Shading Language, it walks you from the absolute basics through to advanced techniques.

By Jacobo Rodriguez

Product Details

Publication date: Dec 26, 2013
Length: 116 pages
Edition: 1st
Language: English
ISBN-13: 9781849698009

GLSL Essentials

Chapter 1. The Graphics Rendering Pipeline

If this is your first encounter with shader technology, you should know a few things before we start writing GLSL code. The differences between the usual CPU architecture and that of a GPU are big enough to warrant mentioning them.

When you programmed applications in the past, you were aware of the underlying hardware: it had a CPU, an ALU, memory (both volatile and mass storage), and certain types of I/O devices (keyboard, screen, and so on). You also knew that your program would run sequentially, one instruction after another (unless you used multithreading, but that is not the point here). Shaders, however, run on a separate unit called the GPU, which has a very different architecture from the one you are used to.

Now, your program will run in a massively parallel environment. The I/O devices are totally different; you won't have direct access to memory of any kind, nor will it be generic memory for you to use at will. Also, the system will spawn tens or hundreds of instances of your program, as if they were running on hundreds of real hardware threads.

In order to understand this fairly new architecture, this chapter will cover the following topics:

  • A brief history of graphics hardware

  • The Graphics Rendering Pipeline

  • Types of shaders

  • The shader environment

  • Scalar versus vectorial execution

  • Parallel execution

A brief history of graphics hardware


Graphics hardware (also called a graphics card or GPU) is not just a bunch of transistors that receives generic orders and input data and acts on them the way a CPU does. Orders issued to the hardware must be consistent and follow an explicit, well-known order at every stage. There are also data requirements for things to work as expected (for example, you cannot use vertices as input to fragment shaders, or textures as output from geometry shaders). Data and orders must follow a path and pass through certain stages, and that path cannot be altered.

This path is commonly called the Graphics Rendering Pipeline. Think of it as a pipe: we insert data into one end (vertices, textures, shaders) and it travels through a series of small machines that perform very precise and concrete operations on it, producing the final output at the other end: the final rendering.

In the early OpenGL years, the Graphics Rendering Pipeline was completely fixed, which means that the data always went through the same small machines, which always performed the same operations, in the same order, and no operation could be skipped. These were the pre-shader ages (2002 and earlier).

The following is a simplified representation of the fixed pipeline, showing the most important building blocks and how the data flows through:

Between 2002 and 2004, some degree of programmability inside the GPU became available, replacing some of those fixed stages. Those were the first shaders, which graphics programmers had to write in a pseudo-assembly language that was very platform specific. In fact, programmers had to write at least one shader variant for each graphics hardware vendor, because the vendors didn't even share the same assembly language; but at least they were able to replace some of the old-fashioned fixed pipeline stages with small low-level programs. Nonetheless, this was the beginning of the biggest revolution in real-time graphics programming history.

Some companies provided programmers with other, higher-level solutions, such as Cg (from NVIDIA) or HLSL (from Microsoft), but those solutions weren't multiplatform: Cg was only usable with NVIDIA GPUs, and HLSL was part of Direct3D.

In 2004, some companies recognized the need for a high-level shading language that would be common to different platforms; something like a standard for shader programming. Hence, the OpenGL Shading Language (GLSL) was born, allowing programmers to replace their multiple assembly code paths with a single C-like shader (single at least in theory, because different GPUs have different capabilities), common to every hardware vendor.

At that time, only two pieces of the fixed pipeline could be replaced: the vertex processing unit, which took care of transform and lighting (T&L), and the fragment processing unit, which was responsible for assigning colors to pixels. Those new programmable units were called vertex shaders and fragment shaders respectively. Two more stages were added later: geometry shaders and compute shaders entered the official OpenGL specification in 2009 (OpenGL 3.2) and 2012 (OpenGL 4.3) respectively.

The following diagram shows the layout of the new programmable pipeline after these changes:

The Graphics Rendering Pipeline


Following the programmable pipeline diagram, I'll describe, in summarized form, the modules that the data goes through and how it is transformed at every stage.

Geometry stages (per-vertex operations)

This block of stages focuses on the transformation of vertex data from its initial state (the model coordinate system) to its final state (the viewport coordinate system):

  • Vertex data: This is the input data for the whole process. Here we feed the pipeline with all the vectorial data of our geometry: vertices, normals, indices, tangents, binormals, texture coordinates, and so on.

  • Textures: When shaders showed up, this new input to the vertex stage became possible. In addition to making our renders colorful, textures can serve as input to vertex and geometry shaders, for example, to displace vertices according to the values stored in a texture (the displacement mapping technique).

  • Vertex shader: This unit is responsible for transforming vertices from their local coordinate system to clip space, applying the appropriate transform matrices (model, view, and projection); a minimal example appears after this list.

  • Geometry shader: New primitives can be generated by this module, using the outcome of the vertex shader as input.

  • Clipping: Once the primitive's vertices are in so-called clip space, it is easier and computationally cheaper to clip and discard the triangles that fall outside the view volume here than in any other space.

  • Perspective division: This operation converts our visualization volume (a truncated pyramid, usually called a frustum) into a regular and normalized cube.

  • Viewport transform: The near plane of the clipping volume (the normalized cube) is translated and scaled to the viewport coordinates. This means that the coordinates will be mapped to our viewport (usually our screen or our window).

  • Data is passed to the rasterizer: This is the stage that transforms our vectorial data (the primitives' vertices) into a discrete representation (the framebuffer) to be processed in further steps.
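
To make the vertex shader stage concrete, here is a minimal sketch of a GLSL vertex shader that performs exactly the transformation described above. The matrix and attribute names (model, view, projection, inPosition) are illustrative assumptions, not fixed by OpenGL:

#version 330 core

uniform mat4 model;       // local space to world space
uniform mat4 view;        // world space to eye space
uniform mat4 projection;  // eye space to clip space

in vec3 inPosition;       // vertex position in model coordinates

void main()
{
    // The one mandatory output: the vertex position in clip space.
    gl_Position = projection * view * model * vec4(inPosition, 1.0);
}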

Fragment stages (per-fragment operations)

By this point, our vectorial data has been transformed into discrete data, ready to be processed fragment by fragment. The stages inside this block control how that discrete data will finally be presented:

  • Fragment shader: This is the stage where textures, colors, and lights are calculated, applied, and combined to form a fragment; a minimal sketch follows this list.

  • Post-fragment processing: This is the stage where blending, depth tests, scissor tests, alpha tests, and so on take place. Fragments are combined, tested, and discarded in this stage, and the ones that finally pass are written to the framebuffer.
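
As a minimal sketch of the programmable part of this block, the following fragment shader does the least a fragment shader can do: it writes a constant RGBA color (the output variable name is an illustrative assumption):

#version 330 core

out vec4 fragColor; // the fragment's RGBA color

void main()
{
    fragColor = vec4(1.0, 0.5, 0.0, 1.0); // an opaque orange fragment
}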

External stages

Outside the per-vertex and per-fragment big blocks lies the compute shader stage. This stage can be written to affect any other programmable part of the pipeline.

Differences between fixed and programmable designs

It is worth understanding the fixed pipeline, because the programmable pipeline is heavily based on it. Shaders only replace a few well-defined modules that previously existed in a fixed form, so the concept of a "pipeline" has not actually changed very much.

In the case of vertex shaders, they replace the whole transform and lighting module, so now we have to write a program that performs the equivalent tasks. Inside your vertex shader, you can perform whatever calculations you need, but there is a minimum requirement: in order not to break the pipeline, the output of your shader must feed the input of the next module. You achieve this by calculating the vertex position in clip coordinates and writing it out for the next stage.

Fragment shaders, in turn, replace the fixed texture stages. In the past, this module determined how a fragment was produced by combining textures in a very limited way. Currently, the final outcome of a fragment shader is still a fragment. As implied before, a fragment is a candidate to become a pixel, so, in its simplest form, it is just an RGBA color. To connect the fragment shader with the following pipeline modules, you have to output that color, but you can compute it any way you want.

When your fragment shader produces a color, other data is also associated with it, mainly its raster position and depth, so further tests such as the depth or scissor tests can proceed directly. After all the fragments for a given raster position have been processed, the color that remains is what is commonly called a pixel.

Optionally, you can specify two additional modules that did not exist in the fixed pipeline before:

  • The geometry shader: This module is placed after the vertex shader, but before clipping happens. The responsibility of this module is to emit new primitives (not vertices!) based on the incoming ones.

  • The compute shader: This is a complementary module. In some ways it is quite different from the other shaders, because it affects the whole pipeline globally. Its main purpose is to provide a method for generic GPGPU (General-Purpose computation on GPUs), not very graphics related. It is like OpenCL, but handier for graphics programmers because it is fully integrated with the rest of the pipeline. As graphics usage examples, compute shaders could be used for image transforms or for deferred rendering, more efficiently than with OpenCL.

Types of shaders


Vertex and fragment shaders are the most important shaders in the whole pipeline, because they expose the pure basic functionality of the GPU. With vertex shaders, you can compute the geometry of the object that you are going to render as well as other important elements, such as the scene's camera, the projection, or how the geometry is clipped. With fragment shaders, you can control how your geometry will look onscreen: colors, lighting, textures, and so on.

As you can see, with only vertex and fragment shaders, you can control almost everything in your rendering process, but there is room for more improvement in the OpenGL machine.

Let's take an example: suppose that you process point primitives with a complex vertex shader. Using those processed vertices, you can use a geometry shader to create arbitrarily shaped primitives (for instance, quads), using each point as the quad's center. You can then use those quads for a particle system; see the sketch after the next paragraph.

During that process you have saved bandwidth, because you sent points instead of quads (which have four times as many vertices), and processing power, because once you have transformed each point, the quad's remaining vertices lie in the same space; you transformed one vertex with a complex shader instead of four.
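
The following is a minimal sketch of such a geometry shader, assuming the points arrive already in clip space and using an illustrative uniform, halfSize, for the quad's half extent (neither name comes from the book):

#version 330 core

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float halfSize; // illustrative: half the quad's size in clip-space units

void main()
{
    vec4 center = gl_in[0].gl_Position; // the single transformed point

    // Emit the quad's four corners as a triangle strip around the point.
    gl_Position = center + vec4(-halfSize, -halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4( halfSize, -halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4(-halfSize,  halfSize, 0.0, 0.0); EmitVertex();
    gl_Position = center + vec4( halfSize,  halfSize, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}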

Unlike vertex and fragment shaders (one of each kind is mandatory to complete the pipeline), the geometry shader is optional. So, if you do not want to create new geometry after the vertex shader execution, simply do not link a geometry shader in your application, and the results of the vertex shader will pass unchanged to the clipping stage, which is perfectly fine.

The compute shader stage was the latest addition to the pipeline. It is also optional, like the geometry shader, and is intended for generic computations.

Inside the pipeline, the following kinds of shaders can exist: vertex shaders, fragment shaders, geometry shaders, tessellation shaders (meant to subdivide triangle meshes on the fly, but not covered in this book), and compute shaders. OpenGL evolves every day, so don't be surprised if other shader classes appear and change the pipeline layout from time to time.

Before going deeper into the matter, there is an important concept that we have to talk about: the concept of a shader program. A shader program is nothing more than a working pipeline configuration. This means that at least a vertex shader and a fragment shader must have been compiled without errors and linked together. Geometry and compute shaders can form part of a program too, compiled and linked together with the other two shaders into the same shader program.

Vertex shaders

In order to take your 3D model's coordinates and transform them into clip space, we usually apply the model, view, and projection matrices to the vertices. We can also perform any other type of data transform: apply noise (from a texture or computed on the fly) to the positions for a pseudorandom displacement, calculate normals, calculate texture coordinates, calculate vertex colors, prepare the data for a normal mapping shader, and so on.

You can do a lot more with this shader; however, its most important job is to output the vertex positions in clip coordinates, to take us to the next stage.
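
As an illustration of such extra work, here is a sketch of the displacement idea mentioned above; it pushes each vertex along its normal by an amount read from a texture. All names (heightMap, displacementScale, the attributes) are illustrative assumptions, not code from the book:

#version 330 core

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform sampler2D heightMap;     // illustrative: grayscale displacement texture
uniform float displacementScale; // illustrative: strength of the displacement

in vec3 inPosition;
in vec3 inNormal;
in vec2 inTexCoord;

void main()
{
    // Sample the texture at an explicit LOD (vertex shaders have no
    // automatic mipmap selection) and displace along the normal.
    float height = textureLod(heightMap, inTexCoord, 0.0).r;
    vec3 displaced = inPosition + inNormal * height * displacementScale;
    gl_Position = projection * view * model * vec4(displaced, 1.0);
}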

Tip

A vertex shader is a piece of code that is executed on the GPU's processors, and it is executed once, and only once, for each vertex you send to the graphics card. So, if you have a 3D model with 1,000 vertices, the vertex shader will be executed 1,000 times; remember to always keep your calculations simple.

Fragment shaders

Fragment shaders are responsible for painting each primitive's area. The minimum task of a fragment shader is to output an RGBA color. You can calculate that color by any means: procedurally, from textures, or using the vertex shader's output data. But in the end, you have to output at least a color to the framebuffer; a small sketch follows.
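
As a sketch of the "from textures" and "vertex shader's output data" cases combined (all names are illustrative assumptions), a fragment shader might modulate a texture sample by an interpolated color:

#version 330 core

uniform sampler2D diffuseMap; // illustrative texture

in vec2 texCoord;    // interpolated from the vertex shader's outputs
in vec4 vertexColor;

out vec4 fragColor;

void main()
{
    // Combine the texture sample with the interpolated vertex color.
    fragColor = texture(diffuseMap, texCoord) * vertexColor;
}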

The execution model of a fragment shader is like the vertex shader's. A fragment shader is a piece of code that is executed once, and only once, per fragment. Let us elaborate on this a bit. Suppose that you have a screen with a size of 1,024 x 768. That screen contains 786,432 pixels. Now suppose you paint one quad that covers exactly the whole screen (also known as a full-screen quad). This means that your fragment shader will be executed 786,432 times, but the reality is worse. What if you paint several full-screen quads (something normal when doing post-processing shaders such as motion blur, glows, or screen space ambient occlusion), or simply many triangles that overlap on the screen? Each time you paint a triangle on the screen, all its area must be rasterized, so all the triangle's fragments must be calculated. In reality, a fragment shader is executed millions of times, so optimization in a fragment shader is even more critical than in a vertex shader.

Geometry shaders

The geometry shader stage is responsible for the creation of new rendering primitives starting from the output of the vertex shader. A geometry shader is executed once per primitive, which in the worst case (when it is used to emit point primitives) is as often as the vertex shader. The best case is when it is used to emit triangles, because then it is executed only a third as often as the vertex shader; but this complexity is relative. Although the geometry shader's execution itself could be cheap, it always increases the scene's complexity, and that always translates into more computational time spent by the GPU to render the scene.

Compute shaders

This special kind of shader does not relate directly to any particular part of the pipeline. Compute shaders can be written to affect vertex, fragment, or geometry shaders.

As compute shaders lie, in some manner, outside the pipeline, they do not have the same constraints as the other kinds of shaders. This makes them ideal for generic computations. Compute shaders are less specific, but have the advantage of access to all the functions (matrix, advanced texture functions, and so on) and data types (vectors, matrices, all texture formats, and vertex buffers) that exist in GLSL, whereas other GPGPU solutions, such as OpenCL or CUDA, have their own specific data types and do not fit easily into the rendering pipeline.
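
As a minimal hedged sketch of the stage (the image binding and names are assumptions for illustration), a compute shader that inverts the colors of an image through image load/store might look like this:

#version 430 core

// One work group processes a 16 x 16 tile of the image.
layout(local_size_x = 16, local_size_y = 16) in;

layout(rgba8, binding = 0) uniform image2D img; // illustrative image binding

void main()
{
    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);

    // Read a texel, invert its RGB channels, and write it back in place.
    vec4 color = imageLoad(img, coord);
    imageStore(img, coord, vec4(1.0 - color.rgb, color.a));
}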

GPU, a vectorial and parallel architecture


GPUs provide an incredible processing power in certain situations. If you ever tried to program a software rasterizer for your CPU, you would have noticed that the performance was terrible. Even the most advanced software rasterizer, taking advantage of vectorial instruction sets such as SSE3, or making intensive use of all available cores through multithreading, offers very poor performance compared with a GPU. CPUs are simply not meant for pixels.

So, why are GPUs so fast at processing fragments, pixels, and vertices compared to a CPU? The answer is that, because of the scalar nature of a CPU, it always processes one instruction after another, whereas a GPU processes hundreds of instructions simultaneously. A CPU has a few (or only one) big multipurpose cores that can execute one shader instance at a time, but a GPU has dozens or hundreds of small and very specific cores that execute many shader instances in parallel.

Another great advantage of the GPU over the CPU is that all its native types are vectorial. Imagine a typical CPU structure for a vector of three floats:

struct Vector3
{
  float x, y, z;
};

Now suppose that you want to calculate the cross product of two vectors:

Vector3 a;
Vector3 b = {1, 2, 3};
Vector3 c = {1, 1, 1};
// a = cross(b, c);
a.x = (b.y * c.z) - (b.z * c.y);
a.y = (b.z * c.x) - (b.x * c.z);
a.z = (b.x * c.y) - (b.y * c.x);

As you can see, this simple operation, implemented with scalar instructions on the CPU, took six multiplications, three subtractions, and three assignments; whereas in a GPU, vectorial types are native. A vec3 is to a GPU what a float or an int is to a CPU, and operations on native types are native too:

vec3 b = vec3(1, 2, 3);
vec3 c = vec3(1, 1, 1);
vec3 a = cross(b, c);

And that is all: the cross product is done in a single, atomic operation. This is a pretty simple example, but now think of the number of operations of this kind performed to process vertices and fragments every second, and how a CPU would handle that. The number of multiplications and additions involved in a 4 x 4 matrix multiplication is quite large, while on a GPU it is a matter of one single operation.

In a GPU, there are many other built-in operations (directly native or based on native operations) for native types: addition, subtraction, dot products, inner/outer multiplications, and geometric, trigonometric, and exponential functions. All these built-in operations map directly (totally or partially) onto the graphics hardware, and therefore all of them cost only a small fraction of their CPU equivalents.

All shader computations rely heavily on linear algebra, mostly used to compute things such as light vectors, surface normals, displacement vectors, refractions and diffractions, cube maps, and so on. All these computations and many more are vector-based, so it is easy to see why a GPU has great advantages over a CPU for these tasks.
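
For instance, the core of a diffuse lighting computation is just a couple of these built-ins (the variable names here are illustrative, not from the book):

vec3 n = normalize(surfaceNormal);             // geometric built-in
vec3 l = normalize(lightPosition - position);  // vector subtraction plus normalize
float diffuse = max(dot(n, l), 0.0);           // a single hardware-native dot product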

The following are the reasons why GPUs are faster than CPUs for vectorial calculations and graphics computations:

  • Many shaders can be executed at the same time

  • Inside a shader, a single vectorial instruction performs work that would take many scalar instructions

The shader environment


Other applications that you might have coded in the past were built to run on a CPU. This means that you used a compiler that took your program (written in your favorite high-level programming language) and compiled it down into a representation that the CPU could understand. It does not matter whether the programming language is compiled or interpreted; in the end, all programs are translated into something the CPU can deal with.

Shaders are a little different because they are meant only for graphics, so they depend closely on the following two things:

  • First, they need a graphics card, because inside the graphics card lies the processor that will run them. This special kind of processor is called the GPU (Graphics Processing Unit).

  • Second, they need a piece of software to reach the GPU: the GPU driver.

Tip

If you are going to program shaders, the first thing you have to do is prepare your development environment, and that starts by downloading your graphics card's driver and always keeping it updated.

Now suppose you are ready to start and have finished your first shader. You must compile it and hand it to the GPU for execution. As GLSL relies on OpenGL, you must use OpenGL to compile and execute the shader: OpenGL has specific API calls for shader compilation, linking, execution, and debugging. Your OpenGL application then acts as a host application, from which you manage your shaders and the resources they might need, for instance: textures, vertices, normals, framebuffers, or rendering states.

Summary


In this chapter, we learnt that there exist other worlds beyond the CPU: GPUs and parallel computation. We also learnt what the internals of a graphics rendering pipeline look like, which parts it is composed of, and got a brief understanding of their functions.

In the next chapter, we will get into the details of the language that controls the pipeline: a bit of grammar, and a bit of syntax.


Key benefits

  • Learn about shaders in a step-by-step, interactive manner
  • Create stunning visual effects using vertex and fragment shaders
  • Simplify your CPU code and improve your overall performance with instanced drawing through the use of geometry shaders

Description

Shader programming has been the largest revolution in graphics programming. The OpenGL Shading Language (abbreviated GLSL or GLslang) is a high-level shading language based on the syntax of the C programming language. With GLSL you can execute code on your GPU (aka graphics card), and more sophisticated effects can be achieved with this technique. Therefore, knowing how OpenGL works, how each shader type interacts with the others, and how they are integrated into the system is imperative for graphics programmers. This knowledge is crucial in order to be familiar with the mechanisms for rendering 3D objects.

GLSL Essentials is the only book on the market that teaches you about shaders from the very beginning. It shows you how graphics programming has evolved, so that you understand why you need each stage in the Graphics Rendering Pipeline, and how to manage it in a simple but concise way. The book explains how shaders work in a step-by-step manner, with an explanation of how they interact with the application assets at each stage. It will take you through the graphics pipeline and describe each section in an interactive and clear way. You will learn how the OpenGL state machine works, and all its relevant stages. Vertex shaders, fragment shaders, and geometry shaders will be covered, as well as some use cases and an introduction to the math needed for lighting algorithms and transforms. Generic GPU programming (GPGPU) will also be covered. After reading GLSL Essentials you will be ready to generate any rendering effect you need.

What you will learn

  • Use vertex shaders to dynamically displace or deform a mesh on the fly
  • Colorize your pixels, unleashing the power of fragment shaders
  • Learn the basics of the Phong illumination model to add emphasis to your scenes
  • Combine textures to make your scenes more realistic
  • Save CPU and GPU cycles by performing instanced drawing
  • Save bandwidth by generating geometry on the fly
  • Learn about generic GPU programming (GPGPU) concepts
  • Convert algorithms from CPU to GPU to increase performance



Table of Contents

GLSL Essentials
Credits
About the Author
About the Reviewers
www.PacktPub.com
Preface
1. The Graphics Rendering Pipeline
2. GLSL Basics
3. Vertex Shaders
4. Fragment Shaders
5. Geometry Shaders
6. Compute Shaders
Index

