Introduction to HLSL in 3D Graphics with XNA Game Studio 4.0

by Sean James | December 2010

Most of the special effects that we will be discussing rely in some way on shaders. Shaders are pieces of code that run on the graphics card, in the stages of what is called the programmable pipeline. They are written in HLSL, the High Level Shader Language. Because shaders are loaded and executed on the graphics card itself, they run very quickly and in parallel, working directly with the vertices, textures, and so on that have been loaded into the graphics card's memory.

XNA concerns itself with two types of shaders—the vertex shader and the pixel shader.

We will spend this article, by Sean James, author of 3D Graphics with XNA Game Studio 4.0, learning how to build effects and shaders, and then use that information to write some simple effects that will become the foundation of our work.


Getting started

The vertex shader and pixel shader are contained in the same code file, called an Effect. The vertex shader is responsible for transforming geometry from object space into screen space, usually using the world, view, and projection matrices. The pixel shader's job is to calculate the color of every pixel onscreen. It is given information about the geometry visible at whatever point onscreen it is being run for, and takes into account lighting, texturing, and so on.

For your convenience, I've provided the starting code for this article here.

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    List<CModel> models = new List<CModel>();
    Camera camera;

    MouseState lastMouseState;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";

        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 800;
    }

    // Called when the game should load its content
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        models.Add(new CModel(Content.Load<Model>("ship"),
            new Vector3(0, 400, 0), Vector3.Zero, new Vector3(1f),
            GraphicsDevice));

        models.Add(new CModel(Content.Load<Model>("ground"), Vector3.Zero,
            Vector3.Zero, Vector3.One, GraphicsDevice));

        camera = new FreeCamera(new Vector3(1000, 500, -2000),
            MathHelper.ToRadians(153), // Turned around 153 degrees
            MathHelper.ToRadians(5),   // Pitched up 5 degrees
            GraphicsDevice);

        lastMouseState = Mouse.GetState();
    }

    // Called when the game should update itself
    protected override void Update(GameTime gameTime)
    {
        updateCamera(gameTime);

        base.Update(gameTime);
    }

    void updateCamera(GameTime gameTime)
    {
        // Get the new keyboard and mouse state
        MouseState mouseState = Mouse.GetState();
        KeyboardState keyState = Keyboard.GetState();

        // Determine how much the camera should turn
        float deltaX = (float)lastMouseState.X - (float)mouseState.X;
        float deltaY = (float)lastMouseState.Y - (float)mouseState.Y;

        // Rotate the camera
        ((FreeCamera)camera).Rotate(deltaX * .005f, deltaY * .005f);

        Vector3 translation = Vector3.Zero;

        // Determine in which direction to move the camera
        if (keyState.IsKeyDown(Keys.W)) translation += Vector3.Forward;
        if (keyState.IsKeyDown(Keys.S)) translation += Vector3.Backward;
        if (keyState.IsKeyDown(Keys.A)) translation += Vector3.Left;
        if (keyState.IsKeyDown(Keys.D)) translation += Vector3.Right;

        // Move 4 units per millisecond, independent of frame rate
        translation *= 4 *
            (float)gameTime.ElapsedGameTime.TotalMilliseconds;

        // Move the camera
        ((FreeCamera)camera).Move(translation);

        // Update the camera
        camera.Update();

        // Update the mouse state
        lastMouseState = mouseState;
    }

    // Called when the game should draw itself
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        foreach (CModel model in models)
            if (camera.BoundingVolumeIsInView(model.BoundingSphere))
                model.Draw(camera.View, camera.Projection,
                    ((FreeCamera)camera).Position);

        base.Draw(gameTime);
    }
}

Assigning a shader to a model

In order to draw a model with XNA, it needs to have an instance of the Effect class assigned to it. Recall from the first chapter that each ModelMeshPart in a Model has its own Effect. This is because each ModelMeshPart may need to have a different appearance, as one ModelMeshPart may, for example, make up armor on a soldier while another may make up the head. If the two used the same effect (shader), then we could end up with a very shiny head or a very dull piece of armor. Instead, XNA provides us the option to give every ModelMeshPart a unique effect.

In order to draw our models with our own effects, we need to replace the BasicEffect of every ModelMeshPart with our own effect loaded from the content pipeline. For now, we won't worry about the fact that each ModelMeshPart can have its own effect; we'll just be assigning one effect to an entire model. Later, however, we will add more functionality to allow different effects on each part of a model.

However, before we start replacing the instances of BasicEffect assigned to our models, we need to extract some useful information from them, such as which texture and color to use for each ModelMeshPart. We will store this information in a new class that each ModelMeshPart will keep a reference to using its Tag property:

public class MeshTag
{
    public Vector3 Color;
    public Texture2D Texture;
    public float SpecularPower;
    public Effect CachedEffect = null;

    public MeshTag(Vector3 Color, Texture2D Texture,
        float SpecularPower)
    {
        this.Color = Color;
        this.Texture = Texture;
        this.SpecularPower = SpecularPower;
    }
}

This information will be extracted using a new function in the CModel class:

private void generateTags()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (part.Effect is BasicEffect)
            {
                BasicEffect effect = (BasicEffect)part.Effect;
                MeshTag tag = new MeshTag(effect.DiffuseColor,
                    effect.Texture, effect.SpecularPower);
                part.Tag = tag;
            }
}

This function will be called along with buildBoundingSphere() in the constructor:

...

buildBoundingSphere();
generateTags();

...

Notice that the MeshTag class has a CachedEffect variable that is not currently used. We will use this value as a location to store a reference to an effect that we want to be able to restore to the ModelMeshPart on demand. This is useful when we want to draw a model using a different effect temporarily without having to completely reload the model's effects afterwards. The functions that will allow us to do this are as shown:

// Store references to all of the model's current effects
public void CacheEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            ((MeshTag)part.Tag).CachedEffect = part.Effect;
}

// Restore the effects referenced by the model's cache
public void RestoreEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (((MeshTag)part.Tag).CachedEffect != null)
                part.Effect = ((MeshTag)part.Tag).CachedEffect;
}

We are now ready to start assigning effects to our models. We will look at this in more detail in a moment, but it is worth noting that every Effect has a dictionary of effect parameters. These are variables that the Effect takes into account when performing its calculations: the world, view, and projection matrices, or colors and textures, for example. We set a number of these parameters when assigning a new effect, so that each ModelMeshPart can hand its specific properties, such as its texture, over to the effect:

public void SetModelEffect(Effect effect, bool CopyEffect)
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            Effect toSet = effect;

            // Copy the effect if necessary
            if (CopyEffect)
                toSet = effect.Clone();

            MeshTag tag = ((MeshTag)part.Tag);

            // If this ModelMeshPart has a texture, set it to the effect
            if (tag.Texture != null)
            {
                setEffectParameter(toSet, "BasicTexture", tag.Texture);
                setEffectParameter(toSet, "TextureEnabled", true);
            }
            else
                setEffectParameter(toSet, "TextureEnabled", false);

            // Set our remaining parameters to the effect
            setEffectParameter(toSet, "DiffuseColor", tag.Color);
            setEffectParameter(toSet, "SpecularPower", tag.SpecularPower);

            part.Effect = toSet;
        }
}

// Sets the specified effect parameter to the given effect, if it
// has that parameter
void setEffectParameter(Effect effect, string paramName, object val)
{
    if (effect.Parameters[paramName] == null)
        return;

    if (val is Vector3)
        effect.Parameters[paramName].SetValue((Vector3)val);
    else if (val is bool)
        effect.Parameters[paramName].SetValue((bool)val);
    else if (val is Matrix)
        effect.Parameters[paramName].SetValue((Matrix)val);
    else if (val is Texture2D)
        effect.Parameters[paramName].SetValue((Texture2D)val);
}

The CopyEffect parameter of this function is very important. If we specify false, telling the CModel not to copy the effect per ModelMeshPart, then every mesh part shares the same Effect instance, and any parameter change made for one part is reflected everywhere else the effect is used. This is a problem if we want each ModelMeshPart to have a different texture, or if we want to use the same effect on multiple models. Instead, we can specify true to have the CModel clone the effect for each mesh part, so that each copy can hold its own effect parameters, as the sketch below illustrates.
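To make the difference concrete, here is a short sketch (the shipModel and groundModel variables are hypothetical stand-ins for entries in our models list):

// Both models are assigned the same .fx file.
Effect sharedEffect = Content.Load<Effect>("SimpleEffect");

// CopyEffect = false: every ModelMeshPart references the SAME Effect
// instance, so setting a texture for the ship would also change the ground.
shipModel.SetModelEffect(sharedEffect, false);
groundModel.SetModelEffect(sharedEffect, false);

// CopyEffect = true: each ModelMeshPart receives its own clone, so each
// part keeps its own BasicTexture, DiffuseColor, and so on.
shipModel.SetModelEffect(sharedEffect, true);
groundModel.SetModelEffect(sharedEffect, true);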

Finally, we need to update the Draw() function to handle Effects other than BasicEffect:

public void Draw(Matrix View, Matrix Projection, Vector3 CameraPosition)
{
    // Calculate the base transformation by combining
    // translation, rotation, and scaling
    Matrix baseWorld = Matrix.CreateScale(Scale)
        * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z)
        * Matrix.CreateTranslation(Position);

    foreach (ModelMesh mesh in Model.Meshes)
    {
        Matrix localWorld = modelTransforms[mesh.ParentBone.Index]
            * baseWorld;

        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            Effect effect = meshPart.Effect;

            if (effect is BasicEffect)
            {
                ((BasicEffect)effect).World = localWorld;
                ((BasicEffect)effect).View = View;
                ((BasicEffect)effect).Projection = Projection;
                ((BasicEffect)effect).EnableDefaultLighting();
            }
            else
            {
                setEffectParameter(effect, "World", localWorld);
                setEffectParameter(effect, "View", View);
                setEffectParameter(effect, "Projection", Projection);
                setEffectParameter(effect, "CameraPosition", CameraPosition);
            }
        }

        mesh.Draw();
    }
}

Creating a simple effect

We will create our first effect now, and assign it to our models so that we can see the result. To begin, right-click on the content project, choose Add New Item, and select Effect File. Call it something like SimpleEffect.fx.

The code for the new file is as follows. Don't worry, we'll go through each piece in a moment:

float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

To assign this effect to the models in our scene, we need to first load it in the game's LoadContent() function, then use the SetModelEffect() function to assign the effect to each model. Add the following to the end of the LoadContent function:

Effect simpleEffect = Content.Load<Effect>("SimpleEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

If you were to run the game now, you would notice that the models appear both flat and gray. This is the correct behavior, as the effect doesn't yet have the code necessary to do anything else. After we break down each piece of the shader, we will add some more exciting behavior.

Let's begin at the top. The first three lines in this effect are its effect parameters. These three should be familiar to you: they are the world, view, and projection matrices (in HLSL, float4x4 is the equivalent of XNA's Matrix class). There are many types of effect parameters, and we will see more later.

float4x4 World;
float4x4 View;
float4x4 Projection;
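On the C# side, each of these is exposed by name through the effect's Parameters collection, which is exactly what our setEffectParameter helper from earlier relies on. For example:

// Load the effect and set its World parameter to the identity matrix
Effect simpleEffect = Content.Load<Effect>("SimpleEffect");
simpleEffect.Parameters["World"].SetValue(Matrix.Identity);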

The next few lines are where we define the structures used in the shaders. In this case, the two structs are VertexShaderInput and VertexShaderOutput. As you might guess, these two structs are used to send input into the vertex shader and retrieve the output from it. The data in the VertexShaderOutput struct is then interpolated between vertices and sent to the pixel shader. This way, when we access the Position value in the pixel shader for a pixel that sits between two vertices, we will get the actual position of that location instead of the position of one of the two vertices. In this case, the input and output are very simple: just the position of the vertex before and after it has been transformed using the world, view, and projection matrices:

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

You may note that the members of these structs are a little different from the properties of a class in C#—in that they must also include what are called semantics. Microsoft's definition for shader semantics is as follows (http://msdn.microsoft.com/en-us/library/bb509647%28VS.85%29.aspx):

A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter.

Basically, we need to specify what we intend to do with each member of our structs so that the graphics card can correctly map the vertex shader's outputs to the pixel shader's inputs. For example, in the previous code, we use the POSITION0 semantic to tell the graphics card that this value holds the position at which to draw the vertex.

The next few lines are the vertex shader itself. Basically, we are just multiplying the input (object space or untransformed) vertex position by the world, view, and projection matrices (the mul function is part of HLSL and is used to multiply matrices and vertices) and returning that value in a new instance of the VertexShaderOutput struct:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);

    return output;
}

The next bit of code makes up the pixel shader. It accepts a VertexShaderOutput struct as its input (passed from the vertex shader), and returns a float4, the equivalent of XNA's Vector4 class: basically a set of four floating point (decimal) numbers. We use the COLOR0 semantic on our return value to let the pipeline know that this function returns the final pixel color. In this case, the four numbers represent the red, green, blue, and transparency values, respectively, of the pixel we are shading. In this extremely simple pixel shader, we just return the color gray (.5, .5, .5), so any pixel covered by the model we are drawing will be gray.

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

The last part of the shader is the technique definition. Here, we tell the graphics card which vertex and pixel shader versions to compile against (every graphics card supports a different set; XNA 4.0 requires at least shader model 2.0, so we use vertex shader 2.0 and pixel shader 2.0), and which functions in our code make up the vertex and pixel shaders:

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Texture mapping

Let's now improve our shader by allowing it to render the texture each ModelMeshPart has assigned. As you may recall, the SetModelEffect function in the CModel class attempts to set the texture of each ModelMeshPart to its respective effect. However, it attempts to do so only if it finds the BasicTexture parameter on the effect. Let's add this parameter to our effect now (under the world, view, and projection parameters):

texture BasicTexture;

We need one more parameter in order to draw textures on our models: a sampler. A sampler is used by HLSL to retrieve the color of a texture at a given position. We will use it in our pixel shader to retrieve the texel corresponding to the point on the model we are shading:

sampler BasicTextureSampler = sampler_state {
    texture = <BasicTexture>;
};

A third effect parameter will allow us to turn texturing on and off:

bool TextureEnabled = false;

Every model that has a texture should also have what are called texture coordinates. The texture coordinates are basically two-dimensional coordinates called UV coordinates that range from (0, 0) to (1, 1) and that are assigned to every vertex in the model. These coordinates correspond to the point on the texture that should be drawn onto that vertex. A UV coordinate of (0, 0) corresponds to the top-left of the texture and (1, 1) corresponds to the bottom-right. The texture coordinates allow us to wrap two-dimensional textures onto the three-dimensional surfaces of our models. We need to include the texture coordinates in the input and output of the vertex shader, and add the code to pass the UV coordinates through the vertex shader to the pixel shader:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);
    output.UV = input.UV;

    return output;
}

Finally, we can use the texture sampler, the UV coordinates, and HLSL's tex2D function to retrieve the texture color corresponding to the pixel we are drawing on the model:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 output = float3(1, 1, 1);

    if (TextureEnabled)
        output *= tex2D(BasicTextureSampler, input.UV).rgb;

    return float4(output, 1);
}

If you run the game now, you will see that the textures are properly drawn onto the models.

Texture sampling

The problem with texture sampling is that we are rarely able to simply copy each pixel from a texture directly onto the screen: our models bend and distort the texture due to their shape, and the transformations we apply to our models, such as rotation, distort it further. This means that we almost always have to calculate an approximate position in the texture to sample from and return that value, which is what HLSL's texture samplers do for us. There are a number of considerations to make when sampling.

How we sample from our textures can have a big impact on both our game's appearance and performance: more advanced sampling (or filtering) algorithms look better, but slow down the game. Mip mapping refers to the use of multiple sizes of the same texture. These multiple sizes are calculated before the game is run and stored with the texture, and the graphics card will swap them on the fly, using a smaller version of the texture for objects in the distance, and so on. Finally, the address mode that we use when sampling affects how the graphics card handles UV coordinates outside the (0, 1) range. For example, if the address mode is set to "clamp", the UV coordinates will be clamped to (0, 1). If the address mode is set to "wrap", the coordinates will wrap around, repeating the texture. This can be used to create a tiling effect on terrain, for example.

For now, because we are drawing so few models, we will use anisotropic filtering. We will also enable mip mapping and set the address mode to "wrap".

sampler BasicTextureSampler = sampler_state {
    texture = <BasicTexture>;
    MinFilter = Anisotropic; // Minification filter
    MagFilter = Anisotropic; // Magnification filter
    MipFilter = Linear;      // Mip mapping
    AddressU = Wrap;         // Address mode for U coordinates
    AddressV = Wrap;         // Address mode for V coordinates
};

This will give our models a nice, smooth appearance in the foreground and a uniform appearance in the background.
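As an aside, the same settings can be made from C# in XNA 4.0 through the device's SamplerStates collection; here is a minimal sketch using the built-in SamplerState.AnisotropicWrap preset (note that states set in an effect's sampler_state block, like ours above, take precedence when the effect is applied):

// In Draw(), before issuing geometry: sampler slot 0 uses anisotropic
// filtering with wrap addressing for both U and V.
GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicWrap;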


Diffuse colors

There is a problem with our render, however: some parts of the model are completely white. This is because this particular model does not have textures assigned to those pieces, and right now, if no texture is assigned, our effect simply defaults to white. However, this model also specifies what are called diffuse colors: basic color values assigned to each ModelMeshPart. In this case, drawing the diffuse colors will fix our problem. We are already loading the diffuse colors into the MeshTag class, so all we need to do is add a parameter for them to our effect:

float3 DiffuseColor = float3(1, 1, 1);

Now we can make a small change to our pixel shader to use the diffuse color values instead of white:

float3 output = DiffuseColor;

Ambient lighting

Our model is now textured correctly and is using the correct diffuse color values, but it still looks flat and uninteresting. With the groundwork out of the way, we can start recreating some of the lighting effects provided by the BasicEffect shader, starting with ambient lighting. Ambient lighting is an attempt at simulating all the light in the real world that bounces off of other objects, the ground, and so on. If you look at an object that doesn't have light shining directly onto it, you can still see it, because light has bounced off of nearby objects and lit it somewhat. As we can't possibly simulate all the bounced rays of light (technically we can, with a technique called ray tracing, but this is very slow), we instead simplify them into a constant color value. To add an ambient light value, we simply add another effect parameter:

float3 AmbientColor = float3(.1, .1, .1);

Now, we once again need only a small modification to the pixel shader:

float3 output = DiffuseColor * AmbientColor;

This will produce the following output (if you're following along with the code files, I've changed the model to a teapot at this point, as its shape demonstrates lighting better). The object should now look quite dark, as ambient light is mainly meant to fill in darker areas, as though light were being bounced onto the object from its surroundings.

Lambertian directional lighting

Our next lighting type, directional lighting, is meant to provide some definition to our objects. The formula used for this lighting type is called Lambertian lighting, and is very simple:

kdiff = max(l • n, 0)

This equation simply means that the lighting amount on a given face is the dot product of the light vector and the face's normal. The dot product is defined as:

X • Y = |X| |Y| cos θ

In the previous equation, θ is the angle between the two vectors, and |X| means the length of vector X. Because our vectors are normalized (their length is 1), we can disregard the length terms. The result is that the dot product is simply cos θ: 1 if the two vectors are parallel, and 0 if they are perpendicular. This is perfect for lighting, as the dot product of the normal and light direction will return 1 if the light is shining directly onto the surface (parallel to the normal vector), and 0 if it is perpendicular to the surface. In the Lambertian lighting equation, we clamp the light value to 0 to avoid negative light values.
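As a quick sanity check of the math, here is a small C# sketch using XNA's Vector3 (the vectors are illustrative):

// Both vectors normalized, so the dot product is cos(theta).
Vector3 normal = Vector3.Up;                              // (0, 1, 0)
Vector3 lightA = Vector3.Up;                              // parallel
Vector3 lightB = Vector3.Normalize(new Vector3(1, 1, 0)); // 45 degrees
Vector3 lightC = Vector3.Right;                           // perpendicular

float kA = Math.Max(Vector3.Dot(lightA, normal), 0); // 1.0: fully lit
float kB = Math.Max(Vector3.Dot(lightB, normal), 0); // ~0.707
float kC = Math.Max(Vector3.Dot(lightC, normal), 0); // 0.0: unlit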

To perform this lighting calculation, we need to retrieve the normals in the vertex shader so that they can be passed to the pixel shader where the calculation is done. We first need to update our vertex shader's input and output structs. (Note that the output uses the TEXCOORD1 semantic to store the normals. There are numerous texture coordinate channels, and they can be used to store data that is not strictly texture coordinates or of type float2.):

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

We next need to update the vertex shader to transform the normals passed in (which are in object space) by the world matrix, moving them into world space. The following line does just this, and should be inserted before the VertexShaderOutput is returned from the vertex shader. Note that the matrix is cast to float3x3 so that only its rotation and scale, not its translation, is applied to the normal (a direction has no position). The normal is then resized to length 1 with the normalize() function in the pixel shader, as it may have been scaled by the world matrix, which would make the lighting incorrect later on. As discussed earlier, keeping our vectors at length 1 keeps dot products simple.

output.Normal = mul(input.Normal, (float3x3)World);
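To see why the re-normalization matters, consider this small C# check using XNA's Vector3.TransformNormal (the scale factor is illustrative):

// A world matrix that scales by 2 stretches direction vectors too.
Matrix world = Matrix.CreateScale(2f) * Matrix.CreateRotationY(1f);

Vector3 normal = Vector3.Up;                             // length 1
Vector3 transformed = Vector3.TransformNormal(normal, world);

Console.WriteLine(transformed.Length());                 // 2, not 1
Console.WriteLine(Vector3.Normalize(transformed).Length()); // back to 1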

We also need to add a parameter at the beginning of the effect for the light direction. While we're at it, we will also add a parameter for the light color:

float3 LightDirection = float3(1, 1, 1);
float3 LightColor = float3(0.9, 0.9, 0.9);

Finally, we can update the pixel shader to perform the lighting calculation. Note that once again we use the normalize function, this time to ensure that the light direction provided by the user has a length of 1. The dot product of the vertex's normal and the light direction is multiplied by the light color and added to the total amount of light. The saturate() function clamps its argument to the 0 to 1 range, avoiding negative light amounts:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV).rgb;

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 lightDir = normalize(LightDirection);
    float3 normal = normalize(input.Normal);

    // Add Lambertian lighting
    lighting += saturate(dot(lightDir, normal)) * LightColor;

    // Calculate final color
    float3 output = saturate(lighting) * color;

    return float4(output, 1);
}

This produces the effect of a light falling diagonally across the model, and highlights its edges well.


Phong specular highlights

Our object is now looking very defined, but it is still missing one thing: specular highlights. "Specular highlight" is the formal term for the shininess you see when looking at the reflection of a light source on an object's surface. Think of looking at a light bulb through a mirror: you can clearly see the light source. Now fog the mirror. You can no longer see the light source itself, but you can still see the circular gradient where it would appear on a shinier surface. It should be no surprise, then, that the formula for calculating specular highlights (called the Phong shading model) is as follows:

kspec = max(r • v, 0)^n

Translating this back to the mirror: r is the ray of light reflected off the surface (the mirror image of the last equation's l), and v is the view direction. Given the behavior of the dot product in the last equation, it makes sense that the closer v (the direction your eye is facing) is to the reflected light vector, the brighter the specular highlight appears. If you were to move your eye far enough across the mirror, you would lose sight of the light entirely; at that point there would be no specular highlight, as light is no longer reflecting off the mirror into your eye.

The previous equation also includes an exponent, n. Unlike in the last equation, this is not the normal of the polygon, but rather the "shininess" of the object. As the result of the dot product is between 0 and 1, the higher the value of n, the closer to 0 the result of raising it to that power becomes, and thus the smaller and tighter the highlight will appear. In order to implement this, we will need three more shader parameters:

float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);

float3 CameraPosition;
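Before wiring these up, it helps to see how the SpecularPower exponent shapes the highlight. Suppose the reflected light and view vectors are fairly close together, say r • v = 0.9 (a quick C# check; the numbers are illustrative):

double rDotV = 0.9; // cosine of the angle between r and v

Console.WriteLine(Math.Pow(rDotV, 1));   // 0.9      : broad highlight
Console.WriteLine(Math.Pow(rDotV, 32));  // ~0.034   : tight highlight
Console.WriteLine(Math.Pow(rDotV, 128)); // ~0.0000014: pinpoint highlight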

The camera position is set automatically by the CModel class, so we can now update the vertex shader to calculate the vector from the camera to each vertex: the view direction. First, we need to be sure to pass this value out of the vertex shader:

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float3 ViewDirection : TEXCOORD2;
};

Now we can add a line to the vertex shader to calculate the view direction:

output.ViewDirection = worldPosition.xyz - CameraPosition;

Finally, we can update the pixel shader:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV).rgb;

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 lightDir = normalize(LightDirection);
    float3 normal = normalize(input.Normal);

    // Add Lambertian lighting
    lighting += saturate(dot(lightDir, normal)) * LightColor;

    float3 refl = reflect(lightDir, normal);
    float3 view = normalize(input.ViewDirection);

    // Add specular highlights
    lighting += pow(saturate(dot(refl, view)), SpecularPower) *
        SpecularColor;

    // Calculate final color
    float3 output = saturate(lighting) * color;

    return float4(output, 1);
}

For your convenience, the full shader, combining specular highlights, texturing, directional lighting, ambient lighting, and diffuse colors, is reproduced here:

float4x4 World;
float4x4 View;
float4x4 Projection;
float3 CameraPosition;

texture BasicTexture;

sampler BasicTextureSampler = sampler_state {
    texture = <BasicTexture>;
    MinFilter = Anisotropic; // Minification filter
    MagFilter = Anisotropic; // Magnification filter
    MipFilter = Linear;      // Mip mapping
    AddressU = Wrap;         // Address mode for U coordinates
    AddressV = Wrap;         // Address mode for V coordinates
};

bool TextureEnabled = false;

float3 DiffuseColor = float3(1, 1, 1);
float3 AmbientColor = float3(0.1, 0.1, 0.1);
float3 LightDirection = float3(1, 1, 1);
float3 LightColor = float3(0.9, 0.9, 0.9);
float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float3 ViewDirection : TEXCOORD2;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);

    output.Position = mul(worldPosition, viewProjection);
    output.UV = input.UV;
    output.Normal = mul(input.Normal, (float3x3)World);
    output.ViewDirection = worldPosition.xyz - CameraPosition;

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV).rgb;

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 lightDir = normalize(LightDirection);
    float3 normal = normalize(input.Normal);

    // Add Lambertian lighting
    lighting += saturate(dot(lightDir, normal)) * LightColor;

    float3 refl = reflect(lightDir, normal);
    float3 view = normalize(input.ViewDirection);

    // Add specular highlights
    lighting += pow(saturate(dot(refl, view)), SpecularPower) *
        SpecularColor;

    // Calculate final color
    float3 output = saturate(lighting) * color;

    return float4(output, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Creating a Material class to store effect parameters

By now, we have accumulated a large number of effect parameters in our lighting effect. It would be great to have an easy way to set and change them from our C# code, so we will add a class called Material that does exactly that. Each model will have its own material, storing surface and lighting properties such as the specular color, light direction, and so on. This class will then handle setting those properties on an instance of the Effect class. As we may have many different types of material, we will also set up a base class:

public class Material
{
    public virtual void SetEffectParameters(Effect effect)
    {
    }
}

public class LightingMaterial : Material
{
    public Vector3 AmbientColor { get; set; }
    public Vector3 LightDirection { get; set; }
    public Vector3 LightColor { get; set; }
    public Vector3 SpecularColor { get; set; }

    public LightingMaterial()
    {
        AmbientColor = new Vector3(.1f, .1f, .1f);
        LightDirection = new Vector3(1, 1, 1);
        LightColor = new Vector3(.9f, .9f, .9f);
        SpecularColor = new Vector3(1, 1, 1);
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientColor"] != null)
            effect.Parameters["AmbientColor"].SetValue(AmbientColor);

        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);

        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);

        if (effect.Parameters["SpecularColor"] != null)
            effect.Parameters["SpecularColor"].SetValue(SpecularColor);
    }
}

We will now update the CModel class to use the Material class. First, we need an instance of the Material class:

public Material Material { get; set; }

We initialize it to a plain Material in the constructor. As this is the simplest material, no changes will be made to the effect when drawing. This is a safe default, because the effect initially assigned to each mesh part is a BasicEffect, which does not expose our custom parameters.

this.Material = new Material();

Finally, after the world, view, and projection matrices have been set to the effect in the Draw() function, we will call SetEffectParameters() on our material:

Material.SetEffectParameters(effect);

We can finally update the Game1 class to assign the material to the models in the LoadContent() method, immediately after setting the effect (now loaded under the name LightingEffect) to the models:

Effect simpleEffect = Content.Load<Effect>("LightingEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

LightingMaterial mat = new LightingMaterial();

models[0].Material = mat;
models[1].Material = mat;

So if, for example, we now wanted to light our pot with blue light and red ambient light, we could set the following options on the material, and they would automatically be reflected in the shader when the model is drawn:

mat.AmbientColor = Color.Red.ToVector3() * .15f;
mat.LightColor = Color.Blue.ToVector3() * .85f;

Summary

Now that you've completed this article, you've learned the basics of shading—what the programmable pipeline is, what shaders and effects are, and how to write them in HLSL. You saw how to implement a number of lighting types with HLSL, as well as other effects such as diffuse colors and texturing. You upgraded the CModel class to support custom shaders and created a Material class to manage effect parameters.

If you'd like to know more about HLSL, Microsoft's official documentation is very extensive. It covers the built-in functions, the filtering and address modes, and so on, and includes several example shaders. It is available at http://msdn.microsoft.com/en-us/library/bb509638%28VS.85%29.aspx.




About the Author


Sean James

Sean James is a computer science student who has been programming for many years. He started with web design, learning HTML, PHP, JavaScript, and so on. Since then, he has created many websites, including his personal XNA and game development blog, http://www.innovativegames.net. In addition to web design, he has interests in desktop software development and development for mobile devices such as Android, Windows Mobile, and Zune. However, his passion is game development with DirectX, OpenGL, and XNA.

Sean James lives in Claremont, CA with his family and two dogs. He would like to thank his family and friends, who supported him throughout the writing of this book, and all the people at Packt Publishing who worked hard on the book and supported him. He would also like to thank the XNA community for providing such amazing resources, without which this book would not have been possible.
