## 3D Graphics with XNA Game Studio 4.0


# Implementing a point light with HLSL

A **point light** is a light that shines equally in all directions from its position (like a light bulb) and falls off over a given distance.

In this case, a point light is modeled as a directional light that fades to darkness over a given distance. To achieve linear attenuation, we would divide the distance between the light and the object by the attenuation distance, invert the result (subtract it from 1), and multiply the Lambertian lighting by that value. An object directly next to the light source would then be fully lit, while an object at the maximum attenuation distance would be completely unlit.

However, in practice, we will raise the result of the division to a given power before inverting it to achieve a more exponential falloff:

*Katt = 1 – (d / a)^f*

In the previous equation, *Katt* is the brightness scalar that we will multiply the lighting amount by, *d* is the distance between the vertex and light source, *a* is the distance at which the light should stop affecting objects, and *f* is the falloff exponent that determines the shape of the curve. We can implement this easily with HLSL and a new *Material* class. The new *Material* class is similar to the material for a directional light, but specifies a light position rather than a light direction. For the sake of simplicity, the effect we will use will not calculate specular highlights, so the material does not include a "specularity" value. It also includes new values, *LightAttenuation* and *LightFalloff*, which specify the distance at which the light is no longer visible and what power to raise the division to.
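Before writing the shader, the falloff curve is easy to sanity-check on its own. The following Python sketch (the function name is illustrative, not part of the XNA project) evaluates *Katt* for a few distances, showing how *f* = 2 keeps objects brighter near the light than a linear falloff does:

```python
def point_light_attenuation(d, a, f):
    """Brightness scalar Katt = 1 - (d / a)^f, with d / a clamped to [0, 1]."""
    ratio = min(max(d / a, 0.0), 1.0)
    return 1.0 - ratio ** f

# Next to the light: fully lit; at the attenuation distance: unlit
print(point_light_attenuation(0, 5000, 2))     # 1.0
print(point_light_attenuation(5000, 5000, 2))  # 0.0

# Halfway out: the linear curve (f = 1) is dimmer than f = 2
print(point_light_attenuation(2500, 5000, 1))  # 0.5
print(point_light_attenuation(2500, 5000, 2))  # 0.75
```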

```csharp
public class PointLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public float LightAttenuation { get; set; }
    public float LightFalloff { get; set; }

    public PointLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 0, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        LightAttenuation = 5000;
        LightFalloff = 2;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);

        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);

        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);

        if (effect.Parameters["LightAttenuation"] != null)
            effect.Parameters["LightAttenuation"].SetValue(LightAttenuation);

        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}
```

The new effect has parameters to reflect those values:

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);

float3 LightPosition = float3(0, 0, 0);
float3 LightColor = float3(1, 1, 1);
float LightAttenuation = 5000;
float LightFalloff = 2;

texture BasicTexture;

sampler BasicTextureSampler = sampler_state {
    texture = <BasicTexture>;
};

bool TextureEnabled = true;
```

The vertex shader output struct now includes a copy of the vertex's world position that will be used to calculate the light falloff (attenuation) and light direction.

```hlsl
struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float4 WorldPosition : TEXCOORD2;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.WorldPosition = worldPosition;
    output.UV = input.UV;

    // Cast to float3x3 so the normal is rotated but not translated
    output.Normal = mul(input.Normal, (float3x3)World);

    return output;
}
```

Finally, the pixel shader calculates the light much the same way that the directional light did, but uses a per-pixel light direction rather than a global light direction. It also determines how far along the attenuation distance the pixel's position lies and darkens it accordingly. The texture, ambient light, and diffuse color are calculated as usual:

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition.xyz);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    float d = distance(LightPosition, input.WorldPosition.xyz);
    float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}
```
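The per-pixel light direction and Lambertian term above can be checked outside HLSL. This minimal Python sketch (function names are illustrative, not part of the project) recomputes the direction for each surface position, which is what distinguishes a point light from a directional one:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def lambert(light_pos, world_pos, normal):
    """Per-pixel diffuse term: the light direction depends on the position."""
    light_dir = normalize([l - p for l, p in zip(light_pos, world_pos)])
    return max(0.0, sum(n * d for n, d in zip(normalize(normal), light_dir)))

# A surface point directly below the light, facing straight up, is fully lit
print(lambert([0.0, 100.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 1.0

# The same point facing sideways receives no diffuse light
print(lambert([0.0, 100.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # 0.0
```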

We can now achieve the result shown above using the following scene setup in the *Game1* class:

```csharp
models.Add(new CModel(Content.Load<Model>("teapot"),
    new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60),
    GraphicsDevice));

models.Add(new CModel(Content.Load<Model>("ground"),
    Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

Effect simpleEffect = Content.Load<Effect>("PointLightEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

PointLightMaterial mat = new PointLightMaterial();
mat.LightPosition = new Vector3(0, 1500, 1500);
mat.LightAttenuation = 3000;

models[0].Material = mat;
models[1].Material = mat;

camera = new FreeCamera(new Vector3(0, 300, 1600),
    MathHelper.ToRadians(0), // No yaw
    MathHelper.ToRadians(5), // Pitched up 5 degrees
    GraphicsDevice);
```

# Implementing a spot light with HLSL

A **spot light** is similar in theory to a point light in that it fades out after a given distance. The fading, however, is based not on the distance from the light source, but on the angle between the vector from the light to the object and the light's own direction. If that angle is larger than the light's **cone angle**, the vertex will not be lit.

*Katt = (dot(p – lp, ld) / cos(a))^f*

In the previous equation, *Katt* is still the scalar that we will multiply our diffuse lighting by, *p* is the position of the vertex, *lp* is the position of the light, *ld* is the direction of the light, *a* is the cone angle, and *f* is the falloff exponent. Our new spot light material reflects these values:
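The cone test can be sketched in Python before writing the shader (names are illustrative). A pixel is lit only when the cosine of its angle to the light's axis exceeds the cosine of the half-angle, mirroring the `if (a < d)` branch we will use below:

```python
import math

def spot_attenuation(cos_angle_to_pixel, half_angle_rad, falloff):
    """Lit only inside the cone; attenuation follows (a / d)^f as in the text."""
    a = math.cos(half_angle_rad)    # cosine of the cone's half-angle
    d = cos_angle_to_pixel          # dot(-lightDir, LightDirection) in the shader
    if a < d:                       # inside the cone
        return 1.0 - min(max(a / d, 0.0), 1.0) ** falloff
    return 0.0                      # outside the cone: unlit

# Straight down the cone's axis: partially attenuated but lit
print(spot_attenuation(1.0, math.radians(15), 20))

# Well outside a 30-degree cone: no light at all
print(spot_attenuation(0.5, math.radians(15), 20))  # 0.0
```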

```csharp
public class SpotLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public Vector3 LightDirection { get; set; }
    public float ConeAngle { get; set; }
    public float LightFalloff { get; set; }

    public SpotLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 3000, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        ConeAngle = 30;
        LightDirection = new Vector3(0, -1, 0);
        LightFalloff = 20;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);

        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);

        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);

        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);

        if (effect.Parameters["ConeAngle"] != null)
            effect.Parameters["ConeAngle"].SetValue(
                MathHelper.ToRadians(ConeAngle / 2));

        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}
```

Now we can create a new effect that will render a spot light. We will start by copying the point light's effect and making the following changes to the second block of effect parameters:

```hlsl
float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);

float3 LightPosition = float3(0, 5000, 0);
float3 LightDirection = float3(0, -1, 0);
float ConeAngle = 90;
float3 LightColor = float3(1, 1, 1);
float LightFalloff = 20;
```

Finally, we can update the pixel shader to perform the lighting calculations:

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition.xyz);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    // (dot(p - lp, ld) / cos(a))^f
    float d = dot(-lightDir, normalize(LightDirection));
    float a = cos(ConeAngle);

    float att = 0;
    if (a < d)
        att = 1 - pow(clamp(a / d, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}
```

If we were to then set up the material as follows and use our new effect, we would see the following result:

```csharp
SpotLightMaterial mat = new SpotLightMaterial();
mat.LightDirection = new Vector3(0, -1, -1);
mat.LightPosition = new Vector3(0, 3000, 2700);
mat.LightFalloff = 200;
```

## Drawing multiple lights

Now that we can draw one light, the natural question to ask is how to draw more than one light. Well this, unfortunately, is not simple. There are a number of approaches—the easiest of which is to simply loop through a certain number of lights in the pixel shader and sum a total lighting value. Let's create a new shader based on the directional light effect that we created in the last chapter to do just that. We'll start by copying that effect, then modifying some of the effect parameters as follows. Notice that instead of a single light direction and color, we instead have an array of three of each, allowing us to draw up to three lights:

```hlsl
#define NUMLIGHTS 3

float3 DiffuseColor = float3(1, 1, 1);
float3 AmbientColor = float3(0.1, 0.1, 0.1);

float3 LightDirection[NUMLIGHTS];
float3 LightColor[NUMLIGHTS];

float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);
```

Second, we need to update the pixel shader to perform the lighting calculations once per light:

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV).rgb;

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 normal = normalize(input.Normal);
    float3 view = normalize(input.ViewDirection);

    // Perform lighting calculations per light
    for (int i = 0; i < NUMLIGHTS; i++)
    {
        float3 lightDir = normalize(LightDirection[i]);

        // Add Lambertian lighting
        lighting += saturate(dot(lightDir, normal)) * LightColor[i];

        float3 refl = reflect(lightDir, normal);

        // Add specular highlights
        lighting += pow(saturate(dot(refl, view)), SpecularPower)
            * SpecularColor;
    }

    // Calculate final color
    float3 output = saturate(lighting) * color;
    return float4(output, 1);
}
```
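As a sanity check on the loop's math, here is a Python sketch of the same per-light accumulation. The `shade` helper is hypothetical (it mirrors HLSL's `reflect` and `saturate` and assumes a white specular color); it shows how each light's Lambert and specular terms sum into one lighting value before being clamped:

```python
def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    return min(max(x, 0.0), 1.0)

def reflect(i, n):
    """HLSL reflect(): i - 2 * n * dot(i, n)."""
    d = dot3(i, n)
    return [i[k] - 2.0 * n[k] * d for k in range(3)]

def shade(normal, view, lights, ambient, spec_power):
    """Sum Lambert + specular over all lights, like the shader loop above."""
    lighting = list(ambient)
    for light_dir, light_color in lights:
        lam = saturate(dot3(light_dir, normal))
        spec = saturate(dot3(reflect(light_dir, normal), view)) ** spec_power
        for k in range(3):
            # Specular color assumed white (1, 1, 1) for brevity
            lighting[k] += lam * light_color[k] + spec
    return [saturate(c) for c in lighting]

# One light shining straight down onto an upward-facing surface:
# ambient 0.1 + full Lambert * 0.5 = 0.6 per channel, no specular
up = [0.0, 1.0, 0.0]
print(shade(up, up, [([0.0, 1.0, 0.0], [0.5, 0.5, 0.5])], [0.1] * 3, 32))
```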

We now need a new *Material* class to work with this shader:

```csharp
public class MultiLightingMaterial : Material
{
    public Vector3 AmbientColor { get; set; }
    public Vector3[] LightDirection { get; set; }
    public Vector3[] LightColor { get; set; }
    public Vector3 SpecularColor { get; set; }

    public MultiLightingMaterial()
    {
        AmbientColor = new Vector3(.1f, .1f, .1f);
        LightDirection = new Vector3[3];
        LightColor = new Vector3[] { Vector3.One, Vector3.One, Vector3.One };
        SpecularColor = new Vector3(1, 1, 1);
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientColor"] != null)
            effect.Parameters["AmbientColor"].SetValue(AmbientColor);

        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);

        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);

        if (effect.Parameters["SpecularColor"] != null)
            effect.Parameters["SpecularColor"].SetValue(SpecularColor);
    }
}
```

If we wanted to implement the three directional light systems found in the *BasicEffect* class, we would now just need to copy the light direction values over to our shader:

```csharp
Effect simpleEffect = Content.Load<Effect>("MultiLightingEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

MultiLightingMaterial mat = new MultiLightingMaterial();

BasicEffect effect = new BasicEffect(GraphicsDevice);
effect.EnableDefaultLighting();

mat.LightDirection[0] = -effect.DirectionalLight0.Direction;
mat.LightDirection[1] = -effect.DirectionalLight1.Direction;
mat.LightDirection[2] = -effect.DirectionalLight2.Direction;

mat.LightColor = new Vector3[] {
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f) };

models[0].Material = mat;
models[1].Material = mat;
```


## Prelighting

This method works, but it limits us to just three lights, which is far too few for any real game. We could add more lights to the shader, but we would still be constrained by the limited number of shader instructions available. Our next option would be to draw the scene repeatedly, once per light source, and blend the results together. However, this would force us to draw every object in the scene once for every light in the scene, an extremely inefficient approach.

Instead, we will use an approach called **prelighting**. In this approach, we store the information needed to calculate lighting for each pixel in textures that another shader can later load to perform the lighting calculations itself. This has two benefits: first, we draw each object only once. Second, we approximate our lights with spheres, so the pixel shader runs only on the pixels a light could affect. Therefore, if a light is small enough or distant enough, we don't need to perform its lighting calculations for every pixel on the screen.

The prelighting process is as follows:

- Render the scene into two textures, storing the distance of each vertex from the camera and the normal at each vertex.
- Render the scene into another texture, rendering each (point) light as a sphere, performing lighting calculations in the pixel shader using the values stored in the corresponding pixels of the last step's textures.
- Render the final scene, multiplying diffuse colors, textures, and so on in the pixel shader by the lighting values stored in the corresponding pixels of the last step's texture.

We will implement prelighting in the next sections. This is a bit of a process, but in the end, you'll be able to draw a large number of lights and models in your scene—well worth the effort as this is a common stumbling block for new game developers. As an example, the following scene was rendered with eight different-colored point lights:

## Storing depth and normal values

Recall that we need two pieces of information to calculate simple Lambertian lighting: the position of each vertex and its normal. In a prelighting approach, we store these two pieces of information in two textures using "multiple render targets". To use multiple render targets, we output multiple values from the pixel shader using the *COLOR* semantics (*COLOR0*, *COLOR1*, and so on). The output from the effect will then be stored in the "render targets" (similar to textures) of our choosing. We will see shortly how this is set up from the XNA side.

We store the distance between each vertex and the camera in one texture (the "depth" texture) and each vertex's normal in the second texture. Note that the depth is divided by the far plane distance before being stored, to keep it in the 0 to 1 range.

Similarly, the normals are scaled from the -1 to 1 range to the 0 to 1 range.
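Both encodings are simple to verify on their own. This Python sketch (names are illustrative) mirrors what the pixel shader below stores: depth as a fraction of the far plane, and normal components shifted from the signed range into the unsigned range a color texture can hold:

```python
def encode_depth(z, w):
    """Store distance-from-camera / far-plane so it fits in [0, 1]."""
    return z / w

def encode_normal(n):
    """Shift each component from [-1, 1] into [0, 1] for the color target."""
    return [c / 2.0 + 0.5 for c in n]

def decode_normal(stored):
    """Reverse the shift when the light-map pass reads the texture back."""
    return [(c - 0.5) * 2.0 for c in stored]

n = [0.0, 1.0, 0.0]
print(encode_normal(n))                       # [0.5, 1.0, 0.5]
print(decode_normal(encode_normal(n)) == n)   # True: round-trips exactly
print(encode_depth(500.0, 2000.0))            # 0.25
```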

The effect that stores the depth and normal values is as follows. Create a new effect in your content project called *PPDepthNormal.fx* and add the following code:

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 Depth : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4x4 viewProjection = mul(View, Projection);
    float4x4 worldViewProjection = mul(World, viewProjection);

    output.Position = mul(input.Position, worldViewProjection);
    output.Normal = mul(input.Normal, (float3x3)World);

    // Position's z and w components correspond to the distance
    // from camera and distance of the far plane respectively
    output.Depth.xy = output.Position.zw;

    return output;
}

// We render to two targets simultaneously, so we can't
// simply return a float4 from the pixel shader
struct PixelShaderOutput
{
    float4 Normal : COLOR0;
    float4 Depth : COLOR1;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;

    // Depth is stored as distance from camera / far plane distance
    // to get value between 0 and 1
    output.Depth = input.Depth.x / input.Depth.y;

    // Normal map simply stores X, Y and Z components of normal
    // shifted from (-1 to 1) range to (0 to 1) range
    output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;

    // Other components must be initialized to compile
    output.Depth.a = 1;
    output.Normal.a = 1;

    return output;
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
```

## Creating the light map

Once we have our normal and depth values recorded, we can generate the light map. We'll be creating a class in a moment to tie all of the steps together, but first, let's look at the effect that generates light maps. Because the depth and normal values are stored in textures rather than passed from a vertex shader, we need a way to map 3D positions to pixel coordinates in those two textures. For the sake of convenience, we will place the functions that do so in a shared file that will be included in a few of the remaining effects. You'll need to create a new effect file and rename it to *PPShared.vsi*.

```hlsl
float viewportWidth;
float viewportHeight;

// Calculate the 2D screen position of a 3D position
float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
}

// Calculate the size of one half of a pixel, to convert
// between texels and pixels
float2 halfPixel()
{
    return 0.5f / float2(viewportWidth, viewportHeight);
}
```
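The same mapping can be checked in plain Python. The sketch below mirrors *postProjToScreen* and *halfPixel* (the function names mirror the HLSL; everything else is illustrative): after the perspective divide, NDC x and y run from -1 to 1, and the y axis is flipped because texture coordinates grow downward:

```python
def post_proj_to_screen(position):
    """Map a post-projection (clip-space) position to [0, 1] texture coords."""
    x, y, _, w = position
    sx, sy = x / w, y / w                         # perspective divide -> NDC
    return [0.5 * (sx + 1.0), 0.5 * (-sy + 1.0)]  # flip y: screen grows down

def half_pixel(viewport_w, viewport_h):
    """Half-texel offset used to align texels with pixels under Direct3D 9."""
    return [0.5 / viewport_w, 0.5 / viewport_h]

print(post_proj_to_screen([0.0, 0.0, 0.0, 1.0]))   # [0.5, 0.5]: screen centre
print(post_proj_to_screen([-1.0, 1.0, 0.0, 1.0]))  # [0.0, 0.0]: top-left
print(half_pixel(800, 600))
```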

Now we can create the effect that uses these values to perform the lighting calculations. The effect parameters are fairly self-explanatory—we include texture parameters for the depth and normal textures, world, view, and projection matrices (remember that we are drawing the light as a spherical model), and point light parameters. The vertex shader simply transforms from object space to screen space:

```hlsl
float4x4 WorldViewProjection;
float4x4 InvViewProjection;

texture2D DepthTexture;
texture2D NormalTexture;

sampler2D depthSampler = sampler_state
{
    texture = <DepthTexture>;
    minfilter = point;
    magfilter = point;
    mipfilter = point;
};

sampler2D normalSampler = sampler_state
{
    texture = <NormalTexture>;
    minfilter = point;
    magfilter = point;
    mipfilter = point;
};

float3 LightColor;
float3 LightPosition;
float LightAttenuation;

// Include shared functions
#include "PPShared.vsi"

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 LightPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    output.Position = mul(input.Position, WorldViewProjection);
    output.LightPosition = output.Position;

    return output;
}
```

The pixel shader is where the magic happens—we sample the depth and normal values from the textures that we rendered earlier and use the depth values to reconstruct our original world space position. We then use that position and its normal to perform the lighting calculations that we saw earlier:

```hlsl
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Find the pixel coordinates of the input position in the depth
    // and normal textures
    float2 texCoord = postProjToScreen(input.LightPosition) + halfPixel();

    // Extract the depth for this pixel from the depth map
    float4 depth = tex2D(depthSampler, texCoord);

    // Recreate the position with the UV coordinates and depth value
    float4 position;
    position.x = texCoord.x * 2 - 1;
    position.y = (1 - texCoord.y) * 2 - 1;
    position.z = depth.r;
    position.w = 1.0f;

    // Transform position from screen space to world space
    position = mul(position, InvViewProjection);
    position.xyz /= position.w;

    // Extract the normal from the normal map and move from
    // 0 to 1 range to -1 to 1 range
    float4 normal = (tex2D(normalSampler, texCoord) - .5) * 2;

    // Perform the lighting calculations for a point light
    float3 lightDirection = normalize(LightPosition - position.xyz);
    float lighting = clamp(dot(normal.xyz, lightDirection), 0, 1);

    // Attenuate the light to simulate a point light
    float d = distance(LightPosition, position.xyz);
    float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), 6);

    return float4(LightColor * lighting * att, 1);
}
```

## Drawing models with the light map

After we have created the light map, we can sample the values it stores when drawing our models in the final pass, instead of evaluating the lighting equations again. We will again use the functions in our shared file to sample from the light map. The rest of the effect is similar to those we have already seen: transforming to screen space in the vertex shader and performing texture lookups in the pixel shader. At the end of the pixel shader, we multiply the lighting value sampled from the light map by the diffuse color to get the final color:

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

texture2D BasicTexture;

sampler2D basicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
    addressU = wrap;
    addressV = wrap;
    minfilter = anisotropic;
    magfilter = anisotropic;
    mipfilter = linear;
};

bool TextureEnabled = true;

texture2D LightTexture;

sampler2D lightSampler = sampler_state
{
    texture = <LightTexture>;
    minfilter = point;
    magfilter = point;
    mipfilter = point;
};

float3 AmbientColor = float3(0.15, 0.15, 0.15);
float3 DiffuseColor;

#include "PPShared.vsi"

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float4 PositionCopy : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4x4 worldViewProjection = mul(World, mul(View, Projection));

    output.Position = mul(input.Position, worldViewProjection);
    output.PositionCopy = output.Position;
    output.UV = input.UV;

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Sample model's texture
    float3 basicTexture = tex2D(basicTextureSampler, input.UV).rgb;

    if (!TextureEnabled)
        basicTexture = float3(1, 1, 1);

    // Extract lighting value from light map
    float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();
    float3 light = tex2D(lightSampler, texCoord).rgb;

    light += AmbientColor;

    return float4(basicTexture * DiffuseColor * light, 1);
}
```


## Creating the prelighting renderer

Let's now create a class that manages the effects we created and the rest of the prelighting process. This class, *PrelightingRenderer*, will be responsible for calculating the depth and normal maps, light map, and eventually preparing models to be drawn with the calculated lighting values. The following framework version loads all of the effects and the model that we will need to perform the prelighting process.

The *PrelightingRenderer* also handles the creation of the three "surfaces" or "render targets" that we will render the depth, normal, and light maps into. Render targets capture the output of the graphics card and store it in memory, much like a texture, so we can access that data later, for example when calculating the light map. We can also draw into multiple render targets at the same time using the various color semantics, as we saw earlier in *PPDepthNormal.fx*.

```csharp
public class PrelightingRenderer
{
    // Normal, depth, and light map render targets
    RenderTarget2D depthTarg;
    RenderTarget2D normalTarg;
    RenderTarget2D lightTarg;

    // Depth/normal effect and light mapping effect
    Effect depthNormalEffect;
    Effect lightingEffect;

    // Point light (sphere) mesh
    Model lightMesh;

    // List of models, lights, and the camera
    public List<CModel> Models { get; set; }
    public List<PPPointLight> Lights { get; set; }
    public Camera Camera { get; set; }

    GraphicsDevice graphicsDevice;
    int viewWidth = 0, viewHeight = 0;

    public PrelightingRenderer(GraphicsDevice GraphicsDevice,
        ContentManager Content)
    {
        viewWidth = GraphicsDevice.Viewport.Width;
        viewHeight = GraphicsDevice.Viewport.Height;

        // Create the three render targets
        depthTarg = new RenderTarget2D(GraphicsDevice, viewWidth,
            viewHeight, false, SurfaceFormat.Single, DepthFormat.Depth24);

        normalTarg = new RenderTarget2D(GraphicsDevice, viewWidth,
            viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);

        lightTarg = new RenderTarget2D(GraphicsDevice, viewWidth,
            viewHeight, false, SurfaceFormat.Color, DepthFormat.Depth24);

        // Load effects
        depthNormalEffect = Content.Load<Effect>("PPDepthNormal");
        lightingEffect = Content.Load<Effect>("PPLight");

        // Set effect parameters to light mapping effect
        lightingEffect.Parameters["viewportWidth"].SetValue(viewWidth);
        lightingEffect.Parameters["viewportHeight"].SetValue(viewHeight);

        // Load point light mesh and set light mapping effect to it
        lightMesh = Content.Load<Model>("PPLightMesh");
        lightMesh.Meshes[0].MeshParts[0].Effect = lightingEffect;

        this.graphicsDevice = GraphicsDevice;
    }

    public void Draw()
    {
        drawDepthNormalMap();
        drawLightMap();
        prepareMainPass();
    }

    void drawDepthNormalMap()
    {
    }

    void drawLightMap()
    {
    }

    void prepareMainPass()
    {
    }
}
```

Now we can start filling in the three empty functions in the framework of this class. The *drawDepthNormalMap()* function will be responsible for capturing the depth and normal map information from all of the models currently in view. We already wrote the effect that does this, so all we need to do is set our render target and draw the models with the *PPDepthNormal.fx* effect:

```csharp
void drawDepthNormalMap()
{
    // Set the render targets to 'slots' 1 and 2
    graphicsDevice.SetRenderTargets(normalTarg, depthTarg);

    // Clear the render target to 1 (infinite depth)
    graphicsDevice.Clear(Color.White);

    // Draw each model with the PPDepthNormal effect
    foreach (CModel model in Models)
    {
        model.CacheEffects();
        model.SetModelEffect(depthNormalEffect, false);
        model.Draw(Camera.View, Camera.Projection,
            ((FreeCamera)Camera).Position);
        model.RestoreEffects();
    }

    // Un-set the render targets
    graphicsDevice.SetRenderTargets(null);
}
```

The second function takes the depth and normal map data from the first and uses it to perform the lighting calculations for each point light in the scene, approximated as spheres:

```csharp
void drawLightMap()
{
    // Set the depth and normal map info to the effect
    lightingEffect.Parameters["DepthTexture"].SetValue(depthTarg);
    lightingEffect.Parameters["NormalTexture"].SetValue(normalTarg);

    // Calculate the view * projection matrix
    Matrix viewProjection = Camera.View * Camera.Projection;

    // Set the inverse of the view * projection matrix to the effect
    Matrix invViewProjection = Matrix.Invert(viewProjection);
    lightingEffect.Parameters["InvViewProjection"].SetValue(
        invViewProjection);

    // Set the render target to the graphics device
    graphicsDevice.SetRenderTarget(lightTarg);

    // Clear the render target to black (no light)
    graphicsDevice.Clear(Color.Black);

    // Set render states to additive (lights will add their influences)
    graphicsDevice.BlendState = BlendState.Additive;
    graphicsDevice.DepthStencilState = DepthStencilState.None;

    foreach (PPPointLight light in Lights)
    {
        // Set the light's parameters to the effect
        light.SetEffectParameters(lightingEffect);

        // Calculate the world * view * projection matrix and set it to
        // the effect
        Matrix wvp = (Matrix.CreateScale(light.Attenuation)
            * Matrix.CreateTranslation(light.Position)) * viewProjection;

        lightingEffect.Parameters["WorldViewProjection"].SetValue(wvp);

        // Determine the distance between the light and camera
        float dist = Vector3.Distance(((FreeCamera)Camera).Position,
            light.Position);

        // If the camera is inside the light-sphere, invert the cull mode
        // to draw the inside of the sphere instead of the outside
        if (dist < light.Attenuation)
            graphicsDevice.RasterizerState = RasterizerState.CullClockwise;

        // Draw the point-light-sphere
        lightMesh.Meshes[0].Draw();

        // Revert the cull mode
        graphicsDevice.RasterizerState =
            RasterizerState.CullCounterClockwise;
    }

    // Revert the blending and depth render states
    graphicsDevice.BlendState = BlendState.Opaque;
    graphicsDevice.DepthStencilState = DepthStencilState.Default;

    // Un-set the render target
    graphicsDevice.SetRenderTarget(null);
}
```
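The effect of clearing to black and then blending additively can be sketched in Python (illustrative helpers, not part of the renderer): each light's sphere pass adds its contribution to the light map, and the main pass later multiplies the accumulated light, plus ambient, by the diffuse color:

```python
def saturate(x):
    return min(max(x, 0.0), 1.0)

def accumulate_light_map(contributions):
    """Additive blending onto a target cleared to black (no light)."""
    total = [0.0, 0.0, 0.0]
    for color in contributions:
        total = [t + c for t, c in zip(total, color)]
    return total

def final_color(diffuse, light, ambient):
    """The main pass multiplies diffuse by (light map + ambient)."""
    return [saturate(d * (l + a)) for d, l, a in zip(diffuse, light, ambient)]

# Three overlapping point lights hitting the same pixel
lights = [[0.25, 0.0, 0.0], [0.0, 0.25, 0.0], [0.25, 0.25, 0.25]]
lm = accumulate_light_map(lights)
print(lm)  # [0.5, 0.5, 0.25]
print(final_color([1.0, 1.0, 1.0], lm, [0.15, 0.15, 0.15]))
```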

The last function, *prepareMainPass()*, attempts to set the light map and viewport width/height to the effect each model is currently using. The models can then sample from the light map to obtain lighting information, as our *PPLight.fx* function does:

```csharp
void prepareMainPass()
{
    foreach (CModel model in Models)
        foreach (ModelMesh mesh in model.Model.Meshes)
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                // Set the light map and viewport parameters
                // to each model's effect
                if (part.Effect.Parameters["LightTexture"] != null)
                    part.Effect.Parameters["LightTexture"].SetValue(lightTarg);

                if (part.Effect.Parameters["viewportWidth"] != null)
                    part.Effect.Parameters["viewportWidth"].SetValue(viewWidth);

                if (part.Effect.Parameters["viewportHeight"] != null)
                    part.Effect.Parameters["viewportHeight"].SetValue(viewHeight);
            }
}
```

## Using the prelighting renderer

With that, we've finished the prelighting renderer and can now implement it into our game. To begin with, we'll need an instance variable of the *renderer* in the *Game1* class:

```csharp
PrelightingRenderer renderer;
```

Next, we set the scene up as follows in the *LoadContent()* function, using our *PPLight.fx* effect and four point lights:

```csharp
models.Add(new CModel(Content.Load<Model>("teapot"),
    new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60),
    GraphicsDevice));

models.Add(new CModel(Content.Load<Model>("ground"),
    Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

Effect effect = Content.Load<Effect>("PPModel");

models[0].SetModelEffect(effect, true);
models[1].SetModelEffect(effect, true);

camera = new FreeCamera(new Vector3(0, 300, 1600),
    MathHelper.ToRadians(0), // No yaw
    MathHelper.ToRadians(5), // Pitched up 5 degrees
    GraphicsDevice);

renderer = new PrelightingRenderer(GraphicsDevice, Content);
renderer.Models = models;
renderer.Camera = camera;

renderer.Lights = new List<PPPointLight>()
{
    new PPPointLight(new Vector3(-1000, 1000, 0), Color.Red * .85f, 2000),
    new PPPointLight(new Vector3(1000, 1000, 0), Color.Blue * .85f, 2000),
    new PPPointLight(new Vector3(0, 1000, 1000), Color.Green * .85f, 2000),
    new PPPointLight(new Vector3(0, 1000, -1000), Color.White * .85f, 2000)
};
```

Finally, we need to call the *Draw()* function of the *renderer* before drawing our models for the final pass, making sure to clear the graphics card first:

```csharp
protected override void Draw(GameTime gameTime)
{
    renderer.Draw();

    GraphicsDevice.Clear(Color.Black);

    foreach (CModel model in models)
        if (camera.BoundingVolumeIsInView(model.BoundingSphere))
            model.Draw(camera.View, camera.Projection,
                ((FreeCamera)camera).Position);

    base.Draw(gameTime);
}
```

# Summary

Having completed this article, you've learned how to implement point lights and spot lights in HLSL, seen the limitations the programmable pipeline places on lighting, and explored two ways to draw multiple lights in your scenes relatively efficiently.
