OpenSceneGraph: Advanced Scene Graph Components

Rui Wang

December 2010


OpenSceneGraph 3.0: Beginner's Guide


Create high-performance virtual reality applications with OpenSceneGraph, one of the best 3D graphics engines.

  • Gain a comprehensive view of the structure and main functionalities of OpenSceneGraph
  • An ideal introduction for developing applications using OpenSceneGraph
  • Develop applications around the concepts of scene graphs and design patterns
  • Extend your own scene elements from the base interfaces of OpenSceneGraph
  • Packed with examples, this book explains each knowledge point in detail and makes you practice your knowledge for better understanding
        Read more about this book      

(For more resources on OpenSceneGraph, see here.)

Creating billboards in a scene

In the 3D world, a billboard is a 2D image that always faces a designated direction. Applications can use billboard techniques to create many kinds of special effects, such as explosions, flares, sky, clouds, and trees. In fact, viewed from a distance, any object can be treated as a billboard with itself cached as the texture. Thus, billboarding has become one of the most popular techniques, widely used in computer games and real-time visual simulation programs.

The osg::Billboard class is used to represent a list of billboard objects in a 3D scene. It is derived from osg::Geode, and can orient all of its children (osg::Drawable objects) to face the viewer's viewpoint. It has an important method, setMode(), that determines the rotation behavior, and must be passed one of the following enumerations as the argument:


  • POINT_ROT_EYE: Drawables are rotated about the viewer position, with the object coordinate Z axis constrained to the window coordinate Y axis.
  • POINT_ROT_WORLD: Drawables are rotated directly from their original orientation to the current eye direction in world space.
  • AXIAL_ROT: Drawables are rotated about an axis specified by setAxis().


Every drawable in the osg::Billboard node should have a pivot point position, which is specified via the overloaded addDrawable() method, for example:

billboard->addDrawable( child, osg::Vec3(1.0f, 0.0f, 0.0f) );

All drawables also need a unified initial front-face orientation, which is used for computing rotation values. The initial orientation is set by the setNormal() method, and each newly-added drawable must have its front face oriented in the same direction as this normal value; otherwise the billboard results may be incorrect.
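The rotation derived from the normal can be illustrated without OSG at all. The following standalone sketch (plain C++; all names are hypothetical, not OSG APIs) computes the yaw a billboard needs in the AXIAL_ROT case, rotating about the +Z axis with the default -Y front-face normal until the normal's XY projection points at the eye:

```cpp
#include <cassert>
#include <cmath>

// Toy 3-D vector; osg::Vec3 plays this role in real code.
struct Vec3d { double x, y, z; };

// Yaw (about +Z) that turns the default -Y normal toward the eye.
double axialRotYaw( const Vec3d& toEye ) // direction from billboard to eye
{
    const double kPi = 3.14159265358979323846;
    double eyeAngle    = std::atan2( toEye.y, toEye.x ); // eye direction in XY
    double normalAngle = -kPi / 2.0;                     // angle of the -Y normal
    return eyeAngle - normalAngle;                       // yaw to apply
}
```

In real code, osg::Billboard performs an equivalent computation per frame during the cull traversal, using the normal set by setNormal() and the axis from setAxis().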

Time for action – creating banners facing you

The prerequisite for implementing billboards in OSG is to create one or more quad geometries first. These quads are then managed by the osg::Billboard class, which forces all child drawables to automatically rotate around a specified axis, or to face the viewer. This is done by presetting a unified normal value and rotating each billboard according to the normal and the current rotation axis or viewing vector.

We will create two banks of OSG banners, arranged in a V formation, to demonstrate the use of billboards in OSG. No matter where the viewer is or how he manipulates the scene camera, the front faces of the banners face the viewer all the time. This feature can also be used to represent textured trees and particles in user applications.

  1. Include the necessary headers:
    #include <osg/Billboard>
    #include <osg/Texture2D>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>
  2. Create the quad geometry directly from the osg::createTexturedQuadGeometry() function. Every generated quad is of the same size and origin point, and uses the same image file. Note that the osg256.png file can be found in the data directory of your OSG installation path, but it requires the osgdb_png plugin for reading image data.

    osg::Geometry* createQuad()
    {
        osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
        osg::ref_ptr<osg::Image> image =
            osgDB::readImageFile( "Images/osg256.png" );
        texture->setImage( image.get() );

        osg::ref_ptr<osg::Geometry> quad =
            osg::createTexturedQuadGeometry(
                osg::Vec3(-0.5f, 0.0f,-0.5f),
                osg::Vec3(1.0f, 0.0f, 0.0f),
                osg::Vec3(0.0f, 0.0f, 1.0f) );

        osg::StateSet* ss = quad->getOrCreateStateSet();
        ss->setTextureAttributeAndModes( 0, texture.get() );
        return quad.release();
    }

  3. In the main entry, we first create the billboard node and set the mode to POINT_ROT_EYE. That is, the drawable will rotate to face the viewer and keep its Z axis upright in the rendering window. The default normal setting of the osg::BillBoard class is the negative Y axis, so rotating it to the viewing vector will show the quads on the XOZ plane in the best appearance:

    osg::ref_ptr<osg::Billboard> geode = new osg::Billboard;
    geode->setMode( osg::Billboard::POINT_ROT_EYE );

  4. Now let's create the banner quads and arrange them in a V formation:

    osg::Geometry* quad = createQuad();
    for ( unsigned int i=0; i<10; ++i )
    {
        float id = (float)i;
        geode->addDrawable( quad, osg::Vec3(-2.5f+0.2f*id, id, 0.0f) );
        geode->addDrawable( quad, osg::Vec3( 2.5f-0.2f*id, id, 0.0f) );
    }

  5. All quad textures' backgrounds are automatically cleared because of the alpha test, which is performed internally in the osgdb_png plugin. That means we have to set the correct rendering order of all the drawables to ensure that the entire process works properly:
    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );
  6. It's time for us to start the viewer, as there are no important steps left to create and render billboards:

    osgViewer::Viewer viewer;
    viewer.setSceneData( geode.get() );
    return viewer.run();

  7. Try navigating in the scene graph:


  8. You will find that the billboard's children always rotate to face the viewer, but the images' Y directions never change (they point along the window's Y axis all along). Replace the mode POINT_ROT_EYE with POINT_ROT_WORLD and see if there is any difference:


What just happened?

This example shows the basic usage of billboards in an OSG scene graph, but it can still be improved. All the banner geometries here are created with the createQuad() function, which means that the same quad and the same texture are reallocated at least 20 times! Object sharing would certainly be an optimization here. Unfortunately, adding the same drawable object to osg::Billboard with different positions is not a good idea, as it can cause the node to work improperly. What we can do instead is create multiple quad geometries that share the same texture object. This greatly reduces the video card's texture memory occupancy and the rendering load.
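The sharing idea can be sketched without OSG. The toy classes below (hypothetical names) mimic what osg::Referenced and osg::ref_ptr do for real: any number of quads can point at a single reference-counted texture, so the image data is allocated only once and destroyed when the last user releases it:

```cpp
#include <cassert>

// Toy intrusive reference counting in the spirit of osg::Referenced.
struct SharedTexture
{
    int refs = 0;
    void ref()   { ++refs; }
    void unref() { if ( --refs == 0 ) delete this; } // last user frees it
};

// Each quad takes a reference on construction of its texture pointer
// and gives it back on destruction, so sharing is safe.
struct TexturedQuad
{
    SharedTexture* tex = nullptr;
    void setTexture( SharedTexture* t ) { t->ref(); tex = t; }
    ~TexturedQuad() { if ( tex ) tex->unref(); }
};
```

In the banner example, twenty distinct quad geometries would each call setTexture() on the same texture object, exactly as osg::ref_ptr members share one osg::Texture2D.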

Another possible issue is that some users may require loaded nodes to be rendered as billboards, not only drawables. A node can consist of different kinds of child nodes, and is much richer than a basic shape or geometry mesh. For this purpose, OSG provides the osg::AutoTransform class, which automatically rotates an object's children to be aligned with screen coordinates.

Have a go hero – planting massive trees on the ground

Billboards are widely used for simulating massive numbers of trees and plants. One or more tree pictures with transparent backgrounds are applied to quads of different sizes, and then added to the billboard node. These trees will automatically face the viewer or, to be more realistic, rotate about an axis as if their branches and leaves are always at the front. Now let's try to create some simple billboard trees. All we need is a sufficiently nice image.

Creating texts

Text is one of the most important components in all kinds of virtual reality programs. It is used everywhere—for displaying stats on the screen, labeling 3D objects, logging, and debugging. Texts always have at least one font to specify the typeface and qualities, as well as other parameters, including size, alignment, layout (left-to-right or right-to-left), and resolution, to determine its display behaviors. OpenGL doesn't directly support the loading of fonts and displaying texts in 3D space, but OSG provides full support for rendering high quality texts and configuring different text attributes, which makes it much easier to develop related applications.


The osgText library actually implements all font and text functionalities. It requires the osgdb_freetype plugin to work properly. This plugin can load and parse TrueType fonts with the help of FreeType, a famous third-party dependency. After that, it returns an osgText::Font instance, which is made up of a complete set of texture glyphs. The entire process can be described with the osgText::readFontFile() function.

The osgText::TextBase class is the pure base class of all OSG text types. It is derived from osg::Drawable, but doesn't support display lists by default. Its subclass, osgText::Text, is used to manage flat characters in world coordinates. Important methods include setFont(), setPosition(), setCharacterSize(), and setText(), each of which is easy to understand and use, as shown in the following example.


Time for action – writing descriptions for the Cessna

This time we are going to display a Cessna in the 3D space and provide descriptive texts in front of the rendered scene. A heads-up display (HUD) camera can be used here, which is rendered after the main camera, and only clears the depth buffer for directly updating texts to the frame buffer. The HUD camera will then render its child nodes in a way that is always visible.

  1. Include the necessary headers:
    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgText/Font>
    #include <osgText/Text>
    #include <osgViewer/Viewer>
  2. The osgText::readFontFile() function is used for reading a suitable font file, for instance, an undistorted TrueType font. The OSG data paths (specified with the OSG_FILE_PATH environment variable) and the Windows system path will be searched to see if the specified file exists:

    osg::ref_ptr<osgText::Font> g_font =
        osgText::readFontFile( "fonts/arial.ttf" );

  3. Create a standard HUD camera and set a 2D orthographic projection matrix for the purpose of drawing 3D texts in two dimensions. The camera should not receive any user events, and should never be affected by any parent transformations. These are guaranteed by the setAllowEventFocus() and setReferenceFrame() methods:

    osg::Camera* createHUDCamera( double left, double right,
                                  double bottom, double top )
    {
        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
        camera->setClearMask( GL_DEPTH_BUFFER_BIT );
        camera->setRenderOrder( osg::Camera::POST_RENDER );
        camera->setAllowEventFocus( false );
        camera->setProjectionMatrix(
            osg::Matrix::ortho2D(left, right, bottom, top) );
        return camera.release();
    }

  4. The text is created by a separate global function, too. It defines a font object describing every character's glyph, as well as the size and position parameters in the world space, and the content of the text. In the HUD text implementation, texts should always align with the XOY plane:

    osgText::Text* createText( const osg::Vec3& pos,
                               const std::string& content,
                               float size )
    {
        osg::ref_ptr<osgText::Text> text = new osgText::Text;
        text->setFont( g_font.get() );
        text->setCharacterSize( size );
        text->setAxisAlignment( osgText::TextBase::XY_PLANE );
        text->setPosition( pos );
        text->setText( content );
        return text.release();
    }

  5. In the main entry, we create a new osg::Geode node and add multiple text objects to it. These introduce the leading features of a Cessna. Of course, you can add your own explanations about this type of monoplane by using additional osgText::Text drawables:

    osg::ref_ptr<osg::Geode> textGeode = new osg::Geode;
    textGeode->addDrawable( createText(
        osg::Vec3(150.0f, 500.0f, 0.0f),
        "The Cessna monoplane", 20.0f) );
    textGeode->addDrawable( createText(
        osg::Vec3(150.0f, 450.0f, 0.0f),
        "Six-seat, low-wing and twin-engined", 15.0f) );

  6. The node including all texts should be added to the HUD camera. To ensure that the texts won't be affected by OpenGL normals and lights (they are textured geometries, after all), we have to disable lighting for the camera node:

    osg::Camera* camera = createHUDCamera(0, 1024, 0, 768);
    camera->addChild( textGeode.get() );
    camera->getOrCreateStateSet()->setMode(
        GL_LIGHTING, osg::StateAttribute::OFF );

  7. The last step is to add the Cessna model and the camera to the scene graph, and start the viewer as usual:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( osgDB::readNodeFile("cessna.osg") );
    root->addChild( camera );

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

  8. In the rendering window, you will see two lines of text over the Cessna model. No matter how you translate, rotate, or scale on the view matrix, the HUD texts will never be covered. Thus, users can always read the most important information directly, without looking away from their usual perspectives:


What just happened?

To build the example code with CMake or other native compilers, you should add the osgText library as a dependency, and include the osgParticle, osgShadow, and osgFX libraries.

Here we specify the font from the arial.ttf file. This is a default font on most Windows and UNIX systems, and can also be found in the OSG data paths. As you can see, this kind of font offers developers highly-precise displayed characters, regardless of font size settings. This is because the outlines of TrueType fonts are made of mathematical line segments and Bezier curves, which means they are vector fonts. Bitmap (raster) fonts don't have such features and may sometimes look ugly when resized. Disable setFont() here to force osgText to use a default 12x12 bitmap font. Can you figure out the difference between these two fonts?

Have a go hero – using wide characters to support more languages

The setText() method of osgText::Text accepts std::string variables directly. Meanwhile, it also accepts wide characters as the input argument. For example:

wchar_t* wstr = …;
text->setText( wstr );

This makes it possible to support multiple languages, for instance, Chinese and Japanese characters. Now, try obtaining a sequence of wide characters, either by defining them directly or by converting from multi-byte characters, and apply them to the osgText::Text object to see if the language that you are interested in can be rendered. Please note that the font should also be changed to support the corresponding language.
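One way to perform the multi-byte-to-wide conversion is the C library's mbstowcs(). The helper below is a hypothetical sketch (not an osgText API); for non-ASCII input the process locale must be set appropriately first, for example with setlocale():

```cpp
#include <cassert>
#include <cstdlib>
#include <cwchar>
#include <string>

// Convert a multi-byte string to the wide form that setText() accepts.
// Hypothetical helper, not part of osgText.
std::wstring toWide( const std::string& mb )
{
    std::wstring ws( mb.size(), L'\0' );        // worst case: 1 wchar per byte
    std::size_t n = std::mbstowcs( &ws[0], mb.c_str(), mb.size() );
    if ( n == static_cast<std::size_t>(-1) )
        return std::wstring();                   // invalid multi-byte sequence
    ws.resize( n );                              // shrink to actual length
    return ws;
}
```

The result can then be passed on directly, as in text->setText( toWide(utf8Content) ).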



Creating 3D texts

Believe it or not, OSG also provides support for 3D texts in the scene graph. Each character will be extruded with a depth parameter and finally rendered with OpenGL's vertex array mechanism. The implementer class, osgText::Text3D, is also derived from osgText::TextBase and thus has nearly the same methods as osgText::Text. It requires an osgText::Font3D instance as the font parameter, which can be obtained by the osgText::readFont3DFile() function.

Time for action – creating texts in the world space

A simple 3D text object will be created in this example. Like the 2D text class osgText::Text, the osgText::Text3D class also inherits a list of methods to set basic text parameters, including position, size, alignment, font object, and content. 3D texts are most likely to be used as special effects in games and applications.

  1. Include the necessary headers:

    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgText/Font3D>
    #include <osgText/Text3D>
    #include <osgViewer/Viewer>

  2. Read an appropriate font file with the osgText::readFont3DFile() function, which is similar to osgText::readFontFile(). Using the osgdb_freetype plugin, TrueType fonts can be parsed into finely-detailed 3D character glyphs:

    osg::ref_ptr<osgText::Font3D> g_font3D =
        osgText::readFont3DFile( "fonts/arial.ttf" );

  3. We are going to imitate the createText() function from the last example. The only difference is that we have to set an extra depth parameter for the text characters to make them stand out in the 3D world. The setAxisAlignment() method here indicates that the text object is placed on the XOZ plane, with its front faces facing the negative Y axis:

    osgText::Text3D* createText3D( const osg::Vec3& pos,
                                   const std::string& content,
                                   float size, float depth )
    {
        osg::ref_ptr<osgText::Text3D> text = new osgText::Text3D;
        text->setFont( g_font3D.get() );
        text->setCharacterSize( size );
        text->setCharacterDepth( depth );
        text->setAxisAlignment( osgText::TextBase::XZ_PLANE );
        text->setPosition( pos );
        text->setText( content );
        return text.release();
    }

  4. Create a 3D text object with short words. Note that because 3D texts are actually made up of vertices and geometry primitives, abuse of them may cause high resource consumption:

    osg::ref_ptr<osg::Geode> textGeode = new osg::Geode;
    textGeode->addDrawable(
        createText3D(osg::Vec3(), "The Cessna", 20.0f, 10.0f) );

  5. This time we add an osg::MatrixTransform as the parent of textGeode. It will apply an additional transformation matrix to the model-view matrix when rendering all text drawables, and thus change their displayed positions and altitudes in the world coordinates:

    osg::ref_ptr<osg::MatrixTransform> textNode =
        new osg::MatrixTransform;
    textNode->setMatrix( osg::Matrix::translate(0.0f, 0.0f, 10.0f) );
    textNode->addChild( textGeode.get() );

  6. Add our Cessna to the scene graph again, and start the viewer:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( osgDB::readNodeFile("cessna.osg") );
    root->addChild( textNode.get() );
    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

  7. You will see some big letters above the model, but in fact the initial position of the 3D text object should be at (0, 0, 0), which is also the origin of the Cessna. The osg::MatrixTransform node here prevents the model and the text from overlapping each other, by translating textGeode to a new position (0, 0, 10):


What just happened?

Both 2D and 3D texts can be transformed by their parent nodes. This is always helpful when we have to compose a paragraph or move a model followed by a text label. Similar to OSG's transformation nodes, the setPosition() method of osgText::TextBase only sets the location under the relative reference frame of the text object's parent. The same thing happens to the setRotation() method, which determines the rotation of the text, and setAxisAlignment(), which aligns the text with a specified plane.

The only exception is the SCREEN alignment mode:

text->setAxisAlignment( osgText::TextBase::SCREEN );

This mimics the billboard technique for scene objects, and makes the text (either osgText::Text or osgText::Text3D) always face the viewer. In 3D Geographic Information Systems (3D GIS), placing landmarks on the earth or on cities as billboards is a very common operation, and can be implemented with the SCREEN mode. In this case, rotation and parent transformations are not available and should not be used, as they may cause confusion and potential problems.
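The reason SCREEN-aligned text never appears rotated can be shown with a few lines of quaternion arithmetic (a standalone sketch; osg::Quat provides these operations in real code): the text is effectively given the inverse of the camera's rotation, so the composed rotation is the identity and the glyphs stay flat on the screen:

```cpp
#include <cassert>
#include <cmath>

// Minimal unit quaternion (x, y, z, w) with just enough operations to
// demonstrate the idea. Not OSG code.
struct Quat
{
    double x, y, z, w;
    Quat operator*( const Quat& q ) const // Hamilton product
    {
        return { w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w,
                 w*q.w - x*q.x - y*q.y - z*q.z };
    }
    Quat inverse() const { return { -x, -y, -z, w }; } // unit quaternions only
};

Quat rotateAboutZ( double angle ) // e.g. a camera roll about +Z
{
    return { 0.0, 0.0, std::sin(angle / 2.0), std::cos(angle / 2.0) };
}
```

Composing any view rotation with its inverse yields the identity quaternion, which is why the on-screen orientation of SCREEN-aligned text is independent of camera manipulation.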

Creating particle animations

Particles are used in various 3D applications for special effects such as smoke, dust, explosions, fluid, fire, and rain. It is much more difficult to build and manage a complete particle system than to construct other simple scene objects. In fact, OSG provides a large number of classes in the osgParticle library to enable customization of complex particle systems, most of which may be extended and overridden using inheritance, if user-defined algorithms are needed.

The particle class, osgParticle::Particle, represents the atomic particle unit. It is often used as a design template before the simulation loop starts, and copied and regenerated by the particle system in run-time to render massive particles.

The particle system class, osgParticle::ParticleSystem, manages the creation, updating, rendering, and destruction of all particles. It is derived from osg::Drawable, so it can accept different rendering attributes and modes, just like normal drawables. It should be added to an osg::Geode node, like any other drawable.

The emitter abstract class (osgParticle::Emitter) defines the number and basic properties of newly-generated particles every frame. Its descendant class, osgParticle::ModularEmitter, works like a standard emitter, which provides the mechanism for controlling particles to be created. It always holds three kinds of sub-controllers:

  • The placer (osgParticle::Placer) sets the initial position of every particle
  • The shooter (osgParticle::Shooter) sets the initial velocities of particles
  • The counter (osgParticle::Counter) determines how many particles should be created

The program abstract class (osgParticle::Program) manipulates the position, velocity, and other properties of each individual particle during its lifetime. Its descendant class, osgParticle::ModularProgram, is composed of a list of osgParticle::Operator subclasses that perform operations on existing particles.

Both the emitter and program classes are indirectly derived from osg::Node, which means that they can be treated as nodes in the scene graph. During the update and cull traversals, they will be automatically traversed, and sub-controllers and operators will be executed. The particle system will then make use of their results to re-compute and draw its managed particles. The re-computing process can be done with the osgParticle::ParticleSystemUpdater, which is actually a node, too. The updater should be placed after the emitter and the program in the scene graph, in order to ensure that updates are carried out in the correct order. For example:

root->addChild( emitter );
root->addChild( program );
root->addChild( updater ); // Added last

The following diagram shows the hierarchy of the above osgParticle classes:


Time for action – building a fountain in the scene

We will demonstrate how to implement a basic particle fountain. The simulation of a fountain can be described as follows: firstly, the water emitted from a point rises with a certain initial speed; then the speed decreases due to gravity of the earth, until reaching the highest point; after that, the water drops fall down onto the ground or into the pool. To achieve this, an osgParticle::ParticleSystem node, along with an emitter and a program processor, should be created and added to the scene graph.

  1. Include the necessary headers:

    #include <osg/MatrixTransform>
    #include <osg/Point>
    #include <osg/PointSprite>
    #include <osg/Texture2D>
    #include <osg/BlendFunc>
    #include <osgDB/ReadFile>
    #include <osgGA/StateSetManipulator>
    #include <osgParticle/ParticleSystem>
    #include <osgParticle/ParticleSystemUpdater>
    #include <osgParticle/ModularEmitter>
    #include <osgParticle/ModularProgram>
    #include <osgParticle/AccelOperator>
    #include <osgViewer/ViewerEventHandlers>
    #include <osgViewer/Viewer>

  2. The entire process of creating a particle system can be implemented in a separate user function:

    osgParticle::ParticleSystem* createParticleSystem(
    osg::Group* parent )


  3. Now we are inside the function. Every particle system has a template particle that determines the behaviors of all newly-generated particles. Here, we set the shape of each particle in this system to POINT. With the help of OpenGL's point sprite extension, these points can be rendered as textured billboards, which is enough in most cases:

    osg::ref_ptr<osgParticle::ParticleSystem> ps =
        new osgParticle::ParticleSystem;
    ps->getDefaultParticleTemplate().setShape(
        osgParticle::Particle::POINT );

  4. Set the rendering attributes and modes of the particle system. These will automatically affect every rendered particle. Here, we attach a texture image to particles, and define a blending function in order to make the background of the image transparent:

    osg::ref_ptr<osg::BlendFunc> blendFunc = new osg::BlendFunc;
    blendFunc->setFunction( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setImage( osgDB::readImageFile("Images/smoke.rgb") );

  5. Another two important attributes are osg::Point and osg::PointSprite. The first sets the point size (the diameter of a rasterized point), and the latter enables point sprites, which can effectively replace a four-point quad with a single vertex, without needing to specify texture coordinates or rotate the front face to the viewer. Besides, we had better turn off the lighting of particles, and set a suitable rendering order so that they are drawn correctly in the whole scene graph:
    osg::StateSet* ss = ps->getOrCreateStateSet();
    ss->setAttributeAndModes( blendFunc.get() );
    ss->setTextureAttributeAndModes( 0, texture.get() );

    ss->setAttribute( new osg::Point(20.0f) );
    ss->setTextureAttributeAndModes( 0, new osg::PointSprite );

    ss->setMode( GL_LIGHTING, osg::StateAttribute::OFF);
    ss->setRenderingHint( osg::StateSet::TRANSPARENT_BIN );
  6. The osgParticle::RandomRateCounter class generates a random number of particles every frame. It is derived from osgParticle::Counter and has a setRateRange() method that is used to specify the minimum and maximum number of elements:

    osg::ref_ptr<osgParticle::RandomRateCounter> rrc =
    new osgParticle::RandomRateCounter;
    rrc->setRateRange( 500, 800 );

  7. Add the random rate counter to the standard emitter. Also, we have to attach the particle system to it as the operation destination. By default, the modular emitter already includes a point-shape placer at (0, 0, 0), and a radial shooter that chooses a direction and an initial speed randomly for each particle, so we don't need to specify new ones here:

    osg::ref_ptr<osgParticle::ModularEmitter> emitter =
    new osgParticle::ModularEmitter;
    emitter->setParticleSystem( ps.get() );
    emitter->setCounter( rrc.get() );

  8. The osgParticle::AccelOperator class applies a constant acceleration to all particles, on the fly. To simulate gravity, we can either use setAcceleration() to specify the acceleration vector of gravity, or call the setToGravity() method directly:

    osg::ref_ptr<osgParticle::AccelOperator> accel =
        new osgParticle::AccelOperator;
    accel->setToGravity();

  9. Add the only operator to the standard program node, and attach the particle system, too:

    osg::ref_ptr<osgParticle::ModularProgram> program =
    new osgParticle::ModularProgram;
    program->setParticleSystem( ps.get() );
    program->addOperator( accel.get() );

  10. The particle system, which is actually a drawable object, should be added to a leaf node of the scene graph. After that, we add all particle-related nodes to the parent node. Here is an interesting issue of world and local coordinates, which will be discussed later:

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable( ps.get() );

    parent->addChild( emitter.get() );
    parent->addChild( program.get() );
    parent->addChild( geode.get() );
    return ps.get();

  11. Now let's return to the main entry. Firstly, we create a new transformation node for locating the particle system:

    osg::ref_ptr<osg::MatrixTransform> mt = new osg::MatrixTransform;
    mt->setMatrix( osg::Matrix::translate(1.0f, 0.0f, 0.0f) );

  12. Create all particle system components, and add them to the input transformation node. The particle system should also be registered with a particle system updater, using the addParticleSystem() method:

    osgParticle::ParticleSystem* ps = createParticleSystem( mt.get() );

    osg::ref_ptr<osgParticle::ParticleSystemUpdater> updater =
    new osgParticle::ParticleSystemUpdater;
    updater->addParticleSystem( ps );

  13. Add all of the nodes above to the scene's root node, including a small axes model. After that, start the viewer and just take a seat:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( updater.get() );
    root->addChild( mt.get() );
    root->addChild( osgDB::readNodeFile("axes.osg") );
    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

  14. Our particle fountain is finally finished! Zoom in and you will find that all particles start from a point on the positive X axis, at x = 1. Now, with just a few simple fixed-function attributes, particles are rendered as well-textured points, and each particle element appears much like a water drop because of the blending operation:


What just happened?

In the above image, we can see that the whole particle system is translated to (1, 0, 0) in the world. That's because we added the emitter, the program, and the particle system's parent to a transformation node. But, in fact, the result will be different if we put one of the three elements under the transformation node and the other two under the root node. Adding only the osg::Geode node to an osg::Transform will make the entire particle system move with it; but adding only the emitter will change the transform behavior of new-born particles while leaving any existing ones in world coordinates. Similarly, adding only the program node will make the parent transformation node affect only the operators.

A good example is the design of fighter jets. While a plane is spiraling in the sky, the location and direction of its exhaust plume vary all the time. Using an osg::MatrixTransform as the parent of the particle emitter is helpful in representing such a particle-based scenario. The particle system and the updater should not be placed under the same transformation node; otherwise all of the old particles in the air would move and rotate with it too, which is certainly unreasonable in reality.

Have a go hero – designing a rotary sprinkler

Have you ever seen a rotary sprinkler? It consists of at least one rounded head that can automatically rotate 360 degrees and spray water around the sprinkler's diameter. To create such a machine with a simple cylinder model and the particle system, you have to design a modular emitter with a shooter that shoots particles in a specified horizontal direction, and a modular program with a gravity acceleration operator.

As a hint, the default radial shooter (osgParticle::RadialShooter) uses two angles, theta and phi, within specified ranges, in order to determine a random direction of particles, for example:

osg::ref_ptr<osgParticle::RadialShooter> shooter =
new osgParticle::RadialShooter;
// Theta is the angle between the velocity vector and Z axis
shooter->setThetaRange( osg::PI_2 - 0.1f, osg::PI_2 + 0.1f );
// Phi is the angle between X axis and the velocity vector projected
// onto the XOY plane
shooter->setPhiRange( -0.1f, 0.1f );
// Set the initial speed range
shooter->setInitialSpeedRange( 5.0f, 8.0f );

To rotate the initial direction of emitting particles, you can either use an update callback that changes the theta and phi ranges, or consider adding a transformation node as the emitter’s parent.
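The direction the shooter produces from theta and phi can be written out in plain C++ (standalone spherical-coordinate math matching the comments above, not the osgParticle implementation itself): theta is measured from the +Z axis, phi from the +X axis within the XOY plane.

```cpp
#include <cassert>
#include <cmath>

struct Vec3d { double x, y, z; };

// Map a (theta, phi, speed) triple to an initial velocity vector.
// Standalone sketch of the radial-shooter convention, not OSG code.
Vec3d radialVelocity( double theta, double phi, double speed )
{
    return { speed * std::sin(theta) * std::cos(phi),   // XY component along X
             speed * std::sin(theta) * std::sin(phi),   // XY component along Y
             speed * std::cos(theta) };                 // vertical component
}
```

With theta near pi/2 and phi near 0, as in the ranges above, the velocity is almost horizontal along +X, which is exactly what the sprinkler head needs before any extra rotation is applied.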



Creating shadows on the ground

Shadows are also an important component of 3D applications. When constructing massive 3D scenes like digital cities, modelers may first design and compute the lighting of buildings, models, and the ground in modeling software like 3ds Max, Maya, or Blender, and then bake the shadows into these models' textures. Real-time applications will then read the model files with their textures, and the shadows are rendered statically in the rendering window.

Real-time shadows are also possible, but not for unlimited use. The osgShadow library provides a range of shadow techniques on a scene graph that needs to have shadows cast upon it. The core class, named osgShadow::ShadowedScene, should be used as the root node of these shadowy sub-graphs. It accepts an osgShadow::ShadowTechnique instance as the technique used to implement shadowing. Deriving the technique class will extend the scene graph to support more algorithms and solutions, which will enrich the shadow functionalities.

Time for action – receiving and casting shadows

Our goal is to show the construction of a scene that casts shadows on models. Such a scene always includes a specific shadow scene root, an inbuilt or custom shadow technique, and child nodes with distinguishable receiving and casting masks. A normal scene cannot be shadowed without adding a shadow scene as its parent; conversely, a shadowed scene graph can exclude all shadow computations and effects either by removing the osgShadow::ShadowedScene root node or by removing the shadow technique object applied to it (simply set a null one). In this example, we create and manage the scene graph under the shadow scene root, and use the predefined shadow mapping technique to render both real objects and shadows correctly.

  1. Include the necessary headers:

    #include <osg/AnimationPath>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgShadow/ShadowedScene>
    #include <osgShadow/ShadowMap>
    #include <osgViewer/Viewer>

  2. The following function creates the animation path. It uses a few sample control points to generate a circle, which can then be applied to an osg::AnimationPathCallback to implement a time-varying transformation pathway:

    osg::AnimationPath* createAnimationPath( float radius, float time )
    {
        osg::ref_ptr<osg::AnimationPath> path = new osg::AnimationPath;
        path->setLoopMode( osg::AnimationPath::LOOP );

        unsigned int numSamples = 32;
        float delta_yaw = 2.0f * osg::PI/((float)numSamples - 1.0f);
        float delta_time = time / (float)numSamples;
        for ( unsigned int i=0; i<numSamples; ++i )
        {
            float yaw = delta_yaw * (float)i;
            osg::Vec3 pos( sinf(yaw)*radius, cosf(yaw)*radius, 0.0f );
            osg::Quat rot( -yaw, osg::Z_AXIS );
            path->insert( delta_time * (float)i,
                          osg::AnimationPath::ControlPoint(pos, rot) );
        }
        return path.release();
    }

  3. Set masks of shadow receivers and casters. The AND operation of these two masks must yield 0:

    unsigned int rcvShadowMask = 0x1;
    unsigned int castShadowMask = 0x2;

  4. Create the ground model. This only receives shadows from other scene objects, so performing an AND operation on its node mask and the receiver mask should return a non-zero value, and the bitwise AND between the node mask and the caster mask should always return 0. Therefore, we can determine the node mask according to such principles:

    osg::ref_ptr<osg::MatrixTransform> groundNode =
    new osg::MatrixTransform;
    groundNode->addChild( osgDB::readNodeFile("lz.osg") );
    groundNode->setMatrix( osg::Matrix::translate(0.0f, 0.0f, -200.0f) );
    groundNode->setNodeMask( rcvShadowMask );

  5. Set the Cessna model, which also accepts an update callback to perform path animation. In our example, it only casts a shadow on the ground and other scene objects:

    osg::ref_ptr<osg::MatrixTransform> cessnaNode =
    new osg::MatrixTransform;
    cessnaNode->addChild( osgDB::readNodeFile("cessna.osg.0,0,90.rot") );
    cessnaNode->setNodeMask( castShadowMask );

    osg::ref_ptr<osg::AnimationPathCallback> apcb =
    new osg::AnimationPathCallback;
    apcb->setAnimationPath( createAnimationPath(50.0f, 6.0f) );
    cessnaNode->setUpdateCallback( apcb.get() );

  6. Add a dump truck model onto the ground using an approximate translation matrix. It receives a shadow from the Cessna circling overhead, and casts a shadow onto the ground. This means that we have to set an appropriate node mask to retrieve a non-zero value while performing a bitwise AND with the union of both the receiver and caster masks:

    osg::ref_ptr<osg::MatrixTransform> truckNode =
    new osg::MatrixTransform;
    truckNode->addChild( osgDB::readNodeFile("dumptruck.osg") );
    truckNode->setMatrix( osg::Matrix::translate(0.0f, 0.0f, -100.0f) );
    truckNode->setNodeMask( rcvShadowMask|castShadowMask );

  7. Set a light source for producing shadows. We specify the parallel light's direction with the setPosition() method to generate declining shadows here:

    osg::ref_ptr<osg::LightSource> source = new osg::LightSource;
    source->getLight()->setPosition( osg::Vec4(4.0, 4.0, 10.0, 0.0) );
    source->getLight()->setAmbient( osg::Vec4(0.2, 0.2, 0.2, 1.0) );
    source->getLight()->setDiffuse( osg::Vec4(0.8, 0.8, 0.8, 1.0) );

  8. We must set a shadow technique here. There are already several OpenGL-based shadow techniques implemented by organizations and individuals, including shadow mapping using projective texture mapping, shadow volumes realized by stencil buffer, and other implementations. We choose the famous and effective shadow mapping (osgShadow::ShadowMap) technique, and set its necessary parameters including the light source, shadow texture's size, and unit:

    osg::ref_ptr<osgShadow::ShadowMap> sm = new osgShadow::ShadowMap;
    sm->setLight( source.get() );
    sm->setTextureSize( osg::Vec2s(1024, 1024) );
    sm->setTextureUnit( 1 );

  9. Set the shadow scene's root node, and apply the technique instance, as well as shadow masks to it:

    osg::ref_ptr<osgShadow::ShadowedScene> root =
    new osgShadow::ShadowedScene;
    root->setShadowTechnique( sm.get() );
    root->setReceivesShadowTraversalMask( rcvShadowMask );
    root->setCastsShadowTraversalMask( castShadowMask );

  10. Add all models and the light source to the root and start the viewer:

    root->addChild( groundNode.get() );
    root->addChild( cessnaNode.get() );
    root->addChild( truckNode.get() );
    root->addChild( source.get() );

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );

  11. With a simple light source, and the most frequently-used and stable shadow mapping technique, we can now render the ground, Cessna, and dump truck in a shadowed scene. You may change the texture resolution with the setTextureSize() method, or switch to other shadow techniques to see if there are any changes or improvements:


What just happened?

The setNodeMask() method was previously used to tell an intersection visitor to bypass a specified sub-scene graph. This time, we use it to distinguish between the receivers and casters of shadows. Here, the node mask is combined with the shadow scene node's masks in a bitwise AND operation, instead of with a node visitor's traversal mask as before.


The setNodeMask() method can even be used to cull a node from the to-be-rendered scene, that is, to remove certain sub-graphs from the rendering pipeline. In the cull traversal of the OSG backend, each node's mask value is ANDed with the camera node's cull mask, which is set by the setCullMask() method of the osg::Camera class. Therefore, nodes and their sub-graphs will not be drawn if the node mask is 0, because the AND operation always returns 0 during the culling process.

Note that the current OSG shadow map implementation only handles the cast shadow masks of nodes. It will adapt the shadow map to fit the bounds of all objects set to cast shadows, but you have to handle objects that do not need to receive shadows yourself, for example, by not adding them to the shadow scene node. In practice, almost all objects will be set to receive shadows, and only the ground should be set not to cast shadows.

Have a go hero – testing other shadow techniques

There are more shadow techniques besides shadow mapping, including the simplest implementation using only textures and fixed-function states, the volume algorithm using the stencil buffer (not fully completed at present), soft-edged shadows, parallel-split shadows, light space perspective shadows, and so on.

You may find a brief introduction to them at:

The knowledge of how to create advanced graphical effects (shadows are only one field) is profound. If you are interested in learning more, you can read some advanced books, such as Real-Time Rendering by Akenine-Möller, Haines, and Hoffman, and Computer Graphics: Principles and Practice by Foley, van Dam et al.

Now, choose the best performer among these shadow techniques. Another option is to design your own shadow technique, if existing ones cannot meet your application's requirements and there is a tangible benefit to developing one at your own risk.

Implementing special effects

The osgFX library provides a special effects framework. It is somewhat analogous to the osgShadow NodeKit, which uses a shadow scene as the parent of all shadowed sub-graphs. The osgFX::Effect class, which is derived from osg::Group, implements special effects on its child nodes, but never affects its siblings or parent nodes.

The osgFX::Effect is a pure base class that doesn't realize any actual effect by itself. Its derivatives include anisotropic lighting, highlighting, cartoon shading, bump mapping, outline, and scribe effect implementations, and it can be extended at any time for different purposes.

Time for action – drawing the outline of models

Outlining an object is a practical technique for representing special effects in games, multimedia, and industrial applications. One implementation in OpenGL is to write a constant value into the stencil buffer and then render the object again with thick wireframe lines. After this two-pass rendering process, an outline appears around the object, with a thickness of one half of the wireframe line width. Fortunately, this has already been implemented in the osgFX library, in the osgFX::Outline class—a derived class of osgFX::Effect.

  1. Include the necessary headers:

    #include <osg/Group>
    #include <osgDB/ReadFile>
    #include <osgFX/Outline>
    #include <osgViewer/Viewer>

  2. Load a Cessna model for outlining:

    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile( "cessna.osg" );

  3. Create a new outline effect node. Set the width and color parameters, and add the model node as the child:

    osg::ref_ptr<osgFX::Outline> outline = new osgFX::Outline;
    outline->setWidth( 8 );
    outline->setColor( osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f) );
    outline->addChild( model.get() );

  4. As discussed before, outlining requires the stencil buffer in order to render the results accurately. So we have to request valid stencil bits for the rendering windows via the osg::DisplaySettings instance. The stencil bit setting is 0 by default, which means that the stencil buffer will not be available.

    osg::DisplaySettings::instance()->setMinimumNumStencilBits( 1 );

  5. Before starting the viewer, don't forget to reset the clear mask, in order to also clear stencil bits every frame. The outline effect node is used as the root node here. It can also be added to a more complex scene graph for rendering.

    osgViewer::Viewer viewer;
    viewer.getCamera()->setClearMask(
        GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT );
    viewer.setSceneData( outline.get() );

  6. That's it! This is really a simple example compared to the others in this article. However, it may not be easy to achieve a similar result using traditional nodes and attached state sets. The osgFX library uses the concept of multi-pass rendering here to realize such kinds of special effects:


What just happened?

OSG's effect classes are actually collections of state attributes and modes. They allow multiple state sets to be managed for a single host node. When traversing the scene graph, the node is traversed as many times as the number of predefined state sets. As a result, the model will be drawn multiple times (so-called multiple passes) in the rendering pipeline, each of which applies different attributes and modes, and is then combined with previous passes.

For the outlining implementation, there are two passes defined internally: first, the model is drawn and the stencil buffer is set to 1 where it passes; second, the model is drawn again in wireframe mode, with a thick enough line width and another stencil test. Pixels are only drawn to the frame buffer where the stencil buffer was not set in the first pass, and thus the result shows a colored outline. To better understand how this works, you are encouraged to look at the implementation of the osgFX::Outline class in the src/osgFX folder of the OSG source code.

Playing with more NodeKits

There are a lot more NodeKits, either in the OSG source code or contributed by third parties. Each one provides a specific functionality to be used in the scene graph. Most of them also extend OSG native formats (.osg, .osgb, and so on) to support reading or writing extended node and object types.

Here is a table of some existing NodeKits (and practical applications) that may enrich the visual components of OSG-based applications. Play with them freely, or join one of these communities to share your ideas and code. Note that not all of these NodeKits are available for direct use, but they are all considered worthwhile and are sure to draw the attention of more contributors:

Name               Description                                            Website
osgART             Augmented reality (AR) support
osgAudio           Sound toolkit for OSG
osgBullet          Physics engine support using the Bullet library
osgcal             Character animation support using the Cal3D library
osgCairo           Cairo interface support
                   Parallel streaming processors support
osgEarth           Scalable terrain rendering toolkit
osgIntrospection   An introspection or reflection framework               (only available in SVN at present)
osgManipulator     3D interactive manipulators                            In the core OSG source code
osgMaxExp          3ds Max's OSG scene exporter
osgModeling        Parametric modeling and polygon techniques support
osgNV              Cg and NVIDIA extensions support
osgOcean           Simulation toolkit for above- and below-water effects
osgPango           Improved font rendering using the Pango library
osgQt              Qt GUI integration                                     In the core OSG source code
osgSWIG            Language bindings for Python and other languages
osgWidgets         3D widgets support                                     In the core OSG source code
osgVirtualPlanets  A framework of 3D GIS planets inside gvSIG
osgVisual          Scientific visualization and vehicle simulators
osgVolume          Volume rendering support                               In the core OSG source code
osgXI              CgFX, 3D UI, and game development components
Maya2OSG           Maya's OSG scene importer/exporter
                   Terrain database creation tool



In this article, we discussed the most important visual components of a rendering API. These actually extend the core OSG elements by inheriting from basic scene classes (for instance, osg::Group), re-implementing their functionalities, and adding the derived objects to the scene graph. Because of the flexibility of the scene graph, we can enjoy the new features of various customized NodeKits as soon as the simulation loop starts and traverses the scene nodes. It is never too difficult to design your own NodeKits, even if you don't have deep knowledge of every aspect of OSG.
