About this book
Publication date:
January 2014


Chapter 1. Blender Compositing – Overview

This chapter provides a basic understanding of the role of compositing in a CG workflow and of Blender's importance as a compositor. The following topics are covered in this chapter:

  • Compositing significance in the CG pipeline

  • Significance of Blender as a compositor

  • Blender-supported formats

  • Blender color modes and depths

  • Blender color spaces

  • Understanding the render layers and render passes concepts


Understanding CG compositing

CG compositing is an assembly of multiple images that are merged and modified to make a final image. As a typical CG pipeline flow shows, compositing happens after 3D rendering, which is the most expensive phase of CG filmmaking. A well-planned lighting and compositing pipeline can optimize render resources and also provide unlimited image manipulation functionalities to achieve the desired look for the film. Though compositing sits at the end of the pipeline, its wide range of toolsets can help avoid sending work back to earlier departments in the CG pipeline.

The following diagram depicts a CG pipeline flow and also shows where the composite process fits in:

The strength of compositing lies in modifying the rendered CG footage into a believable output. The following screenshot portrays a Composited Output image created from rendered passes. Many effects, such as glare, color corrections, and defocus, make the output seem more believable than the rendered beauty pass, which is shown as the first image in Render Passes.

Compositing also provides tools to grade an image to achieve extreme or fantasy style outputs. The following screenshot illustrates different types of grades that can be performed:

Blender's significance as a compositor

Blender is the only open source product with a range of features comparable to other industry standard commercial or proprietary software. It provides the unique advantage of combining the 3D and 2D stages of CG filmmaking into one complete package. This gives tremendous control when planning and executing a CG pipeline. Automating and organizing the data flow from 3D rendering to compositing can be achieved more easily in Blender than in other solutions, where the compositing software is separate from the 3D rendering software.


Getting started

To get the most out of the Blender Compositor, it is essential to have a basic understanding of what Blender can offer. This includes supported formats, color modes, color spaces, render layers, and render passes.

Supported image formats in Blender

Blender's image input/output system supports regular 32-bit graphics (4 × 8 bits per pixel) as well as floating-point images that store 64 bits per pixel (4 × 16 bits) or 128 bits per pixel (4 × 32 bits). This applies throughout Blender, including texture mapping, background images, and the compositor. These attributes are available in the output properties, as shown in the following screenshot:
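These per-pixel figures are simply the channel count multiplied by the bits stored per channel. A quick sketch of the arithmetic (the function name is illustrative, not part of Blender's API):

```python
def bits_per_pixel(channels, bits_per_channel):
    """Total storage per pixel: channel count x bits per channel."""
    return channels * bits_per_channel

print(bits_per_pixel(4, 8))   # regular graphics: 32 bits per pixel
print(bits_per_pixel(4, 16))  # half-float images: 64 bits per pixel
print(bits_per_pixel(4, 32))  # full-float images: 128 bits per pixel
```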

Supported color modes in Blender

The color modes are the options available to view the channel information of footage. They are:

  • BW: Images are saved in 8-bit grayscale (supported only by PNG, JPEG, TGA, and TIF)

  • RGB: Images are saved with RGB (color)

  • RGBA: Images are saved with RGB and Alpha data (if supported)

Supported color depths in Blender

Image color depth, also called bit depth, is the number of bits used for each color component of a single pixel. Blender supports 8-, 10-, 12-, 16-, and 32-bit color channels.
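For the integer depths, each additional bit doubles the number of distinct values a channel can represent (the 32-bit depth is stored as floating point, so this mapping does not apply to it):

```python
# Distinct values per channel for the integer bit depths.
# (Blender's 32-bit depth is floating point, not 2**32 integer levels.)
for bits in (8, 10, 12, 16):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
```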

Blender's color spaces

The mathematical representation of a set of colors is termed a color space. Each color space has a specific significance and provides unique ways to perform image manipulation. Depending on the task at hand, an appropriate color space can be chosen. Blender supports the RGB color space, the HSV color space, the YUV color space, and the YCbCr color space.

The RGB color space

The RGB (red, green, and blue) color space is widely used in computer graphics because color displays use red, green, and blue as the three primary additive colors to create any desired color. This choice simplifies the system's design, and you can benefit from a large number of existing software routines, since this color space has been around for many years. However, RGB is not always ideal when working with real-world images. All three RGB components must have equal bandwidth to generate a color, resulting in a frame buffer that has the same pixel depth and display resolution for each component. So, whether an image is modified for luminance or for color, all three channels have to be read, processed, and stored. To avoid these limitations, many video standards use color spaces that carry luma and color as separate signals.

The HSV color space

HSV stands for hue, saturation, and value. This color space provides the flexibility to modify hue, saturation, and value independently. HSV is a cylindrical-coordinate representation of the points in an RGB color model. The following screenshot shows RGB in comparison to HSV values to attain a red color:
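The same relationship can be checked with Python's standard colorsys module: pure red in RGB corresponds to hue 0, full saturation, and full value, and the value component can then be changed on its own without shifting the hue:

```python
import colorsys

# Pure red in RGB (components normalized to the 0.0-1.0 range).
r, g, b = 1.0, 0.0, 0.0

# Red maps to hue 0.0, full saturation, full value in HSV.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # 0.0 1.0 1.0

# Halving only the value darkens the color; the hue is untouched.
darker = colorsys.hsv_to_rgb(h, s, 0.5)
print(darker)  # (0.5, 0.0, 0.0)
```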

The YUV color space

The YUV color space is used by the Phase Alternating Line (PAL), National Television System Committee (NTSC), and Sequential Color with Memory (SECAM) composite color video standards for color televisions. Y stands for the luma component (the brightness), and U and V are the chrominance (color) components. This color space was intended to provide luma information for black and white television systems and color information for color television systems. Now, YUV is a color space typically used as part of a color image or CG pipeline to enable developers and artists to work separately with luminance and color information of an image.
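A minimal sketch of the RGB-to-YUV conversion, using the BT.601 luma coefficients (the exact constants vary between standards, and the function here is illustrative). Note how a neutral gray produces zero chrominance, which is exactly the luma/color separation described above:

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0.0-1.0) to YUV using BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma (brightness)
    u = 0.492 * (b - y)                    # blue-difference chrominance
    v = 0.877 * (r - y)                    # red-difference chrominance
    return y, u, v

# A neutral gray carries only luma: U and V both come out as zero.
print(rgb_to_yuv(0.5, 0.5, 0.5))
```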

The YCbCr color space

The YCbCr color space was developed as a digital component video standard, which is a scaled and offset version of the YUV color space. Y is the luma component and Cb and Cr are the blue-difference and red-difference chroma components. While YUV is used for analog color encoding in television systems, YCbCr is used for digital color encoding suitable for video and still-image compressions and transmissions, such as MPEG and JPEG.
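The "scaled and offset" nature of YCbCr can be sketched with the 8-bit BT.601 studio-range constants (other variants exist; the constants and function name here are an assumption for illustration, not Blender internals). Luma is offset to start at 16, and the chroma components are centered on 128:

```python
def rgb_to_ycbcr(r, g, b):
    """Normalized RGB (0.0-1.0) to 8-bit YCbCr, BT.601 studio range."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b  # luma, 16-235
    cb = 128 -  37.797 * r -  74.203 * g + 112.0   * b  # blue-difference
    cr = 128 + 112.0   * r -  93.786 * g -  18.214 * b  # red-difference
    return y, cb, cr

# Black maps to the offsets themselves: Y=16, Cb=Cr=128.
print(rgb_to_ycbcr(0.0, 0.0, 0.0))  # (16.0, 128.0, 128.0)
```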

Render layers/passes

To optimize render resources and also be able to provide full control at the compositing stage, a CG lighting scene is split into multiple render layers and render passes.

Render layers

A typical lighting scene consists of two to three characters, props, and one set. To provide an opportunity to re-render only the required elements in the scene, each element is separated into its own render layer for rendering. All interaction renders are also separated into their own render layers. The following list shows a typical render layer classification:

  • Character 1

  • Character 2

  • Character 3

  • Characters cast shadow

  • Characters occlusion

  • Set

  • Set occlusion

  • Set interaction with characters

Render passes

Passes, or AOVs (arbitrary output variables), are intermediate computational results generated when rendering a layer. All render passes are buffered out when a render layer is rendered and written as separate data. These passes can be utilized in compositing to rebuild the beauty render of the layer and also allow us to tweak individual shader/light contributions. The following screenshot shows the Blender internal render engine's Passes panel:

Every render layer in Blender is, by default, equipped with these render passes, but the content of each render pass depends on the data available to the render layer. However, the pass definition and the type of content it stores don't vary. Any pass that has a camera icon beside it can be excluded from the combined pass data by clicking on that icon. This provides another level of control over the content of the combined pass.

Each pass's significance and content

The following screenshot shows outputs of different render passes available, by default, in Blender's internal render engine. Their significance is explained as follows:

  • Combined: This renders everything in the image, even if it's not necessary. All contributions are blended into a single output, except those you have excluded from this pass by toggling their camera icons.

  • Z (Z depth): This map shows how far away each pixel is from the camera. It is used for depth of field (DOF). The depth map is inverse linear (1/distance) from the camera position.

  • Vector: This indicates the direction and speed of things that are moving. It is used with Vector Blur.

  • Normal: This calculates lighting and apparent geometry for a bump map (an image that is used to fake details of an object) or to change the apparent direction of the light falling on an object.

  • UV: This allows us to add textures during compositing.

  • Mist: This is used to deliver the Mist factor pass.

  • Object Index (IndexOB): This is used to make masks of selected objects using the Matte ID Node.

  • Material Index (IndexMA): This is used to make masks of selected material using the Matte ID Node.

  • Color: This displays the flat color of materials without shading information.

  • Diffuse: This displays the color of materials with shading information.

  • Specular: This displays specular highlights.

  • Shadow: This displays the shadows that can be cast. Make sure shadows are cast by your lights (positive or negative) and received by materials. To use this pass, mix or multiply it with the Diffuse pass.

  • Emit: This displays the options for emission pass.

  • AO: This displays ambient occlusion.

  • Environment: This displays the environment lighting contribution.

  • Indirect: This displays the indirect lighting contribution.

  • Reflection: This displays the reflection contributions based on shader attributes that are participating in the current render.

  • Refraction: This displays the refraction contributions based on shader attributes that are participating in the current render.
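As mentioned earlier, these passes can be recombined in the compositor to rebuild the beauty render. A minimal per-pixel sketch of one common additive recombination is shown below; the exact formula depends on the render engine and shader setup, and the function name and weighting here are illustrative, not Blender's internal formula:

```python
def rebuild_beauty(diffuse, specular, emit, env, indirect, ao=1.0):
    """Recombine per-pixel pass values into an approximate beauty value.

    One common additive recombination: ambient occlusion attenuates the
    diffuse/ambient terms, then specular and emission are added on top.
    """
    return (diffuse + env + indirect) * ao + specular + emit

# With no occlusion (ao=1.0), the contributions simply sum.
print(rebuild_beauty(0.4, 0.1, 0.0, 0.05, 0.05))
```

In a node graph, the same structure appears as Mix (Add/Multiply) nodes chaining the pass outputs together, which is what makes per-pass tweaks possible without re-rendering.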

The following screenshot shows some outputs of Blender's default render passes:



This chapter introduced the CG compositing stage and Blender's significant advantages as a compositor. We also obtained an understanding of what can go in and out of the Blender Compositor in terms of formats, color spaces, passes, layers, and bit depths. The next chapter deals with Blender's node-based architecture and user interface.

About the Authors
  • Mythravarun Vepakomma

    Mythravarun Vepakomma was born in Hyderabad, India, in 1983 and is currently working as a CG Supervisor at Xentrix Studios Pvt Ltd, India. Though he graduated in Electrical and Electronics Engineering in 2004, he has always had a great passion for comics and cartoons. During his studies, his passion got him attracted to web designing and 3D animation. Mythravarun always believed in transforming his passion into a career. He decided to go for it and started learning 3D graphics and web designing on his own. He also started working as a part-time illustrator and graphic designer. After consistent efforts, he finally moved into the field of 3D animation in 2005 to chase his dream of making it his career. He has a decade of experience in several TV series, theme park ride films, and features. He now deals with creating and setting up CG lighting and compositing pipelines, providing creative direction for CG projects, research and development on several render engines to create a stable future for the studio, and many more things. Midway through his career, Mythravarun encountered Blender and was fascinated by its features and the fact that it was open source software. This made him dig deeper into Blender to get a better understanding. Now he prefers to use Blender for many of his illustrations. As a hobby and secondary interest, he composes music and writes blogs on social awareness. His online presence can be found at the following links:

      • Personal website: www.mythravarun.com

      • Blog: www.senseandessence.com

      • Music and entertainment: www.charliesmile.com, www.youtube.com/thevroad, www.youtube.com/joyasysnthesis, and http://joyasynthesis.blogspot.in

  • Ton Roosendaal
Blender Compositing and Post Processing