In this article series by Reynante Martinez, we will learn how to create convincing still images in Blender with the help of the Blender Internal Renderer. This is the second part of the series. To read the first part, see: Creating Convincing Images with Blender Internal Renderer - Part 1
In your journey as a 3D artist, you might have encountered several (if not all) astounding works of art. On close inspection, you'll notice that we rarely see them without textures. That is because textures are one of the most important aspects of 3D, though this doesn't apply to everything. Still, adding textures to your characters, props, environment, and so on will add more to the aesthetic quality of your image than you might believe.
There are a number of ways to add texture to your objects in 3D, such as UV mapping techniques, projections, 2D painting, and so on. Which you choose depends entirely on what kind of render you are trying to achieve. For the sake of this article, we'll try to achieve some nice-looking textures without having to worry about the complex tasks involved. To do this, we'll be using the ever famous and useful procedural textures to create seamless, continuous-looking textures mapped over the surfaces of our models.
More information about Procedural Textures can be found on http://www.blender.org/development/release-logs/blender-233/procedural-textures/.
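To get a feel for what a procedural texture actually is, here is a minimal sketch (my own illustration, not Blender's actual implementation) of tileable 2D value noise, the building block behind "cloud"-style procedurals. Because the lattice values repeat with a chosen period, the pattern wraps seamlessly, which is exactly why procedurals never show texture seams:

```python
import math

def lattice(ix, iy, period=8):
    """Deterministic pseudo-random value in [0, 1] at an integer
    lattice point. Taking coordinates modulo `period` makes the
    pattern tile seamlessly, like a procedural texture."""
    ix, iy = ix % period, iy % period
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFFFF) / 0xFFFF

def smooth(t):
    # Smoothstep easing so the noise shows no visible grid creases.
    return t * t * (3 - 2 * t)

def value_noise(x, y, period=8):
    """Bilinearly interpolated value noise at any (x, y)."""
    ix, iy = int(math.floor(x)), int(math.floor(y))
    fx, fy = smooth(x - ix), smooth(y - iy)
    a = lattice(ix, iy, period)
    b = lattice(ix + 1, iy, period)
    c = lattice(ix, iy + 1, period)
    d = lattice(ix + 1, iy + 1, period)
    top = a + (b - a) * fx
    bot = c + (d - c) * fx
    return top + (bot - top) * fy
```

Sampling `value_noise` at a point and at the same point shifted by one period returns identical values, which is the "seamless" property we care about here.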
Now let’s add some textures, shall we?
Let's select the character model in our scene, then go to the Texture tab on the rightmost part of the Material Buttons window and click Add New to add a new texture.
Adding a New Texture
After adding a new texture, additional windows appear, allowing us to further modify how the newly added texture will affect our material. Name this first texture "bump"; the mapping options can be seen below.
Bump Texture Mapping Settings
Bump Texture Settings
Add another texture below the “bump” texture and call it “stain”. The settings can be seen below.
Stain Texture Mapping Settings
Stain Texture Settings
We could have added more overlaying textures, but this will do for now, just so we can see how the textures have affected our material so far. Rendering now leads us to the image below.
Dirtier And Better :)
This might be a good time to change our framing and staging so we can view the scene from a better perspective. After changing the camera angle, increasing the ground plane's scale, and making some adjustments to the spheres, I achieved something like this:
New Camera Angle
For even better interaction within the scene, we will adjust some material settings to simulate hard and reflective surfaces. It's a little unfair to give our main character some good materials while neglecting the other objects we have. So let's go on and add some decent materials to replace the initial materials that both spheres had before.
Go on and select the larger sphere and edit its current material so that it matches the settings seen in the image below.
You'll notice I added a Color Ramp to each of the materials. This gives the color a slight transition, as would be seen in the natural world, in addition to the diffuse color it already has.
The vital part of the shading process for the spheres is the reflectivity and mirror options, as you can see in the following settings:
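Conceptually, a Color Ramp just remaps a single input factor to a color by blending between a list of color stops. Here is a small sketch of that idea (the stop positions and colors below are hypothetical examples, not the exact values used in the scene):

```python
def color_ramp(t, stops):
    """Linear color ramp in the spirit of Blender's Color Ramp:
    `stops` is a sorted list of (position, (r, g, b)) pairs and
    `t` in [0, 1] is the input factor being remapped."""
    if t <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if t <= p1:
            f = (t - p0) / (p1 - p0)
            # Blend each channel between the two surrounding stops.
            return tuple(a + (b - a) * f for a, b in zip(c0, c1))
    return stops[-1][1]

# A hypothetical two-stop ramp: dark green shadows to pale green highlights.
ramp = [(0.0, (0.05, 0.20, 0.05)), (1.0, (0.60, 0.90, 0.60))]
```

Feeding the shading intensity through such a ramp is what produces the subtle color shift from shadow to highlight mentioned above.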
Green Sphere Material Settings
Blue Sphere Material Settings
Our render would now look like this:
Reflections to Simulate Mirror Effect and Smoothness
To bring this part close to completion, we'll now add a texture to the World and vary the colors that affect the Ambient Occlusion effect.
To do so, let's first change the Horizon and Zenith colors of our World, then make the Ambient Occlusion diffuse energy use the colors we've just set by changing from "Plain" to "Sky Color", as seen below.
Rendering now will lead us to:
New World Settings Render
Notice the subtle difference between the previous render and the latest one, where the slight bluish hue is more distinguishable.
And lastly, since we've already added some decent reflective materials to our spheres, it would be best if we could also see an environment reflected in them, in addition to the character, which is already one of the objects being reflected.
To do this, we're going to add a texture to our World. This is a nifty tool for simulating an environment, since we don't have to do the hard work of manually creating the objects that will be reflected. Not only does it save us a lot of time, but the ease with which we can alter the environment is also a big advantage to have at hand.
So let's go ahead to Shading (F5) and select World Buttons. Scroll to the far left side and you'll see tabs labeled "Texture and Input" and "Map To". Both of these tabs are essential in setting up our World texture, so pay close attention to them.
Below is an image that further shows you what we need to set up (sorry for the sudden theme change).
You might have already guessed what we should do next; if not, I'll continue. After heading over to the "Texture and Input" and "Map To" tabs, let's first focus on what's active by default, that is, "Texture and Input". Here, we'll only need a few things to get started. First, click "Add New" to add a new texture datablock to our Blender scene. Then edit the name of our texture to "environment" and change the coordinates from "View" to "AngMap" to use 360-degree angular coordinates; you'll see why in a while.
Adding a World Texture
After applying these initial settings, we'll proceed to the actual texturing process, which, as far as the World is concerned, is a very quick one. I suppose you're still on the same Buttons window we were on last time. Click on the Texture button or press F6. Bam! Another set of windows. You'll see that the texture we named "environment" a while back now appears in one of the texture slots, just like what we previously did when texturing our character. But this time, instead of choosing procedural textures like Clouds, Voronoi, or Noise, we'll be dealing with an image texture, in our case an HDRi (High Dynamic Range Image). Our purpose in using an HDR image is to simulate the wide range of intensity levels (brightness and darkness) seen in reality and apply it to our world, which is then reflected by our objects. We'll be using high dynamic range images as light probes, which cover a full 360 degrees, and that's the very reason we chose "AngMap" as our World texture coordinate.
More info about HDRi can be found at http://en.wikipedia.org/wiki/High_dynamic_range_imaging, and you can download Light Probe Images from Paul Debevec's collection at http://www.debevec.org/Probes. Save your downloaded light probe images somewhere you can easily find them. I can't stress enough how good file organization can help you in your career. Just imagine how frustrating it is to find an asset among the thousands you already have if they are not in their proper places; this counts for every project you have as well.
So, to open our Light Probe Image as a texture for our World, click the drop-down menu and choose "Image" as your texture type. This tells Blender to use an image instead of the default procedural textures. Then head to the far right side to locate the Image tab with a Load button on it. Let's skip the Map Image tab for now.
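If you are curious what "AngMap" actually does, here is a rough sketch of the idea: every view or reflection direction is mapped to a point on the circular light-probe image, with the image centre looking straight ahead and the outer rim looking directly behind. The exact axis conventions below are my own simplification, not necessarily the ones Blender uses internally:

```python
import math

def angmap_uv(dx, dy, dz):
    """Map a 3D direction to (u, v) in [0, 1] x [0, 1] on a
    Debevec-style angular light-probe image. Here -Z is taken as
    "straight ahead"; the probe covers the full sphere, with the
    distance from the image centre proportional to the angle away
    from the view axis."""
    # Normalise the direction first.
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n
    d = math.sqrt(dx * dx + dy * dy)
    # Radius on the probe grows linearly with the angle from -Z;
    # the degenerate straight-ahead direction maps to the centre.
    r = 0.5 * math.acos(-dz) / (math.pi * d) if d > 1e-9 else 0.0
    return 0.5 + dx * r, 0.5 + dy * r
```

A direction 90 degrees off-axis lands halfway between the centre and the rim, which is the characteristic look of those circular probe images.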
Image as Texture Type
Loading an Image Texture
Browse to your downloaded HDR image (which should have an .hdr extension) and confirm. Now that the image is loaded, let's leave the default settings as they are, since we won't be using them much. You'll see in the Preview on the far left just how wonderful our image looks. But rendering your scene right now would yield nothing but the same render we had before. So if you're itching to get this image into our scene (which I am too), go back to your World settings and head over to the "Map To" tab just beside "Texture and Input", then deselect "Blend" and select "Hori" instead. Kabam! Now we're all set!
World Texture Mapping options
And now, the moment we've all been eagerly waiting for: the render! Yup, go ahead and render, and with luck it will look like the image below.
Render with HDRi Environment
Then finally, in the next and last part of this article, we'll look at how we can add even more realism to our scene by simulating camera lenses and further enhancing the tone of the image with Composite Nodes.
One great thing about Blender (among the other awesome stuff it already has) is its ability to add special effects to an image or a sequence of images using post-processing techniques. Post processing is an aspect of CG and video/film used to improve the quality and aesthetics of an image or video footage. Effects may include sharpening, requantization, luminance alteration, blurring, color correction, and so on.
Among the hundreds of techniques used in post processing, for the sake of this article we'll only be dealing with two: simulating Depth of Field (DoF) and color correction. Essentially, the depth of field is the part of a scene, photograph, or image that appears sharp and in focus; anything beyond this range is blurred and out of focus. It's just like what you see in the viewfinder of a camera when you focus on something and the objects nearest and farthest from you look blurred. Along with toning down your colors, this is an effective way of presenting your subjects to viewers: with DoF, you subconsciously direct their attention to the subjects that are sharpest.
More info on Depth of field can be seen at http://en.wikipedia.org/wiki/Depth_of_field.
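For those who like the numbers behind the look, the amount of blur an out-of-focus point receives can be estimated with the thin-lens "circle of confusion". Here's a small sketch (an idealized camera model, not what Blender's Defocus node computes internally, though the falloff it approximates is similar):

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, f_stop):
    """Thin-lens circle-of-confusion diameter (same units as the
    input distances): the size of the blur disc an out-of-focus
    point projects onto the sensor. Points exactly at `focus_dist`
    produce zero blur."""
    aperture = focal_len / f_stop                      # aperture diameter
    magnification = focal_len / (focus_dist - focal_len)
    return aperture * magnification * abs(obj_dist - focus_dist) / obj_dist
```

For example, with a 50 mm lens at f/2.8 focused at 5 m, an object at 10 m blurs more than one at 6 m, which is exactly the behavior we want from the distant objects we'll add to the scene later.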
Luckily, this is a tool our great Blender has to offer. We could, of course, do this process in a separate image editing application like GIMP or Photoshop, or even use a dedicated application just for simulating Depth of Field. But since we already have Blender, why don't we dig in and take advantage of its power for a while?
Post processing, among other Blender-related things, can be achieved through the use of Composite Nodes. If you are new to this, I suggest you head over and learn some of the basics at http://blenderunderground.com/2008/03/31/introduction-to-composite-nodes-part-1/.
To begin the compositing process, let's go ahead and change one of our 3D Views into a Node Editor. Do that by clicking the lower left corner of your subscreen to open the window menu and choosing "Node Editor", as can be seen below.
Changing the 3D View to Node Editor
Doing so will bring up a fresh screen on which we'll be doing our post processing. By default, this is what we should see:
Node Editor Window
This is, initially, not what we want. What we see now is the Material Node Editor, a special node editor for editing material settings and for designs and alterations that the standard material editor can't handle. But we're already happy with our current material settings, so let's leave this for the meantime. What we need to focus on right now is the button just beside it, the one that says "Composite Nodes". MmmmmMmm, I already smell beauty just by saying that.
Now that you have the Composite Nodes window ready, go ahead and click "Use Nodes" and "Backdrop". Use Nodes basically tells Blender to use the node setup as a compositing process, and Backdrop is used to preview the effects of the setup in our background window (which is a very intuitive feature, I might say). Just a note, as I had trouble figuring this out myself: for you to be able to see anything in your background as a backdrop, you must have an active Viewer selected. If you have none, go ahead and add one by pressing Spacebar > Add > Output > Viewer and connect its image input socket to the output image socket you want to view. If that looks daunting to you, we'll see how to do it in a minute.
So first things first, let's do some color correction and enhancement to the rendered image. If you don't have any rendered image in your buffer right now, go ahead and press the “Re-render this Layer” button on the Render Layer node.
Let's select the Render Layer node and press Spacebar > Add > Color > RGB Curves. You'll notice that just by doing this, the newly added RGB Curves node is already connected to the image output socket of the Render Layer, and the RGB Curves node is already selected. So let's head over and add another Viewer node by pressing Spacebar > Add > Output > Viewer. This active Viewer will show us that whatever is done to the RGB Curves node is reflected here and in the backdrop. You'll notice that the RGB Curves tool is no different from an image editor's RGB Curves, which gives us a familiar option to tweak. If your nodes seem a little too cluttered right now, you can select individual nodes by clicking them and move them with the G key, just like you would in the Blender 3D View. If, however, you want to select multiple nodes, press the B key for box select and click-drag over the nodes you wish to select. It's a good idea to keep your nodes organized, because you can imagine that once you have a complex node setup, it becomes very difficult and cumbersome to trace all the paths and where they lead to and from. So it's best to start right by organizing them according to your preference and need.
RGB Curves node
This time, I give you all the freedom you need (I think I should have done this from the very beginning) to modify the RGB curve as you see fit. My node setup can be seen below.
RGB Curve Node settings
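To see what a curve tweak does numerically, here is a tiny sketch of the idea behind the RGB Curves node. I've simplified the node's smooth splines to piecewise-linear segments, and the S-curve control points below are hypothetical examples, not my exact settings:

```python
def eval_curve(x, points):
    """Piecewise-linear tone curve: `points` maps input level to
    output level, both in [0, 1], sorted by input level."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

# A gentle S-curve: darkens shadows, brightens highlights (more contrast).
s_curve = [(0.0, 0.0), (0.25, 0.18), (0.75, 0.82), (1.0, 1.0)]

def apply_curves(rgb, curve):
    # Apply the same curve to every channel, like dragging the combined curve.
    return tuple(eval_curve(c, curve) for c in rgb)
```

With this particular curve, midtones stay put while dark pixels get darker and bright pixels get brighter, which is the classic contrast boost.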
Before we proceed with the next step, which is adding the Depth of Field, we need to add some objects to our scene to serve as distant objects to be blurred, plus a very important camera setting for simulating DoF. My current render is this:
Render with New Objects for DoF
Though we did create an initial node setup, it doesn't seem to be reflected in our render. Why is that? It's because we haven't yet told Blender to use our current node setup as a post-processing procedure for our render output. We'll get to that in a bit, after we finalize the node setup.
Let's go over to our scene and select our Camera, then go to the Editing (F9) panel and enable "Limits". After doing this, you'll notice in our 3D View that a line has been drawn from the camera, suggesting the start and end of its view, along with a yellow cross, which marks our Depth of Field cue. To move the cross, edit the DofDist value and see what fits best; just a reminder, though: the yellow cross should sit on the object you want in sharpest focus. Alternatively, you can just type the name of an object in the DofOb field. In my case, it's the character.
Camera DoF Settings
Now let's go back to our Node Editor and add the final node that we'll be using. Select the RGB Curves node to make it active, then press Spacebar > Add > Filter > Defocus. Then add a Viewer for this node by making the Defocus node active and pressing Spacebar > Add > Output > Viewer. You'll notice that the image output socket of the RGB Curves node has been connected to the image input socket of the Defocus node, which is good, but this still lacks something of great importance: the depth information with which Blender will simulate the Depth of Field. The depth information we have right now is the Z value, which happens to be one of the input sockets of the Defocus node and one of the output sockets of the Render Layer node. So now we'll tell Blender to use the depth information in our scene, based on the DofDist value we set on our camera. To do this, we simply connect the "Z" output socket of our Render Layer to the "Z" input socket of our Defocus node... And we're done? Not yet. We still have to define how much blurriness we need for our scene and how much quality we want from it.
And before I forget (which I wish I hadn't in some parts here), connect the image output socket of the Defocus node to the image input socket of the Composite node to finalize the node setup.
You can play around a bit and see what looks best; almost everything in the node is self-explanatory. BUT, be sure to leave the "No zbuffer" option turned off, since we are using the real Z values rather than an image-based substitute. Then, when you're all set, tell Blender to output our post-processed image. Do this by activating the "Do Composite" button found in the Scene (F10) Render Buttons, in the "Anim" tab.
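The core trick the Defocus node performs can be boiled down to: the farther a pixel's Z value is from the focal distance, the larger the neighborhood it is averaged over. Here is a toy 1D sketch of that idea (a real defocus gathers a 2D disc of samples and weighs them more carefully, but the depth-driven blur radius is the essence):

```python
def defocus_1d(pixels, zbuf, dof_dist, strength=1.0):
    """Toy 1-D defocus: each pixel is averaged over a window whose
    radius grows with its Z-buffer distance from the focal plane
    (`dof_dist`). `strength` plays the role of the blur amount."""
    out = []
    for i, z in enumerate(zbuf):
        radius = int(round(strength * abs(z - dof_dist)))
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Pixels sitting exactly at `dof_dist` get a zero-radius window and pass through unchanged, which is why the object under the yellow cross stays sharp.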
Do Composite Button
And lastly, though this is entirely optional, you can turn on Full Sampling under the Render options to achieve an even smoother result, at the cost of render time. If you have plenty of time and the patience to wait, go for it, since it doesn't hurt to have a beautiful image anyway.
... Leading us to this rendered image:
Final Render with DoF and Full Sampling
I just can't take our character looking dead throughout our render process, so why don't we give him a nice pose so he can be remembered well?
Final Render with Pose
I'm very glad you made it 'till the end of the article (which could have been a very painstaking journey). In this article, we learned how to set up scenes and make them more aesthetically pleasing to the eye. Basically, we learned how to create simple-looking materials yet present them in a polished way with the use of soft shadows, reflections, environment images, basic light setups, and a bit of post processing.
Hopefully, next time I can dig more into the details of achieving high-quality images using a combination of textures, and look at how to incorporate soft body, cloth, and fluid dynamics into a scene to create stunning, realistic images.
If you have any questions, clarifications, suggestions, and/or comments, or if you just want to drop a note about how noisy I've been, feel free to send me an email at email@example.com. Please be nice, OK?
About the Author:
Reynante Martinez is a self-taught graphic designer, illustrator, web designer, and 3D generalist. His interest in CG started nine years ago, when he was introduced to GIMP as one of the open source image editing applications available on Linux. Aside from being an animator at work, he also has experience in mentoring and has been a speaker and workshop conductor on several occasions over the past few years. He is also the co-founder of PinoyBlender, a Filipino Blender user group. Since his discovery of Blender six years ago, his passion for CG art has grown even more, with upgrades coming regularly and an active and helpful community of Blender artists being one of the most exciting factors in his career. He can be reached through the email above or through his weblog at http://www.reynantem.blogspot.com, and you can view his online gallery at http://www.reynante.deviantart.com/gallery. You can follow him on Twitter at http://www.twitter.com/reynantem.