"If I have seen further it is by standing on the shoulders of giants."
-- Sir Isaac Newton
A graphical revolution was taking place. The command-line operating system was finally giving way to the graphical user interface. Toy Story, the first computer-animated feature film, premiered and showed that 3D could be a storytelling medium with engaging characters. Moreover, there was a small group of committed, talented people with visions of an interactive Web3D, combined with enthusiasm and dreams of virtual reality. So much was in place for Web3D's growth: fast technology, 3D modeling tools for creation, and talented people with vision. The first specification, the Virtual Reality Modeling Language (VRML), was also born, although it served as a prototype for its soon-to-follow replacement, VRML 2.0. Web3D was a small representative of the dot-com boom. A mix of startups and technology giants entered the arena with varied engineering approaches. Some, such as Pulse 3D and Activeworlds (which, true to its name, is still active), opted for their own technology. Others, such as Sony, Microsoft, and Silicon Graphics, built their own VRML browsers and plugins.
With time, however, the Web could no longer just be about fun, irrelevant stuff such as live video cameras focused on fish tanks. It had to become economically viable, and thus, the more frivolous applications gave way to search engines, online banking, and e-commerce from the likes of Yahoo!, Amazon, and America Online. The early pioneers of Web3D and VRML were ahead of their time and deserve acknowledgement for their great work. Their efforts were not futile, but Web3D's day would come another time. The public needed the familiar medium of text, photos, and streaming audio and video. Interactive 3D was finding its early adopters elsewhere: gamers, people who embraced new technology for fun.
3D was basically an entertainment medium used in movies such as Toy Story and Shrek or in video games such as Doom. Game development was rather tedious. Programmers had to create software known as drivers for every model of graphics card, similar to how peripherals today, such as printers and scanners, must each have their own device driver. Each video game shipped with its own unique graphics card drivers. The industry needed a better solution to interface between video games and graphics cards. Thus, an industry-standard interface was born: a set of commands that said "you game developers, build your game using these commands" and "you graphics chip makers, make sure you accept these commands". The end result was that any game could run on any graphics card.
A graphics interface can be thought of as a gas station; any car can purchase gas from any gas station, and we need not worry about our car accepting gas only from a particular vendor. The two leading graphics interfaces at the end of the millennium were the Open Graphics Library (OpenGL) (1992) and DirectX (1995). OpenGL came from Silicon Graphics Incorporated (SGI) and was designed for computer graphics in animation and special effects, mostly in movies and commercials. Thus, OpenGL did not provide the mouse or keyboard interactivity needed to play video games. OpenGL also did not include sound; audio would simply be combined with the computer-generated video in a film-editing process.
The other graphics interface was DirectX, originally known as the Game Developers Kit from Microsoft. Launched in the mid-1990s, DirectX provided additional support for programming video games, such as interfaces to the mouse, keyboard, and game controllers, as well as support to control audio. Game developers could use DirectX to load their 3D models; move and rotate them; specify lights and properties; and receive mouse, keyboard, and game controller commands.
OpenGL was picked up by Khronos (www.khronos.org), a consortium of graphics and computer companies that ensured its growth. Khronos' mission is to create open standards for all computing media. It was a broad agenda that incorporated mobile graphics and the Web.
Meanwhile, file formats also needed an industry standard. It was clear that information was being shared among organizations, businesses, and over the Internet. There was a need for a worldwide standard for the World Wide Web. XML, the eXtensible Markup Language, was launched in the late 1990s. It specified a format to share data and ways to validate whether that format is correct. Each industry would come up with its own standards; the most prevalent was HTML, which adopted the XML standard to become XHTML, a more rigorous standard that enforced a more consistent file format.
VRML 2.0 gained stature as a 3D mesh file-sharing format and was exported from major 3D modeling programs. Now was the time to jump on the XML bandwagon, and thus, X3D was born. It had the features of VRML but was now in a standardized XML format. VRML and X3D were under the direction of the Web3D Consortium (http://www.web3d.org/), a group of outstanding, intelligent, dedicated people with a vision and commitment for 3D on the Web. As an XML document, X3D could be extended for specific applications, such as medical applications, computer-aided design (CAD) for mechanical engineers, and avatars. Collada is another file format from Khronos with a broader scope for other 3D applications, but with X3D, the community is well served.
Khronos and the Web3D Consortium brought different companies together to produce unified standards and solutions. An issue with standards is that companies with vested financial interests in their own technologies have to compromise and perhaps lose a technical advantage. In the long run, however, we end up with a greater good that has standards, and some companies continue to support their own features as extensions to those standards. Often, the right path cannot be discovered until we have tried other, unsuccessful paths, and we learned a lot in the early days. Remarkably, the early inventors got much of it right, even for products that had not yet been invented or conceived, such as smartphones. Perhaps the only criticism was redundancy; there were multiple commands in OpenGL to accomplish the same functions. A little streamlining was in order, and thus, OpenGL ES (for Embedded Systems, 2003) gave us a powerful 3D graphics interface for low-power, resource-constrained devices.
Technical breakthroughs are often a synergy of near-simultaneous events. The Internet had been around since about 1970, nearly a quarter of a century, but was not yet a commercial success. Then a convergence of hardware and software took place. Fax modems became commonplace on most PCs. Netscape, the first commercially successful web browser, was born. Consumers were introduced to the Internet via AOL (America Online), and while primitive by today's standards, the graphical user interface was ubiquitous thanks to Windows, following concepts introduced years earlier by the Macintosh (and Xerox, if we wish to be precise). Web3D was undergoing its own technical breakthrough with HTML5, OpenGL ES, X3D, and one more innovation—shader languages—also known as GPU (Graphics Processing Unit) programming.
The earlier versions of OpenGL and the streamlined OpenGL ES used the fixed-function pipeline method. A 3D mesh, which is simply a list of vertices and how they are connected—think of a Rose Parade Float formed with chicken wire—would go through a series of steps known as the 3D graphics pipeline. The pipeline would perform the following tasks:
Transform the object to translate (move), rotate, and scale it
Transform the object into the camera's point-of-view
Convert the scene into a perspective view so that it appears on the screen the same way we would perceive it with our eyes in the real world
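These stages can be sketched with a little matrix math. The following is a minimal, illustrative sketch (not actual fixed-function pipeline code, and all names here are made up): a model matrix translates a vertex, a view matrix moves it into the camera's space, and a perspective divide projects it onto the screen.

```javascript
// Multiply a 4x4 row-major matrix by a 4-component vector.
function matVec(m, v) {
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++) {
    out[r] = m[r*4]*v[0] + m[r*4+1]*v[1] + m[r*4+2]*v[2] + m[r*4+3]*v[3];
  }
  return out;
}

// Stage 1: model transform -- translate the object 2 units to the right.
const model = [1,0,0,2,  0,1,0,0,  0,0,1,0,  0,0,0,1];
// Stage 2: view transform -- a camera at (0, 0, 10) is equivalent to
// moving the whole scene 10 units along -z.
const view  = [1,0,0,0,  0,1,0,0,  0,0,1,-10,  0,0,0,1];

const vertex = [0, 0, 0, 1];                      // a vertex at the origin
const eye = matVec(view, matVec(model, vertex));  // [2, 0, -10, 1]

// Stage 3: perspective projection -- divide x and y by the distance from
// the camera, scaled by the focal length f = 1 / tan(fieldOfView / 2).
const f = 1 / Math.tan(0.785 / 2);
const screenX = f * eye[0] / -eye[2];  // roughly 0.48
const screenY = f * eye[1] / -eye[2];  // 0
```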
Traditionally, all the programming was done on the CPU, which passed the 3D meshes and object transformations to the GPU in order to draw or render the colored dots on the screen. The GPU is simply another computer chip specifically designed for this final drawing. It is programmable, and its multiprocessor capability means that it can operate on multiple vertices simultaneously. Innovators began programming GPUs, and eventually, formal programming languages were designed to program the GPU: shader languages.
Shader languages give developers fine-grained programmatic control over each pixel, vertex, light, and texture. With the earlier fixed-function pipeline, we only controlled the final location of the vertices and let the GPU interpolate between the vertices to draw the polygons. Now, with shader languages, we can calculate lighting, shadows, and rough surfaces, and blend texture maps on a pixel-by-pixel basis. A great advantage of GPU programming is that the OpenGL ES standard is shared across many products, so the same shader language code written for an iPhone works on an Android phone. Finally, all was in place: WebGL overcame the plugin issues of the previous Web3D attempts, X3D would be the latest file format based on the XML community standard, and shader languages would give us improved performance and image quality on a pixel-by-pixel basis.
We now venture into Web3D by building our first X3D objects. This will also introduce you to the 3D scene graph, which describes how objects are specified: first as primitives, such as boxes and spheres, and then as more complex 3D models built by artists. We will also apply textures to these 3D meshes and include cameras, lights, animation, and interactivity.
X3D is a great language to specify a 3D scene without doing any programming. It is also a great learning tool. Best of all, it provides instant gratification. If you have never created anything in 3D, you will now be able to create something in a few minutes.
Transformations – translation, rotation, and scaling
Lights, camera, action!
Navigating between multiple viewports
Animation with interpolators
Adding texture maps to 3D meshes
Lighting a scene and shading 3D objects with normals
Most X3D and WebGL development requires little more than what comes on a standard computer—be it a PC, Macintosh, or other device; I would not doubt that one can create and test Web3D on a smartphone or a tablet.
Firefox is the preferred browser for testing. Google Chrome's security restrictions will not allow you to read 3D objects off the hard drive, which means you would have to upload your 3D objects, texture maps, and WebGL to your website before testing. Firefox relaxes these restrictions and will enable you to test your work from your hard drive.
As we dive deeper into Web3D creation, you may need to configure your server to enable MIME types such as .obj for 3D models. You may want to consult your server administrator to check this.
Some websites that are worth bookmarking are as follows:
The Web3D Consortium: This website (http://www.web3d.org/) defines the X3D file format and has the latest news
X3Dom: This website (http://www.x3dom.org/) has the libraries that are used for our X3D demonstrations
The Khronos Group: This website (http://www.khronos.org/) is the consortium that oversees the OpenGL specification and defines WebGL
3D-Online: This website (http://www.3D-Online.com) is for book demonstrations and author contact information
A picture is worth a thousand words, and this is, after all, a book on 3D graphics. So, let's get started with the fun stuff! Two technologies will be demonstrated: WebGL and X3D. WebGL is related to X3D, but X3D is better for demonstrating simple objects. Since we will be building the Solar System, X3D is better suited to show the three rotations of the Earth—the 24-hour daily rotation, the seasonal tilt, and the 365.25-day annual revolution around the Sun. The Moon is a child of the Earth: wherever the Earth goes, the Moon follows with its own independent rotation around the Earth. To assist the parsing of these X3D files, we shall use X3DOM (X3D with the Document Object Model (DOM)), a publicly available library. WebGL, by itself, is ideal for displaying a 3D mesh, whereas X3D better represents the connections between objects. For example, an X3D file can show the hierarchy of an animated hand rotating at the wrist, which is connected to the lower arm that rotates at the elbow, and then to the shoulder.
Programming tradition states that the first program shall be called "Hello World" and simply displays these words. 3D also has "Hello World"; however, it displays the simplest 3D object—a box.
The <Box> node is one of several simple primitives included in X3D; the other shape nodes include <Cone>, <Cylinder>, and <Sphere>. Such nodes, written within angled brackets < >, will often be used to describe X3D.
The <Box> shape node is used in the following code. Within the <head> and </head> tags are two references that link this file to the X3DOM library. The first, x3dom.js, parses the X3D file and loads the data onto the graphics card using OpenGL commands; it takes care of a lot of low-level coding, which we will look at later using WebGL. The x3dom.css file sets some parameters, similar to any CSS file. Thus, a basic knowledge of HTML is helpful for developing WebGL. Some of the other tags in the preceding code relate to the validation process of all XML documents, such as the DOCTYPE information.
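For reference, a minimal X3DOM page follows the pattern below. This is a sketch, not the book's exact listing; the x3dom.js and x3dom.css URLs assume the standard X3DOM distribution, and your copies may live elsewhere:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Hello Box</title>
    <!-- The two references that link this file to X3DOM -->
    <script src="http://www.x3dom.org/download/x3dom.js"></script>
    <link rel="stylesheet" href="http://www.x3dom.org/download/x3dom.css"/>
  </head>
  <body>
    <X3D width="500px" height="400px">
      <Scene>
        <Shape>
          <Appearance>
            <Material diffuseColor="0.9 0.6 0.3"/>
          </Appearance>
          <Box/>
        </Shape>
      </Scene>
    </X3D>
  </body>
</html>
```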
The heart of the matter begins with the <X3D> tag being embedded into a standard XHTML document. It can also include X3D version information, width and height data, and other identifiers that will be used later. There is also a pair of <Scene> and </Scene> tags within which all the 3D data is contained. The <Shape> tags contain a single 3D mesh and specify the geometry and appearance of that mesh. Here, we have a single <Box/> tag and the <Appearance> tag, which specifies a texture map and/or a <Material> tag with several properties: the diffuse color that will be blended with the color of our scene's lights, the emissive or glow-simulation color, the object's specular highlights (such as a bright spot reflecting the Sun on a car's hood), and any transparency. Colors in 3D are the same as those used on the Web—red, green, and blue—though 3D colors span between 0 and 1, whereas the Web often uses hexadecimal numbers from 0x00 through 0xFF. In the preceding code, the diffuse color used is 0.9 red, 0.6 green, and 0.3 blue for a light orange color.
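The mapping between web hex colors and X3D's 0-to-1 colors is a simple scale by 255. The helper below is an illustrative sketch (the function name is made up):

```javascript
// Convert a web color such as "#E69A4D" into X3D's 0..1 floats.
function hexToX3D(hex) {
  const r = parseInt(hex.slice(1, 3), 16) / 255;
  const g = parseInt(hex.slice(3, 5), 16) / 255;
  const b = parseInt(hex.slice(5, 7), 16) / 255;
  return [r, g, b];
}

// Roughly 0.9 red, 0.6 green, 0.3 blue: the light orange used here.
const color = hexToX3D("#E69A4D");
```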
The box also shows some shading with its brightest sides facing us. This is because there is a default headlight in the scene, which is positioned in the direction of the camera. This scene has no camera defined; in this case, a default camera will be inserted at the position (0, 0, 10), which is 10 units towards us along the z axis and points towards the origin (0, 0, 0). If you run this program (which you should), you will be able to rotate the viewpoint (camera) around the origin with the default headlight attached to the viewpoint. We will address lighting later, as lighting is a very important and complex part of 3D graphics.
We are off to a good start. Now, let's add two or more objects. If we don't want everything sitting in the same place, we need a way to position the objects in the vast universe of 3D space. The most common way is by using transformation.
We don't alter the original 3D object, but just apply some math to each point in the 3D mesh to rotate, translate (move), and/or scale the object, as follows:
<Scene>
  <Transform translation="-2 -3 -3" rotation=".6 .8 0 .5">
    <Shape>
      <Appearance>
        <Material diffuseColor='0.9 0.6 0.3'/>
      </Appearance>
      <Box/>
    </Shape>
  </Transform>
  <Transform translation="2 2.5 1" rotation="0 0 1 -.5">
    <Shape>
      <Appearance>
        <Material diffuseColor='0.3 0.9 0.6'/>
      </Appearance>
      <Cone/>
    </Shape>
  </Transform>
  <Transform translation="-1 0 0" scale=".5 .5 .5">
    <Shape>
      <Appearance>
        <Material diffuseColor='0.6 0.3 0.9'/>
      </Appearance>
      <Cylinder/>
    </Shape>
  </Transform>
  <Transform translation="1 0 0">
    <Shape>
      <Appearance>
        <Material diffuseColor='0.6 0.3 0.9'/>
      </Appearance>
      <Cylinder/>
    </Shape>
  </Transform>
</Scene>
Each <Shape> tag is now embedded into a <Transform> tag. The first object, the box, has a translation of (-2, -3, -3), which moves it two units to the left, three units downwards, and three units backward from the origin. It also has a rotation of (0.6, 0.8, 0, 0.5), which will be discussed in more detail later, but the first three values represent the x, y, and z axes, respectively, and the fourth value is the angle of rotation in radians (π radians = 180 degrees). Also, note that the sum of the squares of the x, y, and z values equals 1: x² + y² + z² = 1.
The second object is a cone translated two units to the right, 2.5 units upwards, and one unit forward with a rotation of 0.5 radians around the z axis (like the hands of a clock). The third and fourth objects are both cylinders with a uniform 0.5 scale on the left cylinder, which means that it's half its default size. Note that the scale does not need to be the same value for all three axes.
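If a rotation axis is not already a unit vector, it can be normalized before use. Here is a small sketch (the function name is made up):

```javascript
// Scale a 3D vector so that x² + y² + z² = 1.
function normalize(v) {
  const len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
  return [v[0]/len, v[1]/len, v[2]/len];
}

// (3, 4, 0) has length 5; normalized, it becomes (0.6, 0.8, 0),
// the rotation axis used by the box above.
const axis = normalize([3, 4, 0]);
```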
This scene retains the first two objects created previously and adds a point light, which can be thought of as a light bulb: light from a single point emanating in all directions. We turned off the headlight inside the <NavigationInfo> tag and, at the same time, restricted movement in the scene by setting the type field to NONE, simply to introduce this as part of the demo. The <Viewpoint> tag, or camera, is also introduced with its default orientation (rotation) value and a fieldOfView value that defaults to π/4, that is, 0.785 radians. The added code is as follows:
<Scene>
  <NavigationInfo headlight="FALSE" type='"NONE"'/>
  <PointLight location="0 3 2"/>
  <Viewpoint position="0 0 10" orientation="0 0 1 0" fieldOfView=".785"/>
  <Transform …>
The point light is 3 units up and 2 units in front, so it clearly shines on the top of the box and to the left-hand side of the cone but not on the left-hand side of the box or the bottom of the cylinder.
3D space is wonderful to travel freely, but we also want specific cameras for users and ways to navigate between them. The following figure depicts the same scene from three different viewpoints with a single spotlight on the left-hand side of the image along the negative x axis pointing towards the origin (0, 0, 0). The image on the left shows the initial default camera, the image in the middle shows the view from the left along the x axis from the <Viewpoint> node named vp1, and the image on the right is from the <Viewpoint> node labeled vp2 at a 45-degree angle between the positive x axis and the positive z axis:
In the following code, we have three <Viewpoint> nodes. These cameras are in the default position (id="origin"), on the left-hand side of the scene (id="vp1"), and at a 45-degree angle on the right (id="vp2"). All cameras face the origin. Clicking on the <Box> node directs us to the camera vp1, clicking on the cone animates us to vp2, and clicking on the small blue <Box> sends us back to our original <Viewpoint>. Note that the order does matter; the scene begins with the first <Viewpoint> node listed:
<Scene>
  <NavigationInfo headlight="FALSE" type='"NONE"'/>
  <SpotLight location="-5 0 0" direction="1 0 0"/>
  <Viewpoint id="origin" position="0 0 10" orientation="0 0 1 0" fieldOfView=".785"/>
  <Viewpoint id="vp1" orientation="0 1 0 -1.57" position="-12 0 0"></Viewpoint>
  <Viewpoint id="vp2" orientation="0 1 0 .785" position="10 0 10"></Viewpoint>
  <Transform translation="-2 0 -2">
    <Shape>
      <Appearance>
        <Material diffuseColor='1 0.75 0.5'/>
      </Appearance>
      <Box onclick="document.getElementById('vp1').setAttribute('set_bind','true');"/>
    </Shape>
  </Transform>
  <Transform translation="2 2.5 1" rotation="0 0 1 -.5">
    <Shape>
      <Appearance>
        <Material diffuseColor='0.5 1 0.75'/>
      </Appearance>
      <Cone onclick="document.getElementById('vp2').setAttribute('set_bind','true');"/>
    </Shape>
  </Transform>
  <Transform scale=".25 .25 .25">
    <Shape>
      <Appearance>
        <Material diffuseColor='0 0 0' emissiveColor='0.75 0.5 1'/>
      </Appearance>
      <Box onclick="document.getElementById('origin').setAttribute('set_bind','true');"/>
    </Shape>
  </Transform>
</Scene>
The small <Box> node is placed at the origin for reference. Its diffuse color is black and is thus unaffected by any lights. Instead, its emissive color is light purple, though it does not actually emit light; for that, we would need additional lighting objects to give the impression that it glows. The onclick event handler, onclick="document.getElementById('origin')…", is the same one that HTML web developers have seen while programming interactive websites. The rest of the line, setAttribute('set_bind','true');, is X3D's way of setting the viewpoint named origin to be the current or bound camera. Note that the spotlight in the image in the middle does not have the rounded edge that a flashlight typically produces. Lights without GPU or shader languages are limited to calculating the light at each vertex and interpolating the light across the polygon. By contrast, shader languages calculate these images on a pixel-by-pixel basis in the GPU's multiprocessors, so the process is quite fast. We will see more of this and of shader languages' contribution to 3D graphics imagery later.
Animation comes in the following forms:
The Moon orbiting around the Earth, which in turn orbits around the Sun, offers a lot of good 3D graphics concepts to review. Note that the Earth's transform is inside the Sun's transform, and the Moon's transform is contained inside the Earth's transform. Thus, wherever the Earth goes as it rotates around the Sun, the Moon will follow. Also, not only is the Earth 10 units away from the Sun, but its center is at -10 units, which means that the center of the Earth's rotation is the Sun. The Earth also rotates around its own axis once a day; we will show this later. Have a look at the following screenshot:
Lay out the code with all the objects such as the Sun, Earth, and Moon, along with all the <TimeSensor> nodes and interpolators, before the ROUTE nodes. The order is important, just as we must have the <Transform> nodes embedded properly to represent the annual, seasonal, and daily rotations of the Earth, as shown in the following code:
<Scene>
  <Viewpoint orientation="1 0 0 -.3" position="0 8 30"/>
  <NavigationInfo headlight="false"/>
  <PointLight/>
  <Transform DEF="Sun">
    <Shape>
      <Sphere radius="2.5"/>
      <Appearance>
        <Material diffuseColor="1 1 0" emissiveColor="1 .5 0"/>
      </Appearance>
    </Shape>
    <Transform DEF="Earth" center="-10 0 0" translation="10 0 0">
      <Shape>
        <Sphere radius="1.2"/>
        <Appearance>
          <Material diffuseColor=".2 .4 .8"/>
        </Appearance>
      </Shape>
      <Transform DEF="Moon" center="-3 0 0" translation="3 0 0">
        <Shape>
          <Sphere radius=".6"/>
          <Appearance>
            <Material diffuseColor=".4 .4 .4"/>
          </Appearance>
        </Shape>
      </Transform>
    </Transform>
  </Transform>
  <TimeSensor DEF="yearTimer" cycleInterval="36.5" loop="true"/>
  <OrientationInterpolator DEF="YRotation" key="0 .5 1"
    keyValue="0 1 0 0  0 1 0 3.14  0 1 0 6.28"/>
  <ROUTE fromField="fraction_changed" fromNode="yearTimer"
    toField="set_fraction" toNode="YRotation"/>
  <ROUTE fromField="value_changed" fromNode="YRotation"
    toField="rotation" toNode="Earth"/>
  <TimeSensor DEF="moonTimer" cycleInterval="2.9" loop="true"/>
  <OrientationInterpolator DEF="YRotMoon" key="0 .5 1"
    keyValue="0 1 0 0  0 1 0 3.14  0 1 0 6.28"/>
  <ROUTE fromField="fraction_changed" fromNode="moonTimer"
    toField="set_fraction" toNode="YRotMoon"/>
  <ROUTE fromField="value_changed" fromNode="YRotMoon"
    toField="rotation" toNode="Moon"/>
</Scene>
The <TimeSensor> nodes, interpolators, and ROUTE nodes create the key frame animation. The <TimeSensor> node specifies the duration (cycleInterval) of the animation, which is 36.5 seconds here, where each day is represented by one tenth of a second. Interpolators specify keys and key values such that for each key, there must be a corresponding keyValue. Since this is a rotation, or change in orientation, we use <OrientationInterpolator>. There are also <PositionInterpolator> and <ColorInterpolator> nodes that move the 3D mesh and change its color, respectively, over time. Unlike films, where we have a fixed 30 frames per second, in real-time animations, we can have more or fewer frames per second depending on how complex the scene is and also on the performance of our CPU.
We will break down the interpolator at the three keys when the time equals 0, 0.5, and 1, as follows:
The <OrientationInterpolator> node says that at time = 0, the rotation will be (0, 1, 0, 0), meaning a rotation of 0 radians around the y axis.
At time = 0.5, which is 18.25 seconds (half of 36.5 seconds) here, the rotation will be (0 1 0 3.14) or 3.14 radians, which is 180 degrees around the y axis.
Finally, at time = 1, which is 36.5 seconds, the rotation will be 6.28 radians, which is a full 360-degree circle around the y axis.
So, why do we have to put in a midpoint such as 180 degrees? The problem is that the <OrientationInterpolator> node optimizes rotation distances to be the smallest. For example, a 120-degree rotation clockwise ends at the same orientation as a 240-degree rotation counter-clockwise, and the <OrientationInterpolator> node will take the shortest route and rotate 120 degrees clockwise. If you wanted to force a 240-degree counter-clockwise rotation, you'd need to add a midpoint.
Finally, we have to connect the timer to the interpolator and to the 3D object that has to be rotated. The ROUTE node directs value(s) from a timer or sensor to another sensor or object. The first ROUTE node takes the output of the yearTimer node's fraction_changed field and sends it to YRotation.set_fraction. Note the passing of a fraction value within the ROUTE node: the timer counts from 0 to 36.5 seconds and divides this value by 36.5 so that it becomes a value between 0 and 1. The orientation interpolator receives this fraction value and interpolates between the two surrounding key values. For example, at 10 seconds, the fraction is 10/36.5 or 0.274, which falls between the keys 0 and 0.5 in the <OrientationInterpolator> node. This becomes a key value of (0 1 0 1.72), about 98.6 degrees (1.72 radians × 180 degrees/π).
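That fraction-to-rotation math can be sketched as follows. This is an illustrative sketch of linear key interpolation, not X3DOM's internal code, and the function name is made up:

```javascript
// Linearly interpolate an angle from keys and key angles, X3D-interpolator style.
function interpolateAngle(fraction, keys, angles) {
  for (let i = 0; i < keys.length - 1; i++) {
    if (fraction <= keys[i + 1]) {
      const t = (fraction - keys[i]) / (keys[i + 1] - keys[i]);
      return angles[i] + t * (angles[i + 1] - angles[i]);
    }
  }
  return angles[angles.length - 1];
}

// The yearTimer at 10 of 36.5 seconds produces fraction 10/36.5, about 0.274.
const fraction = 10 / 36.5;
// Interpolate within the keys (0, .5, 1) and key angles (0, 3.14, 6.28).
const angle = interpolateAngle(fraction, [0, 0.5, 1], [0, 3.14, 6.28]); // about 1.72 radians
const degrees = angle * 180 / Math.PI;                                  // about 98.6 degrees
```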
The second ROUTE node sends the interpolated keyValue from the <OrientationInterpolator> node and sets the rotation values of the <Transform> node defined as Earth, using the DEF="Earth" statement inside the <Transform> node. And with this, we nearly have our Solar System; we just have to add more planets.
So far, our objects have been pretty basic shapes in a single color; that's why they are called primitives. Of course, 3D objects are far more complex and require the talent of artists. Texture maps help us understand how complex-looking 3D objects are assembled from simple ones. Have a look at the following figure:
A solid colored triangle is the simplest object to draw. I often suggest drawing objects on paper and labelling the vertices depending on how they are connected because it can be very tedious to form shapes from memory:
<Shape>
  <Appearance>
    <Material diffuseColor="1 1 0"/>
  </Appearance>
  <IndexedFaceSet coordIndex="0 1 2 -1">
    <Coordinate point="-2 2 0  -2 -2 0  2 -2 0"/>
  </IndexedFaceSet>
</Shape>
Instead of specifying a box or sphere, the shape in the preceding code consists of an IndexedFaceSet with three points, or vertices, connected in the order listed by coordIndex="0 1 2 -1". Vertex 0 is connected to vertex 1, vertex 1 is connected to vertex 2, and vertex 2 is connected back to vertex 0. The side that faces us is determined by the right-hand rule: using your right hand, curve your fingers in the order of the vertices; the direction of your thumb is the direction that the polygon faces. So, as you curve your fingers counter-clockwise, if your thumb points towards you, the polygon is visible to you. Here, the vertices are connected in a counter-clockwise order.
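The right-hand rule corresponds to a cross product of the triangle's edge vectors. The quick sketch below (function name made up) confirms that the triangle above faces the +z axis, towards the viewer:

```javascript
// Cross product of two 3D vectors.
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1],
          a[2]*b[0] - a[0]*b[2],
          a[0]*b[1] - a[1]*b[0]];
}

// The triangle's vertices, in the counter-clockwise order 0, 1, 2.
const v0 = [-2,  2, 0], v1 = [-2, -2, 0], v2 = [2, -2, 0];
const edge1 = [v1[0]-v0[0], v1[1]-v0[1], v1[2]-v0[2]];  // v0 -> v1
const edge2 = [v2[0]-v0[0], v2[1]-v0[1], v2[2]-v0[2]];  // v0 -> v2
// [0, 0, 16]: points along positive z, so the triangle faces the viewer.
const normal = cross(edge1, edge2);
```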
Let's add a texture map of a wonderful Basset Hound dog with a wet, pink tongue. The camera has been slightly rotated to distinguish this 3D object from a flat image. Have a look at the following screenshot:
Have a look at the following code:
<Shape>
  <Appearance>
    <ImageTexture url="./textureMaps/bassethound.jpg"/>
  </Appearance>
  <IndexedFaceSet coordIndex="0 1 2 -1  2 3 0 -1" texCoordIndex="0 1 2 -1  2 3 0 -1">
    <Coordinate point="-2 2 0  -2 -2 0  2 -2 0  2 2 0"/>
    <TextureCoordinate point="0 1  0 0  1 0  1 1"/>
  </IndexedFaceSet>
</Shape>
In the preceding code, a fourth coordinate point has been added to form two triangles, and the IndexedFaceSet node's coordIndex now specifies both of them. Triangles are preferred over polygons of four or more vertices for the same reason that a three-legged chair won't wobble but a four-legged chair may: one leg may be longer than the others and not sit flat on the ground. At least, this is a nontechnical, noncomplex answer. Three-vertex polygons are always flat, or planar, whereas four-vertex polygons can be bent. Additionally, for artists, exporting a 3D mesh using only triangles is often just a matter of selecting a checkbox.
The <Appearance> tag now has an <ImageTexture> tag instead of a <Material> tag and specifies an image, similar to how HTML would embed an image into a web page. A texture map on a 3D mesh is like hanging wallpaper on a wall: we paste the wallpaper by its corners, and we need to align the corners of the wall with the correct corners of the wallpaper; otherwise, the wallpaper gets hung sideways or upside down. The <TextureCoordinate> point specifies which corner of the texture map is placed at each vertex of the 3D mesh. The lower-left corner of a texture map is (0, 0), and the upper-right corner is (1, 1); the first value is along the x axis, and the second value is along the y axis. Each <TextureCoordinate> point gets aligned to the corresponding <Coordinate> point vertex. For example, the first <Coordinate> point is (-2, 2, 0), which is the upper-left vertex, and the first <TextureCoordinate> point is (0, 1), which is the upper-left corner of the texture map.
The final texture map demonstration shows the <TextureTransform> tag, which is often used for tiling (such as repeating a brick wall pattern) but can also shift and rotate images; texture maps can even be animated to create nice water effects. The three images in the preceding screenshot show tiling with a 3 x 2 texture map scale in the upper-left 3D mesh, rotation of the texture map by 0.2 radians in the image on the right, and translation of the texture map by 0.3 units and -0.6 units in the image in the lower-left-hand corner. In the following code, within the <Scene> tags, there are three <Transform> tags, one for each <Shape> node:
<Scene>
  <Transform translation="-3 2 -3">
    <Shape>
      <Appearance>
        <ImageTexture DEF="basset" url="./textureMaps/bassethound.jpg"/>
        <TextureTransform scale="3 2"/>
      </Appearance>
      <IndexedFaceSet DEF="bassetIFS" coordIndex="0 1 2 -1  2 3 0 -1"
        texCoordIndex="0 1 2 -1  2 3 0 -1">
        <Coordinate point="-2 2 0  -2 -2 0  2 -2 0  2 2 0"/>
        <TextureCoordinate point="0 1  0 0  1 0  1 1"/>
      </IndexedFaceSet>
    </Shape>
  </Transform>
  <Transform translation="2 1 -2">
    <Shape>
      <Appearance>
        <ImageTexture USE="basset"/>
        <TextureTransform rotation=".2"/>
      </Appearance>
      <IndexedFaceSet USE="bassetIFS"/>
    </Shape>
  </Transform>
  <Transform translation="-3 -3 -4">
    <Shape>
      <Appearance>
        <ImageTexture USE="basset"/>
        <TextureTransform translation=".3 -.6"/>
      </Appearance>
      <IndexedFaceSet USE="bassetIFS"/>
    </Shape>
  </Transform>
</Scene>
One of the great effects in 3D graphics is to create surfaces such as a weathered brick wall, the bumpy skin of an orange, or ripples on the surface of water. Often, these surfaces are flat, but we can blend images together or paint the texture with these bumps. They can even be animated so that water appears to be flowing, such as in a waterfall. To create this realistic, weathered, and irregular look, we use vertex normals. A normal is simply a three-dimensional vector usually at a right angle to the polygon and often generated by 3D modeling tools.
Each of the images in the preceding screenshot of the dog uses the same lighting, texture maps, and polygons; they differ only by their normals. In the upper-left image, the normals are all set to (0, 0, 1), which means that they point right back at the light, and thus each corner is fully lit. However, the lower-left image has its normals randomly set, and thus they do not point back at the light source. The image on the upper-right-hand side has the normal in its top-right corner set 90 degrees away; therefore, this corner appears as a dark spot. Finally, the lower-right image has its normals pointing at an angle to the light, and thus the entire image is dark. The following code contains four <Transform> nodes with identical shapes and texture maps, differing only by the vector values in their <Normal> nodes (thus, much of the repeated code has been left out):
<Transform translation="-2.25 2.25 -2">
  <Shape>
    <Appearance DEF='bassetHoundImage'>
      <Material diffuseColor='1 1 1'/>
      <ImageTexture url='textureMaps/bassethound.jpg'/>
    </Appearance>
    <IndexedFaceSet coordIndex='0 1 2 -1 3 2 1 -1'
        normalIndex='0 1 2 -1 3 2 1 -1'
        texCoordIndex='0 1 2 -1 3 2 1 -1'>
      <Coordinate DEF="coords" point='-2 -2 0, 2 -2 0, -2 2 0, 2 2 0'/>
      <Normal vector='0 0 1, 0 0 1, 0 0 1, 0 0 1'/>
      <TextureCoordinate DEF="textureCoord" point='0 0, 1 0, 0 1, 1 1'/>
    </IndexedFaceSet>
  </Shape>
</Transform>
<Transform translation="-2.25 -2.25 -2">
  <Shape>
    <Appearance USE='bassetHoundImage'/>
    <IndexedFaceSet coordIndex='0 1 2 -1 3 2 1 -1'
        normalIndex='0 1 2 -1 3 2 1 -1'
        texCoordIndex='0 1 2 -1 3 2 1 -1'>
      <Coordinate USE="coords"/>
      <Normal vector='-.707 -.5 .5, 0 0 1, -.707 .707 0, 0 .8 .6'/>
      <TextureCoordinate USE="textureCoord"/>
    </IndexedFaceSet>
  </Shape>
</Transform>
<Transform translation="2.25 2.25 -2">
  …
  <Normal vector='0 0 1, 0 0 1, 0 0 1, 1 0 0'/>
  …
</Transform>
<Transform translation="2.25 -2.25 -2">
  …
  <Normal vector='.63 .63 .48, .63 .63 .48, .63 .63 .48, .63 .63 .48'/>
  …
</Transform>
Only the X3D code for the first two images is shown in full. The
USE attribute allows us to share the same
<Coordinate> and
<TextureCoordinate> nodes between the shapes. For the two textured 3D meshes on the right, only the
<Normal> node within each
<Transform> node is shown. Note that each
<Normal> vector is a unit vector, which means that it has a length of 1; its three-dimensional (x, y, z) values have the property x² + y² + z² = 1.
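This unit-length property is easy to enforce programmatically. The following JavaScript sketch (the normalize function is our own helper, not part of X3D) scales an arbitrary vector to unit length:

```javascript
// Scale a 3D vector so that x*x + y*y + z*z === 1.
// X3D expects <Normal> vectors to be unit length for correct lighting.
function normalize([x, y, z]) {
  const length = Math.sqrt(x * x + y * y + z * z);
  return [x / length, y / length, z / length];
}

// The upper-left dog's normals, (0, 0, 1), are already unit length:
console.log(normalize([0, 0, 1]));   // [0, 0, 1]
// An arbitrary vector such as (1, 1, 1) must be scaled down:
console.log(normalize([1, 1, 1]));   // roughly [0.577, 0.577, 0.577]
```

3D modeling tools normally export unit-length normals, but hand-edited values such as those in the preceding code should be checked this way.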
Normals play a major role in shader languages; they allow you to create a realistic look under complex lighting. We shall revisit normals later, but let's look at one small piece of math here. The amount of light reaching a polygon at each vertex is calculated by multiplying the (opposite direction of the) light vector with the normal vector. Both must be unit vectors, which means each should have a length of 1 before you multiply them. The light vector
L is multiplied with the normal vector
N using the dot product: (Lx, Ly, Lz) · (Nx, Ny, Nz) = Lx * Nx + Ly * Ny + Lz * Nz. The result lies between -1 and 1 (inclusive of both); for any value less than zero, the light comes from behind the object, leaving the vertex in the dark. Incidentally, this amount of light at a vertex equals the cosine of the angle between the two vectors. For example, if the angle between the light vector and the normal vector is 30 degrees, then the dot product equals cos(30°) = 0.866, and thus about 86.6 percent of the light will reach this vertex.
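This calculation can be sketched in a few lines of JavaScript (the dot function is our own helper for illustration, not an X3D API):

```javascript
// Dot product of two 3D vectors; for unit vectors this equals the
// cosine of the angle between them, i.e. the diffuse light intensity.
function dot([lx, ly, lz], [nx, ny, nz]) {
  return lx * nx + ly * ny + lz * nz;
}

// Light shining down the -z axis; its opposite direction is (0, 0, 1).
const toLight = [0, 0, 1];

// A normal pointing straight back at the light receives full intensity:
console.log(dot(toLight, [0, 0, 1]));                  // 1
// A normal tilted 30 degrees away receives cos(30°), about 0.866:
const tilted = [Math.sin(Math.PI / 6), 0, Math.cos(Math.PI / 6)];
console.log(dot(toLight, tilted));                     // ≈ 0.866
// A negative result means the light comes from behind; clamp it to 0:
console.log(Math.max(0, dot(toLight, [0, 0, -1])));    // 0
```

Shader languages perform exactly this computation per vertex (or per pixel), which is why the randomly set normals in the lower-left dog image produce uneven lighting.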
We conclude this 3D graphics overview with our first application: a Solar System with several cameras and planets undergoing multiple animations. First, we will look at the organization of these objects in a condensed version of the X3D code (the
<IndexedFaceSet> node for the Earth was removed here since it consists of thousands of values).
The Earth consists of three
<Transform> nodes: one for its rotation around the Sun, one for the seasons, and one for the 24-hour day. Note that in the Earth's outermost transformation, the
center (0, 0, 10) is at the same distance from the Sun as the Earth's translation (0, 0, -10). This is because the center of the Earth's yearly rotation is the Sun. The rotation for the Earth's seasons is around the z axis, set to 0.4 radians, or about 23 degrees, the actual tilt of the Earth. The final, inner
<Transform> node controls the Earth's daily rotation. The Moon's
<Transform> is a child of the Earth's annual rotation, since the Moon's rotation is centered on the Earth. Thus, the Moon's translation is 3 units (3, 0, 0) from the Earth, but its center is 3 units behind (-3, 0, 0). Of course, the Moon is unaffected by the Earth's seasonal and daily rotations, and thus the Moon's
<Transform> node is a child of the Earth's outermost year
<Transform> but not of the Earth's seasonal or daily transformations.
<NavigationInfo headlight="FALSE"/>
<Viewpoint id="mainCamera" orientation="1 0 0 -.3" position="0 8 30"/>
<Viewpoint id="aboveCamera" orientation="1 0 0 -1.57" position="0 180 0"/>
<PointLight/>
<Transform DEF="Sun">
  <Shape>
    <Sphere radius="6"
        onclick="document.getElementById('aboveCamera')
            .setAttribute('set_bind','true');"/>
    <Appearance>
      <Material diffuseColor="1 1 0" emissiveColor="1 .7 0"/>
    </Appearance>
  </Shape>
</Transform>
<Viewpoint id="EarthCamera" position="0 2 0" orientation="0 1 0 0"/>
<Transform DEF="Earth" center="0 0 10" translation="0 0 -10" scale="2 2 2">
  <Transform DEF="Earth-Season" rotation="0 0 1 .4">
    <Transform DEF="Earth-Day">
      <Shape>
        …  IndexedFaceSet and coordinates
      </Shape>
    </Transform>
  </Transform>
  <Transform DEF="Moon" center="-3 0 0" translation="3 0 0">
    <Shape>
      <Sphere radius=".6"/>
      <Appearance>
        <Material diffuseColor=".4 .4 .4"/>
      </Appearance>
    </Shape>
  </Transform>
</Transform>
Saturn also has a
<Transform> node centered on the Sun, with two child
<Transform> nodes to control Saturn's day and its rings, which are constructed from a flat plane and a texture map with transparency.
<Viewpoint id="SaturnCamera" position="0 0 0" orientation="0 1 0 0" fieldOfView=".5"/>
<Transform DEF="Saturn" center="0 0 20" translation="0 0 -20" scale="4 4 4">
  <Transform DEF="SaturnDay">
    <Shape>
      <Appearance>
        <Material diffuseColor='1 1 1' specularColor='.1 .1 .1' shininess='0.9'/>
        <ImageTexture url="./textureMaps/saturn.jpg"/>
      </Appearance>
      <IndexedFaceSet USE='Sphere_GEOMETRY'/>
    </Shape>
  </Transform>
Saturn's rings are just a flat plane consisting of two polygons and a texture map. A
.gif image is used to allow transparent areas in the corners and in the center where Saturn sits. The rings stay slightly tilted towards the Sun throughout Saturn's rotation around the Sun, as shown in the figure that follows this code:
<Transform DEF="rings" rotation="1 0 0 .2">
  <Shape>
    <Appearance>
      <Material diffuseColor='1 1 1' specularColor='.2 .2 .2'
          shininess='0.8' emissiveColor=".1 .1 .1"/>
      <ImageTexture url="./textureMaps/SaturnRings.gif"/>
    </Appearance>
    <IndexedFaceSet coordIndex="0 1 2 -1 2 3 0 -1 2 1 0 -1 0 3 2 -1"
        texCoordIndex="0 1 2 -1 2 3 0 -1 2 1 0 -1 0 3 2 -1">
      <Coordinate point="-4 0 4, 4 0 4, 4 0 -4, -4 0 -4"/>
      <TextureCoordinate point="0 0, 1 0, 1 1, 0 1"/>
    </IndexedFaceSet>
  </Shape>
</Transform>
</Transform>
</Transform>
Animation is driven by a series of
<ROUTE> statements. A time fraction is sent via a
<ROUTE> from a
<TimeSensor> to an interpolator, which uses another
<ROUTE> to update the rotation or position in the object's
<Transform> node. The first four statements in the following code rotate the Earth around the Sun. The next set of three statements controls the seasonal tilt of the Earth, using the same
<TimeSensor> node with a rotation of +/- 0.4 radians around the z axis. The Earth's day rotation has its own four statements to control the 24-hour day, animated as one second. The Moon has its own independent animation, and finally, the camera focused on the Earth uses the same
<TimeSensor> node as the Earth's year and seasons. However, the camera focused on the Earth, the Earth's annual rotation, and the Earth's seasons each have their own
<OrientationInterpolator> nodes. Saturn has its own interpolators, one to rotate it around the Sun and one for its own day, but these are not shown in the following code:
<TimeSensor DEF="yearTimer" cycleInterval="36.5" loop="true"/>
<OrientationInterpolator DEF="yearlyRotation" key="0 .5 1"
    keyValue="0 1 0 0 0 1 0 3.14 0 1 0 6.28"/>
<ROUTE fromField="fraction_changed" fromNode="yearTimer"
    toField="set_fraction" toNode="yearlyRotation"/>
<ROUTE fromField="value_changed" fromNode="yearlyRotation"
    toField="rotation" toNode="Earth"/>
Earth's seasonal rotation, which has a tilt of 0.4 radians, is demonstrated in the following code:
<OrientationInterpolator DEF="seasonalRotation" key="0 .5 1"
    keyValue="0 0 1 .4 0 0 1 -.4 0 0 1 .4"/>
<ROUTE fromField="fraction_changed" fromNode="yearTimer"
    toField="set_fraction" toNode="seasonalRotation"/>
<ROUTE fromField="value_changed" fromNode="seasonalRotation"
    toField="rotation" toNode="Earth-Season"/>
Earth's day rotation, set to 1 second, is demonstrated in the following code:
<TimeSensor DEF="EarthDayTimer" cycleInterval="1" loop="true"/>
<OrientationInterpolator DEF="EarthDayRotation" key="0 .5 1"
    keyValue="0 1 0 0 0 1 0 3.14 0 1 0 6.28"/>
<ROUTE fromField="fraction_changed" fromNode="EarthDayTimer"
    toField="set_fraction" toNode="EarthDayRotation"/>
<ROUTE fromField="value_changed" fromNode="EarthDayRotation"
    toField="rotation" toNode="Earth-Day"/>
The Moon's independent rotation around the Earth is demonstrated in the following code:
<TimeSensor DEF="moonTimer" cycleInterval="5.8" loop="true"/>
<OrientationInterpolator DEF="YRotMoon" key="0 .5 1"
    keyValue="0 1 0 0 0 1 0 3.14 0 1 0 6.28"/>
<ROUTE fromField="fraction_changed" fromNode="moonTimer"
    toField="set_fraction" toNode="YRotMoon"/>
<ROUTE fromField="value_changed" fromNode="YRotMoon"
    toField="rotation" toNode="Moon"/>
To ensure that our camera stays focused on the Earth, the
<Viewpoint> node is also animated using the same year timer as the Earth, as shown in the following code:
<OrientationInterpolator DEF="EarthCameraRotation" key="0 .5 1"
    keyValue="0 1 0 0 0 1 0 3.14 0 1 0 6.28"/>
<ROUTE fromField="fraction_changed" fromNode="yearTimer"
    toField="set_fraction" toNode="EarthCameraRotation"/>
<ROUTE fromField="value_changed" fromNode="EarthCameraRotation"
    toField="orientation" toNode="EarthCamera"/>
The images in the preceding figure show the views from Saturn's and Earth's cameras, two of the four cameras in the scene. To track planets, the user needs HTML buttons to navigate from one planet to the next. Clicking on the Sun or the Earth will also take the user to the respective camera. Buttons that control an X3D scene are the same buttons used on any HTML page. What is unique to X3D is that element IDs such as
mainCamera are the id values of the
<Viewpoint> nodes in X3D, and the
setAttribute('set_bind', 'true') method, also a part of X3D, makes that viewpoint the new camera position, as shown in the following code:
<input type="button" value="Solar System View"
    onclick="document.getElementById('aboveCamera').setAttribute('set_bind','true');"/>
<input type="button" value="Sun"
    onclick="document.getElementById('mainCamera').setAttribute('set_bind','true');"/>
<input type="button" value="Earth"
    onclick="document.getElementById('EarthCamera').setAttribute('set_bind','true');"/>
<input type="button" value="Saturn"
    onclick="document.getElementById('SaturnCamera').setAttribute('set_bind','true');"/>
If you are new to the creation of 3D, one of the most fun aspects is the instant gratification from creating 3D scenes. In the subsequent projects, we will apply 3D graphics to familiar applications and see the places where 3D can be a more effective communication tool for users.