
Our world in 5000 AD is incomplete without our mutated human being, Mr. Green. Mr. Green is a rigged model, exported from Blender. Famous 3D games from *Counter Strike* to *World of Warcraft* use skinned models to deliver convincing real-world character animation and kinematics. Hence, our learning now has to evolve to load Mr. Green and add the same quality of animation to our game.

We will start our study of character animation by discussing the skeleton, the base upon which a character's body and motion are built. Then, we will learn about skinning: how the bones of the skeleton are attached to the vertices, and how their animations work. In this article, we will cover the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.

# Understanding the basics of a character's skeleton

A character's skeleton is a posable framework of bones. These bones are connected by articulated joints, arranged in a hierarchical data structure. The skeleton is generally not rendered; instead, it is used as an invisible armature to position and orient a character's skin.

The joints are used for relative movement within the skeleton. Each joint is represented by a 4 x 4 linear transformation matrix (a combination of rotation, translation, and scale). The character skeleton is usually set up using only simple rotational joints, as they are sufficient to model the joints of real animals.

Every joint has limited **degrees of freedom (DOFs)**. DOFs are the possible ranges of motion of an object. For instance, an elbow joint has one rotational DOF and a shoulder joint has three DOFs, as the shoulder can rotate along three perpendicular axes. Individual joints usually have one to six DOFs. Refer to the link http://en.wikipedia.org/wiki/Six_degrees_of_freedom to understand different degrees of freedom.

A joint local matrix is constructed for each joint. This matrix defines the position and orientation of each joint and is relative to the joint above it in the hierarchy. The local matrices are used to compute the world space matrices of the joint, using the process of forward kinematics. The world space matrix is used to render the attached geometry and is also used for collision detection.

The digital character skeleton is analogous to the real-world skeleton of vertebrates. However, the bones of our digital human character do not have to correspond one-to-one to actual anatomical bones. The required set of bones depends on the level of detail of the character. For example, you may or may not require cheek bones to animate facial expressions.

Skeletons are not just used to animate vertebrates but also mechanical parts such as doors or wheels.

# Comprehending the joint hierarchy

The topology of a skeleton is a tree, or an open directed graph. The joints are connected in a hierarchical fashion to a selected root joint. The root joint has no parent and is represented in the model's JSON file with a parent value of *-1*. All skeletons are kept as open trees without any closed loops. This restriction, though, does not prevent us from simulating kinematic loops where needed.

Each node of the tree represents a joint, also called a bone; we use both terms interchangeably. For example, the shoulder is a joint and the upper arm is a bone, but the transformation matrix of both objects is the same. So mathematically, we represent them as a single component with three DOFs; the amount of rotation of the shoulder joint is reflected in the upper arm's bone.
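To make the parent-index encoding concrete, here is a minimal sketch (illustrative bone names, not code from the chapter) that rebuilds the joint tree from a flat bones array in which each entry stores its parent's index and the root has a parent of -1:

```javascript
// Flat bones array, as a three.js-style JSON file would provide it.
var bones = [
  { name: "Back",     parent: -1 },
  { name: "Shoulder", parent:  0 },
  { name: "UpperArm", parent:  1 },
  { name: "Elbow",    parent:  2 }
];

// Build a children list for each bone so the tree can be traversed top-down.
function buildHierarchy(bones) {
  bones.forEach(function (bone) { bone.children = []; });
  var root = null;
  bones.forEach(function (bone) {
    if (bone.parent === -1) {
      root = bone; // the root joint has no parent
    } else {
      bones[bone.parent].children.push(bone);
    }
  });
  return root;
}

var root = buildHierarchy(bones);
console.log(root.name);             // "Back"
console.log(root.children[0].name); // "Shoulder"
```

This keeps the file format flat while still giving us the open tree structure described above.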

The following figure shows a simple robotic bone hierarchy:

# Understanding forward kinematics

Kinematics is a mathematical description of a motion without the underlying physical forces. Kinematics describes the position, velocity, and acceleration of an object. We use kinematics to calculate the position of an individual bone of the skeleton structure (skeleton pose). Hence, we will limit our study to position and orientation. The skeleton is purely a kinematic structure. Forward kinematics is used to compute the world space matrix of each bone from its DOF value. Inverse kinematics is used to calculate the DOF values from the position of the bone in the world.

Let's dive a little deeper into forward kinematics and study a simple case of a bone hierarchy that starts from the shoulder, moves to the elbow, and finally to the wrist. Each bone/joint has a local transformation matrix, *this.modelMatrix*. This local matrix is calculated from the bone's position and rotation. Let's say the model matrices of the wrist, elbow, and shoulder are *this.modelMatrix_wrist*, *this.modelMatrix_elbow*, and *this.modelMatrix_shoulder* respectively. The world matrix is the transformation matrix that will be used by shaders as the model matrix, as it denotes the position and rotation in world space.

The world matrix for the wrist will be:

```
this.worldMatrix_wrist = this.worldMatrix_elbow * this.modelMatrix_wrist
```

The world matrix for the elbow will be:

```
this.worldMatrix_elbow = this.worldMatrix_shoulder * this.modelMatrix_elbow
```

If you look at the preceding equations, you will realize that to calculate the exact location of a wrist in the world space, we need to calculate the position of the elbow in the world space first. To calculate the position of the elbow, we first need to calculate the position of shoulder. We need to calculate the world space coordinate of the parent first in order to calculate that of its children. Hence, we use depth-first tree traversal to traverse the complete skeleton tree starting from its root node.

A depth-first traversal begins by calculating the *modelMatrix* of the root node and then traverses down through each of its children. A child node is visited and subsequently all of its children are traversed; after all the children are visited, control is transferred back to the parent node. We calculate a joint's world matrix by concatenating its parent's world matrix and its own local matrix. This computation of calculating a local matrix from the DOFs and then the world matrix from the parent's world matrix is defined as forward kinematics.
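The traversal above can be sketched as follows. This is an illustrative, dependency-free version: a plain column-major 4 x 4 multiply stands in for glMatrix's *mat4.mul*, and the joints carry only translation DOFs so the numbers are easy to check by hand:

```javascript
// Column-major 4x4 multiply: returns a * b (both 16-element arrays).
function mat4Multiply(a, b) {
  var out = new Array(16);
  for (var col = 0; col < 4; col++) {
    for (var row = 0; row < 4; row++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// Local matrix with a translation DOF only (identity rotation).
function translation(x, y, z) {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1];
}

// Depth-first traversal: a joint's worldMatrix is its parent's worldMatrix
// concatenated with its own local modelMatrix; the root copies its own.
function updateWorldMatrix(joint, parentWorld) {
  joint.worldMatrix = parentWorld
    ? mat4Multiply(parentWorld, joint.modelMatrix)
    : joint.modelMatrix.slice();
  (joint.children || []).forEach(function (child) {
    updateWorldMatrix(child, joint.worldMatrix);
  });
}

var wrist    = { modelMatrix: translation(0, -1, 0), children: [] };
var elbow    = { modelMatrix: translation(0, -2, 0), children: [wrist] };
var shoulder = { modelMatrix: translation(0,  5, 0), children: [elbow] };
updateWorldMatrix(shoulder, null);
// Translation lives in elements 12..14 of a column-major matrix:
console.log(wrist.worldMatrix[13]); // 2  (5 - 2 - 1)
```

The wrist ends up at world height 2 because the shoulder's, elbow's, and wrist's translations compose along the chain, exactly as the two equations above describe.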

Let's now define some important terms that we will often use:

**Joint DOFs:** A movable joint's motion can generally be described by six DOFs: three for translation and three for rotation. DOF is a general term:

```javascript
this.position = vec3.fromValues(x, y, z);
this.quaternion = quat.fromValues(x, y, z, w);
this.scale = vec3.fromValues(1, 1, 1);
```

We use quaternion rotations to store rotational transformations to avoid issues such as gimbal lock. The quaternion holds the DOF values for rotation around the *x*, *y*, and *z* axes.

**Joint offset**: Joints have a fixed offset position in the parent node's space. When we skin a joint, we change the position of each joint to match the mesh. This new fixed position acts as a pivot point for the joint's movement. The pivot point of an elbow is at a fixed location relative to the shoulder joint. This position is denoted by a vector in the joint's local matrix and is stored in elements 12, 13, and 14 of the matrix array (the translation component of the column-major 4 x 4 matrix). The offset matrix also holds the initial rotational values.

# Understanding the basics of skinning

The process of attaching a renderable skin to its articulated skeleton is called skinning. There are many skinning algorithms depending on the complexity of the task. However, for gaming, the most common algorithm is smooth skinning. Smooth skinning is also known as multi-matrix skinning, blended skinning, or linear blend skinning.

## Simple skinning

Binding is a common term in skinning. It refers to the initial assignment of the vertices of a mesh to underlying joints, and the assignment of the relevant information to the vertices. In simple skinning, we attach every vertex in our mesh to exactly one joint. When we change the orientation of any joint in the skeleton, in other words, when the skeleton is posed, each vertex is transformed using its joint's world matrix. Hence, if a vertex is attached to a single joint, it is transformed using the equation *v' = v · M_joint*, where *M_joint* is the joint's world space matrix.

Simple skinning is not adequate for complex models, because it attaches a vertex to exactly one joint. For example, a vertex at the elbow of your articulated arm is affected by two bones, the lower arm and the upper arm; the transformation of that vertex should be affected by the joint matrices of both bones.

## Smooth skinning

Smooth skinning is an extension of simple skinning. We can attach a vertex to more than one joint. Each attachment to a joint is given a weight value. The key point is that the sum of all weights affecting a vertex is 1, as shown in the following formula:

*Σw_i = 1, that is, w_1 + w_2 + w_3 + w_4 + ... + w_n = 1*

The final transformed vertex position is the weighted average of the initial vertex position transformed by each of the attached joints. However, before deriving the formula for the vertex position, let's first understand the concept of the binding matrix. The binding matrix *B_i* for joint *i* transforms coordinates from joint local space to skin local space. To transform a point from skin local space to joint local space, we use *B_i^-1*, the inverse of the binding matrix.
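As a quick illustration of the weight constraint (a hypothetical helper, not part of the chapter's code), raw influences painted in a modeling tool can be normalized so that they sum to 1:

```javascript
// Normalize a vertex's bone weights so they sum to 1, as smooth
// skinning requires.
function normalizeWeights(weights) {
  var sum = weights.reduce(function (s, w) { return s + w; }, 0);
  return weights.map(function (w) { return w / sum; });
}

var w = normalizeWeights([2, 1, 1]); // raw influences from a modeling tool
console.log(w);                      // [0.5, 0.25, 0.25]
console.log(w[0] + w[1] + w[2]);     // 1
```

Exporters typically perform this normalization for us, but it is worth verifying when loading data from unfamiliar tools.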

## The binding matrix

Although the binding matrix is a simple concept, it sometimes baffles even the most intelligent of minds. When we draw a mesh, each vertex is given a position relative to the model's center; it is as if the model were centered at the origin of our world. During modeling, we create the skeleton along with the skin. Each bone/joint at this point has a zero DOF value; we call this pose the zero pose. However, during the skinning process, we change the position of each joint to match the mesh. This pose is called the binding pose. Note that we change the DOFs (position and angles) of each joint to match the vertices. The initial DOF values of the binding pose for each joint form the binding matrix, or we can say, the initial joint matrix. This matrix transforms any position from joint local space to skin local space. Remember that each vertex is defined in skin local space. Hence, to transform a coordinate from skin local space to joint local space, we use the inverse joint matrix *B_i^-1*.

During animation, we change the DOF values (position and rotations) of a joint, but these values are in joint local space. Hence, the final vertex is transformed using *M_i = B_i^-1 · W_i*, where *W_i* is the joint matrix in world space. First, we transform a vertex from skin local space to joint local space, and then we transform it using the joint's world space matrix. For a pose or animation frame, we calculate *M_i* for all joints and pass this final transformation matrix as a uniform to the shader, so that we do not have to recalculate it for the other vertices attached to the same joint, as shown in the following code snippet:

```javascript
// Compute the offset between the current and the original transform.
mat4.mul(offsetMatrix, this.bones[b].skinMatrix, this.boneInverses[b]);
```

## The final vertex transformation

The final vertex transformation is the weighted average of the initial vertex position transformed by each of the attached joints: *v' = Σ w_i · v · M_i*, where *M_i = B_i^-1 · W_i* and *w_i* is the weight value of joint *i* for the vertex.

In most cases, a vertex is shared between two bones, and at most four. Hence, for simplicity, our code only handles skinned models whose vertices are shared with a maximum of two joints.

```glsl
vec4 skinVertex = vec4(aVertexPosition, 1.0);
vec4 skinned = boneMatX * skinVertex * skinWeight.x;
skinned += boneMatY * skinVertex * skinWeight.y;
```

In the preceding code, *boneMatX* is the offset matrix for bone X with its contributing weight in *skinWeight.x*, and *boneMatY* is an offset matrix of the second bone with its contributing weight in *skinWeight.y*.

The transformation computation is performed in the vertex shader.
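The same weighted average can be checked on the CPU. The following sketch (illustrative only; translation-only bone matrices, column-major layout as glMatrix uses) applies two offset matrices to one vertex and blends the results with the two weights:

```javascript
// Apply a column-major 4x4 matrix to a point [x, y, z, 1].
function transformPoint(m, v) {
  return [
    m[0] * v[0] + m[4] * v[1] + m[8]  * v[2] + m[12],
    m[1] * v[0] + m[5] * v[1] + m[9]  * v[2] + m[13],
    m[2] * v[0] + m[6] * v[1] + m[10] * v[2] + m[14]
  ];
}

// Weighted average of the vertex transformed by each bone's offset matrix.
function skinVertex(v, boneMatX, boneMatY, wx, wy) {
  var a = transformPoint(boneMatX, v);
  var b = transformPoint(boneMatY, v);
  return [
    a[0] * wx + b[0] * wy,
    a[1] * wx + b[1] * wy,
    a[2] * wx + b[2] * wy
  ];
}

// Bone X pulls the vertex +2 in x, bone Y pulls it -2; equal weights cancel.
var boneMatX = [1,0,0,0, 0,1,0,0, 0,0,1,0,  2,0,0,1];
var boneMatY = [1,0,0,0, 0,1,0,0, 0,0,1,0, -2,0,0,1];
console.log(skinVertex([1, 1, 0], boneMatX, boneMatY, 0.5, 0.5)); // [1, 1, 0]
```

This mirrors what the vertex shader does with *boneMatX*, *boneMatY*, and *skinWeight*, only on a single vertex at a time.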

## The final normal transformation

We also need to transform our vertex normals, as the lighting calculation uses them. The normals are treated in a similar fashion to vertices, but as normals only specify direction, not position, and are of unit length, we first compute the weighted average of the bone matrices and then multiply the normal by the combined *skinMatrix*, which saves one matrix multiplication, as shown in the following code snippet:

```glsl
mat4 skinMatrix = skinWeight.x * boneMatX;
skinMatrix += skinWeight.y * boneMatY;
vec4 skinnedNormal = skinMatrix * vec4(aVertexNormal, 0.0);
transformedNormal = vec3(nMatrix * skinnedNormal);
```

# Loading a rigged JSON model

We will first understand how the bone DOFs and skinning information are encoded in the *three.js* JSON file format (version 3.1). Then, we will modify our code to load the data. The JSON file is exported from Blender.

## Understanding JSON file encoding

The JSON file contains bone DOF values and their corresponding skinning information. Open *model/obj/mrgreen.json* in your favorite text editor. The file now has four new arrays: *bones*, *skinIndices*, *skinWeights*, and *animation*.

The *bones* array contains the DOF information. It holds the binding matrix and its parent's information, as shown in the following code:

```javascript
"bones" : [
  { "parent": -1, "name": "Back",
    "pos": [0.000000, -0.123622, -0.149781],
    "rotq": [0, 0, 0, 1] },
  {}, {} // ...
];
```

Each element of the bones array holds the following four elements:

- *parent*: This element holds the hierarchical information of the skeleton. Each bone holds its parent's index. The root bone has a parent index of -1, denoting that it does not have any parent.
- *name*: This element holds the name of the bone.
- *pos*: This element is a vector and holds the position of each bone with respect to its parent.
- *rotq*: Each bone's rotation is expressed as a quaternion (*x, y, z,* and *w*) with respect to its parent.

For each vertex (*x, y, z, x1, y1,* and *z1*) in the *vertices* array, there are two values defined in the *skinIndices* array (*a, b, a1*, and *b1*) and two in the *skinWeights* array. As discussed earlier in the *Understanding the basics of skinning* section, we use the smooth skinning algorithm to store weights and skinning information. The *three.js* JSON model format (https://github.com/mrdoob/three.js/wiki/JSONModel-format-3.1) allows only two attached bones per vertex. Hence, for each vertex, we have two corresponding *skinIndices* and *skinWeights* entries. Although a vertex may be associated with more than two bones, this is rarely required in gaming; it would seldom happen that a vertex is significantly affected by three bones simultaneously. The *skinIndices* array holds indices of bones in the *bones* array.

```javascript
vertices:    [x, y, z, x1, y1, z1, x2, y2, z2, ..., xn, yn, zn];
skinIndices: [a, b, a1, b1, a2, b2, ..., an, bn];
skinWeights: [z, w, z1, w1, z2, w2, ..., zn, wn];
bones: [];
```

The preceding arrays denote the following:

- The vertex (x, y, z) is attached to bones[a] and bones[b] with weights z and w.
- The vertex (x1, y1, z1) is attached to bones[a1] and bones[b1] with weights z1 and w1.
- The vertex (x2, y2, z2) is attached to bones[a2] and bones[b2] with weights z2 and w2.

A vertex might be associated with a single bone, but we will still have two skin indices (a and b) and two skin weights (z and w) associated with it. In this case, one of the two skin weights will be 1 and the other will be 0, denoting that only one of the bones affects the vertex.
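The indexing scheme can be sketched with a small helper (hypothetical, not from the chapter's code): vertex *i* reads entries *2i* and *2i + 1* of both flat arrays:

```javascript
// Look up the two bone indices and two weights for vertex i.
function bonesForVertex(i, skinIndices, skinWeights) {
  return {
    bones:   [skinIndices[2 * i], skinIndices[2 * i + 1]],
    weights: [skinWeights[2 * i], skinWeights[2 * i + 1]]
  };
}

var skinIndices = [0, 1, 1, 2, 2, 3];            // two bone indices per vertex
var skinWeights = [0.7, 0.3, 1.0, 0.0, 0.6, 0.4]; // two weights per vertex

var v1 = bonesForVertex(1, skinIndices, skinWeights);
console.log(v1.bones);   // [1, 2]
console.log(v1.weights); // [1, 0]  (only bones[1] affects this vertex)
```

Vertex 1 here illustrates the single-bone case: its second weight is 0, so only the first listed bone contributes.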

## Loading the rigged model

We will first modify our parsing algorithm to accommodate our newly discovered arrays.

Open *primitive/parseJSON.js* in your favorite text editor. We have added a new *parseSkin* function as follows:

```javascript
function parseSkin(data, geometry) {
  var i, l, x, y, z, w, a, b, c, d;

  if (data.skinWeights) {
    for (i = 0, l = data.skinWeights.length; i < l; i += 2) {
      x = data.skinWeights[i];
      y = data.skinWeights[i + 1];
      z = 0;
      w = 0;
      geometry.skinWeights.push(x);
      geometry.skinWeights.push(y);
      geometry.skinWeights.push(z);
      geometry.skinWeights.push(w);
    }
  }

  if (data.skinIndices) {
    for (i = 0, l = data.skinIndices.length; i < l; i += 2) {
      a = data.skinIndices[i];
      b = data.skinIndices[i + 1];
      c = 0;
      d = 0;
      geometry.skinIndices.push(a);
      geometry.skinIndices.push(b);
      geometry.skinIndices.push(c);
      geometry.skinIndices.push(d);
    }
  }

  geometry.bones = data.bones;
  geometry.animation = data.animation;
}
```

The function simply iterates over the *skinIndices* and *skinWeights* arrays in our data object and stores four values per vertex in the corresponding geometry arrays. Note that although our JSON data has two bones per vertex, we still store four values (the last two as zero, *c = 0; d = 0*), so that our geometry class can handle data with two to four bones per vertex.

We also save the data for bones and animation information in the geometry object.

## Enhancing the StageObject class

Our *StageObject* class had two shortcomings:

- It did not have any provision to handle child objects or a tree hierarchy.
- It used rotation matrices, but the bone objects in the *bones* array use quaternion rotations.

The following code shows the earlier use of *modelMatrix* to store rotations around the *x, y,* and *z* axes:

```javascript
StageObject.prototype.update = function(steps) {
  mat4.identity(this.modelMatrix);
  mat4.translate(this.modelMatrix, this.modelMatrix, this.location);
  mat4.rotateX(this.modelMatrix, this.modelMatrix, this.rotationX);
  mat4.rotateY(this.modelMatrix, this.modelMatrix, this.rotationY);
  mat4.rotateZ(this.modelMatrix, this.modelMatrix, this.rotationZ);
}
```

Let's walk through the changes we have made to overcome the shortcomings. Open *primitive/StageObject.js* in your editor, and take a look at the following code:

```javascript
StageObject = function() {
  ...
  this.parent = undefined;
  this.children = [];
  this.up = vec3.fromValues(0, 1, 0);
  this.position = vec3.create();
  this.quaternion = quat.create();
  this.scale = vec3.fromValues(1, 1, 1);
  this.matrixWorld = mat4.create();
  this.matrixAutoUpdate = true;
  this.matrixWorldNeedsUpdate = true;
  this.visible = true;
};
```

First, we added a few variables: *quaternion* holds the rotation DOF, *location* has been renamed to *position*, and the new variables *scale* and *matrixWorld* have been added. If a *StageObject* is a child object, then its final matrix, *matrixWorld*, is the concatenation of its parent's *matrixWorld* and its own *modelMatrix*.

The *parent* object and the *children* array have been added to hold the parent and children information.

Two new variables, *matrixAutoUpdate* and *matrixWorldNeedsUpdate*, have been added to reduce computation time. In our previous code packets, we calculated the *modelMatrix* of each *StageObject* on every animation frame. Now, we only recalculate the matrices when any of the DOFs (*scale*, *quaternion*, or *position*) change. On any DOF update, we set *matrixWorldNeedsUpdate* to *true*; only then are *modelMatrix* and *matrixWorld* recalculated, after which the flag is reset to *false*.
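The dirty-flag idea can be sketched in isolation (a simplified stand-in for *StageObject*, with a counter in place of the real matrix computation, so the saving is visible):

```javascript
// Minimal dirty-flag sketch: recompute only when a DOF has changed.
function Node() {
  this.position = [0, 0, 0];
  this.matrixWorldNeedsUpdate = true;
  this.updates = 0; // counts how often the matrix is actually rebuilt
}
Node.prototype.setPosition = function (x, y, z) {
  this.position = [x, y, z];
  this.matrixWorldNeedsUpdate = true; // mark dirty on any DOF change
};
Node.prototype.updateMatrixWorld = function () {
  if (this.matrixWorldNeedsUpdate) {
    this.updates++; // stand-in for the real matrix computation
    this.matrixWorldNeedsUpdate = false;
  }
};

var n = new Node();
n.updateMatrixWorld(); // frame 1: dirty, recompute
n.updateMatrixWorld(); // frame 2: clean, skipped
n.setPosition(1, 2, 3);
n.updateMatrixWorld(); // frame 3: dirty again, recompute
console.log(n.updates); // 2
```

Three frames are rendered but the matrix is only rebuilt twice; for a skeleton with dozens of mostly idle bones, this saving adds up every frame.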

```javascript
StageObject.prototype.rotate = function(radianX, radianY, radianZ) {
  quat.rotateX(this.quaternion, this.quaternion, radianX);
  quat.rotateY(this.quaternion, this.quaternion, radianY);
  quat.rotateZ(this.quaternion, this.quaternion, radianZ);
}
StageObject.prototype.setRotationFromAxisAngle = function(axis, angle) {
  // Assumes axis is normalized.
  quat.setAxisAngle(this.quaternion, axis, angle);
}
StageObject.prototype.setRotationFromMatrix = function(m) {
  // Assumes the upper 3 x 3 of m is a pure rotation matrix (that is, unscaled).
  quat.fromMat3(this.quaternion, m);
}
StageObject.prototype.setRotationFromQuaternion = function(q) {
  // Assumes q is normalized.
  this.quaternion = quat.clone(q);
}
StageObject.prototype.rotateOnAxis = function(axis, angle) {
  // Rotate the object on an axis in object space.
  // The axis is assumed to be normalized.
  quat.setAxisAngle(this.quaternion, axis, angle);
}
StageObject.prototype.rotateX = function(angle) {
  var v1 = vec3.fromValues(1, 0, 0);
  return this.rotateOnAxis(v1, angle);
}
StageObject.prototype.rotateY = function(angle) {
  var v1 = vec3.fromValues(0, 1, 0);
  return this.rotateOnAxis(v1, angle);
}
StageObject.prototype.rotateZ = function(angle) {
  var v1 = vec3.fromValues(0, 0, 1);
  return this.rotateOnAxis(v1, angle);
}
```

The preceding set of functions either initializes the quaternion or simply updates it with new values. The implementation of the preceding functions uses the *quat* class of the *glMatrix* library.

```javascript
StageObject.prototype.translateOnAxis = function(axis, distance) {
  // Translate the object by distance along an axis in object space.
  // The axis is assumed to be normalized.
  var v1 = vec3.create();
  vec3.copy(v1, axis);
  vec3.transformQuat(v1, v1, this.quaternion);
  vec3.scale(v1, v1, distance);
  vec3.add(this.position, this.position, v1);
  return this;
}
StageObject.prototype.translateX = function() {
  var v1 = vec3.fromValues(1, 0, 0);
  return function(distance) {
    return this.translateOnAxis(v1, distance);
  };
}();
StageObject.prototype.translateY = function() {
  var v1 = vec3.fromValues(0, 1, 0);
  return function(distance) {
    return this.translateOnAxis(v1, distance);
  };
}();
StageObject.prototype.translateZ = function() {
  var v1 = vec3.fromValues(0, 0, 1);
  return function(distance) {
    return this.translateOnAxis(v1, distance);
  };
}();
```

The preceding set of functions translates *StageObject* along the given axis. The key function is *translateOnAxis*, and all other functions are dependent on it.

```javascript
StageObject.prototype.localToWorld = function(vector) {
  var v1 = vec3.create();
  // Use transformMat4 here, since matrixWorld is a mat4, not a quaternion.
  vec3.transformMat4(v1, vector, this.matrixWorld);
  return v1;
};
StageObject.prototype.worldToLocal = function() {
  var m1 = mat4.create();
  return function(vector) {
    mat4.invert(m1, this.matrixWorld);
    var v1 = vec3.create();
    vec3.transformMat4(v1, vector, m1);
    return v1;
  };
}();
```

The preceding functions transform any vector from the world space to the object's local space and vice versa.

```javascript
StageObject.prototype.add = function(object) {
  if (object === this) {
    return;
  }
  if (object.parent !== undefined) {
    object.parent.remove(object);
  }
  object.parent = this;
  //object.dispatchEvent({ type: 'added' });
  this.children.push(object); // Add to the scene graph.
};
StageObject.prototype.remove = function(object) {
  var index = this.children.indexOf(object);
  if (index !== -1) {
    object.parent = undefined;
    //object.dispatchEvent({ type: 'removed' });
    this.children.splice(index, 1);
  }
}
```

The *add* function pushes the object onto its *children* array and sets the object's *parent* value to itself, after ensuring that the child maintains the open-graph structure: it first checks whether the object already has a parent and, if so, removes the object from that parent's list by invoking the parent's *remove* function.

The *remove* function unsets the *parent* of the object and deletes it from its *children* array.

```javascript
StageObject.prototype.traverse = function(callback) {
  callback(this);
  for (var i = 0, l = this.children.length; i < l; i++) {
    this.children[i].traverse(callback);
  }
}
StageObject.prototype.getObjectById = function(id, recursive) {
  for (var i = 0, l = this.children.length; i < l; i++) {
    var child = this.children[i];
    if (child.id === id) {
      return child;
    }
    if (recursive === true) {
      child = child.getObjectById(id, recursive);
      if (child !== undefined) {
        return child;
      }
    }
  }
  return undefined;
}
StageObject.prototype.getObjectByName = function(name, recursive) {
  for (var i = 0, l = this.children.length; i < l; i++) {
    var child = this.children[i];
    if (child.name === name) {
      return child;
    }
    if (recursive === true) {
      child = child.getObjectByName(name, recursive);
      if (child !== undefined) {
        return child;
      }
    }
  }
  return undefined;
}
StageObject.prototype.getChildByName = function(name, recursive) {
  return this.getObjectByName(name, recursive);
}
StageObject.prototype.getDescendants = function(array) {
  if (array === undefined) array = [];
  Array.prototype.push.apply(array, this.children);
  for (var i = 0, l = this.children.length; i < l; i++) {
    this.children[i].getDescendants(array);
  }
  return array;
}
```

We have also added traversal functions to locate child objects either by ID or by name. The key function is *traverse*; it calls itself recursively, following the depth-first search algorithm.

```javascript
StageObject.prototype.updateMatrix = function() {
  mat4.identity(this.modelMatrix);
  mat4.fromQuat(this.modelMatrix, this.quaternion);
  mat4.scale(this.modelMatrix, this.modelMatrix, this.scale);
  this.modelMatrix[12] = this.position[0];
  this.modelMatrix[13] = this.position[1];
  this.modelMatrix[14] = this.position[2];
  this.matrixWorldNeedsUpdate = true;
}
```

The preceding function is the most significant change from the previous code. Earlier, we used rotation matrices to compute the object's transformation matrix; now we use the quaternion to calculate the model matrix (*mat4.fromQuat(this.modelMatrix, this.quaternion)*). We then scale the object with the provided scale vector, and finally place the *position* vector in elements 12, 13, and 14 of the transformation matrix, the translation component of the column-major matrix.

```javascript
StageObject.prototype.updateMatrixWorld = function(force) {
  if (this.matrixAutoUpdate === true) this.updateMatrix();
  if (this.matrixWorldNeedsUpdate === true || force === true) {
    if (this.parent === undefined) {
      // matrixWorld is a glMatrix array, so copy it with mat4.copy.
      mat4.copy(this.matrixWorld, this.modelMatrix);
    } else {
      mat4.mul(this.matrixWorld, this.parent.matrixWorld, this.modelMatrix);
    }
    this.matrixWorldNeedsUpdate = false;
    force = true;
  }
  // Update the children.
  for (var i = 0, l = this.children.length; i < l; i++) {
    this.children[i].updateMatrixWorld(force);
  }
}
StageObject.prototype.update = function(steps) {
  this.updateMatrixWorld();
}
```

Another interesting function is *updateMatrixWorld*. It first invokes *updateMatrix* if *matrixAutoUpdate* is *true*. The function then checks the value of *parent*: if *parent* is not defined, *modelMatrix* is copied to *matrixWorld*; otherwise, the object's *matrixWorld* is computed by concatenating the parent's *matrixWorld* and the object's *modelMatrix* (*mat4.mul(this.matrixWorld, this.parent.matrixWorld, this.modelMatrix)*). Then, we iterate over all the object's children to compute their new world matrices. We have also updated our *update* function: it invokes *updateMatrixWorld* when called from our main control code.

# Summary

In this article, we covered the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.
