3D Websites

by Mitch Williams | May 2014

In this article by Mitch Williams, author of the book WebGL Hotshot, we will revisit X3D to create engaging scenes, and we will then add portals that transport us within a 3D website for faster navigation.


Creating engaging scenes

There is no adopted style for a 3D website, and no single metaphor best describes the process of designing the 3D web. Perhaps what we know most is what does not work. Often, our initial concept is to model the real world. An early design from years ago involved a university that wanted its campus map to be the navigation for its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would have been a bookshelf where everything was in front of you. To view the chemistry department, just grab the chemistry book and click on the virtual pages to view the faculty, curriculum, and other department information. And if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book.

Each attempt adds to our knowledge and gets us closer to something better. What we know is what most other applications of computer graphics have learned: reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential.

Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world; such items slow the rendering just by existing. Once we break these bounds, the creative process takes over: perhaps a whimsical version, a parody, something dark and scary, or a world emphasizing story. Characters in video games and animated movies take on stylized features; they are purposely unrealistic or exaggerated. Among the best animations to exhibit this are Chris Landreth's The Spine, Ryan (Academy Award for Best Animated Short Film in 2004), and his earlier work in psychologically driven animation, where characters are broken apart by the ravages of personal failure (https://www.nfb.ca/film/ryan).

This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and texture maps from previous demonstrations, but with more complex techniques.

Engage thrusters

This scene has two lampposts and three brick walls, yet we read in the texture map and 3D mesh for only one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, saving download time and memory. A new function, copyObject(), was created; it currently sits inside the main WebGL file, although it could be moved to mesh3dObject.js. In webGLStart(), after the original objects have been created, we call copyObject(), passing along the original object with a unique name, location, rotation, and scale. In the following code, we copy the original streetLight0Object into a new streetLight1Object:

streetLight1Object = copyObject( streetLight0Object, "streetLight1",
    streetLight1Location, [1, 1, 1], [0, 0, 0] );

Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale:

function copyObject(original, name, translation, scale, rotation) {
    meshObjectArray[ totalMeshObjects ] = new meshObject();
    newObject = meshObjectArray[ totalMeshObjects ];
    newObject.name = name;
    newObject.translation = translation;
    newObject.scale = scale;
    newObject.rotation = rotation;

The object to be copied is named original. We will not need to set up new buffers since the new 3D mesh can point to the same buffers as the original object:

    newObject.vertexBuffer = original.vertexBuffer;
    newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer;
    newObject.normalsBuffer = original.normalsBuffer;
    newObject.textureCoordBuffer = original.textureCoordBuffer;
    newObject.boundingBoxBuffer = original.boundingBoxBuffer;
    newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer;
    newObject.vertices = original.vertices;
    newObject.textureMap = original.textureMap;

We do need to create a new bounding box matrix, since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false; at this stage, we cannot determine whether the original mesh and texture map have been loaded, since that is done in the background:

    newObject.boundingBoxMatrix = mat4.create();
    newObject.meshLoaded = false;
    totalMeshObjects++;
    return newObject;
}

There is just one more addition, inside drawScene(), to inform us that the original 3D mesh and texture map(s) have been loaded:

streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded;
streetLightCover1Object.textureMap = streetLightCover0Object.textureMap;

This is set each time a frame is drawn, and is thus redundant once the mesh and texture map have been loaded, but the additional code is a very small performance hit. Similar steps are performed for the original brick wall and its two copies.
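For example, the two brick-wall copies would be created the same way. The object and location names below are assumptions for illustration, not the book's actual identifiers:

// Hypothetical names; both copies share the original wall's buffers.
brickWall1Object = copyObject( brickWall0Object, "brickWall1",
    brickWall1Location, [1, 1, 1], [0, 0, 0] );
brickWall2Object = copyObject( brickWall0Object, "brickWall2",
    brickWall2Location, [1, 1, 1], [0, 0, 0] );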

Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps, but the shading is more complex here, adding spotlights and light attenuation, where the light fades over a distance. The faint moonlight, however, does not fade over a distance.

Opening scene with four light sources: two streetlights, the Products neon sign, and the moon

This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers.
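The excerpt does not show how each mesh is matched to its shader. One plausible dispatch inside the render loop is sketched below; the isLightSource and hasNormalMap flags are hypothetical, not the book's identifiers:

// Hypothetical per-mesh shader selection; the flags are illustrative only.
var currentProgram;
if (mesh.isLightSource) {
    currentProgram = shaderIlluminated;        // moon, neon sign, light covers
} else if (mesh.hasNormalMap) {
    currentProgram = shaderLightsTextureMap;   // brick walls
} else {
    currentProgram = shaderLights;             // street, lamppost poles
}
gl.useProgram(currentProgram);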

The simplest of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA, which must be merged with its color:

vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0);

The light color also serves another purpose, as it will be passed on to the other two fragment shaders since each adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign.

The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black. Then, we make a call for each of our four light sources (two streetlights, the moon, and the neon sign) to the calculateLightContribution() function:

void main(void) {
    vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false);
    lightWeighting += uMoonLightColor *
        calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true);
    lightWeighting += uProductTextColor *
        calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true);

All four calls to calculateLightContribution() are multiplied by the light's color (off-white for the streetlights, gray for the moon, and pink for the neon sign). The parameters of calculateLightContribution(vec3, vec3, bool) are the location of the light, its direction, and a point-light flag: true for a point light that illuminates in all directions, or false for a spotlight that points in a specific direction. Since point lights such as the moon or the neon sign have no direction, their direction parameter is unused and is simply set to a default, vec3(0.0, 0.0, 0.0).

The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue; colors greater than 1.0 produce unpredictable results depending on the graphics card. So, the red, green, and blue light values must be capped at 1.0:

if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0;
if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0;
if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0;
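If preferred, GLSL's built-in min() expresses the same cap in one line, equivalent to the three if statements above:

// Clamp each channel of lightWeighting to at most 1.0.
lightWeighting = min( lightWeighting, vec3(1.0) );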

Finally, we calculate the pixel color based on the texture map. Only the street and the streetlight posts use this shader, and neither has any tiling, but the multiplication by uTextureMapTiling was included in case tiling is ever added. The fragmentColor value based on the texture map is multiplied by lightWeighting, the accumulation of our four light sources, to produce the final color of each pixel:

vec4 fragmentColor = texture2D(uSampler,
    vec2(vTextureCoord.s * uTextureMapTiling.s,
         vTextureCoord.t * uTextureMapTiling.t));
gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0);
}

In the calculateLightContribution() function, we begin by determining the angle between the light's direction and the pixel's normal. The dot product gives the cosine between the direction from the light to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance):

vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc);
vec3 vectorLightPosToPixel = normalize(distanceLightToPixel);
vec3 lightDirNormalized = normalize(lightDir);
float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal );

A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding that direction. For a pixel to be lit by a spotlight, the pixel must be inside this cone of light. Within the beam width, the pixel receives the full amount of light; the light then fades out towards the cut-off angle, beyond which no more light comes from this spotlight:

With texture maps removed, we reveal the value of the dot product between the pixel normal and direction of the light

if ( pointLight ) {
    lightAmt = 1.0;
} else {   // spotlight
    float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
    // note: uStreetLightBeamWidth and uStreetLightCutOffAngle
    // are the cosines of the angles, not the actual angles
    if ( angleLightToPixel >= uStreetLightBeamWidth ) {
        lightAmt = 1.0;
    } else if ( angleLightToPixel > uStreetLightCutOffAngle ) {
        lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
                   (uStreetLightBeamWidth - uStreetLightCutOffAngle);
    }
}

After determining the amount of light at the pixel, we calculate attenuation, the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it is dim already, but the other three lights fade out to a maximum distance. The float maxDist = 15.0; code snippet says that beyond 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, the amount of light is reduced proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15, or 1/3, of the light:

attenuation = 1.0;
if ( uUseAttenuation ) {
    if ( length(distanceLightToPixel) < maxDist ) {
        attenuation = (maxDist - length(distanceLightToPixel)) / maxDist;
    }
    else attenuation = 0.0;
}

Finally, we multiply the values that make the light contribution and we are done:

lightAmt *= angleBetweenLightNormal * attenuation;
return lightAmt;
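Stitching the preceding snippets together, the complete function might look like the following sketch. The declarations of lightAmt, attenuation, and maxDist, and the exact layout, are assumptions rather than the book's verbatim code:

// A sketch assembled from the snippets above; declarations are assumed.
float calculateLightContribution(vec3 lightLoc, vec3 lightDir, bool pointLight) {
    float lightAmt = 0.0;
    float attenuation = 1.0;
    float maxDist = 15.0;   // beyond 15 units, this light contributes nothing

    vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc);
    vec3 vectorLightPosToPixel = normalize(distanceLightToPixel);
    vec3 lightDirNormalized = normalize(lightDir);
    float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal );

    if ( pointLight ) {
        lightAmt = 1.0;
    } else {   // spotlight; beam width and cut-off are cosines of the angles
        float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
        if ( angleLightToPixel >= uStreetLightBeamWidth ) {
            lightAmt = 1.0;
        } else if ( angleLightToPixel > uStreetLightCutOffAngle ) {
            lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
                       (uStreetLightBeamWidth - uStreetLightCutOffAngle);
        }
    }

    if ( uUseAttenuation ) {
        if ( length(distanceLightToPixel) < maxDist ) {
            attenuation = (maxDist - length(distanceLightToPixel)) / maxDist;
        } else {
            attenuation = 0.0;
        }
    }

    lightAmt *= angleBetweenLightNormal * attenuation;
    return lightAmt;
}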

Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, becomes the normal (1.0, 0.0, -1.0). This normal is then converted to a unit value, or normalized, to (0.707, 0, -0.707):

vec4 textureMapNormal = vec4(
    (texture2D(uSamplerNormalMap,
        vec2(vTextureCoord.s * uTextureMapTiling.s,
             vTextureCoord.t * uTextureMapTiling.t)) * 2.0) - 1.0 );
vec3 pixelNormal = normalize( uNMatrix * normalize(textureMapNormal.rgb) );

A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources

We call the same calculateLightContribution() function, but we now pass along pixelNormal calculated using the normal texture map:

calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false);

From here, much of the code is the same, except we use pixelNormal in the dot product to determine the angle between the normal and the light sources:

float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal );

Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file; instead, we use the pixel normal derived from the normal texture map file, brickNormalMap.png.

A normal mapped brick wall with various light sources

Objective complete – mini debriefing

This comprehensive demonstration combined multiple spotlights and point lights, shared 3D meshes instead of loading duplicate copies, and deployed normal texture maps for a truly 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to the web pages found on a typical website. In this example, we simply identified a location for Products, using a neon sign to catch the user's attention. As a 3D website is built, we will need better ways to navigate this virtual space; this is covered in the following section.


Portals to navigate virtual spaces

Even before the World Wide Web, linking to data non-sequentially was introduced in Apple's HyperCard. We know it today as hyperlinks, or simply links, in a website. Prior to this, most data was linear, like this book or an article, read from beginning to end. It took time for links to become intuitive. Now, however, when we see text underlined, in a different color, or the mouse icon change, we know that clicking on the hyperlink takes us to another web page.

Virtual worlds have an even better navigation system: the portal. Perhaps our first portal could be attributed to the Star Trek transporter, which could beam you from the Enterprise to another planet. So popular was the transporter that the phrase "beam me up" is part of our culture. Star Trek was hardly the first, though, as other fictional works conceived vortexes to transfer from place to place. Now, however, 3D worlds make practical use of these portals, enabling us to navigate virtual worlds without having to drag our mouse hundreds of times to get across a virtual 3D website.

Engage thrusters

To create a portal, there is just one item required: an object representing the portal. We could simply texture map that object, click on it, and go through the portal. However, since no one wants to go someplace lifeless, we need animated objects inside the portal showing us that there is life on the other side, and, while we are at it, some way to get back. Our two new portals are portalWall1Object and portalWall2Object. This time, we declare two render-to-texture framebuffers inside webGLStart() and name them rttFramebuffer1 and rttFramebuffer2:

rttFramebuffer1 = gl.createFramebuffer();   // buffer memory area
rttTexture1 = gl.createTexture();           // render-to-texture texture
initTextureFramebuffer(rttFramebuffer1, rttTexture1);
rttFramebuffer2 = gl.createFramebuffer();
rttTexture2 = gl.createTexture();
initTextureFramebuffer(rttFramebuffer2, rttTexture2);
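The initTextureFramebuffer() helper is not shown in this excerpt. The following is a minimal sketch of what such a helper typically does; the 512 x 512 size and all other details are assumptions based on standard WebGL render-to-texture setup, not the book's actual implementation:

// A sketch only: attach a texture as the color target and a renderbuffer
// as the depth target, so the portal view is depth-tested like the scene.
function initTextureFramebuffer( framebuffer, texture ) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    framebuffer.width = 512;    // power-of-two, so mipmaps can be generated
    framebuffer.height = 512;

    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_NEAREST);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, framebuffer.width,
        framebuffer.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

    var renderbuffer = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16,
        framebuffer.width, framebuffer.height);

    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
        gl.TEXTURE_2D, texture, 0);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
        gl.RENDERBUFFER, renderbuffer);

    gl.bindTexture(gl.TEXTURE_2D, null);
    gl.bindRenderbuffer(gl.RENDERBUFFER, null);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}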

The drawScene() function is broken up so that the panels used as portals render the scene with the same code as the main camera. Thus, we now have a new function, renderer(), with a single parameter: the camera from which the scene should be rendered. It is called once for each of the two portals and once for the 3D scene itself. Within drawScene(), we set the parameters to render the scene from the first portal camera and save the result to rttFramebuffer1. We then set up the parameters for the second portal camera, save its rendered image to rttFramebuffer2, and finally render the scene from the original camera:

function drawScene() {
    // switch from the default to the render-to-texture frame buffer
    gl.bindFramebuffer(gl.FRAMEBUFFER, rttFramebuffer1);
    mat4.lookAt( portal1eye, portal1target, portal1up, portalCamera );
    drawRenderedTextureMap( rttFramebuffer1, rttTexture1, portalCamera );
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    gl.bindFramebuffer(gl.FRAMEBUFFER, rttFramebuffer2);
    mat4.lookAt( portal2eye, portal2target, portal2up, portalCamera );
    drawRenderedTextureMap( rttFramebuffer2, rttTexture2, portalCamera );
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    gl.clearColor( fogColor[0], fogColor[1], fogColor[2], 1.0 );
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    target[0] = eye[0] + Math.sin(cameraRotation) * targetDistance;
    target[2] = eye[2] - Math.cos(cameraRotation) * targetDistance;
    mat4.lookAt( eye, target, up, camera );
    renderer(camera);
} // end drawScene

The drawRenderedTextureMap() function sets the width and height of our memory area (preferably power-of-two dimensions) and then calls the same renderer() function used to draw the original scene. On returning from renderer(), we bind the rttTexture texture map object, which saves this image. As this is a texture map, we generate mipmaps, the previously discussed process that makes half-sized copies of the texture map by blending neighboring pixels. Finally, we reset the bindings to null to prepare for the next texture map:

function drawRenderedTextureMap( rttFramebuffer, rttTexture, portalCamera ) {
    gl.clearColor(0.25, 0.25, 0.5, 1.0);
    gl.viewport(0, 0, rttFramebuffer.width, rttFramebuffer.height);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    renderer(portalCamera);
    gl.bindTexture(gl.TEXTURE_2D, rttTexture);
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.bindTexture(gl.TEXTURE_2D, null);
} // end drawRenderedTextureMap

The textures have been created and are ready to be used inside the renderer() function. Within the for loop that draws all the 3D meshes, we check whether the current mesh is portalWall1Object or portalWall2Object. The key line, gl.bindTexture(gl.TEXTURE_2D, rttTexture1), assigns our generated portal texture as the texture map for our 3D mesh portal object:

if ( meshObjectArray[i] == portalWall1Object ) {
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, rttTexture1 );
    gl.uniform1i(shaderProgram.samplerUniform, 0);
}
else if ( meshObjectArray[i] == portalWall2Object ) {
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, rttTexture2 );
    gl.uniform1i(shaderProgram.samplerUniform, 0);
}

To see the effect of the cameras in each portal, a rotating Earth and a teapot were added; they can be seen in the portals on the left-hand side of the screen.

The new scene has the rotating teapot and Earth and the portals on the left

Now that the portal planes render the scene live in real time, we have to detect when to go through a portal. We create a box named portalActivationBox and set its dimensions to (5, 0, 2) in front of each portal: if the camera is within five units in front of the portal and two units to either side of the portal's center, we go through the portal. A distance of five units allows the user to see the entire portal texture map. We check the camera position every frame inside the tick() function, right after handleKeys(), to see whether we have entered the portal area. Incidentally, this scene also rotates the teapot and Earth inside the tick() function:

if ( (eye[0] > portalWall2Location[0]) &&
     (eye[0] < (portalWall2Location[0] + portalActivationBox[0])) &&
     (eye[2] < (portalWall2Location[2] + portalActivationBox[2])) &&
     (eye[2] > (portalWall2Location[2] - portalActivationBox[2])) ) {
    eye[0] = portal2target[0] - 3.0;
    eye[2] = portal2target[2];
    cameraRotation = 1.57;
}
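The check above covers only the second portal; the first portal's test is analogous. As an illustration, the test could be factored into a small helper; insidePortalBox() is hypothetical, not part of the book's code:

// Hypothetical helper: is the eye inside a portal's activation box?
function insidePortalBox( eye, portalLoc ) {
    return (eye[0] > portalLoc[0]) &&
           (eye[0] < portalLoc[0] + portalActivationBox[0]) &&
           (eye[2] > portalLoc[2] - portalActivationBox[2]) &&
           (eye[2] < portalLoc[2] + portalActivationBox[2]);
}

// Inside tick(): if ( insidePortalBox(eye, portalWall2Location) ) { ... }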

Once we go through the portal, the camera's view matches the portal's view. Therefore, if you walk towards the portal with the picture of the teapot, you come out on the other side looking at the teapot, as shown in the following screenshot:

An up-close view of the two portals, whose cameras point in opposite directions

The gray borders, covering the outer 3 percent of the portal's texture map, were added in the textureMapPortal fragment shader to separate the portal from the background:

void main(void) {
    vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    if ( (vTextureCoord.s <= 0.03) || (vTextureCoord.s >= 0.97) )
        textureColor = vec4( 0.5, 0.5, 0.5, 1.0 );
    else if ( (vTextureCoord.t <= 0.03) || (vTextureCoord.t >= 0.97) )
        textureColor = vec4( 0.5, 0.5, 0.5, 1.0 );
    gl_FragColor = vec4(textureColor.rgb, 1.0);
}

Objective complete – mini debriefing

Portals are a cool feature in 3D web design. This one made good use of code sharing with the existing scene renderer, needing only the new camera's transformation matrix.

Classified intel

The video game Portal and the Unreal game engine's editor display portals as circles, which perhaps gives a vortex effect; this is a fairly simple modification to the shader. We only need to display pixels within a radius of the center of our portal and then add a gray border, as done previously.

A portal using a circle rather than a rectangle

Modify the textureMapPortal fragment shader by first calculating the distance from the center of the texture map to each pixel. In this example, our radius is 1, so any pixel within this radius will be displayed; beyond a distance of 1, the pixel is discarded as if it were transparent. Note that the texture map coordinates run from 0 to 1, so multiplying the texture map's s and t coordinates by 2 and then subtracting 1 gives us coordinates from -1 to 1 in the x and y dimensions:

void main(void) {
    float distanceFromCenter = sqrt( pow((vTextureCoord.s * 2.0) - 1.0, 2.0) +
                                     pow((vTextureCoord.t * 2.0) - 1.0, 2.0) );
    vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    if ( distanceFromCenter > 1.0 ) discard;
    else if ( distanceFromCenter > 0.97 )
        textureColor = vec4( 0.5, 0.5, 0.5, 1.0 );
    gl_FragColor = vec4(textureColor.rgb, 1.0);
}

Another interesting illusion is to have the portal camera look at the portal itself. Much like when a video camera is aimed at a television, the television displays smaller and smaller versions of itself, as shown in the following screenshot. This is known as recursion and can be a cool effect in virtual spaces.

The camera is aimed at the portal and thus recursively displays the image. Note the rising moon in each image.

Another neat trick is to see ourselves in the portal. If we step in front of the portal's camera, we see ourselves, that is, the current camera at our current position. The Mario video games were among the first to do this: Mario looked into a mirror and saw a camera floating behind him. It captures the user's attention and reinforces the portal concept. The video game Portal used a generic female character instead of a movie camera when its two portals faced each other with you, the player, in the middle.

Seeing ourselves in the portal, as represented by a camera object

The 3D movie camera model encases the scene's camera. Since the actual camera is now inside our 3D modeled movie camera, it is important to add gl.enable(gl.CULL_FACE) to webGLStart(). This culls, or removes, the back sides of objects so that we can see through them from the inside. Incidentally, backface culling also speeds up performance, since we do not render the back sides of objects.
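For reference, the relevant call in webGLStart() is a single line; the explicit gl.cullFace() call below is optional, since back faces are culled by default:

gl.enable(gl.CULL_FACE);   // skip rendering back-facing triangles
gl.cullFace(gl.BACK);      // optional; gl.BACK is the default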

In drawScene(), we moved the camera transformation code up to the beginning and set the translation of our new cameraObject to the same location as the camera. The camera rotates only around the y axis, but we have to negate this y rotation, since the rendered camera model rotates opposite to the actual camera. It's like looking into a mirror: if you step twice to the left, the mirror image looks as if you stepped twice to the right:

function drawScene() {
    target[0] = eye[0] + Math.sin( cameraRotation ) * targetDistance;
    target[2] = eye[2] - Math.cos( cameraRotation ) * targetDistance;
    mat4.lookAt( eye, target, up, camera );
    cameraObject.translation = [eye[0], eye[1], eye[2]];
    cameraObject.rotation = [0, -cameraRotation, 0];
    . . .

Mission accomplished

We are at a point where it is time for our imagination to run wild. The missing elements are fun and creativity. We have looked at interesting lighting examples, converting textures that looked like wallpaper into textures with depth and an organic look, and virtual spaces with portals. Like most work in the field of computer graphics, we demonstrated the technical issues first and then allowed the designers to create the engaging scenery that connects us emotionally with their story.

Perhaps this final image poses the questions, "What do we see when we see ourselves designing WebGL websites? What do we see when we look deeper and deeper into virtual space?"

Portal camera looking into a portal; we can see ourselves over and over again

Summary

In this article, we learned how to create engaging scenes and how to add portals that transport us within a 3D website for faster navigation.


About the Author


Mitch Williams

Mitch Williams has been involved with 3D graphics programming and Web3D development since its creation in the mid-1990s. He began his career writing software for digital imaging products before moving on to become Manager of Software at Vivendi Universal Games. In the late 1990s, he started his own company, 3D-Online, where he created Dynamic-3D, a Web3D graphics engine. He has worked on projects ranging from interactive 3D medical procedures and online 3D training for the Department of Defense to one of the first 3D mobile games, built prior to the launch of the iPhone, and graphics card shader language programming. He has taught interactive 3D media at various universities, including UC Berkeley, UC Irvine, and UCLA Extension.
