Getting started with WebGL
Written by David Conrad   
Wednesday, 09 February 2011

Now is a very good time to get started with WebGL - it's close to its final version and Chrome 9 now has a stable implementation. This tutorial is a clear introduction to the 3D basics of WebGL.

 

Getting started with WebGL isn't easy, for several reasons. The documentation generally isn't WebGL-specific: it simply refers you to the OpenGL ES documentation and points out the differences. Even if you do know OpenGL, there are some surprising and unwelcome differences between it and the slightly more primitive WebGL. All of this makes it difficult to get started on a 3D project - not to mention the need to use HTML5's Canvas and to code 3D graphics in JavaScript.

You can use any WebGL-supporting HTML5 browser, but this example is based on Chrome 9 or later. Simply download and install Chrome 9, or allow your current version to update automatically to version 9.

Chrome 9 comes with WebGL support turned on by default, so there is nothing additional you need to do. Everything should work in any WebGL-supporting browser, but you might have to turn the support on first. The simplest thing to do is to navigate to a WebGL demo, such as WebGL Experiments, and check that it all works before starting on your own project. The only complication is that, if you are working with Windows, you have to make sure that you have the DirectX runtime installed.

Once again it is worth stressing that you need to check that you can view a WebGL page in the browser before moving on to writing your own 3D programs. You will, of course, need to know how to program in JavaScript and how to work with HTML - but if you plan to master WebGL these are trivial requirements in comparison.

The project we are about to make a start on is going to be presented as a single long function, and it is not going to use any "helper" functions or any elaborate ways of doing things. The purpose of this example is to show you how things work. When you start work on a real program you need to break it down into sensible functions, you need helper functions to keep the code compact, and many an "elaborate" way of doing something turns out to be the best. So in this case the code is simple and direct - not the best.

The Canvas and the Viewport

There is a lot of initialisation to do before you can start drawing anything and this is the case with most 3D programs. WebGL is particularly bad when it comes to lengthy initialisation - there is nothing much that can be done about this.

The first thing we have to do is set up a web page complete with a Canvas object that we can use to draw on.

<!DOCTYPE html>
<html>
<body>
<canvas id="webGLCanvas"
        width="400" height="400">
</canvas>
</body>
</html>

This is all the HTML we need but you can add more if you want to. All that matters is that we have a Canvas object that WebGL can use to draw on.
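One thing worth adding is some way of running the JavaScript once the page has loaded, so that the Canvas object is certain to exist. A minimal sketch - it assumes the drawing code lives in a function called Draw3D, as in the listing that follows:

<script>
// Run the drawing function only after the page, and hence
// the canvas element, has loaded.
window.onload = function () {
  Draw3D();
};
</script>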

Now that we have a Canvas object we can start to write the JavaScript that draws on it. First we get the Canvas object:

function Draw3D() {
  var canvas = document.getElementById("webGLCanvas");

Next we need to get the WebGL object from the Canvas object:

  var gl = canvas.getContext("experimental-webgl");

At the moment the name of the WebGL context has "experimental-" in front of it, but when the final standard is released this will change to just "webgl". You should now test that the variable gl really does have a reference to the WebGL object, but for the sake of simplicity let's just assume it does.
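In a real program the test takes only a few lines. A minimal sketch, replacing the single getContext call above - the fallback to the plain "webgl" name is an assumption about browsers that have already moved to the final name:

  // Try the draft context name first, then the final name.
  var gl = canvas.getContext("experimental-webgl") ||
           canvas.getContext("webgl");
  if (!gl) {
    // No WebGL context is available - there is nothing more we can do.
    alert("This browser doesn't appear to support WebGL");
    return;
  }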

Finally we have to set the WebGL viewport to the area of the Canvas object that we want to use to render the 3D graphics. In most cases this is the whole of the Canvas:

gl.viewport(0, 0, canvas.width, canvas.height);

Shader theory

The next standard initialisation task is to set up a vertex shader and a pixel shader. If you have used other 3D frameworks you might not be used to this idea, but modern graphics hardware has a programmable pipeline and WebGL and OpenGL support this. There are no default rendering modes that you can fall back on - you have to specify how you want to process the 3D points and you have to specify how to render them. There are no lights, no lighting effects and so on unless you provide the shader code for them. This can make getting started more difficult, but in practice you can "borrow" other people's standard shaders to create the lighting and rendering effects you want. When you become an expert, working out new shaders simply adds to the fun.

In most cases you have to supply two shaders - a vertex shader and a pixel, or fragment, shader. The vertex shader is all about how a point that you specify in 3D space becomes the point that is plotted on the screen. It handles the transformation and projection processing, and generally you use it to apply transformation and projection matrices to the raw 3D data.

After the vertex shader has been run, the polygons specified by the vertexes are rasterised - that is, the pixels that fall within each filled polygon are generated and passed to the fragment shader. This is where the lighting, texture and material properties are applied to the pixels. The fragment shader is where most of the clever stuff happens, but for this example it is going to be very simple.

Both types of shader are specified using GLSL (OpenGL Shading Language) which is basically C with additional data types and standard functions. We don't have space to go into the details of GLSL but you should be able to understand roughly what our basic shaders are doing.

The vertex shader is a very standard one:

attribute vec3 vertexPosition;
uniform mat4 modelViewMatrix;
uniform mat4 perspectiveMatrix;

void main(void) {
  gl_Position = perspectiveMatrix *
                modelViewMatrix *
                vec4(vertexPosition, 1.0);
}

 

The first three lines define some data structures. The vertexPosition is a 3D vector which specifies a location in model space; the "attribute" qualifier means that it is supplied per vertex from your program's data. The "uniform" qualifier on the two matrices means that they too are provided from outside of the shader, but hold a single value for the whole drawing operation - i.e. they are variables that allow your program to communicate with the shader, as will be explained later.

The modelViewMatrix specifies the transformation to be applied in the 3D space - roughly speaking, it provides the position of the "camera". Finally, the perspectiveMatrix is the perspective projection that transforms what the camera sees to a 2D representation that can be rendered to the screen. The program then takes the 3D vector that specifies the location of the vertex, augments it to a 4D vector and applies the matrices to produce the projected result, gl_Position. This is then passed to the renderer along with the other points which form the polygon to be rendered. Pixels within the polygon are thus generated and passed to the fragment shader.
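To give a flavour of how this communication works before the details arrive, setting one of the uniforms from JavaScript looks roughly like this. This is only a sketch: it assumes a compiled and linked shader program stored in the variable program, and a plain 16-element array modelViewData holding the matrix values:

// Ask WebGL where the uniform lives in the linked shader program...
var mvLoc = gl.getUniformLocation(program, "modelViewMatrix");

// ...then load a 4x4 matrix into it, as 16 floats in column-major order.
gl.uniformMatrix4fv(mvLoc, false, new Float32Array(modelViewData));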

While the vertex shader is realistic for a typical 3D program, the fragment shader that we are going to use isn't. It is simply going to set every pixel to pure green:

void main(void) {
  gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}

Setting the standard variable gl_FragColor to an RGBA value - Red, Green, Blue, Alpha, each component in the range 0.0 to 1.0 - sets every pixel within the polygon to that color. In this case the color is pure green. (Recall that the Alpha value is the transparency, with 0.0 being fully transparent.) In general the fragment shader would take other data into account to set each pixel's color depending on where it was in the polygon and in 3D space. This is where you implement lighting algorithms and so on, but for the moment a uniform green will do.
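How does the GLSL source get into your JavaScript in the first place? Compiling and attaching the shaders comes later in this article, but one common trick - only one possibility, and the id value here is invented for the example - is to embed each shader in the page as a script element with a non-JavaScript type. The browser ignores it, and your code can read the source back as text:

<script id="fragmentShader" type="x-shader/x-fragment">
void main(void) {
  gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
</script>

The JavaScript then retrieves the source with:

var fragSource = document.getElementById("fragmentShader").text;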




Last Updated ( Wednesday, 04 June 2014 )