JavaScript Canvas WebGL Convolutions
Written by Ian Elliot   
Monday, 18 May 2020
Texture Coordinates

Now we have the bitmap we need to create the texture co-ordinates and the vertex co-ordinates. For the shape that the bitmap will be mapped onto we will use a rectangle composed of two triangles that fill the entire canvas:

var vertexPos = gl.getAttribLocation(program, "vertexPosition");
var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array([-1, 1, -1, -1, 1, 1,
                           1, 1, -1, -1, 1, -1]),
        gl.STATIC_DRAW);
gl.vertexAttribPointer(vertexPos, 2, gl.FLOAT, false, 0, 0);

You should recognize the steps to set up the vertex buffer. Next we set up the texture co-ordinates for each vertex and you have to be careful to assign the correct texture co-ordinate to each of the corners to get the image to map correctly:

var texCoord = gl.getAttribLocation(program, "a_texCoord");
var texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER,
        new Float32Array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0,
                          1.0, 0.0, 0.0, 1.0, 1.0, 1.0]),
        gl.STATIC_DRAW);
gl.vertexAttribPointer(texCoord, 2, gl.FLOAT, false, 0, 0);

Again, all of the steps should be familiar, but they are now associating the buffer with the texCoord attribute.

Finally all we have to do is enable the attributes and draw the triangles:

gl.enableVertexAttribArray(texCoord);
gl.enableVertexAttribArray(vertexPos);
gl.drawArrays(gl.TRIANGLES, 0, 6);

You should now see the image displayed in the canvas. You can make sure that you understand the texture co‑ordinates by changing them and seeing the effect.
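
For example, one variation you could try (not part of the original program) is replacing each t co-ordinate by 1 - t, which turns the bitmap upside down:

```javascript
// Original texture co-ordinates for the two triangles, as (s, t) pairs:
var texCoords = [0, 0,  0, 1,  1, 0,   1, 0,  0, 1,  1, 1];

// Flipping the image vertically replaces each t (odd index) with 1 - t:
var flipped = texCoords.map(function (v, i) {
  return i % 2 === 1 ? 1 - v : v;
});
```

Loading flipped instead of texCoords into the texture co-ordinate buffer displays the image mirrored about its horizontal axis.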

As the transformation matrix is still in the program, you can now scale and position the bitmap just as you would any collection of vertices. The only difference is that now the fragments are being shaded using the bitmap as a source.
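
For example, a small helper like the hypothetical scaleTranslate function below builds a matrix you could load into the transform uniform; note that gl.uniformMatrix3fv expects the elements in column-major order:

```javascript
// Build a 3x3 scale-then-translate matrix in the column-major order
// that gl.uniformMatrix3fv expects.
function scaleTranslate(sx, sy, tx, ty) {
  return new Float32Array([
    sx, 0,  0,   // column 1: x basis vector
    0,  sy, 0,   // column 2: y basis vector
    tx, ty, 1    // column 3: translation
  ]);
}

// Show the bitmap at half size, moved into the top-right quadrant:
var halfTopRight = scaleTranslate(0.5, 0.5, 0.5, 0.5);
```

You would then pass the matrix to the shader with gl.uniformMatrix3fv using the transform uniform location obtained earlier.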

You can see a complete listing of this program at https://iopress.info/JSGraphicsPrograms/page293.html

A GPU Convolution

Now that we can display an image in WebGL, the next question is, can we process it? The answer is yes, and it is surprisingly easy. All we have to do is translate pixel co-ordinates into texture co-ordinates in the range (0,0) to (1,1). To do this we need to pass the shader the size of a pixel in texture co-ordinates:

var pixelStep = gl.getUniformLocation(program, "pixelStep");
gl.uniform2f(pixelStep,1.0/img.width,1.0/img.height);

The shader has to be modified to use this uniform and to implement the [1,1,1,0,0,0,-1,-1,-1] horizontal edge-finding mask implemented in Chapter 13:

var fsScript = `
  precision mediump float;
  uniform vec2 pixelStep;
  uniform sampler2D u_image;
  varying vec2 v_texCoord;
  void main(void) {
    vec4 color = texture2D(u_image,
                   v_texCoord + vec2(-pixelStep.x, -pixelStep.y));
    color += texture2D(u_image, v_texCoord + vec2(0, -pixelStep.y));
    color += texture2D(u_image, v_texCoord + vec2(+pixelStep.x, -pixelStep.y));
    color -= texture2D(u_image, v_texCoord + vec2(-pixelStep.x, pixelStep.y));
    color -= texture2D(u_image, v_texCoord + vec2(0, pixelStep.y));
    color -= texture2D(u_image, v_texCoord + vec2(+pixelStep.x, pixelStep.y));
    color = abs(color);
    gl_FragColor = vec4(color.rgb, 1);
  }`;

You can see that the convolution is first computed into the variable color and then stored in gl_FragColor after setting the alpha channel, A, to 1. Notice the use of “swizzlers”. You can access vector components using syntax like v.x for the x component, i.e. v[0], and v.xy for a 2D vector made up of the x and y components.
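
As a small illustration (not part of the program), these are all valid ways of picking components out of the same vec4 in GLSL:

```glsl
vec4 v = vec4(0.1, 0.2, 0.3, 1.0);
float r  = v.x;      // same as v[0] or v.r
vec2  xy = v.xy;     // 2D vector (0.1, 0.2)
vec3  c  = v.rgb;    // color-style names are interchangeable with xyz
vec4  w  = v.bgra;   // swizzles can also reorder components
```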

If you try this out the result is:

[Image: output of the horizontal edge-finding convolution]

You can see this program in action at: https://iopress.info/JSGraphicsPrograms/page298.html

You can use the same technique to implement a complete custom convolution program. Simply pass in a uniform for the mask and write shader code to work out the convolution of the mask and the bitmap.
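
A minimal sketch of such a fragment shader, assuming the mask is supplied row-major, top-left tap first, as uniform float mask[9] set via gl.uniform1fv:

```glsl
precision mediump float;
uniform vec2 pixelStep;
uniform sampler2D u_image;
uniform float mask[9];       // 3x3 mask, row-major, top-left first
varying vec2 v_texCoord;

void main(void) {
  vec4 color = vec4(0.0);
  // Loop over the 3x3 neighborhood; (i, j) = (0, 0) is the top-left tap.
  // Texture y increases downward, so j - 1 = -1 samples the row above.
  for (int j = 0; j < 3; j++) {
    for (int i = 0; i < 3; i++) {
      vec2 offset = vec2(float(i - 1), float(j - 1)) * pixelStep;
      color += mask[j * 3 + i] * texture2D(u_image, v_texCoord + offset);
    }
  }
  gl_FragColor = vec4(abs(color.rgb), 1.0);
}
```

Loading [1, 1, 1, 0, 0, 0, -1, -1, -1] into the mask uniform should reproduce the edge-finding shader given above.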

Listing – Bitmap Convolution

<!DOCTYPE html>
<html>
  <head>
    <title>Bitmap Convolution</title>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width,
                                    initial-scale=1.0">
  </head>
  <body>
   <script>
    function createCanvas(h, w) {
      var c = document.createElement("canvas");
      c.width = w;
      c.height = h;
      return c;
    }

    function createShaders(gl, vs, fs) {
      var vertexShader = gl.createShader(gl.VERTEX_SHADER);
      gl.shaderSource(vertexShader, vs);
      gl.compileShader(vertexShader);
      if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
        alert("Error in vertex shader");
        var compilationLog = gl.getShaderInfoLog(vertexShader);
        console.log('Shader compiler log: ' + compilationLog);
        gl.deleteShader(vertexShader);
        return;
      }
      var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
      gl.shaderSource(fragmentShader, fs);
      gl.compileShader(fragmentShader);
      if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
        alert("Error in fragment shader");
        var compilationLog = gl.getShaderInfoLog(fragmentShader);
        console.log('Shader compiler log: ' + compilationLog);
        gl.deleteShader(fragmentShader);
        return;
      }
      return [vertexShader, fragmentShader];
    }

    function createProgram(gl, shaders) {
      var program = gl.createProgram();
      gl.attachShader(program, shaders[0]);
      gl.attachShader(program, shaders[1]);
      gl.linkProgram(program);
      if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        alert("Error in shaders");
        gl.deleteProgram(program);
        gl.deleteShader(shaders[0]);
        gl.deleteShader(shaders[1]);
        return;
      }
      return program;
    }

    function imgLoaded(img) {
      return new Promise(function (resolve, reject) {
        img.addEventListener("load", function () {
          resolve(img);
        });
      });
    }

    function loadBitmap(gl, img) {
      var texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                    gl.UNSIGNED_BYTE, img);
    }

    async function draw2d() {
      var gl = document.body.appendChild(createCanvas(600, 600)).
                                              getContext("webgl2");
      if (!gl) alert("no webgl2");
      gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
      var vsScript = `
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        attribute vec2 vertexPosition;
        uniform mat3 transform;
        void main(void) {
          vec2 temp = vec2(transform * vec3(vertexPosition, 1.0));
          gl_Position = vec4(temp, 0.0, 1.0);
          v_texCoord = a_texCoord;
        }`;
      var fsScript = `
        precision mediump float;
        uniform vec2 pixelStep;
        uniform sampler2D u_image;
        varying vec2 v_texCoord;
        void main(void) {
          vec4 color = texture2D(u_image,
                         v_texCoord + vec2(-pixelStep.x, -pixelStep.y));
          color += texture2D(u_image, v_texCoord + vec2(0, -pixelStep.y));
          color += texture2D(u_image,
                             v_texCoord + vec2(+pixelStep.x, -pixelStep.y));
          color -= texture2D(u_image,
                             v_texCoord + vec2(-pixelStep.x, pixelStep.y));
          color -= texture2D(u_image, v_texCoord + vec2(0, pixelStep.y));
          color -= texture2D(u_image,
                             v_texCoord + vec2(+pixelStep.x, pixelStep.y));
          color = abs(color);
          gl_FragColor = vec4(color.rgb, 1);
        }`;
      var shaders = createShaders(gl, vsScript, fsScript);
      var program = createProgram(gl, shaders);
      gl.useProgram(program);
      gl.clearColor(0.8, 0.8, 0.8, 1.0);
      gl.clearDepth(1.0);
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      var url = new URL("jeep.jpg", "http://server/");
      var img = new Image();
      img.src = url;
      await imgLoaded(img);
      loadBitmap(gl, img);
      var pixelStep = gl.getUniformLocation(program, "pixelStep");
      gl.uniform2f(pixelStep, 1.0 / img.width, 1.0 / img.height);
      var T = gl.getUniformLocation(program, "transform");
      gl.uniformMatrix3fv(T, false,
              new Float32Array([1, 0, 0, 0, 1, 0, 0, 0, 1]));
      var vertexPos = gl.getAttribLocation(program, "vertexPosition");
      var vertexBuffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
      gl.bufferData(gl.ARRAY_BUFFER,
              new Float32Array([-1, 1, -1, -1, 1, 1,
                                 1, 1, -1, -1, 1, -1]),
              gl.STATIC_DRAW);
      gl.vertexAttribPointer(vertexPos, 2, gl.FLOAT, false, 0, 0);
      var texCoord = gl.getAttribLocation(program, "a_texCoord");
      var texCoordBuffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
      gl.bufferData(gl.ARRAY_BUFFER,
              new Float32Array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0,
                                1.0, 0.0, 0.0, 1.0, 1.0, 1.0]),
              gl.STATIC_DRAW);
      gl.vertexAttribPointer(texCoord, 2, gl.FLOAT, false, 0, 0);
      gl.enableVertexAttribArray(texCoord);
      gl.enableVertexAttribArray(vertexPos);
      gl.drawArrays(gl.TRIANGLES, 0, 6);
    }
    draw2d();
   </script>
  </body>
</html>

WebGL for 2D Graphics?

If you want to use WebGL then, unless you are planning something special, it is better to adopt a library such as three.js. However, most of these libraries specialize in 3D graphics and treat 2D as an afterthought, if it is mentioned at all. A notable exception is pixi.js. You can learn to do things directly using the WebGL API, but this generally requires a bigger investment of time. The 2D Canvas API is much easier and, for simple graphics, much closer in performance to WebGL than is generally accepted. That said, it is not so difficult to implement very efficient 2D bitmap graphics using WebGL once you have the basic ideas.


 

Summary

  • WebGL is just a 2D rendering system which makes use of the GPU hardware that most machines have.

  • If you just want to work in 2D you can simplify the vertex shader to accept 2D vertex data.

  • Using a transformation matrix you can use the standard method of drawing a 2D shape centered on the origin and transform it to the size and position you need.

  • It is fairly easy to create functions which draw standard shapes without having to load the vertex data every time you need to draw something.

  • Animation in WebGL works in the usual way via the requestAnimationFrame function and setting a transformation.

  • WebGL can work with bitmaps which are referred to as textures because of their use in 3D graphics.

  • A varying is a shader variable that is set to a weighted average of the values assigned at the vertices, weighted according to the current pixel's distance from each vertex.

  • Bitmaps loaded into the GPU always have a 0,0 (top left corner) to 1,1 (bottom right corner) co-ordinate system.

  • Texture co-ordinates are a 2D vector of varyings that interpolate the texture co-ordinates assigned to the vertices.

  • Texture co-ordinates are used to sample the pixel value in the texture bitmap and use that as the pixel’s color.

  • The fragment shader can do image processing by forming functions of the colors of neighboring pixels in the texture bitmap.

 

Now available as a paperback or ebook from Amazon.

JavaScript Bitmap Graphics
With Canvas


 

Contents

  1. JavaScript Graphics
  2. Getting Started With Canvas
  3. Drawing Paths
      Extract: Basic Paths
      Extract: SVG Paths
      Extract: Bezier Curves
  4. Stroke and Fill
      Extract: Stroke Properties
      Extract: Fill and Holes
      Extract: Gradient & Pattern Fills
  5. Transformations
      Extract: Transformations
      Extract: Custom Coordinates
      Extract: Graphics State
  6. Text
      Extract: Text, Typography & SVG
      Extract: Unicode
  7. Clipping, Compositing and Effects
      Extract: Clipping & Basic Compositing
  8. Generating Bitmaps
      Extract: Introduction To Bitmaps
      Extract: Animation
  9. WebWorkers & OffscreenCanvas
      Extract: Web Workers
      Extract: OffscreenCanvas
  10. Bit Manipulation In JavaScript
      Extract: Bit Manipulation
  11. Typed Arrays
      Extract: Typed Arrays
  12. Files, blobs, URLs & Fetch
      Extract: Blobs & Files
      Extract: Read/Writing Local Files
      Extract: Fetch API
  13. Image Processing
      Extract: ImageData
      Extract: The Filter API
  14. 3D WebGL
      Extract: WebGL 3D
  15. 2D WebGL
      Extract: WebGL Convolutions


 




Last Updated ( Tuesday, 19 May 2020 )