# Colour correction with WebGL

Although cameras keep getting better, most pictures are retouched. A significant part of post-processing is finding the right balance of colours and shades to make the entire image cohesive or make the subject pop. With WebGL, we can do this on the web.

There are many situations in which this could be useful: when I update my avatar, share my holiday pictures, or upload an image for a blog post. In this article, we’ll recreate the effects of my favourite web-based image editor, Doka.

## Why WebGL?

We can apply filters with SVG and do pixel manipulation with a 2D canvas, so why use WebGL?

We can serve an SVG with the image plus effects, but the browser recomputes the effects every time it renders the SVG. To bake the effects in, we could draw the SVG to a canvas element using the `context.drawImage()` method and use the canvas API to export it as a PNG. A big pro is that SVG has various built-in effects we can leverage for colour manipulation. However, we’re slightly limited in flexibility, and performance might become an issue for large images.

With a 2D canvas, we can manipulate each individual pixel. The downside of this approach is that it’s slow. First, we have to load the pixels with the `context.getImageData()` method, which requires the browser to read the pixel data back from the GPU, which takes a while. After that, we can iterate over all pixels with JavaScript; the more pixels, the slower it gets. Finally, we have to push the pixel data back with `context.putImageData()`, which isn’t fast either. All in all, processing an image this way can quickly cause the web page to lag or freeze, especially on low-end devices. It is super flexible, though!
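To make that concrete, here’s roughly what the per-pixel loop looks like in JavaScript. This is a sketch; in a real page, the pixel data would come from `context.getImageData()` and go back via `context.putImageData()`, and `brightenPixels` is a hypothetical helper name.

```javascript
// A sketch of the CPU-bound 2D-canvas approach: adjust every pixel by hand.
// `pixels` is a Uint8ClampedArray of RGBA bytes (0-255), as returned by
// context.getImageData(...).data.
function brightenPixels(pixels, amount) {
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] += amount;     // red
    pixels[i + 1] += amount; // green
    pixels[i + 2] += amount; // blue
    // pixels[i + 3] is alpha; leave it untouched
  }
  return pixels;
}

// One black and one red pixel; Uint8ClampedArray clamps overflow for us
const data = new Uint8ClampedArray([0, 0, 0, 255, 255, 0, 0, 255]);
brightenPixels(data, 128); // black -> gray, red -> a lighter red
```

Every pixel passes through this JavaScript loop one at a time, which is exactly why large images make this approach crawl.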

Lastly, there’s WebGL. You get a similar amount of flexibility to a 2D canvas, so you can write your own effects, and it outperforms both SVG and the 2D canvas. That performance comes partly from WebGL being low-level, and the trade-off is that you need a lot of code to draw anything on the screen.

In this article, we’ll focus on fragment shaders, because that is where we’ll program the effects. You can follow along in plain WebGL or with a library of your choosing; I’ll use OGL in the demos. If you’re unfamiliar with WebGL, shaders, or GLSL, check out WebGL Fundamentals or WebGL2 Fundamentals. You don’t need to be a WebGL expert to follow along, but some understanding of these topics will help a lot.

## Colours

We can group all colour manipulation in a single fragment shader. First, we need to get the colour and transparency from the image. With the `texture2D()` function, we can read pixels from a texture at coordinate `uv`. This returns a `vec4` value, a 4-component vector containing the amount of red, green, blue, and alpha (opaqueness). We typically call each component of a colour a channel.

```
precision highp float;

uniform sampler2D image;
varying vec2 uv; // texture coordinate passed in by the vertex shader

void main() {
  vec4 texel = texture2D(image, uv);
  vec3 color = texel.rgb;
  // TODO: adjust color
  gl_FragColor = vec4(color, texel.a);
}
```

We’re ready to change some colours! Let’s warm up with brightness.

### Brightness

Each channel is a floating point number with a range of 0 to 1, where 0 is unlit and 1 is fully lit. So, `vec3(0, 0, 0)` represents black, `vec3(1, 0, 0)` is red, `vec3(0, 0, 1)` makes blue, and `vec3(1, 1, 1)` is white. To change brightness, we add or subtract an equal amount of red, green, and blue. Consider the following example:

```
vec3(0, 0, 0) + 0.5 // yields vec3(0.5, 0.5, 0.5): we went from black to gray
vec3(1, 0, 0) + 0.5 // yields vec3(1.5, 0.5, 0.5), clamped on output to vec3(1, 0.5, 0.5), a light shade of red
vec3(0, 0, 1) - 0.5 // yields vec3(-0.5, -0.5, 0.5), clamped on output to vec3(0, 0, 0.5), a dark shade of blue
```

Let’s do this using the `color` variable in a function we’ll call `adjustBrightness`:

```
vec3 adjustBrightness(vec3 color, float value) {
  return color + value;
}

// ...
color = adjustBrightness(color, 0.5);  // yields a lighter colour
color = adjustBrightness(color, -0.5); // yields a darker colour
```

Let’s move on to a harder one, contrast!

### Contrast

When we increase contrast, we want values less than 0.5 to decrease and values greater than 0.5 to increase. We can do this in two ways: we can write if-statements that handle values below 0.5 differently from values above 0.5, or we can use a bit of linear algebra. In shaders, we usually prefer a mathematical approach over a logical one.

We can leverage the peculiarity of negative numbers: if we multiply both positive and negative numbers by two, they move in different directions. `2 * -0.5` yields -1, and `2 * 0.5` yields 1. We can use this by shifting the 0 to 1 range of colours to -0.5 to 0.5 (`color - 0.5`), multiplying the colour by our contrast value, and shifting it back to the original range (`+ 0.5`). Maybe this makes more sense in code:

```
vec3 adjustContrast(vec3 color, float value) {
  return 0.5 + value * (color - 0.5);
}
```

Because we multiply the colour by the contrast value, we can’t use 0 as a base contrast value; any colour multiplied by 0 becomes black. Instead, our base value must be 1, because `1 * n = n`. A value less than 1 reduces contrast and a value greater than 1 increases it. We can take this into account either when we pass values to the shader, or in the shader itself. I’ll account for it in the shader, so we can pass a value between -1 and 1.

```
vec3 adjustContrast(vec3 color, float value) {
  return 0.5 + (1.0 + value) * (color - 0.5);
}

// ...
color = adjustContrast(color, 0.5);  // yields a higher contrast
color = adjustContrast(color, -0.5); // yields a lower contrast
```
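The formula is easy to sanity-check outside the shader. Here’s a hypothetical JavaScript mirror of `adjustContrast`, operating on a single channel:

```javascript
// JavaScript mirror of adjustContrast: 0.5 is the pivot, and values move
// away from it (positive value) or toward it (negative value).
function adjustContrast(channel, value) {
  return 0.5 + (1.0 + value) * (channel - 0.5);
}

adjustContrast(0.5, 0.5);  // 0.5 — the midpoint never moves
adjustContrast(0.75, 0.5); // 0.875 — brighter values get brighter
adjustContrast(0.25, 0.5); // 0.125 — darker values get darker
```

Note how the shift-multiply-shift sequence has been folded into a single expression; the pivot at 0.5 is what makes it contrast rather than exposure.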

Let’s move on to exposure.

### Exposure

Next up is exposure. This is similar to brightness, but the change in brightness is proportional to the luminosity of the colours. In other words, we have to multiply instead of add. We use 1 as the base value for the exact same reason we did for contrast.

```
vec3 adjustExposure(vec3 color, float value) {
  return (1.0 + value) * color;
}

// ...
color = adjustExposure(color, 0.5);  // yields a higher exposure
color = adjustExposure(color, -0.5); // yields a lower exposure
```
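The difference with brightness becomes obvious in numbers: because exposure multiplies, black stays black and bright values change the most. A hypothetical JavaScript mirror of `adjustExposure`, applied to a single channel:

```javascript
// Exposure multiplies, so the change is proportional to the channel's value.
function adjustExposure(channel, value) {
  return (1.0 + value) * channel;
}

adjustExposure(0.0, 0.5);  // 0 — black is unaffected, unlike with brightness
adjustExposure(0.25, 0.5); // 0.375 — dark values shift a little
adjustExposure(0.75, 0.5); // 1.125 — bright values shift a lot (clamped to 1 on output)
```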

I hope you’re still with me because adjusting saturation is weirder.

### Saturation

When we adjust saturation, we adjust the contrast between channels. A desaturated image is grayscale, so each channel has an equal value (`r == g == b`). When we increase saturation, the contrast between channels becomes greater. This means that fully saturating a colour (e.g. `vec3(0.75, 0.2, 0.1)`) yields a colour with one channel fully lit (`vec3(1, 0, 0)`).

When we desaturate, we’re essentially moving between a grayscale variant and the original image, and we can use the same math to extrapolate the colours beyond the original, increasing the saturation. GLSL gives us the `mix(x, y, t)` function, which performs linear interpolation and which we can use for both. If `t` is less than 1, we’re desaturating; if it’s greater, we’re saturating.

```
vec3 adjustSaturation(vec3 color, float value) {
  return mix(grayscale, color, 1.0 + value);
}
```

You may have noticed that `grayscale` isn’t defined yet; we still have to calculate the colour of a fully desaturated pixel. The naive way is to take the average of the channels, or to pick the brightness of a single channel. But we humans don’t perceive red, green, and blue as equally luminous, and we have to take that into account. The Web Content Accessibility Guidelines (WCAG) conveniently document how to calculate the brightness of an RGB colour in the section on relative luminance.

You know what else is convenient? Multiplying each channel by a number and taking the sum is the same as a dot product between two vectors, and GLSL has a built-in function for that! Let’s add the `grayscale` variable to our `adjustSaturation` routine:

```
vec3 adjustSaturation(vec3 color, float value) {
  // https://www.w3.org/TR/WCAG21/#dfn-relative-luminance
  const vec3 luminosityFactor = vec3(0.2126, 0.7152, 0.0722);
  vec3 grayscale = vec3(dot(color, luminosityFactor));

  return mix(grayscale, color, 1.0 + value);
}

// ...
color = adjustSaturation(color, 0.5);  // yields a higher saturation
color = adjustSaturation(color, -0.5); // yields a lower saturation
```
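To see that the dot product really is just a weighted sum, here’s the grayscale step written out in JavaScript (a sketch with a hypothetical `grayscale` helper; the coefficients are the WCAG ones used in the shader):

```javascript
// dot(color, luminosityFactor) is a weighted sum of the channels.
const LUMA = [0.2126, 0.7152, 0.0722]; // WCAG relative luminance weights

function grayscale([r, g, b]) {
  return r * LUMA[0] + g * LUMA[1] + b * LUMA[2];
}

grayscale([0, 1, 0]); // 0.7152 — green contributes most to perceived brightness
grayscale([0, 0, 1]); // 0.0722 — blue contributes least
grayscale([1, 1, 1]); // the weights sum to 1, so white maps to (almost exactly) 1
```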

Check out the demo.

## Colour Matrices

Before we move on to filters, let’s explore colour matrices. We can consider a 4-component vector a 1 by 4 matrix. You may also have noticed that our functions for each effect used multiplication and addition to calculate the new colours. Instead of having a formula for each effect, we can combine them into a single colour matrix.

Matrix math is pretty counterintuitive to me, and I’m by no means the right person to explain it to you. Instead, consider reading the Matrix multiplication article on Wikipedia and/or checking out the video series Essence of linear algebra by Grant Sanderson (3Blue1Brown).

Each row of a colour matrix is the output value of each channel and each column is the input value of each channel. Consider this colour matrix:

```
  r g b a w
r 1 0 0 0 0
g 0 1 0 0 0
b 0 0 1 0 0
a 0 0 0 1 0
```

If you pay attention, you’ll notice the extra column, marked w for white; we’ll discuss that later. What you’re seeing here is a colour matrix that keeps all original colours: the diagonal of 1s tells us this is an identity matrix. If we read the row for red, we see that all values are nil except in the red column. This means the output will have the same amount of red as the original: it’s unchanged. The same goes for green, blue, and alpha. Let’s consider another colour matrix:

```
  r g b a w
r 0 0 1 0 0
g 0 1 0 0 0
b 1 0 0 0 0
a 0 0 0 1 0
```
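We can check what a matrix like this does by applying it by hand. Here’s a sketch in JavaScript; `applyColorMatrix` is a hypothetical helper that does the row-by-row multiplication:

```javascript
// Apply a row-major colour matrix (RGBA rows; r, g, b, a, w columns) to a colour.
function applyColorMatrix(m, [r, g, b, a]) {
  return m.map(row => row[0] * r + row[1] * g + row[2] * b + row[3] * a + row[4]);
}

const swapRedBlue = [
  [0, 0, 1, 0, 0], // red out   = blue in
  [0, 1, 0, 0, 0], // green out = green in
  [1, 0, 0, 0, 0], // blue out  = red in
  [0, 0, 0, 1, 0], // alpha out = alpha in
];

applyColorMatrix(swapRedBlue, [1, 1, 0, 1]); // yellow -> [0, 1, 1, 1], i.e. cyan
```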

Pay close attention to where the 1s are. The amount of red in the output will be the same as the amount of blue in the input, and the opposite is true for the blue row. If we have a picture of a person wearing a red shirt and blue jeans, those colours are swapped; a red apple would become blue. Note that this matrix also affects other colours that contain red or blue: yellow (red + green) becomes cyan (blue + green), for example. Let’s now have a look at that white column.

```
  r g b a w
r 0 0 0 0 1
g 0 1 0 0 0
b 0 0 1 0 0
a 0 0 0 1 0
```

The last column allows us to shift the brightness of a channel. In this case, we’ve set the offset for red to 1, which means pixels that are black in the original become red in the output. For more reading on colour matrices, check out SVG’s `feColorMatrix`. Now that you know what a colour matrix is, let’s replace the calculations in our shader with one!

First, we need to pass the matrices to the shaders. Unfortunately, WebGL shaders only support matrices up to 4x4, so we cut off the white column and send it to the shader separately as a vector. Because this column offsets the brightness of each channel, I decided to name the uniform `u_offset` instead of `u_white`.

```
uniform mat4 u_matrix; // the RGBA matrix
uniform vec4 u_offset; // the white column
```

Calculating the new colour is a matter of matrix multiplication, which we can do with the multiplication operator (`*`):

```
vec4 texel = texture2D(u_map, v_uv);
gl_FragColor = u_matrix * texel + u_offset;
```

At this point, we have two options. We can either multiply all colour matrices in JavaScript and pass a single matrix (and vector) to our shader program, or multiply them in the shader. I’ll pass all matrices to the shader separately so we get the complete picture in a single code sample.

```
uniform mat4 u_brightnessMatrix;
uniform vec4 u_brightnessOffset;
uniform mat4 u_contrastMatrix;
uniform vec4 u_contrastOffset;
uniform mat4 u_exposureMatrix;
uniform vec4 u_exposureOffset;
uniform mat4 u_saturationMatrix;
uniform vec4 u_saturationOffset;

void main() {
  vec4 texel = texture2D(u_map, v_uv);

  mat4 matrix = u_brightnessMatrix * u_contrastMatrix * u_exposureMatrix * u_saturationMatrix;
  vec4 offset = u_brightnessOffset + u_contrastOffset + u_exposureOffset + u_saturationOffset;

  gl_FragColor = matrix * texel + offset;
}
```
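If you’d rather take the first option and combine the matrices in JavaScript, a 4x4 multiply is all you need. Here’s a sketch using flat, column-major arrays, the layout `gl.uniformMatrix4fv()` expects (in practice, a library such as gl-matrix handles this for you):

```javascript
// Multiply two 4x4 matrices stored as flat, column-major arrays,
// so that element (row, col) lives at index col * 4 + row.
function multiplyMat4(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

const identity = new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]);
// Exposure at +0.5 is a uniform scale of RGB by 1.5:
const exposure = new Float32Array([1.5, 0, 0, 0, 0, 1.5, 0, 0, 0, 0, 1.5, 0, 0, 0, 0, 1]);
multiplyMat4(identity, exposure); // same as `exposure` — identity changes nothing
```

Combining on the CPU means the shader only receives one `u_matrix` and one `u_offset`, no matter how many effects are stacked.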

Lastly, we need to define brightness, contrast, exposure, and saturation as matrices. For brightness, we use an identity matrix and set the white column to adjust the brightness. Note that brightness does not adjust the alpha channel.

```
x = brightness (0-2)
  r g b a w
r 1 0 0 0 x
g 0 1 0 0 x
b 0 0 1 0 x
a 0 0 0 1 0
```

Remember the formula for contrast? We performed multiplication and shifted the range. We can do that with a matrix too:

```
x = contrast (0-2)
y = (1 - x) / 2
  r g b a w
r x 0 0 0 y
g 0 x 0 0 y
b 0 0 x 0 y
a 0 0 0 1 0
```

Exposure is similar to contrast, but without the brightness shift:

```
x = exposure (0-2)
  r g b a w
r x 0 0 0 0
g 0 x 0 0 0
b 0 0 x 0 0
a 0 0 0 1 0
```

We’re almost there! Let’s finish up with saturation:

```
// https://www.w3.org/TR/WCAG20/#relativeluminancedef
lr = 0.2126
lg = 0.7152
lb = 0.0722
s = saturation (0-2)
sr = (1 - s) * lr
sg = (1 - s) * lg
sb = (1 - s) * lb
  r    g    b    a w
r sr+s sg   sb   0 0
g sr   sg+s sb   0 0
b sr   sg   sb+s 0 0
a 0    0    0    1 0
```
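As a sanity check, we can build this matrix in JavaScript and confirm that applying it gives the same result as the `mix()` formulation from earlier (hypothetical helper names):

```javascript
// WCAG relative luminance weights, as used in the shader earlier
const LR = 0.2126, LG = 0.7152, LB = 0.0722;

// Build the 3x3 RGB part of the saturation matrix, for s in the 0-2 range
function saturationMatrix(s) {
  const sr = (1 - s) * LR, sg = (1 - s) * LG, sb = (1 - s) * LB;
  return [
    [sr + s, sg, sb],
    [sr, sg + s, sb],
    [sr, sg, sb + s],
  ];
}

// mix(grayscale, color, s) written out, as in adjustSaturation
function saturate([r, g, b], s) {
  const gray = r * LR + g * LG + b * LB;
  return [gray + s * (r - gray), gray + s * (g - gray), gray + s * (b - gray)];
}

// Apply the 3x3 matrix to an RGB colour
function apply(m, [r, g, b]) {
  return m.map(row => row[0] * r + row[1] * g + row[2] * b);
}
```

Both paths are algebraically the same: each matrix row blends the grayscale weights with the original channel, which is exactly what `mix()` does.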

Check out the demo.

## Filters

Doka’s filters are colour matrices as well, and the library allows you to add custom ones. Because we now know what colour matrices are and how to use them, implementing the filters is a matter of applying the right ones. First, we need to add another matrix to our shader so we can combine the filter with the other effects:

```
// Other uniforms hidden for brevity
uniform mat4 u_filterMatrix;
uniform vec4 u_filterOffset;

void main() {
  vec4 texel = texture2D(u_map, v_uv);

  mat4 matrix = u_brightnessMatrix * u_contrastMatrix * u_exposureMatrix * u_saturationMatrix * u_filterMatrix;
  vec4 offset = u_brightnessOffset + u_contrastOffset + u_exposureOffset + u_saturationOffset + u_filterOffset;

  gl_FragColor = matrix * texel + offset;
}
```

Now we can pass an arbitrary colour matrix as a filter! You can take the filters from the Setting filters section of Doka’s documentation or make your own.
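For example, here’s the classic sepia matrix applied in JavaScript. This is a sketch: the coefficients are the commonly used sepia values, not something Doka-specific, and `applyFilter` is a hypothetical helper.

```javascript
// The classic sepia colour matrix (RGB only; alpha row/column left out for brevity)
const sepia = [
  [0.393, 0.769, 0.189],
  [0.349, 0.686, 0.168],
  [0.272, 0.534, 0.131],
];

// Apply a 3x3 colour matrix to an RGB colour
function applyFilter(m, [r, g, b]) {
  return m.map(row => row[0] * r + row[1] * g + row[2] * b);
}

const toned = applyFilter(sepia, [0.5, 0.5, 0.5]); // mid-gray becomes a warm brown
```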

## Conclusion

We learned so much! We covered the essentials of colour correction: brightness, contrast, exposure, and saturation. We implemented them in GLSL and rewrote them as colour matrices. Near the end, we added filters, which allow users to spice up their pictures in an instant.

There’s a lot we can do with this. Perhaps you want to build another web-based Photoshop competitor in which users can deform, retouch, layer, and blend images. Maybe you’re building a new social media platform and want to allow users to colour correct, crop, and anonymise their pictures before uploading them, although getting Doka would save you a lot of work.

You can play around with all effects we’ve discussed in the final demo on CodePen.

Whatever your plans are, please share your creations on Twitter.