Presenting images on the web with a higher apparent bit-depth

Here, I describe some experimentation with using spatiotemporal noise to increase the apparent bit-depth of images displayed over the web on conventional hardware.

We might sometimes want to present images in a browser for which we have a high-fidelity representation of the pixel values. For example, we might show a sinusoidal grating of a particular contrast or a rendering from a specialised software package (e.g., my previous post on rendering). This image data might have something like a 10-bit representation, in which each pixel can take on an integer value between 0 and 1023 (a 2¹⁰ range) in each of its colour channels.

We tend to be limited to an 8-bit pixel representation by current web image formats and display devices—which permits only 256 values per channel rather than the 1024 available with a 10-bit representation. Although 256 values doesn’t sound like that many, we tend to experience surprisingly few perceptual artefacts when viewing images with the restriction of an 8-bit pixel depth. This lack of perceptual artefacts with 8-bit images is aided by the details of the sRGB colour space, which handily allocates more bits to those values to which we are more visually sensitive.

However, it can be useful to be able to show images with a higher apparent bit-depth on conventional hardware and from within the web browser. For example, we might be trying to measure thresholds and require small changes in luminance. Helpfully, methods for dealing with the general problem of bit-depth limitations have been developed—see the great article by Allard & Faubert (2008) for a review. These methods typically involve adding spatial or spatiotemporal noise to the image and then using the human visual system to essentially average away the noise and recover a higher-fidelity image.
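As a minimal sketch of how one such method (the ‘noisy-bit’ approach of Allard & Faubert) works: each frame, a high-precision pixel value is quantised to 8 bits after adding a uniform random offset, so that the value displayed on average across frames converges on the original. The function names below are my own, and this is an illustration rather than the exact implementation behind the examples.

```javascript
// Noisy-bit quantisation, minimal sketch. `value` is a high-precision
// intensity expressed on the 8-bit scale (e.g. 127.37); `u` is a uniform
// random sample in [0, 1), drawn afresh for each pixel on each frame.
function noisyBitQuantize(value, u) {
  // Adding u before truncation makes the result round up with a
  // probability equal to the fractional part of `value`, so a value of
  // 127.37 is shown as 128 on roughly 37% of frames and 127 otherwise.
  return Math.max(0, Math.min(255, Math.floor(value + u)));
}

// Time-averaging (here done numerically; in practice done by the visual
// system) recovers the high-precision value.
function averageOverFrames(value, nFrames) {
  let sum = 0;
  for (let i = 0; i < nFrames; i++) {
    sum += noisyBitQuantize(value, Math.random());
  }
  return sum / nFrames;
}
```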

Here, I want to describe a couple of examples that I have been experimenting with that apply one such method to the presentation of images over the web on a conventional monitor.

The example below shows two luminance ramps, each going from slightly lower luminance on the left to slightly higher luminance on the right. The ramp on the top has no extra processing applied to it, and has some subtle bit-depth artefacts that may be visible—there might be a faint ‘banding’ apparent in the ramp. The ramp on the bottom has a small amount of noise added to it, with a different noise sample applied to each pixel and on each screen refresh. You might see a reduction in the artefacts compared to the ramp on the top—and hopefully without the noise being visible.
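The processing applied to the bottom ramp can be sketched as a per-frame dithering pass over a pixel buffer. Everything here (the names, the use of typed arrays) is illustrative; in a real page the pass would run on each `requestAnimationFrame` tick before drawing to a canvas, or inside a shader.

```javascript
// Per-frame dithering pass: maps a high-precision greyscale buffer to an
// 8-bit buffer, adding fresh uniform noise to every pixel. Illustrative
// sketch only; in the browser this would run once per
// requestAnimationFrame tick.
function ditherFrame(src, dst, rng = Math.random) {
  // src: Float32Array of intensities on the 8-bit scale (0..255)
  // dst: Uint8ClampedArray of the same length (assignment clamps)
  for (let i = 0; i < src.length; i++) {
    dst[i] = Math.floor(src[i] + rng());
  }
}

// A shallow luminance ramp whose total excursion is below one 8-bit
// level, which would band badly if simply rounded:
const width = 512;
const src = new Float32Array(width);
for (let x = 0; x < width; x++) src[x] = 127 + x / width;
const dst = new Uint8ClampedArray(width);
ditherFrame(src, dst);
```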

The second example below is particularly interesting. It depicts a grating that varies in log spatial frequency along the horizontal axis and log contrast along the vertical axis (sometimes referred to as a ‘Campbell-Robson’ chart), which is typically used to show the variation in contrast sensitivity with spatial frequency. When there is no noise added, there is an artefact in the form of a small luminance oscillation that continues to be present as the contrast approaches zero (this artefact has been noted before). This isn’t always visible (it is on my monitor but not on my laptop), but it would make it very difficult to measure a threshold when it is. With noise (toggle the selector below), the visible artefact is no longer there for me—which is very neat!
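For the curious, a chart of this kind is straightforward to compute per pixel. The parameterisation below is my own choice for illustration, not necessarily the one used in the demo: spatial frequency is log-spaced along x, and contrast is log-spaced along y, falling towards zero at the top.

```javascript
// One pixel of a Campbell-Robson style chart. Frequency increases
// logarithmically left to right; contrast decreases logarithmically
// bottom to top. Parameter values are illustrative.
function campbellRobsonPixel(
  x, y, width, height,
  minFreq = 1, maxFreq = 50, minContrast = 0.001,
) {
  // Log-spaced spatial frequency (cycles per image width) along x.
  const freq = minFreq * Math.pow(maxFreq / minFreq, x / width);
  // Log-spaced contrast along y: ~1 at the bottom, minContrast at the top.
  const contrast = Math.pow(minContrast, 1 - y / height);
  // Sinusoid about mean luminance; returns an intensity in [0, 1].
  return 0.5 + 0.5 * contrast * Math.sin(2 * Math.PI * freq * (x / width));
}
```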

[Interactive toggle: noise presence]

The above examples suggest that this might be a viable approach to certain web image presentation requirements. However, there are a few issues that I need to think more about:

  • The best way to generate the noise. It is surprisingly difficult to generate noise from within a GLSL shader. Here, I am using an implementation that I found within the three.js source, which is based on this blog post.
  • The degree of resource usage. Because the image needs to be updated on each monitor refresh, and with a new noise instance, the method can substantially increase the use of system resources (particularly relative to presenting what would otherwise be static images, as in these examples). This might cause the device to drop frames, particularly if the images are being shown at a large size or in fullscreen. The time cost of adding the noise would need to be measured, and the associated code possibly optimised.
  • The consistency across devices. Presenting images over the web necessitates accommodating substantial variation in computer platforms, display devices, and viewing conditions. It is unclear how much the effect of adding noise might depend on such variations. For example, monitors might have some temporal display settings that reduce the effectiveness of the noise.
  • The consistency across individuals. Some viewers might be more sensitive to the presence of the noise than others, leading to differences in the visibility of the noise.
  • The application to chromatic images. I have only considered the application of the method to greyscale images. It seems like something similar should work for chromatic images, but colour tends to add complexity and would need some careful thought.
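On the first point above: the widely circulated GLSL one-liner, the fract–sin hash that the three.js code appears to be a variant of, can be expressed in JavaScript for testing on the CPU. The constants are the conventional ones from that family of implementations; I haven't verified that this matches three.js's current source exactly, so treat it as an illustration.

```javascript
// JavaScript port of the fract(sin(dot(uv, vec2(a, b))) * c) hash that
// circulates in GLSL shader code. Illustrative, not a verified copy of
// the three.js implementation.
function fract(x) {
  return x - Math.floor(x);
}

function shaderHash(x, y) {
  const a = 12.9898, b = 78.233, c = 43758.5453;
  // Reducing the dot product modulo pi keeps sin() in a well-behaved
  // range on low-precision GPU hardware (irrelevant on the CPU, but
  // kept for fidelity to the shader version).
  const sn = (x * a + y * b) % Math.PI;
  return fract(Math.sin(sn) * c);
}
```

The appeal of a hash like this in a shader is that it is stateless: each fragment derives its noise sample from its own coordinates (plus, typically, a per-frame seed), with no need for a random number generator with shared state.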


  1. Allard, R. & Faubert, J. (2008) The noisy-bit method for digital displays: Converting a 256 luminance resolution into a continuous resolution. Behavior Research Methods, 40(3), 735–743.