
Color Casts in Digital Photography

Why do some photos have color casts?


When the color of the light illuminating a scene changes, the color of the light reflected by the objects in the scene also changes.

Our vision adapts to the prevailing light conditions, and compensates to a large extent for changes in the color of ambient light.  To a first approximation, we perceive colors as deviations from ambient light: when we look at a red book, our brain notices that the light coming from the book contains a higher ratio of red to green and blue light, compared to ambient light, and thus perceives the book as red.  This compensation is the reason why a white sheet of paper appears to us as white both when viewed under green foliage, and at sunset.  The compensation is imperfect: under incandescent light, for instance, blue objects appear darker than they do under sunlight, due to the small amount of blue light emitted by incandescent bulbs.  Still, we compensate well enough that we are mostly not aware of things changing colors around us when the color of the prevailing illuminating light changes.

The sensors of digital (and film) cameras do not compensate for the color of ambient light: they simply measure the amounts of red, green, and blue light present in each pixel of the scene [1].  If these absolute values of colored light intensity are directly translated into print reflectivity values, or monitor brightness values, we will perceive color casts.

To understand this, assume that a camera, when photographing a white sheet of paper in direct sunlight, measures equal light intensities in the red, green, and blue channels, so that the intensities are in the ratios 1:1:1.  When these values are translated into reflectivity values in a print, we obtain a print that equally reflects red, green, and blue light: such a print appears white.  If we look at the print under direct sunlight, and rephotograph it, we will indeed again obtain 1:1:1 as the color intensity ratio.  If we look at the print under any kind of light (fluorescent, a green neon light, etc.), the print will reflect light with the same color composition as the ambient light, and it will appear white.

Assume now that we take a photograph of a white sheet under incandescent light, which contains much more red light in proportion to blue light.  The measured red : green : blue intensities may be in the ratio 4:2:1.  If these values are again translated directly into reflectivity ratios in a print, we obtain a print that reflects twice as much red light, and only half as much blue light, as green light.  Because of the way in which our brain compensates, such a print looks reddish under all light conditions.
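The arithmetic of this example can be spelled out in a short Python sketch; the 4:2:1 values are the illustrative ones from the example above, not measurements.

```python
# Illustrative sketch: how raw channel intensities translate into a cast.
# A white sheet under incandescent light might be recorded with
# red : green : blue intensities in the ratio 4:2:1.
measured = {"red": 4.0, "green": 2.0, "blue": 1.0}

# Translated directly into print reflectivities, the print reflects twice
# as much red, and half as much blue, as green light:
relative_to_green = {ch: v / measured["green"] for ch, v in measured.items()}
print(relative_to_green)  # {'red': 2.0, 'green': 1.0, 'blue': 0.5}
```

Since our vision judges colors relative to ambient light, a print with these reflectivity ratios looks reddish under any illumination.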

The situation when a photo is viewed on a computer screen is more complicated.  If the screen were the only light source in the room, and the photo entirely filled the screen, then our vision would compensate and reduce color casts.  But this is rarely the case: most often the screen also shows other colors, including grays and whites that emit light in the ratios 1:1:1.  Our vision associates "white" with this ratio, so our photo with ratio 4:2:1 again appears reddish.

To alleviate color casts, when we take a photo, the camera tries to compensate for the type of ambient light, and tries to assign color ratios of 1:1:1 to objects that are white.  There are two ways it does so:

  • If the camera is set to a specific white balance mode, such as "sunlight", "incandescent", or "cloud cover", then the camera software applies a fixed transformation to the color information, which compensates for the typical lighting conditions existing under sunlight, incandescent bulb illumination, cloud cover, and so on.
  • If the camera is set to Auto White Balance, then the camera software tries to infer the color of the prevailing ambient light from the image itself, and applies a color transformation that depends on this estimate. [2]

While these color transformations reduce color casts, they are not perfect.  Ambient light comes in an infinity of colors, while cameras have only a few predefined settings.  The algorithms for auto white balance, while sophisticated, can be fooled by unusual backgrounds, scenes, or overall casts.  Finally, the photographer is often at fault, selecting the wrong white balance setting for a scene.  All of these factors translate into an imperfect compensation for the color of ambient light, and result in a color cast in the digital image.

How can we eliminate color casts?


Two of the most effective ways of eliminating color casts are: make white what was white, and compensate for color temperature.

Make white what was white

If the image contains a white or gray object, then we can transform the colors of the whole image to ensure that the object appears white.  Many photography programs, such as Adobe Photoshop, Adobe LightRoom, Olympus Studio, Bibble Pro, Capture One, and more, offer a tool that samples the color of a pixel or of a small region, and transforms the image so that the pixel or region appears neutral in color (white or gray).  To use the tool, simply select a portion of the image that is white (a white shirt, white socks, a sheet of paper, and so forth).  If you use Gimp, you can use the Whitebalance script to perform this correction.  When using these correction tools, you need to pay attention to two things:

  • Color often varies from one pixel to the next: if possible, measure the color of an area.  Adjacent pixels usually have slightly different colors, due to noise in the image.  This is especially true of images taken by point-and-shoot cameras, whose small sensors tend to generate noisy images at high ISO settings.  If you use the color of a single pixel to compute the white balance, you will consequently obtain somewhat random, unpredictable results.  If your photography tool allows it, use the average color of a small area instead.
  • Beware of saturated color channels.  Sometimes, a white object saturates a color channel: the intensity of red, green, or blue light is so high that it is recorded as the maximum value (typically 255, in jpg photos).  When this saturation (also known as clipping) of a color channel occurs, we no longer have precise color information about the original object, and the color-cast correction will not work properly.  Avoid using very bright objects, such as white shirts in full sunlight, as the white reference.
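The clipping check above can be illustrated with a small Python sketch; the region format (a list of (r, g, b) tuples) and the safety margin are our own choices for this example, not what any particular tool implements.

```python
def is_clipped(region, max_value=255, margin=2):
    """Return True if any channel of any pixel in the region is saturated
    (at, or within `margin` of, the maximum representable value)."""
    return any(channel >= max_value - margin
               for pixel in region
               for channel in pixel)

# A bright white shirt in full sunlight may clip one or more channels,
# making the region unusable as a white reference:
print(is_clipped([(255, 254, 250), (255, 253, 249)]))  # True

# A dimmer white object keeps all channels well below the maximum:
print(is_clipped([(220, 218, 210), (221, 219, 212)]))  # False
```

A sensible workflow is to reject a candidate white reference whenever this check fires, and pick a less bright neutral area instead.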

Technically, the correction is performed as follows.  A pixel appears white when the amounts r, g, b of red, green, and blue light are equal: r = g = b.  Assume that a pixel of a white object has color values r', g', b'.  Then, to correct the color balance, we multiply the red value of every pixel by q/r', the green value by q/g', and the blue value by q/b'.  The constant q is chosen so that the overall brightness of the image does not change: a naive choice would be q = (r' + g' + b') / 3, but this is not ideal, as the human eye is most sensitive to green light.  Furthermore, the correspondence between pixel values in a jpg image (from 0 to 255) and luminosity values as rendered by monitors is not linear.  For these reasons, the actual correction implemented by the tools, including Whitebalance, is somewhat more complex.
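As a minimal sketch of this correction, here is the naive version in Python, ignoring the green weighting and the nonlinear encoding just discussed, and assuming pixels are (r, g, b) tuples of linear intensity values:

```python
def white_balance(pixels, white_sample):
    """Scale each channel so that white_sample = (r', g', b') maps to a
    neutral gray, using the naive brightness-preserving constant q."""
    r1, g1, b1 = white_sample
    # Naive choice: q = (r' + g' + b') / 3.  Real tools weight green more
    # heavily and account for the nonlinear encoding of jpg pixel values.
    q = (r1 + g1 + b1) / 3.0
    return [(r * q / r1, g * q / g1, b * q / b1) for r, g, b in pixels]

# A reddish "white" pixel (120, 100, 80) sampled as the reference
# becomes the neutral gray (100, 100, 100) after correction:
corrected = white_balance([(120, 100, 80)], (120, 100, 80))
print(corrected)  # [(100.0, 100.0, 100.0)]
```

Every other pixel in the image is scaled by the same three per-channel factors, so objects that reflected ambient light neutrally all become gray.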

Compensate for color temperature

Even if there is no white or gray object in the image, you can still eliminate or reduce color casts by transforming the image colors.  One particularly effective way of doing this is to change the color temperature of the image.  Light from "hot" objects, such as the sun, or an incandescent bulb, has a color composition that depends on the object's temperature in a way that is precisely known.  A physics equation, known as the law of black-body radiation [3], describes the intensity of light at each color as a function of the object's temperature.

Photographs taken using the "daylight" setting of a digital camera appear free of color casts when the ambient light originates from an object at about 5,300 K.  If you happen to know the temperature of the light source illuminating the scene, you can compute a color transformation that eliminates the color cast.  For instance, an incandescent light bulb has a temperature of about 2,800 K: using this information, you can compute a color transformation that eliminates the red color cast in photos taken under incandescent light, with a camera set to the "daylight" white balance setting.  The transformation also works for light sources that are not "hot objects", such as the overcast sky, which is approximately equivalent to a body with a temperature of 6,500 K.
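As an illustration, the law of black-body radiation can be used to compute how much redder incandescent light is than daylight. The following Python sketch evaluates Planck's law at two representative wavelengths; the choice of 620 nm for red and 460 nm for blue is our own assumption, not a standard taken from any camera.

```python
import math

# Physical constants (SI units).
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength, temperature):
    """Planck's law: spectral radiance of a black body at the given
    wavelength (m) and temperature (K)."""
    return (2 * H * C**2 / wavelength**5 /
            (math.exp(H * C / (wavelength * K * temperature)) - 1))

RED, BLUE = 620e-9, 460e-9   # representative channel wavelengths (assumed)

for t in (2800, 5300):
    ratio = planck(RED, t) / planck(BLUE, t)
    print(f"{t} K: red/blue radiance ratio = {ratio:.2f}")
```

Running this shows that at 2,800 K the red channel receives several times more light than the blue channel, while at 5,300 K the two are nearly balanced, which is why incandescent light produces a red cast on a camera set to "daylight".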

There are two ways to apply such transformations:

  • Photos in RAW format: if your photo was taken in RAW format, virtually all the tools that produce jpg images from RAW files let you choose the color temperature of the light under which the photo was taken, allowing you to remedy most color casts.
  • Photos in JPG format: if your photo was taken in jpg, various tools, among them Adobe LightRoom, provide a slider that lets you warm or cool the image colors.  This warming and cooling offers only an approximate compensation for color temperature, as no information on the actual color temperature is used.  Another way to compensate for color casts in jpg photos is to use Gimp with the Colortemp script.  The Colortemp script lets you choose the color temperature to which the camera was set, and the color temperature under which you believe the photo was taken; the tool then performs a transformation computed from the law of black-body radiation, eliminating or reducing the color cast.  The script can also be used to slightly warm or cool an image's colors.
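A correction of this kind, driven by two color temperatures, can be sketched as follows. This is not the actual Colortemp script: the channel wavelengths and the assumption of linear RGB values are ours, and a real tool must also handle the nonlinear encoding of jpg pixel values.

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength, temperature):
    """Black-body spectral radiance at wavelength (m) and temperature (K)."""
    return (2 * H * C**2 / wavelength**5 /
            (math.exp(H * C / (wavelength * K * temperature)) - 1))

# Representative channel wavelengths (an assumption for this sketch).
CHANNELS = (620e-9, 540e-9, 460e-9)  # red, green, blue

def temp_correction(camera_temp, scene_temp):
    """Per-channel gains that correct a photo taken under light of
    scene_temp with the camera set to camera_temp, normalized so the
    green channel is unchanged (linear RGB values assumed)."""
    gains = [planck(wl, camera_temp) / planck(wl, scene_temp)
             for wl in CHANNELS]
    return [g / gains[1] for g in gains]

# Camera set to "daylight" (about 5,300 K), scene lit by an incandescent
# bulb (about 2,800 K): the correction attenuates red and boosts blue.
r_gain, g_gain, b_gain = temp_correction(5300, 2800)
print(r_gain < 1.0 < b_gain)  # True
```

Multiplying each pixel's channels by these three gains removes the red cast, which is the essence of what a temperature-based correction computes.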

References

  1. This is a simplification: in truth, in most digital cameras each pixel is sensitive to only one color; the pixels sensitive to red, green, and blue are interleaved, and the full color information at every pixel is then reconstructed by interpolation.  Still, these considerations are not relevant to the topic discussed here.
  2. Some cameras, such as the Olympus E-3, have a sensor that measures the color of ambient light, and use this information in the computation of the color transformation.
  3. Black Body (from Wikipedia)