When the color of the light illuminating a scene changes, the color of the light reflected by the objects in the scene changes as well.
Our vision adapts to the prevailing light conditions, and compensates to a large extent for changes in the color of ambient light. To a first approximation, we perceive colors as deviations from ambient light: when we look at a red book, our brain notices that the light coming from the book contains a higher ratio of red to green and blue light than the ambient light does, and thus perceives the book as red. This compensation is the reason why a white sheet of paper appears white to us both when viewed under green foliage and at sunset. The compensation is imperfect: under incandescent light, for instance, blue objects appear darker than they do under sunlight, due to the small amount of blue light emitted by incandescent bulbs. Still, we compensate well enough that we are mostly unaware of things changing colors around us when the color of the prevailing illumination changes.
The sensors of digital (and film) cameras do not compensate for the color of ambient light: they simply measure the amounts of red, green, and blue light present in each pixel of the scene. If these absolute values of colored light intensity are directly translated into print reflectivity values, or monitor brightness values, we will perceive color casts.
To understand this, assume that a camera, when photographing a white sheet of paper in direct sunlight, measures equal light intensities in the red, green, and blue channels, so that the intensities are in the ratios 1:1:1. When these values are translated into reflectivity values in a print, we obtain a print that reflects red, green, and blue light equally: such a print appears white. If we view the print under direct sunlight and rephotograph it, we will indeed again obtain 1:1:1 as the color intensity ratio. If we view the print under any kind of light (fluorescent, a green neon light, etc.), the light reflected by the sheet will have the same color composition as the ambient light, and the sheet will appear white.
Assume now that we take a photograph of a white sheet under incandescent light, which contains much more red light in proportion to blue light. The measured red : green : blue intensities may be in the ratio 4:2:1. If these values are again translated directly into reflectivity ratios in a print, we obtain a print that reflects twice as much red light, and only half as much blue light, as green light. Because of the way in which our brain compensates, such a print looks reddish under all light conditions.
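The arithmetic above can be sketched in a few lines of Python. The 4:2:1 ratio is the one assumed in the text; the data layout is purely illustrative:

```python
# Measured channel intensities under incandescent light, in the
# 4:2:1 ratio described in the text.
measured = {"red": 4.0, "green": 2.0, "blue": 1.0}

# Direct translation: print reflectivity proportional to the measured
# intensity, normalized so the brightest channel reflects fully.
peak = max(measured.values())
reflectivity = {channel: value / peak for channel, value in measured.items()}

# The print reflects twice as much red, and half as much blue, as green:
# the cast is baked into the print itself.
print(reflectivity)  # → {'red': 1.0, 'green': 0.5, 'blue': 0.25}
```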
The situation when a photo is viewed on a computer screen is more complicated. If the screen were the only light source in the room, and the photo filled the screen entirely, then our vision would compensate and reduce the color cast. But this is rarely the case: most often the screen shows other colors as well, including grays and whites that emit light in the ratios 1:1:1. Our vision associates "white" with this ratio, and our photo with ratio 4:2:1 would again appear reddish.
To alleviate color casts, when we take a photo, the camera tries to compensate for the type of ambient light, and tries to assign color ratios of 1:1:1 to objects that are white. There are two ways it does so:
While these color transformations reduce color casts, they are not perfect. Ambient light comes in an infinity of colors, while cameras have only a few predefined settings. The algorithms for auto white balance, while sophisticated, can be fooled by unusual backgrounds, scenes, or overall casts. Finally, the photographer is often at fault, selecting the wrong white balance setting for a scene. All of these factors translate into an imperfect compensation for the color of ambient light, and result in a color cast in the digital image.
Two of the most effective ways of eliminating color casts are to make white what was white, and to compensate for color temperature.
In Gimp, you can use the Whitebalance script to perform this correction. When using these correction tools, you need to pay attention to two things:
Technically, the correction is performed as follows. A pixel appears white when the amounts r, g, b of red, green, and blue light are equal: r = g = b. Assume that a pixel of a white object has color values r', g', b'. Then, to correct the color balance, we multiply the red value of every pixel by q/r', the green value by q/g', and the blue value by q/b'. The constant q is chosen so that the overall brightness of the image does not change: a naive choice would be q = (r' + g' + b') / 3, but this is not ideal, as the human eye is most sensitive to green light. Furthermore, the correspondence between pixel values in a JPEG image (from 0 to 255) and luminosity values as rendered by monitors is not linear. For these reasons, the actual correction implemented by the tools, including Whitebalance, is somewhat more complex.
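As a minimal sketch, the naive version of this correction (linear intensities, q chosen as the average of the white sample's channels) could look like the following. The function name and data layout are illustrative, not the actual Whitebalance implementation:

```python
def white_balance(pixels, white):
    """Scale each channel so the given white sample maps to neutral gray.

    pixels: list of (r, g, b) tuples holding linear light intensities.
    white:  (r', g', b') measured on an object known to be white.

    This is the naive q = (r' + g' + b') / 3 choice discussed in the text;
    real tools also account for gamma and the eye's sensitivity to green.
    """
    rw, gw, bw = white
    q = (rw + gw + bw) / 3.0
    return [(r * q / rw, g * q / gw, b * q / bw) for r, g, b in pixels]

# The 4:2:1 white sample from the incandescent-light example becomes
# neutral: all three channels end up equal after correction.
corrected = white_balance([(4.0, 2.0, 1.0)], white=(4.0, 2.0, 1.0))
```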
Planck's law of black-body radiation describes the intensity of each color as a function of the object's temperature.
Photographs taken using the "daylight" setting of a digital camera appear free of color cast when the ambient light originates from an object at about 5,300 K. If you happen to know the temperature of the object whose light was illuminating the scene, you can compute a color transformation that eliminates the color cast. For instance, an incandescent light bulb has a temperature of about 2,800 K: using this information, you can compute a color transformation that eliminates the red color cast in photos taken under incandescent light with a camera set to the "daylight" white balance setting. The transformation also works for light sources that are not "hot objects", such as the overcast sky, which is approximately equivalent to a body with a temperature of 6,500 K.
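The idea can be sketched as follows, under simplifying assumptions not made in the text: each channel is represented by a single wavelength (600, 550, and 450 nm are arbitrary choices), and the correction factor per channel is the ratio of black-body emission at the target temperature to emission at the source temperature:

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength, temp):
    """Spectral radiance of a black body (Planck's law), arbitrary scale."""
    a = 2.0 * H * C**2 / wavelength**5
    return a / (math.exp(H * C / (wavelength * K * temp)) - 1.0)

# One representative wavelength per channel -- an illustrative assumption.
CHANNELS = {"red": 600e-9, "green": 550e-9, "blue": 450e-9}

def correction(source_temp, target_temp=5300.0):
    """Per-channel factors mapping light at source_temp to target_temp."""
    return {ch: planck(w, target_temp) / planck(w, source_temp)
            for ch, w in CHANNELS.items()}

# For incandescent light (about 2,800 K), the blue channel is boosted
# the most and the red channel the least, counteracting the reddish cast.
factors = correction(2800.0)
```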
There are two ways to apply such transformations: