Linear gamma and linear digital capture are popular topics on the web at the moment.
The discussions arise from a concept known as "expose to the right," or ETTR. Some proponents suggest that the digital histogram can be used as an exposure meter, and that exposure should be based on the highlights in an image. Incredibly detailed measurements are being employed in attempts to justify ETTR. These details are frequently presented out of context, leading to circular debates.
The objective of this article is to demystify the term gamma. First, a little level set on the technology and terminology.
Light is a form of energy. Candelas, lumens, voltage, or photon counts can be used to record light intensity. Photographic exposure is measured on a logarithmic scale. This is photography 101. It is a factor in lighting distance, aperture settings, and exposure values. Exposure values simply normalize light intensity to a base-2 logarithmic scale: each one-EV step halves or doubles the light. Thus we have a progression of f/stops, shutter speeds, and ISO settings that match our exposure settings to a given light intensity.
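This base-2 relationship can be sketched in a few lines of Python. The function name and the base-ISO convention here are my own illustration, not anything from a camera maker's documentation:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV at base ISO: EV = log2(N^2 / t). One EV step doubles or halves the light."""
    return math.log2(f_number ** 2 / shutter_s)

ev_a = exposure_value(8, 1 / 125)    # f/8 at 1/125 s -> about EV 13
ev_b = exposure_value(5.6, 1 / 125)  # one stop wider -> about EV 12
```

Note that the nominal f/5.6 is really 8 divided by the square root of 2, rounded, so the difference works out to almost exactly one stop.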
A logarithmic scale is one where equal steps correspond to multiplication by a constant factor, the base of the logarithm. In the case of photographic light measurements, this base is 2: each step doubles the intensity.
Gamma (γ) is a power function: f(x) = x^γ, where x is an input value, γ is the exponent or gamma, and x^γ is the output value. The output value increases in a non-linear fashion as the input value increases. Note that a power curve is not the same thing as a logarithmic curve, though both are non-linear. Gamma 1.0 results in no scaling. This is illustrated in the figures below.
The RGB input values (0-255) shown are scaled 0 to 1 before gamma correction, and then the output is scaled back to 0 to 255. The same concepts hold true when we scale sensor voltage to perceived brightness. Which scale do we want to call linear: brightness, luminosity, voltage, or the color-encoded values? In the chart on the right, if we replace brightness with lumens on the Y-axis, the chart would appear as gamma 1.
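As a sketch of the scaling just described, here is a hypothetical helper that assumes a pure power-function gamma, with none of the piecewise segments real color spaces add:

```python
def gamma_correct(value: int, gamma: float) -> int:
    """Scale an 8-bit value to 0..1, apply f(x) = x**gamma, scale back to 0..255."""
    x = value / 255.0
    return round((x ** gamma) * 255)

assert gamma_correct(128, 1.0) == 128   # gamma 1.0: no scaling
lifted = gamma_correct(128, 1 / 2.2)    # an encoding gamma of 1/2.2 lifts the mid tones
```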
Gamma is also sometimes defined as the degree of contrast in the mid-level tones. In addition, there are gamma correction algorithms that apply more complex tone curves. These would be true transfer functions, not simple power functions. One example would be a Photoshop tone curve. Let's stick to the traditional, simple gamma function.
The values captured by digital sensors are simply numbers. The minimum and maximum values are a function of the circuit design, and this alone determines the dynamic range of a given sensor. These values are scaled and digitized, but there is no direct correlation between the number of bits recorded and the range of exposure values captured. The raw values are essentially proportional to the light intensity received; it is the photographic stop scale laid over them that is logarithmic. The range of this analogue data will be the same whether it is recorded as 8 bits, 12 bits, or 16 bits. It is simply a coincidence that binary encoding is base 2. The bit depth does affect the granularity of the data, or the number of unique tones within this range.
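Assuming the raw data is linear in photon count (how most sensors behave before any tone curve is applied), the effect of bit depth on granularity can be illustrated with a hypothetical helper:

```python
def levels_per_stop(bit_depth: int, stops: int = 5) -> list:
    """In linear data, each stop down from clipping contains half the code values."""
    total = 2 ** bit_depth
    return [total // 2 ** (s + 1) for s in range(stops)]

# A 12-bit raw file spends half of its 4096 levels on the brightest stop
assert levels_per_stop(12) == [2048, 1024, 512, 256, 128]
# The same five stops in an 8-bit file get far fewer levels
assert levels_per_stop(8) == [128, 64, 32, 16, 8]
```

The range covered is identical in both cases; only the number of unique tones within it changes.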
For output devices, in theory, the brightness of a display circuit follows a power function of the applied voltage, with an exponent of about 2. That is, doubling the applied voltage roughly quadruples the brightness. In the terminology above, this would be gamma 2.0.
In practice, most early display monitors turned out to have a gamma closer to 2.5. In other words, the devices did not precisely track the physics.
The correction could be applied at the input (capture), output (display), or somewhere in between. Of course it should never be applied twice. In the early days of broadcast video it was decided that this should be done at capture, in the studio cameras and equipment rather than in each consumer television. This approach soon migrated to the RGB values in digital processing. This is the fundamental basis of RGB gamma correction.
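The apply-once rule can be sketched as follows, assuming a simple power-law display and ignoring the piecewise segments of real-world curves (both function names are my own):

```python
def camera_encode(x: float, gamma: float = 2.2) -> float:
    """Capture-side correction: raise to 1/gamma to pre-compensate the display."""
    return x ** (1 / gamma)

def display_response(x: float, gamma: float = 2.2) -> float:
    """Display-side behavior: output brightness is a power of the input signal."""
    return x ** gamma

x = 0.25
once = display_response(camera_encode(x))                   # corrected once: back to linear
twice = display_response(camera_encode(camera_encode(x)))   # corrected twice: too bright
```

Corrected once, the display's power response exactly cancels the capture-side correction; corrected twice, the mid tones come out visibly brighter than the scene.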
Similar effects apply to most electronic devices. In fact, even human vision does not track luminosity on a perfectly logarithmic scale. The response of the human visual system to low light levels is not a scaled-down version of its response to high light levels. This is the domain of color science.
Today, most high-end output devices (monitors, projectors, and even televisions) have some sort of sophisticated calibration controls. But gamma correction has already been standardized in ICC RGB profiles, so it is an integral part of any digital workflow.
Any gamma correction will compress or expand some tones. Early on, Mac chose gamma 1.8 because it retains more detail in the shadows; the user base was desktop publishing. Windows chose 2.2 because it retains more detail in the highlights; the user base was presentation graphics. Of course, this is in relation to gamma 2.0, not gamma 1.0. These considerations were important with low-resolution, 8-bit images. With proper color management, either will print and display the same.
Some confusion arises from a variety of graphs that attempt to plot luminosity against voltage or RGB values. They seldom define what scaling is being used for luminosity. If it is EV, it rightfully should be shown on a linear scale. If it is lumens (or related), it should be shown on a logarithmic scale.
If we show the measurements in XYZ color values, they are based on luminous intensity; the Y value is linear with respect to luminance. If we show measurements in Lab color values, they are based on perceived intensity; L* is roughly a cube-root function of relative luminance, which is why some choose to call it perceptually linear, or gamma 1. If we show measurements in RGB, the values are simply representations of a color space, and they will all change if we change the color space.
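The Lab point can be made concrete. CIE L* is defined via (approximately) a cube root of relative luminance, not a logarithm; the function below follows the standard CIE formula, including its small linear segment near black:

```python
def lab_lightness(y: float) -> float:
    """CIE L* from relative luminance Y in 0..1 (Y = 1.0 is reference white)."""
    delta = 6 / 29
    f = y ** (1 / 3) if y > delta ** 3 else y / (3 * delta ** 2) + 4 / 29
    return 116 * f - 16

white = lab_lightness(1.0)   # reference white: L* = 100
gray = lab_lightness(0.18)   # 18% middle gray lands near L* = 50
```

The fact that an 18% gray card reads near L* 50 is exactly the "perceptually linear" behavior mentioned above.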
All digital imaging software has to perform some manipulation of the digital raw values captured by a sensor as part of the raw image processing. This is known as the image pipeline and consists of many components. If we shoot JPG, the only difference is that this processing is done in the camera at the time of image capture. The first important step is a process called demosaicing. This examines multiple sensor sites to determine the RGB values for a single image pixel. Most algorithms will examine at least two green sites in evaluating the effective luminosity. Then some form of color matching is needed to adjust for the color filter responses and the source illuminant of the scene. Finally, colors have to be converted to the desired output RGB color space. If they are not already in the software’s “Profile Connection Space”, a prerequisite conversion is needed. This is usually Lab or XYZ. Then, the destination RGB color space determines the gamma, gamut, reference white point, and RGB primaries that will be used.
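The final conversion step can be sketched for the common case of XYZ to sRGB. The matrix coefficients and the transfer curve below are the standard sRGB (IEC 61966-2-1) values; the helper function itself is my own minimal illustration, with no gamut mapping or rendering-intent handling:

```python
def srgb_encode(c: float) -> float:
    """sRGB transfer curve: linear segment near black, power segment above."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def xyz_to_srgb(x: float, y: float, z: float) -> tuple:
    """CIE XYZ (D65) -> 8-bit sRGB: linear matrix first, then gamma encoding."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return tuple(round(255 * srgb_encode(max(0.0, min(1.0, c)))) for c in (r, g, b))

# The D65 reference white maps to pure white
assert xyz_to_srgb(0.9505, 1.0, 1.089) == (255, 255, 255)
```

Note the two distinct stages: the matrix operates on linear values, and the gamma encoding is applied only at the very end, which mirrors the pipeline order described above.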
Some of the so-called experts have ventured into the slippery slopes of linear capture and linear gamma to support the assertion that a histogram can replace a light meter. This has led to incredibly detailed measurements of tone response curves and such. When this does not support the assertion, they claim that the cameras and meters are defective. The real problem is that they cannot see the forest for the trees. Step back from the trees and contemplate the forest.
Since photographic exposure works in stops, a logarithmic scale, either under- or over-exposure will shift even mid-tone values in a non-linear fashion. Software tools can offer excellent features for tone correction. This gives us increased exposure latitude, especially with raw formats. The simple fact is that our measurement of light is not linear.
Digital photographers should concentrate on basic photographic concepts and artistic attributes of photography. Linear capture and linear gamma will not improve your exposure techniques.
I hope you enjoyed this article. If you have any comments, or suggestions, I would welcome your input. Please send me an Email. Read more about ETTR or Tones and Zones.
Rags Int., Inc.
204 Trailwood Drive
Euless, TX 76039
December 8, 2007