Before April 17th, 2005, there was relative peace in the galaxy.
Each camera manufacturer had its own proprietary raw file format. Some were even based on common file format standards such as TIFF/EP or TIFF 6.0.
With these standards, it was at least possible to parse the metadata using common routines. But many of the necessary tags were either missing or contained non-standard values. The image data was often compressed, usually with proprietary compression techniques. If a method for interpolation of the image data was identified, it was almost always “private”, just like the compression. But this should not have been a problem, since most competing image software claims that its own private interpolation methods are the best anyhow.
Thanks to Dave Coffin and “dcraw.c” much of the mystery was unmasked.
In my humble opinion, the ISO TIFF/EP standard is what we should all be emphasizing. But, since the ISO is neither a certification nor an enforcement agency, what we also need is an organization like Consumer Reports or PC Labs to join the party. I will show the research that led me to this conclusion in a few moments.
Nikon introduced some new technology in their SLR cameras that changed the way they calculated white balance with some of the camera settings. Thus, they changed some of their private “MakerNote” tags.
This was all a nuisance that could have been addressed by some diplomatic cooperation between the camera and image software vendors. It might have even suggested potential revisions to the existing standards. Instead, Thomas Knoll fired a shot across Nikon’s bow and ignited a firestorm of controversy. This resulted in a frenzy of vendor bashing, misinformation, and raw emotions.
I prefer a less emotional and more logical approach to the problem. If you would like to review a history of the events that led to the debate, have a look at: The Raw Debate. This also includes a quick review of the most relevant standards and an introduction to "metadata" if you need it.
Please bear with me for just a moment. You might find some of the gory details a bit technical. They are a necessary ingredient to understanding the problem. You don't need to understand everything in detail. Just try to collect some of the background concepts.
The sensors in your camera do not record light values the same way that our eyes and minds perceive them. They see “white” light filtered to portions of the spectrum by color filters, collectively known as the color filter array (CFA). Thus, different sensors and different filters may record the same light with different values. This is very specific to the unique construction and materials used in the sensor.
These are generally known as spectral response data points. They relate to how the photo sensors and the color filters in front of them measure light.
See the three sample sensors shown in the illustrations below.
These represent three different real sensors. There is no intent to represent any sensor from any specific manufacturer. The point to be made is that each sensor's color channels respond to the same intensity of light with different values. This is also referred to as spectral information. Some will call it “gray scale”, but that is misleading. Others will call it “radiance linear”, which is accurate but a little complex for my taste. So I will simply call it spectral for now.
Color is derived by mathematically combining these values.
When the colors are based on physical properties and measurements (physics), they are called chromatic. The corresponding light intensity values are referred to as photometric. Even if we created a sensor that responded exactly the same as the rods and cones in our eyes, it would not be colorimetric. That is simply because colorimetric colors are defined by how our brains respond to this stimulus. Thus, they are a psychological response, measured, defined, and standardized by the CIE. These are the color spaces we use in our color-managed workflow. They are often referred to as perceptual color spaces. That is because the intent is to have the values track linearly with how we perceive and differentiate tones and colors. The general topic of color measurement is called colorimetry.
The math behind converting the physical measurements to colorimetric values is color science. It is well documented, but not easy for the casual observer to absorb. It starts with tables of what are called tristimulus values, measured at spectral increments like one or ten nanometers. The end result is typically the red, green, and blue values evaluated to match a particular CIE color space. For standard file formats such as JPG and TIF, this is all done by software (firmware) in the camera itself. For raw formats, the sensor data is recorded simply as is.
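For the curious, the conversion can be sketched numerically. The sketch below integrates a sensor's spectral samples against color-matching-function tables. The wavelengths and table values here are coarse placeholders of my own for illustration only, not real CIE data, which is published at one or five nanometer increments.

```python
# Sketch of spectral-to-XYZ conversion: sum the spectral power against
# each color-matching function (tristimulus weighting). The tables below
# are coarse illustrative placeholders, NOT actual CIE 1931 data.

WAVELENGTHS = [400, 450, 500, 550, 600, 650, 700]  # nm, 50 nm steps

# Placeholder color-matching functions xbar, ybar, zbar
XBAR = [0.01, 0.34, 0.00, 0.43, 1.06, 0.28, 0.01]
YBAR = [0.00, 0.04, 0.32, 0.99, 0.63, 0.11, 0.00]
ZBAR = [0.07, 1.77, 0.27, 0.01, 0.00, 0.00, 0.00]

def spectrum_to_xyz(spectrum, step_nm=50):
    """Riemann-sum the spectrum against each color-matching function,
    normalized so the equal-energy spectrum has luminance Y = 1.0."""
    X = sum(s * x for s, x in zip(spectrum, XBAR)) * step_nm
    Y = sum(s * y for s, y in zip(spectrum, YBAR)) * step_nm
    Z = sum(s * z for s, z in zip(spectrum, ZBAR)) * step_nm
    norm = sum(YBAR) * step_nm
    return (X / norm, Y / norm, Z / norm)
```

The same weighted-sum machinery, with the manufacturer's measured sensor response in place of the standard observer, is what the spectral data points discussed below would enable.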
Some generic editor vendors would like to assume that all raw data has been measured in a completely linear space. That is, each sensor records the same maximum luminosity and scale across its assigned segment of the spectrum. That is seldom the case. Altering the raw data to make it fit would defeat the purpose of raw. Absent the spectral data points from the manufacturer, a presentation of the data in a CIE space such as xyY or XYZ, or other mathematical transforms provided by the manufacturer, some sort of calibration is needed. Some vendors will do it quietly behind the scenes, while others will leave it up to the end user.
To scientifically reproduce the colors accurately, the spectral sensor data is essential. An example is the efforts to archive images of artwork from the masters for future generations. On the flip side, even science will sometimes intentionally alter the image colors to visualize otherwise hidden image details. We see this in astronomical images where the spectrum is altered to show surface details or chemical composition of distant bodies. Both are admirable objectives.
If you want to see more about digital image construction and how it differs from film, see this: Digital versus Film discussion. Actually, there are more similarities than differences.
In the graphs above, it seems that the blue channel is weaker in two of the three examples. This leads some to assume that digital noise reduction should be initiated in the blue channel, which is obviously not always the case. Others would like to see things like the focus distance, or the image coordinates where focus was achieved by the lens. These are admirable objectives, and some image editors already show them. Just as obviously, some cameras or lenses don’t capture such metrics. Hopefully the standards and documentation can evolve with the technology.
White balance relates to the color temperature of the light source(s) illuminating the scene. Each natural or artificial light source has a unique color temperature. Sunlight and the atmosphere actually have the most variability here. Fluorescent light typically has spikes in different areas of the spectrum, requiring additional adjustments on another axis (tint). The proper white balance correction will preserve neutral tones (gray) and color fidelity.
Mathematically, there are multiple ways to achieve proper white balance. They all boil down to what can best be described as scalar values. Color science describes this as the correlated color temperature (CCT). To me, this is the same as saying the color temperature of the illuminating light.
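As an aside, the CCT of a light source can be estimated from its CIE 1931 chromaticity using McCamy's published cubic approximation. A minimal sketch:

```python
def mccamy_cct(x, y):
    """Estimate correlated color temperature (CCT, in kelvins) from
    CIE 1931 chromaticity (x, y) using McCamy's cubic approximation.
    Reasonably accurate for roughly 2850 K to 6500 K sources."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33
```

For the D65 white point (x = 0.3127, y = 0.3290) this returns roughly 6505 K, and for incandescent illuminant A (0.4476, 0.4074) roughly 2856 K, which matches the intuition that CCT is just the color temperature of the illuminating light.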
The adjustments can be designed to be applied to the sensor data (recommended) or to the resulting XYZ or RGB values (triplets). The CIE algorithms I can find seem to be based on photometric spaces such as XYZ and xyY. There is a serious problem here if the spectral sensitivity data points are unavailable and no transforms to XYZ or xyY are available.
The colorimetric RGB data is clearly the least recommended way. And the sensor may use more than three different filters, requiring more than three scalar values. That said, it is also possible to automatically correct the image itself as long as there is a known neutral area within it to use as a sample.
In the end, practical white balance considerations have a lot more to do with the illumination of the scene being photographed than with the specific sensor being used, as long as you use the right math in the right place. If you attempt to apply scalars intended for the spectral data to the RGB or XYZ data, they will not work.
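To illustrate the “right math at the right place” point, here is a minimal sketch of per-channel scalars applied to linear (pre-gamma) data, with the scalars derived from a known neutral patch as described above. The sample values in the usage below are invented for illustration.

```python
def scalars_from_neutral(neutral):
    """Derive white-balance scalars from a known-neutral (gray) sample,
    normalized to the green channel, as in the automatic correction
    from a neutral area described in the text."""
    r, g, b = neutral
    return (g / r, 1.0, g / b)

def apply_white_balance(rgb, scalars):
    """Apply per-channel scalars to a linear triplet. Applying these to
    gamma-encoded or colorimetric data instead would give wrong results,
    which is the 'right place' caveat in the text."""
    return tuple(v * s for v, s in zip(rgb, scalars))
```

For example, a neutral patch recorded as (0.40, 0.50, 0.25) under warm light yields scalars (1.25, 1.0, 2.0), and applying them returns the patch to equal channels (0.5, 0.5, 0.5).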
I’m one of fewer than three people in the world who do not believe that Nikon is evil. The Adobe "point of view gun" just didn't work on me. So I am doing some serious research into the raw standards and white balance topics. I am not quite finished, but I think that I have sufficient information to draw some conclusions.
I have written a utility based on Dave Coffin’s “dcraw” for my research. I removed all of the image processing routines and significantly enhanced the metadata parsing routines to show everything recorded by the camera. Then I provided an option to summarize only the tags related to image and color fidelity. Finally, I note any exceptions to the standards as defined by the standards documents.
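For readers curious what such tag parsing involves, here is a minimal sketch of walking the first image file directory (IFD) of a TIFF 6.0 file. It is a simplification of my own, not the actual utility: a real parser must also follow chained IFDs, sub-IFDs, MakerNotes, and values too large to fit inline.

```python
import struct

def read_ifd(data):
    """Parse the first IFD of a TIFF byte string and return a dict of
    {tag: (type, count, value_or_offset)}. Handles both byte orders;
    values larger than four bytes are left as offsets for brevity."""
    order = data[:2]
    if order == b'II':
        fmt = '<'          # little-endian (Intel)
    elif order == b'MM':
        fmt = '>'          # big-endian (Motorola)
    else:
        raise ValueError('not a TIFF header')
    magic, ifd_offset = struct.unpack(fmt + 'HI', data[2:8])
    if magic != 42:
        raise ValueError('bad TIFF magic number')
    (count,) = struct.unpack(fmt + 'H', data[ifd_offset:ifd_offset + 2])
    entries = {}
    pos = ifd_offset + 2
    for _ in range(count):
        tag, typ, n, val = struct.unpack(fmt + 'HHII', data[pos:pos + 12])
        entries[tag] = (typ, n, val)
        pos += 12
    return entries
```

The fixed 12-byte entry layout is what makes it possible to parse the standard metadata of so many different raw formats with common routines, as noted earlier.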
I have images from Nikon, Kodak, Fuji, Canon, Leaf, Leica, and Sigma (Foveon) for this initial testing.
I will post all the supporting documentation just as soon as I can. There are already some very interesting results. I’m getting pretty excited about it. But it will take some time to finish. In the meantime, I do have some Preliminary Reports. But I must caution you that these are still in an early construction phase. Treat them that way please.
I am confident that these will already convince you that Nikon is currently the leading supporter of the ISO TIFF/EP standards, and a clear leader in scientifically correct digital raw manipulation. Canon is coming close, but only with its newest cameras. Kodak was using the TIFF/EP standards, but they have withdrawn from this segment of the market, so it is a moot point.
If you would like to contribute a test (raw) image, or try this utility yourself, please send me an Email and I will accommodate you. I don’t currently have any raw images from Sony or Sinar. A daylight shot of a Macbeth CC chart would be a plus, but unnecessary. I’m only interested in serious inquiries.
Black Body Temperature (K)
Most consumer images are not taken with a strictly pure and standard light source. Thus, there are a variety of methods currently employed to address custom preset and automatic white balance.
Since the existing standards did not completely address the needs of all image processors, and compliance was voluntary, some image information was missing or placed in private tags. Thus, some reverse engineering techniques were commonly employed. This is simply free market exploitation.
First and most important, I have not been convinced that there is any real encryption here. The “dcraw” code is using translate tables that look similar to those routinely used to convert Japanese characters (probably romaji, but possibly katakana or hiragana) to a generic European UTF codepage. I have not verified this with 100% certainty, but that’s my story and I’m sticking to it.
The algorithm is bizarre for sure. It also includes the serial number and shutter release count. Since the keys are supplied with the data, this could only be described as very weak encryption. But I have seen no conclusive evidence yet that it actually results in the correct white balance. And I have seen no Nikon statement that says they actually encrypted anything. They do rightly claim publicly that they have the legal right to encrypt raw data.
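To make the “keys supplied with the data” point concrete, here is a schematic sketch of the kind of table-driven keystream scramble the “dcraw” code reverses. The substitution tables and key schedule below are arbitrary placeholders of my own, not the actual dcraw tables or Nikon's algorithm; the point is only that an XOR keystream seeded by values stored in the same file is trivially reversible.

```python
# Placeholder substitution tables -- NOT the real dcraw "xlat" tables.
XLAT0 = [(i * 7 + 3) % 256 for i in range(256)]
XLAT1 = [(i * 13 + 5) % 256 for i in range(256)]

def keystream(serial, shutter_count, length):
    """Derive a byte stream from the camera serial number and shutter
    release count -- the 'keys supplied with the data'."""
    key = 0
    for shift in (0, 8, 16, 24):
        key ^= (shutter_count >> shift) & 0xFF
    ci = XLAT0[serial & 0xFF]
    cj = XLAT1[key]
    ck = 0x60
    for _ in range(length):
        cj = (cj + ci * ck) & 0xFF
        ck = (ck + 1) & 0xFF
        yield cj

def unscramble(block, serial, shutter_count):
    """XOR the block with the keystream. Applying it twice restores the
    original bytes, which is why this is at best very weak encryption."""
    return bytes(b ^ k for b, k in
                 zip(block, keystream(serial, shutter_count, len(block))))
```

Since the serial number and shutter count ride along in the file's own metadata, anyone holding the file also holds the key, which is the heart of the "is this really encryption?" question.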
Encryption is based on intent as much as the algorithms. For example, in WWII the US used Navajo code talkers for secure communications. The Navajo language is simply a form of encoding that the Japanese never caught on to. Using constantly changing phrases and names for military assets provided the real encryption of the communications. These encrypted phrases were known only to select members of the military. And this knowledge had only a short (usually daily) useful life.
On June 20th, while running a test of the supposedly “encrypted” white balance, I realized that the "dcraw" image software utilizes the Nikon camera curves in its color interpolation. This is not directly related to the encryption topic, but it has been claimed for some time that Adobe can’t support these private curves. So, how can they find and use the private WB tags for all the earlier Nikon and other cameras? They seem to be able to address the private decompression. They ignore Nikon’s private interpolation techniques. So, what’s the problem?
Nikon introduced new technology with the D2X that employs new sensors in the viewfinder to measure the color temperature of the ambient light before the shutter is opened. This does require a change in the calculations. It is more in line with color science.
The answer is that this is not a technical problem, not an encryption problem, and not a copyright or trade secret problem. It is simply a political game between Adobe and the rest of the world.
The CIE standards for color rendition are based on solid color science. They know the difference between spectral sensor data (raw), photometric or chromatic data (CIE XYZ), and colorimetric data (CIE Lab). The CIE standards clearly address scientific white balance calculations. The ISO TIFF/EP standards clearly address both CIE spectral and white balance recording metrics. And that, folks, is where the real answer to the raw formats issue lies.
Most camera and software vendors are ignoring these because they are technically based on the light source, not the image data. In the CIE formulas white balance is applied after the spectral (raw) data has been interpolated but before the data is converted to a colorimetric color space. Some software vendors just do all of this in a single step along with Bayer pattern demosaicing.
After considerable research I did find a useful solution to the white balance enigma in the existing ISO TIFF/EP standard. The breakthrough came in a Fuji S3 image. This was the only one that had a real TIFF tag for the white point setting. It turned out to be a T60 (not ISO) tag. This tag is only valid for a YCbCr color space. The value matched the D65 color temperature (6500 Kelvin).
It also used the EXIF Light Source tag (37384), but here the value was “unknown”. And the EXIF white balance setting was “auto”, so this didn’t all compute.
So I went back to research mode to see if I was evaluating these right. The ISO standard allows the Kelvin color temperature to be encoded in the equivalent Light Source tag. If bit 15 is on, the remaining value is in Kelvin degrees. This is a useful metric, which should be usable by any software in any image color space to apply white balance adjustments. Now if we can only get the camera folks to put the Kelvin temperature in this tag and use the ISO TIFF/EP Light Source tag properly.
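Decoding a Light Source value under that bit-15 convention is trivial; a minimal sketch, assuming the encoding reads as I described it:

```python
def decode_light_source(value):
    """Decode a TIFF/EP LightSource value as described above: if bit 15
    is set, the low 15 bits carry the color temperature in kelvins;
    otherwise the value is one of the enumerated illuminant codes."""
    if value & 0x8000:
        return ('kelvin', value & 0x7FFF)
    return ('enumerated', value)
```

So a camera recording a 6500 K daylight white point would store 0x8000 | 6500, and any editor in any color space could recover the Kelvin temperature directly, which is exactly the usable metric argued for here.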
Adobe has been a leader in image editing and color management for some time. But they seem to have their own concepts about how digital raw files should be interpolated. That is why the camera vendors frequently do a “subjectively” better job of color rendering with their own software. That is why Adobe needs the Camera Raw color calibration sliders. And that is why Adobe is pushing the “linear raw” concept implemented in their vendor-owned DNG standard.
The bottom line is that there is a lot of useful data in the existing standards. Nikon and some other camera vendors are recording most of it. But some software editor vendors are ignoring important parts of it. That’s the real reason that some of us like the camera editor results better than the generic editor results. Still there are some camera vendors that don’t follow any public standards. Nikon is not one of them.
Nikon tried to improve the technology of automatic white balance by considering the ambient light. Instead of kudos they got kicked in the gonads.
I do support the basic objectives of OpenRAW. But I am not convinced they will achieve their currently stated goals (full documentation by all). I would rather see the goal stated as full conformance to ISO TIFF/EP or Adobe DNG standards. Then, better documentation of their EXIF MakerNote tags and any exceptions to the selected standard. In addition, I would like to see printed spectral sensor response charts as have been historically provided for film. Without these, color fidelity is purely a guess. Subjectively of course, you may still be very happy with the results.
As long as the rest of the data complies with verifiable and usable industry standards, this should not be a difficult or unreasonable request. But, since the ISO is neither a certification nor an enforcement agency, what we also need is an organization like Consumer Reports or PC Labs to join the party.
There are a few ISO TIFF/EP tags that I have not seen in any camera data. Among these are SpectralSensitivity (34852) and SpatialFrequencyResponse (37388). Spectral Sensitivity would be a name = value text string describing the number of unique spectral channels and the number of spectral bands (frequency/wavelength) included for each channel. Spatial Frequency Response would be a collection of arrays containing the data points. This is exactly how the CIE describes the values used for spectral to photometric conversion. The white balance adjustments are then applied during the next step (colorimetric conversion) in the CIE model. This is addressed in the LightSource tag (33423).
That is just my two cents. These are my own observations and opinions. They do not support or reject the claims of any manufacturer. I hope you also gained some new insight from this article. If you have any comments, or suggestions, I would welcome your input. Please send me an Email.
Cheers, Rags :-)
So long, and thanks for all the fish!
Rags Int., Inc.
204 Trailwood Drive
Euless, TX 76039
May 21, 2005
This page last updated on: Wednesday October 03 2007