Confused? Read on.
With all of the attention properly being paid to color management, I’ve been asked whether it matters which color space one uses in the camera — and if so, which one to use. Without going into an in-depth discussion of the concept of “color space”, I’ll try to clear up some of the confusion in this post.
Most modern cameras allow us to choose a “color space”. The most common choices are Adobe RGB 1998 and sRGB; the Adobe space is “larger”, but whether that makes a difference, as we will see, depends on how we are going to use our image. Most modern cameras come with a default selection — so whether we choose or not, we have in fact designated a color space. My Nikon D3 comes from the factory set to the narrower sRGB space; I’ve reset my camera to Adobe RGB 1998.
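To see what “larger” means in practice, here is a quick numeric sketch (my illustration, not part of the original discussion) using the published linear-RGB-to-XYZ matrices for the two spaces: the most saturated green that Adobe RGB 1998 can describe has no legal sRGB equivalent — its sRGB coordinates come out negative, i.e. outside the sRGB gamut.

```python
import numpy as np

# Linear RGB -> CIE XYZ matrices (D65 white point) from the published
# Adobe RGB (1998) and sRGB specifications.
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def adobe_to_srgb_linear(rgb):
    """Convert a linear Adobe RGB triplet to linear sRGB via CIE XYZ."""
    return XYZ_TO_SRGB @ (ADOBE_RGB_TO_XYZ @ np.asarray(rgb, dtype=float))

# The purest green Adobe RGB can represent:
srgb = adobe_to_srgb_linear([0.0, 1.0, 0.0])
print(srgb)  # red and blue components are negative: outside the sRGB gamut
```

The negative components are the mathematical signature of “out of gamut” — sRGB simply has no coordinates for that color, which is exactly why the Adobe space is called larger.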
Will my selection make a difference? It depends:
If I am shooting .jpegs. When we shoot .jpegs, the camera takes the data the sensor captures and “compresses” it. The goal is to create a smaller image file. In essence, the computer within the camera “edits” the RAW data — deciding what to keep and what to throw out. The color space selection is part of the instruction set the camera uses when making its “keep it” or “lose it” decisions.
If we are shooting .jpegs only, those decisions are irrevocable. That is why the .jpeg is called a “lossy” format.
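That irrevocability can be sketched numerically (again, my illustration, assuming the standard published matrices for the two spaces): squeeze an out-of-gamut Adobe RGB color into sRGB by clipping — roughly what happens when the camera commits to a color space while writing a .jpeg — and converting back does not recover the original value. The clipped saturation is gone for good.

```python
import numpy as np

# Linear RGB -> CIE XYZ matrices (D65) from the Adobe RGB (1998)
# and sRGB specifications.
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
])
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def clip_roundtrip(adobe_rgb):
    """Adobe RGB -> sRGB (clipped into gamut) -> back to Adobe RGB, all linear."""
    srgb = XYZ_TO_SRGB @ (ADOBE_RGB_TO_XYZ @ np.asarray(adobe_rgb, dtype=float))
    srgb_clipped = np.clip(srgb, 0.0, 1.0)   # forcing the color into sRGB
    xyz = np.linalg.inv(XYZ_TO_SRGB) @ srgb_clipped
    return np.linalg.inv(ADOBE_RGB_TO_XYZ) @ xyz

# Pure Adobe RGB green does not survive the round trip:
print(clip_roundtrip([0.0, 1.0, 0.0]))  # no longer (0, 1, 0) — saturation lost
```

Once the values are clipped, no later conversion can tell what they were before — the same one-way decision the camera makes for us when it bakes a color space into a .jpeg.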
I don’t know too many people who shoot .jpegs only. Most of the people I know who have cameras capable of shooting RAW shoot either RAW exclusively or RAW and .jpeg concurrently. With the decreased cost of storage — both in the camera and on computers — and the increased speed of camera buffers, there’s not much reason to forgo the many benefits of shooting RAW, but that’s a rant for another time.
Suffice it to say, color space does matter in the in-camera conversion to .jpeg.
If I am shooting RAW exclusively, the color space selection does not matter. Simply stated, a RAW image contains all of the data that hit the sensor. The camera does not sift, winnow, and throw away data. So, no matter which color space is selected, no instruction is followed that says “bring the image down to the specifications of this space”.
With a RAW image, the color space decision only becomes important when we “output” from our post-production software. At that point, we take our RAW image and choose the color space that best suits our output needs — whether that be the web, a computer screen, an inkjet printer, or a press. Since the conversion of a RAW image does not alter pixels but simply creates an instruction set, we will always have that RAW image to work with and can use it in multiple color spaces (so long as we don’t delete it in favor of the .psd, .jpeg, or .tif we may have created from it). Most people seem to agree that for monitors and web use, sRGB is the space of choice. For regular printing, however, both Adobe RGB and sRGB have proponents — and the decision is often made by which profile a lab uses rather than by which better suits the image. For press, the most common space is CMYK.
All we need to know is that when shooting RAW, the color space decision in the camera is not critical.
Well, sort of. There are some exceptions.
You didn’t really expect a simple clear cut answer from me, did you? I’m an academic. We don’t see the world in clear cut terms.
There are situations in which the color space choice in the camera WILL MATTER, even when shooting RAW exclusively.
First, and foremost, it will affect the image we see on our LCDs. Why? Because that image is a .jpeg created by the camera from the RAW data. Even if we are shooting RAW only, the camera has to create a .jpeg to show on the LCD and to serve as the basis of the image’s histogram. For more on this, see the work of the late Bruce Fraser, whose book, Real World Camera Raw, was one of the finest and clearest of all the photography books I’ve read.
(For those of you who have wondered why, when you shoot RAW and set the camera to black and white, you get a B&W image on the LCD but a full-color image when you go to process the RAW: the answer is that the B&W image on the LCD is the camera-created .jpeg, but, since the image was shot RAW and nothing was thrown out, all of the color data remains for use in post-production.)
And, second, some RAW processors can use the camera’s color space selection as a starting point for adjusting the RAW images. For example, Nikon’s NX2, when opening my RAW .nef images, will start with the color space selected in the camera. (It will also use the “picture control” settings from the camera — things like sharpening, tone compensation, and saturation — as starting points.) The key here is that although the in-camera choices are being respected, ALL of the RAW data remains, which allows us to process the image in any way we want. Nothing has been or will be thrown away. Once more, all we are creating is an “instruction set”; we are not changing pixels.
I don’t know about Canon’s software or the other independent RAW processors, but I do know that neither Adobe Camera Raw nor Lightroom reads .nef files the way Nikon’s own software does; neither strictly “honors” the camera’s color space setting the way NX2 does. Is this a big deal? No, not really. First, Adobe has created camera profiles for both programs that come very close to replicating Nikon’s starting points. And, second, I don’t find the discrepancy in color space to make a difference in the ultimate RAW-processing decisions I make.
So, does it matter which color space you choose? Maybe. Which one do I choose? The larger Adobe RGB space.
But Wait, There’s More: Warning — Geek Alert
To make sure I understood these concepts before writing about them, I went to the Guru of Color Space — Eddie Tapp. Our email chain is attached here for anyone who wants to see how geeky we both can be. Thanks, Eddie, for the help and for letting me publish the emails.
(Copyright: PrairieFire Productions/Stephen J. Herzberg — 2009)