8-bit DNGs - how is this done?




All this discussion is very interesting, but what I still do not understand well is the methods for highlight recovery.

For one thing, you must keep in mind that the image the raw converter (any raw converter) will show you at any one time is not raw data, but some interpretation of the raw data. If the software were to show you the full dynamic range, many images would look kind of dull. For this reason, the converter will usually apply some default gamma curve that enhances the contrast in the midtones, even when the brightest highlights get pushed over the edge. But those highlights are actually alive and well in the raw data, and if you want them back, you can, just by moving some slider that changes the curve.
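The curve behaviour described above can be put in a minimal Python sketch. The curve shapes, gain figure, and slider name here are invented for illustration; no converter's actual rendering math is this simple:

```python
def render(x, gain=1.6, gamma=2.2):
    """Hypothetical default rendering: a contrast-boosting gain plus a
    display gamma. Values pushed past 1.0 display as pure white."""
    return min(1.0, (x * gain) ** (1.0 / gamma))

def render_recovered(x, slider_stops=-1.0, gamma=2.2):
    """Pulling a hypothetical exposure/highlights slider back reduces the
    gain, so the same raw value lands below display white again."""
    gain = 1.6 * (2.0 ** slider_stops)
    return min(1.0, (x * gain) ** (1.0 / gamma))

raw = 0.9  # near the top of the range, but not clipped in the raw data
print(render(raw))            # 1.0 -> looks "blown" on screen
print(render_recovered(raw))  # below 1.0 -> the tonality was there all along
```

The point is that the raw value 0.9 never changed; only the mapping from raw data to display values did.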

 

And then there are those highlights that are truly blown. But even then, there is some chance of recovery, provided there is some tonality preserved in one or two of the colour channels. The sensor’s sensitivity for the red, green, and blue parts of the spectrum isn’t identical, so, for example, the red and green channels may be blown while the blue channel is not. With data like this, the raw converter can try to recover highlight detail by reconstructing the missing information in the red and green channels from the blue channel. This can cause slight colour shifts, but as we are dealing with highlights, in most cases one wouldn’t notice anyway.
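A toy version of that reconstruction in Python. The ratio-based fill-in and the pixel values are illustrative assumptions, not the actual recovery code of any converter:

```python
CLIP = 1.0  # normalised sensor saturation level

def recover_pixel(pixel, reference):
    """If red and green are clipped but blue survives, rebuild them from
    blue using the colour ratios of a nearby unclipped reference pixel."""
    r, g, b = pixel
    rr, rg, rb = reference
    if b < CLIP and r >= CLIP and g >= CLIP:
        r = b * (rr / rb)  # assume the local colour balance still holds
        g = b * (rg / rb)
    return (r, g, b)

# red and green hit the clip point; blue still carries tonality
print(recover_pixel((1.0, 1.0, 0.8), (0.80, 0.85, 0.70)))
```

The "local colour balance still holds" assumption is exactly where the slight colour shifts mentioned above come from.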


I think that some of the confusion here is because two separate issues are getting discussed at the same time - the M8's 8-bit compression, and how to expose a digital camera. The 8-bit compression issue really doesn't impact on exposure, or "highlight recovery", except inasmuch as the more levels you have in the highlights, the more gradations in those highlights you can show. But the key thing is that, unlike film, which has a "toe", or non-linear saturation, digital sensors have hard limits. So with film you can usually "recover" at least a stop or two of information from blown highlights. With digital sensors, all you can do is show more detail within the "highest stop" that you exposed. Subtle but fundamental difference.
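The hard-limit-versus-soft-saturation difference can be sketched like this. Both response functions are invented stand-ins (a hard ceiling for the sensor, an exponential shoulder for film), not measured curves:

```python
import math

def sensor(x):
    """Linear up to full well, then a hard limit: everything above is gone."""
    return min(x, 1.0)

def film(x):
    """Soft saturation: response keeps creeping upward, so overexposed
    values still differ from one another and can be pulled apart later."""
    return 1.0 - math.exp(-x)

one_over, two_over = 2.0, 4.0  # one and two stops past the clip point
print(sensor(one_over), sensor(two_over))  # 1.0 1.0 -> identical, unrecoverable
print(film(one_over), film(two_over))      # two distinct values -> gradation survives
```

Once the sensor maps two different scene luminances to the same number, no amount of post-processing can tell them apart; film's shoulder keeps them distinct.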

 

And there is no one answer as to how to expose any camera, film or digital. That debate has been going on since the first camera. What the photographer needs to do is to understand the instrument, and then expose to get the results they want.......

 

BTW, I've put some simulated 8-bit vs. 14-bit images in a separate thread...

 

Sandy


At the risk of sticking one's toe in water with much bigger fish in it, here's a thought: as Sean has educated us on the value of the lower-contrast lenses (bright-day shooting), might one's attitude toward exposure be affected by what one is shooting? Shooting in bright light (southern California) is very different from shooting in northern light (possibly more cloudy, less contrast). Not trying to open a big can of worms, but there is a difference.

 

The Leica rendering seems to exercise control in its subtleties, especially in the darker tones, where it just keeps finding information, delightfully so. So might the slight shift of the curve (and thus emphasis) toward the lower light values, and their restraint in pursuing the very brightest highlights, which are often blown out anyway, be deliberately driven by what they think needs more control?

 

 

I agree, I'm approaching this camera (it's my first digital camera for serious work) the same way as I would evaluate film/developer/paper combos with my previous setup. I have a consistent workflow and take a range of pictures in a variety of lighting situations until I get a solid understanding of what the camera and lens are going to do and how I might want to change my settings. I also try to take notes.

 

As much as I struggle with this, some rules of film-based photography just don't seem to apply to digital and it's not always helpful to use those concepts (especially w/r/t exposure for highlights and shadows). The one that I think is most helpful and Michael has been repeating it in this thread and another (about LFI) is that the exposure with the greatest amount of information is the best exposure. I think this is also basically what Jamie is getting at by suggesting a handheld meter to better learn what you are seeing. It's fundamental.

 

I don't think there's any easy answer as to how to get a maximal exposure every time, but as far as what the camera offers, the histogram is it. For me, the rest is trial and error and building up familiarity with what I've got to work with, until I can reasonably expect a consistent result.

 

Unfortunately, I don't know of any way to learn about a camera/lens/workflow other than to do the knowledge.


OK, I get it - I am comparing apples and oranges here: the Nikon's supposedly intelligent matrix metering with its database, and Leica's simple centre-weighted method. Very good advice here; of course, when I think about it, the camera is trying to expose correctly for 18% grey. I have to try to remember this. But thanks for the reminder - never get enough of those.

 

That's right. The Nikon is trying to figure things out for you (sometimes quite well), whereas the Leica is just providing a simple kind of information. "Old school" photographers, such as myself, tend to like the latter. That said, the matrix metering in some cameras does much more than aim for middle grey, especially in cameras that are able to react to a live digital feed (such as the Sony R1 and many others). The Leica's metering is much simpler.

 

For some experienced photographers, simpler can be better.

 

Cheers,

 

Sean


The 8-bit compression issue really doesn't impact on exposure, or "highlight recovery"... But the key thing is that, unlike film, which has a "toe", or non-linear saturation, digital sensors have hard limits. So with film you can usually "recover" at least a stop or two of information from blown highlights. With digital sensors, all you can do is show more detail within the "highest stop" that you exposed. Subtle but fundamental difference.

Sandy

 

Exactly, and well stated. And this is why exposure with digital must be approached differently than exposure with negative film.

 

I want to emphasize one of Sandy's points above:

 

"except inasmuch as the more levels you have in the highlights, the more gradations in those highlights you can show"

 

That's at the heart of what *may* be affected by the 8-bit compression scheme. This is what I was talking about when I referred to M8 highlights as being more abrupt than, for example, those from the 5D.
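Leica's exact lookup table isn't given in this thread, but a square-root companding curve is a commonly cited approximation of the M8's 16-to-8-bit mapping. Counting how many 8-bit codes land in each stop shows the trade-off Sandy describes; for comparison, a 14-bit linear file spends 8192 of its 16384 levels on the top stop alone:

```python
import math

FULL = 16383  # 14-bit full scale

def codes_in_stop(encode, lo, hi):
    """Distinct 8-bit codes produced by linear raw values in [lo, hi)."""
    return len({encode(x) for x in range(lo, hi)})

def linear8(x):
    return round(255 * x / FULL)

def sqrt8(x):
    # square-root companding, an assumed approximation of the M8's table
    return round(255 * math.sqrt(x / FULL))

top = (FULL // 2, FULL)  # the brightest stop
deep = (64, 128)         # a deep-shadow stop

for name, enc in (("linear 8-bit", linear8), ("sqrt 8-bit", sqrt8)):
    print(name, "top:", codes_in_stop(enc, *top), "shadow:", codes_in_stop(enc, *deep))
```

Under this assumption the square-root curve spends fewer codes on the top stop than even a linear 8-bit mapping would, and far fewer than 14-bit linear raw, in exchange for many more codes in the shadows - consistent with highlights being where the compression would show first.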

 

BTW, in case anyone does not know Sandy, he's the developer of CornerFix and has a good amount of experience experimenting with, and pushing, M8 DNGs.

 

Cheers,

 

Sean


{snipped}

As much as I struggle with this, some rules of film-based photography just don't seem to apply to digital and it's not always helpful to use those concepts (especially w/r/t exposure for highlights and shadows). The one that I think is most helpful and Michael has been repeating it in this thread and another (about LFI) is that the exposure with the greatest amount of information is the best exposure. I think this is also basically what Jamie is getting at by suggesting a handheld meter to better learn what you are seeing. It's fundamental.

 

I don't think there's any easy answer as to how to get a maximal exposure every time, but as far as what the camera offers, the histogram is it. For me, the rest is trial and error and building up familiarity with what I've got to work with, until I can reasonably expect a consistent result.

 

Unfortunately, I don't know of any way to learn about a camera/lens/workflow other than to do the knowledge.

 

It helped me when I first made the switch from film to think of shooting digital as shooting positive / slide film. Expose for the highlights, not the shadows.

 

But even that is a gross simplification. A helpful one, though ;)

 

The reason I suggest people get an incident meter to get the exposure they might want is for them to see how light falls on an area, not how it's reflected. So it's not so much to maximise exposure as to normalize it within people's usual expectations of shadows being dark and highlights being bright :)

 

With an incident meter, within the DR of the camera or film, you essentially get bright highlights, and the shadows fall where they fall in the scene. FWIW, this is because an incident meter reads the light falling on the subject directly, instead of assuming the subject reflects like a mid grey.
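The grey assumption can be put in numbers: the shift, in stops, that a reflected reading applies relative to an incident reading of the same light, for subjects of different reflectance. The 18% constant is the usual textbook figure; real meter calibrations differ slightly:

```python
import math

MID_GREY = 0.18  # reflectance a reflected-light meter effectively assumes

def reflected_shift_stops(subject_reflectance):
    """Stops by which a reflected reading moves exposure away from the
    incident reading. Positive: the meter cuts exposure (snow pushed down
    toward grey); negative: it adds exposure (black lifted toward grey)."""
    return math.log2(subject_reflectance / MID_GREY)

for subject, refl in (("black cloth", 0.04), ("mid grey", 0.18), ("snow", 0.90)):
    print(subject, round(reflected_shift_stops(refl), 2))
```

Snow comes out roughly 2.3 stops under, black cloth roughly 2.2 stops over; the incident reading, fixed by the light alone, shifts for neither.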

 

It is true that if you want flexibility in post processing, the more information the better. But if you know where you want to lose information, then go ahead and shoot that way: blow the highlights or bury the shadows, if that's the picture you want.



The article in LFI had only one example, where a white car was exposed right, with the original 16-bit M8 prototype as well as with the final M8, IIRC. While this is a good test, and tells us that a properly exposed image does not suffer from the M8's 8-bit mapping, it doesn't tell us what happens in an image that needs some manipulation in post. I still suspect that under the right (wrong) conditions, the M8 may be provoked into banding more than, say, the 5D's raw images. I have meant to do such a test myself, but haven't got around to it yet.


The article in LFI had only one example, where a white car was exposed right, with the original 16-bit M8 prototype as well as with the final M8, IIRC. While this is a good test, and tells us that a properly exposed image does not suffer from the M8's 8-bit mapping, it doesn't tell us what happens in an image that needs some manipulation in post. I still suspect that under the right (wrong) conditions, the M8 may be provoked into banding more than, say, the 5D's raw images. I have meant to do such a test myself, but haven't got around to it yet.

 

Carsten, you may be right, but out of 10K images from both cameras, I've never seen a difference in marginal conditions that I could attribute to the M8's compression scheme (exposure? Yes. Noise? Yes... compression? No).


Guest GuyMancusoPhoto

Okay, I will throw something in here that no one has even mentioned, even slightly. First off, I am not having many more issues with blown highlights in this area than with any other camera, regardless of bit depth. If anything, I think the M8 file is very pliable when moving up and down the tonal range. Now, having said that, does anyone remember what the DNG colour space was when it came out? It was huge compared to ProPhoto, and one of the largest colour spaces we have seen in digital. Given this huge colour space, would that not help the tonal range of the raw file and also give the M8 the DR that it has? I'm not the engineer here, and paying attention to this stuff is like watching paint dry for me, but does it have some positive effect on the 8-bit file, making it act as if it has more bit depth because of the huge colour space in the DNG? I think Luminous Landscape had a chart of the colour space of the M8, and it far exceeded any other colour space around.


Guest GuyMancusoPhoto

Check this out

 

Leica M8 Review

 

 

Excerpt from it

 

The bottom line on this discussion is that the Leica M8 appears to have the widest colour gamut, by a wide margin, of any camera of which I am aware. Does this translate into any image quality advantage? According to Dr. Know, a friend who writes raw software, and who is extremely knowledgeable in this area, the answer is likely no. Gamut plots of camera profiles are not particularly meaningful and don't correlate with actual sensor or camera performance. But, nevertheless I have to think that we are seeing something at work here, if not just an indication of what electronic filtering is taking place inside the camera. If anyone really does understand the implications of what we're seeing here I would enjoy hearing from you. Please though – not what you guess, not what you imagine, but what you actually know to be at work here.

 

 

Me again. Makes you kind of wonder.

 

Okay, I'm going to take a seat while the brains figure that one out.



I'd argue that a wide gamut is only an advantage to the extent that the gamut stays within the limits of the CIE LAB space. As soon as you have sensitivity outside that gamut, you have visible colors being contaminated with "invisible" colors - as in our infra-red problem. So the ideal camera has exactly CIE LAB sensitivity; either more or less is bad.

 

Sandy


  • 3 years later...
People fail to realize the meter is set for grey and only grey, and any tones, be they black or white, will be averaged to grey. All camera meters, and even handheld ones, are set for grey...

No, not entirely so...

 

Nikon has used 'Color Matrix Metering' in different variants since the F5...

 

It's a Bayer pattern sensor and software evaluating the scene in color...

