8 bits versus 16 bits, continuing.

Again, interesting results, and I'm certain Mama boss is watching. Forum admin might help as well.

 

Hi Diogenes,

 

It is refreshing to get such a normal and positive answer.

Thank you.

 

Hans


A non technical comment.

 

I did download the 14-bit files and looked at them in Capture 1 V4 where you can see them side by side at 100%.

 

I did see differences in the shadows under the cart. There is a vertical chain there and that area is better defined in the original file. To my eye it is noticeable in terms of colour and contrast.

 

The compressed/uncompressed files are very similar but if I had to rank them I would rank the log file marginally ahead of the sqrt file in the shadow area mentioned above.

 

I was looking for differences – otherwise I would never have known.

 

Compressing and uncompressing a file surely can't improve it. If Leica is aiming to produce the best files that the M8 can output, then it would be nice to have the option of recording uncompressed DNGs. I would probably use them in some circumstances, as long as the camera did not become unusably slow. Obviously only I can judge how slow 'unusably slow' actually is for my purposes.

 

I use a PC and a calibrated iiyama IPS screen.

 

Jeff

Hi Jeff,

You are right, you have to look with extreme attention to find any differences at all. That is why my conclusion was that we have nothing to worry about.

I could not see any differences in the area that you mention, but I have Lightroom and an Apple screen.

But as you say, if compression has to be done, then log compression is the better one.

Thank you for your observations.

 

Hans


I have been following this thread with amusement (and amazement).

 

Given the current price of flash memory (about $4.00/GB), even if the effect of the 8-bit compression is only visible under extreme circumstances, this was a questionable design decision on Leica's part.

 

Look at the time we are spending on this issue.

 

Besides, it is what it is.

 

Regards ... Harold

It is refreshing to get such a normal and positive answer.

You may like his response, but Sandy’s and Harald’s constructive criticism is dead on, so you’d better heed their advice. This isn’t about who is right, but about getting it right.



You may like his response, but Sandy’s and Harald’s constructive criticism is dead on, so you’d better heed their advice. This isn’t about who is right, but about getting it right.

As always I regard your opinion very highly, but this time I miss the point you want to make.

If anyone has shown the desire to get it right, look at all the initiatives and the time I have put in so far.

One thing is quite obvious: the discussion is not always very streamlined, but I am very open to (your) suggestions.

 

Hans

I think it has more to do with processor speed and power rather than memory costs.

And with temperature management. The size of the M8 makes cooling a problem. Even now one notices it getting warm from time to time. More processing power = more heat.

You may like his response, but Sandy’s and Harald’s constructive criticism is dead on, so you’d better heed their advice. This isn’t about who is right, but about getting it right.

 

Everyone is contributing to this thread: Hans, who has put huge effort into this; Sandy, with his criticism; Harald... This is not a competition, but about the chance that something good might come out of all this. Again, let's hope that big momma is watching.

this time I miss the point you want to make.

The point I wanted to make was the one made by Sandy and Harald: If you intend to correlate your numerical results to visible differences, you should convert your numbers into a space where equal numerical differences correspond to equal visual differences. And that’s what gamma is for. Your claim that gamma was about monitors, not about the eye, is one of the misconceptions Charles Poynton set out to refute:

 

“Misconception

The main purpose of gamma correction is to compensate for the nonlinearity of the CRT.

 

Fact

The main purpose of gamma correction in video, desktop graphics, prepress, JPEG, and MPEG is to code luminance or tristimulus values (proportional to intensity) into a perceptually-uniform domain, so as to optimize perceptual performance of a limited number of bits in each RGB (or CMYK) component.” (Charles Poynton: “The rehabilitation of gamma”)
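Poynton's point can be illustrated with a few lines of Python (a sketch of my own, not from his paper): with only 8 bits, linear coding leaves coarse relative steps in the shadows and wastefully fine ones in the highlights, while gamma coding spreads the steps far more evenly.

```python
# Relative luminance step between adjacent 8-bit codes, at a given
# luminance Y (0..1), for linear coding versus gamma-2.2 coding.
# Illustration only; real sRGB uses a slightly different curve.

GAMMA = 2.2

def rel_step_linear(y):
    # Linear coding: code = 255 * y, so one code step is always 1/255 in Y.
    return (1 / 255) / y

def rel_step_gamma(y, g=GAMMA):
    # Gamma coding: code = 255 * y**(1/g), so dy/dcode = g * y**((g-1)/g) / 255.
    return (g * y ** ((g - 1) / g) / 255) / y

for y in (0.01, 0.10, 0.50):
    print(f"Y={y:4.2f}: linear step {rel_step_linear(y):6.1%}, "
          f"gamma step {rel_step_gamma(y):6.1%}")
```

In the shadows (Y = 0.01) the linear step is roughly 39 % of the value, far above any visibility threshold, while the gamma-coded step is about 7 %; in the highlights the situation reverses.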

I am happy with your reaction; it could help to put things in their right perspective.

 

The point where it all started was that I can see small differences in a SQRT compressed picture, and not in a Log compressed picture.

The search is thus for a sensible explanation.

 

Let's try to find out where the consensus, misunderstandings, and misconceptions are:

1) "When correlating numerical differences to visible differences, you should convert your numbers into a space where equal numerical differences correspond to equal visual differences."

Agreed, no discussion possible.

2) My assumption is that the camera sensor is a linear device. If so, twice as much light (luminance) leads to twice the charge on a pixel, and to twice the digital value of that pixel.

In this case DNG values are directly proportional to Luminance.

Is this assumption roughly true?

3) The perceptual response to Luminance is sometimes expressed as Lightness.

4) Lightness perception is roughly logarithmic to Luminance (Poynton).

If so, the DNG differences between pre- and post-compression can be treated as percentages to create the desired "space", since log(1015/1000) is the same step as log(10150/10000); both correspond to 1.5 % in this example.

That is what I did so far, and why.

 

5) There is a CIE approximation of the relationship between Lightness and Luminance, called L*, where lightness goes from 0 to 100 (black to white).

In this representation, the threshold of visibility in Lightness has a magnitude of 1, at least when I have interpreted this correctly.

Although this curve looks at first sight similar to x^0.45, it is significantly different in the darker areas below 0.01 in Luminance.

6) Converting this Lightness step of 1 back and forth into a percentage of the DNG value for a 14-bit system gives the following percentages.

 

DNG value   +1 step in Lightness
  164       +13.0 %
  491        +8.9 %
 1637        +5.9 %
 3274        +4.7 %
 6547        +3.6 %
11458        +3.0 %
16041        +2.6 %

 

7) The approximation with L* is clearly not comparable with a logarithm. It suggests that, percentage-wise, your eye is less sensitive in the dark. At a DNG value of 164 the smallest visible step is 13 %, and at full light the threshold goes down to 2.6 %.

I must say, a bit higher in percentage than I had expected.

That our eye is roughly logarithmic in sensitivity does not match the L* curve at all.

8) I can use this L* Lightness space to do a new comparison between the 14-bit picture that I have and the two compressed/decompressed versions.

The horizontal scale should now be in Lightness, where ±1 represents the threshold of visibility.
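As a cross-check of point 6, the table above can be reproduced with a short script using the standard CIE L* formula (the function names are my own; small deviations from the table are expected, since the exact approximation used there may differ):

```python
# For a given 14-bit DNG value, how big a percentage increase corresponds
# to one step of 1 in CIE L*? Assumes a linear sensor, so DNG value is
# proportional to relative luminance.

FULL_SCALE = 2 ** 14 - 1  # 16383 for a 14-bit system

def lstar(y):
    # CIE lightness L* from relative luminance y (0..1).
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

def luminance(l):
    # Inverse of lstar: relative luminance from L*.
    return ((l + 16) / 116) ** 3 if l > 8 else l / 903.3

def one_lstar_step_percent(dng):
    # Percentage increase of a DNG value that raises L* by exactly 1.
    y = dng / FULL_SCALE
    y_up = luminance(lstar(y) + 1)
    return 100 * (y_up / y - 1)

for dng in (164, 491, 1637, 3274, 6547, 11458, 16041):
    print(f"DNG {dng:5d}: +{one_lstar_step_percent(dng):.1f} %")
```

The trend matches the table: the smallest visible relative step shrinks from roughly 13 % in the deep shadows to under 3 % near full scale.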

 

If you think that this L* space will bring us a step forward in getting it right, let me know.

 

Hans

My assumption is that the Camera Sensor is a linear device.

For all practical purposes, it can be regarded as such, yes.

 

There is a CIE approximation of the ratio Lightness versus Luminance, where lightness goes from 0 to 100 (black to white), called L* (...)

If you think that this L* space will bring us a step forward in getting it right, let me know.

CIE L*a*b* was designed so that the Euclidean distance between two points within the colour space corresponds to the perceptual difference between the colours represented by those points. So L* should fit the bill.

 

Interestingly, the older Hunter Lab colour space used the square root for calculating lightness (L). CIE L*a*b* uses the cube root for L*, except for low values where the relationship is basically linear.
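As a quick sketch of the difference (my own illustration): both formulas agree at black and white, but for a mid-grey of Y = 0.18 they already disagree by several units of lightness.

```python
# Hunter's lightness uses a square root; CIE L* uses a cube root
# (above roughly 0.9 % luminance, below which L* is basically linear).

def hunter_l(y):
    return 100 * y ** 0.5

def cie_lstar(y):
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

for y in (0.01, 0.18, 0.50, 1.00):
    print(f"Y={y:4.2f}: Hunter L = {hunter_l(y):5.1f}, CIE L* = {cie_lstar(y):5.1f}")
```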


Whether it was about storage or about processing power and time, there would have been no downside to Leica making the 8/16 (or 12/14) bit choice a customer-driven parameter; we could then each decide whether the difference matters to us, and under what circumstances.

 

Nikon did exactly this on the D3/D300, and on the D300 I usually use 14 bit (couldn't hurt) and 2.5 fps, except when shooting sports or wildlife, when I switch to 12 bit and 6 fps.

 

The D3 has enough power to render that choice moot except for storage.

 

Hopefully this will be in some future firmware.

 

None of this however is ever a factor for me in choosing Leica or Nikon for any given project. IQ is just fine on both, and other factors drive the choice.

 

Regards .. Harold

CIE L*a*b* was designed so that the Euclidean distance between two points within the colour space should correspond to the perceptual difference between the colours represented by those points. So L* should fit the bill.

The proof of the pudding is in the eating. I will perform the calculation and see what it brings, still with Jaap's picture.

 

In the meantime, you would do me a favour by finding out more information regarding the threshold of perception in this space.

I find a value of 1 rather coarse.

 

Hans


Hello Hans!

I am happy with your reaction; it could help to put things in their right perspective.
I am also glad about Michael's successful effort to bring us together again.
Let's try to find out where the consensus, misunderstandings, and misconceptions are:

1) "When correlating numerical differences to visible differences, you should convert your numbers into a space where equal numerical differences correspond to equal visual differences."

Agreed, no discussion possible.

2) My assumption is that the camera sensor is a linear device. If so, twice as much light (luminance) leads to twice the charge on a pixel, and to twice the digital value of that pixel.

In this case DNG values are directly proportional to Luminance.

Is this assumption roughly true?

This assumption is - in my opinion - the first point of misconception. DNG values contain luminance AND chrominance representations of the captured scene.

3) The perceptual response to Luminance is sometimes expressed as Lightness.
If you mean CIE Lab (or Luv), then I can agree.
4) Lightness perception is roughly logarithmic to Luminance (Poynton).
In short, no :-) but I will no longer insist on our differing interpretations of a LOG versus a SQRT function...
If so, the DNG differences between pre- and post-compression can be treated as percentages to create the desired "space", since log(1015/1000) is the same step as log(10150/10000); both correspond to 1.5 % in this example.

That is what and why I did so far.

The main issue of your approach (at least in my opinion) is your treatment of the "colorless" DNG values for calculating visual differences.

5) There is a CIE approximation of the relationship between Lightness and Luminance, called L*, where lightness goes from 0 to 100 (black to white).
In general I am a friend of simple approaches, but in our case (which is at a very high academic level :-) it is not as simple as it looks at first view. The L* value range is not integer; it is a continuous range (of real numbers). Only for storage purposes does one use the range 0...100 or, very often, 0...255; for calculation (by trilinear interpolation) the range 0...65535 is also used.
In this representation, the threshold of visibility in Lightness has a magnitude of 1, at least when I have interpreted this correctly.
And exactly here we should find a better (in fact the best) consensus. There exists a (commonly accepted) measure of visual difference: the cartesian distance between two colors in the L*a*b* color space. It is not only the difference in luminance; we also take into account the chrominance of two different "colors" (even if they are colorless: black, grey, white).
Although this curve looks at first sight similar to x^0.45, it is significantly different in the darker areas below 0.01 in Luminance.
Now it is time to unveil the next mystery of colorspaces :-)

The CIE L*a*b* values are not directly derived from their RGB equivalents; there is one color space in between, the XYZ color space.

To get usable values for the calculation of (visual) color differences, you have to do the following steps:

1) decompress the 8-bit DNG using the LinearizationTable (making the values 16 bit or whatever)

2) demosaic the colorless values of the Bayer filter

3) convert these (intermediate, linear) RGB values to the XYZ color space, using the appropriate white point and illuminant

4) convert the XYZ values to CIE Lab

5) calculate the (cartesian) difference of two colors from L*, a*, and b*
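Step 5 is the simple part; as a minimal sketch (this is CIE76, the plain cartesian distance; later formulas such as CIE94 and CIEDE2000 weight the terms differently):

```python
import math

def delta_e76(lab1, lab2):
    # CIE76 colour difference: Euclidean distance between two (L*, a*, b*) triples.
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two nearly identical dark greys, e.g. pre- and post-compression (made-up values):
print(round(delta_e76((9.0, 0.0, 0.0), (10.0, 0.3, -0.2)), 2))  # 1.06, just above the ~1 threshold
```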

6) Converting this Lightness step of 1 back and forth into a percentage of the DNG value for a 14-bit system gives the following percentages.
It is not worth worrying about.
7) The approximation with L* is clearly not comparable with a logarithm. It suggests that, percentage-wise, your eye is less sensitive in the dark. At a DNG value of 164 the smallest visible step is 13 %, and at full light the threshold goes down to 2.6 %.

I must say, a bit higher in percentage than I had expected.

Your whole approach is far from what you should have done...
That our eye is roughly logarithmic in sensitivity does not match the L* curve at all.

Believe it or not, the eye's response is much more a power function than a logarithmic one.

8) I can use this L* Lightness space to do a new comparison between the 14-bit picture that I have and the two compressed/decompressed versions.
Yes, of course, but as a first step, follow the rules of color space conversion (and demosaicing), and also choose usable values for the white point and the illuminant. The last two are far from being trivial!
The horizontal scale should now be in Lightness, where ±1 represents the threshold of visibility.
Here is a good reference (in German only): Delta E

Here is a more general description: ColorWiki - Delta E: The Color Difference

and also AIM: Evaluation of the CIE Color Difference Formulas

which will drive you crazy :-)

If you think that this L* space will bring us a step forward in getting it right, let me know.
Of course we are on the right track, but there are a lot of rocks along the way :-)


Thanks guys. I'm not qualified to participate, but I'm avidly learning. Thanks for the links too, Harald.

I hope you don't mind a more or less unrelated question:

According to Photoshop gurus the move from RGB to LAB and back is non-destructive, as opposed to, for instance, the move from ProPhoto to sRGB.

What is the difference?

 

Jaap,

 

To simplify vastly: LAB is an "absolute" scheme; it can represent any color. ProPhoto and sRGB, however, are both effectively defined as ratios of three primary colors, so they can represent any color that is within the "triangle" of the three primaries they use. Because ProPhoto and sRGB have different primaries, there are colors you lose, because there are parts of each one's "triangle" that the other doesn't cover. (sRGB has a "smaller" triangle.)

 

Note that RGB->LAB->RGB is lossless, but LAB->RGB->LAB isn't...
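A toy sketch of the clipping step (illustration only, not real colour-management code; the numbers are made up): a colour inside ProPhoto's triangle can need a negative component in sRGB, and clamping it into range is where the loss happens.

```python
def clip_to_gamut(rgb):
    # Clamp each channel into the representable [0, 1] range.
    # This is the irreversible step: the round trip cannot restore -0.15.
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Hypothetical saturated green after a matrix conversion from a wider space:
wide_gamut_green = (-0.15, 0.95, 0.10)
print(clip_to_gamut(wide_gamut_green))  # (0.0, 0.95, 0.1)
```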

 

Sandy


Hello Jaap!

According to Photoshop gurus the move from RGB to LAB and back is non-destructive, as opposed to, for instance, the move from ProPhoto to sRGB.

What is the difference?

Sandy has answered your question perfectly already, so I cannot add more, except that the "loss" already takes place in the conversion from <any>RGB to the XYZ color space and back to <another>RGB, because they may have different white points and/or different gammas and/or different standard illuminants.

Simply said, there is a kind of cropping at white (there cannot be a whiter color than white :-) which limits (?) the whole reproducible range of colors (the gamut).

The conversion from XYZ to the Lab color space does not (?) remove any colors; it is only a presentation in another (color) domain, or in other words a reversible transfer function.


Thanks Harald - this whole digital thing has certainly started us poor laymen on a steep learning curve. Where are the days when we only had to look at our test strip through the Kodak colour-filter set before twiddling with the colour head on the enlarger?
