8 bits versus 16 bits, continuing.


t024484



Hello Jaap!

Thanks Harald - this whole digital thing has certainly started us poor laymen on a steep learning curve. Where are the days that we only had to look at our teststrip through the Kodak colour-filter set before twiddling with the colour-head on the enlarger?:o

Yes, I remember well the old days of color (paper) development in the darkroom. Unfortunately, I never owned an enlarger with a color head, so I had to use color filters, which slowed the workflow down and made it much more complicated...

One additional word about my hints to Hans regarding using the Lab color space in any case. Even if he tests gray values only, after converting to Lab there will also be (tiny) color values in the a and b channels, resulting from the conversion matrix from sRGB to XYZ. So in the (perceptual) Lab color space it is in fact the Cartesian distance between two colors, and not only the difference of the L values.


  • Replies 85

Hallo Hans!

I am also glad about Michael's successful "bring together again".

 

This assumption is - in my opinion - the first point of misconception. DNG values contain luminance AND chrominance representations of the captured scene.

If you mean CIE Lab (or Luv), then I can agree.

In short, no :-) but I will no longer insist on our two different interpretations of a LOG versus a SQRT function...

 

The main issue of your approach (at least in my opinion) is your treatment of the "colorless" DNG values for calculating visual differences.

In general I am a friend of simple approaches, but in our case (which is at a very high academic level :-) it is not as simple as it looks at first view. The L* value range is not integer; it is a continuous range of real numbers. Only for storage purposes does one use the range 0 ... 100 or, very often, 0 ... 255; for calculation (by trilinear interpolation) the range 0 ... 65535 is also used.

And exactly here we should find a better (in fact the best) consensus. There exists a (commonly accepted) measure of visual difference: the Cartesian distance between two colors in the L*a*b* color space. It is not only the difference in luminance; we also take into account the chrominance of the two "colors", even if they are colorless (black, gray, white).

Now it is time to unveil the next mystery of colorspaces :-)

The CIE L*a*b* values are not directly derived from their RGB equivalents, there is one color space in between, the XYZ color space.

To get usable values for the calculation of (visual) color differences, you have to do the following steps:

1) decompress the 8-bit DNG values using the LinearizationTable (making them 16 bit or whatever)

2) demosaic the colorless values of the Bayer filter

3) convert these (intermediate, linear) RGB values to the XYZ color space, using the appropriate white point and illuminant

4) convert the XYZ values to CIE Lab

5) calculate the (Cartesian) distance between two colors from L*, a* and b*
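Steps 3 to 5 above can be sketched in a few lines. Note the matrix and white point here are the standard sRGB/D65 ones, chosen purely for illustration; the camera's actual color matrix and illuminant would differ:

```python
import math

# Linear sRGB -> XYZ (D65). Both the matrix and the white point are
# illustrative assumptions, not the camera's calibration.
M = [[0.4124564, 0.3575761, 0.1804375],
     [0.2126729, 0.7151522, 0.0721750],
     [0.0193339, 0.1191920, 0.9503041]]
WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white (Xn, Yn, Zn)

def rgb_to_xyz(rgb):
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

def f(t):  # CIE nonlinearity, with the linear toe for dark values
    return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

def xyz_to_lab(xyz):
    fx, fy, fz = (f(v / n) for v, n in zip(xyz, WHITE))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(lab1, lab2):
    # Cartesian distance in L*a*b* (the plain CIE76 delta E)
    return math.dist(lab1, lab2)

gray1 = xyz_to_lab(rgb_to_xyz((0.18, 0.18, 0.18)))  # mid gray
gray2 = xyz_to_lab(rgb_to_xyz((0.20, 0.20, 0.20)))  # slightly lighter gray
print(delta_e76(gray1, gray2))
```

For two exact grays the a* and b* terms come out (numerically) zero, so the distance collapses to the L* difference; any asymmetry introduced earlier in the pipeline shows up as nonzero a*/b* contributions.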

It is not worth worrying about.

Your whole approach is far away from what you should have done...

 

Believe me or not, the eye's response is much more a power function than a logarithmic one.

Yes, of course, but as a first step, follow the rules of color space conversion (and demosaicing), and also choose usable values for the white point and the illuminant. The last two are far from trivial!

Here is a good reference (only in German language): Delta E

Here is a more general description: ColorWiki - Delta E: The Color Difference

and also AIM: Evaluation of the the CIE Color Difference Formulas

which will drive you crazy :-)

Of course, we are on the right way, but there are a lot of rocks on this way :-)

 

Hi Harald,

 

First of all thank you for taking the time to substantiate your vision.

 

I have read your links and many other publications, and find the subject rather edgy to say the least. There are almost as many opinions as there are writers.

 

In this publication with grey tones:

http://www.engineering.uiowa.edu/~aip/Lectures/eye_phys_lecture.ppt#278,16,Slide 16

The range from black to white has been divided into 50 zones with equal step sizes for the eye.

One can read that on this scale, called the magnitude of sensory experience, the eye has a rather good logarithmic sensitivity over the range from 10 to 45, with a threshold level of 2% in luminance.

Below 10 and above 45, sensitivity decreases.

With the CIE L* function, the sensitivity threshold goes from 8.9 to 2.9% when lightness is considered over the same range.

My feeling that CIE L* is quite coarse is substantiated by the above article.
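The L* threshold figures can be checked directly. Differentiating L* = 116*(Y/Yn)^(1/3) - 16 (valid above the linear toe) gives a relative luminance step of ΔY/Y = 3/(L* + 16) for a step of ΔL* = 1; over L* from roughly 20 to 90 this falls from about 8% to about 3%, close to the 8.9% and 2.9% quoted above (the exact numbers depend on how the 50 zones are mapped onto L*):

```python
def rel_luminance_step(L):
    """Relative luminance change dY/Y that produces dL* = 1.
    From L* = 116*(Y/Yn)**(1/3) - 16 (above the toe):
    dL/dY = (116/3) * (Y/Yn)**(-2/3) / Yn  ->  dY/Y = 3/(L* + 16)."""
    return 3.0 / (L + 16.0)

for L in (20, 50, 90):
    print(L, round(100 * rel_luminance_step(L), 1), "%")
```

So under the L* model the just-noticeable luminance step is far from constant, which is exactly the coarseness being discussed.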

 

Your proposal to measure the distance E in the three-dimensional L*a*b* space makes things much more complex, although doable.

As a first step things would be easier when the color information is disregarded, meaning a=0 and b=0 and only grey tones are investigated. All that is left is L*.

 

What is measured then is only the one dimensional difference in L and not the three dimensional distance in the LAB space.

 

Reading through your links gives me more arguments for why I think it makes only limited sense to perform a distance calculation E in the LAB color space:

 

AIM: Evaluation of the the CIE Color Difference Formulas

 

What are the two major findings?

1. Always when the color difference is due to a change in the L* channel alone, the perceptual difference is huge compared to any of the 10 cases where the color difference is due to a change in a* and/or b*.

In other words, the transfer function of CIELAB is faulty. The dE(2000) formula increases this error the most.

2. None of the evaluated color difference models are in good agreement with our vision, or, more accurately, they are completely useless measures for color difference.

E.g. dE=8 can mean that two colors are perceptually extremely different from each other, but it can also mean that they are perceptually extremely similar.

So, we desperately need a Color Difference Formula for real-life use.

 

With many patches they have shown colors with the same distance E to a reference color.

The evidence formulated in the two major findings is overwhelming.

So although E is well defined, the outcome does not correspond at all to what we see.

So as far as I can see, the whole charade of demosaicing, going to XYZ and to CIE Lab and measuring a distance, looks almost like a fruitless investment of time.

 

I haven’t finalized my bar graphs with the CIE L* algorithm, but I very much doubt whether it will bring us any further.

Firstly, because it is not three-dimensional, and secondly, which is even more serious, because there is no well-defined threshold algorithm.

 

Maybe I should make a third bar graph with the eye sensitivity as measured in the first link of this article. A simplification, yes, but at least based on real measurements.

 

Hans


In the meantime you would do me a favor by finding out more information regarding the threshold of perception in this space.

As you will have realized, no single threshold value would be adequate. But then, I don’t even think it matters. There might be cases where Leica’s compressed storage scheme leads to visible differences compared to images developed from uncompressed 14 bit data – say when you flip back and forth between two images and try to pinpoint the differences. But these will be very small differences in any case, so why should we care? For example, the noise patterns in two consecutive and otherwise identical shots won’t be the same, but we don’t care about that either. As Gregory Bateson used to say: there cannot be a difference that doesn’t make a difference. For the most part, these differences don’t make any difference, and thus they can be safely ignored.

 

Leica’s square root compression scheme is just a rough approximation of the human eye’s response, but I don’t think that painstakingly optimizing this scheme for an even better approximation is worth the effort. You might actually come up with something better, no doubt about that, but then someone would accidentally or deliberately underexpose by 1/3 EV and your optimization doesn’t take that into account – there is no way to control all the relevant variables to make sure that any theoretical optimization was also optimal in practice.


I like working with it, but it's the results that keep me from using it.

 

That's very strange, if by "it" you mean C1. The colour and gradation results from C1 v4 are better than ACR, though I'm sure the newest gen of LR / ACR will be better than previous.

 

If you want to see what the M8 DNGs are capable of in terms of pushing and pulling, you really do need to use C1. Currently, ACR's colour problems are compounded when you push and pull the M8 DNG, IMO.

 

(now back to the interesting academic compression argument ongoing--I feel like I hijacked a thread here! LOL!)


As you will have realized, no single threshold value would be adequate. [...] there is no way to control all the relevant variables to make sure that any theoretical optimization was also optimal in practice.

 

I am completely at peace with your comment.

It fully complements my conclusion, based on limited source material, that the 8-bit compression scheme that Leica has used is nothing to worry about.

I have nothing to complain about, and I am more than happy with my M8.

But improvements mostly come in small steps, so if a better compression scheme than the current one were possible, and if it meant nothing more than a software update, why not.

After all, it is just out of curiosity that I have investigated the things I did so far.

Investigating things like this has given me a much better insight, no matter whether it leads to anything in the end. It is, as you formulated it, about "getting it right".

 

Hans


Last week I tried to go as deep into the matter of color representation as made sense.

Very much can be found on history of color spaces, translations from one space to another and the reasons why there are so many color spaces.

There seems to be no single color space, however, that helps to predict what change in the (raw) values of R, G or B leads to a well-defined visible difference, the so-called perceptual uniformity.

 

Most notable are the CIE's attempts at color spaces, going from RGB to XYZ and LAB, in 1931, 1960, 1964, 1976, 1994 and 2000, and the British Standards CMC(1,1) and CMC(2,1).

The result of all this is that they still have not achieved anything that comes close to perceptual uniformity.

 

The earlier link (1) provided by Harald is one of the many on the subject that come to conclusions like:

 

None of the evaluated color difference models are in good agreement with our vision, or, more accurately, they are completely useless measures for color difference

 

The CIELAB space has its 3 parameters, L, a and b, where L stands for lightness, +/- a represents the colors from red to green, and +/- b those from yellow to blue.

The single most obvious and consistent value change, leading to a predictable perceptual change, is in the brightness parameter L.

 

In the literature describing camera sensors, one can often read the (simplified) description of the Green pixels as the Luminance pixels and the Red and Blue ones as the Chroma pixels.

The calculation of Luminance Y in the XYZ space is a linear process, where green is weighted much higher than red, and blue plays almost no role at all.

Y is the single parameter used to calculate lightness L in the CIELAB space.
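This linear weighting can be made concrete with the standard sRGB/Rec. 709 luminance coefficients (an assumption for illustration; the coefficients derived from a camera's own matrix would differ somewhat):

```python
# Relative luminance from linear RGB. Green dominates and blue is almost
# negligible, which is why treating green as a stand-in for Y is a
# workable simplification. Coefficients are the sRGB/Rec.709 ones.
WR, WG, WB = 0.2126, 0.7152, 0.0722

def luminance(r, g, b):
    return WR * r + WG * g + WB * b

print(luminance(0.0, 1.0, 0.0))  # green alone carries ~72% of Y
```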

 

For that reason, I have concentrated on changes in Green (as if it were already Y) before and after compressing with the 4 algorithms that I have compared, and have tried to correlate the findings with the picture that I had processed.

 

I have used 4 different compression/decompression schemes, but only on the green pixels.

SQRT

Gamma 2.2

CIE L*

Log
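The round-trip behaviour of these four curves can be sketched numerically. The constants here (the log-curve offset, the sampling range, the normalization of each curve to [0, 1]) are my own assumptions for illustration, not the values used in the actual test:

```python
import numpy as np

X_MAX, CODES = 16383.0, 255.0  # 14-bit input range, 8-bit code range
A_LOG = 1.0 / 256.0            # log-curve offset (an arbitrary choice)
K_LOG = np.log1p(1.0 / A_LOG)  # normalizes the log encode to [0, 1]

def cie_L(t):
    """CIE L* of normalized luminance t, rescaled to 0..1."""
    L = np.where(t > 0.008856, 116.0 * np.cbrt(t) - 16.0, 903.3 * t)
    return L / 100.0

def cie_L_inv(u):
    L = u * 100.0
    return np.where(L > 8.0, ((L + 16.0) / 116.0) ** 3, L / 903.3)

curves = {  # name: (encode to [0,1], decode back to linear [0,1])
    "sqrt":    (np.sqrt,                  lambda u: u ** 2),
    "gamma22": (lambda t: t ** (1 / 2.2), lambda u: u ** 2.2),
    "cieL":    (cie_L,                    cie_L_inv),
    "log":     (lambda t: np.log1p(t / A_LOG) / K_LOG,
                lambda u: A_LOG * np.expm1(u * K_LOG)),
}

def max_roundtrip_error(encode, decode, x):
    code = np.rint(encode(x / X_MAX) * CODES)   # quantize to 8 bits
    back = decode(code / CODES) * X_MAX
    return np.max(np.abs(back - x) / x)

x = np.arange(100.0, X_MAX + 1.0)               # skip the deepest shadows
errs = {name: max_roundtrip_error(e, d, x) for name, (e, d) in curves.items()}
for name, err in errs.items():
    print(f"{name:8s} worst relative error: {err:.3%}")
```

With these assumptions the log curve keeps the worst-case relative error in the shadows well below that of the square root, matching the ranking reported below.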

 

First I have added a statistical printout of the distribution of errors of the 4 compression algorithms. The horizontal scale goes from -5% to +5% deviation from the original 14-bit picture. Vertically is the number of pixels falling in each band of 0.1%.

[Attached image: error-distribution histograms of the four compression algorithms]

 

Link (2) shows that we can roughly discriminate luminance differences of 2% over a large range of our vision; that is why I went a bit under this value, to 1.8%.

In the pictures below, showing the result of all this, I have made those pixels black that deviate more than 1.8% from the original green pixel before compressing/decompressing.

This makes it possible to see where improvements can be made.

I have made the pictures a bit brighter with curves to get more contrast.
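The marking step described above can be sketched as follows; the 1.8% threshold comes from the text, while the helper function and the tiny sample arrays are hypothetical:

```python
import numpy as np

THRESHOLD = 0.018  # 1.8% relative deviation, just under the 2% figure

def mark_deviations(original, roundtripped):
    """Paint black every pixel whose compressed/decompressed green value
    deviates more than THRESHOLD from the 14-bit original.
    Both arguments are float arrays of linear green values."""
    rel_err = np.abs(roundtripped - original) / np.maximum(original, 1.0)
    mask = rel_err > THRESHOLD
    marked = roundtripped.copy()
    marked[mask] = 0.0  # offending pixels become visible as black
    return marked, mask

# Hypothetical 2x2 example: only the first pixel deviates by 2.5%.
orig = np.array([[4000.0, 8000.0], [12000.0, 16000.0]])
rt = np.array([[4100.0, 8050.0], [12010.0, 16000.0]])
marked, mask = mark_deviations(orig, rt)
```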

 

As the 100% crops show, SQRT is the worst compression, followed by Gamma 2.2, CIE L* and finally the Log.

 

 

 

 

 

 

My conclusion can only be that Leica has not chosen the best possible compression algorithm. Not that SQRT is that bad, but it simply could have been much better.

 

 

(1) AIM: Evaluation of the the CIE Color Difference Formulas

(2) http://www.engineering.uiowa.edu/~aip/Lectures/eye_phys_lecture.ppt#278,16,Folie


That's very strange, if by "it" you mean C1. [...]

Hi Jamie,

I can't put my finger on what it is, but the images from C1 look good at first sight; then I start to notice something that I really don't like in them. I think it makes people look dead and sometimes even like wax dolls.

 

I care about color, but the overall image quality is more important to me.


Hello!

I have used 4 different compression/decompression schemes, but only on the green pixels: SQRT, Gamma 2.2, CIE L* and Log.

Thank you for your extensive work. One obvious result - hopefully also for you - is that Gamma 2.2 has almost the same distribution as SQRT, which proves that SQRT is numerically equal to a gamma of 2.
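A quick numerical check of this closeness, under the usual normalization of the input to [0, 1]: the square root is exactly a gamma of 2, and over the whole range its encode never differs from the gamma 2.2 encode by more than a handful of 8-bit code values.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)  # normalized linear input
# Difference between the gamma 2.2 and sqrt (= gamma 2.0) encodes,
# expressed in 8-bit code values.
diff = np.abs(t ** (1 / 2.2) - np.sqrt(t)) * 255
print(diff.max())
```

The maximum lands around 9 code values (near t ≈ 0.12), small enough that the two error distributions come out nearly identical.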

 

One question: which kind of LOG do you use? Base e, base 10 or which one?


Hello!

 

One question: which kind of LOG do you use? Base e, base 10 or which one?

 

Hi Harald,

 

Just the log, to whatever base, is minus infinity at x=0, which is not what we want.

For compression we need a function F with F(0)=0. Furthermore, at x=0 the slope should be 1, meaning that dF/dx(0)=1.

The third condition to be met is that F(65535)=255.

Because d Ln(x)/dx = 1/x, I used the natural log.

The first condition, F(0)=0, is met by Ln((A+x)/A), because Ln(1)=0.

To get F(65535)=255, take B*Ln((A+x)/A).

Its derivative dF(x)/dx is B*(A/(A+x))*(1/A) = B/(A+x).

Because this should be 1 at x=0, it follows that B=A.

From F(65535)=255 it then follows that A=33.67.

So the full function for compression is:

F(x) = 33.67*Ln((x+33.67)/33.67)
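The three conditions can be verified numerically with the constant derived above:

```python
import math

A = 33.67  # chosen so that F(65535) = 255 with unit slope at the origin

def F(x):
    return A * math.log((x + A) / A)

# Check: F(0) = 0, slope ~1 at x = 0 (finite difference), F(65535) ~ 255.
slope0 = (F(1e-6) - F(0)) / 1e-6
print(F(0), slope0, F(65535))
```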

 

Hans


Hi Harald,

 

Just the LOG with whatever base is minus infinity, which is not what we want.

 

Hans

I was in such a hurry that I made some errors:

"minus infinity" should be "minus infinity at x=0"

and somewhat further on, read dF/dx(0) = 1 instead of dF/dx(0) = 0.


Hans,

 

Very interesting - When you "deviate more than 1.8%", in what space are you measuring the 1.8%? Still the linear?

 

Sandy

Hi Sandy,

 

You are right, this is in the linear or luminance space, just like the measurements of the 2% discrimination threshold in reference (2).

 

Hans


Hello Jaap,

 

I hope you are reading this, because to conclude my series of tests I would very much like to process a (razor-sharp) picture of a face.

So if you have a 16-bit something that can be used for this purpose, you would make me very happy.

 

Hans


I downloaded the latest software release from Leica, did a quick search, and found the following interesting things:

 

From E37ECh until E77EBh: the compression table, y = Int(SQRT(X*4) + 0.5)

From E35ECh until E37EBh: the decompression table, Int(Y*Y + 0.5)

 

I did a number of checks and both are rounded. Because of the rounding, the compression output has been clamped at 255 to prevent 256.

 

This proves 2 things:

1) because the compression table has 16384 one-byte entries (one per possible input value), the A/D converter is indeed 14 bits.

2) Leica does not calculate the Sqrt, but uses a lookup table. The lookup table for compression as well as decompression can be replaced by any other lookup table without any consequence for speed.

 

So far I have not found a checksum. There must be some mechanism to check the integrity of the file.

It would be a nice experiment to replace both tables.
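The two tables and the round trip they imply can be sketched from the quoted formulas. The final division by 4 is my own reading (sqrt(4x) squared gives back 4x, i.e. the decode table lives on a 4x scale), not something stated in the firmware dump:

```python
import math

# 14-bit value -> 8-bit code, per y = Int(SQRT(X*4) + 0.5), clamped at 255.
comp_lut = [min(255, int(math.sqrt(4 * x) + 0.5)) for x in range(16384)]

# 8-bit code -> decode value, per Int(Y*Y + 0.5).
decomp_lut = [int(y * y + 0.5) for y in range(256)]

def roundtrip(x):
    """14-bit value -> 8-bit code -> back to the 14-bit scale.
    The /4 rescale is an assumption about the decode table's scale."""
    return decomp_lut[comp_lut[x]] / 4.0

# Worst relative round-trip error outside the deepest shadows.
worst = max(abs(roundtrip(x) - x) / x for x in range(1000, 16384))
print(len(comp_lut), worst)
```

The quantization error stays below roughly 1/sqrt(4x) of the value, i.e. well under 2% for midtones and highlights, which is exactly the behaviour the error histograms earlier in the thread show for SQRT.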


I hope you are reading this, because to conclude my series of tests, I would very much like to process a (razor-sharp) picture of a face. [...]

 

Hans, I'm afraid I cannot be of service there. I'm not much of a portrait photographer at the best of times, and I have never used the DMR for portraits. I know there are some members doing great studio stuff with the DMR. Maybe one of them can help out?


From E37ECh until E77EBh the compression table y = Int(SQRT(X*4) + 0.5), from E35ECh until E37EBh the decompression table Int(Y*Y + 0.5) [...]

Of course, these tables are stored in the firmware somewhere, and get written every time one takes a pic, right?


diogenis--

You're basically right that the firmware writes out the values for each image, but as I understand, it's the rules for building the LUT which are stored in firmware, and not the LUT per se. The rule for that computation is what t0 has quoted in the paragraph you cite. That's why the algorithm could be replaced relatively easily.

 

For more information on the M8's compression algorithm, with its advantages and disadvantages, see Michael Hußmann's article on the topic in LFI from about the time the question of 8 vs 16 bits first became popular, a few months after the M8 came out.

 

Hans is certainly subjecting the matter to some good, enthusiastic investigation.


This sounds like fun, I would be happy to test it with my cam.

Can the compression/decompression be removed too?

If you want to give it a try, I will make a software release with Log compression.

And no, compression cannot be taken away.

