
8 bits versus 16 bits, continuing.



I used the compare function in Lightroom and switched back and forth between 100% crops of two images, so at exactly the same spot on my screen the picture switched from one to the other within a fraction of a second.

It was always on the sideboard of the wagon that I could see very small differences when the picture had been compressed with the SQRT algorithm.

The differences are best described as just a little bit less sharp, and a little bit noisier, but nothing to worry about.

When compressed with the Log algorithm, there are absolutely no differences visible.

 

I have not printed the pictures; that could be another useful step.

 

Just for completeness, I am trying to calculate the effective value of the relative error between the compressed/decompressed picture and the original, for both SQRT and LOG.

So the answer will be a single figure: an effective error, in percent, for each sample after compressing and decompressing. The smaller this figure, the less the original picture is degraded.
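That effective-error figure can be sketched in code. The square-root and logarithmic encoders below are assumed forms (the thread does not give Leica's actual lookup tables); the single figure is the RMS of the per-sample relative error:

```python
import math

BITS = 14
MAX = (1 << BITS) - 1  # 16383, 14-bit full scale

def sqrt_encode(x):
    # 14 bits -> 8 bits along a square-root curve (assumed form of "SQRT")
    return round(math.sqrt(x / MAX) * 255)

def sqrt_decode(c):
    # 8 bits back to 14 bits: invert the square-root curve
    return round((c / 255) ** 2 * MAX)

def log_encode(x):
    # 14 bits -> 8 bits along a logarithmic curve (assumed form of "Log")
    return round(math.log1p(x) / math.log1p(MAX) * 255)

def log_decode(c):
    # invert the logarithmic curve
    return round(math.expm1(c / 255 * math.log1p(MAX)))

def effective_error_pct(encode, decode, lo=256):
    # one figure per scheme: RMS relative error over the tonal range, in percent
    errs = [(x - decode(encode(x))) / x for x in range(lo, MAX + 1)]
    return 100 * math.sqrt(sum(e * e for e in errs) / len(errs))
```

In the shadows the square-root curve quantises more coarsely in relative terms than the logarithmic one, which matches the visible difference described above.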

 

Hans

 


 

What monitor, Hans?

> So what can we then blame for the somewhat "lesser" colors of the M8 compared to the DMR?
>
> When I first got the M8, the first thing I noticed was that the files could handle less processing than the DMR before they showed artifacts (PS + ACR). I had the feeling that the M8 files were somewhat limited and a bit edgy in a digital-noisy way, even at ISO 160.
>
> When I shoot a color checker I see edgy noise in some of the patches even at ISO 160; there is almost none with the DMR and the 5D.

 

I only investigated what happens if you compress 16-, 14- and 12-bit data to 8 bits and then back to 12, 14 and 16 bits. The answer was that it can be done without loss of quality.

As Jaap formulated it, "the sting is taken out".

 

It does not say that the M8 is identical to a DMR.

To start with, the whole analog part of these two cameras is probably different: the sensor, microprisms, IR filter, etc.

 

It is not the easiest thing to take two pictures with both cameras under exactly the same conditions, because optimally you should use the same lens.

To compare the pictures is probably just as difficult, unless the differences are super obvious, which seems very unlikely.

 

Hans

> What monitor, Hans?

20 inch Apple Cinema Display, DVI-connected to an Nvidia 8800 graphics board.

 

Hans


 

Hi Hans,

I did expect the M8 to be different, just not to see more digital artifacts.

Usually I don't find A/B testing very useful; what counts for me are the long-term results.

 

If the goal is to make good photos, I consider A/B testing damaging to artistic expression.

To my knowledge no scientific test has been made in this field, but I've noticed that A/B testing favours relative changes, in other words changes that the viewer doesn't really care about and that don't help the viewer's perception of the photo. In some cases these relative changes, if attention is drawn to them, can take focus away from what really makes a difference to the viewer. Maybe good photos are the ones where the parameters that make a difference have been wisely selected and used to achieve excellence.

 

H


 

You are a bit of a philosopher, aren't you?

But you are right: excellence can never be caught in a technical description.

Henri Cartier-Bresson was probably not very impressed by technical innovations.

He just made the kind of pictures that many of us can only dream of.


 

You know why I ask it: if humans can't spot any difference, then who cares? And there are two basic means we now have: LCD monitors and printouts from a photo-lab printer.

So this step, which is now cleared, was really important. The next step we should try is to compare what I think is also very important, the 3-4 main raw processors, Aperture vs C1 vs LR vs ACR: do they translate .dng files correctly, or not?



The A/B testing is only useful to prove that the M8 is a camera you can rely on, and that it won't let you down. Not everyone can take artistic pictures, or has to, but there are many who believe this little camera has some supposedly undefined flaws, because it is not as big or noisy as its DSLR cousins.


Thanks, I think you are right about that.


I played a bit with Photoshop, to find out what percentage change in brightness can be seen by the naked eye.

I filled a layer with a solid color, selected a square inside, and changed the brightness of this square.

In general, starting from several different brightness levels, I could just barely see a change in brightness of 1% by switching the preview on and off.

2% change in brightness was easy to see.

So let’s assume that when compressing from 14 bits to 8 bits and decompressing it again to 14 bits, the pixels in the new decompressed file should be within 1.5% of the original value.

 

That is exactly what I calculated with the picture from Jaap that I have used so far.

 

I divided the relative value ((Original - New)/Original) into bands 0.1% wide, and counted how many pixels fell within the various bands from -5.0% to +5.0% (or +/- 50 per mille, as the picture shows).

 

The top picture is for the SQRT calculation and the bottom one for the LOG version. The X axis is the deviation in per mille, the Y axis the number of pixels falling within each deviation band.

The calculation was done on a 14-bit picture.
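A small helper can reproduce that banding; the function name and the toy values in the example are hypothetical, since Jaap's actual pixel data is not included here:

```python
import math

def deviation_histogram(original, decompressed, band=0.001, limit=0.05):
    """Count samples per relative-deviation band of width 0.1%, from -5% to +5%.

    Returns (counts, outside): counts maps a band index b to the number of
    samples whose (orig - new)/orig falls in [b*band, (b+1)*band); samples
    beyond +/- limit are tallied separately in outside.
    """
    counts = {}
    outside = 0
    for o, d in zip(original, decompressed):
        if o == 0:
            continue  # relative error undefined at zero
        rel = (o - d) / o
        if abs(rel) > limit:
            outside += 1
            continue
        b = math.floor(rel / band)
        counts[b] = counts.get(b, 0) + 1
    return counts, outside
```

Plotting `counts` for the SQRT and LOG round trips would give the two bar graphs described above.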

 

 

 

The log version keeps almost all pixels within +/- 1.5%; only 150,000 are outside the range.

With the sqrt conversion things are quite different: approx. 2.0 million pixels, or 20% of all pixels, are outside the 1.5% range.

 

This is just another way to show that the sqrt compression/decompression is inferior to the log version.

 

All Leica probably has to do is change the lookup table for compression and the one for decompression in the DNG file, nothing else.

So who is going to tell them?


Interesting analysis, and your quick test showing that people can generally detect about a 1% change in brightness is about right.

However, to get technical, "brightness" is more complex than you're assuming above: implicitly, your analysis assumes that the eye's response is linear. Not so. The human eye responds to light in a complex way, but roughly on a gamma 2.2 basis. (There are CIE color space definitions that more accurately model the response of the eye.) So you would need to look at how many pixels were 1.5% out in a "human eye response" adjusted space, not the linear space you're looking in.

BTW, Photoshop uses a "Melissa RGB" color space, which uses an sRGB gamma curve. So your 1.5% is actually an "approximately gamma 2.2 space" 1.5%, not a linear 1.5%.
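Sandy's objection can be sketched numerically: re-evaluate a deviation after an approximate eye-response (gamma 2.2) transfer instead of on the linear values. The pure power curve below is an assumption; real sRGB adds a linear toe near black.

```python
GAMMA = 2.2
MAX = 16383  # 14-bit full scale

def perceptual(x):
    # approximate eye-response encoding of a linear 14-bit value (pure power law)
    return (x / MAX) ** (1 / GAMMA)

def perceptual_rel_error(original, roundtripped):
    # relative deviation measured in the gamma-2.2 space, not the linear one
    p_o, p_r = perceptual(original), perceptual(roundtripped)
    return abs(p_o - p_r) / p_o
```

A 1.5% linear deviation at a midtone shrinks to roughly 1.5%/2.2, about 0.7%, in this space, since the relative change of x^(1/g) is (1/g) times the relative change of x.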

 

Sandy


I am not sure what you mean by 1.5% linear.

I focussed on RGB values as presented in the DNG file.

1.5% of 128 is about 2, and 1.5% of 73 is about 1, etc.

Try it on your screen with Photoshop as I did, and you have a rough, but not too bad, ballpark figure.

As a first approximation one should not overcomplicate things, and I made it quite obvious that Log is better than SQRT.

 

Hans


Hallo Hans!

> I am not sure what you mean by 1.5% linear.
>
> I focussed on RGB values as presented in the DNG file.

Obviously (or unfortunately) you are mixing up RGB color values with the sRGB color space.

Your calculations are based on the decompressed color value representation, which is in fact free of any color space and therefore linear.

The human eye has, as a usable approximation, a gamma of 2.2, and therefore the SQRT compression (which is numerically a gamma of 2) almost matches the luminance granularity our eye can resolve.

> 1.5% of 128 is about 2, and 1.5% of 73 is about 1, etc.
>
> Try it on your screen with Photoshop as I did, and you have a rough, but not too bad, ballpark figure.
>
> As a first approximation one should not overcomplicate things, and I made it quite obvious that Log is better than SQRT.

I fear you are now far from reality; only in your comparison between LOG and SQRT in the linear "color space" will you find your 1.5% errors in the LOG compression again.


 

You're assuming that if, for example, there were two blocks in the DNG, one 8000,8000,8000 and the other 4000,4000,4000, then the second block would be half the brightness of the first. Not so. Nor in Photoshop will the RGB readouts of block one be twice those of block two. Try it for yourself.

 

Sandy


 

Hi Sandy,

 

A discussion about who is right and who is wrong would harm the subject itself.

I understand what you say, but I think it is of second-order importance.

 

Everybody will agree that the closer the compressed/decompressed value is to the original, the better.

I did these calculations because with the SQRT, no matter whether I started with 16, 14 or 12 bits, I could always see very small differences against the original. No differences to worry about, but nevertheless detectable.

With the log compression I never saw any differences between the 16-, 14- or 12-bit original and the compressed/decompressed one.

I think that my calculations showed why this is the case.

 

Maybe you can provide me with a table or algorithm of what the discrimination levels are for all levels from zero to 16000 for a 14-bit converter, and I will do my calculation again.

But to be honest, I do not expect great differences.

 

Hans


 

Just use a gamma 2.2 transform, i.e. y = x^2.2.

The point is, the Leica scheme is for practical purposes very close to a gamma of 2. Not quite 2.2, but close. So it may well be that Leica's scheme more closely mimics the response of the eye than a log scheme would.
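How close a square-root curve is to a gamma 2.2 curve is easy to check: both are pure power laws, with exponents 1/2 and 1/2.2, and over the normalised range they never differ by more than about 3.5% of full scale. This is a sketch of the comparison, not Leica's actual table:

```python
# compare a square-root (gamma 2) encode with a gamma 2.2 encode, both
# normalised to 0..1; x is linear light, also normalised to 0..1
def sqrt_curve(x):
    return x ** 0.5

def gamma22_curve(x):
    return x ** (1 / 2.2)

# largest gap between the two curves, sampled across the range
worst = max(abs(sqrt_curve(i / 1000) - gamma22_curve(i / 1000))
            for i in range(1001))
```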

 

Sandy

 



Hi Sandy,

 

It seems a bit over the top to go into so much detail, but here is my view:

Gamma has nothing to do with your eye, but everything to do with the characteristics of a monitor.

To get a linear response in brightness, the signal applied to a monitor needs to be inverse-gamma corrected. So you are right, the RGB values in Photoshop are not linear.

 

But the raw signal from the camera is linear, and the human eye has roughly a logarithmic response (see also page 4 of "The Rehabilitation of Gamma" by Charles Poynton).

That means that comparing the original picture with the compressed/decompressed picture should be done on a percentage basis, exactly the way I did, without any gamma correction.

This gamma correction follows later, when converting to JPEG or TIFF.

 

Raw values of 4000 and 8000, as you mention, do not give RGB values twice as large, but luminance values twice as large, since RGB is gamma-corrected with the inverse gamma of the monitor.
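Concretely, with a simplified pure-power gamma 2.2 encode (real sRGB differs slightly near black): raw values of 8000 and 4000, a factor of 2 in luminance, end up as 8-bit readouts whose ratio is about 2^(1/2.2), roughly 1.37, not 2.

```python
MAX = 16383  # 14-bit full scale

def gamma_encode_8bit(x, gamma=2.2):
    # linear raw value -> 8-bit display code via a pure power-law gamma curve
    return round((x / MAX) ** (1 / gamma) * 255)

a = gamma_encode_8bit(8000)   # 184
b = gamma_encode_8bit(4000)   # 134
```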

 

Whether a 1.5% error band is the right level to stay within was only determined quickly and roughly by me in Photoshop.

Maybe 2.0% is a better value, but it is somewhere in this order of magnitude.

The proof, again, is that I see small differences with the sqrt compression and none with the log compression.

So the allowable error is smaller than 5% and larger than 1.5%, as used in the bar graphs displayed a few posts before this one, meaning that the Log is OK and the SQRT is a lesser option.

 

Hans

> Gamma has nothing to do with your eye, but everything to do with the characteristics of a monitor.

 

Ummm, no, gamma is not just about monitors. Gamma is a general measure of non-linearity in color reproduction systems, and is broadly used, for example, to describe color spaces. To quote from Poynton's Gamma FAQ (Gamma FAQ - Abstract):

 

"In video, computer graphics and image processing, gamma represents a numerical parameter that describes the nonlinearity of intensity reproduction. Having a good understanding of the theory and practice of gamma will enable you to get good results when you create, process and display pictures."

 

Gamma is a vast simplification of reality, but it's at least a first approximation. You can't talk about accuracy of reproduction unless you take non-linearity into account in some way.

 

Sandy


Hallo Hans!

> Gamma has nothing to do with your eye, but everything to do with the characteristics of a monitor.

Gamma is an approximation of the response of our eyes (retina + cortex).

> But the raw signal from the camera is linear, and the human eye has roughly a logarithmic response.

Seems to be a (common) misunderstanding. The "logarithmic" response is of the form exp(log(x) * gamma) (I am repeating myself), which is equal to pow(x, gamma).

> This gamma correction follows later when converting to Jpeg or Tiff.

The gamma correction has nothing to do with the storage format of an image. In fact, in JPEG compression the image is converted from its sRGB color space to the YCbCr color space.

Every silicon-recorded image has to be converted from no color space (linear) to some color space (sRGB, Adobe RGB or whatever).

Compressing an image inside the "no color space" loses luminance and color resolution, because the granularity differs from what our eye (the retina) detects (and not what our brain, the cortex, shows us).

So it is necessary to convert the linear image into an appropriate color space before compressing. Leica's way of "compression" does both in one single step!


> This is just another way to show that the sqrt compression/decompression is inferior to the log version.
>
> All Leica probably has to do is change the lookup table for compression and the one for decompression in the DNG file, nothing else.
>
> So who is going to tell them?

 

Again, interesting results, and I'm certain Mama boss is watching. Forum admin might help as well.

> Hallo Hans!
>
> Gamma is an approximation of the response of our eyes (retina + cortex).

Come with proof of this, please.

> Seems to be a (common) misunderstanding. The "logarithmic" response is of the form exp(log(x) * gamma) (I am repeating myself), which is equal to pow(x, gamma).

Exp is the inverse operator of log, as squaring is of a sqrt. So your algorithm is nothing other than x^gamma, which is not the same as a logarithm.

You can also write your formula as (sqrt(x^gamma))^2 = exp(log(x)*gamma) = 1/(1/x^gamma) = x^gamma.

So to talk about a common misunderstanding is a bit.....
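The identity is easy to verify numerically: exp(log(x) * gamma) is exactly the power law x^gamma, not a logarithmic curve.

```python
import math

x, gamma = 1234.5, 2.2

# the two forms are algebraically identical: exp(g*ln(x)) = x^g
power_form = x ** gamma
exp_log_form = math.exp(math.log(x) * gamma)
assert math.isclose(power_form, exp_log_form)
```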

> The gamma correction has nothing to do with the storage format of an image. In fact, in JPEG compression the image is converted from its sRGB color space to the YCbCr color space.

According to Charles Poynton, pictures have to be recorded in at least 11 bits if not gamma corrected. When gamma corrected, 8 bits is enough. That's why some form of compression is applied for JPEG etc.

If you do not agree, come with harder facts, such as decent references.
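Poynton's bit-count argument can be sketched like this (illustrative values, not his exact numbers): near black, one step of an 8-bit linear code is a huge relative jump in luminance, while one step of an 8-bit gamma 2.2 code at the same luminance is far smaller.

```python
def rel_step_linear8(lum):
    # relative luminance jump of one 8-bit code step under linear coding,
    # evaluated at normalised luminance lum (0..1)
    return (1 / 255) / lum

def rel_step_gamma8(lum, gamma=2.2):
    # the same one-code jump when the 8-bit codes are gamma-encoded
    code = round(lum ** (1 / gamma) * 255)
    lo, hi = (code / 255) ** gamma, ((code + 1) / 255) ** gamma
    return (hi - lo) / lo
```

At 1% of full luminance the linear step is about 39% and the gamma-encoded step about 7%, which is why linear storage needs several more bits to avoid visible banding in the shadows.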

> Every silicon-recorded image has to be converted from no color space (linear) to some color space (sRGB, Adobe RGB or whatever).
>
> Compressing an image inside the "no color space" loses luminance and color resolution, because the granularity differs from what our eye (the retina) detects.
>
> So it is necessary to convert the linear image into an appropriate color space before compressing. Leica's way of "compression" does both in one single step!

Leica compresses only to make everything fit in 8 bits, hoping that after decompression this is invisible.

They are not doing anything in a single step as you suggest; they are only saving memory.

 

You present it as if the compression/decompression choice that Leica made improved the picture.

No sir, it does not.

And in all this you still haven't noticed the one important thing, which is that the sqrt compression is visible and the log version is not.

Look at the bar graphs I made, and see why that is.

 

Hans

