
Official Response from Leica on Laundry List


Guest guy_mancuso

Recommended Posts


While I find this effort and the replies from Leica really great news (I am not aware of similar things having happened before, even with other vendors), I interpret it as another sign that Leica is putting all its forces into the M8's evolution and that the DMR is forgotten.

 

This is the bad news :-(



Just for reference, the "10 bit thing" is what Nikon use, so it's quite practical. Actually, Nikon use it in an even more sophisticated form: they also use an encoding that stores more frequently used values in smaller numbers of bits, and less frequently used values in longer bit lengths. So on average you get a shorter file length than with uniform-length coding. That's why, if you look at compressed NEFs, they are all slightly different lengths, while Leica DNGs are all the same.
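The variable-length idea can be sketched in a few lines. This is not Nikon's actual code table, just a toy Huffman code over an invented histogram, showing why skewed value frequencies yield a shorter average code length than fixed-width storage:

```python
# Sketch of why variable-length coding shortens files on average.
# Illustrative only -- the symbol frequencies are invented, not Nikon's.
import heapq

def huffman_lengths(freqs):
    """Return the code length per symbol for a Huffman code over freqs."""
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:      # every merge adds one bit to these codes
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, counter, syms1 + syms2))
        counter += 1
    return lengths

# A skewed histogram: a few sensor values dominate the image.
freqs = {"v0": 50, "v1": 25, "v2": 13, "v3": 7, "v4": 3, "v5": 2}
lengths = huffman_lengths(freqs)
total = sum(freqs.values())
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / total
print(avg_bits)   # well under the 3 bits a fixed-length code for 6 symbols needs
```

With a flat histogram the advantage disappears, which is why compressed sizes vary from shot to shot.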

 

Regards,

 

Sandy


The 8-bit-issue has been discussed to death and all necessary information is in several threads here as well as other places on the web. Why all this speculation?

Of course effects can be seen, first of all the loss of information in the highlights, which will cost a stop or so of dynamic range. The first M8 DNG we ever saw (from Photokina) already proved this. It makes no sense to speculate endlessly about what is obvious right before your very eyes!

 

Anyway, what I wanted to remark is that I would NOT use Isopropanol or Ethanol for cleaning, but Methanol (poisonous, be careful). I have already ruined a focusing screen with Ethanol; it can leave residues.

That "Eclipse" sensor cleaning product seems to be nothing else than Methanol with some additives.

IMHO Leica should not recommend Ethanol for cleaning.


I can't see how 8-bit coding can affect dynamic range. My understanding from reading the other threads is that the maximum 16-bit value is mapped to the maximum 8-bit value when creating the DNG file, and then mapped back again when a DNG file is opened, so if there's clipping it must be present in the original 16-bit (ok, padded 14-bit) data stream.
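That round trip can be sketched with a hypothetical square-root companding curve. The actual Leica lookup table is not shown in this thread, so the curve below is only illustrative of the shape of the idea; the point is that both endpoints survive the 14-bit to 8-bit to 14-bit trip:

```python
# A hypothetical square-root companding table, illustrating how the
# endpoints survive a 14-bit -> 8-bit -> 14-bit round trip.  The exact
# curve Leica uses is not reproduced here.

MAX14, MAX8 = 16383, 255

def encode(v14):
    """Map a linear 14-bit value onto 8 bits with a sqrt-shaped curve."""
    return round(MAX8 * (v14 / MAX14) ** 0.5)

def decode(v8):
    """Invert the curve: back to the 14-bit scale."""
    return round(MAX14 * (v8 / MAX8) ** 2)

assert encode(0) == 0 and encode(MAX14) == MAX8
assert decode(encode(MAX14)) == MAX14      # a clipped highlight stays clipped
# Mid-tones land close to where they started; only fine gradations merge:
print(decode(encode(8000)))   # near 8000, not exactly 8000
```

So clipping is unchanged; what the 8-bit step costs is fineness of gradation between the endpoints, not the endpoints themselves.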

 

Pure Ethanol will not leave a deposit; any residue will evaporate. Like Methanol it is a pure alcohol, albeit with an extra carbon/hydrogen group. If there's a residue, it's because of impurities in the fluid.


NJ, could you post a picture which has 8-bit-caused problems? If it is so easy, surely you must be able to generate one. I haven't seen any. And dynamic range should not be affected by the 8-bit coding. Both ends of the spectrum are retained. There are just fewer values in between.


NJ, could you post a picture which has 8-bit-caused problems? If it is so easy, surely you must be able to generate one. I haven't seen any. And dynamic range should not be affected by the 8-bit coding. Both ends of the spectrum are retained. There are just fewer values in between.

 

 

That is right. The problem is with tonal range in any case. The only way of seeing the effect of the encoding is by comparing DMR files (similar sensor) with the M8's. I would have preferred a somewhat wider space for storing tonal values. True 16-bit files are big, and too many values are reserved for the highlights (50% of all the tonal values for the first stop, 25% for the second stop, etc.). Some encoding is a good idea, but compressing down to 256 values (8-bit files) seems to go too far. I don't know for sure how severe the consequences are, but I feel a bit uncomfortable thinking of only 256 tonal steps. Even if there are adverse effects (posterization in the highlights, loss of tonal variability), they are difficult to evaluate; you need a point of reference for a comparison.
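The 50%/25% arithmetic above is easy to verify for a linear file: each stop down from clipping halves the number of code values available to it. A quick sketch, assuming a 14-bit linear file:

```python
# Quick check of the "50% of values in the first stop" arithmetic for a
# linear 14-bit file: each stop down halves the available code values.
total = 2 ** 14                       # 16384 code values in a 14-bit file
hi = total
shares = []
for stop in range(1, 5):
    lo = hi // 2
    shares.append((hi - lo) / total)  # fraction of all codes in this stop
    hi = lo
print(shares)   # [0.5, 0.25, 0.125, 0.0625]
```

This is exactly the imbalance that a companding (sqrt-like) curve is meant to redistribute.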


Just for reference, the "10 bit thing" is what Nikon use, so it's quite practical. Actually, Nikon use it in an even more sophisticated form: they also use an encoding that stores more frequently used values in smaller numbers of bits, and less frequently used values in longer bit lengths. So on average you get a shorter file length than with uniform-length coding. That's why, if you look at compressed NEFs, they are all slightly different lengths, while Leica DNGs are all the same.

 

Regards,

 

Sandy

 

I know Nikon was doing something like that, but I didn't know the technical details.

 

Do you have a link Sandy? Thanks!!


I know Nikon was doing something like that, but I didn't know the technical details.

 

Do you have a link Sandy? Thanks!!

 

I don't know of any one link, but you could take a look at the Nikon Forum thread "NEF Compression strategy - Answer" and the Luminous Landscape Forum thread "D2X raw compressed OK?"

 

This issue has also been beaten to death in various threads at Nikonians :: The Nikon User Community.

 

Sandy


This link is interesting: http://www.astrosurf.com/buil/d70v10d/eval.htm

 

It has a detailed analysis of the losses that Nikon's compression algorithm implies:

 

The Nikon D70 seems to use a lossy compression algorithm, through a transcoding table, for its NEF files. The result is an average coding of about 9.5 bits per data point, not the original 12 bits. It is possible that the missing codes ("gaps") in the RAW image are the result of the decompression process (9- to 12-bit coding), which creates a phenomenon of posterization. Once the RAW image is processed (registration, stacking, ...) and converted into colors, the phenomenon practically disappears and the visual aspect of the image is preserved.

 

More details here: http://www.majid.info/mylos/weblog/2004/05/02-1.html

 

Thom Hogan claims:

 

Leaving off Uncompressed NEF is potentially significant--we've been limited in our ability to post process highlight detail, since some of it is destroyed in compression.

 

(...) I read the C language source code for Dave Coffin's excellent reverse-engineered, open-source RAW converter, dcraw, which supports the D70. The camera has a 12-bit analog to digital converter (ADC) that digitizes the analog signal coming out of the Sony ICX413AQ CCD sensor. In theory a 12-bit sensor should yield up to 2^12 = 4096 possible values, but the RAW conversion reduces these 4096 values into 683 by applying a quantization curve. These 683 values are then encoded using a variable number of bits (1 to 10) with a tree structure similar to the lossless Huffman or Lempel-Ziv compression schemes used by programs like ZIP.

 

The decoding curve is embedded in the NEF file (and could thus be changed by a firmware upgrade without having to change NEF converters) (...).

 

The quantization discards information by converting 12 bits' worth of data into log2(683) = 9.4 bits' worth of resolution. The dynamic range is unchanged. This is a fairly common technique - digital telephony encodes 12 bits' worth of dynamic range in 8 bits using the so-called A-law and mu-law codecs. (...) The curve resembles a gamma correction curve, linear for values up to 215, then quadratic.

 

In conclusion, Thom is right - there is some loss of data, mostly in the form of lowered resolution in the highlights.

 

Does it really matter? You could argue it does not, as most color spaces have gamma correction anyway, but highlights are precisely where digital sensors are weakest, and losing resolution there means less headroom for dynamic range compression in high-contrast scenes. Thom's argument is that RAW mode may not be able to salvage clipped highlights, but truly lossless RAW could allow recovering detail from marginal highlights. I am not sure how practicable this would be as increasing contrast in the highlights will almost certainly yield noise and posterization. But then again, there are also emotional aspects to the lossless vs. lossy debate...

 

It seems to me that Leica's solution is 1) simpler than Nikon's and 2) leaves a narrower space for storing tonal values (256 vs. Nikon's 683). The question is whether Leica goes too far with the 8-bit encoding.
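The bit figures being compared here follow directly from the number of distinct levels each scheme keeps, via bits = log2(levels):

```python
import math

# "Bits of resolution" is just log2 of the number of distinct levels kept:
# a full 12-bit ADC, Nikon's 683 quantized levels, and the M8's 256 values.
for levels in (4096, 683, 256):
    print(levels, round(math.log2(levels), 1))
```

So Nikon's table keeps about 9.4 bits of tonal resolution against the M8's 8, while both preserve the full dynamic range of the ADC.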


I think to answer that we need more testing. We simply need to try to make a file misbehave. I don't personally want to ask Leica to change anything before I have seen a file misbehave as a consequence.

 

Keep in mind that the processor in the M8 is apparently rather weaker, and thus probably lower-powered, than those in common DSLRs. Asking Leica to do a lot of complicated processing is not realistic with the M8. A few shifts and ORs should be okay, to use a 4-pixels-in-40-bits scheme, as long as the processor is programmable via firmware in this way. That doesn't seem unrealistic to expect, but I really don't know.
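As a sketch of how cheap that packing would be: four 10-bit pixels occupy 40 bits, i.e. 5 bytes, and can be packed and unpacked with nothing but shifts, ORs, and masks. The bit layout below is my own choice, purely for illustration:

```python
# One (purely illustrative) way to pack four 10-bit pixel values into
# 40 bits (5 bytes) using only shifts and ORs.

def pack4(p):
    """Pack four 10-bit values into 5 bytes, big-endian bit order."""
    assert len(p) == 4 and all(0 <= v < 1024 for v in p)
    word = (p[0] << 30) | (p[1] << 20) | (p[2] << 10) | p[3]   # 40-bit word
    return bytes((word >> shift) & 0xFF for shift in (32, 24, 16, 8, 0))

def unpack4(b):
    """Inverse of pack4: recover the four 10-bit values."""
    word = int.from_bytes(b, "big")
    return [(word >> shift) & 0x3FF for shift in (30, 20, 10, 0)]

pixels = [1023, 0, 512, 77]
packed = pack4(pixels)
assert len(packed) == 5            # 4 pixels -> 5 bytes, not 8
assert unpack4(packed) == pixels   # lossless round trip
```

That is a handful of integer operations per four pixels, well within reach of even a modest in-camera processor.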


In the Nikon community, while the "compressed NEF" issue erupts occasionally, it's really been a bit of a non-issue, simply because many of the high end Nikons (e.g., D200) allow both compressed and uncompressed NEFs as an option. In practice, nobody has managed to convincingly show an example where it made a practical difference. In the case of Leica however, there is no such option, and the debate rages on. Nothing like an absence of data to fuel controversy.....

 

Suggestion to Leica: put in the 16-bit option - even if nobody uses it, at least the noise on the forums will go away. :D

 

Sandy


I am with those of you who prefer to see real problems before setting off in search of solutions. Perhaps it is my engineering / computing / real-business experience. :)

 

But we are in the minority.

 

What I would like to see is Leica clearing the backlog of real problems and assuring us in an open manner that sending our cameras to Solms will likely fix all the infant ills. (Which IMHO are few and relatively minor. But real nevertheless.)


I don't consider sudden death to be minor, but then it is bound to be found and fixed sooner or later, and I am in the lucky situation of not making a living from my camera, so I can wait until it happens.

 

Ultimately, Leica appears to stand 100% behind the camera, so the biggest worry is what happens between now and when the causes of the more serious problems are discovered, and the fixes implemented. There is no 'if' there.


Oh, I see there are good proactive activities by some members, already showing first results such as intensified communication between Leica and its customers and additional information being provided.

Should there be another list in the future, I have another topic for it. The longer the M8 is on the market, the more complex some things appear to be. It would be helpful if Leica could provide a 'compatibility matrix for lenses' answering the questions below:

1) coding necessary (strongly recommended) to improve image quality

2) coding 'nice to have' to improve image quality, but only an academic improvement (almost not visible in practical photography)

3) correct focusing with this lens not possible at all apertures due to optical properties

 

The matrix should account for the different versions of all lenses ever produced, let's say in the last 40 years, to give potential buyers and owners of M-system lenses a fair picture of what they can expect when digitizing their system by buying an M8.


I think to answer that we need more testing. We simply need to try to make a file misbehave. I don't personally want to ask Leica to change anything before I have seen a file misbehave as a consequence.

 

"misbeahaviour" is an ambiguous term. The first link shows losses of information. The sensor and the A/D converter capture information, and some is lost during storing. This is the fact. Is it relevant in "practical" terms? When it has "visible" effects?

 

Keep in mind that the processor in the M8 is apparently rather weaker, and thus probably lower-powered, than those in common DSLRs. Asking Leica to do a lot of complicated processing is not realistic with the M8. A few shifts and ORs should be okay, to use a 4-pixels-in-40-bits scheme, as long as the processor is programmable via firmware in this way. That doesn't seem unrealistic to expect, but I really don't know.

 

I don't know. Is the DMR much slower than the M8 at processing files (showing them on the LCD, magnifying in camera, writing to the SD card...)?


Archived

This topic is now archived and is closed to further replies.
