Lenshacker Posted May 10, 2015 (#61)

I guess what I find confusing is that the second-stage table is a non-adaptive Huffman table. I take that to mean a fixed table, somehow "optimized" for a best fit across many images.

"In the following table compression means the first compression step; the second compression step is always loss-less." "Some Nikon DSLR models offer lossless compression that is accomplished by skipping the encoding table." "Raw image data is compressed by using a lossy encoding followed by non-adaptive Huffman compression." "During encoding the distinct values from the Analog to Digital Converter (ADC) are reduced to either 567, 683, 689, 769, 2753, or 3073 depending on the camera model and bit depth."

I interpreted this to mean that the Huffman table was fixed, was not computed to optimize for each image, and did not cover all possible values. Skipping the lossy encoding step would pass raw values straight to the non-adaptive Huffman table. What was missing from the explanation: did the Huffman code cover all possible input values? It has been a very long time since I wrote software to compute Huffman tables; I remember it was not fast, and the table changed dramatically based on scene content. You could easily get file expansion by using a non-optimal table.

What I do not understand: running-difference schemes are computationally expedient and offer ~50% reduction. Some image content results in "expansion", but then you just opt out of the compression and store the uncompressed file. You would need to hold the frame in the buffer until the decision is made. This stuff was early 90s for me, it's been a while.
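For anyone curious what such a running-difference step looks like in practice, here is a minimal sketch. The NumPy usage and the "store uncompressed on expansion" fallback are my own illustration of the idea described above, not any vendor's actual pipeline; the entropy_coder argument is a placeholder for whatever second-stage coder (e.g. a Huffman coder) follows.

```python
import numpy as np

def delta_encode_row(row):
    """Horizontal running difference: keep the first sample, then store each
    pixel minus its left neighbour. Exactly invertible, and the differences
    cluster near zero, which is what the entropy coder exploits."""
    out = row.astype(np.int32)
    out[1:] = row[1:].astype(np.int32) - row[:-1].astype(np.int32)
    return out

def delta_decode_row(deltas):
    """Invert the running difference with a cumulative sum."""
    return np.cumsum(deltas, dtype=np.int32)

def encode_frame(frame, entropy_coder):
    """Delta-code each row, then entropy-code the differences; if the result
    comes out larger than the raw frame ('expansion'), fall back to storing
    the frame uncompressed, as described in the post above."""
    deltas = np.apply_along_axis(delta_encode_row, 1, frame)
    packed = entropy_coder(deltas)
    raw_size = frame.size * 2          # assuming 16-bit samples
    return packed if len(packed) < raw_size else frame.tobytes()
```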
mjh Posted May 11, 2015 (#62)

"I guess what I find confusing is that the second-stage table is a non-adaptive Huffman table. I take that to mean a fixed table, somehow 'optimized' for a best fit across many images."

A Huffman table may either be static (either always the same or optimised for a given set of data) or adaptive, meaning that it may change while data is transmitted, accounting for the changing characteristics of different parts of the data. ‘Non-adaptive’ doesn’t imply that the Huffman table would always be the same, and even if it should be (which isn’t that unusual) it would still account for all the possible values.

By the way, the metadata suggests that DNG files created by the M Monochrom (Typ 246) are compressed using lossless Huffman compression (the value of the Compression tag is 7, i.e. JPEG, which in this case implies lossless Huffman compression).
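As a concrete illustration of a static table: the sketch below builds a classic Huffman code once, from a fixed frequency table, and gives every possible 12-bit value a code word, rare values simply getting long ones. The histogram is invented purely for the example; it stands in for the "optimised for a typical image" distribution discussed above.

```python
import heapq
from itertools import count

def build_huffman_table(freqs):
    """Build a static Huffman code from a {symbol: frequency} map.
    Every symbol present in the map gets a code word, however rare it is."""
    tie = count()  # tie-breaker so heapq never has to compare the payloads
    heap = [(f, next(tie), [[sym, ""]]) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, lo = heapq.heappop(heap)
        f2, _, hi = heapq.heappop(heap)
        for pair in lo:
            pair[1] = "0" + pair[1]   # extend codes in the lighter subtree
        for pair in hi:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, (f1 + f2, next(tie), lo + hi))
    return {sym: code for sym, code in heap[0][2]}

# A fixed ("non-adaptive") table covering every possible 12-bit value:
# every value gets at least a floor weight, so nothing falls outside the code.
histogram = {v: 1 for v in range(4096)}
histogram.update({v: 1000 for v in range(1800, 2300)})  # invented mid-tone peak
static_table = build_huffman_table(histogram)
```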
tthorne Posted May 11, 2015 (#63)

"Call me wacko - but I prefer prints over bits...."

Hey, Mr. Wacko! I was at the Leica Store Los Angeles yesterday and they had quite the collection of M246 prints all laid out. I am not going to get into the details of what I thought, but let's just say I am very happy to be at the top of their preorder list and look forward to picking mine up very soon. Let's just say... Remarkable...
Lenshacker Posted May 11, 2015 (#64)

"A Huffman table may either be static (either always the same or optimised for a given set of data) or adaptive, meaning that it may change while data is transmitted, accounting for the changing characteristics of different parts of the data. ‘Non-adaptive’ doesn’t imply that the Huffman table would always be the same, and even if it should be (which isn’t that unusual) it would still account for all the possible values. By the way, the metadata suggests that DNG files created by the M Monochrom (Typ 246) are compressed using lossless Huffman compression (the value of the Compression tag is 7, i.e. JPEG, which in this case implies lossless Huffman compression)."

That makes sense; again, it comes back to interpreting a description when the details of the algorithm are not furnished. I would have gone with a running-difference frame, then encoded the difference values with the Huffman table. The point response of the sensor and the MTF of the lens should make the number of distinct difference values more manageable for the Huffman table.
Lenshacker Posted May 11, 2015 (#65)

Just to be clear: the Nikon compression scheme only gets about 2:1 compression going through the Huffman table. A running difference would get about the same for 14-bit data. Running the differences through the Huffman table should do better than 2:1. JPEG was designed to deliver good results for lossy compression; for lossless, it relies on the Huffman code alone. Within JPEG there are many techniques, 28 last time I counted for old-style JPEG, and even more adding in JPEG 2000. Old-style JPEG originally handled 8-bit and 12-bit data. The newer JPEG 2000 would handle 14-bit data, but does not seem to get widespread use.
mjh Posted May 11, 2015 (#66)

A lossless 2:1 compression of raw data is about the best you can achieve; the files are usually larger. For example, losslessly compressed CR2 files from the EOS 5D Mark III can be as small as 25 MB or thereabouts, but not much smaller. I would have to hunt for NEF files to compare, but from what I remember, lossless compression results are quite similar across vendors and raw file formats, probably because they all use basically the same methods (Huffman encoding). And by the way, Huffman compression isn't limited to 8 or 12 bits as JPEG DCT compression is (which is what most people have in mind when JPEG is referred to). 14-bit raw data can be, and regularly is, compressed using lossless JPEG (i.e. Huffman).
Jeff S Posted July 8, 2015 (#67)

Puts comments on the use of 12 bits in the MM 246: http://www.imx.nl/photo/blog/

A complex read, but here's his final conclusion: "Final conclusion: the 12 bit resolution of the MM-2 is not a compromise solution but a logical choice given the practical and theoretical issues of photon flux, full well capacity, ADC converter technology and the demands of tonal range and tonal separation."

Jeff
dritz Posted July 24, 2015 (#68)

"Use a slow SD card, I use 4x cards. The idea is to eliminate data bursts, keep everything as smooth as possible."

I have never understood the general assertion that using fast cards causes banding in high ISO images. I experience the banding, and I do use high-speed cards. Is it the data burst that is corrupting the data? Is 4x the way to go?
fiftyonepointsix Posted July 25, 2015 (#69)

Some have speculated that data bursts cause more fluctuations in the power being supplied through the system as the data is being digitized; others, that the fast cards emit more RF noise. Myself: I have tested the M Monochrom with high-speed cards and saw banding at the highest ISO; the banding was not present with the PNY and SanDisk 4x cards. That was good enough for me, so I stick with the 4x cards.

http://www.l-camera-forum.com/topic/246085-m-monochrom-vertical-banding-with-low-iso-high-contrast-shots/

There is more discussion of this issue in the above thread.
algrove Posted July 25, 2015 (#70)

"Some have speculated that data bursts cause more fluctuations in the power being supplied through the system as the data is being digitized; others, that the fast cards emit more RF noise. Myself: I have tested the M Monochrom with high-speed cards and saw banding at the highest ISO; the banding was not present with the PNY and SanDisk 4x cards. That was good enough for me, so I stick with the 4x cards."

Interesting results. I must see if I still have any "slow" 4x SD cards lying around.
wlaidlaw Posted July 25, 2015 (#71)

I would be astonished beyond words if even Leica did not incorporate a voltage stabilisation device or circuitry within any of their digital cameras. This is absolutely standard on battery-driven, complicated electronic devices. They take up very little room; for example, a Texas Instruments stabiliser in a VSSOP package is a surface-mount chip 3 mm x 3 mm.

Wilson
fiftyonepointsix Posted July 26, 2015 (#72)

Personally, I think the banding has to do with RF emissions from the devices. However, small changes in voltage and power draw do occur with digital devices. Hook up an analog oscilloscope or a fast digital voltmeter to a digital line and you can measure small voltage differences on the digital lines as a system is running. These small differences can creep into the image in its original analog form. Much work goes into isolating the digital from the analog side of a digital imager, but at the highest gain settings these small differences show up in the image. Been there, done that with digital acquisition systems.
CheshireCat Posted July 26, 2015 (#73)

"Puts comments on the use of 12 bits in the MM 246: http://www.imx.nl/photo/blog/ A complex read, but here's his final conclusion: 'Final conclusion: the 12 bit resolution of the MM-2 is not a compromise solution but a logical choice given the practical and theoretical issues of photon flux, full well capacity, ADC converter technology and the demands of tonal range and tonal separation.'"

The DXO guys have found 13.3 stops of dynamic range in the M240 (same sensor), and that's with a Bayer filter on it. So who's wrong? Puts' technical explanation smells a lot like a way to justify Leica's choice. And smells fishy.
Overgaard Posted July 26, 2015 (#74)

"The DXO guys have found 13.3 stops of dynamic range in the M240 (same sensor), and that's with a Bayer filter on it. So who's wrong? Puts' technical explanation smells a lot like a way to justify Leica's choice. And smells fishy."

Do we know the dynamic range of the M246 from what Erwin writes?
CheshireCat Posted July 26, 2015 (#75)

"Do we know the dynamic range of the M246 from what Erwin writes?"

No, but he says: "Leica uses a compression scheme to match the range from 0 - 60000 into the 12 bit dynamic range from 0 - 4095. [...] The trick is square compression before digitization."

This means the camera allegedly squeezes a theoretical 16-bit dynamic range into 12 bits by means of a square compression, similar to the one used by the old M9 compressed DNG, with the implication that M246 raw files would contain non-linear data.

He also says that "Even 14 bit resolution is impossible to reach." Which is consistent with DXO's findings for the M240 (13.3 stops). But we obviously cannot store fractional bits, so we would use 14 bits per pixel.

Now, my interpretation of all this is: the M246 has at least 13.3 stops of dynamic range, but Leica decided to use lossy compression to make the M246 DNG files 1/7 smaller... which sounds like nonsense to me. And someone should check whether the M246 raw files contain square-compressed data or not.
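For what it's worth, the square compression Puts describes would amount to something like the sketch below. The 0-60000 input range and 4095 ceiling are his figures; the exact curve or lookup table Leica would actually use (if any) is not public, so treat this as an illustration of the principle only.

```python
import numpy as np

FULL_WELL = 60000   # Puts' figure for the usable linear range
MAX_CODE  = 4095    # 12-bit output ceiling

def compress_square_root(signal):
    """Map linear values 0..60000 onto 12-bit codes 0..4095 with a square-root
    curve: fine steps in the shadows, coarse steps in the highlights."""
    signal = np.clip(np.asarray(signal, dtype=np.float64), 0, FULL_WELL)
    return np.round(MAX_CODE * np.sqrt(signal / FULL_WELL)).astype(np.uint16)

def expand_square(code):
    """What a raw converter would do to get back to (approximately) linear
    data: square the code values."""
    return (np.asarray(code, dtype=np.float64) / MAX_CODE) ** 2 * FULL_WELL
```

If the files really did contain companded data of this kind, one would expect the DNG LinearizationTable tag, or an obviously non-linear raw histogram, to give it away.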
wlaidlaw Posted July 26, 2015 (#76)

"similar to the one used by the old M9 compressed DNG"

Don't you perhaps mean the logarithmic compression used on all the M8 DNG files? I thought the optional compression on the M9 was linear but with lossless mapping, which always seemed a slight contradiction in terms to me. As I understood it, the M8 compressed the bright end of the 0-255 range more than the dark end, because there was more significant information, for human eye perception, to be gained from a less compressed dark end.

Wilson
Overgaard Posted July 26, 2015 (#77)

Thanks CheshireCat. I don't understand all this, but when I look at the theory of bits, and consider that the M246 only uses one channel (or all three channels for just one tone), it should be OK with 12 bits. I think I'm waiting for the moment when the theory and what I see in the images match up.
CheshireCat Posted July 26, 2015 (#78)

"Don't you perhaps mean the logarithmic compression used on all the M8 DNG files?"

Yes. I don't have an M8, but I understand it uses the same compression as the M9.
CheshireCat Posted July 26, 2015 (#79)

"I think I'm waiting for the moment when the theory and what I see in the images match up."

The problem is: you cannot see in the images more than the information they contain. Remember when Leica told us the M8 compression was not affecting images? They were proven wrong as soon as the M9 came out and allowed saving uncompressed images. Now Puts is telling us that "14 bit resolution is impossible to reach", and we cannot prove him wrong because we cannot configure the M246 to output 14-bit raws. Yet the DXO guys claim 14.8 stops for the D810, and 13.3 stops for the M240 (same sensor, plus color filters!). These are measured values, i.e. "what they see in images", versus Erwin's theoretical disquisition on something he cannot see because the M246 output is castrated to 12 bits.
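A back-of-the-envelope check of Puts' argument, assuming his figures of a roughly 60,000 e⁻ usable range and a square-root style encoding into 12 bits (not a claim about what the camera actually does): with

\[
v = 4095\sqrt{\tfrac{N}{F}}, \qquad F = 60000,
\]

the linear step represented by one code step at signal level N is

\[
\Delta N \approx \frac{dN}{dv} = \frac{2\sqrt{N F}}{4095} \approx 0.12\,\sqrt{N},
\qquad \text{while} \qquad \sigma_{\text{shot}} = \sqrt{N}.
\]

The step between adjacent codes thus stays well below the photon shot noise everywhere, which is essentially why Puts can argue that 12 companded bits lose nothing visible even if the underlying dynamic range is 13+ stops. Whether the M246 files actually contain such companded data is exactly the open question.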
wlaidlaw Posted July 26, 2015 (#80)

The compression algorithm on the M9 was supposedly lossless, which Leica never claimed for the M8. What they did claim was that the logarithmic compression was better than linear and was, I assumed, needed to speed up card writes from the buffer. By the time the M9 came along, card speeds had improved to the point that compression was more of a space-saving choice than a necessity.

Wilson
Archived
This topic is now archived and is closed to further replies.