
Part 2 of Puts' M240 Review



Following part one, I was expecting more direct comparison between the M and M9, particularly since he said he wouldn't report until the RAW developers for the M were fully ready.

 

This doesn't tell me much in practical print terms, except that the M and MM are both pretty spectacular machines, especially when coupled with the cream of the crop 50 APO Summicron. [And that the M9 and M-E are no slouches either.] I think we knew that conceptually already.

 

Jeff


Interesting and surprising regarding the choice of comparison.

 

I didn't completely follow the EV / dynamic range conversation. Looking at the graph, which appeared to have 0-255 (so I assume this is IRE) on the vertical axis and EV on the horizontal, I could see the very valid point about shadow detail, but the dynamic range seemed to be higher than 13?


Another thought and observation: my own unscientific impression of CMOS sensors is that they can show a familiar trait in the highlights that to my eyes looks characteristic of Nikon, Canon etc. Highlights, particularly on skin, somehow look more digital than they do from the M9/M8, even though they are not clipped.

 

I am ready to be shot down in flames (technically or subjectively), but I wonder if the shape of the CMOS curve, and the way it is mapped, results in this 'look'?


Yes - I noticed the DR thing as well, but Erwin appears to suggest that a DR over 8-9 is irrelevant.

 

I think Erwin is suggesting that any gradation near white or near black is nice to see on the graph but not reproducible in print.

 

Interestingly, I don't have an issue seeing 250-255 in print, but I agree that at the lower end (particularly on matt papers) I struggle below 15, perhaps higher. My understanding is that matt prints can only achieve 6-7 stops, though the comparison is perhaps less than direct (projected vs. reflected light, etc.).

 

If you looked at, say, 250-10, then the range does seem to better match the figures noted, but from my perspective it is difficult to find 13.3 on the graph.



I too am not clear on how the DR chart should be interpreted... I remember seeing the ISO 21150 standard referenced for scanners... I think the "13.3 bits" cannot be read from the graph itself... 13.3 bits is the density range of the ISO target that is scanned (photographed, in our case): 2^13.3 ≈ 10,000. I suppose it is a map of grey tones, which can probably be found for sale somewhere... I tried quickly at the Edmund Optics site and it seems to me they have it. So, if on analysing the out-of-camera bitmap you find different values for all the grey tones, you can conclude that the theoretical DR is up to that of the chart: 13.3 bits.
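As a quick sanity check of the 2^13.3 ≈ 10,000 arithmetic (just the numbers, nothing camera-specific):

```python
import math

# A target spanning a 10,000:1 luminance range (4.0D in log10 density terms)
# corresponds to roughly 13.3 doublings, i.e. "13.3 bits" or stops.
print(2 ** 13.3)          # ≈ 10086, roughly the 10,000:1 range of the chart
print(math.log2(10_000))  # ≈ 13.29 bits / stops
```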

What I cannot understand (and don't have time to research right now... :o) is the value of 20 on the x axis...

 

Indeed, I think that while this computation can make sense for projected or monitor-displayed images, printing is another matter entirely, surely highly dependent on the medium (paper), as you say.


Perhaps I can help clarify what Puts meant. He wrote to me by mail and allowed me to publish it in the forum:

 

"the theoretical dynamic range is, as I remark, grossly overvalued and in fact useless. The useful exposure range is also determined by the software in the camera. The problem is this: the software must determine the minimum charge of the voltage of the individual pixel to set a black level, and the same goes for the white level. The exiftool analysis shows that the M9(-P) and the M both use a 14-bit-depth dynamic range that theoretically runs from 0 to 16383. The M9-P sets the black point to 44 and the saturation point to 16383. The high black point (or noise floor) reduces the noise in dark areas, but the saturation point (max white) allows for no over-exposure. The M has values of 0 and 15000. The low black point is possible because the noise is lower, and the lower saturation point allows more room for over-exposure.

In other words: with standard exposure the M fares better than the M9 because it has more over exposure latitude. You can get the same result when setting the exposure compensation of the M9 to -2/3.

I am not sure if the M has a larger dynamic range than the M9 (the Monochrom is even better and this one uses the CCD sensor too)."

 

Elmar
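As a numeric aside on the figures Puts quotes: the black and saturation points translate into stops via log2 ratios. A sketch (the "stops" definition here is my simplification — real dynamic range depends on the noise floor, not just the black offset):

```python
import math

# Turn Puts' black point / saturation point pairs into an encoded range
# in stops. Treating the range as log2(saturation / black) is my own
# simplification for illustration.

def stops(black: int, sat: int) -> float:
    return math.log2(sat / max(black, 1))  # guard: log2 of 0 is undefined

print(stops(44, 16383))          # M9: ≈ 8.5 stops of encoded range
print(stops(0, 15000))           # M240: ≈ 13.9 stops, if the floor is 1 count
print(math.log2(16384 / 15000))  # headroom given up at the top: ≈ 0.13 stop
```

Note that the saturation margin alone (≈ 0.13 stop) does not by itself explain the -2/3 EV compensation figure; that presumably also involves metering behaviour.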


Yes - I noticed the DR thing as well, but Erwin appears to suggest that a DR over 8-9 is irrelevant.

 

If that's what he is really arguing, then it would reflect a disregard for why (and how) dynamic range is translated from input (camera or film) to output (paper). Yes, you have to translate the greater dynamic range of a sensor or film (say 13 EV) to the lesser DR of the output medium (8-9EV). But that doesn't change the fact that the difference in DR between input and output really represents your ability to make arbitrary tonal choices. On the other hand, if you excluded aesthetic choices and regarded the camera/pp/paper as a pure recording machine, then yes, you could argue that the system DR is only 8-9.

 

But this is something of a minor point. One thing that is striking about this report is that the M represents far less of a resolution sacrifice than you might be led to believe (if you set aside the gross resolution difference). The MM has a theoretical resolution of 1736 lp/ph (based on the physical number of pixels on sensor), and the M has a real-world delivery of 1700. The M has 98% of the MM's resolution in linear measure and 95% in two dimensions. In the real world, I saw a lot of lenses fall short of fulfilling the promise of even the M8/M9's pixel density, so for most people who are not buying $8K 50mm lenses, the difference between M and MM may be a non-issue.
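The ratio arithmetic in the paragraph above, for anyone who wants to check it (lp/ph figures taken from the post):

```python
# Resolution comparison: MM theoretical limit vs. M240 measured delivery.
mm_theoretical = 1736   # Monochrom, pixel-count limit in lp/ph
m_measured = 1700       # M240, reported real-world figure

linear = m_measured / mm_theoretical
print(f"linear: {linear:.1%}")     # ≈ 97.9% in one dimension
print(f"area:   {linear ** 2:.1%}")  # ≈ 95.9% in two dimensions
```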

 

I would tend to disregard the discussion about the MM doing 2000+ because it presupposes a resolution that exceeds what the sensor can physically represent (the precision of a system can never exceed the precision of any single component). The way MTF is translated into resolution - based on contrast - explains how the program got to this number (EP says as much). And though it is not the clearest part of this piece, EP himself says that you should use the "theoretical" figure [1736 lp/ph]. Even if you took 2000+ at face value, though, the MM was not tested as a system (note the references to focus bracketing) - so your big limiter may be the focusing system (or more acutely, your lenses).

 

So when you read down to the bottom of this, the big differentiator is highlight and shadow reproduction - as shown on a chart. It's hard to gauge, looking at this, how much of a factor it indeed would be in real life. This report will, no doubt, read like an inkblot test does - different things to different people.

 

Dante


In daily use with the Leica Monochrom (as with the M8 and M9) we always lose pixels to the inaccurate viewfinder and the related need to crop the image. Live View allows the sensor surface to be exploited fully and helps to reduce unnecessary pixel losses. Therefore, for large prints the Leica M (Type 240) is perhaps more suitable than the Leica Monochrom. In my opinion, not only theoretical comparisons of numbers and graphs, but also practical experience with these two cameras in action, will demonstrate the practically relevant differences between the Leica M (Type 240) and the Leica Monochrom.


Looking at the graph and considering the point about the extremes: if you look at the EV (if indeed the bottom track is EV), then the MM is 35% greater between 10 and 245 (again I assume this is a 0-255 scale), and at 50-200 the MM is 15% greater.


In other words: with standard exposure the M fares better than the M9 because it has more over exposure latitude. You can get the same result when setting the exposure compensation of the M9 to -2/3.

Elmar

 

How can you get the same result in the M9 if the latitude is different?

 

Does this imply that exposure compensation reduces the white point in the camera? Is exposure compensation therefore different than simply reducing the exposure, f-stop or shutter speed, by 2/3? Have I all along incorrectly thought they were doing the same thing?


How can you get the same result in the M9 if the latitude is different?

 

Does this imply that exposure compensation reduces the white point in the camera? Is exposure compensation therefore different than simply reducing the exposure, f-stop or shutter speed, by 2/3? Have I all along incorrectly thought they were doing the same thing?

 

I think the key to Elmar's reasoning is that he quotes the OVER-exposure latitude, not the whole latitude: basically, if the M can manage blown highlights better than the M9 (at the same exposure), then to achieve the same behaviour IN THE HIGHLIGHTS one has to underexpose the M9 a bit (I don't know how it's exactly 2/3, though... but I assume someone has done the computation...).


Ok, thanks. I am trying to wrap my mind around this. So, the latitude of the M sensor is greater than the M9's (we assume), but it still has to be fitted into a 14-bit gradation. The M9 took what latitude its sensor had and put it all into the 14-bit space. The M takes its sensor's latitude and assigns it into less than the full 14-bit space (15000).

 

What is done with the remaining bits that are over 15000, if 15000 is defined as white? I know I'm missing some key piece of this concept. Can you take this back to the beginning for me, or does someone have a link so I can do my homework on this and then come back and re-read Puts?


And, I am now not sure what exposure compensation does. Since I don't use it, I adjust the manual exposure controls, I am not sure I have thought about how it works. Exposure compensation obviously isn't changing shutter speed or f-stop. So, what is it that it is changing or reassigning? Does it change the sensitivity of the sensor or does it change the mapping of the black point and the white point? Is there an advantage to using exposure compensation rather than adjusting the manual settings (other than the obvious characteristics of f-stop and shutter speed changes)?


I guess the M takes the latitude of its sensor and assigns 15000 as the white point. I think this is more correct. Are the remaining EV in the well assigned to the remaining bits? I assume this represents highlight information that is above the white point. And, I assume this is information that can be retrieved in PP and compressed below the white point so, it is about 2/3 of a stop that can be recovered beyond what the M9 can, because the M wells can hold more electrons?


My interpretation is that both cameras are 14-bit and so capable of 14 stops of output range; the M9 sets the bottom at 44, and I'm not sure about the M. The M seems to see a maximum of 15,000, not 16,384. My fag-packet maths suggests that 0-44 is about 5.5 stops, so the output would appear limited to about 8.5 stops for the M9?!

 

All a bit confusing. One other point that is inherent in the technology is the lack of linearity within it. I am guessing that with gamma corrections akin to CRTs' (the good old 2, or 2.2, 2.4), the CCD could have advantages with data at the lower end of the scale, and hence the better shadow detail from the MM.
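One way to see the gamma point concretely: a power-law encode hands more output codes to the deep stops than a straight linear encode does. A generic 2.2 example (my own illustration of the general mechanism, not Leica's actual raw curve):

```python
# Codes-per-stop comparison: straight linear 8-bit encode vs. a generic
# 2.2 power-law ("gamma") encode. Illustrative only -- not Leica's curve.

GAMMA = 2.2

def lin8(v: float) -> int:
    """Linear [0,1] mapped straight to an 8-bit code."""
    return round(255 * v)

def gam8(v: float) -> int:
    """Linear [0,1] mapped to 8 bits through a 1/2.2 power law."""
    return round(255 * v ** (1 / GAMMA))

# Stop 0 is the brightest stop (0.5..1.0), stop 5 is deep shadow.
for stop in range(6):
    hi, lo = 0.5 ** stop, 0.5 ** (stop + 1)
    print(stop, lin8(hi) - lin8(lo), gam8(hi) - gam8(lo))
```

The deep stops get only a handful of codes in the linear encode but several times as many after the power law, which is the usual argument for why a curve (or a quieter floor) matters most in the shadows.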


My interpretation is that both cameras are 14-bit and so capable of 14 stops of output range; the M9 sets the bottom at 44, and I'm not sure about the M. The M seems to see a maximum of 15,000, not 16,384. My fag-packet maths suggests that 0-44 is about 5.5 stops, so the output would appear limited to about 8.5 stops for the M9?!

 

All a bit confusing. One other point that is inherent in the technology is the lack of linearity within it. I am guessing that with gamma corrections akin to CRTs' (the good old 2, or 2.2, 2.4), the CCD could have advantages with data at the lower end of the scale, and hence the better shadow detail from the MM.

 

Yes... that's what the maths says... and with 16384-15000 being less than half a stop, this would lead to the conclusion of the "over 13 bit" DR quoted by Puts... but the non-linearity you mention DOES complicate things... I do remember that in the "M8 days" there were many discussions about the "curve" applied by Leica in the M8 firmware to obtain the raw file from the A/D output... there was precise info about this curve... I haven't yet read anything about this regarding the M240... is there any documentation on this matter?


It's a shame the old Puts articles have been removed. I recall the differences between CCD and the CMOS of the Nikon being discussed. Again from memory, another real-world issue is higher ISO and retaining the dynamic range, something CMOS was better at.

 

The only other thing I can say is that when setting up projectors, the black level and the gradation out of black (0-15% or so) is critical to how the image looks, particularly the depth and body of the image. I guess the question is: does the low base noise level of the CMOS sensor (I assume closer to zero than 44) compensate for the less sympathetic response curve in comparison to the CCD?


Yes... that's what the maths says... and with 16384-15000 being less than half a stop, this would lead to the conclusion of the "over 13 bit" DR quoted by Puts...

Actually 15,000 equates to 13.87 stops which should be indistinguishable from 14 stops.

 

It has to be said though that we do not know how the values between 0 and 15,000 are derived in the M. If you re-read the old threads about the blackpoint settings in the M9, you see that the blackpoint isn’t fixed at 44; rather it depends on the ISO setting. There is a theory that the M9 adds a bias before the signal is digitised that the variable blackpoint accounts for (whereas there was no such bias added by the M8 where the blackpoint was fixed).
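That bias theory can be illustrated with a toy model (the numbers are entirely invented, just to show the mechanism):

```python
import random

# Toy model of why a pre-digitisation bias (pedestal) helps: read noise
# swings negative as well as positive, but an unsigned ADC clips at 0.
# Adding a bias first, and recording it as the blackpoint, preserves the
# negative excursions so they can average out later. All values invented.

random.seed(1)
BIAS = 44  # stand-in for an M9-style blackpoint

def adc(signal: float, bias: float) -> int:
    noise = random.gauss(0, 5)                   # read noise, sigma = 5 counts
    return max(0, round(signal + bias + noise))  # unsigned ADC clips below 0

dark = [adc(0, 0) for _ in range(10_000)]                    # negatives clipped
dark_biased = [adc(0, BIAS) - BIAS for _ in range(10_000)]   # bias removed after

print(sum(dark) / len(dark))                # > 0: clipping skews the mean up
print(sum(dark_biased) / len(dark_biased))  # ≈ 0: noise averages out cleanly
```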

 

I do remember that in the "M8 days" there were many discussions about the "curve" applied by Leica in the M8 firmware to obtain the raw file from the A/D output... there was precise info about this curve...

That wasn’t actually a gamma curve but the lossy compression scheme at work. As this was undone in raw processing (the camera multiplied by 4 and took the square root, the raw converter squared the value and divided by 4) it made no difference with regard to gamma.
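For concreteness, the scheme described reads like this in code (my reconstruction from the description above — the real firmware reportedly used a lookup table, and rounding details may differ):

```python
import math

# Square-root compression as described: the camera stores roughly
# sqrt(4 * v), squeezing a 14-bit linear value into 8 bits; the raw
# converter squares and divides by 4 to undo it.

def encode(v: int) -> int:
    """Compress a 14-bit linear value (0..16383) into 8 bits."""
    return math.isqrt(4 * v)  # floor keeps full scale within 0..255

def decode(c: int) -> int:
    """Invert the compression in the raw converter."""
    return c * c // 4

print(encode(16383))          # 255: full scale fits in 8 bits
print(decode(encode(16383)))  # 16256: error ~127 counts, about one unit
                              # of photon shot noise (sqrt(16383) ~ 128)
```

The quantisation step grows with signal level, which is the usual argument for why the loss is visually negligible: photon shot noise grows in the same way.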


Archived

This topic is now archived and is closed to further replies.
