ho_co · Posted October 27, 2010 · #1

Oops! Originally mis-posted in the M8 forum; please ignore that post.

A new LuLa article by Mark Dubovoy emphasizes once again why digital and analog don't behave the same way in regard to depth of field: An Open Letter To The Major Camera Manufacturers.

Overall, I've got problems with DxO's techniques, and I question not their results but the meaning they assign to those results. But I think the point that digital pixel wells lie at the bottom of tubes and don't receive the side-striking rays that film did goes a long way toward explaining why the old depth-of-field calculations aren't borne out with digital.

I don't know why CMOS is singled out in the article (in what way does a CCD sensor differ in terms of the physical location of the photosites?), and I'd like to see similar comparisons both with CCD sensors and with the Foveon design, which throws a new architecture into the mix.
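For reference, here is a minimal sketch of the classical, film-era depth-of-field calculation the article calls into question. The 0.030 mm circle of confusion is just the usual full-frame convention, and the example numbers are illustrative rather than measured:

```python
def dof_limits(focal_mm, f_number, subject_m, coc_mm=0.030):
    """Near and far limits of acceptable sharpness, in metres.

    Uses the common approximation H = f^2 / (N * c) for the hyperfocal
    distance; the far limit goes to infinity once the subject distance
    reaches H.
    """
    f = focal_mm / 1000.0          # focal length in metres
    c = coc_mm / 1000.0            # circle of confusion in metres
    H = (f * f) / (f_number * c)   # hyperfocal distance (approximate)
    near = H * subject_m / (H + subject_m)
    far = float("inf") if subject_m >= H else H * subject_m / (H - subject_m)
    return near, far

# Example: a 50 mm lens at f/1.4 focused at 3 m
print(dof_limits(50, 1.4, 3.0))    # roughly (2.86, 3.16) metres
```

Whether those limits still mean anything when the photosites sit at the bottom of deep wells is exactly what the article puts in question.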
christakis · Posted October 27, 2010 · #2

Thanks for the post and the link to the article. It's a tough read, so I can't make any meaningful comments for now; I want to understand it thoroughly first.

Chris
mjh · Posted October 27, 2010 · #3

ho_co wrote: "I don't know why CMOS is singled out in the article (in what way does a CCD sensor differ in terms of physical location of the photosites?) and would like to see similar comparisons with both CCD sensors and with the Foveon design, which throws a new architecture into the mix."

CCD sensors, or at least those of the full-frame-transfer variety used in the M8 and M9, are extremely simple designs with a minimum of infrastructure. Most of the chip area is dedicated to absorbing photons and accumulating electrons, with just the bare minimum of wiring so the accumulated charges can be read out. CMOS sensors have a considerably more complex infrastructure, so the area actually available for catching light is smaller.

But this isn't just a matter of area. A sensor is a three-dimensional structure, and the infrastructure of a CMOS sensor doesn't just take space away from the photosites – the photosites sit at the lowest layer of a multi-layered chip, with all the infrastructure components towering above them. CCDs, with their minimalist infrastructure, are comparatively flat. If catching light with a CMOS sensor is like getting some sun at street level in Midtown Manhattan, a CCD is like Central Park – I'm exaggerating, of course, but you get the idea.

Foveon/Sigma's X3 sensors rely on the fact that light of different wavelengths penetrates silicon to different depths before it is finally absorbed. This would tend to exacerbate the issues seen with ordinary CMOS sensors.

That isn't to say that all CCDs are free of these issues. According to one of the charts in Mark Dubovoy's article, among the worst offenders are the Nikon D70 and D50 – DSLRs with CCD sensors. But these are interline-transfer CCDs like those found in compact cameras. Interline-transfer CCDs support the implementation of an electronic shutter, which makes the sensor design more complex; the percentage of chip area used for collecting light is smaller, just as it is with CMOS sensors.
mjh · Posted October 27, 2010 · #4

While Mark Dubovoy correctly identifies the issue, it isn't quite that simple. He wonders: "If the camera is automatically going to increase the ISO due to a significant light loss at the sensor, does it make sense to buy bigger, heavier and much more expensive large aperture lenses?" But what exactly makes a lens a large-aperture lens?

When we talk about aperture size, there are actually three sizes to consider: the diameter of the diaphragm, the diameter of the entrance pupil (the virtual image of the aperture seen through the front lens), and the diameter of the exit pupil (the virtual image of the aperture seen through the rear lens). The opening of the diaphragm will usually differ from both the entrance pupil and the exit pupil, and with the exception of strictly symmetric lens designs the entrance pupil will also differ from the exit pupil.

The speed of a lens depends solely on the entrance pupil – the larger its entrance pupil, the more light a lens will gather. On the other hand, the issues with light not reaching a photosite that lies at the bottom of a deep well depend on the exit pupil – the entrance pupil is immaterial.

Now, when stopping down a lens, both the entrance and exit pupils shrink proportionally, so if a large aperture creates problems, stopping down to a smaller aperture is guaranteed to help. But this doesn't imply that a faster lens (i.e. a lens with a larger entrance pupil) will necessarily be more problematic than a slower lens. With some lenses the exit pupil is larger than the entrance pupil; with other lenses it is the other way round. Thus a faster lens may create no issues while a slower lens does – namely when the latter's exit pupil is larger.
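To put some rough numbers on that (both lenses below are hypothetical, invented purely to show the geometry): what a deep photosite cares about is the cone of light defined by the exit pupil, and a lens with a modest entrance pupil can still present a steeper cone at the sensor if its exit pupil is large and sits close.

```python
import math

def cone_half_angle_deg(exit_pupil_diam_mm, exit_pupil_dist_mm):
    """Half-angle of the light cone reaching a photosite at the image centre."""
    return math.degrees(math.atan((exit_pupil_diam_mm / 2) / exit_pupil_dist_mm))

# Hypothetical fast lens: moderate exit pupil, far from the sensor
print(cone_half_angle_deg(30, 90))   # ~9.5 degrees
# Hypothetical slower lens: larger exit pupil, close to the sensor
print(cone_half_angle_deg(35, 50))   # ~19.3 degrees
```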
Guest Overview · Posted October 27, 2010 · #5

Thanks for the link. I enjoy all the info, and the posters' takes on it all!

Cheers, Rip
ho_co · Posted October 28, 2010 (Author) · #6

Michael, thanks for the clear explanations.

There seem to me to be too many issues for "data-mining" the DxO Mark comparisons to be worthwhile. I question the value of the factors DxO Mark measures, as well as their style (fixing a meaningless value to several decimal places). I question whether there's actually anything of value in their charts.

If you get a chance and are so inclined, I'd be interested in your take on my comments in another thread that mentions the same article, at http://www.l-camera-forum.com/leica-forum/leica-m9-forum/148135-light-loss-wide-apertures.html#post1498403.

Thanks!
Nicoleica · Posted October 28, 2010 · #7

Whilst I agree that the basic architecture of CMOS sensors results in the photosites being more shielded than on CCD sensors, doesn't an efficient microlens array negate this difference for all practical purposes?
mjh · Posted October 28, 2010 · #8

Nicoleica wrote: "Whilst I agree that the basic architecture of CMOS sensors results in the photosites being more shielded than on CCD sensors, doesn't an efficient microlens array negate this difference for all practical purposes?"

Ideally it would. Apparently it doesn't work out perfectly in practice; microlenses help, but only to some extent. I can only speculate as to why this might be so. For example: for a microlens to cope with a large exit pupil, its focal length would need to be short, i.e. it would have to be a wide-angle lens, so that the photosite can see all of the exit pupil. On the other hand, the depth of the chip's layered structure defines a "flange distance" that places a lower limit on the focal length, so a retrofocus microlens or some kind of fibre optics would be required to achieve a perfect match. But again, that's just speculation ...
Nicoleica · Posted October 28, 2010 · #9

Thanks Michael. I imagine that lens designers are working closely with sensor designers to optimise their designs so as to reduce conflicts in this area. Another factor to consider would be the new generation of rear-illuminated CMOS sensors, of course. We live in interesting times indeed.
mjh · Posted October 28, 2010 · #10

ho_co wrote: "But I think the point that digital pixel wells lie at the bottom of tubes and don't get the side-striking rays as film did goes a long way toward explaining why the old calculations regarding depth of field aren't borne out with digital."

I am not convinced that this is the case. While each photosite may not always see all of a huge exit pupil, that does not necessarily imply that the effect of a larger circle of confusion would be neutralised. A pixel near the edge of a circle of confusion would still receive about the same amount of light as a pixel near its centre. This is a different situation from the vignetting issues with large angles of incidence; within a circle of confusion the incident angle shows little variation, whereas it varies considerably from the centre of the sensor to its edges.
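A quick numeric check of that last point, under an assumed geometry (a hypothetical exit pupil sitting 80 mm in front of the sensor): the chief-ray angle changes by a few hundredths of a degree across a 0.03 mm blur circle, but by many degrees between the sensor centre and a full-frame corner.

```python
import math

EXIT_PUPIL_DIST_MM = 80.0   # hypothetical exit pupil distance

def chief_ray_angle_deg(off_axis_mm):
    """Angle of the chief ray at a given distance from the image centre."""
    return math.degrees(math.atan(off_axis_mm / EXIT_PUPIL_DIST_MM))

print(chief_ray_angle_deg(10.03) - chief_ray_angle_deg(10.0))  # ~0.02 deg across a 0.03 mm blur circle
print(chief_ray_angle_deg(21.6) - chief_ray_angle_deg(0.0))    # ~15 deg from centre to a full-frame corner
```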
adan · Posted October 28, 2010 · #11

The original article has to do with exposure - but what is the tie to DoF (depth of field)? Is the theory here (not expressed in the original article) that pixel wells cutting off edge rays will somehow reduce the recorded size of blur circles, i.e. make blurry spots smaller and "sharper" looking?

I'm not sure there is a real connection. The article notes that any given pixel may be getting less light from a fast lens, because the edge rays don't record. But a pixel is (for all intents and purposes) dimensionless - it is the smallest unit of a digital picture. There's no such thing as "half a pixel", or a "blurry" pixel for that matter - they all have hard silicon edges.

A blur circle is, by definition, big enough to see. Therefore it must cover multiple pixels to be recorded as anything other than a point. I think the article indicates that the blur circles from, say, a Canon lens at f/1.2 may be "darker" at all points/pixels in the blur (corrected for by the artificial ISO increase the article mentions) - but the same number of pixels will be included in the blur, so it will be the same size for DoF calculations.
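As a back-of-the-envelope illustration of that argument (the lens and distances are made up, and the 6.8 µm pitch only roughly matches an 18 MP full-frame sensor): a defocus blur circle spans a great many pixels, and dimming it does not shrink it.

```python
def blur_circle_pixels(focal_mm, f_number, focus_m, subject_m, pitch_um=6.8):
    """Diameter, in pixels, of the blur circle for a point at subject_m when
    the lens is focused at focus_m (thin-lens approximation)."""
    f = focal_mm / 1000.0
    aperture = f / f_number                          # entrance pupil, metres
    blur_m = aperture * f * abs(subject_m - focus_m) / (subject_m * (focus_m - f))
    return blur_m * 1e6 / pitch_um

# 75 mm lens at f/1.4 focused at 3 m; a point 0.5 m behind the plane of focus
print(blur_circle_pixels(75, 1.4, 3.0, 3.5))   # ~29 pixels across, whether dark or bright
```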
ho_co · Posted October 28, 2010 (Author) · #12

adan wrote: "The original article has to do with exposure - but what is the tie to DoF (depth of field)?..."

Andy, I only mentioned what Mark wrote. I was also surprised when he specifically brought in DoF. As I said, his point is interesting, but a lot hangs on his promised follow-up.

I'm quite bothered by his statement that "DxO is currently performing thorough focus measurements to support this [depth-of-field] argument." Normally in research, one doesn't set out to find data "to support [an] argument." Dubovoy's statement may just be a poor formulation, but it obviously suggests looking for results which support a specific conclusion. Not the best scientific practice, but I'm sure DxO is delighted with the attention.

If you're not aware of it, Rubén elsewhere linked to the LuLa thread on the "open letter": Mark Dubovoy's essay.
SJP · Posted October 28, 2010 · #13

Hmm... multiple threads on this now. I posted my point of view in the M8 forum, but in general I would agree with Andy: DoF is NOT connected to pixel size, pixel depth, or the type of sensor or film. They can research this as long as they like, but the answer is wrong if they claim otherwise. It makes no sense from a physics point of view, which is my line of business.

Disclaimer: OK, OK, if you use a very grainy film then maybe you will be less critical in terms of DoF. The same applies to putting vaseline on the lens/filter to get a soft look.
adan · Posted October 29, 2010 · #14

Howard - you're right. I missed those DoF references in scanning the article.

I still stand by my analysis. It's easy to test without DxO's complexities: shoot identical pix with a 75 f/1.4 wide open on an M9 and on a fine-grained, thin-emulsion film in an M3-7 (or, if one thinks CMOS vs. CCD is actually a factor, a Canon 85 f/1.2 or Leica 80 f/1.4 on a 5D and on a film EOS body). Then measure how big the blurs are. I suspect the verdict will be "insignificant difference, if any," based on my own experience with a 75 f/1.4 on the M9.

Data Mining is great - but one can still hit a vein of Fool's Gold.

Edit: But there are other good reasons (as your title suggests) to rethink DoF markings on lenses in this day and age, since they are generally based on roughly 8x10" or smaller prints as made in 1960, whereas the norm today (for people who care about DoF at all) is more likely 13x19" or larger, thanks to wide-platen inkjets.
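The print-size point can be put into rough numbers (the 0.030 mm and 8x10" figures below are only the usual conventions, not measurements): scaling the circle-of-confusion criterion to a larger print viewed at the same distance shrinks the acceptable blur, and with it the depth of field the engraved scales promise.

```python
def coc_for_print(base_coc_mm=0.030, base_print_long_in=10, print_long_in=19):
    """Scale the acceptable circle of confusion by print enlargement,
    assuming the same viewing distance."""
    return base_coc_mm * base_print_long_in / print_long_in

print(coc_for_print())   # ~0.016 mm, roughly halving the DoF the lens scales assume
```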
ho_co · Posted October 29, 2010 (Author) · #15

SJP wrote: "... This can be researched as long as they like but the answer is wrong if they claim otherwise ..."

Well done, Stephen! "They" go searching for evidence to support a preconceived conclusion, and you counter that no evidence to the contrary matters, because you've already got the answer. And here I sit, grasping at any straws turned up which I can twist to support my own contentions.

adan wrote: "... Data Mining is great - but one can still hit a vein of Fool's Gold ..."

Amen, Andy! The topic has been hotly discussed, but no one has defended the DxO/Dubovoy side. People have pointed out, both here and in the LuLa thread, a number of serious problems with DxO's methodology and with the conclusions drawn. On the LuLa thread, Mark Dubovoy posted today that he thinks "... we have reached the point where we have beaten this horse to death, so this will be my last post on this thread."

Despite serious reservations about the ideas and pretensions behind the DxO Mark site, I nonetheless cited that one paragraph to keep the fires stoked. Got my fingers burned, too. My thanks to you, Andy, Michael, Sandy and Stephen for elucidating the wrong turns taken. Though I disliked the pseudo-scientific smell of the argument, my dissection was less exact.

I still think the advent of digital requires us to rethink the definition of depth of field, but whether the change is simply in the size to which we blow things up or in the size of the digital blur circle as compared to the analog one, I'm willing to learn. Of course, my willingness to learn comes with a bias.
pgk · Posted October 30, 2010 · #16

The bit that really puzzles me is the part about "increasing the ISO without the photographer's knowledge". I assume that a "pre-gain" or whatever is applied to correct for the differential between the light hitting the sensor and that actually needed to enable the photographer to use a set (base) ISO. In the old days of film this would have been carried out by using chemicals to achieve a specific film sensitivity. But I don't see this as being an increase in ISO - or am I missing something? In the article it's referred to as an engineer's trick, but I can't remember Kodak giving a base ISO prior to adding the colour-sensitising dyes used to enhance sensitivity, which would be a similar requirement.

I'm with Adan on this - my f/1.4 pix still seem to show big blurs. All in all, a curious article.
mjh · Posted October 30, 2010 · #17

pgk wrote: "I assume that a 'pre-gain' or whatever is applied to correct for the differential between the light hitting the sensor and that actually needed to enable the photographer to use a set (base) ISO. In the old days of film this would have been carried out by using chemicals to achieve a specific film sensitivity. But I don't see this as being an increase in ISO - or am I missing something?"

I could refer you to my series of articles in LFI a couple of months ago, especially the article in LFI 3/2010. A sensor has a certain native sensitivity, typically but not always corresponding to the lowest ISO setting available. All the other ISO settings are implemented by some kind of digital push or pull development. When Mark Dubovoy refers to an increase in ISO, he means that the camera processes the sensor data as if the photographer had selected, say, ISO 400 when the actually chosen ISO setting was 200.
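In code form, that "digital push" amounts to something like the following, under a deliberately simplified model (the native ISO, raw values and clip point are illustrative, not those of any particular camera):

```python
import numpy as np

def apply_iso_gain(raw, effective_iso, native_iso=200, clip_dn=16383):
    """Scale raw sensor data as if it had been exposed at a higher ISO
    than the one the photographer selected."""
    gain = effective_iso / native_iso
    return np.clip(raw * gain, 0, clip_dn)

raw = np.array([100, 2000, 9000])
# The camera quietly applies ISO 400 gain to data shot at the ISO 200 setting,
# masking the light lost at the photosites (highlights clip sooner).
print(apply_iso_gain(raw, 400))   # [  200.  4000. 16383.]
```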
pgk · Posted October 30, 2010 · #18

Thanks Michael - I'll unwrap LFI 3/2010!

So what the article as a whole is getting at is that the usable light at the sensor is substantially less than it should be as increasingly fast apertures are used, i.e. the T-stop transmission (measured at the sensor) cannot all be used? I assume, then, that "corrections" are made by the camera manufacturers (who do not disclose this) by increasing the gain (boosting ISO), and that this is unavoidable without a corresponding shutter-speed reduction. Fair enough; this makes sense.

But I am still puzzled by the suggestion (easy enough to check, as Adan says) that faster-aperture lenses show an increased depth of field wide open - although I can see how this might be assumed. If he is correct, would each camera have a maximum aperture at any given focal length beyond which it would be pointless to go? I have to say that my own experience does not bear this out (I have both the Canon 35mm f/2 and f/1.4 lenses, for example, and they have very different wide-open signatures).
SJP · Posted October 30, 2010 · #19

Empirical science is what we need. If a picture is exposed properly with a Summilux at 1/125 and f/2, then, if their article is right about the sensitivity, the image should come out wrongly exposed at 1/250 and f/1.4 - and similarly for a Noctilux (i.e. 1/500 at f/1). I don't have either of these lenses so I can't check, but I am sure that with a Summicron the normal rules hold. I expect someone would have noticed, after spending multiple thousands of dollars on their Summilux or Noctilux, if it didn't behave as intended regarding exposure (especially those experienced with manual exposure).

Even if this claim about light sensitivity were true, it still does not and cannot have any impact on DoF. DoF has nothing to do with light sensitivity. Again, empirical science: keep a fixed aperture and under- or over-expose via shutter time or ISO - the DoF stays the same. QED
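The test proposed above leans on the usual exposure arithmetic: nominally, 1/125 at f/2 and 1/250 at f/1.4 put the same amount of light on the film or sensor. A quick sketch of that equivalence (exposure value computed from f-number and shutter time):

```python
import math

def exposure_value(f_number, shutter_s):
    """EV at ISO 100: log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(2.0, 1 / 125))   # ~8.97
print(exposure_value(1.4, 1 / 250))   # ~8.94, the same within the rounding of f/1.4 vs f/sqrt(2)
```

Per the article's claim, of course, the camera would quietly boost the gain at the widest apertures, so any shortfall would not show up in the final exposure anyway.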
ho_co · Posted October 30, 2010 (Author) · #20

Paul, my feeling is that the whole article is much ado about nothing. The LuLa thread is still getting some rather good technical posts, but IMHO just about all the arguments Dubovoy made have been refuted, both here and there.

In regard to "increasing the ISO without the photographer's knowledge": if you look at the DxO data as a time line, the observed effect (whatever its cause) is smaller with more recent designs. One can infer that the manufacturers are learning how to avoid having to do whatever dastardly thing they were doing. And pragmatically--as long as I get the shot I want, I'm not worried about how the camera did it.

Your description is right: "All in all, a curious article."
This topic is now archived and is closed to further replies.