F8: last stop for gas



Lars,

 

As I understand it, there are two effects going on here: diffraction at the lens level (which we have lived with on film), and diffraction arising from the optical characteristics of the sensor in terms of physical pixel size (which interacts with the limits of Bayer-filter interpolation).

 

I'm perfectly willing to be wrong here, of course. Even the experts were arguing over at LL. Personally, though, I was convinced by Myhrvold and the links he shows.


Lars--

Do check out the "yes it does!"-"no it doesn't!" article on LuLa that Sean and Jamie linked to.

 

The two contributors to the article are in strong disagreement, and I'm sorry the first yielded the floor when he did because it seems to me that the second has some holes in his argument.

 

--HC


Lars--you're right about lens diffraction, but apparently the sensor plays a large role here as well.

 

Jamie, the article(s) is/are interesting, but I wonder if I have not been misunderstood, though not necessarily by you, I should say.

 

Any 'sensor', be it film or a chip or a daguerreotype plate, has inherent limitations as to resolution. These limitations –

 

——— exist irrespective of lens performance ————

 

but limitations of lens performance can make these limitations moot. At wide apertures at least, and often at all apertures, this 'lens limitation' is due to classical aberrations. At small apertures, increasing diffraction may be more important. I can attest, due to direct observation and printing of negative grain under a 50 mm El-Nikkor, that diffraction can be limiting in real optical systems. But an El-Nikkor or a Schneider Componon is of course not a myriad-component zoom.

 

It is in most cases possible to ascertain by microscopic observation of the imaging of a point source of light, if an optic is diffraction limited. If the resulting disc has an even distribution of light, or shows the characteristic picture of over- or under-corrected spherical aberration, then the system is not diffraction limited. If it is a typical Airy disc, i.e. shows the typical circular diffraction pattern, then it is. (BTW Mr Airy did not 'discover' this pattern. It has been known more or less since the late 18th century.)

 

Now, a small f-stop may produce a decrease of resolution and contrast due to the Airy disc being too large for the definition capacity of the sensor. But that does not mean that diffraction occurs IN the sensor, or is a sensor problem, as some seem to believe. Similarly, the disc of confusion created by chromatic and spherical aberration, astigmatism and coma can make the sensor limitation irrelevant, but that does not mean that these aberrations occur IN the sensor, or are a sensor problem. Both types of resolution loss by enlargement of the projected disc occur whenever the inherent resolution of the lens is less than that of the sensor or silver halide emulsion or whatever. – I should perhaps add that much of the discussion in the articles is of doubtful relevance, as the M8 does not use an anti-aliasing filter (or soft filter, to call a spade a spade).
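The trade-off Lars describes can be put in rough numbers. A minimal sketch in Python, under stated assumptions: 550 nm green light, the common Airy-disc approximation d ≈ 2.44·λ·N, a pixel pitch of about 6.8 µm for the M8 (an assumed figure, not from this thread), and the rule of thumb that diffraction starts to dominate once the disc exceeds roughly two pixel widths:

```python
def airy_disk_diameter_um(f_number, wavelength_um=0.55):
    # Diameter of the Airy disc (first dark ring to first dark ring)
    # at the image plane, in micrometres; green light by default.
    return 2.44 * wavelength_um * f_number

PIXEL_PITCH_UM = 6.8  # assumed approximate M8 pixel pitch

for n in (2.8, 5.6, 8, 11, 16):
    d = airy_disk_diameter_um(n)
    regime = "diffraction-limited" if d > 2 * PIXEL_PITCH_UM else "sensor-limited"
    print(f"f/{n}: Airy disc ~ {d:.1f} um -> {regime}")
```

On these assumptions the crossover falls between f/8 and f/11, which matches the apertures this thread keeps circling around; the two-pixel threshold is a convention, not a hard physical boundary.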

 

The fact that some people confuse apples and oranges does not mean that they are the same species.

 

Best regards from the old man from the Age of the Slide Rule


Thank you, Lars, for your explanation. Of course what you say is true and well known, and I doubt many of us would argue otherwise. Your objection seems to be semantic, in that you feel the word diffraction is incorrect for the resolution limit of the sensor. I see that it can be confusing to use the same word for two different effects. To equate it with film, however, is incorrect as well. The way the light point is recorded, be it as an even disk (although that is a concept like infinity or immaculate virtue, which can be approached but not reached) or an Airy disk, is totally different on film than on a sensor. Film is more random but, more importantly, three-dimensional, as the emulsion has a thickness. I once likened that to shining a torch into a murky plate of soup. Thus the original disk is smeared out and distorted, making it impossible to determine an exact resolution limit for film at this microscopic level. A sensor, however, especially one without a matte filter in front, allows an exact mathematical determination of the resolution. Thus I feel that using the term "sensor diffraction" to distinguish it from film is defensible, although I give you that it would be more precise to reserve the word for the pure optical phenomenon.


  • 1 month later...
Jamie,

it is correct that the properties of the 'sensor' – grain and emulsion depth of film, pixel diameter in a chip – will ultimately limit the resolution of the image. This however is a different matter than diffraction in a strict optical sense, which occurs in the optical system. Diffraction may well occur in the small lenses above the different pixel loci, but as these lenses are not image-forming – or at least, the individual captor cell does not record an image of its own, only the overall intensity of the light it captures – it is irrelevant. Optical diffraction is a continuous variable, while 'sensor resolution cutoff' is just that, an abrupt limit.
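The continuous-versus-abrupt distinction can be sketched numerically. Assuming 550 nm light and, purely for illustration, a ~6.8 µm pixel pitch (a figure not given in this thread): the pixel grid imposes a hard Nyquist cutoff, while a diffraction-limited lens loses contrast gradually all the way up to its own cutoff frequency.

```python
def nyquist_cutoff_lp_mm(pixel_pitch_um):
    # Hard geometric limit of the pixel grid: two pixels per line pair.
    # Detail finer than this is simply not recorded.
    return 1000.0 / (2.0 * pixel_pitch_um)

def diffraction_cutoff_lp_mm(f_number, wavelength_um=0.55):
    # Frequency at which a diffraction-limited lens's contrast reaches zero;
    # unlike the sensor cutoff, contrast degrades continuously below this.
    return 1000.0 / (wavelength_um * f_number)

print(f"Sensor Nyquist:     {nyquist_cutoff_lp_mm(6.8):.1f} lp/mm")
print(f"Lens cutoff, f/11:  {diffraction_cutoff_lp_mm(11):.1f} lp/mm")
```

A sketch only, but it illustrates Lars's point: the lens-side number is a sliding scale with aperture, the sensor-side number is a fixed wall.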

 

This is how I understand it. Diffraction happens when light paths are bent by passing close to optical edges; it has already happened when that light hits the film or the chip. Am I dead wrong?

 

P.S. I do know what a diffraction grating is.

 

The old man from the Age of Classical Physics

 

I wanted to say thanks to you all for the great discussions on the 'board, including this one. Almost every time I read something here I learn something new. This year's M8 discussions have been enlightening. This particular topic interested me because I was reading a review of the DMC-L1 and noticed that the test image started getting fuzzy after f/11.

 

I always thought that, with a good lens, F2 to F22, you would have a decent image. I'm sorry to hear that isn't true.

 

This does explain somewhat why small sensor P/S cameras are limited to F8 though... I'm sure there are other reasons too.
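The f/8 limit on compacts can be checked with the same Airy-disc arithmetic. Assuming a ~2 µm pixel pitch, a plausible value for a small-sensor compact but an assumption, not a figure from this thread:

```python
def airy_um(f_number, wavelength_um=0.55):
    # Airy disc diameter at the image plane, micrometres, green light
    return 2.44 * wavelength_um * f_number

SMALL_SENSOR_PITCH_UM = 2.0  # assumed compact-camera pixel pitch

for n in (2.8, 4, 5.6, 8):
    print(f"f/{n}: Airy ~ {airy_um(n):.1f} um vs {SMALL_SENSOR_PITCH_UM} um pixels")
```

Even at f/2.8 the disc already spans a couple of pixels on these assumptions; by f/8 it covers roughly five, which is one plausible reason manufacturers stop the scale there.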

 

I have read that Ansel used F64 on some of his shots (must have taken ages to get an image on the old emulsions at that aperture). I thought he did it to get the best DOF with the big negatives. There are other reasons though, like crummy old lens designs?:eek:


I have read that Ansel used F64 on some of his shots (must have taken ages to get an image on the old emulsions at that aperture). I thought he did it to get the best DOF with the big negatives. There are other reasons though, like crummy old lens designs?:eek:

 

The small aperture increased depth of field, and increased coverage of the lens (allowing larger film, and use of camera movements). Many old lenses were designed to be used stopped down and provided their maximum aperture mostly to make focusing possible. Negatives from these cameras typically weren't enlarged much, so diffraction in many cases was less of a concern.

 

Just like today, some of these old lenses are junk, and others are excellent, even by modern standards. In good light, f64 can give you speeds around a second even with a slow film!

 

--clyde


Large format lenses have a different design than M lenses, and can remain sharp at smaller aperture openings. I don't know if he would have seen diffraction with those lenses at f/64 though.

 

I have studied a little of the math and physics of diffraction. The reason large-format lenses suffer less from diffraction is that the diffraction phenomenon depends on the ABSOLUTE size of the hole through which light passes (the diaphragm) relative to the wavelength(s) of the light itself; the ratio of these values determines the appearance of the typical diffraction patterns known as "Fraunhofer figures". Now, large formats generally have much LONGER focal lengths than 35mm, and longer again than the M8: on a 13x18 Linhof you have a normal lens of 210mm (or 240...) versus, let's say, the 35mm of the M8. That is 6x more, which means that at a given aperture (say f/8) you have a hole six times larger, and that makes a lot of difference regarding diffraction. Roughly, but not too roughly, one can say that with your Cron or Lux 35 at f/11 on your M8 you have the same diffraction as with your Symmar 210 at f/64 on your Linhof.
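The f/11-on-M8 versus f/64-on-Linhof claim can be sanity-checked by comparing the Airy disc as a fraction of the frame width, since a larger format is enlarged less. A sketch with assumed values: 550 nm light, ~27 mm for the M8 sensor width, ~180 mm for the long side of a 13x18 cm plate.

```python
WAVELENGTH_UM = 0.55  # assumed green light

def airy_fraction_of_frame(f_number, frame_width_mm):
    airy_um = 2.44 * WAVELENGTH_UM * f_number    # Airy disc diameter, um
    return airy_um / (frame_width_mm * 1000.0)   # as fraction of frame width

m8 = airy_fraction_of_frame(11, 27.0)      # assumed M8 frame width ~27 mm
linhof = airy_fraction_of_frame(64, 180.0) # assumed 13x18 plate width ~180 mm
print(f"M8 at f/11:     {m8:.2e} of frame width")
print(f"Linhof at f/64: {linhof:.2e} of frame width")
```

On these assumptions the two fractions come out within about 15% of each other, consistent with the poster's rough equivalence.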

That is also the reason why, if you use a long Telyt (400 or 560) on your M8/Viso, you don't have to worry about closing down to f/11 or f/16: diffraction is limited, and the two-element design of the Telyts (with spherical aberration only loosely corrected) ensures that stopping down actually gives you a sharper image.

