
Lens diffraction


Albert Andersen



It is well known that the optimum sharpness (not DOF) of images taken with a high-resolution camera such as the Nikon D810 (36 MP) is not achieved at the smallest aperture (e.g. f/22), but at around f/8.

How is it with the M 240?

Does anyone have experience of this? It's not the lens's ability to draw sharply but the sensor's pixel size that affects this, but as far as I know it is because of the angle of the light through the lens to the sensor.


No, it is because the light gets "bent" around the edges of small apertures.

It is a marginal effect in most normal photography, but it certainly exists.

 

Most lenses have their optimum aperture between f/5.6 and f/8, where the diffraction effect is balanced by the aberration suppression, but it is wholly dependent on lens design.

 

For instance, the APO-Telyt-R 280/4.0 is diffraction limited and will not improve on stopping down; image quality only decreases from wide open because of diffraction.

 

I would not be surprised if the Apo Summicron were one of those rare lenses too.
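For reference, the usual back-of-the-envelope measure of diffraction blur is the Airy disk diameter, here assuming green light with $\lambda \approx 0.55\ \mu\text{m}$:

$$ d_{\text{Airy}} \approx 2.44\,\lambda\,N $$

which works out to roughly 5.4 µm at f/4 and 10.7 µm at f/8. A lens counts as diffraction limited at a given aperture when its aberration blur is already smaller than this figure, so stopping down further can only soften the image.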


No, it is because the light gets "bent" around the edges of small apertures.

It is a marginal effect in most normal photography, but it certainly exists.......... I would not be surprised if the Apo Summicron were one of those rare lenses too.

 

Is this to suggest that the 50mm Apo-Summicron-M ASPH is diffraction limited at f/2?

 

That would be some lens by any standards. (Leica would never stop talking about it!)

 

The legendary 280mm f/4 APO is, of course, only diffraction limited wide open in the very centre of the image.


It's not the lens's ability to draw sharply but the sensor's pixel size that affects this ...

This is a common misconception.

 

The optimal aperture—i.e. the aperture where image sharpness at the plane of focus is at its maximum (at least in the frame's center)—is a property of the lens. Different lenses will have different optimal apertures ... albeit for many (not all) 35-mm-format lenses it does tend to be somewhere around f/8. Modern Leica lenses, however, usually have somewhat wider optimal apertures.

 

Contrary to common belief, the sensor's pixel pitch has absolutely nothing to do with this.


No - but the pixel number will dictate the 100% magnification on the monitor, making the exact effects of the lens properties easier to see.

The same goes for defocus, motion blur, etc.

The net result is that people tend to forget that all this is nothing new and was exactly the same on film.


This is what Nikon says on the matter:

The effects of diffraction are partly influenced by the size of the pixels on the camera’s image sensor, but with the high resolution offered by the D800/D800E,

 

I think it should read "the effects of diffraction are noticed more easily with more pixels"

 

As jaapv said, diffraction is light "bending" around edges. It always happens. It's why your shadow is blurry and gets blurrier as you move further from the ground. Having smaller pixels just means you can see it. Two things can affect the distance the light is displaced by the time it gets to the imaging plane:

1) the size of the aperture (smaller holes bend the light more)

2) the distance between the aperture and the imaging plane.

 

Imagine you are imaging the shadow of a tennis ball with pixels a bit bigger than the ball, for instance. The shadow would fill one pixel when the ball was closest to the ground. Raise the ball higher and the shadow will blur and get bigger (that's diffraction). Soon the shadow will spill into the neighbouring pixels (maybe after 50 cm or so). Now do the same experiment but with each pixel 1 mm square. We have an image of the tennis ball with lots of pixels, but raise the ball up and the shadow will spill into the neighbouring pixels almost immediately. The physics hasn't changed; you're just sampling finely enough to see reality more closely.

 

While this example deals with #2 above, the exact same thing occurs as a result of #1; it's just not as obvious in everyday life.
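To put rough numbers on #1: a common rule of thumb compares the Airy disk diameter (about 2.44 λN for green light) with the sensor's pixel pitch, the softening starting to show at 100% view once the disk spans a couple of pixels. A minimal sketch in Python; the pixel pitches are only approximate figures for cameras mentioned in this thread:

```python
# Rough comparison of diffraction blur (Airy disk diameter) with pixel pitch.
# Pixel pitches are approximate and for illustration only.

WAVELENGTH_UM = 0.55  # green light, in micrometres

def airy_disk_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Diameter of the Airy disk (to the first dark ring), in micrometres."""
    return 2.44 * wavelength_um * f_number

SENSORS = {
    "Nikon D810 (36 MP FF, ~4.9 um)": 4.9,
    "Leica M 240 (24 MP FF, ~6.0 um)": 6.0,
    "Olympus E-M1 (16 MP MFT, ~3.7 um)": 3.7,
}

for f_number in (2.8, 4, 5.6, 8, 11, 16, 22):
    airy = airy_disk_diameter_um(f_number)
    print(f"f/{f_number}: Airy disk ~ {airy:.1f} um")
    for name, pitch_um in SENSORS.items():
        # Rule of thumb: diffraction starts to show at 100% view once the
        # Airy disk spans roughly two pixels or more.
        verdict = "starting to show" if airy > 2 * pitch_um else "negligible"
        print(f"  {name}: {verdict}")
```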

 

The good news is that if you're printing or resizing to a given size (i.e. not pixel peeping), it doesn't really matter; you just might not notice the extra resolution.

 

Cheers,

Michael


I believe jaapv has posted some good answers.

 

In addition: Imo people worry too much about diffraction, in particular on FF. Some of it can be improved by sharpening in post. What's left doesn't keep me awake at night :).

 

I believe my 90/4 Macro is at its sharpest at f/4-f/5.6, but that doesn't stop me from stopping down if the scene needs it.


I believe jaapv has posted some good answers.

 

In addition: Imo people worry too much about diffraction, in particular on FF.

 

Probably. Lapses in technique ruin matters far more often.

 

Some of it can be improved by sharpening in post.

 

Not really. The image becomes progressively duller as per-pixel sharpness is blunted.

 

 

Some non-Leica cameras try to mitigate diffraction effects in-camera with native lenses.

 

How?

 

I thought it was a function/limitation of the lens itself, in concert with sensor pixel pitch; the smaller the pixel, the more sensitive it becomes to the enlarging "circle of confusion" as you stop down.


Probably. Lapses in technique ruin matters far more often.

 

 

 

Not really. The image becomes progressively duller as per-pixel sharpness is blunted.

 

 

 

 

How?

 

I thought it was a function/limitation of the lens itself, in concert with sensor pixel pitch; the smaller the pixel, the more sensitive it becomes to the enlarging "circle of confusion" as you stop down.

 

 

I believe it is done through in-camera processing, with sharpening as a function of the lens aperture as well.

Robin Wong: Olympus OM-D E-M1 Review: Introduction and High ISO Shooting

Robin Wong: Olympus OMD-E-M1 Review: Comparison with E-M5


I believe it is done through in-camera processing, with sharpening as a function of the lens aperture as well.

Robin Wong: Olympus OM-D E-M1 Review: Introduction and High ISO Shooting

Robin Wong: Olympus OMD-E-M1 Review: Comparison with E-M5

 

It would seem to me that IBIS and faster available shutter speeds would be a more effective strategy to allow full use of the wider, diffraction-free apertures.


It would seem to me that IBIS and faster available shutter speeds would be a more effective strategy to allow full use of the wider, diffraction-free apertures.

 

... Except if one needs large DoF I would think. Thanks.

 

Some time ago I did a comparison between an Olympus and a Sigma lens on the E-M1.

f/22 looked unusable to me; f/16 I thought was borderline okay.

Both lenses gave excellent results up to f/8 IMHO.

For f/11 and f/16 the Olympus lens with diffraction compensation, i.e. sharpening, gave better-looking OOC JPG images. But I always shoot raw files as well anyway.
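For what such "diffraction compensation" could look like in principle, here is a purely hypothetical sketch (not Olympus's actual processing): an unsharp mask whose radius and strength follow the diffraction blur expected at the taking aperture.

```python
# Hypothetical sketch of aperture-dependent sharpening ("diffraction
# compensation"). Not Olympus's actual algorithm -- just an unsharp mask
# whose radius tracks the expected Airy disk size at the taking aperture.
import numpy as np
from scipy.ndimage import gaussian_filter

def diffraction_compensate(image, f_number, pixel_pitch_um=3.7,
                           wavelength_um=0.55, amount=0.8):
    """Unsharp mask on a normalized (0..1) image, scaled to the f-number."""
    img = np.asarray(image, dtype=np.float64)
    airy_um = 2.44 * wavelength_um * f_number              # Airy disk diameter
    sigma_px = max(airy_um / (2.0 * pixel_pitch_um), 0.5)  # rough blur radius in pixels
    blurred = gaussian_filter(img, sigma=sigma_px)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Example: compensate a placeholder grayscale frame shot at f/16.
frame = np.random.rand(480, 640)          # stands in for a real raw/JPG frame
out = diffraction_compensate(frame, f_number=16)
```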


... Except if one needs large DoF I would think. Thanks.

 

Some time ago I did a comparison between an Olympus and a Sigma lens on the E-M1.

f/22 looked unusable to me; f/16 I thought was borderline okay.

Both lenses gave excellent results up to f/8 IMHO.

For f/11 and f/16 the Olympus lens with diffraction compensation, i.e. sharpening, gave better-looking OOC JPG images. But I always shoot raw files as well anyway.

 

Does it really work? It would seem to stand to reason that a degraded image isn't really retrievable, even with software tricks.

 

And indeed, the need for greater DOF does complicate matters. Once a 50+ MP imager is de rigueur, ƒ/4-5.6 might be the upper acceptable limit instead of ƒ/8.
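A quick back-of-the-envelope check on that, assuming a 36 × 24 mm sensor and green light: a 50 MP full-frame sensor has a pixel pitch of about

$$ p \approx \sqrt{\frac{36 \times 24\ \text{mm}^2}{50 \times 10^6}} \approx 4.2\ \mu\text{m}, $$

while the Airy disk diameter $2.44\,\lambda\,N$ is about 5.4 µm at f/4 and 7.5 µm at f/5.6, so the diffraction blur already exceeds one pixel well before f/8.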


Does it really work? It would seem to stand to reason that a degraded image isn't really retrievable, even with software tricks.

 

It is software magic, and works fine ;)

 

Simulated example:

A "simulation" of mitgation of diffraction with deconvolution - Open Photography Forums

 

The real problem is knowing the point-spread function, but that should be no big deal for the lens manufacturer.
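For anyone curious what this kind of "software magic" looks like, here is a minimal, self-contained sketch of frequency-domain Wiener deconvolution. The point-spread function is a crude uniform disk standing in for the real (measured or computed) diffraction PSF a manufacturer would use:

```python
# Minimal sketch of diffraction "removal" by Wiener deconvolution.
# The disk PSF is a crude stand-in for a real, measured diffraction pattern;
# the point is the method, not the optical model.
import numpy as np

def disk_psf(shape, radius_px):
    """Uniform disk PSF, centred in an array of the given shape, summing to 1."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = ((y - cy) ** 2 + (x - cx) ** 2 <= radius_px ** 2).astype(np.float64)
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Frequency-domain Wiener filter; psf must have the same shape as blurred."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    restored = np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + noise_power))
    return np.real(restored)

# Example: blur a synthetic scene with the disk PSF, then deconvolve it.
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
psf = disk_psf(scene.shape, radius_px=3)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```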


It is software magic, and works fine ;)

 

Simulated example:

A "simulation" of mitgation of diffraction with deconvolution - Open Photography Forums

 

The real problem is knowing the point-spread function, but that should be no big deal for the lens manufacturer.

 

I am skeptical because the simulated diffraction deviates from the original 3D image: its source is 2D and is easily deconstructed. Recreating with sharpening techniques cannot produce the same result as a lens with huge diffraction, especially if the image comes from a plastic Lomo piece-of-crap lens, which we know has entirely random, unanticipated defects.

 

Just having fun with you, Mr. Cat. (I need an emoticon for the Cheshire.)

 

Thanks for your post.


It is software magic, and works fine ;)

 

Simulated example:

A "simulation" of mitgation of diffraction with deconvolution - Open Photography Forums

 

The real problem is knowing the point-spread function, but that should be no big deal for the lens manufacturer.

 

Thanks for the link... but I too am a bit skeptical looking at the linked examples... it takes a normal image, artificially (algorithmically) applies a diffraction blur, then corrects it with another algorithm... frankly, this is not so convincing... you HAD the original "clean" information: it would be much more interesting to see it applied to an originally diffraction-affected image...


Nor am I sure. Diffraction behaves a bit like spherical aberration: the edge rays are bent differently from the central ones, so the resulting blurring is not uniform across the image. It is hard to see how an algorithm can produce any significant correction.

 

Basically deconvolution software works best with uniformly blurred images.

Forensic type sharpening like Focus Magic is an example. (Quite an interesting plugin, btw., but a bit pricey imo.)

