Lens diffraction


Albert Andersen



Basically deconvolution software works best with uniformly blurred images.

And the example shown is actually quite 'simple' in that it shows distinct, well-defined dark areas over, and thus contrasting with, relatively light areas. Applying such software correction to real-world, 'complex' images might well result in substantial shifts which we may not be able to recognise as such without reference to a sharp view of the original scene. And it's planar, and there is no variation in blurriness, as jaapv says. I'm not convinced at all.
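To illustrate why uniformity matters so much here: classic deconvolution assumes the whole frame shares one point-spread function (PSF), so the blur can be undone by a single frequency-domain division. A minimal sketch (not any particular product's algorithm) of Wiener deconvolution with a hypothetical uniform Gaussian PSF:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """A shift-invariant (uniform) Gaussian PSF, normalised to sum to 1."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Wiener filter; valid only because the same PSF is assumed
    to apply at every pixel, i.e. the blur is uniform."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Synthetic demo: blur a bright square with the PSF, then restore it.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

With a spatially varying PSF there is no single H to divide by, and this one-shot approach breaks down; that is the limitation being pointed out above.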


Diffraction behaves a bit like spherical aberration. The edge rays are bent differently from the centre ones. Thus the resulting blurring is not uniform over the image. It is hard to see how an algorithm can produce any significant correction.

 

Basically deconvolution software works best with uniformly blurred images.

(Quite an interesting plugin, btw., but a bit pricey imo.)

 

Seems best suited, at the moment, for images from a spy satellite.


 

Great stuff!

 

I used to dream of coming up with things like that, but was never smart enough to figure it out. The article's explanations are very good, though the mathematics still stuns me.


... would be a more effective strategy to allow full use of the wider, diffraction-free apertures.

There is no such thing as a 'diffraction-free aperture.'

 

 

The edge rays are bent differently from the centre ones. Thus the resulting blurring is not uniform over the image.

In fact, it usually is. You seem to believe that the frame's central area is formed by rays passing the aperture's center and the frame's edges by those from the aperture's periphery ... which is a fundamentally false notion.

 

There would be some non-uniformity of the diffraction blur across the frame in the presence of vignetting. The effective aperture is smaller for the frame's corners than for its center, which reduces corner brightness and accordingly increases both depth of field and diffraction blur in the corners. However, at the small apertures where diffraction blur becomes bothersome, most lenses won't show any perceptible vignetting.
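As a rough numerical check of how vignetting would affect the blur spot, using the standard Airy-disk approximation d ≈ 2.44 · λ · N (the f/16 setting and the one-stop vignetting figure below are illustrative assumptions, not measurements of any lens):

```python
def airy_diameter_um(f_number, wavelength_nm=550.0):
    """Approximate Airy-disk diameter in micrometres (green light)."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

center = airy_diameter_um(16)              # nominal f/16 at the frame centre
corner = airy_diameter_um(16 * 2 ** 0.5)   # corner with one stop of vignetting
print(f"centre: {center:.1f} um, corner: {corner:.1f} um")
# prints: centre: 21.5 um, corner: 30.4 um
```

So a hypothetical one-stop loss in the corners would enlarge the diffraction spot there by a factor of √2, in line with the non-uniformity described, but only where the vignetting actually persists at small apertures.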


There would be some non-uniformity of the diffraction blur across the frame in the presence of vignetting. The effective aperture is smaller for the frame's corners than for its center, which reduces corner brightness and accordingly increases both depth of field and diffraction blur in the corners. However, at the small apertures where diffraction blur becomes bothersome, most lenses won't show any perceptible vignetting.

 

I would think the lens makers know the characteristics of their design and I would hope they take that into account if necessary.



There is no such thing as a 'diffraction-free aperture.'

 

 

 

In fact, it usually is. You seem to believe that the frame's central area is formed by rays passing the aperture's center and the frame's edges by those from the aperture's periphery ... which is a fundamentally false notion.

 

There would be some non-uniformity of the diffraction blur across the frame in the presence of vignetting. The effective aperture is smaller for the frame's corners than for its center, which reduces corner brightness and accordingly increases both depth of field and diffraction blur in the corners. However, at the small apertures where diffraction blur becomes bothersome, most lenses won't show any perceptible vignetting.

I believe no such thing, obviously. But there is a fundamental difference between blur due to lens aberrations and blur from simple defocus and motion, the things these deconvolution programs are designed to address.


I believe no such thing, obviously.

You don't!? :confused: Umm, good ...

 

But then, why did you say diffraction blur would behave similarly to blur from spherical aberration and would not be uniform across the frame? As a matter of fact, diffraction blur usually is very uniform across the frame and hence an easy target for deconvolution algorithms.


You don't!? :confused: Umm, good ...

 

But then, why did you say diffraction blur would behave similarly to blur from spherical aberration and would not be uniform across the frame? As a matter of fact, diffraction blur usually is very uniform across the frame and hence an easy target for deconvolution algorithms.

 

Uhm... is it really like this? I am not so sure. The light rays that strike the borders of the sensor/film pass through a diaphragm "hole" that is geometrically "larger" (elliptical, with an axis bigger than the diaphragm diameter) than the one seen by rays on or near the axis. I admit I haven't gone deep into the math behind it, but I suppose the blur effect is, at least, somewhat different.


Actually, Luigi - it is the opposite. The elliptical diaphragm "hole", as seen from the corners and edges, has a smaller short axis, not a larger long axis.

 

https://cdn.photographylife.com/wp-content/uploads/2013/10/Optical-Vignetting.png

 

Net result: a smaller area, thus vignetting (less light gets through the smaller hole).

 

Anyway, doing studio work where I wanted maximal DoF, I found that the diffraction limits predicted by various calculators for 20-24 MP full-frame sensors are on target - the image starts to blur a bit below f/11 and is definitely compromised at f/22. I can see how f/8 is about the limit for 36 MP.
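Those calculator figures can be reproduced with a common rule of thumb: call a lens "diffraction limited" once the Airy disk spans about two pixels. The two-pixel criterion and the 550 nm wavelength below are conventional assumptions, not hard physics, so treat the results as ballpark values:

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Pixel pitch of a sensor of the given size and resolution."""
    pixels_wide = math.sqrt(megapixels * 1e6 * width_mm / height_mm)
    return width_mm * 1000 / pixels_wide

def diffraction_limit_fstop(pitch_um, wavelength_nm=550.0):
    """f-number at which the Airy disk (2.44 * lambda * N) spans ~2 pixels."""
    return 2 * pitch_um / (2.44 * wavelength_nm * 1e-3)

for mp in (24, 36):  # full frame, 36 x 24 mm
    pitch = pixel_pitch_um(36, 24, mp)
    print(f"{mp} MP: {pitch:.1f} um pitch, limit ~ f/{diffraction_limit_fstop(pitch):.1f}")
```

By this yardstick a 24 MP full-frame sensor works out to roughly f/9 and a 36 MP one to roughly f/7.3, broadly consistent with the f/8 to f/11 experience described above.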


When my father gave me my first "real" camera, I knew nothing of diffraction. My 50mm lens was so-so and it was the only one I had. So I went out and took photos. I loved photography. And I knew straight away it was what I wanted to do. I learned a bit about aperture and depth of field, and I knew if I wanted stuff in focus I needed to stop down. But mostly I looked for interesting things, to me, to take photos of. Most of them were boring as bat s**t. But I loved the process. Every minute of it. And I craved being a better photographer.

 

Once the internet came along I learned all kinds of cool technical stuff about MTF curves and titanium shutter blades and wireless TTL flash. And I learned that all my lenses, built up into a nice little kit with hard work, suffered from all sorts of afflictions. All of a sudden they weren't sharp enough at wide apertures, so I had to stop down two or three stops. Some of them were deemed so poor that I felt I had to replace them entirely. Then I learned that I couldn't stop down too far because my lenses would become "diffraction limited". Of course I wanted to make my photos as perfect as possible, so I only shot the best lenses between f/2.8 and f/8. I did wonder why camera companies even bothered with the other numbers. Surely all my lenses would be cheaper if they removed f/11-f/22.

 

Some lenses I shot at only medium distances because I knew they weren't as good at infinity. But I had gone and got a lens for infinity shooting, so I was set. I was, however, diffraction limited to f/8, so I bought a big fast computer recommended in the tech forums so I could focus stack. So there I was, 20 years later, carrying 15 50mm lenses for all occasions with my Gitzo tripod and Aratech head, focus stacking 45 shots of a landscape all shot at f/4. Pin sharp. Rock solid. Technically perfect. No purple fringing and definitely no visible diffraction. Hallelujah.

 

Most of the time I was out taking photos, I did wish I had a camera with one of those new Sony sensors. And that new Pentax medium format could take my photographs to the next level, as long as I could create a matching wide-gamut profile for it when I printed to my Epson 3880. I nearly taped over my LCD to stop the dreaded "chimping". But I knew I needed to critically examine the histogram after each shot so I was exposing to the right, just right.

 

Of course a few things had to give. There was just too much going on in the photo-taking process. Because I spent so much time on being technically perfect, I decided that I no longer needed to spend as much time on the feel of the image. The light and the composition wouldn't matter as much if I had it perfectly sharp. Everyone would see how "perfect" my photos had become. Of course I religiously used the rule of thirds, once I had bought a camera that would allow me to use it without having to focus and recompose, as that would shift my focal point along a wavy moustache focus plane. As long as I didn't have to stop down to f/9.5 I would be OK.

 

And then I realised I didn't like photography much any more. It seemed to take a really, really long time to get one photograph. And my photos didn't say what I wanted them to say. I'm sure I remembered photography being easy and fun a long time ago. Hey! If only I'd known then what I know now! I figured that since I had now achieved technical perfection, I was bored because I had nothing left to learn. Time to move on to golf....

 

Gordon


We need diffraction deconvolution only at smaller apertures, and at smaller apertures the vignetting is virtually absent and the PSF virtually uniform.

 

When we have sensors with a few hundred MP, and thus diffraction visible at f/2, the PSF could be sampled or physically modeled based on the lens and sensor location.

 

"It ... could... work !!!"

And then I realised I didn't like photography much any more. It seemed to take a really, really long time to get one photograph. And my photos didn't say what I wanted them to say. I'm sure I remembered photography being easy and fun a long time ago. Hey! If only I'd known then what I know now! I figured that since I had now achieved technical perfection, I was bored because I had nothing left to learn. Time to move on to golf....

 

I assume your post is a provocation - a simulation of what could happen to people obsessed with technical perfection. Otherwise you wouldn't be around on this forum :rolleyes:

 

However, this kind of neurotic perfectionism can spoil any passion.

Golf included.


Actually, Luigi - it is the opposite. The elliptical diaphragm "hole", as seen from the corners and edges, has a smaller short axis, not a larger long axis. ...

 


Uh... sorry, I wrote "larger" with the long axis of the ellipse in mind... which is indeed no larger than the original hole... :o


I wish somebody would post a comparison: Image at optimum f-stop, image with diffraction damage, corrected image.

In my experience (but I only have an older version) Focus Magic does indeed "create" a more detailed and sharper image on defocus and motion blur, but hardly ever produces acceptable microcontrast. For that one needs to stack a number of corrected images, which defeats the purpose, as stacking techniques can be used to make deep-DOF images.


But there is a fundamental difference between blur due to lens aberrations and blur from simple defocus and motion, the things these deconvolution programs are designed to address.

Every kind of blur has different characteristics. Blur from defocusing is not susceptible to deconvolution, so nobody even tries, but diffraction and motion blur are among the issues that can be addressed by deconvolution. Camera vendors have started to exploit the potential of non-blind deconvolution to reduce diffraction-induced blur, Fuji's lens modulation optimizer (LMO) being one example.
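For the non-blind case - where the PSF is known in advance, e.g. modelled from the lens and aperture setting - a minimal Richardson-Lucy iteration is the textbook approach. This is a generic sketch of that algorithm, not Fuji's actual LMO implementation:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """Non-blind Richardson-Lucy deconvolution: iteratively refine an
    estimate so that, blurred by the known PSF, it matches the image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x, F: np.real(np.fft.ifft2(np.fft.fft2(x) * F))
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iterations):
        ratio = blurred / np.maximum(conv(est, H), 1e-12)
        est = est * conv(ratio, np.conj(H))  # conj(H) = correlation with PSF
    return est

# Synthetic demo: a known Gaussian PSF stands in for diffraction blur.
n = 64
y, x = np.indices((n, n))
psf = np.exp(-((y - n // 2) ** 2 + (x - n // 2) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
blur = lambda im: np.real(np.fft.ifft2(np.fft.fft2(im) * np.fft.fft2(np.fft.ifftshift(psf))))
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0
blurred = np.clip(blur(img), 0, None)
restored = richardson_lucy(blurred, psf)
```

Knowing the PSF ahead of time is exactly what makes this "non-blind"; in-camera, the vendor can model it per lens and aperture instead of estimating it from the image.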


SmartDeBlur does not seem to work on macOS.

In another thread I have published a picture made with the 1930s Ikonette. That picture is not really sharp. If somebody wants to test it with his working copy of SmartDeBlur, I can send him a freshly scanned JPEG.

Jan

 

The idea for this proposal came from Paul (member pgk).

Edited by jan_kappetijn

Every kind of blur has different characteristics. Blur from defocusing is not susceptible to deconvolution, so nobody even tries, but diffraction and motion blur are among the issues that can be addressed by deconvolution. Camera vendors have started to exploit the potential of non-blind deconvolution to reduce diffraction-induced blur, Fuji's lens modulation optimizer (LMO) being one example.

Focus Magic claims to sharpen defocus blur through deconvolution.

