
Leica and digital corrections


ho_co



"Losing" data in processing visual information happens only when working on lossily compressed files. Up to 2:1 compression can be lossless with pictures (DNG, various video schemes like digital betacam). JPEG is a lossy compression scheme, after several "generations" (processes of decompressing and recompressing) the picture will become more and more degraded. A "loss" of data occurs (or rather its distortion)

With lossless compression or no compression (uncompressed files/pictures), the original data can be retrieved by reversing all operations.
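To make the distinction concrete, here is a rough sketch using Pillow and NumPy (the test image, quality setting and number of generations are arbitrary) that recompresses a JPEG over several generations and compares that with a lossless PNG round trip:

```python
import io

import numpy as np
from PIL import Image

# Hypothetical test frame: random noise is a hard case for JPEG.
frame = Image.fromarray((np.random.rand(256, 256, 3) * 255).astype(np.uint8))
reference = np.asarray(frame, dtype=np.float64)

# Lossy path: decompress and recompress the same picture for several "generations".
current = frame
for generation in range(1, 6):
    buffer = io.BytesIO()
    current.save(buffer, format="JPEG", quality=85)
    buffer.seek(0)
    current = Image.open(buffer).convert("RGB")
    error = np.abs(np.asarray(current, dtype=np.float64) - reference).mean()
    print(f"JPEG generation {generation}: mean abs error {error:.2f}")

# Lossless path: a PNG round trip returns the original pixels bit for bit.
buffer = io.BytesIO()
frame.save(buffer, format="PNG")
buffer.seek(0)
assert np.array_equal(np.asarray(Image.open(buffer).convert("RGB")), np.asarray(frame))
```

The JPEG copies are never identical to the original, while the PNG round trip reproduces it exactly.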

We'll see more and more "picture corrections" conducted in silicon and fewer in glass.

Silicon is simply cheaper. If the method is indistinguishable to the careful viewer, who cares? The Panasonic G1 kit lens was widely considered surprisingly good until it became known that corrections are done in-camera (JPEG) or in post-processing software (RAW in LR 2.3/ACR).

The human eye is a poor lens, yet the brain makes up for the deficiencies.



We'll see more and more "picture corrections" conducted in silicon and fewer in glass.

 

I agree completely. For instance, the test I posted indicates to me that a lens designer could leave some distortion in a lens if that helps improve the lens in some other way. Or it could simply make it lighter or less expensive to produce.

 

The fact that distortion, vignetting and c/a can now be corrected with software gives me the flexibility of using zoom lenses for architectural photography. This allows very precise framing and less cropping than if I used primes. I usually shoot stopped down, so the images are sharp enough. And it is more convenient.


...

The human eye is a poor lens, yet the brain makes up for the deficiencies.

 

As a lens the cornea is pretty simple, but as a system the eye is pretty amazing.

The processing is equally amazing.

Assuming you have two eyes - then you also have a rangefinder system.


As a lens the cornea is pretty simple, but as a system the eye is pretty amazing.

The processing is equally amazing.

Assuming you have two eyes - then you also have a rangefinder system.

 

As a lens, the eye is a fixed 22.5 mm f/2.8–11 lens, and a heavily distorted one.

It is the "sensor" (about 6 megapixels, by the way) and the neural networks plus software that make it part of an amazing system.

Range can be measured even without stereoscopy. Just like pigeons, it's enough to move the head up and down, and the brain can approximate distance with one eye.
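For what it's worth, the geometry behind both the two-eyed rangefinder and the pigeon-style head bob is the same triangulation. A toy sketch with made-up numbers:

```python
import math

# Toy triangulation (all numbers invented): two eyes, or a head movement,
# give a baseline and a parallax angle, and distance follows from geometry.
baseline_m = 0.065        # assumed inter-pupillary distance, about 65 mm
parallax_deg = 0.5        # assumed angular shift of the subject between the two views
distance_m = baseline_m / math.tan(math.radians(parallax_deg))
print(f"estimated distance: {distance_m:.1f} m")   # roughly 7.4 m with these numbers
```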

 

As a system, the Panasonic G1 is pretty amazing too. It has a face-recognition feature.

When I saw for the first time the little yellow rectangles around people's faces in the viewfinder FOLLOWING THEM... I recalled sci-fi movies from not that long ago.

iPhoto '09 takes it even further. You can give the faces in your library names just once, and the software will find these people for you in all your library pictures.

 

A piece of glass is dumb as a stump by comparison.

Until optical computing takes over.


As a system, the Panasonic G1 is pretty amazing too. It has a face-recognition feature.

When I saw for the first time the little yellow rectangles around people's faces in the viewfinder FOLLOWING THEM... I recalled sci-fi movies from not that long ago.

 

Some cameras now let you choose which face is important and will key on it when you shoot.


Rubén, I respect your opinion very highly. It's clear, for example, that (almost?) any post-processing manipulation will lose data.

 

But what about a case where aberrations were intentionally left for later processing (D-Lux 4 et al)? (Not that I understand the design choice, but it seems to work.)

 

What about a case where a camera can automatically make corrections for CA, say, as is claimed for some current Nikon models, or for other unspecified aberrations as implied by the Leica R ROM lenses? (Assumption on my part about the latter; I'm not very familiar with them.)

 

In my generation, image corrections were made in glass, and what is so exciting to me about some of these new ideas is that that practice is being re-evaluated.

 

Well, you have two alternatives: software correction and optical correction.

 

The first choice (software) is cheaper and offers good results. The second is better but much more expensive, and requires larger lenses. Therefore, for most applications and users the first solution is the best, considering price, size and image quality together.

 

AlanG's reasoning is interesting:

 

I think you have to consider whether it may be possible to get better results from a lens that is designed from the start to need a specific correction such as distortion or c/a. Maybe this will produce a better overall result than the compromises in the optical design that have to be employed to get rid of the distortion or c/a. And some lens designs may only be possible with software correction, so they will beat having nothing else comparable.

 

If image quality (IQ) depends on four different types of aberrations (A, B, C, D) and you can allow higher values for aberrations A and B (for post-processing correction) and concentrate on C and D, achieving very low values for them, you can get an overall better result than by applying a medium correction to all four aberrations. That's Alan's idea. I think you can achieve any target values for A, B, C and D using only optical means, but it can result in a very expensive and large lens. Is a purely optical solution worthwhile? For a professional camera, maybe; for a compact camera I doubt it is. The Leica/Panasonic D-Lux 4/LX3 relies on software corrections, and I understand that. "Digital" designs for this kind of lens are a very good idea.
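A toy sketch of that trade-off (all values and weights are invented, purely to illustrate the argument):

```python
# Toy illustration of the A/B/C/D argument; every number here is made up.
# A and B stand for aberrations software can clean up (say distortion and CA),
# C and D for ones it cannot (say astigmatism and coma).

software_fixable = {"A", "B"}
software_residual = 0.1   # assumed residual left after post-processing

def final_badness(residuals):
    """Sum of aberration residuals after the optional software step (lower is better)."""
    total = 0.0
    for name, value in residuals.items():
        if name in software_fixable:
            value = min(value, software_residual)
        total += value
    return total

balanced_design = {"A": 0.5, "B": 0.5, "C": 0.5, "D": 0.5}  # medium correction everywhere
digital_design  = {"A": 1.0, "B": 1.0, "C": 0.1, "D": 0.1}  # let A and B run, nail C and D

print(final_badness(balanced_design))  # 1.2 with these numbers
print(final_badness(digital_design))   # 0.4 -> better overall, which is the point of the argument
```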

 

Most software corrections are for distortion (which can be corrected optically), vignetting (not a true aberration, and to some extent unavoidable) and CA (fringing, which is also due to the sensor's filters and other problems, so it needs ex-post correction in any case).

 

So some software-based (or hardware-based, but "ex post") corrections are good for the final quality of the system, even for highly optically corrected ones. Optical designs adapted to ex-post corrections ("digital" designs) can offer great results and/or lower costs and sizes (in the film days these designs were impossible). But any software correction implies processing time and pixel reallocations/re-interpolations. It can be done in a very sophisticated way, but you know that the more you alter your original image, the greater the losses.
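As a rough illustration of what such a correction involves, here is a minimal NumPy sketch that undoes a simple one-term barrel-distortion model by remapping and re-interpolating every pixel (the distortion model and coefficient are assumed, not taken from any real camera):

```python
import numpy as np

def correct_barrel(img, k1=-0.05):
    """Undo a one-term barrel distortion by resampling a single-channel image.
    Assumed model: r_src = r_dst * (1 + k1 * r_dst**2), with radii normalised
    to half the image diagonal; k1 is a made-up coefficient."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    norm = np.hypot(cx, cy)
    x, y = (xx - cx) / norm, (yy - cy) / norm
    r2 = x * x + y * y
    xs = cx + x * (1 + k1 * r2) * norm   # source coordinates to sample from
    ys = cy + y * (1 + k1 * r2) * norm
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx, fy = xs - x0, ys - y0
    p = img.astype(np.float64)
    # Bilinear interpolation: every output pixel is a blend of four input pixels.
    out = ((1 - fx) * (1 - fy) * p[y0, x0] +
           fx * (1 - fy) * p[y0, x0 + 1] +
           (1 - fx) * fy * p[y0 + 1, x0] +
           fx * fy * p[y0 + 1, x0 + 1])
    return out.astype(img.dtype)

test = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in image
fixed = correct_barrel(test)
```

Every output pixel becomes a weighted blend of four input pixels, which is exactly where the resampling losses mentioned above come from.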


"Losing" data in processing visual information happens only when working on lossily compressed files.

 

JPEG losses occur when you save the file, applying the compression algorithm and the 8-bit color space.

 

Even when you open a 16-bit RAW file, you incur losses when you change/alter/modify the original information, and you can see them in the histogram.
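A quick sketch of the kind of loss you can see in the histogram: apply a made-up 8-bit "levels" stretch to a synthetic image and count how many levels survive.

```python
import numpy as np

# A synthetic 8-bit "image" that uses every level 0..255 exactly.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# A made-up "levels" adjustment: stretch input range 20..235 to 0..255.
stretched = np.clip((img.astype(np.float64) - 20) / (235 - 20), 0.0, 1.0)
edited = (stretched * 255).round().astype(np.uint8)

print(np.unique(img).size)      # 256 levels in use before the edit
print(np.unique(edited).size)   # 216 with these numbers: gaps ("combing") open in the histogram
```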


Traditional lens designers had a variety of things to consider in order to achieve their goals -

 

1. What is the intended application?

2. What lens formulas exist, and is it practical to design a new one? (Computers made this faster and easier.)

3. What glass is available - refractive index. (Anti-reflective coatings too.)

4. What will it cost to manufacture?

5. Will the end result be something people want to use? If it costs too much there might not be much market for it. If the quality is too low it may not sell either. Size and weight will be a factor.

 

There may be other factors I can't think of.

 

Now consider that software correction - either via in-camera firmware or later processing - is simply another factor in the equation.

 

The end result is not a lens but a photograph made with that lens.


 

I think you can achieve any target values for A, B, C and D using only optical means, but it can result in a very expensive and large lens.

 

I am not an optical engineer, but I don't believe this is so. I've always heard that lens design is a balance of compromises - even if price, size, and weight are not a concern. And it must be harder to approach that goal for complex lens designs.

 

There are inescapable aspects of physics for any lens - e.g., the aperture is presented as a circle to on-axis rays yet as an ellipse to oblique rays. I was just skimming through the book "Photographic Lenses" by C.B. Neblette, and it gives me a better appreciation for just how complicated some of these issues are.

 

I think there are always trade-offs, and software can always improve an image from any lens - if the goal is no distortion, no vignetting, and no c/a - even if it is only a very marginal improvement. Consider that all Bayer sensor images require interpolation and it is standard to apply sharpening to an image. So it isn't as if we are starting with something that is so pure.
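For reference, the Bayer interpolation mentioned above can be sketched in a few lines (plain bilinear demosaicing of an assumed RGGB pattern; real converters use far more sophisticated methods):

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(mosaic):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = mosaic.shape
    masks = {c: np.zeros((h, w)) for c in "RGB"}
    masks["R"][0::2, 0::2] = 1
    masks["G"][0::2, 1::2] = 1
    masks["G"][1::2, 0::2] = 1
    masks["B"][1::2, 1::2] = 1
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    out = np.zeros((h, w, 3))
    for i, (c, k) in enumerate([("R", k_rb), ("G", k_green), ("B", k_rb)]):
        # Missing samples are zero; the kernel averages the nearest recorded ones.
        out[..., i] = convolve2d(mosaic * masks[c], k, mode="same")
    return out

# Build a mosaic from a synthetic scene, then interpolate the two missing
# colours at every photosite.
scene = np.random.rand(64, 64, 3)
bayer = np.zeros((64, 64))
bayer[0::2, 0::2] = scene[0::2, 0::2, 0]   # R
bayer[0::2, 1::2] = scene[0::2, 1::2, 1]   # G
bayer[1::2, 0::2] = scene[1::2, 0::2, 1]   # G
bayer[1::2, 1::2] = scene[1::2, 1::2, 2]   # B
rgb = demosaic_bilinear(bayer)
```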


I guess Leica's priority is maximum image quality. That is what Leica users expect, and Leica doesn't have much competition in this kind of product. The better the image is before it gets into processing, the better the final result, ceteris paribus. The same goes for noise: you can correct it after the image is saved, but it is better not to have to deal with it at all, because any correction destroys some detail. It can be worth applying this ex-post correction for some uses, though. Per-pixel resolution is better on CCDs without anti-alias filters. You can apply ex-post edge sharpening or acutance to files from CMOS sensors with AA filters, but you can see a difference. Etc.

 

Leica's tradition is (was) photojournalism, not heavily manipulated images or images good enough for later heavy manipulation.


First comment: nice test, Alan - kinda severe though ;)

 

Actually Olympus does lens distortion correction too; you see it in JPEG, and it is applied to ORFs (RAW) when they are opened in Studio or Master.

 

I think Panasonic have set off down this road for two reasons:

1/ less CA in a less-corrected lens (CA takes more processing power to eliminate, and Nikon do CA subtraction in JPEG too (not sure about RAW));

2/ they can then apply the correction in JPEG, AND in MPEG for video on the fly.

 

The issue, as I see it, with the Panasonic kit lens for mFT is the amount of correction.


Consider that all Bayer sensor images require interpolation and it is standard to apply sharpening to an image. So it isn't as if we are starting with something that is so pure.

 

We are talking about "additional" interpolation/corrections... additional to the necessary ones, in marginal terms...


I think Panasonic have set off down this road for two reasons:

1/ less CA in a less-corrected lens (CA takes more processing power to eliminate, and Nikon do CA subtraction in JPEG too (not sure about RAW));

2/ they can then apply the correction in JPEG, AND in MPEG for video on the fly.

 

Panasonic cannot sell a compact camera like the LX3 with a huge lens at a cost/price 3, 4... 10 times greater than that of the current lens. So it is a price/size/final-IQ compromise, as Alan has explained. The point is that what works for one type of camera (target price/market segment) doesn't work for others, and what works for some brands doesn't work for others. Leica's differentiation is in the optics and in high raw image quality.


Panasonic cannot sell a compact camera like the LX3 with a huge lens at a cost/price 3, 4... 10 times greater than that of the current lens. So it is a price/size/final-IQ compromise, as Alan has explained. The point is that what works for one type of camera (target price/market segment) doesn't work for others, and what works for some brands doesn't work for others. Leica's differentiation is in the optics and in high raw image quality.

 

The LX3 is so much in demand that they can't make enough of them; the price won't fall anytime soon given that situation.

 

I think Alan (although he can speak for himself) had a more balanced view, and his test concludes that the IQ compromise won't be all that great *for distortion alone*. At least that's what I read into it.

 

mFT and the G1 are of course on a parallel track, not being fitted with Leica-branded optics, at least as far as we can tell with just the kit lenses available so far.


I think Leica, just like any other lens designer, will have to consider which design criteria will give the best results in final image quality - even if they can charge a lot for a lens. They surely can run distortion, c/a, vignetting, and other tests better than I can. But I've used DxO for optical correction on thousands of images for more than two years and don't see a downside.

 

Leica may be somewhat restricted by the fact that a lot of people still use their lenses on film cameras.


"Losing" data in processing visual information happens only when working on lossily compressed files.

Actually, that's only part of it. A number of Photoshop manipulations lose data.

 

An obvious one is "Shadows/Highlights...."

 

Adobe advises that some data are destroyed with each use of "Transform," and therefore recommends doing multiple adjustments in one "Transform" operation.

 

If you change a layer, you've lost some data irretrievably.

 

 

OTOH, as you say, it's of technical interest that the G1 performs as well as it does by means of firmware corrections, but of no interest at all to the photographer who is happy with the output. That's the reason I can't understand why some folks are upset by the use of software correction in the D-Lux 4. I'm very interested in the technologies involved, but they are unimportant when I make a picture.

 

I agree with you, Rubén, Alan and a lot of others that design standards are changing. That's why I was so interested in the fact that Leica has reportedly said they don't want their name on interchangeable lenses for the G1 that rely on software processing. That's interesting and traditional; will that choice redound to Leica's benefit or harm, or will it make no difference?


Here is c/a correction. I don't see any loss of detail. This is from the top corner of a 24mm TS-E shifted all the way up (best I recall). The corner sharpness is not really great with this lens fully shifted, but the biggest problem is c/a, not sharpness. We'll see what the next model is like in two months. But it will be bigger and more expensive.
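For the curious, lateral c/a correction of this sort can be sketched as a per-channel rescale. The model and scale factors below are made up for illustration; this is not what any particular converter actually does:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scale_plane(plane, scale):
    """Resample one colour plane about the image centre by a given factor (bilinear)."""
    h, w = plane.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    coords = [(yy - cy) / scale + cy, (xx - cx) / scale + cx]
    return map_coordinates(plane, coords, order=1, mode="nearest")

def correct_lateral_ca(rgb, red_scale=1.002, blue_scale=0.998):
    """Toy lateral-CA fix: if red and blue are magnified slightly differently from
    green, rescale those planes so the three images line up again.
    The scale factors are invented; a real tool measures them per lens."""
    out = rgb.astype(np.float64).copy()
    out[..., 0] = scale_plane(out[..., 0], 1.0 / red_scale)
    out[..., 2] = scale_plane(out[..., 2], 1.0 / blue_scale)
    return out

frame = np.random.rand(120, 160, 3)    # stand-in image
fixed = correct_lateral_ca(frame)
```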



Adobe advises that some data are destroyed with each use of "Transform," and therefore recommends doing multiple adjustments in one "Transform" operation.

 

If you change a layer, you've lost some data irretrievably.

 

 

 

I didn't know that. It seems strange to me. Either a shortcut or a shortcoming in the algorithm. Or the effect of an error or a purposely introduced randomizer, but it's not at all easy to generate truly random numbers. If you have a string of ones and zeros, 11000110101000001111, which every digital file is, a mathematical formula working on it should be able to undo changes. Unless it's a purpose-made formula to make the file significantly shorter, as in lossy compression algorithms.

Last fall, police in many countries were searching for a pedophile working as an English teacher in the Far East. He appeared in pedophile forums with an avatar photo of himself transformed with warp many times. Some processing time later, the photo was unwarped and the guy arrested.

PS

As an "amateur scientist" I might be unaware of some factors in picture data manipulation


I think you two are talking about two different things. One is lossless compression. The other is transform.

 

A transform actually changes the size or shape of the image, or otherwise distorts it. So of course it either discards or generates pixels in the process. This is no different in principle than if you took a large file, reduced it, and then scaled it back up. Lots of image manipulations permanently affect the image. That is the whole point. (Of course you have various ways to undo.)
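A tiny sketch of that scale-down/scale-up point (toy nearest-neighbour resampling, nothing like Photoshop's actual algorithms):

```python
import numpy as np

# Shrink an image, scale it back up, and compare with the original.
original = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
half = original[::2, ::2]                                      # 3/4 of the pixels are discarded
restored = np.repeat(np.repeat(half, 2, axis=0), 2, axis=1)    # the rest are guessed back
print((restored != original).mean())                           # most pixels no longer match
```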

-----------------------

"Last fall police of many countries were searching for a pedophile working as an English teacher in Far East. He appeared in pedophile forums with an avatar-photo of himself transformed with warp many times. Some processing time and the photo was unwarped and the guy arrested." Recently there was an Episode of "Law and Order' that had a similar story. Robin Williams played the subject and argued as his own lawyer that the software was at best making an educated guess based on how it was programmed and that when you examined a blow-up of the original, it could have been a photograph of anyone.


I didn't know that. It seems strange to me. ...

As an "amateur scientist" I might be unaware of some factors in picture data manipulation

And I'm not even an amateur! No worry, it's certainly not a big issue because when we make those manipulations, it's because the picture "isn't right" without them. :)

 

I'm simply quoting a very knowledgeable photographer and Photoshop artist whose Photoshop course I took. I think at almost every meeting of the class he mentioned that doing XXX would cause a data loss, so we should be sure we had done what we wanted before saving the result.

 

I think the destructive nature of Photoshop is also behind one of the touted benefits of Lightroom and Aperture, whose ads refer to them as "nondestructive."

 

Your tale of uncovering the pedophile's original avatar is good, but it applies because known algorithms were used and could be reversed. If I apply a "curves" layer in Photoshop and then merge the image and adjustment layers, the original data can't be recovered because no one knows what the curves layer looked like.
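And even if the curve were known, an 8-bit curve is usually not invertible, because distinct input levels collapse onto the same output level. A tiny sketch (the curve is made up):

```python
import numpy as np

# Once a curve is "baked in" to 8-bit data, different originals become identical,
# so no formula can tell them apart again.
curve = (np.arange(256) // 2).astype(np.uint8)   # a made-up darkening curve
print(curve[100], curve[101])                    # both become 50: the difference is gone for good
```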

 

That's all I was saying. Some processes are reversible and others aren't. :(

 

That's what's so interesting to me about this new level of post-processing from Panasonic: Suddenly, there are new rules that go beyond the ones we learned. We build on the old rules and extend them, and then one day we're likely to find that we're doing things that directly oppose the logic we used to get there.

