
MTF Curves ~ Sensor Resolution


k-hawinkler


In my experience, when ‘lines’ are quoted instead of ‘line pairs’ then in most cases black and white lines are counted separately, i.e. the ‘lines per mm’ figure is twice as large as the ‘line pairs per mm’ measurement. Still, some people write ‘lines’ when they actually mean ‘line pairs’ (I don’t).
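To put numbers on it, the conversion is trivial; here is a sketch (my own illustration, not from any standard):

```python
def lines_per_mm(line_pairs_per_mm):
    # One line pair = one black line + one white line,
    # so a 'lines/mm' figure is twice the 'lp/mm' figure.
    return 2 * line_pairs_per_mm

print(lines_per_mm(50))  # a 50 lp/mm measurement quoted as 100 lines/mm
```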


  • 1 year later...
So let me repeat it once again:

Lenses don't outresolve sensors, and sensors don't outresolve lenses.

Better lenses are better on any sensor and better sensors are better behind any lens.

 

The two statements you make don't seem to be corollaries of each other and should be evaluated separately.

Could you please provide a rationale or proof why either one is correct? TIA.


Hey... :) an old heated thread comes back to life...

just my opinion on the two statements, which I agree are not corollaries of each other:

 

The 2nd is generally true... to be pedantic, the term "better" ought to be better (:p) specified in the context of the items we are speaking of (lens and sensor)

 

The 1st is an ambiguous negation... : suppose that

1) "lenses don't outresolve sensors" is true

but I also affirm that

2) "a certain lens can have a resolution higher than a certain sensor" IS true

 

So I conclude that 2) does not mean that the lens "outresolves" the sensor... we are back to semantics... :rolleyes: ... let's better define what "to outresolve" means...


Both statements are true, and still there are situations where replacing one component by a higher resolution version would give tangible benefits whereas replacing the other would not. Describing this in terms of one component outresolving the other, while technically incorrect, gets the point across.


.....there are situations where replacing one component by a higher resolution version would give tangible benefits whereas replacing the other would not. Describing this in terms of one component outresolving the other, while technically incorrect, gets the point across.

The REAL questions that should be asked are:

 

What are the tangible benefits? And

 

Are they relevant?

 

Excess resolution is lost if it is not presented in a way where it can be viewed or utilised. For the vast, vast majority of images this is the case. For the few where absolute resolution is an absolute requirement, it makes sense to use the highest-resolving sensor and lens available for the application. In the real world there are many factors of greater importance than sensor/lens resolution figures.


The REAL questions that should be asked are:

 

What are the tangible benefits? And

 

Are they relevant?

 

They are relevant by virtue of being tangible. Or the other way round, whatever you prefer. The thing is that you could get to the point where investing in, say, a ten times more expensive lens would still improve resolution, but only by an amount so tiny that it would hardly make a difference in practice.


Both statements are true, and still there are situations where replacing one component by a higher resolution version would give tangible benefits whereas replacing the other would not. Describing this in terms of one component outresolving the other, while technically incorrect, gets the point across.

 

Well, we ought to be much more careful with the language and first define the terms used.

 

Statement 1: Lenses don't outresolve sensors, and sensors don't outresolve lenses.

Statement 2: Better lenses are better on any sensor and better sensors are better behind any lens.

 

When I use the term outresolve - a term that I introduced as OP in the first post, namely "I am interested in quantitatively comparing the resolution of Leica lenses with the resolution of digital sensors. So, I would like to know how one can determine whether a given lens outresolves a given sensor and vice versa, or whether the resolution of lens and sensor are comparable." - I assumed that one can determine the resolution of a sensor and a lens, compare them and see which one is larger. For example the Sony sensor in the Nikon D800/E or Sony A7R has about 100 line pairs per mm (lppm) resolution, whereas the APO-R 280/4 has about 500 lppm. So, one clearly is larger than the other and I tried to characterize any such comparison by using the term outresolve. Of course, the fact that the sensor resolution is different from the lens resolution can have consequences.
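The "about 100 lp/mm" sensor figure follows from the Nyquist sampling limit and the pixel pitch. A minimal sketch (the pitch value is approximate):

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    # One line pair needs at least two pixels, so the sampling
    # limit is 1000 / (2 * pitch) line pairs per millimetre.
    return 1000.0 / (2.0 * pixel_pitch_um)

# D800/A7R-class 36 MP full-frame sensor: ~4.88 um pitch (approximate)
print(round(nyquist_lp_per_mm(4.88)))  # ~102 lp/mm, i.e. "about 100"
```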

 

Furthermore, I do not see Statement 1 connected to Statement 2. These statements ought to be examined independently.

 

Of course, in Statement 1 I have no insight into how 01af defines the term outresolve.

In Statement 2 the term better is ambiguous and needs to be clearly defined.

That statement deserves closer scrutiny with regards to its validity and limitations - after properly defining the term better first.

 

So, I guess we may disagree.

Edited by k-hawinkler

I assumed that one can determine the resolution of a sensor and a lens, compare them and see which one is larger.

The misconception that 01af wanted to dispel is that the metaphor about the weakest link applies here – even when the resolution of one component was higher than that of the other (assuming both could usefully be measured in comparable ways), the total resolution of the system would not be determined by the lower resolution component (the weakest link) alone. This is quite generally true; it applies to lenses and sensors with regard to resolution, just as it applies to memory cards and card readers with regard to throughput et cetera. Improving the strongest link, i.e. the resolution of the component that already ‘outresolves’ the other (in that its resolution is measurably higher) will still improve resolution on the whole. It is just that improving the weakest link would give you a much higher return on investment.
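One common back-of-the-envelope model makes the point concrete. This is a rule-of-thumb reciprocal combination, not an exact law, and the numbers are invented for illustration:

```python
def system_resolution(r_lens, r_sensor):
    # Rule-of-thumb: 1/R_system = 1/R_lens + 1/R_sensor.
    # The system resolves less than its weakest component alone,
    # and improving EITHER component improves the whole.
    return 1.0 / (1.0 / r_lens + 1.0 / r_sensor)

print(system_resolution(200, 100))  # ~66.7 lp/mm baseline
print(system_resolution(400, 100))  # ~80.0 -- the already-stronger lens still helps
print(system_resolution(200, 200))  # 100.0 -- the weakest link gives the bigger return
```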

 

So if you interpret ‘outresolve’ to refer to one figure being larger than the other, then of course lenses and sensors could outresolve each other. But if you took it to imply (as one usually does) that the component outresolved by the other would determine overall resolution then you were mistaken.


Very clear, Michael... but I agree also with Karlheinz that the statement "Lenses don't outresolve sensors, sensors don't outresolve lenses", so simply put, is somewhat questionable: a lens CAN have a resolution higher than a sensor... the combined effect of the two tangible values is indeed a complex/subtle matter, as you said.


Both statements are true, and still there are situations where replacing one component by a higher resolution version would give tangible benefits whereas replacing the other would not. Describing this in terms of one component outresolving the other, while technically incorrect, gets the point across.

 

 

I agree with, quote:

"still there are situations where replacing one component by a higher resolution version would give tangible benefits whereas replacing the other would not."

 

But I don't agree with this bizarre statement, quote:

"Describing this in terms of one component outresolving the other, while technically incorrect, gets the point across."

 

No, it doesn't get the point across.

 

Statement 1 is, plain and simple, confusing and wrong!

 

Einstein's razor comes to mind.

Edited by k-hawinkler

The resolution of the sensor is what it is, no amount of changing lenses will adjust that. What changing lenses will do is change the image that is projected on the sensor.

 

The theories, math and everything else do not change the resolution; rather, they change our understanding of it. I think, but may be wrong, that much of the work in lens design for a system involves matching the lens to the sensor across a wide variety of factors. I'm thinking here of that new X compact where Leica limited the aperture in some conditions... as an extreme example.


I'm going to throw my hat in the ring because I enjoy the technical aspects of our hobby. I'm not an expert, though. Even the statements I make below that sound definite are probably based only on rough understanding, but I hope they can contribute to improving our understanding of these issues.

 

So, please take everything I write with a grain of salt and a smiley face. :)

 

...I would like to know how one can determine whether a given lens outresolves a given sensor and vice versa, or whether the resolution of lens and sensor are comparable.

 

Let me start by saying that I don't think you can do this by reading spec sheets. Here's why:

 

The spec sheets tell us how many sensor elements (sensels) are on an M9, so we can calculate ~145 px/mm, but this does not translate into 72.5 lp/mm except for lines that run either perfectly vertical or horizontal. Let's say, for example, that these are "perfect" pixels with a "perfect" lens. With horizontal lines at 72.5 lp/mm, pixels running top to bottom alternate between black and white. Now, shift the lines to an inclination of 1%. If the pixel at (1,1) is black and at (1,2) is white, then, with the slight angle, the pixel at (101,1) is white and at (101,2) is black. What happens to the pixels at (50,1) and (50,2)? They're both 50% grey because they're averaging the black and white lines, which each cover half of those pixels.

This is aliasing, which produces moiré; it gets worse when you add color filter arrays and is why most digital cameras come with a low-pass filter: at the sensor's finest resolution, with a perfect lens, per-sensel detail is likely to create false data, which is often worse from an imaging standpoint than simple noise. Adding a low-pass filter blurs out those finest details, avoiding false data and enabling lines at angles (including curves) to be resolved about as well as lines that are horizontal or vertical, but limiting the maximum resolution of even a sensor with "perfect" sensels. (And most lenses have low enough contrast at the >100 lp/mm of a D800 or NEX-7 to make low-pass filters arguably unnecessary.)

Since Leica (along with every other camera manufacturer I'm aware of) doesn't publish the specs of its low-pass system, which would need to include both the cut-off resolution and the slope of the transition, it is impossible to derive maximum resolution from the spec sheet.
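The grey-pixel arithmetic in that 1%-tilt example can be sketched numerically. This is my own toy model, assuming idealized square sensels exactly one line-width tall:

```python
def black_fraction(column, tilt=0.01):
    # Fraction of the row-0 pixel covered by black, for a pattern of
    # 1-pixel-wide black/white stripes shifted down by column * tilt pixels.
    s = (column * tilt) % 2.0      # vertical shift, modulo one full stripe period
    return abs(1.0 - s)

print(black_fraction(0))    # 1.0 -- fully black
print(black_fraction(50))   # 0.5 -- the 50% grey pixel
print(black_fraction(100))  # 0.0 -- stripes have shifted a full pixel: now white
```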

 

Similarly, I don't think MTF curves can provide the desired data to determine maximum resolution. The style of chart that is most common today--contrast as a function of distance from axis, charted at specific resolutions--definitely won't. It is impossible to reliably extrapolate performance at 70 or 100 lp/mm from data at 5, 10, 20, and 40 lp/mm. What you want is a graph of contrast as a function of resolution, charted at specific distances from the axis, which I've seen used by Angenieux and Erwin Puts. However, even this style of chart won't tell you what is limiting contrast at a given resolution. For example, coma will affect maximum practical resolution differently than lateral chromatic aberration, and obviously rays at high angles of incidence may be well focused but not accessible to the sensor. MTF charts need to exclude that sort of information in order to present the information that they do. The MTF chart of a nearly perfect lens--showing, say, 80% contrast at 200 lp/mm and 22mm off axis--can tell us that a lens is phenomenal along the plane of focus, but it doesn't tell us whether the lens will work nicely with even a 1 megapixel sensor. So, MTF curves, even though they're valuable sources of information, aren't useful for our theoretical analysis of the "outresolving" question.
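If one did have contrast-vs-frequency data, component MTFs combine by multiplication. A toy sketch with an idealized box-pixel sensor MTF (real sensors add CFA and low-pass effects that, as noted above, spec sheets don't reveal):

```python
import math

def sensor_mtf(f_lpmm, pitch_um):
    # Idealized aperture MTF of a square pixel: |sinc(f * pitch)|.
    x = f_lpmm * pitch_um / 1000.0
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

def system_mtf(lens_contrast, f_lpmm, pitch_um):
    # Cascaded linear systems: the component MTFs multiply.
    return lens_contrast * sensor_mtf(f_lpmm, pitch_um)

# Even an 80%-contrast lens loses further contrast through a
# 4.88 um-pitch sensor at 100 lp/mm (values illustrative):
print(round(system_mtf(0.80, 100, 4.88), 2))  # ~0.52
```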

 

(I think this subject can be analyzed theoretically if one has all the pertinent data. With full design information for the entire imaging chain, an engineer could balance the trade-offs of improving the sensor versus improving the lens without ever going beyond a theoretical model of the camera. This information isn't available to us, though. It isn't a matter of "theoretical analysis" being impossible but rather "our theoretical analysis" being the thing that won't work.)

 

Thankfully, though, we're photographers and not only armchair analysts. We have actual cameras and actual lenses to work with. So, if the problem at hand is identifying the component of our imaging process that has the strongest deleterious effect on the fine resolution of final output, and if we assume that we're using best practices such as using a tripod and credible processing so that we're limited to analyzing our lenses and our cameras, what can we do?

 

I think it simply comes down to looking at the photos. If the finest details are rendered with high acutance--one or two pixels to transition from bright to dark--then you'd probably see bigger gains going with a higher-megapixel sensor than with a new lens. If it takes thirty pixels to draw a faint spot where there actually was a black bird against a bright cloud, then you'd probably see bigger gains going with a better lens (or lens hood) than a new camera. If it usually takes three to five pixels (before sharpening) to transition from bright white to deep dark, then the optical system is likely well balanced. These are rough numbers off the top of my head, but they indicate how I'd analyze the issue.
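That kind of eyeballing can be made slightly more formal by counting transition pixels on an edge profile. A rough sketch with made-up profiles (real edge-based MTF measurement, e.g. slanted-edge methods, is far more careful):

```python
def transition_pixels(profile, lo=0.1, hi=0.9):
    # Count pixels sitting between 10% and 90% of the edge's full
    # bright-to-dark swing -- a crude stand-in for acutance.
    mn, mx = min(profile), max(profile)
    lo_v = mn + lo * (mx - mn)
    hi_v = mn + hi * (mx - mn)
    return sum(1 for v in profile if lo_v < v < hi_v)

sharp_edge = [1.0, 1.0, 0.5, 0.0, 0.0]            # ~1 px transition: well matched
soft_edge  = [1.0, 0.8, 0.6, 0.5, 0.4, 0.2, 0.0]  # ~5 px: the lens is the limit
print(transition_pixels(sharp_edge), transition_pixels(soft_edge))  # 1 5
```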

 

So, what's a better lens? Yes, you could compare MTF charts of two comparable lenses (e.g., Macro-Elmarit-R 60mm and Summilux-R 50mm) to determine whether one would have better peak resolution than the other. But I think too many things change between types of lenses and eras of design, not to mention different brands, to tell from the spec sheet alone. Test photos are invaluable here.

 

Personally, though, I don't think of resolution as the limiting factor of modern lenses, including good lenses that are over 30 years old. As much as I enjoy exploring the fine details in photos, both my own and from others, I'm more affected by how lenses fail than by how they succeed. That is, I'd sacrifice resolution to avoid harshness, or, in statistical terms, I'll accept lower precision in exchange for higher accuracy. Frankly, this is what I love about my Leica lenses: compared to everything else that I've shot, they have less false color and fewer false shapes. Which is to say, they have well balanced aberrations, which I prefer even over lenses that have higher resolution. I mention all of this because it may tell you that my priorities don't align with yours, so your analysis may take a different shape than what I've discussed here.

 

Whether a lens can (or cannot) or does (or does not) outresolve a sensor, or vice versa, is mostly a question of how the terms are defined. From what I've read, everybody's points have merit but those who disagree are talking at cross-purposes. Once the issue is understood, though, I still don't think the practical answer changes. The bottom line, as far as I'm aware, is that imaging is complicated and there's no analytic shortcut available to us. So, we look at pictures. We make informed guesses to buy what seems like the best gear for our needs. Then, we go out and take photos. From time to time, we might even share what we've learned and what we've captured with our family, friends, and a handful or two of the good folks we've met on the internet.

 

Cheers,

Jon


Hello, K.H.,

 

Yesterday, I went with a friend (together with files created by a D800) to an Apple dealer to put some files made with an A7R + Leica R & M lenses, a Pentax 645D, and a Leica D-Lux 5 (all in sRGB color space) on a 5K Apple iMac with Retina display.

 

The rendering of the A7R + Leica lens files distinguished itself and convinced us to buy one for photo browsing.

 

I guess this computer may help out in your course of investigating the interaction between lens and sensor.

 

All the Best,

 

Thomas Chen


I'm going to throw my hat in the ring because I enjoy the technical aspects of our hobby. ...

 

Cheers,

Jon

 

 

Many thanks Jon. I really appreciate your feedback.

I am preoccupied right now with another matter - I have found a virus on my Mac. :eek:

That's a first for me! :o:confused:

 

As soon as I have found the time to read your post carefully and think about it, I will reply.

Thanks again.

Edited by k-hawinkler

Hello, K.H.,

... I guess this computer may help out in your course of investigating the interaction between lens and sensor. ...

Thomas Chen

 

Thanks Thomas. I am sure you are right about the 5K Apple iMac with Retina display. It's terrific.

However, earlier this year I bought a new 6-core loaded Mac Pro. Also a very powerful machine.

So, I will eventually have to get a display matching that machine's performance.

Any recommendations will be welcome.

Thanks again.
