
What irritates me is the following:

 

Didn't we read in an M10 thread a few months ago how the M10 engineers had to design the sensor in a specific way to best match the M lenses? At that time we could read some sort of agreement or conclusion that 24 MP was kind of the optimum for our M lenses. There were several posts saying that we needed more resolution after all; these posts were turned down because it seemed technically not possible. Now, after all, we see examples with the exact same M lenses on different cameras where we get better results on a 42 MP Sony sensor than on the 24 MP Leica-designed sensor.

 

This leads me to think that an M10 with the Sony sensor built in, instead of the very specialized Leica sensor with its special microlenses, would bring better results. Is this now a contradiction?

 

Ask Chaemono to show us the corner performance of the M10 vs A7RIII (or download his hi-res samples and look at the corners yourself) and you will understand. Whatever extra resolution the Sony sensor may show in the center of the image, it smears like crazy in the corners with his 35mm Leica lens - because it does not have the same microlenses as Leica's sensor.

 

Also - to say "Leica lenses" is misleading, in that they do not all have identical performance. Some are better and some are worse, depending on many factors. A 24mm Summilux is not simply a 75 Noctilux with a wider field of view.


Yes, a better lens will always increase the final image resolution, and a better sensor will always increase the final resolution ...

What I have been saying for years ...

 

 

... but the final image resolution will always be worse than either the resolution of the lens alone or the sensor alone.

Yes, naturally.

 

 

However, it does not follow that 'a lens cannot out-resolve a sensor.'

'A out-resolves B' doesn't mean that A's resolution is greater than B's. Instead, it means that A's resolution is so high that increasing it would not make any difference because B's resolution limits the system.

 

But as we know, the system is always limited by both A's and B's resolutions. So B cannot keep A from improving the whole system, hence A cannot 'out-resolve' B.

 

 

The fact that (8*4)/(8+4) = 2.667 does not change the fact that 8 is larger than 4.

Of course not.

 

But (10*4)/(10+4) = 2.857, and this is greater than 2.667. That's the point. An 8 lens doesn't out-resolve a 4 sensor because switching to a 10 lens still cranks more out of the 4 sensor than the 8 lens did.
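
For anyone who wants to play with the numbers, here is a minimal sketch of that arithmetic in Python, using the combination rule 1/R_I = 1/R_L + 1/R_S quoted further down in this thread. The values 8, 10 and 4 are just the arbitrary units from the example above, not real lens or sensor figures.

def image_resolution(r_lens, r_sensor):
    # combined image resolution from 1/R_I = 1/R_L + 1/R_S
    return (r_lens * r_sensor) / (r_lens + r_sensor)

print(image_resolution(8, 4))   # 2.667 -- the '8' lens on the '4' sensor
print(image_resolution(10, 4))  # 2.857 -- a better lens still gets more out of the same '4' sensor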

 

 

Simple to prove who is right.

No, it's not necessarily simple. The equations are simple to compute. But at the extremes, improvements may become infinitesimally small (asymptotic behaviour) so possibly you won't be able to see them anymore.
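
To make that asymptotic behaviour concrete, here is a rough sketch with the same formula and made-up units: a hypothetical lens of resolution 8 paired with ever finer sensors.

r_lens = 8
for r_sensor in (4, 8, 16, 32, 64, 128):
    r_image = (r_lens * r_sensor) / (r_lens + r_sensor)
    print(r_sensor, round(r_image, 3))

# prints 2.667, 4.0, 5.333, 6.4, 7.111, 7.529 -- each step gains less,
# and r_image approaches the lens's 8 without ever reaching it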

 

 

At that time we could read some sort of agreement or conclusion that 24 MP was kind of the optimum for our M lenses.

This has never been said. 24 MP is in no way an 'optimum for M lenses.' Instead, it has been said that 24 MP was kind of optimum in terms of general feasibility and usability. Higher pixel counts lead to larger image files that would be hard to handle (shooting hand-held, writing to card, storing on disk, processing in software), and—allegedly—no-one really needs more resolution anyway.

 

But then, the same has been said when we were at 6 MP, and then 10 MP, and then 16 MP ... always the same old, same old.

 

 

I don't know if I can find a lens with low-enough resolution but I'll try.

Don't waste your time. You won't generate any new insights. Instead, just have a look here and enjoy.

Edited by 01af

I think we need to be careful about how we apply the mathematics and how we interpret the results when determining the resultant effect of lens and sensor resolving powers.

 

Assuming a common unit for both sensor "A" and lens "B": when A = B, the formula produces a resultant of 0.5, i.e. half of either value.

What is beyond doubt, and mathematically correct, is that any increase in either the sensor or the lens value produces an increase in the resultant value.

 

 

Edit:- Excel to PDF, PDF to JPG

 

 


Edited by mmradman

Sure. But what you (most people, actually) fail to understand is how the image resolution R_I depends on both the lens's and the sensor's resolutions:

 

1/R_I = 1/R_L + 1/R_S

 

So no matter which of the three conditions cited above holds: a better lens will always increase the final image resolution, and a better sensor will always increase the final image resolution, too. Therefore, lenses cannot out-resolve sensors, and sensors cannot out-resolve lenses.

 

Nonsense. If there is no more data to extract from an image projected by a lens then you will merely add spurious data to the end result, not resolution.


I’ll try this 1955 Summicron Collapsible. :)

 


Edited by Chaemono

But at the extremes, improvements may become infinitesimally small (asymptotic behaviour) so possibly you won't be able to see them anymore.

 

And as photography is about visual information this means that differences which cannot be seen are irrelevant. So increasing sensor MPixels is only relevant up to the point at which differences can no longer be seen, which will depend on the lens. So the lens's ability to resolve ultimately limits the whole system, assuming that we can go on increasing the sensor's MPixels. Not forgetting practicalities such as optimal subject matter, lighting, microlenses and so on. We go around this topic repeatedly. The bottom line is that simply increasing sensor MPixels is not a straightforward solution to increasing image 'quality' however we define it.



Eventually, when Leica produces an M camera with a more-than-24 MP full-frame sensor, all this will be forgotten and the new "24+" will become the new standard.

In the meantime, the notion that more than 24 MP can be better than the current 24 MP for M photography, or for photography in general, seems to be fought as if life depends on it.


No one is fighting here. This is fun. Just curious to see whether a 42 MPx sensor improves “the resolution of the final image, behind any lens.” A 1955 Summicron should help to shed some light on whether this argument holds water in practice.

This question is irrelevant. More pixels will always improve the resolution of the final image, behind any lens.
[...]

That's the usual mental short-circuit when people confuse lens resolution, sensor resolution, and image resolution. The latter is not equal to the minimum of the other two, but always results from the combination of both the lens's and the sensor's resolution.


Eventually, when Leica produces an M camera with a more-than-24 MP full-frame sensor, all this will be forgotten and the new "24+" will become the new standard.

In the meantime, the notion that more than 24 MP can be better than the current 24 MP for M photography, or for photography in general, seems to be fought as if life depends on it.

I wonder how many advocates for a higher-MP M actually use cameras with high resolutions. There is no free lunch, and the trade-offs associated with high-resolution cameras may not be acceptable to all. This is why current makers of such bodies offer those resolutions in a single body rather than across the product line. I regularly shoot with a 45 MP body, but when light levels drop my 20 MP body produces much better results.

 

I'm sure a 45 MP M would sell to some, so perhaps there will be one at some point. But if that happens I would be very surprised if it didn't have a more popular lower-resolution brother.


I wonder how many advocates for a higher-MP M actually use cameras with high resolutions. There is no free lunch, and the trade-offs associated with high-resolution cameras may not be acceptable to all. This is why current makers of such bodies offer those resolutions in a single body rather than across the product line. I regularly shoot with a 45 MP body, but when light levels drop my 20 MP body produces much better results.

 

I'm sure a 45 MP M would sell to some, so perhaps there will be one at some point. But if that happens I would be very surprised if it didn't have a more popular lower-resolution brother.

 

In the future, either Leica provides side-by-side low- and high-MP versions of the latest M model, or adherents of the low-MP-only approach will have to work with earlier models and put up with all the other limitations that such work may entail.

 

As much as I am satisfied with the current 24 MP, I think a doubling of the pixel count, i.e. increasing linear resolution by about 40%, would be welcome. How do I know? I have 12 MP and 24 MP full-frame cameras; the 12 MP one is mostly collecting dust, and some of my 24 MP shots could benefit from better resolution. In comparison, 24 MP from the M246 is head and shoulders above 24 MP from either the M or the SL line.


And as photography is about visual information this means that differences which cannot be seen are irrelevant. ...

I understand your point, Paul, but I think the logic is regrettably flawed.

 

When banks do their financial calculations, for example currency exchange, the results inevitably come out in pounds, pennies and fractions of pennies. There is no coin that represents a fraction of a penny, so by that logic the bank would be justified in ignoring and discarding the fractions of a penny.

 

However, if it were your account and you (for whatever reason) made tens of thousands of international transactions per week, then, inevitably, the fractions of pennies would add up to some value, and I feel sure you would be unhappy for the bank to decide to discard it.

 

The same applies to the 'invisible' resolution in a picture: if you were to discard the invisibly resolved data and then magnify the picture, you would find unappealing pixellation. On the other hand, if you retained the invisibly resolved data, magnifying it might still reveal some detail.

 

This is just one scenario where the invisible resolution has value and is relevant.  (We are getting quite close to some rather complex Information Theory, which is probably not a direction in which the thread needs to go.)

 

Pete.


Ask Chaemono to show us the corner performance of the M10 vs A7RIII (or download his hi-res samples and look at the corners yourself) and you will understand. Whatever extra resolution the Sony sensor may show in the center of the image, it smears like crazy in the corners with his 35mm Leica lens - because it does not have the same microlenses as Leica's sensor.

 

Also - to say "Leica lenses" is misleading, in that they do not all have identical performance. Some are better and some are worse, depending on many factors. A 24mm Summilux is not simply a 75 Noctilux with a wider field of view.

1. Which only proves that the center acuity of the Sony sensor is superior to the Leica's, and that the overall acuity of the Leica sensor is better than the Sony's.

The details themselves on which the images are judged are larger than the resolution limit of either sensor.

Which can easily be explained by the difference in acceptance angle of the microlenses. Narrow on Sony, wide on Leica.

The whole test tells us nothing about the actual resolution.

 

2. This is not really relevant, as Leica's stated intention is to provide optimal performance on as many of their lenses as possible, including legacy and R ones. This is reflected in their sensor design and choice.


Nonsense.

It's fine if you don't understand maths. Or physics. But please don't call them 'nonsense.'

 

 

If there is no more data to extract from an image projected by a lens then ...

That's the point: There is always some data left to extract. Or to put it the other way around: No matter what the sensor's pixel count is, it will never extract all the data projected by the lens.

 

 

And as photography is about visual information this means that differences which cannot be seen are irrelevant.

That's right, but that's another story. For example, what appears irrelevant to you may still be relevant to others.

 

 

The bottom line is that simply increasing sensor megapixels is not a straightforward solution to increasing image 'quality' however we define it.

Of course it isn't. No-one suggested anything like that. After all, we'd run into a situation of diminishing returns at some point. But on the other hand, increasing megapixels isn't entirely pointless even when using lenses that are not the best by today's standards. In fact, most of our lenses are far from being exhausted by our current megapixel counts.

 

If we now step back from the dry equations of information theory and turn back to real-life photography then I'd say this: The lens defines the (technical) quality as well as the character of the image; the sensor determines its size. 24 MP is good for 1 × 1.5 m prints (40" × 60") or more—that's plenty for most users.
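
For what it's worth, the print-size arithmetic checks out if one assumes a 6000 x 4000 pixel (24 MP) file printed at roughly 100 ppi, a common assumption for large prints viewed from some distance:

width_px, height_px = 6000, 4000   # 24 MP frame, assumed 3:2 proportions
ppi = 100                          # assumed printing resolution for large prints
print(width_px / ppi, "x", height_px / ppi, "inches")                          # 60.0 x 40.0 inches
print(round(width_px / ppi * 2.54), "x", round(height_px / ppi * 2.54), "cm")  # 152 x 102 cm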

 

 

As much as I am satisfied with current 24 MP, I think doubling of pixel count, i. e. increasing linear resolution by 40 %, would be welcome.

I'm sure Leica Camera cannot, and won't, stick to 24 MP forever. Sooner or later they will increase the M's pixel count ... but I wouldn't expect doubling. I'd guess the next M model will sport 32 or 36 MP.


The same applies to the 'invisible' resolution in a picture: if you were to discard the invisibly resolved data and then magnify the picture, you would find unappealing pixellation. On the other hand, if you retained the invisibly resolved data, magnifying it might still reveal some detail.

Invisibly resolved detail is just that - invisible. Software will do exactly the same job as increased sampling if that sampling doesn't reveal further information. There is a lack of appreciation of how images are made up.

 

If someone were to produce MTF data for both lenses and sensors, then MTF cascades would show the interaction of the two and at what point ('resolution') there is no gain (i.e. approximately 10% MTF) in increasing MPixels or sampling rate. Of course there are a host of problems associated with doing this, because not all sensors are equal. Microlenses would presumably increase the MTF percentage in wide-angle corners, and in an ideal world the sensor, the microlenses and the imaging lens would probably be matched for optimal 'quality'.
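
As a sketch of what such an MTF cascade could look like in code (with entirely made-up Gaussian curves standing in for measured lens and sensor MTF data; real curves would have to come from measurements), the system MTF is simply the product of the component MTFs, and the 'no further gain' point is roughly where it drops to about 10%:

import numpy as np

freqs = np.linspace(0, 200, 401)             # spatial frequency in lp/mm
mtf_lens = np.exp(-(freqs / 90.0) ** 2)      # hypothetical lens MTF curve
mtf_sensor = np.exp(-(freqs / 70.0) ** 2)    # hypothetical sensor MTF curve
mtf_system = mtf_lens * mtf_sensor           # cascade: multiply the component MTFs

cutoff = freqs[mtf_system >= 0.10].max()     # highest frequency still above ~10% MTF
print(f"approximate system resolution limit: {cutoff:.0f} lp/mm")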

 

In reality we are using imperfect combinations on a range of subject matter in varying lighting conditions, so there will be variation in results. Trying to determine 'extinction conditions' (i.e. when no more data can be obtained) is a flawed concept without controlled conditions, and such conditions rarely reflect real-world use.

 

So to get back to the OP: no, it's not the end of the M. The M has its limitations, and we can stick our heads in the sand and ignore them, or accept that what it does, it does well, and will continue to do so.


[...] Just curious to see whether a 42 MPx sensor improves “the resolution of the final image, behind any lens.” A 1955 Summicron should help to shed some light on whether this argument holds water in practice.

 

I'm pretty sure it would, at f/4 and beyond at least. Too bad I don't have a Kolari-modded A7R II or A7R III to confirm the (to me) obvious idea that most if not all M lenses should benefit from more than 24 MP as far as resolution is concerned.


It's fine if you don't understand maths. Or physics. But please don't call them 'nonsense.'

We probably agree on a lot in reality but, despite having studied Photographic Science, photography remains a practical subject to me. Theoretical information is relevant as long as it has practical application to me. After that it is irrelevant and pointless. Sorry but that's how I see it.


>>snip<<

 

 

I'm sure Leica Camera cannot, and won't, stick to 24 MP forever. Sooner or later they will increase the M's pixel count ... but I wouldn't expect doubling. I'd guess the next M model will sport 32 or 36 MP.

 

You're probably right; I was thinking of the next full-frame Leica in general, more likely an SL601 successor than the M10's.


I'm pretty sure it would, at f/4 and beyond at least. Too bad I don't have a Kolari-modded A7R II or A7R III to confirm the (to me) obvious idea that most if not all M lenses should benefit from more than 24 MP as far as resolution is concerned.

Of course all lenses would benefit. That is the essential point of the argument. Whether the advantage would be relevant is something else.

