
Low telecentric spec...


Mauribix



I don't understand the fixation on a 36x24 sensor. All we need are lenses that go wider and faster than what is offered now, designed for the 27x18 sensor.

Yes, if a digital camera has a full-frame sensor then the current lenses will produce the same image as IF you were using film. But we ARE NOT using film.

The majority of digital cameras have smaller sensors than their film counterparts.

 

I think it is totally foolish of Leica to NOT offer wide lenses made for the sensor in the M8.



This is radically wrong. If the sensor's resolution were to outperform the lens's resolution, you probably wouldn't be so happy with the results.

You MUST have a lens with more resolution than your sensor.

 

I think we have a problem with the word 'resolution'. It refers to the finest detail that can be resolved. You certainly don't want a sensor that can't resolve detail as well as the lens. You also don't need one that can resolve much finer detail than the lens can. High resolution, for a given focal length (and lens), means small pixels; low resolution means big pixels. If you want high resolution and big pixels, you need a long focal length too.

 

In the days of film, most films had resolutions that far exceeded those of available lenses, except the most refined and expensive ones. There's not much point in having a wonderful lens if the sensor, or film, can't record the detail it provides.
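To put rough numbers on the pixel-size point above (a sketch, assuming the commonly quoted figures for the M8's sensor: 27x18mm, about 10.3 MP):

```python
# Back-of-the-envelope pixel pitch for an M8-style sensor.
# Assumed figures: 27 x 18 mm sensor, 10.3 megapixels, 3:2 aspect ratio.
width_mm, height_mm = 27.0, 18.0
megapixels = 10.3e6

aspect = width_mm / height_mm                # 1.5
pixels_wide = (megapixels * aspect) ** 0.5   # ~3930 columns
pitch_um = width_mm * 1000 / pixels_wide     # ~6.9 microns per pixel

print(f"pixel pitch ~ {pitch_um:.1f} um")
print(f"sampling limit ~ {1000 / (2 * pitch_um):.0f} lp/mm")
```

Halving the pitch on the same sensor would double the linear resolution at the cost of quadrupling the pixel count.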


I think we have a problem with the word 'resolution'. It refers to the finest detail that can be resolved. You certainly don't want a sensor that can't resolve detail as well as the lens. You also don't need one that can resolve much finer detail than the lens can. High resolution, for a given focal length (and lens), means small pixels; low resolution means big pixels. If you want high resolution and big pixels, you need a long focal length too.

In the days of film, most films had resolutions that far exceeded those of available lenses, except the most refined and expensive ones. There's not much point in having a wonderful lens if the sensor, or film, can't record the detail it provides.

 

Are you looking for a lens that resolves the same lp/mm as your sensor? :confused:

I guess you didn't understand what I wrote.

Given a good-quality sensor (decide for yourself how much resolution a sensor needs to be judged "good"), the lens always has to outperform its resolution. It's a law of physics, not my impression or a vague hypothesis.

You can simply put the bottom of a wine bottle in front of your camera to verify what I said.


Leica already have a wide angle lens with 60mm field coverage.

Leica Camera AG - Photography - LEICA PC-SUPER-ANGULON-R 28 mm f/2.8

If a larger 36x48mm chip is used, the field of view gets wider, but you lose movement options.

The resolution should be enough for a 50 MP chip, as the pixel size and spacing of 6 µm would be similar to the DMR back's 6.8 µm.
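Checking that arithmetic (assuming exactly 6 µm pixels on a 48x36mm chip):

```python
# A 48 x 36 mm chip at a 6 micron pixel pitch.
width_mm, height_mm, pitch_um = 48.0, 36.0, 6.0

cols = width_mm * 1000 / pitch_um                 # 8000
rows = height_mm * 1000 / pitch_um                # 6000
diag_mm = (width_mm ** 2 + height_mm ** 2) ** 0.5

print(f"{cols:.0f} x {rows:.0f} = {cols * rows / 1e6:.0f} MP")  # 48 MP
print(f"diagonal = {diag_mm:.0f} mm")                           # 60 mm
```

The 60mm diagonal matches the lens's quoted field coverage exactly.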

 

If someone on the forum has an example of a stitched panorama using the full shift with the 28 PC on a DMR, that would show what a larger chip will be capable of.


Given a good-quality sensor (decide for yourself how much resolution a sensor needs to be judged "good"), the lens always has to outperform its resolution. It's a law of physics, not my impression or a vague hypothesis.

It’s not a law, not in physics nor anywhere else. If the sensor outperforms the lens, the effective resolution will be limited by the lens, and that’s all. If the lens outresolves the sensor, you will often get moiré which has to be taken care of somehow – by reducing the lens resolution with a low-pass filter, or by eliminating moiré in software. Either way, having one component outperform the other brings no advantages, regardless of whether it’s the sensor or the lens.
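For reference, the sampling limit in question: a sensor with pixel pitch p cannot record spatial frequencies above its Nyquist limit, and lens detail beyond that limit is what aliases into moiré (a worked figure, assuming the often-quoted 6.8 µm pitch of this sensor generation):

```latex
f_N = \frac{1}{2p}, \qquad
p = 6.8\,\mu\mathrm{m} \;\Rightarrow\; f_N \approx 74\ \mathrm{lp/mm}
```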



A wide-aperture Leica M 24mm lens at Photokina is the rumour I read.

And this would need to be built specifically for the M8. Otherwise the front element would cover way too much of the viewfinder.

That is not what I was thinking of. More like an 18mm f/2 for the Leica reduced sensor, equaling a 28mm lens on film.


Does anybody know why Leica could not fix the infrared problem with software instead of with the filters?

My idea would be that any red channel value higher than a certain number = IR; you could take that away, or set it aside for someone who would want to work with the infrared part of the picture.

My guess is that all the channels are "contaminated".

But wouldn't it be great if they could figure it out? Then we would have IR to play with in every picture, and we could even fix the chromatic aberration of this IR.

Leica, could you do that?

Please.


It just occurred to me that someone on this forum who has an Elmarit 24 or another wide-angle M lens, plus a 1Ds Mark III or a D3 or D700, could simply set the camera to live view, hold the lens in front of the sensor, and see what happens. (Maybe use a tripod and put some black paper or cloth around it to block the light a little.)


Does anybody know why Leica could not fix the infrared problem with software instead of with the filters?

 

 

Because by the time the software gets the file, there is no way to distinguish data that is the result of IR from data that is the result of visible light.


Alan: An M 24 on an SLR would be macro-only... and also so far from the image plane that the results would say nothing about its performance 2" closer to the sensor in normal photography. Unless you are assuming the lens could fit INSIDE the mouth of any SLR lens mount, which I doubt, except perhaps a Hassy or Mamiya (and they don't do "live view").

 

But as a confirmed experimentalist I appreciate the idea!

 

Manolo: Because (to add to AlanG's response), there is no way software can tell if a sweatshirt, say, is magenta because it really was magenta, or because it was black but contaminated with infrared light - we went all through this 18 months ago.

 

Shootist: it's real simple. I want a 21mm FOV and f/2.8 (or faster) in one lens. I already own a 21mm f/2.8 that I like very much, so on the whole it is simpler and likely cheaper for ME if Leica just comes out with a 24 x 36 sensor.

 

It will be expensive to get rid of my $1000 Elmarit 21 and replace it with a $4000 Super-Wide-Elmarit 16mm - AND I don't expect I would like the SW-Elmarit as much as my Mandler 21 (since I have generally disliked all the Leica lenses I've tried that were designed in recent years). I.e., I'd rather sell my M8s to get an M9 and keep using my old glass than sell my old glass to get new, outrageously pricey glass that I don't like as much, so far.
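For reference, the focal-length arithmetic in play here is just the M8's 1.33x crop factor (a quick sketch; the sensor is 27x18mm against film's 36x24mm):

```python
# Full-frame equivalents on the M8's 27 x 18 mm sensor (crop factor ~1.33).
crop = 36.0 / 27.0

for focal in (16, 21, 24):
    print(f"{focal} mm on the M8 frames like ~{focal * crop:.0f} mm on film")
# 16 mm -> ~21 mm, 21 mm -> ~28 mm, 24 mm -> ~32 mm
```

Hence a 16mm lens on the M8 frames like the 21mm does on film, and the 21mm frames like a 28mm.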

 

It may not make a difference to "new to Leica" buyers whether they get a 21 or a 16 for their $4000 - but as an old Leica owner I don't necessarily give a rodent's patootey what new buyers want!

 

(Wink!)

 

Note that in the meantime I just use the M8 and have fun.....


And this would need to be built specifically for the M8. Otherwise the front element would cover way too much of the viewfinder.

 

This is the Nikkor AI 24mm f/2 on an M8, with the HS-2 hood. This full-frame retrofocus combination blocks a corner of the viewfinder frame, but even at closest focus doesn't interfere with the rangefinder.

 

[Image: the Nikkor AI 24mm f/2 mounted on the M8 with the HS-2 hood]

 

In real life the hood would have to be a bit shorter. This one was designed for a 50mm lens on a full-frame SLR; with the M8 crop it's fine on the 24mm as shown but vignettes if the lens is wearing a filter.


It’s not a law, not in physics nor anywhere else. If the sensor outperforms the lens, the effective resolution will be limited by the lens, and that’s all. If the lens outresolves the sensor, you will often get moiré which has to be taken care of somehow – by reducing the lens resolution with a low-pass filter, or by eliminating moiré in software. Either way, having one component outperform the other brings no advantages, regardless of whether it’s the sensor or the lens.

 

As for the physics, we can refer to this:

Interference - Wikipedia, the free encyclopedia

(and that's the same phenomenon that generates the moiré effect you described, if we take Huygens' principle into consideration).

Having a sensor that outperforms the lens's resolution gives no margin for correction, so this is obviously the biggest disadvantage.


As for the physics, we can refer to this:

Interference - Wikipedia, the free encyclopedia

Feel free to refer to any phenomenon you wish, only how is it relevant to the question at hand?

 

Having a sensor that outperforms the lens's resolution gives no margin for correction

Correction of what? Obviously, a high resolution sensor would be best for correcting lens aberrations such as chromatic aberration, distortion, and vignetting. Having the sensor outresolve the lens would guarantee we don’t lose any detail due to interpolation artifacts when applying those corrections.
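To illustrate why: software corrections of this kind resample the image, and every resampled pixel is an interpolation between neighbouring source pixels, so a finer sampling grid loses less. A minimal sketch of such a correction (not any manufacturer's actual pipeline; the single coefficient k1 is a hypothetical stand-in for a real lens profile):

```python
import numpy as np

def undistort(img, k1):
    """Minimal radial-distortion correction sketch: map each output pixel
    to a fractional source position with a one-coefficient radial model,
    then sample bilinearly. The interpolation step is where finer sensor
    sampling preserves detail that a coarser grid would blur away."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xn, yn = (xx - cx) / cx, (yy - cy) / cy       # normalised [-1, 1] coords
    scale = 1 + k1 * (xn * xn + yn * yn)          # radial distortion model
    xs = cx + xn * scale * cx                     # fractional source columns
    ys = cy + yn * scale * cy                     # fractional source rows

    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    fx = np.clip(xs - x0, 0.0, 1.0)
    fy = np.clip(ys - y0, 0.0, 1.0)

    # Bilinear blend of the four neighbouring source pixels.
    return (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy
            + img[y0 + 1, x0 + 1] * fx * fy)

# Example: mild barrel-distortion correction on a synthetic image.
img = np.random.rand(600, 900)
corrected = undistort(img, k1=0.05)
```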


My idea would be that any red channel value higher than a certain number = IR; you could take that away, or set it aside for someone who would want to work with the infrared part of the picture.

This won’t work, I’m afraid. Any amount of red you get is due to X percent of actual red and Y percent of infrared, where X and Y are unknowns, with possibly different values for each pixel within an image.
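In equation form (an illustration of the point, with k standing for the red channel's unknown infrared sensitivity):

```latex
R_{\mathrm{measured}} = R_{\mathrm{visible}} + k \cdot I_{\mathrm{IR}}
```

One equation with two unknowns per pixel, so no threshold on the measured red value can separate the two contributions.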


As for the physics, we can refer to this:

Interference - Wikipedia, the free encyclopedia

(and that's the same phenomenon that generates the moiré effect you described, if we take Huygens' principle into consideration).

Having a sensor that outperforms the lens's resolution gives no margin for correction, so this is obviously the biggest disadvantage.

 

I can do that too. Lorentz Transformation, Harmonices Mundi, Necronomicon, Nyquist Theorem. All relevant to this discussion in their own ways, but Circle of Confusion somehow seems most apposite.


Feel free to refer to any phenomenon you wish, only how is it relevant to the question at hand?

 

I feel free to argue my point (and what I wrote), since you say it's unfounded, whether or not it's relevant to you.

 

Correction of what? Obviously, a high resolution sensor would be best for correcting lens aberrations such as chromatic aberration, distortion, and vignetting. Having the sensor outresolve the lens would guarantee we don’t lose any detail due to interpolation artifacts when applying those corrections.

 

Correction of aberrations, vignetting, color shifting, and diffraction, mainly.

You answered your own question.

 

The receiver (the sensor, in this case) should always have a vocabulary that is wider than, or at least equal to, that of the transmitter (the lens) in order to comprehend the message (in this case, resolution).

Have you ever tried to talk about trigonometry to a six-year-old child?

That's my analogy, if you'd rather I didn't refer to scientific disciplines.

 

Probably having a perfect match between sensor and lens resolution would be best for avoiding artifacts, but I don't know how that could be possible with an interchangeable-lens system.

 

Maybe I'll change my mind when you show me a better result from a sensor outresolving the lens, in a side-by-side test against the same shot taken with a lens outresolving the sensor (that's physics again).

Anyway, if I'm wrong, please accept my apologies; but if I'm not, please tell me what you're trying to demonstrate (I'm not being sarcastic or ironic).

 

Best

 

Maurizio


I can do that too. Lorentz Transformation, Harmonices Mundi, Necronomicon, Nyquist Theorem. All relevant to this discussion in their own ways, but Circle of Confusion somehow seems most apposite.

 

You may find it funny, but the circle of confusion is directly subject to the physical laws of frequency and interference.

You'd better go to the roots to comprehend this better, IMHO.


I am still trying to figure out what exactly the point of this discussion is.

 

An M8 sensor crop yields approx. 23 micron for the appropriate circle of confusion. The pixels are 6.5 micron if I recall correctly, so the sensor resolution is slightly better than what is actually required (note: you need about 3 pixels to define a 'sharp' line = 19.5 micron).

 

For the M8 you only start to see lens resolution limitations at f/16 and above; wider apertures are limited only by the circle of confusion that you wish to apply, not by diffraction.
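The f/16 figure can be checked against the usual diffraction formula (a sketch, assuming green light at 0.55 µm and the 23 micron circle of confusion mentioned above):

```python
# Airy disk diameter vs. circle of confusion across apertures.
wavelength_um = 0.55   # green light
coc_um = 23.0          # circle of confusion for the M8 crop, per the post above

for N in (2.8, 5.6, 8, 11, 16, 22):
    airy_um = 2.44 * wavelength_um * N   # Airy disk diameter
    verdict = "exceeds CoC" if airy_um > coc_um else "within CoC"
    print(f"f/{N}: diffraction spot ~{airy_um:4.1f} um  ({verdict})")
```

The diffraction spot reaches the 23 µm circle of confusion right around f/16-f/22, in line with the claim above.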

 

Everything changes if you want pixel-sharp resolution on A0-size prints, but that would be 'non-standard' use. If we take one pixel as the CoC, then diffraction limitation kicks in for apertures smaller than f/2.8 or so. But somehow I doubt that this is a useful criterion.

 

Assuming we had a 50 MP FF sensor (the M10), nothing much would change in the above (the pixels would be about 4 micron) and you would only gain a factor of 1.7 in sensor spatial resolution. At a cost of... 20k upwards at present prices (see the Hasselblad price list) and less efficient high-ISO performance.
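Roughly checking those figures (assuming exactly 4 µm pixels on a 36x24mm sensor, as the post estimates):

```python
# A hypothetical ~50 MP full-frame sensor with 4 micron pixels.
pitch_um = 4.0
cols, rows = 36000 / pitch_um, 24000 / pitch_um

print(f"{cols:.0f} x {rows:.0f} = {cols * rows / 1e6:.0f} MP")   # 9000 x 6000 = 54 MP
print(f"linear gain over 6.8 um pixels: {6.8 / pitch_um:.1f}x")  # 1.7x
```

The ~54 MP result lands close to the 50 MP round figure, and the linear gain of 1.7x matches.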

 

So more megapixels would be rather low on my list of desires (note: 'my'); a FF sensor would be fun, but I am happy to wait.

 

Bottom line: in most cases lens diffraction is not a limiting factor (unless you use f/16 and up).


Indeed. But whether the limitation is caused by diffraction or aberrations is not really the issue. The question was "Why is everyone so fixated on full-frame sensors?"

It appears that those who actually understand camera optics and sensor noise aren't too worried, whereas those who think you can magic away the laws of physics are rather critical of Leica for their failure to do so.

