
M240 image of London by Christopher Tribble


k-hawinkler



Thanks, Thomas, for defining "3D-like" quality through examples from LFI and an analogy with painters. In both cases we look at a whole chain of factors that have determined the end result: cultural period, personal style, brush, canvas, paint, sensor, lens, printing medium, ink for printing, just to name a few in the two types of examples you mention.

 

Your original statement was that the "3D-like" quality was directly linked to the sensor, and that CCD has it while CMOS does not. Your present argumentation is interesting, but it does not convince me at all.

 

From a technical point of view better arguments point at the lens, the composition, the scene, the postprocessing and the aperture used: although an image is 2D, the brain is able to extract depth from it by using cues like perspective, depth of focus, change in colour with distance (blueing by atmosphere), change in light direction etc. As you can see, none depend on the sensor. A sensor just needs to represent the projected image of the lens with the highest definition possible. As I understand now, the sensor of the M 240 is performing well in this respect.


A sensor just needs to represent the projected image of the lens with the highest definition possible.

I'm not sure that this is quite so simple. I assume (assumption is dodgy, I do realise, but...) that the sensor's exact colour sensitivity, the effect of its cover glass/filter and, not least, the interpretation of the information by the camera's on-board electronics and variable file modification (based on the 6-bit lens code in Leica's case) all have an effect prior to the production of the raw file. So a manufacturer has a lot to sort out, the decisions made will result in a raw file with specific properties, and the sensor chosen has to produce data that will have an effect beyond mere definition.

 

I can only compare my own Leica files (M8/9) with my Fuji S2 Pro, Nikon D1X and Canon 1DS/5D/5D2 files. And they show differences not only in terms of information recorded (definition) but also in terms of colour variation (it can be impossible to colour-match the different files precisely), the way in which noise is produced, and the production of lines and artifacts in the deepest shadow detail. Flexibility varies too, with the M8/9 files providing more flexibility in my own post-processing workflow.

 

Combine the file specifics with lenses which are amongst the best available (often out-resolving the sensor at medium apertures) and the effect does appear to produce images with a different 'look' at times; that is, if all the factors influencing this 'different look' reinforce each other (I'm dubious about this '3D-like' terminology). However, if all the factors don't reinforce each other, which they often don't, then I don't think the files provide a significantly different look in such cases.

 

We are of course talking about nuances here, but most of us are bombarded by high-quality images every day, so we are pretty sophisticated in our subconscious analysis of images, even if we can't often determine the specific factors that elevate those we are particularly struck by.


Thanks, Thomas, for defining "3D-like" quality through examples from LFI and an analogy with painters. In both cases we look at a whole chain of factors that have determined the end result: cultural period, personal style, brush, canvas, paint, sensor, lens, printing medium, ink for printing, just to name a few in the two types of examples you mention.

 

Your original statement was that the "3D-like" quality was directly linked to the sensor, and that CCD has it while CMOS does not. Your present argumentation is interesting, but it does not convince me at all.

 

From a technical point of view better arguments point at the lens, the composition, the scene, the postprocessing and the aperture used: although an image is 2D, the brain is able to extract depth from it by using cues like perspective, depth of focus, change in colour with distance (blueing by atmosphere), change in light direction etc. As you can see, none depend on the sensor. A sensor just needs to represent the projected image of the lens with the highest definition possible. As I understand now, the sensor of the M 240 is performing well in this respect.

 

Bert,

 

I just sent you two pieces of correspondence. It seems they were removed soon after.

 

Anyway, enjoy your M240 and send me both the DNG and JPEG files.

 

Thanks,

 

Thomas Chen


It may well be that you underestimate at least part of the audience visiting this forum, their gear and the way they use it. As soon as I receive my M 240, I'll post a comparison, and you can tell me whether the "3D-like" effect is present in the M9 and M 240 images on your high-resolution screen, loaded from DNG. I'll get back to you on that. :)

 

Bert, (I'm sending this again, with some radical statements eliminated)

I never underestimate anything about anybody who visits the forum. Anyone who can afford an M camera can also readily afford an Apple Retina notebook; that is not a big deal. The point is how serious his "resolution mania" symptom is, or whether his final output medium for digital images is print or screen.

It's a little bit hard, however, for parties trying to discuss image rendering without a common platform. On Apple I use iPhoto or Aperture to browse the images, while on Windows I use the Nero PhotoSnap Viewer (included in Nero 7 Essential, with between-pixel interpolation) after I edit the file in LR 4.

Thank you for sending me the DNG files; however, please pair them with high-quality JPEG files shot with camera settings such as contrast, sharpness and saturation at their standard (neutral) positions.

In the film photography era Leica's job finished as soon as light hit the film, while in the digital era Leica's major job starts when light enters the sensor. Thus, we anticipate the best combination of JPEG processing and sensor.

Best Regards,

Thomas Chen


In the film photography era Leica's job finished as soon as light hit the film, while in the digital era Leica's major job starts when light enters the sensor. Thus, we anticipate the best combination of JPEG processing and sensor.

Best Regards,

Thomas Chen

 

Thomas - I'm not sure I agree with you here. I'd still hold that Leica's major job (and expertise) is lens design and manufacture. Its main role in designing digital Ms is to buy in or commission the engineering that's needed to translate what these superb lenses can draw into something that a computer can process. Leica don't (and can't) manufacture the shutter, the sensor, the screen, the EVF system...

 

The minor miracle has been that, with the resources it's been able to command (compared to Sony or Canon) it's been able to do such a great job with the M9 and derivatives. We're still waiting for final proof that it's been able to pull all these elements together in the M240 - but I'm hopeful that it has :)


Thomas - I'm not sure I agree with you here. I'd still hold that Leica's major job (and expertise) is lens design and manufacture. Its main role in designing digital Ms is to buy in or commission the engineering that's needed to translate what these superb lenses can draw into something that a computer can process. Leica don't (and can't) manufacture the shutter, the sensor, the screen, the EVF system...

 

The minor miracle has been that, with the resources it's been able to command (compared to Sony or Canon) it's been able to do such a great job with the M9 and derivatives. We're still waiting for final proof that it's been able to pull all these elements together in the M240 - but I'm hopeful that it has :)

 

Chris,

 

Thank you very much. Now I understand.

 

Fingers crossed!

 

 

Thomas Chen



It may well be that you underestimate at least part of the audience visiting this forum, their gear and the way they use it. As soon as I receive my M 240, I'll post a comparison, and you can tell me whether the "3D-like" effect is present in the M9 and M 240 images on your high-resolution screen, loaded from DNG. I'll get back to you on that. :)

 

 

Bert,

 

Would you please tell me where to download the 2nd batch of images by Mr. Jean Gaumy? DNG and full-scale JPEG files.

 

I want these six images for assessment:

 

1st row: middle & right

2nd row: left & right

3rd row: left & middle

 

Regards,

 

Thomas Chen


It may well be that you underestimate at least part of the audience visiting this forum, their gear and the way they use it. As soon as I receive my M 240, I'll post a comparison, and you can tell me whether the "3D-like" effect is present in the M9 and M 240 images on your high-resolution screen, loaded from DNG. I'll get back to you on that. :)

 

Hi There Bert

My problem with these comparisons - apart from the obvious technical difficulties, and the evaluation difficulties . . . is that I feel it's also very much scene dependent.

So even if you can come to some kind of consensus conclusion on one scene with one lens at one ISO, one aperture and one WB . . . change the scene and it may be quite different.

Of course, I have done this kind of comparison - but it's for my own benefit for the reasons above. . . . and to be honest, I haven't really reached a conclusion myself!

 

Good luck with yours!

 

all the best


With Respect to comparisons. . . . .

Recently I did a comparison between three cameras and two different focal lengths. I used an aperture to equalise the depth of field for the different sensor sizes, all shots were taken with excellent lenses, and then I downsized the larger files to the size of the smallest, cropping the µ43 files to 2x1.
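The depth-of-field equalisation described above can be sketched numerically. This is a hypothetical illustration, not the poster's actual calculation: the function and the crop-factor table are my own, using the usual approximate factors of 1.5 for APS-C and 2.0 for µ43. To roughly match framing and depth of field across formats, both the focal length and the f-number are divided by the crop factor.

```python
# Sketch of equalising depth of field across sensor formats.
# Assumption: dividing the full-frame focal length and f-number
# by the crop factor gives roughly matching framing and DOF.

CROP = {"full frame": 1.0, "APS-C": 1.5, "µ43": 2.0}

def equivalent_settings(ff_focal_mm, ff_aperture, fmt):
    """Return the (focal length, f-number) on format `fmt` that
    roughly match a full-frame shot at the given settings."""
    c = CROP[fmt]
    return ff_focal_mm / c, ff_aperture / c

for fmt in CROP:
    focal, ap = equivalent_settings(50, 5.6, fmt)
    print(f"{fmt}: {focal:.1f} mm at f/{ap:.1f}")
```

For example, a full-frame 50 mm at f/5.6 corresponds to roughly 25 mm at f/2.8 on µ43, which is why such blind comparisons can be made hard to call.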

 

Pictures were taken in good light, outside, of a well lit scene (with quite a big dynamic range). All taken on a tripod and properly focused.

 

I then sent the six files to a series of victims, all of them inveterate pixel peepers, and told them what the three cameras were (one was µ43, one was APS-C and the other full frame). All they had to do was tell me which shot was taken with which camera. Easy. Nobody got them all right, not even nearly. In fact, I think if I did a statistical survey it would come out worse than 50/50. They didn't agree on which was the sharpest or the best, either.

 

In this case everything was standardised - except the camera and the lens - and that should make it easy - but it isn't.

 

all the best


Pardon my ignorance, but Thomas, I don't understand what you say by 3-D like and in a discussion around camera bodies and sensors.

 

I always thought the 'Leica' look is a very subjective opinion and revolves around a subtle rendering quality of a lens due to taking pictures wide open, combined with contrast, bokeh and some coma.

 

If this is all due to the lens, then what has a good sensor, CCD or CMOS, to do with it?

 

Mr. Remash,

 

Lenses certainly play a role in image quality (including the 3D-like look I refer to), but the JPEG processing stage in the camera body converts the R, G, G, B signals from each adjacent group of four sensor pixels into an RGB value for every pixel (demosaicing).

 

That's the reason why the interaction of firmware and sensor matters in image rendering. There are a couple of adjustable parameters in the firmware, and their settings affect the image rendering.
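The demosaicing step described above can be sketched in a few lines. This is a deliberately crude illustration, not Leica's actual pipeline; the function name and approach are my own. Each 2x2 RGGB quad is collapsed into a single RGB value shared by its four pixels, whereas real firmware uses far more sophisticated interpolation:

```python
import numpy as np

def demosaic_rggb_nearest(raw):
    """Crude demosaic of an RGGB Bayer mosaic: each 2x2 quad
    (R G / G B) is collapsed into one RGB value, which is then
    assigned to all four pixels of the quad."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    r = raw[0::2, 0::2]                            # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average of the two greens
    b = raw[1::2, 1::2]                            # blue sites
    rgb_half = np.stack([r, g, b], axis=-1)        # half-resolution RGB
    # back to full resolution by pixel replication
    return np.repeat(np.repeat(rgb_half, 2, axis=0), 2, axis=1)

# One RGGB quad: R=100, the two greens 80 and 60, B=40
raw = np.array([[100.0, 80.0],
                [60.0, 40.0]])
print(demosaic_rggb_nearest(raw))  # every pixel becomes R=100, G=70, B=40
```

The choices hidden in this step (how the greens are combined, how values are interpolated, what sharpening and colour matrices follow) are exactly the firmware parameters that can change the rendering.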

 

I care about this issue because I'm going to buy an M240, but I would reject it if its IQ were not superior to that of its predecessors, the M9 and M8. As such I seek advice from experts who are active in the forum.

 

Best Regards,

 

Thomas Chen


Going back to your painting analogy, a good sensor and associated processor is like a good canvas being able to take all colors and strokes of the painter.

 

If it were to change in any way the intent of the painter or the rendering of the brush, I would seriously worry.

 

Having said that I am not a technical expert to comment on the underlying technology employed in the M240. For now, I am quite happy with my M9-P which already offers me an IQ that exceeds my needs.


Going back to your painting analogy, a good sensor and associated processor is like a good canvas being able to take all colors and strokes of the painter.

 

If it were to change in any way the intent of the painter or the rendering of the brush, I would seriously worry.

 

Having said that I am not a technical expert to comment on the underlying technology employed in the M240. For now, I am quite happy with my M9-P which already offers me an IQ that exceeds my needs.

 

I wish I had an M9-P, but I cannot allow my R lenses to be orphans forever.


Mr. Remash,

 

Lenses certainly play a role in image quality (including the 3D-like look I refer to), but the JPEG processing stage in the camera body converts the R, G, G, B signals from each adjacent group of four sensor pixels into an RGB value for every pixel (demosaicing).

 

That's the reason why the interaction of firmware and sensor matters in image rendering. There are a couple of adjustable parameters in the firmware, and their settings affect the image rendering.

 

I care about this issue because I'm going to buy an M240, but I would reject it if its IQ were not superior to that of its predecessors, the M9 and M8. As such I seek advice from experts who are active in the forum.

 

Best Regards,

 

Thomas Chen

I'm not an engineer so would appreciate comments from those more knowledgeable. If one shoots raw why is the jpg conversion relevant? Isn't it then the raw developer that lets us see the image? If one could create supposed 3D or painterly effects in the firmware, then why doesn't every camera manufacturer do it? If it were just a matter of making the demosaic work a certain way, you could have options for your favorite painter just as some cameras provide options for "neutral" "vivid" etc.? And then why bother with raw at all? I must say I am confused.


Alan, from my understanding, if by "see the image" you mean on the LCD, then it's the JPEG conversion that lets you see the image. If you are processing a DNG on your computer, then it is ACR that first lets you see the image; that's what I know, which is very little on this subject.

 

To be clearer: even if you shoot RAW only, the LCD displays the image as a JPEG, unless you are using an MM, which after a short wait displays your DNG as a DNG on the LCD.


Hi There Bert

My problem with these comparisons - apart from the obvious technical difficulties, and the evaluation difficulties . . . is that I feel it's also very much scene dependent.

 

Agreed, and that is why you only get the full story after photographing many scenes with both cameras. I was just willing to do the experiment to see what Thomas meant, and whether he could see what he meant. Personally, I think many other factors give the illusion of depth in a photograph, not the sensor. So it is a pointless experiment anyway, I think, but one has to keep an open mind.

 

 

Bert,

 

Would you please tell me where to download the 2nd batch of images by Mr. Jean Gaumy? DNG and full-scale JPEG files.

 

Thomas Chen

 

Sorry Thomas, can't help you there. All I can do is deliver DNG files that I made myself and only when I have the M 240. Hope this makes sense to you. :)


I am sorry for coming to this thread late.

 

Many of us, myself included, are, to say the least, very disappointed that the EVF will not allow the magnified area to be moved around. I use this frequently, particularly when my cameras are tripod-mounted.

 

I am not sure at this point if this is/would be a major obstacle for any of us using our 28mm f2.8 Super Angulon lenses including using them for the purpose of stitching.

 

But, for anyone looking to use a Canon or Nikon T/S lens on the M240 this restriction may almost preclude using these lenses to make proper adjustment for focus or swing movements for the purposes of using the Scheimpflug effect.

 

As an edit, the only way that I could see using the Canon or Nikon T/S lenses properly on the camera for the tilt or swing for the Scheimpflug effect would be to use a loupe on the LCD screen much like when we would do this exercise with our large format cameras.

 

Rich


Alan, from my understanding, if by "see the image" you mean on the LCD, then it's the JPEG conversion that lets you see the image. If you are processing a DNG on your computer, then it is ACR that first lets you see the image; that's what I know, which is very little on this subject.

 

To be clearer: even if you shoot RAW only, the LCD displays the image as a JPEG, unless you are using an MM, which after a short wait displays your DNG as a DNG on the LCD.

 

Okay, thank you. I understand a bit more now. But I am only marginally interested in the look of what the LCD shows, as it is useful only for a very rough check of composition and exposure. I understand Thomas to be talking about JPEGs as a means to a final image (e.g., a print). That's where I just don't understand why a supposed "3D" JPEG image is terribly important. At least not for me: I will usually shoot DNG only, making the way JPEGs render moot for purposes of what the final product will look like. If Thomas is talking about how the firmware produces a RAW image, then the downstream development of that image in ACR or otherwise is also important and contributes significantly to the feel of the image. Just like in our film days, when you could take rolls of Plus-X and run them through different film developers and produce different looks. The way a sensor's pixels react to light is, to me, an analogue of how the silver reacted in film. But while a DNG or other raw file can be considered a "negative", that is not so for a JPEG. So I am still confused. Perhaps Thomas can explain better what he was getting at.


Lenses certainly play a role in image quality (including the 3D-like look I refer to), but the JPEG processing stage in the camera body converts the R, G, G, B signals from each adjacent group of four sensor pixels into an RGB value for every pixel (demosaicing).

 

What lens are you shooting with for these 3D-like pictures?

 

3D-like pictures are well known; they mostly depend on very good contrast and good black levels.

The new M has a higher dynamic range, so you need to tweak the contrast in post-processing to achieve the same-looking image (but the M240 picture will still hold more information, which is better IMO).
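The contrast tweak mentioned above can be sketched as a simple S-curve applied to normalised pixel values. This is a hypothetical illustration of the principle only; the curve and its `strength` parameter are my own, not what Leica's firmware or any raw converter actually does. A sigmoid darkens shadows and lifts highlights while keeping the black and white points fixed:

```python
import numpy as np

def s_curve(x, strength=5.0):
    """Sigmoid contrast curve for pixel values in [0, 1]: darkens
    shadows and lifts highlights, rescaled so that pure black and
    pure white stay fixed. Higher `strength` means more contrast."""
    s = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))   # raw curve value at x = 0
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))  # raw curve value at x = 1
    return (s - lo) / (hi - lo)

tones = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(np.round(s_curve(tones), 3))  # shadows pushed down, highlights up
```

Applying such a curve to a flatter, higher-dynamic-range file is one way to recover a "contrasty" look while keeping the extra shadow and highlight information in the raw file.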

 

So go out there and look for a "contrasty" lens, and I'm sure you'll want to jump into these pictures ;)

 

Cheers!


Okay, thank you. I understand a bit more now. But I am only marginally interested in the look of what the LCD shows, as it is useful only for a very rough check of composition and exposure. I understand Thomas to be talking about JPEGs as a means to a final image (e.g., a print). That's where I just don't understand why a supposed "3D" JPEG image is terribly important. At least not for me: I will usually shoot DNG only, making the way JPEGs render moot for purposes of what the final product will look like. If Thomas is talking about how the firmware produces a RAW image, then the downstream development of that image in ACR or otherwise is also important and contributes significantly to the feel of the image. Just like in our film days, when you could take rolls of Plus-X and run them through different film developers and produce different looks. The way a sensor's pixels react to light is, to me, an analogue of how the silver reacted in film. But while a DNG or other raw file can be considered a "negative", that is not so for a JPEG. So I am still confused. Perhaps Thomas can explain better what he was getting at.

 

Agree. Frankly, I shoot JPEG when away from home so I can readily send out-of-camera images to my wife or someone else. The LCD for me is mostly for looking at the histogram. That might change somewhat with the 240.

