


Motivated by this thread, I took a few more test shots with the M9 and the M (Typ 240) last weekend: in good light, at base ISO, from a tripod, and with my best lens, which happens to be the Apo-Summicron-M 50 mm Asph, no less. I took pictures with both low and high subject contrast.

 

Result: the files from the M (Typ 240) win every time: higher resolution, wider dynamic range, smoother tonal transitions, better overall image quality. With a better lens than before, the difference in resolution (24 MP vs 18 MP) becomes much more apparent. In some parts of the pictures, the M9 files show higher local per-pixel contrast, which might trick people into thinking they are of higher quality, but that is just a specious effect. Overall, the files from the M (Typ 240) at base ISO have more shadow detail and look more natural.

 

Why someone would prefer the M9 files is completely beyond me ... and even more bizarre than that is the insinuation that the differences were caused by the sensor design principle (i.e. CCD vs CMOS).



 

I must say, the M240 paired with the 50 APO is capable of some amazing output. In fact, I like how the 50 APO behaves with the M240 more than with the M9.

 

Peter.


and even more bizarre than that is the insinuation that the differences were caused by the sensor design principle (i.e. CCD vs CMOS).

 

Have you designed an imaging sensor, or do you work with people who do? There are fundamental differences between CMOS and CCD sensors, and these differences affect the images produced.

 

If you like the M240, you chose wisely. If you bought one and do not like it, you did not choose the camera that was best for you. I stick with cameras that produce the look that I like. I discount the M240 from consideration because of the poor results shown at ISO3200 and ISO6400. I believe this will be corrected in the future as BSI sensors become mainstream.



This quickly became a CCD vs CMOS debate, with some claiming that the real difference lies in the dye used in the Bayer filter. The difference is "deeper" than that.

 

The move to CMOS was largely driven by the need for speed: better performance at high ISO due to signal-processing gain that is not available in CCDs. The M240 files at ISO3200 and ISO6400 show banding, which, to me, does not make them any better than the CCD-based cameras.

 

The M240 requires an IR-cut filter, and its files seem to require more post-processing than the M9's to get good results. These differences are due to the design of the sensor.

 

To add: neither existing CCD nor front-illuminated CMOS sensor technology is "optimal" for full-frame M-mount cameras; the steep ray angles make such a camera a designer's worst nightmare.

For all of you who prefer the M8 images over the M9's: the KAF-10500 sensor has 3 dB higher linear dynamic range than the one used in the M9, and the saturation count of the M8 is 50% higher than that of the M9. The M240 matches the M9. I'm going to speculate that this is due to the need to thin down the sensor to accommodate light coming in at steep angles: you have to make the pixels more shallow, and that requires removing material, which means a lower saturation count.

CMOS sensors traditionally have the light-sensitive layer farther from the surface than a CCD-based sensor. CMOSIS had to thin out the sensor, so perhaps the digital side starts to interfere with the light-sensitive portion of the sensor. The full data sheet is not available for it. As for the M Monochrom, I wonder whether the saturation count went back up to that of the rest of the detectors in the 6.8 µm family, since it does not need the RGB layer. The full data sheet is not available for the monochrome sensor either.

So: neither FSI CMOS nor CCD is optimal for the full-frame M-mount. I'm hoping BSI sensors do better; they should. As far as continued development of CCDs to improve performance goes, a new material able to collect more electrons in the same volume would be one solution. It might require Unobtanium, the rarest of elements.
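To put the dynamic-range arithmetic in concrete terms, here is a minimal Python sketch of the usual definition (linear dynamic range = saturation count over read noise, in dB). The electron counts are illustrative assumptions on my part, not values from any of the data sheets mentioned above:

import math

def dynamic_range_db(full_well_e, read_noise_e):
    # Linear dynamic range in dB: 20 * log10(saturation count / read noise)
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Illustrative numbers only (assumed, not taken from a data sheet):
# two sensors with the same read noise, one with a 50% larger full well.
read_noise = 15.0                 # electrons RMS
full_well_a = 40000.0             # electrons at saturation
full_well_b = 1.5 * full_well_a   # 50% higher saturation count

gain_db = dynamic_range_db(full_well_b, read_noise) - dynamic_range_db(full_well_a, read_noise)
print(f"DR gain from a 50% larger full well: {gain_db:.2f} dB")   # about 3.5 dB

A 50% higher saturation count at equal read noise works out to roughly 3.5 dB, which is in the same ballpark as the 3 dB difference quoted above.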

Edited by Lenshacker

I'm going to speculate that this is due to the need to thin down the sensor to accommodate light coming in at steep angles: you have to make the pixels more shallow, and that requires removing material, which means a lower saturation count. CMOS sensors traditionally have the light-sensitive layer farther from the surface than a CCD-based sensor. CMOSIS had to thin out the sensor, so perhaps the digital side starts to interfere with the light-sensitive portion of the sensor.

Not at all. The part that got thinner was the wiring layer in front of the light-gathering and charge-collecting parts of the sensor.

 

As for the M Monochrom, I wonder whether the saturation count went back up to that of the rest of the detectors in the 6.8 µm family, since it does not need the RGB layer.

The sensor chips in the M9 and M Monochrom are essentially the same. The only difference is that the M9 sensor has a layer of red, green, and blue filters on top of the chip where the M Monochrom has clear filters.


Err.. No. The IR thing has to do with the effectiveness of the IR filter in front of the sensor, not with sensor technology.

 

You are wrong. It has to do with both. By changing the dopants used in the sensor, you can make it more or less sensitive to infrared. Kodak added indium tin oxide to increase blue sensitivity and decrease IR sensitivity. Infrared tends to get absorbed in deeper layers of the silicon, so the depth of the light-sensitive layer also plays a role.

 

There are detector materials that are formulated for IR; indium gallium arsenide and mercury cadmium telluride are two. Silicon is "VNIR": visible plus near infrared.
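To illustrate the depth argument with rough numbers: photon absorption in silicon follows a Beer-Lambert decay, and the 1/e absorption depth grows rapidly towards the near infrared. The depths below are ballpark figures from memory, and the layer thicknesses are assumed for illustration, not taken from any of the sensors discussed here:

import math

# Approximate 1/e absorption depths in silicon, in micrometres (ballpark values only).
absorption_depth_um = {450: 0.4, 550: 1.5, 650: 3.5, 850: 15.0, 950: 60.0}

def fraction_absorbed(depth_um, layer_um):
    # Beer-Lambert: fraction of photons absorbed within the first layer_um of silicon.
    return 1.0 - math.exp(-layer_um / depth_um)

for wavelength_nm, depth in absorption_depth_um.items():
    shallow = fraction_absorbed(depth, 3.0)   # assumed shallow photosensitive layer
    deep = fraction_absorbed(depth, 8.0)      # assumed deeper photosensitive layer
    print(f"{wavelength_nm} nm: {shallow:.0%} absorbed in 3 um, {deep:.0%} in 8 um")

Blue and green are absorbed almost completely in either case, while the near-infrared fractions change a great deal with layer depth, which is why the depth of the light-sensitive layer matters for IR sensitivity.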

Edited by Lenshacker

Not at all. The part that got thinner was the wiring layer in front of the light-gathering and charge-collecting parts of the sensor.


 

If the wiring is brought closer to the light-gathering layer, that could possibly induce noise into the image. I would like to know why the banding occurs in M240 images shot at ISO3200 and ISO6400. I've shot the M9 at ISO2500 underexposed by 2 EV, an ISO10000 equivalent: much more noise than an M240, but less banding than in some of the M240 shots. The A/D conversion for the M9 sensor is done off-chip, so someone did a great job with regard to isolation.

 

[Image: Skating_J3_Wideopen2_ISO10000]
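If anyone wants to quantify the banding rather than judge it by eye, one simple test is to compare the row-to-row (and column-to-column) variation of the mean level in a lens-cap frame against what purely random noise would predict. A rough Python sketch, assuming the rawpy package and a placeholder file name (neither is something I actually used for the shot above):

import numpy as np
import rawpy

# Lens-cap (dark) frame shot at the ISO under test; the file name is a placeholder.
raw = rawpy.imread("darkframe.dng")
data = raw.raw_image_visible.astype(np.float64)

rows, cols = data.shape
row_means = data.mean(axis=1)    # average level of each row
col_means = data.mean(axis=0)    # average level of each column
pixel_noise = data.std()         # overall pixel-to-pixel noise

# If the noise were purely random, averaging over a row would shrink it by sqrt(cols).
# Row-mean variation well above that level indicates horizontal banding.
print("row-mean std:", row_means.std(), "expected if random:", pixel_noise / np.sqrt(cols))
print("col-mean std:", col_means.std(), "expected if random:", pixel_noise / np.sqrt(rows))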

 

I would like to see the data sheet for the monochrome version of the sensor. The well capacity of the other sensors in the 6.8 µm family is around 60K electrons, as is the M8 sensor's. I never expected to find that the M8 sensor has 3 dB higher linear dynamic range than the M9; the latter's data sheet was only made available a few months ago.

Edited by Lenshacker

https://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/

 

 

This article discusses differences between CMOS and CCD sensors, including infrared sensitivity with regard to pixel depth. Teledyne Dalsa makes both CMOS and CCD sensors. Each has its strong points and weaknesses. They are different technologies, and they have different characteristics. This comes through in the images produced.

Edited by Lenshacker

This comes through in the images produced.

On the contrary. I am always flabbergasted by how close the results from the M9 and the M are to each other. To see any differences in the images produced, you need to do some really deep pixel-peeping or shoot in extreme lighting conditions (and if you do, the differences you find will always be in favour of the M).


You've posted that you apply post-processing to your M240 files to make them come out closer to the M9's.

 

My boss never trusted any image from me except a Polaroid that he saw come out of the camera. I was pretty good at using the computer to get any image that I wanted, including ones that had never been taken.

 

As far as sensor design goes, it should not be brushed off so lightly.


https://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/

 

 

This article discusses differences between CMOS and CCD sensors, including infrared sensitivity with regard to pixel depth. Teledyne Dalsa makes both CMOS and CCD sensors. Each has its strong points and weaknesses. They are different technologies, and they have different characteristics. This comes through in the images produced.

 

I have a feeling that you cherry-picked one paragraph and skipped the conclusion.


I have a feeling that you cherry-picked one paragraph and skipped the conclusion.

 

I read the entire article. The conclusion of the article was that you need to choose CCD or CMOS to best fit an application.

 

https://www.teledynedalsa.com/imaging/knowledge-center/appnotes/ccd-vs-cmos/

 

From the article "Choosing the correct imager for an application has never been a simple task. Varied applications have varied requirements. These requirements impose constraints that affect performance and price. With these complexities at play, it is not surprising that it is impossible to make a general statement about CMOS versus CCD imagers that applies to all applications.

 

CMOS area and line scan imagers outperform CCDs in most visible imaging applications. TDI CCDs, used for high speed, low light level applications, outperform CMOS TDIs. The need to image in the NIR can make CCDs a better choice for some area and line scan applications. To image in the UV, the surface treatment after backside thinning is key, as is the global shutter requirement. The need for very low noise introduces new constraints, with CMOS generally still outperforming CCDs at high readout speeds. The price-performance trade-off can favor either CCD or CMOS imagers, depending on leverage, volume, and supply security."

 

On high-speed readout: the Nikon Df has the same sensor as the D4. The Df has been credited with lower noise because it has half the frame rate of the D4, 5.5 FPS versus 11 FPS. The frame rate of the M9 sensor is slow and should not be considered high-speed.

 

Both have strengths and weaknesses. This certainly implies that the basic designs of the two types of sensors are different, and that this has effects on the application. When you work with this stuff for a few decades, you start to appreciate how the design of the sensor comes out in an image. Spend a decade writing code to troubleshoot one-of-a-kind sensors, and even more so.

 

Err.. No. The IR thing has to do with the effectiveness of the IR filter in front of the sensor, not with sensor technology.

 

Now, if you have read the entire article and still stand by your statement that sensor design has nothing to do with infrared sensitivity, and that only the filter over the sensor matters, please link to the article that you base your statement on.

Edited by Lenshacker

I cut and pasted your quote regarding the IR filter into the post.

 

If you still believe what you posted about IR sensitivity in sensors to be true, I would be interested to know the source of the statement.

 

I believe that the differences in design between CMOS and CCD detectors have an effect on the images they produce. Some in this thread dispute that statement. I've linked to one article that seems to back it up. I've also posted that neither design is optimal for a full-frame M-mount camera, and have attempted to explain that comment. Again, these are forum posts of my opinion.

The CCD in the M9 has lower dynamic range than that of the prior-generation M8; I am speculating that the sensor had to be thinned and the pixels made more shallow. That is speculation on my part. The CMOS sensor in the M240 was also thinned, and it was explained that the wiring layer in front of the light-sensitive layer is what was thinned down. I am speculating that the wiring layer being in closer proximity to the light-sensitive layer could induce noise in the image and is a possible source of banding. Twenty-five years ago, I would have been paid to test this. Now it is just speculation that I find interesting.

Edited by Lenshacker

The CCD in the M9 has lower dynamic range than that of the prior-generation M8

 

Does it really? Or does IR affect the test results by brightening the shadows on the M8? I'm just asking; I don't know what test procedures were used to come to this conclusion.

 

Also, the M240 definitely suffers from more IR pollution than the M9. The M9 has a thicker IR filter than the M8 and is also not as sharp on a per-pixel level as the M8. Judging from that, I believe that the IR sensitivity of the M240 was probably a design compromise, because a thicker IR filter would have reduced sharpness too much.

 

Whatever the next M turns out to be, improvements will definitely have to come from the sensor, and I don't think they will come in the form of more megapixels.

 

Another thing I want to add to the discussion, and I think I have said this before: dynamic range is overrated. What matters to me is how the tones are separated within a reasonable range of light, and this is something that CCD sensors do very well. I will compare it to slide film vs negative film. In crappy lighting situations, negative film has the edge, but I believe that great photographs are made in good light. Expose the M9 or the MM well in good light, and you will get fantastic results.


...

Another thing I want to add to the discussion, and I think I have said this before: dynamic range is overrated. What matters to me is how the tones are separated within a reasonable range of light, and this is something that CCD sensors do very well. I will compare it to slide film vs negative film. In crappy lighting situations, negative film has the edge, but I believe that great photographs are made in good light. Expose the M9 or the MM well in good light, and you will get fantastic results.

 

On the whole, I think dynamic range is extremely useful ... not everything is done under ideal lighting. Recovering images with insufficient dynamic range is very difficult.

 

I would have thought that dynamic range and tonal separation are very closely linked ... and are dependent on the full-well capacity and read noise of the pixel element. Effectively, this is dependent upon the bit depth of the sensor. This implies that more dynamic range simply gives "more tones".
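As a back-of-the-envelope check on that link, the engineering dynamic range works out directly from full-well capacity and read noise, and can be compared with the ADC bit depth. The numbers below are assumptions for illustration, not measurements of either camera:

import math

full_well = 60000.0    # electrons at saturation (assumed)
read_noise = 15.0      # electrons RMS (assumed)
adc_bits = 14          # assumed ADC bit depth

dr_stops = math.log2(full_well / read_noise)        # dynamic range in stops
dr_db = 20.0 * math.log10(full_well / read_noise)   # the same figure in dB

# If dr_stops is below adc_bits, the pixel (not the ADC) limits the number of
# distinguishable tones; more full well or less read noise means more tones.
print(f"dynamic range: {dr_stops:.1f} stops ({dr_db:.1f} dB) with a {adc_bits}-bit ADC")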

 

My speculation is that (barring IR and UV contamination) image "colour quality" is therefore dependent upon the colour filter array and the transfer function that moves (with whatever additional processing) the pixel values into the camera's image storage.

Edited by TonyField
