
I was told yesterday in person by a respected UK dealer that Leica's sensors for all M10 models (and even some previous models) are a composite of more than one sensor.  This would have been done, according to him, in order to minimise the number of whole sensors withdrawn at manufacturing stage due to dead pixel issues (if serious dead pixel issues arise, only half a sensor is replaced).  He said in addition that under the right lighting conditions, photographs of uniform subjects (e.g. blank walls) taken on such sensors can show a "seam" where the two composite parts meet.

I'd never heard of such a thing and was very surprised.  Could anyone here comment further?


There have been photos posted here where this is apparent under specific circumstances with several Leica mirrorless cameras. 

If I recall correctly, on the M9 this also improved processing performance, as each half of the sensor was read out separately rather than in one single continuous pass. I'm not sure whether this was two halves bolted together or just the arrangement of the readout circuitry, but it did result in obvious asymmetry in images of uniform backgrounds at high ISO with heavy shadow recovery and added contrast.

 

Edited by thighslapper

The M262 definitely has a two-part sensor: mine had a slight output variation between the two halves (on the order of moving the middle slider in PS Levels to 1.02 to balance them). While this amount is barely visible in most images, it became clear when making dramatic black-and-white sky adjustments in either PS or Lr. I sent the body back, got a second one, and it had exactly the same problem. I looked at files from another body (a different production run) and it was the same again. Despite bringing this to Leica's attention, nobody really took me seriously and I moved on to the M10. I have never seen any indication of this on the M10, so I don't think you need to worry 😉

If you want to test (this is how it showed up on the M262): shoot a frame of an overcast bright sky, bring it into PS, and pull the central Levels slider down hard to darken the sky. If there is anything to be seen, you will see a very faint line (running vertically, top to bottom) at exactly the halfway point across the frame, with one side very slightly lighter and the other very slightly darker...
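For anyone who wants a numerical version of that eyeball test, here is a rough sketch (my own illustration, assuming NumPy is available): compare the mean level in a narrow band just left of the vertical centre line with the band just right of it. I use a synthetic flat frame with a deliberate 2% gain mismatch as a stand-in for a real overcast-sky shot.

```python
import numpy as np

def seam_ratio(img, band=50):
    """Mean level in a narrow band just left of the vertical centre
    line, divided by the mean just right of it; a value far from 1.0
    suggests a two-channel readout imbalance."""
    mid = img.shape[1] // 2
    return img[:, mid - band:mid].mean() / img[:, mid:mid + band].mean()

# Synthetic stand-in for a flat, overcast-sky frame with a 2% gain
# mismatch between the halves (values are arbitrary sensor counts):
rng = np.random.default_rng(0)
flat = rng.normal(10000, 50, size=(600, 800))
flat[:, 400:] *= 1.02  # simulate the imbalanced right channel
print(round(seam_ratio(flat), 2))  # ~0.98, i.e. a measurable seam
```

On a real file you would load the exported TIFF into the array first; a ratio pinned at 1.00 means the halves are balanced.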

I attach a cropped-in sample (from my M262 - not M10!) so you can see:



It's not two pixel arrays glued together, but readout circuits surrounding the pixel array that allow the data to flow out in two (or more) streams at the same time. A natural way to divide the streams is to split the image array into regions along the half or quarter lines. This mattered more for CCDs (M8, M9) than it does for CMOS because of differences in how the data is extracted and where on the chip it is converted to digital form.
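To make the idea concrete, here is a toy model (my own illustration, not Leica's actual readout, assuming NumPy): one pixel array, but its left and right halves streamed out through two channels in parallel and reassembled into a single image.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# One piece of silicon: a single 4x6 pixel array.
sensor = np.arange(24).reshape(4, 6)
mid = sensor.shape[1] // 2

def read_channel(cols):
    # Stand-in for one output amplifier / digitizer chain.
    return sensor[:, cols].copy()

# Stream the two halves out concurrently, then stitch the data
# (not the silicon!) back together.
with ThreadPoolExecutor(max_workers=2) as pool:
    left, right = pool.map(read_channel, [slice(0, mid), slice(mid, None)])

image = np.hstack([left, right])
print(np.array_equal(image, sensor))  # True
```

If the two chains had slightly different gains, the reassembled image would show exactly the half-frame "seam" discussed above.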


4 hours ago, scott kirkpatrick said:

This was more important for CCDs (M8, M9) than it is for CMOS because of the differences in how the data is extracted and where on the chip it is converted to digital form.

Probably for on-chip heat management too, Scott?

Pete.


1 hour ago, farnz said:

Probably for on-chip heat management too, Scott?

Pete.

Maybe, but the big difference is that a CMOS imager does its A/D conversion on-chip, right at the pixel array, while the CCD signal stays analog until it reaches a serial digitizer at the edge of the chip that works on all the data (and probably gets hotter).

Edited by scott kirkpatrick


No digital M has a "composite sensor."

They have always had 2-channel readouts, from the left and right sides simultaneously, to speed the output time.

Just as a stadium or theatre with two exits can empty its seats twice as fast as one with only one exit.

But one would not call one stadium with two exits a "composite of two stadiums." ;)

The output circuits can occasionally be imbalanced, and thus produce slightly different "halves" to the picture, as PCPix demonstrates - something Leica can rebalance at the factory.
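A crude post-hoc version of such a rebalance can be sketched like this (my own illustration, assuming NumPy; the 1.02 factor is purely illustrative, echoing the Levels tweak PCPix described, and a real factory calibration would measure it per camera):

```python
import numpy as np

def rebalance_halves(img, gain=1.02):
    """Scale one half's levels so the two readout channels match.
    gain is the measured left/right imbalance; 1.02 here is just
    an illustrative value, not a Leica specification."""
    out = img.astype(np.float64).copy()
    mid = out.shape[1] // 2
    out[:, mid:] *= gain  # lift the weaker channel back into line
    return out

# Flat frame whose right half reads 2% low:
flat = np.full((4, 8), 1000.0)
flat[:, 4:] /= 1.02
fixed = rebalance_halves(flat)
print(np.allclose(fixed[:, :4], fixed[:, 4:]))  # True
```

In practice the camera would apply the correction before writing the raw file, which is why a well-calibrated body shows no seam at all.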


On 2/26/2020 at 8:51 AM, M9reno said:

I was told yesterday in person by a respected UK dealer that Leica's sensors for all M10 models (and even some previous models) are a composite of more than one sensor.  This would have been done, according to him, in order to minimise the number of whole sensors withdrawn at manufacturing stage due to dead pixel issues (if serious dead pixel issues arise, only half a sensor is replaced).  He said in addition that under the right lighting conditions, photographs of uniform subjects (e.g. blank walls) taken on such sensors can show a "seam" where the two composite parts meet.

I'd never heard of such a thing and was very surprised.  Could anyone here comment further?

Ask your dealer whether he has ever had a go at some relatively simple DIY tasks that require aligning pieces on a very coarse scale, like wall tiles or floorboards; not necessarily easy, I know.

Now try to align two “sensor halves” where a speck of dust is a boulder compared to a pixel, which measures only 4 to 6 thousandths of a millimetre.


As an FYI: multiple sensors can be tiled to create large arrays, but pixels cannot be positioned right at the edge of a die, so there will be gaps. For digital photography, as in the M9, this would never be acceptable. It's used where very large sensors are required, such as astronomical telescopes and large-panel X-ray detectors.

There have been some "butted" sensors - I think mostly for astronomy (and, as mmradman implies, at astronomical cost due to the trickiness of assembly). And maybe (?) some large medium-format sensors. Some have up to 3,000 output channels, for speed.

Note two different terms: "stitched" sensors, which are one piece of silicon with two adjoining patterns of pixels exposed onto it, and "butted" sensors, which are assembled floor-tile style from separate original pieces of silicon (chips) after fabrication.

https://www.bhphotovideo.com/explora/photography/features/worlds-largest-digital-camera

https://petapixel.com/2018/06/18/this-is-what-canons-ginormous-cmos-sensor-looks-like-next-to-a-dslr/

 


On 2/26/2020 at 11:57 AM, scott kirkpatrick said:

It's not two pixel arrays glued together, but readout circuits surrounding the pixel array that allow the data to flow out in two (or more) streams at the same time. A natural way to divide the streams is to split the image array into regions along the half or quarter lines. This mattered more for CCDs (M8, M9) than it does for CMOS because of differences in how the data is extracted and where on the chip it is converted to digital form.

Thanks for the more accurate description of ‘the two halves’, Scott.

