M9reno Posted February 26, 2020 Share #1

I was told yesterday in person by a respected UK dealer that Leica's sensors for all M10 models (and even some earlier models) are a composite of more than one sensor. According to him, this was done to minimise the number of whole sensors scrapped at the manufacturing stage because of dead-pixel issues: if serious dead-pixel issues arise, only half a sensor needs replacing. He added that under the right lighting conditions, photographs of uniform subjects (e.g. blank walls) taken on such sensors can show a "seam" where the two composite parts meet. I'd never heard of such a thing and was very surprised. Could anyone here comment further?
thighslapper Posted February 26, 2020 Share #2 (edited)

There have been photos posted here where this is apparent under specific circumstances with several Leica mirrorless cameras. If I recall correctly, on the M9 this also improved processing performance, as each half of the sensor was read separately rather than in one single continuous pass. I'm not sure whether this was two halves bolted together or just the arrangement of the circuitry, but it did result in obvious asymmetry in images of uniform backgrounds at high ISO with excessive shadow recovery and added contrast.

Edited February 26, 2020 by thighslapper
PCPix Posted February 26, 2020 Share #3

The M262 definitely has a two-part sensor: I had the slightest output variation between the two halves (something like moving the middle slider in PS Levels to 1.02 to balance them). While this amount is almost invisible in most images, it became clear when doing dramatic sky adjustments in black and white in either PS or Lr. I sent the body back and got a second one: exactly the same problem. I looked at files from another body (different production run), and it was the same too. Despite bringing this to Leica's attention, nobody really took me seriously, and I moved on to the M10. I have never seen any indication of this on the M10, so I don't think you need to worry 😉 If you want to test (this is how it showed up on the M262): shoot a frame with an overcast bright sky, bring it into PS, and pull the central Levels slider hard to darken the sky. If there is anything to be seen, you will see the slightest line (running vertically, top to bottom) at exactly half way across the frame, one side very slightly lighter, one side very slightly darker. I attach a cropped-in sample (from my M262, not the M10) so you can see.
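PCPix's Photoshop test can also be sketched numerically. The following is a minimal illustration, not his actual workflow: it assumes a grayscale flat-field frame loaded as a numpy array, and the synthetic 1.02 gain step on one half is invented to mirror the Levels figure he mentions.

```python
import numpy as np

def half_frame_ratio(img: np.ndarray) -> float:
    """Return the mean-brightness ratio of the left half to the right half."""
    h, w = img.shape[:2]
    left = img[:, : w // 2].mean()
    right = img[:, w // 2 :].mean()
    return left / right

# Synthetic "overcast sky": a uniform frame with a 2% gain difference
# applied to the right half, roughly the 1.02 Levels adjustment above.
sky = np.full((600, 800), 0.5)
sky[:, 400:] *= 1.02

print(round(half_frame_ratio(sky), 3))  # a ratio below 1 means the left half is darker
```

A ratio meaningfully different from 1.0 on a flat frame would be the numerical equivalent of the faint vertical line PCPix describes.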
scott kirkpatrick Posted February 26, 2020 Share #4

It's not two pixel arrays glued together, but circuits for extracting the image that surround the pixel array, allowing the data to flow out in two (or more) streams at the same time. A natural way to divide the streams is to split the image array into regions along the half or quarter division lines. This was more important for CCDs (M8, M9) than it is for CMOS because of the differences in how the data is extracted and where on the chip it is converted to digital form.
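Scott's description of parallel readout streams can be sketched as a toy model. To be clear about assumptions: the per-channel gain values and the function name are invented for illustration, and real readout is analog and far more involved than multiplying an array.

```python
import numpy as np

def two_channel_readout(pixels, gain_left=1.0, gain_right=1.0):
    """Read the left and right halves through separate channels, each with
    its own amplification, then reassemble the frame."""
    h, w = pixels.shape
    left = pixels[:, : w // 2] * gain_left
    right = pixels[:, w // 2 :] * gain_right
    return np.hstack([left, right])

# A perfectly uniform subject read through slightly mismatched channels
# acquires a step at the half-way column - the "seam" discussed above.
flat = np.full((4, 8), 100.0)
out = two_channel_readout(flat, gain_left=1.0, gain_right=1.02)
print(out[0])
```

The point of the model is that the "seam" is a readout-chain artifact, not a physical join in the silicon.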
farnz Posted February 26, 2020 Share #5

4 hours ago, scott kirkpatrick said: This was more important for CCDs (M8, M9) than it is for CMOS because of the differences in how the data is extracted and where on the chip it is converted to digital form.

Probably for on-chip heat management too, Scott? Pete.
scott kirkpatrick Posted February 26, 2020 Share #6 (edited)

1 hour ago, farnz said: Probably for on-chip heat management too, Scott?

Maybe, but the big difference is that a CMOS imager has its A/D conversion right at the pixel, while the CCD signal stays analog until it reaches a serial digitizer at the edge of the chip that works on all the data (and probably gets hotter).

Edited February 26, 2020 by scott kirkpatrick
adan Posted February 26, 2020 Share #7

No digital M has a "composite sensor." They have always had two-channel readouts, from the left and right sides simultaneously, to speed up the output. Just as a stadium or theatre with two exits can empty its seats twice as fast as one with only a single exit; but one would not call a stadium with two exits a "composite of two stadiums." The output circuits can occasionally be imbalanced, and thus produce slightly different "halves" to the picture, as PCPix demonstrates - something Leica can rebalance at the factory.
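The factory rebalancing adan mentions amounts to a per-channel gain correction. Here is a hedged sketch of the idea, not Leica's actual calibration procedure: measure the left/right ratio on a flat reference frame, then apply the inverse to one half. The function name and the flat-frame approach are assumptions for illustration.

```python
import numpy as np

def rebalance(img, flat):
    """Scale the right half of `img` by the left/right gain ratio
    measured on a uniform reference frame `flat`."""
    h, w = flat.shape
    ratio = flat[:, : w // 2].mean() / flat[:, w // 2 :].mean()
    fixed = img.astype(float).copy()
    fixed[:, w // 2 :] *= ratio
    return fixed

# Simulate a 2% channel imbalance on a uniform frame, then correct it.
flat = np.full((4, 8), 100.0)
flat[:, 4:] *= 1.02
balanced = rebalance(flat, flat)
print(balanced.std() < 1e-9)  # halves now match
```

Once the gains are matched, a uniform subject reads out uniformly and the seam disappears, which is consistent with PCPix no longer seeing the effect on a recalibrated or later body.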
M9reno Posted February 27, 2020 Author Share #8

Many thanks to you all.
mmradman Posted February 27, 2020 Share #9

On 2/26/2020 at 8:51 AM, M9reno said: I was told yesterday in person by a respected UK dealer that Leica's sensors for all M10 models (and even some previous models) are a composite of more than one sensor. ... photographs of uniform subjects (e.g. blank walls) taken on such sensors can show a "seam" where the two composite parts meet.

Ask your dealer whether he has ever had a go at some relatively simple DIY tasks that require aligning pieces on a very coarse scale, like wall tiles or floorboards - not necessarily easy, I know. Now try to align two "sensor halves" where a speck of dust is a boulder compared to a pixel, which is 4 to 6 thousandths of a millimetre across.
Mr.Prime Posted February 27, 2020 Share #10

As an FYI: multiple sensors can be tiled to create large arrays, but pixels cannot be positioned at the edge of the die, so there will be gaps. For digital photography, as on the M9, this would never be acceptable. It is used where very large sensors are required, such as astronomical telescopes and large-panel X-ray detectors.
adan Posted February 27, 2020 Share #11

There have been some "butted" sensors - I think mostly for astronomy (and, as mmradman implies, at astronomical cost due to the trickiness of assembly). And maybe (?) some large medium-format sensors. Some have up to 3000 output channels, for speed. Note two different terms: "stitched sensors," which are one piece of silicon with two adjoining patterns of pixels exposed onto it, and "butted sensors," which are assembled from separate original pieces of silicon (chips), floor-tile style, after fabrication. https://www.bhphotovideo.com/explora/photography/features/worlds-largest-digital-camera https://petapixel.com/2018/06/18/this-is-what-canons-ginormous-cmos-sensor-looks-like-next-to-a-dslr/
PCPix Posted February 28, 2020 Share #12

On 2/26/2020 at 11:57 AM, scott kirkpatrick said: It's not two pixel arrays glued together, but circuits for extracting the image that surround the pixel array, allowing the data to flow out in two (or more) streams at the same time. ...

Thanks for the more accurate description of 'the two halves', Scott.