Recommended Posts

I understand the concept of pixel binning - taking a group of 4 pixels and treating them as one “super pixel”, thereby improving dynamic range and reducing noise - but is this really the case, and are there other unintended consequences?

For a start, my understanding of the colour filter array is that those pixels will already be green (2), red and blue. How do they suddenly become combined, and how is it that 60MP becomes 36MP and then 18MP?  Then, if 4 pixels become 1 super pixel (leaving aside the mathematical reality that 60 divided by 4 is 15MP), surely the circle of confusion increases (4-fold?) with a corresponding change in depth of field.

All this raises a singular question in my mind - why?

MP is largely irrelevant to my photography (see the discussion on cropping over in the Bar).  I don’t mind MP one way or the other. I never suffer with my 18MP on my Monochrom, or 24MP on my other cameras. So, I will go with whatever comes, provided it doesn’t detract from my photography.  I see zero benefit in 60MP in an M camera, and I find pixel binning mystifying.

In short, what is the point?  Surely, with an M camera, we just want the best dynamic range and noise performance at the ideal MP, whatever that may be?  It will certainly be enough with today’s technology. 

 


It's not clear why pixel interpolation or binning down to a lower resolution is worth the cost.  Leica advertises it as offering a choice for when pictures are needed quickly (and upload is slow), or for noise reduction.  The first reason seems sensible, but the second argument is undercut by the fact that averaging pixels that share a highlight cutoff still gives you a signal with the same highlight cutoff -- there is no tail in the data at that end to let you average things in.  There is some gain in the shadows (a lower noise floor), if the image actually needs it.
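
A rough numpy sketch of that highlight/shadow point (my own illustration with made-up noise numbers, not anything Leica has published): averaging four clipped 14-bit samples creates no headroom above the cutoff, but it does lower random noise in the shadows by roughly a factor of two.

```python
import numpy as np

rng = np.random.default_rng(0)
clip = 2**14 - 1                      # 14-bit full scale

# A blown highlight: all four samples already sit at the cutoff.
highlight = np.full(4, float(clip))
print(highlight.mean())               # still 16383 -- no extra headroom

# A deep shadow: true signal 20, read noise 5 (arbitrary units).
shadow = 20.0 + rng.normal(0, 5.0, size=(100_000, 4))
single = shadow[:, 0].std()           # noise of a single pixel
binned = shadow.mean(axis=1).std()    # noise after 4:1 averaging
print(single / binned)                # ~2x lower noise floor, about 1 stop
```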

Also, each of the three Bayer filters turns a particular linear combination of R, G, and B intensities into a single 14-bit intensity, so to perform some information-maximizing rescaling you have to do a sort of JPEG-like transformation on all of that data, then transform it back into the desired resolution.  It's the only way to get two convenient smaller sizes other than 1/4 and 1/16 of the original number of pixels.  It sounds like a lot of work, but the fast algorithms to do that are already in the camera firmware, so why not?
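
For what it's worth, here is a very simplified sketch of that kind of rescaling (my own guess at the general idea, not Leica's actual firmware): resample each Bayer colour plane separately, then re-interleave the planes into a smaller mosaic, which allows target sizes other than 1/4 or 1/16 of the original.

```python
import numpy as np
from scipy.ndimage import zoom

def downscale_bayer(raw, factor):
    """raw: 2D RGGB mosaic; factor: linear scale, e.g. ~0.775 for 60MP -> 36MP."""
    out_h = int(raw.shape[0] * factor) // 2 * 2   # keep even dimensions
    out_w = int(raw.shape[1] * factor) // 2 * 2
    out = np.zeros((out_h, out_w), dtype=raw.dtype)
    for dy in (0, 1):                             # walk the four CFA positions
        for dx in (0, 1):
            plane = raw[dy::2, dx::2].astype(float)
            scaled = zoom(plane, (out_h // 2 / plane.shape[0],
                                  out_w // 2 / plane.shape[1]), order=1)
            out[dy::2, dx::2] = scaled.astype(raw.dtype)
    return out

toy = np.random.randint(0, 2**14, size=(1024, 1536), dtype=np.uint16)
print(downscale_bayer(toy, (36 / 60) ** 0.5).shape)   # ~60% of the pixel count
```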

  • Thanks 1

What the Leica M11 does is not what is typically understood as pixel binning. Instead, the resolution change seems to happen in firmware. There is a limitation on which resolutions Leica can scale to (a math issue?); e.g., a 24MP resolution was not possible.

I see benefits in 60MP. The lower resolutions seem to placate those who do not want to shoot at 60MP for whatever reason, and they will never overflow the internal buffer.

  • Like 3

There are medium- and small-sized DNGs posted by @tashley at:

… so you can put them through your digital workflow to see how they work for you.  

For my set-up, the 36MP versions looked great, making it an excellent option for those who don't want to shoot 60MP for everything (which is unfortunately what you are forced into doing with the Sony A7RIV).


24 minutes ago, SrMi said:

What the Leica M11 does is not what is typically understood as pixel binning. Instead, the resolution change seems to happen in firmware. There is a limitation on which resolutions Leica can scale to (a math issue?); e.g., a 24MP resolution was not possible.

I see benefits in 60MP. The lower resolutions seem to placate those who do not want to shoot at 60MP for whatever reason, and they will never overflow the internal buffer.

Thank you @scott kirkpatrick and @Photoworks - helpful explanations.

Srdjan, that was the conclusion I was coming to.  The whole cropping and pixel-binning announcement was confusing - it seemed rather pointless.  I have no doubt that the 60MP sensor is better than the 24MP and 40MP sensors in the M10 series.  I just don’t know why Leica went to 60MP.  Surely a lower MP sensor (40MP?) without pixel binning (or whatever Leica is doing in the firmware) would have been a better option.

An M11 with the best sensor for M photography, without the rather pointless gimmicks of cropping and pixel binning would be a better option?  Frankly, I see no point at all in 60MP in an M - zero.  But, I would accept it if that was a side effect of what is, for many other reasons, the best sensor for the system.


I mean, they most certainly do some sort of magic in the firmware. Normally you cannot (at least to my knowledge) easily reduce a raw image other than by pixel binning (e.g. 4:1). So what I also wonder is whether there is some advantage to using the smaller DNGs, or whether it actually makes no difference in post-processing (i.e. picking the 60MP DNG and resizing the image later on).


9 minutes ago, BJohn said:

I mean, they most certainly do some sort of magic in the firmware. Normally you cannot (at least to my knowledge) easily reduce a raw image other than by pixel binning (e.g. 4:1). So what I also wonder is whether there is some advantage to using the smaller DNGs, or whether it actually makes no difference in post-processing (i.e. picking the 60MP DNG and resizing the image later on).

Leica said that there is no DR/noise advantage to shooting at a smaller resolution versus shooting L-DNG and resizing afterwards.

  • Like 1

11 minutes ago, BJohn said:

I mean, they most certainly do some sort of magic in the firmware. Normally you cannot (at least to my knowledge) easily reduce a raw image other than by pixel binning (e.g. 4:1). So what I also wonder is whether there is some advantage to using the smaller DNGs, or whether it actually makes no difference in post-processing (i.e. picking the 60MP DNG and resizing the image later on).

I would say that resizing in postprocessing is bound to lead to better results, especially if you use a dedicated program, although Photoshop is quite good as well. The simple fact is that a camera can never match the processing power of a computer, so it will use less sophisticated algorithms.
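
For example, a minimal sketch of the "resize in post" route (this assumes Pillow and a hypothetical full-resolution export from the raw converter; the file names are made up) - a desktop machine can afford a high-quality resampling filter such as Lanczos:

```python
from PIL import Image

full = Image.open("m11_full_res.tif")      # hypothetical 60MP export
w, h = full.size
scale = (36 / 60) ** 0.5                   # linear factor for 60MP -> ~36MP
small = full.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
small.save("m11_36mp_equivalent.tif")
```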

  • Like 3

22 minutes ago, IkarusJohn said:

Thank you @scott kirkpatrick and @Photoworks - helpful explanations.

Srdjan, that was the conclusion I was coming to.  The whole cropping and pixel-binning announcement was confusing - it seemed rather pointless.  I have no doubt that the 60MP sensor is better than the 24MP and 40MP sensors in the M10 series.  I just don’t know why Leica went to 60MP.  Surely a lower MP sensor (40MP?) without pixel binning (or whatever Leica is doing in the firmware) would have been a better option.

An M11 with the best sensor for M photography, without the rather pointless gimmicks of cropping and pixel binning would be a better option?  Frankly, I see no point at all in 60MP in an M - zero.  But, I would accept it if that was a side effect of what is, for many other reasons, the best sensor for the system.

I speculate that Leica wanted to use the latest Sony sensor, the 60MP one (the same sensor used in the Sigma fp L and Sony a7R IV). The other new Sony option is the 33MP sensor. Shipping a new camera with a lower resolution than its predecessors is not an option, IMO. However, some M10-R owners may upgrade because of the increased resolution (printing, cropping, malleability). Moreover, the triple resolution can appease those users who think 60MP is wasted on an RF camera, as they can use it as a 36MP or 18MP camera.

Time will show whether M11 owners find the triple resolution useful. I do not plan to use it.

  • Like 2

2 hours ago, IkarusJohn said:

I understand the concept of pixel binning - taking a group of 4 pixels and treating them as one “super pixel”, thereby improving dynamic range and reducing noise - but is this really the case, and are there other unintended consequences?

For a start, my understanding of the colour filter array is that those pixels will already be green (2), red and blue. How do they suddenly become combined, and how is it that 60MP becomes 36MP and then 18MP?  Then, if 4 pixels become 1 super pixel (leaving aside the mathematical reality that 60 divided by 4 is 15MP), surely the circle of confusion increases (4-fold?) with a corresponding change in depth of field.

All this raises a singular question in my mind - why?

MP is largely irrelevant to my photography (see the discussion on cropping over in the Bar).  I don’t mind MP one way or the other. I never suffer with my 18MP on my Monochrom, or 24MP on my other cameras. So, I will go with whatever comes, provided it doesn’t detract from my photography.  I see zero benefit in 60MP in an M camera, and I find pixel binning mystifying.

In short, what is the point?  Surely, with an M camera, we just want the best dynamic range and noise performance at the ideal MP, whatever that may be?  It will certainly be enough with today’s technology. 

 

I do not know if you care about technical explanations, but if you think about it, this reduction in size is a mathematical/statistical problem. You have 60M pieces of information and you want to reduce them to 36M or 18M. If you do pixel binning, you are using the most naive technique, one that ignores the other neighboring pixels and is not necessarily the best. The output is also a DNG, so it makes more sense, for example, to estimate the blue pixels in the new, reduced DNG using the neighboring blue pixels from the original 60MP raw file as well, and so on. I am sure Leica is using more sophisticated image-processing algorithms, limited only by the speed of the processor in the camera. Using the original 60MP raw on a computer with more processing power and more sophisticated algorithms (which may also use AI, for example) will give you better results.
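
To make that concrete, here is a toy comparison (purely my own illustration, certainly not Leica's algorithm): naive 2x2 "binning" of the blue plane versus estimating each output blue pixel from a wider neighborhood of blue samples before resampling to an arbitrary size.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

blue = np.random.rand(512, 512)               # stand-in for the blue CFA plane

# 1) Naive binning: average isolated 2x2 blocks, ignore everything around them.
binned = blue.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# 2) Neighborhood-aware: smooth with a wider window, then resample to an
#    arbitrary target size (here ~0.775x, the 60MP -> 36MP linear ratio).
estimated = zoom(uniform_filter(blue, size=3), 0.775, order=1)

print(binned.shape, estimated.shape)          # (256, 256) (397, 397)
```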

  • Thanks 1

That was where I was heading, @Daedalus2000.

Clearly, Leica is not using the simplistic approach of combining 4 adjoining sensor sites into one. Would it be a green, blue or red super pixel? And how would that work with the colour filter array? The alternative would be binning 3 of 4 sensor sites, which would give no real improvement in image quality. None of this makes any sense, and it does not result in 36 & 18 MP … so it clearly isn't pixel binning in the traditional sense. It's file reduction using firmware, as @scott kirkpatrick discusses above. We're back to software manipulation in camera, which sounds like reducing resolution in post-processing.

The one issue I raised in my opening post is the impact on the circle of confusion - a factor in depth of field …

  • Like 1

Just now, IkarusJohn said:

That was where I was heading, @Daedalus2000.

Clearly, Leica is not using the simplistic approach of combining 4 adjoining sensor sites into one. Would it be a green, blue or red super pixel? And how would that work with the colour filter array? The alternative would be binning 3 of 4 sensor sites, which would give no real improvement in image quality. None of this makes any sense, and it does not result in 36 & 18 MP … so it clearly isn't pixel binning in the traditional sense. It's file reduction using firmware, as @scott kirkpatrick discusses above. We're back to software manipulation in camera, which sounds like reducing resolution in post-processing.

The one issue I raised in my opening post is the impact on the circle of confusion - a factor in depth of field …

Yes, you are right. I am not an expert in image analysis, but the algorithms will be much more sophisticated than just averaging: for example, they will potentially include non-linear transformations, larger neighborhoods of pixels, and may depend on ISO, etc. Actually, interestingly, since you mentioned file compression, there are strong connections between statistical estimation and compression. By the way, from another thread in this forum you will see that the JPEGs from the M11 suffer a bit from bad color noise reduction, so I am not sure how good the algorithms within the M11 are overall.
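
As a toy example of what a non-linear step could buy (again, just my own illustration): a median over a group of samples rejects a single hot pixel, where a plain average does not.

```python
import numpy as np

quad = np.array([100.0, 102.0, 98.0, 4000.0])   # one hot pixel in the group
print(quad.mean())      # 1075.0 -- the outlier drags the linear average up
print(np.median(quad))  # 101.0  -- the non-linear estimate ignores it
```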

I think you are right about the depth of field, though I am not sure how much of an impact it will have; I find my inexperience with rangefinders has a bigger effect on where the focus will land and how much I will miss focus :)

In the end, with all these in-camera options, we are always balancing convenience vs quality. I see these reduced DNGs as convenience tools (e.g. with respect to space and buffer capacity), but I prefer to use 60MP and a proper raw processor, which gives me more power but also more flexibility and control.

  • Like 1

If it's happening in firmware then there's some magic going on. Off-chip *binning* would probably slow the write process down a bit. But the buffer goes through the roof at the smaller sizes: I was able to get more than 80 frames in continuous shooting in S-DNG, compared to 7 shots in L-DNG.

Actually I don't care what Leica are doing. I've poked and prodded at the files in all resolutions and I'm not seeing any weird artifacts or any issues. So effectively Leica has given us three M cameras in one rather than an *R* variant.

Leica do say there is one more stop of DR in S-DNG, but I have yet to test this. It'll just be a lower noise floor.
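
A back-of-the-envelope check of the "one more stop" claim, assuming simple averaging on my part (the real pipeline is unknown): going from 60MP to 18MP combines roughly 60/18 input pixels per output pixel, which cuts random noise by the square root of that.

```python
import math

pixels_combined = 60 / 18                  # ~3.3 input pixels per output pixel
extra_dr_stops = math.log2(math.sqrt(pixels_combined))
print(f"{extra_dr_stops:.2f} stops")       # ~0.87 -- close to one stop
```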

Gordon

  • Like 4
  • Thanks 2

I think the issue is a philosophical one for me.

If I had an M11, I would never use cropping or variable resolution, and I would pretend they didn't exist.  The M, for me, is like my M-A or my Monochrom - a camera for producing the best film or DNG files, with a maximum of direct control.  Even my M10-D, which has many options for exposure, is used as a simple tool (albeit one I enjoy using) for capturing images.

I won't be buying an M11.  I might look at an M11-D variant, if it had one resolution, no in-camera cropping, the old centre-weighted metering off the shutter, and the best sensor for my M lenses, producing only DNG files.  The rest I can do in post.  Or not.

  • Like 2

3 hours ago, IkarusJohn said:

Thank you @scott kirkpatrick and @Photoworks - helpful explanations.

Srdjan, that was the conclusion I was coming to.  The whole cropping and pixel-binning announcement was confusing - it seemed rather pointless.  I have no doubt that the 60MP sensor is better than the 24MP and 40MP sensors in the M10 series.  I just don’t know why Leica went to 60MP.  Surely a lower MP sensor (40MP?) without pixel binning (or whatever Leica is doing in the firmware) would have been a better option.

An M11 with the best sensor for M photography, without the rather pointless gimmicks of cropping and pixel binning would be a better option?  Frankly, I see no point at all in 60MP in an M - zero.  But, I would accept it if that was a side effect of what is, for many other reasons, the best sensor for the system.

A lot of us just want those pixels - as long as they don’t compromise IQ - because we can crop more, zoom more, print larger. Why ever not? Storage is cheap.


2 hours ago, Daedalus2000 said:

Yes, you are right. I am not an expert in image analysis, but the algorithms will be much more sophisticated than just averaging: for example, they will potentially include non-linear transformations, larger neighborhoods of pixels, and may depend on ISO, etc. Actually, interestingly, since you mentioned file compression, there are strong connections between statistical estimation and compression. By the way, from another thread in this forum you will see that the JPEGs from the M11 suffer a bit from bad color noise reduction, so I am not sure how good the algorithms within the M11 are overall.

I think you are right about the depth of field, though I am not sure how much of an impact it will have; I find my inexperience with rangefinders has a bigger effect on where the focus will land and how much I will miss focus :)

In the end, with all these in-camera options, we are always balancing convenience vs quality. I see these reduced DNGs as convenience tools (e.g. with respect to space and buffer capacity), but I prefer to use 60MP and a proper raw processor, which gives me more power but also more flexibility and control.

DOF changes when you zoom in and out on screen. It’s not ‘real’. Real DOF is that which is evident at your chosen output size.
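
A quick worked illustration of that (using the standard depth-of-field approximation; the numbers are arbitrary): the circle of confusion criterion depends on enlargement and viewing conditions, so doubling the enlargement roughly halves the depth of field for the same capture.

```python
def total_dof_mm(f_mm, N, s_mm, coc_mm):
    """Approximate total depth of field 2*N*c*s^2/f^2 (valid when the subject
    distance s is much larger than f and well short of the hyperfocal distance)."""
    return 2 * N * coc_mm * s_mm ** 2 / f_mm ** 2

# 50mm lens at f/2, subject at 2 m; CoC 0.030 mm for a modest print,
# 0.015 mm if the same frame is enlarged twice as much.
print(total_dof_mm(50, 2, 2000, 0.030))   # ~192 mm
print(total_dof_mm(50, 2, 2000, 0.015))   # ~96 mm -- half the DoF at 2x enlargement
```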

  • Like 2
  • Thanks 2

47 minutes ago, tashley said:

A lot of us just want those pixels - as long as they don’t compromise IQ - because we can crop more, zoom more, print larger. Why ever not? Storage is cheap.

I'm sure that's right for many; and you can do it all in post ...

  • Thanks 1

 

19 hours ago, IkarusJohn said:

I understand the concept of pixel binning - taking a group of 4 pixels and treating them as one “super pixel”, thereby improving dynamic range and reducing noise - but is this really the case, and are there other unintended consequences?

For a start, my understanding of the colour filter array is that those pixels will already be green (2), red and blue. How do they suddenly become combined, and how is it that 60MP becomes 36MP and then 18MP?  Then, if 4 pixels become 1 super pixel (leaving aside the mathematical reality that 60 divided by 4 is 15MP), surely the circle of confusion increases (4-fold?) with a corresponding change in depth of field.

All this raises a singular question in my mind - why?

MP is largely irrelevant to my photography (see the discussion on cropping over in the Bar).  I don’t mind MP one way or the other. I never suffer with my 18MP on my Monochrom, or 24MP on my other cameras. So, I will go with whatever comes, provided it doesn’t detract from my photography.  I see zero benefit in 60MP in an M camera, and I find pixel binning mystifying.

In short, what is the point?  Surely, with an M camera, we just want the best dynamic range and noise performance at the ideal MP, whatever that may be?  It will certainly be enough with today’s technology. 

 

When Leica says 'pixel binning', are pixels being binned uniformly across the whole sensor, or are more pixels binned in the corners to improve corner performance? That might explain the reduction from 60MP to 36MP and 18MP.

