Pixel binning - huh?


augustwest100

Admittedly, I don’t know what I’m talking about.

I generally understand that at present, there is little difference between shooting high resolution and downsizing, compared to shooting at lower resolution. I typically opt for high res since it gives me room to level my otherwise wildly tilted shots.

I am wondering what, if any, firmware improvements can be made to the M11 in this area to make use of pixel binning, or just the number of pixels in general, and image processing. For instance, could a future firmware update make the low-light performance better at lower resolutions than higher? Could it be used to create a “virtual” image stabilization or a “virtual” multi-shot mode? Could a future update change the current consensus? Meaning, people suddenly find the lower resolution yields different results than a downsized image from a higher resolution?


I am of the same mind. I had actually returned my M11 to the store, but now I own two (black and silver; I thought I'd need a spare after listening to all the complaints here). I have six profiles set up, one for each resolution option in color and in black and white. Using pixel binning helps a lot with color noise. I have a post about it somewhere here and confirmed it actually works (as much as anyone chooses to believe it). I can say that with confidence, as I use that technology a lot with the imaging spacecraft I work with, and I had my image-processing team assess the raw files. I have also conversed with Leica's upper management during the L2 watch launch event (a proud owner of the 20th one made). They put me in touch with the lead engineers to discuss both the e-shutter (and how it can be improved) and the pixel binning. Their arguments were satisfactory based on my prior knowledge. I keep stating that no actual downsizing occurs, but everyone can be their own judge. If nothing else, try using it with vintage lenses such as the Nocti Classic or a 35 pre-FLE, and it will have artistic merit.

I shoot my M9, M264D, M10D, and M11 at 18 MP, and my SL2S, for consistency. My M10M is my wide-angle flash fashion tool. Tests to confirm this are easy to perform with just a little bit (of a lot) of pixel peeping :P. I'd say the best way to convince yourself is to experiment yourself.

TL;DR: Tested it; found it actually works. But it's the internet, so do your own thing and trust your gut. Many people have many opinions. And you know the internet never lies (source: the internet).

Backup slides:

P.S. I can't disclose more than surface-level details, but Leica has delivered for the past 10 years I have worked with their systems, and I am not associated with them at all; in fact, I opted not to be an ambassador for privacy reasons. I shoot Leica as a professional (my second life after space), and other than their slow service and the fact that the M11 battery bottom plate is not finished by Leica but by the factory, I have no complaints. I just wish people would get over fetishizing things like M9 colors or Dream Lenses or whatever else I have missed. Unpopular opinion (my opinion, not a fact): I respect Mandler, but his lenses lack cohesiveness, to put it bluntly. And if a lens deserves to be fetishized, it's the 90 Elmarit (the post-Mandler one) or the 75mm Summarit, but again, an opinion. (This is also biased, the Elmarit being my first Leica lens, and I enjoy the 75mm Summarit's quick and accurate focus throw.)

P.P.S. Not an advertisement, but if you need more information, look through my Instagram's saved stories and you'll see how I respect the scientific process in my conclusions. I am mediocre at best when it comes to photography, but I take pride in my process and in the Leica system.

P.P.P.S. Apologies, I'm in a bad mood due to attempting a routine change.

Santos.


9 hours ago, Santos said:

I am of the same mind. [...] Using pixel binning helps a lot with color noise. I have a post about it somewhere here and confirmed it actually works. [...] TL;DR: Tested it; found it actually works.

I witnessed it when you tested the different resolutions, and yes, the difference is there 😀


Leica has said that any benefits experienced with lower in-camera resolutions can be accomplished by resizing the full-resolution image.

Also, Leica is not doing pixel binning. That term was used initially by marketing but, AFAIK, has since been removed from all Leica material.


On 10/13/2022 at 12:41 AM, augustwest100 said:

... there is little difference between shooting high resolution and downsizing, compared to shooting at lower resolution.

That depends on what you consider a 'little difference', as there is a difference ... albeit just a little one, which usually is hardly worth fussing about. Shooting at high resolution and then downsizing in post-processing is usually slightly better than shooting at a natively lower resolution.

.

On 10/13/2022 at 12:41 AM, augustwest100 said:

... could a future firmware update make the low-light performance better at lower resolutions than higher?

No.

.

On 10/13/2022 at 12:41 AM, augustwest100 said:

Could it be used to create a “virtual” image stabilization ...?

Yes. Well ... in theory, at the expense of effective sensor area.

But it would require a lot of computing capacity, so it probably won't happen for lack of CPU power.

.

On 10/13/2022 at 12:41 AM, augustwest100 said:

... or a “virtual” multi-shot mode?

No.

.

On 10/13/2022 at 12:41 AM, augustwest100 said:

Could a future update change the current consensus? Meaning, people suddenly find the lower resolution yields different results than a downsized image from a higher resolution?

Huh!?

Not sure what you're talking about here. What do you consider the 'current consensus'? And what exactly do you mean by 'lower resolution' and 'downsized image from a higher resolution'?

The M11's sensor has 60 million photosites, hence natively records 60 MP images. You have the option to downsize them to 36 MP or 18 MP in-camera. You may also downsize the native 60 MP images in post-processing using a downscaling algorithm of your choice, which may or may not give better results than Leica's in-camera algorithm.
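
If you want to try the post-processing route, here is a minimal sketch in Python (assuming Pillow is installed; the filename is made up, and the 18.4 MP target merely approximates the M11's smallest in-camera option):

    from PIL import Image
    import math

    TARGET_MP = 18.4e6  # roughly the M11's smallest in-camera resolution option

    img = Image.open("m11_60mp.tif")  # full-resolution export (filename illustrative)
    scale = math.sqrt(TARGET_MP / (img.width * img.height))
    resized = img.resize(
        (round(img.width * scale), round(img.height * scale)),
        Image.LANCZOS,  # one choice of downscaling algorithm; try others and compare
    )
    resized.save("m11_18mp_lanczos.tif")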


Thanks - in all fairness, I did start the thread saying I had no idea what I was talking about! 🤣 I think in retrospect what I was asking has more to do with the processor and whether there is something that can be done to make use of so many pixels other than raw resolution. I'm someone who doesn't really need that kind of resolution, but I like everything else about the camera. I've used the M9, M240, M10 and M10M, and this one feels the best to me (not counting my M-A film body). I also think what I was trying to say is that it would be great if a group of pixels could be more light-sensitive together than separately, but that's just wishful thinking - like having the ability to choose a lower-resolution mode and getting light sensitivity in return. In other words, choose 18MP and get the sensitivity of an SL2S. Choose the next higher resolution and get lower sensitivity, and so on. Maybe it makes no sense in reality…


12 hours ago, 01af said:
On 10/13/2022 at 12:41 AM, augustwest100 said:

Could it be used to create a “virtual” image stabilization ...?

Yes. Well ... in theory, at the expense of effective sensor area.

But it would require a lot of computing capacity, so it probably won't happen for lack of CPU power.

I assume that you are thinking of electronic image stabilization that compensates for hand jitter. It is only possible for video, which the M11 does not support.


5 hours ago, augustwest100 said:

Thanks - in all fairness, I did start the thread saying I had no idea what I was talking about! 🤣 [...] In other words, choose 18MP and get the sensitivity of an SL2S. [...]

Larger or grouped pixels reduce the noise per pixel but not per image. The same effect as larger pixels is accomplished by resizing in post.
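
A toy numpy simulation makes the point (the signal and noise numbers are illustrative, not measured M11 values). Averaging 2x2 blocks is the same arithmetic whether the camera does it or you do it in post, and it cuts the per-pixel noise in half:

    import numpy as np

    rng = np.random.default_rng(0)
    signal, read_noise = 100.0, 5.0  # arbitrary units, illustrative only

    # A simulated "flat" frame: constant signal plus per-pixel noise.
    frame = signal + rng.normal(0.0, read_noise, size=(1000, 1000))

    # 2x2 averaging -- identical whether done in camera or in post.
    binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

    print(frame.std())   # ~5.0 : noise per original pixel
    print(binned.std())  # ~2.5 : noise per grouped pixel, down by sqrt(4)

The image as a whole carries the same captured light either way, which is why the per-image improvement is the same from binning and from resizing.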


2 hours ago, SrMi said:

I assume that you are thinking of electronic image stabilization that compensates for hand jitter.

Yes, exactly.

.

2 hours ago, SrMi said:

It is only possible for video, which the M11 does not support.

No, it's possible for still images as well. You can always divide a long exposure into a series of short exposures and then add them together, eliminating noise and jitter in the process. Astrophotographers do that all the time, and some digital cameras can do a similar thing (hand-held high-res mode, e.g. in the Olympus OM-D E-M1X, OM-D E-M1 III, and OM System OM-1).

Of course, it would require lots of memory and computing capacity. The M11 has the memory but (I guess) not the CPU power, so processing one stabilized image would probably take several seconds. It might still be useful for slow hand-held photography such as landscape, architecture, or night scenes, but not portraiture, sports, or street photography.

And it would not depend on a particularly high-resolving sensor. The principle would work with any pixel count.
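
For the curious, the align-and-average idea fits in a few lines. A rough sketch, assuming Python with numpy, integer-pixel alignment via phase correlation only (no sub-pixel refinement, no rotation), and frames that overlap almost completely:

    import numpy as np

    def shift_between(ref, frame):
        """Estimate the integer (dy, dx) translation of `frame` relative
        to `ref` by phase correlation."""
        r = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
        corr = np.fft.ifft2(r / (np.abs(r) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def align_and_stack(frames):
        """Align each frame to the first, then average: random noise drops
        by roughly sqrt(N) and hand jitter is cancelled in the same pass."""
        ref = frames[0].astype(float)
        acc = ref.copy()
        for f in frames[1:]:
            dy, dx = shift_between(ref, f.astype(float))
            acc += np.roll(f.astype(float), (-dy, -dx), axis=(0, 1))
        return acc / len(frames)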


On 10/13/2022 at 5:08 AM, Santos said:

I am of the same mind. [...] Using pixel binning helps a lot with color noise. I have a post about it somewhere here and confirmed it actually works. [...] TL;DR: Tested it; found it actually works.

Fascinating post, more please!

So in what circumstances / shooting tasks / environments do you use the M11's lower resolutions, @Santos?


3 hours ago, 01af said:

No, it's possible for still images as well. You can always divide a long exposure into a series of short exposures and then add them together, eliminating noise and jitter in the process. [...]

I would not call that stabilization. The point of stabilization is to increase the slowest usable handheld shutter speed, not to reduce noise.


Sigh.

Please read more carefully so I don't have to explain things twice. You can use the same basic process to eliminate—or at least mitigate—noise, or jitter, or both. You might also use it to increase the effective pixel count, but then you cannot use it to reduce jitter at the same time.


On 10/17/2022 at 5:28 PM, augustwest100 said:

Thanks - in all fairness, I did start the thread saying I had no idea what I was talking about! 🤣 [...] In other words, choose 18MP and get the sensitivity of an SL2S. [...]

You say you have no use for the additional resolution, but would really like the camera to be more "light sensitive".  

You're actually not far off from getting what you want. Technically, there is no way to improve the "light sensitivity" of a sensor. This value is measured as Quantum Efficiency, or QE, and it is inherent in the chip itself. It is mostly independent of the physical size of the pixels and is also independent of things like binning. It is usually expressed as a percentage, i.e., what percentage of the photons that strike the sensor are actually stored as charge. In the case of the M11, assuming the sensor is a variant of the Sony IMX455 (likely), the peak QE is somewhere around 90%. Sony doesn't publish the value--they only show relative QEs as you change wavelength--but that value is reasonably close based on other people's empirical testing. That's the peak value, though, not the average value; it varies with the color of the light. Assume an average somewhere around 80% across the visual spectrum. That means that somewhere around 80% of the light that hits the sensor is stored as charge. I'm ignoring some loss in transmission from the Bayer filters for the sake of this discussion, since the vast majority of sensors have essentially the same challenges there. 90% peak QE doesn't leave a lot of room for improvement, frankly, so there is no way to make the sensor significantly more sensitive short of photomultipliers (like a night-vision device).
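
To put a number on how little headroom is left, a back-of-the-envelope sketch (the QE figures are the rough estimates above, not published specs):

    import math

    photons = 1000  # photons striking one pixel (arbitrary)

    # Even a hypothetical perfect sensor (QE = 1.0) collects only
    # fractionally more charge than a ~90% chip: about a sixth of a stop.
    for qe in (0.80, 0.90, 1.00):
        electrons = photons * qe
        stops_below_perfect = math.log2(1.00 / qe)
        print(f"QE {qe:.0%}: {electrons:.0f} e-, "
              f"{stops_below_perfect:.2f} stops below perfect")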

So, why would I say you can still get what you want?  It's because you probably don't really need the sensor to be more sensitive. What you need is to lower the noise. If you get your noise floor low enough, then you can simulate the same effect as a more sensitive sensor or a faster lens. With a lower noise floor you can either apply more gain at the time of exposure (raise the ISO) without the image deteriorating badly, or you can simply move the exposure slider in post processing, thus making your images appear brighter for a given amount of light.

So, how do you lower the noise floor? One way is to lower the resolution, either at the time of image capture as the M11 can, or by down-sampling in post. While there may be some subtle differences between the two for the M11, they will be more similar than different. This is essentially what the SL2S is doing compared to the SL2. Since the SL2S has fewer pixels than the SL2, each pixel is larger. With larger pixels, you get more charge per pixel from a given amount of irradiance (light per unit area per unit time). If the read noise stays fixed, and you get more signal per pixel by using a larger pixel--think of it as a larger bucket vs. a smaller bucket catching more rain drops--then your noise floor drops and you will see a cleaner image at a given gain/ISO. Using the "binning" function in the M11 will do exactly this. It works. It doesn't necessarily work any better than just down-sampling, but there is no question that both will tend to average out much of the noise, resulting in a lower noise floor and allowing you to run higher gain. So, choosing 18MP should, indeed, get a result very similar to--perhaps even a bit better than--the SL2S. The light sensitivity technically doesn't change, but you can run the camera at a higher ISO without feeling like noise has become obtrusive. If you have no use for the extra megapixels, by all means run your camera at 18MP, raise the ISO, and you will get a less noisy result with more dynamic range than at 60MP and the same high ISO.
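
Here is the bucket analogy as a toy model (all electron counts invented; shot noise is the square root of the signal, and read noise adds in quadrature). Summing four pixels after readout exactly doubles the SNR; true charge binning, which pays the read noise only once, does fractionally better. That small read-noise term is the only difference between in-camera binning and down-sampling in post:

    import math

    def snr(signal_e, read_noise_e):
        """Signal over noise, with shot noise (sqrt of the signal, in
        electrons) and read noise added in quadrature."""
        return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

    S, R = 50.0, 3.0  # electrons per small pixel, read noise per readout

    print(snr(S, R))          # ~6.5  : one small pixel
    print(snr(4 * S, 2 * R))  # ~13.0 : four pixels summed after readout
                              #         (four read-noise hits: sqrt(4)*R)
    print(snr(4 * S, R))      # ~13.8 : true charge binning, one readout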

So, why is it that the improvements over the last couple of generations of chips seem to be much smaller than earlier in the evolution of CCD and CMOS sensors? Why can't we run our cameras at ISO 64,000 with perfect results? Interestingly, we have gotten to the point that read noise is often not the limiting factor in low-light performance. Much of the noise that you see at higher ISOs is now what is called "shot noise," meaning it is intrinsic in the light from your subject and is not necessarily an artifact from the camera sensor. Believe it or not, photons from your subject do not enter your lens in a continuous, perfectly even stream. The intensity varies, and with dimmer subjects this can become very significant. I do a lot of astrophotography with telescopes, and one picture I took earlier this year I was really struggling to capture, even with almost seventy hours of exposure time. I did some calculations based on my raw data and found that each pixel was actually receiving an average of one photon every thirty seconds or so. I was taking two-minute sub-exposures. That meant, on average, just four photons of light per pixel. Some sub-exposures would have just one or two photons per pixel; some five or six or seven. It isn't perfectly even--that's shot noise. Getting one photon in 30 s to be above the noise floor is really, really hard, but even that level of signal can be imaged with careful technique and processing.
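
Those four-photon subs are easy to simulate, since photon arrival follows Poisson statistics; the frame-to-frame spread below is the shot noise itself (the mean of 4 comes from the anecdote above, everything else is illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    # ~1 photon per pixel per 30 s -> a mean of 4 photons per 2-minute sub.
    subs = rng.poisson(lam=4.0, size=20)
    print(subs)                     # e.g. 2, 5, 4, 1, 7 ... that spread IS shot noise
    print(subs.mean(), subs.std())  # mean ~4, std ~2 (sqrt(lam) for Poisson)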

So, go for it! Shoot at 18MP if that suits you and don't look back. Your camera won't magically become more sensitive, but since the noise will be lower, you'll be able to run higher ISOs as though it were more sensitive. Same result in the final image.

- Jared


On 10/19/2022 at 8:31 PM, Jared said:

You say you have no use for the additional resolution, but would really like the camera to be more "light sensitive". [...] So, go for it! Shoot at 18MP if that suits you and don't look back.

This is a helpful description, thank you. Out of curiosity, would the same apply to slight motion blur? For example, at a very high resolution and a shutter speed of 1/30, I might get motion blur even with a 35mm lens. At the lower resolution, will the impact of the motion be "reduced" by the downsampling, in the same way it might not be visible at the same shutter speed with the same lens on a camera with fewer megapixels, such as the SL2S (leaving aside image stabilization)?


4 hours ago, augustwest100 said:

This is a helpful description, thank you. Out of curiosity, would the same apply to slight motion blur? [...]

Depends on how you are looking at the image. If you view it at 100%--one camera pixel equals one screen pixel--then, yes, the lower resolution will minimize motion blur. However, that's not how we tend to view pictures. We tend to look at them as an 8x10 print, say, or as an image that "fills the screen" on our mobile device, or at a relatively low resolution on Instagram. In these situations, no, shooting at a lower resolution won't help, at least not directly. However, it may help indirectly by letting you, once again, raise the gain (ISO) and thus use a higher shutter speed to reduce the chances of motion blur.
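
To see the viewing-size effect with numbers (a sketch; the pixel widths only approximate the M11's 60 MP and 18 MP modes, and the smear fraction is invented): the same physical smear covers almost twice as many pixels in the 60 MP file, so it looks worse when one camera pixel maps to one screen pixel, yet it is the identical fraction of an 8x10 print.

    # Same hand shake, expressed two ways. Blur as a fraction of frame width
    # is what a fixed-size print shows; blur in pixels is the 100% view.
    blur_fraction = 0.0005  # hypothetical smear: 0.05% of the frame width

    for mode, width_px in (("60 MP", 9528), ("18 MP", 5272)):
        print(f"{mode}: {blur_fraction * width_px:.1f} px of blur at 100% view")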


When they pixel bin, are they reading the receptors at different gain values?

I'm not sure if this is possible, but one idea would be to read a smaller set of receptors to favor highlight detail and combine the two readings. Kind of like a bracketed gain reading.

Fuji did something like this with their SuperCCD, except they had an additional set of dedicated receptors used to capture highlight information. This information was combined with the main receptors' output and resulted in significantly more highlight retention.

Arri one-ups this by taking two simultaneous bracketed gain readings at each receptor well, resulting in two exposure readings that are combined on the fly, yielding an obscene exposure range. The new Alexa S35 captures a true, fully usable 17 stops of range.
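
The dual-gain idea boils down to a per-pixel merge. A toy sketch of the principle only, not Arri's actual pipeline (real implementations blend the two paths smoothly rather than hard-switching, and every constant here is invented):

    import numpy as np

    def merge_dual_gain(dn_high, dn_low, gain_ratio=16.0, full_scale=4095):
        """Merge two simultaneous readouts of ONE exposure (toy 12-bit DNs).
        The high-gain path has clean shadows but clips early; the low-gain
        path keeps the highlights. Output is on the low-gain scale, with
        log2(gain_ratio) extra stops of usable shadow range."""
        near_clip = 0.9 * full_scale
        scene_from_high = dn_high / gain_ratio  # rescale to the low-gain scale
        scene_from_low = dn_low.astype(float)
        # Trust the high-gain sample wherever it has not (nearly) clipped.
        return np.where(dn_high < near_clip, scene_from_high, scene_from_low)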


2 hours ago, thrid said:

Arri one-ups this by taking two simultaneous bracketed gain readings at each receptor well [...] The new Alexa S35 captures a true, fully usable 17 stops of range.

There's a reason why Arri costs as much as it does.
Now Canon has entered this field with the C70. I wish Leica would follow this path with a future sensor.

