
When will (if ever) Leica M get something like Canon DGO sensor?


The sensor tech in the M10R is not directly related to the M10P. The M10R tech is related to the M10M and S3. Saying bigger pixels are less noisy than smaller ones doesn't always work across technologies. Anyway, pixel binning can provide some advantages that single pixels can't.

I still have my M10, although it deserves a better home. In my tests, including prints, at the same output size the M10R bests the M10 at pretty much any ISO in any lighting in terms of both DR and noise. To me, pixel level comparisons are pointless. It's what I see at output that matters.

I don't know why I bother anyway. I should stay away from threads like this. I couldn't really care less about a bit of luminance noise. I usually add noise rather than reduce it; I have noise reduction set to zero most of the time. Except for scientific applications, I don't think I've ever seen an image that moved me that I thought might be improved by noise reduction. DR can be useful, sometimes. But arguing over noise in a modern sensor is kind of a nothing for me. I do know this: if I look at an image and what I see first is the lack of noise or CA, then that image has failed.

Gordon

4 hours ago, FlashGordonPhotography said:

The sensor tech in the M10R is not directly related to the M10P. The M10R tech is related to the M10M and S3. Saying bigger pixels are less noisy than smaller ones doesn't always work across technologies. Anyway, pixel binning can provide some advantages that single pixels can't.

I still have my M10, although it deserves a better home. In my tests, including prints, at the same output size the M10R bests the M10 at pretty much any ISO in any lighting in terms of both DR and noise. To me, pixel level comparisons are pointless. It's what I see at output that matters.

I don't know why I bother anyway. I should stay away from threads like this. I couldn't really care less about a bit of luminance noise. I usually add noise rather than reduce it; I have noise reduction set to zero most of the time. Except for scientific applications, I don't think I've ever seen an image that moved me that I thought might be improved by noise reduction. DR can be useful, sometimes. But arguing over noise in a modern sensor is kind of a nothing for me. I do know this: if I look at an image and what I see first is the lack of noise or CA, then that image has failed.

Gordon

It is unfortunate that many photographers and forum participants still think that an output (e.g., a printed image) will have less noise if it was shot with a sensor that has larger pixels. Even Karl Taylor has published a YouTube video that shares that misconception. Of course, that does not prevent him or others from producing excellent results.

5 hours ago, SrMi said:

It is unfortunate that many photographers and forum participants still think that an output (e.g., a printed image) will have less noise if it was shot with a sensor that has larger pixels

I will quit thinking this way when small pixels become as clean as larger ones, which will never happen if both use the same technology, I suspect, but I may be wrong.


Actually, one sensor-sized pixel would have the least noise.

8 hours ago, SrMi said:

It is unfortunate that many photographers and forum participants still think that an output (e.g., a printed image) will have less noise if it was shot with a sensor that has larger pixels.

There are one or two things whose clarification would benefit us all. I have no intention or desire to patronise you in any way; I simply want to separate some things from their surroundings so they can be considered on their own.

Firstly, sensors don't have pixels.  The term "pixel" is an abbreviation of 'picture element'.  The sub-elements of a digital camera sensor are photodiodes, which are sometimes called 'photosites' or 'sensels' and each one's sole purpose is to produce an electrical output in response to photons striking its surface.

Secondly, noise.  There is "signal" and there is "noise".  Signal is the part of the photodiode's output that we do want; noise is everything else in that output, which we don't want.  This is why discussion about the 'signal to noise ratio' (SNR) is important: the SNR gives an idea of how much noise there is in relation to the amount of signal, which of course can be very important in producing clear, low-grain photographs.

There are a number of sources of noise in photodiode output: noise can be baked into the materials used; noise can be produced or increased by a rise in temperature; noise can be introduced by neighbouring components; noise arises from the statistical randomness of photon arrival and electron generation (shot noise); noise is produced during signal processing inside the camera; noise is produced by gain (amplifying the signal); and noise can be introduced by randomly arriving cosmic-ray particles such as muons.  All of these sources combine to affect the SNR and to produce what we view as noise in a photograph.

Whether photodiodes with small surface diameters produce outputs with a higher or lower SNR than photodiodes with larger surface diameters will depend on the factors mentioned above.

What is clear is that a photodiode with a large(r) surface diameter will, by the law of averages, be struck by more photons and therefore more signal will be produced and the SNR at that point will be higher.  What happens after that point, for example, on-chip processing and noise-reduction in CMOS sensors, heat generation and heat-sinking, the number of stages of in-camera processing, etc will determine the final level of the SNR from the sensor.
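To put an illustrative number on that, here is a minimal sketch of the idea (my own assumptions: only shot noise plus a fixed read noise are modelled, and the flux, quantum efficiency and read-noise figures are invented for illustration rather than taken from any real sensor):

```python
import math

def photosite_snr(photon_flux, area_um2, read_noise_e=3.0, qe=0.5):
    """Idealised per-photosite SNR under a shot-noise-plus-read-noise model."""
    signal = photon_flux * area_um2 * qe                          # collected electrons
    shot_noise = math.sqrt(signal)                                # Poisson shot noise
    total_noise = math.sqrt(shot_noise ** 2 + read_noise_e ** 2)  # sources add in quadrature
    return signal / total_noise

flux = 100.0  # photons per square micron; purely illustrative
for area in (9.0, 36.0):  # roughly a 3 um pitch vs a 6 um pitch photosite
    print(f"area {area:5.1f} um^2 -> SNR ~ {photosite_snr(flux, area):.1f}")
```

In this toy model, quadrupling the collecting area roughly doubles the per-photosite SNR, which is exactly the 'at that point' caveat: everything downstream can still change the final result.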

It is therefore not reasonable to assume as a rule that sensors with smaller photodiodes produce more noise than sensors with larger photodiodes or vice versa because there are simply too many factors involved, particularly in modern chip technology.  

It will be interesting to see whether '5-nanometre' process technology (or the even smaller nodes now in development) will make a difference to sensor efficiency, SNR, or photography in general (beyond smart-phones).

Pete.

5 hours ago, lct said:

I will quit thinking this way when small pixels become as clean as larger ones, which will never happen if both use the same technology, I suspect, but I may be wrong.

I am specifically not talking about pixels, but about the output (a printed image, an Instagram post). I expect smaller pixels to have more noise than larger pixels at the pixel level.

Edited by SrMi



46 minutes ago, farnz said:

There are one or two things whose clarification would benefit us all. I have no intention or desire to patronise you in any way; I simply want to separate some things from their surroundings so they can be considered on their own.

Firstly, sensors don't have pixels.  The term "pixel" is an abbreviation of 'picture element'.  The sub-elements of a digital camera sensor are photodiodes, which are sometimes called 'photosites' or 'sensels' and each one's sole purpose is to produce an electrical output in response to photons striking its surface.

Secondly, noise.  There is "signal" and there is "noise".  Signal is the part of the photodiode's output that we do want; noise is everything else in that output, which we don't want.  This is why discussion about the 'signal to noise ratio' (SNR) is important: the SNR gives an idea of how much noise there is in relation to the amount of signal, which of course can be very important in producing clear, low-grain photographs.

There are a number of sources of noise in photodiode output: noise can be baked into the materials used; noise can be produced or increased by a rise in temperature; noise can be introduced by neighbouring components; noise arises from the statistical randomness of photon arrival and electron generation (shot noise); noise is produced during signal processing inside the camera; noise is produced by gain (amplifying the signal); and noise can be introduced by randomly arriving cosmic-ray particles such as muons.  All of these sources combine to affect the SNR and to produce what we view as noise in a photograph.

Whether photodiodes with small surface diameters produce outputs with a higher or lower SNR than photodiodes with larger surface diameters will depend on the factors mentioned above.

What is clear is that a photodiode with a large(r) surface diameter will, by the law of averages, be struck by more photons and therefore more signal will be produced and the SNR at that point will be higher.  What happens after that point, for example, on-chip processing and noise-reduction in CMOS sensors, heat generation and heat-sinking, the number of stages of in-camera processing, etc will determine the final level of the SNR from the sensor.

It is therefore not reasonable to assume as a rule that sensors with smaller photodiodes produce more noise than sensors with larger photodiodes or vice versa because there are simply too many factors involved, particularly in modern chip technology.  

It will be interesting to see whether '5-nanometre' process technology (or the even smaller nodes now in development) will make a difference to sensor efficiency, SNR, or photography in general (beyond smart-phones).

Pete.

Good detailed specification of terms, and I agree with everything  ... but you missed my point. 

I was not talking about pixel noise but about image noise (i.e., the output at the same size: a print, etc.).

While everyone enjoys occasional (or frequent) pixel peeping, the final result is what should matter, IMO.

Edited by SrMi

1 hour ago, SrMi said:

... but you missed my point. 

In that case, my apologies.  

Perhaps I struggled because the output, whether to a printed image, a computer screen, or a large monitor, will inevitably be derived from the digital sensor, hence my earlier lengthy post.  You mentioned sensors and pixels, so I assumed you were only referring to digital cameras and not film, and you said "e.g., printed image", so I assumed that other modes of output such as monitors etc. were included.

Since you differentiate between "pixel noise" and "image noise", then what is the source of the image noise that doesn't derive from the sensor? (This is a genuine question.)

It would be helpful if you could define what you mean by an image in this context because the term 'image' can obviously mean a number of different things.

Pete.

 

22 minutes ago, farnz said:

In that case, my apologies.  

Perhaps I struggled because the output, whether to a printed image, a computer screen, or a large monitor, will inevitably be derived from the digital sensor, hence my earlier lengthy post.  You mentioned sensors and pixels, so I assumed you were only referring to digital cameras and not film, and you said "e.g., printed image", so I assumed that other modes of output such as monitors etc. were included.

Since you differentiate between "pixel noise" and "image noise", then what is the source of the image noise that doesn't derive from the sensor? (This is a genuine question.)

It would be helpful if you could define what you mean by an image in this context because the term 'image' can obviously mean a number of different things.

Pete.

 

Your lengthy post was educational as it clarified my approximations (pixels vs. sensels, noise vs. SNR). I liked it. 

The misunderstanding is only about the level at which to evaluate the noise.

In my post, I differentiate between pixel level and image/output level noise (SNR). The pixel-level noise is visible when viewing on-screen at 100%. The image-level noise is observable when looking at images with the same resolution (prints or JPGs for online sharing). 
If you compare a typical 24Mp vs. +40Mp image, one expects to see more noise per pixel (when pixel peeping). But there should be no noise difference when looking at files that have been resized to the same dimensions, e.g., for printing or sharing online.
When printing at the same size, you should not see less noise with 24Mp cameras than with +40Mp cameras.
I have described nothing new, and it has been documented by Sean Reid and many other knowledgeable reviewers. Bill Claff's PDR graphs also demonstrate the described observation.
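The same-output-size argument can also be checked with a toy simulation. This is a minimal sketch under simple assumptions (flat grey target, plain Gaussian per-pixel noise, and a 2x2 box average standing in for the resize); it does not model any real camera or raw-converter pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical sensors photograph the same flat grey patch.
# The "small pixel" sensor has four times the pixel count and twice the
# per-pixel noise (a quarter of the collecting area per pixel).
big_pixels = 0.5 + rng.normal(0.0, 0.02, size=(1000, 1000))
small_pixels = 0.5 + rng.normal(0.0, 0.04, size=(2000, 2000))

# Bring the high-resolution file to the common output size by 2x2 box averaging.
downsized = small_pixels.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print("per-pixel noise, large pixels  :", round(float(big_pixels.std()), 4))
print("per-pixel noise, small pixels  :", round(float(small_pixels.std()), 4))
print("small-pixel file at output size:", round(float(downsized.std()), 4))
```

Averaging four noisy pixels halves the noise standard deviation, so at the common output size the higher-resolution file ends up no noisier than the lower-resolution one in this idealised case.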


In my modest experience, reducing the size of a noisy image does not make it appear cleaner, but I'm no techie at all, so again I may be wrong.

17 minutes ago, SrMi said:

Your lengthy post was educational as it clarified my approximations (pixels vs. sensels, noise vs. SNR). I liked it. 

The misunderstanding is only about the level at which to evaluate the noise.

In my post, I differentiate between pixel level and image/output level noise (SNR). The pixel-level noise is visible when viewing on-screen at 100%. The image-level noise is observable when looking at images with the same resolution (prints or JPGs for online sharing). 
If you compare a typical 24Mp vs. +40Mp image, one expects to see more noise per pixel (when pixel peeping). But there should be no noise difference when looking at files that have been resized to the same dimensions, e.g., for printing or sharing online.
When printing at the same size, you should not see less noise with 24Mp cameras than with +40Mp cameras.
I have described nothing new, and it has been documented by Sean Reid and many other knowledgeable reviewers. Bill Claff's PDR graphs also demonstrate the described observation.

Thank you for clarifying.

"If you compare a typical 24Mp vs. +40Mp image, one expects to see more noise per pixel (when pixel peeping). But there should be no noise difference when looking at files that have been resized to the same dimensions, e.g., for printing or sharing online."

I would expect the magnification/resizing algorithm to have an effect.  To take it to the absurd for illustration: if you took two files of the same resolution, i.e., line pairs per millimetre, where File A is the size of a postage stamp and you magnified it to 2400 px on the long side, and then took File B that is the size of a tennis court and reduced it to 2400 px on the long side, you would expect to see differences between the two 2400 px images, wouldn't you?

Now do the same exercise with a 24 Mpx file and a 40+ Mpx file of the same subject: doesn't it follow that both will inevitably be at least slightly affected by the re-sizing process?  Whether this affects the inherent noise in the two files equally, or not at all, will depend on the algorithm itself, i.e., does it weight compounded (additive) noise arising from the reduction process above definition/clarity (call it resolution for simplicity's sake) and thereby deploy some form of noise reduction, or conversely does it value resolution more highly and skip any noise reduction?  Only the algorithm's coders will know this.

The JPEG compression process will also add its own noise: its lossy quantisation discards and approximates information to squeeze the image into fewer bytes, and whatever it discards or invents no longer represents 'signal' and is therefore noise.

It's also important to remember that noise is randomly generated and randomly spread throughout an image so, say, removing half of the total pixels wouldn't necessarily remove half of the total noise.
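As a rough illustration of how much the resampler itself matters, here is a small sketch (again with synthetic Gaussian noise, and with NumPy-style resampling standing in for Photoshop's actual algorithms, which I don't claim to reproduce): plain decimation keeps one pixel per block and therefore keeps its full noise, while box averaging suppresses it.

```python
import numpy as np

rng = np.random.default_rng(1)
noisy = 0.5 + rng.normal(0.0, 0.04, size=(2000, 2000))  # flat grey patch plus noise

# Method 1: decimation -- keep one pixel per 2x2 block (nearest-neighbour style).
decimated = noisy[::2, ::2]

# Method 2: box averaging -- average each 2x2 block before discarding resolution.
averaged = noisy.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))

print("noise before resizing:", round(float(noisy.std()), 4))
print("after decimation     :", round(float(decimated.std()), 4))  # essentially unchanged
print("after box averaging  :", round(float(averaged.std()), 4))   # roughly halved
```

So whether downsampling reduces visible noise really does depend on what the resampler does with the pixels it discards.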

I doubt that I've effectively answered your point but I hope I've provided some illustration and new pathways to consider. 🙂

Pete.

21 hours ago, farnz said:

Thank you for clarifying.

"If you compare a typical 24Mp vs. +40Mp image, one expects to see more noise per pixel (when pixel peeping). But there should be no noise difference when looking at files that have been resized to the same dimensions, e.g., for printing or sharing online."

I would expect the magnification/resizing algorithm to have an effect.  To take it to the absurd for illustration: if you took two files of the same resolution, i.e., line pairs per millimetre, where File A is the size of a postage stamp and you magnified it to 2400 px on the long side, and then took File B that is the size of a tennis court and reduced it to 2400 px on the long side, you would expect to see differences between the two 2400 px images, wouldn't you?

Now do the same exercise with a 24 Mpx file and a 40+ Mpx file of the same subject: doesn't it follow that both will inevitably be at least slightly affected by the re-sizing process?  Whether this affects the inherent noise in the two files equally, or not at all, will depend on the algorithm itself, i.e., does it weight compounded (additive) noise arising from the reduction process above definition/clarity (call it resolution for simplicity's sake) and thereby deploy some form of noise reduction, or conversely does it value resolution more highly and skip any noise reduction?  Only the algorithm's coders will know this.

The JPEG compression process will also add its own noise: its lossy quantisation discards and approximates information to squeeze the image into fewer bytes, and whatever it discards or invents no longer represents 'signal' and is therefore noise.

It's also important to remember that noise is randomly generated and randomly spread throughout an image so, say, removing half of the total pixels wouldn't necessarily remove half of the total noise.

I doubt that I've effectively answered your point but I hope I've provided some illustration and new pathways to consider. 🙂

Pete.

There can be many variables. What I was thinking of is a simple case:
- downsize +40MPx to 24MPx (a simple PS downsize, or a downsize when generating the JPG);
- generate JPGs or prints the same way from both images.
The +40MPx image should look equal to or better than the 24MPx image, in both noise and detail.
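In code, that simple case could look roughly like the sketch below; this assumes Pillow (9.1 or later for Image.Resampling), and the file names and the 6000 px long edge are hypothetical placeholders rather than anything prescribed:

```python
from PIL import Image

OUTPUT_LONG_EDGE = 6000  # roughly 24MPx at 3:2; an illustrative target only

def export_same_way(src_path, out_path):
    """Downsize to a common long edge and save the JPEG identically for every file."""
    img = Image.open(src_path)
    scale = OUTPUT_LONG_EDGE / max(img.size)
    if scale < 1.0:  # only ever downsize, never upscale
        new_size = (round(img.width * scale), round(img.height * scale))
        img = img.resize(new_size, Image.Resampling.LANCZOS)
    img.save(out_path, "JPEG", quality=92)

# Hypothetical exports from the two cameras, processed identically:
export_same_way("m10r_40mp.tif", "m10r_at_24mp.jpg")
export_same_way("m10_24mp.tif", "m10_at_24mp.jpg")
```

Comparing the two JPEGs (or prints made from them) at the same size is then the like-for-like comparison described above.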

1 hour ago, SrMi said:

There can be many variables. What I was thinking of is a simple case:
- downsize +40MPx to 24MPx (a simple PS downsize, or a downsize when generating the JPG);
- generate JPGs or prints the same way from both images.
The +40MPx image should look equal to or better than the 24MPx image, in both noise and detail.

In theory I agree with you, but I don't think it's quite that simple.

The PS downsizing (downsampling) will inevitably produce artefacts, which, since they are not signal, are therefore noise, so the downsampling process itself will add noise.  Whether this image 'degradation' affects the 24 Mpx and 40+ Mpx files equally, or one more than the other, is unknown.  For example, it might be that because the 40+ Mpx file needs a greater reduction, stronger downsampling is applied and a higher degree of noise gets added (which was not in the picture before the downsampling), making the 40+ Mpx file look the noisier of the two.  Or it might be the other way around.  Without understanding PS's downsampling algorithms, how they treat the artefacts they create, and how those might appear visibly in an image, it's not really possible to know.

I suppose that the point I'm making is that there are a large number of variables in the image train and the noise that's evident in an image might have originated elsewhere but will make the image appear noisier than it actually is.  This might happen to either the 24 Mpx or the 40+ Mpx file and might (wrongly) sway you one way or the other as to which image is noisier.

You'll probably be relieved to hear 😄 that I'm going to leave it there.

Pete.

49 minutes ago, farnz said:

In theory I agree with you, but I don't think it's quite that simple.

The PS downsizing (downsampling) will inevitably produce artefacts, which, since they are not signal, are therefore noise, so the downsampling process itself will add noise.  Whether this image 'degradation' affects the 24 Mpx and 40+ Mpx files equally, or one more than the other, is unknown.  For example, it might be that because the 40+ Mpx file needs a greater reduction, stronger downsampling is applied and a higher degree of noise gets added (which was not in the picture before the downsampling), making the 40+ Mpx file look the noisier of the two.  Or it might be the other way around.  Without understanding PS's downsampling algorithms, how they treat the artefacts they create, and how those might appear visibly in an image, it's not really possible to know.

I suppose that the point I'm making is that there are a large number of variables in the image train and the noise that's evident in an image might have originated elsewhere but will make the image appear noisier than it actually is.  This might happen to either the 24 Mpx or the 40+ Mpx file and might (wrongly) sway you one way or the other as to which image is noisier.

You'll probably be relieved to hear 😄 that I'm going to leave it there.

Pete.

I'll let you have the last word :).

Quote

When will (if ever) Leica M get something like Canon DGO sensor?

When Leica decides it is profitable and orders it from Canon.

