
Video mode on future M


Eastgreenlander



I'm not sure what your argument is. All things being equal, more pixels means better high-ISO performance at the same reproduction size. Shadow noise is influenced most heavily by sensor-generated noise (though shot noise is a big factor as well), which improves with every generation of sensors and every improvement in lithography.

 

I don't want to go round and round on this. May we agree on wait-and-see? I believe that the denser the pixel wells, the more noise and loss of fidelity one has to contend with, because we approach the wavelength of the captured light. The circles of confusion become commingled. We already have diffraction-limited lenses.

 

Whether I live long enough to see this resolved is one question; another remains - a completely different means of capturing light for photography. I think we are in a transition between major technological changes - but I digress. Suddenly I feel like shooting some 8x10 wide open. :eek:

Link to post
Share on other sites

All things being equal, more pixels means better high-ISO performance at the same reproduction size.

It doesn’t. Everything else being equal, more pixels means more noise (and less dynamic range). Depending on print size and viewing distance, the inevitable averaging of neighbouring pixels by the printing process, as well as the limited resolution of human vision, reduces not just the effectively visible resolution but also the noise, so if you are lucky the resulting image doesn’t show more noise than one taken with a lower-resolution sensor. But by the same token it will also not reveal the added resolution delivered by the higher-resolution sensor. You can go for higher resolution (and more noise) or for less noise (and lower resolution), but not both – without increasing the sensor size. Technological advances in sensor design don’t change these fundamental dependencies.
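A quick numerical sketch of that averaging trade-off (array size, noise level, and the checkerboard test pattern are arbitrary assumptions): downsampling by averaging halves the noise, but it wipes out pixel-level detail just as readily.

```python
import numpy as np

# Illustrative only: what averaging neighbouring pixels (as a print or screen
# effectively does) means for noise and for pixel-level detail.
rng = np.random.default_rng(1)

# 1) Noise: averaging 2x2 blocks halves the noise standard deviation.
flat = rng.normal(100.0, 10.0, size=(512, 512))              # flat patch, sigma = 10
flat_binned = flat.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(f"noise sigma: {flat.std():.1f} -> {flat_binned.std():.1f}")          # ~10 -> ~5

# 2) Resolution: a pixel-level checkerboard (standing in for the extra detail
#    of a higher-resolution sensor) averages away to a uniform grey.
checker = (np.indices((512, 512)).sum(axis=0) % 2) * 100.0
checker_binned = checker.reshape(256, 2, 256, 2).mean(axis=(1, 3))
print(f"detail contrast: {np.ptp(checker):.0f} -> {np.ptp(checker_binned):.0f}")  # 100 -> 0
```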

Link to post
Share on other sites

It doesn’t. Everything else being equal, more pixels means more noise (and less dynamic range). Depending on print size and viewing distance, the inevitable averaging of neighbouring pixels by the printing process, as well as the limited resolution of human vision, reduces not just the effectively visible resolution but also the noise, so if you are lucky the resulting image doesn’t show more noise than one taken with a lower-resolution sensor. But by the same token it will also not reveal the added resolution delivered by the higher-resolution sensor. You can go for higher resolution (and more noise) or for less noise (and lower resolution), but not both – without increasing the sensor size. Technological advances in sensor design don’t change these fundamental dependencies.

 

Your first assertion is wrong -- more pixels (samples) improve the SNR of the image by the square root of the increase. Give the same sensor 4x the pixels and the SNR doubles. Wikipedia has a great article on information theory if you're interested.
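Here is the square-root intuition in a minimal sketch (signal level, noise level, and sample counts are made-up numbers): averaging N independent noisy readings of the same quantity improves the SNR by sqrt(N).

```python
import numpy as np

# Averaging N independent samples of the same signal: SNR grows as sqrt(N).
rng = np.random.default_rng(0)
signal, sigma = 100.0, 10.0                       # true value and per-sample noise (assumed)
for n in (1, 4, 16):
    samples = rng.normal(signal, sigma, size=(200_000, n))
    estimate = samples.mean(axis=1)               # average the n samples
    print(f"N = {n:2d}: SNR = {signal / estimate.std():.1f}")   # ~10, ~20, ~40
```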

 

Your second is wrong as well. Clearly well size has not been a factor in dynamic range for years -- the D3x has 2 stops better DR than the D3s despite having twice the pixels, and many P&S cameras (with pixel densities equivalent to ~200 MP full frame or higher) have 11 stops of DR or more.

 

As for the third, you're 100% right. No, you don't get both more resolution and better high ISO with higher megapixel counts. You get more pixels for bigger prints when you do studio or low-ISO work, and you get more accurate, less noisy images at high ISO.

 

I think considering your concept of the 'averaging process' will help you see the logical error you're making. How would one get less detail by averaging four pixels than by having one large one in their place? And what if one doesn't average the pixels but keeps all four separate?

 

The only legitimate objections to increasing MP counts are wafer yields and processing/storage/readout problems.

Link to post
Share on other sites

Your first assertion is wrong -- more pixels (samples) improve the SNR of the image by the square root of the increase. Give the same sensor 4x the pixels and the SNR doubles. Wikipedia has a great article on information theory if you're interested.

 

Your second is wrong as well. Clearly well size has not been a factor in dynamic range for years -- the D3x has 2 stops better DR than the D3s despite having twice the pixels, and many P&S cameras (with pixel densities equivalent to ~200 MP full frame or higher) have 11 stops of DR or more.

 

As for the third, you're 100% right. No, you don't get both more resolution and better high ISO with higher megapixel counts. You get more pixels for bigger prints when you do studio or low-ISO work, and you get more accurate, less noisy images at high ISO.

 

I think considering your concept of the 'averaging process' will help you see the logical error you're making. How would one get less detail by averaging four pixels than by having one large one in their place? And what if one doesn't average the pixels but keeps all four separate?

 

The only legitimate objections to increasing MP counts are wafer yields and processing/storage/readout problems.

 

1)

Why don't we all shoot with P&S cameras whose pixel density equals 200 MP on full frame...?

 

2)

Why has the most profitable camera producer in the world over the last 10 years not launched a 35 or 45 MP sensor in their new 1D X, but only 18 MP, down from the 21 MP of the 1Ds III?

Link to post
Share on other sites

Why has the most profitable camera producer in the world over the last 10 years not launched a 35 or 45 MP sensor in their new 1D X, but only 18 MP, down from the 21 MP of the 1Ds III?

… considering that Canon has proven able to cram 120 MP onto an APS-H sensor, see Canon develops world's first 120 megapixel APS-H CMOS sensor: Digital Photography Review. An impressive demonstration of technological prowess; yet there is still no product actually employing this sensor.

Link to post
Share on other sites


Your first assertion is wrong -- more pixels (samples) improve the SNR of the image by the square root of the increase. Give the same sensor 4x the pixels and the SNR doubles. Wikipedia has a great article on information theory if you're interested. [....]

 

True only in your dreams. It may look like an attractive mathematical view, but IRL, it does not work. Take into consideration the size of the CoC, the density of the pixels, and the current Bayer averaging: more pixels create more artifacts.

 

Your second is wrong as well. Clearly well size has not been a factor in dynamic range for years -- the D3x has 2 stops better DR than the D3s despite having twice the pixels, and many P&S cameras (with pixel densities equivalent to ~200 MP full frame or higher) have 11 stops of DR or more.

 

That is due to binning of data to make up for the CoC confusion and the Bayer issues.

Link to post
Share on other sites

… considering that Canon has proven able to cram 120 MP onto an APS-H sensor, see Canon develops world's first 120 megapixel APS-H CMOS sensor: Digital Photography Review. An impressive demonstration of technological prowess; yet there is still no product actually employing this sensor.

 

Because no machine can print it!

Link to post
Share on other sites

True only in your dreams. It may look like an attractive mathematical view, but IRL, it does not work.

Even the theory doesn’t work out. Averaging the signals of several sensor pixels does reduce noise, that much is true. But when you replace one big sensor pixel with several smaller ones, the smaller pixels have a reduced signal-to-noise ratio to begin with. In the end the gain achieved by averaging wouldn’t even suffice to compensate for the loss incurred by shrinking the pixels.

 

Suppose there is a square area on the sensor measuring 6.8 microns by 6.8 microns. If the sensor in question happens to be the sensor of the M9, this area would harbour a single pixel. There is a certain number of photons hitting that area during exposure, and a certain percentage of these photons gets absorbed, resulting in photoelectrons that are stored in a potential well. Its capacity is limited by the pixel size; when it’s full, it’s full, and no further electrons can be collected. Both noise and dynamic range crucially depend on this number (and a few other factors of course) and thus indirectly on the pixel area.

If you divide the area into four squares measuring 3.4 microns by 3.4 microns each and build a sensor with four times as many of these smaller pixels, each pixel could (at most) store 1/4 of the electrons the original, bigger pixel could store – probably less, since more pixels implies more infrastructure and less area that can be used to collect light and to store electrons. Together the four sensor pixels would probably collect fewer electrons than the large pixel could, resulting in more noise and less dynamic range. At best the number of electrons would stay the same, but there would be no net gain whatsoever.
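To put rough numbers on this, here is a minimal sketch; the photon count, quantum efficiency, and read noise are assumed, illustrative values, not measured data.

```python
import math

# One 6.8 µm pixel vs. four 3.4 µm pixels covering the same area (toy numbers).
PHOTONS_ON_AREA = 40_000      # photons hitting the 6.8 µm x 6.8 µm patch (assumed)
QE = 0.5                      # quantum efficiency (assumed)
READ_NOISE_E = 3.0            # read noise per pixel, in electrons (assumed)

def snr(signal_e, read_noise_e):
    """SNR of one pixel: signal over shot noise and read noise added in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

big_signal = PHOTONS_ON_AREA * QE                  # the large pixel collects everything
big_snr = snr(big_signal, READ_NOISE_E)

small_signal = big_signal / 4                      # each small pixel gets (at best) 1/4
binned_snr = snr(small_signal, READ_NOISE_E) * math.sqrt(4)   # average the four pixels

print(f"one large pixel    : SNR = {big_snr:.1f}")
print(f"four binned pixels : SNR = {binned_snr:.1f}")   # slightly worse: read noise is paid four times
```

With zero read noise the two come out identical; any per-pixel read noise or loss of fill factor tips the balance toward the larger pixel.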

Link to post
Share on other sites

How would one get less detail by averaging four pixels than by having one large one in their place?

No, you would get exactly the same detail. And you would get the same amount of noise at best, though due to technical constraints there would usually be more noise.

 

You are trying to have your cake and eat it – you shrink pixels so you can fit more pixels in a given area, but totally ignore the loss of signal-to-noise ratio and dynamic range this incurs. Since you mistakenly believe the smaller pixels would behave just like the larger one, you hope that you could get higher resolution without the price (paid in noise). But short of using magic to battle the laws of physics that won’t happen. A reductio ad absurdum of your idea would consist in shrinking the sensor pixels to a point where each pixel could store just one electron. In that case you would get a black-and-white image – black if there was no electron and white if there was one. And it would be a terribly noisy b&w image at that.
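The reductio is easy to simulate (the exposure level is an arbitrary assumption): with one-electron wells every pixel can only report 0 or 1.

```python
import numpy as np

# Pixels whose wells hold at most one electron can only answer "0" or "1".
rng = np.random.default_rng(2)
mean_electrons = 0.5                                  # per tiny pixel, assumed exposure
photoelectrons = rng.poisson(mean_electrons, size=(8, 8))
binary_image = np.clip(photoelectrons, 0, 1)          # the well saturates at one electron
print(binary_image)                                   # a noisy black-and-white dither
```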

Link to post
Share on other sites

Even the theory doesn’t work out. Averaging the signals of several sensor pixels does reduce noise, that much is true. But when you replace one big sensor pixel with several smaller ones, the smaller pixels have a reduced signal-to-noise ratio to begin with. In the end the gain achieved by averaging wouldn’t even suffice to compensate for the loss incurred by shrinking the pixels.

 

You're missing the entire point of my reference to information theory. Increasing the number of samples (by using more, smaller pixels) increases the accuracy of the sampled image.

 

Both noise and dynamic range crucially depend on this number (and a few other factors of course) and thus indirectly on the pixel area.
Signal depends on the size of the photowell, but not to an extent that overrides the gains from smaller pixels. My references to the D3x vis-à-vis the D3s and to P&S cameras with regard to DR have yet to be answered -- clearly well size no longer dictates DR on modern sensors.

 

Together the four sensor pixels would probably collect fewer electrons than the large pixel could, resulting in more noise and less dynamic range. At best the number of electrons would stay the same, but there would be no net gain whatsoever.
I urge you to read the information theory article on Wikipedia.

 

No, you would get exactly the same detail. And you would get the same amount of noise at best, though due to technical constraints there would usually be more noise.

To put what you just said in context: by that logic, a sensor with just one large pixel would collect exactly the same detail as one with 16 MP. You're 100% wrong.

 

You are trying to have your cake and eat it – you shrink pixels so you can fit more pixels in a given area, but totally ignore the loss of signal-to-noise ratio and dynamic range this incurs.
I thought I explicitly mentioned that you can't have your cake and eat it too. To quote myself:
No, you don't get both more resolution and better high ISO with higher megapixel counts. You get more pixels for bigger prints when you do studio or low-ISO work, and you get more accurate, less noisy images at high ISO.

 

Since you mistakenly believe the smaller pixels would behave just like the larger one, you hope that you could get higher resolution without the price (paid in noise).
I never said nor implied that. Four smaller pixels, each 1/4 the size of a large pixel, produce a more detailed image with, at worst, the same level of noise.

 

But short of using magic to battle the laws of physics that won’t happen. A reductio ad absurdum of your idea would consist in shrinking the sensor pixels to a point where each pixel could store just one electron. In that case you would get a black-and-white image – black if there was no electron and white if there was one. And it would be a terribly noisy b&w image at that.
There's actually a fascinating lecture by the inventor of the CMOS image sensor on the state and future of digital imaging. The concept you're looking for (individual photons) is called quantum dot imaging. It's the future of digital imaging according to him. See CMOS sensor inventor Eric Fossum discusses digital image sensors: Digital Photography Review if you have the time. He also brings up some good arguments against more megapixels that you've not brought up yet (CoC and diffraction), but you'll note noise and DR are not among them.
Link to post
Share on other sites

2)

 

Why has the most profitable camera producer in the world over the last 10 years not launched a 35 or 45 MP sensor in their new 1D X, but only 18 MP, down from the 21 MP of the 1Ds III?

 

I am not arguing any of the points in this thread, but I don't think we can assume from the introduction of one camera that Canon will not be offering cameras with higher pixel counts. Canon makes various cameras for various roles and even seems to be expanding the range with a high-end EOS video camera.

 

Besides, you can also look at this as a step up from the 1D Mark IV, not a step down from the 1Ds III. I will say that when it comes to sensor and processing improvements in resolution, high-ISO noise, and dynamic range, there has been steady (if sometimes gradual) progress in all aspects, and I don't see why this won't continue for a while at least.

Link to post
Share on other sites

Four smaller pixels, each 1/4 the size of a large pixel, produce a more detailed image with, at worst, the same level of noise.

You’ll have to argue with nature, not me. Good luck with that.

 

There's actually a fascinating lecture by the inventor of the CMOS image sensor on the state and future of digital imaging.

Yeah, I know. I’ve watched it.

 

The concept you're looking for (individual photons) is called quantum dot imaging. It's the future of digital imaging according to him.

It might be. But it isn’t really the same concept. What Fossum talks about is a photon counter – a sensor that doesn’t care about storing photoelectrons, neither one (as in my example) nor tens of thousands (as in current sensor designs), but just enables us to count incoming photons (if we had the necessary technology, which as of now we do not). Such a sensor would have an arbitrary ISO sensitivity and an unlimited dynamic range, and it would support really small pixels as well. It is something like the holy grail of sensor design.

Note that even if such a sensor were a reality, we wouldn’t milk its many pixels for all the resolution they are worth; rather we would usually use software binning of multiple pixels to reduce noise (at the expense of also reducing resolution). Even a photon counter would still suffer from shot noise; there’s no way to avoid it, since shot noise is inherent in light itself (if you increase the number of sensor pixels in a given area by some factor, the shot noise relative to each pixel's signal increases by the square root of that factor).
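A small Monte Carlo sketch of that last point (the photon count per patch is an arbitrary assumption): splitting the same light over more pixels degrades the per-pixel SNR by the square root of the pixel count, while binning them in software recovers the single-pixel SNR at the cost of resolution.

```python
import numpy as np

# Shot noise for an ideal photon counter: per-pixel SNR vs. software-binned SNR.
rng = np.random.default_rng(0)
MEAN_PHOTONS_PER_PATCH = 10_000                     # light falling on a fixed patch (assumed)
for n_pixels in (1, 4, 16, 64):
    counts = rng.poisson(MEAN_PHOTONS_PER_PATCH / n_pixels, size=(100_000, n_pixels))
    per_pixel_snr = counts.mean() / counts.std()    # drops as sqrt(n_pixels)
    binned = counts.sum(axis=1)                     # software binning of the whole patch
    binned_snr = binned.mean() / binned.std()       # stays at sqrt(MEAN_PHOTONS_PER_PATCH)
    print(f"{n_pixels:3d} pixels: per-pixel SNR {per_pixel_snr:6.1f}, binned SNR {binned_snr:6.1f}")
```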

 

Still, you won’t find Fossum arguing in his lecture that – given sensor technology as we know it – arbitrarily increasing the pixel count is a good thing. Quite the opposite, actually. According to Fossum, the trend toward ever-increasing megapixel figures is driven by marketing, not engineering.

Link to post
Share on other sites

I am not arguing any of the points in this thread, but I don't think we can assume from the introduction of one camera that Canon will not be offering cameras with higher pixel counts. Canon makes various cameras for various roles and even seems to be expanding the range with a high-end EOS video camera.

Indeed it is quite possible that the EOS 5D Mark III will sport a higher resolution sensor. It has happened with compact cameras before: the Canon G12, Nikon P7100, and Panasonic LX5 (aka Leica D-Lux 5) have 10 MP, the Olympus XZ-1 11 MP, and the Fuji X10 12 MP. They all share a pixel pitch of about 2 microns, so the differences in resolution are due to the different sensor sizes, ranging from 1/1.7" to 2/3". You can get much cheaper models from those vendors offering a whopping 16 MP (with a pixel pitch below 1.4 microns). I wouldn’t be surprised if Canon introduced a midrange DSLR with a higher MP count than their top-of-the-line model. Anyone still thinking high MP figures are cool should think again.
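The pixel-pitch figures are easy to check with back-of-the-envelope arithmetic; the sensor dimensions below are approximate nominal values, not manufacturer specifications.

```python
import math

# Approximate pixel pitch from (approximate) sensor dimensions and pixel count.
sensors = {
    "1/1.7-inch, 10 MP": (7.6, 5.7, 10e6),
    "2/3-inch,   12 MP": (8.8, 6.6, 12e6),
    "1/2.3-inch, 16 MP": (6.2, 4.6, 16e6),
}
for name, (width_mm, height_mm, pixels) in sensors.items():
    pitch_um = math.sqrt(width_mm * height_mm / pixels) * 1000   # mm -> microns
    print(f"{name}: ~{pitch_um:.1f} µm pitch")   # ~2.1, ~2.2, ~1.3 µm
```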

Link to post
Share on other sites

Indeed it is quite possible that the EOS 5D Mark III will sport a higher resolution sensor.

 

I think Canon and Nikon will each have a similar two-camera strategy: Nikon with a 36 MP D800 and a 16 MP D4, Canon with the 18 MP 1D X and the ??MP 5D III or similar. They probably figure a certain number of photographers will buy both models if there is enough differentiation. I expect they will be able to show higher resolution with the higher-pixel-count cameras to encourage purchases by knowledgeable photographers who could benefit from this.

Link to post
Share on other sites

I think Canon and Nikon will each have a similar two-camera strategy: Nikon with a 36 MP D800 and a 16 MP D4, Canon with the 18 MP 1D X and the ??MP 5D III or similar. They probably figure a certain number of photographers will buy both models if there is enough differentiation. I expect they will be able to show higher resolution with the higher-pixel-count cameras to encourage purchases by knowledgeable photographers who could benefit from this.

 

I am convinced that we will not see 24 x 36 mm sensors with more than 24 million pixels.

Link to post
Share on other sites

I am convinced that we will not see 24 x 36 mm sensors with more than 24 million pixels.

 

I guess that will depend on whether you have your eyes shut or open when someone puts a D800 in front of you next year. We'll all know one way or the other fairly soon, whatever our opinions.

 

Besides the reports that this camera already exists and that production is just being held up by the flooding in Thailand... can you explain why Sony would make a 24 MP APS-C sensor but not take the larger sensor past 24 MP?

 

Here is what is expected. Maybe it is wrong, but it seems logical to me. Based on what Sony is doing with sensors in its APS-C cameras, I think some FF cameras will need a higher pixel count to attract enough buyers to upgrade rather than buy a Sony 24 MP APS-C instead.

 

http://nikonrumors.com/2011/11/19/ladies-and-gentleman-i-present-to-you-the-nikon-d800.aspx/#more-26484

Link to post
Share on other sites

We're already at the resolution limit of 35mm lenses/sensors with 18-21MP... Bumping it to 24-36MP won't help things any, I'd say. There's a reason MF still exists.

 

A lot of MF lenses are being stressed by MF digital backs too. I used a Mamiya 28mm on a Phase One back and it was not so wonderful. Some of the newer lenses for 35mm are better and more will be coming. 36MP is actually just a small step up from 24MP and is probably the minimum if one hopes to get a noticeable increase in resolution from those lenses that may be up to the task. I'm not going to buy an Alpa, Phase One back, and a set of Rodenstocks because I am spoiled by the convenience and speed of shooting architecture on 35mm. I also don't want to deal with the LCC work. It would be about as easy for me to shoot several sections on 35mm and stitch if I needed more detail.

 

The 17 and 24 Canon TS-E lenses are extremely good and I'd like to see if using them with 36 MP or so gives any additional detail. (Maybe it will.) If a 36 MP Canon is not out before long, I'll probably have a NEX-7 to test these lenses on and see via extrapolation what a 55 MP FF sensor could do for them. You might be right and it won't be worth it. But I'd like to see it for myself. I remember when 21 MP MF backs were all the rage. They are up to 80 MP now.

 

Hopefully by then this adapter will be further developed and available.

 

Canon lens aperture control from Sony NEX camera body - YouTube

Link to post
Share on other sites

A lot of MF lenses are being stressed by MF digital backs too.

There is the Hasselblad H4D-200MS, which does 200 MP in multi-shot mode. Neither noise nor dynamic range is an issue, since the pixel pitch is still a moderate 6.0 microns, but the required contrast at extremely high spatial frequencies is certainly challenging.

Link to post
Share on other sites
