
Has the day of super-high pixel counts passed?


wlaidlaw

Recommended Posts


If a twelve-year-old displayed the level of mathematics in that online photographer article in any maths exam, it would be a fail. There are a number of basic errors in it.

 

The SL sensor has 6000 pixels along a 36mm length, which equals, at the most basic level, 166 pixels per mm. So in order to have perfect lens resolution you need 166 line pairs per mm, so that light or dark or any stage between can be detected for each pixel. This is at about the level of current top-class lenses like the best Leica/Zeiss/Nikon/Canon primes and possibly the 24-90 zoom, if Leica's sales blurb is to be believed. The Canon 5DS sensor has nearly 9000 pixels along a 36mm length, which is 250 pixels per mm. Nothing other than specialist photomicrography lenses currently attains 250 lp/mm at the centre of the image, let alone in the corners.

166 pixels per mm in a Bayer array would allow the capture of around 55 lp/mm coming from the lens. It takes 2 pixels to represent a line pair, not one. Plus, the Bayer matrix requires some interpolation that, based on empirical results, degrades resolution by a factor of roughly 1.5 (depending on the subject). That suggests that higher megapixel counts could still capture additional detail under optimal conditions. Of course, whether those optimal conditions exist in any given real-world situation is debatable. Subject motion blur, diffraction, imprecise focus, shutter shock/camera motion blur, lens aberrations, curved focal planes, and depth of field will all introduce some convolution into images, so which subjects and situations benefit from additional resolution will be quite variable. I believe this is why discussions on this topic are so contentious. Experiences will be very different from one photographer to another. For most of my images 24 megapixels does not limit my technical image quality. That would be true for the users of the D5 as well (reportage, sports, etc.), which is why Nikon is more worried about higher frame rates and faster autofocus than higher megapixel counts on its professional series of cameras. Landscape photographers shooting at long distances from tripods at optimum apertures and making big prints will necessarily have a different set of priorities.
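For anyone who wants to check those figures, here is the back-of-envelope arithmetic as a few lines of Python. It is only a sketch, using the pixel counts quoted above and the rough 1.5x demosaicing penalty as an assumed rule of thumb:

# Pixels per mm -> Nyquist limit in lp/mm -> rough figure after Bayer demosaicing.
# Illustrative only; the 1.5 factor is the empirical rule of thumb mentioned above.
def lp_per_mm(pixels_across, sensor_width_mm=36.0, bayer_factor=1.5):
    px_per_mm = pixels_across / sensor_width_mm
    nyquist = px_per_mm / 2.0          # two pixels are needed per line pair
    return px_per_mm, nyquist, nyquist / bayer_factor

for name, px in (("Leica SL (6000 px across)", 6000), ("Canon 5DS (nearly 9000 px across)", 9000)):
    px_mm, nyq, demosaiced = lp_per_mm(px)
    print(f"{name}: {px_mm:.1f} px/mm, Nyquist {nyq:.1f} lp/mm, roughly {demosaiced:.1f} lp/mm after demosaicing")

For the SL that works out to about 83 lp/mm at Nyquist and around 55 lp/mm after the assumed demosaicing penalty, which is where the figure above comes from.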

 

The math in the linked article is correct. I think you missed the additional links within the article to more detailed explanations.

 

- Jared

Link to post
Share on other sites


I believe this is why discussions on this topic are so contentious. Experiences will be very different from one photographer to another.

It's also about expectation and the way the images are viewed. Last year I spent some time with a scientist taking photos. I was using a Canon 1DS3, he was using a Nikon D800. He was not as satisfied with his images, especially when he compared them with mine, as mine appeared to show more fine detail. In fact the problem was that he was shooting at f/16 to f/22 and I was shooting at f/11 to f/16. Comparing both at 100% on a computer screen seemed to indicate the Canon files were more detailed. When he opened his aperture up, things corrected themselves to an extent, although I would have been hard pushed to decide which files contained more fine detail IF the subject was viewed at the same size on screen. Not a particularly precise test, to be sure, but one which illustrates the problem of comparison, increased megapixels and specific requirements.

 

And I fully agree that the priorities of photographers vary, but I'm still wary of thinking in terms of 'optimum' apertures, because images driven by the quest for technical perfection often fail to serve the requirements of their content.

Link to post
Share on other sites

166 pixels per mm in a Bayer array would allow the capture of around 55 lp/mm coming from the lens. It takes 2 pixels to represent a line pair, not one. Plus, the Bayer matrix requires some interpolation that, based on empirical results, degrades resolution by a factor of roughly 1.5 (depending on the subject). That suggests that higher megapixel counts could still capture additional detail under optimal conditions. Of course, whether those optimal conditions exist in any given real-world situation is debatable. Subject motion blur, diffraction, imprecise focus, shutter shock/camera motion blur, lens aberrations, curved focal planes, and depth of field will all introduce some convolution into images, so which subjects and situations benefit from additional resolution will be quite variable. I believe this is why discussions on this topic are so contentious. Experiences will be very different from one photographer to another. For most of my images 24 megapixels does not limit my technical image quality. That would be true for the users of the D5 as well (reportage, sports, etc.), which is why Nikon is more worried about higher frame rates and faster autofocus than higher megapixel counts on its professional series of cameras. Landscape photographers shooting at long distances from tripods at optimum apertures and making big prints will necessarily have a different set of priorities.

 

The math in the linked article is correct. I think you missed the additional links within the article to more detailed explanations.

 

- Jared

 

I had this discussion with a couple of Kodak scientists who were at the presentation by one of the designers from Zeiss of their ZM lenses and the Ikon camera. I thought, as you do, that one line equalled one pixel and a line pair equalled two pixels. The discussion came up when Zeiss were saying that the 50 ZM Planar could, at its optimum aperture (f/5.6 from memory) and on the then-new Kodak chromogenic BW400CN V.2 pull-rated at 200 ISO, resolve up to 200 lp/mm. This was said to be equivalent to 34.5 MP. I said that surely it was equivalent to 8.6 MP (1 pixel = 0.5 line pairs). Both the other parties were insistent that you needed one line pair per pixel for equivalence of resolution. Although I have a general physics degree and wrote my thesis on lens and camera testing, that was way before digital cameras were conceived, so I conceded that the scientists from Zeiss and Kodak would know a lot more than I did. Hence my arguments above.

 

BTW, those like me who are fond of chromogenic B&W film and of the convenience, in out-of-the-way places, of getting it developed in C-41 will be saddened to learn that Kodak stopped production last year and stocks have now run out. I always preferred the Kodak to Ilford XP2, whose blacks seemed dark grey to me, meaning you had to fiddle around with black levels when scanning.

Link to post
Share on other sites

It's also about expectation and the way the images are viewed. Last year I spent some time with a scientist taking photos. I was using a Canon 1DS3, he was using a Nikon D800. He was not as satisfied with his images, especially when he compared them with mine, as mine appeared to show more fine detail. In fact the problem was that he was shooting at f/16 to f/22 and I was shooting at f/11 to f/16. Comparing both at 100% on a computer screen seemed to indicate the Canon files were more detailed. When he opened his aperture up, things corrected themselves to an extent, although I would have been hard pushed to decide which files contained more fine detail IF the subject was viewed at the same size on screen. Not a particularly precise test, to be sure, but one which illustrates the problem of comparison, increased megapixels and specific requirements.

 

And I fully agree that the priorities of photographers vary, but I'm still wary of thinking in terms of 'optimum' apertures, because images driven by the quest for technical perfection often fail to serve the requirements of their content.

 

 

I believe 'optimum' aperture in this context means simply a balance between improved lens performance as one stops down (typically reduced spherical aberration and light fall-off) and the increasing effects of diffraction. With both my 24 megapixel cameras I can get better results at f/8, for example, than at f/11. Obviously, this is completely ignoring what might be required in terms of shutter speed, ISO, depth of field, etc. Those will be completely different from picture to picture, so making any generalizations would not be useful.
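For what it's worth, the diffraction half of that balance can be roughed out from the standard cutoff for an aberration-free lens, 1/(wavelength x f-number). A small sketch follows; the 550 nm green-light wavelength is my assumption, and real lenses and MTF criteria give lower usable figures:

# Diffraction cutoff frequency (where MTF reaches zero) for an ideal lens: 1 / (wavelength * N).
# Illustrative only; usable resolution (e.g. MTF50) is considerably lower than the cutoff.
wavelength_mm = 550e-6                      # 550 nm, assumed green light, in millimetres

for f_number in (5.6, 8, 11, 16, 22):
    cutoff_lp_mm = 1.0 / (wavelength_mm * f_number)
    print(f"f/{f_number}: diffraction cutoff ~ {cutoff_lp_mm:.0f} lp/mm")

That gives roughly 325, 227, 165, 114 and 83 lp/mm respectively, which is one way to see why f/8 can out-resolve f/11 on a 24 megapixel sensor and why the f/16-22 anecdote above gave up fine detail.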

 

- Jared

Link to post
Share on other sites

I had this discussion with a couple of Kodak scientists who were at the presentation by one of the designers from Zeiss of their ZM lenses and the Ikon camera. I thought, as you do, that one line equalled one pixel and a line pair equalled two pixels. The discussion came up when Zeiss were saying that the 50 ZM Planar could, at its optimum aperture (f/5.6 from memory) and on the then-new Kodak chromogenic BW400CN V.2 pull-rated at 200 ISO, resolve up to 200 lp/mm. This was said to be equivalent to 34.5 MP. I said that surely it was equivalent to 8.6 MP (1 pixel = 0.5 line pairs). Both the other parties were insistent that you needed one line pair per pixel for equivalence of resolution. Although I have a general physics degree and wrote my thesis on lens and camera testing, that was way before digital cameras were conceived, so I conceded that the scientists from Zeiss and Kodak would know a lot more than I did. Hence my arguments above.

 

BTW, those like me who are fond of chromogenic B&W film and of the convenience, in out-of-the-way places, of getting it developed in C-41 will be saddened to learn that Kodak stopped production last year and stocks have now run out. I always preferred the Kodak to Ilford XP2, whose blacks seemed dark grey to me, meaning you had to fiddle around with black levels when scanning.

 

Like you, I majored in physics in college. This subject comes up in astronomy forums all the time. I don't agree at all that a single pixel can represent a line pair. In fact, even saying only two pixels are required to represent a line pair is a little generous, as it assumes only one dimension; things get a little more complex over two dimensions, where line pairs may not be oriented along nice columns and rows.

 

If I had lines measured at values of 0/255/0/255/0 across my array and I continued to decrease the space between them until each line pair covered exactly one pixel, I would simply measure 127/127/127 etc., and my MTF would have dropped to zero. This is a slight oversimplification, but it is basically accurate. It's easy enough to establish this with fast Fourier transforms if you want to try it out. ImageJ makes this pretty easy to do.
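If you would rather see it in code than in ImageJ, here is a minimal NumPy sketch of the same idea (made-up grid and pixel sizes, one dimension only, pixels aligned with the lines):

# Build a black/white line-pair chart on a fine grid, then average it over
# hypothetical pixels of different widths and see how much contrast survives.
import numpy as np

FINE = 1200                                       # fine sub-steps across the chart
pattern = ((np.arange(FINE) // 30) % 2) * 255.0   # lines 30 sub-steps wide, so a 60-step line pair

def sample(pattern, pixel_width):
    """Average the fine pattern over pixels of the given width (in sub-steps)."""
    n = len(pattern) // pixel_width
    return pattern[:n * pixel_width].reshape(n, pixel_width).mean(axis=1)

for pixel_width, label in ((30, "2 pixels per line pair"), (60, "1 pixel per line pair")):
    s = sample(pattern, pixel_width)
    contrast = (s.max() - s.min()) / 255.0
    print(f"{label}: surviving contrast ~ {contrast:.2f}")

With two pixels per line pair the full contrast survives; with one pixel per line pair every pixel averages to 127.5 and the contrast collapses, which is the 127/127/127 result described above.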

 

All that being said, I believe you went in the wrong direction when you were discussing this with the scientists and engineers from Kodak and Zeiss. You would need TWICE the linear resolution on a digital sensor to capture the image from the lens, not HALF. So I would argue that a lens that produces 200 lp/mm would need a 138-megapixel 36x24mm chip to accurately capture all the detail provided, rather than 8.6 megapixels. Obviously, this is ignoring the interpolation required by a Bayer filter, but otherwise it should be accurate to a first approximation. Net result, I think the Zeiss and Kodak engineers were probably understating the resolution produced by the lens when translating it to megapixels, but it also depends on where you put your MTF cutoff for resolution. Is it when contrast drops to 10%? Or were they using a different metric? What was the actual shape of the MTF curve? It likely wasn't following the diffraction limit of an ideal lens. These types of complexities could easily affect the results. They may have just decided to be conservative in converting from lp/mm to megapixel equivalent.

 

- Jared

Link to post
Share on other sites

The discussion came up when Zeiss were saying that the 50 ZM Planar could, at its optimum aperture (f/5.6 from memory) and on the then-new Kodak chromogenic BW400CN V.2 pull-rated at 200 ISO, resolve up to 200 lp/mm. This was said to be equivalent to 34.5 MP. I said that surely it was equivalent to 8.6 MP (1 pixel = 0.5 line pairs). Both the other parties were insistent that you needed one line pair per pixel for equivalence of resolution.

 

 

First, there is an important assumption that should have been stated: we are talking about monochrome film and monochrome sensors (no Bayer CFA).

A line pair requires two pixels, so 200 lp/mm = 400 pixels/mm.

On a 36x24mm monochrome sensor, this is 14400x9600 = 138 MP.

On a normal Bayer CFA sensor, to avoid color aliasing, a line pair requires 4 pixels, pushing the count to 552 MP.

For generic images we may also be happy with some aliasing, but we are still talking about a lot more than 34 MP.
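The same arithmetic in a few lines of Python, for anyone who wants to vary the numbers (a sketch only; the 2-pixel and 4-pixel factors are the assumptions stated above):

# Convert a lens resolution in lp/mm into an equivalent pixel count on a 36x24mm sensor.
def megapixels(lp_per_mm, pixels_per_line_pair, width_mm=36.0, height_mm=24.0):
    px_per_mm = lp_per_mm * pixels_per_line_pair
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

print(megapixels(200, 1))   # ~34.5 MP - one pixel per line pair, the figure quoted by Zeiss/Kodak
print(megapixels(200, 2))   # ~138 MP  - two pixels per line pair, monochrome sensor
print(megapixels(200, 4))   # ~553 MP  - four pixels per line pair for a Bayer CFA without color aliasing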

 

Both you and the "scientists" were wrong about the equivalent MP  :)

Link to post
Share on other sites


I believe 'optimum' aperture in this context means simply a balance between improved lens performance as one stops down (typically reduced spherical aberration and light fall-off) and the increasing effects of diffraction. With both my 24 megapixel cameras I can get better results at f/8, for example, than at f/11. Obviously, this is completely ignoring what might be required in terms of shutter speed, ISO, depth of field, etc. Those will be completely different from picture to picture, so making any generalizations would not be useful.

 

- Jared

 

I agree completely; even the camera/photographer interaction is important, the workflow as a whole, etc.

 

Back to the initial question: 24 MPx is enough, if it is good enough for the intended picture.

 

But 24 MPx can also be too much or too little... in other words, the old wisdom: the best camera is the one you have with you. ;)

Link to post
Share on other sites

This subject comes up in astronomy forums all the time.

As an aside, I was very fortunate recently to be part of a small party given a private tour of one of the astronomical facilities on La Palma (it's a long story). I was absolutely staggered at the ability of their equipment to resolve quite extraordinarily fine detail at what are frankly mind-boggling distances (I can't remember the figures, but they were 'astronomical', if you will excuse the pun). Obviously this is not using visible light, and the 'imagery' is apparently built up from multiple 'observations' using, if I remember correctly, somewhat complex statistical analysis of the data collected. I wonder if multi-sampling and subsequent analysis will eventually filter through to more 'normal' photography, and whether electronic hardware can ever be built to do this?

Link to post
Share on other sites

"Overall, the resolution of this nanorod digital image sensor is two orders higher than that of existing CCD and CMOS digital image sensor techniques.Going forward, the scientists plan to increase the pixel number to 100,000 in order to realize a large-scale digital image sensor with unprecedented high resolution.

 

Read more: Ultrahigh-resolution digital image sensor achieves pixel size of 50 nm "

Link to post
Share on other sites

As an aside, I was very fortunate recently to be part of a small party given a private tour of one of the astronomical facilities on La Palma (it's a long story). I was absolutely staggered at the ability of their equipment to resolve quite extraordinarily fine detail at what are frankly mind-boggling distances (I can't remember the figures, but they were 'astronomical', if you will excuse the pun). Obviously this is not using visible light, and the 'imagery' is apparently built up from multiple 'observations' using, if I remember correctly, somewhat complex statistical analysis of the data collected. I wonder if multi-sampling and subsequent analysis will eventually filter through to more 'normal' photography, and whether electronic hardware can ever be built to do this?

 

Wow, I would have loved that. Professional observatories are very cool, to say the least. Surprisingly, much of the technology has trickled down to amateurs as well. My astronomy camera, for example, is only six megapixels (way less than the 24 to 50 megapixels that are the discussion point of this thread), but there are all kinds of techniques I use to get the most out of those pixels. These include separate stacks of dark frames, hundreds of bias frames, separate color-filter exposures to avoid interpolation, thermoelectric cooling to -65C to reduce noise, separate exposures with each color filter to allow statistical removal of beta-particle strikes from radioactive iodine in the cover slip on the CCD, removal of cosmic rays, adaptive optics to minimize the effects of atmospheric seeing and imperfect tracking, etc. It makes the technical side of terrestrial imaging seem positively trivial. I'm quite happy with my six-megapixel sensor in this context, as anything with smaller pixels would be oversampling my seeing conditions (the limit of resolution allowed by the quality of my typical night skies).
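As a flavour of the statistical side of that workflow, here is a toy sigma-clipped stack in NumPy. It is only a sketch with simulated data (the frame count, noise level and the single fake cosmic-ray hit are made up), not an actual processing pipeline:

# Sigma-clipped mean stack: combine many exposures and reject outlier samples
# (cosmic rays, particle strikes) pixel by pixel before averaging.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(loc=100.0, scale=5.0, size=(50, 64, 64))   # 50 simulated 64x64 exposures
frames[7, 10, 12] += 4000.0                                     # one simulated cosmic-ray hit

def sigma_clip_stack(frames, sigma=3.0):
    centre = np.median(frames, axis=0)
    spread = np.std(frames, axis=0)
    keep = np.abs(frames - centre) < sigma * spread     # True where a sample looks normal
    return np.where(keep, frames, 0.0).sum(axis=0) / keep.sum(axis=0)

plain = frames.mean(axis=0)
clipped = sigma_clip_stack(frames)
print(round(plain[10, 12], 1), round(clipped[10, 12], 1))   # the hit survives the plain mean, not the clipped one

Real stacking software obviously does far more (calibration frames, alignment, weighting), but the outlier-rejection step is essentially this.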

 

Sorry, I know I'm straying off topic, but it does prove the point that the right camera is the one that accomplishes the task at hand in a satisfactory manner, regardless of megapixel count.

 

- Jared

Link to post
Share on other sites
