
Is It Time To Develop a New Definition For Camera Sensors?


barjohn

Recommended Posts


Wikipedia defines a pixel (picture element) as “generally thought of as the smallest complete sample of an image.” In a color image sensor that would consist of the three primary colors, red, green and blue (or cyan, magenta, yellow and black). One could further define the quality of a pixel by the number of bits used to specify the individual colors comprising it; this quality value represents the range of colors the pixel is capable of resolving.

 

It is with some legitimacy that members of the Foveon camp argue that if Bayer-design sensors are called 10 megapixel, either because they have 10 million photosensitive diodes or because they produce an artificial 10 million pixels through interpolation, then their newest sensor is a 14 megapixel sensor. When all sensors were Bayer sensors, the fact that photoreceptor count was equated with the camera’s sensor resolution didn’t matter. However, with new sensor technologies on the horizon, and perhaps others not yet known, it makes sense to reconsider whether these definitions provide an adequate basis for fairly evaluating a camera, both from a consumer’s perspective and on a purely scientific basis. The Bayer sensor complicates the calculation by having the extra green photoreceptor, which can be used with its neighbors to better interpolate the missing color values, but treating each site as a complete pixel is still not scientifically accurate. The Nikon design, like the Foveon, produces true point-source pixels.

 

With a wide range of technologies available to post-process the image within the camera (bicubic interpolation, fractal scaling, etc.), one can produce a file with just about any pixel count one desires, provided the camera has enough processor horsepower. With processors rapidly increasing in compute power, it doesn’t make sense to use the camera’s file output as the measure of its inherent resolution quality. The analog-to-digital conversion performed on each photoreceptor determines the color range of each pixel: 8 bits per channel (the current standard) give 256 levels per channel, or about 16.7 million colors for an RGB pixel; 10 bits give 1,024 levels per channel; 12 bits 4,096; 14 bits 16,384; and 16 bits 65,536. In practice few cameras employ the full 16 bits; most throw out at least the 2 least significant bits, and some throw out 4, since this is where the inherent noise floor creates false signals.
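To make the bit-depth arithmetic concrete, here is a minimal sketch (the function names and the assumption of three equal channels are mine, purely for illustration):

```python
# Rough sketch: levels per channel and total colors for an RGB pixel
# at a given per-channel bit depth. Assumes three equal channels.

def levels_per_channel(bits: int) -> int:
    return 2 ** bits

def colors_per_pixel(bits_per_channel: int, channels: int = 3) -> int:
    return levels_per_channel(bits_per_channel) ** channels

for bits in (8, 10, 12, 14, 16):
    print(f"{bits:2d} bits/channel: {levels_per_channel(bits):>6,} levels, "
          f"{colors_per_pixel(bits):.3e} colors")
```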

 

The question I raised in the title is, I believe, worthy of consideration. What do you think of a new standard for sensor resolution, and shouldn’t the new standard also incorporate a quality measure? Here I encourage a little creativity. What follows isn’t super creative, and I am sure members can do better, but suppose the new measure were called QPIXELS, rated so that QPIXELS-A meant 16-bit quality, QPIXELS-B 14-bit, QPIXELS-C 12-bit, QPIXELS-D 10-bit, QPIXELS-E 8-bit, QPIXELS-F 4-bit, and finally QPIXELS-G 2-bit or black & white. For example, a vendor would rate a camera as 8 MEGAQPIXEL-E, and the rating would mean the same thing irrespective of vendor or sensor.
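A minimal sketch of how such a rating might be written down (the grade letters follow the proposal above; the function and label format are only illustrative):

```python
# Sketch of the proposed QPIXELS rating: the grade letter encodes bit depth,
# the number encodes true (non-interpolated) pixel count.
# Grade assignments follow the post above; everything else is illustrative.

QPIXEL_GRADES = {
    "A": 16,  # 16-bit quality
    "B": 14,
    "C": 12,
    "D": 10,
    "E": 8,
    "F": 4,
    "G": 2,   # 2-bit, i.e. near black & white
}

def qpixel_label(true_megapixels: float, grade: str) -> str:
    bits = QPIXEL_GRADES[grade]
    return f"{true_megapixels:g} MEGAQPIXEL-{grade} ({bits}-bit per pixel)"

print(qpixel_label(8, "E"))   # -> "8 MEGAQPIXEL-E (8-bit per pixel)"
```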

 

Frankly, I am not sure how one would rate a Bayer sensor with its extra green photoreceptors, other than to treat them as helper values for interpolation and exclude them from the pixel count. I leave it to more knowledgeable members to suggest a better definition.


I am really surprised. A hundred and seventy views and not one opinion. How about experts like Sean or Mark? I would like to see a meaningful standard developed that would apply to a future M9 or beyond, so that we would know what to expect. No lossy compression, and no calling it 16 bits or 14 bits when it isn't that across the whole pipeline. Ten interpolated megapixels aren't the same as ten true megapixels. Just my humble opinion.


John,

 

If the Qpixels-x format were to be adopted then I suggest that Qpixels-A should correlate to 2-bit quality, Qpixels-B to 4-bit quality et cetera because there is every likelihood that in the future 32-bit and 64-bit quality will be introduced as processors inevitably become more powerful.

 

Incidentally, I feel that "pixel" is entirely the wrong term to use in relation to sensors because it is derived from "picture element". Since there is no direct correspondence between a picture and a sensor, which is simply an array of photodiodes, perhaps the term "sensels" should be used to describe sensor elements.

 

However, since the weight of consumerism is heavily stacked against such changes and the redefinition you suggested, I suspect that a move to change the usage would have less weight than smoke on the wind. :(

 

Pete.


I think the idea has merit, but the definition needs to be something reasonably easy to grasp.

 

For instance, one photosite (pixel) should be listed as size/shape

total pixels should be listed with physical sensor size/pixel size/shape/totpix

and finally, they should list raw buffer size in frames

 

so a camera like the M8 should read:

Resolution: 18x27mm/6.8 microns/square/10.31m/10/ (I can't find the info)

 

Since the color filters can be changed on the fly in camera, defining the number of RGB pixels is difficult, but you can generally guess something like 30/40/30.
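A minimal sketch of what such a listing could look like as a record for the M8 (the field names are mine, the figures are the ones quoted above, and the raw-buffer depth is left unset because that figure could not be found):

```python
# Sketch of the proposed spec listing as a simple record.
# Field names are illustrative; M8 figures are those quoted in this post,
# and the raw-buffer depth is deliberately left unknown.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorSpec:
    sensor_size_mm: Tuple[float, float]  # active-area height x width
    pixel_pitch_um: float                # photosite pitch
    pixel_shape: str
    total_megapixels: float              # photosite count, not interpolated output
    raw_buffer_frames: Optional[int] = None

m8 = SensorSpec((18, 27), 6.8, "square", 10.31, raw_buffer_frames=None)
print(m8)
```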

-Steven


Steven,

 

Your calling a photoreceptor a pixel is the very type of confusion I feel needs to be eliminated. A pixel, a picture element, is the smallest point that contains ALL of the information, i.e. RGB, or three sensor elements. Pete may be on a better track with the idea of sensels; however, we are looking for an accurate rendition of a particular point on an image (or at least as accurate as we can get). In the Nikon design shown here: Nikon's new full-color RGB sensor?: Digital Photography Review, a hole representing a single point of light is the collection point, and the light beam is then split using dichroic mirrors onto 3 photosensitive cells. Thus each hole represents a true pixel. The same can be said of the FOVEON sensor, where the single point is essentially divided vertically and the three photoreceptors are stacked on top of one another. In the Bayer design, each photoreceptor is horizontally displaced in space, and it is assumed that the adjacent receptor is receiving the same level of a color as the sensor that is the starting point. This assumption may or may not be correct, and at edges it is probably incorrect. As a result, where the finest detail and the most accurate color are the goal, the Bayer sensor is at a disadvantage. Conversely, it has the advantage of being easier to interpolate and less costly to manufacture at high yields, especially as size increases.
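For what it's worth, a minimal sketch of the assumption being described: in a Bayer mosaic, the colors a photosite does not record are estimated by averaging its neighbors, which is exactly where edges can go wrong. The array values and the simple bilinear averaging below are illustrative, not any manufacturer's actual pipeline:

```python
import numpy as np

# Simplified bilinear demosaic of the green channel at a red photosite.
# An RGGB mosaic records only one color per site; the missing green value
# is estimated from the four nearest green neighbors.

mosaic = np.array([
    [120, 200, 118, 202],   # R G R G
    [198,  60, 196,  62],   # G B G B
    [122, 204, 119, 201],   # R G R G
    [197,  61, 199,  63],   # G B G B
], dtype=float)

row, col = 2, 2                     # an interior red photosite
green_neighbours = [mosaic[row - 1, col], mosaic[row + 1, col],
                    mosaic[row, col - 1], mosaic[row, col + 1]]
green_estimate = sum(green_neighbours) / 4.0
print(f"estimated green at ({row},{col}): {green_estimate:.1f}")
# At a sharp edge the four neighbours disagree, so this average is wrong,
# which is the disadvantage described above.
```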

 

I find the Nikon design to be particularly intriguing though I suspect it would be hard to manufacture at the high resolutions we might desire.

 

Today, I heard about a new Panasonic HD video camera that produces true 1080p images using a 3-CCD sensor, with a Leica lens, up to 1 hour of recording to a 4GB SD card using the new AVCHD video format, and output via the new HDMI connector, for around $1K. Maybe someone will figure out how to make a 3-CCD still camera with the high resolution we desire.


with finally QPIXELS-G being 2 bit or black & white.

 

2 bits gives you four shades of grey. You can get black and white from one bit, though not black and white in the photographic sense: for that you really need 8 bits to give you at least a 256-level greyscale, and more bits would be worthwhile.

 

I think a re-evaluation of sensor design is very meritworthy, and it's good to see some of the manufacturers thinking about it. For instance, wouldn't it be cool to be able to custom-order your M9 with a monochrome-only sensor? You'd have, say, 16 megapixels with NO interpolation at all: each photosite would report its true (say, 14-bit) greyscale reading, without the need for software in the camera to work out what the "true" RGB value of that photodiode would be. Now that would knock anything we used to get from film into a cocked hat...



I really don't expect to see anything less than 8-bit A/D converters, but you are right about the current sensor having the potential to be an awesome B&W sensor if all the photosensors were used at 14-bit levels for luminance only. I'm not sure how that would compare to an RGB conversion to B&W, but I would think it would be better. (Remove the Bayer filter but keep the micro lenses.)


I really don't expect to see anything less than 8-bit A/D converters, but you are right about the current sensor having the potential to be an awesome B&W sensor if all the photosensors were used at 14-bit levels for luminance only. I'm not sure how that would compare to an RGB conversion to B&W, but I would think it would be better. (Remove the Bayer filter but keep the micro lenses.)

 

It ought to be a lot better. Rather than having the software analyse the output from four photodiode sites, compositing from that one RGB value for those four sites, and then working to interpolate what the variance in that RGB value "should" be for *each* of the four sites in order to guesstimate what colour of light was actually falling on each of them before it was filtered to red, green (twice) or blue... you could just measure the amount of light falling and get a true greyscale value.
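As a rough sketch of the difference being argued: a monochrome sensor reads luminance directly at every site, whereas a Bayer pipeline has to reconstruct RGB first and then collapse it back to grey. The Rec. 601 luma weights and the sample values below are only illustrative, not the M8's actual processing:

```python
# Illustration only: greyscale via a demosaiced-RGB detour versus a direct read.

def grey_from_rgb(r: float, g: float, b: float) -> float:
    # Rec. 601 luma weights, a common convention for RGB-to-grey conversion.
    return 0.299 * r + 0.587 * g + 0.114 * b

# Bayer route: interpolate the two missing channels at a site, then weight.
interpolated_rgb = (110.0, 142.0, 95.0)   # hypothetical demosaiced values
print("via RGB detour:", round(grey_from_rgb(*interpolated_rgb), 1))

# Monochrome route: the photosite's own reading *is* the greyscale value,
# with no neighbour-based guessing in between.
direct_reading = 138.0                    # hypothetical raw luminance count
print("direct read:   ", direct_reading)
```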

 

Of course that's what some of the new technologies are trying to do in colour. But my feeling is that each of the alternatives has to have some downside, probably in terms of noise levels as the light stream is attenuated by each step of the sampling process. A true greyscale sensor would be sharp as a tack (no software-induced softness) and, I would imagine, could function at its best in terms of signal-to-noise ratio.

 

Incidentally, you mention what A to D chips are used. I don't know anything about this in relation to the M8, or indeed digital cameras, but I have a degree of experience in terms of musical A-D and D-A. In those circles one can either oversample to interpolate stages in a digital step-sequence (eg "16x oversampling"), or else employ a 1-bit sigma-delta convertor, which tracks shifts of ramp and direction in the waveform. This latter is generally considered very effective musically. I wonder if someone who knows more about the conversion in camera sensors could comment on whether a sigma-delta type, or an oversampling type, A-D would be best suited in our M9?
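On the oversampling point, a small sketch of the usual rule of thumb (assuming uncorrelated noise, each 4x of oversampling buys roughly one extra effective bit; this is a generic illustration, not a claim about any camera's converter):

```python
import math

# Rule of thumb for oversampling ADCs: with uncorrelated noise, averaging
# N samples improves SNR by sqrt(N), i.e. about half a bit per doubling of N.
# Generic illustration only.

def effective_bits(native_bits: int, oversampling_factor: int) -> float:
    return native_bits + 0.5 * math.log2(oversampling_factor)

for factor in (1, 4, 16, 64):
    print(f"{factor:3d}x oversampling: ~{effective_bits(12, factor):.1f} effective bits")
```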


It ought to be a lot better. Rather than having the software analyse the output from four photodiode sites, compositing from that one RGB value for those four sites, and then working to interpolate what the variance in that RGB value "should" be for *each* of the four sites in order to guesstimate what colour of light was actually falling on each of them before it was filtered to red, green (twice) or blue... you could just measure the amount of light falling and get a true greyscale value.

In theory perhaps but in reality it may be quite different because each photodiode will have its own characteristic response to different wavelengths of light. So, for example, for the following three RGB mixtures of light falling on photodiodes:

 

100%, 0%, 0%

0%, 100%, 0%

0%, 0%, 100%

 

the photodiodes are likely to produce different voltages if they are more sensitive to wavelengths in the red, green or blue portions of the spectrum, although in theory they should all be the same. This would then require a degree of (in camera?) wavelength equalisation and we're back to software analysis again.
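A minimal sketch of the kind of equalisation being described: measure each photodiode's response to a known uniform stimulus and apply a per-site gain so that equal light produces equal output. The numbers below are invented; this is the generic flat-field idea, not any camera's actual calibration:

```python
import numpy as np

# Sketch of per-photodiode response equalisation ("flat fielding").
# Expose the sensor to a uniform reference light, measure each site's raw
# response, and derive a gain that maps every site back to the mean.

raw_flat_field = np.array([0.92, 1.05, 0.98, 1.10, 0.95])  # uneven responses
gains = raw_flat_field.mean() / raw_flat_field              # per-site correction

scene_reading = np.array([0.46, 0.53, 0.49, 0.55, 0.47])    # same light on all sites
equalised = scene_reading * gains
print(np.round(equalised, 3))   # now nearly uniform, as described above
```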

 

Pete.


In theory perhaps but in reality it may be quite different because each photodiode will have its own characteristic response to different wavelengths of light. So, for example, for the following three RGB mixtures of light falling on photodiodes:

 

100%, 0%, 0%

0%, 100%, 0%

0%, 0%, 100%

 

the photodiodes are likely to produce different voltages if they are more sensitive to wavelengths in the red, green or blue portions of the spectrum, although in theory they should all be the same. This would then require a degree of (in camera?) wavelength equalisation and we're back to software analysis again.

 

Pete.

 

I'm not sure what you mean here. Are photodiodes randomly more sensitive to one portion of the spectrum than another? Is this a recognised factor? If so, how is it compensated for in existing sensors? (I don't see how one could compensate for this, since there's no way for the software to know the true light value or the diode's offset. How would the wavelength equalisation be performed?)

 

And I thought I was on to a good thing!

