stevme Posted November 19, 2006 Share #1

As will soon be obvious, I am no engineer. Still, it should be possible to fix this without lens filters or attempts to reduce the magenta with a profile in post-processing. At some point the sensor hardware turns an analogue signal into a digital number, setting out the light frequency received in digital form. Would it not be possible for the converting mechanism simply not to report frequencies actually received from light waves in excess of a certain length? I assume this would occur within transistors in the sensor itself, requiring a new sensor: a back-end solution, as it were. Perhaps Leica did not wish to pay for a transistor cut-off mechanism, which would have required a whole new design by Kodak, figuring the problem was minor and could be fixed with filters.

Steve
pascal_meheut Posted November 19, 2006 Share #2

Except that the sensor has no way of knowing the wavelength of the light at any point. If it did, a Bayer matrix would not be needed.
stevme Posted November 19, 2006 Author Share #3

You prove my point about my engineering background. Except that, at some point, something reports a light frequency that is turned into a color, perhaps this Bayer matrix. A barrier could be imposed preventing the reporting of wavelengths beyond a certain length.

Steve
pascal_meheut Posted November 19, 2006 Share #4

"A barrier could be imposed preventing the reporting of wavelengths beyond a certain length."

Yes, that is what the IR filter does in almost every digital camera. The problem is that the one on the M8 does a lousy job.
marknorton Posted November 19, 2006 Share #5

"As will soon be obvious, I am no engineer."

Yes, that seems clear. Sampling an image is not at all like sampling an audio signal. With an audio signal, you measure the same signal thousands of times a second, and with that data it is possible to apply mathematical algorithms that have the effect of filtering the signal, rejecting, for example, components above a certain frequency. An image sensor, on the other hand, measures the light at each of millions of pixels just once, and the sample value of each pixel is related only to the intensity of the light.

The whole magenta problem is caused by the fact that the atomic-level process of liberating electrons from their normal orbits by injecting energy from incident light works over a wider range of wavelengths than we would like. As Pascal says, the sensor has no concept of colour; it is only the Bayer matrix that allows colour information to be derived, by comparing adjacent pixels under different filters. If the dyes used in the Bayer matrix could reject IR more strongly, there would be less need for an IR barrier filter in the first place.
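To make that last point concrete, here is a minimal sketch in Python of how colour is derived by comparing neighbouring pixels: a crude nearest-neighbour demosaic. The RGGB layout and all names here are assumptions for illustration, not Kodak's actual pipeline.

```python
# Toy nearest-neighbour demosaic of an RGGB Bayer mosaic (layout assumed).
# Each sensor pixel holds ONE intensity; colour only appears when neighbours
# under different filter dyes are compared.
import numpy as np

def demosaic_rggb(mosaic):
    """Crude demosaic: share one RGB triple across each 2x2 RGGB cell."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            r = mosaic[y, x]                                # red-filtered pixel
            g = (mosaic[y, x + 1] + mosaic[y + 1, x]) / 2   # two green pixels
            b = mosaic[y + 1, x + 1]                        # blue-filtered pixel
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb

mosaic = np.array([[200.0, 90.0],
                   [95.0, 60.0]])        # a single RGGB cell of raw intensities
print(demosaic_rggb(mosaic)[0, 0])       # -> [200.   92.5  60. ]
```

Note that nothing in the mosaic array says what wavelength produced each value; the filter layout alone gives the numbers their colour meaning.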
stevme Posted November 19, 2006 Author Share #6

Just to make my lack of understanding absolutely clear: I am not talking about a physical barrier, such as a protective glass, but an electronic one that refuses to accept wavelengths beyond a certain length. Perhaps all the sensor does is report a voltage from a given pixel that has a physical color filter on top of it (like the Technicolor three-strip movie process). It would still seem to me that when certain colors are reported, the electronics could refuse to write the color to memory. This would tend to create a "black" color for that particular pixel. Presumably there are other colors from the same light source, such as an underlying green, that would be lost through this process. None of this matters, of course, since it's up to Leica and Kodak to figure something out.

I find it interesting that we are now discovering similar magenta problems with other cameras, including Nikon and Sony. There is probably a lot of chortling going on over at Canon's Japan headquarters right now as Leica squirms out of this! Still, some of the shots I have taken outdoors in broad sunlight are thrilling in their clarity and color saturation.

Steve
marknorton Posted November 19, 2006 Share #7

As they say, when you are in a hole, stop digging. Steve, the sensor is not measuring frequency; it is measuring light intensity, and colour is derived by comparing the measured levels of adjacent pixels under different filters. The basic RGB data is obtained using the pixel values and the characteristics of the filter dyes used in the sensor. That process breaks down here because the software cannot tell the difference between IR contamination and true magenta, so it renders the IR as magenta.

The tests of the Sony and Nikon cameras were of sensors that are quite old; current Nikon cameras reject IR much more strongly.
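As a hypothetical worked example of why the leak shows up as magenta specifically: IR passes the red dye strongly and the blue dye moderately, so a dark fabric that reflects IR ends up with R and B well above G, and elevated red plus blue is magenta. The leakage fractions below are invented for illustration; the real transmission curves are the sensor maker's.

```python
# Toy model of IR contamination under the Bayer dyes. All numbers are invented
# for illustration; real dye transmission curves are proprietary.
visible = {"R": 0.05, "G": 0.05, "B": 0.05}  # dark fabric: little visible light
ir_flux = 0.60                               # ...but it reflects near-IR strongly

# Assumed fraction of incident IR each dye passes through to the silicon:
ir_leak = {"R": 0.50, "G": 0.10, "B": 0.35}

channel = {c: visible[c] + ir_flux * ir_leak[c] for c in "RGB"}
print(channel)  # roughly {'R': 0.35, 'G': 0.11, 'B': 0.26}: R and B dominate G,
                # which the raw converter can only render as magenta
```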
stevme Posted November 19, 2006 Author Share #8

Mark: bingo, I got it. Thanks. Looks like filters for me when shooting indoors, and I'll take my chances with outdoor stuff.

Steve
Guest rubidium Posted November 19, 2006 Share #9

"At some point the sensor hardware turns an analogue signal into a digital number, setting out the light frequency received in digital form. Would it not be possible for the converting mechanism simply not to report frequencies actually received from light waves in excess of a certain length? I assume this would occur within transistors in the sensor itself, requiring a new sensor: a back-end solution, as it were. Perhaps Leica did not wish to pay for a transistor cut-off mechanism, which would have required a whole new design by Kodak, figuring the problem was minor and could be fixed with filters."

These sensors do not work this way, simply because the state of the art is many orders of magnitude away from being able to support it. A CCD is essentially a "light-sensitive capacitor." It is NOT something that produces a measurable signal whose time variation is proportional to the frequency of the light incident upon it; it is much too sluggish a device, by about seven orders of magnitude, to do so. Thus, one cannot do a spectral decomposition of any sort on the output signal from a CCD device.

When light across a broad range of frequencies (some UV, all of the visible, and some of the IR) impinges upon a sensor pixel, photons are absorbed by the semiconductor and in turn "dislodge" free electrons via a mechanism known as the photoelectric effect. The average number of electrons released per incident photon, or more accurately the probability that an incident photon will release an electron, is known as the quantum efficiency. The quantum efficiency varies as a function of the frequency of the incident light, but not by much until that frequency gets outside the sensitivity regime of the material (Kodak publishes curves showing this). In any event, what started out as photons having particular frequencies simply becomes an accumulating charge proportional to the light's intensity once the photoproduction of electrons has occurred.

That net static charge built up throughout the integration period (i.e. the exposure time) is what constitutes the "analog signal" that is subsequently digitized, in the sense that a voltage proportional to the accumulated charge can be observed across the device. The only variation of that voltage with time is the relatively slow increase that occurs throughout the exposure as the charge accumulates; there is by no means any variation on the order of the frequencies of the original photons.

The only way to distinguish colors is to filter the light incident upon the devices, and this is exactly what the Bayer filter layer above the actual semiconductor sensor plane does. Of course, it does so crudely, in three relatively wide and overlapping bands casually called "red," "green," and "blue." For reasons that I'm sure Kodak and other sensor developers can expound upon better than I, it is not easy to realize "red" Bayer filters that don't throw out the baby with the bathwater, i.e. filters that can strongly attenuate undesirable IR without taking out some of the desirable upper visible red-region wavelengths along with it. Likewise, I strongly suspect that the same is true of "blue" Bayer filters being unable to crisply remove UV without attenuating some of the lower visible blue-region wavelengths in the process. The latter does not seem to be an issue, since UV is already strongly attenuated by most lens elements anyway.
Jim
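Jim's charge-accumulation description boils down to one integral: the digitized pixel value is roughly proportional to the photon flux times the dye transmission times the quantum efficiency, integrated over wavelength. The sketch below, with invented QE and dye curves rather than Kodak's published data, shows that for any genuine red exposure there is a pure near-IR exposure producing exactly the same accumulated charge, which is why no downstream electronics can "refuse" the IR after the fact.

```python
# Once photons have become charge, wavelength information is gone: a pixel
# records a single number, roughly  integral( flux * dye * QE ) d(lambda).
# The QE and dye curves here are invented placeholders, not Kodak's data.
import numpy as np

wl = np.linspace(400.0, 1000.0, 601)             # wavelength grid in nm

def qe(w):       # toy quantum efficiency, still sensitive well into near-IR
    return np.exp(-((w - 650.0) / 300.0) ** 2)

def red_dye(w):  # toy "red" Bayer dye: passes red but also leaks near-IR
    return 1.0 / (1.0 + np.exp(-(w - 580.0) / 20.0))

def charge(spectrum):
    """Accumulated charge: flux x dye transmission x QE, summed over the grid."""
    dw = wl[1] - wl[0]
    return np.sum(spectrum(wl) * red_dye(wl) * qe(wl)) * dw

true_red = lambda w: 1.0 * ((w > 600) & (w < 700))   # genuine red light
unit_ir  = lambda w: 1.0 * ((w > 800) & (w < 900))   # pure near-IR, no red

# By linearity there is always an IR intensity yielding the SAME charge:
k = charge(true_red) / charge(unit_ir)
print(charge(true_red), charge(lambda w: k * unit_ir(w)))
# The two values match to floating-point precision: the pixel, and hence
# everything downstream of it, cannot tell these two scenes apart.
```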
Archived
This topic is now archived and is closed to further replies.