
Towards an explanation of the Italian Flag Phenomenon


Lindolfi



Yes giordano, it could, if the whole layer of microlenses is shifted along the diagonal (lower left to upper right) by a fraction of the cross section of a single pixel relative to the pixel layer, given the principle of the loss of certain wavelength contributions as I have shown in my figure. I was already working on some modelling in that direction. A few nanometres is not enough, but I think the principle does work. Now, if the principle can be shown to work (and I think it does) and we have some information on the precision of the alignment of the microlens layer and the pixel array, we have an explanation, one that may even explain the variation between M9 models.

 

Thanks. I was exaggerating with "a few nanometres" because I couldn't get a feel for whether the actual amount needed would be tens or hundreds.
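For a feel of the scale involved, here is a minimal back-of-the-envelope sketch in Python, assuming a pixel pitch of roughly 6.8 µm for the M9's Kodak sensor (an assumed figure, not something stated in this thread); the shift fractions are purely illustrative:

# Rough scale of a microlens-layer shift expressed as a fraction of the
# pixel pitch (assumed ~6.8 micrometres; the fractions below are invented).
pixel_pitch_nm = 6800

for fraction in (0.01, 0.05, 0.10, 0.25):
    shift_nm = fraction * pixel_pitch_nm
    print(f"{fraction:5.0%} of a pixel = {shift_nm:6.0f} nm")

# Even 1% of a pixel is already ~68 nm, and 10% is ~680 nm, so the relevant
# scale would be tens to hundreds of nanometres rather than "a few".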

Link to post
Share on other sites

Since the phenomenon is neither always there with the same cameras and lenses, nor always absent, nor does it show a constant pattern or degree of disturbance, there must be some important external factor.

 

The one thing that is constant is red/magenta at the LH side and cyan/green at the RH side.

Link to post
Share on other sites

No. But it would perhaps narrow the field a bit. Many mysteries go unsolved because we try to solve the wrong mystery.

 

The red-eyed old man

 

Yes :D The man who was looking for his lost contact lenses under the lamppost was asked: "Did you lose them here?" He said: "No, but at least I can see something here!" He was narrowing the field too.

Link to post
Share on other sites

Maybe we need a model to think of it first. First we have to agree upon whether the left-side disturbance is the complementary colour of the right-side disturbance: is it red vs. cyan, magenta vs. green, or a mix? It will probably be complementary, I suppose.

So if, following Lindolfi, it is something to do with displacement of the microlenses or of the optical glass that covers the sensor sites, one could think of the model of a Persian carpet with a nap: looking from one side, into the nap, it is saturated and full of colour; looking from the other side, onto the nap, it is reflective and bright, less colourful. So if you hang above the centre of the carpet (like a wide-angle lens above a sensor) you could see saturation looking left and reflectiveness looking right. So if the microlenses are displaced, it would cause the read-out to be red/magenta at one side and cyan/green at the other. If this is a correct model, we have to ask what causes the nap-like behaviour of the sensor system.

Link to post
Share on other sites

At least since the last firmware you could see quite a few examples where the magenta edge seemed to have moved from the lower left corner to the upper right one.

Which would indicate overcorrection rather than a variation in the underlying phenomenon (assuming there is indeed a correlation between edge colour and firmware version; it could be coincidental).

Link to post
Share on other sites

Yes :D The man who was looking for his lost contact lenses under the lamppost was asked: "Did you lose them here?" He said: "No, but at least I can see something here!" He was narrowing the field too.

 

Sorry, you've got it backwards. This is a matter of ruling out the places where we can't have lost our contacts, not identifying those where it would be most comfortable to search for them. Street lighting has nothing to do with it.

 

I agree with Otto: "Maybe we need a model to think of it first."

 

The old man with a head lamp

Link to post
Share on other sites

If we follow adan's scheme:

 

http://www.imaging-resource.com/NPICS1/SONY_BACKILLUM_CMOS_2_S.JPG

 

It could be light that bounces off the middle microlens (in this scheme) and enters the outer microlens, i.e. the microlens that is furthest left or furthest right, causing a read-out for a different colour. It depends on the rhythm of the Bayer array whether this explains cyan/green at the RH side and red/magenta at the LH side.

Link to post
Share on other sites

Yes otto.f, the nap of the Persian carpet is a nice analogy, but it does not explain the phenomenon.

 

The principle that could be at hand is a combination of

 

[1] chromatic aberration of the microlenses

 

[2] a slight reduction in size of the matrix of microlenses relative to the matrix of pixels, so that the oblique rays from wide-angle lenses can still be captured at the edges of the sensor. (This is what made the digital M possible.)

 

[3] a slight shift of the matrix of microlenses from lower left to upper right (in the image; on the sensor the shift would be in the other direction). (This would be an error in the production of the microlens-pixel array sandwich and still needs to be verified.)

 

If we apply all three of these conditions in a diagram you get:

 

shiftedmicrolenses2a.jpg

 

These are five cells of 4 pixels each in the sensor. The cells are located at the far left top (LT), left bottom (LB), right top (TR), right bottom (RB) and centre (C) of the sensor.

 

The coloured squares are the colour sensitive pixels. The transparent disks are the light contributions of the microlenses in the blue and red wavelength areas. These disks are different in size and location due to the chromatic aberration of the microlenses.

 

As you can see, in the TR cell, there is a loss in percentage of red light entering the red pixel (some of the red spills over the edge of the pixel), but no loss of blue light entering the blue pixel. The result is a cyan discolouration. In the LB cell, you have no loss of percentage of red light in the red pixel, but you do get a loss of percentage of blue light in the blue pixel. This would result in a shift towards red.

 

In the LT and RB cells you have both loss of blue in the blue pixel and red in the red pixel, so that cancels. In the C cell the same is happening.

 

The fact that the Bayer array is read out symmetrically for each pixel, by weighting neighbours of the same colour in all directions (and so symmetrically in all directions), does not change this principle. For instance, all red pixels in the TR area lack part of their red contribution, so when you average, there is still a drop in the red channel.
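To see whether this geometric principle can indeed flip the cast from one corner to the other, here is a minimal numerical sketch in Python. It is not the geometry of the figure above and not measured M9 data: the radii and offsets are invented values, chosen only to mimic the verbal description. It treats each microlens spot as a disk offset diagonally from its pixel centre and computes what fraction of the disk still lands on the pixel.

import numpy as np

def kept_fraction(offset, radius, n=500):
    # Fraction of a disk, centred 'offset' along the pixel diagonal away from
    # the pixel centre, that still falls inside the unit pixel [0,1] x [0,1].
    cx = cy = 0.5 + offset / np.sqrt(2.0)
    xs, ys = np.meshgrid(np.linspace(cx - radius, cx + radius, n),
                         np.linspace(cy - radius, cy + radius, n))
    in_disk = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    in_pixel = (xs >= 0) & (xs <= 1) & (ys >= 0) & (ys <= 1)
    return (in_disk & in_pixel).sum() / in_disk.sum()

R_RED, R_BLUE = 0.48, 0.42   # red spot larger than blue (chromatic aberration); assumed values

# Net diagonal offsets of the red and blue spots in three regions of the frame,
# invented to mimic the description above: at TR the red spot is pushed past
# its pixel while the blue one still fits, at LB it is the other way around,
# and at C both spots stay on their pixels.
regions = {"TR": (0.18, 0.02), "C": (0.02, 0.02), "LB": (0.02, 0.18)}

for name, (off_red, off_blue) in regions.items():
    kept_r = kept_fraction(off_red, R_RED)
    kept_b = kept_fraction(off_blue, R_BLUE)
    diff = kept_r - kept_b
    cast = "roughly neutral" if abs(diff) < 0.01 else ("cyan/green" if diff < 0 else "red/magenta")
    print(f"{name:2s}: red kept {kept_r:.2f}, blue kept {kept_b:.2f} -> {cast}")

With these assumed numbers the TR cell loses roughly 12% of its red light and none of its blue, and the LB cell the reverse, which is the qualitative pattern described above; whether the real microlens geometry produces losses of that size is exactly what would need to be verified.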

Edited by Lindolfi
Link to post
Share on other sites

This page should give some clarification on your question of the Point Spread Function (PSF), sydkugelmass. Kodak: Plugged In - On Biological and Electronic Blind Spots

The light circles under the microlenses should be smaller than the lenses themselves (since the lenses are strongly positive and converge the light), while the red light circle is larger than the blue light circle due to chromatic aberration.

Link to post
Share on other sites

Any idea about the 'image point' generated by a CV UW Heliar 12mm, measured near the borders?

Any image of the actual morphology of the PSF of a CV UW Heliar 12mm?

How many sensor pixels are hit at various wavelengths?

How does the morphology of the neighbourhood projected from the PSF vary with the angle of the exiting rays in the actual case of the cited lens?

Link to post
Share on other sites

The lateral (secondary) chromatic aberration of the Ultra Wide Heliar 12/5.6 makes the colours of a white point source spread over more than a few pixels at the border of the sensor. This can be seen by placing a microscope behind the Heliar, focussing on the virtual image hanging in space between microscope and lens, and measuring its spread and location for the different colour components.

But that is not relevant for even illumination like in the test image I posted, where you see the Italian Flag Phenomenon. You can define the optical image projected by a lens as a complex transform of the ideal image. The subsequent process of converting that transformed image into digital data by the sandwich at the back of the camera can be looked upon as an extra transformation, with quite different properties.

Edited by Lindolfi
Link to post
Share on other sites

Isn't it so that pixels are not hit? Sites are hit, and a pixel is a translation of a hit site.

 

It's even worse: Since each site "sees" one color only, a pixel is a combined value of several adjacent sites.
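As an illustration of that distinction, here is a minimal sketch of the simplest possible demosaicing step (an RGGB pattern and plain bilinear averaging of same-colour neighbours are assumed; real raw converters are far more sophisticated). Each output RGB pixel combines several adjacent single-colour sites, so a systematic loss at the red sites of a region survives the averaging as a drop in the red channel.

# Minimal sketch: a pixel's RGB value is interpolated from several adjacent
# single-colour sites of a Bayer mosaic (RGGB pattern assumed).
import numpy as np

def bayer_color(y, x):
    # Colour of the site at (y, x) in an RGGB mosaic.
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_bilinear(mosaic):
    # Average the same-colour sites in each 3x3 neighbourhood into an RGB image.
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sums = {"R": 0.0, "G": 0.0, "B": 0.0}
            counts = {"R": 0, "G": 0, "B": 0}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    c = bayer_color(y + dy, x + dx)
                    sums[c] += mosaic[y + dy, x + dx]
                    counts[c] += 1
            rgb[y, x] = [sums[c] / counts[c] for c in "RGB"]
    return rgb

# A uniformly grey scene, but with 10% of the signal lost at every red site
# (mimicking spill past the pixel edge): the interpolated image stays low in red.
mosaic = np.full((8, 8), 100.0)
for y in range(8):
    for x in range(8):
        if bayer_color(y, x) == "R":
            mosaic[y, x] *= 0.9
rgb = demosaic_bilinear(mosaic)
print("mean RGB of interior:", rgb[1:-1, 1:-1].mean(axis=(0, 1)))  # ~[90, 100, 100]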

Link to post
Share on other sites

Which brings me to my next question. Is it at all possible to access the raw data as read from the sensing device? As far as I understand, the DNG format contains processed data only.

We cannot get at the raw data fresh off the A/D converter, short of some serious firmware hacking that nobody outside Jenoptik is up to.
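For what it is worth, the mosaic that is stored in the DNG (after whatever in-camera processing has already happened, so not the untouched A/D values) can at least be inspected per colour plane. A minimal sketch using the third-party rawpy library, with a hypothetical file name:

# Inspect the Bayer mosaic stored in a DNG per colour plane.  This is the data
# after any in-camera processing, not the raw A/D output.  Requires the
# third-party rawpy package; "test.dng" is a hypothetical file name.
import numpy as np
import rawpy

raw = rawpy.imread("test.dng")
mosaic = raw.raw_image_visible.astype(np.float64)   # the stored CFA values
colors = raw.raw_colors_visible                     # colour index of each site
labels = raw.color_desc.decode()                    # e.g. "RGBG"

# Compare the mean level of each colour plane in a strip at the left and right
# edges, which is where the "Italian flag" cast shows up.
strip = mosaic.shape[1] // 10
for side, cols in (("left", slice(0, strip)), ("right", slice(-strip, None))):
    for ci, name in enumerate(labels):
        vals = mosaic[:, cols][colors[:, cols] == ci]
        print(f"{side:>5s} {name}{ci}: mean {vals.mean():8.1f}")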

Link to post
Share on other sites

We cannot get at the raw data fresh off the A/D converter, short of some serious firmware hacking that nobody outside Jenoptik is up to.

 

That's the man we need! Send him an invitation to LUF!

Link to post
Share on other sites

I guess I just come back to my original post way back when - this is a fascinating guessing game, but realistically, it is not as though we are going to "solve" this for Leica.

 

They DO have access to raw sensor output, and to Kodak and Jenoptik engineers, and no doubt know exactly what is going on - especially since they appear to think the next FW upgrade will improve things. If they didn't know, they wouldn't even be able to take a stab at a fix.

 

And it is something they've probably understood since the "beta" era of the M9 (back when, two years ago, Stefan Daniel said there were still "problems to overcome" with a yet-unspecified FF M camera). The question is, how much of the camera's finite processing power and time can and should be devoted to the problem.

 

I'd guess, after 20 months of user responses (well, 14 months of responses and 6 months of programming), that they've finally decided - "more."

Link to post
Share on other sites

We're all familiar with, or have at least heard of, CornerFix around here... I've fired it up a few times but never really used it. But I've been wanting a 4/21 Color Skopar for a while now to round out a travel kit and finally picked one up (knowing about the issues on the M9). A few sample shots confirmed it: wicked red edge. Now I just had to try CornerFix.

 

Little did I know it would be so quick and easy to get going. I snapped a reference image, created a profile and corrected an image (which worked completely), and it took all of ten minutes. Sweet!

 

While this red edge issue clearly exists, and it would be nice if it were resolved... at this point, honestly, I consider it a non-issue. Especially when you can correct an image from an "unusable lens", my worst-case scenario, so quickly and easily.

 

FWIW, I wrote an article - Using CornerFix to Correct Images if anyone's interested. A big thank you to Sandy as well.
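For anyone curious what such a correction amounts to under the hood, here is a minimal sketch of the general flat-field idea. This is not CornerFix's actual algorithm (which works on the DNG mosaic and offers devignetting and anti-aliasing options); it just divides the image, channel by channel, by a reference shot of an evenly lit neutral target taken with the same lens and aperture, normalised to its centre. The data below is synthetic.

# Minimal sketch of flat-field colour-cast correction (the general principle,
# not CornerFix's actual implementation).  `image` and `reference` are float
# RGB arrays; the reference is a shot of an evenly lit neutral target made
# with the same lens and aperture.
import numpy as np

def flat_field_correct(image, reference, eps=1e-6):
    # Divide each channel by the reference normalised to its own centre value.
    ref = reference.astype(np.float64)
    h, w, _ = ref.shape
    centre = ref[h // 2 - 8 : h // 2 + 8, w // 2 - 8 : w // 2 + 8].mean(axis=(0, 1))
    gain = centre / np.maximum(ref, eps)   # >1 where the reference is dark or colour-shifted
    return np.clip(image.astype(np.float64) * gain, 0, None)

# Synthetic example: a grey frame with 15% too much red along the left edge,
# plus a reference frame showing the same cast, comes out neutral.
h, w = 64, 96
cast = np.ones((h, w, 3))
cast[:, : w // 4, 0] *= 1.15
reference = 0.5 * cast
image = 0.6 * cast
corrected = flat_field_correct(image, reference)
print("left-edge R/G before:", image[:, : w // 4, 0].mean() / image[:, : w // 4, 1].mean())
print("left-edge R/G after: ", corrected[:, : w // 4, 0].mean() / corrected[:, : w // 4, 1].mean())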

Link to post
Share on other sites
