
Will optical filters make you lose ±3 EV on the MM?


lct

Recommended Posts



 

A curious question, to which I thought the obvious answer was yes, before some good friends here explained to me that I was wrong because "the EV change is only from the portion of the spectrum that is being cut out" and "the assumed advantage of the digital filter doesn't really exist".

I then proceeded to buy a red filter on purpose :rolleyes: I put it on my M8.2 with a 35/2 ASPH (ISO 640, f/5.6, 1/250 s), and I got the images you can see below.

I would be very happy if some of you folks could explain whether the quoted statements above are compatible with these pics, and whether you expect the MM to behave differently for some reason.

That way I could make up my mind about the pros and cons of the MM, and incidentally check whether I'm becoming absent-minded.

First pic: in color, straight from C1. Second pic: b&w, straight from C1 & Silver Efex with no filter on. Third pic: b&w from the same, with the digital red filter on. Fourth pic: b&w from the same, with the optical red filter on.


A sensor (or a film emulsion) is sensitive to light over a certain part of the spectrum. That part is roughly equivalent to the part we call 'visible light'. If a filter cuts 8/9 of all photographically active light, then it does indeed attenuate light equivalently to minus three Exposure Values.

 

With a panchromatically sensitive sensor or film (i.e. sensitive to the whole visible spectrum), a medium yellow filter will cut about 50% of that light, = one EV or f-stop. But when I was a kid, emulsions that were sensitive to blue light only (and violet, and UVa) were still called 'ordinary'. With such an emulsion, a medium Y filter cut nearly all the actinic light there was. With an orthochromatic film (sensitive to blue and green), the same filter would normally cut about 2 EV.
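The arithmetic behind those numbers is just a base-2 logarithm of the transmitted fraction. A minimal sketch, using the fractions quoted above:

```python
import math

def ev_loss(transmitted_fraction):
    """Stops of attenuation for a filter passing the given fraction of actinic light."""
    return math.log2(1.0 / transmitted_fraction)

# A filter cutting 8/9 of the photographically active light passes 1/9:
print(round(ev_loss(1 / 9), 2))   # a bit over 3 stops
# A medium yellow filter passing ~50% on panchromatic material:
print(round(ev_loss(0.5), 2))     # exactly 1 stop
```

Cutting exactly 7/8 of the light would be exactly 3 EV; 8/9 comes out a shade over.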

 

The old man from the Filter Days


Having used yellow and red filters over the years for B&W photography, my experience matches the images above: around a 3-stop reduction in EV, as printed on the periphery of the filter.

 

I can't see why the sensor in the Monochrom would behave any differently.

 

Interesting theories aside, surely you can tell simply by looking at the filter? A red filter absorbs everything but the red channel (okay, yes we're talking visible light), so the idea is that the green and blue channels fall away. What you're left with is the red spectrum at a reduced EV.

 

Think of it as being a neutral density filter, but only in the red channel.

 

If the EV weren't reduced, there would be two further problems: how do you absorb the blue and green channels without an overall reduction in EV, and how do you boost the residual red channel to maintain the EV value?

 

I appreciate it is simplistic, but it works for me. Either you can produce a filter which maintains the overall EV value but gives a uniform red cast to the image (that would be a filter to behold), or the sensor is responding to the residual red light transmitted by the filter plus something else, unaffected by the absorption of the blue and green light, that still maintains the EV value. If the latter were the case, then metering in the green, blue or red channel would be a waste of time, as it would not affect your exposure ...

 

Cheers

John


Yes, there is no magical free lunch from using a digital monochrome camera. However, as you demonstrated, the color digital file has already taken the filter factors into account.

 

It seems to me from the filter box that you are using a #29 red filter, and that absorbs quite a bit more than 3 stops; more like 4 1/2 stops or so.

 

A number 25 red is more typical for landscape photography (dramatic sky) and will absorb about 3 stops. But this depends on the lighting and is based on an 18% neutral grey subject and normal contrast "processing." (A white card or a darker card may be different depending on the tone curve used.) Exposure compensation may need to be altered from the recommended filter factor depending on the effect you wish to achieve whether on film or digital.

 

This explains it fairly well, but I think you already had a pretty good handle on it. For laughs, you ought to split the channels of the shot made through the red filter and compare that to the red channel of the unfiltered color shot. That will give you an idea of the neutral density in that filter.
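That comparison could be sketched as follows, assuming the two red channels are available as linear-light arrays; the data here is synthetic, standing in for real demosaiced channels loaded from the two exposures:

```python
import numpy as np

def stops_between(filtered_red, unfiltered_red):
    """Difference in mean linear level between the two red channels, in stops (EV)."""
    return np.log2(unfiltered_red.mean() / filtered_red.mean())

# Stand-in data: in practice these would be the red channels of the filtered
# and unfiltered shots, converted to linear values first.
rng = np.random.default_rng(0)
unfiltered = rng.uniform(0.2, 0.8, size=(100, 100))
filtered = unfiltered / 4.0   # pretend the filter eats 2 stops even in its passband
print(round(float(stops_between(filtered, unfiltered)), 2))  # -> 2.0 by construction
```

The residual attenuation within the filter's own passband is the "neutral density" component the post refers to.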

 

http://www.kodak.com/US/plugins/acrobat/en/motion/support/h2/h2fltrs.pdf

 

"The total photographic effect obtained with a particular filter

depends on four main factors: its spectral absorption

characteristics, the spectral sensitivity of the sensitized

material, the color of the subject to be photographed, and

the spectral quality of the illuminant."

 

*All filters absorb part of the incident radiation, so their use usually requires some increase in exposure over that required when no filter is used. The number of times by which an exposure must be increased for a given filter with a given material is called the filter factor, or multiplying factor.


Yes, the posted pictures explain the effect of filters (digital and optical) better than any words: exactly what one would expect in theory. The standard b&w image has an overall "right" balance of grey tones; the digitally red-filtered one has the blue-greens heavily darkened and the reds turned almost white; the optically red-filtered one is so darkened overall by the filter's EV factor that the balance of grey tones becomes much less significant (though it is still, of course, appreciable; see the little red plastic bottle).

 

In my opinion, the statement "the EV change is only from the portion of the spectrum that is being cut out" is wrong and misleading: with a color filter on a panchromatic film/sensor (like the MM's), you get an EV change, usually specified on the mount of the filter, period.

I say "wrong" because, strictly and theorically speaking, if you'd have a filter that COMPLETELY cuts out a certain range of wavelengths, and you take the picture of a subject with colors whose spectrum is COMPLETELY within these range, you have a pure BLACK regardless of exposure ... the EV factor is "+ infinity"...


Which can be demonstrated with an IR filter, which has an enormous filter factor (50 or more) for visible light, yet has (or should have) a filter factor of 1, i.e. no attenuation, for infrared light within a defined spectral range.


Here's something I've been curious about for a while and it fits in with the idea of separation filters.

 

If the filters over the M9 sensor only require an exposure correction of 1 stop compared with the filter-free MM, then they must let half of the white light through. And that 1 stop also accounts for any neutral density, so the color attenuation will be even less. If that is the case, how is it possible for any color to be separated by so much in a channel of the resulting file, even if the object itself is quite saturated? (Yes, I know the filter has more effect on one region of the spectrum, but these examples are in the 6-7 stop range.)

 

I split the channels of this M8 color image, and clearly there is much, much greater than 1 stop of difference in some of the colorful objects. The values on the red lipstick go from around 2 (green channel) to about 180 (red channel), and the green on the book goes from about 1 (red channel) to about 110 (green channel). These amounts will vary a little depending on where you sample, but you get the idea that this is a large difference. I would bet dollars to donuts that an M9 will show similar separation.
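One caveat when turning those 8-bit values into stops: the file is gamma-encoded, so the raw ratio (log2(180/2), about 6.5 stops) actually understates the difference in linear light. A sketch assuming the file uses the standard sRGB curve (the camera's real tone curve will differ):

```python
import math

def srgb_to_linear(v8):
    """Undo the sRGB tone curve for an 8-bit channel value (assumes sRGB encoding)."""
    s = v8 / 255.0
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# Lipstick readings from the post: red channel 180, green channel 2.
stops = math.log2(srgb_to_linear(180) / srgb_to_linear(2))
print(round(stops, 1))  # noticeably more than the naive 6.5 stops
```

So the separation in scene-referred light is, if anything, larger than the 8-bit numbers suggest.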

 

So is this accomplished via magic, or are the filters denser than some think? Could it be that something else is different about the MM besides the removal of the Bayer filter and its replacement with some kind of clear glass?


I split the channels of this M8 color image, and clearly there is much, much greater than 1 stop of difference in some of the colorful objects. The values on the red lipstick go from around 2 (green channel) to about 180 (red channel), and the green on the book goes from about 1 (red channel) to about 110 (green channel). These amounts will vary a little depending on where you sample, but you get the idea that this is a large difference.

As mentioned previously, the RGB values in the image aren't indicative of the values read out from the red, green, or blue sensor pixels. In fact, any value in the red channel depends not just on the red sensor pixels, but also on the green and blue ones. In the simplest case, the colour space transformation from the device-dependent colour space of the sensor into a device-independent colour space such as sRGB or Adobe RGB involves a multiplication of the demosaiced RGB values by a constant matrix, but there are also alternative, more complex methods.
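A minimal sketch of that simplest case. The 3×3 matrix here is invented for illustration (real matrices come from profiling the sensor), but it shows how each output channel mixes all three sensor channels:

```python
import numpy as np

# Hypothetical 3x3 matrix taking camera-native RGB to a working colour space.
# The coefficients are made up for illustration, not from any real profile.
cam_to_working = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

camera_rgb = np.array([0.30, 0.25, 0.20])   # demosaiced sensor values (linear)
working_rgb = cam_to_working @ camera_rgb   # each output mixes all three inputs
print(working_rgb)
```

The negative off-diagonal coefficients are what push the output channels further apart than the mosaic filters alone would, which bears on the separation question above.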


Various color filters in color,

 

[Image: Slide1 by anachronist1, on Flickr]

 

and in Visible-Monochrome.

 

[Image: filters in monochrome by anachronist1, on Flickr]

 

Note the X1 Green-Color separation filter looks identical to the ND4.

 

Kodak DCS200, KAF-1600, first generation detector. The internal hard drive still works, 20 years old. It might be the last one that is operational, and is older than the Digital camera displayed in the Smithsonian.

 

If Leica wants to send me a Monochrome M9, I will be happy to repeat this test to factor in its spectral response. Otherwise, I will have to wait for mine to arrive before I repeat it.

Looking at the histograms of the Kodak file, you can see the separate peaks under the filters.

 

EDIT - I stacked two IR-cut filters and tested that the IR bleed had been eliminated. I use a Wii light bar, which uses super-bright IR LEDs. The RED response of this camera is high compared to a modern camera, so the R60 looks brighter here than it would with the M9 Monochrome.


Well, the firmware is different, and so is the DNG, but that should not make a difference. The tonal response curve, however, has been tweaked specifically by filtering the red and infrared end, and that should make a difference.


As mentioned previously, the RGB values in the image aren't indicative of the values read out from the red, green, or blue sensor pixels. In fact, any value in the red channel depends not just on the red sensor pixels, but also on the green and blue ones. In the simplest case, the colour space transformation from the device-dependent colour space of the sensor into a device-independent colour space such as sRGB or Adobe RGB involves a multiplication of the demosaiced RGB values by a constant matrix, but there are also alternative, more complex methods.

 

I know little about chips or the processing methodology, but I think I understand it is not just a simple linear readout. However, you have to have the information separated in the first place in order to be able to rebuild it, regardless of the methodology. And if you use weak filters you will record a lot of muddy, unsaturated colors, because the color channels won't be as different from each other as you need. You can't then just take a muddy red and somehow know it should be depicted as a pure, saturated red; if you did that, it would be impossible to simultaneously show a muddy red.

 

I don't see that you or anyone else is really trying to answer the question: how can such weak filters make such dramatic yet accurate changes in the color channels? It is a very simple question. All separation systems I have seen require stronger filters. Let's take the M Monochrom and see what kind of filters are needed to make similar separations. I bet they will be denser than filters with a factor of 2.

 

I measured part of the red bottle and got these values: R171 G5 B6. That is pretty pure red. By what process were the values in the green and blue channels eliminated (about 6 stops of reduction), if not by fairly strong filtering when the image was captured? Likewise, the green in the book has approximately these values: R1 G110 B20.

 

Think about the M8 and the effect its weak internal IR filter had on the look of some images made under a lot of IR radiation. There is no way to remove the IR from the color channels once it is recorded there and still accurately depict magenta in the scene, so the solution was to use an additional filter on the lens to keep it out. From that experience, many here should be familiar with the limitations of weak filtering.


Simple experiment- put an X1 filter on your camera, take a shot, remove it, repeat the shot. Then compare only the Green channel between the two shots, and compare the full image between the two shots. That should give an idea of the difference between an actual color separation filter and the green filter used in the M9 mosaic filter. Try it with a red color separation filter as well. Post the results.


From Leica

M9 spectral sensitivity data

http://www.digoliardi.net/monochrom_color.pdf

 

Michael: What about using Lab mode in Photoshop? What can we get by interpreting the lightness (L) value, if it can be put within the context of this thread?

 

L = 0 is black
L = 100 is white
L = 88 is pure green
L = 54 is pure red
L = 24 is pure blue


Simple experiment- put an X1 filter on your camera, take a shot, remove it, repeat the shot. Then compare only the Green channel between the two shots, and compare the full image between the two shots. That should give an idea of the difference between an actual color separation filter and the green filter used in the M9 mosaic filter. Try it with a red color separation filter as well. Post the results.

 

That makes sense, as long as you compensate for exposure by equalizing on a grey card, of course. If there isn't much difference, the Bayer filter must be fairly saturated.

 

Edit - I just did your suggested test using a Konica Minolta A2, as that is the only camera I have here today. I used an R60 filter, and when I split the channels, the blue and green channels were black, so it was totally effective. The red channel of the shot through the filter blocked a bit more than the red channel of the full-color shot. (Some of the magenta and yellow is a bit darker on the grey-scale strip where the type is, and there might be very slight exposure differences.) But at least for this camera, the built-in red filter is fairly close to as effective as an R60. I used electronic flash.

 

So the question is: are there some kinds of filters with a factor of 2 that are nearly as effective as those with a factor of 6-8? If so, how do they accomplish that, and why aren't they available as camera filters?
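For the grey-card equalization mentioned above, the effective filter factor falls straight out of the two patch readings. A sketch with hypothetical linear values:

```python
import math

# Mean linear level of a grey-card patch without and with the filter
# (hypothetical readings for illustration).
grey_unfiltered = 0.18
grey_filtered = 0.0225

factor = grey_unfiltered / grey_filtered
print(f"filter factor {factor:g} = {math.log2(factor):.1f} stops")
```

The readings must be linearized first; comparing gamma-encoded 8-bit values directly would understate the factor.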


The eye is not a camera! Nor a digital sensor! It strikes me as making no sense to ignore how the human eye perceives color as contrasted to how the sensor does.

 

In any event, this chart is an attempt to map how the Monochrom sensor responds to color as contrasted to the human eye in terms of brightness. I am sure someone will correct me if I got it wrong.

 

Now, if we can put f-stop equivalents into the chart, it might be apparent how some filters have more or less effect upon the sensor than upon how we see the colors!
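Putting f-stop equivalents on such a chart is just a log2 of each relative response against a reference channel. A sketch with invented numbers (real ones would be read off the sensor's spectral-sensitivity chart):

```python
import math

# Hypothetical relative brightness responses, with green as the reference.
# These values are illustrative only, not measured Monochrom data.
response = {"green": 1.00, "red": 0.70, "blue": 0.35}

for color, r in response.items():
    print(f"{color}: {math.log2(r / response['green']):+.2f} EV vs green")
```

The same conversion could be applied column by column to a measured response chart to express it entirely in stops.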


The eye is not a camera! Nor a digital sensor! It strikes me as making no sense to ignore how the human eye perceives color as contrasted to how the sensor does.

 

 

 

I think that has all been taken into consideration in the development of films, digital camera sensors, and processing; there has been a lot of testing with subjects to find out preferences for reproduction. I don't see what it has to do with filter factors when using filters on the MM vs. a Bayer-filtered M9. In any case, when making b&w conversions from a digital file, you can adjust the way colors are depicted to match what you feel is the right rendering of human vision. Likewise, you could use filters on a b&w camera to do the same thing.


Well- will be an inside weekend, daughter has a Summer cold. Found a prism.

 

I'm thinking the way to do this is to use a prism, take some pictures of the spectrum.

 

Anybody got a 5th grader? This would make a great 5th-grade science fair project. Nikki went with electromagnetism: we put a lot of voltage through a nail and picked up over 100 nails. Electromagnets get hot at 12 V... we would have seen it in IR.


Archived

This topic is now archived and is closed to further replies.
