
Yellow Filter On An M9, Good Idea?


Leica Fanatic


With the M8 you could post-process the IR bleed out of the image and have colors that looked reasonable. You did not have to use an IR filter. Yet, most people used the IR cut filter.

 

There is far more red bleed into the blue channel than there is IR bleed in all the other channels combined. You can post-process it away to look reasonable, but the red light is still in the blue channel, just like IR bleeds into the visible band when you don't filter it out.


Rather more complex than you seem to think.

 

No it is simple. When you shoot a typical scene through an R60 filter you will lose 75% of the detail because you only have the red channel to work from. Using other filters will lose somewhat less data depending on their color and density. Nothing can replace the lost data. Don't think of this like using b/w film.

 

But when using a full color image, software can replace one color with a darker color and still preserve all of the detail in the image for the b/w conversion. (No I don't adjust individual color curves or overall color balance.) The sample I posted should make this clear to everyone.


No it is simple. When you shoot a typical scene through an R60 filter you will lose 75% of the detail because you only have the red channel to work from. Using other filters will lose somewhat less data depending on their color and density. Nothing can replace the lost data. Don't think of this like using b/w film.

 

But when using a full color image, software can replace one color with a darker color and still preserve all of the detail in the image for the b/w conversion. (No I don't adjust individual color curves or overall color balance.) The sample I posted should make this clear to everyone.

 

If the camera used true separation filters, that would be true. It does not. If you use only the red channel, you get 25% of the pixels. If you put a red filter over the lens, you get the red channel, plus red bleed into the green channel and red bleed into the blue channel. At 600nm, blue is way down, green is at ~11%, and red is at 26%.

 

The more interesting case is the Yellow filter that the OP asked about. At 520nm, Blue is at 25% efficiency. So it makes sense to use the Blue channel IF you put a yellow filter over the lens.

 

Anyway, all of this gave me a good excuse to add a demosaic routine to my FORTRAN code and have a look at the individual channels.
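For anyone who wants to follow along, the simple nearest-neighbor approach described later in this thread can be sketched in a few lines. This is Python with numpy rather than the author's FORTRAN, and an RGGB cell layout is assumed (the M8's actual mosaic order may differ):

```python
import numpy as np

def demosaic_nearest(mosaic):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic (even
    dimensions assumed). Each output pixel just copies the nearest
    sample of each colour, so no new detail is invented."""
    def up(plane):
        # Replicate each cell's value over its 2x2 block.
        return np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)

    r = up(mosaic[0::2, 0::2])                               # red sites: one per cell
    b = up(mosaic[1::2, 1::2])                               # blue sites: one per cell
    g = up((mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0)  # average the two greens
    return r, g, b
```

Each 2x2 cell contributes one red, one blue, and two green samples, which is why the green plane averages two sites before replicating.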

Edited by Lenshacker

Well in my shot through an R60 there was nothing in the green or blue channels. The image really lacks detail compared with the non-filtered image. I also shot the same scene with a K2 yellow filter and it doesn't lose much detail but it doesn't darken the sky very much either.

 

So it is clear to me that I would not recommend to the OP, or to anyone, that they use a yellow filter on the M9. Do you still advise him to do so?

 

Why don't you and others check out Capture One (I think they have a free demo) and take a look at how the color sliders work in its b/w conversion tool? (I believe some other software has a similar tool.)

 

I'll leave the Fortran and study of filter specs to you.

Edited by AlanG

For all intents and purposes, a Bayer-patterned color sensor is "color film."

 

It only makes sense to use strong colored B&W filters designed for monochrome tone controls on such a sensor - IF you would be happy shooting color film with the same filters.

 

If the second seems a silly idea (and I submit that it is - except for some artistic endeavours such as Pete Turner's work), then the first is also a silly idea.

 

Fun? Maybe.

 

When I was 8 or so, I discovered that I could "play" a phonograph record by rolling typing paper into a cone, sticking a sewing needle through the cone tip, and resting the needle point in the track of the revolving record. A home-made analog acoustic gramophone.

 

Fun? You bet. Even educational, at least for an 8-year-old. But not a lot of future in it, even in 1962.

 

Now - color-balancing filters, designed for color photography, can indeed help with lighting that is already missing certain wavelengths. E.G. indoor lighting. By holding back the excess yellow and giving your blue channel a fighting chance through additional total exposure. Just as one shot slides with a blue CC filter to get rid of yellow interiors or pink FL filters to get rid of fluorescent-light green casts.

 

It's the exact opposite of Alan's red-filter example. You are evening out the exposure across all color pixels, rather than destroying one or two channels through massive underexposure (the yellow light was already doing that, and you are fixing the problem).

 

You don't even get additional noise - provided that you actually increase exposure (time or aperture), and don't just ramp up the ISO to compensate.
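That last point can be checked with shot-noise arithmetic. Photon noise is Poisson, so SNR = sqrt(N): one stop more actual light (time or aperture) buys sqrt(2) more SNR, while raising ISO only amplifies the same photon count. A minimal sketch (the photon count is an arbitrary illustrative number):

```python
import math

def shot_noise_snr(photons):
    """Poisson shot noise: noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

base = 10_000                                  # photons at the base exposure
snr_base = shot_noise_snr(base)                # sqrt(10_000) = 100
snr_one_stop_more = shot_noise_snr(2 * base)   # sqrt(2) better: more real light
snr_iso_push = shot_noise_snr(base)            # ISO doubled: same photons, same SNR
```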

Edited by adan

Now - color-balancing filters, designed for color photography, can indeed help with lighting that is already missing certain wavelengths. E.G. indoor lighting. By holding back the excess yellow and giving your blue channel a fighting chance through additional total exposure. Just as one shot slides with a blue CC filter to get rid of yellow interiors or pink FL filters to get rid of fluorescent-light green casts.

 

 

Yes this seems like a good idea. I might even try strong blue filters with sodium vapor to see if there is any other color besides yellow there.

 

My typical issue is interior work with mixed light: daylight, strobes, and then the yellow from overhead lights and other room lights, which are typically CFL now. It is also a problem with tungsten. A camera filter would affect the overall color balance, and my issue is the shadows and some areas being filled with yellow light. It is not practical to put filters on the lights or replace the bulbs. (I used to do this sometimes in the past when shooting film.)

 

My solution is using DXO for my raw conversions and I shift the yellow slider (that color only, not the color balance) to slightly red then de-saturate it some. I also can turn up the saturation or further tweak each other color individually to try to improve the shot. As time has gone by I have seen our software tools evolve.


If the second seems a silly idea (and I submit that it is - except for some artistic endeavours such as Pete Turner's work), then the first is also a silly idea.

 

This is off topic but Turner was big on using a slide duplicator and colored filters. (Among many other things of course.) He did things such as shoot slide film and process it as a negative and sandwich it with a positive. In college I tried out a lot of his techniques to get a basic understanding of how to use filters creatively.

 

When I first saw his work as a teenager back in the late 60s it looked revolutionary, inspired me and helped open new perspectives for me. (I mostly was a Leica toting Tri-X shooting photojournalist wanna be at that time.)

 

I was fortunate that when I went to RIT, I had Robert Bagby as my professor for color photography and printing... the same man that Turner said encouraged him to be creative back in the 50s. And I was very excited my first year as a student when Turner made a presentation to our class and individually reviewed and commented on our work. (Yes it was 45 years ago but is as clear as if it happened yesterday.)


Leica M8, no filter. DNG files "Opened As" RAW, bypassing the demosaic process in Photoshop. The sensor really is Monochrome...

 

[Image: full_color by fiftyonepointsix, on Flickr]

 

100% crop. The dark pixels are RED.

 

[Image: full_color_crop by fiftyonepointsix, on Flickr]

 

Leica M8, Y48 filter, all shot on auto-exposure; the shutter speed is twice as long.

 

[Image: yellowFilter by fiftyonepointsix, on Flickr]

 

[Image: yellowFilter_Crop by fiftyonepointsix, on Flickr]

 

100% crop. The blue channel is sensitive to light above 480nm, at about the same QE as the red channel. It is not a separation filter; there is sensitivity all the way out to the red region. Green cells are also being cut by the yellow filter. Red benefits from the additional exposure. The pixels in the Bayer cell are much more balanced. You should be able to take advantage of that for an optimized demosaic process.

 

Not many clouds, but the Yellow filter is bringing out the thin clouds. A Y52 (Deep Yellow) would increase the contrast and still pick up contributions from the Blue channel that would otherwise be lost. Probably a practical limit for blue.

 

My recommendation to the OP, and everyone else, is to try this out for themselves. These images depict what the sensor is recording. I am going to write my own demosaic routine in FORTRAN to optimize the monochrome conversion process assuming a Yellow filter has been used. If it works- someone might be able to incorporate the technique in an App.

Edited by Lenshacker

I have a first-cut at the demosaic routine. Bayer cells can give you a headache, no wonder I bought the M Monochrom.

 

But this is fun.

 

This is the output made by interpolating the red, green, and blue channels and adding them. Levels was used in Photoshop on the finished monochrome image, since Photoshop expects 16-bit values and the M8 produces 14 bits. I did not scale in software, as I'm going to try to stuff the result back into a DNG.

 

This is with the Y48 filter on the camera.

 

[Image: MONO_Y48_SMALL by fiftyonepointsix, on Flickr]

 

This is the 100% crop.

 

[Image: MONO_Y48_CROP by fiftyonepointsix, on Flickr]

 

Simple nearest-neighbor interpolation. I might compare it with a spline; I have the routine in the code, pulled from a program written in the 1980s. But this image looks good.

 

I will compare the M8 with the M Monochrom, both with Y48 filters. Personal opinion: that's about as strong a filter as you can use and still keep the blue channel as a contributor.

 

This next image is after outputting the combination of the interpolated channels, but subtracts Blue from Red+ Green. This is after the Y48 filter, so the image is "about" what you would get with a Y52.

 

[Image: MONO_Y52_SMALL by fiftyonepointsix, on Flickr]

 

I already had the interpolated channels computed, so it was free.
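A sketch of that channel combination, with numpy standing in for the FORTRAN. The x4 scaling of 14-bit M8 values into Photoshop's 16-bit range is my assumption about the "Levels" step, not the author's actual code:

```python
import numpy as np

def combine_channels(r, g, b, subtract_blue=False):
    """Sum the interpolated channels into one monochrome plane.
    With subtract_blue=True this forms (R + G) - B, the rough
    after-the-fact approximation of a deeper (Y52-like) filter."""
    mono = r + g - b if subtract_blue else r + g + b
    mono = np.clip(mono, 0, None)           # negatives can occur when subtracting
    # Assumed scaling: stretch 14-bit data toward the 16-bit range.
    return np.clip(mono * 4, 0, 65535).astype(np.uint16)
```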

Edited by Lenshacker

In an RGB environment how can you work on individual colours without manipulating the colour channels?

I fear we will have to agree to disagree on this.

 

I will try to give you a different way to think about this.

 

Picture using a deep red filter on a 20 MP Bayer sensor. The green and blue channels will have little detail. Thus the photo's detail will have to be made up almost entirely from the information in the red channel alone using only 5MP. Then this information will be extrapolated to 60 Megabytes of information for an 8 bit RGB file.
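The pixel arithmetic is easy to verify: in an RGGB mosaic (assumed layout), the red photosites occupy one corner of every 2x2 cell, so a deep red filter leaves a quarter of the sensor's pixels carrying real signal:

```python
import numpy as np

def red_sites_only(mosaic):
    """Extract just the red photosites of an RGGB mosaic:
    one pixel per 2x2 cell, i.e. a quarter of the sensor."""
    return mosaic[0::2, 0::2]

full = np.zeros((4000, 5000), dtype=np.uint16)  # a hypothetical 20 MP mosaic
red = red_sites_only(full)                      # 2000 x 2500 = 5 MP of usable data
```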

 

If you shoot without a filter, all of the detail will be preserved (assuming a standard scene). The raw converter will extrapolate the 20MP of sensor data into a 60-megabyte 8-bit RGB file, mapping RGB values to each pixel. Once that information is extrapolated (or even in preview mode), you can then change around the colors and tones pretty far without seriously degrading the detail.

 

You are not manipulating the color channels as a whole but are mathematically replacing the numbers being assigned as specific colors to specific pixels. For instance you can replace red with green or any other color or shade and that has nothing to do with what color was originally recorded in each color channel by the separation process. (Unlike changing the curve of one or more channels.) And of course if you use a filter when taking the photo, and don't have all of this information, you can't do this.
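As an illustration of that distinction (not Alan's actual DXO or Capture One workflow): target one hue range before the grayscale conversion, here darkening only blue-ish pixels via stdlib colorsys. The hue window and the Rec.601 luma weights are arbitrary illustrative choices:

```python
import colorsys

def darken_hue_then_gray(pixels, hue_lo=0.5, hue_hi=0.75, factor=0.4):
    """Darken only pixels whose hue falls in [hue_lo, hue_hi]
    (roughly cyan..blue), then convert to grayscale -- the
    'darken just the sky' move, done on colours, not channels.
    pixels: list of (r, g, b) floats in 0..1; returns gray floats."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s > 0 and hue_lo <= h <= hue_hi:
            v *= factor                       # darken the targeted colour only
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        out.append(0.299 * r2 + 0.587 * g2 + 0.114 * b2)
    return out
```

Neutral and non-blue pixels come through unchanged, so the tonal relationships elsewhere in the frame are untouched, which is the whole point of the technique.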

Edited by AlanG

[Image: M1015737 by fiftyonepointsix, on Flickr]

 

Generated my first DNG file: a linear-DNG monochrome from the M8. Replaced all of the image values, then patched the Image File Directory to set tag 0x106 (PhotometricInterpretation) to linear; a few other tags were patched and others eliminated. All EXIF is preserved. Y48 filter, simple interpolation, channels added.

 

You would not use an R60 filter with the M8 or M9 unless you wanted a sharper cutoff than what the dye in the Bayer pattern mosaic filter can deliver. You are going to cut the resolution by a factor of 4. An O56 might be interesting as green response will be about the same as red.

 

The Y48 works well given the spectral response of the blue-green dye. That was the original question asked by the OP.

 

You can manipulate and change colors with digital; it makes a nice computer graphic, but it loses all connection with how the scene originally appeared. There are two ways of doing it, and I've done a lot of both. I generated fractal representations of scenes 25+ years ago and had to convince people that they were not real. This was fun; I got to learn the DNG spec.

Edited by Lenshacker

I am using a red filter as an example to explain the conceptual difference between working on color channels and working on individual colors. The concept applies to any filter or method. Besides, when I shot b/w I used red filters very frequently for dramatic skies. And then I probably still burned in some on the prints.

 

There is a huge difference between simply playing around with filters and color curves and actually using sophisticated software tools to modify individual colors before grayscale conversion.

 

You and some others don't seem to get this or care, but that's the fact jack. I am hardly the first person to realize this and utilize these tools for great effect. Countless people have been working this way for quite a while. When one shoots color objects to reproduce in b/w there are numerous reasons one might wish to change the tone of individual colors or objects in order to make them stand out or recede.

 

What all of your samples about demosaic methods, etc. have to do with the topic is beyond me. It doesn't sound easy, intuitive, or very useful, and I doubt that is how the OP wants to work. Lenshacker, I sort of feel these responses from you get pretty sidetracked into technical departures that interest you, so I really was addressing Jaapv's comment and trying to explain that there is a lot more to color adjusting and b/w conversion than what can be done by changing curves in color channels or using filters in front of the lens.

 

If you and others can't see the benefit of isolating just the blue sky in a scene to darken it before conversion, without affecting the relationship of colors in other parts of the photo, then I question how you look at photography. Consider it sort of like a very accurate graduated filter used just on the sky.

Edited by AlanG


There is a huge difference between simply playing around with filters and color curves and actually using sophisticated software tools to modify individual colors before grayscale conversion.


Yes, there is a difference between actually using physical color filters and playing around with complicated software tools.

The filter attenuates light depending on its wavelength. The software manipulates pixel values depending on the ratio of the values of its color channels. It has no conception of the spectral colors of the light contributing to the pixel. It only sees the weighted result.

Take a yellow point, for example. Once its light has passed the color filter array and has been measured and tallied and de-mosaiced, all you have is a triplet of numbers, one for the blue component (probably close to zero) and one for the green and the red components, respectively, with roughly similar values.

The number of things you can do to the image of that yellow point is vast. You can not, however, tell whether the light which passed through the lens consisted of a single wavelength, a narrow or wide spectrum of wavelengths, a cocktail of several single wavelengths or any combination thereof. You can only tell that the cells behind the green and the red filter have been excited to such-and-such an extent.
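A toy numeric demonstration of that point (the sensitivity numbers are invented for illustration, not real CFA data): two physically different spectra can excite the three cell types identically, and once that happens no amount of software can tell them apart.

```python
# Assumed per-wavelength sensitivities (R, G, B) at three wavelengths (nm):
sens = {550: (0.10, 0.90, 0.05),
        580: (0.50, 0.50, 0.03),
        610: (0.90, 0.10, 0.01)}

def sensor_response(spectrum):
    """Integrate a discrete spectrum {wavelength: power} against the
    assumed sensitivities; the sensor only ever sees this triplet."""
    r = sum(p * sens[w][0] for w, p in spectrum.items())
    g = sum(p * sens[w][1] for w, p in spectrum.items())
    b = sum(p * sens[w][2] for w, p in spectrum.items())
    return (round(r, 6), round(g, 6), round(b, 6))

single_line = {580: 2.0}            # one narrow "yellow" line
mixture = {550: 1.0, 610: 1.0}      # a green line plus a red line

# Both spectra produce the identical triplet (1.0, 1.0, 0.06):
# the camera cannot distinguish them, a physical filter could.
```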

The red filter, OTOH, will attenuate all parts of the light emanating from that point depending on the distance of their wavelengths from a specific value. That's something which is not possible in PP.

Hence, you can filter your image in two distinct ways. The result might be similar in most cases, but they will not be identical. Which of the two ways you use is up to you and should depend on the desired outcome.

A landscape reduced to its hues of the color red should be photographed through a red filter.

When wildly changing color values is the goal, you can do that in PP only, unless you have some false-color optics at your disposal.

 

 

Yes, there is a difference between actually using physical color filters and playing around with complicated software tools. […]

 

I am not even remotely talking about filtering light forming an image. I am talking about making an image work in the way I want that can't be accomplished with any kind of filters.

 

Yes, I am reasonably familiar with filters, having owned hundreds of them, including almost every Kodak gel in 3" and 4", color meters, etc. Do you think I bought them by accident? This is all I have left after giving away many.

 

If you want to restrict the light in some way before it hits your color sensor, that is fine with me as long as you know what you are doing. But I can't think of any typical example outside of IR or some scientific use where I'd recommend it to anyone beyond the example given by Adan.

 

Here's my suggestion, find a scene that you think really requires you to filter the light in some way and shoot it with the filter and without. Then adjust your b/w conversion from the filtered image to your satisfaction. Send me that tif file in full res and the original raw color unfiltered file and let me see if I can match it and also try to enhance it when I convert it.

[Image visible to registered members only]

Edited by AlanG
