Likaleica Posted May 20, 2015 Share #1

This is regarding theoretical resolution only. Nothing else. Written by someone with no electrical engineering knowledge or background. Just a physics question, if you will.

The 246 part is easy. I get it. Each pixel on the sensor reads a different density of light, all the data are collected, and a monochrome file is generated. Sounds simple, although I am sure it's not.

On the 240, the sensor is covered by a Bayer filter, which is a series of quadrangles of RGB micro filters. Each filter has a "filter factor." The camera's software must consider this filter factor to come up with a true density for each pixel, and then must extrapolate from the four pixels what the color of that quadrangle must be. Hence, the resolution would be less than a file generated by a Monochrom camera, because four pixels are involved in the calculation of a color point vs. one pixel per light-density point.

But regarding conversion of files to monochrome on a 240: why can't the software correct for the color filter factors to assign a density of gray to each pixel while ignoring the color extrapolation? Wouldn't this theoretically keep the resolution of a monochrome file from a 240 the same as from a 246? The micro filter should not affect resolution, which is determined by what comes out of the back of the lens and the individual pixels. It's not as if the poor optical quality of each micro filter would degrade the image.

When I enlarge Thighslapper's DNG files from the two cameras up to 1200% and convert the 240 to gray scale, I see absolutely no difference in the gray shading of the individual pixels. (No, this isn't extreme pixel peeping; it's trying to understand how the sensor and software work.) So I'm wondering: at low ISOs, shouldn't the resolution theoretically and practically be identical?
And isn't it the lack of "filter factors" which largely accounts for the increased ISO performance of the 246? More photons can strike the individual pixels, so the noise is less?
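The photon intuition above can be put in rough numbers. A back-of-envelope sketch in Python, with a made-up photon count and a made-up filter transmission (not real sensor figures): photon shot noise grows as the square root of the photon count, so a filter that passes only a fraction t of the light cuts the shot-noise-limited SNR by sqrt(t).

```python
import math

photons = 10000     # photons reaching a filterless (Monochrom) pixel; illustrative count
t = 0.5             # fraction of light a hypothetical color filter transmits

# Shot noise is sqrt(N), so the shot-noise-limited SNR is N / sqrt(N) = sqrt(N)
snr_mono = photons / math.sqrt(photons)
snr_bayer = (photons * t) / math.sqrt(photons * t)

print(snr_mono)               # 100.0
print(snr_bayer / snr_mono)   # sqrt(t), about 0.707 for t = 0.5
```

So with these illustrative numbers, losing half the light behind a filter costs roughly 30% of the shot-noise SNR before any demosaicing enters the picture.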
jonoslack Posted May 20, 2015 Share #2

So, I'm wondering, at low ISOs, shouldn't the resolution theoretically and practically be identical? And isn't it the lack of "filter factors" which largely accounts for the increased ISO performance in the 246?

What an interesting idea . . . but I suspect it couldn't be done in processing software. I would have thought it would be possible to generate a monochrome DNG, but I guess it would be pretty complicated stuff. In any case, it isn't what's happening, so the resolution isn't the same, and the demosaicing does, inevitably, lose you some resolution. How much is a big question! My reaction is that the Monochrom gives about twice the resolution, but that's only instinctive.
Lenshacker Posted May 20, 2015 Share #3

The problem is that the pixels underneath the Red, Green, and Blue cells of the mosaic filter see the world differently. If someone is wearing a blue and red plaid dress, the Monochrom will pick up much more resolution. This is how the sensor sees the world:

full_color_crop by fiftyonepointsix, on Flickr

And this is about how far I pushed it writing my own software for custom demosaic: https://www.flickr.com/photos/90768661@N02/sets/72157651508788447
Lenshacker Posted May 20, 2015 Share #4 (edited)

http://www.leicaplace.com/threads/1145/page-2#post-8617

With a color-demosaic algorithm, 8 of every 12 pixel values are interpolated. M9 Monochrome-DNG conversion:

M1012676_small by fiftyonepointsix, on Flickr

The Fortran program batch converts the color DNG files to Linear-Monochrome DNG. The hard part was preserving the thumbnail image. 100% crop of the M9 monochrome linear DNG:

M1012676_crop by fiftyonepointsix, on Flickr

The nice thing: you get a 15-bit image to work with. FORTRAN-77 source code available...

Edited May 20, 2015 by Lenshacker
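The "8 of every 12" figure follows directly from the RGGB layout, and can be sanity-checked in a few lines of Python (assuming a standard RGGB quad): every 2x2 quad produces 4 pixels x 3 channels = 12 output values, of which only 4 are actually measured behind a filter; a full-color demosaic must interpolate the rest.

```python
# Each 2x2 RGGB quad yields 4 pixels x 3 channels = 12 output values,
# but each pixel measures only the one channel behind its own filter.
quad = ["R", "G", "G", "B"]
channels = ["R", "G", "B"]

measured = sum(1 for px in quad for ch in channels if px == ch)
interpolated = len(quad) * len(channels) - measured

print(measured, interpolated)  # 4 8
```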
jaapv Posted May 20, 2015 Share #5

Certainly interesting. What you won't lose, however, are the optical aberrations introduced by the Bayer filter pattern.
mjh Posted May 20, 2015 Share #6 (edited)

But regarding conversion of files to monochrome in a 240, why can't the software correct for the color filter factors to assign a density of gray to each pixel while ignoring the color extrapolation?

In short, no. There is no way you could deduce a suitable correction factor for each pixel. Or put differently: interpolating the missing colours first (i.e. demosaicing the raw data) prior to converting to monochrome is the camera's or raw converter's way of determining a fitting correction factor for each individual pixel.

For example, in a given row of pixels you may have green- and red-sensitive pixels. The green-sensitive pixels tell you how much green there was at that point, whereas the red pixels inform you about the amount of red. Now if the world were black and white already, you could just apply a fixed correction factor (depending on the transmission of the three types of filters and the spectral sensitivity of the chip itself), but as the world is in colour, this wouldn't work. For a green subject you would get a high value from the green pixels and a low value from the red pixels, and applying a fixed correction factor wouldn't change that at all – the red values would still be lower than the green ones. For a red subject it would be the other way round, and without knowing the actual colour of the subject, the camera could not know what correction to apply. It still needs the demosaicing step to guess the subject's colour, and it is this step that reduces resolution, even when the RGB image gets converted to monochrome eventually.

Edited May 20, 2015 by mjh
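This argument can be made concrete with a toy example (the transmission figures are made up for illustration): dividing each pixel by its own filter factor makes a gray patch come out even, but for a strongly coloured patch the corrected neighbours still disagree, because the remaining difference is the subject's colour, not the filter.

```python
# Illustrative filter transmissions for red and green pixels (made-up numbers)
T_R, T_G = 0.30, 0.60

def corrected_row(red_light, green_light):
    """Adjacent R and G Bayer pixels after dividing out their own filter factor."""
    r_pixel = (red_light * T_R) / T_R    # red pixel records only the red light
    g_pixel = (green_light * T_G) / T_G  # green pixel records only the green light
    return r_pixel, g_pixel

# Neutral gray patch: equal red and green energy -> corrected pixels agree
print(corrected_row(500, 500))

# Saturated green patch: the corrected values still disagree, because the red
# pixel genuinely received less light; no fixed per-pixel factor changes that
print(corrected_row(50, 900))
```

The filter factor cancels out exactly; what is left over in the second case is the colour information itself, which is why only a demosaicing (colour-guessing) step can assign the right gray value.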
jonoslack Posted May 20, 2015 Share #7

In short, no. There is no way you could deduce a suitable correction factor for each pixel.

Thank you – extremely clear, and rather obvious when one thinks about it properly.
Berlinman Posted May 20, 2015 Share #8

The Fortran program batch converts the color DNG files to Linear-Monochrome DNG. ... FORTRAN-77 source code available...

Lenshacker, very cool. I need this for my M240!
Lenshacker Posted May 20, 2015 Share #9 (edited)

Most of the blue and green dye layers have a lot of overlap. By using a yellow Y48 filter you even out the spectral response in the Blue and Green channels; the Blue channel is weaker, but has "roughly" the same shape as the Green channel after filtering with the Y48. I equalized the Blue channel histogram to the Green channel histogram by stretching it. So the above demosaic was done by combining the equalized Blue plane with the Green plane, meaning only 1/4 of the pixels in the "BG" plane were interpolated. The Red plane used the standard 3/4 interpolated pixels. Adding the two planes, 4 of 8 pixel values were interpolated rather than 8 of 12.

Was it worth it? 1) I spent $8K on the M Monochrom. 2) It demonstrated that color cameras preserve highlights for converting to monochrome: the value for "white" is set at 45K, rather than 16383 for the M Monochrom and 3750 for the new one.

It was fun processing DNG files using Fortran running on a DOS computer – you get all of memory for your code with Pharlap extended DOS. The code needs to be extended to handle "Big-Endian" M240 files. I need to look at an M240 DNG file.

Edited May 20, 2015 by Lenshacker
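The "equalize the Blue histogram to the Green histogram by stretching" step could look something like the following Python sketch. This is a guess at a simple linear (mean and spread) match, run on synthetic stand-in planes, not Lenshacker's actual Fortran, which may stretch differently.

```python
import numpy as np

def match_plane(blue, green):
    """Linearly stretch the blue plane so its mean and spread match the green plane.
    (One plausible reading of 'equalized by stretching'; the original code may differ.)"""
    scale = green.std() / (blue.std() + 1e-12)   # guard against a flat plane
    return (blue - blue.mean()) * scale + green.mean()

# Synthetic stand-ins for the two Bayer planes (the blue channel weaker, as noted above)
rng = np.random.default_rng(0)
green = rng.normal(2000.0, 400.0, size=(64, 64))
blue = rng.normal(900.0, 150.0, size=(64, 64))

blue_eq = match_plane(blue, green)
# After stretching, the blue plane's statistics line up with the green plane's,
# so the two planes can be combined into one "BG" luminance plane
print(np.isclose(blue_eq.mean(), green.mean()), np.isclose(blue_eq.std(), green.std()))
```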
Berlinman Posted May 20, 2015 Share #10

My last Fortran program is approx. 25 years old. Would be interesting to find a Fortran 77 compiler running on a MacBook.
pico Posted May 20, 2015 Share #11

Would be interesting to find a Fortran 77 compiler running on a MacBook.

There are several Fortran compilers for OS X.
Likaleica Posted May 20, 2015 Author Share #12

Many thanks for the thoughtful answers.
mjh Posted May 20, 2015 Share #13 (edited)

There are several Fortran compilers for OS X.

But why? Fortran 77 is obsolete even by Fortran standards. Someone forgot to drive a stake through its heart to make sure Fortran was dead. Or COBOL, for that matter. (Admittedly I have a soft spot for LISP – I wrote a textbook on LISP many years ago – which is just as ancient, but conceptually LISP was miles ahead of other programming languages back then, and it still is.)

Edited May 20, 2015 by mjh
Lenshacker Posted May 20, 2015 Share #14 (edited)

Hook up lasers and implement optical feedback loops with your computer – I use FORTRAN and assembly for mine. FORTRAN produces synchronous code and is perfectly suited for embedded processors; realtime response is important in the lab. Static memory allocation means no need to worry about memory management stealing cycles. Fortran-77 is the last version that gets you closest to assembly language and lets you predict what instruction sequences and optimizations the compiler will produce. Fortran-90 started with a very poor implementation of pointers, not worth the bother. If you stay away from dynamic memory allocation in C, you can do the same; I wrote about 120 macros so my C looks like FORTRAN, and I don't have to worry about "==" vs "=" in an if{}. I get to design computers to my specifications, then program them. I wrote about 2000 lines of FORTRAN and assembly code in the last 2 weeks. It's been a great living for the past 35 years. I do miss heavy metal. My wife used to debug the FORTRAN compilers on the Cray X-MP.

The M8 with a yellow filter:

M8 Converted to Linear DNG by fiftyonepointsix, on Flickr

You can see the demosaic artifacts; they turn out more like a dither pattern.

M8 Converted to Linear DNG by fiftyonepointsix, on Flickr

Besides, I wrote most of this software in the 1980s. In FORTRAN. Anyone else remember the NATO standard image format?

Edited May 20, 2015 by Lenshacker
pico Posted May 20, 2015 Share #15 (edited)

But why? Fortran 77 is obsolete even by Fortran standards.

You are showing your age. CPUs are an order of magnitude (or two) faster than they were in our age. So tell us: what is NOT obsolete today, and is it efficient on its own, regardless of CPU speed?

-- Pico - formerly an RTS and assembler programmer.

Edited May 20, 2015 by pico
Lenshacker Posted May 20, 2015 Share #16 (edited)

You cannot believe how FAST Fortran/assembly code runs on a modern CPU, once you have rewritten all of the interrupt handlers and made Windoze go away. I programmed the first-generation parallel supercomputers; you could get a 400:1 speedup by vectorizing code. These days, writing RISC assembly language is still okay – you can beat the C/C++ compiler by a factor of 5x. But there are few applications left where "all that matters is execution speed," even with fast CPUs; I am lucky to be in a field that requires it.

Edited May 20, 2015 by Lenshacker
mjh Posted May 20, 2015 Share #17

You are showing your age. CPUs are an order of magnitude (or two) faster than they were in our age.

Fortran was considered obsolete when I started studying computer science in 1978 … While assembly language has its charms (been there, done that), these days a programming language should be as distant from the actual hardware as possible. CPU speed is a red herring; you want your code to support threading and multi-processing, and you don't want to deal with memory management.
Lenshacker Posted May 21, 2015 Share #18 (edited)

FORTRAN was the language of choice through the 1970s and 1980s for supercomputers, most VAX computers, and array processors. C was catching on, but did not optimize as well. I paid my way through college by optimizing atomic-structure programs.

You code your way, and if people are willing to pay you for it – great. If the software meets the intended function, great. If running too slow causes the device to fail, then you need to speed up execution: either optimize your code, or use a faster processor. Most of the world has gone the latter route, and it shows. A lot of embedded code is very sloppy; it annoys me that the discreet shutter advance on the M Monochrom is so glitchy – due to firmware. Dropping the battery out of a camera to reboot it is usually a sign of firmware unable to handle exceptions. I tend to squeeze the last clock cycle out of a piece of code before moving to a more power-hungry processor. After 35 years, people are still willing to pay for it. Works for me.

But this thread was about monochrome conversion. Using a yellow filter, as pointed out by others in different threads, offers an advantage for converting images from color cameras to monochrome. I just wanted to see how far it could be carried.

Edited May 21, 2015 by Lenshacker
Likaleica Posted May 21, 2015 Author Share #19

The Forum is fun, but taking pictures is funner. My dog Gracie tonight. M-P 240, 24 Asph
mjh Posted May 21, 2015 Share #20

A lot of embedded code is very sloppy; it annoys me that the discreet shutter advance on the M Monochrom is so glitchy – due to firmware.

If it were just a matter of gracefully handling an exception, the issues with the discreet shutter option would have been solved years ago. My suspicion is that the problems arise because the electronics have to control a mechanical device that does not provide some crucial feedback in this case, so the firmware has to guesstimate what state the shutter is in. When it gets the timing wrong, the camera stalls.