Simplistic Question on 240 vs 246


Likaleica

Recommended Posts

This is regarding theoretical resolution only.  Nothing else.  Written by someone with no electrical engineering knowledge or background.  Just a physics question, if you will.

 

The 246 part is easy.  I get it.  Each pixel on the sensor reads a different density of light, all the data are collected, and a monochrome file is generated.  Sounds simple although I am sure it's not.

 

On the 240, the sensor is covered by a Bayer filter: a mosaic of 2×2 groups of red, green, and blue micro filters. Each filter has a "filter factor." The camera's software must take this filter factor into account to come up with a true density for each pixel, and then must extrapolate from the four pixels what the color of that group of filters must be. Hence, the resolution would be less than that of a file generated by a monochrom camera, because four pixels are involved in the calculation of a color point vs. one pixel per light-density point.

 

But regarding conversion of files to monochrome on a 240: why can't the software correct for the color filter factors and assign a density of gray to each pixel while ignoring the color extrapolation? Wouldn't this theoretically keep the resolution of a monochrome file from a 240 the same as from a 246? The micro filter should not affect resolution, which is determined by what comes out of the back of the lens and the individual pixels. It's not as if the poor optical quality of each micro lens would affect image quality.
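The question can be put into a toy model (all numbers invented for illustration, nothing from Leica's actual pipeline): take one green/red row of a Bayer sensor and use the raw values directly as gray, skipping the color step entirely.

```python
# Toy model: raw values along one G/R Bayer row for a uniform subject.
# The subject is described only by how much red and green light it
# reflects; these numbers are invented for illustration.
def bayer_row_raw(red, green, n=8):
    """Raw photosite values along a GRGR... row, used directly as gray."""
    return [green if i % 2 == 0 else red for i in range(n)]

flat_gray = bayer_row_raw(0.5, 0.5)        # neutral subject: a uniform row
green_subject = bayer_row_raw(0.05, 0.80)  # green subject: bright/dark stripes

# For a neutral subject the shortcut works, but a saturated color turns
# into an alternating pattern at pixel pitch -- which is why the color
# step cannot simply be skipped.
```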

 

When I enlarge Thighslapper's DNG files from the two cameras up to 1200% and convert the 240 to gray scale, I see absolutely no difference in the gray shading of the individual pixels.  (No, this isn't extreme pixel peeping, it's trying to understand how the sensor and software work).

 

So I'm wondering: at low ISOs, shouldn't the resolution theoretically and practically be identical? And isn't it the lack of "filter factors" that largely accounts for the increased ISO performance of the 246? More photons can strike the individual pixels, so the noise is less?

 

 


 

What an interesting idea . . . but I suspect it couldn't be done in processing software. I would have thought it would be possible to generate a monochrome DNG, but I guess it would be pretty complicated stuff.

But it isn't what's happening, so the resolution isn't the same, and the demosaicing does, inevitably, lose you some resolution. How much is a big question! My reaction is that it's about twice the resolution, but that's only instinct.


The problem is the pixels underneath the Red-Green-Blue cells of the Mosaic filter see the world differently. If someone is wearing a Blue and Red plaid dress then the Monochrom will pick up much more resolution.

 

This is how the sensor sees the world:

 

[Image: full_color_crop by fiftyonepointsix, on Flickr]

 

And this is about how far I pushed it writing my own software for custom demosaic:

 

https://www.flickr.com/photos/90768661@N02/sets/72157651508788447


http://www.leicaplace.com/threads/1145/page-2#post-8617

 

With a color-demosaic algorithm, 8 of every 12 color values are interpolated.
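The 8-of-12 figure follows from simple counting over one 2×2 Bayer block:

```python
# One 2x2 Bayer block holds one red, two green, and one blue photosite.
# After demosaicing, each of the 4 pixels carries all 3 color values.
pixels_per_block = 4
values_per_pixel = 3
total_values = pixels_per_block * values_per_pixel  # 12
measured_values = pixels_per_block                  # one real sample each
interpolated = total_values - measured_values       # 8 of every 12
```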

 

M9 Monochrome-DNG conversion.

 

[Image: M1012676_small by fiftyonepointsix, on Flickr]

 

The Fortran program batch converts the color DNG files to Linear-Monochrome DNG. The hard part was preserving the Thumbnail image.

 

100% crop of M9 Monochrome linear-DNG.

 

[Image: M1012676_crop by fiftyonepointsix, on Flickr]

 

The nice thing- you get a 15-bit image to work with.

 

FORTRAN-77 source code available...

Edited by Lenshacker

But regarding conversion of files to monochrome on a 240: why can't the software correct for the color filter factors and assign a density of gray to each pixel while ignoring the color extrapolation? Wouldn't this theoretically keep the resolution of a monochrome file from a 240 the same as from a 246?

In short, no. There is no way you could deduce a suitable correction factor for each pixel. Or put differently: Interpolating the missing colours first (i.e. demosaicing the raw data) prior to converting to monochrome is the camera’s or raw converter’s way of determining a fitting correction factor for each individual pixel.

 

For example, in a given row of pixels you may have green and red sensitive pixels. The green sensitive pixels tell you how much green there was at that point whereas the red pixels inform you about the amount of red. Now if the world were black and white already, you could just apply a fixed correction factor (depending on the transmission of the three types of filters and the spectral sensitivity of the chip itself), but as the world is in colour, this wouldn't work. For a green subject you would get a high value from the green pixels and a low value from the red pixels, and applying a fixed correction factor wouldn't change that at all – the red values would still be lower than the green ones. For a red subject it would be the other way round, and without knowing the actual colour of the subject the camera could not know what correction to apply. It still needs the demosaicing step to guess the subject's colour, and it is this step that reduces resolution, even when the RGB image gets converted to monochrome eventually.
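The argument can be sketched in a few lines of Python (the filter transmissions are invented for illustration): a fixed per-filter correction recovers each channel's signal, not the subject's luminance, so a coloured subject still comes out striped.

```python
# Hypothetical filter transmissions -- illustrative, not measured values.
T = {"G": 0.9, "R": 0.6}

def recorded(signal, channel):
    """What a photosite records: channel signal times filter transmission."""
    return signal * T[channel]

def corrected(raw, channel):
    """The proposed fixed correction: divide by the filter transmission."""
    return raw / T[channel]

# Green subject: lots of green light, very little red.
g_pixel = corrected(recorded(0.80, "G"), "G")  # back to ~0.80
r_pixel = corrected(recorded(0.05, "R"), "R")  # back to ~0.05

# The correction undoes the filter exactly, yet the two pixels still
# disagree -- the remaining difference is the subject's colour, which only
# demosaicing (looking at the neighbours) can estimate.
```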

Edited by mjh

Lenshacker, very cool. I need this for my M240!

 


Most of the blue and green dye layers have a lot of overlap. By using a Yellow Y48 filter you even out the spectral response in the Blue and Green channels; the Blue channel is weaker, but has roughly the same shape as the Green channel after filtering with the Y48. I equalized the Blue channel histogram to the Green channel histogram by stretching it. So the above demosaic was done by combining the equalized Blue plane with the Green plane, meaning only 1 in 4 pixels in the "BG" plane was interpolated. The Red plane used the standard 3 in 4 pixels interpolated. Adding the two planes, 4 of 8 values were interpolated rather than 8 of 12. Was it worth it?
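One plausible reading of the histogram-stretching step (a guess at the method described, not the actual Fortran code) is a linear remap of the blue plane onto the green plane's value range:

```python
def stretch_to(src, lo, hi):
    """Linearly remap the values in src onto the range [lo, hi]."""
    s_lo, s_hi = min(src), max(src)
    span = (s_hi - s_lo) or 1.0
    return [lo + (v - s_lo) * (hi - lo) / span for v in src]

# Toy plane samples (arbitrary numbers, post-Y48-filter in spirit only).
blue = [10, 40, 70, 100]
green = [50, 120, 190, 260]
blue_eq = stretch_to(blue, min(green), max(green))

# blue_eq now spans the same range as green, so the two planes can be
# combined into a single "BG" plane as described above.
```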

 

1) I spent $8K on the M Monochrom

2) Demonstrated that color cameras preserve highlights for converting to monochrome. The value for "white" is set at 45K rather than 16383 for the M Monochrom and 3750 for the new one.
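Taking the white-point figures quoted above at face value, the headroom arithmetic works out as follows:

```python
import math

# White levels as quoted in the post above.
WHITE_COLOR_CONV = 45000   # monochrome conversion from the colour camera
WHITE_MONOCHROM = 16383    # M Monochrom (14-bit full scale)

extra_stops = math.log2(WHITE_COLOR_CONV / WHITE_MONOCHROM)  # ~1.46 stops
bits_of_range = math.log2(WHITE_COLOR_CONV)                  # ~15.5, hence "15-bit"
```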

 

It was fun processing DNG files using Fortran running on a DOS computer; you get all of memory for your code with Phar Lap extended DOS.

 

The code needs to be extended to handle "Big-Endian" M240 files. I need to look at an M240 DNG file.

Edited by Lenshacker

There are several Fortran compilers for OS X.

But why? Fortran 77 is obsolete even by Fortran standards. Someone forgot to drive a stake through its heart to make sure Fortran was dead. Or COBOL for that matter. (Admittedly I have a soft spot for LISP (I wrote a textbook on LISP many years ago) which is just as ancient but still – conceptually LISP was miles ahead of other programming languages back then and it still is.)

Edited by mjh

Hook up LASERS and implement optical feedback loops with your computer. I use FORTRAN and assembly for mine.

 

FORTRAN produces synchronous code and is perfectly suited for embedded processors; realtime response is important in the lab. Static memory allocation means no need to worry about memory management stealing cycles. Fortran-77 is the last version that gets you closest to assembly language and to being able to predict what instruction sequences and optimizations the compiler will produce. Fortran-90 started with a very poor implementation of pointers, not worth the bother. If you stay away from dynamic memory allocation in C, you can do the same. I wrote about 120 macros so my C looks like FORTRAN. I don't have to worry about "==" vs "=" in an if().

 

I get to design computers to my specifications, then program them. Wrote about 2000 lines of FORTRAN and Assembly code in the last 2 weeks. Been a great living for the past 35 years. I do miss heavy-metal. My wife used to debug the FORTRAN compilers on the Cray-XMP.

 

The M8 with Yellow filter.

 

[Image: M8 Converted to Linear DNG by fiftyonepointsix, on Flickr]

 

You can see the demosaic artifacts; they turn out more like a dither pattern.

 

[Image: M8 Converted to Linear DNG (crop) by fiftyonepointsix, on Flickr]

 

Besides, I wrote most of this software in the 1980s. In FORTRAN. Anyone else remember NATO standard image format?

Edited by Lenshacker

But why? Fortran 77 is obsolete even by Fortran standards. […]

 

You are showing your age. CPUs are an order of magnitude (or two) faster than they were in our day.

So tell us: what is NOT obsolete today, and is it efficient on its own regardless of CPU speed?

 

--

Pico - formerly an RTS and assembler programmer.

Edited by pico

You cannot believe how FAST Fortran/assembly-language code runs on a modern CPU, once you have rewritten all of the interrupt handlers and made Windoze go away. I programmed the first-generation parallel supercomputers; you could get a 400:1 speedup by vectorizing code. These days, writing RISC assembly language is okay and can beat the C/C++ compiler by a factor of 5x. But there are few applications left where "all that matters is execution speed," even with fast CPUs; I am lucky to be in a field that requires it.

Edited by Lenshacker

So tell us: what is NOT obsolete today, and is it efficient on its own regardless of CPU speed?

Fortran was considered obsolete when I started studying computer science in 1978. While assembly language has its charms (been there, done that), these days a programming language should be as distant from the actual hardware as possible. CPU speed is a red herring; you want your code to support threading and multi-processing and you don't want to deal with memory management.


FORTRAN was the language of choice through the 1970s and 1980s for supercomputers, most VAX computers, and array processors. C was catching on but did not optimize as well. I paid my way through college by optimizing atomic-structure programs.

 

You code your way, and if people are willing to pay you for it, great. If the software meets the intended function, great. If running too slow causes the device to fail, then you need to speed up execution: either optimize your code, or use a faster processor. Most of the world has gone the latter route, and it shows. A lot of embedded code is very sloppy; it annoys me that the discreet shutter advance on the M Monochrom is so glitchy, due to firmware. Dropping the battery out of a camera to reboot it usually means firmware unable to handle exceptions. I tend to squeeze the last clock cycle out of a piece of code before moving to a more power-hungry processor. After 35 years, people are still willing to pay for it. Works for me.

 

But- this thread was about monochrome conversion. Using a Yellow filter, as pointed out by others in different threads, offers an advantage for converting images from color cameras to monochrome. I just wanted to see how far it could be carried.

Edited by Lenshacker

The Forum is fun, but taking pictures is funner.  My dog Gracie tonight.

 

M-P 240, 24 Asph

 

 



A lot of embedded code is very sloppy; it annoys me that the discreet shutter advance on the M Monochrom is so glitchy, due to firmware. Dropping the battery out of a camera to reboot it usually means firmware unable to handle exceptions.

If it were just a matter of gracefully handling an exception, the issues with the discreet shutter option would have been solved years ago. My suspicion is that the problems arise because the electronics need to control a mechanical device that does not provide some crucial feedback in this case, so the firmware has to guesstimate what state the shutter is in. When it gets the timing wrong, the camera stalls.

