
No. The undemosaiced file is simply an array of digital 'words'. It's like listening to a fax machine over your telephone: just a stream of unintelligible beeps and rhythms, but once it's been through the fax machine it comes out as words or a picture printed on thermal paper.

How it will look after demosaicing depends on the '14 secret herbs and spices' used by the particular raw processor you've chosen. For example, the image rendered from a .dng (or other raw-format) file by, say, Capture One is visibly different from the same file rendered by Adobe Camera Raw.

Pete.


Go to the LINK in post #3, and anyone can SEE what an un-demosaiced RAW file will look like. Several examples there.

It will look like a grayscale (monochrome) checkerboard image - the luminance (or brightness) that each pixel detected through its own individual Bayer color filter, without any color applied or data shared between pixels (demosaicing).

Some software can colorize it per pixel - i.e. a checkerboard of reddish, greenish or bluish pixels, not full or true demosaiced color.

But that is just "for show" - the actual raw data is strictly monochrome brightnesses: 00000000 00000000 = pure black, 11111111 11111111 = pure white, 01111111 11111111 = "middle gray," with 65534 other "gray levels" available (assuming the camera can actually capture full 16-bit data).

It's as easy for software to convert that to a "visible raw file" as it is for a text program to display ASCII codes as letters on one's screen. But that's not what most photographers need or want.
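To make that concrete, here is a minimal Python sketch of the same idea - raw data as nothing but an array of 16-bit brightnesses, with per-pixel tinting purely "for show." The values and the RGGB layout are made up for illustration, not any camera's actual file format:

```python
import numpy as np

# A hypothetical 4x4 patch of raw sensor readings, one 16-bit value per pixel.
# 0 = pure black, 65535 = pure white, 32767 ~ "middle gray."
raw = np.array([
    [ 1200, 30000,  1100, 29500],
    [31000, 60000, 30500, 59000],
    [ 1150, 29800,  1250, 30200],
    [30700, 59500, 31200, 60200],
], dtype=np.uint16)

# Displayed as-is, this is a monochrome checkerboard: every pixel is just
# a brightness, with no color information attached yet.
gray = raw.astype(np.float32) / 65535.0      # normalize to 0.0-1.0 for display

# "Colorizing for show": tint each pixel by its position in an assumed RGGB
# Bayer pattern (R at even/even, G at even/odd and odd/even, B at odd/odd).
tinted = np.zeros(raw.shape + (3,), dtype=np.float32)
tinted[0::2, 0::2, 0] = gray[0::2, 0::2]     # red sites
tinted[0::2, 1::2, 1] = gray[0::2, 1::2]     # green sites
tinted[1::2, 0::2, 1] = gray[1::2, 0::2]     # green sites
tinted[1::2, 1::2, 2] = gray[1::2, 1::2]     # blue sites

print(gray)     # the actual raw data: brightness only
print(tinted)   # the same data shown as a red/green/blue checkerboard
```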

The sequence (simulated and simplified). Note that the pixels are shown at 4x actual size for clarity.

Original scene


and as it is seen by a color sensor through the Bayer color-filtering array (remember, there are two green pixels for each red and blue pixel)

and the monochromatic brightness/luminance from each pixel as read off the sensor - the monochrome checkerboard that then forms the 16-bit image data of the raw/.dng file.

Some software can actually display the picture at this point.

and then demosaiced by computer processing (LR, PS, C1, DxO), removing the checkerboard by applying the known color type (R/G/B) to each pixel and averaging brightnesses for each color (RGB) across neighboring pixels - but still two green pixels per red and blue pixel. (A rough code sketch of this step follows after the sequence.)

and then normalized - removing the green influence, and adding a tone curve to make the contrast more "photographic."

If you are worried that the final product looks so fuzzy, remember that 1) the examples above are at 400% pixel view, and 2) a real demosaicing algorithm would be "smarter" at retaining luminance details (buttons, threads, hairs) than my simulation using Photoshop. ;)

A 100% view would look like this: final normalized - and before demosaicing - at 100% pixels.
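For the curious, here is a rough Python sketch of the demosaic-and-normalize steps just described. It is a naive bilinear average, not the far smarter edge-aware algorithms LR, PS, C1 or DxO actually use, and the RGGB layout, white-balance multipliers and gamma value are illustrative assumptions only:

```python
import numpy as np
from scipy.ndimage import convolve

def naive_demosaic(mosaic: np.ndarray) -> np.ndarray:
    """Bilinear demosaic of an RGGB mosaic: fill in each pixel's two missing
    colors by averaging the nearest neighbors that did sample those colors."""
    h, w = mosaic.shape
    m = mosaic.astype(np.float32)

    # Masks marking which Bayer site holds which color (RGGB assumed).
    r_mask = np.zeros((h, w), np.float32); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w), np.float32); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask          # two green sites per 2x2 block

    # Averaging kernels: each known sample is spread into its missing neighbors.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32) / 4
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], np.float32) / 4

    rgb = np.stack([
        convolve(m * r_mask, k_rb),      # red plane
        convolve(m * g_mask, k_g),       # green plane
        convolve(m * b_mask, k_rb),      # blue plane
    ], axis=-1)

    # "Normalizing": a crude white balance to tame the green dominance, then
    # a simple gamma curve for more "photographic" contrast. The numbers are
    # made up for illustration, not any camera's real calibration.
    rgb *= np.array([1.9, 1.0, 1.6])     # hypothetical WB multipliers
    return np.clip(rgb / 65535.0, 0.0, 1.0) ** (1 / 2.2)
```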

 

 


The term 'pixel' was coined in the early days of digital image processing.

In order to represent an image in digital form, you divided the image into as many small areas as your purpose required. Those small areas usually filled the area of the original image completely, without any gaps; usually they were squares arranged in a rectangular grid, but other arrangements were certainly possible.

You then noted, for each of those small areas (squares, usually), its location within the image and its average color. How you expressed the color was up to you; for monochrome images you usually just noted the brightness, while for colored images you used a coding scheme which let you reconstruct the average color of the small square.

Hence, a pixel is an 'atom' of an image: it cannot be subdivided, and its properties (such as color and brightness, perhaps also transparency) are known only for the whole atom, or pixel.
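As a toy illustration of this definition (the names are made up, not any standard API), an image is simply a grid of such atoms, each knowing nothing but its own average color:

```python
from dataclasses import dataclass

@dataclass
class Pixel:
    """An indivisible image 'atom': only its average color is known."""
    r: float   # average red,   0.0-1.0
    g: float   # average green, 0.0-1.0
    b: float   # average blue,  0.0-1.0

# A 2x2 image: each pixel's location is implied by its grid position,
# and nothing finer than the pixel can be resolved.
image = [
    [Pixel(0.9, 0.1, 0.1), Pixel(0.1, 0.9, 0.1)],
    [Pixel(0.1, 0.1, 0.9), Pixel(0.5, 0.5, 0.5)],
]
```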

Following this definition of the pixel, there are only very few color-capable cameras that actually produce pixels; the only one I know of is Sigma's Foveon sensor.

As we all know, digital cameras which can take colored images do so by placing an array of colored filters in front of the sensor, so that each pixel cell of the sensor receives only part of the spectrum for its location within the image.

In order to process the image and to make it visible, we subject the raw image to the procedure known as de-mosaicing: we convert the data from the individual sensor cells into square pixels, so that each pixel has one uniform color and brightness.

The procedure is, of course, trivial if the resolution of the converted image can be much smaller than that of the original one: we merely average the brightness values of the cells with the same filter color within the area of each pixel. For that, we need to know which cell used which color. Preferably, we should also know the transmission spectrum of each color filter so that we can reconstruct the original color exactly.
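A minimal Python sketch of this trivial case, assuming an RGGB filter layout and even image dimensions, and ignoring the filters' transmission spectra: each 2x2 block of sensor cells collapses into one full-color pixel at a quarter of the cell count:

```python
import numpy as np

def quarter_res_demosaic(mosaic: np.ndarray) -> np.ndarray:
    """Collapse each RGGB 2x2 block of cells into one full-color pixel by
    averaging the cells that share a filter color (RGGB layout assumed,
    even width and height assumed)."""
    m = mosaic.astype(np.float32)
    r = m[0::2, 0::2]                        # one red cell per block
    g = (m[0::2, 1::2] + m[1::2, 0::2]) / 2  # two green cells, averaged
    b = m[1::2, 1::2]                        # one blue cell per block
    return np.stack([r, g, b], axis=-1)      # half width, half height, RGB
```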

The procedure becomes a bit more involved if each pixel has to be constructed from the smallest number of original cells, for reasons I won't go into.

Once the image has been converted into pixels, we can process it in many different ways. However, a digital image is quite invisible to the eye; it's just a heap of numbers stored in a computer. Preparing the image for its visual representation is, of course, a straightforward process, since we don't have to know how the camera captured the original image or how it represented it on the storage medium: the image is stored in one of a few well-documented formats.

There's one more catch: practically no device which can render digital images actually shows or paints pixels. Computer screens and projectors split each pixel into (usually) three colors and project the three colors for each pixel side by side, much as the camera split the image into monochrome cells. What printers do to the image is even more brutal, but I won't go into that here, either.

Here we are: the complete chain from taking a photograph to showing it on a computer screen first splits the image into monochrome cells, combines those monochrome cells into colored pixels, and finally splits the pixels again into monochrome cells for projection.

To return to the original question:

If you had a screen or projector with the same number of cells, color filters and arrangement as the color filter array in your camera, you could simply submit the image as captured by your camera and would see a faithful reproduction of the scene captured by the camera. The computer and the screen showing the image must be able to interpret the data supplied by the camera, of course, but it must do that for all digital image formats.

The original image out of the camera is, of course, just a bunch of numbers and needs some kind of software to use it or to make it visible, but this equally applies to all digital image formats. 

The advantage of generic formats such as JPEG is that your software doesn't have to know exactly how the camera captured and stored the image. The disadvantage is that your processing software cannot retrieve more information from the image than what the conversion program passed along.

