
New Panasonic sensor........


steveclem

Recommended Posts

This is based on an article published two months ago (http://www.nature.com/nphoton/journal/v7/n3/full/nphoton.2012.345.html) and was also covered in a press release by Panasonic (Panasonic Develops Technology for Highly Sensitive Image Sensors Using Micro Color Splitters | Headquarters News | Panasonic Global).

 

Employing diffraction to create colour splitters is certainly interesting; this seems to be related to previous work on zone plates in lieu of microlenses (and photonic crystals as colour filters), published by Panasonic about five years ago. Whether it is really practical is a different question. The colour splitters spread the light across three pixels, so they double as low-pass filters, which might not be desirable. Also, the pixels underneath these splitters receive not red, green, and blue light but (white + red), (white − red), (white + blue), and (white − blue). To get red one needs to subtract (white − red) from (white + red), whereas adding those pixels gives white. Similarly one gets white and blue from (white + blue) and (white − blue), and finally white − (red + blue) yields green. Adding and subtracting the values of neighbouring pixels increases blur further; moreover, subtracting sensor signals is generally considered bad practice because it increases noise.
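The add/subtract reconstruction described above can be sketched numerically. This is an illustrative model only, with made-up intensities; it is not Panasonic's actual read-out pipeline:

```python
# Sketch of the micro colour splitter read-out described above:
# neighbouring pixels record (white + c) and (white - c) for a
# colour component c. Illustrative values, not real sensor data.
white, red, blue = 1.0, 0.3, 0.2          # arbitrary test intensities

wr_plus, wr_minus = white + red, white - red
wb_plus, wb_minus = white + blue, white - blue

# Reconstruction by adding/subtracting neighbouring pixel values:
rec_white = (wr_plus + wr_minus) / 2      # adding gives white
rec_red   = (wr_plus - wr_minus) / 2      # subtracting gives red
rec_blue  = (wb_plus - wb_minus) / 2
rec_green = rec_white - (rec_red + rec_blue)

# red, green, blue recovered (up to floating-point rounding)
print(rec_red, rec_green, rec_blue)
```

Note that each recovered value depends on two (or more) physical pixels, which is where the extra blur and the noise penalty mentioned above come from.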

Link to post
Share on other sites

"Generally considered" or factually accurate? Has it been tested definitively yet?

That subtraction of sensor signals increases noise? That one is certain – I could explain the theory if you are interested. The question is whether this would be offset by some other effect.
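The theory is simply that independent noise variances add, so the difference of two signals with read noise σ has noise √2·σ. A quick simulation with illustrative numbers:

```python
import random

# Why subtracting sensor signals increases noise: independent noise
# variances add, so the difference of two signals each with noise
# sigma has noise sqrt(2) * sigma. Illustrative numbers only.
random.seed(0)
sigma, n = 1.0, 200_000
a = [random.gauss(0.0, sigma) for _ in range(n)]
b = [random.gauss(0.0, sigma) for _ in range(n)]
diff = [x - y for x, y in zip(a, b)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(std(a), std(diff))   # second value is about 1.41x the first
```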

Link to post
Share on other sites

Specific to Leica, one also has to wonder how redirecting light rays will play out with short-focus (<50mm) rangefinder lenses. Leica had enough trouble dealing with regular sensor architecture, with or without microlenses. (Red edge, overall vignetting, etc.)

 

And since Leica does not build sensors itself, in reality we are talking about Panasonic sharing technology with Truesense (CCD) or CMOSIS (CMOS). Which is kinda like expecting Ford or BMW to share their engine technologies with GM or Porsche. ;)

Link to post
Share on other sites

And since Leica does not build sensors itself, in reality we are talking about Panasonic sharing technology with Truesense (CCD) or CMOSIS (CMOS).

Or Panasonic could develop their own FF sensor, but that doesn’t seem likely. Right now even their top-of-the-line Micro Four Thirds cameras have a Sony sensor.

 

But before we start wondering whether Leica could make use of this technology (assuming it was worthwhile, which I doubt), the first question to answer is: will Panasonic make use of it? Lots of things get invented that are never put to use in actual products.

Link to post
Share on other sites


This is based on an article published two months ago (http://www.nature.com/nphoton/journal/v7/n3/full/nphoton.2012.345.html) and was also covered in a press release by Panasonic (Panasonic Develops Technology for Highly Sensitive Image Sensors Using Micro Color Splitters | Headquarters News | Panasonic Global).

 

Employing diffraction to create colour splitters is certainly interesting; this seems to be related to previous work on zone plates in lieu of microlenses (and photonic crystals as colour filters), published by Panasonic about five years ago. Whether it is really practical is a different question. The colour splitters spread the light across three pixels, so they double as low-pass filters, which might not be desirable. Also, the pixels underneath these splitters receive not red, green, and blue light but (white + red), (white − red), (white + blue), and (white − blue). To get red one needs to subtract (white − red) from (white + red), whereas adding those pixels gives white. Similarly one gets white and blue from (white + blue) and (white − blue), and finally white − (red + blue) yields green. Adding and subtracting the values of neighbouring pixels increases blur further; moreover, subtracting sensor signals is generally considered bad practice because it increases noise.

 

I have no idea what any of that means and have no need for any detailed explanation, thank you. I am, however, curious as to whether Leica will utilise this technology and how the image will look via M systems.

Link to post
Share on other sites

I am, however, curious as to whether Leica will utilise this technology

Not very likely. I am not even convinced that Panasonic will actually utilise that technology.

 

and how the image will look via M systems.

The images would be less sharp.

Link to post
Share on other sites

Not very likely. I am not even convinced that Panasonic will actually utilise that technology.


 

Or they could use it in applications outside FF/APS photographic cameras... sensors are used in a lot of systems, and Panasonic is a rather important provider of basic technologies used by the most disparate OEMs.

Link to post
Share on other sites

Or they could use it in applications outside FF/APS photographic cameras... sensors are used in a lot of systems, and Panasonic is a rather important provider of basic technologies used by the most disparate OEMs.

Indeed there might be uses in industrial applications and the like, just probably not in photography as we understand the term.

Link to post
Share on other sites

Indeed there might be uses in industrial applications and the like, just probably not in photography as we understand the term.

 

A friend of mine is currently developing a new system for metal sorting based largely on analysis of colour... in photographic terms it needs high ISO (the metal is moving) and good DR... he is looking for the right "camera" (better to say, the vision system), which is definitely a limiting factor on the productivity of the machinery... a system based on this kind of sensor could well have an edge in such an application.

Link to post
Share on other sites

That subtraction of sensor signals increases noise? That one is certain – I could explain the theory if you are interested. The question is whether this would be offset by some other effect.

 

Some years ago I researched an 'idea' begun by the notorious Nabal on DPReview. The concept is a 43rds 3CCD still camera that became known as Olympus's 'Trine'.

 

The article ran over a number of long-running threads beginning with 'The Riddle of the Trine' back in April 2009, actually the 8th of April (coincidences!). The thing went a bit viral for a while, eventually being recognised by Olympus, if only to deny rumours of the Trine's existence as an Ex replacement.

 

Logically the concept used a beam splitter to separate the RGB values, and the sensors had no Bayer array on them. But I had some difficulty deciding how much light would actually be gained by this process.

 

Perhaps you can put my restless mind at ease. :)...

 

The material below was found on a Chinese Olympus forum:

[Image attachment: trine.jpg]

Link to post
Share on other sites

Some years ago I researched an 'idea' begun by the notorious Nabal on DPReview. The concept is a 43rds 3CCD still camera that became known as Olympus's 'Trine'.

 

The article ran over a number of long-running threads beginning with 'The Riddle of the Trine' back in April 2009, actually the 8th of April (coincidences!). The thing went a bit viral for a while, eventually being recognised by Olympus, if only to deny rumours of the Trine's existence as an Ex replacement.

 

Logically the concept used a beam splitter to separate the RGB values, and the sensors had no Bayer array on them. But I had some difficulty deciding how much light would actually be gained by this process.

As it happens, I had raised this issue with Panasonic a couple of years ago. Technically it would have been possible to build a FourThirds camera with a beam splitter and three sensors, but the sensors would need to be in perfect alignment – not much of an issue with a video camera but more tricky with a higher-resolution still camera. A beam splitter would have fit within a FourThirds camera if only there had not been a mirror, but it would not have fit within a Micro FourThirds camera, so it had never been a real option. A beam splitter with dichroic filters is also at a disadvantage, as there isn’t as much overlap between the three transmission curves as there is with red, green, and blue filters.

Link to post
Share on other sites

As it happens, I had raised this issue with Panasonic a couple of years ago. Technically it would have been possible to build a FourThirds camera with a beam splitter and three sensors, but the sensors would need to be in perfect alignment – not much of an issue with a video camera but more tricky with a higher-resolution still camera. A beam splitter would have fit within a FourThirds camera if only there had not been a mirror, but it would not have fit within a Micro FourThirds camera, so it had never been a real option. A beam splitter with dichroic filters is also at a disadvantage, as there isn’t as much overlap between the three transmission curves as there is with red, green, and blue filters.

 

Yes, it needed to be a hybrid of both types: 43rds register to fit the beam splitter, and mirrorless, configured with an EVF since it has no mirror. Over the years the technology needed to develop such a camera has become better; EVF modules have improved acceptably, together with more advanced contrast-detect autofocus (CDAF).

 

I pondered the alignment issues too, but I think they would be conquerable if the sensors were bonded to the prisms and factory-aligned, with horizontal and vertical alignment achieved right down to the pixel level and built into each camera. But yes, it's tricky in an industry that doesn't like time-consuming assembly.

 

It would have gained 2/3 of a stop in noise performance (something on the order of 1.5× APS-C) and these days could offer 48 MP colour resolution with 16R, 16G, 16B versus 4R, 8G, 4B in Bayer configurations. I guess it would look like a low-noise Foveon; that part wouldn't be a bad thing.
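The figures quoted above can be checked with simple arithmetic (taking the poster's numbers at face value; these are not measured data):

```python
# 2/3 of a stop expressed as a linear light-gathering factor:
gain = 2 ** (2 / 3)
print(round(gain, 3))      # about 1.587x

# Colour samples per 16-pixel tile: a Bayer mosaic gives 4R + 8G + 4B,
# while three filterless sensors behind a beam splitter sample all
# three colours at every pixel location, i.e. 16 of each.
bayer = {"R": 4, "G": 8, "B": 4}
three_sensor = {"R": 16, "G": 16, "B": 16}
print(sum(bayer.values()), sum(three_sensor.values()))
```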

 

Nabal always contended that this would be easier on 43rds than any other system due to the high degree of telecentricity in the lens suite, and the rather long lens register for the width of the sensor.

 

The question, as always: could similar achievements be made with other technologies on cheaper, less complex single-sensor systems?

The answer is probably closer to yes than no.

Link to post
Share on other sites

Archived

This topic is now archived and is closed to further replies.
