
Digilux 2 sensor size?



These are two quite different questions.

 

The more important difference between the sensor of the Digilux 2 and - say - the sensor of the M10 is the size. The Wikipedia article on image sensor formats contains this graphic comparing the various sensor sizes:

 

[Image: SensorSizes.svg - Wikipedia's comparison chart of common image sensor sizes]

CCD and CMOS describe different technologies that have been used for the construction of sensors. The difference is not as important as some people here would believe, I think.


A "2/3rds" sensor is about 1/4 the linear dimensions, and thus about 1/16th the area, of a so-called "full-frame" sensor that is the size of a 35mm negative (~24mm x 36mm, or ~1" x 1.5").
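
For anyone who wants to check the arithmetic, here is a minimal sketch (in Python) using the commonly quoted nominal dimensions - 8.8 x 6.6 mm for a 2/3" type sensor, 36 x 24 mm for full frame; actual chips vary slightly:

# Nominal dimensions in mm (published type sizes; real chips differ a little)
ff_w, ff_h = 36.0, 24.0      # "full frame" (size of a 35mm negative)
tt_w, tt_h = 8.8, 6.6        # 2/3" type sensor

linear_ratio = ff_w / tt_w                   # ~4.1x per side
area_ratio = (ff_w * ff_h) / (tt_w * tt_h)   # ~14.9x in area

print(f"linear: {linear_ratio:.1f}x  area: {area_ratio:.1f}x")
# -> roughly 1/4 of the linear dimensions and about 1/15th of the area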

 

Small silicon image sensors (chips) are outgrowths of video camera technology (well, so are large "film-sized" ones, but they came later). Due to a historical peculiarity, their names, which are given as a size (e.g. "4/3rds-inch", "2/3rds-inch"), don't refer to any actual linear dimension of the sensor. They refer to the imaging area as it originally existed on a video camera tube of the 1950s-1980s bearing that designation. This was to make life easier for video engineers as they transitioned from RCA/NEC tubes to Sony/Panasonic chips in the 1980s.

 

A 2/3" sensor is the same size as the image area of an ancient 2/3-inch orthicon/vidicon/plumbicon tube. The actual image dimensions are much smaller than the tube's diameter - a tiny little rectangle on the tube's round front surface. If a modern full-frame sensor were designated according to that standard, or replicated with an imaging tube, it would require about a 12/3" (4-inch diameter) tube. (The second link below shows a 4.5" Orthicon - they did exist.)

 

http://www.cinematography.com/cine-uploads/monthly_02_2016/post-14557-0-20977000-1454394079.jpg

 

http://www.crtsite.com/big/camera/F9224-big.jpg

_____________________________

 

CCD is the original silicon imaging technology. It is a passive-pixel technology - the sensor just sits there and converts light into electrical charge, which flows off the sensor for further processing. The technology dates roughly from 1970 (Bell Labs built the first prototype in 1969).

 

CMOS is an active-pixel technology - it does stuff with the charge, via microprocessing right on the sensor chip, before outputting an "enhanced" signal for further processing. Think of a CMOS sensor as having a little "pre-amp" circuit for each pixel (perhaps technically inaccurate, but conceptually a good way of thinking about it). CMOS sensors date from additional space-program experiments at the Jet Propulsion Laboratory (JPL) in the mid-1990s - therefore about 25 years "younger" than the CCD.

 

CMOS itself refers to the manufacturing technology (complementary metal oxide semiconductor), which made the more complex active-pixel silicon circuitry/architecture possible, and eventually relatively cheap.

 

The first CMOS/active-pixel image sensors were, in fact, very poor compared to the older and more mature CCD technology: low fill factors, more noise, "fixed pattern noise," color casts, lack of scalability, unstable microtransistors that decayed over time. Not unlike the very first automobiles compared to a horse-and-buggy. ;)

 

My first CMOS camera was a Sony R-1 (2005), and its grid-like fixed pattern noise made every high-ISO (400+) shot look like it had been taken through a screen door, if any sharpening was applied.

 

But technology moves on, and the active-pixel architecture allowed for further development, whereas CCD technology "is what it is" - not much room for improvement. Today, CMOS can usually outperform CCD.


  • 2 months later...

Personally I find the colour and shadow noise of CCD sensors more appealing than that of their CMOS counterparts. That's only one reason why the D2, M8 and M9 are still highly sought after.

 

Modern CMOS colour accuracy is much better than that of older CCDs, but that has also resulted in more accurate yet flatter colour gradations. The photos turn out "flatter" compared to photos from CCD sensors.

 

You can search the web for "olympus four thirds blue". The earliest Olympus Four Thirds cameras used CCDs as well, and many attest that the famous Olympus blue came from the colour output of the CCD. It was lost after Four Thirds moved to CMOS sensors. You can easily tell that the rendering of the Olympus blue tones was different back then.

 

Here's one of the discussion threads; you can take a look at the colour of the blue skies.

 

https://www.dpreview.com/forums/thread/2436034


  • 1 month later...

It can also be added that the Digilux 2 still has a larger sensor area than most of today's mainstream compacts, bridge cameras and phone cameras, where sensors around 1/2.3" are still quite common.

In terms of surface area, that is about half that of the 2/3" sensor in the Digilux 2.
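
To put a rough number on that, here is a quick sketch (in Python), assuming the usual nominal dimensions of about 6.17 x 4.55 mm for a 1/2.3" type sensor and 8.8 x 6.6 mm for the 2/3" type:

small_w, small_h = 6.17, 4.55   # 1/2.3" type, common in compacts and phones (nominal mm)
d2_w, d2_h = 8.8, 6.6           # 2/3" type, as in the Digilux 2 (nominal mm)

area_ratio = (small_w * small_h) / (d2_w * d2_h)
print(f"{area_ratio:.2f}")      # ~0.48, i.e. roughly half the surface area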

 

Of course, the technology in the Digilux 2 sensor is way outdated by now, and on a purely tech-spec level it's surpassed by the sensor in my phone.

Tech specs aside, a larger sensor area (even with outdated technology) will still allow a shallower depth of field to be rendered than a smaller sensor will (with equivalent framing and the same f-number).
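
As a rough rule of thumb (only an equivalence sketch, not a real depth-of-field calculation), the depth of field you get from a smaller format at a given f-number looks roughly like that f-number multiplied by the crop factor on full frame. For the Digilux 2's 2/3" sensor, assuming its lens wide open at f/2:

import math

ff_diag = math.hypot(36.0, 24.0)   # full-frame diagonal, ~43.3 mm
d2_diag = math.hypot(8.8, 6.6)     # 2/3" type diagonal, ~11.0 mm

crop = ff_diag / d2_diag           # crop factor, ~3.9
f_number = 2.0                     # assumed: Digilux 2 lens wide open
print(f'f/{f_number} on the 2/3" sensor renders depth of field roughly like f/{f_number * crop:.0f} on full frame')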

 

Last but not least: the Digilux 2 renders pictures beautifully and is paired with an amazing lens. It also handles well and has a genuinely solid feel.




CCD or CMOS has nothing to do with colour accuracy... my Leica S (Typ 006) has far more accurate colours than the M10... even though it has a CCD sensor. 

 

CCD or CMOS doesn't have much to do with accuracy etc. A good CMOS sensor is as good as a CCD sensor and vice versa...

 

People are comparing 2008 technology with 2016 technology and calling it CCD vs CMOS, while actually it's 2008 CCD vs 2016 CMOS.

