
1 hour ago, ksrhee said:

This is absolutely correct. BTW, the sensor size itself doesn't impact the depth of field. For those of you who are new, here is a useful link on the topic: https://www.diyphotography.net/how-sensor-size-affects-depth-of-field/

So my smartphone has the same DOF as my SL? 


28 minutes ago, 01af said:

Hmm. Obviously you didn't fully understand the web page you're referring to. Otherwise you wouldn't deny that sensor size does impact depth of field. While this page is technically correct most of the time, it is very confusing pedagogically due to sloppy language and flimsy explanations. I strongly deplore the notion of 'three viewpoints.' After all, it addresses three different things, not just three viewpoints on the same thing. And in the first so-called ... umm, 'viewpoint,' the text even contradicts itself.

I'm not sure it is I but rather you who did not fully understand. The poster claims that the sensor size does not directly impact the depth of field, and it does not. Here is the approximate formula for the depth of field: DOF ≈ (2 × subject distance² × f-number × circle of confusion) / focal length². Nowhere in the formula does the sensor size appear. It is the changes in the other variables that impact the depth of field.
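To make the formula concrete, here is a minimal sketch in Python. The 50 mm / f/2 / 3 m numbers are purely illustrative, not taken from this thread:

```python
def dof_mm(subject_dist_mm, f_number, coc_mm, focal_mm):
    # Approximate total depth of field, valid when the subject distance is
    # large compared with the focal length and well short of hyperfocal:
    #   DOF ~ 2 * s^2 * N * c / f^2
    return 2 * subject_dist_mm ** 2 * f_number * coc_mm / focal_mm ** 2

# Example: 50 mm lens at f/2, subject at 3 m, CoC 0.029 mm
print(round(dof_mm(3000, 2, 0.029, 50)))  # ~418 mm of total depth of field
```

Note that, exactly as stated, the sensor size does not appear as an input; it only sneaks in through the circle-of-confusion value you choose.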


43 minutes ago, Qwertynm said:

The fallacy here is to think you can just look at sensor format as an isolated component, which it isn't.

The sensor format is certainly not an isolated component, but it is one component out of four, the others being distance to subject, focal length, and f-number. Still, you can always study what happens if you change one component while keeping the others constant.


38 minutes ago, ksrhee said:

Here is the approximate formula for the depth of field: DOF ≈ (2 × subject distance² × f-number × circle of confusion) / focal length². Nowhere in the formula does the sensor size appear. It is the changes in the other variables that impact the depth of field.

And what's the usual circle of confusion? For standardized viewing conditions (i.e. viewing distance = print diagonal), it's the sensor format's diagonal divided by 1,500. So twice the (linear) sensor size means twice the circle of confusion's diameter, which means twice the depth of field (for the same viewing conditions). There you are.
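The diagonal/1,500 convention is easy to check numerically. A small sketch (the 18×12 mm "half-size" format is just a convenient doubling example, not a real camera format from this thread):

```python
import math

def coc_mm(width_mm, height_mm, divisor=1500):
    # Conventional circle of confusion: sensor diagonal / 1500
    return math.hypot(width_mm, height_mm) / divisor

full_frame = coc_mm(36, 24)   # ~0.0288 mm
half_size  = coc_mm(18, 12)   # ~0.0144 mm: half the linear size, half the CoC
print(full_frame / half_size)  # ratio is 2
```

Since the DOF formula is linear in the circle of confusion, that factor of two in CoC carries straight through to a factor of two in depth of field, all else held equal.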


32 minutes ago, 01af said:

And what's the usual circle of confusion? For standardized viewing conditions (i.e. viewing distance = print diagonal), it's the sensor format's diagonal divided by 1,500. So twice the (linear) sensor size means twice the circle of confusion's diameter, which means twice the depth of field (for the same viewing conditions). There you are.

Well, I have to eat my own words. You are right, of course. My only excuse is that I was reacting to the generic notion that a small sensor has a greater depth of field than a larger sensor . . . The CoC limit for full frame (36×24 mm) is about 0.029 mm, whereas for APS-C it would be about 0.018 mm. So, if you vary only the sensor size, the larger sensor has the greater depth of field. The concept is complex, and multiple factors contribute to the depth of field, not one simple factor. I guess I was trying to emphasize that but misspoke! Thanks for the correction!

Edited by ksrhee

On 5/13/2024 at 1:40 PM, David Wien said:

There's an easier way: set the 90mm crop on the Q3. Take a picture with large JPEG and raw files, remembering what is inside the small square shown in the viewfinder. Compare the JPEG with the raw file. If your software doesn't do the crop for you automatically, as PhotoLab doesn't, do it on the JPEG with the mouse.

Compare and contrast.

My results were identical, with the exception that the compression artifacts were worse on the JPEG that had been made in the camera.

David

 

Yes, after testing with my Q3 and watching those YT clips, my own conclusion is:

there is nothing fancy about this crop mode. It is just auto-cropping that could be done by any simple photo-editing app.

60 megapixels plus a sharp lens give Leica the headroom to print a big photo from this crop mode.

But I don't think it can match, in photo quality, DOF, etc., a photo taken with, say, Leica's own 90mm lens on a 60 MP full-frame sensor.

I just don't know why Leica creates such confusion by saying they utilize a smaller sensor area to make it equivalent to using another focal length.


  • 7 months later...

Because they do; the only things that are not equivalent are resolution and DOF.
You don't need 60 MP to create a big print, unless you want to cover your whole house and view it from the front door. At a normal viewing distance, i.e. one where your eye can take in all or most of the image in one go, 10 MP is ample for any size of print. It is basically similar for DOF: it depends on the magnification from sensor to print and on the viewing distance. Print size and viewing distance are, again, important parameters for the subjective perception of DOF (plus subject contrast and spatial frequency, but that is another discussion).
And photo quality? Please define the term.
And you are right: it is simple in-camera cropping.
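The "10 MP is ample" claim can be sanity-checked with a back-of-the-envelope sketch, assuming the eye resolves about one arcminute and the viewing distance equals the print diagonal (both conventional rules of thumb, not figures from this thread):

```python
import math

ARCMIN = math.radians(1 / 60)  # ~1 arcminute, a common visual-acuity figure

def megapixels_needed(aspect_w=3, aspect_h=2):
    # At viewing distance d the smallest resolvable spot is ~d * tan(1').
    # With d equal to the print diagonal, the number of resolvable spots
    # along the diagonal is 1 / tan(1'), independent of print size.
    spots_diag = 1 / math.tan(ARCMIN)            # ~3438
    diag = math.hypot(aspect_w, aspect_h)
    w = spots_diag * aspect_w / diag
    h = spots_diag * aspect_h / diag
    return w * h / 1e6

print(round(megapixels_needed(), 1))  # ~5.5 MP for a 3:2 print
```

So roughly 5 to 6 MP already matches the eye under those viewing conditions, whatever the print size, and 10 MP leaves a comfortable margin.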


28 minutes ago, keithlaban.co.uk said:

Which is why I never use it.

Cropping is one use; composing in (or with) the camera is another. It could be useful for that, as has always been the case with Leica Ms and their frame-lines. You can mess with this afterwards in post-processing, of course, but some people like to do their framing 'in the field'. The 'feature' will produce smaller images at the DNG processing stage, of course, but it does not take long to figure that out. Depth of field will be a function of the focal length and aperture used. In the case of the Q models, they are said to have the same sized sensor as so-called 'full frame' cameras. The perspective (compression) is also a function of focal length and plate/film/sensor size. With a cropped image from a Q, you will get a smaller piece of the original image, which may give the impression of compression, but this is not the same as, say, a 90mm focal length on a 'full frame' sensor. We should just accept these things for what they are and either use them or not use them accordingly.

William 


No, William,  the perspective is solely a function of the relative positioning of the camera and subject, nothing else. Only an earthquake can change perspective as long as the position of the camera is the same. 


On 12/30/2024 at 5:18 PM, keithlaban.co.uk said:

Which is why I never use it.

I crop all my images on the computer from the DNG, but I do sometimes use the crop function to help frame the image (basically to stop me stepping in too close while shooting portraits and causing distorted noses, etc.). I might also consider using it if I were intending to send the image immediately to someone, especially as the crop function forces the camera to expose only for the cropped image, improving the chance of it looking OK without needing shadow/highlight adjustments.

