
Camera-shake blur and the number of pixels


Strmbrg


Posted (edited)


I have read several claims that camera-shake blur is more pronounced if the sensor consists of many pixels than if it consists of fewer.

As far as I understand it, the reasoning behind the claim has to do with how many pixels "fit" within the extent of the shake. Just to reduce the number of variables, let's think of the shake as either a two-dimensional circular movement or a one-dimensional vertical movement, both parallel to the plane of the sensor.

I don't understand the claim, or the reasoning behind it, at all.
Well, of course, if you zoom in on the screen so far that you can easily see individual pixels, then you may see that the extent of the shake affects a larger number of pixels on a 40-megapixel sensor than on a 10-megapixel one, given that the sensor size is the same in both cases.

So, let's think about a so-called "full-frame" sensor (approx. 24x36 mm).
If it has only about a hundred pixels in total, the image (i.e. the representation of the photographed subject) will hardly be recognisable at all. At least not unless I shoot a wall of uniform colour and intensity with no texture whatsoever; then a hundred pixels is enough. At least if the pixels have no kind of "visible edge" between them and every pixel's sensitivity and other performance are the same.

My thinking is that if you look at the whole image, there is no difference in visible camera-shake-induced blur between, say, a 10- and a 40-megapixel sensor of the same sensor size.
If you zoom in enough, then you can see differences at "pixel level", i.e. differences in how individual pixels are affected, but that has nothing to do with the shake-induced blurriness when you look at the image rather than at the pixels.
I think the claim is based on faulty reasoning, at least when applied to the quality of the whole image.
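A quick back-of-the-envelope sketch of this point (all numbers are hypothetical: a 3:2 full-frame sensor and an assumed 20 µm lateral shake during the exposure): the same physical shake covers more pixels on the denser sensor, but exactly the same fraction of the frame width, which is what matters when the whole image is viewed at a given output size.

```python
import math

SENSOR_WIDTH_MM = 36.0   # full-frame sensor width
SHAKE_MM = 0.02          # assumed 20 µm lateral shake during the exposure

for megapixels in (10, 40):
    # Horizontal pixel count for a 3:2 sensor: w * (2w/3) = MP  ->  w = sqrt(1.5 * MP)
    width_px = math.sqrt(1.5 * megapixels * 1e6)
    pixel_pitch_mm = SENSOR_WIDTH_MM / width_px
    blur_in_pixels = SHAKE_MM / pixel_pitch_mm
    blur_fraction = SHAKE_MM / SENSOR_WIDTH_MM   # independent of pixel count
    print(f"{megapixels} MP: shake spans {blur_in_pixels:.1f} px, "
          f"i.e. {blur_fraction:.3%} of the frame width")
```

The fraction-of-frame-width figure comes out identical for both sensors; only the pixel count it is spread over changes.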

I suppose - if I am right - that the claim is based on mathematical rather than practical thinking.

Comments?

Edited by Strmbrg

The motion-blur argument originates from pixel-peeping. When you look at an image at 100% to check for motion blur, a higher-resolution image will be magnified more, so the motion blur will be magnified too. The other thing is that the higher-resolution sensor is able to record smaller amounts of motion blur, so you will see blur that you would not see on the lower-resolution sensor, simply because that sensor is unable to resolve it.

So two things:

1. Higher magnification

2. "invisible" existing blur will be recorded on the higher resolving sensor and become visible.

The same considerations are valid for DOF, BTW.
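Point 2 can be made concrete with a toy calculation (the pixel pitches below are approximate values for 10 MP and 40 MP full-frame sensors, and treating "wider than one pixel" as the visibility threshold is a deliberate simplification): a shake smaller than one coarse pixel is averaged away inside it, while the same shake spans more than one fine pixel and so gets recorded.

```python
SHAKE_UM = 6.0   # hypothetical 6 µm shake during the exposure

# Approximate pixel pitches for 10 MP and 40 MP full-frame sensors.
for label, pitch_um in (("10 MP", 9.3), ("40 MP", 4.65)):
    span = SHAKE_UM / pitch_um
    verdict = "recorded as blur" if span > 1.0 else "lost inside a single pixel"
    print(f"{label} (pitch {pitch_um} um): shake spans {span:.2f} px -> {verdict}")
```

With these numbers the 6 µm shake stays inside one coarse pixel but crosses a fine-pixel boundary, so only the denser sensor records it.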


Posted (edited)

So, if I understand you correctly, and if you are right, we seem to think of it in the same or a similar way. I have even translated the reasoning to film and film grain. Film grain is not exactly comparable to pixel density in a sensor, but that probably doesn't matter in this case. Does the same amount of the same kind of camera shake make a Kodachrome 25 image blurrier than a Tri-X one? No, I should say.

Besides this particular matter, I see a common tendency everywhere in society to simply repeat what others claim, and people claim it just because they have seen it claimed before... The more a claim is repeated, the more truth it is believed to have.
Quantitatively based thinking, maybe.

Edited by Strmbrg

Hard to say. The resolution-visibility argument might make a small difference, but it must be seen in the light of the overall rendering of detail.


I am no expert, but I agree that the idea that more megapixels lead to more camera shake is incorrect. If there is shake, each pixel still moves the same distance, whether there are 4 million or 100 million of them.

My conceptualisation is simply that more megapixels allows one to see more detail in the image. This means you zoom in more and see things that would not be seen in a lower MP image.

On an M9 you can't get close enough to see the blades of grass (for example). Once you zoom in too far, the whole image becomes 'softer' or even pixelated, and very small amounts of motion are lost in the softness. An M11, on the other hand, holds enough detail that when you zoom in you can see individual blades of grass, and therefore any movement resulting from camera shake.

If you need a huge number of megapixels in order to make giant prints or extreme crops, you will need to ensure a more stable platform. However, that is because you are aiming to capture as much detail as possible - and this will include tiny bits of motion blur not seen on a lower resolution image. It's not because the camera itself introduces more motion blur - it's because you want to take maximum advantage of the more detailed sensor.

I also agree that misunderstandings are easily propagated by repetition in the absence of rational analysis - the motion blur issue being at the lower end of relative seriousness.


Posted (edited)

Not sure about the pixel-grain comparison, but I can confirm the relation of pixel magnification to (perceived) blur. Not that you have more shake, but it will be more visible if you magnify (or crop) the image to a larger magnification. Nothing new under the sun here. I remember my brother explaining to me 30 years ago the extra care he needed to take with his 6x6 cm Hasselblad. DOF feels smaller, and a tripod is of much more use with larger formats.

Now, a 60 MP sensor packs a similar amount of detail into a full-frame negative size, bringing similar issues with it.

To make it even more confusing, pixel-peeping on a high-resolution 4K or 5K display feels different compared to a usual display. When set to 100%, the same image shows smaller on the high-resolution display. On my 16" MacBook Pro (3456x2234), pixel-peeping at 100% is completely different from my 30" Apple Cinema Display. The Cinema Display (2560×1600) has fewer pixels and shows them much larger. My old eyes need 200% magnification on the MacBook Pro to make sure I see the same amount of detail as at 100% on the Cinema Display.
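The display effect described above comes down to pixel density: at "100%" one image pixel maps onto one screen pixel, so the image's physical size on screen shrinks as the display's pixel density rises. A rough sketch (PPI values approximate, image width hypothetical):

```python
IMAGE_WIDTH_PX = 6000   # hypothetical 6000-px-wide photo

# Approximate pixel densities of the two displays mentioned above.
DISPLAYS = {
    "16-inch MacBook Pro (3456x2234)": 254,    # ~254 ppi
    "30-inch Cinema Display (2560x1600)": 101, # ~101 ppi
}

for name, ppi in DISPLAYS.items():
    physical_width_in = IMAGE_WIDTH_PX / ppi   # 1 image px = 1 screen px at 100%
    print(f"{name}: image appears ~{physical_width_in:.0f} inches wide at 100%")
```

With these approximate densities the ratio is about 2.5:1, which is consistent with needing roughly 200% on the MacBook Pro to match 100% on the Cinema Display.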

 

Edited by dpitt

Posted (edited)


The corollary of the explanation for the 'more blur with more pixels' issue is that increasing the number of pixels leads to diminishing returns. If the point of more pixels is to see more detail in the image (smaller crops? larger enlargements? sharper images?), then why bother if that detail is masked by blur? Why not stick to a lower-resolution sensor? On a camera that appears intended for handholding, like the M, more pixels seem a waste. Unless one introduces IBIS, which runs counter to the traditionalist attitude to the M.

Edited by LocalHero1953

1 hour ago, jaapv said:

2. "invisible" existing blur will be recorded on the higher resolving sensor and become visible.

I don't think this is true: if the blur fell on only one pixel, the larger pixel might be able to conceal it, whereas the blur would more easily span two different smaller pixels.

Though in reality the blur will cover the whole sensor, and I see no chance that it would never cover more than one pixel on a 6 MP sensor while only doing so on a 60 MP sensor.

If there is blur, you will notice it on every sensor when you look for it. So the fact that a higher-resolving sensor allows more magnification is the only reason that blur might be more obvious.


21 minutes ago, LocalHero1953 said:

The corollary of the explanation for the 'more blur with more pixels' issue is that increasing the number of pixels leads to diminishing returns. If the point of more pixels is to see more detail in the image (smaller crops? larger enlargements? sharper images?) then why bother if that detail is masked by blur? Why not keep to a lower resolution sensor? On a camera that appears intended for handholding, like the M, more pixels seems a waste. Unless one introduces IBIS - which runs counter to the traditionalist attitude to the M.

In the past if I wanted a very large print I would have shot on medium or even large format and to do so I would have traded portability and ease of use for weight, stability (a suitable tripod) and complexity (precise focus and/or movements). I used a Contax 645 as an excellent compromise for a good few years.

As I shoot for output I have always seen the 35mm camera and its digital derivatives as intended for hand held photography. They can of course be used for tripod mounted photography and if you are printing large and images require fine detail to be visible (or indeed pixel peeping) then a high MPixel camera is probably suitable (although many might argue that larger sensors are preferable too).

I suspect that movement blur is complicated, as it consists of angular motion around the lens axis which may not be even, and it has to affect the appropriate area of the depth of field where the blur will actually impinge. The smaller the pixel, the more likely contamination of each pixel due to motion will be. But my experience suggests that whilst higher-megapixel cameras can be used handheld, the results from them do, as you say, provide diminishing returns, and not only because of motion blur (lenses may not resolve fine detail at high enough contrast, depth of field may not be adequate when enlarged substantially, little available shift to speak of, and so on). I think that there is a 'sweet spot' in the mid 20-30 megapixels myself, although, that said, my M9s still produce perfectly acceptable prints at 24" x 16" and occasionally larger (though I struggle for space to hang such large prints).

But back to print size. Here is an illustrative image (not shot on a Leica, I'm afraid, but very pertinent, and hopefully the moderators will allow it), taken with an ancient 1865 (yes, 1865) stereo lens on a Sony A7II (I could have swapped bodies, but it would have been fiddly). The lens was intended to shoot 4" x 4" prints at most, and these would have been contact printed, so its format was both negative and print size: in those days output size was quite simply defined by format size. Surprisingly though, central definition is sufficient to print larger, potentially to produce quite acceptable 10" x 8" prints. And online, a 1200 x 1800 pixel image (as uploaded) is very effective. My point is that although the image is 24 megapixels, this is actually somewhat irrelevant, and understanding output requirements or potentialities is essential. I have fun with this lens but understand its limitations. Motion blur on high-megapixel cameras is inevitably, whatever its cause and effect, a limitation (yes, IBIS might help), so shooting style has to match output requirements, something we all too often fail to remember when dazzled by the latest phenomenally vast megapixel counts on offer in what is still in effect a 35mm camera.

[attached image]


Posted (edited)
2 hours ago, Strmbrg said:

So, if I understand you correctly, and if you are right, we seem to think of it in the same or a similar way. I have even translated the reasoning to film and film grain. Film grain is not exactly comparable to pixel density in a sensor, but that probably doesn't matter in this case. Does the same amount of the same kind of camera shake make a Kodachrome 25 image blurrier than a Tri-X one? No, I should say.

Besides this particular matter, I see a common tendency everywhere in society to simply repeat what others claim, and people claim it just because they have seen it claimed before... The more a claim is repeated, the more truth it is believed to have.
Quantitatively based thinking, maybe.

You have the same sort of issues when discussing depth of field.

Everything depends on how you view the resultant image... and I believe all the DOF figures for 35mm were based on viewing a 10x8 inch print.

There will be differences at a 100% pixel viewing level... but that is almost never the resolution at which images are shown.

I use a Fuji GFX with the double whammy of medium format and 100 MP... and yes, handholding without IBIS and with modest apertures can result in technically poor images at pixel level... but at web or medium print resolutions the images look fine.

Edited by thighslapper

Posted (edited)
4 hours ago, Strmbrg said:

I have read several claims that camera-shake blur is more pronounced if the sensor consists of many pixels than if it consists of fewer.

Yes ... that's a common misconception. Actually, blur from camera shake (or from other sources) depends in no way on pixel count or pixel size. The contrary is true: the higher-resolving sensor will always yield the sharper and more detailed image, even in the presence of camera shake, motion blur, diffraction blur, or lack of lens performance.

.

4 hours ago, Strmbrg said:

... if you zoom in on the screen so far that you can easily see individual pixels, then you may see that the extent of the shake affects a larger number of pixels on a 40-megapixel sensor than on a 10-megapixel one.

That's right. But at the same time, the pixels on the 40 MP sensor are smaller than those on the 10 MP sensor.

.

2 hours ago, Budfox said:

If there is shake, each pixel is still moving the same distance, whether there are 4 million or 100 million of them.

This is an excellent—and totally appropriate—way of explaining it!

.

4 hours ago, Strmbrg said:

I suppose—if I am right—that the claim is based on mathematical rather than practical thinking.

You are right. The false claim is based on neither mathematical nor practical thinking but on flawed thinking.

.

57 minutes ago, thighslapper said:

You have the same sort of issues when discussing depth of field. 

... or diffraction blur.

Edited by 01af

It depends on resolution: if the sensor cannot resolve it, it cannot be seen. And on magnification: if the eye cannot see it, it is, err… invisible.
The point is that people started to see more motion blur on higher-resolving sensors, which show more detail at 100%, including blur and other flaws.


Posted (edited)

I normally look at an image as a whole*). That is, I look at it from at least the minimal distance required to see the whole image at the same time.
On the other hand, if the distance is too great, the impact of the image decreases. 🙂
If the image seems blurry, pixelated, noisy or disturbingly imperfect in some other way only when I look at it from a much closer distance than the one from which I can easily see the whole of it simultaneously, then I am too close. At that point I am not interested in the image but in the technical performance. 🙂
And it is of course permissible to be interested in that. I am not.

*) That is, regarding finished images, not the ones I work on.

Edited by Strmbrg

2 hours ago, jaapv said:

It depends on resolution: if the sensor cannot resolve it, it cannot be seen. And on magnification: if the eye cannot see it, it is, err… invisible.
The point is that people started to see more motion blur on higher-resolving sensors, which show more detail at 100%, including blur and other flaws.

It also depends on the subject matter. Areas of high contrast with fine detail highlights adjacent to dark or black areas, also of fine detail, will show any blurring much more intensely than areas of lower contrast fine detail simply because adjacent pixels will be more affected by light transfer across pixels during motion. And lower contrast fine detail will never be as clearly defined anyway. Trying to ascribe specifics to photographs is always going to be difficult.


By the way—in post #11 above I referred to two German-language posts of mine. If you're interested but don't read German: the Google translator does an excellent job translating the German texts into English within a second or two ... with only a few minor glitches.


21 hours ago, jaapv said:

It depends on resolution: if the sensor cannot resolve it, it cannot be seen

Imagine pairs of very fine lines (e.g. 1000 lp/mm), too fine to be resolved by your sensor: the fine lines would not disappear but would look like a blotch. Add camera shake and you'll see a blurred blotch.


Yes, but I doubt whether my aged eyes can differentiate between a vague blob and a blurred blob. 
Let’s not go down the rabbit hole of Nyquist frequencies, Airy disks, lens-sensor interactions, etc. 


We don't have to. The reality we take photos of usually doesn't consist only of very fine line pairs which cannot be resolved by "bad" sensors. If the sensor gives you an image in which you can see the difference between sharp and unsharp at all (and I think every sensor you can use for photography does), you will notice camera shake.

