
On 1/9/2024 at 4:16 PM, Gavin G said:

Panasonic says hardware requirements for this feature are very light, so we will definitely see this feature in all upcoming bodies.

However, it may take a little while to finalize the code, so perhaps we'll see it in a future firmware update, rather than day 1.  

  • Like 3

On 1/10/2024 at 11:34 AM, Simone_DF said:

Will this feature be implemented on the SL3 and upcoming Panasonic bodies?

Next to this, they will deliver a camera that leaves home all alone, knows where to go, takes pictures by itself, knows exactly what must be photographed, recognizes any kind of subject (from a flea to a jumbo jet), and then comes back home and puts the perfect pictures it took on the computer. All this while the photographer reads a book in his bed.

  • Haha 4

15 minutes ago, epand56 said:

Next to this, they will deliver a camera that leaves home all alone, knows where to go, takes pictures by itself, knows exactly what must be photographed, recognizes any kind of subject (from a flea to a jumbo jet), and then comes back home and puts the perfect pictures it took on the computer. All this while the photographer reads a book in his bed.

... and then you take a second look at the pictures and realize that the camera chose the wrong f-stop: too much depth of field. And the end of the book was boring; the AI wrote that the gardener was, as always, the murderer ...

  • Haha 2

5 hours ago, TeleElmar135mm said:

... and then you take a second look at the pictures and realize that the camera chose the wrong f-stop: too much depth of field. And the end of the book was boring; the AI wrote that the gardener was, as always, the murderer ...

You will just have to think "I want that picture taken" and the camera will take the exact picture you want, without you leaving home. It's the MIC: the Mnemonic Intelligence Camera.


I know you are all joking, but I think you are all missing the point... there is no camera at all. You just tell the computer what you want and the AI spits out the image. Is it real? No. Is it what many clients want? Yes. We are already there. Sure, it is a dystopia, but it seems to be a freight train that nothing is going to stop, no matter the consequences. It messes up a lot right now... give it a few years. Then half our photography time is going to be spent trying to figure out how to indicate to our viewers that our photos are real. That's one reason it was nice that Leica started to put some level of content authentication in the M11... we have to start somewhere. Hopefully they will bring it to the SL3 and future Leicas as well. (If not this version, then some version... I don't know enough about it to say whether their implementation is good, but I agree with the premise.)

  • Like 4


For commercial images? As far as I'm concerned if AI can do it cheaper and quicker they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.

  • Like 2

10 minutes ago, LocalHero1953 said:

For commercial images? As far as I'm concerned if AI can do it cheaper and quicker they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.

Except, you know, the livelihoods of the people who do it for a living.

  • Like 5

On 2/3/2023 at 3:15 PM, John Smith said:

I understand Karbe said the SL primes are good for over 100MP. But how does diffraction enter that picture? I think—I'm not sure about this—that Karbe said diffraction starts kicking in at F8 on the SL2. Wouldn't diffraction come in earlier with a higher MP sensor? That is one thing I still haven't wrapped my head around. 

My interpretation of the so-called diffraction distortion is this: as the aperture is stopped down, the point response circle (the spot a point of light is rendered as) eventually starts to spread. Before reaching that point, stopping down may actually shrink the lens's point response circle. Either way, once the point response circle enlarges, the high-frequency response deteriorates.

The sensor's megapixel count represents the sampling rate. To capture a given frequency response (in terms of bandwidth, not just the highest frequency), the sampling rate must be at least twice the bandwidth. So a higher sampling rate lets the sensor capture a higher-bandwidth image, up to that 2x limit.
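Roughly, in code, that last point is just the Nyquist criterion (a toy sketch; the 3.76 um pitch in the example is the pixel size mentioned later in this thread, used purely for illustration):

```python
# Toy sketch of the Nyquist argument: a sensor with pixel pitch p samples the
# image at 1/p samples per mm, so it can only record spatial frequencies up to
# half that sampling rate.

def nyquist_limit_lp_per_mm(pixel_pitch_um: float) -> float:
    """Highest spatial frequency (line pairs per mm) the sensor can capture."""
    samples_per_mm = 1000.0 / pixel_pitch_um  # pixels per millimetre
    return samples_per_mm / 2.0               # Nyquist: half the sampling rate

# Example: a 3.76 um pitch gives roughly 133 lp/mm.
print(nyquist_limit_lp_per_mm(3.76))
```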


"AI" is used to describe a whole range of technologies. We need to distinguish between the use of AI for image generation (Dall-e and others), and the use of AI for camera adjustments.

The first vaguely similar system was Nikon's matrix metering in the 1980s. It compared light readings from five areas of your image against an in-memory database and set the exposure according to rules it retrieved from that database. For instance, if you had a very bright light in the upper-left corner, it might expose for the other four sections (the three remaining corners and the middle), because that scenario corresponds to backlighting. If the bright light was in the middle, it would stop down a lot, because you were probably photographing a sunset.
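As a toy illustration of that rule-based approach (a hypothetical sketch, not Nikon's actual algorithm; the zones and thresholds are invented for the example):

```python
# Toy illustration of rule-based "matrix" metering: compare a handful of zone
# readings against hand-written rules and pick an exposure strategy.
# The zones and thresholds are invented for this example.

def classify_scene(zones: dict) -> str:
    """zones: brightness of each zone in EV, relative to the frame average,
    keyed by 'center', 'tl', 'tr', 'bl', 'br'."""
    corners = [zones["tl"], zones["tr"], zones["bl"], zones["br"]]
    if max(corners) - zones["center"] > 2.0:
        return "backlit: expose for the other zones, ignore the bright corner"
    if zones["center"] - max(corners) > 2.0:
        return "sunset-like: stop down, protect the bright centre"
    return "average scene: use the mean reading"

# A hot upper-left corner, everything else near average -> treated as backlight.
print(classify_scene({"center": 0.0, "tl": 3.0, "tr": 0.2, "bl": 0.1, "br": -0.1}))
```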

AI image recognition does the same thing, in a different way. You give the AI engine thousands of "properly exposed" images and their corresponding meter readings (which can be thousands of colour samples, not just five brightness samples). The AI algorithm figures out the correlation between the two on its own and picks an exposure that should work most of the time.
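In its simplest possible form, "figures out the correlation on its own" can be just an ordinary regression fit (a minimal sketch with random placeholder data standing in for the training images; real systems use far more elaborate models):

```python
# Sketch: learn a mapping from many per-frame meter samples to an exposure
# correction instead of writing rules by hand. Random placeholder data stands
# in for the "thousands of properly exposed images" described above.
import numpy as np

rng = np.random.default_rng(0)
meter_samples = rng.random((5000, 256))            # 256 readings per frame
exposure_comp = meter_samples.mean(axis=1) - 0.5   # stand-in "correct" answers

# Ordinary least squares with a bias term: weights mapping readings -> EV shift.
X = np.hstack([meter_samples, np.ones((5000, 1))])
w, *_ = np.linalg.lstsq(X, exposure_comp, rcond=None)

new_frame = rng.random(256)
print("suggested compensation (EV):", np.append(new_frame, 1.0) @ w)
```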

Same thing with subject recognition. You give the AI thousands of images and label some of them "cat", "landscape", "human", and so on. It uses this data to figure out how to categorize live-view images. For instance, if it recognizes a single human, it might try to focus on the near eye; if it recognizes a group, it might split the focus and stop down more. It's "better" AF, but it's still just AF.
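The "it's still just AF" point can be made concrete: whatever the detector reports, the camera still ends up executing a small decision table (a hypothetical sketch; the detector itself is assumed, not implemented):

```python
# Sketch of the decision layer on top of subject recognition: given whatever
# the (assumed, not implemented) detector reports, pick an AF/aperture policy.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    label: str                                   # e.g. "human", "cat", "landscape"
    eye_position: Optional[Tuple[float, float]] = None

def af_policy(detections: List[Detection]) -> str:
    humans = [d for d in detections if d.label == "human"]
    if len(humans) == 1 and humans[0].eye_position is not None:
        return "single subject: focus on the near eye, aperture as set"
    if len(humans) > 1:
        return "group: split focus between subjects and stop down more"
    return "no recognised subject: fall back to plain area AF"

print(af_policy([Detection("human", (0.4, 0.3)), Detection("human", (0.6, 0.35))]))
```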

You can still use your trusty Sekonic, or focus the lens yourself. You can't do that with most smartphones, but expensive digital cameras from major brands are sold to people who believe they can do things for themselves, even if they don't 99% of the time.


On 2/3/2023 at 6:15 PM, John Smith said:

I understand Karbe said the SL primes are good for over 100MP. But how does diffraction enter that picture? I think—I'm not sure about this—that Karbe said diffraction starts kicking in at F8 on the SL2. Wouldn't diffraction come in earlier with a higher MP sensor? That is one thing I still haven't wrapped my head around. 

When I had a GFX100, I observed that diffraction losses happened at about 5.6 and smaller. I’m not sure if that was a function of megapixels or sensor size—or both. How this affects the print is obviously dependent on print size and whether one actually prints. I might be a little old fashioned in that I think the print is the ultimate destination. We all might share photos on phones, tablets, computer screens or even dedicated photo displays where I find the idea of diffraction limits irrelevant unless pixelpeeping.

 


6 hours ago, Stuart Richardson said:

there is no camera at all. You just tell the computer what you want and the AI spits out the image. Is it real? No.  

Actually, my team has already delivered software on an open-source platform that can do this. Not just static pictures, but motion video as well.

 

  • Like 1

13 hours ago, LocalHero1953 said:

For commercial images? As far as I'm concerned if AI can do it cheaper and quicker they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.

 

I dissent 🙂. Fashion photography, or any advertising shoot, is very real and depends on authenticity as much as journalism and fine art do, if the client is interested in a great campaign.

  • Like 3
  • Thanks 1

2 minutes ago, hansvons said:

I dissent 🙂. Fashion photography, or any advertising shoot, is very real and depends on authenticity as much as journalism and fine art do, if the client is interested in a great campaign.

I was hoping someone would make a case for the defence!

  • Like 3

14 hours ago, Stuart Richardson said:

Except, you know, the livelihoods of the people who do it for a living.

It has happened before: blacksmiths, ice sellers, knocker-uppers, etc.

While those jobs disappeared, they were replaced by different ones, such as mechanics.

The same will probably happen for certain subsets of photography.


For me, photography is about documenting real moments, perhaps from my own point of view and with a little post-processing. It is not about producing images of things people would like to see but which are virtual rather than real.

Good exposure metering, good AF and eye recognition are things that do help me. The digital image optimization that happens in smartphones is already a bit too much for me (that is what I experience with my iPhone 15 Pro; I like it, but I prefer a real camera if I have one with me).


On 1/13/2024 at 1:23 AM, SoarFM said:

When I had a GFX100, I observed that diffraction losses happened at about 5.6 and smaller. I’m not sure if that was a function of megapixels or sensor size—or both. How this affects the print is obviously dependent on print size and whether one actually prints. I might be a little old fashioned in that I think the print is the ultimate destination. We all might share photos on phones, tablets, computer screens or even dedicated photo displays where I find the idea of diffraction limits irrelevant unless pixelpeeping.

 

It is only an effect of the pixel size of the sensor (not of the sensor size or the total number of pixels on the sensor).

It does not affect the print negatively; on the contrary, smaller pixels mean less aliasing, so the print will show fewer aliasing artefacts.

The PhaseOne IQ4150, GFX100, Sony A7RIV/V, Leica M11, Sigma FP-L, Fujifilm X-T3/4 ... all use the same Sony sensor basis with 3.76 um pixels, so they show exactly the same amount of diffraction in an image at a given lens aperture.
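To put rough numbers on "the same amount of diffraction at a given aperture", one can compare the Airy disk diameter with the pixel pitch (a back-of-the-envelope sketch; the rule of thumb that diffraction becomes visible once the Airy disk spans about two pixels is an assumption, not a hard threshold):

```python
# Back-of-the-envelope diffraction check: Airy disk diameter vs pixel pitch.
# First-minimum diameter of the Airy disk: d = 2.44 * wavelength * f-number.

WAVELENGTH_UM = 0.55   # green light, in microns
PIXEL_PITCH_UM = 3.76  # the pitch cited above, used purely for illustration

def airy_disk_diameter_um(f_number: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_number

for f in (2.8, 4, 5.6, 8, 11, 16):
    d = airy_disk_diameter_um(f)
    note = "wider than two pixels" if d > 2 * PIXEL_PITCH_UM else "within two pixels"
    print(f"f/{f}: Airy disk ~{d:.1f} um ({note})")
```

With those assumptions, the disk stays within two 3.76 um pixels up to about f/5.6 and clearly exceeds it by f/8, which lines up with the observations quoted earlier in the thread.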


On 12/29/2023 at 12:22 PM, hoolyproductions said:

Hi. My very amateur impressions on this. I'm not an expert on dynamic range measurements but I have a lot of practical experience using these cameras, particularly in very low light. I used M10 from release until about 2021. Then SL2-S and SL2 since then, and I bought an M11 a few months ago and have been using it for 90% of my shooting.

Indeed I prefer the SL2-S to the SL2 both in terms of shadow recovery and highlight clipping (e.g. when shooting landscapes), but still found the SL2 to be excellent, just had to be a little more careful with the highlights. When shooting in very low light (ISO 12500-25000) I personally find the SL2-S to be light years ahead and the SL2 to be (relatively) unusable. (Same for the Q2 that I had for a while.) 

When buying the M11 I had serious doubts it would be anywhere as good as the SL2-S in very low light but have been amazed. I regularly shoot at ISO 10000-20000 and find the results essentially as good as with the SL2-S. I've shot a bunch of candid landscapes also and have no complaints (but am perhaps not so sensitive as others on DR in decent light). The M10 in my recollection was significantly better than the SL2 in these low light conditions but not at the level of the SL2-S.

In short, if they are planning to use the 60MP sensor I have no concerns (also, a couple of years have passed, so even if it is the same sensor there might still be significant changes in how it renders an image, e.g. processor or firmware).

 

Examining the https://www.photonstophotos.net/Charts/PDR.htm measurements of these sensors at ISO 3200, we see:

The Leica M11 and Sony A7RIV/V (equivalent sensors), the Panasonic S1R and the Leica SL2-S all have a very similar DR of 7.0-7.3 stops.

The Leica SL2, at 6.1 stops of DR, shows about one stop more noise than the other cameras, which is strange because its sensor is equivalent to the one in the Panasonic S1R.

At base ISO the DR of the Leica SL2 and the Panasonic S1R is at the same level of 11.2-11.4 stops; the Leica SL2 implementation only falls behind at higher ISO. It seems Leica was not able to take full advantage of that (TowerJazz) sensor design.
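For reference, the engineering definition behind "stops of DR" is just a log ratio of the largest to the smallest usable signal (a minimal sketch; the full-well and read-noise numbers are made-up placeholders, not measured values for these cameras, and PhotonsToPhotos' PDR metric additionally normalises for viewing conditions):

```python
# Engineering dynamic range in stops: log2(full-well capacity / read noise).
# The electron counts below are made-up placeholders, not measured values.
import math

def dr_stops(full_well_e: float, read_noise_e: float) -> float:
    return math.log2(full_well_e / read_noise_e)

print(round(dr_stops(full_well_e=50_000, read_noise_e=2.5), 1))       # ~14.3 stops
print(round(dr_stops(full_well_e=50_000 / 16, read_noise_e=2.5), 1))  # ~4 stops less when exposure drops 16x
```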

  • Like 1
