Simone_DF Posted January 10, 2024 Share #981

On 1/9/2024 at 4:16 PM, Gavin G said: https://petapixel.com/2024/01/08/panasonic-made-a-new-super-accurate-in-camera-subject-recognition-ai/

Will this feature be implemented in the SL3 and upcoming Panasonic bodies? Panasonic says the hardware requirements for this feature are very light, so we will definitely see it in all upcoming bodies. However, it may take a little while to finalize the code, so perhaps we'll see it in a future firmware update rather than on day one.
epand56 Posted January 12, 2024 Share #982

On 1/10/2024 at 11:34 AM, Simone_DF said: Will this feature be implemented in the SL3 and upcoming Panasonic bodies?

Next they will deliver a camera that leaves home all alone, knows where to go, takes pictures by itself, knows exactly what must be photographed, recognizes any kind of subject (from a flea to a jumbo jet), and then comes back home and puts the perfect pictures it took on the computer. All this while the photographer reads a book in bed.
TeleElmar135mm Posted January 12, 2024 Share #983

15 minutes ago, epand56 said: Next they will deliver a camera that leaves home all alone, knows where to go, takes pictures by itself, knows exactly what must be photographed, recognizes any kind of subject (from a flea to a jumbo jet), and then comes back home and puts the perfect pictures it took on the computer. All this while the photographer reads a book in bed.

... and then you take a second look at the pictures and realize that the camera has chosen the wrong f-stop: too much depth of field. And the end of the book was boring: the AI wrote that the gardener was, as always, the murderer ...
SrMi Posted January 12, 2024 Share #984

Imagine if they added autofocus or automatic metering to cameras. Suddenly, people without long experience could take images as sharp and as properly exposed as we veterans do. The horror! 😜
epand56 Posted January 12, 2024 Share #985

5 hours ago, TeleElmar135mm said: ... and then you take a second look at the pictures and realize that the camera has chosen the wrong f-stop: too much depth of field ...

You will just have to think "I want that picture taken" and the camera will take the exact picture you want, without your leaving home. It's the MIC: the Mnemonic Intelligence Camera.
Stuart Richardson Posted January 12, 2024 Share #986

I know you are all joking, but I think you are missing the point... there is no camera at all. You just tell the computer what you want and the AI spits out the image. Is it real? No. Is it what many clients want? Yes. We are already there. Sure, it is a dystopia, but it seems to be a freight train that nothing is going to stop, no matter the consequences. It messes up a lot right now... give it a few years. Then half our photography time is going to be spent trying to figure out how to show our viewers that our photos are real. That's one reason it was nice that Leica started to put some level of content authentication in the M11... we have to start somewhere. Hopefully they will bring it to the SL3 and future Leicas as well. (If not this version, then some version... I don't know enough about it to say whether their implementation is good, but I agree with the premise.)
LocalHero1953 Posted January 12, 2024 Share #987

For commercial images? As far as I'm concerned, if AI can do it cheaper and quicker, they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.
Stuart Richardson Posted January 12, 2024 Share #988

10 minutes ago, LocalHero1953 said: For commercial images? As far as I'm concerned, if AI can do it cheaper and quicker, they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.

Except, you know, the livelihoods of the people who do it for a living.
Joy Posted January 12, 2024 Share #989

On 2/3/2023 at 3:15 PM, John Smith said: I understand Karbe said the SL primes are good for over 100MP. But how does diffraction enter that picture? I think—I'm not sure about this—that Karbe said diffraction starts kicking in at F8 on the SL2. Wouldn't diffraction come in earlier with a higher MP sensor? That is one thing I still haven't wrapped my head around.

My interpretation of so-called diffraction blur: once the aperture gets small enough, the lens's point-spread circle starts to spread larger. Before that point is reached, stopping down may actually shrink the point-spread circle, as lens aberrations are reduced. Either way, whenever the point-spread circle enlarges, the high-frequency response deteriorates. The sensor's megapixel count represents the sampling rate: to capture a given frequency bandwidth, the sampling rate must be at least twice that bandwidth (Nyquist). So a higher sampling rate lets the sensor capture a higher-bandwidth image, up to that 2x limit.
BernardC Posted January 12, 2024 Share #990

"AI" is used to describe a whole range of technologies. We need to distinguish between the use of AI for image generation (DALL-E and others) and the use of AI for camera adjustments.

The first vaguely similar system was Nikon's matrix metering in the 1980s. It compared light readings from five areas of your image to an in-memory database and set the exposure according to rules retrieved from the database. So, for instance, if you had a very bright light in the upper-left corner, it might expose for the other four sections (three more corners and the middle), because that scenario corresponds to backlight. If the bright light was in the middle, it would stop down a lot, because you were probably photographing a sunset.

AI image recognition does the same thing in a different way. You give the AI engine thousands of "properly exposed" images and their corresponding meter readings (which can be thousands of colour samples, not just five brightness samples). The AI algorithm figures out the correlation between the two on its own and picks an exposure that should work most of the time.

Same thing with subject recognition. You give the AI thousands of images and label some of them "cat", "landscape", "human", etc. It uses this data to figure out how to categorize live-view images. So, for instance, if it recognizes a single human, it might try to focus on the near eye. If it recognizes a group, it might split the focus and stop down more.

It's "better" AF, but it's still just AF. You can still use your trusty Sekonic, or focus the lens yourself. You can't do that with most smartphones, but expensive digital cameras from major brands are sold to people who believe they can do things for themselves. Even if they don't, 99% of the time.
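BernardC's database-lookup description can be sketched as a toy nearest-neighbour search. This is purely illustrative: the segment layout, EV numbers, and rules below are invented for the example, not Nikon's actual tables.

```python
# Toy sketch of database-style matrix metering: compare a 5-segment light
# reading against stored scenarios and reuse the rule of the closest match.
# All scenario values and rules here are made up for illustration.

SCENARIOS = [
    # (segment readings in EV: [UL, UR, LL, LR, centre], exposure rule)
    ([15, 10, 10, 10, 10], "backlight: expose for the darker segments"),
    ([10, 10, 10, 10, 15], "sunset: stop down for the bright centre"),
    ([10, 10, 10, 10, 10], "even light: average all segments"),
]

def nearest_rule(reading):
    """Return the exposure rule of the stored scenario closest to this reading."""
    def dist(a, b):
        # squared Euclidean distance between two segment readings
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(SCENARIOS, key=lambda s: dist(s[0], reading))[1]

# A bright upper-left corner matches the backlight scenario.
print(nearest_rule([14, 9, 10, 10, 9]))
```

A learned ("AI") meter replaces the hand-written scenario table with correlations fitted from thousands of labelled exposures, but the lookup idea is the same.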
SoarFM Posted January 12, 2024 Share #991

On 2/3/2023 at 6:15 PM, John Smith said: I understand Karbe said the SL primes are good for over 100MP. But how does diffraction enter that picture? I think—I'm not sure about this—that Karbe said diffraction starts kicking in at F8 on the SL2. Wouldn't diffraction come in earlier with a higher MP sensor? That is one thing I still haven't wrapped my head around.

When I had a GFX100, I observed that diffraction losses set in at about f/5.6 and smaller. I'm not sure if that was a function of megapixels or sensor size, or both. How this affects the print obviously depends on print size and on whether one actually prints. I might be a little old-fashioned in that I think the print is the ultimate destination. We all share photos on phones, tablets, computer screens, or even dedicated photo displays, where I find the idea of diffraction limits irrelevant unless pixel-peeping.
Einst_Stein Posted January 13, 2024 Share #992

6 hours ago, Stuart Richardson said: there is no camera at all. You just tell the computer what you want and the AI spits out the image. Is it real? No.

Actually, my team has already delivered software on an open-source platform that can do this. Not just static pictures, but also motion video.
hansvons Posted January 13, 2024 Share #993

13 hours ago, LocalHero1953 said: For commercial images? As far as I'm concerned, if AI can do it cheaper and quicker, they should. There's nothing real about advertising photos, including fashion, nor a good many marketing photos. AI is just making it rather obvious to us all.

Dissent 🙂. Fashion photography and any advertising shoot are very real and depend on authenticity as much as journalism and fine art do, provided the client is interested in a great campaign.
LocalHero1953 Posted January 13, 2024 Share #994

2 minutes ago, hansvons said: Dissent 🙂. Fashion photography and any advertising shoot are very real and depend on authenticity as much as journalism and fine art do, provided the client is interested in a great campaign.

I was hoping someone would make a case for the defence!
Simone_DF Posted January 13, 2024 Share #995

14 hours ago, Stuart Richardson said: Except, you know, the livelihoods of the people who do it for a living.

It has happened before: blacksmiths, ice sellers, knocker-uppers, etc. While those jobs disappeared, they were replaced by different ones, like mechanics. The same will probably happen to certain subsets of photography.
Joy Posted January 13, 2024 Share #996

Artificial reality! You can generate video from text, or integrate some real objects to make it look more real.
Chaemono Posted January 13, 2024 Share #997

Try Midjourney V6 for product images. https://medium.com/artificial-corner/my-honest-review-of-ai-art-tools-i-used-in-2023-2116883e3be6
tom0511 Posted January 13, 2024 Share #998

For me, photography is about documenting real moments, perhaps in my own way and with a little post-processing. It is not about producing virtual images of things that people would like to see but which are not real. Good exposure metering, good AF, and eye recognition are things that do help me. The digital image optimization that happens in smartphones is already a bit too much for me (based on what I experience with my iPhone 15 Pro: I like it, but I prefer a real camera if I have one with me).
chrismuc Posted January 14, 2024 Share #999

On 1/13/2024 at 1:23 AM, SoarFM said: When I had a GFX100, I observed that diffraction losses set in at about f/5.6 and smaller. I'm not sure if that was a function of megapixels or sensor size, or both ...

It is only an effect of the pixel size of the sensor (not related to the sensor size or the total number of pixels on the sensor). It does not affect the print negatively; on the contrary, due to less aliasing with smaller pixels, the print will show fewer aliasing artefacts. The PhaseOne IQ4150, GFX100, Sony A7RIV/V, Leica M11, Sigma FP-L, Fujifilm X-T3/4, etc. use the same Sony sensor basis with 3.76 µm pixels, so they show exactly the same amount of diffraction in an image at a given aperture of the lens.
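The pixel-pitch dependence described above can be put into rough numbers. A common rule of thumb (an assumption, not an exact threshold) is that diffraction starts to become visible once the Airy disk diameter, 2.44 · λ · N, exceeds about two pixel pitches:

```python
# Estimate the f-number at which diffraction starts to be resolved by a sensor
# of a given pixel pitch. Rule of thumb: Airy disk diameter (2.44 * wavelength
# * N, for green light) exceeding ~2 pixel pitches. Illustrative only.

WAVELENGTH = 550e-9  # green light, in metres

def diffraction_limited_fstop(pixel_pitch_m, wavelength=WAVELENGTH):
    """Solve 2.44 * wavelength * N = 2 * pixel_pitch for N."""
    return 2 * pixel_pitch_m / (2.44 * wavelength)

# 3.76 um pixels (the M11 / GFX100 / A7RIV sensor class mentioned above)
print(f"f/{diffraction_limited_fstop(3.76e-6):.1f}")  # ~f/5.6
```

Under these assumptions, 3.76 µm pixels give roughly f/5.6, which matches SoarFM's GFX100 observation; only the pixel pitch enters the formula, consistent with the point that sensor size and total pixel count do not matter by themselves.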
chrismuc Posted January 14, 2024 Share #1000

On 12/29/2023 at 12:22 PM, hoolyproductions said: Hi. My very amateur impressions on this. I'm not an expert on dynamic range measurements, but I have a lot of practical experience using these cameras, particularly in very low light. I used the M10 from release until about 2021, then the SL2-S and SL2 since then, and I bought an M11 a few months ago and have been using it for 90% of my shooting. Indeed, I prefer the SL2-S to the SL2 both in terms of shadow recovery and highlight clipping (e.g. when shooting landscapes), but I still found the SL2 to be excellent; I just had to be a little more careful with the highlights. When shooting in very low light (ISO 12500-25000), I personally find the SL2-S to be light years ahead and the SL2 to be (relatively) unusable. (Same for the Q2 that I had for a while.) When buying the M11 I had serious doubts it would be anywhere near as good as the SL2-S in very low light, but I have been amazed. I regularly shoot at ISO 10000-20000 and find the results essentially as good as with the SL2-S. I've shot a bunch of candid landscapes also and have no complaints (but am perhaps not as sensitive as others to DR in decent light). The M10, in my recollection, was significantly better than the SL2 in these low-light conditions, but not at the level of the SL2-S. In short, if they are planning to use the 60MP sensor, I have no concerns (also, a couple of years have passed, so even if it is the same sensor there might still be significant changes in how it renders an image, e.g. processor, firmware).

Examining the https://www.photonstophotos.net/Charts/PDR.htm measurements of sensors at ISO 3200, we see: the Leica M11 and Sony A7RIV/V (equivalent sensors), the Panasonic S1R, and the Leica SL2-S all have very similar DR of 7.0-7.3 stops. The Leica SL2, with 6.1 stops of DR, shows about one stop more noise than the other cameras. Which is strange, because its sensor is equivalent to the one in the Panasonic S1R. At base ISO, the DR of the Leica SL2 and the Panasonic S1R are both on the same level of 11.2-11.4 stops; the Leica SL2 implementation only falls behind at higher ISO. It seems Leica was not able to take full advantage of that (TowerJazz) sensor design.
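As a rough sketch of where "stops of DR" figures like the ones above come from: engineering dynamic range is the base-2 log of the ratio between the largest and smallest recordable signal. (This is the simple engineering definition; photonstophotos' PDR applies a stricter, print-normalized noise criterion, and the electron counts below are hypothetical, chosen only for illustration.)

```python
import math

# Engineering dynamic range in stops: how many doublings fit between the
# read-noise floor and the full-well capacity (saturation). The numbers
# passed in below are hypothetical, not measurements of any real sensor.

def dr_stops(full_well_e, read_noise_e):
    """DR in stops = log2(full-well capacity / read noise), both in electrons."""
    return math.log2(full_well_e / read_noise_e)

print(f"{dr_stops(50_000, 2.5):.1f} stops")   # ~14.3 stops
print(f"{dr_stops(50_000, 25.0):.1f} stops")  # 10x more read noise costs ~3.3 stops
```

This also shows why two cameras sharing a sensor can still differ by a stop: the readout electronics and firmware set the effective noise floor, which is the denominator in the ratio.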