
bencoyote

Members
  • Posts: 521
  • Joined
  • Last visited

Reputation Activity

  1. Like
    bencoyote reacted to dkCambridgeshire in Photographing the Milky Way with the T   
    The $10 solution:
     
    http://garyseronik.com/build-a-hinge-tracker-for-astrophotography/
     
    dunk 
  2. Like
    bencoyote got a reaction from MDCT in RAW-support for Leica T on iOS > 10.2   
    I've got to say that, having tried the iPad Pro route a few times, I find the workflow insufficient for my needs.
     
    Yes, I can import using the Lightning SD card adapter. The raw photos go to the camera roll and then you can go into LR Mobile to import them.
    The problems that I found were:
    1) Importing was very slow for lots of shots.
    2) I don't want to erase my card, so the second and subsequent imports take even longer.
    3) There is an extra step to import into LRM.
    4) My iPad only has 256 GB for everything and that is not enough for weeks or months away. I'm going to Nepal for a month and I would've loved to just take my iPad. Instead, after testing it out, I decided to downsize from my MacBook Pro 15 to a new, smaller MacBook Pro 13 so that I wouldn't have to carry as much while trekking up and down 3000m mountains.
    5) There is no way to make a second copy on an external drive when importing.
    6) I'm not really into editing on the iPad, but it is great for picking shots for my first pass. What I would like to do is geotag and tag my images.
    7) Syncing the images back to LR from the iPad takes forever, and you have to keep the iPad on and in LR for it to sync.
     
    I really wanted the iPad Pro to work. It is small. It is light. It's surprisingly fast. It doesn't take much power. It has a remarkably good screen. But it's not there yet. Adobe seems to have gotten the UI for the develop module working pretty well for a mobile device. The thing that I think would be the biggest boon is if they got the library module, with tagging and syncing with the desktop, working better.
  3. Like
    bencoyote got a reaction from julian m in RAW-support for Leica T on iOS > 10.2   
  4. Like
    bencoyote reacted to oldwino in Leica T focus disappointment   
    You're right - I did screw up. I expected the T to work somewhat as advertised. I should have taken one of my Ms, and I almost did, but I wanted to try out the new one. Definitely not the right choice there, at least for me, on this day. 
     
    Nice shots, by the way. 
  5. Like
    bencoyote reacted to Louis in Leica T focus disappointment   
    +2
    Really nice!
  6. Like
    bencoyote reacted to ropo54 in Leica T focus disappointment   
    (My shots from the woman's march and inauguration protest https://goo.gl/photos/zgvwjM3BQ5ENPAK76 )
     
     
    Terrific portfolio from the protest march!
    Thanks,
    Rob
  7. Like
    bencoyote got a reaction from Ericmalap in Leica T focus disappointment   
    No, I think it is more of a priority thing. If you want the shutter to activate NOW, DAMMIT (because otherwise you'll miss the decisive moment), you want the camera to be responsive. I'm going to be less charitable than the other people and say it isn't the camera's fault that you took bad pictures. It did exactly what you told it to do and you screwed up: you didn't wait for focus lock and verify that it had picked the right thing to focus on. 
    Furthermore, if things were out of focus, what distance were you at in relation to your subject and what aperture were you using? Quite a lot of street photography and reportage is done at f/8. For a lens with the FOV of the 23mm, a working distance of about 1.5m-2m is reasonable. At 1.5m and f/8 everything from roughly 1m to 3m should be in focus, and at 2m it is everything from about 1.2m to 6.4m. That is a lot of DOF, so if things are out of focus, it is user error.
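     
    For anyone who wants to check those numbers, here is a minimal sketch of the standard thin-lens DOF approximation, assuming a circle of confusion of about 0.02mm for APS-C; the exact limits shift with whichever CoC you pick.
```python
# Thin-lens depth-of-field approximation (distances in metres).
# Assumes an APS-C circle of confusion of ~0.02 mm; the limits move
# a little depending on the CoC you choose.

def dof_limits(focal_mm, f_number, subject_m, coc_mm=0.02):
    """Return the (near, far) limits of acceptable focus in metres."""
    f = focal_mm / 1000.0
    c = coc_mm / 1000.0
    hyperfocal = f * f / (f_number * c) + f
    near = hyperfocal * subject_m / (hyperfocal + (subject_m - f))
    if subject_m >= hyperfocal:
        far = float("inf")
    else:
        far = hyperfocal * subject_m / (hyperfocal - (subject_m - f))
    return near, far

for distance in (1.5, 2.0):
    near, far = dof_limits(23, 8, distance)
    print(f"23mm f/8 at {distance}m: {near:.1f}m to {far:.1f}m")
# Prints roughly 1.0-2.7m at 1.5m and 1.3-4.9m at 2m with a 0.02mm CoC;
# a slightly larger CoC gives figures closer to the ones quoted above.
```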
     
    If you were trying to do something like isolate your subject with a narrow DOF by using f/2, what were you doing using a 23mm lens on an APS-C sensor camera?
     
    Quit trying to blame a machine for your own failings as a photographer.
    Learn to use your tool and know how it works, including all the subtle details like how the color changes when it has focus lock.
    Practice on things you don't care about so that you learn the ins and outs, make the mistakes, and learn from them before you are in a situation where the shots are important to you.
     
    I just got my T back from repair after it was in Allendale for the past 5 months; during the intervening time I've been using my M pretty much exclusively. The thing that I _LOVE_ about the M is that it is so basic and manual that there is never an opportunity to blame a bad shot on the camera. It is always you, the photographer, who screwed up. (I screw up a lot.) The camera has plenty of capability, so if you aren't getting the results that you want, the solution is "learn to be a better photographer". Even though there is a bit more software involved, pretty much the same is true with the T: the solution is always "learn to be a better photographer" because you are the one who's in control of the machine.
     
    (My shots from the woman's march and inauguration protest https://goo.gl/photos/zgvwjM3BQ5ENPAK76 )
  8. Like
    bencoyote got a reaction from Jkulin in Leica T focus disappointment   
  9. Like
    bencoyote got a reaction from Steve Ash in Leica M10 raw file (DNG) analysis   
    It's a lossless algorithm called Deflate. I wrote a post about it yesterday: http://www.l-camera-forum.com/topic/260683-dng-compressed-vs-uncompressed/page-4
     
    "If you look at page 19 of http://wwwimages.adobe.com/content/dam/Adobe/en/products/photoshop/pdfs/dng_spec_1.4.0.0.pdf
    It specifies the compression algorithm as Deflate aka ZIP. This is a well-known compression algorithm that is very commonly used for computer binaries In packages used to distribute software. It is important to understand this because a change of even one bit in a computer binary can render it inoperable. Thus actual losslessness is very important. You can read up about Deflate compression here: https://en.m.wikipedia.org/wiki/DEFLATE
     
    There is absolutely no reason to be concerned about any sort of loss of information.
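     
    If you want to convince yourself, Deflate is easy to round-trip on any computer; Python's zlib module implements the same algorithm. A minimal sketch (the byte pattern below is just a stand-in for raw image data, nothing Leica-specific):
```python
import zlib

# Round-trip some bytes through Deflate (zlib) and confirm the output
# is bit-for-bit identical to the input. The pattern is only a stand-in
# for raw image data.
original = bytes(range(256)) * 4000          # ~1 MB of smoothly varying bytes
compressed = zlib.compress(original, level=6)
restored = zlib.decompress(compressed)

assert restored == original                  # lossless: every bit comes back
print(f"{len(original)} bytes -> {len(compressed)} bytes, identical after decompression")
```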
  10. Like
    bencoyote got a reaction from digitalfx in Leica M10 raw file (DNG) analysis   
  11. Like
    bencoyote got a reaction from Berlinman in DNG compressed VS uncompressed?   
    I've done technical benchmarking very much like you describe. There is no value in uncompressed data. The CPUs and image processors are so good at doing the kinds of operations that are needed for compression that compression comes for free or even has a negative cost.
     
    To be all geeky about it:
    1) There is a cost in power to clear a block on a SD card.
    2) There is a cost in power to transfer data from the CPU's buffers to the SD card.
    3) There is a certain amount of processor time needed to transfer data.
    On the other hand, there is:
    4) Processor time and power to compress the data
     
    1) When you store less data you have to clear fewer blocks on the SD card. This saves power.
    2) When you transfer less data you use less power
    3) Processors use the least amount of power when they are asleep. When you keep the processor awake for longer to transfer more data you end up using more power than if you let the processor sleep longer. 
    4) The kinds of operations needed to implement Deflate on image data are so quick and take so little power that the power saved by 1, 2, and 3 more than covers the cost of Deflating the data. I don't have the stats for Leica, but basically just the cost of writing to the SD card is about 4x the cost of compressing the data.
     
    On the bigger machines that I work with, the speed of RAM relative to the aggregate speed of the processors and their demands for data is nearing a point where it may be more efficient to keep RAM compressed and only have the processor caches uncompressed. So don't be surprised if you start seeing things like this appearing in memory controllers. The old arguments about the cost in processor cycles to do things like losslessly compress data really don't apply anymore.
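     
    To make the trade-off concrete, here is the rough shape of that kind of benchmark on a desktop. The absolute numbers depend entirely on your CPU and storage (a camera writing to an SD card is far more I/O-bound than a computer writing to an SSD), so treat this as an illustration of the argument, not a measurement of any camera.
```python
import time
import zlib

# Rough shape of the write-raw vs. compress-then-write comparison.
# Absolute numbers depend entirely on CPU and storage; the point is only
# that compressing first shrinks the amount of slow I/O that follows.
data = bytes(range(256)) * 100_000            # ~25 MB stand-in for raw frames

start = time.perf_counter()
with open("uncompressed.bin", "wb") as f:
    f.write(data)
raw_seconds = time.perf_counter() - start

start = time.perf_counter()
packed = zlib.compress(data, level=1)         # fast setting, as a camera might use
with open("compressed.bin", "wb") as f:
    f.write(packed)
packed_seconds = time.perf_counter() - start

print(f"raw write:        {len(data):>10} bytes in {raw_seconds:.3f}s")
print(f"compress + write: {len(packed):>10} bytes in {packed_seconds:.3f}s")
```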
  12. Like
    bencoyote got a reaction from Vec in DNG compressed VS uncompressed?   
  13. Like
    bencoyote got a reaction from pechelman in DNG compressed VS uncompressed?   
  14. Like
    bencoyote reacted to Erik Gunst Lund in Leica M10 raw file (DNG) analysis   
    S 007 is 16 Bit
  15. Like
    bencoyote reacted to sandymc in Leica M10 raw file (DNG) analysis   
    For those interested in the gory technical details, my usual "new Leica camera" analysis is up, although in this case there's quite a lot that's (as yet) unknown.
     
    Leica M10 raw file (DNG) analysis http://chromasoft.blogspot.com/2017/01/leica-m10-raw-file-dng-analysis.html

    Sandy
     
  16. Like
    bencoyote got a reaction from skanga in DNG compressed VS uncompressed?   
    This is bordering on pedantic, so excuse me for that. There is one additional downside to even lossless compression: if there is any data corruption, then more of the image is affected, or it can more easily become unreadable.
     
    So say, for example, a gamma ray from a supernova in Galaxy9 happens to hit your hard disk platter and flips a bit or two in your data (no, seriously, these things do happen all the time; in the past I used to write drivers for RAM error detection and correction), or you have a plain old disk sector error due to a subtle manufacturing defect. If the pattern that was corrupted is repeated in multiple places in your image, then all of those places will be corrupted, and if metadata needed to reconstruct the image is corrupted, then the image may not be reconstructable at all. This is one of the reasons people use disk arrays with checksums.
     
    Uncompressed data doesn't carry as much metadata required to reconstruct the whole image, and bit flips caused by high-energy photons or subatomic particles only have a localized impact on an image; they won't be repeated throughout it.
     
    On the other hand, because an uncompressed image physically occupies more space on a disk or in the solid state storage, it makes a bigger target for the aforementioned gamma ray particle from Galaxy9 or there are more sectors on the disk which could potentially become unreadable.
     
    All of that being said, and having intimate first-hand knowledge of all the problems that can possibly occur with digital storage, do I consider that to be justification not to use compressed DNGs? Emphatically NO! It is roughly akin to a rancher fearing a terrorist attack by ISIS while watching over a cattle herd grazing the open range in northern Nevada. (OK, that may be a bit hyperbolic, but what do you expect from a Californian after watching Trump win the nomination?)
     
    It does mean that if you are using compressed DNGs (or even uncompressed ones), you should use good digital asset management practices. Drives fail, sectors become corrupted, sometimes silently, and when they do, that corruption can spread to backups and you might lose something important. Compressed data is more prone to large-scale and unrecoverable corruption, but uncompressed data has more opportunity to become corrupted in a minor way.
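     
    The "one bad bit" point is easy to demonstrate on a desktop: flip a single bit in a Deflate stream and decompression usually fails outright, while the same flip in uncompressed bytes changes exactly one value. This is only an illustration of the principle; a real DNG reader may handle damage more or less gracefully than plain zlib does.
```python
import zlib

# Flip one bit in uncompressed vs. Deflate-compressed data to see how far
# the damage spreads. Illustration only; real DNG readers may recover more
# or less gracefully than plain zlib.
original = bytes(range(256)) * 4096
packed = zlib.compress(original)

def flip_bit(buf: bytes, index: int) -> bytes:
    out = bytearray(buf)
    out[index] ^= 0x01
    return bytes(out)

# A flipped bit in the raw bytes changes exactly one byte.
damaged_raw = flip_bit(original, len(original) // 2)
changed = sum(a != b for a, b in zip(original, damaged_raw))
print("bytes changed in the uncompressed copy:", changed)       # -> 1

# The same flip inside the Deflate stream usually breaks decompression.
damaged_packed = flip_bit(packed, len(packed) // 2)
try:
    zlib.decompress(damaged_packed)
    print("decompressed despite the damage (rare, but possible)")
except zlib.error as exc:
    print("decompression failed:", exc)
```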
     
  17. Like
    bencoyote reacted to ianman in Leica M 10   
    Nah, it's only available as a pdf in the camera.... that's why it's got wifi, so you can read it on your phone 
  18. Like
    bencoyote got a reaction from Tortuga in Skin tones with the M-P240   
    I used to do a lot of work with color, and specifically color printing. Color is a very complicated topic, and in the color theory classes that my employer sent me to, the problem that the original poster described was introduced on the first day. They kind of used it as a starting point for the topic of sensation vs. perception. It would be impossible for me to distill weeks of classroom study down to a short forum post, but let me try to get at the essence of it:
     
    There is an objective measurable interaction between an object and the light that hits it. We can build sensors that capture a portion of this interaction in a way that is similar to the way that our eyes do. This is objectively measurable.
     
    When we look at the world, what we think we see is our perception. The thing that we have in our mind is something more like an HDR, focus-stacked, white-balance-corrected, sharpened image which is heavily modified in many ways based upon our past experience. This is what we call perception, and by the time your brain is done manipulating it this way and that, it bears little resemblance to the actual sensations that your eyes generate.
     
    When we look at a still picture, several layers of that processing are no longer available. We can't shift our focus or the white balance or a whole bunch of other factors. Furthermore, because it is a still image we can look at it longer, and so things that we wouldn't have time to notice when trying to process the torrent of sensory data coming from our eyes in a moving scene are able to bubble up to our awareness.
     
    A really big factor to keep in mind when looking at a photo is the very real difference between objective reality and perceived reality. When you are paused looking at a captured still photo, there is a very strong desire in some people to apply one of the last steps in the brain's automatic post-processing to the image: to bring in your vast experience of what you think something like skin tones should look like and override the objectively captured information recorded by the camera.
     
    That is not to say the camera is always exactly correct. As Jaap pointed out, even the camera has to infer some things, like white balance. If you are just trying to be artistic and make a pretty picture, do whatever you want. However, if you are doing product photography and you need to make sure that the clothes look the same on the model on the runway with carefully designed lighting, in the catalog, and on the ultimate consumer in the store with their likely fluorescent lights, at home with tungsten lights, and outside, then you have to do a huge amount of work.
     
    1) As Jaap said, profile the camera with a ColorChecker Passport. You will need to do this under various lighting conditions: bright daylight, overcast, and a couple of indoor lighting conditions. With outdoor natural light, two things that matter more than most people think are elevation and the amount of water vapor in the air. So if you are more than about 1000m higher or lower than the altitude at which you calibrated your camera, you want to run through the color calibration again. The same is true if you go from a very moist area like near the ocean to a very dry place. Different cameras are more or less sensitive to these changes. In my experience the M is a bit more sensitive to altitude than my T; I don't remember noting a difference with water vapor. When I first calibrated my Leica cameras, I remember thinking "Wow, these Germans really are into objective reality vs. making colors look good (which is what other camera vendors often do -- ehm, Olympus, Panasonic)." I also remember noting that my color profiles were not really that far off the "Embedded" profile that was in the camera. However, ACR and LR's color profile was way, way off. So one of the things that I put in my default develop presets is to change from "Adobe Standard" to "Embedded" for Leica cameras.
     
    2) When you say the colors don't look right, what are you looking at them on? The LCD, your computer's monitor, what? Have you calibrated it? How big is its color gamut? Is it good enough to represent the colors recorded? And you haven't messed with the brightness or contrast or any other settings on the monitor since you calibrated it, have you? Oh, and one more thing: what is the ambient illumination source in the room and how does it change throughout the day? Screens are not 100% black-body absorbers, so the light source in the room can mix with the light coming out of the monitor and distort even your sensation of color. If you really want to do it right, you should only edit your photos in a dark room with no natural light, on probably a brand new MacBook Pro that you have calibrated with something like a ColorMunki Photo.
     
    And all of that is long before you ever try to print something. There you have to deal with the reflectivity and spectral neutrality of the paper, the metamerism of inks or pigments, and finally the limited gamut of colors possible with printing.
     
    If you want to keep it really simple: buy a brand new MBP, use the Embedded profile rather than Adobe Standard, only do your editing at night with the lights off, and remember that there is an objective way things actually are and there is an artistic preconception of how you believe things should be.
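     
    For what it's worth, the profiling step in 1) above boils down to measuring how far the camera's rendering of each ColorChecker patch lands from the reference values and building a transform that pulls those errors in. Here is a toy sketch of that comparison using CIE76 delta-E; the Lab numbers are made-up placeholders, not the real X-Rite references.
```python
import math

# Toy version of the comparison a profiling tool makes: how far is the
# camera's rendering of each patch from the reference? The Lab values
# below are made-up placeholders, not the real X-Rite ColorChecker data.
reference = {
    "dark skin":  (38.0, 13.0, 14.0),
    "light skin": (65.0, 18.0, 17.0),
    "blue sky":   (50.0, -5.0, -22.0),
}
measured = {
    "dark skin":  (39.2, 11.5, 15.1),
    "light skin": (63.8, 19.4, 18.0),
    "blue sky":   (51.0, -3.9, -24.2),
}

def delta_e_76(lab_a, lab_b):
    """CIE76 color difference: Euclidean distance in Lab space."""
    return math.dist(lab_a, lab_b)

for patch, ref in reference.items():
    print(f"{patch:<11} dE76 = {delta_e_76(ref, measured[patch]):.1f}")
# A camera profile is essentially the transform that pulls these errors
# toward zero under the lighting you calibrated for.
```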
  19. Like
    bencoyote got a reaction from paulmac in Skin tones with the M-P240   
  20. Like
    bencoyote got a reaction from farnz in MP-240 Brassing   
    My M-P was stolen a month or so ago, and I took the insurance money to the Leica Store in SF and asked for a new M-P, which was out of stock (retail out the door ~$8600). However, they did have a used M-P, which they offered me. They said that it didn't come with a box or any accessories other than the charger and that it was "cosmetically challenged and had a lot of brassing". I said that I didn't mind some brassing, but I was certainly not ready to accept something that was like the Lenny Kravitz special edition, and that I would come over and look at it. In the end the brassing wasn't too bad -- maybe a bit worse than mine and certainly way more than you have. 
     
    So: used, no box or accessories, and notable brassing, and I got it for $4300 out the door, IIRC. In that case at least, a notable portion of the price reduction seemed to be due to the brassing. 
     
    I could be wrong, though. The camera has who knows how many shutter actuations. The firmware was old and didn't have the fix for "Check Battery Age", so I suspect that it may have been one of the first M-Ps. 
     
    I'm strongly of the opinion that cameras are meant to be used. If in the process of being used things happen to them, so be it. Have fun taking pictures. 
  21. Like
    bencoyote reacted to Shane Guthrie in Leica Summicron-M APO 90mm ASPH.   
    I got it because I wanted a portrait lens. It has since invited experimentation for me, and I'm enjoying that too. 
     
    M-P 240
    Apo-Summicron-M 90mm; ISO 200; 1/15 s; f/5.6
  22. Like
    bencoyote got a reaction from M11 for me in Leica Summicron-M APO 90mm ASPH.   
    When I was trying to get the hang of the lens, I walked down to the bike path and shot oncoming bikes. After about a week of doing this, I had it together. I tried to deconstruct the learning process.
     
    In addition to the normal things about narrower depth of field and the detail in the focus patch being harder to see, I had to reprogram my muscle memory for the 90AA.
     
    The focus throw on my 50mm Lux goes from about the 4 o'clock to the 7 o'clock position, which is about 110 degrees, about the same as on my 90 AA Cron. But going from 3m to infinity takes about 10 degrees on the 50mm Lux and about 30 degrees on the 90 AA.
     
    The other thing was that I'd trained my brain to know the working distance of the 50 Lux and guess distances like 1m, 1.5m, 2m, 5m. Past about 5m on the 50 Lux, focus really didn't matter. With the 90AA, since the working distance is much farther, I had to get the hang of knowing that focus still mattered out to about 10m.
     
    The things that really make the 90AA practically usable for me are:
    1) Most of the time when I'm using it as a short telephoto most things are still way out at infinity. Landscape kind of stuff with no decisive moment.
    2) When using it as a portrait lens, the model and I remain approximately the same distance apart and only fine-tuning is necessary.
    3) When at a concert or something, you can get the focus then take multiple shots as moments happen.
    4) With faster moving targets like the oncoming bikes, I found it was much easier to set focus at some distance and let someone ride into it rather than trying to lock onto them. Once they have entered focus I can sometimes manage to pull focus, but it is tricky.
     
    I like the lens and I do like the nice tight DOF and low-light performance, but someday I'll probably buy the 90mm Macro-Elmar and, when I travel, trade aperture for smaller size and weight.
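     
    The "focus still matters out to about 10m" observation falls straight out of the usual thin-lens DOF approximation. A quick sketch, assuming a full-frame circle of confusion of about 0.03mm, so treat the exact metres as rough:
```python
# Thin-lens DOF approximation for the 90mm at f/2 on full frame.
# The circle of confusion is assumed to be ~0.03 mm, so the exact
# figures are rough, but the trend is what matters.
def dof_band(focal_mm, f_number, subject_m, coc_mm=0.03):
    f, c = focal_mm / 1000.0, coc_mm / 1000.0
    hyperfocal = f * f / (f_number * c) + f
    near = hyperfocal * subject_m / (hyperfocal + (subject_m - f))
    far = hyperfocal * subject_m / (hyperfocal - (subject_m - f))
    return near, far

for distance in (5.0, 10.0, 20.0):
    near, far = dof_band(90, 2.0, distance)
    print(f"90mm f/2 at {distance:>4.0f}m: in focus from {near:.1f}m to {far:.1f}m")
# At 10m the in-focus band is still only about a metre and a half deep,
# which is why focus placement keeps mattering out to that distance.
```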
  23. Like
    bencoyote got a reaction from Csacwp in Leica Summicron-M APO 90mm ASPH.   
  24. Like
    bencoyote reacted to pedaes in Leica Summicron-M APO 90mm ASPH.   
    Can be used for moving subjects also

  25. Like
    bencoyote reacted to TheGodParticle/Hari in Leica Summicron-M APO 90mm ASPH.   
    The APO 90/2 M is a cracker of a lens
     
    Prices are at an all-time low, so if there's been a pending decision on this lens, now's the time to buy.
     
    As NB23 said, the black paint version is a work of art
     
    Many mentioned portraits already but it is equally stunning for landscape work
     
