
M11 adopts the Japanese~American look to color science


Jim B

Recommended Posts


10 hours ago, Adam Bonn said:

Have Leica ever explained a colour philosophy?

Sure, we users like to say oh the M9 looks just like Kodachrome (sic) or slide film etc, but I'm pretty sure that info isn't in the brochure

It was kind of an assumption, since the M8/M9 sensors were Kodak products with (presumably) at least some "Kodak color science" involved. ;)

.............

BTW, I have been tracking "color science" for over 50 years (beginning at age 17 in high school - has it really been that long?!) Not as a scientist, but as an informed user. Color rendering was always very interesting to me.

The "short version" (hah!)

1971 - subscribed to the TIME-LIFE Library of Photography, and read its book on COLOR. Learned how standard "tri-layer" color film worked, as well as a partial history of other processes (Autochromes, Polaroid instant color-print film). Was introduced to the work of color innovators like Ernst Haas.

https://www.amazon.com/Color-Library-Photography-Robert-editor/dp/B00125UBV0

Shot and processed my first roll of color film (High Speed Ektachrome (ISO 160) pushed to EI 400 in E4 chemistry). Examined how it reproduced color, and how that changed with push-processing. Shot some Agfa CT-19 (a dedicated process before Agfa gave up and adopted E6 chemistry) - more magenta than Kodak products.

1973 - shot a comparison of Ektachrome-X (ISO 64) with Kodachrome II (ISO 25) freshman year of college, and noted that Ektachrome was more saturated than K-chrome II (I preferred the Ektachrome color - K-chrome was clearly sharper and less grainy, though).

1974 - Explored the "new" Kodachromes with the actual names 25 and 64 instead of II and -X. Studied Nat. Geo's use of color photos.

1976 - tried a roll of GAF 500 (ISO 500) color slide film as part of my senior photo project - warm romantic color, and so grainy it looked like Seurat's pointillist paintings.

1977 - tried the new Kodacolor 400 negative film (first at that speed) for a grad-school (I) photoessay on women's field hockey.

1982 - working at a hospital, got to be part of a field-test for Polaroid's new PolaChrome instant 35mm slide film (weird additive-color process rather like a Bayer filter array, except linear filters: color lousy, visible RGB "stripes" if enlarged, but processed in 3 minutes).

https://fstoppers.com/film/weirdest-35mm-film-polaroid-ever-made-558319

First tried out Fujichrome film (50 RF) for my own work - found it to be both more saturated and neutral than Kchrome, although slightly less sharp and more grainy. Used that from then on.

1984-85 Grad school (II) in Photojournalism - learned how newspaper/magazine printing-press color separations worked, and how run-of-press (ROP) color was proofed and "controlled."

1986 - Went to work for a newspaper that was using the new Scitex proprietary color-scanning/pre-press/layout system, and observed how it worked.

1992 - Fuji replaced my beloved RF50 with RVP50 Velvia - finer-grained, but lost the neutral, open colors I loved. Darker/contrastier, and more purple. Adapted to it.

Worked as AME/Graphics setting up another newspaper's new desktop-based color-separation and production system - Mac II color computers, Leaf film scanners (s-l-o-w! 7 minutes per 35mm color frame), Agfa/Hyphen imagesetters, Unix RIP (****Crash(BANG!)****). Bought my first personal color Mac and Photoshop (2.5). Got to handle the first color digital news camera developed by the Associated Press/Nikon/Kodak - the DCS100 (Nikon F3 with cropped digital back, 1.3 Mpixels, over-the-shoulder processing/storage unit (hard drive, no cards)). $25,000 - if one was an AP client newspaper. Or available for rental.

https://www.nikonweb.com/dcs100/

Ended the year going to the NPPA Technology Conference and seeing the brand-new Nikon Coolscan LS-10 "in person."

I'll leave the story there - 30 years ago. But let's just say that this cowboy has been to the "color-science rodeo" many times. ;)

Edited by adan

20 minutes ago, adan said:

It was kind of an assumption, since the M8/M9 sensors were Kodak products with (presumably) at least some "Kodak color science" involved. ;)

My hunch is that the M9 looks filmic because Leica wanted it to appeal to folks who were still shooting with an M7, plus that (by modern standards) low-DR M9 sensor has a lovely tonal roll-off into highlights and shadows.

The base 16 MP APS-C chip from the NEX 5 (I think it's that model) has been used in the Nikons, the Fujis, the Pentaxes and I think the Leica X cameras, yet in each application it has a different tone curve, colour science, DR figures and even (often) a different base ISO.

A little like an engine's relationship to a gearbox (different performance from the same unit achieved via gear ratios), the base sensor's end look is pushed and pulled by physical factors such as the filter stack, cover glass and CFA, and very much by coders and developers.

RE Leica using a Sony sensor… Like cookery… it matters little where quality ingredients are sourced; what matters is what the chef does with them

(two similes in one post, is this a personal best for me 😅)
 


If there is a common point colour-wise between the M8, M9, M240, M10 and M11, it is colour saturation IMHO. It is interesting to compare DNG files converted by RawDigger and other raw converters from this viewpoint. I'm not techie enough to explain it, but the M11 profile referred to by Adam above could be the culprit.


37 minutes ago, lct said:

If there is a common point colour-wise between the M8, M9, M240, M10 and M11, it is colour saturation IMHO. It is interesting to compare DNG files converted by RawDigger and other raw converters from this viewpoint. I'm not techie enough to explain it, but the M11 profile referred to by Adam above could be the culprit.

Well super helicopter view (more like ISS view)

adobe profile =

thing one WB - thing two D50 matrix - thing three D50 tune (thing four, adobe look via profile look table, but much like Bruno, we don’t talk about that)

Leica embedded profile =

thing one.

So the adobe profile works like

WB "let's make neutral colours" - D50 "whoa that's saturated, let's turn that down a bit" - D50 tune "ok let's manage the saturation here"

So the Leica profile works like

Thing one “hmmm let’s cram everything into here as we have no thing two or three”


Interesting, thank you. So what happens when we don't use Adobe converters? Iridient Developer here. First pic: Leica M11 profile; second pic: DNG Matrix profile. Is this similar to what you called "thing one" and "thing two" above? Just curious.

[images]

On 6/2/2022 at 5:10 PM, Jim B said:

While people are wondering what went wrong, a quick trip over to the M11 images thread will leave you more confused than ever!  Why wasn’t more attention paid to maintaining color science between the various bodies?  That guy that got banned was right, the magic is gone… they look like they could have been shot on any of the modern high res cameras. What a shame.

This post shows such a clear lack of understanding of so many factors that I can only assume that the bloke who got banned is back with another handle. 



Japanese color, from the time I shot color negatives and slides, meant that Fuji as a brand tended toward green, often vibrant. Japanese lenses leaned the same way.

I have not seen pictures made with the M11 with that cast or blend of green. I have looked at some M11 DNGs: not green at all. So I think that the OP's problem is . . .

-

I know what greenish is in the digital world. I had a Japanese wide-angle lens with a green cast. In the mountains it was very nice, by the way; dried-out grass looked livelier. But that's how hysteria and ranting go: I told myself the result on my M8 was unwantedly Japanese, and in certain circumstances, such as woods, it could not be remedied. I sold it. Too bad, it was extremely sharp and full of resolution. It would have been great on the Monochroms.

The OP had better stick to the real-APO or almost-APO lenses that many Leica lenses are. These give a magic that might be appreciated on the M10-R and even more on the M11. And take the effort to learn to shoot the new camera generation: flaws of old lenses show up more, inherently. So adapt style and method: faster speed (but 1/f works), slower sensitivity (I stick to less than ISO 800), and a smaller aperture (-1 stop at least) on some occasions and a wider one (+1) on others.

I'm glad the Leica magic evolves as more characteristics of the lenses come to bloom in the newer cameras. Subjects just shine. I am starting to understand what Karbe has been looking for. Heck, I wouldn't want an old output, not even the M9's, on a modern camera. Of course I like my old pictures.

But any change is trivial compared to the ease of handling images, the detail and the resolution, even though profiles play a more important part in the workflow now than in the past: due to the higher resolution, and more so due to the almost 1.5× DR advantage of the M10/M11 over the M9 or M240, for instance.

Funny, my wife said that my new high resolution pictures have an unnamable quality to them: they are 'calm', for me they have an atmosphere. So I'm lucky they are not the same . . . 

-

The famous friend of the OP has named a lot of sore spots of the new camera generation. Up till now, I am debunking them. One by one. What a shame. 😚


3 hours ago, lct said:

Interesting, thank you. So what happens when we don't use Adobe converters? Iridient Developer here. First pic: Leica M11 profile; second pic: DNG Matrix profile. Is this similar to what you called "thing one" and "thing two" above? Just curious.

I'd have to see inside the Iridient profiles to know!

OK so here's a snippet of the adobe standard profile (for the M10, as it's the one I have to hand)

(The actual profile is 10043 lines of text/code... ok some of those lines are not related to the colours, like copyright adobe or unique camera model xyz, but nearly all of it is code)

"UniqueCameraModel": "LEICA M10",

"ProfileName": "Adobe Standard",

"ProfileCopyright": "Copyright 2016 Adobe Systems, Inc.",

"ProfileEmbedPolicy": "Allow copying",

"ProfileCalibrationSignature": "com.adobe",

"CalibrationIlluminant1": "StdA",

"CalibrationIlluminant2": "D65",

"ColorMatrix1": [

[ 1.059900, -0.464000, -0.007800 ],

[ -0.532000, 1.564300, -0.062200 ],

[ -0.064100, 0.227500, 0.846800 ]

],

"ColorMatrix2": [

[ 0.909000, -0.334200, -0.074000 ],

[ -0.400600, 1.345600, 0.049300 ],

[ -0.056900, 0.226600, 0.687100 ]

],

"ForwardMatrix1": [

[ 0.505100, 0.342900, 0.116300 ],

[ 0.293500, 0.626600, 0.079900 ],

[ 0.156400, 0.000500, 0.668200 ]

],

"ForwardMatrix2": [

[ 0.497400, 0.397700, 0.069200 ],

[ 0.211600, 0.773600, 0.014800 ],

[ 0.061600, 0.000200, 0.763200 ]

],

"ProfileHueSatMapDims": [ 90, 30, 1 ],

"ProfileLookTableDims": [ 36, 8, 16 ],

"ProfileHueSatMap1": [

{ "HueDiv": 0, "SatDiv": 0, "ValDiv": 0, "HueShift": 2.000000, "SatScale": 1.000000, "ValScale": 1.000000 },

The final line contains an example of the colour tweaks, this code continues for hundreds of lines or thereabouts

Now here's the Leica embedded profile (also for the M10)

{

"UniqueCameraModel": "LEICA M10",

"ProfileName": "Leica M10 Standard Embedded with FMs",

"ProfileEmbedPolicy": "Allow copying",

"ProfileCalibrationSignature": "com.adobe",

"CalibrationIlluminant1": "StdA",

"CalibrationIlluminant2": "D65",

"ColorMatrix1": [

[ 0.855800, -0.306500, 0.047200 ],

[ -0.557400, 1.539300, -0.002100 ],

[ -0.080100, 0.264700, 0.758900 ]

],

"ColorMatrix2": [

[ 0.824900, -0.284900, -0.062000 ],

[ -0.541500, 1.475600, 0.056500 ],

[ -0.095700, 0.307400, 0.651700 ]

]

}

That's it, that's every single part. Comparatively spartan isn't it?

I'll try to be brief here (I failed) as I suspect my interest in this somewhat eclipses yours!!

So what is a color matrix (other than a crime against British english) 

The colour matrices map between un-white-balanced RAW data and the standard illuminants, specifically StdA (indoor yellowy light) and D65 (cloudy midday sun)

How do they do this? Well in every (Leica) DNG there's a numeric tag called 'As Shot Neutral'

In some maths which I'll hideously oversimplify for brevity:

as shot neutral x color matrix = WB value for the photograph

You will notice that both Leica's profile and adobe's have color matrices, but they don't share the same values.

So exactly the same camera, exactly the same shot, exactly the same as-shot-neutral tag - but the moment you pick a profile you get a different colour set

How so (ok because the numbers are different between the adobe/Leica profile), but really how so?

Here's the Leica CM again (but every CM works the same)

[ 0.824900, -0.284900, -0.062000 ],

[ -0.541500, 1.475600, 0.056500 ],

[ -0.095700, 0.307400, 0.651700 ]

It's actually a kinda recipe...

Line one: to make RED, take 0.82 parts red, subtract 0.28 parts green and subtract 0.06 parts blue (and so forth)
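Read that way, the maths is just a 3×3 matrix times a 3-vector. A minimal sketch in plain Python, using the two ColorMatrix2 blocks quoted above (a deliberate oversimplification: the real DNG pipeline layers white-balance mapping and chromatic adaptation on top of this):

```python
# Simplified "recipe" reading of a DNG ColorMatrix:
# each output channel is a weighted mix of the three input channels.
# Rows are the ColorMatrix2 values quoted from the two M10 profiles above.
LEICA_CM2 = [
    [ 0.8249, -0.2849, -0.0620],
    [-0.5415,  1.4756,  0.0565],
    [-0.0957,  0.3074,  0.6517],
]

ADOBE_CM2 = [
    [ 0.9090, -0.3342, -0.0740],
    [-0.4006,  1.3456,  0.0493],
    [-0.0569,  0.2266,  0.6871],
]

def apply_matrix(m, rgb):
    """3x3 matrix times a 3-vector: one 'recipe' row per output channel."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in m]

# Same input, two profiles, two different results - which is exactly why
# the same DNG renders differently depending on the profile you pick:
neutral = [1.0, 1.0, 1.0]
print(apply_matrix(LEICA_CM2, neutral))
print(apply_matrix(ADOBE_CM2, neutral))
```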

So the colours from the same camera will differ with different profiles because the recipe is different and it's also why when you set a proper WB in camera the colours come out better

as shot neutral (GUESSED BY CAMERA) x color matrix = WB value for the photograph

VS 

as shot neutral (SET ACCURATELY WITH A WB CARD) x color matrix = WB value for the photograph

Also of course

as shot neutral (GUESSED BY CAMERA) x color matrix (THAT YOU MADE YOURSELF SHOOTING A COLOUR TARGET AND MAKING A PROFILE) = WB value for the photograph

So that's two ways right there to get better colours (make your own profile and use a WhiBal card), and WHY those two things make better colours
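The white-balance half can be sketched the same way. The As Shot Neutral tag holds the raw RGB that a neutral grey produced, and balancing amounts to dividing each channel by it, which is exactly why an accurate grey-card ASN beats a guessed one. All the numbers below are made up for illustration; they come from no real file:

```python
def wb_gains(as_shot_neutral):
    """Per-channel gains that map the recorded neutral back to grey."""
    return [1.0 / c for c in as_shot_neutral]

def balance(rgb, gains):
    return [c * g for c, g in zip(rgb, gains)]

grey_patch = [0.52, 1.00, 0.58]     # raw values of an actual grey card

camera_guess = [0.48, 1.00, 0.63]   # ASN as guessed by the camera
card_measured = [0.52, 1.00, 0.58]  # ASN set from the grey card itself

# With the measured ASN the grey card comes back truly neutral:
print(balance(grey_patch, wb_gains(card_measured)))
# With the guessed ASN it doesn't, i.e. a colour cast:
print(balance(grey_patch, wb_gains(camera_guess)))
```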

When Leica inevitably release a FM that tweaks the colours, they'll do this by changing the algorithm behind 'As Shot Neutral' -ASN- (that way they don't break all the photos you already have with their/adobe/etc profiles)

The Leica profile solution (two matrices) places all the heavy lifting into the mapping of the ASN tag, all the colours and saturations etc into those 18 numbers.

Adobe's solution utilises FORWARD matrices.

These are recommended in the DNG spec.

The forward matrices "forward", aka chromatically adapt, the two standard illuminants into the D50 colour space (sunny midday, flash, 5000 Kelvin: standard photography light)

Now if you follow the letter of the DNG spec to make a forward matrix from a CM, you'll get a linear solution that can (and does) clip and over-saturate colours at certain points of exposure.

So what adobe does is ignore their own spec and do some secret proprietary shit in the background

They de-sat the output of their FMs, then bring the colours back up further downstream in the Profile Hue Sat Map tables. These have a mahoosive number of entries, form the bulk of the adobe profile, and serve to tailor the colour science to individual colours and keep the colours stable across different exposures.
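To make those tables less abstract: each entry is just a hue rotation plus saturation and value scaling. Here's the mechanics of applying the single entry quoted from the profile above (HueShift 2.0, SatScale 1.0, ValScale 1.0) to one HSV pixel; the real pipeline interpolates between neighbouring entries in the grid, which this sketch skips:

```python
def apply_hsm_entry(h, s, v, hue_shift, sat_scale, val_scale):
    """Apply one ProfileHueSatMap-style entry to an HSV value.

    Hue is in degrees and wraps around; sat and val are scaled
    and clamped to [0, 1].
    """
    h2 = (h + hue_shift) % 360.0
    s2 = min(max(s * sat_scale, 0.0), 1.0)
    v2 = min(max(v * val_scale, 0.0), 1.0)
    return h2, s2, v2

# The entry quoted above: HueShift 2.0, SatScale 1.0, ValScale 1.0
print(apply_hsm_entry(359.0, 0.5, 0.8, 2.0, 1.0, 1.0))
```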

But profiling a camera is always a set of compromises... as you'll all know... in the simplest sense.. mess with red and you're messing with green etc. Often in profiling in order to make one colour better another must be worse..

These compromises can be driven by things like the CFA colours or by the way the base chip is coded to work... for example the M240 chip handles red and blue amplification differently than green (source), which led to green shadows when pushed and also the 240 was/is a kinda yellowy-red rendering camera...

I've written too much about this (yet again)

But the M11 is new. Many tweaks to come via FM yet... many different editing tricks to try yet: LR/ACR camera calibration sliders override the profile's D50 chromatic adaptation, for example, and a split-tone edit can do wonders for shadow casts. I'll stop mansplaining; I'm sure you all know how to edit photos.

Of course, if anyone eventually tires of Leica/adobe/DxO/etc canned profiles, they can make their own with some time and some effort.

Congrats to anyone who read this far!! 

In short... ahahaha...

The M11 doesn't have Japanese or any other type of rendering. It has a series of compromises at chip level (for DR, noise, stuff like that), then a white-balance-guessing algorithm inside, which feeds into one of several profiles in various RAW apps, each with its own bunch of colour compromises. Just like every other camera.

Plenty of scope for tweaks and improvements.

15 hours ago, Adam Bonn said:

In short... ahahaha...

The M11 doesn't have Japanese or any other type of rendering. It has a series of compromises at chip level (for DR, noise, stuff like that), then a white-balance-guessing algorithm inside, which feeds into one of several profiles in various RAW apps, each with its own bunch of colour compromises. Just like every other camera.

Plenty of scope for tweaks and improvements.

If I can add a little bit. The colour and intensity of light which enters the lens is altered by many variables prior to being output (in a variety of ways, to a variety of devices). So in essence the colour going in will not be matched by the colours coming out. All that can be achieved is a mapping of the input colours into 'similar' output colours, and this can be adjusted within limits imposed by the overall system.

As far as I am aware this is mathematically controlled and not by any form of magic, since magic does not exist. However readers may feel free to believe in magic if it makes them happier, although this won't actually change the output colours.


1 hour ago, pgk said:

The colour and intensity of light which enters the lens is altered by many variables prior to being output (in a variety of ways, to a variety of devices). So in essence the colour going in will not be matched by the colours coming out

 

I'm sure you're right, but I'm not entirely sure what you mean... No camera can match the DR of real life and not only can the camera not record the full spectrum of human vision but human vision can't see the full spectrum of reality.

In terms of digital imaging, the main set of variables (well, IMO anyway) is how the camera and the RAW apps choose to hammer and bend reality into a human-defined colour space.

 

1 hour ago, pgk said:

All that can be achieved is a mapping of the input colours into 'similar' output colours, and this can be adjusted within limits imposed by the overall system.

 

Man if I'd just written that above I could've saved over an hour of my life and several minutes of the lives of the good folks on the forum that took their time to read it 😅 Seriously nice summary, thank you.

The 'mapping' is mathematical (there's a lot of maths in photography) and for those that care (....) the maths for adobe's DCP solution can be found in the links I put in my big long post above.

 

1 hour ago, pgk said:

As far as I am aware this is mathematically controlled and not by any form of magic

 

http://www.brucelindbloom.com/ has all the maths (and a very useful colour space calculator) that anyone will ever need!

When I first fell into this marriage-damaging, time-stealing rabbit hole I made a real effort to understand the maths... which sucked, as I failed maths at school (twice, and yes, my written English and spelling were pretty shitty too!)

Then I bought lumariver and found it all so much easier with a GUI to help!
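As a small taste of the maths on that site, here's the last hop most images take: XYZ (D65) to sRGB, a 3×3 matrix followed by the piecewise sRGB gamma curve. The matrix and curve are the standard published sRGB values, nothing Leica- or profile-specific:

```python
# Standard XYZ (D65) -> linear sRGB matrix, per the sRGB specification.
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def gamma_encode(c):
    """Linear light -> sRGB-encoded value (standard sRGB curve)."""
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def xyz_to_srgb(xyz):
    linear = [sum(row[i] * xyz[i] for i in range(3)) for row in XYZ_TO_SRGB]
    return [gamma_encode(c) for c in linear]

# Sanity check: the D65 white point should land on (very nearly) pure white.
print(xyz_to_srgb([0.95047, 1.0, 1.08883]))
```

D65 white in, (near) pure white out is a handy sanity check for any implementation of this step.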

 

1 hour ago, pgk said:

since magic does not exist

 

Don't tell my 9 year old daughter who's a Harry Potter fan please 😅

 

1 hour ago, pgk said:

although this won't actually change the output colours

 

In my opinion, people like to talk about getting great colours or fixing skin tones etc... and obviously this is a valid desire, but we must (again, I stress: IMO) have a pretty tangible idea of what we want, what we need to do to an image to fix it, and how we are going to go about it. A decent preset is far easier to make than a profile. A profile that works great under the lighting in your lounge may very well look shit under the lighting in your buddy's lounge. Are the skin tones really that bad, or is there something about that particular scene that's f**king them up (reflections, light filtering down through a green tree)?

Here's a little bit of fun....

a snippet of a shot.

[image]

The profile used here was created (by me) with an X-Rite ColorChecker CC24 using their proprietary software.

This is exactly the same shot, exactly the same everything, but now with the M10 profile (made by Leica)

Same again, but the profile made by adobe

Which is best? Well, really none of them... they all have attributes that would work better in some scenes than others.

(I don't really like the X-Rite solution... you upload an image, it does stuff in the background, and out pops a profile. You have no control over the process, and the resultant profile is just one colour matrix per test shot. So it's pretty basic, just like the embedded Leica profile. As far as I'm concerned, FWIW, it's a solution designed so that you take a test shot for a scene and make a profile, then later, when faced with a different scene, you make another... but folks seem to like to use it to make a forever profile, and I'm not convinced that's what it's really for... But if it works, do it!)

Oh and worth mentioning... I guess... for those that care... the X-Rite profile is a single matrix. So I stripped out the M10 and adobe profile to be single matrix also for a fair comparison

The point is that the same shot can be profiled in many different ways and when you look at the above there's also what device (phone, tablet, screen) you're using and what browser you're using too.

Search for colours that you like; as a Mandalorian might say, this is the way.

 

 


59 minutes ago, Adam Bonn said:

No camera can match the DR of real life and not only can the camera not record the full spectrum of human vision but human vision can't see the full spectrum of reality.

The new Arri Alexa 35 has 17 stops of usable DR, the SL2 about 12, plus new color modes and new log formats that can hold the DR.

It is hard to blow out highlights when exposing for shadows. Quite amazing and promising.

This will push other companies to innovate.

little video here 
https://www.youtube.com/watch?v=m8EQaIWaAlE

 

 


9 minutes ago, Photoworks said:

The new Arri Alexa 35 has 17 stops of usable DR, the SL2 about 12, plus new color modes and new log formats that can hold the DR.

It is hard to blow out highlights when exposing for shadows. Quite amazing and promising.

Output still has to be acceptably viewable. I did start a thread on this because my suspicion is that our 'tastes' are changing as a result of higher dynamic ranges being recorded. We now prefer greater highlight and shadow detail/tonality on screen or in prints, but it is a fine line; poorly implemented processing can all too easily result in a picture that has shades of unacceptable HDR stylisation, which IMO looks horrid.


24 minutes ago, pgk said:

Output still has to be acceptably viewable. I did start a thread on this because my suspicion is that our 'tastes' are changing as a result of higher dynamic ranges being recorded. We now prefer greater highlight and shadow detail/tonality on screen or in prints, but it is a fine line; poorly implemented processing can all too easily result in a picture that has shades of unacceptable HDR stylisation, which IMO looks horrid.

You probably didn't look at the video.

The increased DR of the Arri results in fine gradation and color nuance that nobody was able to capture before, looking more natural and real than ever.

But if you prefer the M9 look, that is fine too; you can still edit every image like that. Just apply a curve and remove data to get what Kodak was able to get 10+ years ago.

BTW, when the M9 came out, Canon was already capable of delivering clean, beautiful images at low and high ISO. Just saying.

Anyway, the look you are describing is based on the limitations of the sensors and the JPG look out of camera. As a creative I like to make my own images and not just be happy with what I get out of the box.


53 minutes ago, Photoworks said:

The new Arri Alexa 35 has 17 stops of usable DR, the SL2 about 12, plus new color modes and new log formats that can hold the DR.

It is hard to blow out highlights when exposing for shadows. Quite amazing and promising.

This will push other companies to innovate.

little video here 
https://www.youtube.com/watch?v=m8EQaIWaAlE

 

 

I am not sure about the 17 stops, especially since Arri does not mention how they measured it and it has not been verified by a third party. When Sony claimed 15 stops for their a7R IV, at least they wrote "Sony test conditions".


18 minutes ago, Photoworks said:

You probably didn't look at the video.

The increased DR of the Arri results in fine gradation and color nuance that nobody was able to capture before, looking more natural and real than ever.

I didn't. Capturing a higher dynamic range always requires some way of displaying it. It's far easier when you can do this using a 'lit' system, but trying to put an excessive dynamic range into a print, for example, means adjustments (compression) which still have to produce something acceptable (as opposed to 'real' or 'natural', neither of which is possible, and the definition of either is subjective in any case). The problem is that in the real world there is an enormous dynamic range, and we see it by viewing multiple discrete segments (the eye bit) and then adding in experience (the brain bit) to fill in the gaps. The eye/brain system is actually not very high in optical quality but extremely competent at delivering an acceptable, apparently viewed image. This is not possible with imaging systems, and the acceptability of their output is subjective and changes as our experience shifts. Capturing a high dynamic range and then using it to increase shadow and highlight detail depends on our acceptance of the shift in tonality exhibited by the output.


1 hour ago, Photoworks said:

As a creative I like to make my own images and not just be happy with what I get out of the box.

Yeah me too, ergo the ton of detail I wrote about how DCP works and the maths behind it

High DR and colour isn't the same thing AFAIK.

Many people like (say) punk bands more than accomplished musicianship... IMO all things art based perform at their best when they resonate with the heart

1 hour ago, Photoworks said:

But if you prefer the M9 look, that is fine too; you can still edit every image like that.

Yeah, kinda... people go on about the M9 colours (and they are nice. Sometimes), but that low-DR sensor makes M9 images light and sparkly, and I think that's what people like.

The fact that the M9 is a legend, has a cult following and is still quite a few people's favourite rendering camera says a lot about attribute vs variable data (18 MP, maybe 8-9 stops of DR, and decent ISO only in the triple digits).


2 hours ago, Photoworks said:

It is funny to me that you stand firm and refuse a little opportunity to educate yourself, and still feel free to disregard other opinions.

Not sure we need to add anything. It is not a conversation.

Explain to me how print materials have changed to show higher dynamic range. They haven't, have they? So however much dynamic range you capture, it still has to be compressed in order to print. This means that decisions have to be made as to how to compress a higher tonal range into a print, and it is this problem that we have to learn how to deal with.
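That compression decision can be made concrete with one classic global tone-mapping operator, Reinhard's l/(1+l), which squeezes an unbounded scene luminance into the 0-1 range a sheet of paper can reproduce (a deliberately minimal sketch; real print pipelines use far more sophisticated curves plus local adjustments):

```python
def reinhard(l):
    """Map scene luminance [0, inf) into display range [0, 1)."""
    return l / (1.0 + l)

# Midtones are barely touched, extreme highlights are compressed hard:
for stops_above_grey in (0, 2, 4, 8):
    l = 0.18 * 2 ** stops_above_grey   # 18% grey pushed up N stops
    print(stops_above_grey, round(reinhard(l), 3))
```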


25 minutes ago, pgk said:

Explain to me how print materials have changed to show higher dynamic range. They haven't have they? So however much dynamic range you capture, it still has to be compressed in order to print. This means that decisions have to be made as to how to compress a higher tonal range into a print and it is this which is the problem that we have to learn how to deal with.

IMO, the benefit of a higher dynamic range is to allow extracting details in deep shadows without resorting to bracketing/HDR. 

