johnastovall Posted November 17, 2006 Share #1

Here is Knoll's take on correcting the M8's problems just with profiles.
jrgeoffrion Posted November 17, 2006 Share #2

Here is Knoll's take on correcting the M8's problems just with profiles.

I'll take full credit for having said it here first when the IR problem first came out.
Guest guy_mancuso Posted November 17, 2006 Share #3

Thanks JR, and that's the reason I have been working the IR filters so hard. I knew all along you have to block the transmission of infrared. Interesting about the coding, and he did not say wide angles or anything.
jrgeoffrion Posted November 17, 2006 Share #4

Thanks JR, and that's the reason I have been working the IR filters so hard. I knew all along you have to block the transmission of infrared. Interesting about the coding, and he did not say wide angles or anything.

I'll be very interested in seeing what the "fix" will be for the streaking, since it is the reason I canceled my order... while I'm still "stuck" with 11 lenses waiting to shoot weddings...
Guest guy_mancuso Posted November 17, 2006 Share #5

JR, that has to be a frequency issue. I really don't see any other explanation for it. Not sure whether they can do it in a firmware upgrade, but it seems they are so worried about ISO 2500 that they may have turned that up a notch too high. Hell, I am fully content with 640 and 1250 with Noise Ninja. BTW, the IR filter does reduce the streaking, because the streaking does have IR light in it, but it is not the total cure either. The blobs are gone with the filter -- I have not seen them since the filter has been used.
carstenw Posted November 17, 2006 Share #6

I believe that the streaking is the reason Leica is calling for hardware changes to the M8. The latest rumour is that the sensor will be replaced. Perhaps this batch was out of spec.
Jamie Roberts Posted November 17, 2006 Share #7

Here is Knoll's take on correcting the M8's problems just with profiles.

Hey--you know I read that too, and Thomas Knoll is undoubtedly right in these areas. But it's all a matter of degree, and it sounds like he's speaking theoretically here. And JR--you should get first credit too!

After I played around with many M8 tungsten shots, I saw that yes, if I increased the saturation (in the profile, not in C1) many times--like a factor of 20 or so--the blacks turned magenta again. In other words, they weren't 0 A, 0 B in LAB, like the (non-IR) CC patches were. But you know what? Under any normal circumstance imaginable to me, they're absolutely close enough for proofing, and for the beginnings of a fine print, too.

Could Dr Know also have misrepresented one eensy teeny tiny thing... that may have an effect on this? Please tell me if you think the following is crackers... someone else first floated this theory, but it sounds right to me looking at actual shots...

The "magenta" cast isn't a "magenta" cast in the camera space. It's IR, right? "Invisible" by definition, no? And the camera space in RAW **isn't profiled**, right? So that invisible IR cast gets represented in CIE LAB space, right? But LAB contains colours that are invisible, impossible to reproduce in RGB space. You can have a LAB color that is yellow, but 100% luminosity. What does "yellow" look like when it's as bright as white?!

So when, in the RAW converter, a profile maps all of the camera space (like LAB), including these impossible colors, into RGB, then they're MADE visible. So Knoll would be correct that mapping the magenta to black shifts all the other magentas / blacks IF the actual source color was magenta. But if it *isn't*--then mapping it to black poses NO SHIFT whatsoever to visible source colors...

Anyway, that's what I think is going on. If not, then the profiles I've got right now should sap blue, magenta and red--and they don't.

OK, I have my flame-proof suit on now! Let me know how dumb that is!

The streaking is different. Very different. If they're going to do the sensor thing, that's why, I think.
jrgeoffrion Posted November 17, 2006 Share #8

Jamie, I've read and re-read your post four times now, but I'm not sure I followed 100%... here are some thoughts:

1) Thomas Knoll wrote: "It is possible to have objects in the real world that the camera sees as exactly the same color, but humans see as different colors." I can't think of an example of this off the top of my head -- can anyone present one?

2) He also wrote: "There are also cases where the camera sees two objects as different colors, but humans see them as the same color. A profile could 'fix' this, but not without messing up other colors." By definition, the sensor ALWAYS sees differently than humans, and that's why we have an IR cut filter and a CFA. In all cases, the profile is what relates the sensor data to the human vision frame of reference.

3) The point that was not made clearly, IMNSHO, is that the sensor can see "beyond" human vision, and that once this information is in bits and bytes, it cannot easily be differentiated. Here is a way to think about it. The sensor has a CFA comprised of red, green, and blue filters (with twice as many green as red or blue). Thinking like a B&W photographer, the blue filter on top of the sensor most likely blocks a large proportion of the IR (since IR is essentially "beyond red"). This means that 25% of the sensor data doesn't [really] have the IR problem. The green filter in the CFA most probably also blocks a portion of the IR, further reducing and pinpointing the problem. This leaves us with the red filters of the CFA, which represent 25% of the "data" the sensor collects. So if there is IR, it will mostly affect the pixels under the red filters, and those under the green. This explains why most colors are not "entirely" affected. The problem is most apparent in what we see as black, because each photosite should read 0,0,0. However, the IR reaches the sensor, affecting a portion of the photosites (red and green mostly). That is why we have a strong color shift in the dark areas.
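[Editor's illustration] The per-channel reasoning above can be sketched numerically. This is only a toy model: the IR leakage fractions below are invented for illustration and are not measured values for the M8's CFA dyes.

```python
# Toy model of IR contamination through a Bayer CFA. The transmission
# numbers are hypothetical, chosen only to mirror the argument above
# (red dye leaks the most IR, blue the least).
IR_LEAK = {"R": 0.60, "G": 0.30, "B": 0.10}

def sensor_response(visible_rgb, ir_level):
    """Return raw channel values: visible signal plus leaked IR."""
    return {ch: visible_rgb[ch] + ir_level * IR_LEAK[ch]
            for ch in ("R", "G", "B")}

# A black synthetic fabric: no visible reflectance, strong IR reflectance.
# Each channel should read 0, but the leaked IR lifts them unevenly,
# which is exactly the color shift in dark areas described above.
black_fabric = sensor_response({"R": 0.0, "G": 0.0, "B": 0.0}, ir_level=0.5)
print(black_fabric)  # R lifted most, B least: a color cast instead of black
```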
Jamie Roberts Posted November 17, 2006 Share #9

{Snipped} The problem is most apparent in what we see as black, because each photosite should read 0,0,0. However, the IR reaches the sensor, affecting a portion of the photosites (red and green mostly). That is why we have a strong color shift in the dark areas.

JR--I agree with you mostly on the first thoughts... But here is where my point is. The camera doesn't see the red part of the sensor as RGB 0,0,0 to begin with, because it's not an RGB device at the sensor level. Instead, after Bayer processing, the RAW data would have color in a LAB-ish space that only ever (as you point out) maps to RGB (it's profile-dependent, in other words, which is why profiles are so important).

Let me give you a Photoshop example you (or others) can try for yourself... If you really want to understand this, read Dan Margulis's book on LAB, The Canyon Conundrum. It's great.

OK, a little experiment in Photoshop... Take a red square in RGB space... the reddest! That would be 255R, 0G, 0B, right? Pure red. Make a small document in your RGB profile of choice (it doesn't matter much which one, but I'm using aRGB for this example, so if you don't use aRGB the numbers won't quite be the same). Anyway, when you fill the color, you can see in the selector that RGB 255, 0, 0 is actually, in LAB, 63 Luminance (L); 90 on the A scale, which measures magenta-green (red contains magenta, so that's positive); and 78 on the B scale, which measures blue-yellow (yellow is also in red, since yellow + magenta = red, so that's a positive number too). The negative numbers in A are all green; in B they're all blue.

Now convert your document to LAB space by choosing Convert to Profile. Now we can do all kinds of weirdness, because LAB separates color value from luminosity. Since it's the underlying colour model for Photoshop and represents more than the visible spectrum of colours, we can make colors that can't be seen, or even accurately represented, in RGB space.

So here's what you do. Click on the color selector and change the L--luminance--value to something, well, impossible: 100. That means the luminance setting is now at 100 percent, which means it's as "bright" as white itself. So now you have the impossible color where the luminance is as bright as possible, yet it has a fully saturated color called "red" at 90A and 78B. Weird, no? An invisible tint--there IS no RGB equivalent to this. 255, 255, 255 is ALWAYS white; 0, 0, 0 is always black.

You can also make a red as dark as black in LAB. Just change the luminance to 0L and leave the A and B--color only--channels alone. Now you have something as dark as black, but also red as red!

So--now what? How does Photoshop convert these impossible, invisible colors to RGB when, in theory, it can't? It guesses. And makes a pretty good guess. When you convert the bright-as-white but fully saturated red (100L 90A 78B) back to aRGB, it becomes this: 255R 15G 2B--close, but not 255, 0, 0. More importantly, it's no longer 100L 90A 78B; you've lost that to the guess.

Photoshop is a pretty good guesser. But that darned C1 profile for the M8 was NOT. I'd say that the profile guessed that some bizarre LAB value was "magenta." In other words, if the IR cast in camera space (not RGB) was mostly these "imaginary" colors, then you can map them back to black / neutrals in RGB without affecting anything like real magenta or real black. It will affect the luminosity, though--it's light, after all--and that's exactly what we've seen. Fortunately, fixing neutral luminosity in RGB is easy.
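[Editor's illustration] Jamie's experiment can be reproduced outside Photoshop with the standard CIE conversion formulas. The sketch below uses naive per-channel math rather than Photoshop's rendering-intent-based gamut mapping, so the resulting numbers will differ from the 255R 15G 2B quoted above; it only demonstrates that L=100, a=90, b=78 lands outside the RGB cube.

```python
# Convert a LAB color to linear sRGB using the standard CIE formulas
# (D65 reference white). Out-of-gamut colors produce channel values
# outside [0, 1]; a real converter would then gamut-map or clip them.
def lab_to_linear_srgb(L, a, b):
    # CIE LAB -> XYZ
    Xn, Yn, Zn = 95.047, 100.0, 108.883
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def finv(t):
        return t**3 if t**3 > 0.008856 else (t - 16 / 116) / 7.787
    X, Y, Z = Xn * finv(fx), Yn * finv(fy), Zn * finv(fz)
    # XYZ -> linear sRGB (standard matrix; values may leave [0, 1])
    X, Y, Z = X / 100, Y / 100, Z / 100
    R = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    B = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return R, G, B

# The "impossible" bright-as-white saturated red from the post:
R, G, B = lab_to_linear_srgb(100, 90, 78)
print(R, G, B)       # the red channel comes out well above 1.0 ...
out_of_gamut = any(not 0 <= c <= 1 for c in (R, G, B))
print(out_of_gamut)  # ... so this color cannot be shown in RGB as-is
```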
jrgeoffrion Posted November 18, 2006 Share #10

In other words, if the IR cast in camera space (not RGB) was mostly these "imaginary" colors, then you can map them back to black / neutrals in RGB without affecting anything like real magenta or real black. It will affect the luminosity, though--it's light, after all--and that's exactly what we've seen. Fortunately, fixing neutral luminosity in RGB is easy.

OK, I think I see where you are going. Tell me if I'm paraphrasing correctly: since the camera is natively "blind" to color space, it could map the IR and essentially isolate it and remove it from the equation?

If I got it right, the problem with what you are suggesting is that the camera/sensor has no way of differentiating between a luminosity value created by visible light and a luminosity value created by IR -- hence it can't isolate IR "mathematically".
Jamie Roberts Posted November 18, 2006 Share #11

OK, I think I see where you are going. Tell me if I'm paraphrasing correctly: since the camera is natively "blind" to color space, it could map the IR and essentially isolate it and remove it from the equation? If I got it right, the problem with what you are suggesting is that the camera/sensor has no way of differentiating between a luminosity value created by visible light and a luminosity value created by IR -- hence it can't isolate IR "mathematically".

JR, I think you've essentially got it right, and the "problem" you mention is something I said too -- and it really could be a plus... or at least it's less of a minus. But it's why this is still a workaround, and why I edited the profile in LAB space...

Remember, in LAB spaces, problems of luminosity are NOT problems of colour. See where this goes? You can map out the offending color and keep the resulting monochrome values. In other words, you don't change the luminosity response--it's still "IR"-esque. But since the color isn't magenta anymore, well, it looks more like a film curve than anything else. IOW, this makes your camera still sensitive to IR, but now that's seen as "lighter blacks". Well, geez! I've fought to keep shadows my whole life!

And this is why Thomas Knoll is correct (of course), but the magnitude of the problem is potentially tiny. A small bug. But this is also why I want the solution from Leica, and why I want someone like Ethan Hansen or any number of color gurus to really do this!
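[Editor's illustration] The "keep the luminance, discard the colour" idea above can be sketched as a trivial per-pixel operation in LAB. The magenta-detection thresholds here are invented for illustration; the actual fix Jamie describes lives in the profile, not in per-image code.

```python
# Map magenta-cast LAB pixels to neutral grey of the same lightness:
# zero the a/b (colour) channels, leave L (luminance) untouched.
# Thresholds are hypothetical, purely to make the idea concrete.
def neutralize_magenta(lab_pixels, a_min=20, b_max=0):
    out = []
    for (L, a, b) in lab_pixels:
        if a > a_min and b <= b_max:   # crude "IR magenta" detector
            out.append((L, 0.0, 0.0))  # keep L, discard the cast
        else:
            out.append((L, a, b))      # everything else passes through
    return out

pixels = [(12, 35, -10),  # dark magenta-cast "black": becomes a lighter black
          (55, -30, 40)]  # ordinary green: untouched
fixed = neutralize_magenta(pixels)
print(fixed)
```

Note how the first pixel keeps its L of 12 -- the IR energy survives as "lighter blacks", exactly the film-curve-like behaviour described above.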
J_Brittenson Posted November 18, 2006 Share #12

"The 'magenta' cast isn't a 'magenta' cast in the camera space. It's IR, right? 'Invisible' by definition, no?"

IR is not invisible by definition. Whether IR is visible depends on the observer. Clearly to the M8 it's not invisible: it is seen as red and blue. So the answer is no, it's not invisible, either by definition or otherwise.

"Then the camera space in RAW **isn't profiled**, right? So that invisible IR cast gets represented in CIE LAB space, right?"

You need to understand the difference between color and energy. The energy level of a photon is its placement on the spectrum. Color is a quality imparted by an observer; it doesn't exist on its own. If you stick your finger in boiling water it feels "hot", but the water simply has a temperature. It has no "hot" or "cold" property; whether it's hot or cold is a sensation your brain imparts to it. Colors, similarly, are sensations produced in our brain. Color spaces deal with color, not spectrum. IR light will show up in the magenta-red part of the M8's *color* space, because that's how it's seen. To the M8 it *is* magenta-red. It's not an "IR cast", it's a magenta-red discoloration.

"But LAB contains invisible, impossible to reproduce in RGB space colours. You can have a LAB color that is yellow, but 100% luminosity. What does 'yellow' look like when it's as bright as white?!"

A lot of these ramblings are nonsensical. What does an IR-induced magenta discoloration look like when it's as bright as white? Why do you think this would matter?

"So now, in the RAW converter, and through a profile, you map all the camera space (like LAB), including these impossible colors, into RGB, then they're MADE visible."

LAB doesn't contain 'invisible' colors. It has IMAGINARY colors. These, however, are not 'invisible'. They won't fit in RGB, so they are clipped -- how this clipping occurs depends on rendering intent. They aren't "impossible" either, or even all that abstract. They are called imaginary because LAB is a larger space than the human eye's gamut, meaning it can model levels of saturation beyond what our eyes can distinguish. A red laser, for instance, can be used to create a 100% luminance red. It's real, it's visible, and it'll clip and produce a temporary cyan spot if you look at it. Beyond a certain luminance you'll be unable to differentiate, and it'll all look the same -- whereas LAB can distinguish the color red a little further. This difference is called imaginary. It has nothing to do with visibility.

"So Knoll would be correct about mapping the magenta to black shifting all the other magentas / blacks IF the actual source color was magenta."

You need to understand that to the M8, IR ***is*** magenta. There is no such thing as "other magentas" -- either it's seen as magenta or it isn't.
markedavison Posted November 18, 2006 Share #13

The raw R, G, B coordinates that the camera produces are not imaginary or invisible things. You can look at them by using a raw developer which will read the raw file and put it into a file format that Photoshop will open without trying to "develop" the file with Adobe Camera Raw. If you want to look at some raw linear device R, G, B coordinates, you can go to Decoding raw digital photos in Linux and find a link for downloading a compiled copy of dcraw that is suitable for your computer. Then run dcraw with options indicating that the program should de-mosaic the device values but not transform them to R, G, B coordinates in one of the standard working spaces (sRGB, Adobe RGB, etc.). For example:

dcraw -o 0 -4 -T myfile.DNG

will give you a 16-bit TIFF file filled with raw device R, G, B values. You can open this in Photoshop and view it as a "false" color image, and you can inspect the R, G and B values.

You will never see all possible device R, G, B triples as output from the camera. The possible values are restricted to lie in a device gamut. The boundary of the gamut is simply the responses of the camera to pure monochromatic light of various wavelengths, ranging over all the wavelengths to which the camera responds. This boundary is called the spectrum locus. The spectrum locus can be easily determined if you can find the camera spectral sensitivity curves. (Example: the sensitivity curves for the Kodak chip used in the M8, but with the original IR barrier, are printed on p. 14 of http://www.kodak.com/ezpres/business/ccd/global/plugins/acrobat/en/datasheet/fullframe/KAF-10500LongSpec.pdf (equivalent tinyurl: http://tinyurl.com/y97d37).)

To be specific, if the response curves are R(lambda), G(lambda), B(lambda), then to make a graph of the device gamut in device r, g chromaticity space, simply graph the points (r(lambda), g(lambda)) as lambda varies over the wavelengths to which the camera responds, where

r(lambda) = R(lambda) / Total(lambda)
g(lambda) = G(lambda) / Total(lambda)

and

Total(lambda) = R(lambda) + G(lambda) + B(lambda)

Intuitively, the chromaticity coordinates r and g represent the fractions of the total R + G + B. If r were 1, then R would be 100% of the total. If g were 1, then G would be 100% of the total.

Now, for some response curves the spectrum locus has the following interesting property: every lambda is mapped to a different point in chromaticity space. This means that the camera in question can unambiguously determine the wavelength of monochromatic light. (Intuitively, you can imagine the rainbow, and no two different rainbow frequencies have the same device R, G and B.) If this were the case for the M8, then by looking at device coordinates, you could unambiguously determine if you were looking at monochromatic infrared light. (Points on the spectrum locus corresponding to infrared wavelengths cannot be coming from visible light.) This is the kind of light that would be reflected off of an object which is black in visible light, and reflecting a single wavelength in the infrared. Thus it is plausible that a profile could unambiguously cure some magenta fringes coming from black objects by mapping them back to black. The ambiguity problems that Thomas Knoll notes would not enter into this correction.

A good example of a chromaticity diagram for the CIE X, Y, Z coordinates is at: What is CD. Notice how the labels on the boundary only range over the visible wavelengths. A device gamut diagram for a device sensitive to infrared would have boundary labels that extend into the infrared. Presumably a portion of the spectrum locus would jut off out of the visible range.

However, infrared contamination affects the device R, G, B coordinates of lots of other colors in the original scene, and no profile can correct all of these. (For example, with the M8, an object which is green in the visible wavelengths but is highly reflective in IR will shift towards gray, since magenta is the complementary color to green.) The mathematical problem is that there are lots of points in the interior of the gamut that could correspond to objects with no IR response, or could be from objects with some visible and some IR response.

I hope this discussion helps to reconcile Thomas Knoll's statement with your intuitions and the demonstrable capacity of the modified Capture One P30 tungsten easy black profile to correct magenta casts in black objects. If Kodak or Leica ever publish spectral response curves for the M8, I would be happy to make a device gamut graph and post it. A device gamut graph, with the points on the spectrum locus labelled by wavelength, is a very useful tool for understanding how a device "sees."

[A side note: You should not think that any color outside the gamut of Adobe R, G, B in CIE X, Y, Z space is "impossible". Just buy a prism and look at the rainbow colors it makes on a sunny day. Those are the colors on the boundary of the CIE gamut. They are simply more saturated than any monitor can make.]

Mark Davison
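[Editor's illustration] Mark's chromaticity construction is simple to compute. Since the M8's spectral response curves are unpublished (as he notes), the sample responses below are hypothetical numbers standing in for R(lambda), G(lambda), B(lambda).

```python
# Device chromaticity coordinates as defined above:
# r = R / (R+G+B), g = G / (R+G+B).
# The response values are invented stand-ins, not measured M8 data.
responses = {
    450: (0.05, 0.20, 0.90),  # blue region
    550: (0.10, 0.85, 0.10),  # green region
    650: (0.90, 0.15, 0.05),  # red region
    800: (0.60, 0.25, 0.15),  # hypothetical near-IR leakage
}

def chromaticity(R, G, B):
    """Fractions of the total response: the (r, g) point on the locus."""
    total = R + G + B
    return R / total, G / total

# If every wavelength maps to a distinct (r, g) point, monochromatic IR
# occupies its own region of the locus and can be identified unambiguously,
# which is the condition Mark's argument relies on.
locus = {lam: chromaticity(*rgb) for lam, rgb in responses.items()}
for lam, (r, g) in sorted(locus.items()):
    print(lam, round(r, 3), round(g, 3))
```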
Jamie Roberts Posted November 18, 2006 Share #14

{Snipped} [A side note: You should not think that any color outside the gamut of Adobe R, G, B in CIE X, Y, Z space is "impossible". Just buy a prism and look at the rainbow colors it makes on a sunny day. Those are the colors on the boundary of the CIE gamut. They are simply more saturated than any monitor can make.] Mark Davison

Mark--thanks for the explanation--it makes sense, as much as I understand it--that a good profile can knock the worst of the issue out (a single wavelength of IR reflection mapped to black) without totally curing the interference or polluting the nearby magenta shades. I used the wrong term above, "impossible"--I meant 1) impossible within RGB and 2) impossible to print or display (hence usually inconsequential from a photographic perspective)--NOT impossible to see. As you point out, I see a lot my monitor can't display.
Jamie Roberts Posted November 18, 2006 Share #15

"IR is not invisible by definition. Whether IR is visible depends on the observer. Clearly to the M8 it's not invisible but is seen as red and blue. So the answer is no, it's not invisible, either by definition or otherwise. You need to understand the difference between color and energy. {snipped} You need to understand that to the M8, IR ***is*** magenta. There is no such thing as 'other magentas' -- either it's seen as magenta or it isn't."

J--I just deleted a really long post that just wasn't helpful to the conversation here (well, except it had a really good ex-spouse joke in it. Ah well). I think I understand a wee bit more than you think I do, even if I don't express myself as rigorously as you, or some other scientist, might prefer. So you might give me the benefit of the doubt in a photographic forum and rein in a bit of the condescending and dismissive tone next time.

But I'll just say this: to my photographer's eyes a lot of things, including IR, are "invisible" (by definition and otherwise, actually) without mechanical assistance. I think you also *do* understand that in RGB (not aRGB) a lot of colors are quite "impossible" to represent without, as you put it, clipping. And that's the crux of my whole question, so thanks for clarifying it for me: is the magenta we *see* in the old M8 C1 profiles an artifact of color space clipping, or something else entirely? If it's clipping, then it's possible a profile can really, simply fix it.

But Mark's answer was more elegant and I'll believe it: that there is a predictable wavelength of IR light here that can be "profiled out" even if other IR effects are still evident. These will not affect humanly-visible magentas because it's evidently confined to certain neutral ranges. (Uwe Steinmuller is actually putting the question to Thomas Knoll, so if "Dr Know" responds, we'll see what he says. My money is on Mark's answer. It also jibes with my profiling experience...)

I'll stand by what I wrote on LAB, luminance and color, thanks--when I use the term visible I generally mean "visible to human beings".
eronald Posted November 18, 2006 Share #16

JR--I agree with you mostly on the first thoughts... It will affect the luminosity, though--it's light, after all--and that's exactly what we've seen. Fortunately, fixing neutral luminosity in RGB is easy.

If you start mapping the strange colour into black, you are going to have luminance-mapping issues. It's not necessarily impossible, but from the graphs of that sensor's sensitivity, I'd say that gray and IR are very hard to distinguish. I'd expect one would lose shadow detail in the grays in any mapping that removes the color casts in the black items. Furthermore, any vividly colored objects that have an IR component are going to have strange colors. Software is not a solution for people who need accurate colors. It may be a workaround for street photos; it won't work for fashion.

Edmund
eronald Posted November 18, 2006 Share #17

And by the way, I think that Knoll does know one or two things about color imaging - I'd take his opinions over those of the guys at Leica any day of the week.

Edmund
mwilliamsphotography Posted November 18, 2006 Share #18

After being inspired by other members' efforts in post-processing work, I posted my little experiment in color correction last week, and it seemed to me it was possible to write an action to mitigate the magenta problem to some degree... Jamie Roberts's new M8 C-1 profiles have taken that much further (way to go Jamie ;-) Mind you, this has all occurred in one week of goofing around with existing data by amateurs. Proprietary software now seems a possible solution. Perhaps not a total solution, but an element of a solution that may include other tweaks in firmware and a slightly more IR-absorbing sensor filter.

When I say "proprietary" I mean software for the M8 alone. This is not unprecedented, but actually fairly common. My Flexcolor software is proprietary to Imacon backs. My Leaf software is also proprietary. (BTW, both these proprietary file formats allow transfer into DNG files for further Adobe PS inclusion.) In the case of both the Flexcolor and Leaf software, processing is superior in result to the use of other more all-inclusive RAW programs such as ACR. Because they are "proprietary", various lighting conditions are profiled specifically to the characteristics of the camera's sensor and idiosyncratic capture tendencies... this is also apparent with C-1, with its very specific profiles for Phase One digital backs.

Originally, the Contax ND came with its own proprietary software that calibrated itself to each specific camera. It was an excellent idea that was so poorly executed it ruined the camera's chances. The band-aid solution was to include it as a supported camera in ACR. However, the idea would have been excellent had it been done better.
mwilliamsphotography Posted November 18, 2006 Share #19

I agree that Thomas Knoll knows more than all of us combined. However, that doesn't preclude someone actually doing something that defies conventional logic, and actually works. While I do not think Jamie's workaround C-1 color profiles are THE answer, they are AN answer that seems to work well enough to mitigate the magenta issues. How many different circumstances it will work in remains to be seen.

I went back to my files from last Sunday's family portrait and applied the new profiles, and in every case they worked better than anything I had tried... which included a lot more fussing than just opening the file and white balancing. Here's one of those shots I had shown before. Left: as it opened before Jamie's profiles. Right: the same file opened in C-1 using Jamie's profile. It actually worked even better on other shots. Not saying it is the definitive answer, and it may not work all the time, but it promises to at least help lower the instances where IR is a problem.

[Image attachment: before/after comparison, visible to registered members only]
rosuna Posted November 18, 2006 Share #20

How do these profiles affect "normal" pictures? Do the profiles affect the color balance when there are greens or blues in the picture? Are they acceptably "neutral" in normal cases?

Capture One LE does not allow applying profiles per picture. Even black and white conversions are difficult. It is useless to me, and Capture One Pro is too expensive for a non-professional.
Archived
This topic is now archived and is closed to further replies.