
Xdepth


Michel Boda


Raw + compression = no longer raw and loss of information because the only way to compress an image is to throw data away.

 

Seems to me that it's offering an unprocessed JPEG without the convenience of in-camera processing. Processing the 'compressed raw' file will offer less latitude than a true raw file and is likely to produce artifacts through quantising errors, in the same way that JPEGs do.

 

I'm not sure whether this would appeal to either 'serious' photographers, who normally want to retain the maximum amount of information from each frame, or to p&s shooters who want the camera to do everything for them. It's not clear what market this product is aimed at.

 

Pete.


It sounds like they use a JPG file to store both JPG and RAW data. I am not familiar with the JPG format, but if it is anything like TIFF/DNG, then there are custom tags that you can use to store whatever you want. They store this RAW in their own format, designed for high compression. They do not say lossless, but they say "visually lossless", which is of course a judgement call. 1:4 is not particularly aggressive, as long as they mean on average, so maybe they have just chosen some intelligent compression settings, similar to Leica with their 8-bit RAW for the M8, where you can't see the artifacts, except possibly in extreme cases.

 

Anyway, this doesn't sound like a technology that makes much sense for computers with large hard drives, so it is probably a sales pitch aimed at camera manufacturers?


Raw + compression = no longer raw and loss of information because the only way to compress an image is to throw data away.

 

 

Hmm... just try to 'zip' a few M8 DNGs. You may be surprised! I got one compressed to 4 MB, and it unzipped back to the original 10 MB DNG just fine!
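
For anyone who wants to verify that themselves, here is a minimal Python sketch that does the same round trip with zlib (which implements the same DEFLATE compression ZIP uses); the DNG file name is just a placeholder:

    # Sketch: compress a DNG with zlib and check the round trip is bit-identical.
    import zlib

    with open("L1000001.DNG", "rb") as f:      # placeholder file name
        original = f.read()

    compressed = zlib.compress(original, 9)    # maximum compression effort
    restored = zlib.decompress(compressed)

    print(len(original), "->", len(compressed), "bytes")
    print("bit-identical after round trip:", restored == original)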


the only way to compress an image is to throw data away.

 

Not true. I will give a trivial example, with something called run-length encoding (RLE): you take a photo of something on a white table. The white table might be stored as something like:

 

255 255 255 255 255 ... (say, 100 times, in 8-bit encoding)

 

Now, to compress this, you notice that the same value appears 100 times, so you write:

 

100 255.

 

Voila, lossless compression. There are less obvious techniques too, of course. The key here is lossless versus lossy. "Visually lossless" I take to mean "lossy" :)
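
For the curious, that idea is only a few lines of code. A minimal Python sketch of plain run-length encoding (nothing to do with whatever Xdepth actually does):

    # Minimal run-length encoding: lossless because decode(encode(x)) == x.
    def rle_encode(values):
        runs = []
        for v in values:
            if runs and runs[-1][1] == v:
                runs[-1][0] += 1             # extend the current run
            else:
                runs.append([1, v])          # start a new (count, value) run
        return runs

    def rle_decode(runs):
        out = []
        for count, v in runs:
            out.extend([v] * count)
        return out

    row = [255] * 100                         # 100 white pixels
    print(rle_encode(row))                    # [[100, 255]]
    print(rle_decode(rle_encode(row)) == row) # True: nothing was lost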


Not true. I will give a trivial example, with something called run-length encoding (RLE) [...] Voila, lossless compression. There are less obvious techniques too, of course. The key here is lossless versus lossy. "Visually lossless" I take to mean "lossy" :)

With the greatest respect, Carsten, I must disagree.

 

Using RLE you have still lost data; although RLE describes the data in a different way, the bare facts are that before using RLE you had 100 bytes of data, but afterwards you have 2 bytes, so the net loss is 98 bytes of data.

 

It's not clear to me whether using, say, RLE to compress the data constitutes a temporary or permanent loss of the data; in other words, is there a process or mechanism to retrieve the data? If not, then the data cannot be recovered and the loss is permanent.

 

It could be reasonably argued that the 98 bytes of data are redundant because they all have the same numerical value and therefore add little benefit and won't be missed. True. But what we don't know is whether Xdepth's compression algorithm makes a value judgement that, for example, 254 is close enough to 255 so we'll count it as though it is 255. If this occurred then the data that's thrown away is no longer redundant and the compressed file will contain compression artifacts.
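
To make that concrete, here is what such a 'close enough' step would look like, reusing the rle_encode sketch from the post above (purely illustrative; nobody outside Xdepth knows whether they do anything of the sort):

    # Lossy variant: snap near-white values to 255 before run-length encoding.
    row = [255, 254, 255, 253, 255, 255]
    snapped = [255 if v >= 253 else v for v in row]   # the "value judgement"
    print(rle_encode(snapped))     # [[6, 255]] -- compresses beautifully
    print(snapped == row)          # False: 254 and 253 are gone for good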

 

I have no wish to enter into a complicated argument about information theory and entropy, but I felt it was important to support my earlier statement, and I entirely agree that 'visually lossless' means 'lossy' to me too. :)

 

Pete.


The RLE algorithm is lossless. We only speak of a loss if the original data is no longer recoverable; compression merely describes the reduced space usage. There are of course other, more effective strategies that are also lossless; RLE is just the simplest to describe.

 

On a more theoretical note, since basic RLE just turns all data into (count, value) pairs, "compressing" data this way when there are never any successive identical values actually results in an increased amount of space. And yes, you are right: if there is a tolerance of even a single value, the algorithm is no longer lossless.
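
In numbers, reusing the rle_encode/rle_decode sketch from the earlier post:

    # Worst case for RLE: no two neighbouring values are equal.
    noisy = [10, 200, 37, 142, 99, 3, 250, 61]        # made-up "noisy" samples
    encoded = rle_encode(noisy)                       # from the sketch above
    print(len(noisy), "values ->", 2 * len(encoded), "numbers stored")  # 8 -> 16
    print(rle_decode(encoded) == noisy)               # True: bigger, but still lossless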


Only random files cannot be compressed. In photography, the equivalent would be taking pictures of random photon firings with the lens cap on. Any file displaying a pattern, regularity or repetition can be treated with an algorithm that looks for such regularities and attempts to re-represent the file as something shorter (smaller). By applying the algorithm in reverse, the original file can be obtained. However, there is no perfect algorithm for all kinds of files that treats text, images and sound equally efficiently. Even a task-specific algorithm can encounter a file where the attempted compression comes out bigger (longer) than the original. A good compression scheme will recognize such a file and disable compression, attaching metadata that says "not compressed", e.g. 5 bytes for each 64 kB.

Lossless compression is possible, and it works most of the time in real-world applications.

With images, the actual lossless compression ratio is usually kept below 2:1.

E.g. a D-Lux 3 RAW 10 MP file is ca. 20 MB, while an M8 DNG 10 MP file is ca. 10 MB.
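
A quick experiment along those lines, using Python's zlib as a stand-in for "an algorithm looking for regularities" (a sketch, not a statement about any particular camera format):

    # Random data barely compresses; patterned data collapses to almost nothing.
    import os, zlib

    random_data = os.urandom(64 * 1024)       # like noise shot with the lens cap on
    patterned = b"\xff" * (64 * 1024)         # like a blown-out white wall

    print(len(zlib.compress(random_data)))    # roughly 64 KB, sometimes a bit more
    print(len(zlib.compress(patterned)))      # a few dozen bytes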


With the greatest respect, Carsten, I must disagree.

 

Using RLE you have still lost data; although RLE describes the data in a different way, the bare facts are that before using RLE you had 100 bytes of data, but afterwards you have 2 bytes, so the net loss is 98 bytes of data.

Well, of course a compression algorithm reduces the amount of data. If it didn’t, there wouldn’t be any compression, right? The lossy-lossless dichotomy applies to information, not data; lossless compression is about reducing data but keeping the information. RLE or LZW are lossless compression algorithms, implying that no information is lost in compression. Which means that when you expand the compressed files, the resulting data will be 100 percent identical to the original data.

 

JPEG, on the other hand, is a lossy compression algorithm. Compressed image files contain less information than the originals, and while expansion will recreate the original amount of data, it will not be the same data; the loss of information is thus permanent.

 

There are some theoretical and practical limits to lossless compression. There can be no lossless compression algorithm that will compress any data, even by just one byte; by necessity there will be some cases where no compression is possible at all. As far as image data is concerned, typical compression ratios will be up to 2:1 – sometimes more, but that depends on the subject matter, exposure etc. – when large parts of an image are overexposed, it will compress rather well, whereas noisy images taken at high ISO settings will compress poorly.
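
The "no algorithm can compress everything" point is pure counting: there are more files of a given length than there are shorter files to map them to, so some input must come out no smaller. A small Python sketch of that pigeonhole argument:

    # Pigeonhole argument: 2**n distinct n-bit files, but only 2**n - 1 files
    # shorter than n bits (all lengths 0 .. n-1 combined). A lossless compressor
    # must map distinct inputs to distinct outputs, so at least one n-bit file
    # cannot be shortened.
    n = 16
    inputs_of_length_n = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(n))   # equals 2**n - 1
    print(inputs_of_length_n, ">", shorter_outputs)   # 65536 > 65535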


