Editing photo metadata
March 4, 2005 10:00 PM
I've decided to start editing metadata in my photographs. I want to add keywords via Photoshop CS. My question is, if I add keywords to a JPG, when I save it, will it recompress the image and thus degrade it?
The answer is yes: resaving a JPEG as a JPEG will recompress it and result in a loss of accuracy. This may or may not be visible, as Jairus points out.
posted by kindall at 10:43 PM on March 4, 2005
Use a third-party program to edit the EXIF data instead of Photoshop; this will preserve the original JPG without recompressing it. Here's an example.
posted by Civil_Disobedient at 11:57 PM on March 4, 2005
If you're using WinXP, right-click a jpg file and under the Summary/Advanced tab you can edit keywords without having to resave or recompress it. Not 100% positive that this is EXIF data, but it looks like it to me.
posted by bruceyeah at 12:48 AM on March 5, 2005
if a jpeg is uncompressed and then compressed again, at the same compression level, why is further information lost? my understanding of jpeg is that it divides the image into regions which are separately decomposed into fourier components. in that case, compressing from the original image will lose information, but uncompressing and then compressing again should not (apart from numerical errors due to imperfect precision etc). is that not the case? or does it get messed up by some kind of "auto-sensing" compression control that progressively changes the level of compression, throwing away data each iteration?
it seems to me that you could test this by taking an image, compressing it as a jpeg (call that im1), uncompressing and recompressing to give im2, and then looking at the difference, im2-im1.
posted by andrew cooke at 7:43 AM on March 5, 2005
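A minimal sketch of the test andrew cooke proposes, using Pillow and NumPy (tool choices of mine, not anything from the thread; the filename and quality setting are placeholders):

```python
from PIL import Image
import numpy as np

src = Image.open("original.png").convert("RGB")

# first generation: compress the source to JPEG (im1)
src.save("im1.jpg", quality=90)
im1 = np.asarray(Image.open("im1.jpg"), dtype=np.int16)

# second generation: decompress im1 and recompress at the same quality (im2)
Image.open("im1.jpg").save("im2.jpg", quality=90)
im2 = np.asarray(Image.open("im2.jpg"), dtype=np.int16)

# if recompression at the same level were perfectly lossless, im2 - im1
# would be all zeros; int16 keeps the subtraction from wrapping like
# unsigned bytes would
diff = np.abs(im2 - im1)
print("max pixel difference: ", diff.max())
print("mean pixel difference:", diff.mean())
```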
andrew cooke, I thought that as well. Check out this comment of mine in this AskMe thread. Basically, given a good JPEG implementation with a minimum of rounding errors, re-saving at the exact same quality level, even after 1000 iterations, results in little to no perceptible quality loss. I should try it with Photoshop and see how good or bad it is.
posted by zsazsa at 8:30 AM on March 5, 2005
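And a rough sketch of the iteration experiment zsazsa describes, under the same assumptions (Pillow, placeholder filenames, quality 90); this is a reconstruction, not the code zsazsa actually ran:

```python
from PIL import Image
import numpy as np

# start from a hypothetical lossless source
Image.open("original.png").convert("RGB").save("gen.jpg", quality=90)
first = np.asarray(Image.open("gen.jpg"), dtype=np.int16)

for _ in range(1000):
    img = Image.open("gen.jpg")
    img.load()                       # force a full decode before overwriting
    img.save("gen.jpg", quality=90)  # re-save at the exact same quality

last = np.asarray(Image.open("gen.jpg"), dtype=np.int16)
print("max drift after 1000 generations:", np.abs(last - first).max())
```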
Well, the decompression process introduces numerical errors of its own, since it essentially runs the compression algorithm in reverse.
But you don't just lose information in a jpeg from "numerical errors due to imperfect precision" (the discrete cosine transform part of the jpeg encoding is the main culprit wrt losing information due to imperfect precision, and the reason why jpegs can never be fully lossless). The primary lossy step of jpeg encoding is the quantization of the brightness values of the picture, and not the DCT.
The JPEG standard takes advantage of the fact that the human eye is good at seeing differences in brightness over a large area, but not very good at telling exactly how different the brightness values are. The quantization step compresses the brightness range by dividing each brightness component by a constant and then rounding to the nearest integer.
If you saved the JPEG at maximum quality (presumably this would divide each brightness component by 1), then loaded it and saved it again at maximum quality, then you're not losing very much information at each step. But if you load it and save it at even 90 percent quality, then you're losing 10 percent of your brightness values with each re-save.
(Some of the above is roughly paraphrased from the Wikipedia article on the JPEG standard.)
posted by the_W at 8:41 AM on March 5, 2005
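A toy worked example of that quantization step: divide by a constant, round, and multiply back on decode. The numbers here are invented for illustration, not taken from a real JPEG quantization table, and they are applied to hypothetical DCT coefficients, which is where JPEG's quantization actually operates, as zsazsa notes below:

```python
import numpy as np

coeffs = np.array([57.0, -30.0, 3.0, 1.0])  # hypothetical DCT coefficients
q = 16.0                                    # hypothetical quantization step

quantized = np.round(coeffs / q)  # the rounding here is where data is lost
restored = quantized * q          # what a decoder reconstructs

print(quantized)  # [ 4. -2.  0.  0.]
print(restored)   # [64. -32.  0.  0.]  -- 57 came back as 64; 3 and 1 vanished
```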
A much kinder, gentler EXIF editor is EXIFER. I'm inclined to say it's the bestest one out there... for purely editing EXIF data and affecting _nothing_ else, anyhow.
posted by fake at 9:04 AM on March 5, 2005
andrew cooke: It sounds like the original jpeg was created by firmware in a camera. If so, it seems unlikely that Photoshop could offer the exact same compression settings (and even if it could, matching them would take some work and be difficult to confirm).
posted by -harlequin- at 9:15 AM on March 5, 2005
Best answer: the_W, you aren't losing 10 percent of the brightness values with each save. Given a reasonable JPEG quantization algorithm, the same DCT coefficients will be quantized out for each recompression. After the first compression, quantized coefficients are thrown out; any recompression at the exact same compression level will quantize out those already non-existent coefficients. Therefore: minimal data loss, given an error-free forward and reverse DCT. (My negligible credentials: I've written my own DCT-based image compressor.)
-harlequin-, you raise a very good point. Without this guarantee, to answer the original question, an EXIF editor is the way to go here.
posted by zsazsa at 2:39 PM on March 5, 2005
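A tiny sketch of zsazsa's idempotence argument, reusing the invented numbers from the toy example above: once coefficients have been rounded to multiples of the quantization step, rounding them again with the same step changes nothing.

```python
import numpy as np

q = 16.0
coeffs = np.array([57.0, -30.0, 3.0, 1.0])  # hypothetical DCT coefficients

first = np.round(coeffs / q) * q   # first save: 57 -> 64, -30 -> -32, 3 and 1 -> 0
second = np.round(first / q) * q   # decode and re-save at the same quality

print(np.array_equal(first, second))  # True: the second pass loses nothing more
```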
even if it's not explicitly encoded in the format meta-data, the amount of compression used is (presumably) implicit in the data and could be inferred when the image was uncompressed. so you might find that photoshop (or whatever) sets its default values for saving to match the source. that's just a wild guess, but would be a reasonable implementation (knowing no more than i do now, it's how i would implement it, anyway).
posted by andrew cooke at 3:26 PM on March 5, 2005
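For what it's worth, the tables aren't just implicit: JPEG stores its quantization tables explicitly in the file's DQT segments, so a decoder can read back exactly what the encoder used. A sketch assuming Pillow, which exposes them on the opened image as a quantization dict (the filename is a placeholder):

```python
from PIL import Image

img = Image.open("photo.jpg")
# one table per id, typically 0 for luminance and 1 for chrominance
for table_id, table in img.quantization.items():
    print(f"quantization table {table_id}, first values: {list(table)[:8]}")
```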
Best answer: Metadata for pictures, whether EXIF, IPTC, or XMP, gets written into a header of the picture file that can be edited without having to recompress the picture. It's a matter of figuring out which type of metadata your picture was encoded with and then finding a program that will change that data without forcing a recompress. Photoshop CS should be smart enough to do that.
posted by airguitar at 5:48 AM on March 6, 2005
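A rough sketch of the layout airguitar describes, in Python with a placeholder filename: the metadata segments (EXIF typically in APP1, IPTC in APP13) sit in the header ahead of the compressed scan data, which is why a careful tool can rewrite them and copy the image bytes through untouched. This parser is simplified and ignores some real-world quirks:

```python
import struct

def list_segments(path):
    # walk the marker segments that precede the compressed scan data;
    # EXIF usually lives in APP1 (0xFFE1), IPTC in APP13 (0xFFED)
    with open(path, "rb") as f:
        data = f.read()
    assert data[:2] == b"\xff\xd8", "not a JPEG (no SOI marker)"
    offset = 2
    while offset < len(data):
        marker = data[offset + 1]
        if marker == 0xDA:  # SOS marker: compressed image data starts here
            print("SOS -- everything above this point is header")
            break
        # segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        print(f"segment 0xFF{marker:02X}, {length} bytes")
        offset += 2 + length

list_segments("photo.jpg")
```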
This thread is closed to new comments.