September 1, 2008 5:47 PM

Can one find the coordinates (x,y,height,width) of a thumbnail taken out of the original image, programmatically?

I run a highly dynamic photo sharing website that takes thumbnails of the photos that users upload. It automatically uses the center of the photo as the thumbnail, 75px by 75px, but users can change it with a handy javascript applet. They can make the thumbnailified area bigger, but it forces them to keep the 1:1 ratio, and it's downsized if it is bigger than 75x75.

So herein lies the rub: I'm pushing a redesign, and the thumbnails are now 100x100. I was dumb and did not log the thumbnail dimensions. I'd really, really, really like those numbers, not only to make the thumbnails bigger and less blurry, but also because I know we'll eventually be printing these out, and I'm going to want a sharper image to send to the printer.

The big question: Can I get, with some software, the x, y, h, and w of the thumbnail with some sort of fuzzy image comparison? Can ImageMagick do this? My language of choice is Python, so anything using that will be cake.
posted by Mach5 to Computers & Internet (10 answers total)
You can do this, but it is computationally very expensive.

At the least, you've got to compare every 75x75 px region of a photo with the saved thumbnail. However, lots of people are likely to have chosen a larger area, so you really need to compare EVERY region, scaled to 75x75, with the thumbnail. Very doable, but also very expensive (unless you have a very small number of images).
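A minimal sketch of that brute-force search, assuming PIL (Pillow) and numpy are available; the function name, paths, and coarse step sizes are my own inventions, and as written it is far too slow to run over thousands of photos as-is:

```python
from PIL import Image
import numpy as np

def find_thumb_region(photo_path, thumb_path, step=8, size_step=8):
    """Try every square region (coarsely), scaled to 75x75, against the thumbnail."""
    photo = Image.open(photo_path).convert("RGB")
    thumb = np.asarray(Image.open(thumb_path).convert("RGB"), dtype=np.int64)
    w, h = photo.size
    best = (None, float("inf"))  # ((x, y, side), mean absolute pixel error)
    for side in range(75, min(w, h) + 1, size_step):
        for y in range(0, h - side + 1, step):
            for x in range(0, w - side + 1, step):
                crop = photo.crop((x, y, x + side, y + side)).resize((75, 75))
                err = np.abs(np.asarray(crop, dtype=np.int64) - thumb).mean()
                if err < best[1]:
                    best = ((x, y, side), err)
    return best
```

Checking the exact 75x75 size first (no resize needed) and only then the scaled sizes would prune a lot of that work.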

Better just to stick with the small thumbnails for old content and go 100x100 for new. Also, store the thumbnail coordinates in your image metadata so that you don't have this problem when you move up to 150x150...
posted by b1tr0t at 5:52 PM on September 1, 2008

The resizing is a real kicker. You could fairly easily load each image into memory and then quickly compare it to the thumbnail. Since you're doing this one time, the computational cost isn't a horrible concern.

I guess I would write it to check through everything for a direct, unscaled match and see what that leaves you with, and see if you can do the rest by hand, or post it as a job to rent-a-coder at $5/hr?

Unless you can make the scaling code work the same in Python as it did originally, I would think the chances of a match would be fairly bleak.
posted by SirStan at 5:56 PM on September 1, 2008

Do what b1tr0t suggests, but more cheaply by comparing the obvious suspects first.

For each thumbnail, compute the 75x75 center area (your default), and compare it to the actual thumbnail. If that matches, you're done. If not, flag it as unfound. In either case, move on to the next picture.

Now that you've eliminated all the easy cases, see how many pictures are left to do, and visually try to see how those thumbnails were made. Maybe many of them look like the whole pic was thumbnailed down to 75x75. So run through all the as-yet-unfound ones and test for that, again narrowing your pool.

Then look to see what the majority of the remainder seem to be (thumbnails of the top left of the picture, or whatever), and keep reducing your pool until hopefully you have a small number that need to be sussed out by hand.
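That first winnowing pass could look something like this sketch, assuming PIL and numpy; the function name and the error tolerance (meant to absorb JPEG re-encoding noise) are guesses:

```python
from PIL import Image
import numpy as np

def is_default_center_crop(photo_path, thumb_path, tolerance=3.0):
    """Does the stored thumbnail match the default 75x75 center crop?"""
    photo = Image.open(photo_path).convert("RGB")
    thumb = np.asarray(Image.open(thumb_path).convert("RGB"), dtype=np.int64)
    w, h = photo.size
    left, top = (w - 75) // 2, (h - 75) // 2
    center = photo.crop((left, top, left + 75, top + 75))
    err = np.abs(np.asarray(center, dtype=np.int64) - thumb).mean()
    return bool(err <= tolerance)
```

Anything that returns False gets flagged for the next pass (whole-image shrink, top-left crop, and so on).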
posted by orthogonality at 6:03 PM on September 1, 2008

fun fact: 18029 photos as of a few seconds ago.
I like the idea of flagging the easy ones. Definitely doing that one.
posted by Mach5 at 6:42 PM on September 1, 2008

This is similar to a pretty well-studied problem, the problem of "I have a thumbnail, is it from any of these 100,000 images and if so what part?". Your problem is much easier since you know what the source image is, you just need to find the source rectangle. But there might be research code for the first problem available somewhere which you can throw at your problem. Something along the lines of pyramidal image indexing or multiresolution/multiscale wavelet stuff.

The opposite approach is to automatically handle the simple cases like orthogonality said, and then deal with the remainder using Mechanical Turk or the like. If there are only, say, a couple thousand non-simple cases, you could put them on MT for a penny or ten cents per image, and be done with it really quickly.
posted by hattifattener at 7:13 PM on September 1, 2008

This is probably absolutely no use, but there are image hashing algorithms that are relatively scale- and rotation-independent. I don't know how well they work in practice, or whether they're efficient or applicable here. This paper claims to have a hashing technique resistant to 2-degree rotation, cropping up to 10% (though surely, if you were focused on just that, you could find a way to increase this parameter at the expense of others?), scaling by 10%, JPEG compression, and some filtering. The references in that paper also probably wouldn't be a bad place to look.
posted by devilsbrigade at 9:50 PM on September 1, 2008

Suggestion for optimization: with 256*256*256 potential colours in an image, but only, say, a few hundred * a few hundred pixels, you may be able to make progress by searching for a single pixel.

Take, say, the top-left pixel of the thumbnail. Identify the locations of pixels with that exact RGB combination in the original image. From each, search diagonally towards the bottom-right of the original image, looking for a pixel with the RGB value of the bottom-right pixel of the thumbnail.

It won't work every time, but if you come up with multiple matching pixels, you can try them all and it won't be that much slower.

Note: Won't work if you used any kind of bilinear or bicubic scaling for the thumbnail. You need an exact colour match, not something that's been smoothed.
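A rough sketch of that corner-pixel search, assuming PIL and numpy (the function name is made up; exact matches only, so it assumes no resampling or lossy re-encoding happened):

```python
import numpy as np
from PIL import Image

def candidate_squares(photo_path, thumb_path):
    """Corner-pixel trick: match the top-left corner exactly, then walk the
    diagonal looking for the bottom-right corner. Each hit is a candidate
    square (x, y, side) that still needs full verification."""
    photo = np.asarray(Image.open(photo_path).convert("RGB"))
    thumb = np.asarray(Image.open(thumb_path).convert("RGB"))
    tl, br = thumb[0, 0], thumb[-1, -1]
    h, w = photo.shape[:2]
    candidates = []
    # every pixel that exactly matches the thumbnail's top-left corner
    ys, xs = np.nonzero((photo == tl).all(axis=2))
    for y, x in zip(ys, xs):
        # equal x/y offsets keep the candidate region square; min side is 75
        for side in range(75, min(h - y, w - x) + 1):
            if (photo[y + side - 1, x + side - 1] == br).all():
                candidates.append((int(x), int(y), side))
    return candidates
```

With ~16.7 million possible colours and only tens of thousands of pixels per photo, most photos should yield very few false candidates.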
posted by Jimbob at 4:10 AM on September 2, 2008

Hmm, JPEG artifacts in the thumbnail may also put a dent in this idea. Maybe search for pixels with an RGB range +/- 5%?
posted by Jimbob at 4:11 AM on September 2, 2008

A cross-correlation algorithm might give good results. It is not an exact solution, but it calculates a match probability. You'll need multiple filtering steps to achieve a fast solution. First, try the obvious location. Then compare low-resolution copies of the images and find probable locations. The last step is to compare these locations in the original resolution. Do not aim for a perfect solution, a near-exact match is probably just as good.
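A single-scale sketch of that idea (grayscale only, plain numpy FFTs; the function name is made up, and a real pipeline would repeat it at several scales as described):

```python
import numpy as np
from PIL import Image

def correlate_position(photo_path, thumb_path):
    """Cross-correlate the thumbnail against the photo and return the
    peak as the likely (x, y) top-left corner of the matching region."""
    photo = np.asarray(Image.open(photo_path).convert("L"), dtype=np.float64)
    thumb = np.asarray(Image.open(thumb_path).convert("L"), dtype=np.float64)
    # subtract the means so flat bright regions don't dominate the score
    photo -= photo.mean()
    thumb -= thumb.mean()
    # correlation via FFT: multiply the photo's spectrum by the conjugate
    # spectrum of the thumbnail zero-padded to the photo's size
    corr = np.fft.ifft2(np.fft.fft2(photo) *
                        np.conj(np.fft.fft2(thumb, s=photo.shape))).real
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return int(x), int(y)
```

This version assumes no rescaling, and the FFT correlation is circular, so a peak near the edge can be a wrap-around artifact; the winner still deserves a pixel-level sanity check.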
posted by Psychnic at 5:46 AM on September 2, 2008

Psychnic is right. This is a classic signal-processing problem. I am not aware of any off-the-shelf software that would do it, and the solution would make a good senior thesis for an engineering student.
posted by adamrice at 6:41 AM on September 2, 2008
