Millions of images
March 30, 2007 10:46 AM
A project that I am working on requires the storage and retrieval of several million images. What is the best way to do this?
A project that I am working on will require the storage and retrieval of several million images. The current idea for managing these images is to create a directory, dump 3000 images into it, and create a database record holding each image's name and its path. Repeat as new images are added. Each million images would require 334 directories. Retrieval needs to be fast and highly scalable. Is there a better way?
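For concreteness, the bucketing plan above might look something like this minimal sketch (Python; the sequential integer IDs and zero-padded directory names are assumptions for illustration, and the returned path is what the database record would store):

    import os

    BUCKET_SIZE = 3000  # images per directory, per the plan above

    def store_path(image_id, ext=".jpg"):
        # Map a sequential integer ID to a bucketed path, e.g. 7500 -> '0002/7500.jpg'
        bucket = image_id // BUCKET_SIZE
        return os.path.join("%04d" % bucket, "%d%s" % (image_id, ext))

    # 1,000,000 images at 3000 per directory = 334 directories, the figure above
    assert (1000000 + BUCKET_SIZE - 1) // BUCKET_SIZE == 334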
The dir-based solution sounds workable. The DB part might not even be needed if there is enough metadata in the name. I've set up similar systems with similar numbers of files before.
If there is enough info, a series of hashed directories works well. E.g., for the image "foobarblippy_100.jpg" you could stash it in /fo/ob/ar/bl/foobarblippy_100.jpg, and you wouldn't need a db to do the name->path lookup. But that depends on how "hashable" the filenames are.
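A minimal sketch of that layout (assuming names are long enough to slice evenly; skewed or short names would want a real hash like md5 applied first):

    import os

    def hashed_path(filename, depth=4, width=2):
        # 'foobarblippy_100.jpg' -> 'fo/ob/ar/bl/foobarblippy_100.jpg'
        parts = [filename[i * width:(i + 1) * width] for i in range(depth)]
        return os.path.join(*(parts + [filename]))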
Keeping the max number of items (be they subdirs or files) in each dir fairly small (say, 500 or so) will help speed on many filesystems. But it sounds like that was more or less already in the plan.
How big are the images we're talking about? If you are talking about huge images and ludicrous speed requirements, a specialized file system might be needed.
Some of the ideas in, say, MogileFS might be useful, depending on your particular needs.
posted by alikins at 11:16 AM on March 30, 2007
Response by poster: Smackfu,
Wouldn't we run into performance problems storing millions of large images in the database?
posted by Mr_Zero at 11:16 AM on March 30, 2007
I'd say for most scenarios this is a reasonable, fast way to go. Store some metadata in the DB and let the filesystem handle the rest. You will probably want a small hierarchy of directories instead of one level. Put your DB on a RAID 5 and your images on a RAID 0+1.
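The metadata side of that might look like the following sketch (sqlite3 and these table/column names are illustrative assumptions, not a prescription):

    import sqlite3

    conn = sqlite3.connect("images.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS images (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        path TEXT NOT NULL  -- where the bytes live on disk
    )""")

    def lookup_path(conn, image_id):
        # Indexed lookup: the DB returns only the path; the image bytes stay on disk.
        row = conn.execute("SELECT path FROM images WHERE id = ?", (image_id,)).fetchone()
        return row[0] if row else None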
But as usual the best way depends on the particulars: how many images do you need to scale to? 10M? 100M? 1B? What is the size distribution of the images -- are they all thumbnail sized (10s of KB), or are they all "large" (1MB+ish)? What will your access patterns be -- completely random, or will you usually access them in some order? How often, if ever, will you modify the collection or add to it? Is it always growing? How important is fault tolerance and maximum uptime?
posted by ldenneau at 11:18 AM on March 30, 2007
Don't forget that a huge advantage of the BLOBs-in-database approach is that backing up and restoring your solution is just a matter of backing up the database. You don't have to keep database and file system backups in sync. Also, if you ever want to scale your solution to multiple web servers/multiple database servers, you may find that your database product's built-in clustering support has a whole lot fewer corner cases than a home-grown filesystem database bolted onto the real RDBMS.
posted by mmascolino at 11:22 AM on March 30, 2007
And no, storing lots of binary data in a single table doesn't necessarily mean it's going to be slow. Lookup will be indexed and done via a simple ID field, so the database doesn't care what's in the blob column until it finds the row it wants.
I know that MSSQL stores TEXT columns (textual BLOBs, really) in separate disk pages from the rest of the row. That means if you don't select it, it isn't even considered part of the row. That's useful when you want to pull up metadata about an image but not the data itself: just make sure your select statement doesn't pick up the text column.
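In practice that just means naming the columns you want instead of SELECT *; roughly like this (the column names are made up for illustration):

    # Cheap: reads only the in-row data; the out-of-row text/blob pages are never touched.
    META_SQL = "SELECT id, name, width, height FROM images WHERE id = ?"

    # Expensive: forces the engine to chase the out-of-row pages as well.
    FULL_SQL = "SELECT id, name, data FROM images WHERE id = ?"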
posted by cschneid at 11:22 AM on March 30, 2007
Different filesystems are going to have different performance characteristics with large numbers of files.
What are the performance parameters of your application?
What sort of concurrency level for adding and viewing images?
What sort of interface do you need to the data (HTTP? SMB? NFS? An arbitrary API in the language of your choice?)
What sort of access control does your application require?
I'm suspicious of the BLOB in database approach because I've been reading a lot about image storage approaches for web apps, and basically no-one is going that route. But that might be the way to go if your storage needs are finite, and your concurrency levels are manageably low.
It may or may not be appropriate to your application, but you should check out MogileFS.
posted by Good Brain at 11:33 AM on March 30, 2007
Response by poster: I think that you guys have answered the root question I had. It is better to write the images to some sort of nested directory scheme than to actually store them in the DB.
Thanks!
posted by Mr_Zero at 11:52 AM on March 30, 2007
basically no-one is going that route
That's how we do it for serving dynamic maps using ArcSDE, ArcIMS, and Oracle. I don't love the ESRI bits, but Oracle handles it just fine.
I want to chime in for "database" as opposed to filesystem. Wouldn't have to be Oracle.
posted by everichon at 1:43 PM on March 30, 2007
This may be completely out in left field, but I'm doing a similar project with 110,000 images and decided to just go whole hog and stick them in a MediaWiki installation. I generated the page content with a simple script, then inserted that and the images into the MediaWiki DB using a pair of googlable scripts called bulkinsert.php and bulkmedia.php. It may be too constricting for your purposes, but it works for me.
posted by rolypolyman at 1:46 PM on March 30, 2007
P.S. MediaWiki does the hard work of farming content into directories to keep the content manageable.
posted by rolypolyman at 1:46 PM on March 30, 2007
Wouldn't we run into performance problems storing millions of large images in the database?
The point is that BLOBs are optimized for this, unlike something like VARCHAR(32000). So it's made to be fast.
I'm suspicious of the BLOB in database approach because I've been reading a lot about image storage approaches for web apps, and basically no-one is going that route.
I assume that's because for a web app you can serve files directly from the filesystem, as long as you have the pathname. So you get major speed gains on the serving side, since your application doesn't have to do anything. The expense is having to keep up your data integrity manually, which is not as simple as you might wish, since your average filesystem doesn't have transactions.
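A sketch of what that manual bookkeeping might look like (reusing the hypothetical images table from above; note that a crash between the two steps can still orphan a file, so a periodic sweep comparing disk against the table is prudent):

    import os

    def add_image(conn, name, path, data):
        # Write the file first, then the row; if the insert fails, undo the
        # filesystem half by hand, since no transaction spans both stores.
        with open(path, "wb") as f:
            f.write(data)
        try:
            conn.execute("INSERT INTO images (name, path) VALUES (?, ?)", (name, path))
            conn.commit()
        except Exception:
            os.remove(path)
            raise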
(Also because most web apps use free databases and they suck at fancy stuff.)
posted by smackfu at 3:34 PM on March 30, 2007
I'm suspicious of the BLOB in database approach because I've been reading a lot about image storage approaches for web apps, and basically no-one is going that route.
I've seen lots of this; it generally works fine. Plus, if load becomes an issue, you can have one database for loading new images in, and mirror that database on as many individual machines as you want for serving 'em back out.
posted by davejay at 6:44 PM on March 30, 2007
Most filesystems suck when directories get large because, to open a file, the system must scan the directory entries linearly to find the one with the right name; only then does it know where the file physically sits on disc so it can be read. Directories of more than a couple hundred files are considered a Bad Idea™ because they kill the speed of file-open and directory-listing operations.
Using the filename to hash the files into lots of directories solves this by explicitly using the directory tree structure as a tree-search algorithm. For best performance, you want the same branching factor at each level, preferably chosen so that a whole directory listing fits in a single disc block. A branching factor of around 64-256 would likely be fine.
That means an n-level hierarchy gives you namespace for BF^n files; 3 levels with a branching factor of 256 gets you room for 256^3 = 16,777,216 (~16M) files.
Consider also ReiserFS; it is designed with large directories in mind: each directory listing is a tree, not a linear list, so it doesn't slow down when you store huge numbers of files in one directory. In fact it could well be faster than rolling the hash yourself with a directory hierarchy.
BLOBs in databases are fine for storing huge wads of data and accessing it quickly, since the tables can be indexed by the database's built-in indexing system, usually a B-tree variant optimised for disc-block access. The problem with BLOBs for images, and the reason people don't do it on webservers, is that it's not easy to generate a URL to a BLOB: BLOBs aren't visible in the native filesystem, and therefore not visible to the webserver.
If you want to serve the images over the web, they either have to be visible in the filesystem (and therefore to the webserver), or you have to write/find a plugin (CGI, servlet, PHP, whatever; some server-side program) for your webserver that pulls data from a BLOB by name and puts it on the wire in response to an HTTP request. Much easier to just slop the files in nested directories and put the filenames in the db.
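For illustration, about the smallest such server-side program might be a sketch like this (WSGI and sqlite3 are assumptions here, as are the table and column names; a real one would at least want caching headers):

    import sqlite3

    def application(environ, start_response):
        # Maps a request like /images/123 to the blob stored for row id 123.
        image_id = environ.get("PATH_INFO", "").rpartition("/")[2]
        conn = sqlite3.connect("images.db")
        row = conn.execute("SELECT data FROM images WHERE id = ?", (image_id,)).fetchone()
        conn.close()
        if row is None:
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found"]
        start_response("200 OK", [("Content-Type", "image/jpeg")])
        return [bytes(row[0])]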
You should also think carefully about access patterns: are these images requested randomly or do they tend to go together in some way, e.g. as neighbours in a regular tiled grid? In a linear sequence? If so, you'll likely want to have your indexing scheme reflect that since it can give you advantages with caching and prefetching data.
posted by polyglot at 6:47 PM on March 30, 2007