How do I go about finding out how many servers (as a function of number of users) I will need to host my web application?
June 26, 2009 1:33 AM

I'm putting together a business model for my web application. As part of my expenditure, I need to estimate how many servers I need as a function of number of users. What's a good first step to finding this out?

Whether my application can support 100 concurrent users per server or 10,000 will vastly change my estimate of server costs. I guess, in general, the number of concurrent users per virtual server will change dramatically depending on the type of web application[1], so I was wondering what the steps are to estimate the number of servers required (presuming I can make a good guess of how many users will be online at any one time).

The first step I tried was load-testing my application to see how many users a single server can handle. My application is a GWT application (running on a GlassFish server) and I have actually been having a bit of trouble doing a load test due, in part, to the way GWT handles AJAX. That is, I tried JMeter, but ran into problems with the way it encodes Content-Types. I was wondering if there is an easier way.
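One workaround would be to skip JMeter entirely and drive the requests directly, setting the Content-Type header by hand to the one GWT-RPC expects ("text/x-gwt-rpc; charset=utf-8"). A rough sketch, where the endpoint URL and payload are placeholders (the real request body would be captured from the browser's network traffic):

```python
# Minimal concurrent load-test sketch. ENDPOINT and PAYLOAD are
# hypothetical placeholders; the key detail is setting the Content-Type
# header that JMeter was mangling.
import threading
import time
import urllib.request

ENDPOINT = "http://localhost:8080/app/rpc"    # placeholder GWT-RPC servlet URL
PAYLOAD = b"captured-gwt-rpc-body-goes-here"  # placeholder request body
CONCURRENT_USERS = 5
REQUESTS_PER_USER = 4

latencies = []
lock = threading.Lock()

def simulated_user():
    for _ in range(REQUESTS_PER_USER):
        req = urllib.request.Request(
            ENDPOINT, data=PAYLOAD,
            headers={"Content-Type": "text/x-gwt-rpc; charset=utf-8"})
        start = time.time()
        try:
            urllib.request.urlopen(req, timeout=5).read()
        except OSError:
            pass  # a real test would count errors separately
        with lock:
            latencies.append(time.time() - start)

threads = [threading.Thread(target=simulated_user)
           for _ in range(CONCURRENT_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("requests issued:", len(latencies))
```

The idea would be to ramp CONCURRENT_USERS up until latency or the error rate degrades; that knee is the per-server capacity.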

I was thinking about estimating the session size for each user, then dividing the virtual machine's memory by that amount. But then I'd also need an estimate of CPU usage, too.
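To make the memory side of that concrete, a back-of-envelope version (every figure below is a placeholder assumption until the real session footprint is measured):

```python
# Back-of-envelope memory-side capacity estimate. Every figure is a
# placeholder assumption, not a measurement.
heap_bytes = 2 * 1024**3        # assume a 2 GB JVM heap per virtual server
app_overhead = 512 * 1024**2    # assume the app needs 512 MB besides sessions
session_bytes = 200 * 1024      # assume ~200 KB of session state per user

users_by_memory = (heap_bytes - app_overhead) // session_bytes
print(users_by_memory)          # ~7,864 sessions before memory is the limit
```

Memory is only one axis, though; CPU per request usually bites first, which is why the load test still matters.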

Maybe the number of users per virtual server doesn't change much from web application to web application and a "rule of thumb" exists: e.g., 1,000 concurrent users per server, or something like that. My gut instinct says no such rule exists...

[1] For those who are interested in the details, my web application is an online file manager and general life-organizer that uses hierarchical labels. At its most basic level it allows you to make to-do lists quickly: this would mean that the data exchanged between the client and server is minimal: text. But I've made it so you can also upload files, e.g., music files, and play them online: streaming data would be resource-intensive and probably hard to estimate theoretically. That's why I'm thinking that load testing under a variety of conditions is the only true way to estimate the number of concurrent users. In which case, any suggestions on how to load test GWT applications would be highly appreciated.

Am I on the right track? If anyone has any suggestions, I'd love to hear them.
posted by tomargue to Technology (4 answers total) 3 users marked this as a favorite
You don't want to store files, music, or anything else you might stream (anything that would peg your web or database server's IO) on the same boxes that serve your web application.

This is what Amazon S3 is for.

S3, if you're not familiar, is cloud-based storage that is incredibly affordable, massively scalable, redundant, and fast enough to push out streaming video and audio. The best part is, you only pay for EXACTLY what you use, and not a penny more. You pay a small amount to push data in, a small amount to store it, and a small amount to pull it back out, but all of the IO is off-loaded entirely, and it keeps your servers happily minding the farm instead of saturating their pipes with large file requests.

(My usage this month, thus far, on a project that has received about 30,000 visitors, each pulling a very large photo gallery down.)
$0.150 per GB - first 50 TB/month of storage used     14.075 GB-Mo          $2.11
$0.030 per GB - all data transfer in                   0.006 GB             $0.01
$0.170 per GB - first 10 TB/month data transfer out   41.777 GB             $7.10
$0.01 per 1,000 PUT, COPY, POST, or LIST requests        599 requests       $0.01
$0.01 per 10,000 GET and all other requests        1,411,390 requests       $1.41
Total                                                                      $10.64
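For the curious, that bill reduces to a one-liner of arithmetic (2009 S3 prices as quoted; Amazon rounds each line item up to the cent, which is where the extra penny on the bill comes from):

```python
# The posted bill, reduced to arithmetic (2009 S3 prices as quoted above).
storage_gb_mo = 14.075
transfer_in_gb = 0.006
transfer_out_gb = 41.777
put_requests = 599
get_requests = 1_411_390

total = (storage_gb_mo * 0.150
         + transfer_in_gb * 0.030
         + transfer_out_gb * 0.170
         + put_requests / 1_000 * 0.01
         + get_requests / 10_000 * 0.01)
print(round(total, 2))  # 10.63 before per-line rounding; the bill shows $10.64
```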
A lot of your per-user optimization and estimates will come down to how efficiently you write your application and how well you optimize your database. Database calls are typically the most expensive part of running a web application. If you index things properly, cache things when you can, and keep superfluous, slow, and overly complicated queries to a minimum, you'd be surprised what a single server can handle.
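To illustrate the caching point with a toy example (the dict stands in for memcached, and the query function is a hypothetical stand-in for a real database hit):

```python
# Toy read-through cache: the expensive query runs once per distinct key,
# and repeat reads come from memory.
cache = {}
query_count = 0

def expensive_query(user_id):
    global query_count
    query_count += 1                      # pretend this round-trips to the DB
    return {"user_id": user_id, "todos": []}

def get_user_data(user_id):
    if user_id not in cache:              # miss: pay for the query once
        cache[user_id] = expensive_query(user_id)
    return cache[user_id]                 # hit: served from memory

get_user_data(1)
get_user_data(1)
get_user_data(2)
print(query_count)  # 2: one DB hit per distinct user, not per request
```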

S3 is also intelligent in that it automatically copies your files to redundant stores, so that data loss isn't a concern. They optimize frequently requested files to ensure that they're able to scale to any load, and you can use Amazon CloudFront to dictate that you'd like files to "live" in certain datacenters, or, rather, be automatically replicated to datacenters distributed throughout the country and Europe, so that speeds are quicker for users.

Amazon also offers servers themselves in a similar manner, through their EC2 (Elastic Compute Cloud) service. These operate as "instances": full servers that can be spooled up in under 5 minutes from a copy of your software and operating system exactly as you specify (the "Amazon Machine Image"), and spooled down just as quickly if demand dies. Since you pay by the hour (about $75-$150 per "full server" per month), the cost savings are enormous: you're NEVER paying for excess capacity you aren't using. There's a bit of a learning curve to implementing an application on EC2, however, as you have to look into their Elastic Block Store service, which allows data to persist between instances. Things get a bit complicated from there.
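The by-the-hour math is worth sketching with invented numbers (a small instance ran roughly $0.10/hour in 2009; treat that as a ballpark, not a quote):

```python
# Why hourly billing matters for bursty load. The $0.10/hour figure is a
# 2009-era ballpark for a small instance, not a quote.
hourly_rate = 0.10
hours_per_month = 730

always_on_1 = hourly_rate * hours_per_month    # one instance, 24/7
always_on_3 = 3 * always_on_1                  # provisioned for peak, 24/7
# Elastic: assume 3 instances for 8 peak hours/day, 1 for the other 16:
elastic = hourly_rate * 30 * (3 * 8 + 1 * 16)

print(round(always_on_1, 2), round(always_on_3, 2), round(elastic, 2))
# 73.0 219.0 120.0: elastic capacity at peak, at roughly half the 24/7 price
```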

I'm not trying to simply whore out Amazon's goods; they're exceptionally awesome at what they do, and their prices are right. I'm simply stressing that you should move file storage off of your web and database servers.

Finally, write good database queries and use them wisely, learn how to index properly, and look into memcached; just for text, you can probably handle quite a few concurrent users per server. It's impossible to give a baseline without knowing how complicated your app is, how good your devs are at what they do, and what crazy functionality you want the web server itself to accomplish. In the meantime, look to the cloud for cost containment, instant scalability, and reliability.
posted by disillusioned at 5:07 AM on June 26, 2009 [4 favorites]

The to-do list side is going to be cheap, but the music streaming will be unbelievably resource-intensive. For that, your limiting factor will probably be HDD seeks: if two or more people try to stream music files off the same hard drive at the same time, it'll thrash like crazy. On the other hand, you can serve about a bajillion simultaneous to-do lists, no problem.
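To put rough numbers on the thrashing (seek time and throughput below are typical 2009 commodity-drive guesses, purely illustrative):

```python
# Rough ceiling on concurrent streams from one disk. Figures are typical
# 2009 commodity-drive guesses, not measurements.
SEEK_MS = 10.0       # average seek + rotational latency
DISK_MB_S = 60.0     # sequential read throughput

def streams_supported(chunk_kb, stream_kb_s=16):  # 16 KB/s ~ one 128 kbit/s MP3
    read_ms = chunk_kb / 1024 / DISK_MB_S * 1000  # time spent actually reading
    chunks_per_s = 1000 / (SEEK_MS + read_ms)     # a seek before every chunk
    return int(chunks_per_s * chunk_kb / stream_kb_s)

# Small reads drown in seeks; large readahead chunks amortize them:
print(streams_supported(4), streams_supported(512))  # 24 vs 1745
```

Which is the whole game: tiny interleaved reads cap you at a couple dozen streams per spindle, while big readahead buffers (or off-loading to something like S3) push the ceiling way up.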

Not all traffic is the same: some stuff is really cheap and some stuff is really expensive. So before you can know how many users each server can handle, you need to know what a "user" is: even if you do use S3 or some external service, you need to make some guesses about the average user's behavior.
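One way to structure those guesses is a per-"user" cost mix; every rate and cost below is an invented placeholder, but the shape of the estimate carries over:

```python
# Sketch of a per-"user" cost model mixing cheap and expensive actions.
# Every rate and cost is an invented placeholder; the point is the shape
# of the estimate, not the numbers.
actions = {
    # action: (requests per user per hour, CPU cost per request in ms)
    "todo_edit":   (20,  5),
    "file_upload": (0.5, 200),
    "stream_song": (2,   50),   # request handling only; disk IO is separate
}

cpu_ms_per_user_hour = sum(rate * cost for rate, cost in actions.values())
# One core offers 3,600,000 ms of CPU per hour; keep 50% headroom:
users_per_core = int(3_600_000 * 0.5 / cpu_ms_per_user_hour)
print(cpu_ms_per_user_hour, users_per_core)  # 300.0 6000
```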
posted by goingonit at 5:56 AM on June 26, 2009

Elasticfox and S3Fox are very convenient for playing around with Amazon EC2 instances, Elastic Block Store volumes, and S3.
posted by flabdablet at 6:33 AM on June 26, 2009

Thanks heaps, disillusioned, goingonit and flabdablet. I've got a few points for reply/followup, but I'm going to have a sleep and think first (wanted to show appreciation before I sign off for the night).
posted by tomargue at 9:31 AM on June 26, 2009
