HTTP post size limit?
August 23, 2005 11:47 AM

Is there a limit on the size of a HTTP post? (Or any Aolserver doctors in the house?)

I am developing a web application that takes addresses as input and am trying to determine the limit of number of addresses which can be accepted. At around 1000 addresses the webserver (Aolserver) logs an error "post size nnnnn exceeds maxpost limit of 65536" and firefox returns "document contains no data" (hehe).

I believe I've configured the webserver to accept larger posts (using ns_param maxpost in the ns_section "ns/parameters" of the config.tcl file) but it still doesn't seem to be taking larger posts.
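
For reference, the relevant bit of my config.tcl currently looks roughly like this (the size shown is just a made-up example value, not necessarily what I set):

  ns_section "ns/parameters"
  ns_param   maxpost   10000000   ;# bytes -- doesn't seem to take effect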

Is there something magical about 65536 bytes? Is there a limit defined in the HTTP spec? Are browsers designed to handle posts larger than that?
posted by If I Had An Anus to Computers & Internet (13 answers total) 1 user marked this as a favorite
Nothing magical about it other than being 2^16.

Did you reboot the server after making that change? When I make max file size changes in the PHP config they don't take effect until I kick the server and it reloads the setting.
posted by phearlez at 11:54 AM on August 23, 2005


The maxpost parameter goes in the ns/server/$servername section, like pageroot, directoryfile, directorylisting, etc. At least, it does with aolserver 3.3, and I believe it does with 4 as well.
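
In other words, something like this in your config.tcl (the server name and size here are just placeholders, so substitute your own):

  set servername "server1"   ;# whatever your server is actually named

  ns_section "ns/server/${servername}"
  ns_param   maxpost   10000000   ;# bytes; the built-in default is 65536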

655536, btw, is just 2^16, one more than the largest value that fits in an unsigned 16-bit int. (expr pow(2,16))
posted by RustyBrooks at 11:55 AM on August 23, 2005


The extra 5 stands for BYOBB
posted by RustyBrooks at 11:55 AM on August 23, 2005


No, you can POST multi-gig binaries via HTTP just fine; this is a problem with aolserver.
posted by cmonkey at 11:58 AM on August 23, 2005


RustyBrooks is right. I had the parameter in the wrong section. Thanks.

Ok, next question, is there a reason (security, I'll blow up the machine, etc) we shouldn't accept say 2gig queries?
posted by If I Had An Anus at 12:01 PM on August 23, 2005


Ok, next question, is there a reason (security, I'll blow up the machine, etc) we shouldn't accept say 2gig queries?

As long as the script that parses that data can handle the input properly, you should be fine. And keep in mind your file system limits if you're spooling that input to a tmp file.
posted by cmonkey at 12:08 PM on August 23, 2005


It kind of depends on how you're posting. Aolserver has a more or less built-in mechanism that accepts MIME posts sent with multipart encoding. This works fine, but once files get to 50-100 megs it slows down enormously; I found out why at some point in the past but have long since forgotten. Most of the time I post data in binary format and use my own handler (you can get the raw post data from aolserver). I don't remember the details since I wrote it and promptly forgot it, but it works OK. If I'm transferring really large files I just use ftp, which probably isn't an option if your application is browser-based. If it's not, there's a good ftp library built into tcllib.
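
From memory, the raw-handler approach looks roughly like this -- treat it as a sketch rather than working code, and the URL and proc name are just placeholders:

  ns_register_proc POST /bulk-addresses handle_addresses

  proc handle_addresses {} {
      # Raw, undecoded POST body -- no multipart parsing involved.
      set data [ns_conn content]
      set len  [ns_conn contentlength]

      # ... split $data into addresses and process them here ...

      ns_return 200 text/plain "received $len bytes"
  }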

Also, I think by default, with the multipart encoding thing, it's going to dump the temp files in /tmp -- make sure you have enough room there for whatever you're uploading. Those files should get cleaned up properly when the connection closes.
posted by RustyBrooks at 12:10 PM on August 23, 2005


In answer to the original question, there's no limit defined in the HTTP spec itself. (TCP's 32-bit sequence numbers do wrap around after about 4 gigs of data, but TCP handles the wraparound, so that isn't a practical cap either.)

I've also noticed some ISPs' transparent HTTP caches fall over when presented with POST data larger than a few KB.
posted by cillit bang at 12:13 PM on August 23, 2005


Thanks, guys...my company owes askmefi mucho dinero...it would have taken forever to research such good advice.

Right now we've capped it at 1gig and the system doesn't seem to be choking...of course that's only one query at a time. Just an FYI, I'm not talking about multi-part file upload...this is a regular post of text data.
posted by If I Had An Anus at 12:15 PM on August 23, 2005


OK, good luck. If you need any further help, I work with aolserver all day and most of the night. I'd be happy to help you out.
posted by RustyBrooks at 12:20 PM on August 23, 2005


I think the reason for the sane/smaller limits is to limit stupidity. In the vast majority of webhosting scenarios, you don't want people uploading hundreds of megabytes of data in a POST. Never underestimate stupidity: there are idiots who will somehow find a way to try to upload an ISO image of a 4GB DVD when your form is asking for a simple image. If you don't have a reasonable limit set in the server, it has to sit there and wait for the entire upload to finish before returning control to the underlying CGI/script, which then errors out with "you are an idiot." If you tell the server to limit the size of a POST reasonably, it can just abort the request when the limit is reached rather than having to wait for the upload to finish.

You could even extrapolate this to a possible DoS attack vector. You could write a program that initiates a POST and then sends data at a constant but very slow rate, say a few hundred bytes per second. If you didn't have a limit set, an attacker could run N copies of this program (where N is the number of simultaneous threads/clients/preforked daemons you have set up) and make your web server completely inaccessible to anyone else, because every thread would be sitting there waiting for a never-ending POST.
posted by Rhomboid at 3:51 AM on August 24, 2005


Is this a text box people are posting into? If so, consider moving to file upload, aside from the other suggestions here. Browser client interfaces sometimes don't react well to having that much data in a form element.
posted by mkultra at 8:33 AM on August 24, 2005


Rhomboid has a good point. This web application interfaces with a non-browser application on the client side that is required to provide certain HTTP header values...though I guess even those checks don't kick in until the data is received. Something to watch out for. Thanks for the heads up.

mkultra, it is not normally a text box (see paragraph above), except in the test interface.

Everybody, apparently I'm an idiot who doesn't know his bits from a bucket. The gig limit I mentioned was actually a meg. (1024 * 1024 bytes, do I got it right this time?) We've currently set it at 10 megs which should handle well over 50,000 addresses. This is all we're required to accept. Thanks again.
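
(Quick sanity check, going by the earlier data point that roughly 1000 addresses already exceeded the 65536-byte default, i.e. at least ~65 bytes per address:

  expr {(10 * 1024 * 1024) / 65}   ;# => 161319, a very rough estimate of addresses per 10 megs

so 10 megs leaves plenty of headroom over the 50,000 we're required to accept.)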
posted by If I Had An Anus at 8:50 AM on August 24, 2005

