Thursday, July 31, 2008

Define: HTTP_LOAD in detail

Http_load is another cool web server performance tester that gives you simple stats on how your webapp is performing.

How to install in OS X

1. Download from
2. Open a terminal, cd to the directory where the archive is, and extract it
$ tar -zxvf http_load-12mar2006.tar.gz
3. Move to that directory
$ cd http_load-12mar2006
4. Run
$ make
5. Run
$ make install

Once installed, using http_load for quick benchmarking is really quite straightforward. You call the program,
tell it how many requests to make concurrently, and how long to run (either in number of seconds, or total fetches),
and finally pass in a file full of URLs to request.


http_load requires at least 3 parameters:

* One start specifier, either -parallel or -rate
-parallel tells http_load to make the specified number of concurrent requests.
-rate tells http_load to start the specified number of new connections each second. If you use the -rate start
specifier, you can specify a -jitter flag parameter that tells http_load to vary the rate randomly by about 10%.
* One end specifier, either -fetches or -seconds
-fetches tells http_load to quit when the specified number of fetches have been completed.
-seconds tells http_load to quit after the specified number of seconds have elapsed.
* A file containing a list of URLs to fetch
The urls parameter specifies a text file containing a list of URLs, one per line. The requested URLs are
chosen randomly from this file.
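As an illustration of the start/end specifiers above, a rate-based run might look like this (the numbers and filename are arbitrary examples):

```shell
# Start ~10 new connections per second, varied randomly by ~10%
# thanks to -jitter, and stop after 30 seconds have elapsed.
http_load -rate 10 -jitter -seconds 30 urls.txt
```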

You’re ready! Open up a text editor and write down the URLs of the website you want to test (preferably your own), one per line, then cd to the directory where the .txt file is and run http_load.
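Equivalently, you can create the URL file straight from the terminal (the URLs below are placeholders; substitute your own site):

```shell
# Create a URL file: one URL per line, no blank lines.
cat > urls.txt <<'EOF'
http://www.example.com/
http://www.example.com/about.html
EOF
```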

To see how many requests your server can handle over 100 fetches, run


$ http_load -parallel 5 -fetches 100 name_of_file.txt
which means: open 5 concurrent connections and fetch the page 100 times in total.

You’ll get something like this:

100 fetches, 5 max parallel, 1.34237e+07 bytes, in 15.842 seconds
134237 mean bytes/connection
6.31234 fetches/sec, 847351 bytes/sec
msecs/connect: 28.9069 mean, 75.011 max, 14.865 min
msecs/first-response: 435.84 mean, 2484.28 max, 96.082 min
93 bad byte counts
HTTP response codes:
code 200 — 100

The important bits are the throughput and latency lines. At the moment the web server is capable of handling about 6 requests per second, with a mean initial latency of 435 milliseconds.
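As a sanity check, the summary numbers are just ratios of the totals reported on the first line of output. The sketch below recomputes them from the run above (the printed totals are rounded, so expect small differences in the last digits):

```shell
# Recompute derived stats from "100 fetches, ... 1.34237e+07 bytes,
# in 15.842 seconds": bytes, fetches, and elapsed seconds.
echo "1.34237e+07 100 15.842" | awk '{
  printf "mean bytes/connection: %d\n", $1 / $2;  # bytes / fetches
  printf "fetches/sec: %.4f\n",         $2 / $3;  # fetches / seconds
  printf "bytes/sec: %d\n",             $1 / $3;  # bytes / seconds
}'
```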

The numbers you’ll want to look at in more detail are “fetches/sec” and “msecs/first-response”.
These are critical in terms of really understanding what your site is doing.

It’s important to note the difference between “benchmarking” and “profiling”. What we’re doing here with http_load
is the former: we’re getting a feel for a specific page’s overall performance. We know that it serves X pages per
second, and generally takes about Y milliseconds to respond. What we don’t know yet is why either of these is the
case. You’ll have to dig in more detail into your PHP code and server configuration to determine what to tweak to
bring up your site’s performance to an acceptable level. http_load doesn’t, and can’t, do that for you.

Http_load tells you how your webapp is currently performing and lets you test it under different conditions;
basically it’s a benchmarking tool, just like httperf. The next step is optimization. Have a look at
the first part of Getting Rich with PHP 5 (what a crappy title) by Rasmus Lerdorf for tools you can use to profile
your code and some tips on optimization. In the example shown he goes from 17 reqs/sec to 1100 reqs/sec.

$ http_load --h

usage: http_load [-checksum] [-throttle] [-proxy host:port] [-verbose] [-timeout secs] [-sip sip_file]
-parallel N | -rate N [-jitter]
-fetches N | -seconds N
One start specifier, either -parallel or -rate, is required.
One end specifier, either -fetches or -seconds, is required.

$ man http_load

http_load - multiprocessing http test client

http_load [-checksum] [-throttle] [-proxy host:port] [-verbose] [-timeout secs] [-sip sip_file]
[-cipher str] ( -parallel N | -rate N [-jitter] ) ( -fetches N | -seconds N ) url_file

http_load runs multiple http fetches in parallel, to test the throughput of a web server. However unlike
most such test clients, it runs in a single process, so it doesn’t bog down the client machine. It can be
configured to do https fetches as well.

The -checksum flag tells http_load to do checksums on the files fetched, to make sure they came across ok.
The checksums are computed the first time each URL gets fetched, and then recomputed and compared on each
subsequent fetch. Without the -checksum flag only the byte count is checked.

The -throttle flag tells http_load to throttle its consumption of data to 33.6Kbps, to simulate access by
modem users.

The -proxy flag lets you run http_load through a web proxy.

The -verbose flag tells http_load to put out progress reports every minute on stderr.

The -timeout flag specifies how long to wait on idle connections before giving up. The default is 60 seconds.

The -sip flag lets you specify a file containing numeric IP addresses (not hostnames), one per line.
These get used randomly as the *source* address of connections. They must be real routable addresses
on your machine, created with ifconfig, in order for this to work. The advantage of using this
option is you can make one client machine look like a whole bank of machines, as far as the server knows.
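A rough sketch of using -sip follows; the interface name (en0) and addresses are made-up examples, and the alias addresses must actually be routable on your machine for this to work:

```shell
# Add alias IPs to an interface (requires root), list them in a
# file, then let http_load pick source addresses from that file.
sudo ifconfig en0 alias 192.168.1.201 netmask 255.255.255.255
sudo ifconfig en0 alias 192.168.1.202 netmask 255.255.255.255
printf '192.168.1.201\n192.168.1.202\n' > sip.txt
http_load -sip sip.txt -parallel 5 -fetches 100 urls.txt
```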

The -cipher flag is only available if you have SSL support compiled in. It specifies a cipher set to use.
By default, http_load will negotiate the highest security that the server has available, which is
often higher (and slower) than typical browsers will negotiate. An example of a cipher set might be
"RC4-MD5" - this will run considerably faster than the default. In addition to specifying a raw cipher
string, there are three built-in cipher sets accessible by keywords:
* fastsec - fast security - RC4-MD5
* highsec - high security - DES-CBC3-SHA
* paranoid - ultra high security - AES256-SHA
Of course, not all servers are guaranteed to implement these combinations.
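For example, an https run using the built-in "fastsec" keyword might look like this (assuming your http_load binary was compiled with SSL support, and with a placeholder URL file):

```shell
# Use the fast RC4-MD5 cipher set instead of negotiating the
# server's highest (and slowest) available security level.
http_load -cipher fastsec -parallel 5 -fetches 100 https_urls.txt
```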

One start specifier, either -parallel or -rate, is required. -parallel tells http_load to keep that
many parallel fetches going simultaneously. -rate tells http_load to start that many new connections each
second. If you use the -rate start specifier, you can also give the -jitter flag, telling http_load to
vary the rate randomly by about 10%.

One end specifier, either -fetches or -seconds, is required. -fetches tells http_load to quit when that
many fetches have been completed. -seconds tells http_load to quit after that many seconds have elapsed.

The url_file is just a list of URLs, one per line. The URLs that get fetched are chosen randomly from this file.

All flags may be abbreviated to a single letter.

Note that while the end specifier is obeyed precisely, the start specifier is only approximate. If you
use the -rate flag, http_load will make its best effort to start connections at that rate, but may not
succeed. And if you use the -parallel flag, http_load will attempt to keep that many simultaneous connections
going, but may fail to keep up if the server is very fast.


* Note that when you provide a file with a list of URLs, make sure it has no empty lines in it.
If it does, the utility won't run, complaining:

./http_load: unknown protocol -
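A quick way to strip empty (or whitespace-only) lines before a run (filenames here are examples):

```shell
# Drop blank and whitespace-only lines from the URL file.
grep -v '^[[:space:]]*$' urls.txt > urls_clean.txt
```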

* Basic errors
- byte count wrong
- timeout
- parallel may be at most 1021
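The 1021 cap on -parallel appears to stem from select()'s FD_SETSIZE limit (typically 1024 descriptors per process, minus a few reserved for stdio). Since http_load needs roughly one descriptor per open connection, it's worth checking your shell's open-file limit before a big run:

```shell
# Show the per-process open-file-descriptor limit; http_load
# uses about one descriptor per parallel connection.
ulimit -n
```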


To remove the "byte count wrong" error, download the http_load-12mar2006-timeout.diff patch, apply it, and rebuild:

$ patch -p 1 < http_load-12mar2006-timeout.diff

Notes from other blogs/forums:

1) http_load does not replicate heavy load; it replicates a DOS attack.

2) It generates N requests every second without waiting for the previous N requests to complete. Actually, it never waits for requests to complete. It kills them, so the http server has nowhere to send data.

3) However, this is just HTTP load. If you have JS running and making database calls after page load, I don't think this will help your testing methods.

Note: this is my personal experience, so if you have any suggestions or new test cases, please drop them in the comments.