
eZ Publish Performance Optimization Part 1 of 3: Introduction and Benchmarking

Tuesday 16 January 2007 1:33:00 pm


http_load is an HTTP test client capable of generating a significant amount of HTTP traffic even on modest hardware. It runs multiple HTTP fetches in parallel to test the throughput of a web server. This shows how many connections a server can handle - that is, how many requests can be served in a given time period. http_load can emulate a large number of low-bandwidth connections, supporting emulated bandwidth throttling (that is, limiting the rate of data transfer). Unlike many test clients, http_load runs in a single process, so the client machine itself will not be slowed down by the test.

Installation

You can download the http_load source from the project site.

You can compile http_load from source. Before you build, make sure that your system has standard development tools such as autoconf and libtool installed. Root access might be needed as well to install to the default location.

The basic commands to compile and install http_load from source are:

shell> su -
shell> cd /usr/local/src
shell> gunzip < /PATH/TO/http_load-version.tar.gz | tar xvf -
shell> cd http_load-version
shell> make
shell> make install

An http_load binary will be installed under /usr/local/bin/. If you install http_load as a non-root user, make sure that you have write permission to that folder.

Once http_load is installed in your system, you can access the manual page by executing:

shell> man http_load

Testing

http_load requires at least 3 parameters:

  • One start specifier, either -parallel or -rate
    -parallel tells http_load to make the specified number of concurrent requests.
    -rate tells http_load to start the specified number of new connections each second. With -rate, you can also pass the -jitter flag, which tells http_load to vary the rate randomly by about 10%.
  • One end specifier, either -fetches or -seconds
    -fetches tells http_load to quit when the specified number of fetches have been completed.
    -seconds tells http_load to quit after the specified number of seconds have elapsed.
  • A file containing a list of URLs to fetch
    The urls parameter specifies a text file containing a list of URLs, one per line. The requested URLs are chosen randomly from this file.
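As a sketch, here is how such a URL file might be created with a shell heredoc. The host and paths are hypothetical (example.com stands in for your own server); list whichever pages you want the test to exercise, one per line:

```shell
# Create a URL list for http_load; URLs are fetched at random from this file.
cat > urls.txt <<'EOF'
http://www.example.com/
http://www.example.com/news
http://www.example.com/products
EOF
```

You would then pass urls.txt as the last argument to http_load, for example together with -rate 10 -jitter -seconds 30 to open roughly ten new connections per second for thirty seconds.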

Here is an example test showing the output results. This test runs for ten seconds, with five parallel requests:

$ http_load -parallel 5 -seconds 10 urls.txt
185 fetches, 5 max parallel, 10545 bytes, in 10.0084 seconds
57 mean bytes/connection
18.4845 fetches/sec, 1053.62 bytes/sec

msecs/connect: 0.211719 mean, 12.859 max, 0.044 min
msecs/first-response: 267.173 mean, 1465.58 max, 50.509 min
HTTP response codes:
 code 200 -- 185

As you can see, the resulting performance is about 18.5 fetches (or pageviews) per second. To assess performance reliably, you should load the server heavily and run the test for longer, varied periods of time.
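The summary figures in the report follow directly from the raw counts. A quick sketch of the arithmetic, reproduced with awk using the numbers from the run above (185 fetches, 10545 bytes, 10.0084 seconds):

```shell
# Derive http_load's summary figures from the raw counts of the example run.
awk 'BEGIN {
  fetches = 185        # total completed fetches
  bytes   = 10545      # total bytes transferred
  seconds = 10.0084    # elapsed wall-clock time

  printf "%.4f fetches/sec\n", fetches / seconds       # prints 18.4845 fetches/sec
  printf "%d mean bytes/connection\n", bytes / fetches # prints 57 mean bytes/connection
}'
```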
