Imagine that you are launching a national health care website. You need to make sure that a lot of people can use the site at the same time without it falling over.
Load testing is one approach to measuring how quickly pages load under different conditions, in particular when many users are hitting the site concurrently.
A popular tool is ApacheBench (ab). It was originally developed to test the Apache server, but it is generic enough to test any server, whether it is running locally on your machine or out on the internet. It comes pre-installed on macOS.
How Reliable Are The Results?
If you use ApacheBench to test a server that is not on the local network, you will also be seeing network latency. On one hand you can’t control those middle-men in the network, but on the other hand you’re seeing the results as the user will see them.
When you do benchmarking of any kind you need to run the tests many times to reduce the impact of secondary factors (like your computer’s memory swapping, other processes taking CPU time, etc). Make sure to close other applications running on the test system.
The hardware of the test machine matters. The more CPU power you have the more requests you can churn out.
We will be using the dissaperf repository for these exercises. Start by cloning this repository:
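For example (the repository URL isn't reproduced here, so substitute the URL for your copy):

```shell
# Clone the exercise repo and move into it.
# <dissaperf-repo-url> is a placeholder; use the real URL for your copy.
git clone <dissaperf-repo-url> dissaperf
cd dissaperf
```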
Comparing Ruby Web Servers
You have seen that Ruby has several options for open-source web servers. For example, on your projects you’ve probably run WEBrick in development and something more sophisticated like Puma or Unicorn in production.
One thing that differentiates these options is how they handle heavier load and concurrent requests. We’ll explore this idea in this lesson by benchmarking our sample app with several different web servers:
(Remember that Rack provides a uniform interface for Ruby apps to interact with a web server – this allows us to swap servers out seamlessly.)
Our app is a very simplistic web app. On the root path it simply prints "Hello World" – an action which should be nearly instantaneous, allowing us to see the impact of the different web servers on overall performance.
In addition, we have a /slow endpoint, which also prints "Hello World" but injects a random amount of slowness into the action. This will be useful for simulating the impact of server-side slowness on our users.
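The lesson doesn't reproduce the app's source, but a Rack app with this behavior could look roughly like the following sketch (a guess at the shape of the dissaperf app, not its actual code):

```ruby
# A Rack app is simply an object that responds to #call(env) and
# returns a [status, headers, body] triple.
app = lambda do |env|
  if env["PATH_INFO"] == "/slow"
    sleep(rand * 0.5) # inject up to ~500ms of artificial slowness
  end
  [200, { "Content-Type" => "text/plain" }, ["Hello World"]]
end
```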
Beginning with WEBrick
Let’s start simple with WEBrick.
Start the Server
Boot the app using rackup.
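A typical invocation looks like the following (port 9000 is an assumption, chosen to match the commands used later in this lesson):

```shell
# From the repo root (where config.ru lives), start the app
# on WEBrick, listening on port 9000.
rackup -s webrick -p 9000
```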
Now, with the server running, open another tab in your terminal window.
Imagine that 10 users are accessing your app at the same time, each of them making 10 requests. Let’s mimic the load with ApacheBench:
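With ab, 10 concurrent users each making 10 requests translates to 100 total requests at a concurrency of 10 (port 9000 assumed, matching the server started above):

```shell
# 100 total requests (-n), 10 at a time (-c), against the root path.
# Note that ab requires the trailing slash on the URL.
ab -n 100 -c 10 http://localhost:9000/
```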
AB here is showing us a "histogram" of the response times for our 100 requests. Your times will differ slightly, but in our example above we can see that the slowest request took 66 ms and that 50% of requests were served within 26 ms (meaning the other half took longer).
Notice that AB gives us increasing detail as we get closer to the slowest request. When diagnosing performance issues, it’s often most useful to focus on the worst-case or "pathological" requests – i.e. those in the 90th to 100th percentiles.
Understanding the Parameters
When we run ApacheBench, these are the most commonly used flags:
- `-n` configures the total number of requests
- `-c` configures the number of concurrent requests
- `-t` sets the maximum number of seconds to wait for responses
- `-p` sends a file containing data via a POST request
- `-u` sends a file containing data via a PUT request
- `-T` specifies the content-type for POSTing or PUTing when sending a file
- `-e` specifies an output file in which to save results
Increase the number of total requests and concurrent requests until you cause the server to crash. Make sure the total number of requests is larger than the number of concurrent requests.
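For example, something like the following (the exact numbers needed to take down the server will vary with your hardware):

```shell
# 10,000 total requests at a concurrency of 1,000: enough to
# overwhelm WEBrick on many machines. Adjust upward as needed.
ab -n 10000 -c 1000 http://localhost:9000/
```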
Saving the Results
You may want to export the results to a CSV file, so that you can graph them:
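ab's -e flag writes percentile response times to a CSV file; for example (the filename is up to you):

```shell
# Save percentile data (percentage served vs. time in ms) to filename.csv
ab -n 100 -c 10 -e filename.csv http://localhost:9000/
```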
After you run the command, a filename.csv file will be created in the directory from which you executed the command. Open it to see all the response data.
Testing Other Servers (Individual Exercise)
At this point, swap in the other server options (Thin, Puma, and Unicorn) and run your tests. Which server responds fastest? Which is the most fault-tolerant? How many concurrent requests does it take to bring each one down?
rackup -s thin -p 9000
rackup -s puma -p 9000
unicorn -p 9000
Compare the results of these servers to a single-threaded server (e.g. running puma with only 1 thread):
Puma with max threads set to 1:
puma -p 9000 -t 1:1
Go back to WEBrick and run some tests against the sample "slow" endpoint:
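For example, against the /slow endpoint (port 9000 assumed):

```shell
# Same load as before, but against the artificially slow action.
ab -n 100 -c 10 http://localhost:9000/slow
```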
And compare the results to the faster page:
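The same load against the fast root path looks like:

```shell
# Identical parameters, so any difference in the percentile table
# comes from the endpoint itself.
ab -n 100 -c 10 http://localhost:9000/
```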
How do the stats compare? What implications can you draw about the overhead involved?
Testing Other Servers’ Slow Endpoint Performance (Individual Exercise)
Repeat the steps for testing the "slow" endpoint for each of the other servers. Do the performance profiles change as we add in more server time?
Making POST Requests
The -p flag lets you perform POST requests, passing a file that contains the data that will be submitted as the POST body. The -T flag lets you specify the content type of the data you are sending.
We have included some JSON data files in the repository.
Let’s send a POST request to your app with this data:
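For example, assuming one of the repo's JSON files is named data.json (the actual filename in the repo may differ):

```shell
# POST the contents of data.json as the request body, with a
# matching content-type header. data.json is a placeholder name.
ab -n 100 -c 10 -p data.json -T application/json http://localhost:9000/
```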
Experiment with the various JSON files, and also vary the number of total requests and concurrent requests. How does the server hold up?
Making Authenticated Requests
Often there will be pages in your application accessible only to authenticated users. Load testing these can be a bit more difficult, since we need to configure ApacheBench to send requests with the proper credentials. You can pass optional cookie data to AB with the -C command line flag. The format for providing cookies is name1=value1;name2=value2.
So, for example:
ab -n 1 -c 1 -C "my_cookie=pizza;another_cookie=log_me_in" http://localhost:9000/
In the case of standard Rails apps, the session cookie is usually the main one needed to authenticate. However, more sophisticated auth systems may take some trial and error to figure out exactly which credentials need to be supplied.
Optional: Plotting Data
You can use D3 to plot the data in CSV files (-e), or gnuplot to plot the data in tab-delimited files (-g) if you’re happier on the command line.
For Further Reading
- Check out JMeter, also from Apache, for more advanced test suites