Jumpstart Lab Curriculum

Performance

Load Testing

Imagine that you are launching a national health care website. You need to make sure that a lot of people can use the site at the same time without it falling over.

Load Testing

Load testing is one approach to figuring out how quickly pages load under different conditions, in particular when several users are using the site concurrently.

Apache Bench

A popular tool is ApacheBench. It was originally developed to test the Apache server, but it is generic enough to test any server, whether it is running locally on your machine or out on the internet. It comes pre-installed on macOS.

How Reliable Are The Results?

If you use ApacheBench to test a server that is not on the local network, you will also be seeing network latency. On one hand you can’t control those middle-men in the network, but on the other hand you’re seeing the results as the user will see them.

When you do benchmarking of any kind you need to run the tests many times to reduce the impact of secondary factors (like your computer’s memory swapping, other processes taking CPU time, etc). Make sure to close other applications running on the test system.

The hardware of the test machine matters. The more CPU power you have the more requests you can churn out.

Getting Started

We will be using the dissaperf repository for these exercises. Start by cloning it:

Terminal

$
git clone git@github.com:JumpstartLab/dissaperf.git
$
cd dissaperf

Then run bundle to install the dependencies.

Comparing Ruby Web Servers

You have seen that Ruby has several open-source web server options. For example, on your projects you’ve probably run WEBrick in development and something more sophisticated like Puma or Unicorn in production.

One thing that differentiates these options is how they handle heavier load and concurrent requests. We’ll explore this idea in this lesson by benchmarking our sample app with several different web servers:

  • WEBrick
  • Unicorn
  • Thin
  • Puma

(Remember that Rack provides a uniform interface for ruby apps to interact with a web server – this allows us to swap them out seamlessly.)

Our sample app is deliberately simple. On the root path it prints "Hello World" – an action which should be nearly instantaneous, allowing us to see the impact of the different web servers on overall performance.

In addition, we have a /slow endpoint, which also prints "Hello World" but injects a random amount of slowness into the action. This will be useful for simulating the impact of server-side slowness on our users.
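As a rough sketch of what the app does (the actual dissaperf code may differ, and the delay range on /slow is an assumption), the two endpoints could be expressed as a bare Rack application:

```ruby
# Hypothetical sketch of the sample app as a minimal Rack application.
# The real dissaperf code may differ; the delay range is an assumption.
app = lambda do |env|
  if env["PATH_INFO"] == "/slow"
    sleep(rand)  # inject up to 1 second of random server-side slowness
  end
  [200, { "Content-Type" => "text/plain" }, ["Hello World"]]
end

# In a config.ru you would finish with: run app
status, _headers, body = app.call("PATH_INFO" => "/")
puts body.join  # "Hello World"
```

Because Rack apps are just objects responding to call, any of the servers below can run this unchanged.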

Beginning with WEBrick

Let’s start simple with WEBrick.

Start the Server

Boot the app using rackup:

Terminal

$
rackup -s webrick -p 9000

Simulating Users

Now, with the server running, open another tab in your terminal window using CMD+T.

Imagine that 10 users are accessing your app at the same time, each of them making 10 requests. Let’s mimic the load with ApacheBench:

Terminal

$
ab -n 100 -c 10 http://0.0.0.0:9000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 0.0.0.0 (be patient).....done


Server Software:        WEBrick/1.3.1
Server Hostname:        0.0.0.0
Server Port:            9000

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      10
Time taken for tests:   0.320 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      29000 bytes
HTML transferred:       1300 bytes
Requests per second:    312.81 [#/sec] (mean)
Time per request:       31.968 [ms] (mean)
Time per request:       3.197 [ms] (mean, across all concurrent requests)
Transfer rate:          88.59 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:    12   29  11.1     26      66
Waiting:        6   27  11.1     23      61
Total:         12   29  11.1     26      66

Percentage of the requests served within a certain time (ms)
  50%     26
  66%     30
  75%     33
  80%     34
  90%     48
  95%     57
  98%     59
  99%     66
 100%     66 (longest request)

ab here is showing us a "histogram" of the response times for our 100 requests. Your times will be slightly different, but in our example above we can see that the slowest request took 66 ms and 50% of requests completed within 26 ms.

Notice that AB gives us increasing detail as we get closer to the slowest request. When diagnosing performance issues, it’s often most useful to focus on the worst-case or "pathological" requests – i.e. those in the 90-100 percentiles.
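The percentile table is essentially a sorted list of response times indexed by rank. A quick sketch of the idea using the nearest-rank method – the times here are invented for illustration, not taken from the output above:

```ruby
# Nearest-rank percentile over a sorted list of response times (ms).
# The numbers are made up for illustration.
def percentile(sorted_times, pct)
  idx = ((pct / 100.0) * sorted_times.length).ceil - 1
  sorted_times[[idx, 0].max]
end

times = [12, 22, 26, 26, 30, 33, 34, 48, 59, 66].sort

puts percentile(times, 50)   # the median request
puts percentile(times, 100)  # the longest request
```

Watching how the 90-100th percentile values move between runs is usually more telling than the mean.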

Understanding the Parameters

When we run ApacheBench like this:

Terminal

$
ab -n 100 -c 10 http://0.0.0.0:9000/

We’re specifying:

  • -n configures the number of total requests
  • -c configures the number of concurrent requests
  • -t sets a time limit – the maximum number of seconds to spend benchmarking
  • -p sends a file containing data via a POST request
  • -u sends a file containing data via a PUT request
  • -T specifies the content-type for POSTing or PUTing when sending a file
  • -e writes a CSV file recording the percentage of requests served within each response time

Triggering Failure

Increase the number of total requests and concurrent requests until you cause the server to crash. Make sure the total requests are larger than the number of concurrent requests, like this:

Terminal

$
ab -n 500 -c 100 http://0.0.0.0:9000/

Saving the Results

You may want to export the results to a CSV file, so that you can graph them:

Terminal

$
ab -n 10 -c 2 -e filename.csv http://0.0.0.0:9000/

After you run the command, a filename.csv file will be created in the directory where you ran the command. Open it to see all the response data.

Terminal

$
open filename.csv
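The -e file is a simple two-column CSV: percentage served, time in milliseconds. A hedged sketch of reading it back in Ruby – we write a small fake file first so the example is self-contained (your real numbers will differ):

```ruby
require "csv"

# Fake a tiny ab -e output file so this sketch runs on its own;
# the two-column shape matches what ab writes.
File.write("filename.csv", <<~DATA)
  Percentage served,Time in ms
  50,26
  90,48
  100,66
DATA

rows = CSV.read("filename.csv", headers: true)
slowest = rows.map { |row| row["Time in ms"].to_i }.max
puts "slowest: #{slowest} ms"
```

From here the rows can be fed straight into a graphing tool.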

Testing Other Servers (Individual Exercise)

At this point, swap in the other server options (Thin, Puma, and Unicorn) and run your tests. Which respond fastest? Which are the most fault-tolerant? How many concurrent requests are needed to take out each one?

Thin: rackup -s thin -p 9000

Puma: rackup -s puma -p 9000

Unicorn: unicorn -p 9000

Compare the results of these servers to a single-threaded server (e.g. running puma with only 1 thread):

Puma with max threads set to 1: puma -p 9000 -t 1:1

Slower Requests

Go back to WEBrick and run some tests against the sample "slow" endpoint:

Terminal

$
ab -n 10 -c 2 http://0.0.0.0:9000/slow

And compare the results to the faster page:

Terminal

$
ab -n 100 -c 10 http://0.0.0.0:9000/

How do the stats compare? What implications can you draw about the overhead involved?

Testing Other Servers’ Slow Endpoint Performance (Individual Exercise)

Repeat the steps for testing the "slow" endpoint for each of the other servers. Do the performance profiles change as we add in more server time?

Sending Data

The -p flag lets you perform POST requests, passing a file that contains the data that will be submitted as the POST body. The -T flag specifies the Content-Type of the data you are sending.

We have included some JSON data in the /data folder:

  • small.json
  • medium.json
  • large.json
  • huge.json
  • ginormous.json

Let’s send a POST request to your app with the small.json file:

Terminal

$
ab -n 10 -c 2 -p data/small.json -T 'application/json' http://0.0.0.0:9000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 0.0.0.0 (be patient).....done


Server Software:        WEBrick/1.3.1
Server Hostname:        0.0.0.0
Server Port:            9000

Document Path:          /
Document Length:        41 bytes

Concurrency Level:      2
Time taken for tests:   0.051 seconds
Complete requests:      10
Failed requests:        0
Write errors:           0
Total transferred:      3180 bytes
Total POSTed:           8970
HTML transferred:       410 bytes
Requests per second:    197.99 [#/sec] (mean)
Time per request:       10.101 [ms] (mean)
Time per request:       5.051 [ms] (mean, across all concurrent requests)
Transfer rate:          61.49 [Kbytes/sec] received
                        173.44 kb/s sent
                        234.92 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     6   10   3.1      9      14
Waiting:        5    9   2.9      9      14
Total:          6   10   3.1     10      15

Percentage of the requests served within a certain time (ms)
  50%     10
  66%     10
  75%     13
  80%     15
  90%     15
  95%     15
  98%     15
  99%     15
 100%     15 (longest request)

Experiment with the various json files, and also vary the number of total requests and concurrent requests.

How does the server hold up?
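If you want payload sizes beyond the bundled files, you can generate your own JSON body. A hypothetical helper – the file name and key names here are made up, but any valid JSON works as an ab -p body:

```ruby
require "json"

# Hypothetical payload generator: the file name and keys are invented.
# Adjust the element count to grow or shrink the POST body.
items = Array.new(500) { |i| { "id" => i, "name" => "item-#{i}" } }
File.write("custom.json", JSON.generate("items" => items))

puts File.size("custom.json")
```

Then point ab at it the same way: ab -n 10 -c 2 -p custom.json -T 'application/json' http://0.0.0.0:9000/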

Making Authenticated Requests

Often there will be pages in your application accessible only to authenticated users. Load testing these can be a bit more difficult, since we need to configure ApacheBench to send requests with the proper credentials. You can pass cookie data to ab with the -C command line flag. The format for providing cookies looks like:

<cookie_name>=<cookie_value>;<cookie2_name>=<cookie2_value>

So, for example:

ab -n 1 -c 1 -C "my_cookie=pizza;another_cookie=log_me_in" http://localhost:3000/

In the case of standard Rails apps, the session cookie is usually the main one needed to authenticate. More sophisticated auth systems, however, may take some trial and error to figure out exactly which credentials need to be supplied.
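If you have several cookies to pass, building the -C string programmatically avoids format mistakes. A small sketch:

```ruby
# Build ab's -C argument from a hash of cookie names and values,
# joined in the name=value;name=value format ab expects.
cookies = { "my_cookie" => "pizza", "another_cookie" => "log_me_in" }
cookie_flag = cookies.map { |name, value| "#{name}=#{value}" }.join(";")

puts %(ab -n 1 -c 1 -C "#{cookie_flag}" http://localhost:3000/)
```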

Optional: Plotting Data

You can use D3 to plot the data in CSV files (-e), or gnuplot to plot the data in tab-delimited files (-g) if you’re happier on the command line.

For Further Reading

  • Check out JMeter, also from Apache, for more advanced test suites