Load test with CPload

When building large, complex infrastructures it becomes harder to validate that the system you’ve built can deliver the performance you need. Customer demand for ever faster websites grows by the day, so how do you make sure you can handle the next big sale or event on your web platform? I use a load generator that can replay access logs and slowly ramp up traffic to replicate a gradual or sudden inrush of visitors. Beyond the basics, there are often URIs you’d like to hit with a bit more precision than a plain log replay gives you. That is where the CPload tool comes in: it has a Python config file where you can run custom Python code / HTTP requests whenever specific URIs pass through the log.

Some of the cases where you might want custom HTTP requests in your load test:

  • Steps in an order process where your end user has session data attached to their requests.
  • A date that only makes sense in the future can automatically be replaced with a future date, so old log entries don’t replay stale dates.
  • A set of backends you don’t want to hit in your test, so you exclude them or alter the requests that would normally reach them (see the sketch after this list).
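
That last case isn’t covered by the example filters later in this post, so here is a hedged sketch of what such a filter could look like. The (http_pool, url) signature and the send_request helper are cpload’s filter convention (shown in detail further down); the /reporting/ path is made up purely for illustration.

# Sketch only: drop requests that would hit a backend you want to keep out of the test.
# "/reporting/" is a hypothetical path, not something from the cpload repo.
def skip_reporting(http_pool, url):
    """Never hit the reporting backend"""
    # not calling send_request() simply drops the request
    return

# mapped in the filters dict like the examples further down:
# filters = {'/reporting/': skip_reporting, '': send_request}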

Use cases

  • During platform migration to validate stability before switchover.
  • In the application testing suite, to validate efficiency.
  • While rebuilding or splitting applications into smaller components, to check that performance isn’t degraded.
  • When evaluating any new infra components, use it to compare performance/price to the old situation.

I use it mainly for testing auto scaling configurations, to verify that the extra infrastructure spins up fast enough to handle the increase in traffic. It is also useful to apply a completely static traffic load and see how many instances you need to handle it; if that number increases between releases, have a look at where your code might be slowing things down and decide whether that is acceptable or should be tuned for more performance.

Why yet another tool

Other tools did not fit my needs, were extremely expensive, or had a way too broad feature set that I didn’t need. I wanted something that runs everywhere without too much hassle and meets the following requirements:

  • use access logs for traffic generation
  • change the fqdn to be able to hit a specific endpoint
  • gradual increase of the traffic
  • rewrite specific requests
  • custom pre script (build session)
  • respect cookies / sessions / tokens etc.
  • high throughput per cpu core #lean

Examples

Let’s take a look at some examples. First off we need a target: a simple HTTP 200 responder written in Python.

Clone the git repo

$ cd ~/git
$ git clone https://gitlab.com/cloud-people/cpload.git
$ cd cpload
$ tree
.
├── cpload.py
├── Dockerfile
├── examples
├── filters.py
├── LICENSE
├── logs.txt
├── README.md
└── requirements.txt

1 directory, 7 files
$ 

Get an HTTP 200 responder container running

$ docker run -d -p 8080:8080 docker.io/aapjeisbaas/hello-container
9ed24540565844f6fc2de96f07c886caaf4cc358fd730936f718fbde56bc849e
$ curl 127.0.0.1:8080/kjkjkjkj
Your path is: /kjkjkjkj
$ 
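
The hello-container image just echoes the request path back with an HTTP 200. If you’d rather run the responder without Docker, a minimal stand-in using only the Python standard library could look like this (an approximation of the image’s behaviour, not its actual source):

# approx_responder.py: rough stand-in for the hello-container image,
# echoing the request path back with an HTTP 200 (approximation, not the image's source).
from http.server import BaseHTTPRequestHandler, HTTPServer

class PathEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        body = ("Your path is: " + self.path + "\n").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), PathEcho).serve_forever()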

Install requirements and run our first test

$ pip install -r requirements.txt
...
...
$ python cpload.py --verbose --touri "http://127.0.0.1:8080" --ratemin 360 --ratemax 7200 --ramptime 0.1 --duration 0.2
Load test running at: 360 req/hour
200 http://127.0.0.1:8080/
200 http://127.0.0.1:8080/web/location/
200 http://127.0.0.1:8080/api/v1/location/?test=true
200 http://127.0.0.1:8080/web/location/nested/pages
200 http://127.0.0.1:8080/api/v1/date-sensitive/20200213/test
Load test running at: 873 req/hour
200 http://127.0.0.1:8080/
200 http://127.0.0.1:8080/api/v1/add/controlled/randomness/magic/test
200 http://127.0.0.1:8080/
200 http://127.0.0.1:8080/
200 http://127.0.0.1:8080/web/location/
200 http://127.0.0.1:8080/api/v1/location/?test=true
^C
Stopping load test at: 1272 req/hour

 $ 
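
The rates printed above start at --ratemin (360 req/hour) and climb towards --ratemax. How exactly cpload computes the ramp isn’t shown here; assuming a linear ramp and treating --ramptime and --duration as hours, the current rate could be approximated like this:

# Assumption, not taken from cpload's source: a linear ramp from --ratemin to
# --ratemax over --ramptime (interpreted here as hours, like --duration).
def current_rate(elapsed_hours, ratemin=360, ratemax=7200, ramptime=0.1):
    if elapsed_hours >= ramptime:
        return ratemax
    return ratemin + (ratemax - ratemin) * (elapsed_hours / ramptime)

# e.g. half a minute into the run: current_rate(0.5 / 60) is roughly 930 req/hour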

Now let’s add the example URL filters to edit some of the requests flowing through.

$ python cpload.py --verbose --touri "http://127.0.0.1:8080" --ratemin 360 --ratemax 7200 --ramptime 0.1 --duration 0.2 --filters filters.py
loaded filters:

 function      help text                   uri filter
------------  --------------------------  ------------
magic_edit    Select only magic products  /magic/
future        Always use future dates     api/v1/date
to_upper      URI TO UPPER                web
send_request  Send the get request 

Load test running at: 360 req/hour
200 http://127.0.0.1:8080/
Load test running at: 379 req/hour
200 HTTP://127.0.0.1:8080/WEB/LOCATION/
200 http://127.0.0.1:8080/api/v1/location/?test=true
200 HTTP://127.0.0.1:8080/WEB/LOCATION/NESTED/PAGES
200 http://127.0.0.1:8080/api/v1/date-sensitive/20200317/test
Load test running at: 873 req/hour
200 http://127.0.0.1:8080/
200 http://127.0.0.1:8080/api/v1/add/controlled/randomness/magic1/test
200 http://127.0.0.1:8080/
^C
Stopping load test at: 1082 req/hour 

 $  

The true power is in the filters

As you can see in the output, different requests were sent to the target even though both runs used the same source logs.txt file. The URL edits are done in the filters.py file; have a look at some of the examples in there:

print("loaded filters:")
# this file is included in the main cpload.py to add special sauce to your log replay.

# Here's the base function, what you do inside it is up to you.
#
# def uri_editor(http_pool, url):
#     send_request(http_pool, url)
#

# Here we select a random product from the magic list instead of using the one from the log.
# This might be helpful when you only have a small subset of the products in your tst / acc / mock environment.
magicList = ['magic1', 'product2', 'destination3', 'package4']
def magic_edit(http_pool, url):
    """Select only magic products"""
    magic = random.choice(magicList)
    url = str(args.touri) + "/api/v1/add/controlled/randomness/" + magic + "/test"
    send_request(http_pool, url)

# In this example we replace a URI that is date sensitive with a URL that always has a date in the future.
today = datetime.date.today()
def future(http_pool, url):
    """Always use future dates"""
    days = datetime.timedelta(days=random.randint(3, 20))
    date = today + days
    url = str(args.touri) + "/api/v1/date-sensitive/" + date.strftime("%Y%m%d") + "/test"
    send_request(http_pool, url)

# ALL CAPS FOR ADDED DRAMA
def to_upper(http_pool, url):
    """URI TO UPPER"""
    send_request(http_pool, str(url).upper())

# This is how the URI gets mapped to the correct editor function.
# The editor is picked by checking whether the key string appears in the URI; the first match is used.
filters = {
    '/magic/': magic_edit,
    'api/v1/date': future,
    'web': to_upper,
    '': send_request
}

# draw a pretty overview of the loaded filters
from tabulate import tabulate
table = [["function", "help text", "uri filter"]]
for filter in filters:
    table.append([filters[filter].__name__, filters[filter].__doc__, filter])
print("\n", tabulate(table,headers="firstrow"), "\n")

Replay your own logs

To be able to replay your own logs you need to clean them up and make them look like the example below:

base.uri/
base.uri/web/location/
base.uri/api/v1/location/?test=true
base.uri/web/location/nested/pages
base.uri/api/v1/date-sensitive/20200213/test
base.uri/
base.uri/api/v1/add/controlled/randomness/magic/test
base.uri/
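
The base.uri prefix is just a placeholder. Judging by the runs above, cpload swaps it for the --touri value before sending each request, roughly like this (an assumption about the internals, shown only to explain why the prefix is needed):

# Assumption about cpload's internals, for illustration only:
# the "base.uri" prefix of each log line is replaced with the --touri value.
def to_target(line, touri="http://127.0.0.1:8080"):
    return line.strip().replace("base.uri", touri, 1)

# to_target("base.uri/web/location/") -> "http://127.0.0.1:8080/web/location/"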

One-liners to generate this from access logs.

nginx

Raw log:

10.0.2.2 - - [03/Mar/2020:14:28:05 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:10 [error] 2#2: *1 open() "/usr/share/nginx/html/yolo" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /yolo HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:10 +0000] "GET /yolo HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:13 [error] 2#2: *1 open() "/usr/share/nginx/html/yolo/test" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /yolo/test HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:13 +0000] "GET /yolo/test HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:21 [error] 2#2: *1 open() "/usr/share/nginx/html/yolo/cloud-people" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /yolo/cloud-people HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:21 +0000] "GET /yolo/cloud-people HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:26 [error] 2#2: *1 "/usr/share/nginx/html/yolo/cloud/index.html" is not found (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /yolo/cloud/ HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:26 +0000] "GET /yolo/cloud/ HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:31 [error] 2#2: *1 open() "/usr/share/nginx/html/static" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /static HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:31 +0000] "GET /static HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:35 [error] 2#2: *1 open() "/usr/share/nginx/html/lalalala" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /lalalala HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:35 +0000] "GET /lalalala HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:28:41 [error] 2#2: *1 open() "/usr/share/nginx/html/robots.txt" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /robots.txt HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:28:41 +0000] "GET /robots.txt HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:28:54 +0000] "GET /?test=true HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:28:58 +0000] "GET /?test=fals HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:00 +0000] "GET /?test=false HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:08 +0000] "GET /?search=a HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:10 +0000] "GET /?search=ab HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:12 +0000] "GET /?search=abc HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:13 +0000] "GET /?search=abcd HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:15 +0000] "GET /?search=abcde HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
10.0.2.2 - - [03/Mar/2020:14:29:18 +0000] "GET /?search=abcdef HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
2020/03/03 14:29:28 [error] 2#2: *1 open() "/usr/share/nginx/html/result/76353220" failed (2: No such file or directory), client: 10.0.2.2, server: localhost, request: "GET /result/76353220 HTTP/1.1", host: "127.0.0.1:8080"
10.0.2.2 - - [03/Mar/2020:14:29:28 +0000] "GET /result/76353220 HTTP/1.1" 404 153 "-" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0" "-"
$ docker logs 39c50e2d31ee | grep -ve '\[error\]' | grep 'GET\|POST' | awk '{print "base.uri" $7}'
base.uri/
base.uri/yolo
base.uri/yolo/test
base.uri/yolo/cloud-people
base.uri/yolo/cloud/
base.uri/static
base.uri/lalalala
base.uri/robots.txt
base.uri/?test=true
base.uri/?test=fals
base.uri/?test=false
base.uri/?search=a
base.uri/?search=ab
base.uri/?search=abc
base.uri/?search=abcd
base.uri/?search=abcde
base.uri/?search=abcdef
base.uri/result/76353220
$ docker logs 39c50e2d31ee | grep -ve '\[error\]' | grep 'GET\|POST' | awk '{print "base.uri" $7}' > nginx-logs.txt

Now run the test with the new log file:

$ python cpload.py --verbose --touri "http://127.0.0.1:8080" --ratemin 360 --ratemax 7200 --ramptime 0.1 --duration 0.2 --filters filters.py --urlfile nginx-logs.txt
loaded filters:

 function      help text                   uri filter
------------  --------------------------  ------------
magic_edit    Select only magic products  /magic/
future        Always use future dates     api/v1/date
to_upper      URI TO UPPER                web
send_request  Send the get request 

Load test running at: 360 req/hour
200 http://127.0.0.1:8080/
404 http://127.0.0.1:8080/yolo
404 http://127.0.0.1:8080/yolo/test
404 http://127.0.0.1:8080/yolo/cloud-people
404 http://127.0.0.1:8080/yolo/cloud/
Load test running at: 873 req/hour
404 http://127.0.0.1:8080/static
404 http://127.0.0.1:8080/lalalala
404 http://127.0.0.1:8080/robots.txt
Load test running at: 1082 req/hour
200 http://127.0.0.1:8080/?test=true
200 http://127.0.0.1:8080/?test=fals
200 http://127.0.0.1:8080/?test=false
200 http://127.0.0.1:8080/?search=a
200 http://127.0.0.1:8080/?search=ab
200 http://127.0.0.1:8080/?search=abc
200 http://127.0.0.1:8080/?search=abcd
Load test running at: 1462 req/hour
200 http://127.0.0.1:8080/?search=abcde
200 http://127.0.0.1:8080/?search=abcdef
404 http://127.0.0.1:8080/result/76353220
...
...
...

httpd / apache

Raw log:

AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.0.2.100. Set the 'ServerName' directive globally to suppress this message
[Tue Mar 03 14:44:26.181291 2020] [mpm_event:notice] [pid 1:tid 140004244472960] AH00489: Apache/2.4.41 (Unix) configured -- resuming normal operations
[Tue Mar 03 14:44:26.183779 2020] [core:notice] [pid 1:tid 140004244472960] AH00094: Command line: 'httpd -D FOREGROUND'
10.0.2.2 - - [03/Mar/2020:14:44:46 +0000] "GET / HTTP/1.1" 200 45
10.0.2.2 - - [03/Mar/2020:14:44:55 +0000] "GET /yolo HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:02 +0000] "GET /yolo/test HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:07 +0000] "GET /yolo/cloud-people HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:12 +0000] "GET /yolo/cloud/ HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:16 +0000] "GET /static HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:20 +0000] "GET /lalalala HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:23 +0000] "GET /robots.txt HTTP/1.1" 404 196
10.0.2.2 - - [03/Mar/2020:14:45:27 +0000] "GET /?test=true HTTP/1.1" 200 45
10.0.2.2 - - [03/Mar/2020:14:45:30 +0000] "GET /?test=fals HTTP/1.1" 200 45
...
...
...

Transform it:

$ docker logs d5b6b17ba444 | grep 'GET\|POST' | awk '{print "base.uri" $7}'
base.uri/
base.uri/yolo
base.uri/yolo/test
base.uri/yolo/cloud-people
base.uri/yolo/cloud/
base.uri/static
base.uri/lalalala
base.uri/robots.txt
base.uri/?test=true
base.uri/?test=fals
base.uri/?test=false
base.uri/?search=a
base.uri/?search=ab
base.uri/?search=abc
base.uri/?search=abcd
base.uri/?search=abcde
base.uri/?search=abcdef
base.uri/result/76353220
base.uri/
base.uri/yolo
base.uri/yolo/test

AWS ALB

Here is an example that pulls in yesterday’s AWS ALB logs from your S3 logs bucket.

$ aws s3 --profile prd sync s3://your-logs-bucket/AWSLogs/youraccountnumber/elasticloadbalancing/yr-region-1/$(date --date yesterday "+%Y/%m/%d") . ; gzip -d *
...
...
...
$ # export all GET requests to your logs file
$ cat * | grep 'GET' |  awk '{print $14}' | sed 's/.*:443/base.uri/g' > alb-logs.txt
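
If you’d rather do this transformation in Python than in awk/sed (for instance to add extra filtering first), a rough equivalent could look like the sketch below; it assumes the standard ALB access log layout where the quoted request field holds "METHOD URL PROTOCOL".

# Rough Python equivalent of the awk/sed one-liner above (a sketch, assuming the
# standard ALB access log layout with a quoted "METHOD URL PROTOCOL" request field).
import re
import sys

request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[^"]*"')
for line in sys.stdin:
    match = request_re.search(line)
    if not match:
        continue
    # strip the scheme://host:port part and prefix base.uri, like the sed expression
    print(re.sub(r'^https?://[^/]+', 'base.uri', match.group(1)))

Feed it the unzipped log files, e.g. cat * | python alb_to_urls.py > alb-logs.txt (alb_to_urls.py being whatever you call the sketch above).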
