tutorial

Load test with CPload

When building large, complex infrastructures it becomes harder to validate whether the system you’ve built can deliver the performance you need. Customer demand for ever faster websites grows by the day, so how do you make sure you can handle the next big sale or event on your web platform? I use a load generator that can replay access logs and slowly ramp up traffic to replicate a gradual or sudden inrush of visitors.
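The excerpt does not show how CPload itself is invoked; as a rough illustration of the same idea, here is a minimal bash sketch that replays request paths from an access log with curl and ramps up concurrency in steps. The log path, target host, and ramp values are placeholders of my own, not taken from the article.

```bash
#!/bin/bash
# Hypothetical replay script: not CPload itself, just the same idea in plain bash.
ACCESS_LOG="/var/log/nginx/access.log"   # assumed log location
TARGET="https://staging.example.com"     # assumed test target

# Extract the request paths (7th field of the common/combined log format).
awk '{print $7}' "$ACCESS_LOG" > /tmp/replay-urls.txt

# Ramp up: replay the same URL list with increasing parallelism.
for CONCURRENCY in 1 5 10 25 50; do
  echo "Replaying with concurrency $CONCURRENCY"
  xargs -P "$CONCURRENCY" -I {} \
    curl -s -o /dev/null -w "%{http_code} %{time_total}s {}\n" "$TARGET{}" \
    < /tmp/replay-urls.txt
  sleep 10   # give the platform a moment between steps
done
```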

Update forked git repo from original source

1. Clone your fork: `git clone git@github.com:YOUR-USERNAME/YOUR-FORKED-REPO.git`
2. Add a remote for the original repository in your forked repository: `cd into/cloned/fork-repo`, then `git remote add upstream git://github.com/ORIGINAL-DEV-USERNAME/REPO-YOU-FORKED-FROM.git` and `git fetch upstream`
3. Update your fork from the original repo to keep up with its changes: `git pull upstream master` (the whole flow is collected in the sketch below)

Original post
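For convenience, the same steps can be run end to end. The sketch below uses the same placeholder repo names; the final push back to your fork is an extra step that is not shown in the excerpt above.

```bash
# The three steps above in one go. Repo names are the same placeholders;
# the final push is an assumed extra step, not part of the excerpt.
git clone git@github.com:YOUR-USERNAME/YOUR-FORKED-REPO.git
cd YOUR-FORKED-REPO
git remote add upstream git://github.com/ORIGINAL-DEV-USERNAME/REPO-YOU-FORKED-FROM.git
git fetch upstream
git pull upstream master   # merge the original repo's changes into your local master
git push origin master     # publish the merged changes to your fork on GitHub
```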

Block tor traffic in cloudflare firewall

![tor logo]({filename}/static/images/tor.png) If you don’t want to allow access to your server through the Tor network, you can ask nicely or just add every malicious-looking client to a list. I tried to come up with a better solution. I started with a script that blocked incoming connections on a load balancer (it can also be used on a web server). Then I realized that if you put Cloudflare in front of that, the TCP connections come from Cloudflare and not from the Tor endpoints.
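The excerpt cuts off before the article’s actual fix. One way to act on this observation is to block the addresses at Cloudflare’s edge instead of on the load balancer; below is a minimal sketch using Cloudflare’s IP Access Rules API via curl. The zone ID, API token, and example address are placeholders, and the endpoint and payload are assumptions to verify against Cloudflare’s API documentation rather than the article itself.

```bash
#!/bin/bash
# Sketch only: block one IP at the Cloudflare edge via the IP Access Rules API.
# ZONE_ID and the API token are placeholders; the endpoint/payload are assumed,
# not taken from the article.
ZONE_ID="your-zone-id"
CF_API_TOKEN="your-api-token"
IP_TO_BLOCK="198.51.100.1"   # example address (TEST-NET-2)

curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/access_rules/rules" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"mode\":\"block\",\"configuration\":{\"target\":\"ip\",\"value\":\"$IP_TO_BLOCK\"},\"notes\":\"tor endpoint\"}"
```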

Block almost all tor traffic to your server

![tor logo]({filename}/static/images/tor.png) If you don’t want to allow access to your server through the Tor network, you can ask nicely or just add every malicious-looking client to a list. I tried to come up with a better solution. This script takes the known Tor endpoints from torproject.org and adds them to an ipset list. The ipset is then dropped with iptables. #!/bin/bash echo "Tor endpoint list loading" TORLIST=$(curl -s https://check.
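The script is cut off in this excerpt; here is a hedged reconstruction of the ipset + iptables approach it describes. The exit-list URL, set name, and parsing details are my assumptions, not the article’s exact script.

```bash
#!/bin/bash
# Hedged sketch of the described approach: fetch known Tor endpoints and drop them.
# The list URL and set name are assumptions; the original script is truncated above.
echo "Tor endpoint list loading"
TORLIST=$(curl -s "https://check.torproject.org/torbulkexitlist")   # assumed list URL

# Create (or reuse) the ipset that will hold the Tor addresses, then empty it.
ipset create tor-nodes hash:ip -exist
ipset flush tor-nodes

# Add every address from the downloaded list to the set.
for ip in $TORLIST; do
  ipset add tor-nodes "$ip" -exist
done

# Drop all incoming traffic whose source address is in the set,
# adding the rule only if it is not already present.
iptables -C INPUT -m set --match-set tor-nodes src -j DROP 2>/dev/null \
  || iptables -I INPUT -m set --match-set tor-nodes src -j DROP
```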

Copy mysql database to dev location

![db copy]({filename}/static/images/db_copy.jpg) If you are just here for the script, copy it and have fun. If you want a bit more info, read on; this script is tweakable to some extent.

```bash
#! /bin/bash
TIMESTAMP=$(date +"%F")
BACKUP_DIR="/tmp/db-dump/$TIMESTAMP"
MYSQL_USER="root"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="password"
MYSQLDUMP=/usr/bin/mysqldump
mkdir -p "$BACKUP_DIR"
databases="live_db"
for db in $databases; do
  test_db=$db"_test"
  $MYSQLDUMP --user=$MYSQL_USER -p$MYSQL_PASSWORD $db | gzip > "$BACKUP_DIR/$db.sql.gz"
  mysql --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "DROP DATABASE IF EXISTS $test_db;"
  mysql --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "CREATE DATABASE $test_db;"
  zcat "$BACKUP_DIR/$db.
```
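The excerpt stops mid-line; presumably the dump is then restored into the freshly created test database. As a hedged guess at that last step (not the article’s verbatim code), the loop would typically finish along these lines:

```bash
# Hedged sketch of the likely remainder: restore the compressed dump into the
# _test database created above, then close the loop. Not the article's exact code.
  zcat "$BACKUP_DIR/$db.sql.gz" | mysql --user=$MYSQL_USER -p$MYSQL_PASSWORD $test_db
done
```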

Let's Encrypt

![Let’s encrypt]({filename}/static/images/letsencrypt.svg) This article is all about SSL cert signing, and automating it, with Let’s Encrypt. I only cover an nginx-based setup, and it requires root access and knowledge of basic nginx vhost configuration. Let’s start with what Let’s Encrypt is and how to use it. It is a public CA that signs certificates for free; there are some limits, but reaching them is not likely in most cases. To get a valid cert you use their own client application, which is developed in the open on this GitHub page.
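The excerpt ends before the actual steps. As a rough illustration only (the article may well use the original letsencrypt client instead, and the domain and webroot path here are placeholders), obtaining and renewing a certificate with the certbot client in webroot mode looks roughly like this:

```bash
# Hedged sketch using certbot's webroot plugin; domain and paths are placeholders
# of my own, and the article itself may use a different client or method.

# Obtain a certificate: certbot writes the ACME challenge file under the webroot,
# which the nginx vhost must serve at /.well-known/acme-challenge/.
sudo certbot certonly --webroot \
  -w /var/www/example.com \
  -d example.com -d www.example.com

# Certificates land under /etc/letsencrypt/live/example.com/ and can be referenced
# from the nginx vhost via ssl_certificate / ssl_certificate_key.

# Renewal can be automated, e.g. from cron or a systemd timer:
sudo certbot renew --quiet
```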