nginx Load Balancer Configuration
With a better understanding of multi-machine Vagrant, I can create multiple virtual machines and network them together with little effort.
One thing I want to learn to configure: an nginx load balancer. I have seen load balancers scale out applications; how can I configure one from a fresh install?
Goal
To configure an nginx load balancer that routes requests to machines on a private network based on the endpoint used:
- 1x load balancer that is publicly accessible
- 5x worker nodes connected to the load balancer with a private network
I will go through step by step, verifying configuration along the way.
Requirements
To follow along, install the following software on your system: Vagrant and VirtualBox.
Regular readers will know I use these often. If you have problems, please contact me.
Making a Balancer
Let’s start with a single balancer virtual machine that is reachable from the outside through a forwarded port and has its own private network address.
I will use this Vagrantfile to build the balancer:
BOX_IMAGE = "ubuntu/xenial64"
BALANCER_IP_ADDRESS = "192.168.100.10"
BALANCER_PORT = 8081

Vagrant.configure("2") do |config|
  config.vm.box = BOX_IMAGE

  config.vm.define "balancer" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.hostname = "balancer"
    subconfig.vm.network "forwarded_port", guest: 80, host: BALANCER_PORT
    subconfig.vm.network :private_network, ip: BALANCER_IP_ADDRESS
    subconfig.vm.provision "shell", inline: <<-SHELL
      apt-get update
      apt-get install -y nginx
      echo "#{subconfig.vm.hostname}" > /var/www/html/index.nginx-debian.html
      service nginx start
    SHELL
  end
end

Highlights from the file:
- Global variables: BOX_IMAGE, BALANCER_IP_ADDRESS, and BALANCER_PORT define which Vagrant box image, internal IP address, and port to use in a single section of the file.
- The provisioning step installs nginx and replaces the default home page with the virtual machine’s hostname. (Keep it simple!)
Starting things up to test:
$ vagrant up balancer
$ curl localhost:8081
balancer

So far, so good.
Making Workers
A single load balancer is pretty boring. We need workers to balance the load!
First, extend the Vagrantfile by adding a section to create the workers:
BOX_IMAGE = "ubuntu/xenial64"
BALANCER_IP_ADDRESS = "192.168.100.10"
BALANCER_PORT = 8081
WORKER_COUNT = 5

Vagrant.configure("2") do |config|
  config.vm.box = BOX_IMAGE

  config.vm.define "balancer" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.hostname = "balancer"
    subconfig.vm.network "forwarded_port", guest: 80, host: BALANCER_PORT
    subconfig.vm.network :private_network, ip: BALANCER_IP_ADDRESS
    subconfig.vm.provision "shell", inline: <<-SHELL
      apt-get update
      apt-get install -y nginx
      echo "#{subconfig.vm.hostname}" > /var/www/html/index.nginx-debian.html
      service nginx start
    SHELL
  end

  (1..WORKER_COUNT).each do |worker_count|
    config.vm.define "worker#{worker_count}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.hostname = "worker#{worker_count}"
      subconfig.vm.network "forwarded_port", guest: 80, host: (BALANCER_PORT + worker_count)
      subconfig.vm.network :private_network, ip: "192.168.100.#{10 + worker_count}"
      subconfig.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y nginx
        echo "#{subconfig.vm.hostname}" > /var/www/html/index.nginx-debian.html
        service nginx start
      SHELL
    end
  end

  config.vm.provider "virtualbox" do |vb|
    # Customize the amount of memory on the VM:
    vb.memory = "512"
  end
end

Highlights from the addition:
- The WORKER_COUNT variable specifies the number of workers to create.
- Provisioning each worker is the same: install nginx and replace the default nginx page.
- The original specification required workers to be accessible only on the private network, but I keep an open port so we can verify connectivity before closing it.
Let’s bring up each worker and verify accessibility:
$ vagrant halt
...
$ vagrant up
...
$ for i in {1..6}; do curl localhost:808$i; done
balancer
worker1
worker2
worker3
worker4
worker5

Everything works!
Balancer - Worker Connections
Each virtual machine is up; let’s verify that each worker is accessible from the balancer:
$ vagrant ssh balancer -c 'curl 192.168.100.11; curl 192.168.100.12; curl 192.168.100.13; curl 192.168.100.14; curl 192.168.100.15;'
worker1
worker2
worker3
worker4
worker5

Up to this point, we have a balancer and five workers operational. Each is accessible via a forwarded port, returns a page with its hostname on it, and is connected to the balancer.
Modifying nginx
To make nginx a load balancer, we have to update its current configuration with two new directives: upstream and proxy_pass.
Change the current default nginx config in /etc/nginx/sites-available/default to the following content:
nginx Configuration
upstream backend {
    server 192.168.100.11;
    server 192.168.100.12;
    server 192.168.100.13;
    server 192.168.100.14;
    server 192.168.100.15;
}

upstream even {
    server 192.168.100.12;
    server 192.168.100.14;
}

upstream odd {
    server 192.168.100.11;
    server 192.168.100.13;
    server 192.168.100.15;
}

server {
    listen 80;

    location /odd/ {
        rewrite ^/odd(.*) /$1 break;
        proxy_pass http://odd;
    }

    location /even/ {
        rewrite ^/even(.*) /$1 break;
        proxy_pass http://even;
    }

    location /worker {
        rewrite ^/worker(.*) /$1 break;
        proxy_pass http://backend;
    }
}

Highlights from the file:
| Directive | Purpose |
| --- | --- |
| upstream | specify a server group |
| backend, even, odd | labels for the different upstream server groups |
| location | specifies the endpoint to listen to (i.e. localhost:8081/worker) |
| rewrite | have nginx rewrite the URL before passing it onto the upstream server; in this case, remove /worker, /odd, or /even |
| proxy_pass | pass the incoming request to the specified upstream server |
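The config above relies on nginx’s default round-robin strategy: each request goes to the next server in the group. The upstream block also supports other balancing behaviors; here is a sketch of a few common variations (none of these are used in this article’s setup, and the addresses are simply the worker IPs from above):

```nginx
upstream backend {
    least_conn;                      # pick the server with the fewest active connections
    server 192.168.100.11 weight=2;  # receives roughly twice the share of requests
    server 192.168.100.12 max_fails=3 fail_timeout=30s;  # temporarily removed after repeated failures
    server 192.168.100.13 backup;    # only used when the other servers are unavailable
}
```

All of these directives come from the stock nginx upstream module, so they can be mixed into the configuration above without installing anything extra.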
Testing Load Balancer
Run sudo nginx -t inside the balancer virtual machine to test the current configuration
and fix any errors.
Then restart the nginx server with the new configuration: sudo service nginx restart.
Or run these commands from the host:
$ vagrant ssh balancer -c 'sudo nginx -t'
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Connection to 127.0.0.1 closed.
$ vagrant ssh balancer -c 'sudo service nginx restart'
Connection to 127.0.0.1 closed.

On the command line, this is a quick way to verify:
For any worker:
$ for i in {1..10}; do curl localhost:8081/worker; done
worker2
worker3
worker4
worker5
worker1
worker2
worker3
worker4
worker5
worker1

Just odd numbered workers:
$ for i in {1..10}; do curl localhost:8081/odd; done
worker1
worker3
worker5
worker1
worker3
worker5
worker1
worker3
worker5
worker1

And finally, even numbered workers:
$ for i in {1..10}; do curl localhost:8081/even; done
worker2
worker4
worker2
worker4
worker2
worker4
worker2
worker4
worker2
worker4

You can also test this using a browser.
Results
With the above configuration, this will happen:
- Loading localhost:8081 will give the default web page for the balancer, which returns ‘balancer’.
- Loading localhost:8081/worker will pass the request to workers one through five when loading multiple times in a row.
- Loading localhost:8081/odd will pass the request to odd numbered workers (worker1, worker3, worker5) when loading multiple times in a row.
- Loading localhost:8081/even will pass the request to even numbered workers (worker2, worker4) when loading multiple times in a row.
So, according to the original specifications, this configuration has:
- 1x load balancer that is publicly accessible through port 8081 on localhost.
- 5x worker nodes connected to the load balancer with a private network, IP address range: 192.168.100.11-192.168.100.15.
- Any worker is available using the localhost:8081/worker balancer endpoint.
- Odd/even workers are available using the localhost:8081/[odd | even] balancer endpoint.
The most important part of this article is the nginx configuration! This is the configuration that changes nginx from a static webserver into a load balancer.
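Stripped to its core, that configuration is just two pieces: a named upstream group and a location that proxies to it. A minimal sketch of the pattern, using a hypothetical group name app:

```nginx
upstream app {
    server 192.168.100.11;
    server 192.168.100.12;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;  # round-robin across the group by default
    }
}
```

Everything else in this article’s config (the odd/even groups and the rewrite rules) is layered on top of this same pattern.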
Resources
The following are resources I used in developing this article: