Arkansas Tech University: Load Balancing with Nginx

Nginx is a popular web server, load balancer, and reverse proxy. It has performance advantages over Apache, particularly as traffic volume grows, but it is somewhat less flexible and is not generally used in shared hosting environments (i.e., it performs best when the server is hosting a single web site). For this lab, you'll set up Nginx on some Ubuntu VMs and configure load balancing between the machines. Ubuntu is used because it's the best-documented of the platform choices; there's nothing magic about it other than a large user base. You'll be given access to the VMs; you only need to install Nginx and then configure the installs.

To get started, make sure that your VMs are up to date. Run 'sudo apt-get update' to refresh the package index. If the update shows packages with pending upgrades, use 'sudo apt-get upgrade' to actually install them.

Once the VMs are up to date, you can install the Nginx software. On Ubuntu, simply issue 'sudo apt-get install nginx' and answer yes when it asks if you want to proceed. The install puts its files in the /etc/nginx directory. The configuration files are in here, along with other pieces, such as the mechanism for making sites available for serving. Different Linux distributions have slightly different methods for this; the Ubuntu convention (shared with Debian in general, as Ubuntu is a Debian derivative) is to have two directories: /etc/nginx/sites-available and /etc/nginx/sites-enabled. The approach is to put the definition of how the site is to be offered (the directory the files are in, any directives about how the server serves them, etc.) in /etc/nginx/sites-available and then put a symbolic link to that file in /etc/nginx/sites-enabled. This lets you turn sites on and off by adding and deleting the links in the sites-enabled directory. To enable a site, use the command 'sudo ln -s /etc/nginx/sites-available/sitename /etc/nginx/sites-enabled/sitename'.
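If you want to get comfortable with the enable/disable mechanic before touching the real server, you can try it in a scratch directory. This sketch uses /tmp/nginx-demo and a hypothetical site named "mysite" instead of the real /etc/nginx tree, so it runs without sudo:

```shell
# Recreate the two directories in /tmp and link a dummy site definition.
# On a real server these same commands target /etc/nginx/... and need sudo.
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled

# Stand-in for a site definition file (hypothetical site name "mysite").
echo "server { listen 80; }" > /tmp/nginx-demo/sites-available/mysite

# Enable: link the definition into sites-enabled (-f makes reruns safe).
ln -sf /tmp/nginx-demo/sites-available/mysite /tmp/nginx-demo/sites-enabled/mysite

# Disable: remove only the link; the definition file itself stays put.
rm /tmp/nginx-demo/sites-enabled/mysite
```

The same two commands (ln -s to enable, rm to disable) are what you run against the real sites-available and sites-enabled directories, followed by a restart of nginx.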

Go to the /etc/nginx directory and make sure that the files are there, then restart nginx with 'sudo systemctl restart nginx' to make sure it's up and running. You should now be able to point a web browser at your server's IP address (run 'ip addr show' if you aren't sure what it is; the older 'ifconfig' also works if the net-tools package is installed) and get the default landing page.

To get nginx up and running as a load balancer, you need to create a configuration file that tells it where the nodes to be balanced are. Create this file at /etc/nginx/conf.d/load-balancer.conf using a text editor (suggested command: 'sudo nano /etc/nginx/conf.d/load-balancer.conf'). In this file, there are two sections that have to be defined: upstream and server. Note that you only do this part on the server that is doing the load balancing. On the other servers (i.e., the servers that are going to actually service web requests), you just set up the website to be offered. The load-balancing server is the one that receives all incoming traffic and redirects it to the other servers for handling.

In load-balancer.conf, the upstream section defines the pool of servers that traffic can be directed to, and the server section tells nginx to redirect incoming requests into that pool. Your file should look like this:

    # /etc/nginx/conf.d/load-balancer.conf
    # Files in conf.d are already included inside nginx's http block,
    # so do not add an http { } wrapper of your own here.

    upstream cluster1 {            # name the group whatever you like (cluster1, site1, etc.)
        server 192.0.2.11;         # the IPs of the other servers in your group
        server 192.0.2.12;         # (192.0.2.x are placeholders; use your VMs' real addresses)
        # ...as many other server directives as you need
    }

    server {
        listen 80;                 # or any other port if you want to respond on something other than 80

        location / {
            proxy_pass http://cluster1;   # must match the upstream name above
        }
    }
Save and exit the file (Ctrl-X if you’re using nano). Then turn off the default configuration site by either deleting or renaming /etc/nginx/sites-enabled/default. Restart the service again (sudo systemctl restart nginx) and see if you get a website when you go to the IP address of the load balancer server. If you do, then it’s working. If you don’t, run 'sudo nginx -t' to check the configuration for syntax errors, and look in /var/log/nginx/error.log for details.

The default load balancing method in nginx is round robin, which is simplistic but easy to deal with. As discussed in class, this may or may not be the best choice depending on the characteristics of your workload. Other methods are available in nginx; some of them only work in the paid version (NGINX Plus).

The easiest extension of the round robin method is to add weighting. To do this, you simply add a weight to each server line in your upstream block, so that an example upstream might look like:

    upstream example {
        server 192.0.2.11 weight=3;    # placeholder IPs; use your servers' real addresses
        server 192.0.2.12 weight=2;
        server 192.0.2.13;             # no weight specified, so it defaults to 1
    }
In this example, server 1 gets three times as many requests as server 3, and server 2 gets twice as many. If you don't specify a weight, it's assumed to be 1: out of every six requests, the first server handles three, the second two, and the third one.

Another option is to send each request to the server currently handling the fewest connections. This is turned on simply by adding least_conn; to the upstream definition, above the server lines, such that an example would look like:

    upstream example {
        least_conn;                    # pick the server with the fewest active connections
        server 192.0.2.11;             # placeholder IPs; use your servers' real addresses
        server 192.0.2.12;
    }
None of these approaches works if clients need a stable connection to one particular server in order to maintain state. In the free version, the only directive that accomplishes this is IP hashing, which assigns all incoming requests from a given client IP (based on its hashed value) to the same server. To use it, put ip_hash; in the same spot that you put least_conn; in the previous example.
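Following the same pattern as the previous examples, a session-sticky upstream might look like this (the 192.0.2.x addresses are placeholders for your backend VMs' real IPs):

```nginx
upstream example {
    ip_hash;                       # requests from the same client IP always go to the same server
    server 192.0.2.11;             # placeholder IPs; use your servers' real addresses
    server 192.0.2.12;
}
```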

The paid version has additional options for more advanced load balancing configurations, but the methods above should be sufficient for most websites.

REQUIRED: Please provide the IP address of the VM you set up as the load balancer, and the IPs of the two backing VMs that are serving web pages.


