Setting up Express with nginx and pm2

After reading this article, you will know how to set up a simple web application in Node using Express, keep it alive using pm2, and use nginx as a reverse proxy that also handles caching.

The key players

Before we start, let's get an overview of the key players:

  • Express - An extensible, minimalistic web framework for Node.js that deals with routing and templating
  • pm2 - A production process manager that ensures your application stays running, and restarts it if stopped
  • nginx - An HTTP and reverse proxy server that is great at serving static files, load balancing, and caching

The mechanism

When a client requests a page from our application, nginx passes the request on to Express. Express returns a page to nginx, which then sends it back to the client.

All the while, pm2 monitors our Express application to ensure it is running, and restarts it if stopped.

Node

Express is built on Node, so we must first install Node. Following the official installation instructions:

$ sudo apt-get update
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
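
You can verify the installation by printing the versions (on some setups the binary is called nodejs rather than node):

$ node -v
$ npm -v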

Express

First, let's set up Express. Express provides a generator right off the bat, which we can use to create a boilerplate application.

We install the Express generator tool globally:

$ npm install express-generator -g

This allows us to use the command express [name] to generate a boilerplate application.

$ express myapp

This will give us the following structure:

create : myapp
create : myapp/package.json
create : myapp/app.js
create : myapp/public
create : myapp/public/javascripts
create : myapp/public/images
create : myapp/public/stylesheets
create : myapp/public/stylesheets/style.css
create : myapp/routes
create : myapp/routes/index.js
create : myapp/routes/users.js
create : myapp/views
create : myapp/views/index.jade
create : myapp/views/layout.jade
create : myapp/views/error.jade
create : myapp/bin
create : myapp/bin/www

install dependencies:
    $ cd myapp && npm install

run the app:
    $ DEBUG=myapp:* ./bin/www

This creates all the files we need for the boilerplate application. However, the Node modules on which Express is built are not included, because:

  1. A newer, compatible version might have been released that fixes bugs, so we should always fetch the latest compatible version
  2. Everything in the project's repository should be the project's own code, not someone else's
  3. It keeps the download size small

In Node, every required package is listed inside a special myapp/package.json file. So before the application will run, we must install these modules:

$ cd myapp && npm install
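
For reference, the generated myapp/package.json looks something like this (trimmed; the exact packages and version numbers depend on the express-generator release):

{
  "name": "myapp",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "4.x",
    "jade": "1.x",
    "morgan": "1.x"
  }
}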

Then you can run your application using:

$ DEBUG=myapp:* ./bin/www
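
If your generator version also added a start script to package.json (as sketched above), you can start the same server, just without the debug output, with:

$ npm start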

If you navigate to http://[YOUR IP]:3000/ (replace [YOUR IP] with your server's public IP address, e.g. 128.199.243.217), you'll see a sample page with the text 'Express / Welcome to Express' on it. In the console, you'll also see something like this:

GET / 200 308.673 ms - 170
GET /stylesheets/style.css 200 7.621 ms - 110

This shows you the server is receiving the request and sending the file back with a 200 status code.

Next Steps

So your application is running. We will leave routing and templating for another article, and keep our focus on how to integrate pm2 and nginx with Express.

Next, we will show you how to keep the application running, even when you're logged out or the application stops because of an error.

pm2

First let's install pm2:

$ npm install -g pm2

Instead of running the command node, we simply run the command pm2 start. So instead of

$ node bin/www

We'd run

$ pm2 start bin/www

This will be printed to stdout:

[PM2] Process bin/www launched
┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid   │ status │ restart │ uptime │ memory      │ watching │
├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ www      │ 1  │ fork │ 12402 │ online │ 0       │ 0s     │ 15.488 MB   │ disabled │
└──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘

And again, if you go to http://[YOUR IP]:3000/, you'll see the same application running. Only this time it will stay running even if you log out, and if the application has a temporary error, pm2 will restart it.

In the table above, you can see under the column header 'restart', the number of times the application has been restarted. You can run pm2 list later on to see if the application has been restarted since the last time you checked on it.

You can also get the real-time status of the application, specifically the memory usage, by running pm2 monit.
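
A few other pm2 commands come in handy when managing the process (using the process name www from the table above):

$ pm2 logs www      # tail the application's log output
$ pm2 restart www   # restart the process manually
$ pm2 stop www      # stop the process without removing it from the list
$ pm2 delete www    # remove the process from pm2 entirely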

Reboots

pm2 will keep applications running as long as it is running itself. If pm2 itself is terminated, for example by a reboot, then we must also configure pm2 so that it starts back up automatically.

pm2 can generate a script that you can run to ensure pm2 starts after a restart.

$ pm2 startup
[PM2] You have to run this command as root. Execute the following command:
      sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u daniel

The -u daniel indicates the user running the pm2 daemon, in this case, it's daniel.

# sudo env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u daniel
[PM2] Spawning PM2 daemon
[PM2] PM2 Successfully daemonized
[PM2] Generating system init script in /etc/init.d/pm2-init.sh
[PM2] Making script booting at startup...
[PM2] -ubuntu- Using the command:
      su -c "chmod +x /etc/init.d/pm2-init.sh && update-rc.d pm2-init.sh defaults"

[PM2] Done.
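
Depending on your pm2 version, you should also save the current process list so that pm2 knows which applications to resurrect after the reboot:

$ pm2 save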

nginx

Security

So far, we have an Express application running, and it will keep running. The only thing left to do is make it available to the rest of the world.

Now you may say: it's running on port 3000, so we can just open a browser, type the server's IP address followed by :3000 (e.g. 128.199.243.217:3000), and reach the application.

This is certainly doable, and the simplest solution; however, opening up many ports exposes the services listening on those ports to potential exploits. It is generally bad practice, and you should use a firewall such as ufw (Uncomplicated Firewall) to limit the number of publicly accessible ports.
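
For example, with ufw you could allow only SSH and HTTP and deny everything else (a minimal sketch; adjust the ports to your setup, and allow SSH first so you don't lock yourself out):

$ sudo ufw allow 22
$ sudo ufw allow 80
$ sudo ufw enable
$ sudo ufw status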

So in our case, we want to open only port 80 - the standard port for HTTP requests - and route requests to our Express application if the request URL matches. This type of server is known as a reverse proxy.

A proxy is a server used by the client to indirectly access other servers. The server that actually serves the content sees the proxy as its client and is oblivious to the original client.

A reverse proxy is the other way around. The application server (our Express application) responds to the client through our nginx web server, but the client has no idea this was done. In the client's view, the response came directly from the web server. Hence the name - reverse proxy.

Installation

First, let's install nginx. As there are different instructions for different operating systems, I'd simply refer you to the official documentation.

For Ubuntu, run the following:

$ sudo -s
$ nginx=stable # use nginx=development for latest development version
$ add-apt-repository ppa:nginx/$nginx
$ apt-get update
$ apt-get install nginx

Configuration

If you navigate to /etc/nginx/, you'll see some configuration files. There are also the sites-available and sites-enabled directories. If you cd sites-available/ && ls -ahl, you will see a file named default.

Every site you want to run behind nginx must have a file to configure it inside the sites-available directory. So let's copy the default file and modify it.

$ cp default example.com

If you take away all the comments, it should look like this:

server {
  listen 80 default_server;
  listen [::]:80 default_server;

  root /var/www/html;
  index index.html index.htm;

  server_name _;

  location / {
      try_files $uri $uri/ =404;
  }
}

We must then enable the site. sites-available stores configurations for all sites; sites-enabled stores configurations only for the sites that are actually served by nginx. To remain DRY, the files in sites-enabled are simply symbolic links to the corresponding files in the sites-available directory. So the next step for us is to enable the site.

$ ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

When a request for a page is made to nginx, nginx looks up the configuration files inside the sites-enabled directory, and checks each server {} block's server_name variable to see if it matches. If it does, it will use the rest of the configurations to figure out what to do with the request.

For example, nginx will look at the root property to determine the root directory for this site. It then handles the request as indicated by this block:

location / {
    try_files $uri $uri/ =404;
}

For all requests, it will try to fetch the resource as a file, then as a directory, and finally, if neither is found, return a 404 Not Found response.

For a normal plain HTML application not using Express, we must change root to the directory where the files live, and set server_name to the domain name that this configuration refers to.

So it'll look something like:


server {
  listen 80 default_server;
  listen [::]:80 default_server;

  root /srv/www/example.com/html;
  index index.html index.htm;

  server_name example.com www.example.com;

  location / {
      try_files $uri $uri/ =404;
  }
}

Reverse Proxy

However, our application runs on Express, so we don't want nginx to fetch plain files; instead, we want to pass the request headers and path from the client on to Express, and wait for Express to respond back to nginx.

In our case, we want to set up a reverse proxy to our application. To do this, we must remove these lines:

root /srv/www/example.com/html;
index index.html index.htm;

and use the following configuration for the location / block:


server {
  listen 80 default_server;
  listen [::]:80 default_server;

  server_name example.com www.example.com;

  location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache one;
        proxy_cache_key sfs$request_uri$scheme;
    }
}

We removed the line try_files $uri $uri/ =404; because we no longer want nginx to fetch the files directly; instead, we use nginx as a reverse proxy.

The most relevant settings here for you are the server_name, which should be set to your domain name, and proxy_pass - make sure that points to where the Express application is running.

The lines:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

are required if you work with WebSockets, which use HTTP for the initial handshake but must then 'upgrade' to a WebSocket connection.
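
Note that proxy_cache one; and proxy_cache_key refer to a cache zone named one, which must be defined elsewhere (typically with a proxy_cache_path directive in the http {} block of /etc/nginx/nginx.conf), otherwise nginx will refuse to start. A minimal sketch, where the path and sizes are only examples:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m max_size=100m inactive=60m;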

Static files

Express stores static files such as images, stylesheets and JavaScript files inside the ./public/ directory. These files often do not change over time, so enabling caching for them will take some load off our server.

We want to intercept any requests for images, stylesheets, etc. and serve them directly using nginx. To do this, add this block above the location / block.

location ~ ^/(assets/|images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) {
    root /srv/www/yourapp/public/;
    access_log off;
    expires 24h;
}

Here, we set the Expires header to 24 hours; you can customize this to your own needs. If you tend to update your stylesheets a lot, set a lower number; on the other hand, if you know your site is pretty much done and you won't work on it for a while, you can set a higher number. But be careful if you do: the client will not even query the server before that time is up, it will simply use its local copy. So remember to lower this number well before you make any changes, to ensure the changes propagate quickly.

You can find the valid values for the expires directive in the nginx documentation.

Restarting nginx

Now that we have changed the settings, we must restart nginx for them to take effect.

$ /etc/init.d/nginx restart
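
If the restart fails, you can ask nginx to check the configuration files for syntax errors:

$ sudo nginx -t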

Now, you should be able to see your application running at yourdomain.com.
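
To check that the static-file caching is in effect, you can inspect the response headers of one of the files served from ./public/ (replace the domain with your own):

$ curl -I http://example.com/stylesheets/style.css

The response should include an Expires header roughly 24 hours in the future and a matching Cache-Control: max-age value.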

Further Reading

Load balancing

In the next article, we will explore load balancing a Node application. Load balancing is important to ensure clients can always reach a running application. If too many requests hit one application instance, the load balancer directs further requests to an identical instance (or another process) that can serve them.
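
As a small preview, nginx can balance between several identical instances of the application using an upstream block, which proxy_pass then points at (a sketch; the ports are only examples):

upstream myapp_servers {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    ...

    location / {
        proxy_pass http://myapp_servers;
        ...
    }
}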
