step 5: install and configure nginx

NGINX is much more than just a server like Apache, which is probably why it has gained so much market share over the last few years, if you trust the Netcraft server market share statistics.

NGINX has two advantages over Apache: first, it is super fast, and second, it consumes fewer resources than Apache. Those are two important wins in two areas that really matter when you have to decide which server to choose. Nginx is so fast because it is event driven. You may already have seen how javascript uses events and asynchronous loading of data. Nginx works in a similar way: instead of keeping a worker busy waiting for a response, it registers a callback, goes on to do other work, and handles the response as soon as the requested data is ready. This is why nginx can handle so many connections.

But as I said, NGINX is more than just a server. The powerful configuration mechanism lets you set up NGINX as a proxy server for one or multiple apache servers, use it as a load balancer for multiple node.js instances, or "just" use it as a normal server to host your php website, if a php website is what you want to host.
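
to give you an idea, here is a minimal sketch of what such a load balancer setup could look like (the backend name, the ports and the domain are just placeholders, not values used later in this guide):


# example: nginx as load balancer in front of two node.js instances
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # forward requests to one of the upstream node.js instances
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}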

remove apache:

in case you haven't removed apache yet, maybe because it was preinstalled on your server, and you don't need it anymore, you should do this now using this command:

# yum remove httpd httpd-devel httpd-tools

and also remove it from startup:

# chkconfig --del httpd

install nginx:

get the nginx.org repo to be able to use the nginx package installer:

# rpm -Uvh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm

to find the right packages for your server visit the nginx download page.

install nginx:

# yum install nginx

make sure nginx starts after server reboot:

# chkconfig --levels 235 nginx on
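
to verify that nginx was added to the runlevels you want, you can list its startup configuration; the output should show "on" for levels 2, 3 and 5:

# chkconfig --list nginx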

nginx configuration:

if you don't know where your nginx.conf is, use the "locate" tool

first update your files database

# updatedb

then use locate to find the file

# locate nginx.conf

edit the nginx configuration file using the vi editor:

# vi /etc/nginx/nginx.conf

now to edit the file press the "i" key to go into edit mode

change the values you want to change, use the arrows to navigate through the document

press the escape key to exit edit mode after you have made all your changes

type :x! to save and exit

or type :q! to quit without saving

the first change you could make is to deactivate the server tokens

when a user reaches a page not found error, and also in the headers of your pages, there will be information about the version of nginx you use. it's not a good idea to show the version number to anybody, for security reasons: a hacker who finds a security hole in a specific version of nginx could search for servers that use that version and try to hack them. by hiding the version number you make it harder to guess which version you are using

as we don't want anyone to know the exact version we use, we change the tokens line to this:

server_tokens off;
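
once nginx is running you can verify this by looking at the response headers, for example with curl (the domain is just a placeholder); with server tokens turned off the Server header should only say "nginx" without a version number:

# curl -I http://example.com/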

you can adjust the number of worker_processes and worker_connections

if your server will host a website that isn't very busy you don't need to change anything

but you could increase the number of worker processes, choosing a value between 1 and the total number of cores available on your server

to find out how many cores your server has, type this command:

# cat /proc/cpuinfo |grep processor

I got the following output, which means I have 4 real cores:


processor       : 0
processor       : 1
processor       : 2
processor       : 3

! don't count hyper threading cores as real cores. if, like me, you have a xeon processor with 8 cores where 4 are real and the other 4 come from hyperthreading, you can allocate a maximum of 4 cores in the nginx configuration.

to find out what the limit of maximum simultaneous clients is, nginx does this calculation: max_clients = worker_processes * worker_connections
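
for example, with the values used in the configuration shown further down (2 worker processes and 1024 worker connections), nginx would accept at most 2 * 1024 = 2048 simultaneous clients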

there is also an option called "worker_cpu_affinity" which lets you assign the worker processes to the cores you want. You can also assign one worker process to multiple cores. It is a bit tricky to find the right balance between the amount of connections you want to handle and the maximum available power of your server. My tip here: play around with the values and do performance tests to see what your server can handle.
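
here is a small sketch of what this could look like on a server with 4 cores and 2 worker processes (one bitmask per worker process, read from right to left; these values are just an illustration, not a recommendation):


# first worker may run on cores 0 and 1, second worker on cores 2 and 3
worker_processes 2;
worker_cpu_affinity 0011 1100;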

Also don't forget that you could tweak your keep alive value: by reducing it you may increase the amount of connections your server can handle over a period of time, but that's also something you should test. If keep alive is on, the server will keep a connection open for a while and send multiple files through that connection. If keep alive is off, each file will be sent through a new connection, which is slower, because you have to open and close lots of connections. On the other hand, keeping a connection open for much longer than is needed to transfer multiple files is also a waste of resources. Therefore it's up to you to do some tests, using a "real world" scenario, and check which values for worker processes, amount of connections and keep alive time are best for your setup.
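
as a sketch, lowering the keep alive time (and optionally limiting the number of requests served over one kept alive connection with the keepalive_requests directive) could look like this; the values are just an illustration:


keepalive_timeout 30;
keepalive_requests 100;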

To monitor the performance of your server you could use a tool like nagios, but you would have to install and configure it first, or you can use one of these two built in command line tools:

# watch free

free monitors free memory; to monitor memory, cpu and io, use the vmstat command:

# vmstat 1

this will output the vmstat values every second until you stop it

to increase the load on your server you could use Apache AB (apache benchmark) or an online service like loadimpact.com
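
a minimal apache benchmark run could look like this (1000 requests with 50 of them concurrent; the URL is just a placeholder); ideally run it from a different machine than the one you are testing, also because on CentOS the ab tool is part of the httpd-tools package we removed earlier:

# ab -n 1000 -c 50 http://example.com/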

here is what your nginx.conf may look like after you tweaked it:


user  nginx;
worker_processes 2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    server_tokens off;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    gzip_vary on;
    gzip_min_length  1100;
    gzip_comp_level 1;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_disable "msie6";
    gzip_types  text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
}

as you can see I have activated gzip and added some mime types of files I want to get served as gzipped versions to the clients; this will decrease the loading time of your web content
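
to check that gzip compression is actually working, you can request one of those files, tell the server you accept gzip and look only at the response headers; if compression is active they should contain "Content-Encoding: gzip" (the URL and file name are just placeholders):

# curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - http://example.com/css/style.css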

! when doing the configuration of nginx you may want to get some information about the modules and their options, those can be found in the nginx modules documentation wiki

adding a vhost configuration:

now we need to create an nginx vhost configuration file, like in apache; create a vhost for your domain:

# vi /etc/nginx/conf.d/my_first_vhost.conf

after tweaking your vhost file it might look like the one below. this is just an example, there are lots of other ways to write an nginx vhost.

ssl capable vhost:

In fact this isn't one vhost but three. I have a vhost for http connections that only redirects users to the ssl vhost.

The second vhost is the main vhost. It listens on port 443 because I deliver all my content ssl encrypted.

This is the basic ssl configuration for an nginx server, just add the correct path to your ssl files using the ssl_certificate_key and ssl_certificate directives. If you need help to get those files I recommend reading the documentation provided by your certificate reseller, I bought my certificate from Digicert and followed their recommendation for nginx that can be found here.


# ssl certificate options
ssl_certificate_key  /etc/nginx/www_chris_lu.key;
ssl_certificate  /etc/nginx/www_chris_lu.pem;
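
if you don't have the key and certificate files yet, the usual way is to create a private key and a certificate signing request with openssl and send the request to your certificate reseller (a sketch, the file names are just examples; follow your reseller's instructions for the exact parameters):

# openssl req -new -newkey rsa:2048 -nodes -keyout www_example_com.key -out www_example_com.csr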

But there is more: to ensure maximum security of your ssl connections you should enable forward secrecy, which means you should add to your configuration a selection of cipher suites to be used in order of preference. It's important that you add the following lines to your ssl configuration:


# hardening ssl security
ssl_prefer_server_ciphers On;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
# ssl cyphers suggested by ssllabs
ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;

For more information about hardening your server ssl security, I recommend reading this blog post.

After setting up ssl for your nginx server, use this web service from ssllabs.com to check the security grade you achieve; if it's not a grade A then there is probably something you can improve.

Here you can see the ssl test results for chris.lu.

If you have an ssl certificate enable spdy:

If you have an ssl certificate you should also enable SPDY (if you don't have an ssl certificate you can't use spdy). SPDY can greatly improve the performance of your website in browsers that support the SPDY protocol, like Firefox and Chrome. To enable spdy just add it to your listen directive like this:

listen 443 default_server ssl spdy;

adding subdomains to the nginx vhost configuration:

The third entry is just a static files subdomain. The only thing that really matters here is the server_name and the root option. You can of course add a lot more subdomains.

This is the complete vhost configuration file:


    # http://example.com
    server {

        listen 80;

        server_name example.com www.example.com;

        location / {
            rewrite ^/(.*)$ https://example.com/$1 permanent;
        }

    }

    #https://example.com
    server {

        listen 443 default_server ssl spdy;

        server_name example.com www.example.com;

        # ssl certificate options
        ssl_certificate_key  /etc/nginx/www_chris_lu.key;
        ssl_certificate  /etc/nginx/www_chris_lu.pem;

        # hardening ssl security
        ssl_prefer_server_ciphers On;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        # ssl cyphers suggested by ssllabs
        ssl_ciphers EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS;

        # send a header to the browser asking it to only accept content served over an ssl connection
        add_header Strict-Transport-Security max-age=31536000;

        # reduce SSL handshakes for processor performance, ten megabyte are enough for 40k sessions
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;

        # minimize time to first byte
        ssl_buffer_size 4k;

        # enable spdy header compression
        spdy_headers_comp 6;

        # keep connection alive
        keepalive_timeout 75s;
        spdy_keepalive_timeout 75s;

        # default charset
        charset utf-8;

        # log files locations
        access_log  /var/log/nginx/example_com.access.log  main;
        error_log /var/log/nginx/example_com.error.log error;

        # a redirect for clients trying to access the www version of my website to a non www version
        if ($host = 'www.example.com' ) {
            rewrite  ^/(.*)$  https://example.com/$1  permanent;
        }

        location / {
		
            root   /usr/share/nginx/html/my_website/public;
            index  index.php index.html index.htm;
			
            # try to find the file on the server, if it doesn't exist, redirect the request to my index.php (zend framework)
            # if the client for example requests a css file on my server deliver it, if the request is a route to some zend framework page then redirect the request to my index.php
            try_files $uri $uri/ /index.php?$args;

            # this module provides automatic directory listings, by default we want this to be turned off
            autoindex off;

            # disable access to some types of files
            location ~* \.(htaccess|htpasswd|ini|tmx|xml|log|sh|cgi)$ {
                deny all;
            }

            # set expiration for the following extensions
            location ~* \.(?:css|js|jpeg|jpg|gif|png|ico|mp3|mp4|ogg|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
                expires max;
                #etag on; # etags are not supported yet but will in v1.3.3
            }
        }

        error_page  404              /404.html;
        location = /404.html {
            root   /usr/share/nginx/html/my_website/public/error_pages;
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html/public;
        }

        #
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        location ~ \.php$ {
            root           /usr/share/nginx/html/my_website/public;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }

    }

    # http://my_subdomain.example.com
    server {

        listen    80;

        server_name my_subdomain.example.com;

        location / {
            root   /usr/share/nginx/html/my_website/sub_domains/my_subdomain;
            index  index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;

            # this module provides automatic directory listings
            autoindex off;

            # disable access to some types of files
            location ~* \.(htaccess|htpasswd|ini|tmx|xml|log|sh|cgi)$ {
                deny all;
            }

            # set expiration for the following extensions
            location ~* \.(?:css|js|jpeg|jpg|gif|png|ico|mp3|mp4|ogg|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
                expires max;
            }
        }

    }

as server name always add both the www and the non-www version, so that your website can be reached through both:

server_name example.com www.example.com;

but to avoid having duplicate content and therefore making google angry, we redirect all connections from the www version to the non-www version like this:


# a redirect for clients trying to access the www version of my website to a non www version
if ($host = 'www.example.com' ) {
	rewrite  ^/(.*)$  https://example.com/$1  permanent;
}

you could also do the opposite and redirect all non-www requests to www, that's up to you

it is always a good idea to set a default charset, using this line:

charset utf-8;

try_files entry for Zend Framework and other libraries and applications that need all requests to be redirected to index.php:

The try_files entry in the location block is very important. What it does is check if a file exists for the requested path: if it does, the file is returned, and if it doesn't, the request gets redirected to index.php.


try_files $uri $uri/ /index.php;

My Zend Framework application works and the routes get resolved as they should, but I don't get any GET parameters inside of my application, what should I do?

You probably forgot to add "$args" behind index.php at the end of your try_files directive. If you need the args preserved, you must do so explicitly:


try_files $uri $uri/ /index.php?$args;

Now if you pass GET parameters in a URL like this one: https://www.example.com/my_route?parameter_name=parameter_value, your $_GET variable will have an entry with parameter_name as key and parameter_value as value.


$parameter_value = $_GET['parameter_name']; // php style
$parameter_value = $request->getParam('parameter_name'); // zend framework style

If you only pass the GET parameters inside of your URL as part of the route, Zend Framework will extract your GET parameters. With a URL like "https://example.com/module/action/controller/parameter1/parameter2" and a corresponding route like "/:module/:action/:controller/:parameter1/:parameter2", your $_GET parameters will exist even without $args at the end of your try_files directive. But if you use a URL like "https://example.com/module/action/controller/parameter1?parameter2=foo" without $args at the end of your try_files directive, then $_GET['parameter1'] will have a value but $_GET['parameter2'] won't exist. So it depends on the syntax you use: if your parameters are always part of your route you can omit the $args part at the end of your try_files directive, otherwise you need to add it, it's up to you.

another thing you might want to add is a location entry that blocks access to configuration files and other files you don't want to be accessible, hence the following lines


# disable access to some types of files
location ~* \.(htaccess|htpasswd|ini|tmx|xml|log|sh|cgi)$ {
	deny all;
}

this location entry is for speed optimization: for all these extensions we set the maximum expiration in the header data, to ensure that the browser will cache them for as long as possible instead of redownloading them on every request. this makes your website faster and reduces your server load as well as your bandwidth consumption:


# set expiration for the following extensions
location ~* \.(?:css|js|jpeg|jpg|gif|png|ico|mp3|mp4|ogg|wav|swf|mov|doc|pdf|xls|ppt|docx|pptx|xlsx)$ {
	expires max;
	#etag on; # etags are not supported yet but will in v1.3.3
}

now the final step is to reload nginx, so that the configuration changes can take effect:

# service nginx reload
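
tip: before reloading you can check the configuration for syntax errors using the built in test option:

# nginx -t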

or if nginx isn't running yet, launch it

# service nginx start

to restart nginx type

# service nginx restart

something you should read when trying to avoid pitfalls during your nginx configuration is the nginx pitfalls document