Thursday 27 June 2013

Redirect in nginx

From www.example.com to example.com


server {
        server_name     www.example.com;
        return          301 $scheme://example.com$request_uri;
}

From example.com to www.example.com


server {
        server_name     example.com;
        return          301 $scheme://www.example.com$request_uri;
}

-------
Some people do it this way as well, but according to the nginx documentation it is a bad way of doing redirects: http://wiki.nginx.org/Pitfalls

server   {
   server_name www.domain.com;
   rewrite  ^/(.*)$  http://domain.com/$1 permanent;
}

server   {
   server_name domain.com;
   rewrite  ^/(.*)$  http://www.domain.com/$1 permanent;
}

Securing your website while using nginx: deploying SSL certificates in nginx

Deploying certificates and serving HTTPS requests is very simple in nginx. Just take a copy of the server block you wrote for serving HTTP requests and create another server block with the following changes.

server {
        server_name www.example.com;
        listen 443;
        ssl on;
        ssl_certificate      /etc/ssl/certs/www.example.com.crt;
        ssl_certificate_key  /etc/ssl/private/server.key;

        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;

        location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|txt|pdf)$ {
                root /var/www;
                access_log off;
                expires 365d;
        }

        location / {
                proxy_pass        http://localhost:8181/;

                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_set_header X-Forwarded-Proto $scheme;
                add_header Front-End-Https   on;
        }
}
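With the HTTPS block in place, you may also want to push plain-HTTP visitors over to it. A minimal companion server block, following the same return 301 pattern shown earlier (a sketch; adjust the names to your setup):

```nginx
server {
        listen 80;
        server_name www.example.com;
        return 301 https://www.example.com$request_uri;
}
```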

Issues and troubleshooting


1. While installing certificates, you do not need to keep the intermediate certificate in a separate configuration directive as you would in Apache. Browsers cache intermediate certificates that they have received before and that are signed by trusted authorities, so an actively used browser may already have the required intermediate certificate and may not complain about a certificate sent without the chained bundle.
To check whether your chain is complete, try this URL: http://www.sslshopper.com/ssl-checker.html

To fix an incomplete chain, append the intermediate certificate content to the main certificate file, after the main content:

$ cat bundle.crt >> www.example.com.crt
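You can sanity-check the order inside the combined file: openssl x509 parses only the first certificate it finds, so the subject printed should be your own domain, not the CA. A sketch using throwaway self-signed certificates in place of the real ones (all names here are stand-ins):

```shell
tmp=$(mktemp -d)
# stand-ins for the real files: a "main" certificate and an "intermediate" bundle
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=www.example.com" \
        -keyout "$tmp/main.key" -out "$tmp/main.crt" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Example Intermediate CA" \
        -keyout "$tmp/ca.key" -out "$tmp/bundle.crt" 2>/dev/null
# main certificate first, intermediate after it
cat "$tmp/main.crt" "$tmp/bundle.crt" > "$tmp/www.example.com.crt"
# prints the subject of the FIRST certificate in the file
openssl x509 -noout -subject -in "$tmp/www.example.com.crt"
```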

2. Here is a known error which you might face:
"SSL_CTX_use_PrivateKey_file(" ... /www.example.com.key") failed (SSL: error:0B080074:x509 certificate routines: X509_check_private_key:key values mismatch)"

This error means the private key did not match the certificate nginx tried to load. A common cause is concatenating the intermediate certificate first and the main certificate after it; in that case the key is checked against the intermediate certificate and cannot match. Change the content of www.example.com.crt so that the main certificate comes first, followed by the intermediate certificate:

$ cat main_certificate bundle.crt > www.example.com.crt

If that is not the case, check with your certificate issuing authority, because the private key genuinely does not match. You can also look for details in the log file /var/log/nginx/error.log.
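Before suspecting the certificate authority, you can check the key/certificate pair locally: the public-key modulus of both files must be identical. A sketch, demonstrated here on a freshly generated throwaway pair (replace the paths with your real .crt and .key):

```shell
tmp=$(mktemp -d)
# generate a matching self-signed certificate and key to demonstrate the check
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=www.example.com" \
        -keyout "$tmp/server.key" -out "$tmp/www.example.com.crt" 2>/dev/null
# a matching pair produces the same modulus digest for both files
cert_md5=$(openssl x509 -noout -modulus -in "$tmp/www.example.com.crt" | openssl md5)
key_md5=$(openssl rsa -noout -modulus -in "$tmp/server.key" | openssl md5)
[ "$cert_md5" = "$key_md5" ] && echo "key matches certificate"
```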

3. One of the most important things is to add these lines to the configuration:

        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https   on;

Because proxy_pass forwards the request over plain HTTP, the back-end server cannot tell that the user actually made an HTTPS request. So pass that information in a header; "X-Forwarded-Proto" is the de-facto standard for passing protocol information across proxies.

Correspondingly, in Tomcat (or any Java-based application), request.isSecure() will no longer work. So write a central API to get the protocol information, something like this:

public static boolean isSecure(HttpServletRequest request) {
    // behind the proxy, the original scheme arrives in X-Forwarded-Proto
    String protocol = request.getHeader("X-Forwarded-Proto");
    if ("https".equals(protocol)) {
        return true;
    } else {
        return request.isSecure();
    }
}

Saturday 22 June 2013

nginx proxy_pass configuration, complexity, settings, issues, solutions

Ideally, setting these parameters for proxy_pass is good enough.

location / {
                proxy_pass        http://localhost:8080;
                proxy_set_header  Host $host;
                proxy_set_header  X-Real-IP $remote_addr;
                proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;

                proxy_connect_timeout      30;
                proxy_send_timeout         30;
                proxy_read_timeout         600;

                proxy_buffer_size          4k;
                proxy_buffers              4 16k;
                proxy_busy_buffers_size    64k;
                proxy_temp_file_write_size 64k;
}

How to pass the remote address to back-end server while using nginx


With proxy_pass there is a complication: when the back-end server tries to read the client's IP address, it will get 127.0.0.1 (or the local subnet IP where nginx is deployed), because nginx is a proxy server and replaces the client's address with its own. The solution is to set an extra request header at the time of proxying; the statement "proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;" is meant for exactly that.


How to get the remote address in your back-end server while using "X-Forwarded-For"


The story is not over yet; now you have to do something on your back-end server to extract the client IP from the request header.

If the back-end is Apache, it is quite simple; you just need to install a module:
$ sudo apt-get install libapache2-mod-rpaf
and configure the file /etc/apache2/mods-available/rpaf.conf:

<IfModule mod_rpaf.c>
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1
</IfModule>

But if the back-end is Tomcat, it is a little more complex:
you will never get the client IP via request.getRemoteAddr(). So write a global API to access the remote address, like this:

public static String getRemoteAddress(HttpServletRequest request) {
    // the header may carry a comma-separated chain of addresses when
    // several proxies are involved; the left-most entry is the client
    String ip = request.getHeader("X-Forwarded-For");
    if (ip == null || "".equals(ip)) {
        ip = request.getRemoteAddr();
    }
    return ip;
}

So if "X-Forwarded-For" is present among the request headers, its value is returned; otherwise the address comes from request.getRemoteAddr(). This kind of programming is good because if tomorrow you switch to Apache proxying over the AJP protocol, you will not need any back-end change: with AJP the remote address comes directly from the request object, and with other proxying it comes from the "X-Forwarded-For" header.
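One subtlety worth remembering: when the request crosses several proxies, "X-Forwarded-For" carries a comma-separated chain of addresses, and the left-most entry is the original client. A quick sketch of picking it out (the addresses are made-up examples):

```shell
xff="122.167.17.4, 10.0.0.5, 127.0.0.1"   # client first, then each proxy hop
client_ip=$(echo "$xff" | cut -d',' -f1 | tr -d ' ')
echo "$client_ip"   # -> 122.167.17.4
```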

Some other configuration points

1. proxy_connect_timeout assigns a timeout for establishing the connection to the upstream (back-end) server. Its default value is 60s.

This is not the time until the server returns the page; that is governed by proxy_read_timeout. If your upstream server is up but hanging (e.g. it does not have enough threads to process your request, so it puts you in its pool of connections to deal with later), then this directive will not help, as the connection to the server has already been made.

So if you ever hit proxy_connect_timeout in nginx, check your back-end's connection limit.

2. proxy_read_timeout - this is very important; the default value is 60s.
This directive sets the read timeout for the response of the proxied server: how long nginx will wait to get the response to a request. The timeout applies not to the entire response, but to the interval between two successive read operations.

In contrast to proxy_connect_timeout, this timeout will catch a server that puts you in its connection pool but does not respond with anything beyond that. Be careful not to set it too low, though, as your back-end might legitimately take a long time to respond (e.g. when serving a report page that takes time to compute).

You can also set a different, higher proxy_read_timeout, such as 10 minutes, for a specific location:

location /admin/reports/ {
        # other proxy_pass settings
        proxy_read_timeout  600;
}

location / {
        # other proxy_pass settings
        proxy_read_timeout  30;
}

3. proxy_send_timeout - the default value is 60s.
This directive assigns a timeout for transmitting the request to the upstream server. The timeout applies not to the entire transfer of the request, but to the interval between two successive write operations. If after this time the upstream server does not accept new data, nginx shuts down the connection.

Nginx setup for segregating static and dynamic content from nginx and back-end server using proxy_pass

This configuration serves static content directly from nginx and dynamic content from the back-end server - perhaps Apache (for a PHP-based application) or Tomcat (for a Java-based application).

For this purpose we essentially use the proxy_pass module of nginx. It is very simple: create two different location contexts and serve them differently - one using root to provide the directory where the files live, the other using proxy_pass.

server {
        server_name www.example.com;

        location ~* \.(jpg|jpeg|gif|css|png|js|ico|html)$ {
                root /var/www;
                access_log off;
                expires 365d;
        }

        location / {
                proxy_pass        http://localhost:8181;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}
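The ~* modifier makes the extension match case-insensitive. You can get a feel for which URIs the pattern catches with a quick grep sketch (this only illustrates the regular expression; nginx's own location matching has additional rules):

```shell
pattern='\.(jpg|jpeg|gif|css|png|js|ico|html)$'
for uri in /logo.PNG /app/main.css /web/jsp/example.jsp; do
        if echo "$uri" | grep -qiE "$pattern"; then
                echo "$uri -> static, served by nginx from /var/www"
        else
                echo "$uri -> dynamic, proxied to the back-end"
        fi
done
```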

Now here comes some basic knowledge of nginx:

1. In nginx you can set as many location blocks as you need; whichever is the best match will be picked and executed.
2. If you want everything under "www.example.com/static/" to be served from nginx only, you can do that. Note that with the root directive the full URI is appended to the path, so for /static/foo.css to map to /var/www/static/foo.css the root must be /var/www (root /var/www/static/ would look in /var/www/static/static/):
location /static/ {
         root /var/www;
         access_log off;
         expires 30d;
}

3. "access_log off" means no log record will be created for requests matching that location.

4. "expires 30d" sets an expiry header of 30 days for all requests matching that location. In Apache we use mod_expires to set the expiration time of static content so that browsers can cache it for a long time; in nginx it is just one line. :)

5. proxy_pass lets you forward the request to any back-end server.
"proxy_pass        http://localhost:8080;" will forward your request for dynamic content, for example to a back-end Tomcat server.



Friday 7 June 2013

nginx - (13: Permission denied) while reading upstream

2013/06/07 21:13:38 [crit] 17799#0: *717313 open() "/var/lib/nginx/proxy/2/19/0000000192" failed (13: Permission denied) while reading upstream, client: 122.167.17.4, server: www.example.com, request: "GET /web/jsp/example.jsp HTTP/1.1", upstream: "http://127.0.0.1:8181//web/jsp/example.jsp", host: "www.example.com"

Typically this is a problem with saving the buffered data coming back from the proxied server. When the upstream response is large, nginx keeps part of the data on disk while it starts sending the first received bytes to the browser. For that it uses a configured directory, which is "/var/lib/nginx/proxy" in my case. So you just need to give the nginx worker user access to that directory.

1. Open /etc/nginx/nginx.conf to find the worker user of nginx.
2. Or run ps and check which user owns the worker process:
$ ps aux | grep "nginx: worker process" | awk '{print $1}'
www-data
3. In my case it is www-data.
4. Give that user access to the directory:
$ sudo chown -R www-data:www-data /var/lib/nginx
5. Done.
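An alternative, if you would rather not change ownership of the packaged directory, is to point nginx at a buffer directory the worker user already owns, via the proxy_temp_path directive (the path here is just an example):

```nginx
# in the http block; the optional numbers are subdirectory hash levels
proxy_temp_path /var/cache/nginx/proxy_temp 1 2;
```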

