How to Proxy Liferay Using Nginx

In this post we will describe five use cases for running Nginx as a proxy in front of your Liferay application.

What is Nginx?

Nginx is a lightweight and popular HTTP and reverse proxy server. According to Netcraft, Nginx today serves 20% of the top 1,000,000 busiest websites. We started using Nginx over the past couple of years to proxy our Liferay deployments for two main reasons:

  1. Its lightweight nature
  2. Ease of configuration

Although Apache is still more popular today and has been since the 90s, it starts to slow down under heavy load because it has to keep spawning new processes that consume more memory and CPU time. Apache will also start refusing requests once it reaches its connection limit.

The difference with Nginx is that it is event-based, asynchronous, and non-blocking by nature. A rule of thumb with Nginx deployments is to configure one worker per CPU on your server, and each worker can handle thousands of concurrent connections. This difference in architecture makes Nginx much faster and more memory-efficient than Apache at serving static files such as images, CSS and JavaScript.
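As a rough sketch of that rule of thumb (the values here are illustrative and not taken from the original configuration), the worker settings live at the top of nginx.conf:

# Sketch: one worker process per CPU core, each handling many connections
worker_processes  auto;

events {
    worker_connections  4096;
}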

[Figure: Market share of Nginx in the top 1,000,000 busiest websites]

When to use Nginx with Liferay?

You don’t have to place a proxy in front of Liferay, but it is a good idea to do so if you would like to load balance requests across multiple Liferay instances, provide HTTP caching, or even just proxy different domains to the same Liferay instance.
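That last case is simply a matter of defining one server block per domain, all pointing at the same backend. A minimal sketch (the hostnames are hypothetical; the backend address matches the Liferay instance used later in this post) might look like this:

# Sketch: two domains proxied to the same Liferay instance
server {
    listen      80;
    server_name www.example.com;
    location / {
        proxy_pass http://192.168.1.100:8080;
    }
}

server {
    listen      80;
    server_name intranet.example.com;
    location / {
        proxy_pass http://192.168.1.100:8080;
    }
}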

It’s easier to configure SSL certificates in Nginx than in Tomcat

Once we have our SSL key and certificates, all we have to do is tell Nginx about them:

ssl_certificate      /etc/nginx/server.bundle.crt;
ssl_certificate_key  /etc/nginx/server.key;

Next, we define an Nginx server block that listens on port 443 for SSL requests:

server {
    listen       443 ssl;
    ...
}
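If you also want plain HTTP traffic to land on HTTPS, a common companion block is a simple redirect. This is a sketch and not part of the original configuration:

# Optional sketch: redirect plain HTTP requests to HTTPS (hostname is hypothetical)
server {
    listen      80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}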

Now we must configure Nginx as a proxy and let it know how to reach our Liferay instance. Assuming our Liferay instance is located at 192.168.1.100 on port 8080, we configure what’s referred to as an upstream server in Nginx:

upstream liferay-app-server {
   server 192.168.1.100:8080 max_fails=3 fail_timeout=30s;
}

We can then use this upstream server in other configurations. In the example below we configure a location block that sets the proxy headers and passes requests through to Liferay:

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://liferay-app-server;
}

If you choose to use Nginx as a proxy in front of Liferay, you will also need to let Liferay know that there is a web server in front of it. This can be configured in your portal properties in Liferay:

web.server.http.port=80
web.server.https.port=443
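If Nginx terminates SSL and talks to Liferay over plain HTTP, you will usually also want Liferay to generate HTTPS URLs. Assuming the standard portal-ext.properties mechanism, that is a one-line sketch like the following (verify the property against your Liferay version):

web.server.protocol=https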

The greatest benefit here is that restarting Nginx to load a new SSL certificate will always be faster than restarting Liferay.

Rewriting URLs and other URL gymnastics

It’s common in any application to rewrite URLs to make them friendlier, easier to remember or just more SEO friendly. To do this within a Java web application you have to resort to filters such as Tuckey’s UrlRewriteFilter. However, this too is much easier to do in Nginx:

 rewrite ^/web/([^./]*)$ /web/$1/ permanent;

The above rule, for example, enforces a trailing slash at the end of each /web/ URL.
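Other rewrites follow the same pattern. As a purely illustrative sketch (the paths below are hypothetical and not from the original post), you could map a short URL onto a Liferay friendly URL like this:

rewrite ^/news$ /web/guest/news permanent;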

Fixing Liferay cache headers

You may have seen an issue in Liferay where IE does not recognize stylesheets as stylesheets: although they load, they are not processed by IE. We ran into this last year and realized it had to do with the Liferay SASS filter failing due to an error in one of the stylesheets. The problem is that when this happens, Liferay still returns the stylesheet but doesn’t set the appropriate Content-Type header. We quickly fixed this in Nginx by enforcing the Content-Type header:

if ($request_filename ~* ^.+\.css$) {
    add_header Content-Type text/css;
    expires 5d;
}

Notice that the Nginx syntax allows such if blocks to apply specific rules only under certain conditions. It’s fairly flexible when you look at the list of variables available to base your conditions on.
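As an aside, the same effect can often be achieved without an if block by matching a location instead. The following is a sketch of that alternative (it reuses the liferay-app-server upstream defined above), not the configuration from the original post:

location ~* \.css$ {
    proxy_pass http://liferay-app-server;
    add_header Content-Type text/css;
    expires 5d;
}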

Load balancing

In a clustered environment with multiple Liferay deployments you will need to add a load balancer in front of your Liferay instances. This could be an actual hardware load balancer, but since most environments we deploy to today are virtualized, this is typically done via software load balancing. Nginx’s efficiency also lends itself to being used as a load balancer. Nginx supports several load balancing methods: 1) round-robin, 2) least-connected and 3) ip-hash. Round-robin is the simplest approach: requests are handed out to each of the instances in turn. The ip-hash method hands out requests to specific instances based on the client’s IP address.

You saw the configuration below earlier when we were configuring the proxy to Liferay. The same configuration with additional upstream servers is how you would configure round-robin load balancing in Nginx. With round-robin you would have to look into session replication or sticky sessions in Liferay so that sessions can move between the different Liferay instances. An alternative is ip-hash load balancing, where each application server deals with requests from the same IP range, and hence requests from the same client will always be routed to the same application server. That will be left to another post.

upstream liferay-app-server {
   server 192.168.1.100:8080 max_fails=3 fail_timeout=30s;
   server 192.168.1.200:8080 max_fails=3 fail_timeout=30s;
}
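Switching methods is a one-line change in the upstream block. The following is a sketch, not part of the original configuration:

# Sketch: ip-hash pins each client to the same backend
# (use least_conn; instead to pick the least-connected backend)
upstream liferay-app-server {
   ip_hash;
   server 192.168.1.100:8080 max_fails=3 fail_timeout=30s;
   server 192.168.1.200:8080 max_fails=3 fail_timeout=30s;
}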

Caching

One last use case we will cover is Nginx caching. As you might be aware, optimizing your application for performance and fast response times can be time consuming and difficult. Liferay, by itself, also requires tweaking and managing its configuration so that it performs as well as you would want it to. Out-of-the-box Liferay installs are not meant to be deployed straight to production; you need to review your deployment and infrastructure and optimize them for it. In addition to designing your application with application-level and database-level caching, another area to invest in is HTTP caching.

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
    server {
        proxy_cache one;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}

Configuring caching is as easy as setting proxy_cache_path to define where responses can be cached and then enabling caching for the relevant server or location blocks via the proxy_cache directive shown above.
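In practice you will usually want to control how long responses stay cached and to skip the cache for logged-in users. The directives below are a sketch of such tuning, assuming Liferay’s standard JSESSIONID session cookie; they are not from the original post:

location / {
    proxy_cache         one;
    proxy_cache_valid   200 302 10m;           # cache successful responses for 10 minutes
    proxy_cache_valid   404 1m;                # cache 404s briefly
    proxy_no_cache      $cookie_JSESSIONID;    # don't store responses for active sessions
    proxy_cache_bypass  $cookie_JSESSIONID;    # don't serve cached pages to active sessions
    proxy_pass          http://liferay-app-server;
}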

That’s all for this post. We covered five use cases for Nginx and how to apply them with Liferay.

Was this article useful?  Let us know. Or, if you have used Nginx and Liferay together in a different way, share your experiences below. We’d love to hear what others are doing in this realm.
