How to Proxy Liferay Using Nginx


In this post we will describe the top 5 reasons to use Nginx as a proxy in front of your Liferay application.

What is Nginx?

Nginx is a light-weight and popular HTTP and reverse proxy server. According to Netcraft, Nginx today serves 20% of the top 1,000,000 busiest websites. We started using Nginx over the past couple of years to proxy our Liferay deployments for the following reasons:

  1. Its lightweight nature
  2. Ease of configuration

Although Apache is still more popular today and has been since the 90s, Apache starts to slow down under heavy loads because it has to keep spawning new processes that consume more memory and CPU time. Apache will also start refusing requests when it has reached its connection limit.

The difference with Nginx is that it is event-based, asynchronous, and non-blocking by nature. A rule of thumb with Nginx deployments is to configure one worker per CPU on your server. Each worker can handle thousands of concurrent connections. This difference in architecture makes Nginx much faster and more memory-efficient than Apache at serving up static files such as images, CSS and JavaScript.

Market share of Nginx in top 1,000,000 busiest websites

When to use Nginx with Liferay?

You don’t have to place a proxy in front of Liferay, but it is a good idea to do so if you would like to load balance your requests across multiple Liferay instances, provide HTTP caching, or even just proxy different domains to the same Liferay instance.

It’s easier to configure SSL certificates using Nginx than doing so in Tomcat

Once we have our SSL key and certificates, all we have to do is tell Nginx about them:

ssl_certificate      /etc/nginx/server.bundle.crt;
ssl_certificate_key  /etc/nginx/server.key;

Finally, we define Nginx server blocks to listen on port 443 for SSL requests:

server {
     listen       443 ssl;
}

Now, we must configure Nginx as a proxy and let it know how to reach our Liferay instance. Assuming our Liferay instance listens on port 8080, we configure what’s referred to as an upstream server in Nginx.

upstream liferay-app-server {
   server localhost:8080 max_fails=3 fail_timeout=30s;
}

We can then use this upstream server in other configurations. In the example below we configure the proxy and pass requests through to Liferay.

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass  http://liferay-app-server;
}

If you choose to use Nginx as a proxy in front of Liferay, you will also need to let Liferay know that there is a web server in front of it. This can be configured in your portal properties in Liferay:
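For example (a minimal sketch, assuming Nginx terminates SSL on the standard port 443; adjust the values to your environment):

```properties
web.server.protocol=https
web.server.https.port=443
```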


The greatest benefit here is that it will always be faster to restart Nginx than to restart Liferay when loading a new SSL certificate.

Rewriting URL and other URL gymnastics

It’s common in any application to rewrite URLs to make them friendlier, easier to remember or more SEO-friendly. To do this within a Java web application you have to resort to filters such as Tuckey’s UrlRewriteFilter to handle rewriting URLs. However, this too is much easier to do in Nginx:

 rewrite ^/web/([^./]*)$ /web/$1/ permanent;

The above rule for example enforces a trailing slash to be present at the end of each URL.
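Another common piece of URL gymnastics is redirecting all plain-HTTP traffic to HTTPS once Nginx terminates SSL. A minimal sketch:

```nginx
server {
    listen 80;
    # permanently redirect every request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}
```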

Fixing Liferay cache headers

You may have seen an issue in Liferay where IE does not recognize stylesheets as stylesheets: although they load, they are not processed by IE. We ran into this last year and realized it had to do with the Liferay SASS filter failing due to an error in one of the stylesheets. The problem is that when this happens, Liferay still returns the stylesheet but doesn’t set the appropriate content type headers. We quickly fixed this in Nginx by enforcing the content type headers:

if ($request_filename ~* \.css$) {
   add_header Content-Type text/css;
   expires 5d;
}

Notice that the Nginx syntax also allows for such if blocks to apply specific rules only under certain conditions. It’s fairly flexible when you look at the list of available variables you can base your condition on.

Load balancing

In a clustered environment with multiple Liferay deployments, you will need a load balancer in front of your Liferay instances. This could be an actual hardware load balancer, but since most environments we deploy to today are virtualized, this is typically done via software load balancing. Nginx’s efficiency also lends itself to being used as a load balancer, and it supports several methods:

  1. Round-robin: requests are handed out in turn to each instance in the pool. This is the simplest approach.
  2. Least-connected: requests go to the instance with the fewest active connections.
  3. IP-hash: requests are routed to a specific instance based on a hash of the client’s IP address.

The configuration below is the same one you saw earlier when we configured the proxy to Liferay; adding more upstream servers is all it takes to get round-robin load balancing in Nginx. With round-robin, you would have to look into session replication or sticky sessions in Liferay so that sessions survive moving between Liferay instances. An alternative is ip-hash load balancing, where requests from the same client IP are always routed to the same application server. We will leave the details for another post.

upstream liferay-app-server {
   # host:port values are illustrative; point these at your Liferay instances
   server liferay1.example.com:8080 max_fails=3 fail_timeout=30s;
   server liferay2.example.com:8080 max_fails=3 fail_timeout=30s;
}
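To switch to ip-hash load balancing instead, only the upstream block changes; a sketch (hostnames are illustrative):

```nginx
upstream liferay-app-server {
    # route each client IP to the same backend every time
    ip_hash;
    server liferay1.example.com:8080;
    server liferay2.example.com:8080;
}
```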


One last use case we will cover is Nginx caching. As you might be aware, optimizing your application for performance and fast response times can be time-consuming and difficult. Liferay, by itself, also requires tweaking and managing its configuration so that it performs as well as you would want it to. Out-of-the-box Liferay installs are not meant to be deployed to a production environment; you need to review your deployment and infrastructure to optimize for this. In addition to designing your application with application-level and database-level caching, another area to invest in is HTTP caching.

http {
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
    server {
        proxy_cache one;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}
Configuring caching is as easy as setting the proxy_cache_path directive to define where responses can be cached, and then selecting which server blocks will utilize caching via the proxy_cache directive shown above.
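You will typically also want to control how long responses stay cached. A sketch using the proxy_cache_valid directive (the durations are illustrative):

```nginx
location / {
    proxy_cache one;
    # keep successful responses for 10 minutes, 404s for 1 minute
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    proxy_pass http://localhost:8000;
}
```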

That’s all for this post. We covered 5 use cases for using Nginx and how to utilize these with Liferay.

Was this article useful?  Let us know. Or, if you have used Nginx and Liferay together in a different way, share your experiences below. We’d love to hear what others are doing in this realm.

Embrace the Enterprise Portal, or Get Left Behind


Over the past 8 years, Veriday has been engaged for the implementation of a number of Enterprise Portals using Liferay. The size and scale of the projects have varied but a common thread amongst the implementations has been the objective of organizations to move beyond the restraints of their current technology in order to better engage their employees.

Let’s say, you’d like to submit an IT ticket, track the status of a project, collaborate in real-time or simply update the content on your website.  Typically, you’d have to gather a number of departments together, agree on how to move forward and delegate resources to complete each task.  Let’s take Mindy the Marketing professional, for example.   Each time she would like to make a quick update on her organization’s website, she has to engage Ian the IT Manager, or outsource to a 3rd party vendor, wasting time and money.  Ian the IT Manager has to stop what he’s doing in order to facilitate the quick content update.  In comes the Enterprise Portal…

The convergence of information, process and technology

An Enterprise Portal isn’t just a fancy term for intranet.  For a business, an enterprise portal brings together its various applications, information, business units and services on one common platform.   The portal becomes an interactive platform for employees, which is customizable to each of their jobs and responsibilities. It is a system for information delivery and organization, collaboration, content and data management, workflows and operations management through an application that gives users a single point of access.

The portal framework exists to solve aggregation and personalization so that developers do not have to reinvent the wheel every time an organization scales or adds an application.  Customizing the employees’ portal space provides them with all of the information they need to do their jobs more quickly and efficiently than if they had to search out the data themselves.  The portal space becomes a unique online experience built around your brand.

More competitive, collaborative and improved ROI

The fundamental key is making your organization more competitive, isn’t it?  Organizations are turning to Enterprise Portals to gain a competitive advantage.  The availability of “anytime-anywhere” managed information on one platform makes for an agile business, leveraging new ideas and decisions faster through streamlined operations and business-wide knowledge sharing.

At Veriday, we are working with enterprises to implement portal solutions that enable their business to take advantage of cutting-edge technologies in order to gain a competitive advantage. Engaged employees will deliver more to your organization when they are equipped with the tools to do their job effectively.  In today’s digital landscape, employee engagement expectations continue to rise, making portals, collaboration and content management solutions more critical than ever.


Question:  Does your enterprise portal effectively support the technology demands placed on it in today’s digital economy?

Let Your Backbone Slide – Extending RESTful web services


Backbone.js provides a very flexible framework for building JavaScript applications that interact with web services. Models and Collections make it easy to represent data entities, with operations to create, read, update, and delete. This set of operations – summarized as ‘CRUD’ – is very common in web applications.

How, though, do you interact with web services that provide more than just CRUD functionality? RESTful web services can certainly offer more than just CRUD operations, and application requirements may dictate such operations, so the Backbone.js Models and Collections will have to be modified or extended accordingly.

In this article, I’ll go over an example of how to use Backbone.js to communicate with some simple web services that provide CRUD functionality, or ‘endpoints’ along with some non-CRUD endpoints. Additionally, I’ll share some ideas on how to keep these endpoints organized in the Backbone.js Models and Collections, which helps keep the code and application self-documenting.

A blog post is a great example of an entity that could have CRUD endpoints, as well as additional ‘verbs’; you create new blog posts, read from the server to provide them to a reader or editor, update them with edits, corrections, or progress, and perhaps delete them. All blog posts should start as drafts, so there’s a need to publish (and unpublish) them, which is where we start to deviate from the standard web services and need some additional functionality from Backbone.js.

Let’s go ahead and define our blog post:

var BlogPost = Backbone.Model.extend({...});

Voila! Now, our CRUD operations are provided by Backbone.js:

Create (HTTP POST)

var blogPost = new BlogPost();
blogPost.set("title", "My First Post");



Update (HTTP PUT)

blogPost.set("content", "Lorem ipsum something something");



As an aside, most Backbone.js operations assume that a Model is part of a Collection, which is how it provides the URL of the corresponding RESTful web service. For the sake of brevity, I’ve omitted the Collection operations. If you’d like more detail, the Backbone.js documentation is absolutely outstanding, and even includes annotated source code for a deeper dive.

Now, let’s define the web service that will satisfy these requests. We’ll use ‘services’ as our root path parameter and ‘blogposts’ to represent our Collection of blog posts.

Create (HTTP POST)

POST /services/blogposts

Update (HTTP PUT)

PUT /services/blogposts/{id}

Referencing the Collection again, a GET request to /services/blogposts would return all of the blog posts. This Collection would give us access to the individual blog post Models and provide the URL for the model to interact with the web services.

As you can see, the requests have an almost identical URL, with the Create operation being the exception. With the blog post’s unique ID provided in the URL, the indication is that we’re operating on a unique entity now, a single Model – our blog post. It’s logical that any operation performed on a single blog post must have an ID in the URL, followed by the action to be performed.

Let’s use ‘publish’ and ‘unpublish’ as our verbs that describe the additional operations that we’re going to perform on our blog post.

Since we’re modifying (Updating) the model, the request should be defined as:

Update (HTTP PUT)

PUT /services/blogposts/{id}/publish

Now, the issue becomes clear; Backbone.js doesn’t know about our custom endpoints. A Model’s ‘url’ property is provided by its Collection, but can be overridden to allow for the functionality we must provide. Let’s define a function that will allow us to access our publish and unpublish web services:

publish: function() {
 this.url = this.collection.url + "/" + + '/publish';;
}

In the above function, we reference the collection’s URL – ‘services/blogposts’ along with the model’s id. Our custom endpoint, ‘/publish’, is then appended to the URL.
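Stripped of the Backbone.js specifics, the URL concatenation above can be captured in a tiny standalone helper; a sketch in plain JavaScript (buildEndpointUrl is a hypothetical name, not part of Backbone.js):

```javascript
function buildEndpointUrl(collectionUrl, id, endpoint) {
  // ensure exactly one slash between the collection url and the id,
  // then append the optional custom endpoint
  var base = collectionUrl.replace(/([^\/])$/, '$1/');
  return base + encodeURIComponent(id) + (endpoint || '');
}
```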

This isn’t the cleanest way to achieve this functionality – we’re keeping text literals in functional code, and this pattern would have to be replicated for each custom endpoint. In a large application, that may be dozens or hundreds of times. Also, since we’ve overridden the model’s url field, any future requests would go to our custom endpoint.

Now is when the flexibility of Backbone.js really shines – we can customize the ‘url()’ function to work with our custom endpoints, which we’ll declare in a member object:

endpoints: {
 PUBLISH: '/publish',
 UNPUBLISH: '/unpublish'
}

We’ll now modify the url() function of the model to append an endpoint (if set), add a getter and setter for the current endpoint, and reset the endpoint when the sync finishes:

var BlogPost = Backbone.Model.extend({
  url: function() {
    var base =
      _.result(this, 'urlRoot') ||
      _.result(this.collection, 'url');
    if (this.isNew()) return base;
    // Add the current endpoint to the url provided
    return base.replace(/([^\/])$/, '$1/') + encodeURIComponent( + this.getCurrentEndpoint();
  },
  setCurrentEndpoint: function(endpoint) {
    this.currentEndpoint = endpoint;
  },
  getCurrentEndpoint: function() {
    if (this.currentEndpoint) {
      return this.currentEndpoint;
    }
    return "";
  },
  sync: function(method, model, options) {
    // dispatch the request, then reset the endpoint so that
    // subsequent requests go back to the standard URL
    var result = Backbone.Model.prototype.sync.apply(this, arguments);
    this.setCurrentEndpoint(null);
    return result;
  }
});

Now our Model has a much simpler time accessing a custom endpoint:

publish: function() {
 this.setCurrentEndpoint(this.endpoints.PUBLISH);;
}

This pattern makes our Models and Collections much easier to maintain, and since we’ve integrated our custom endpoint functionality in Backbone.js’ base Model (rather than our own), it’s available to every Model and Collection we create.

A comprehensive organization of web services can offer great ‘readability’ of an application; the API should tell a story. It may not always be efficient or effective to give every action required a CRUD endpoint; following a sensible progression in URL endpoints or path parameters will give those using the API a better idea of what to expect in return. Similarly, when the Backbone.js Models and Collections are implemented to match, their interactions with the web services are clear, both to a new developer and one that hasn’t seen the code in 6 months.

There are many ways to organize RESTful web services depending on the needs of the client, the server, and the application. It may not always be convenient or useful to create unique CRUD web services for every operation required for an application, so some ‘overloading’ of the models/entities to provide access to non-CRUD endpoints may be an elegant solution. With the flexibility provided by Backbone.js, it is entirely possible to modify the Models and Collections to interact with a set of RESTful web services, regardless of the architecture.

Liferay Vs. SharePoint: Who is using these technologies?


Portals continue to evolve as platforms gain new features that increasingly blur the boundaries between portals and areas such as content management. Customer experience, customer engagement, digital experience and marketing integration have been a large focus of portal platforms for the past couple of years as more and more enterprises have embarked on portal implementations. Today we will examine Liferay vs. SharePoint.

Earlier this year, Gartner released its latest Magic Quadrant for horizontal portals. Microsoft SharePoint and Liferay were both named in the top 5 for leaders in horizontal portals.   But, who exactly is using Liferay and SharePoint and who are some of their top global customers? Below is a brief summary on the customers and industries using the Liferay and SharePoint platforms.

Liferay – who is using it?

Liferay is the leading open source portal server.  Many enterprises are using Liferay to build robust business solutions that deliver long-term value and results.  The company has seen rapid growth in the past few years.  Liferay is an all-in-one enterprise portal with broad product capabilities that provide a user-friendly interface where you can centralize, share and collaborate.

Liferay has proven its real world performance globally with many clients across many diverse industries and business functions. It has been used in just about every industry around the world including automotive, education, government, healthcare, financial services, IT and Hi-Tech, media and entertainment and more.  It is primarily used for corporate websites, intranets and extranets but is highly scalable and easy to launch with many out of the box features.  Major organizations around the world choose Liferay for a wide variety of business functions beyond the traditional portal:

–  Intranet portals
–  Extranet portals
–  Content and Document Management
–  Web publishing and shared workspaces
–  Enterprise collaboration
–  Social networking
–  Enterprise portals and identity management

Liferay is growing year over year, and has over 150,000 community members, 5 million downloads, over 500 apps in Liferay Marketplace, and 650 employees.

Some of Liferay’s key customers include:

Learn more about their case studies and the enterprises using Liferay across industries and around the world.

SharePoint – who is using it?

SharePoint’s usage is widespread because of its complex collaboration structure. The platform allows you to develop your business collaboration solutions fast and effectively.  Similar to Liferay, SharePoint’s customers are spread globally across just about every industry including retail, education, transportation and more.

According to Microsoft, SharePoint is adding approximately 20,000 SharePoint users every day.  That is approximately 7.3 million new SharePoint users every year. Similar to Liferay, the majority of customers use SharePoint as an internal tool; intranet/extranets and enterprise content and document management.

Here are the 5 most common uses of SharePoint:

  • Intranet portals
  • Extranet portals
  • Enterprise content and document management
  • Public facing websites
  • Forms & workflow

Some of Sharepoint’s key customers include:

Check out some of SharePoint’s case studies here.

Which portal you choose depends entirely on your industry, and what tasks and objectives you are looking to accomplish.  In a previous article, we took a look at some Alternatives to SharePoint.

Question:  What portal technology are you using for your business?  Are you satisfied with it? If not, what frustrates you about your portal technology?  Share your experiences below. 

Top 5 Application Servers for Liferay Deployments


We often get asked by new Liferay customers, “What application server should we deploy Liferay on?”. Our answer always starts with, ”What are you using today?”.   If your organization already runs applications using a Java stack then there’s a good chance you can leverage that experience when building out your Liferay environment.

That said, here are our top five application servers for Liferay deployments:

1. Tomcat 7.0

Tomcat is an open source web server and a supported Liferay application server developed by the Apache foundation. Tomcat powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. There’s a good chance it is running applications you are using on the Internet daily. The Tomcat application is lightweight, reliable and does its job well.

Veriday has been running Liferay on Tomcat for over five years. It has become our go-to server for new deployments.

2. JBoss 7.1 AS

JBoss is an open source application server developed by RedHat. It complies strictly with the Java EE 6.0 Web Specification and OSGi Core 4.2. If a full J2EE stack has appeal or your organization is already using JBoss internally, then it is a very good choice. This is also a supported Liferay application server.

3. Websphere 8.5

4. Weblogic 12c

IBM’s Websphere and Oracle’s Weblogic are the big guns in the Java application server world. They do their job very well but they don’t come cheap. If you’re using them, then you likely have good reasons and likely aren’t looking for change. Liferay fully supports both Websphere and Weblogic environments.

5. Glassfish v3.1

Glassfish is an open source application server originally developed by Sun and now sponsored by Oracle. It’s a solid choice if you’re already using Glassfish; however, its future is gloomy.  The Glassfish application server wouldn’t be Veriday’s top choice for new Liferay deployments.

A note about bundles.

Liferay is available bundled with Tomcat, JBoss and Glassfish. These bundles simplify the process of installing, configuring and running Liferay. Keep this in mind when selecting a target application server for your Liferay installation.

Application servers supported by Liferay but not making our top list include Resin, Tcat, tcServer and JOnAS, as well as older versions of our top 5 picks. Your business’s experience, existing infrastructure and support needs might lead you to one of these options.

Of course, we’re always available for support in your decision making process and designing your Liferay target Architecture. From conceptual planning to implementation to technical support, we have your back when it comes to making your next online project a milestone success.

Leveraging Liferay’s Scripting Console


Recently, we were faced with a challenge. We were trying to add a new page to Liferay with a specific friendly URL, but when we went to add the page we saw an error indicating the URL was already in use.


This particular site has hundreds of pages and the friendly URL didn’t appear to match any existing pages. We spent a few minutes clicking through, trying to locate the conflicting page, without any luck. The link led to a PDF on the site, but the page used to configure it still wasn’t obvious. We needed a better approach to solve this.

Enter Liferay’s scripting console.

In version 6.0, Liferay added a scripting console to the Server Administration section of the control panel. This is a powerful tool that we frequently use during development, debugging and prototyping.  This was the perfect place to run a quick query to figure this out.

We wrote a simple Groovy script. Given a friendly URL, this script will return the associated page name.

import com.liferay.portal.service.LayoutLocalServiceUtil

friendlyURL = '/third-quarter-report'
groupId = 10182
boolean privateLayout = false

layout1 = LayoutLocalServiceUtil.getFriendlyURLLayout(groupId, privateLayout, friendlyURL)

out.println(layout1.getFriendlyURL() + ' : ' + layout1.getName())


We ran the script, identified the conflicting page and worked with the marketing team to resolve the conflict.

Liferay’s scripting console has uses beyond simple queries. We also use it for:

  • Bulk user maintenance
  • Automating setup and configuration steps such as creating pages, users and roles
  • During development to quickly test Liferay APIs and verify results
  • As a rapid prototyping tool during development
  • Scripting repetitive operations and maintenance tasks
  • More complex queries to identify and resolve issues

The scripting console is a powerful feature within Liferay and shouldn’t be overlooked. For more information refer to:

Was this information useful? Share your comments below. 

Backbone.js Patterns: User notification system


At Veriday, we have been using Backbone.js to build rich web applications for a few years now. During this period we developed different patterns to make us more efficient building apps using Backbone.js as well as to enforce certain user experience standards across our applications. In this post, we will talk about our “BaseModel” and how we use it to enforce the same user experience when it comes to messages to the end user.

A traditional Backbone.js model will usually extend Backbone.Model:

Person = Backbone.Model.extend({
        initialize: function(){
            alert("Welcome to this world");
        }
});

The above is fine for learning and experimenting with Backbone.js, however, as your team and codebase grows you need to have a different pattern for all your application’s models.

var User = BaseModel.extend({
     defaults: {
     },
     initialize: function() {
         BaseModel.prototype.initialize.apply(this, arguments);
     }
});

In our applications, we have a model called BaseModel.js which serves exactly this purpose. When we declare a new model we extend our BaseModel. The above example shows how a “User” model is declared in Veriday’s JavaScript applications. Our BaseModel.js would extend the default Backbone.Model.

First, why should you care about this?

Before we elaborate on this, compare today’s web applications with the ones in the early 2000s. There are definitely richer experiences today across a variety of web applications. It wasn’t like that before, and the mere fact that you could do something online like pay your bills was revolutionary enough. Since the launch of Gmail on April 1st, 2004, we started to see richer experiences on the web; we started seeing JavaScript toolkits and full-blown frameworks to help us develop these rich experiences. This eventually led to the current JavaScript MVC-style frameworks we see today: Backbone.js, Angular.js, Ember.js, Knockout.js and many more. As JavaScript becomes more of a “first-class” citizen on the web, you will start to need common design patterns that have, until recently, been the preserve of the back-end. The front-end was an afterthought that got slapped on later and glued together through a myriad of tricks. So, here are two reasons why you should care:

  1. Eventually the default Backbone.Model will no longer satisfy your needs and you will need to change it. Modifying the Backbone.js source code is not the right answer for that.
  2. Eventually you might have to introduce new behaviour to all your models. Copying and pasting this new behaviour across all your models is not the right answer for that.

How can we implement subclassing in JS?

JavaScript’s inheritance model is prototypal, not class-based (like Java’s). We can still achieve something similar through the pattern we will describe here, along with some coding conventions that the team understands and, most importantly, follows. Even though our BaseModel.js could technically be instantiated, we never do that. The convention is that these Base*.js models (and we have several of them) should never be instantiated; they just get extended by other, instantiable models.
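The convention can be illustrated without Backbone.js at all. A minimal sketch using plain constructor functions (all names are illustrative):

```javascript
function BaseModel() {
  // mirror Backbone's behaviour: the constructor invokes initialize
  this.initialize.apply(this, arguments);
}
BaseModel.prototype.initialize = function () {
  this.initializedByBase = true;
};

function User() {
  BaseModel.apply(this, arguments);
}
User.prototype = Object.create(BaseModel.prototype);
User.prototype.constructor = User;
User.prototype.initialize = function () {
  // the convention: always call the parent's initialize first
  BaseModel.prototype.initialize.apply(this, arguments);
  this.role = 'user';
};
```

A `new User()` runs both initialize methods: the child’s, which delegates to the parent’s, giving the appearance of class-based inheritance.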

var User = BaseModel.extend({
     defaults: {
     },
     initialize: function() {
         BaseModel.prototype.initialize.apply(this, arguments);
     }
});

We accomplish this through the JavaScript prototype. In the case of the “User” above, the model’s initialize method is responsible for calling the parent’s initialize method. This gives us the appearance of subclassing and inheritance in JavaScript. All of our models’ initialize methods contain the call:

BaseModel.prototype.initialize.apply(this, arguments);
This is so we can inherit behaviour from the BaseModel.

var BaseModel = Backbone.Model.extend({
	defaults: {
	},
	initialize: function() {
	}
});

Another real world advantage of this approach is when we introduced Backbone-Relational into our models.  We only had to modify our BaseModel and extend the RelationalModel instead like this:

var BaseModel = Backbone.RelationalModel.extend({
	defaults: {
	},
	initialize: function() {
	}
});


How to implement a global notification system in Backbone.js

By global, we mean that each developer should never have to worry about implementing this for their component.  Each component should behave the same way in terms of notifications for success and error messages, and finally, if/when we ever change how our notification system operates, we can control that in one place across the application. This place is the BaseModel.js.


Above is an example of a success message in Digital Agent. Let’s take a look at how this works. In our BaseModel, we attach listeners to the different Backbone events we would like to listen to and act on. Today these are:

this.on("error", this.defaultErrorHandler, this);
this.on("invalid", this.defaultValidationErrorHandler, this);
this.on("sync", this.defaultSuccessHandler, this);
this.on("saving", this.defaultPendingHandler, this);
this.on("deleting", this.defaultPendingHandler, this);

If you are familiar with Backbone.js you might be wondering about the saving and deleting events since these are not Backbone.js events. However, because we have our BaseModel in place, we are able to change some of this behaviour. For example, take a look at this snippet from our BaseModel.sync method.

sync: function(method, model, options){

	var xhr = Backbone.sync(method, model, options);
	xhr.method = method;

	if (method == "create" || method == "update") {
		model.trigger('saving', model, xhr, options);
	} else if (method == "delete") {
		model.trigger('deleting', model, xhr, options);
	}

	return xhr;
}

Basically, we overwrite the sync method with our own.  We still call the original Backbone.sync method, but now we can do other things before or after it. In this case, we trigger new events for when Backbone.js is in the process of saving or deleting something. This is mostly a user experience concern: it lets you show different messages while models are being saved or deleted. Without this, you could not differentiate these in-flight states from the “sync” event, which only fires once the model has been synced with the server.
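Stripped of Backbone.js specifics, the wrapping pattern looks like this; a minimal sketch in plain JavaScript (makeSyncWrapper is a hypothetical helper, not part of our codebase):

```javascript
function makeSyncWrapper(baseSync, trigger) {
  return function (method, model, options) {
    // delegate to the underlying sync, then tag the request with its method
    var xhr = baseSync(method, model, options);
    xhr.method = method;
    // emit richer lifecycle events than the defaults provide
    if (method === 'create' || method === 'update') {
      trigger('saving', model, xhr, options);
    } else if (method === 'delete') {
      trigger('deleting', model, xhr, options);
    }
    return xhr;
  };
}
```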

this.message = new Message();
this.messageView = new MessageView({
     model: this.message
});
this.message.on("change:uniqueId", this.messageView.render, this.messageView);

Also, in our BaseModel we make use of our Message view and model. These are responsible for handling messages that are returned by the server, or client side validation, or other error messages. Since we are in BaseModel.js, this.messageView is also available in all sub models for when we have a need to show the user a message.

Let’s look at the defaultSuccessHandler we wired up to the “sync” event above. We check what the method of the AJAX request was and show an appropriate message based on it. You can also see that we use a “defaultMessages” object. This object contains some default text, but again, because it lives in the BaseModel, any other model can provide its own messages. For example, in the BaseModel a successful save would show “Saved”; as you can see in the notification image above, our Page model provides its own message with more context around the action, i.e. that a page was saved.
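The override mechanics can be sketched with plain objects (no Backbone; the names mirror the post, but the message text is hypothetical):

```javascript
// Sketch: a sub-model inherits defaultMessages from BaseModel and
// overrides only the entries it needs.
var BaseModel = {
  defaultMessages: { success: "Saved", deleteSuccess: "Deleted" }
};

var PageModel = Object.create(BaseModel);
PageModel.defaultMessages = Object.assign({}, BaseModel.defaultMessages, {
  success: "Your page was saved"   // more context around the action
});

console.log(PageModel.defaultMessages.success);       // "Your page was saved"
console.log(PageModel.defaultMessages.deleteSuccess); // "Deleted" (inherited)
```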

defaultSuccessHandler: function(model, resp, options){
	//don't show a success message if we were just fetching from the server
	if(options.xhr.method == 'read'){
		return;
	}
	else if(options.xhr.method == 'create' || options.xhr.method == 'update') {
		// update the shared Message model; the Message view re-renders on change
		this.message.set({
			type: 'success',
			text: this.defaultMessages.success
		});
	}
	else if(options.xhr.method == 'delete') {
		this.message.set({
			type: 'success',
			text: this.defaultMessages.deleteSuccess
		});
	}
}

This works nicely with Backbone.js validation as well, since by default validation errors trigger an “invalid” event, which we also listen to. Now we can show client-side validation errors and errors returned from the back-end in the same way throughout the application.
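A minimal stand-in for how a failed validate short-circuits a save and feeds the same message pipeline (plain JS, no Backbone; the names and messages are illustrative):

```javascript
// Sketch of Backbone's validate/"invalid" flow: a truthy return value
// from validate means the attributes are invalid and the save is aborted.
var shown = [];
var model = {
  attributes: {},
  validate: function (attrs) {
    if (!attrs.name) { return "Name is required"; }
  },
  save: function (attrs) {
    var error = this.validate(attrs);
    if (error) {
      // Backbone would trigger "invalid"; our handler shows the message
      // through the same Message model used for server errors.
      shown.push({ type: "error", text: error });
      return false;
    }
    this.attributes = attrs;
    return true;
  }
};

model.save({});                 // fails client-side validation
model.save({ name: "Store" });  // passes
console.log(shown); // [{ type: "error", text: "Name is required" }]
```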

This was a sneak peek into one of our favourite Backbone.js patterns at Veriday. To wrap this post up:

  1. Always extend your own base model instead of Backbone.Model. Thank us when your code base crosses 30,000 lines of JavaScript and you need to make a big change to all your models.
  2. If you need to overwrite Backbone.js behaviour, always do that in your BaseModel, BaseCollection, or BaseView.

Found this blog post useful? Leave us a note below!

Rich Liferay Applications using Backbone.js and Jersey (Part 2)


In part 1 of this series, we described how Veriday builds rich Liferay portlets using Backbone.js. If you missed the first part or are unsure how to integrate Backbone.js into your Liferay, it will be helpful to read Part 1 first. The approach we describe in part 2 allows your team to be highly efficient and iterative by nature while building Liferay portlets (or any web-based software in general). On Digital Agent, we break our teams into groups of 2-3 developers. Usually the ratio of front-end to back-end is 1:1, but in some cases it could also go to 2:1, depending on complexity.

How does this approach improve our team’s agility and efficiency?

This approach allows front-end and back-end developers to proceed with their work in parallel, with no time wasted waiting on each other to finish their portion.

JSON as an interface

This requirement is important because it allows the whole team to proceed with their work and tackle each challenge in the most productive way, rather than stitching together throwaway scaffolding just so the rest of the team can build.

The first step is for the team to agree on the JSON contract between the front-end team and the back-end team. Here we answer questions such as: 1) what data is needed for this interface and 2) what should the data look like? We always start with what the end product of the front-end experience should be and then work on how to get that data returned in the format that the JSON contract specified.
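For instance, for a “Store” the agreed contract might look like the following (the field values here are sample data, and the owner value is hypothetical):

```json
{
  "name": "My Store",
  "address": "5450 Explorer Drive, Mississauga",
  "hours": "8am-5pm every day except weekends",
  "owner": "jsmith"
}
```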

The Backbone Model & Collection

Below is a code snippet from a Backbone.js model for a “Store”.

define([.. ],
     function(BaseModel) {
         var Store = BaseModel.extend({
             urlRoot: "/stores/",
             getOwner: function(){
                 return this.get("owner");
             }
         });
         return Store;
     });

From the above model we can see that the end point for this model is “/stores”. The corresponding stores collection is:

        var Stores = Backbone.Collection.extend({
            model: Store,
            url: "/stores/",
            initialize: function() { ... }
        });

The Jersey End Point

The Jersey web service below exposes the same “/stores” endpoint.

    @Path("/stores")
    public class StoreWebservice {

        @Resource(name = "storeService")
        private StoreService storeService;

        @GET
        @PreAuthorize("hasRole('Store Owner')")
        public List<StoreDto> get(@Context SecurityContext context) {
            List<StoreDto> result = getStoreService().getAllStores();
            return result;
        }
    }

The Jersey web service above defines the corresponding “/stores” endpoint that our Backbone.js model and collection point at. You can also see that the StoreWebservice has access to a “storeService”. This is where different business services can be injected into our JSON API. These other services can also be Liferay services, if needed. A typical pattern we use is to not directly call Liferay services from our web services. We typically wrap Liferay services within our own utility service to ensure Liferay service calls are contained instead of being scattered all over the application. We also follow this pattern in the front-end, where we wrap Liferay JavaScript methods with our own JavaScript utility object that contains these calls.
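As a sketch of that front-end wrapping pattern (the utility name and fallback behaviour are our illustration, not code from Digital Agent; Liferay.ThemeDisplay.getUserId() is the real portal API being wrapped):

```javascript
// All calls into the Liferay JS API go through one utility object,
// so the portal-specific calls stay contained in a single place.
var LiferayUtil = {
  // Fall back gracefully when running outside the portal (e.g. in tests).
  getCurrentUserId: function () {
    return (typeof Liferay !== "undefined" && Liferay.ThemeDisplay)
      ? Liferay.ThemeDisplay.getUserId()
      : null;
  }
};

console.log(LiferayUtil.getCurrentUserId()); // null outside the portal
```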

The list of “StoreDto” objects that is returned is basically the POJO representation of the Backbone.js model Store.js we showed above. The JSON behind Store.js is the “JSON as an interface” contract that our front-end and back-end developers agree on before proceeding.

So, how does this increase team productivity?

At this point, our application is nicely broken up into layers that people can work in without having to wait for others to complete their section. After agreeing on the JSON interface, a typical sprint progresses with the back-end developers implementing the new services and data-access methods that will extract the required data. The front-end developers proceed with creating the Backbone.js models and collections (ex. Store.js and Stores.js), the Jersey web service (ex. StoreWebservice) and the Java DTOs (ex. StoreDto). The front-end developers will even stub out the different methods of the Jersey web service, even just hardcoding a valid response.

 @PreAuthorize("hasRole('Store Owner')")
 public List<StoreDto> get(@Context SecurityContext context) {
     List<StoreDto> result = new ArrayList<StoreDto>();

     StoreDto store1 = new StoreDto();
     store1.setName("My Store");
     store1.setAddress("5450 Explorer Drive, Mississauga");
     store1.setHours("8am-5pm every day except weekends");
     result.add(store1);

     StoreDto store2 = new StoreDto();
     store2.setName("New Store");
     store2.setAddress("100 Main Street West, Hamilton");
     result.add(store2);

     return result;
 }

At this point, our team can proceed with building out their own areas of the application, with little dependency on each other’s components, early in the sprint. We push the integration towards the end of the two-week sprint, by which point we have iterated a few times over the front-end and back-end and have ironed out any unforeseen challenges. What is left is for our developers to wire up the methods that the front-end team defined in their Jersey classes to the actual business services that were implemented.

The approach is not perfect, but it definitely helps productivity from day 1. It allows developers who are passionate about the front-end to focus on the front-end, and those who love working on the back-end to focus on the back-end. Even our full-stack developers can take full advantage of this approach.

Being able to build applications in this style is also a testament to Liferay’s flexibility. Don’t be afraid to bring your own experience to your Liferay stack!

Liferay Portal, a Leader in the Gartner Magic Quadrant


As one of Liferay’s proud Canadian partners in portal technology, we were thrilled to see Liferay, provider of the world’s leading enterprise-class, open source portal, announce that it had been positioned by Gartner, Inc. in the Leaders Quadrant of the Magic Quadrant for Horizontal Portals.

Gartner, a leading information technology research and advisory company, placed Liferay in the Leaders Quadrant based on completeness of vision and ability to execute on that vision.

Gartner Magic Quadrant

For more about Liferay, visit Liferay’s website.

About Liferay

Liferay, Inc. is a leading provider of enterprise open source portal and collaboration software products, servicing Fortune 500 companies worldwide. Clients include Allianz, BASF, Cisco Systems, Lufthansa Flight Training, Rolex SA, Siemens AG, The French Ministry of Defense, and the United Nations. Liferay offers Enterprise Edition subscriptions, which provide access to emergency fixes, software updates, 24/7 support SLAs, and subscription-only features. Liferay also offers professional services and training to ensure successful deployments for its customers. Liferay, Liferay Portal, and the Liferay logo are trademarks or registered trademarks of Liferay, Inc., in the United States and other countries.

How to Manage Dozens of Themes?


In the world of web applications, there is a clear divide between the front-end (client) and the back-end (server). In the last several years, frameworks have emerged for both areas that offer substantial time savings in development and design. Libraries like jQuery, Backbone.js, and Underscore.js make life in the front-end much easier, and technologies like Spring, Hibernate, and Jersey reduce tedious rework and ease integration in the back-end.

It’s only logical that tools would evolve also for creating stylesheets, which can grow to gargantuan sizes in an enterprise portal environment. Enter LESS (and other CSS pre-processors), which provides JavaScript-like functionality to CSS – the ability to define variables, create functions, and nest rules, which results in being able to write better code, faster. In a Liferay portal with dozens of themes, the time savings achieved by using a pre-processor can grow to be substantial.


The attention to the styling of a website can often be eclipsed by the attention to the functional specs; how a website *works* is more important than how a website *looks*. If the styles are ignored for too long, however, especially in a Liferay portal environment with lots of themes, performing edits or upgrades can become hard to manage.

LESS allows for the declaration of variables, which greatly simplifies the reuse of colors, dimensions, and properties. Want to change the entire color palette of the site? Change 2 or 3 variables. Want to add a layout for large, widescreen monitors? Add a single variable and re-use logic for containers and columns. Want to change the size of every piece of text on the site? Define a base font-size and scale all other elements from that.
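As a sketch, with hypothetical variable names, that palette-swap idea looks like this in LESS:

```less
// Hypothetical palette and scale variables; change these few lines
// to re-skin an entire theme.
@brand-primary:  #2a6496;
@brand-accent:   #e8711a;
@base-font-size: 14px;

h1 { color: @brand-primary; font-size: @base-font-size * 2; }
a  { color: @brand-accent; }
```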

Variables also allow for consistency between themes – common elements like fonts, banners, and logos can be changed across dozens of themes by altering a single variable.


The use of functions (or mixins) in CSS pre-processors is well-documented; it’s easy to find libraries that will provide multiple vendor prefixes (-moz-, -webkit-, -o-, -ms-) or adjust multiple properties with a single parameter (border-radius, text-shadow). What about a function that will allow you to use different versions of images based on the style of the theme? By passing in parameters like size and color, it’s possible to create themes that are ‘aware’ of their layout and color palette and can provide corresponding images.

Here’s an example of a logo that can adjust its color and width, with a default of 280px:

.logo (@color, @size: 280px){
    background:url('../images/logos/example-@{color}-logo-@{size}.png') center center no-repeat;
}

To use this in a LESS file, this function simply needs to appear along with any other CSS properties:

#my-logo {
    .logo("black", 200);
}

This would output a logo with a background image source of:

background:url('../images/logos/example-black-logo-200.png') center center no-repeat;

Again, the use of LESS allows for more efficient, more flexible code that reduces rework, enabling common elements between themes that can easily be changed to match the overall aesthetic of the theme.


The benefits of a CSS pre-processor that allows nesting are two-fold. First, it saves time and keystrokes by not having to re-type selectors. For example:

#my-section {
  color: black;
  a {
    color: blue;
  }
}

is compiled by LESS into:

#my-section {
  color: black;
}
#my-section a {
  color: blue;
}

Second, nesting ensures that all of the style definitions have the proper top-level selector. This is especially important in a portal environment, where there is no guarantee that a given class is not in use. From the example above, if ‘#my-section’ contained all of the edits, there is no chance that another, more specific selector (from either the Liferay portal or the browser styles) will take precedence. Avoiding these conflicting CSS rules is a huge time saver and prevents the front-end team from having to play ‘CSS Detective’ more than necessary.

In conclusion, the addition of a CSS pre-processor to any development environment can be a great quality-of-life improvement by increasing the productivity and consistency of the front-end development team while simultaneously decreasing maintenance overhead. When applied to Liferay portal, a pre-processor can assist in ease of re-use between themes and avoiding collisions and overwrites from existing styles.