Bringing Inline Editing capabilities into Liferay Portal 6.1 and 6.0

General Blogs, August 23, 2013, by Jonas Yuan

 
Inline editing is a quick way to update content by making changes directly in place. CKEditor 4 (http://ckeditor.com) comes with inline editing, an HTML5-based feature that allows users to edit pages directly in their final state. Inline editing gives us a perfect idea of how the content will look, without resorting to impractical "preview" functions.
 
Affected content includes, but is not limited to, Web Content articles, Blogs entries, Wiki pages, Message Boards messages, ForgeCart products (http://demo.forgelife.com), Knowledge Base articles, etc. This article addresses how to bring inline editing capabilities (CKEditor 4) into Liferay Portal 6.1 (GA1, GA2 and GA3).
 
CKEditor 4 in portal 6.1 GA2, advanced editing mode
 

 

Features

Inline editing mode for web content
 
Icons: Source, Save, Cancel, Edit, etc.
  • Save: save the current changes;
  • Cancel: discard the current changes;
  • Edit: go to advanced editing mode;
  • Source: view / update the content source.
 
Inline editing mode for blogs entries
 
 

Inline editing mode for Wiki pages

 

 
Inline editing mode for Message Boards messages
 
 
Enable / disable inline editing via "Edit Controls"
 
  • Inline editing is enabled when "Edit Controls" is checked;
  • inline editing is disabled when "Edit Controls" is unchecked.
 
 

Implementation

 
The following is a summary of the inline editing feature implementation.
 
  • First, add attributes to the UI tag class, InputEditorTag.java:
_inlineEdit = false;
_inlineEditSaveURL = null;
_inlineEditContentURL = null;
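For illustration, the change in InputEditorTag.java boils down to three new attributes plus their JSP tag setters. A minimal self-contained sketch (the real class extends Liferay's IncludeTag and contains much more; everything here beyond the three attribute names is illustrative):

```java
// Illustrative sketch only: the real InputEditorTag extends Liferay's
// IncludeTag. This shows just the three new attributes with the standard
// JSP tag attribute setter pattern that the TLD declarations rely on.
class InputEditorTagSketch {

    private boolean _inlineEdit = false;
    private String _inlineEditSaveURL = null;
    private String _inlineEditContentURL = null;

    // Setters invoked by the JSP container for the declared attributes.
    public void setInlineEdit(boolean inlineEdit) {
        _inlineEdit = inlineEdit;
    }

    public void setInlineEditSaveURL(String inlineEditSaveURL) {
        _inlineEditSaveURL = inlineEditSaveURL;
    }

    public void setInlineEditContentURL(String inlineEditContentURL) {
        _inlineEditContentURL = inlineEditContentURL;
    }

    public boolean isInlineEdit() {
        return _inlineEdit;
    }

    public String getInlineEditSaveURL() {
        return _inlineEditSaveURL;
    }

    public String getInlineEditContentURL() {
        return _inlineEditContentURL;
    }
}
```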
 
  • Then, declare the attributes in the UI tag library descriptor, liferay-ui.tld:
<attribute>
    <name>inlineEdit</name>
    <required>false</required>
    <rtexprvalue>true</rtexprvalue>
</attribute>
 
<attribute>
    <name>inlineEditSaveURL</name>
    <required>false</required>
    <rtexprvalue>true</rtexprvalue>
</attribute>
 
<attribute>
    <name>inlineEditContentURL</name>
    <required>false</required>
    <rtexprvalue>true</rtexprvalue>
</attribute>
  • Finally, add CKEditor 4 (the latest version) as a hook plugin

 

Summary

Inline editing is a nice feature for Web Content articles, Blogs entries, Wiki pages, Message Boards messages, ForgeCart products, Knowledge Base articles, etc. The hook (forgecart-ckeditor-hook) was designed for both Liferay Portal 6.1 and 6.0. The WAR (forgecart-ckeditor-hook) is available on demand.
 

Drag and Drop automatic file upload in Liferay CMS and ForgeCart, a complete eCommerce solution

General Blogs, November 17, 2012, by Jonas Yuan

There are a lot of demos and tutorials that show you how to drag-and-drop a file into the browser and then render it on the page. This article addresses how to bring this feature (drag-and-drop automatic file upload) into Liferay CMS and ForgeCart, a complete eCommerce solution.

Introduction

Drag-and-drop upload includes the following specific tasks (refer to the blog post Drag and Drop and Automatically Send to the Server):
  • Capture the drop event and read its data.
  • Post that binary data to the server.
  • Provide feedback on the progress of the upload.
  • Optionally render a preview of what's being uploaded and its status.
To achieve all of these, the following HTML5 and non-HTML5 APIs are required:
  • Drag and drop (DnD), a first class citizen in HTML5 - The specification defines an event-based mechanism, JavaScript API, and additional markup for declaring that just about any type of element be draggable on a page. 
  • FormData object represents an ordered collection of entries. Each entry has a name and value.
  • XMLHttpRequest (XHR) is an API available in web browser scripting languages such as JavaScript. It is used to send HTTP or HTTPS requests directly to a web server and load the server response data directly back into the script.
  • FileReader interface provides methods to read File objects or Blob objects into memory, and to access the data from those Files or Blobs using progress events and event handler attributes; it inherits from EventTarget. It is desirable to read data from file systems asynchronously in the main thread of user agents. 

In CMS

JavaScript code snippet

function DnDUpload(d, u, m, t) {
 
    var dropZone = d,
        url = u,
        max = m,
        ms = t,
        fileQueue = [];
    // ...
    var showFileInList = function (ev) {
        var file = ev.target.file;
        // ...
    };
    var uploadFile = function (file, li) {
        var xhr = new XMLHttpRequest();
        // ...
    };
}
 

Drag-and-drop automatic file upload into the server's Documents and Media, integrated with tree view.

Online demo (Liferay Portal 6.1 GA2)

Test account: test@forgelife.com/test
 
Note that FileReader is currently supported in Firefox, Chrome and Opera; IE 9 and Safari 5.x do not support FileReader yet. Thus this feature works well only in Firefox, Chrome and Opera.
 

In ForgeCart

JavaScript settings
 
window.onload = function () {
    if (typeof FileReader == "undefined") {
        alert("<%= dndTitle %>");
        return;
    }
 
    var dndUpload = new DnDUpload(
        document.getElementById("fileDrop"),
        "<%= uploadURL %>",
        <%= fileMaxSize * 1024 %>,
        2000
    );
    dndUpload.init();
};
 
Drag and drop, automatic image upload
 

 

Drag and drop, automatic file upload

 

Download

 
ForgeCart 1.0 beta for Liferay portal 6.1 GA1 or GA2.
 
 

Seamless Integration of OpenX Ad Server in Liferay Portal – ForgeAds Publisher

General Blogs, March 27, 2012, by Jonas Yuan

OpenX Source is an open-source, PHP-based advertising server featuring an integrated banner management interface and a tracking system for gathering statistics. It enables website administrators to rotate banners from in-house advertising campaigns as well as from paid or third-party sources, such as Google's AdSense. It provides standard banner rotation, click tracking, zone-based ad selection, zone-based campaign targeting, direct ad selection, ad targeting (per browser, domain, language, etc.), ad capping and full Adobe Flash support. Refer to http://www.openx.com/.

The following diagram shows the main terms used in the Ad server OpenX 2.8.

 

Ad server: Normally operated by a third party, an ad server delivers and tracks ads on websites. Ad servers perform a useful role in building trust between advertisers and publishers since the statistics they supply are likely to be free of spin.

Ad space: The area on a web page set aside for the display of ads.

Ad units: These refer to the different types of ads which appear online, including banners, interstitials, pop-ups, skyscrapers and text links.

Affiliate marketing: An advertising system based on the CPA (Cost Per Action) payment method, where websites display advertisers' banners for free but receive payment when registrations or sales result from click throughs.

AIDA: Attention, Interest, Desire and Action - a simple marketing acronym that describes the supposed path to successful selling.

Banner: This is an ad that appears on a web page which is typically hyperlinked to an advertiser's website. Banners can be images (GIF, JPEG, PNG), JavaScript programs or multimedia objects (Flash, Java, Shockwave etc.).

Campaign: Refers to an advertising project in its entirety, from conception through creation and buying to tracking and final analysis.

For more online terms, refer to online-advertising-terms.

In general, the OpenX Ad server provides three main parts:

  1. Inventory - banners management
  2. Delivery – ads publishing
  3. Statistics – ads tracking and reporting

This article is going to address an extension for part 2 – seamless integration of OpenX Ad Server in Liferay Portal 6, implemented as a portlet plugin called ForgeAds Publisher.

This plugin has been released for different versions of Liferay Portal, such as 6.0 CE/EE and 6.1 CE/EE. If needed, it can run in other Liferay versions, like 5.2 and 5.1, with some additional customization.

Integration Overview

The following diagram depicts an integration overview.

On the Ad server side, it includes, but is not limited to, inventory management, user & role management, AIDA management, etc. On the portal side, the involved components cover user and role management, auditing, reporting and ads publisher.

The integration approach

The integration is configurable as follows:

## Oracle 10g, 11g
#jdbc.mdb.driverClassName=oracle.jdbc.driver.OracleDriver
#jdbc.mdb.url=jdbc:oracle:thin:@localhost:1521:XE
#jdbc.mdb.username=pradm
#jdbc.mdb.password=pradm
#jdbc.mdb.liferay.pool.provider=dbcp
 
## MySQL
jdbc.mdb.driverClassName=com.mysql.jdbc.Driver
jdbc.mdb.url=jdbc:mysql://108.227.160.137:3306/openx_2880?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.mdb.username=openx
jdbc.mdb.password=openx
 
## OpenX ad server settings
openx.ad.server.url=http://ads.forgelife.net
openx.ad.server.contextpath=/openx/www
openx.ad.server.delivery.ck=/delivery/ck.php?
openx.ad.server.delivery.ai=/delivery/ai.php?
openx.ad.server.image.contextpath=/images/
 
## OpenX content types
openx.ad.server.content.types=html,jpeg,png,gif,txt

As shown above, it supports all major relational databases like Oracle and MySQL and can link to any OpenX Ad Server.

The Ad server and Liferay Portal with ForgeAds Publisher can be installed on the same server (with different port numbers) or two different servers.
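For illustration only, reading those jdbc.mdb.* keys in Java could be sketched as follows; the actual ForgeAds Publisher loading code is not shown in this article, and the class and field names here are hypothetical:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Hypothetical sketch of how the portlet might read its jdbc.mdb.*
// settings. The key names mirror the configuration fragment above;
// the class and field names are illustrative, not Liferay API.
class MdbSettings {

    public final String driverClassName;
    public final String url;
    public final String username;
    public final String password;

    public MdbSettings(Properties props) {
        driverClassName = props.getProperty("jdbc.mdb.driverClassName");
        url = props.getProperty("jdbc.mdb.url");
        username = props.getProperty("jdbc.mdb.username");
        password = props.getProperty("jdbc.mdb.password");
    }

    // Convenience: parse settings from a properties-format string.
    public static MdbSettings parse(String propertiesText) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(propertiesText));
        return new MdbSettings(props);
    }
}
```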

ForgeAds Publisher

ForgeAds Publisher has the following features:
 
1) Publishing ads with Liferay comments and ratings
 
2) Sharing an ad to any web page through the typical Liferay Portal sharing feature
 
3) Flexible search capabilities for the ForgeAds Publisher administrator.
 
Basic search by keyword:
 
Advanced search:
 
4) Automatic synchronization with the OpenX Ad Server on the Liferay ForgeAds Publisher configuration page

 

Live demo

Demo server (Liferay portal 6.1 GA1): http://ads.forgelife.net:8090
Demo account: demo@forgelife.com/ForgeAds

OpenX Test server (OpenX 2.8.8): http://openx.forgelife.net/openx
Admin account: openx/openx - You are the OpenX Ad server admin, so please do not change the admin password or permissions.

Download URL

6.1 CE / EE

6.0.6 CE and 6.0 EE

5.2 CE / EE

5.1 CE / EE

Assets remote publishing enhancement in Liferay portal

General Blogs, December 31, 2011, by Jonas Yuan

The portal provides remote staging and publishing capabilities, through which users can select subsets of pages and data and transfer them to the live site of a remote portal instance. With this feature, we can export the selected data to a group of a remote portal instance or to another group in the same portal instance. The LAR export and import features are used for remote staging and publishing. These features are implemented in the PortletDataHandler API. As mentioned earlier, the intent of this API is to import and export application content to and from the portal in a database-agnostic fashion, for both portal core assets and custom assets.

The following diagram depicts an overview of remote staging and remote publishing. The Stage has a set of portal core assets, custom assets and groups of users. First, the portal exports the related portal core assets and custom assets, based on the current user's permissions, as a LAR zip file. Then the portal transfers this LAR zip file to the Live through a tunnel-web HTTP or HTTPS call. The Live imports the related portal core assets and custom assets from the LAR zip file, based on the same user's permissions.

Abstracted from the Liferay development cookbook: Liferay Portal Systems Development (for Liferay Portal 6.1 or above)

Staging and publishing are based on a page (a set of portlets). Portlets that are check-marked will be staged, and are called staged portlets. This means that their data is published automatically whenever a page containing them is published. The following is a list of staged portlets:

  • Blogs (*)
  • Bookmarks (*)
  • Calendar (*)
  • Documents and Media (*)
  • Documents and Media Display
  • Dynamic Data Mapping (*)
  • Knowledge Base (Admin) (*) - or any custom plugins which have LAR import and export configuration.
  • Message Boards (*)
  • OpenSocial Gadget Publisher
  • Page Comments
  • Page Ratings
  • Polls (*)
  • Polls Display
  • RSS
  • WSRP (*)
  • Web Content (*)
  • Web Content Display (*)
  • Wiki
  • Wiki Display 

Portlets that are disabled and check-marked are always exported automatically, even if they are not on the page. Those that are disabled and not check-marked are never published automatically. Note that collaboration portlets, such as Blogs, Message Boards and Wiki, are excluded from staging by default, as their data typically originates in the Live.

As you can see, the staging and publishing capabilities are portlet-based. If a staged portlet contains one asset, that asset gets published; for example, the Web Content Display portlet can display one web content article at a time. If a staged portlet contains many assets, those assets get published, filtered by the selected range (All, From Last Publish Date, Date Range, Last, etc.). In real cases, however, it would be nice to be able to remotely publish assets directly at the asset level.

This article will address the following remote publishing enhancements:

  • Add asset-level remote-publishing capability (LPS-17235)
  • Keep the category UID the same (AS IS) when importing or remote publishing (LPS-22092)
  • Keep the asset UID the same (AS IS) when importing or remote publishing (LPS-17550)

Asset-level remote-publishing

Currently the remote publishing feature is page-based. It would be nice to be able to remotely publish assets directly at the asset level. In fact, it would be ideal if remote publishing supported scheduling and publishing at both the layout level and the asset level.

Use case

  • Create web content articles A1, A2 and A3 as user A and / or user B
  • Schedule to publish A1 immediately as user A
  • Schedule to publish A2 in 10 minutes as user A
  • Schedule to publish A1 in 30 minutes as user B
  • Schedule to publish A3 in 60 minutes as user B

This feature could be implemented as a plugin called asset-remote-publish-portlet.
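The per-asset scheduling in the use case above could be sketched with a plain ScheduledExecutorService. This is an illustrative skeleton, not the plugin's actual code; publishToLive() is a placeholder for the real remote-publishing call:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative skeleton of per-asset publish scheduling; the real
// asset-remote-publish-portlet implementation is not shown here.
class AssetPublishScheduler {

    private final ScheduledExecutorService _scheduler =
        Executors.newSingleThreadScheduledExecutor();

    // Schedule one asset for remote publishing after the given delay.
    public ScheduledFuture<?> schedulePublish(
        String articleId, long delay, TimeUnit unit) {

        return _scheduler.schedule(() -> publishToLive(articleId), delay, unit);
    }

    // Placeholder: transfer the single asset's LAR to the remote Live.
    protected void publishToLive(String articleId) {
        System.out.println("Published " + articleId + " to the Live");
    }

    public void shutdown() {
        _scheduler.shutdown();
    }
}
```

In the use case, user A would call schedulePublish("A1", 0, TimeUnit.MINUTES) and schedulePublish("A2", 10, TimeUnit.MINUTES), and so on.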

Screenshots of the plugin implementation:

Assets creation, using web content as an example

Assets selection

Remote publishing settings - reuse default portal settings

Publish selected assets - from the Stage to the Live

As you can see, you can publish one or many assets at a time, based on your selection.

Keeping the category UID same AS IS

This feature (keeping the category UID the same when importing or remote publishing) should be available for vocabularies, categories and tags.

Use case:

  • There are two environments - the stage and the remote Live.
  • In the Stage, create Journal Article A1 (with category C11) and B1 (with category C22). C11 and C22 are sibling categories (same level) in the vocabulary V1.
  • Push the articles A1 and B1 to the remote Live. The remote Live should have V1, C11 and C22. Both C11 and C22 belong to the vocabulary V1, and C11 and C22 are siblings.
  • In the Stage, update C11 and C22 so that C22 becomes a child category of C11.
  • Push the articles A1 and B1 to the remote Live again. The remote Live should have V1, C11 and C22. Both C11 and C22 belong to the vocabulary V1, and C22 should be the child of C11. No new categories related to C11 and C22 should be created, and the relationship of C11 and C22 should be updated automatically.

Implementation:

When exporting / importing categories, keep the category UID (and the vocabulary UID) the same, AS IS.
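A minimal sketch of this import rule follows; the Category class and the in-memory store are simplified stand-ins for Liferay's real entities and services, used only to illustrate the "same UID, no duplicates" behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the "keep the UID AS IS" import rule: if the
// Live side already has a category with the incoming UUID, update it in
// place instead of creating a duplicate.
class CategoryImporter {

    static class Category {
        final String uuid;
        String parentUuid;

        Category(String uuid, String parentUuid) {
            this.uuid = uuid;
            this.parentUuid = parentUuid;
        }
    }

    private final Map<String, Category> _liveStore = new HashMap<>();

    public Category importCategory(String uuid, String parentUuid) {
        Category existing = _liveStore.get(uuid);

        if (existing != null) {
            // Same UID as on the Stage: only the relationship is updated.
            existing.parentUuid = parentUuid;

            return existing;
        }

        Category created = new Category(uuid, parentUuid);

        _liveStore.put(uuid, created);

        return created;
    }
}
```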

Keeping asset UID same AS IS

This feature (keeping the asset UID the same when importing or remote publishing) should be available for any asset that has a UID field - a core asset like a Documents and Media Library document or a Web Content article, or a custom asset like a Knowledge Base article.

Scenario I:

  • There are two environments – the stage and the Live.
  • In the Stage, create Journal Articles A1 and B1, with B1 associated to A1 - applying the className-classPK pattern to JournalArticle.
  • Publish A1 and B1 from the Stage to the Live:
    • The A1 UID and the B1 UID should remain the same in the Live as in the Stage
    • The association of A1 and B1 should remain the same in the Live as in the Stage

Scenario II:

  • An editor prepares a release article with the friendly URL http://${stage.domain.name}/release/${uid}/${article.title} in the Stage
  • Before remote-publishing the release, he / she sends the release, with the expected friendly URL http://${live.domain.name}/release/${uid}/${article.title}, to the mailing list
  • Once the release is remote-published, everyone should get the same URL for the release, http://${live.domain.name}/release/${uid}/${article.title} - for example, /feature/626347/2011-Year-in-Review-Collaboration or /feature/626323/2011-Year-in-Review-Core-Networks

Implementation:

When exporting / importing assets, keep the asset UID the same, AS IS.
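Scenario II can be made concrete with a small sketch: because the UID survives remote publishing, the URLs built on the Stage and on the Live differ only in the domain. The slug rule below is an assumption for illustration; Liferay's actual title normalization is more involved:

```java
// Illustrative only: builds a friendly URL of the form
// http://<domain>/release/<uid>/<title-slug>. The slug rule here is a
// simplified stand-in for Liferay's actual title normalization.
class FriendlyUrlBuilder {

    public static String build(String domain, long uid, String title) {
        // Collapse any run of non-alphanumeric characters into a hyphen.
        String slug = title.trim().replaceAll("[^A-Za-z0-9]+", "-");

        return "http://" + domain + "/release/" + uid + "/" + slug;
    }
}
```

With the UID kept AS IS, build("stage.example.com", uid, title) and build("live.example.com", uid, title) differ only in the domain, so the URL announced before publishing stays valid afterwards.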

More expected staging and publishing features

  • WCM Staging: Synchronize Comments with production (LPS-11213)
  • WCM Staging - UX Check boxes remain checked when publishing to Live after subsequent visits (LPS-12392)
  • WCM Publishing - Option to disable approval process for articles (LPS-10456)
  • WCM Video Publishing - Embedding on Asset Creation and Linking (LPS-13683)
  • WCM Staging - Version/Reversions Publishing and Point in Time Previews (LPS-13562)
  • Ability to remotely publish assets based on versions (LPS-17395)

Download URLs

6.0.12

6.1 CE
 

Indexer post processor hook in Liferay 6.1

General Blogs, November 30, 2011, by Jonas Yuan

Hooks let you catch hold of the properties and JSP files of a portal instance, as if catching them with a hook. Hook plugins are powerful plugins that complement portlets, themes, layout templates and web modules. A hook plugin can, but does not have to, be combined with a portlet plugin or a web plugin. For instance, so-portlet is a portlet plugin for Social Office that includes hooks; a hook plugin can also simply provide translations or override JSP pages. In general, hooks are a very helpful tool for customizing the portal without touching the portal's own code, as shown in the following diagram. In addition, you can use hooks to provide patches for portal systems or Social Office products.

In general, there are several kinds of hook elements, in this order:

  • portal-properties (called portal properties hooks),
  • language-properties (called language properties hooks),
  • custom-jsp-dir (called custom JSPs hooks),
  • custom-jsp-global (applying custom JSPs hooks globally or locally),
  • indexer post processors (called indexer hook),
  • service (called portal service hooks) – including model listeners and service wrappers,
  • servlet-filter and servlet-filter-mapping (called servlet-filter hooks),
  • struts-action (called portal struts action hooks)

As you can see, custom JSPs hooks can set a custom-jsp-dir that will overwrite portal JSPs. You can also add <custom-jsp-global>false</custom-jsp-global> (defaults to true) so that the JSPs hook will not apply globally but only to the current scope. Each site (or organization) can then choose to have that hook apply just for that site (or organization).

In addition, Liferay allows portal JSPs to be overloaded by theme templates. This pattern requires that, within the theme's templates folder, the complete path to the original JSP be maintained, with the file extension replaced to match the theme's chosen template language.
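The extension-swap rule just described can be sketched as a tiny helper; this is an illustrative utility, not a Liferay API, and the example path is only an assumption of a typical portal JSP location:

```java
// Illustrative helper: keep the original JSP path and swap only the
// extension to the theme's template language (e.g. .jsp -> .vm).
class ThemeOverridePath {

    public static String toTemplatePath(
        String jspPath, String templateExtension) {

        int dot = jspPath.lastIndexOf('.');

        if (dot < 0) {
            // No extension to swap; append the template extension.
            return jspPath + "." + templateExtension;
        }

        return jspPath.substring(0, dot + 1) + templateExtension;
    }
}
```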

Abstracted from the Liferay development cookbook: Liferay Portal Systems Development (for Liferay Portal 6.1 or above)

The indexer hook allows building a post-processing system on top of the existing indexer, so that hook plugin developers can modify the search summaries, indexes and queries. Refer to LPS-15811.

This article will show how to build a sample indexer post processor hook. Let's consider a use case. As you know, version 6.1 adds a lot of CMS (Documents and Media) features, such as:

  • Capability to manage different document types: basic document, image (Image Gallery is merged into Document Library), video, audio, etc.
  • Providing Dynamic Data List (DDL) and Dynamic Data Mapping (DDM) - see blogs Dynamic Data Lists I and Dynamic Data Lists II

For the document type “Image”, we are going to add the searchable keyword “Image”; for the document type “Video”, the searchable keyword “Video”. Thus whenever you search by the keyword “Image” or “Video”, you will be able to find documents of the document type “Image” or “Video”.

Implementation

You can build the sample indexer post processor hook plugin in the following steps.

  • Create a hook project, for example, sample-indexer-post-processor-hook
  • In liferay-hook.xml, add the following lines:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.1.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_1_0.dtd">
<hook>
    <portal-properties>portal.properties</portal-properties>
    <indexer-post-processor>
        <indexer-class-name>com.liferay.portlet.documentlibrary.model.DLFileEntry</indexer-class-name>
        <indexer-post-processor-impl>com.liferay.hook.indexer.SampleIndexerPostProcessor</indexer-post-processor-impl>
    </indexer-post-processor>
</hook>

As you can see, you set indexer-class-name to a model entity like DLFileEntry (com.liferay.portlet.documentlibrary.model.DLFileEntry), and indexer-post-processor-impl to com.liferay.hook.indexer.SampleIndexerPostProcessor, which must implement the interface com.liferay.portal.kernel.search.IndexerPostProcessor.

  • In web.xml, add the following lines:

<web-app>
    <display-name>sample-indexer-post-processor-hook</display-name>
    <listener>
        <listener-class>com.liferay.portal.kernel.servlet.SerializableSessionAttributeListener</listener-class>
    </listener>
    <listener>
        <listener-class>com.liferay.portal.kernel.servlet.HookContextListener</listener-class>
    </listener>
</web-app>

  • Create the sample indexer post processor class, com.liferay.hook.indexer.SampleIndexerPostProcessor:

public class SampleIndexerPostProcessor implements IndexerPostProcessor {

    public void postProcessContextQuery(
            BooleanQuery contextQuery, SearchContext searchContext)
        throws Exception {

        if (_log.isDebugEnabled()) {
            _log.debug("called postProcessContextQuery()");
        }
    }

    public void postProcessDocument(Document document, Object object)
        throws Exception {

        DLFileEntry dlFileEntry = (DLFileEntry)object;

        if (_log.isDebugEnabled()) {
            _log.debug(
                "called postProcessDocument(): uuid=" + dlFileEntry.getUuid() +
                    " title=" + dlFileEntry.getTitle() +
                    " fileEntryId=" + dlFileEntry.getFileEntryId());
        }

        String text = "";

        try {
            long fileEntryTypeId = dlFileEntry.getFileEntryTypeId();

            DLFileEntryType dlFileEntryType =
                DLFileEntryTypeLocalServiceUtil.getDLFileEntryType(
                    fileEntryTypeId);

            // Add your logic here: the document type name ("Image",
            // "Video", etc.) becomes searchable text.
            text = dlFileEntryType.getName();
        }
        catch (Exception e) {
            if (_log.isDebugEnabled()) {
                _log.debug(e, e);
            }
        }

        if (text.length() > 0) {
            document.addText(Field.URL, text);
        }
    }

    public void postProcessFullQuery(
            BooleanQuery fullQuery, SearchContext searchContext)
        throws Exception {

        if (_log.isDebugEnabled()) {
            _log.debug("called postProcessFullQuery()");
        }
    }

    public void postProcessSearchQuery(
            BooleanQuery searchQuery, SearchContext searchContext)
        throws Exception {

        if (_log.isDebugEnabled()) {
            _log.debug("called postProcessSearchQuery()");
        }
    }

    public void postProcessSummary(
        Summary summary, Document document, Locale locale, String snippet,
        PortletURL portletURL) {

        if (_log.isDebugEnabled()) {
            _log.debug("called postProcessSummary()");
        }
    }

    private static Log _log =
        LogFactoryUtil.getLog(SampleIndexerPostProcessor.class);

}

As shown in the code above, you can add your own logic for the summary, search query, context query and document. That's it.

Results

Search by keyword "Image"

Search by keyword "Video"

Download URL

Sample indexer post processor hook for Liferay 6.1, with source code.

What’s next?

The next post will be: a sample servlet-filter and servlet-filter-mapping hook in Liferay 6.1.
 

Liferay development cookbook: Liferay Portal Systems Development

General Blogs, November 12, 2011, by Jonas Yuan

Liferay Portal Systems Development

Build dynamic, content-rich and social systems on top of Liferay

  • Use Liferay tools (CMS, WCM, collaborative API and social API) to create your own Web sites and WAP sites with hands-on examples
  • Customize Liferay portal using the JSR-286 portlets, hooks, ext plugins, themes, layout templates, web plugins and diverse portlet bridges
  • Build your own web sites with kernel features like indexing, workflow, staging, scheduling, messaging, polling, tracking, auditing, reporting and more

In Detail

Liferay portal is one of the most mature portal frameworks on the market, offering many key business benefits that involve personalization, customization, content management systems, web content management, collaboration, social networking and workflow. If you are a Java developer who wants to build custom web sites and WAP sites using Liferay portal, this book is all you need.

Liferay Portal Systems Development shows Java developers how to use Liferay kernel (version 6.1 or above) as a framework to develop custom web and WAP systems which will help you to maximize your productivity gains. Get ready for a rich, friendly, intuitive, and collaborative end-user experience!

The clear, practical examples in the sample application that runs throughout this book will enable professional Java developers to build custom web sites, portals, and mobile applications using Liferay portal as a framework. You will learn how to make all of your organization's data and web content easily accessible by customizing Liferay into a single point of access. The book will also show you how to improve your inter-company communication by enhancing your web sites and WAP sites so that you can share content with colleagues.

What you will learn from this book

  • Provide complete coverage of Liferay Portal 6.1, for both the commercial and open source versions, within a real example Knowledge Base portlet
  • Build basic and advanced MVC portlets with the Service-Builder and RBAC permissions, helping you to build enterprise-ready Java web sites
  • Use Ext plugins and Hooks, allowing you to perform custom actions on your portal
  • Manage images, documents, videos, audios and records using the Document and Media Library and record management
  • Build web content and web forms using the WYSIWYG editors, Custom Attributes, assets tagging and classification, and Dynamic Data List
  • Create collaborative and social API complete with Ratings, Comments, Subscriptions, Collaboration, Social Equity, Asset Links and OpenSocial
  • Improve your portal with staging, scheduling, remote publishing, caching, clustering, indexing, search and workflow.
  • Build WAP mobile-based applications and leverage diverse portlet bridges like JSF, Struts, and Spring MVC
  • Use the common API for monitoring and auditing, tracking, scripting, reporting, messaging, and more, helping you to build dynamic web sites smoothly

Approach

This book focuses on teaching by example. Every chapter provides an overview, and then dives right into hands-on examples so you can see and play with the solution in your own environment.

Who this book is written for

This book is for Java developers who don't need any prior experience with Liferay portal. Although Liferay portal makes heavy use of open source frameworks, no prior experience of using these is assumed.

Table of Contents

Chapter 1: Liferay Enterprise Portal

Chapter 2: Service Builder and Development Environment

Chapter 3: Generic MVC Portlets

Chapter 4: Ext plugin and Hooks

Chapter 5: Enterprise Content Management

Chapter 6: DDL and WCM

Chapter 7: Collaborative API and Social API

Chapter 8: Indexing, Search and Workflow

Chapter 9: Staging, Scheduling, Publishing and Cache Clustering

Chapter 10: WAP and Portlet Bridges

Chapter 11: Common API

 

Building federated search indexer against diverse data sources on top of Liferay

General Blogs, November 1, 2011, by Jonas Yuan

In brief, OpenSearch allows publishing of search results in a format suitable for syndication and aggregation. Federated search is a simultaneous search of multiple online databases or web resources, and it is an emerging feature of automated, web-based library and information retrieval systems. The portal implements federated search based on the OpenSearch standard.
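The aggregation half of federated search can be sketched in a few lines: results already fetched and parsed from several OpenSearch-style sources are merged into one list ordered by relevance score. SearchResult below is a simplified, hypothetical stand-in for a parsed OpenSearch entry, not a portal class:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the aggregation step in federated search.
class FederatedMerger {

    static class SearchResult {
        final String source;
        final String title;
        final double score;

        SearchResult(String source, String title, double score) {
            this.source = source;
            this.title = title;
            this.score = score;
        }
    }

    // Merge per-source result lists into one list, highest score first.
    public static List<SearchResult> merge(
        List<List<SearchResult>> perSource) {

        List<SearchResult> merged = new ArrayList<>();

        for (List<SearchResult> results : perSource) {
            merged.addAll(results);
        }

        merged.sort(
            Comparator.comparingDouble(
                (SearchResult r) -> r.score).reversed());

        return merged;
    }
}
```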

Abstracted from the Liferay development cookbook: Liferay Portal Systems Development (for Liferay Portal 6.1 or above, coming out soon)

This article will address an approach: building a federated search indexer against diverse data sources on top of Liferay. First of all, let's consider a use case. A capital group manages many companies (each company has many users as members), hundreds of millions of accounts (the entity Account) and hundreds of millions of related documents (metadata and real content).

The following diagram shows the ER models of the current enterprise database (DB2, for example). In one chain, the entity Seller can have many Portfolios; the entity Portfolio can have many Accounts associated; the entity Account can have many Documents associated. In another chain, the entity Company can have many Users; the entity User can have many DocumentTypes associated; the entity DocumentType can have many Documents associated.

The following is a list of main requirements:

  • Keep the current enterprise database running AS IS for different existing applications;
  • Leverage the portal for user registration and membership management;
  • Leverage the portal RBAC for authorization;
  • Search documents based on accounts’ info and document meta-data
  • Audit users’ activities, track downloads and send update notifications

Solution overview

The following diagram shows an example: DB2 has database Schema I and Schema II; the portal has its own database and schema. Schema I is mapped to Plugin I in the portal, while Schema II is mapped to Plugin II.

The Status schema is used to store update results - that is, whenever the system updates an account and / or document metadata, it records the status as a row. This schema is mapped to the Status plugin in the portal.

In summary, the solution provides the following features:

  • Enhancement of the Service Builder for diverse data sources in plugins
  • Read-only approach - data lookup
  • Pull / push approach, using a trigger to update the indexer
  • Scheduling - checking the update status with the scheduler
  • Federated search indexer
  • Auditing users' activities, tracking downloads and setting up email notifications

Diverse data sources support in plugins through the service-builder

First of all, let's see how to support diverse data sources in plugins through the Service Builder. The Wiki articles Connect to a Database with Plugins SDK and Extend Tables in Another Database have addressed how to connect to a different database with the Plugins SDK manually. Here we focus on how to support diverse data sources in plugins through the Service Builder directly.

This new feature and the fix patch have been addressed in the ticket "Ability to connect different data sources in plugins through the service-builder". The following steps show the main idea.

  • Predefine a file with JDBC settings like jdbc.properties
  • Predefine template file spring-ext-xml.ftl in order to generate the Ext spring configuration ext-spring.xml
  • Update the class ServiceBuilder.java to generate jdbc.properties and ext-spring.xml in plugins.

After applying the fix patch, the service-builder now supports diverse data sources in plugins.
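To illustrate the idea, the generated ext-spring.xml could wire the extra data source roughly as below. This is a sketch only – the exact bean classes and layout produced by the service-builder may differ; the bean id mdbDataSource matches the data-source attribute used in service.xml later in this article, and Spring's standard DriverManagerDataSource stands in for whatever data source class is actually generated.

```xml
<?xml version="1.0"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Extra data source backed by the jdbc.mdb.* properties -->
    <bean id="mdbDataSource"
          class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="${jdbc.mdb.driverClassName}" />
        <property name="url" value="${jdbc.mdb.url}" />
        <property name="username" value="${jdbc.mdb.username}" />
        <property name="password" value="${jdbc.mdb.password}" />
    </bean>
</beans>
```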

Example

Let’s take a deeper look at a real example. The portal (6.0.6 in this example; you can also use 6.1 or 6.0 EE) uses MySQL as its default database. In DB2, there is an entity called DocumentType. Now we need to bring the document type data into the portal.

  • Apply the fix patch in the portal
  • Build service.xml in a plugin as follows

<service-builder package-path="com.liferay.mdb">
    <namespace>MDB</namespace>
    <entity name="DocumentType" table="DOCUMENTTYPE" uuid="false"
            local-service="true" remote-service="true"
            data-source="mdbDataSource">

        <!-- PK fields -->

        <column name="documentTypeId" type="long" primary="true" />

        <!-- Other fields -->

        <column name="name" type="String" />
        <column name="description" type="String" />
    </entity>
</service-builder>

  • Use the service-builder to generate all services.
  • Configure the target database (through jdbc.properties) in the plugin

## DB2
jdbc.mdb.driverClassName=com.ibm.db2.jcc.DB2Driver
jdbc.mdb.url=jdbc:db2://192.168.2.138:50000/mcm:deferPrepares=false;fullyMaterializeInputStreams=true;fullyMaterializeLobData=true;progressiveLocators=2;progressiveStreaming=2;
jdbc.mdb.username=lportal
jdbc.mdb.password=lportal

  • Deploy the plugin
  • Manually create database table DOCUMENTTYPE and insert sample data in the DB2

CREATE TABLE DOCUMENTTYPE (
    documentTypeId bigint not null primary key,
    name VARCHAR(512),
    description VARCHAR(512)
);
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1001,'Type A', 'Type A');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1002,'Type B', 'Type B');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1003,'Type C', 'Type C');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1004,'Type D', 'Type D');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1005,'Type E', 'Type E');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1006,'Type F', 'Type F');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1007,'Type G', 'Type G');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1008,'Type H', 'Type H');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1009,'Type I', 'Type I');
insert into DOCUMENTTYPE (documentTypeId, name, description) values (1010,'Type J', 'Type J');

The following screenshot shows the results of the plugin in the portal.

The pull/push data approach

The entities DocumentType and Company have only hundreds of rows, so they can be retrieved through the Liferay API directly. For example:

int count = 0;
List<DocumentType> list = new ArrayList<DocumentType>();

try {
    count = DocumentTypeLocalServiceUtil.getDocumentTypesCount();
    list = DocumentTypeLocalServiceUtil.getDocumentTypes(0, count);
}
catch (Exception e) {
    // log the failure instead of silently swallowing it
    e.printStackTrace();
}

This is the beauty of the service-builder.

The entities Account and Document have hundreds of millions of rows, so the Liferay API should be used for first-time indexing only, not for checking updates. Whenever an Account or Document row is updated in DB2, a trigger records the update in the status table.

In the portal, define a scheduler in the plugin that checks the status table on a regular basis (for example, every minute). Whenever rows are found in the status table, retrieve them (and the related Account and Document entities) from DB2, update the corresponding entries in the portal's indexer, and then remove the rows from the status table.
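The polling logic itself is plain Java. The sketch below models the status table as an in-memory queue so it can run standalone; in the real plugin, recordUpdate corresponds to the DB2 trigger and reindex would call the generated services and the portal indexer – those method names are illustrative, not Liferay APIs.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class StatusPoller {

    // Stand-in for the status table: each entry is the key of an
    // updated Account or Document row in DB2.
    private final Queue<Long> statusRows = new ArrayDeque<Long>();

    // In the real system this is done by the database trigger.
    public void recordUpdate(long entityId) {
        statusRows.add(entityId);
    }

    // Called by the scheduler (e.g. once a minute): drain the status
    // rows, re-index each updated entity, then discard the rows.
    public int pollOnce() {
        int reindexed = 0;
        Long entityId;
        while ((entityId = statusRows.poll()) != null) {
            reindex(entityId);
            reindexed++;
        }
        return reindexed;
    }

    private void reindex(long entityId) {
        // placeholder: fetch the entity from DB2 and update the indexer
    }
}
```

Because processed rows are removed, each scheduler run only pays for entities that actually changed since the last run.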

Building federated search indexer

Based on the entities Seller, Portfolio, Account, Company, User, Document Type, Document and view permissions in the portal, build a federated indexer for the plugin. From then on, you are able to search documents via the portal's default search engine.

Since all entities from DB2 are mounted into the portal, you can audit user activities, record downloads and set up email notifications whenever needed.

Summary

The approach – building a federated search indexer against diverse data sources on top of Liferay – is useful when integrating an existing enterprise database while keeping the existing applications running. And the new feature (LPS-22552) improves the functionality of the service-builder.

Auditing document-downloads via Document Library Record plugin

General Blogs 9 September 2011 By Jonas Yuan

As you may have noticed, version 6.1 adds a lot of CMS (Document Library) features. The following is a partial list of these features.

  • Capability to manage different document types: basic document, image (Image Gallery is merged into Document Library), video, audio, etc.
  • Providing Dynamic Data List (DDL) and Dynamic Data Mapping (DDM) - see blogs Dynamic Data Lists I and Dynamic Data Lists II
  • Ability to mount different repositories via CMIS (see blogs Mounting Multiple CMIS Repositories on Liferay 6.1)
  • Capability to define custom document types in the runtime
  • Capability to define custom meta-data of document types in the runtime
  • Ability to make custom document types and their meta-data searchable
  • Ability to apply Workflow on document-type-level
  • Ability to set up asset-links among assets (like Blogs Entry, Comments, Message Boards Message, Web Content, Calendar Event, Document Library Document, Wiki Page)
  • And more.

This article will introduce an additional feature: tracking who downloads documents in Liferay Portal 6.1 via the Document Library Record plugin.

Use cases

The plugin (Document Library Record plugin) will cover the following use cases.

  • Audit document-downloads: who downloaded the document and when.
  • Define resources to be audited: only defined resources get audited.
  • Distinguish auditing conditions: trace resources that require sign-in as well as resources that do not.
  • Report document-downloads: reporting by group, by user, by resource instance, by document-type and by date (weekly, monthly, yearly).

Abstracted from the book: Liferay Portal Systems Development (A Liferay cookbook for developers using version 6.1 or above)

Possible implementation

The plugin could be implemented in the following steps.

  • Specify two entities for Document Library Record: DLRecord and DLRecordLog.

The entity DLRecord is defined as follows.

<!-- PK fields -->
<column name="definitionId" type="long" primary="true" />
<!-- Audit fields -->
<column name="folderId" type="long" />
<column name="documentType" type="String" />
<column name="companyId" type="long" />
<column name="groupId" type="long" />
<column name="userId" type="long" />
<column name="createDate" type="Date" />
<column name="modifiedBy" type="long" />
<column name="modifiedDate" type="Date" />
<!-- Other fields -->
<column name="name" type="String" />
<column name="title" type="String" />
<column name="signinRequired" type="boolean" />

The entity DLRecordLog is defined as follows.

<!-- PK fields -->
<column name="logId" type="long" primary="true" />
<!-- Audit fields -->
<column name="definitionId" type="long" />
<column name="documentType" type="String" />
<column name="companyId" type="long" />
<column name="groupId" type="long" />
<column name="userId" type="long" />
<column name="createDate" type="Date" />

  • Record document-downloads.

The following is sample code.

try {
    DLRecordDefinition dLRecordDefinition =
        DLRecordDefinitionLocalServiceUtil.findByF_N(folderId, name);

    if (dLRecordDefinition != null) {
        ThemeDisplay themeDisplay =
            (ThemeDisplay)request.getAttribute(WebKeys.THEME_DISPLAY);

        if (dLRecordDefinition.isSigninRequired() &&
                !themeDisplay.isSignedIn()) {

            response.sendRedirect(themeDisplay.getURLSignIn());

            // stop here so the download is not logged for the redirect
            return;
        }

        long companyId = themeDisplay.getCompanyId();
        long groupId = themeDisplay.getScopeGroupId();
        long userId = themeDisplay.getUserId();

        DLRecordLogLocalServiceUtil.addDLRecordLog(
            dLRecordDefinition.getDefinitionId(), companyId, groupId, userId);
    }
}
catch (Exception e) {
    // log the failure instead of printing the stack trace
    e.printStackTrace();
}

Results

Here I share a few screenshots from the version 6.1 (revision 88826).

Resource search

Trace definitions

List records

When downloading a document (basic document, image, video, audio, etc.), audit it.

Summary

The plugin provides the capability to audit document-downloads. It works fine in version 6.1 and also works well in version 6.0 (6.0.6 CE, 6.0 EE SP1, 6.0 EE SP2, etc.). In addition, a ton of thanks to the Liferay development team, who created the draft version of the Document Library Record portlet.

Download URLs

6.0.12

6.0.11

6.0.10

6.0.6

6.1.0

Building social-media-rich newsroom on top of Liferay portal

General Blogs 21 June 2011 By Jonas Yuan

Loosely speaking, a newsroom is the place where journalists—reporters, editors, and producers, along with other staffers—work to gather news to be published in a newspaper or magazine or broadcast on television, cable or radio. As you already know, Liferay Portal is an enterprise web platform for building business solutions that deliver immediate results and long-term value. This article will use The Network as an example to show:

  • Which portal core features are leveraged to build a social-media-rich newsroom on top of Liferay.
  • Which new features are added to build a social-media-rich newsroom on top of Liferay.

The Network tells stories and shares information on the topics that are most important, namely: Video, Collaboration, Core Networks, Mobility, Security, Data Center, Culture … and, more specifically, Social Media. The Network requires an environment where the editorial team can prepare releases in staging and remotely publish them to production. Technically, it mainly manages five content types.

  • Feature
  • Press release
  • Video
  • Podcast
  • Bookmark

Main features overview

  • LDAP users and user group import – speed up the LDAP import processes and Liferay authentication

There is a big user base with over 50K user groups in LDAP. By default, logging in as an LDAP user took 20-30 seconds. After applying the new feature fix patch, it takes less than 1 second. Refer to LPS-14322.

  • Secure the user LDAP password – do not import LDAP user’s password

When importing users from LDAP, both the user's info and password get imported. Of course, all passwords stored in Liferay are secure, and the LDAP password mapping field is optional. Still, in some use cases the fact that Liferay stores users' passwords violates companies' security policy rules. With the new feature fix patch, Liferay does not import the user's password from LDAP. Refer to LPS-13933.

  • Remote staging and publishing mode

The entire editorial team works in the clustered staging environment; only the clustered production environment is open to the public.

  • Cache Clustering

Both staging and production are clustered.

  • Secure the LDAP user info in the staging

By default, there is a dummy user in both staging and production. All editorial users exist in staging only. Their daily work is to create / update press releases and schedule publishing to production. Refer to LPS-17251.

  • Keep UUID same in the remote publishing process

Export and import all assets and their associations: tags, categories, custom fields, etc.

  • Federated indexing and search mechanism

Index the five content types (and more) in one place, including tags, categories, custom fields, view counts, share counts, etc. Provide service URLs to search for the latest, most viewed, etc.

  • Configurable RSS feeds services

Make RSS feeds configurable. Search RSS results from the index – the performance is very good.

  • Home URL and friendly URL

Making home URL and web content SEO friendly URL highly configurable in multiple web sites

  • Category cloud

Search categories from the index instead of the database – the performance is very good.

  • Track Social media sharing and most shared

Provide a model to present social media sharing; provide service URLs to update social media share info; and provide service URLs to search the most shared content. This is a native Liferay tracking system – details will be addressed in another post.

  • Integrate Facebook activity, Twitter feeds, and so on.

Implementation examples

Latest news, most viewed and most shared – an overview

Searching by FROM date and TO date, filtering by content types, and categories, and pagination – an overview

  • Remote staging and publishing mode

Both staging and production have following settings

tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP

tunnel.servlet.https.required=false

  • Cache Clustering

Both staging and production have following clustering settings

net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml

ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

image.hook.file.system.root.dir=/share/liferay/data/images

dl.hook.file.system.root.dir=/share/liferay/data/document_library

cluster.link.enabled=true

lucene.replicate.write=true

  • Federated indexing and search mechanism

Latest news
?cmd=latest&limit=10&contentType=feature:press-release&categoryId=14788

Parameters:
categoryId=id
limit=number

Most viewed

?cmd=viewed&limit=10&contentType=feature

All News

?cmd=all-news&start=0&end=20&contentType=feature:press-release&displayDateLT=20110101&displayDateGT=20100101&categoryIds=14788:22287

Parameters:
contentType=all[:feature:press-release:video:podcast:bookmark]
displayDateLT=YYYYMMDD
displayDateGT=YYYYMMDD
categoryIds=id[:id]
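The colon-separated multi-value parameters above (contentType, categoryIds) can be parsed with a few lines of plain Java. The parameter format follows the URLs shown; the class name below is made up for illustration.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SearchParams {

    // Split a colon-separated parameter value such as
    // "feature:press-release" into its individual values.
    public static List<String> splitParam(String value) {
        if (value == null || value.isEmpty()) {
            return Collections.emptyList();
        }

        return Arrays.asList(value.split(":"));
    }
}
```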

  • Configurable RSS feeds services

Both feature and press-release – as you can see, no future-dated content is displayed. Isn't that a nice feature?

?cmd=rss&limit=10&contentType=feature|press-release

Feature

?cmd=rss&limit=10&contentType=feature

Press release

?cmd=rss&limit=10&contentType=press-release

  • Category cloud

?cmd=categorycloud&limit=10&contentType=feature:press-release

Parameters: contentType=feature[:press-release]

  • Track social media sharing and most shared

?cmd=shared&limit=10&contentType=feature&categoryId=14788

Expected features

Of course, there are still a set of features which are expected in newsroom systems. The following are some of them.

  • Asset-level scheduling and remote publishing capability, refer to LPS-17235
  • Asset-version-based scheduling and remote publishing capability, refer to LPS-17395
  • Page-version-based scheduling and remote publishing capability, refer to LPS-6431
  • Scheduling and remote publishing dashboard, refer to LPS-17400
  • Ability to show indexing progress in Control Panel, refer to LPS-18269

What is your wish-list? Your comments or suggestions?

Securing users’ information in the staging when using remote publishing function

General Blogs 16 May 2011 By Jonas Yuan

Liferay portal provides remote staging and publishing capability through which the users can select subsets of pages and data (both portal core assets and custom assets), and transfer them to the live site—that is, remote portal instance.

There are two types of staging: Local Live and Remote Live. With Local Live, a clone of the current site is created within the current portal environment. This clone contains copies of all existing pages and portlet data; the clone becomes the local staging site while the original becomes the local live site. With Remote Live, a connection is built between this site and a target site in a remote instance. The connection settings cover the network configuration that defines how to locate the remote instance when a publishing event occurs. This site becomes the remote staging site while the remote site becomes the remote live site.

In the previous post, we discussed how to apply LAR and remote publishing capabilities to custom assets in plugins. We also addressed a new feature: how to keep user passwords secure with LDAP integration when building the staging environment. This article will address another new feature: how to secure users’ information in staging when using Remote Live mode.

Let’s take a look at a scenario. There are clustered staging servers and clustered production servers. By default, there is a dummy user in both staging and production. All editorial users exist in staging only. Their daily work is to create / update press releases and schedule publishing to production.

Use case I: secure the users’ information in the staging when using remote publishing function.

Solution

In the staging server, pre-define a dummy user screen name in the portal.properties. This dummy user will impersonate all users who have permission to handle remote publishing.

## Set to true to enable the tunnel dummy user.
tunnel.dummy.user.enabled=false

## Tunnel dummy user screen name.
## The portal will use this dummy user screen name for remote publishing
## if the property tunnel.dummy.user.enabled is set to true.
tunnel.dummy.user.screenname=test

Then, before calling the remote group, impersonate the current user with the dummy user. The following is a code snippet.

User user = UserLocalServiceUtil.getUser(permissionChecker.getUserId());

if (_TUNNEL_DUMMY_USER_ENABLED) {
    user = UserLocalServiceUtil.getUserByScreenName(
        permissionChecker.getCompanyId(), _TUNNEL_DUMMY_USER_SCREENNAME);
}

Summary

As you can see, this fix is very useful when you want to secure user passwords in the LDAP server and secure all users' info in staging. For more details, you may check the status of the ticket LPS-17251.

Last but not least, I'd like to send special thanks to Eduardo Carneiro and Mark Wynne who provided these requirements.

Challenges

The feature of staging and remote publishing is cool! But real use cases require more new features on top. The following are some of them.

Challenge I: Asset-level scheduling and remote publishing capability. For example, editorial User A created press release articles A1, A2, A3. He / she is going to schedule A1 to publish in 30 minutes, A2 in 60 minutes and A3 in 120 minutes. Meanwhile, editorial User B created press release articles B1, B2, B3 and is going to schedule B1 to publish in 5 minutes, B2 in 180 minutes and B3 in 360 minutes. As you can see, the portal should provide asset-level scheduling and remote publishing capability. Refer to LPS-17235.

Challenge II: Asset-version-based scheduling and remote publishing capability. For example, editorial User A created press release article A with different versions Av1, Av2, Av3. He / she is going to schedule Av1 to publish in 30 minutes, Av2 in 60 minutes and Av3 in 120 minutes. A similar scenario applies to other editorial users. Refer to LPS-17395.

Challenge III: Page-version-based scheduling and remote publishing capability. For example, editorial User A updated the top story page and published it in three time slots as Pv1, Pv2 and Pv3. The final version of the top story page in Remote Live is Pv3. For some reason, he / she is going to revert to version Pv1. Refer to LPS-6431.

Challenge IV: Scheduling and remote publishing dashboard. This requires an admin dashboard to oversee remote publishing history and to manage ongoing scheduled events. An editorial administrator needs a dashboard where he / she can see all history – asset-level, asset-version-level, page-level and page-version-level remote-publishing content – as well as all ongoing scheduled events, with the ability to cancel some events if needed. Refer to LPS-17400.

Your comments / suggestions / feedback?

Highly configurable RSS feeds for web content and Struts action hooks

General Blogs 24 March 2011 By Jonas Yuan

RSS (Really Simple Syndication) is a family of web feed formats used to publish frequently updated works—such as blog entries, news headlines, audio, and video—in a standardized format. A web feed (or news feed) is a data format used for providing users with frequently updated content. Content distributors syndicate a web feed, thereby allowing users to subscribe to it.

Liferay Portal provides secure RSS feeds (see Ray’s blog post). All secure RSS feeds transparently support BASIC authentication, and Liferay portlets support the RSS feed types ATOM 1.0, RSS 1.0 and RSS 2.0.
Liferay Portal provides RSS feeds for its content types, like blogs entries, web content articles (by feeds), message boards threads, wiki pages, and so on.

Message Boards threads (using Struts Action): 

/c/message_boards/rss?p_l_id=10447&mbCategoryId=10157

Blogs entries (using Struts PortletAction): 

/web/guest/test/-/blogs/rss

Wiki pages (using Struts Action): 

/c/wiki/rss?p_l_id=10447&companyId=10132&nodeId=11635&title=FrontPage&type=rss&version=2.0

Asset Publisher (using Struts PortletAction): 

/web/guest/test/-/asset_publisher/7Pnm/rss?p_p_cacheability=cacheLevelPage

Web content feeds (use case I - using Struts PortletAction):

/web/guest/home/-/journal/rss/11301?doAsGroupId=10157&refererPlid=10447&_15_groupId=10157

Especially the portal provides RSS feeds for web content in following URLs:

Use case II (using Action): for a list of articles, including different versions

/c/journal/get_articles?groupId=@value@[&templateId=@value@][&structureId=@value@][&orderBy=@value@][&orderByType=@value@][&orderByCol=@value@][&type=@value@][&displayDateGT=@value@][&displayDateLT=@value@][&delta=@value@]

Use case III (using Action): for a given article by article ID

/c/journal/get_article?groupId=@value@&articleId=@value@

The above features are very useful in many use cases. Now let’s consider different requirements:

  • Use Case A: Web content RSS feeds should be presented in authentication public path, like /c/web_content/rss
  • Use Case B: Web content RSS feeds should be able to feed articles by different article types like press-release, feature, and types combination like press-release and feature
  • Use Case C: Web content RSS feeds should be able to feed articles by authors
  • Use Case D: Web content RSS feeds should be ordered by either modified date or display date; and only show the latest version.

This article will first address an implementation of the above use cases (A, B, C, D) in Liferay 6; then it will show how to leverage Struts action hooks to implement the same use cases in Liferay 6.1.

Making RSS feeds of web content highly configurable

In order to make RSS feeds configurable, add following properties.

## Custom RSS feeds

## default type and version: ATOM 1.0, RSS 1.0, RSS 2.0
custom.rss.default.type=atom
custom.rss.default.version=10

## custom RSS feeds display
## display style: abstract, full-content, content, title
custom.rss.display.style=content

## set default content field
## if custom.rss.display.style is set to content
custom.rss.default.content.field=CONTENT

## set maximum characters
custom.rss.display.length=350

## set default content type
custom.rss.default.content.type=press-release

## set default maximum value
custom.rss.default.delta.max=50

## enable or disable to display author
custom.rss.display.author.enabled=false

## Custom RSS feeds links URL
## enable or disable RSS links URL
## custom.rss.link.preview.enabled is in use if
## custom.rss.link.enabled is set to true.
## custom friendly URL like /${content.type}/${id}/${title} will be in use when
## custom.rss.link.preview.enabled is set to false and
## custom.rss.link.friendly.url.enabled is set to true
custom.rss.link.enabled=true
custom.rss.link.preview.enabled=false
custom.rss.link.friendly.url.enabled=true

## Custom RSS feeds title prefix
custom.rss.title.prefix=The Network
custom.rss.title.author.prefix=Contributed Articles by

## Custom RSS feeds via author
## possible values: press-release, feature
custom.rss.author.content.type=feature

New features in Liferay 6

Leveraging the auth public path /c/journal/get_articles, customize GetArticlesAction, which extends Action.

Testing results:

Press releases

Features

Press releases and features

Features by author

Using Struts action hooks in Liferay 6.1 

In brief, there are several kinds of hooks: portal properties, language properties, custom JSP, indexer post processors, service wrappers, servlet filters and servlets mapping, and struts actions in Liferay 6.1.

The Struts action hook provides the capability to override existing Struts actions and / or add new Struts actions from plugins: you can either add new Struts actions to the portal core, or override any existing action within the portal core.

The above use cases (A, B, C and D, using the auth public path /c/web_content/rss) can be implemented through portal properties and Struts action hooks in the following steps.

First, edit the liferay-hook.xml and add the following fragment.

<hook>
   <portal-properties>portal.properties</portal-properties>
   <struts-action>
      <struts-action-path>/web_content/rss</struts-action-path>
      <struts-action-impl>com.liferay.knowledgebase.hook.action.RSSAction</struts-action-impl>
   </struts-action>
</hook>

As shown in the above code, two kinds of hooks are specified: a portal properties hook and a Struts action hook.

Secondly, add the following line to the portal.properties.

auth.public.paths=/web_content/rss

The property auth.public.paths specifies public paths that don’t require authentication.
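Conceptually, the check behaves like a membership test on the comma-separated property value. The class below is a simplified model of that idea, not the portal's actual implementation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class AuthPublicPaths {

    private final Set<String> publicPaths;

    // authPublicPaths is a comma-separated property value,
    // e.g. "/web_content/rss,/portal/login"
    public AuthPublicPaths(String authPublicPaths) {
        publicPaths = new HashSet<String>(
            Arrays.asList(authPublicPaths.split(",")));
    }

    // true if the path may be served without authentication
    public boolean isPublic(String path) {
        return publicPaths.contains(path);
    }
}
```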

Last but not least, implement the same logic in com.liferay.knowledgebase.hook.action.RSSAction, which extends BaseStrutsAction and implements StrutsAction.

Abstracted from the book: Liferay Systems Development - Liferay Cookbook (coming out soon).

Summary

As you can see, the new features (implementing use cases A, B, C, D) make RSS feeds of web content highly configurable – related ticket LPS-16041. In Liferay 6 and previous versions, these features were implemented as a fix patch, while in Liferay 6.1 the same features are implemented in a plugin via a portal properties hook plus a Struts action hook.

Are these features useful?

Any suggestions or comments?

Making home URL and web content SEO friendly URL highly configurable in multiple web sites

General Blogs 9 March 2011 By Jonas Yuan

User interfaces may be affected when using friendly URL routing and mapping. URL routing or mapping can shorten the URL, as you can see in browsers. A route is a pattern for a URL. It includes named fragments automatically parsed from the URL into the parameter map. Every URL parsed by a route can also be generated from the resulting parameter map. Abstracted from the book: Liferay User Interface Development.

In addition, the portal is integrated with the UrlRewrite filter. Based on mod_rewrite for Apache, the UrlRewrite filter is a Java web filter for any J2EE compliant web application server, such as Resin, Orion, or Tomcat, which allows us to rewrite URLs before they get to the code. Refer to http://tuckey.org/urlrewrite/. The filter's performance is very good, and it allows convenient configuration of URLs when JkMount points to /* or the web server isn't running behind Apache. Abstracted from the book: Liferay Portal 6 Enterprise Intranets.

In brief, Liferay provides a friendly URL framework and integrates the UrlRewrite filter. This article will consider more specific use cases.

  • Use case A: home URL for each web site should be configurable; and hide home URL when hitting domain name only;
  • Use case B: keep existing web content SEO friendly URL AS IS; when hitting these web content friendly URL, display that web content without redirecting behaviors;
  • Use case C: provide new configurable SEO friendly URL for newly created web content; when hitting these web content friendly URL, display that web content without redirecting behaviors;

This article is going to show how to implement above use cases as new features in Liferay 6.

Configurable home URL

For use case A, consider this scenario: there is one portal instance and two groups (either communities or organizations), street and workshop. Each group has its own virtual host, like street.bookpub.com and workshop.bookpub.com. Each group has its own public pages and private pages.

For example, the group street has following page structure
Home
    Test
    Demo
Resource
Help

And each page has its own friendly URL like /home, /resource, /help, /test, /demo

For instance, the group workshop has following page structure
Help
Resource
Test
    Demo
Home

And each page has its own friendly URL like /help, /resource, /home, /test, /demo

Requirements:

  • Set home URL globally – scoped to portal-instance, like /test
  • When hitting the URL http://street.bookpub.com[/], it should show the page /test of the group street;
  • There is no redirect in the picture; that is, the URL http://street.bookpub.com[/] stays unchanged.
  • When hitting the URL http://workshop.bookpub.com[/], it should show the page /test of the group workshop;
  • Again, there is no redirect in the picture; that is, the URL http://workshop.bookpub.com[/] stays AS IS.

Is this a useful use case?

Web content SEO friendly URL

For use cases B and C, consider this scenario: thousands of press release articles have been released in the past ten years. Each article has its own SEO friendly URL, like http://street.bookpub.com/dlls/2011/prod_030311c.html. The URL has the following pattern.

http://street.bookpub.com/dlls/${year}/${title.name}.html

Where ${year} stands for the value of the year, like 2011, and ${title.name} stands for the SEO friendly title, created by the editorial team. The title may contain the characters a-z, A-Z, 0-9, and “ -_.” (space, hyphen, underscore, period).

Now it is time to use Liferay WCM to manage all press release articles, while all SEO friendly URLs should get supported AS IS.

Similarly, manage features, videos and podcasts in Liferay WCM as well. After migration, new press release articles, features, videos and podcasts will have new friendly URLs with the following pattern.

http://street.bookpub.com/${type.name}/${id.value}/${title.name}

Where ${type.name} can have the values release, feature, video and podcast; ${id.value} represents the article ID value; and ${title.name} stands for the SEO friendly URL title, created by the editorial team.
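A sketch of matching the new friendly URL pattern with a regular expression – the type values and the title character class follow the rules above, while the class and method names are made up for illustration:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FriendlyUrlMatcher {

    // /${type.name}/${id.value}/${title.name} where the title takes
    // a-z, A-Z, 0-9, space, hyphen, underscore and period
    private static final Pattern PATTERN = Pattern.compile(
        "/(release|feature|video|podcast)/(\\d+)/([a-zA-Z0-9 \\-_.]+)");

    // Returns {type, id, title} for a matching path, or null otherwise.
    public static String[] match(String path) {
        Matcher matcher = PATTERN.matcher(path);

        if (!matcher.matches()) {
            return null;
        }

        return new String[] {
            matcher.group(1), matcher.group(2), matcher.group(3)
        };
    }
}
```

A non-matching path (an unknown type, or a title with disallowed characters) falls through to null, so the caller can hand it off to the portal's regular URL handling.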

And set the list of article types, for example:

journal.article.types=announcements,blogs,general,news,release,feature,video,podcast

Are these useful use cases?

Solution proposals

  • Add following properties

## Home URL enabled or disabled
virtual.host.home.url.enabled=true

## Default home URL value. It will be used only if the property
## virtual.host.home.url.enabled is true, and home URL in Control Panel
## settings has value NULL.
## That is, home URL value could be reset in Control Panel
virtual.host.home.url.name=/test

  • Implement home URL as new feature.
  • Implement web content friendly URL as new feature

Testing results

home URL for street.bookpub.com

home URL for workshop.bookpub.com

Web content existing friendly URL

Web content SEO friendly URL

Summary

As you can see, Liferay can be used as a framework to present any kind of SEO friendly URL. For testing purposes, these features are available as a fix patch for Liferay 6.0.

Last but not least, I'd like to send special thanks to Eduardo Carneiro, Rommel Bermudez, Duncan McKeever, Mark Wynne who provided these requirements.

Applying jQuery 1.5 and UI 1.8.9 in Liferay 6.1

General Blogs 13 February 2011 By Jonas Yuan

jQuery 1.5 updates both jQuery Core and UI. This release includes many performance improvements and bug fixes, as well as a major rewrite of the Ajax module, which now comes with deferred callback management. jQuery Core also introduces a new feature called jQuery.sub, which allows new copies of jQuery to be created where properties and methods can be safely modified without affecting the global jQuery object. Refer to http://jquery.com/.

In previous blog posts, the following topics have been addressed.

This post will address how to apply jQuery 1.5 and UI 1.8.9 in Liferay 6.1 (at revision 72877, the development version that will become the next release of Liferay Portal).

Introduction

UI 1.8.9 (with jQuery 1.5) behaves much the same in Liferay 6.1 as UI 1.8.5 does in Liferay 6.0.

The following are a few screenshots.


How to achieve this?

The way to apply jQuery 1.5 and UI 1.8.9 in Liferay 6.1 is almost the same as applying jQuery 1.4.* and UI 1.8.5 in Liferay 6.0. The following items show the differences.

  • Use DTD 6.1

liferay-portlet.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE liferay-portlet-app PUBLIC "-//Liferay//DTD Portlet Application 6.1.0//EN" "http://www.liferay.com/dtd/liferay-portlet-app_6_1_0.dtd">

liferay-display.xml.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE display PUBLIC "-//Liferay//DTD Display 6.1.0//EN" "http://www.liferay.com/dtd/liferay-display_6_1_0.dtd">

  • Apply jQuery 1.5 and UI 1.8.9 in liferay-portlet.xml

<portlet>
    <portlet-name>1</portlet-name>
    <icon>/images/world.png</icon>
    <header-portlet-css>/css/ui-lightness/jquery-ui-1.8.9.custom.css</header-portlet-css>
    <header-portlet-javascript>/js/jquery-1.5.min.js</header-portlet-javascript>
    <footer-portlet-javascript>/js/jquery-ui-1.8.9.custom.min.js</footer-portlet-javascript>
    <css-class-wrapper>sample-jquery-portlet</css-class-wrapper>
</portlet>

Download 

You can simply download the WAR of the sample jQuery 1.5 and UI 1.8.9 plugin.

Liferay 6.1: sample-jquery-portlet-6.1.0.1.war

And then deploy it in Liferay 6.1.

Of course, you can test the same on Liferay 6.0.

Liferay 6.0: sample-jquery-portlet-6.0.5.1.war

That's it.

Bringing data type BigDecimal into Service-Builder

General Blogs, February 10, 2011, by Jonas Yuan

Any application dealing with currency should really use the Java class BigDecimal (java.math.BigDecimal): immutable, arbitrary-precision signed decimal numbers. A BigDecimal consists of an arbitrary-precision integer unscaled value and a 32-bit integer scale. If the scale is zero or positive, it is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. The value of the number represented by the BigDecimal is therefore unscaledValue × 10^(-scale). The BigDecimal class provides operations for arithmetic, scale manipulation, rounding, comparison, hashing, and format conversion.
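The unscaled value / scale relationship described above can be verified directly with java.math.BigDecimal:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// Demonstrates the definition above: value = unscaledValue * 10^(-scale).
public class BigDecimalBasics {

    public static void main(String[] args) {
        BigDecimal salary = new BigDecimal("123.45");

        System.out.println(salary.unscaledValue()); // 12345
        System.out.println(salary.scale());         // 2
        System.out.println(salary.precision());     // 5 (total digits)

        // Rebuild the same value from the unscaled value and the scale
        BigDecimal rebuilt = new BigDecimal(BigInteger.valueOf(12345), 2);
        System.out.println(rebuilt); // 123.45
    }
}
```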

The request for BigDecimal support in Service Builder has been open for many years. Now it is time to implement it, right? Service Builder supports the following mapping between Java data types and database data types.

  • boolean => BOOLEAN
  • int, Integer, short => INTEGER
  • long => LONG
  • float, double => DOUBLE
  • String => VARCHAR (<4000), STRING (=4000), TEXT (>4000)
  • Date => DATE

Supported primitive types include “boolean”, “int”, “short”, “long”, “float”, and “double”; the data types “String” and “Date” are supported, too. The previous blog post discussed how to apply the convert-null element to primitive types. This article will address how to bring the data type BigDecimal into Service Builder.

An example

Consider the following service.xml.

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 6.1.0//EN" "http://www.liferay.com/dtd/liferay-service-builder_6_1_0.dtd">

<service-builder package-path="com.liferay.sampleservicebuilder">
    <namespace>SSB</namespace>
    <entity name="Foo" uuid="true" local-service="true" remote-service="true">

        <!-- PK fields -->

        <column name="fooId" type="long" primary="true" />

        <!-- Group instance -->

        <column name="groupId" type="long" />

        <!-- Audit fields -->

        <column name="companyId" type="long" />
        <column name="userId" type="long" />
        <column name="userName" type="String" />
        <column name="createDate" type="Date" />
        <column name="modifiedDate" type="Date" />

        <!-- Other fields -->

        <column name="field1" type="String" />
        <column name="field2" type="boolean" />
        <column name="field3" type="int" />
        <column name="field4" type="Date" />
        <column name="field5" type="String" convert-null="false" />
        <column name="field6" type="BigDecimal" />
        <column name="field7" type="BigDecimal" constraint-precision="8" constraint-scale="3" />
        <column name="field8" type="BigDecimal" constraint-precision="10" constraint-scale="4" />

        <!-- Order -->

        <order by="asc">
            <order-column name="field1" />
        </order>

        <!-- Finder methods -->

        <finder name="Field2" return-type="Collection">
            <finder-column name="field2" />
        </finder>

        <!-- References -->

        <reference package-path="com.liferay.portlet.asset" entity="AssetEntry" />
        <reference package-path="com.liferay.portlet.asset" entity="AssetTag" />
    </entity>
</service-builder>

As shown in the above code, the columns “field6”, “field7”, and “field8” have the data type “BigDecimal”, and the columns “field7” and “field8” have constraint precisions (8, 10) and scales (3, 4).

Solution proposal

The following are the proposed steps to support the data type BigDecimal in Service Builder.

  • Add two constraint elements for the data type BigDecimal in the Service Builder DTD.

constraint-precision: M - the maximum number of digits (the precision);

constraint-scale: D - the number of digits to the right of the decimal point (the scale).

The SQL standard requires that the precision of NUMERIC(M,D) be exactly M digits. For DECIMAL(M,D), the standard requires a precision of at least M digits but permits more. In MySQL, DECIMAL(M,D) and NUMERIC(M,D) are the same, and both have a precision of exactly M digits.  For example, column salary could be presented in database SQL as follows.

salary NUMERIC(5,2)

As shown in the above code, constraint-precision has the value 5 (M=5) and constraint-scale has the value 2 (D=2).

  • Add a data type called BigDecimal in the Service Builder DTD

For example, column salary could be specified in service.xml as follows.

<column name="salary" type="BigDecimal" constraint-precision="5" constraint-scale="2" convert-null="false" />

  • Add an object type in the Liferay Portal core called com.liferay.portal.dao.orm.hibernate.BigDecimalType, which specifies the default value;
  • Map BigDecimal into NUMERIC(M,D) in the generated SQL scripts;
  • Map BigDecimal into the object type com.liferay.portal.dao.orm.hibernate.BigDecimalType if convert-null is set to true, or into org.hibernate.type.BigDecimalType if convert-null is set to false;
  • Map BigDecimal into the Java class java.math.BigDecimal when generating models and services.

With the above steps, we can make the data type BigDecimal available in Service Builder.
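How constraint-precision and constraint-scale relate to actual BigDecimal values can be illustrated with a small validity check. This helper is illustrative only and is not part of Service Builder.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative check: does a BigDecimal fit into NUMERIC(M, D)?
// After rounding to D decimal places, the total number of digits
// must not exceed M. Not part of Service Builder itself.
public class NumericFitCheck {

    public static boolean fits(BigDecimal value, int precision, int scale) {
        BigDecimal rounded = value.setScale(scale, RoundingMode.HALF_UP);
        return rounded.precision() <= precision;
    }

    public static void main(String[] args) {
        // salary NUMERIC(5,2): up to 999.99
        System.out.println(fits(new BigDecimal("999.99"), 5, 2));  // true
        System.out.println(fits(new BigDecimal("1000.00"), 5, 2)); // false
    }
}
```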

Results

The solution proposal could be delivered as a fix patch. After applying this fix patch, you can generate services, models, and SQL scripts with the new data type BigDecimal based on the above service.xml. The following is the generated SQL script.

create table SSB_Foo (
    uuid_ VARCHAR(75) null,
    fooId LONG not null primary key,
    groupId LONG,
    companyId LONG,
    userId LONG,
    userName VARCHAR(75) null,
    createDate DATE null,
    modifiedDate DATE null,
    field1 VARCHAR(75) null,
    field2 BOOLEAN,
    field3 INTEGER,
    field4 DATE null,
    field5 VARCHAR(75) null,
    field6 NUMERIC(5, 2) null,
    field7 NUMERIC(8, 3) null,
    field8 NUMERIC(10, 4) null
);

Summary

As you can see, the data type BigDecimal and the two elements constraint-precision and constraint-scale are ready in Service Builder, and the generated services, models, and SQL scripts support the data type BigDecimal as well.

What’s next? Check Service-Builder improvement.

Integrating Alfresco 3.4 in Liferay 6.1 via CMIS 1.0

General Blogs, January 31, 2011, by Jonas Yuan

Alfresco Enterprise 3.4 delivers social content management. This release delivers on Alfresco’s vision of an open platform for social content management: a more robust content platform for building any kind of content-rich application, along with a more social user interface for collaboration and document management. Refer to the Alfresco website.

In previous blog posts, we have discussed how to integrate Alfresco 3.2/3.3 with Liferay 6 via CMIS. This article continues that discussion: integrating Alfresco 3.4 in Liferay 6.1 via CMIS.

How to achieve it?

You can integrate Alfresco 3.4 EE via CMIS 1.0 in Liferay Portal 6.1 in the following steps.

  • Install Liferay Portal 6.1 (revision 71867) at $LIFERAY_HOME; the Liferay-Tomcat bundle is expressed as $TOMCAT_AS_DIR, and $PORTAL_ROOT_HOME = $TOMCAT_AS_DIR/webapps/ROOT;
  • Locate the Alfresco 3.4 EE WAR ${alfresco.war} and the Share WAR ${shared.war} at $ALFRESCO_INSTALLATION/tomcat/webapps;
  • Drop ${alfresco.war} and ${shared.war} into $TOMCAT_AS_DIR/webapps;
  • Create a database alfresco in MySQL.

drop database if exists alfresco;
create database alfresco character set utf8;
grant all on alfresco.* to 'alfresco'@'localhost' identified by 'alfresco' with grant option;
grant all on alfresco.* to 'alfresco'@'localhost.localdomain' identified by 'alfresco' with grant option;

  • Optionally, create database lportal in MySQL

drop database if exists lportal;
create database lportal character set utf8;
grant all on lportal.* to 'lportal'@'localhost' identified by 'lportal' with grant option;
grant all on lportal.* to 'lportal'@'localhost.localdomain' identified by 'lportal' with grant option;

  • Create a file named portal-ext.properties at $PORTAL_ROOT_HOME/WEB-INF/classes and add the following lines to it:

dl.hook.impl=com.liferay.documentlibrary.util.CMISHook
cmis.credentials.username=admin
cmis.credentials.password=admin
cmis.repository.url=http://localhost:8080/alfresco/service/api/cmis
cmis.repository.version=1.0
cmis.system.root.dir=Liferay Home

In particular, you may install Alfresco 3.4 on a different server (a virtual server, say). In that case, you need to update cmis.repository.url as follows:

cmis.repository.url=http://${domain.name}:${port.number}/alfresco/service/api/cmis

In addition, you may have a different admin account in Alfresco, in which case you need to update cmis.credentials.username and cmis.credentials.password:
cmis.credentials.username=${admin.name}
cmis.credentials.password=${admin.password}

  • Optionally, add a database connection in portal-ext.properties.

## MySQL
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost:3306/lportal?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=lportal
jdbc.default.password=lportal

Of course, you can use different database systems.

  • Start the portal

Of course, you can get the latest WAR of the Alfresco Community version or the Enterprise (Trial) version from the Alfresco website:

http://www.alfresco.com

Testing by examples

In the Control Panel of Liferay Portal, select the default community “liferay.com”, go to Content -> Document Library, and upload two documents (a PDF and a PPT, for example), as shown in the following screenshot.

Then log in to Alfresco as "admin/admin"; you will see the folder "Liferay Home" and the documents, as shown in the following screenshot. That is, Alfresco becomes the direct repository of the Liferay Document Library via CMIS. All real content is stored in Alfresco, while all metadata is stored in the Liferay database.


Open issues and new features

One known open issue is LPS-12406 - CMIS hook conflicts with plugin hooks. It will be fixed shortly.

And good news: Liferay 6.1 will allow Liferay to act as a CMIS producer, including CMIS producer support for WCM (LPS-10201) and CMIS consumer support (LPS-13160) - a ton of thanks to Alex.

Applying convert-null attribute in Service Builder to persist NULL and NOT NULL into data types in databases

General Blogs, January 21, 2011, by Jonas Yuan

Service Builder is a cool tool to automate the creation of interfaces and implementation classes for database persistence, local services and remote services, like find, create, update, and delete operations on the database. In general, the Service Builder is a code generator. Using an XML descriptor, it generates:

  • Java Beans
  • SQL scripts for database table creation;
  • Hibernate Configuration;
  • Spring Configuration;
  • Axis Web Services and
  • JSON JavaScript Interface

The service classes include persistence services, local services, remote services, models, and their implementation classes. Service Builder supports the following mapping between data types and database data types.

Data type               Database data type

boolean                 BOOLEAN
int, Integer, short     INTEGER
long                    LONG
float, double           DOUBLE
String                  VARCHAR (<4000), STRING (=4000), TEXT (>4000)
Date                    DATE

Supported primitive types include “boolean”, “int”, “short”, “long”, “float”, and “double”; the data types “String” and “Date” are supported, too.

The default value of the above data types could be NULL or a specific value. Thanks to Brian, who introduced the “convert-null” element in Service Builder (LEP-1353 - Add convert-null element to service generator). The convert-null value specifies whether the column value is automatically converted to a non-null value if it is null. Currently (revision 71166) this feature only applies if the type value is String. It is particularly useful if your entity references a read-only table or a database view, so that Hibernate does not try to issue unnecessary updates. The default setting of this convert-null attribute is true.

This article will address how to apply the convert-null element to other data types such as the primitive types. For example, you may expect a Long or Integer value to be saved to the table as null rather than 0. Most importantly, you may need the option to save a Long or Integer value as either null or 0. Specifically, the following two use cases will be covered.

  • Use case A: default non-NULL values for primitive types, String and Date
  • Use case B: default NULL value for primitive types, String and Date.

An example

Consider the following XML in service.xml.

<?xml version="1.0"?>
<!DOCTYPE service-builder PUBLIC "-//Liferay//DTD Service Builder 6.0.0//EN" "http://www.liferay.com/dtd/liferay-service-builder_6_0_0.dtd">

<service-builder package-path="com.liferay.sampleservicebuilder">
    <namespace>SSB</namespace>
    <entity name="Foo" uuid="true" local-service="true" remote-service="true">

        <!-- PK fields -->

        <column name="fooId" type="long" primary="true" />

        <!-- Group instance -->

        <column name="groupId" type="long" />

        <!-- Audit fields -->

        <column name="companyId" type="long" />
        <column name="userId" type="long" />
        <column name="userName" type="String" />
        <column name="createDate" type="Date" />
        <column name="modifiedDate" type="Date" />

        <!-- Other fields -->

        <column name="field0" type="Integer" convert-null="false" />
        <column name="field1" type="String" />
        <column name="field2" type="boolean" />
        <column name="field3" type="int" />
        <column name="field4" type="Date" />
        <column name="field5" type="String" convert-null="false" />
        <column name="field6" type="long" convert-null="false" />
        <column name="field7" type="long" convert-null="true" />
        <column name="field8" type="short" convert-null="false" />
        <column name="field9" type="short" convert-null="true" />
        <column name="fieldA" type="int" convert-null="false" />
        <column name="fieldB" type="boolean" convert-null="false" />
        <column name="fieldC" type="float" convert-null="true" />
        <column name="fieldD" type="float" convert-null="false" /> 
        <column name="fieldE" type="double" convert-null="true" />
        <column name="fieldF" type="double" convert-null="false" />   
       
        <!-- Order -->

        <order by="asc">
            <order-column name="field1" />
        </order>
    </entity>
</service-builder>

As shown in the above code, the default values of “field0”, “field5”, “field6”, “field8”, “fieldA”, “fieldB”, “fieldD”, and “fieldF” should be NULL. For the rest, a NULL String is converted to “”, a NULL boolean to false, a NULL Integer, Short, or Long to 0, a NULL Float to 0.0F, and a NULL Double to 0.0D. The data type Date always has the default value NULL.
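The conversion rules above can be sketched as plain Java helpers. This is hypothetical illustration code; in the portal the conversion is performed by the ORM object types discussed later in this post, not by methods like these.

```java
// Illustrative sketch of convert-null=true behavior:
// a NULL value is replaced by a type-specific default.
public class ConvertNullSketch {

    public static String toNonNull(String value)   { return value != null ? value : ""; }
    public static boolean toNonNull(Boolean value) { return value != null ? value : false; }
    public static int toNonNull(Integer value)     { return value != null ? value : 0; }
    public static short toNonNull(Short value)     { return value != null ? value.shortValue() : (short) 0; }
    public static long toNonNull(Long value)       { return value != null ? value : 0L; }
    public static float toNonNull(Float value)     { return value != null ? value : 0.0F; }
    public static double toNonNull(Double value)   { return value != null ? value : 0.0D; }

    // java.util.Date is the exception: it always keeps NULL as its default.
}
```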

Solution - applying convert-null element on data types

The above requirements can be satisfied in the following two steps.

  • Map the convert-null value into the database SQL.

For example, the generated SQL table:

create table SSB_Foo (
    uuid_ VARCHAR(75) null,
    fooId LONG not null primary key,
    groupId LONG,
    companyId LONG,
    userId LONG,
    userName VARCHAR(75) null,
    createDate DATE null,
    modifiedDate DATE null,
    field0 INTEGER null,
    field1 VARCHAR(75) null,
    field2 BOOLEAN,
    field3 INTEGER,
    field4 DATE null,
    field5 VARCHAR(75) null,
    field6 LONG null,
    field7 LONG,
    field8 INTEGER null,
    field9 INTEGER,
    fieldA INTEGER null,
    fieldB BOOLEAN null,
    fieldC DOUBLE,
    fieldD DOUBLE null,
    fieldE DOUBLE,
    fieldF DOUBLE null
);

  • Map column types with convert-null="false" into different ORM object types with the default value NULL.

By default, Liferay provides the following ORM object types with non-null default values.

ShortType – mapped into data type short - default value 0
IntegerType - mapped into data type int - default value 0
LongType - mapped into data type long - default value 0
BooleanType - mapped into data type boolean - default value false
FloatType - mapped into data type float - default value 0.0f
DoubleType - mapped into data type double - default value 0.0D

That is, introduce a new mapping feature in Service Builder so that it maps column types into the above ORM object types only when convert-null="false" is not set on the columns.

Testing Results

After applying the new mapping feature in Service Builder, building the services based on the above service.xml, deploying the portlet, and adding a new row, you will see that NULL was added as the default for the columns with convert-null="false". More exactly, the default value of “field0”, “field5”, “field6”, “field8”, “fieldA”, “fieldB”, “fieldD”, and “fieldF” is NULL, as persisted in the database.

Summary

The convert-null value specifies whether or not the column value is automatically converted to a non-null value if it is null (Use case A and Use case B) in Service Builder. This feature applies if the type value is String or a primitive type like int, short, long, boolean, float, or double.

By the way, this feature will be available in 6.1 (LPS-14863 - applying convert-null element to service generator). It could also be made available for 5.2 and 6.0 as a fix patch.

Applying LAR and remote publishing capabilities on custom assets in Plugins

General Blogs, January 7, 2011, by Jonas Yuan

Liferay Portal provides remote staging and publishing capabilities through which users can select subsets of pages and data (both portal core assets and custom assets) and transfer them to the live site, that is, a remote portal instance. Using this, we can export the selected data to a group of a remote portal instance or to another group in the same portal instance.

The LAR exporting and importing features are used for remote staging and publishing. These features are implemented in the PortletDataHandler API. The intent of this API is to provide the portal core assets and custom assets with a useful API for importing and exporting application content to and from the portal in a database agnostic fashion. Abstracted from the book: Liferay Portal 5.2 Systems Development.

In addition, the portal integrates workflow systems like jBPM or Kaleo for any assets, either core or custom; it provides a framework to add custom attributes (also called custom fields) to any Service Builder generated entity at runtime, with indexed values, text boxes, and selection lists for input and a dynamic UI; OpenSocial, Social Activity, and Social Equity are available in plugins; and CAPTCHA or reCAPTCHA is available for custom assets. As mentioned in the following blog posts, these features have been applied to custom assets in plugins.

When applying LAR and remote publishing capabilities to custom assets, the following use cases come into the picture.

  • Use case A: ability to archive custom assets, for example, Knowledge Base articles.
  • Use case B: ability to publish custom assets from staging to live remotely.

This article will address how to implement LAR and remote publishing capabilities on custom assets, for example knowledge base articles, in plugins. Of course, you can leverage the same mechanism to apply LAR and remote publishing capabilities on your own custom assets.

LAR – import and export

Data export and import generally revolve around the concept of storing data outside the portal, permanently or temporarily. The portal does this by handling the creation and interpretation of LAR files. Data export and import work on a per-portlet basis. The following screenshot shows the capabilities to import and export Knowledge Base articles.

Note that LAR is short for Liferay Archive. It includes all of the pages, their layouts, their configurations, their look and feel, their permissions, and so on. Importing a LAR file will overwrite any existing pages of the group configured in the LAR file. Most importantly, the target portal server used to import the LAR file must have exactly the same version as the source portal server used to export it. That is, LAR should not be used for upgrades.

Custom Asset models – Knowledge Base

As shown in the following figure, Knowledge Base articles are specified through the entities Article, Comment, and Template. The entity Article includes the columns articleId as primary key, resourcePrimKey, groupId, companyId, userId, userName, createDate, modifiedDate, parentResourcePrimKey, version, title, content, description, priority, and so on. The entity Template includes the columns templateId as primary key, groupId, companyId, userId, userName, createDate, modifiedDate, title, content, description, and so on. The entity Comment includes the columns commentId as primary key, groupId, companyId, userId, userName, createDate, modifiedDate, classNameId, classPK, content, helpful, and more. As you can see, the entity Comment can be applied to any core asset or custom asset, like Article and Template, by using classNameId and classPK. Sure, you can find the details in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/service.xml

Abstracted from the book: Liferay User Interface Development

When exporting/importing custom assets like Knowledge Base articles, the above models and their versions should first be in the picture. Moreover, the following associations should be taken into account.

  • Attachments (as documents)
  • Documents (as links)
  • Images (if any)
  • Polls (if any)
  • Asset ratings
  • Asset view counts
  • Asset comments and votes on comments
  • Subscriptions
  • Addresses (if any)
  • Asset tags
  • Asset categories
  • Custom fields
  • Locks
  • Permissions

Solution Overview

The following diagram shows what happens when remote publishing custom assets. Suppose that there is one staging server (a Liferay installation for internal users to build and review web content) and one remote live server (a Liferay installation for external users to view web content). Of course, you can have more than one live server, clustered.

By the way, the portal provides capabilities to have local staging and local live, that is, the live group and the staging group exist in the same portal instance. How does it work? You can refer to the book: Liferay Portal 5.2 Systems Development.

In general, there is a three-step process to remote publish assets, both core assets and custom assets.

  • Export the assets (both core and custom) as an internal LAR on the staging server;
  • Push the exported LAR from the staging server to the remote live server through tunnel-web;
  • Import the assets (both core and custom) from the internal LAR on the remote live server.

Protecting tunnel servlet

Of course, you need to protect the tunnel servlet. In order to communicate with the remote server and, moreover, protect the HTTP connection, we need to set up tunnel-web in portal-ext.properties. That is, add the following lines at the end of portal-ext.properties:

tunnel.servlet.hosts.allowed=127.0.0.1,69.198.171.104,69.198.171.105,64.71.191.145,SERVER_IP
tunnel.servlet.https.required=false

The preceding code sets the tunnel.servlet.hosts.allowed property to a list of allowed hosts, for example, 69.198.171.104, 69.198.171.105, and 64.71.191.145 (the IPs of the staging server and the remote live servers); replace them with the IPs of your own servers. These hosts are used as examples only. The code also specifies the tunnel.servlet.https.required property; by default it is set to false. Set it to true if you want to require HTTPS.
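The effect of tunnel.servlet.hosts.allowed can be sketched as a simple allow-list check. This is a hypothetical helper, not the portal's actual tunnel servlet code.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative allow-list check for the tunnel servlet: only requests
// originating from one of the configured IPs get through.
public class TunnelHostCheck {

    private final Set<String> allowedHosts;

    public TunnelHostCheck(String hostsAllowedProperty) {
        allowedHosts = new HashSet<>(
            Arrays.asList(hostsAllowedProperty.split(",")));
    }

    public boolean isAllowed(String remoteAddr) {
        return allowedHosts.contains(remoteAddr);
    }
}
```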

Capabilities

Note that the remote publishing function should be used to push content only; both the staging server and the remote servers must have exactly the same version. That is, the remote publishing feature should not be used for upgrades.

How to Implement LAR?

In the following three steps, you should be able to implement LAR to archive Knowledge Base articles.

  • Add the portlet-data-handler in $PLUGIN_SDK_HOME/knowledge-base-portlet/docroot/WEB-INF/liferay-portlet.xml like:

<portlet-data-handler-class>com.liferay.knowledgebase.admin.lar.AdminPortletDataHandlerImpl</portlet-data-handler-class>

  • Extend com.liferay.portal.kernel.lar.BasePortletDataHandler as com.liferay.knowledgebase.admin.lar.AdminPortletDataHandlerImpl.

Note that com.liferay.portal.kernel.lar.BasePortletDataHandler implements com.liferay.portal.kernel.lar.PortletDataHandler. Thus we can say, com.liferay.knowledgebase.admin.lar.AdminPortletDataHandlerImpl implements com.liferay.portal.kernel.lar.PortletDataHandler.

com.liferay.portal.kernel.lar.PortletDataHandler has the following interface:

public interface PortletDataHandler {

    public PortletPreferences deleteData(
        PortletDataContext portletDataContext, String portletId,
        PortletPreferences portletPreferences) throws PortletDataException;

    public String exportData(
        PortletDataContext portletDataContext, String portletId,
        PortletPreferences portletPreferences) throws PortletDataException;

    public PortletDataHandlerControl[] getExportControls()
        throws PortletDataException;

    public PortletDataHandlerControl[] getImportControls()
        throws PortletDataException;

    public PortletPreferences importData(
        PortletDataContext portletDataContext, String portletId,
        PortletPreferences portletPreferences, String data)
        throws PortletDataException;

    public boolean isAlwaysExportable();

    public boolean isPublishToLiveByDefault();
}

com.liferay.portal.kernel.lar.BasePortletDataHandler provides the following methods:

public PortletPreferences deleteData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences)

public String exportData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences)

public PortletDataHandlerControl[] getExportControls()

public PortletDataHandlerControl[] getImportControls()

public boolean isAlwaysExportable()

public boolean isPublishToLiveByDefault()

public PortletPreferences importData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences, String data)

protected PortletPreferences doDeleteData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences)

protected String doExportData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences)

protected PortletPreferences doImportData(
    PortletDataContext portletDataContext, String portletId,
    PortletPreferences portletPreferences, String data)

  • Implement methods doDeleteData(), doExportData() and doImportData() in com.liferay.knowledgebase.admin.lar.AdminPortletDataHandlerImpl.

In the method doDeleteData(), remove the articles, comments, and templates by group.

In the methods doExportData() and doImportData(), export/import the following items:

custom assets: Articles, Comments, Templates;

custom assets’ associations: attachments (as documents), images (if any), polls (if any), asset ratings, asset view counts, asset comments and votes on comments, subscriptions, addresses (if any), asset tags, asset categories, custom fields, permissions, etc.
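The export shape described above can be sketched in a self-contained way. A real implementation works with Liferay's PortletDataContext and covers all the associations listed; the class below is purely illustrative of the XML-string shape that doExportData() conceptually returns.

```java
import java.util.List;

// Minimal, self-contained sketch of what doExportData() conceptually
// produces: an XML string describing the exported custom assets.
// Everything here is illustrative, not Liferay's actual implementation.
public class LarExportSketch {

    public static String exportArticles(List<String> articleTitles) {
        StringBuilder sb = new StringBuilder("<articles>");

        for (String title : articleTitles) {
            sb.append("<article><title>")
              .append(title)
              .append("</title></article>");
        }

        return sb.append("</articles>").toString();
    }
}
```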

That's it. This is useful, isn’t it?

Summary

In brief, the import/export and remote publishing features are available for custom assets in plugins. You can use the import/export feature to archive your custom assets (Use case A), and the remote publishing feature to push custom assets from the staging server to remote live servers with one simple click (Use case B). You can schedule remote publish processes as well; let’s address the scheduling feature later.

By the way, you can have a closer look at the knowledge base source code (revision 69840) with LAR and remote publishing features in

knowledge-base-portlet-6.1.0.1.war

Last but not least, I'd like to send a ton of thanks to Liferay development team and Liferay community, who made a lot of good stuff.

Speeding up LDAP import process and Liferay authentication

General Blogs, December 13, 2010, by Jonas Yuan

Liferay 5.2 EE and 6 improved the capabilities of LDAP integration in many areas (refer to blogs posts LDAP Enhancements and Keeping user password secure with LDAP integration):

  • synchronize user custom attributes between Liferay and LDAP
  • support LDAP chains and LDAP pagination
  • create a role for each LDAP group
  • override LDAP import and export processes via Spring
  • secure LDAP users' password

As you know, the base DN is used as a base to search users and groups. When the number of users and groups is small, you will not hit any performance issues when searching them. But if the number is huge (like 500K users and 50K groups in LDAP), you will hit performance issues when searching users and groups, since each user may be part of 50 groups.

Use case A (as shown in the following screenshot): 500K users and 50K groups, where each user may be part of 50 groups. Logging in as a user from LDAP took 20-30 seconds by default. It should take less than 1 second.

The portal does provide the following property for searching groups.

ldap.import.group.search.filter.enabled=true

If the above property is set to true, the group filter will be applied, but only to groups in the specified base DN. If it is set to false, the filter will not be applied, and all groups associated with the imported users will be imported regardless of the base DN.

The workaround proposed above is not really an option: each user may belong to over 50 groups, and importing all groups for every user would just clutter the system with useless data.

This article will address how to speed up LDAP import process and Liferay authentication.

Solution Overview

In two steps, you should be able to speed up LDAP import process and Liferay authentication.

1) Set up custom group base DN in portal-ext.properties

# Set this to true to enable custom group based DN settings.
# Set this to false to disable custom group based DN settings.
ldap.import.group.base.dn.enabled=true

# Set the group base DN when the property ldap.import.group.base.dn.enabled is set to true.
# You can add the group base DN for your LDAP here.
ldap.import.group.base.dn.default=ou=groups,ou=system

2) Use the custom group base DN in the LDAP import process
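Step 2 can be sketched as a small helper that decides which base DN the group search should use. This is a hypothetical, simplified sketch (the real change would live inside the portal's LDAP import code, and the class and method names here are illustrative only); the property names follow the ones defined in step 1.

```java
import java.util.Properties;

// Sketch: choose the base DN for LDAP group searches. When the custom
// group base DN is enabled, group searches are scoped to that DN instead
// of the (much larger) general base DN, which avoids scanning 500K user
// entries just to resolve group membership.
public class GroupBaseDNHelper {

    public static String getGroupSearchBaseDN(Properties props, String defaultBaseDN) {
        boolean customEnabled = Boolean.parseBoolean(
            props.getProperty("ldap.import.group.base.dn.enabled", "false"));

        if (customEnabled) {
            String groupBaseDN = props.getProperty("ldap.import.group.base.dn.default");

            if ((groupBaseDN != null) && !groupBaseDN.isEmpty()) {
                return groupBaseDN;
            }
        }

        // Fall back to the general base DN (the pre-patch behavior).
        return defaultBaseDN;
    }

}
```

The group import code would then pass the returned DN as the search base wherever it currently uses the general base DN.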

Results

With these changes, the LDAP import process and Liferay authentication took less than 1 second.

1) Set up the base DN as the users' base DN

2) Test LDAP users

3) Test LDAP groups

Is this feature useful? Your comments / suggestions?

Summary

As you can see, the LDAP import process and Liferay authentication can be improved a lot by using the base DN as the users' base DN for user searches, and a custom group base DN as the groups' base DN for group searches. Ideally, we should divide the base DN ldap.base.dn into ldap.users.base.dn and ldap.groups.base.dn for user searches and group searches, respectively. Refer to LPS-14322.
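In portal-ext.properties, the proposed split might look like this (hypothetical property names; the actual naming would be decided in LPS-14322):

```
# Hypothetical properties sketching the split proposed in LPS-14322:
# a dedicated base DN for user searches and one for group searches.
ldap.users.base.dn=ou=users,ou=system
ldap.groups.base.dn=ou=groups,ou=system
```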

Last but not least, I'd like to send special thanks to Eduardo Carneiro and Jenny Chen who did a great job to narrow down the issue and to identify solutions.

Setting up users' screen name and friendly URLs in your own languages

General Blogs December 8, 2010 By Jonas Yuan

In Liferay portal, you can use your own language. Multilingual organizations get out-of-the-box support for up to 36 languages. Users can toggle among different language settings with just one click and produce/publish multilingual documents and web content. You can also easily add other languages to your public pages, private pages, or other organizations' pages.

locales=ar_SA,eu_ES,bg_BG,ca_AD,ca_ES,zh_CN,zh_TW,cs_CZ,nl_NL,en_US,en_GB,et_EE,fi_FI,fr_FR,gl_ES,de_DE,el_GR,iw_IL,hi_IN,hu_HU,in_ID,it_IT,ja_JP,ko_KR,nb_NO,fa_IR,pl_PL,pt_BR,pt_PT,ru_RU,sk_SK,es_ES,sv_SE,tr_TR,uk_UA,vi_VN

As shown in the locale settings above, multiple languages are supported in Web Content as well. That is, web content can be entered in as many languages as desired. However, only the content body gets localized; the name (article title) and the summary, that is, the abstract description of the web content, should be localizable, too. (Abstracted from the book Liferay Portal 6 Enterprise Intranets.)

Knowledge Base articles likewise have title, content, and summary (description) fields, and multiple languages should be supported in them as well. That is, Knowledge Base articles (title, content, and summary) can be entered in as many languages as desired. For more details, refer to the blog post Multiple-language support for Knowledge Base articles.

Currently (at revision 68460), users' screen names and friendly URLs must consist of the characters a-z and A-Z or the digits 0-9, plus a few special characters such as "-", ".", and "_". That is, users' screen names and friendly URLs cannot be specified in languages like Chinese or Japanese.
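To illustrate, the default restriction can be expressed as a regular expression, and a relaxed, Unicode-aware variant would accept native-language screen names. This is a hypothetical sketch for illustration only, not the portal's actual validation code:

```java
import java.util.regex.Pattern;

// Sketch: the default ASCII-only rule vs. a Unicode-aware rule for screen names.
public class ScreenNameCheck {

    // Default: ASCII letters, digits, and "-", ".", "_" only.
    private static final Pattern DEFAULT_PATTERN =
        Pattern.compile("[A-Za-z0-9._-]+");

    // Relaxed (hypothetical): any Unicode letter or digit, plus "-", ".", "_".
    private static final Pattern UNICODE_PATTERN =
        Pattern.compile("[\\p{L}\\p{N}._-]+");

    public static boolean isValidDefault(String screenName) {
        return DEFAULT_PATTERN.matcher(screenName).matches();
    }

    public static boolean isValidUnicode(String screenName) {
        return UNICODE_PATTERN.matcher(screenName).matches();
    }

}
```

Under the default rule, "海底世界" is rejected; under the relaxed rule, it passes, while empty or symbol-only names still fail.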

For example, Asia-based users would like to have native user names and friendly URL like Chinese, Japanese, etc.

Use case A: set up native user screen name like “underwaterworld” as “海底世界” in Chinese, “アンダーウォーターワールド” in Japanese. Of course, it could be any locale as mentioned above.

Use case B: set up native friendly URLs like “liferay” as “生命之光” in Chinese, “生命の光” in Japanese. Sure, it could be any locale as mentioned above.

Use case C: set up native user email address like “underwaterworld@abc.com” as “海底世界@abc.com” in Chinese, “アンダーウォーターワールド@abc.com” in Japanese. Of course, it could be any locale as mentioned above.

This article will address how to add a new feature (use cases A, B, and C): support for users' screen names, email addresses, and friendly URLs in up to 36 languages in Liferay 5.2 and 6.

Solution Overview

In general, you should be able to create native user names and friendly URLs in your own languages. The following screenshots show some samples.

Setting up a group in Japanese, friendly URL as “生命の光”

Setting up a group in Chinese, friendly URL as “生命之光”

Creating a new user in Japanese, screen name as “アンダーウォーターワールド”, and password as “アンダーウォーターワールド”

Creating a new user in Chinese, screen name as “海底世界” and password as “海底世界”

Creating a new user in Japanese, native email address as “アンダーウォーターワールド@abc.com”, and password as “アンダーウォーターワールド”

Creating a new user in Chinese, native email address as “海底世界@abc.com” and password as “海底世界”

Results

Use case A: set up user native screen name like “Underwater World” as “海底世界” in Chinese, “アンダーウォーターワールド” in Japanese

Set “How do users authenticate?” to “By screen name”.

Sign in (screen name / password: “アンダーウォーターワールド” / “アンダーウォーターワールド”) in Japanese

Sign in (screen name / password: “海底世界” / “海底世界”) in Chinese

Use case B: set up friendly URLs like “liferay” as “生命之光” in Chinese, “生命の光” in Japanese

Check friendly URL of the group “生命の光” in Japanese, after signed in.

Check friendly URL of the group “生命之光” in Chinese, after signed in.

Use case C: set up user native email address  like “Underwater World@abc.com” as “海底世界@abc.com” in Chinese, “アンダーウォーターワールド@abc.com” in Japanese

Set “How do users authenticate?” to “By email address”.

Sign in (email / password: “アンダーウォーターワールド@abc.com” / “アンダーウォーターワールド”) in Japanese

After logging in

Sign in (email / password: “海底世界@abc.com” / “海底世界”) in Chinese

Summary

As you can see, this feature supports three use cases:

  • Use case A: set up native user screen name like “underwaterworld” as “海底世界” in Chinese, “アンダーウォーターワールド” in Japanese.
  • Use case B: set up native friendly URLs like “liferay” as “生命之光” in Chinese, “生命の光” in Japanese.
  • Use case C: set up native user email address like “underwaterworld@abc.com” as “海底世界@abc.com” in Chinese, “アンダーウォーターワールド@abc.com” in Japanese.

That is, with this feature you would be able to set up users' native screen names, native email addresses, and friendly URLs in your own languages (up to 36 languages). Refer to LPS-14436.

Is this feature useful? Your comments / suggestions?

Bringing Google Maps JavaScript API V3 into Liferay Web and WAP

General Blogs November 30, 2010 By Jonas Yuan

Liferay 6 provides integration with Google Maps JavaScript API V2; more specifically, AUI tags are used to integrate it. The Google Maps API lets us embed Google Maps in web pages with JavaScript. The API provides a number of utilities for manipulating maps and adding content to them through a variety of services, allowing us to create robust map applications on websites. The following is sample code.

<aui:script>
  var <portlet:namespace />map;
  var <portlet:namespace />geocoder;
  function <portlet:namespace />load() {...}
  ...
</aui:script>

You can also refer to view.jsp.

V3 of the Google Maps JavaScript API is designed to be faster and better suited to mobile devices, as well as traditional desktop browser applications. New features of V3 include:

  • Draggable directions within the V3 Maps API allowing users to modify directions and add waypoints by dragging the path on the map.
  • Create your own Custom Panoramas and display them using the V3 Maps API Street View service.

Most importantly, V2 of the Google Maps JavaScript API has been officially deprecated (refer to basics.html). Thus it is time for the Liferay community to upgrade to Google Maps JavaScript API V3 (LPS-13968).

This article will address how to bring Google Maps JavaScript API V3 into Liferay 6 Web and WAP - Mobile devices.

Integration Overview

The Google Maps JavaScript API targets mobile devices as well as traditional desktop browser applications; normally, both should support HTML5. Here are a few browsers as examples.

Safari 5

 Chrome 7 and Firefox 3.6

 And most importantly, Google Maps JavaScript API V3 is designed to be faster and better suited to mobile devices such as iPhone (iOS v2.2 - v4.0), Palm Pre (webOS v1.4.1), Android (v1.5 - v2.2), BlackBerry OS (v5.0, v6.0), and especially iPad.

iPhone 4

Palm Pre

Implementation

In general, you can bring Google Maps JavaScript API V3 into Liferay WEB and WAP in three steps.

  • Declare a true DOCTYPE within your web application. That is, declare the application as HTML5 using the simple HTML5 DOCTYPE shown below:

  <!DOCTYPE html>

  • For the map to display on a web page, you must reserve a spot for it by creating a named div element and obtaining a reference to this element in the browser's document object model (DOM).

  <div id="map_canvas" style="width: 100%; height: 100%"></div>

  • Add JavaScript calls as follows

<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>

<script type="text/javascript">

  function initialize() {

    var <portlet:namespace />geocoder = new google.maps.Geocoder();

    var myLatlng = new google.maps.LatLng(-34.397, 150.644);
    var myOptions = {
      zoom: 8,
      center: myLatlng,
      mapTypeId: google.maps.MapTypeId.ROADMAP
    };

    var <portlet:namespace />map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
  }
</script>

As you can see, the JavaScript class that represents a map is the Map class.

var <portlet:namespace />map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

Of course, you can add your JavaScript code inside the method initialize().

Results

Based on the above steps, you can build a portlet called Google Maps V3 for both WEB and WAP as follows.

  • Using HTML5 DOCTYPE in the theme (either WEB or WAP theme), like Classic theme (WEB) in Liferay 6.
  • Adding DOM <DIV> and JavaScript in view pages of the portlet Google Maps V3.

That's it. Easy, isn't it?

What’s next?

Google Maps JavaScript API V3 provides a lot of features like:

  • Events
  • Controls
  • Overlays
  • Services
  • Map Types

It would be nice if the Google Maps V3 portlet could merge the above features in one place and, moreover, let end users customize their maps easily.

Your comments / suggestions?
