
Hosting OSGi F2F at Liferay Spain

General Blogs August 19, 2014 By Miguel Ángel Pastor Olivar Staff

I am sure most of you already know that we are members of the OSGi Alliance, and we are trying to get involved as much as we can. Our next step is to host a face-to-face meeting from September 9 to 11 here at the Liferay Spain office.

But that is not all: Peter Kriens has kindly accepted my proposal to give a public talk about OSGi and the enRoute initiative (seats are limited, sorry). You can find all the details of the talk on the event page (we are organizing the talk under the MadridJUG umbrella).

Hope to meet some of you at Peter's talk!


Extensible templates: the OSGi way

General Blogs December 16, 2013 By Miguel Ángel Pastor Olivar Staff

All the OSGi-related content we have covered in previous blog entries has been about backend systems, for no other reason than that my daily work happens down in the services layer; sorry about that. I will try to correct this situation with an example of how we can build extensible user interfaces using the mechanisms already built into the platform.

Disclaimer: my most sincere apologies for any damage my poor design skills may cause.

The problem

We want to write a new Liferay application with an extensible UI, so that third-party components can write custom extensions, contributing new elements to our UI (or even replacing existing ones).

Tracking extensions

Our solution is based on the BundleTrackerCustomizer concept and the Extender Pattern. We could use other approaches based on services, but the extender pattern fits perfectly with our current template mechanism.

Using a BundleTrackerCustomizer we get notified every time a new bundle is deployed into the OSGi container. This tracker will collect all the extensions, indexed by name. The following class implements the tracking logic:
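As a rough sketch of such a tracker, written against the standard OSGi BundleTrackerCustomizer API (only the Templates-Location header comes from the post; the class, field, and method names are my own assumptions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleEvent;
import org.osgi.util.tracker.BundleTrackerCustomizer;

public class TemplateTracker implements BundleTrackerCustomizer<String> {

    // Template locations contributed by extension bundles, indexed by
    // bundle symbolic name
    private final Map<String, String> _templateLocations =
        new ConcurrentHashMap<>();

    @Override
    public String addingBundle(Bundle bundle, BundleEvent event) {
        // Only track bundles that declare the extension header
        String templatesLocation = bundle.getHeaders().get(
            "Templates-Location");

        if (templatesLocation == null) {
            return null; // not an extension bundle; ignore it
        }

        _templateLocations.put(bundle.getSymbolicName(), templatesLocation);

        return templatesLocation;
    }

    @Override
    public void modifiedBundle(
        Bundle bundle, BundleEvent event, String templatesLocation) {
    }

    @Override
    public void removedBundle(
        Bundle bundle, BundleEvent event, String templatesLocation) {

        // Forget the contributions of undeployed bundles
        _templateLocations.remove(bundle.getSymbolicName());
    }

    public Iterable<String> getRegisteredTemplates() {
        return _templateLocations.values();
    }
}
```

The tracker itself would be handed to a BundleTracker opened when our component starts; that wiring is omitted here.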


As you can see in the previous class definition, we are only interested in plugins that present a certain attribute: Templates-Location. This attribute contains the extension point with the location of the templates. Once the tracker detects such a deployment, it stores a reference to each of these locations.

Note this is just a proof of concept and needs quite a few improvements. Hopefully we are going to use this idea to create some new applications, and we will generalize this component with some nice features:

  • Create a reusable component, so you can use it in your applications without writing your own tracker.
  • Allow stackable modifications, so you can publish/unpublish components and move back/forward.
  • Named extensions, so you can define the extension's name and where you want to contribute. For example: Template-Extension: [extension_name, template_location]
  • An easier and more powerful definition of extensions, so you can contribute multiple views to a single extension, letting us build lists and so on.

Defining our extension points

Now we need to create our extensible views, defining the points where third-party components can contribute their custom views.

Let's create a simple view file containing a very simple extension point:

<div id="template-extensions">
        <#assign extensions = request.getAttribute("sampleProviderFactory").getRegisteredTemplates() />
        This is just a very simple test showing all the registered extensions contributed by plugins: <br />
        <#list extensions as extension>
                <#include "${extension}" />
        </#list>
</div>

As we can see in the previous snippet, we are just getting references to all the existing contributions and including them. As you can imagine, we could make this extension mechanism much more powerful by using extension names and letting the contributions decide which element they want to contribute to. As I noted before, we will try to generalize this idea and create a customizable component.

At the time of this writing we have a few limitations that I hope we can simplify in the future:

  • We live in a heterogeneous world where the OSGi components have to live together with the core of the portal, so right now I am getting references to the OSGi elements and making them accessible outside the container through static factory methods. I know this is not the best solution at all, but it is the only approach we have for the 6.2 release.
  • The FreeMarkerPortlet does not allow you to add new parameters to the template context, so we have to pass them through the request or override the whole include method of the portlet.

These are not real problems, but they are something we need to deal with in order to put our solution in place.

Writing our first extension

Writing our first extension is a pretty simple task: we just need to create a new bundle and define the mandatory attribute so it is detected by the template tracker. You could use the following bnd file to create your extension:

Bundle-Name: Sample Metrics Portlet Extension
Import-Package: \
Web-ContextPath: /sample-metrics-portlet-extension
-wab: \
-wablib: \

This is a simple overview of a small proof of concept we have built; it is working pretty well, but we still need to improve it a lot. All the source code can be found here:
  • The shared/sample-metrics-core contains the core of our backend application
  • The portlets/sample-metrics-portlet contains the core of the UI application. The view.ftl is the extensible template built for this example. In addition, this component contains the TemplateTracker artifact responsible for handling the extensions.
  • The portlets/sample-metrics-portlet-extension is just an extension that contributes a simple view to the core UI.

I have not gone through all the details, since I just wanted to highlight the main concepts. Feel free to ping me if you want some low-level details.

See you soon!

A quick update about the Scala support in Liferay SDK

General Blogs December 16, 2013 By Miguel Ángel Pastor Olivar Staff

A few months ago I pushed to the Liferay plugins repository the ability to write Scala code in Liferay portlets but, to be honest, the support was quite poor: Scala code was only allowed in portlets, and you needed to create a new kind of portlet through the create.sh script located in the portlets folder.

A couple of weeks ago I got a chance to resume the work on Scala support, and I have added the ability to write Scala code in any kind of plugin in the Plugins SDK. What does this mean? You will be able to write Scala code in your existing plugins (hooks, portlets, new OSGi plugins, ...) with no extra effort at all. Just add your new code to your existing plugin (or create a new application completely written in Scala) and everything will be compiled.

A couple of important tips to consider:

  • The fast Scala compiler (fsc) is the default option for the compilation process. There are a couple of tasks to clear the compiler's caches or shut down the daemon.
  • You cannot use Scala code in JSPs. I know this is obvious, but I wanted to highlight it :)

The source code is not yet available in the main plugins repo, but I will try to merge it in as soon as possible. Meanwhile, you can take a look at my GitHub repo.

I will try to write a new blog post with a practical example of how you can use these new abilities.

Liferay Berlin Devcon 2013: Our way towards modularity

General Blogs October 9, 2013 By Miguel Ángel Pastor Olivar Staff

First of all I would like to apologize to all of you who were expecting Raymond but got me instead. Sorry about that.

During the talk I gave a quick summary of why modularity is important for us, what we are looking for, what we have already done, and some of the things we are looking at for the future. At the end of the talk I showed how you can already create a modular application on top of Liferay, and I even did a little bit of live coding (luckily everything worked perfectly :) ).

Based on the questions I got during the talk and at the after party, I think people liked what we have done so far, but this is no more than a personal opinion; I would love to hear your thoughts.

Here you can read the slides I used for the presentation, and in this GitHub branch, under the shared folder, you can take a look at the source code I used during my live coding demo.

Hope you liked the talk! I tried to do my best!


JCrete: Day 1

General Blogs August 20, 2013 By Miguel Ángel Pastor Olivar Staff

This year I have been able to attend the awesome JCrete conference. First of all I would like to thank Liferay and Jorge for letting me take a few days off and financing my trip.

The conference is organized in an Open Space format, so on the first day everybody can propose whatever they want: talks, discussions, and so on. Once all the proposals have been made and explained, we try to arrange and schedule them for the rest of the week.

On the first day I attended a couple of talks/discussions that were extremely interesting:

  • The first one was a discussion proposed by Chris Richardson about some of the good parts(?) of NodeJS, how he had been playing with it over the last few months, and some of the benefits he had found. I think this talk does not deserve a detailed summary, because I guess all of you know how it finished ... :)
  • The second discussion was "The perils of (micro)benchmarking", where Kirk Pepperdine and Martin Thompson shared with the rest of us their experiences doing (or at least trying to do) good benchmarking of applications. We also discussed at length the benefits (or lack thereof) of microbenchmarks.

I haven't written a completely detailed description of the sessions, because you can find all the notes, references, and other outputs produced here.

This first day was a little bit shorter because we spent the first part of the day building the schedule for the rest of the week. I will try to write a new blog entry with a summary of the second day as soon as possible (and include some photos, because the location is very nice).

Just before finishing the post, I would like to add that attending this conference is a huge opportunity to be close to many really smart people and learn a lot from them and their experiences.

See you soon! 


Developing Liferay Core and plugins using JRebel

General Blogs August 7, 2013 By Miguel Ángel Pastor Olivar Staff

I guess all of you already know what JRebel is, so it doesn't deserve an exhaustive introduction. In a few words, JRebel is a JVM agent that allows you to reload your class definitions "on the fly", so you don't need to restart your application once it has been started/deployed. You can think of it as the JVM's native HotSwap ability "on steroids": you can redefine "whatever" you want and the JRebel agent will reload your new class definition.

Here at the Liferay engineering team we have been using it heavily for core development over the last year and a half, and the results have been very good. However, we have been dealing with a small problem during all that time: if we have JRebel enabled in the Liferay core, we cannot deploy Liferay applications, because the hot deployer is not being invoked. As you know, here at Liferay we make extensive use of "classloading tricks" (hopefully this is going to change in the near future :) ) in order to get a fully working implementation of our hot deployment mechanism, so I guessed the problem was related to this, but I had never had enough time for a thorough debugging session. Finally, last week I was able to get some free time and decided to start a fun debugging session.

I don't want to bother you with all the technical details but, as a quick summary, I ended up with two JRebel configuration files: one for the portal-service classes (the ones we put in a common classloader for all the webapps in the app server) and one for the rest of the Liferay portal classes and the docroot of the web application. All the classes you monitor in a single JRebel configuration file end up in the same classloader, and that was the root cause of our problem: a static field (remember, static fields are unique per classloader, not per JVM) of a class that was supposed to be loaded in a common classloader was being loaded in the portal classloader, and this prevented the hot deploy events from being fired.

I have decided to automate the generation of the whole JRebel configuration in order to prevent any kind of misconfiguration. So, once you get a copy of the Liferay core git repo on your file system, you can run "ant setup-rebel"; this task will configure your development environment (it doesn't matter which app server you are using), enabling the JRebel development mode. The only thing left to you is the configuration of your app server's startup process, in order to define the JRebel JVM agent path.

One more thing I would like to highlight is that this automatically generated configuration is IDE agnostic and uses the folders used by our standard build tool, Ant. If you need to reconfigure JRebel to define custom folders (pointing to the compilation folders of your preferred IDE), you can overwrite the default configuration files (using the -overwrite.xml extension) and rerun the setup-rebel task.
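For illustration, a JRebel configuration is a rebel.xml file along these lines (a hand-written sketch with placeholder paths, not the output of the setup-rebel task):

```xml
<application xmlns="http://www.zeroturnaround.com">
    <classpath>
        <!-- compiled classes monitored by JRebel; point this at your build output -->
        <dir name="/path/to/liferay-portal/portal-impl/classes"/>
    </classpath>
    <web>
        <!-- map the web root so JSP and static resource changes are picked up too -->
        <link target="/">
            <dir name="/path/to/liferay-portal/portal-web/docroot"/>
        </link>
    </web>
</application>
```

The setup-rebel task generates files with the right paths for your checkout; the sketch above only shows the general shape.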
In addition, I have added the same task to our Plugins SDK, so if you run ant setup-jrebel in the plugin of your choice, the JRebel configuration will be generated automatically. At the time of this writing the portal core changes are already in the master branch of the public repo, but the Plugins SDK changes are still pending in Brian's pull queue.

Feel free to ping me if you have any doubts or if you want a more detailed description of the internals of the process.

Monitoring JMX through http

General Blogs July 4, 2013 By Miguel Ángel Pastor Olivar Staff

Monitoring JMX data has always been a painful process, especially in production systems where the ability to open ports and/or allow certain kinds of traffic is kind of "complicated".

Thanks to Jolokia we have a lightweight layer on top of the MBean Server infrastructure which allows us to query our JMX data over the HTTP protocol. The architecture is very simple:

I have borrowed the previous image from this presentation, where you can find a great overview of Jolokia's benefits.

How can I use it in Liferay?

The Jolokia team provides an OSGi bundle ready to be deployed inside an OSGi container. The only dependency of this bundle is that an HTTP Service implementation must be available within the container, but this is not a problem, since we already provide our own implementation of that specification (it will be bundled by default from 6.2.0 onwards). The bundle needs a small modification, which I have already made for this blog entry; you can get it here (I still need to make it configurable, but for the purpose of this post this artifact is fine).

Download the bundle and copy it to your deploy folder; in a few seconds it will be deployed into the OSGi container and you will have a fully working HTTP layer on top of your JMX data.

Querying some data

There are a few clients you can use to query the data through this HTTP layer (and, if you want, you can write your own :) ):

  • http://hawt.io/: a nice HTML5 extensible dashboard with many available plugins

  • j4psh: Unix like shell with command completion

  • jmx4perl: CLI tool for scripting

  • check_jmx4perl: Nagios plugin

  • AccessJ: an iPhone client

Let's see a small example of how to use the j4psh shell:

First of all we need to connect to our server

As I told you before, the current bundle is not configurable, so you should use the /o/jolokia/jolokia path (hope this is not a big deal for you). Once we are connected we can navigate through our JMX data (j4psh structures the info using a "file system" approach).

For example, if we type ls, we can see something like this:

Or we can search for certain info and get its current value, for example the total number of classes loaded into the JVM:
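The same read can also be performed with a plain HTTP GET against the Jolokia endpoint. A small sketch that builds such a read URL (the localhost host and port are assumptions; the /o/jolokia/jolokia path matches the bundle used in this post):

```java
import java.net.URI;

public class JolokiaRead {

    // Builds a Jolokia "read" URL for a given MBean and attribute, e.g.
    // <base>/read/java.lang:type=ClassLoading/LoadedClassCount
    public static URI readUri(String base, String mbean, String attribute) {
        return URI.create(base + "/read/" + mbean + "/" + attribute);
    }

    public static void main(String[] args) {
        URI uri = readUri(
            "http://localhost:8080/o/jolokia/jolokia",
            "java.lang:type=ClassLoading", "LoadedClassCount");

        // An HTTP GET on this URI returns JSON such as
        // {"value":12345,"status":200,...}
        System.out.println(uri);
    }
}
```

Any HTTP client (curl, a browser, your own code) can hit that URL and parse the JSON answer.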

As you can see, this is an extremely powerful and lightweight tool we can use to monitor our data in a very simple way.

I will try to clean up the bundle used in this post a little bit in order to make it configurable, and I will try to publish it in a more accessible place.

See you soon!

Managing Liferay through the Gogo shell

General Blogs May 14, 2013 By Miguel Ángel Pastor Olivar Staff

I have done a huge refactoring of most of our OSGi-related work, moving the majority of its components to the Liferay Plugins SDK. Everything except the graphical admin UI is already in the master branch of the official repo, so you can play with it; we would love to hear your feedback.

My goal with this post is to show you how to create a new OSGi bundle in the Plugins SDK using a practical example: extending the OSGi shell (we talked about it some time ago). Let's try to do it:

Basic project structure

Currently there is no target in the SDK that creates a bundle for you (I will push one soon), so you can use this example (or other modules, such as http-service-shared or log-bridge-shared) as the basic skeleton for your tests:

Basic OSGi bundle skeleton

As you can see, the structure is very simple; let me try to highlight the main OSGi-related points:

  • All the new OSGi plugins are based on the great bnd tool. You can configure all your bundle requirements through the bnd.bnd file.
  • The build.xml of the plugin must import the build-common-osgi-plugin.xml build file: <import file="../../build-common-osgi-plugin.xml" />

As I said before, I will include an Ant target to create this basic skeleton, but, until then, you need to do it by hand. Sorry about that :(.

Writing Gogo shell custom commands

As stated at the beginning of the post, we are going to write a bunch of custom commands for the Gogo shell. As you will see, this is an extremely easy task.

These commands can be created using a simple OSGi service with a couple of custom properties. We will use the OSGi Declarative Services approach to register our commands (Liferay already includes an implementation of the Declarative Services specification in the final bundle, so you don't need to deploy it yourself).

At this point, we are ready to write our first command, the ListUsersCommand:
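A sketch of what such a command class can look like, written with the standard Declarative Services annotations (the post predates them and may have used bnd's own annotations; the Liferay 6.2 package names and the getCompanyUsers signature are assumptions, while the usermanagment scope and listByCompany function come from the post):

```java
import java.util.List;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import com.liferay.portal.kernel.dao.orm.QueryUtil;
import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalService;

// Registered as a plain OSGi service; Gogo picks it up through the two
// osgi.command.* properties (scope = namespace, function = command name).
@Component(
    property = {
        "osgi.command.scope=usermanagment",
        "osgi.command.function=listByCompany"
    },
    service = Object.class
)
public class ListUsersCommand {

    // The public method name must match the osgi.command.function value;
    // Gogo coerces the companyId argument from the console input.
    public void listByCompany(long companyId) throws Exception {
        List<User> users = _userLocalService.getCompanyUsers(
            companyId, QueryUtil.ALL_POS, QueryUtil.ALL_POS);

        System.out.println("Users of the company " + companyId);

        for (User user : users) {
            System.out.println(
                "    User " + user.getEmailAddress() + " with id " +
                    user.getUserId());
        }
    }

    @Reference
    public void setUserLocalService(UserLocalService userLocalService) {
        _userLocalService = userLocalService;
    }

    protected UserLocalService _userLocalService;
}
```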

The @Component annotation lets us define a new OSGi service in an extremely easy way. Gogo commands are registered in the console using the osgi.command.scope and osgi.command.function properties. The former establishes the namespace where the command will be allocated (in order to prevent name collisions), while the latter specifies the name of the command. It is important to note that the Declarative Services annotations I am using do not support inheritance, so anything you declare in a base class will not be inherited down the hierarchy.

And how do I write the implementation of my new command? You just need to write a new public method named after the value of osgi.command.function. If your command needs arguments, you specify them as parameters of your method (and the console will do the coercion of the basic types). In our example we are creating a usermanagment namespace with a command called listByCompany, which expects the companyId (long) as its only parameter.

Easy, isn't it?

Consuming Liferay services

I would like to highlight the implementation of the listByCompany command (I am sure you have already guessed that it retrieves all the users of a certain company).

In order to get all this info we need to call the corresponding method of the users service. We could do something like UserLocalServiceUtil.getCompanyUsers(companyId), but this is not a good approach, so we are going to take advantage of having all the Liferay services available as OSGi services. We just need to grab a reference to the UserLocalService bean:

public void setUserLocalService(UserLocalService userLocalService) {
   _userLocalService = userLocalService;
}

protected UserLocalService _userLocalService;

Building and running the bundle

Once we have written our command we need to build the final artifact: just type ant jar in the project folder and you will get the final jar file in the dist folder. Before deploying our new bundle, let's connect to the Gogo shell to check which bundles are already installed:

As you can see, we already have a bunch of bundles running, but nothing about the new commands we are writing. Which commands are currently available in the console? Just type help in the console:

Gogo shell available commands

Now we have to deploy our new bundle. To do that, we just need to copy it to the $liferay.home/data/osgi/deploy folder (the default folder; you can change it in the properties). You can use the deploy Ant task or just copy the jar into that folder. Once the bundle is deployed we should see it in the bundles list:

Available bundles once the custom command has been installed

Take a look at the last line; you will see our bundle has been deployed and is, hopefully, running. Let's see which commands are available once our new bundle is installed.

Do you see the last two lines? Those are the commands we have written (in this post we have only described one command; you can find the other one in the companion source code). So, if I type listByCompany ID_OF_A_COMPANY (mine is 10153) (since there is no collision I don't need to prefix the command with the namespace), you should get output similar to this:

Users of the company 10153
    User default@liferay.com with id 10157
    User test@liferay.com with id 10195

In this post we have seen how to extend the Gogo shell by creating new commands, consuming Liferay services, and building a basic OSGi bundle in the Plugins SDK. It is not a big deal, but I think it is a good starting point to get used to the new OSGi abilities. Hopefully I will be able to push some more complex examples in the near future.

You can find the whole source code in the shared/gogo-commands-shared folder of my plugins repo.

Leveraging a little bit of the OSGi power

Company Blogs April 5, 2013 By Miguel Ángel Pastor Olivar Staff

These days I have been pushing all the OSGi work through the review process, so some of it is already in the master branch. Ray has written a really good blog post about using the Apache Felix Gogo console inside Liferay and has already explained some of the benefits of our current work, so I am not going to explain it here again.

We are still going through the peer review process so, at the time of this writing, some of the features may not be in the master branch yet. Anyway, we can still get many benefits and write powerful applications that exploit some of the advantages OSGi brings to us. To illustrate this, let's write a small, extensible metrics application.

Our goal is to build a small, pluggable monitoring system which allows us to track different aspects of Liferay's internals: for example, JVM statistics or whatever you can imagine.

The example uses the OSGi Declarative Services specification, a declarative and very easy way of declaring and consuming OSGi services. It is not included by default right now because, once all the work is in master, we will try to push something very close to the approach shown in this example, but using the Eclipse Gemini Blueprint project, in order to keep all the Spring benefits we already have.

The metrics core system

The core of our metrics system is extremely simple; we just create a domain model to represent our system metrics.

Another important piece of our metrics core component is the MetricsProviderManager. It is just a simple background task which periodically collects all the metrics extensions registered in the system.

As you can see, right now we don't have any source code that can provide any kind of metric. At the beginning of the post we said that our metrics system should be pluggable/extensible. In order to achieve that, let's create an SPI (Service Provider Interface) which the extensions of our metrics system must fulfill.
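The SPI can be as small as the following sketch (the interface and method names are illustrative assumptions, since the original listing is not shown):

```java
import java.util.Map;

// Contract that every metrics extension has to fulfill; the manager
// periodically calls getMetrics() on each registered provider.
public interface MetricsProvider {

    // Human-readable name identifying the provider, e.g. "jvm"
    String getName();

    // Snapshot of the current values, keyed by metric name
    Map<String, Object> getMetrics();
}
```

Extensions only need to implement this interface and register themselves as OSGi services; the manager never needs to know about them at compile time.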
Once we have the core of our system, let's register it as a service inside the OSGi container (which is running inside Liferay). There are different ways to do that; in this example we are going to show how to achieve it using the Declarative Services spec:

What are we doing with the previous definition?

  • Declaring our metrics manager as a component (OSGi service)
  • Getting/ungetting a reference to every MetricsProvider service that is registered/unregistered in the OSGi container (note how easily we can extend our metrics system)

Our first metrics extension

At this point we have the core of our system which, by itself, does not measure anything :). Let's build our first metric component, a (rather silly :) ) JVM metrics provider. How can we do that? Just two easy steps.

Implement the SPI which the core system exposes:
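As a self-contained sketch, a trivial JVM provider can be built on the JDK's ManagementFactory MXBeans (the getMetrics() method name is an assumption about the SPI; the class does not formally implement an interface here so the sketch stands on its own):

```java
import java.lang.management.ManagementFactory;
import java.util.LinkedHashMap;
import java.util.Map;

// A trivial "JVM metrics" provider reading a few figures from the
// platform MXBeans.
public class JvmMetricsProvider {

    public Map<String, Object> getMetrics() {
        Map<String, Object> metrics = new LinkedHashMap<>();

        metrics.put(
            "loadedClassCount",
            ManagementFactory.getClassLoadingMXBean().getLoadedClassCount());
        metrics.put(
            "threadCount",
            ManagementFactory.getThreadMXBean().getThreadCount());
        metrics.put(
            "heapUsed",
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed());

        return metrics;
    }
}
```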




Register the previous implementation as an OSGi service (again using the Declarative Services approach):


And that's everything we need to do. Simple, isn't it? Just package both components, deploy them to your LIFERAY_HOME/data/osgi/deploy folder, and watch the logs. Now you can connect to the console (as Ray has already shown) and stop your metrics-jvm component, so JVM metrics will no longer be collected; or you can create a new metric extension, deploy it, and start seeing its metrics in the log.

I am intentionally hiding all the OSGi details, since I want you to focus on the general idea and not on the details of building a MANIFEST.MF. In the near future we will see more complex things, like using Liferay services as OSGi services, creating OSGi-based webapps, and so on.

You can find all the source code in the following Github repo under the folder liferay-module-framework/examples/declarative-services.

I would love to hear your feedback, and whether you would prefer having a wiki page with all the new module framework related stuff instead of long blog posts like this one (sorry about that).

My 2nd birthray

Company Blogs February 28, 2013 By Miguel Ángel Pastor Olivar Staff

It has been two years since I joined Liferay; it is incredible how fast time goes. Two amazing years that I have tried to enjoy as much as possible.

First of all I would like to thank all my Spanish colleagues for making me feel at home from the first day I walked into the office. And thanks to all the people who work with me on a daily basis and manage to put up with me every day: Ray, Mike, ... Thanks a lot to all of you, guys!!

I hope all of you enjoy it as much as I am trying to!

Thanks a lot guys!

Liferay and Modularity

Company Blogs October 25, 2012 By Miguel Ángel Pastor Olivar Staff

A few minutes ago I finished my talk about Liferay, modularity and OSGi at the Spain Symposium. I am pretty sure I covered the main points I had planned before the talk, but I have many more ideas I couldn't talk about, because I only got a 20 minute slot.

You can take a look at the slides on my Slideshare account (http://www.slideshare.net/miguelinlas3/liferay-module-framework) and you will also be able to download them at Liferay. I will push the source code of the examples to the Liferay Tech Talks as soon as possible.

By the way, Ray and I have been talking about many different concerns around OSGi and modularity over the last months, and he has already written a great blog post about that (needless to say, I completely agree with him). I would like to put some emphasis on the footprint and resiliency work: once we have everything in place, I would like to put some effort into decreasing Liferay's resource usage, exploiting all the benefits modularity, and OSGi, bring to us. This is a long story, and I think it deserves more than a blog entry.

Thanks to everyone who attended my talk. Any kind of feedback will be extremely welcome.

EDITED: You can find the source code examples here

Scala infrastructure in plugins SDK

Company Blogs March 26, 2012 By Miguel Ángel Pastor Olivar Staff


I am a Scala enthusiast; I must admit it :)! It allows me to write clear and concise code with all the advantages of an extremely powerful type system, and functional and object-oriented paradigms, among other things.

But this is not a blog post introducing Scala's benefits (you can find a very quick intro to the language at https://github.com/migue/blog-examples/tree/scala-blog-examples/scala-talk); it is about including Scala in the Plugins SDK so we can use it to develop our new Scala-based portlets.

This is a very quick overview of what you can do with the ongoing work (https://github.com/migue/liferay-plugins/tree/scala-support-infrastructure). I still need to do some minor hacking, but I hope to push it to master during this week (if my child allows me to do it). Let's get started:
  • You can create a new Scala-based portlet by executing the create script: create.sh portlet_name portlet_display scala
  • The previous command will create a basic infrastructure with all the artifacts we need: a build.xml file, all the Scala libraries needed, and a simple portlet class.
  • Using that build.xml file we can compile Scala code and Java code, use scalac (both the standalone compiler and the daemon), and make our deployments. This infrastructure will take care of the mixed compilation process if you are using both Java and Scala as development languages for your new portlet.

I have some more work to do, like adding a command to update the Scala libraries to a specific version, or generating IDE files automatically. The current version is working fine, but I hope to improve it during the weekend and push it to master.

Short entry; hopefully I will write a more detailed one with some more news: I am working on building a Scala wrapper on top of the Liferay API ... so stay tuned!!

Hope you like it!


Writing custom SQL queries

Company Blogs December 13, 2011 By Miguel Ángel Pastor Olivar Staff

Sometimes there is a need to write custom SQL queries in order to obtain data according to our needs. And, as all of you know, Liferay can run on top of the most popular databases, so these queries must be written carefully to prevent unpleasant surprises in the future.

The following lines summarize some of the most common tips we must pay attention to:

  • Using the "&" bitwise operator. Many databases, such as Oracle, do not support the "&" keyword as an operator, using a function instead. If you want to write a bitwise AND operation in your custom SQL queries, write BITAND(X, Y) and Liferay will translate your code according to the underlying database.
  • Casting text columns. Another common difference between databases is how text columns are cast. You can solve this problem with the CAST_TEXT(text_column) function: every time you need to cast a text column, use this function and your cast will work across the different databases.
  • Integer division. Some databases, such as MySQL, have a specific way of doing integer division: if you want the integer result you would write A DIV B in MySQL or TRUNC(A, B) in Oracle. To make your query database agnostic, use the function INTEGER_DIV(A, B).
  • Related to the first item, some databases, such as Sybase, do not support using decimal columns with the bitwise operator. Liferay comes to the rescue again and offers a CAST_LONG(x) function that will translate your SQL code according to the current database.

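To make the tips concrete, here is a sketch of a custom query using those helper functions. The table and column names are invented for illustration; the point is that Liferay rewrites BITAND, CAST_TEXT and INTEGER_DIV into the right syntax for the target database:

```sql
-- Hypothetical table and columns, purely for illustration.
SELECT
    entryId,
    CAST_TEXT(title) AS title,          -- portable cast of a text column
    INTEGER_DIV(score, 100) AS level    -- portable integer division
FROM
    My_Entry
WHERE
    BITAND(flags, 4) = 4                -- portable bitwise AND
```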
There are some more tips, but the previous ones are, IMHO, the most common in custom SQL queries.

One simple advice: be careful when writing your custom SQL queries ;)



PS: I need to create a wiki page with all the technical details. When the page is available, I will update this entry with the corresponding link.

PS2: I have a few pending blog entries related to Liferay and cloud computing. I hope I can write those posts as soon as possible :)

Debugging SQL queries with p6spy

Company Blogs 1. August 2011 Von Miguel Ángel Pastor Olivar Staff

Who has never had to fight with complex SQL queries and PreparedStatements? And what about the "?" placeholders that show up when enabling the SQL log?

Last week I had to write a big migration process to complete a refactor, and I did a little hacking to use the p6spy driver when running the migration from the command line. This driver resolves the values behind the hated "?" symbols :).

My first step was to modify portal/tools/db-upgrade/run.sh (I work on Linux) in order to launch the migration process from the command line without starting the application server.

Once the previous file has been modified we should include the p6spy.jar and the spy.properties files in the portal/tools/db-upgrade/lib/ folder.

We will need to configure three basic properties in the spy.properties file:
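Something along these lines (the key names follow the p6spy release of that time; double-check them against the spy.properties bundled with your p6spy version):

```properties
# The real JDBC driver that p6spy wraps (MySQL in my case)
realdriver=com.mysql.jdbc.Driver

# File where the resolved SQL statements are logged
logfile=spy.log

# Keep appending to the log instead of truncating it on each run
append=true
```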

Basically, we configure the path for the logged queries and the real driver used by our app (I am using MySQL at the moment).

The last step would be to tell the upgrade process to use a specific datasource (p6spy). We could achieve this by modifying the file portal/tools/db-upgrade/portal-ext.properties (pay attention to the jdbc.default.driverClassName property):
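Something along these lines, where the URL and credentials are placeholders for your own environment:

```properties
# Route all JDBC traffic through p6spy; the real driver is set in spy.properties
jdbc.default.driverClassName=com.p6spy.engine.spy.P6SpyDriver
jdbc.default.url=jdbc:mysql://localhost/lportal?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=root
jdbc.default.password=root
```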


That's all! If you run your upgrade process by executing the run.sh file, all the executed SQL queries will be logged. And, most importantly, the "?" symbols will be replaced with their corresponding values :).

We could also use it for debugging the portal's SQL queries (the hibernate.show_sql property does not replace the "?"), but I leave that to you as homework ;).



PS: as far as I remember, there is an Eclipse plugin that displays the contents of this log in a much nicer way than plain text :)

ANTLR Creole Parser infrastructure

Company Blogs 21. Juni 2011 Von Miguel Ángel Pastor Olivar Staff

This is the first entry on my Liferay blog, so first of all I would like to introduce myself. My name is Miguel Pastor and I have only been working at Liferay for four months (since March 2011), but I'm very happy to be part of this incredible team.

In my first blog entry I would like to describe the general ideas behind my first "contribution": the new Creole parser infrastructure. Throughout this post we will see an overview of the architecture's main components and some of our ideas that could be introduced in the future.


The parser infrastructure is built on top of the standard techniques used to implement programming languages. The common flow of a typical parsing process is the following:

  1. Parse the Creole source code.
  2. The result of the parsing process is an Abstract Syntax Tree (AST).
  3. The AST is traversed by different visitors (right now there are some semantic validation visitors and an XHTML translation visitor).

In the following sections we will dive deeper into some of these components.

The parsing process

The parser is built with the invaluable help of ANTLR 3. For those who don't know ANTLR, it is a tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions (LL(*)).

The grammar definition includes the actions needed to build the proper abstract syntax tree (see next section) from the Creole source code. This grammar is responsible for validating the source's syntax and building the AST.

If someone is interested, you can take a look at the grammar definition.

Abstract Syntax Tree

This kind of structure is commonly used in compilers to represent the abstract structure of a program. Later compiler phases perform multiple operations over this structure (usually using the Visitor pattern).

The next figure shows a partial view of the hierarchy used in the AST representation (Composite Pattern)
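In Java terms, that Composite hierarchy looks roughly like the following sketch. The node names here are illustrative stand-ins and may not match the real Liferay classes exactly:

```java
// Illustrative sketch of the Composite pattern behind the AST.
import java.util.ArrayList;
import java.util.List;

public class AstHierarchy {

    // Component: every node can hold an ordered list of child nodes
    public static abstract class ASTNode {
        private final List<ASTNode> children = new ArrayList<ASTNode>();

        public void addChildASTNode(ASTNode node) {
            children.add(node);
        }

        public List<ASTNode> getChildASTNodes() {
            return children;
        }
    }

    // Root node for a whole parsed wiki page
    public static class WikiPageNode extends ASTNode {
    }

    // Nodes carrying their own data
    public static class HeadingNode extends ASTNode {
        public final int level;
        public final String text;

        public HeadingNode(int level, String text) {
            this.level = level;
            this.text = text;
        }
    }

    public static class LinkNode extends ASTNode {
        public final String url;
        public final String label;

        public LinkNode(String url, String label) {
            this.url = url;
            this.label = label;
        }
    }

    // Builds a small example tree: three headings and a link under the page root
    public static WikiPageNode samplePage() {
        WikiPageNode page = new WikiPageNode();
        page.addChildASTNode(new HeadingNode(1, "Header 1"));
        page.addChildASTNode(new HeadingNode(2, "Header 2"));
        page.addChildASTNode(new LinkNode("http://www.google.com", "Link to Google"));
        page.addChildASTNode(new HeadingNode(3, "Header 3"));
        return page;
    }

    public static void main(String[] args) {
        // The page root directly owns four child nodes
        System.out.println(samplePage().getChildASTNodes().size());
    }
}
```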

Imagine for a moment that we have the following creole source code


= Header 1

== Header 2

[[http://www.google.com|Link to Google]]


=== Header 3

The abstract representation of the previous source code looks something like this:
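In text form, the tree for that snippet is roughly the following (node names are illustrative):

```
WikiPageNode
 ├── HeadingNode (level 1, "Header 1")
 ├── HeadingNode (level 2, "Header 2")
 ├── LinkNode ("http://www.google.com", "Link to Google")
 └── HeadingNode (level 3, "Header 3")
```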



Code Generation and wiki engine

Once the previous structure has been built, a bunch of visitors can traverse it to do some work. The main features implemented right now are code generation and link extraction:

  • XHtmlTranslationVisitor offers the basic functionality to traverse the AST and generate the XHTML code (generic behaviour). In order to integrate with the wiki engine infrastructure already built into Liferay, there is an XHtmlTranslator class (extending the previous one) that handles Liferay's particularities such as link generation or the table of contents.
  • LinkNodeCollectionVisitor allows us to extract all the nodes that represent a link in the original source code.

Following the previous patterns we could add a new visitor class to traverse the AST structure and perform whatever we want.
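A minimal sketch of such a custom visitor, using hypothetical stand-ins for the real Liferay AST interfaces (only the shape of the pattern is faithful; the actual class names and signatures live in the portal's Creole parser):

```java
// Visitor pattern sketch: nodes dispatch to a per-type visit method.
public class VisitorSketch {

    // Every node accepts a visitor and dispatches to the right visit method
    public interface ASTNode {
        <T> T accept(ASTVisitor<T> visitor);
    }

    public static class HeadingNode implements ASTNode {
        public final int level;
        public final String text;

        public HeadingNode(int level, String text) {
            this.level = level;
            this.text = text;
        }

        public <T> T accept(ASTVisitor<T> visitor) {
            return visitor.visitHeading(this);
        }
    }

    public static class LinkNode implements ASTNode {
        public final String url;
        public final String label;

        public LinkNode(String url, String label) {
            this.url = url;
            this.label = label;
        }

        public <T> T accept(ASTVisitor<T> visitor) {
            return visitor.visitLink(this);
        }
    }

    // One visit method per node type; new behaviour = new implementation
    public interface ASTVisitor<T> {
        T visitHeading(HeadingNode node);
        T visitLink(LinkNode node);
    }

    // Example: translate nodes to XHTML, in the spirit of XHtmlTranslationVisitor
    public static class XhtmlVisitor implements ASTVisitor<String> {
        public String visitHeading(HeadingNode node) {
            return "<h" + node.level + ">" + node.text + "</h" + node.level + ">";
        }

        public String visitLink(LinkNode node) {
            return "<a href=\"" + node.url + "\">" + node.label + "</a>";
        }
    }

    public static void main(String[] args) {
        ASTNode heading = new HeadingNode(1, "Header 1");
        // Prints the XHTML translation of the heading node
        System.out.println(heading.accept(new XhtmlVisitor()));
    }
}
```

Adding an XML translator, for instance, would only require a new ASTVisitor implementation; the node classes stay untouched.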

Future improvements

Using the previous patterns, I already have two main ideas to improve our current parser infrastructure so that it allows extensions by external contributors:

  • The first one is creating a traversable structure so that multiple visitors can be injected. With this mechanism we could add new visitors that provide new functions for the current parser; for example, imagine that we want to translate the Creole code to XML instead of XHTML.
  • The second one is adding extensions (similar to TableOfContents). At this moment the grammar allows (with a little hacking) including new terms in our Creole code using the syntax @@new term@@. Such an extension would be available in the AST as a node of type ExtensionNode (it does not exist right now, so the final name could be different), and therefore the visitor interface will have a method to deal with this kind of node.

The above ideas have not been implemented yet, but they should not be too complex :).

I'd love to hear your comments!



