"The plugin worked on my computer" is no longer a valid excuse

Community Blogs June 12, 2014 By Manuel de la Peña Staff

Hello my lovely Liferay developers!
 
I'm very proud and glad to announce that, from now on, we are able to write integration tests in our Liferay plugins!
 
* Before breaking this down, I want to thank all the people who collaborated as a strong team to achieve this: Carlos Sierra, Cristina González, and Miguel Pastor, who worked really hard to push this awesome stuff into the product.
 
 
Hey man, wait, what do you mean by integration tests?
 
Well, by integration tests I mean those tests that rely on other services, such as portal services or even services within the plugin itself. We will have a real (not mocked) instance of that service, with all the wiring it uses (persistence behaviour, caches, indexing, etc.).
 
This black magic has been desired by many of you for years, but at last we have built a fine integration of Liferay with one of the coolest testing frameworks around nowadays. That framework is Arquillian (http://arquillian.org), an innovative and highly extensible testing platform for the JVM that enables developers to easily create automated integration, functional and acceptance tests for Java middleware.
 
Taken directly from http://arquillian.org/invasion, Arquillian handles:
  • Managing the lifecycle of the container (or containers)
  • Bundling the test case, dependent classes and resources into a ShrinkWrap archive (or archives)
  • Deploying the archive (or archives) to the container (or containers)
  • Enriching the test case by providing dependency injection and other declarative services
  • Executing the tests inside (or against) the container
  • Capturing the results and returning them to the test runner for reporting
In a few words: "Arquillian brings your test to the runtime, giving you access to container resources, meaningful feedback and insight about how the code really works."
 
First, you should know that Arquillian can use three container types (I'll only mention them here, so please visit the documentation website to learn more about them):
  • Embedded: the test runner contains the container as a library
  • Managed: the test runner starts and stops the container as a separated process
  • Remote: the test runner relies on an operational container, already started.
After tons of beer and two or three minutes discussing this, we think the best option to start with is the Remote approach. This allows us to run tests at development time, and we can supply managed behavior using CI scripts if needed.
 
Just to make things easier, we have added some capabilities to our Plugins SDK to configure a Liferay bundle (the one defined in the SDK) with Arquillian support, which means:
  • JMX enabled and configured.
  • Tomcat's manager installed and configured.
  • Arquillian dependencies available at compile/test time.
I'll explain these in more depth later.
 
Secondly, we have created a library that makes it easier to create a WebArchive, the file that Arquillian needs to send to the container. This piece of software builds a WebArchive and executes the portal's auto-deployers, so you can see the WebArchive as an abstraction of a plugin WAR file that has been dropped into the LIFERAY_HOME/deploy folder, but not yet deployed to the container.
 
At the moment, you must define a method with Arquillian's @Deployment annotation and build your WebArchive there. (We are deciding how to improve this, but for now defining this deployment method is mandatory.)
 
Once we have created the WebArchive, we can add classes (or resources) to that archive, which is actually a very good thing, because we are making test dependencies explicit: just read the test to see all of them.
 
 
Lastly, the test classpath must contain an Arquillian test descriptor, where you define where your remote server is running. This file, named "arquillian.xml", is placed under the PLUGIN-NAME/test/integration folder.
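A minimal descriptor for a remote Tomcat might look like the following sketch. The property names follow the Arquillian Tomcat remote container adapter, but the host, ports and credentials here are assumptions and must match your own bundle:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

	<container qualifier="tomcat" default="true">
		<configuration>
			<!-- Host and HTTP port of the already running Tomcat bundle -->
			<property name="host">localhost</property>
			<property name="httpPort">8080</property>

			<!-- JMX port configured in the bundle's setenv.sh -->
			<property name="jmxPort">8099</property>

			<!-- Credentials of a tomcat-users.xml user with the manager roles -->
			<property name="user">tomcat</property>
			<property name="pass">tomcat</property>
		</configuration>
	</container>
</arquillian>
```

The qualifier name is arbitrary; what matters is that the credentials match a user with the manager roles in tomcat-users.xml.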
 
Mmm... let me think... I believe that's all, so let's summarize!
  • Tomcat configured with JMX, the manager application, and valid credentials to access the manager
  • A library that builds the plugin into an archive Arquillian knows how to deal with
  • The test classpath configured
We have added some cool tools to the SDK so that you can apply all of the previous configuration by executing only two ANT targets:
  • In the root folder of the Plugins SDK (ONLY THE FIRST TIME YOU SET UP THE SDK): ant setup-testable-tomcat, which will configure your bundle, affecting these files:
    • CATALINA_HOME/bin/setenv.sh file, to configure JMX
    • CATALINA_HOME/conf/tomcat-users.xml file, to configure tomcat's manager users and roles
    • webapps/manager, web application Arquillian communicates with to deploy your tests on the container.
  • In the root folder of your plugin: ant setup-arquillian, which will configure your plugin, affecting these files:
    • test/integration/arquillian.xml, to define the credentials to log in to Tomcat's manager
    • the test classpath, to add the Arquillian dependencies
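As a rough sketch of what these setup targets touch (the exact values are assumptions; check the files the targets actually generate), the JMX settings in setenv.sh and the manager user in tomcat-users.xml look roughly like this:

```sh
# CATALINA_HOME/bin/setenv.sh -- expose JMX so Arquillian can talk to Tomcat
CATALINA_OPTS="$CATALINA_OPTS \
	-Dcom.sun.management.jmxremote \
	-Dcom.sun.management.jmxremote.port=8099 \
	-Dcom.sun.management.jmxremote.authenticate=false \
	-Dcom.sun.management.jmxremote.ssl=false"
```

```xml
<!-- CATALINA_HOME/conf/tomcat-users.xml -- a user for Tomcat's manager -->
<tomcat-users>
	<role rolename="manager-gui"/>
	<role rolename="manager-script"/>
	<role rolename="manager-jmx"/>
	<user username="tomcat" password="tomcat"
		roles="manager-gui,manager-script,manager-jmx"/>
</tomcat-users>
```

These are the credentials you would then mirror in test/integration/arquillian.xml.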
Well, our plugin now supports Arquillian! But we still need to write a test that verifies the integration with the portal.
 
So, let's dirty our hands!
 
This is the minimum recipe to achieve it:
  1. Start your Tomcat, already configured with provided ant tasks
  2. Create a test class under test/integration folder.
  3. Run your test using Arquillian test runner.
  4. Add a method to the test class to retrieve the WebArchive, using @Deployment annotation.
  5. Add a test method that verifies some functionality in the portal, e.g. retrieving the number of calendars in the Calendar portlet.
  6. Execute the tests:
    1. Using ANT: ant test-integration
    2. Using your IDE
* Note that test dependencies have been declared in an IVY test configuration in the Plugins SDK, so no test-related configuration needs to be done in the plugin, as it's inherited from the SDK.
 
import com.liferay.ant.arquilian.WebArchiveBuilder;
import com.liferay.calendar.service.CalendarLocalServiceUtil;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.spec.WebArchive;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class LiferayArquillianTest {
	
	@Deployment
	public static WebArchive createDeployment() {
		return WebArchiveBuilder.build();
	}

	@Test
	public void testGetCalendarsCount() {
		int count = CalendarLocalServiceUtil.getCalendarsCount();

		Assert.assertEquals(0, count);
	}

}
In this example, CalendarLocalServiceUtil is a real object, so there's no need for mocking anymore!!!
 
 
But, why is this CalendarLocalServiceUtil a real object? Where is the magic here?
 
Arquillian deploys the fully working plugin into the container, which was started beforehand. Then it executes the tests in the container, and after the test execution Arquillian returns the test results to the runner, undeploying the plugin at the end.
 
This is really cool, because you can run your tests using ANT commands in your shell, or even from your IDE, which speeds up the development process.
 
Ok, but won't deploying/undeploying the plugin be time-consuming?
 
Not at all. Don't forget that your container is already started, so the deploy->test->undeploy cycle should be very fast (10 seconds or less). All the heavy lifting was done during container and Liferay portal startup; only your plugin's actions happen live.
 
Will I be able to debug?
 
Yes, the blessed debugger! If you start your container in debug mode, you can create a Remote connection to your Tomcat and debug. Have you noticed I said Remote connection? Why did I say that?
 
As you read a few paragraphs above, the tests will be executed on a Remote server (maybe on your local machine, but still remotely), so you need to configure your IDE to point to that debug port.
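For example (a sketch; port 8000 is an assumption), you can make the Tomcat JVM listen for a debugger with the standard JDWP agent in setenv.sh, and then attach a Remote debug configuration from your IDE to that port:

```sh
# CATALINA_HOME/bin/setenv.sh -- listen for a remote debugger on port 8000
CATALINA_OPTS="$CATALINA_OPTS \
	-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
```

With suspend=n the server starts normally, and you can attach at any time before running the tests.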
 
Future?
 
Well, as you can guess after reading this, we could backport this to the 6.2.x and 6.1.x branches, so plugins on those versions can be tested too.
 
And of course, the next benefit of having the Arquillian integration is that we can start writing more tests in our plugins right now!!!
 
As all of you already know, we are making a huge effort to turn Liferay into a nicer, simpler and more extensible platform. In the next version you will be able to write apps on top of Liferay in a completely different way, and, of course, we want you to test these new applications, so we are currently going through the review process of the basic testing infrastructure that will support these new testing mechanisms.
 
Well, that's all. We'd love to hear your voice, so please leave a comment with whatever you want to say.
 
See you, and remember this...
 

Continuous Integration Best practices: #7 Don't Comment out Failing Tests

Staff Blogs October 9, 2013 By Manuel de la Peña Staff

Hello Continuous Readers! I'm here again to share a new CI practice in a very small pill.
 
Do you remember a time when you had some tests that continuously failed, and nobody had the bandwidth to work on them? What was the easiest solution? Of course, commenting them out or removing them, because they were disturbing your green lights on the server.
 
Did I say of course? Of course not.
 
This must always be the last resort, used very rarely and reluctantly, because it hides the real problem, and what you want from your tests is exactly the opposite: to show what is happening, especially what is wrong, so you can solve it as soon as possible.
 
Instead of sweeping problems under the rug, as in the picture above, try to apply these simple rules:
  • Has a regression been found by the test?
    • Fix the code!
  • Is one of the assumptions of the test no longer valid?
    • Delete it!!
  • Has the application really changed the functionality under test for a valid reason?
    • Update the test!!

With these three very simple rules you can solve the majority of situations related to regressions.

See you next post!

Continuous Integration Best practices: #6 Time-Box fixing before reverting

Staff Blogs August 28, 2013 By Manuel de la Peña Staff

Hello my friends!

Here I am again after a long time without writing about CI at Liferay. If you remember, my last post described the importance of reverting those commits that break the build as quickly as possible.

Well, in this blog post I will share a small pill that can help you during the revert process.

Before reverting an offending commit, establish this rule: whenever the build breaks on check-in, try to fix it for a specific amount of time, defined by your interests, for example 10 minutes.
 
If, after that, you aren't finished, revert to the previous version.
 
 
At Liferay, we usually dedicate 20-30 minutes to investigating the problem. If we cannot solve it, because we don't know much about the functionality that is breaking the build, then we roll back.
 
Another thought: if you always try to fix other developers' failures yourself, maybe they will come to see you as the last bullet in the gun, and they won't mind breaking the build "because it will be fixed magically when I come to work tomorrow".
 
Instead, we prefer reverting the commits and notifying the developer with a specific email (not only the automatic email sent by the CI server), explaining why we have rolled back the commits, so he/she is actually aware of the failure.
 
Just my two cents!

Continuous Integration Best practices: #5 Be prepared to revert

Staff Blogs August 1, 2013 By Manuel de la Peña Staff

 
In this blog post I want to talk about developer mentality, about how they (we) create software that never fails, shines brighter than the sun, and is faster than a jet plane... or not?
 
No, seriously, we developers are more protective of our code than people in other careers, and we usually don't like others criticizing us for it. But we must not forget that we are in a team, with many co-workers (maybe distributed all around the world), and one of our highest wishes must be a software product of the best quality. And in order to achieve that quality, we have to polish the defects we commit.
 
If you remember my last post, there is a role named "Build Master" that is responsible for polishing those defects, reverting wrong commits and sending the issue back to the developer who caused the problem.
 
I think I've written this before, but it's worth reinforcing the idea: we all make mistakes, so every one of us will break the build from time to time.
 
And the important thing is not to blame the developer. Indeed, the most important thing is to get everything working again quickly. Of course, if you aren't able to fix the problem quickly, for whatever reason, you should revert to the previous change-set held in version control and fix the issue locally. After all, you know the previous version was good. Why? Because you don't check in on a broken build!!!
 
I'll add a brief story to show you how reverting is a good idea :)
 
Airplane pilots assume that something will go wrong, so they are ready to abort a landing attempt and 'go around' to make another try.
 
 
Imagine how critical this landing process is compared with a set of commits: pilots prefer aborting it to avoid deaths rather than attempting a dangerous maneuver. So why not do the same with those conflictive commits, which can be re-sent as quickly as possible?
 
So, my advice is: Don't be afraid, my friend... and revert.

Continuous Integration Best practices: #4 Never go home on a broken build

Staff Blogs July 19, 2013 By Manuel de la Peña Staff

I love that picture! Imagine yourself on any Friday, at the end of your work day. You look at the CI server and, unluckily, the build is broken. You have only three options:

  1. Resign yourself to leaving late, because you'll try to fix it.
  2. Revert your changes and retry next week.
  3. Leave now and leave the build broken.

Of course, the best choice is number 1 or number 2, never number 3. In the picture above, Iron Man decided that he doesn't mind what will happen after that "bomb". But why is leaving the build broken a bomb?

It's a bomb because any co-worker who pulls from your master branch will get dirty code. And what happens then? Please look back at my first post about not checking in on a broken build.

On the other hand, if you try to fix the problem, or even revert your changes before leaving, you will keep the build green, and other developers will be happy to pull safe code from the SCM repository.

Some good practices to avoid potential problems are:

  • Check in frequently and early enough to give yourself time to deal with problems should they occur
  • Experienced developers often save check-ins for the next day

Well, at this point you could say: "Ok, I follow similar practices but my team is distributed and we have problems working in different time zones".

In Liferay we actually have this "problem":

As you can see, we work in different time zones: China, Europe and America, following the sun.

In case of problems:

  • If China breaks the build... then Europe's work day is dramatically affected
  • If Europe goes home on a broken build... America would be screaming and crying

How have we solved it? Well, at this point the figure of the Build Master appears:
 
This role not only maintains the build but also polices it, ensuring that whoever broke the build is working to fix it. If not, the build engineer reverts that check-in, so it's mandatory that the build engineer has write access to the master branch in your SCM or, failing that, that his commits are prioritized.
 
The Build Master is a controversial role, because nobody wants to see his/her commits rolled back. But the whole team should accept that this is not a personal offense; it is another effort to improve the quality of the product, never a criticism of the developer.
For that, we should all open our minds and accept that it is not bad to revert someone's commits: they are still present in the history of the project, so we can restore them whenever we need them. And after all, those commits were breaking something, weren't they?
 
I will talk about reverting in the next episode, see you then!
 

Continuous Integration Best practices: #3 Wait for commit tests to pass

Staff Blogs June 24, 2013 By Manuel de la Peña Staff

Hi all!

This is my third blog post about Continuous Integration best practices, and today I want to explain the benefits of being patient after sending commits for review.

As developers, we are used to working on functionalities, finishing them, and jumping to the next one. We send our work to a reviewer and continue working on other tasks. As you probably know, in these cases our mind completely focuses on the new task to do its best, almost forgetting the previous one.

Do you remember my last post, about running the tests (manually on a local machine, or automatically in Jenkins via a pull request)? Well, imagine not knowing the results of that test execution. Are you sure that your work behaves as expected? Are the tests finding potential bugs in it?

If you don't monitor the build that executes the tests for your changes, those questions won't be answered until it's too late: when your code is pushed to your master branch, where other developers can pull from it and get unexpected behaviour.

So this blog post asks you to wait for the commit tests to pass, be aware of the test results, and start fixing them as soon as possible (if needed).

 

In my last post I also commented that the CI server is a shared resource with a lot of information. Developers should monitor it to verify whether their commits caused failures. That way, they are in the best position to solve potential problems, because they haven't switched context between tasks.

 

Monitor the build to verify whether you have added bugs

 

One of the most important aspects of this best practice is knowing that everyone can make errors. Furthermore, errors are an expected part of the process.

But our goal, what we will be focused on, is to find and eliminate them as soon as possible, without expecting perfection and zero errors.

While the build is running, you can organize your inbox, prepare for the next tasks, have a coffee, or even go to the bathroom! The build should take little time to finish: depending on your project size, 10-20 minutes is OK.

Well, that's the end for today. See you in the next post!

Continuous Integration Best practices: #2 Always run the tests

Staff Blogs June 4, 2013 By Manuel de la Peña Staff

Continuing this blog post series about good practices in Continuous Integration, I want to talk about the benefits of running tests.

Practice 2: Always run the tests

When a developer commits a new functionality, it's expected that, at that commit, the software works as we believe it should. And, if the software works as expected at a single commit, why not release in that state? And what would happen if we could assert that every commit in the history makes the software work as expected? Just iterate the sentence "release at $COMMIT" through each commit in the history... We would have become more "releasable", as we could release at whatever commit we want.
 
At this point, we should have realized how important a single commit is, as it could trigger the creation of a release candidate.
 
Ok, we know what the goal is: to have good commits that work as expected. But how can we achieve it? How can our commits be more releasable?
 
One of the most important things you can do to verify that your commits work as expected is to write well-written tests for them, and when I say well written I mean that they must test the functionality: conditionals, loops, different values... not only the happy path.
 
Once you have written good tests, you need to run them and check the results. I will assume that you know how to write and run tests; that is not the main goal of this blog entry, so let me continue without that explanation.
 
In Liferay, we can run tests in two ways:
  1. Locally: a developer can use some ant targets to run tests in his/her own workspace, so he/she can test the code before sending it. Please read the wiki page explaining the Testing Infrastructure in related assets:
    • ant test-unit: executes all unit tests (dependencies on other systems, e.g. databases, are not real: we mock what we need)
    • ant test-integration: executes all integration tests (dependencies on other systems are real, not mocked)
    • ant test-class: executes only one test class
    • etc.
  2. After sending a pull request: we use Jenkins as the CI server to manage all our CI processes, and we have set things up so that every pull request sent to a peer reviewer is monitored by the CI server: it checks out the code and executes some tasks (compilation, source formatting, test execution...). The cool thing here is that there is a Jenkins plugin that monitors the pull request and acts on the test results, managing the GitHub pull request (auto-closing it if it breaks tests, writing comments, changing the pull status...). In this scenario, a peer reviewer knows whether the pull request he/she is about to review is good or breaks something, so this process shortens the feedback loop a lot, discarding bad pulls as soon as possible.
 
Mmmm... interesting, two places to run tests: locally and in the CI server. But why both?
 
You as a developer could have the latest version of a library, or a driver, or an application that configures XXX in your O.S., or your O.S. may even be tuned because of YYY.
The CI server, on the other hand, is a controlled environment; it always runs with the same scenario, for each commit sent by each developer, so every test is executed under the same conditions, in every build, for everyone. And that's a very good thing, because then your test results will be repeatable.
 
Maybe you don't want to execute tests locally; that's ok, we have no problem with that, but always try to run the tests in a controlled environment.
 
Another good capability of the CI server, because it is a controlled environment, is that it is also a centralized information repository: everyone in the team can look at it to search for build results and see what is happening at any moment related to tests. The CI server produces logs for almost everything, so it's very easy to read them and be informed about the real state of the commit (and the project, too).
 
When looking at the server logs, which logs are the most important for verifying that our commits are good? Well, we have two options to know what is happening:
  • Jenkins logs: you can configure Jenkins to send the committers an email with test results, telling them that their commits produced a breakage. In our case we have improved the usability of the default Jenkins email, to make it easier to read.
  • GitHub logs: the plugin we use to monitor pull requests can write comments on GitHub, and this really good platform also sends emails when a pull is commented with test results. So a developer immediately knows whether his/her commits passed the tests or not.

Both of them produce very good complementary information that a developer will know what to do with. So, try to notify developers about every breakage your CI system discovers, so the culprit can start solving it as soon as possible, as we saw in the last practice.

That's all for today, please wait till next blog entry about CI best practices!

Byes!

Continuous Integration Best practices: #1 Don't check-in on a broken build

Staff Blogs May 29, 2013 By Manuel de la Peña Staff

Hi all! I'm writing this blog entry as the first post of a continuous integration blog series, sharing our knowledge and usage of this technique.

In these blog posts, I will talk about some good practices I recommend you follow, based on my experience reading the book "Continuous Delivery" by Jez Humble and David Farley, specifically chapter 3, and of course on my experience dealing with CI at Liferay.

One of the most important things I've learned reading this book is that Continuous Integration (CI) is a practice, not a tool, and requires a significant degree of discipline from the team as a whole. All team members are involved in it and must collaborate to achieve its perfection.

The objective of a CI system is to ensure that the software is working, in essence, all of the time. So you should keep that in mind as a mantra: the software was working before your changes, and it will also work after them.

We hope this blog series helps you if you are starting with CI, but we also want to hear your experience and your feedback about it. So please comment with whatever you consider relevant.

Ok, now that the topic has been introduced, let's start with the first practice...

 

Practice 1: Don't check-in on a broken build

You are about to start a new work day and you see the build broken. Have you received an email from the CI server? If so, you should know how to proceed to verify whether you are the cause of the errors, and if so, please try to solve it as soon as possible instead of hammering away at that stellar new functionality.

That way, you can identify the cause of the breakage very quickly and then fix it, because you are in the best position to work out what caused it.

But wait: you have already finished your work, and the build is still broken. Why shouldn't you check in further changes on that broken build?

First of all, it will compound the failure with more problems. Imagine that you don't know about these practices: every time you check in, you cannot prove that your changes are not adding more errors, and maybe your changes plus the existing errors cause other, different problems.

The direct consequence of this is that it will take much longer for the build to be fixed, because you have added more complexity to the problem.

Of course, you can still check-in. And you can also get used to seeing the build broken. In that case, the build stays broken all the time :(

And that's the cycle, it's true.

But after many broken builds, the long-term broken build is usually fixed by a Herculean effort from somebody on the team (here at Liferay it's usually Miguel) and the process starts again.

Ok, that's all for today.

I'm looking forward to hearing your feedback!!

A very personal experience

Staff Blogs September 25, 2012 By Manuel de la Peña Staff

In this blog entry I transcribe some thoughts from a few days ago; I hope someone can get something out of them:

"Well, here I am at the keyboard again, hoping that the meaning of these lines is 100% real.

And I write while I wait for the last grade of the last course on my university record, for the degree of Ingeniería Técnica en Informática de Gestión (Technical Engineering in Management Informatics).

Six years ago I enrolled at UNED, looking to finish the degree I had started in 2001 at a traditional on-site university. I say "finish" as a euphemism, because I arrived at UNED with 4 courses transferred, and from there to completing 181 credits I could well claim that I have done the entire degree (28 courses) by distance learning.

I enrolled without knowing that some hard years were coming, years of sacrifices, disappointments, exhaustion and everything bad you can imagine when you decide to combine an Engineering degree with daily work. At first I worked for the public administration in Castilla-La Mancha but, restless as I am, I sought to progress by jumping to consulting firms in Madrid, passing through several projects and many clients. So, given the state of the industry in Españistán, imagine the wave-particle duality of work and studies.

I have to say that I didn't think about any of that. My goal was always the finish line. I never thought about the hours of dedication, never. I enrolled year after year, counting down the number of credits I had left, and I only wanted to reach this precise place, where your stomach knots with the nerves of waiting and cold sweat runs down your temples. This precise instant in which you gather the fruits of the harvest, with the greatest of hopes, to see whether the digits hidden on the university's website are strictly greater than 4.99999.

I think about everything I have learned and I see it with different eyes. It may be that, having combined studies and work, I saw more easily the usefulness of each course; many times I found an immediate application of the acquired knowledge in my professional environment, and many other times I sensed its uselessness or obsolescence... Why is programming still taught in MODULA-2? Herein lies a great underlying problem of Spanish degrees: while you are at university, you are completely unaware of the impact that the knowledge of many subjects will have on your professional day-to-day. I don't know whether it's the system, which struggles to open its doors and look further, bringing the business world closer to the University, or whether it's us students who are busy with other things. A bit of both, right?

The important thing is that, regardless of the reason, I have felt comfortable with the journey. I tried to pass each course with enthusiasm, and I studied imagining that I would apply the material the next day. I don't think I got overwhelmed, and this, without a doubt, made it easier to get through the negative part of the road.

I look back and must confess that I have really been lucky. Lucky to have worked for the Public Administration, where they let me learn to make decisions and take on responsibilities; lucky to have endured the Spanish body shops from clients that respected working hours; lucky to have had project managers who didn't give me much trouble (although I saw unpleasant situations); lucky to have had colleagues who helped me adapt across all those projects... lucky to have known what I wanted, and to have met people in whom I could see myself reflected (yes, Jorge, I'm talking about you; I always had you in mind as an example).

Lucky also to have been born into a family that always let me do things, that knew how to watch me investigate, doubt and make mistakes, giving me the enabling autonomy to undertake this kind of adventure.

And lucky to have arrived at a company like Liferay, that 21st-century company that @RCarpintier will reveal to us at the next Symposium, a place where I have found a space for personal and professional development and growth that I could not have imagined before. I have to deeply thank my colleagues for making my day-to-day at the office so easy... the merit is also yours, for creating such a healthy climate of excellence!!

But what I feel most fortunate about is having at my side a person who, despite all the difficulties, was always by my side: in the celebrations for passed courses, in the sacrifices of study days, in the exam nerves (hers) and in the nerves of waiting (mine). Part of this merit is yours, for actively accompanying me along this long road.

And now I wonder... what will come next? I don't know. The grade isn't out yet, but I've already started looking into master's programs...

Because I'm convinced that it can't be any other way!!"

 

UPDATE 25/09/2012, 18:30

YESSSSSSSSSSSSSSSSSSSS!!!!!!!!!!!!!!!!!!!! :))

My experience in the EVP (Employee Volunteer Program)

Staff Blogs June 14, 2012 By Manuel de la Peña Staff

Durante el pasado mes de mayo, en concreto del 7 al 11, estuve disfrutando de las horas que Liferay ofrece a sus empleados para una colaboración en algún programa de voluntariado, y una vez de vuelta a la oficina he pensado que la mejor forma de expresar lo vivido durante esos días es la de compartirlo mediante una entrada de blog.

Primero, comentar para quien no lo sepa que EVP significa Employee Volunteer Program, es decir, Programa de Voluntariado del Empleado. Este EVP hace ver cómo Liferay apuesta de forma clara por incentivar la motivación de sus empleados con una política de Responsabilidad Social Corporativa que nos ayude a ayudar a los demás. Y para mí esto es algo super-motivador que deberían aprender-utlizar en muchas empresas españolas.

Second, now that my EVP is over, I have to admit I loved the experience. Everything I taught, whether a lot or a little, will be used directly by people who were looking for help. Being able to help them made me feel good, and it also left me wanting to do it again.

For my EVP I chose to collaborate with COCEMFECLM, a non-governmental organization in Castilla-La Mancha headquartered in the city of Toledo, whose main mission is to work with people with physical and organic disabilities so that the principle of equal opportunities is upheld and full integration is achieved in education, employment, and society, removing every kind of barrier.

I learned about the organization through a friend, whom I thank deeply, who collaborates with them as a volunteer. He told me what he did there and I thought: "why can't I collaborate too?", "I could help in some way, technological or not". So I set up a meeting with COCEMFECLM, where I presented my ideas to see whether they fit into their projects.

Once there, the person in charge of the women's area projects, Inés Escudero, thanked me warmly for my interest in collaborating and told me about a beautiful project to foster entrepreneurship among women with disabilities in rural areas, which consisted (and consists) of a series of tutoring and mentoring processes aimed at creating, in the future, new ventures that would allow them to generate their own self-employment.

As COCEMFECLM points out, "women with disabilities suffer the highest unemployment rates, reaching almost 60 percent of their active population, in addition to suffering multiple forms of discrimination and being victims of every kind of violence", which is why it is "important for public administrations to design equity policies, prioritizing the groups most vulnerable and sensitive to the crisis, such as women and men with disabilities".

That was where my collaboration fit perfectly: showing them the advantages that technology puts at the service of these ventures. Everything was set in motion.

That day we prepared the ingredients for what finally came out of the oven as the course "Aprender a Emprender: Las Tecnologías al servicio del Emprendimiento" ("Learning to be an Entrepreneur: Technology at the Service of Entrepreneurship"), five days of training to help this disadvantaged group improve their technological skills and foster self-employment.

To that end, we decided the course would cover:

  • how to find information and communicate on the Internet, mainly through good use of search engines, instant messaging, and email.
  • how to take advantage of the new business model that the Cloud offers SMEs and the self-employed, through collaboration tools such as Google Docs or file-sharing tools such as DropBox.
  • how to use social networks professionally to increase the visibility of the products and services a business offers, using Twitter, Facebook, and LinkedIn.
  • how to use blogs as a communication tool.
  • Finally, we decided the students should close the cycle by creating a blog of their own.

Once the course started, I found myself in front of 8 female and 2 male students eager to learn. They told me they had had some reservations about signing up, since they weren't sure what use they could make of it, but as the days went by I noticed how much interest they were putting in... they wouldn't even remind me about the mid-day break!!

Other than that, the course ran very smoothly; the students were a perfect audience, keeping interruptions to a minimum and attention at a maximum.

This experience also had quite a media impact in the city of Toledo, and I would even venture to say across Castilla-La Mancha, since on the next-to-last day regional media outlets came for a press conference. Besides yours truly, the speakers were the Director of the Women's Institute of Castilla-La Mancha, María Teresa Novillo, and the President of COCEMFECLM, Julio Roldán Perezagua.

 

The students' projects

Below is a list of the projects created during the course, which I will keep updating as I receive more information.

  • Aeroclub Las Tablas del Alberche in Almorox, a website featuring a gliding school, light aircraft, radio control, car modeling, aeromodeling, etc., by Milagros.
  • Catas de Vino y Azúcar: a recipe blog, by Ana.
  • Las cosas de Marita: sales of personalized T-shirts created by Marita.
  • El Taller de la Expresión: through the plastic arts, participants learn to paint and decorate, express themselves emotionally, and learn the importance of self-knowledge and values, by María.

 

Links

Here are also the links to the press releases about the course:

 

Photos

And here are some photos from the course:

Here I am with the students.

Another photo with the students. How attentive they are!

Here with Inés (@iescudero3), the course coordinator, and some gifts they gave me at the end of the course.

Here with the Director of the Women's Institute of CLM, María Teresa Novillo, and the President of COCEMFECLM, Julio Roldán Perezagua, during the press conference that presented the course to the media.

 

For my part, I look forward to collaborating with this organization again, and I hope my experience encourages someone else to take that small step toward collaborating socially with other initiatives; however little you do, it helps, and a lot.

Until next time!

Liferay 6.1 GA1 in the cloud, step by step

Staff Blogs January 12, 2012 By Manuel de la Peña Staff

Do you want your new Liferay 6.1 GA1, with all its new functionalities (announced in English and Spanish), in the cloud?

Follow these simple steps to get it up and running, even if you want to use the new Setup Wizard:

  1. Go to Jelastic (www.jelastic.com) and create an account for your location.
  2. Create an environment: Tomcat 6 + MySQL 5.
  3. Upload the portal libraries (portal-client and dependencies) to SERVER_ROOT/lib. Include the JDBC driver for your database here.
  4. Upload the WAR file from SourceForge (Jelastic makes the upload easy, since it links directly to the WAR file).
  5. Create portal-ext.properties in SERVER_ROOT/home with these values:
    • resource.repositories.root=${user.home}/ENVIRONMENT_NAME
    • include-and-override=${liferay.home}/portal-setup-wizard.properties
    • liferay.home=${user.home}/ENVIRONMENT_NAME
    • jdbc.default.jndi.name=jdbc/LiferayPool
  6. Modify the SERVER_ROOT/server/context.xml file with these values (note that the "mysql-" prefix in the database URL is very important):
    • <Resource name="jdbc/LiferayPool" auth="Container" type="javax.sql.DataSource"
                  maxActive="100" maxIdle="30" maxWait="10000"
                  username="USER" password="PASSWORD" driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://mysql-ENVIRONMENT_NAME.jelastic.com/DATABASE_NAME?useUnicode=true&amp;characterEncoding=UTF-8" />
  7. Clean the catalina.out log so you can verify that your installation is successful.
  8. Restart the server.
  9. Check in catalina.out that Liferay starts up reading your portal-ext.properties.
  10. Browse to your portal: ENVIRONMENT_NAME.jelastic.com
  11. The Setup Wizard is the first thing you'll see, but since JNDI is configured, you cannot modify the database settings from there. Go to SERVER_ROOT/server/context.xml for any database changes.
  12. Set up the portal (name, language, admin credentials), and...
  13. There it is! Your portal, up and running!
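Assembled into a single file, the configuration from step 5 would look like the sketch below. ENVIRONMENT_NAME is a placeholder for your Jelastic environment name; the JNDI resource itself is defined in context.xml in step 6:

```properties
# SERVER_ROOT/home/portal-ext.properties
resource.repositories.root=${user.home}/ENVIRONMENT_NAME
liferay.home=${user.home}/ENVIRONMENT_NAME

# Let the Setup Wizard write its answers to a separate file
include-and-override=${liferay.home}/portal-setup-wizard.properties

# The database connection comes from the JNDI resource in context.xml
jdbc.default.jndi.name=jdbc/LiferayPool
```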

Then you can tune your portal through portal-ext.properties, remembering not to modify the properties set in this post.

Important (and new) things:

As you can see, we tell the Setup Wizard where to read the new properties file (the include-and-override property), and we configure the database via JNDI. Of course you can also do it with plain JDBC, in the usual way:

  • jdbc.default.driverClassName=com.mysql.jdbc.Driver
  • jdbc.default.url=jdbc:mysql://mysql-ENVIRONMENT_NAME.jelastic.com/database_name?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
  • jdbc.default.username=user
  • jdbc.default.password=password
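Put together, the plain-JDBC variant of portal-ext.properties would look like this sketch; it replaces the jdbc.default.jndi.name line from the JNDI setup, so no context.xml resource is needed (ENVIRONMENT_NAME, database_name, user, and password are placeholders):

```properties
# SERVER_ROOT/home/portal-ext.properties (JDBC variant, no JNDI)
resource.repositories.root=${user.home}/ENVIRONMENT_NAME
liferay.home=${user.home}/ENVIRONMENT_NAME
include-and-override=${liferay.home}/portal-setup-wizard.properties

# Direct JDBC connection instead of jdbc.default.jndi.name
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://mysql-ENVIRONMENT_NAME.jelastic.com/database_name?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=user
jdbc.default.password=password
```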

Hope it helps!

Manuel
