Responsible AI

As a provider of products and services designed to help our customers create digital experiences, we are excited about the potential of AI to drive innovation and create value. However, we also recognize that the development and deployment of AI technologies come with significant ethical considerations and potential risks.

Liferay is committed to conducting business ethically and in full compliance with all relevant regulations. We have identified Responsible AI as an additional key focus area for our compliance efforts, with the ultimate goal of ensuring that Liferay harnesses AI responsibly and in alignment with our values and applicable laws.

To this end, Liferay has implemented an AI Management System (AIMS). This system aims to:

Identify and mitigate potential risks

We will carefully assess the potential societal and environmental impacts of our use of AI-powered solutions, including considerations such as bias, fairness, transparency, privacy, security, and respect for other parties’ intellectual property. Based on this assessment, we will implement measures to mitigate identified risks.

Ensure ethical development and use

We will integrate ethical considerations into all stages of the AI lifecycle, from research and development to deployment and maintenance, ensuring human oversight throughout.

Foster trust and transparency

We will be transparent about how we use AI and the impact of our use of AI systems.

We believe that Responsible AI is not only the right thing to do but also essential for building trust with our customers, partners, and the broader community. We are committed to continuous learning and improvement in this area and will actively engage with stakeholders to make sure our AI solutions are developed and used responsibly.

We track and evaluate the use of AI-driven systems at Liferay. Our employees receive awareness training and working instructions outlining which AI-driven systems may be used, as well as how and when.

Liferay's AIMS has undergone verification by external auditors and received ISO 42001 certification. The AIMS provides a structured approach to risk-based compliance and responsible conduct in AI-related activities, reflecting Liferay's commitment to the core principles of Responsible AI and to applicable regulatory frameworks, including the EU AI Act and data protection laws.

Core Principles of Responsible AI 

Liferay's Responsible AI Program is guided by the following principles, which apply across the entire AI lifecycle and are embedded in risk assessments, controls, and operational procedures:

  • Environmental and Societal Well-being
    We consider sustainability and the social impact of our AI use.
  • Fairness
    We promote ethical, inclusive, and non-discriminatory AI operations.
  • Transparency and Explainability
    We strive to give preference to AI systems with traceable outputs and to ensure that users are informed about their use.
  • Human Oversight
    We ensure that AI supports human decision-making and remains subject to proper oversight.
  • Robustness, Safety and Security
    We ensure that the AI systems we use are designed to be technically sound, resilient, and protected against threats.
  • Data Protection
    We adhere to the privacy-by-design principle and comply with all data protection laws.
  • Respect for Intellectual Property
    We ensure that the design, any training, and use of AI systems do not infringe on third-party intellectual property and that Liferay's own IP is protected.
  • Prevention of Misuse
    We deploy continuous monitoring, access controls, and training to prevent intentional or unintentional misuse.
  • Accountability
    We assign clear roles and responsibilities and maintain extensive documentation to ensure compliance.

Key Operational Measures for AI Systems 

To put these core principles into action, Liferay has implemented rigorous procedures across the AI system lifecycle:

  • Risk and Impact Assessments
    Formal AI Risk and Impact Assessments are conducted for all AI systems prior to deployment by an interdisciplinary team of experts in all relevant areas, in order to identify the level of risk and the potential impact on Liferay and other interested parties. Systems are classified, their potential risks and impacts are assessed, and mitigating measures are implemented.
  • AI Use Case Registry
    We maintain a centralized registry of approved AI systems, documenting their purpose, classification, business owner and applicable risk level.
  • Working Instructions
    Binding instructions are provided to all employees detailing the approved AI systems, their permitted use cases, and limitations to ensure compliance with the Core Principles of Responsible AI.
  • Vendor Assessments
    AI Vendors undergo due diligence, which includes reviewing their compliance documentation and contractual safeguards before acquisition.
  • Data Management Protocols
    Rigorous procedures ensure data quality, bias detection and mitigation, privacy-enhancing techniques, and robust access controls for all data used in AI systems.
  • Training and Awareness
    We deliver a company-wide training program on responsible AI to ensure everyone at Liferay is aware of the policies, procedures, and their specific responsibilities.
  • Incident Response
    A clear policy and team are in place to manage, triage, and respond to AI incidents promptly and transparently.
  • Communication and Transparency
    Internal updates and external disclosures regarding AI use, including risk disclosures and disclaimers, are managed through defined workflows and channels like the Trust Center to maintain transparency with all stakeholders.
  • Continuous Monitoring and Reassessment
    Approved AI systems are subject to regular performance and compliance reviews, and a mandatory reassessment is triggered by new features, changes in purpose, or an AI Incident.
  • Review and Remediation
    A crucial part of the AIMS is the commitment to assurance and continuous improvement, which includes regular reviews and auditing. The AIMS is subject to periodic effectiveness reviews that take into account regulatory developments, emerging risks, program metrics, and audit results. Internal audits are conducted annually. Furthermore, the comprehensive set of policies, procedures, and controls that constitute Liferay's AIMS is specifically designed to support external audit and certification, aligning with international standards for AI management. Liferay’s AIMS framework is certified in accordance with the ISO 42001 standard (a recognized international standard for AI Management Systems), which demonstrates Liferay's externally validated commitment to responsible and compliant AI development, design, and use.

AI in Liferay Offerings

Liferay's strategy focuses on a 'Bring Your Own AI' (BYO-AI) model rather than providing AI systems as part of its offerings. Liferay enables integrations with customers’ own AI systems, as further outlined in our documentation. For the purposes of these integrations, the customer is responsible for its own use of any AI systems it chooses to integrate with Liferay offerings.
Liferay will notify its customers in advance of any upcoming product releases that might involve integrated AI capabilities or features provided by Liferay.
Liferay only leverages AI offerings from reliable industry leaders that come with enterprise-grade privacy and security assurances. Liferay does not use (and does not allow any third party to use) any data that customers provide to Liferay to train AI models without the customers’ permission.

Questions or Concerns

If you have any questions or concerns, you can report them to [email protected].