DORA - The Compliance Explorer (2)

Episode 2 - Use of GenAI for DORA compliance support

1/17/2025 · 5 min read

How can DORA compliance be accelerated and rendered more efficient using GenAI?

The integration of GenAI can help organizations navigate this regulatory landscape. Below we outline some of the ways GenAI can be used to accelerate the compliance exercise and make it more efficient.

  • AI-Driven Gap Analysis and Roadmap Creation: Many organizations in scope have already had to address a patchwork of rules covering ICT compliance (such as CSSF circulars 22/806, 20/750, and 24/847), so they are not starting from a blank canvas. With the advent of GenAI, organizations can reinforce their teams with tools that compare their existing readiness against the regulation: a GenAI gap analysis. This analysis can be rerun repeatedly, in real time, as the team progresses, giving an accurate dashboard of the organization’s maturity and acting like a compass to indicate where they are on the journey. It works by taking the legislation as the reference and comparing it against internal company policies, procedures and contracts; a minimal sketch of the idea follows below. This accelerator only works where people control the output, and where it accelerates processes that are already well understood.
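
To give a flavour of what such a gap analysis could look like in practice, below is a minimal, illustrative sketch in Python. It scores how well each requirement is covered by existing policy text using simple TF-IDF similarity; the abbreviated requirement snippets, the policy excerpts and the 0.2 threshold are all placeholder assumptions (a production exercise would work from the full legal text, likely with an LLM rather than plain lexical similarity, and with expert review of every output).

```python
# Minimal gap-analysis sketch (illustrative only): score how well each DORA
# requirement is covered by existing internal policy text via TF-IDF
# similarity, and flag weakly covered requirements for human follow-up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical, abbreviated requirement snippets (not the legal text).
dora_requirements = {
    "Art. 5 - ICT risk management": "maintain a documented ICT risk management framework reviewed at least yearly",
    "Art. 17 - incident management": "define an ICT-related incident management process to detect, manage and notify incidents",
    "Art. 28 - third-party risk": "maintain a register of information on contractual arrangements with ICT third-party providers",
}
internal_policies = [
    "Our incident response procedure covers detection, triage and notification of IT incidents.",
    "The IT risk policy is reviewed annually by the risk committee.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(dora_requirements.values()) + internal_policies)
req_vecs = matrix[: len(dora_requirements)]
policy_vecs = matrix[len(dora_requirements):]

# The 0.2 threshold is an arbitrary starting point to be tuned by the team.
scores = cosine_similarity(req_vecs, policy_vecs)
for (ref, _), row in zip(dora_requirements.items(), scores):
    best = row.max()
    status = "likely covered" if best >= 0.2 else "GAP - needs review"
    print(f"{ref}: best match {best:.2f} -> {status}")
```

Flagged gaps then feed the roadmap; the point is triage for humans, not an automated verdict on compliance.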

  • AI-driven aid to complete the Register of Information - third-party risk management: The register of information under DORA is a standardized central database that records all contractual agreements of a financial company with ICT third-party service providers. It contains detailed information about the ICT services utilized, the providers, and the supported business and operational functions. The register exists to ensure ongoing monitoring of the dependencies and risks arising from the use of ICT third-party providers, and serves to provide this information to the relevant supervisory authorities. It encompasses all ICT services, though those supporting critical or important functions must be documented in more detail.

More specifically, today, GenAI can help to: (a) assess the compliance of existing or draft contractual agreements with regulatory requirements; (b) fill in supplier questionnaires and due diligence forms; and (c) support Threat-Led Penetration Testing (TLPT), where Large Language Models (LLMs) can be used as tools to support penetration testing teams.

This approach would save time, especially for organizations that handle a large number of ICT service providers, and would allow unified reporting to be produced in a streamlined manner, albeit with oversight. This specific task is well suited to GenAI because it requires limited inference, which frees up experts to review and analyze the outputs for decision-making purposes. A sketch of how contract clauses could be drafted into register entries follows below.
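
As a hedged illustration of point (a) and of populating the register, the sketch below drafts a handful of register fields from a contract excerpt with an LLM. The OpenAI Python client and the gpt-4o-mini model are example choices, and the field names are simplified stand-ins for the register's actual reporting templates; every extracted value would still be validated by an expert before filing.

```python
# Illustrative sketch: draft Register of Information fields from a contract
# excerpt with an LLM, for human review. Field names are simplified
# placeholders, not the official reporting templates.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """Extract the following fields from the contract excerpt as JSON:
provider_name, ict_service_description,
supports_critical_or_important_function (true/false),
contract_end_date (ISO date or null). Answer with JSON only.

Contract excerpt:
{excerpt}"""

def draft_register_entry(excerpt: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[{"role": "user", "content": PROMPT.format(excerpt=excerpt)}],
        response_format={"type": "json_object"},  # request parseable JSON
        temperature=0,  # keep extraction output as deterministic as possible
    )
    return json.loads(resp.choices[0].message.content)

entry = draft_register_entry(
    "Acme Cloud S.A. provides managed hosting for the core banking platform; "
    "the agreement terminates on 31 December 2026."
)
print(entry)  # a human expert validates every field before filing
```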

  • AI-Driven Automation and Incident Reporting: AI can streamline DORA’s incident reporting process, enabling financial institutions to generate comprehensive reports on ICT-related incidents. This includes assessing the impact of incidents against established classification frameworks, identifying vulnerabilities, and recommending remediation steps; a simplified triage sketch follows below.
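
As one hedged example of how such automation might begin, the sketch below scores an incident against a few simplified classification criteria to decide whether a report draft should be prepared. The thresholds and criteria are hypothetical placeholders; the binding classification rules are laid down in the regulatory technical standards and must be implemented from the official texts.

```python
# Simplified incident-triage sketch. All thresholds below are hypothetical
# placeholders, not DORA's actual classification criteria.
from dataclasses import dataclass

@dataclass
class Incident:
    clients_affected_pct: float   # share of clients impacted
    duration_hours: float         # service downtime
    critical_function_hit: bool   # touches a critical or important function?
    data_lost: bool               # any loss of data integrity/availability?

def looks_major(inc: Incident) -> bool:
    # Hypothetical scoring: any two triggered criteria escalate the incident.
    triggered = [
        inc.clients_affected_pct >= 10.0,
        inc.duration_hours >= 24.0,
        inc.critical_function_hit,
        inc.data_lost,
    ]
    return sum(triggered) >= 2

inc = Incident(clients_affected_pct=12.5, duration_hours=3.0,
               critical_function_hit=True, data_lost=False)
if looks_major(inc):
    print("Escalate: prepare initial notification draft for expert review.")
```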

  • Training and awareness: AI can support the creation of training material on regulatory topics in the financial sector by simplifying the content creation process. It can summarize DORA regulations and tailor the relevant sections of the law to different roles. It can continuously monitor regulatory changes and propose updates to training materials for moderators to approve, and it can generate simulations of real-world scenarios to augment the learning experience. Additionally, AI can support globally distributed teams across large geographies, and can generate tests to ensure core tenets are well understood (a short sketch follows below). By streamlining the training process and enhancing engagement, AI helps financial institutions maintain compliance and reduce risk efficiently.
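
For instance, a short, illustrative sketch of role-tailored test generation might look as follows; the provider, model and prompt are assumptions, and a moderator reviews everything before it reaches learners.

```python
# Illustrative sketch: generate role-tailored awareness questions from a
# DORA summary with an LLM. Provider and model are example choices only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def quiz_for_role(role: str, summary: str, n: int = 3) -> str:
    prompt = (
        f"Write {n} multiple-choice questions (with answers) testing a "
        f"{role}'s understanding of the following DORA summary:\n{summary}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Output goes to a human moderator before being used in training.
    return resp.choices[0].message.content

print(quiz_for_role("service desk agent",
                    "DORA requires financial entities to report major "
                    "ICT-related incidents to their competent authority."))
```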

What about the risks raised by the use of AI to facilitate the DORA compliance process?

Reliance on AI tools for achieving compliance with regulations comes with its own set of risks, many of which, if not addressed, can lead to serious financial, operational, and reputational harm.

  1. AI tools rely heavily on data to function effectively, and this dependency raises concerns about data privacy and security. Many regulations, such as the GDPR, impose strict requirements for handling personal data. If AI tools are not adequately secured, or the way they handle personal data is non-compliant, organisations may inadvertently breach the law. Moreover, data used for AI training may introduce risks of inadvertent exposure or misuse. For example, unencrypted data, or data that is not properly anonymised, is vulnerable to cyberattacks, leading to regulatory penalties and loss of customer trust.

  2. AI tools, especially those using machine learning, often operate as “black boxes”, where the logic behind decisions or recommendations is not easily understood. Regulatory frameworks increasingly demand transparency in decision-making processes. If an AI tool’s outputs cannot be explained, organisations may struggle to demonstrate compliance during audits or investigations. This lack of explainability may pose risks when regulators or stakeholders require detailed justifications, i.e. the reasoning behind the underlying analysis, for specific decisions such as risk assessments.

  3. AI tools are powerful, but they are not infallible. Over-reliance on AI can lead to complacency, where organisations neglect to maintain robust human oversight or fail to question the outputs of these tools. For instance, an AI system flagging fewer compliance issues may not necessarily mean a reduced risk environment but could indicate a misconfiguration or blind spots in its algorithms. Such over-reliance can result in overlooked regulatory violations and heightened risk exposure.

  4. Regulations evolve constantly, and AI tools need frequent updates to align with these changes. If tools are not updated promptly or if the organization fails to adapt to new compliance requirements, the risk of non-compliance increases - as tools rapidly become incomplete in their assessment, or obsolete by virtue of containing outdated laws or reference frameworks. Furthermore, poorly managed updates can introduce errors or gaps in compliance processes.

We suggest that each of the risks above be examined thoroughly prior to the deployment of AI compliance solutions, to ensure that the benefits of AI outweigh the risks. We would recommend that parties refer to established risk management frameworks, such as the ISACA Luxembourg Chapter AI GRC toolkit.

Key takeaways

While AI tools can enhance efficiency and accuracy in achieving regulatory compliance, they introduce risks that cannot be ignored. Organisations must strike a balance between leveraging AI’s capabilities and maintaining rigorous oversight to ensure ethical, transparent, and compliant operations. By addressing these risks proactively, organisations can unlock the potential of AI without compromising their regulatory standing or reputation.

We are also firmly of the opinion that the use of AI is most beneficial when it empowers compliance and cyber security practitioners. It benefits most those who are already knowledgeable in their field, serving as an accelerator and an important part of their toolkit.

As the use of GenAI expands, maintaining compliance with DORA’s resilience requirements and privacy regulations, like the GDPR, will require a thoughtful balance of innovation with privacy by design and by default. By implementing robust privacy-preserving techniques alongside responsible AI practices, financial institutions can build a secure, resilient, and compliant digital future.

About the authors of this series:

Catalin Tiganila is an experienced consultant and program manager with expertise in Cyber Security, Cloud Security, IT Governance, Risk Management and Compliance, and AI Governance, Risk and Compliance (GRC). With more than 20 years of practice in leading and executing advisory and audit engagements at different consulting firms, Catalin has delivered numerous projects as part of international teams across different geographies, covering a wide range of services in diverse industries: finance and banking, technology, telecommunication, start-ups, energy, healthcare, retail and manufacturing. He is a Board Member of the ISACA Luxembourg Chapter professional association, where he is responsible for chapter membership and also leads the AI GRC Working Group.

Shariq Arif, in addition to being Co-Founder at IntGen.AI, a RegTech GenAI compliance start-up, is also a seasoned personal data protection professional. In 2017 he co-founded the Data Protection practice at a leading professional services firm in Luxembourg, and his name was systematically communicated to several National Data Protection Authorities for all external Data Protection Officer mandates held at this organization. Shariq also co-led the organization's application to become a GDPR-CARPA certification body in 2023. He is also a certified Data Protection Officer Coach (PECB) and a Board Member of the APDL (Association pour la Protection des Données au Luxembourg).