DORA - The Compliance Explorer (3)
Episode 3 - How can you securely deploy AI?
2/3/2025 · 4 min read


The future of financial services will be heavily influenced by AI and data-driven innovation. For financial institutions adopting AI, whether to enhance operational efficiency, support compliance, or pursue other goals, the benefits AI has to offer will need to outweigh the risks incurred along the way. Artificial Intelligence (AI) infrastructure is the backbone that supports the development, deployment, and scaling of AI applications. It encompasses various components, each playing a crucial role in ensuring efficient and effective AI operations.
In this episode we will explore the different ways AI can be deployed: Public Cloud, High Performance Computing (HPC), Private Cloud, and AI Supercomputers, together with some key considerations you should be aware of for each of these deployments.
Public Cloud
Public Cloud service providers that operate massive data centers - also known as Hyperscalers - offer extensive computing resources on a global scale. The leading hyperscalers include AWS, Microsoft Azure, Google Cloud, Alibaba Cloud, IBM Cloud, and Oracle Cloud Infrastructure. These providers leverage advanced automation and infrastructure to support extensive AI workloads, enabling businesses to innovate and adapt quickly. Hyperscalers are essential for enterprises requiring high-performance computing and large-scale data processing capabilities.
However, it is worth noting that the use of public cloud may risk breaching confidentiality constraints or legal requirements, especially where data transits to third countries outside the European Union. The reason is that the organization's data is invariably shared with, and under the control of, the cloud operator. As such, for some organizations this solution may not be an option beyond merely testing the viability of solutions before their implementation.
High Performance Computing (HPC)
An alternative to hyperscalers is to use the network of “AI Factories” being implemented in seven countries across Europe. These AI Factories will pool European Union and national resources from 17 European countries, with many consortia involving several participating countries. Luxembourg has its own AI Factory with the MeluXina supercomputer. MeluXina provides financial institutions with an environment that takes into consideration the regulatory obligations organizations face, with data remaining within the European Economic Area. This option is well suited to firms that wish to process extremely large datasets.
Private Cloud
Besides legal, privacy, and confidentiality concerns, cost is another reason organizations are looking at on-premises options for deploying AI workloads. They also expect on-premises deployments to provide higher performance, more control, and enhanced security. Private cloud solutions offer dedicated resources for AI workloads within an organization’s own data centers or through a managed service provider. This approach provides greater control over data security, compliance, and customization compared to public clouds. Private clouds are suitable for organizations with stringent data privacy requirements or those needing specialized configurations for their AI applications. They offer the benefits of cloud computing while maintaining a higher level of control over infrastructure.
Existing AI-ready private cloud solutions include HPE Private Cloud AI, the Luxembourg-based firm Gcore (edge AI), and Rackspace Technology, as well as most of the public cloud solutions, which can also be deployed on-premises (e.g. Microsoft, Alibaba, Nebius).
AI Supercomputers
Running large language models (LLMs) on desktop computers has become more feasible thanks to advancements in GPUs and optimized software frameworks.
AI workstations are high-performance computers designed for developing and testing AI models. These machines are equipped with powerful CPUs, ample RAM, and often multiple GPUs to handle intensive computational tasks. Workstations are essential for data scientists and AI researchers who need to experiment with different models and algorithms before deploying them to production environments. They provide a dedicated environment for AI development, ensuring that researchers have the necessary resources to innovate.
Running LLMs on on-premises computers provides multiple benefits (a minimal inference sketch follows the list):
Data Privacy: Sensitive data never leaves your premises, addressing concerns about privacy and regulatory compliance (for organizations based in Europe, data does not transit to third countries that lack an adequate level of data protection).
Cost Efficiency: For frequent use, local deployment may be more cost-effective than cloud services.
Customization: Running models locally allows fine-tuning and experimentation with specific datasets, making the model more relevant to your use case.
Latency Reduction: With no reliance on internet-based APIs, local LLMs deliver near-instantaneous responses.
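To make this concrete, here is a minimal sketch of fully local inference using the open-source Hugging Face transformers library. It is an illustration under stated assumptions, not a recommendation of a specific model or vendor: the model identifier below is a placeholder for whichever open-weight model your organization has approved, and the prompt is purely illustrative.

```python
# Minimal local LLM inference sketch (assumes: pip install transformers torch accelerate,
# and an open-weight model already downloaded to the local cache).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder: any approved open-weight model

# Load tokenizer and weights from the local cache; once downloaded, no data leaves
# the machine (set HF_HUB_OFFLINE=1 to enforce fully offline operation).
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarize the key ICT risk management obligations under DORA."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on local CPU/GPU resources.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Because both the model weights and the prompt stay on local hardware, this pattern directly supports the data privacy and latency benefits listed above.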
As of January 2025, desktop computers featuring NVIDIA’s Grace Blackwell architecture are emerging, with NVIDIA’s Project DIGITS being a notable example. Announced at CES 2025, Project DIGITS is a compact personal AI supercomputer designed for researchers, data scientists, and students. It is powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering up to 1 petaflop of AI performance. The system includes 128GB of unified memory and up to 4TB of NVMe SSD storage, enabling the development and inference of large AI models with up to 200 billion parameters, according to NVIDIA.
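As a rough plausibility check of that figure (our own back-of-the-envelope arithmetic, not an NVIDIA specification): a 200-billion-parameter model quantized to 4 bits per weight occupies about 200 × 10⁹ × 0.5 bytes ≈ 100GB of weights, which fits within 128GB of unified memory with headroom left for activations and the KV cache.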
Key Takeaway
AI infrastructure is a multifaceted ecosystem comprising public clouds, private clouds, HPC, and AI-dedicated desktops. Each component plays a vital role in supporting the diverse needs of AI applications, from development and testing to deployment and scaling. By leveraging these technologies, organizations can better harness the full potential of AI to drive innovation and achieve strategic goals.
About the authors:
Catalin Tiganila is an experienced consultant and program manager specializing in Cyber Security, Cloud Security, IT Governance, Risk Management and Compliance, and AI Governance, Risk and Compliance (GRC). With more than 20 years of practice in leading and executing advisory and audit engagements at different consulting firms, Catalin has delivered numerous projects as part of international teams across geographies, covering a wide range of services in diverse industries: finance and banking, technology, telecommunications, start-ups, energy, healthcare, retail, and manufacturing. He is a Board Member of the ISACA Luxembourg professional association, where he is responsible for chapter membership and also leads the AI GRC Working Group.
Shariq Arif, in addition to being Co-Founder at IntGen.AI, a RegTech GenAI Compliance start-up, is also a seasoned Personal Data Protection professional. In 2017 he co-founded the Data Protection practice at a leading Professional Services firm in Luxembourg and was systematically communicated to several National Data Protection Authorities as the external Data Protection Officer for all such mandates at this organization. Shariq also co-led this organization's application to become a GDPR-CARPA certification body in 2023. Shariq is also a certified Data Protection Officer Coach (PECB) and a Board Member of the APDL (Association pour la Protection des Données au Luxembourg).