After a two-year implementation period, the Digital Operational Resilience Act (“DORA”) comes into effect today. This European Union (EU) regulation addresses a critical gap in EU financial regulation by establishing a framework for digital operational resilience in the financial sector.
The financial sector is the lifeblood of any economy. According to the IMF Global Financial Stability Report, nearly one-fifth of reported cyberattacks worldwide over the past twenty years have affected the financial sector, resulting in $12 billion in direct losses.
With such significant sums at stake, the financial sector clearly needs robust measures to strengthen its cybersecurity protections.
To get up to speed on DORA's milestones, refer to the timeline below:
What is DORA?
The Digital Operational Resilience Act (Regulation (EU) 2022/2554) addresses a significant cybersecurity gap in EU financial regulation. More than 22,000 financial institutions and information technology providers in the EU are subject to DORA.
DORA is structured around five key pillars: (i) Information and Communication Technology (“ICT”) risk management, (ii) ICT-related incident reporting, (iii) digital operational resilience testing, (iv) ICT third-party risk, and (v) information sharing.
Before DORA, financial institutions managed operational risk primarily by allocating capital to cover potential losses. Few proactive, uniform regulations existed to mitigate exposure and prevent losses arising from ICT-specific risks, so this approach neither covered all aspects of operational resilience nor worked effectively for ICT.
DORA requires financial institutions to follow stringent guidelines, harmonised across EU member states, to safeguard against ICT-related risks. These guidelines range from detection and reporting to assessment and recovery. Because ICT incidents can destabilise an entire economic system, DORA promotes operational resilience and requires organisations to adopt controls designed to help them withstand and recover from such incidents. Moreover, DORA provides specific criteria, templates, and instructions to help organisations demonstrate compliance.
Unlike traditional operational resilience frameworks, DORA expressly tackles the integration of ICT and AI measures. Other regulatory schemes typically focus on identifying critical business services and assessing impact tolerance for disruption in general terms, and seldom address ICT risks with any specificity. Other jurisdictions continue to build tools to facilitate ICT risk management: the US, for example, has enacted the Gramm-Leach-Bliley Act, which covers information security broadly, and the US Federal Financial Institutions Examination Council (FFIEC) developed the Cybersecurity Assessment Tool (CAT) to assess cybersecurity preparedness. DORA goes a step further by imposing binding ICT requirements rather than merely offering cybersecurity guidance or prescribing broader operational requirements, paving the way for other jurisdictions to adopt similar legislative measures.
The need for digital resilience in AI
As the world becomes ever more dependent on digital technology and infrastructure, the financial sector needs the right processes and tools to protect itself and its customers.
According to Darktrace's State of AI Cyber Security 2024 Report, 74% of global security leaders believe that AI-powered threats have become a significant issue. Nine out of 10 agree that AI will remain their number one challenge for the foreseeable future, and six out of 10 stated they were unprepared for the next wave of AI and its potential threats.
How will DORA govern AI?
Although AI has already proven its value to society and many are excited by its future opportunities, it also introduces significant risks that financial institutions and information technology providers have not encountered before. DORA outlines a comprehensive framework through which organisations can achieve digital operational resilience:
- Data Governance: Data is gold. The last thing any organisation wants is for its data to be stolen, manipulated, or accessed without authorisation. DORA requires organisations to maintain strong data governance practices that ensure data security, quality, and integrity. The data used to train and operate AI models must be clean and accurate to avoid biased or erroneous outcomes from machine learning algorithms (a minimal validation sketch follows this list).
- Algorithmic Risk Management: Establishing frameworks to identify, assess, and mitigate risks associated with AI models gives organisations confidence in each model and its outcomes once deployed.
- Model Monitoring and Logging: Continuously monitor your AI models' performance to ensure consistency; detecting performance variances early can surface potential security concerns. Logging all actions related to AI models improves your organisation's resilience and supports audits (see the monitoring sketch after this list).
- Third-Party Vendors: DORA emphasises the need for rigorous oversight of AI solutions when engaging with third parties. Under DORA, organisations must ensure that ICT risks are taken into account when conducting due diligence on counterparties and negotiating relevant contracts. Prudent measures will include adopting customary contractual provisions which clearly define each party’s obligations in respect of ICT risks, such as know-your-customer (KYC) and disclosure obligations directed at assessing counterparties’ compliance with DORA; monitoring and compliance through counterparty audit rights; incident detection and reporting; Service Level Agreements (SLAs) in connection with business continuity and data recovery; limitations of liability; or specific indemnities (as applicable). Failure to adequately address these key aspects could result in legal and regulatory exposure.
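To make the data-governance point concrete, here is a minimal, illustrative Python sketch of pre-training data checks. Nothing here is prescribed by DORA: the `validate_training_data` function, the column names, and the 1% null threshold are hypothetical assumptions, and pandas is assumed as the data-handling library. A production pipeline would run far richer checks and record the results for audit.

```python
# A minimal sketch of basic data-quality gates before model training.
# Column names and thresholds are illustrative assumptions only.
import pandas as pd

REQUIRED_COLUMNS = {"customer_id", "transaction_amount", "label"}

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found; an empty list
    means the dataset passes these basic integrity checks."""
    issues = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
        return issues  # remaining checks need these columns
    if df["customer_id"].duplicated().any():
        issues.append("duplicate customer_id values")
    null_rate = df[list(REQUIRED_COLUMNS)].isna().mean().max()
    if null_rate > 0.01:  # tolerate at most 1% nulls per column
        issues.append(f"null rate {null_rate:.2%} exceeds 1% threshold")
    if (df["transaction_amount"] < 0).any():
        issues.append("negative transaction amounts")
    return issues

# Example usage with a toy frame containing two deliberate faults.
df = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "transaction_amount": [10.0, -5.0, 42.0],
    "label": [0, 1, 1],
})
print(validate_training_data(df))
```

Returning a list of issues, rather than raising on the first failure, lets a governance process log every defect found in one pass.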
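Similarly, the monitoring-and-logging pillar can be sketched in a few lines. Again, this is an assumption-laden illustration rather than anything DORA mandates: the `ModelMonitor` class, the `model_audit.log` file, and the baseline and tolerance values are all hypothetical, and a real deployment would feed live evaluation metrics and ship logs to tamper-evident storage.

```python
# A minimal sketch of rolling performance monitoring with an audit log.
# All names and thresholds are illustrative assumptions.
import logging
from collections import deque
from statistics import mean

# Every monitoring decision is timestamped, supporting audit review.
logging.basicConfig(
    filename="model_audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

class ModelMonitor:
    """Tracks a rolling window of per-batch accuracy scores and flags
    drops beyond an agreed tolerance from a deployment baseline."""

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 100):
        self.baseline = baseline            # accuracy agreed at deployment
        self.tolerance = tolerance          # acceptable drop before alerting
        self.scores = deque(maxlen=window)  # rolling evaluation window

    def record(self, accuracy: float) -> None:
        """Log a new accuracy observation and alert on drift."""
        self.scores.append(accuracy)
        logging.info("accuracy observed: %.4f", accuracy)
        rolling = mean(self.scores)
        if rolling < self.baseline - self.tolerance:
            # A sustained drop may indicate data drift, a poisoned input
            # stream, or an upstream fault: trigger an investigation.
            logging.warning(
                "rolling accuracy %.4f below baseline %.4f minus tolerance %.2f",
                rolling, self.baseline, self.tolerance,
            )

# Example usage: baseline accuracy of 0.92 agreed at deployment.
monitor = ModelMonitor(baseline=0.92)
for score in (0.93, 0.91, 0.84, 0.82):  # simulated per-batch scores
    monitor.record(score)
```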
The impact of DORA on AI
Organisations across Europe have experienced an increase in sophisticated cyber attacks, and to ensure resilience they are gearing up and focusing on cyber hygiene.
- Trust and Transparency: The development of AI has not always been popular; many have concerns about its ethical use and the safeguards in place to protect society. DORA promotes responsible AI so that society can trust both the technology and the transparency of the organisations deploying it.
- Security: With trust and transparency addressed, organisations must ensure their use of AI follows a robust AI risk management process, strengthening overall security and confidence in AI.
- Consumer Protection: Data governance, algorithmic risk management, and model monitoring help ensure that organisations use AI fairly and ethically. Protecting end users from harm and discriminatory practices should be a priority for any organisation deploying AI models.
Responsible AI with your cloud provider
As more sectors employ AI applications in their operations and services, policies and regulations will extend beyond financial institutions to ensure a secure and responsible AI ecosystem.
When evaluating AI cloud providers, you must look beyond cost to ensure they meet your extensive AI requirements. For example:
- Robust Infrastructure: To ensure digital resilience, robust infrastructure should be at the top of an organisation’s priority list. Strong infrastructure consists of high-performance computing resources, reliable network capabilities, and advanced security protocols underpinned by sound cyber hygiene.
- Scalable AI Solutions: Organisations see AI as an opportunity to scale, so scalability becomes vital as AI systems grow in complexity and usage. It allows organisations to maintain resilience and efficiency as they adapt to demand and future-proof their operations.
- Vendor Accountability: Under DORA, organisations remain accountable for their relationships with third-party vendors. When choosing an AI cloud provider, assess the vendor's resilience, review their risk-management processes, establish strong SLAs, and demand transparency.
Nscale is a full-stack, sovereign, and sustainable AI cloud platform. By managing every aspect of AI infrastructure, from our energy-efficient data centres in Norway to our cutting-edge compute clusters and software configurations, we can provide robust infrastructure without compromising your cyber hygiene.
Each component of our full stack has been carefully selected and designed to meet AI's intensive needs, ensuring organisations can scale safely.
Ready to scale your AI initiatives? Please reach out to our trusted and world-class team here.