...two themes have consistently dominated the conversation. The first is regulation-driven change, particularly the implementation of the EU’s Digital Operational Resilience Act (DORA). Alongside this, the wider implications of operational resilience have come into sharper focus, leaving organisations grappling with how best to approach compliance. The second key theme has been artificial intelligence—not only as a tool for driving efficiency and enabling deeper and broader organisational insights but also as a driver of significant regulatory change that is already beginning to take shape.
With DORA due to come into effect in a matter of weeks, it’s been eye-opening to see how many organisations remain unclear about their approach. Some are overwhelmed; others seem to have adopted a “bury your head in the sand” strategy. At a recent seminar I co-hosted, we asked the audience a seemingly straightforward question:
“Who owns operational resilience in your organisation?”
Not a single person could give a definitive answer, and the answers we did hear were far from consistent. This speaks volumes. Regulations like DORA, which are broad and touch multiple areas within an organisation, don’t fit neatly into existing silos. Instead, they highlight the need for custodianship of compliance—where responsibility isn’t ‘owned’ by one department but shared across multiple stakeholders.
That said, the allocation of this custodianship can vary greatly. For some, it falls to the IT team, given their focus on operational aspects—what I often refer to as operational compliance. For others, it sits within risk management. The reality is that there is no right or wrong answer. Organisations need to find the model that works for their unique structure and culture, which often involves trial and error. What is universal, however, is the need for people to work together. Regulations like DORA demand collaboration, compromise, and shared understanding—qualities that don’t always come naturally within organisations.
Whilst technology plays an ever-increasing role in governance, risk, and compliance, it’s important to remember that it is an enabler. No algorithm, no matter how advanced, has yet figured out how to truly bring people together, mediate their differences, or force collaboration. And when it finally does, there will undoubtedly be far more pressing applications waiting in line.
Continuing with the theme of technology, AI has undeniably been hailed as a game changer. While we have seen similar promises in the past with technologies such as blockchain—only to watch them fall short—AI genuinely feels different. Its practical applications are already evident in our personal and professional lives (yes, ChatGPT reviewed this article). In the world of GRC, AI is already making its mark, with significant innovation around its practical use.
There is no question that the volume of data being collected as part of risk and compliance programmes is growing at an exponential rate. But the real challenge is not just the sheer amount of data—it is also its quality. This is where I believe AI will make its first major impact. By first improving data quality and then interpreting it, AI will empower organisations to dig deeper and expand their reach across the business, ultimately providing something tangible for risk committees, boards, investors, regulators, and auditors. Many GRC vendors have arguably been on this path for some time, innovating and developing with AI to deliver advancements that, while seemingly modest on the surface, often have a profound impact in practice—much like many things in life.
Where will this lead us? Much has been said about the transformative power of generative AI, but its true value in risk and compliance settings remains to be seen. Over the coming months and years, use cases will undoubtedly emerge or evolve. However, I believe those working in highly regulated industries, where human oversight and transparency are non-negotiable requirements for regulators, can rest assured—they are unlikely to be replaced by machines anytime soon.
What is more certain is that compliance professionals will soon need to wrestle with regulation specifically for AI. Unsurprisingly, the European Union is leading the charge: the EU AI Act came into force on 1 August this year, with most of its provisions applying from 2 August 2026. Much like previous EU legislation, the Act has a far-reaching impact, applying to anyone providing or deploying AI systems within the EU, regardless of their geographic location.
The EU AI Act is comprehensive and ambitious, adopting a risk-based approach to regulation. It addresses everything from banning governments’ use of AI systems to score citizens based on their behaviour (classified as “Unacceptable Risk”) to transparency measures affecting everyday encounters with AI-generated content (classified as “Limited Risk”), such as requiring platforms to notify users when they are engaging with such material.
This landmark legislation continues the EU’s trajectory of digital regulation, which began in earnest with the introduction of GDPR and has continued with the more recent DORA. Human nature being what it is, some degree of procrastination and confusion is to be expected as organisations come to terms with its implications.
While enforcing ethical safeguards is both sensible and necessary, the challenges for organisations are clear. Determining ownership and accountability for compliance will once again take centre stage, starting with a thorough understanding of their exposure to AI technologies. Given the widespread reliance on outsourcing and third-party technology in today’s enterprises, the ripple effects will be significant. Vendors should anticipate a sharp increase in assessments and scrutiny over the coming years.
Although navigating these requirements may seem daunting, and the temptation to delay is real, organisations that take a proactive approach to planning and preparation will be far better positioned to stay ahead of the curve.
This, however, is all for next year, so wishing you joy, warmth, and happiness this festive season. Here’s to a bright and successful 2025!
This blog was initially featured as an article on LinkedIn.