While other jurisdictions grapple with the rise of AI and the emergence of new cybersecurity challenges, the European Union (EU) has sprung into action with a series of new regulations aimed at limiting digital risk. The EU AI Act, EU Cyber Resilience Act, and EU Digital Operational Resilience Act (DORA) each represent a new chapter in governing cyber risk, but what do they actually mean for businesses? Will your organization be impacted by the new rules? What about your partners or suppliers? Understanding what these new guidelines are intended to achieve is important, but it is equally important to understand their real-world impact on organizations and how they will affect governance, risk, and compliance (GRC) operations moving forward. By Nick Kathmann (pictured), CISO, LogicGate
The EU AI Act Establishes New Guidelines in a Crowded Regulatory Space
Let’s start with the EU Artificial Intelligence Act, which was officially adopted on March 13, 2024, and will be phased into enforcement over the next two years. The EU AI Act represents the first comprehensive framework developed by the EU to govern the use and development of AI. As you might guess from its name, the regulation is not global in nature (it is specifically aimed at the EU), but it does have global implications. Whether or not they are headquartered in the EU, organizations doing business within the EU will need to be sure they are maintaining compliance with the EU AI Act, which means it will impact just about every multinational corporation.
AI and machine learning have been around for a long time. Even before the advent of what we consider “modern” AI, spam filters were using supervised models to block spam emails from cybercriminals. (Ever received an email promising “FR33 P!LLS”? Messages like that were eventually caught by these early models.) We’ve obviously come a long way since then, and generative AI is becoming both more advanced and more accessible as AI developers pour significant sums of money into the technology. According to OpenAI, building and training GPT-4 cost more than $100 million, while Google spent nearly $200 million developing its Gemini model. Organizations are devoting enormous resources to making these systems bigger and better as time goes on.
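For readers who have never peeked under the hood of those early filters, here is a minimal sketch of a supervised spam classifier of the kind described above, built with the open-source scikit-learn library. The training messages and labels are entirely hypothetical; a real filter would be trained on millions of examples.

```python
# Minimal sketch of an early-style supervised spam filter: a bag-of-words
# representation paired with a naive Bayes classifier. The training
# examples and labels below are purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set: 1 = spam, 0 = legitimate mail.
emails = [
    "FR33 P!LLS shipped overnight, click now",
    "Claim your free prize before midnight",
    "Agenda for Thursday's project review",
    "Invoice attached for last month's services",
]
labels = [1, 1, 0, 0]

# Convert the text to word counts, then fit a multinomial naive Bayes model.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)

# Score a new message: an output of 1 means the model would flag it as spam.
print(spam_filter.predict(["Limited offer: FR33 P!LLS today only"]))
```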
But with any new technology comes risk, and the EU AI Act aims to help address that. The new legislation establishes four risk-based classifications for how AI is used and developed, with different safety and security requirements for each risk level. AI systems used in areas like critical infrastructure or law enforcement fall under the “high-risk” classification, which carries a lengthy list of requirements that AI developers serving those sectors will need to meet. These include data storage and record-keeping mandates, market surveillance demands, transparency standards, and other requirements. Noncompliance carries significant financial penalties that range up to 7% of a company’s total worldwide annual revenue (or €35 million, whichever is higher): a tidy sum.
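To make the tiering concrete, here is a rough sketch of how a GRC team might start cataloging its AI systems against the Act’s four risk levels. The four tier names reflect the regulation, but the obligation summaries are loose paraphrases and the inventory entries are hypothetical, not examples drawn from the legal text.

```python
# Illustrative sketch: mapping an internal AI inventory to the EU AI Act's
# four risk tiers. Obligation summaries are paraphrased for illustration;
# the inventory entries are hypothetical.
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "record keeping, transparency, human oversight, market surveillance"
    LIMITED = "disclosure obligations (e.g., telling users they are interacting with AI)"
    MINIMAL = "no additional obligations beyond existing law"

# Hypothetical internal AI inventory mapped to tiers for compliance tracking.
ai_inventory = {
    "resume-screening-model": AIActRiskTier.HIGH,
    "customer-support-chatbot": AIActRiskTier.LIMITED,
    "email-spam-filter": AIActRiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} risk -> {tier.value}")
```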
The practical implications here are serious. There is a lot of AI-based innovation happening within the EU (and in countries that do business with the EU), and these new regulations include substantial due diligence requirements, adding to the already long list of regulatory guidelines and frameworks businesses need to keep track of. While it’s a definite positive that the EU AI Act applies to the entirety of the EU, the growing number of AI regulations in countries around the world (and, in the case of the U.S., individual states) has created a difficult web of frameworks, many of which contradict one another. A lot of organizations will need to rethink how they manage the risk and compliance aspects of AI; in fact, it won’t be surprising if GRC soon becomes “GRCAI.”
The EU Cyber Resilience Act Mandates Better Support, Quicker Remediation
In addition to the EU AI Act, the EU Cyber Resilience Act was also approved by the European Parliament in March. Rather than specifically targeting AI, this act is aimed at connected devices and the software that runs on them, ranging from smartphones and laptops to smart fridges and washing machines. In today’s increasingly interconnected world, anything connected to the internet represents a potential attack vector, and bad actors will target connected devices and use them as a jumping-off point into other systems, networks, and organizations. Some of the biggest DDoS attacks in history (such as those launched by the Mirai botnet) have originated from devices like home routers. The fact that these devices are connected directly to the internet, rather than sitting behind corporate firewalls, makes them particularly vulnerable.
Not so long ago, these connected devices were easy to secure. They ran a simple firmware loop, often written in assembly, and would simply reboot and reset if something abnormal was detected. Now, these same devices use higher-level languages, often with full Linux (or even Windows) systems running natively. Instead of a simple loop, they now contain millions of lines of code and a wide range of code libraries that attackers can target. But perhaps the biggest problem is the lack of support many of these products receive. A connected device might receive 12 to 18 months of support before the manufacturer moves on to the next thing. That may be understandable from a business perspective, but it leaves devices that no longer receive updates or patches vulnerable to attack.
The EU Cyber Resilience Act effectively mandates that companies support their products for a longer period of time and ensure that consumers know how long they can expect that support to last. Manufacturers need to make it known how they plan to secure those devices, and they must also implement vulnerability handling processes: when a vulnerability is identified, the manufacturer needs to acknowledge it and submit a plan to address it within a certain amount of time. The idea is to add new consumer protections that reduce the number of attack vectors adversaries can use to target connected devices. Like the EU AI Act, these new rules come with serious penalties: up to €15 million or 2.5% of total worldwide annual revenue (whichever is higher).
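As a rough illustration of the kind of deadline tracking this pushes manufacturers toward, the sketch below computes acknowledgment and remediation-plan dates for a reported vulnerability. The 24-hour and 14-day windows, the product name, and the class design are placeholder assumptions for illustration only; they are not deadlines taken from the Act itself.

```python
# Rough sketch of vulnerability-handling deadline tracking of the sort the
# Cyber Resilience Act encourages. The time windows below are placeholder
# assumptions, not figures from the regulation.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

ACKNOWLEDGE_WITHIN = timedelta(hours=24)      # assumed window to acknowledge a report
REMEDIATION_PLAN_WITHIN = timedelta(days=14)  # assumed window to submit a fix plan

@dataclass
class VulnerabilityReport:
    product: str
    reported_at: datetime

    def acknowledgment_deadline(self) -> datetime:
        return self.reported_at + ACKNOWLEDGE_WITHIN

    def remediation_plan_deadline(self) -> datetime:
        return self.reported_at + REMEDIATION_PLAN_WITHIN

    def is_overdue(self, now: datetime) -> bool:
        return now > self.remediation_plan_deadline()

# Hypothetical report against a hypothetical product.
report = VulnerabilityReport(
    product="smart-fridge-firmware",
    reported_at=datetime(2024, 6, 1, 9, 0, tzinfo=timezone.utc),
)
print("Acknowledge by:", report.acknowledgment_deadline())
print("Plan due by:", report.remediation_plan_deadline())
print("Overdue?", report.is_overdue(datetime.now(timezone.utc)))
```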
DORA Establishes New Financial Rules Impacting Critical Third Parties
The EU Digital Operational Resilience Act (DORA) is also focused on cybersecurity, but it sets its sights specifically on risk management (including third-party risk management) and incident response and reporting in the financial industry. The new law goes into effect in January 2025, with the goal of helping banks and other financial institutions protect themselves from attacks and improve their resiliency in the event of a breach. It’s easy to see why this would be of interest to regulators: when a financial institution suffers a cyberattack, it has downstream effects impacting nearly every other industry. When businesses (or individuals, for that matter) can’t process payments, access accounts, or use their banking platforms, it can grind the economy to a halt. In fact, modern cybercriminals may not even need to target a bank: they can often accomplish the same result by attacking a trading platform or other third-party service.
As a result, DORA doesn’t just apply to financial institutions themselves. It also covers crypto platforms, cloud service providers, risk management services, and dozens of other types of businesses that meaningfully interact with the financial industry. The law establishes a new risk management framework around which these businesses need to build, similar to frameworks like the NIST Cybersecurity Framework that American businesses may already be familiar with. It establishes basic guidelines around threat detection, prevention, and response, as well as a mandated reporting period when an incident does take place.
DORA also mandates a certain degree of digital operational resilience testing. Organizations need to be able to demonstrate sufficient business continuity capabilities in order to establish that a single cyber incident cannot adversely impact the broader financial industry. If the entire company goes down, what will it do? How will it get back online, and in what order? How will it inform customers before the breach is made public? The ability to plan for the worst is critical. The framework also establishes procedures for sharing cyber threat intelligence. The U.S. has a similar initiative in InfraGard, an FBI partnership program that encourages companies to share threat intelligence for the benefit of all.
Perhaps the most important takeaway here is that the legislation will impact such a wide range of companies. Any business considered a “critical third party” to EU companies that fall under the financial services umbrella will need to be sure it is in compliance with DORA, or risk being unable to do business with EU-based financial entities. For now, organizations will need to judge for themselves whether they fall under this rule, but as implementation gets closer, critical third parties will likely be informed by their partners and customers that they need to maintain DORA compliance or risk losing that business.
The EU Guidelines Are Only the Beginning
That the EU is leading the way when it comes to establishing AI guardrails and risk awareness regulations should come as little surprise; after all, the EU also paved the way for new data privacy rules when it passed the General Data Protection Regulation (GDPR). Rest assured, these EU guidelines are only the beginning. The U.K. is already in the process of implementing its own Corporate Governance Code and Operational Resilience framework, underscoring the fact that this regulatory trend is likely to spread quickly. But the truth is that establishing strong data governance capabilities and effectively managing cyber risk are good ideas for every organization and its customers, and these new regulations should serve as motivation for businesses across the globe to take the necessary steps to improve their GRC (or should I say GRCAI) capabilities.