
L'Art Du Risque: The European Approach to AI Risk Management

  • Writer: Erica Giraldo Toro
  • Feb 10
  • 3 min read

Updated: Mar 30

Abstract


At the 32nd Risk Managers Meeting organized by the Association pour le Management des Risques et des Assurances de l'Entreprise (AMRAE) in Deauville, France, Artificial Intelligence (AI) was a key topic of debate, viewed through the lens of risk. The discussion aligned with the approach adopted by the AI Act, the first European AI regulation, which entered into force in 2024 after extensive debate prior to its adoption.


As with any regulation, it is crucial to understand its origins, the needs it addresses, and its defining characteristics. We must also consider the diverse approaches that different continents have adopted in response to this technological revolution, approaches clearly linked to the fundamental diversity between countries and regions.


The European Union (EU) stands out for its protective and highly regulated approach, which, while ensuring certain protection standards, may reduce its competitiveness against technological giants like the United States and China. However, it is worth questioning whether this European vision, centered on rights protection, truly represents an obstacle when it comes to Artificial Intelligence.


What are the potential consequences of deploying these technologies without a clear regulatory framework?


This is where the value of addressing the issue from a risk management angle comes into play: we need to examine potential scenarios in which AI, though designed as an asset, could become a liability if not properly managed from creation to implementation.


Examining AI Through a Risk Management Lens


Criticisms of excessive legislation have echoed across many sectors, and AI is no exception. The EU's regulatory landscape creates uncertainty among AI technology developers. The General Data Protection Regulation (GDPR), Corporate Social Responsibility (CSR) requirements, and the many other laws regulating the field pose challenges for companies of all sizes in the region. This raises concerns about where companies should locate their services or invest in developing technological capabilities.


A distinctive feature of the AI Act's development is its use of delegated acts, which allow the European Commission to modify or update lists of AI applications classified as high-risk and add new categories as technology evolves. While this approach offers advantages such as efficiency and specialization, it also creates uncertainty for businesses.


The rules of the game are constantly changing, making it challenging to develop clear future projections.


There's also a risk of over-regulation, potentially creating an imbalance between business interests and consumer needs. The EU's focus on risk management may leave it less competitive than other states. However, business interests don't always align with those of citizens: while companies in Europe advocate for less strict regulation, society in the United States is beginning to demand increased protection, concerned about the risks of unregulated AI, and to call for rules that prioritize security and individual rights. This contrast underscores the challenge of achieving a global balance between innovation and security, as each region tackles the challenges of emerging technologies from a distinct perspective.


Key Risk Families


At the 32nd AMRAE Risk Managers Meeting, six key risk families were addressed:


1. Strategic risk


Strategic risk refers to the uncertainty generated by strict regulations such as the AI Act, the GDPR, and even CSR requirements, which may deter companies from operating in Europe because of their impact on competitiveness compared with markets that have more flexible frameworks.


2. Performance risk


Performance risk is related to companies' ability to achieve their objectives, as regulations can slow down innovation and affect technology efficiency.


3. Operational risk


Operational risk concerns the complexity and additional costs that companies must assume to comply with AI standards, which can create bottlenecks in their internal processes.


4. Personal data protection risk


Personal data protection risk is crucial in Europe due to GDPR legislation, which requires transparent and secure processing of personal data, with any breach posing a major financial and reputational risk.


5. Security risks


Security risks cover threats to the integrity of AI systems, requiring strict measures to avoid vulnerabilities, as cyberattacks can compromise data confidentiality and availability.


6. Human resources (HR) related risks


HR-related risks concern the shortage of qualified AI and automation talent, which can negatively impact both organizational culture and employment.


Conclusion


These risks are essential for understanding the challenges companies face in Europe under the current regulatory framework. They also represent a challenge for the EU as it seeks to enter competitive markets without compromising its commitment to human rights. As a pioneer in rights protection, the EU must adapt to diverse regulatory and commercial approaches, and it is clear that current economic and commercial pressures could significantly influence the future decisions of the European Parliament.


To drive progress, we should foster inter-institutional collaboration that engages both the private sector and technology consumers. By promoting global cooperation, without expecting uniform adoption of regulatory approaches across all states, we can create a framework that respects cultural and societal differences while advancing shared goals.

 
 
 
