A European Artificial Intelligence with People at Its Core
- Ibán García del Blanco
- Nov 7, 2024
Updated: Apr 10
Author:
Ibán García del Blanco - Member of the European Parliament (MEP) 2019-2024 & Lawyer
Co-Rapporteur of the EU AI Law
Rapporteur for the Legislative Initiative on Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies
Rapporteur for the Opinion on the Data Act
Co-rapporteur for the Own-initiative Report on Virtual Worlds
International Affairs Director of LASKER Consulting
European Union (EU): AI & Fundamental Rights
On August 1st, 2024, the European Artificial Intelligence Law (hereinafter "AI Law") came into effect following its publication in the Official Journal (OJ). This milestone marked the culmination of five years of work across all European institutions, starting with the world's first legislative proposal on AI ethics, which the European Parliament approved in October 2020 (and for which I had the honor of being the rapporteur).
It is no coincidence that the initial regulatory proposal focused on ethical aspects: it reflects a civilizational commitment that is, and must remain, intrinsic to the European project.
Our proposal placed people at the heart of the equation, and this is the primary characteristic of the finally agreed-upon AI Law: it is a human-centered law.
The AI Law has been designed to respect our European principles and values. The goal is to foster the development of European AI applications, instilling trust among citizens and providing legal certainty to actors in the EU, while setting a global standard that puts people at the center and focuses on preventing risks inherent in the use of these technologies.
This is a framework regulation with an almost universal geographic scope, encompassing any AI developed or operated in the European Union, even if developed in third countries, thereby ensuring the extraterritoriality of European law.
A risk scale has been established to ensure that AI causes no harm to individuals and no disruption to our democratic societies, the rule of law, or the environment, all of which are identified as general interests to be protected.
Potential AI uses in Europe are classified as follows:
- Prohibited uses
- High-risk uses
- Uses subject to additional transparency requirements
- Low- or negligible-risk uses
AI use is prohibited in the EU when the risk is deemed unacceptable for our democratic societies, including, among others, predictive policing, surveillance through emotion recognition systems in workplaces or education, indiscriminate facial tracking on the internet, and biometric categorization systems to deduce sensitive personal data.
Mass biometric surveillance by authorities in public places is also banned in Europe, except in exceptional cases such as investigating serious crimes, searching for a kidnapped person, or preventing an imminent terrorist attack, and only provided strict legal safeguards and guarantees are observed.
High-risk AI applications are those that may significantly impact the safety, health, or Fundamental Rights of individuals. The law itself lists high-risk AI uses in sensitive areas, such as critical infrastructure, employment, education, border control, access to justice, or essential public services, including financial credit or insurance. AI applications that may impact electoral outcomes are also considered high-risk, given their importance in combating misinformation. When an AI system is classified as high-risk, a series of requirements must be met before commercialization, such as ensuring good data governance to avoid bias and discrimination, transparency, human oversight, cybersecurity and technical controls, a risk prevention plan, and registration in a European AI registry.
Additionally, during negotiations with the Council, Parliament managed to include a Fundamental Rights Impact Assessment requirement for high-risk AI used in public service provision, identifying individuals or groups that may be affected by bias or discrimination and implementing specific prevention plans.
Additional transparency requirements apply when AI interacts with natural persons, who must be informed from the outset. In this regard, the law empowers citizens by establishing new rights, such as the right to be informed and to receive a reasonable explanation when they are legally affected by AI use. Moreover, complete transparency is required for the production and dissemination of images, voices, videos, and even texts created or manipulated by AI, known as deepfakes, clearly indicating that such content is AI-generated.
The massive emergence of general-purpose and generative AI models occurred while the law was being debated. Their impact convinced the majority (despite strong pressure from some technology actors and some member states) of the need to create a new chapter for their regulation. Under the European Law, providers of general-purpose AI operating in Europe must comply with specific transparency obligations, including transparency about copyright-protected material used during training. Furthermore, models with high capabilities that pose systemic risks will be subject to additional obligations for risk prevention, cybersecurity, and disclosure of their energy consumption.
When AI poses no risks, voluntary codes of conduct are encouraged to promote, among other criteria, accessibility, gender equality, environmental sustainability, or AI training and literacy. Such a disruptive technology must be accompanied by safeguards that allow citizens to exercise democratic control, which can only be achieved through knowledge. Thus, companies that use or commercialize AI will have to ensure that their personnel have at least the knowledge necessary to assess AI-related risks and to meet the obligations established by the law (and, in turn, to inform potential users).
To promote the development of European AI, an incentive system is included through testing spaces, or sandboxes. The idea is to create at least one such space in each EU country where AI systems can be tested and developers can receive the support they need to comply with the law's obligations.
National authorities will be responsible for ensuring compliance with the law in their territories and for applying the penalties for non-compliance.
The European AI Office will provide technical support to member countries and supervise generative AI models (as well as high-risk AI, which must be registered in a European Registry).
In contrast to economic, techno-centric, or state-control models, Europe has taken the lead in regulating AI according to our principles, promoting a fairer, safer, more transparent, robust, sustainable, and socially responsible AI.
The so-called "Brussels Effect" is not only a way to influence the world but also a way to promote our values and ensure that they become a sort of global regulatory standard. Contrary to predictions that Europe would be an island of hyper-regulation disconnected from global technological development, more and more regions are adopting regulations inspired by the European model.
The Council of Europe (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted on May 17th, 2024, and the Global Digital Compact, adopted at the United Nations (UN) this September, are two good examples. Both outline commitments to ensure that digital technologies contribute to sustainable development and Human Rights.
AI is a technology as powerful and world-changing as any we have ever known. Humanity can reach extraordinary heights of development or slide down a dystopian path in which the wealth gap between those who control these technologies and those who do not becomes colossal.
It is up to us to choose a model that reconciles technological development with respect for our human condition.
Biography of the Guest Expert
Coming from Spain, Ibán García del Blanco served as a Member of the European Parliament (MEP) during the 9th legislature, as part of the European Political Group of the Progressive Alliance of Socialists and Democrats (S&D), from July 2019 to July 2024.
In his capacity as Coordinator for the S&D Group in the European Parliament's Committee on Legal Affairs, among other responsibilities, he was part of the negotiating team for the Artificial Intelligence Act and was a member of the Special Committee on Artificial Intelligence in a Digital Age.
He also acted as rapporteur for the legislative initiative on the Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, and for the opinion on the Data Act, and was co-rapporteur for the own-initiative report on virtual worlds.
As a member of the Committee on Culture and Education, he was rapporteur for the own-initiative report on "Cultural Diversity and the Conditions of Authors in the European Music Streaming Market".
As a Lawyer, Ibán García del Blanco has specialised in cultural management and intellectual property. He served as President of Acción Cultural Española in 2018 and 2019, the Spanish Government entity responsible for promoting and supporting Spanish culture both domestically and internationally.
He is a member of the Royal Board of Trustees of the National Library of Spain and, since October 2024, has held the position of International Affairs Director of LASKER Consulting.