The Intelligence of Digital Bits and the Rights of Humans: Friends or Foes?
- Gabriel N. Toggenburg
Authors (1):
Gabriel N. Toggenburg - Head of Sector EU Charter, Rule of Law and Democracy at the European Union Agency for Fundamental Rights (FRA)
in co-authorship with: David Reichel - Head of Data and Digital Sector at the European Union Agency for Fundamental Rights (FRA)
Introduction
Artificial intelligence has reached a performance level that makes the writing of this small article close to superfluous. Even the open-access version of ChatGPT provides a well-structured and quite convincing reply to the question of whether Artificial Intelligence (AI) and Human Rights are friends or foes. And it does so in a fraction of a second, whereas we as human authors have invested quite a bit of our weekend to address this topic with our modest but authentic human brains.
Human Rights and Artificial Intelligence: the Agnostic Constitutional Phase
Most probably, Human Rights considerations were absent at the cradle of the AI discussion, given that discussing the (admittedly more philosophical than technical) potential of non-human intelligence predates modern Human Rights law. (2)
In fact, the same goes the other way round. When the Council of Europe negotiated its Human Rights gold standard, the Convention for the Protection of Human Rights and Fundamental Freedoms, back in the late 1940s, AI was not yet a topic of interest for society at large.
The same holds for the EU’s Charter of Fundamental Rights, negotiated in 1999 and 2000. Back then, AI was a topic only in Hollywood movies and science fiction books. Those who are in their middle age, such as the authors, remember well that the “Terminator” killed human beings en masse at the order of a hostile AI called “Skynet”. Around the time the Charter was elevated to the status of primary EU law in 2009, a parallel development took place: computer processors became increasingly powerful, and people started carrying small computers, smartphones, with them all the time. The fast exchange and storage of digital data points (bits) had a huge impact on the way people could use these computers, importantly for communication and information exchange.
During the phase of massive data processing, also referred to as Big Data, methods to process and make use of data skyrocketed. This led mainly to considerable improvements in so-called 'Machine Learning', which is a fancy term for using data to infer or predict certain outcomes. This is mainly done by evaluating the accuracy of predictions and outcomes, without spending too much time on the theoretical models with which traditional statisticians would have approached data modelling. (3)
For example, detecting people’s faces in images or videos suddenly became possible with near-perfect accuracy. Yet, when negotiating the EU’s Charter of Fundamental Rights in 1999/2000, only a few voices referred to the upcoming challenges of technology and the internet, calling for a “modern approach to fundamental rights” that would embrace “the problems associated with the Internet”. (4) Some called for encryption as a means to protect confidentiality (5) or spoke of a new-generation right to have “access to the internet” (6), while others stressed the need for protection against abuses, such as in the context of children’s rights and pornography. (7) AI was absent from the negotiations and consequently does not appear in the text of the EU Charter.
That AI was absent from the process of constitutionalising Human Rights at European level, and is hence reflected neither in the ECHR (Council of Europe) nor in the Charter (EU), does not imply that these key documents do not apply to the use of AI and its impact. There is no need for a new constitutional text on Human Rights in the era of AI.
Such an endeavour would lead to an inflation of Human Rights provisions that would create more conflicts rather than more protection. The problem is not so much to identify relevant rights (a question of substantive law) but how to guarantee their protection in an AI context (a question of procedure). (8) In fact, the need to regulate AI at EU level also arose not so much for substantive as for procedural reasons.
The main question was how best to ensure that Fundamental Rights standards can be upheld when using or considering using AI systems. (9)
Putting Fundamental Rights at the basis of Europe’s approach to AI: the phase of regulation
It took a couple of years, starting around 2017, for the interaction between AI and Human Rights to become a key concern. (10)
Europe realised it needed to regulate AI to safeguard Fundamental Rights, be it in the framework of the Council of Europe with the AI Framework Convention or of the European Union with its prominent AI Act. Human Rights are mentioned close to 40 times in the 12-page Framework Convention (11), and Fundamental Rights are referred to over 100 times in the 144-page AI Act. (12)
Article 1 of the AI Act defines the overall purpose of the regulation, which is to “promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation”.
Under the AI Act, Fundamental Rights play an explicit role.
For example, in the classification of AI systems, the identification and analysis of risks, the obligation to provide appropriate safeguards, the impact assessment for high-risk systems (13), the standardisation process, AI regulatory sandboxes (14), cooperation (15), and the right to explanation (16). Moreover, national Fundamental Rights authorities shall have the power to request and access any documentation created or maintained under the AI Act when necessary for effectively fulfilling their mandates (17), and they have to be informed by the market surveillance authorities where Fundamental Rights are at risk (18).
Next to the AI Act, other pieces of EU legislation will form the core of the EU’s AI acquis including, of course, European data protection law (the General Data Protection Regulation and the Law Enforcement Directive), the Product Liability Directive, the Directive on non-contractual liability, and media-relevant legislation such as the Digital Services Act and the Media Freedom Act, but also the Platform Work Directive. Wherever provisions of this AI-relevant acquis apply, the application of the Charter will follow like a shadow. (19)
Fundamental Rights and AI: a Moving Target
It remains to be seen to which degree this regulatory system will be able to prevent and address Fundamental Rights concerns in the context of AI.
The FRA and other actors have pointed especially to three areas: the rights to privacy and data protection (Articles 7 and 8 of the Charter), equality and non-discrimination (Articles 20 and 21 of the Charter), and access to justice (Article 47 of the Charter). But it is safe to assume that AI tools will sooner rather than later become a reality in all contexts of our lives, be it employment, housing, social services or public administration. They might even conquer roles in the most intimate corners of our private lives.
Accordingly, all Fundamental Rights in the Charter are likely to become of relevance when assessing the impacts of AI-use cases.
It will not always be easy to grasp new forms of human-machine interaction with the old language of Fundamental Rights. Maybe the mother of all Fundamental Rights, the right to dignity, will see a renaissance, as it encapsulates in all its simplicity what it means to be human.
This is what Article 1 of the EU’s Charter of Fundamental Rights says: “Human dignity is inviolable. It must be respected and protected.” (20) It is with good reason that Oreste Pollicino and other prominent experts in the field put “dignità” first in a list of 20 principles that should guide AI applications. (21) And, who knows, maybe old case-law will see new interpretations. Georg Ress, a former ECtHR judge, recently argued that the “respect for the minimum requirements of life in society” and the principle of “living together” might in the future also serve as arguments to defend a human way of life not dominated by machine-imposed limitations. (22)
Big data and AI do have a bright side, with obvious potential to improve human lives (and here, Fundamental Rights and the use of AI pull in the same direction): enhancing health care, easing and increasing access to information, improving the efficiency of public services, assisting in fighting catastrophes including climate change, and providing support to older persons or persons with disabilities. But there is equally the risk that AI leads to built-in discrimination, enhances digital and technological Darwinism, gives new teeth to unlimited surveillance, and contributes to large-scale manipulation of the information flows in our democracies. It is important to ensure that AI tools stay on the track of becoming “General Problem Solvers” (GPS) (23) and do not risk becoming, from a Fundamental Rights perspective, general problem producers.
Much of this 'dark side' of AI tends to be presented through the lens of anthropomorphism, attributing human-like traits to machines. (24) This sort of “Skynet effect” (see above our midlife-crisis-induced reference to Terminator) easily makes humans believe that machines or AI can act on their own. Discussions that humanise machines, and sometimes even call for a legal personality of AI applications, challenge accountability and dilute our own, human responsibility. (25) It should not be forgotten that it is humans who use machines and who need to be held accountable for the outcomes.
AI does not happen. It is created.
Conclusion: Fundamental Rights as Potential Guardians of the Future
It is too early to tell what the balance sheet of AI will look like 10 years after the entry into force of the AI Act. When we 'asked' ChatGPT before writing this piece whether AI and Human Rights are friends or foes, it 'concluded' that AI is simply a mirror of humanity and that its future is a test of “our wisdom, not just our technical skill”.
The question is not what AI will become, but “what we will make of it”. (26) As long as the AI output here speaks of our wisdom, that is, the wisdom of humankind, there is hope. After all, the output of an AI model like this consists of mere word predictions based on correlations in text originally prepared (so far) by people, plus whatever tweaks to the predictions the service provider considers necessary. And in order not to be left with hope alone, legal practitioners and consumers alike should not lose sight of Fundamental Rights: they are key tools to hold authorities and (large) corporations accountable and to translate this hope into reality, thereby allowing AI and Human Rights to be friends rather than foes.
References
(1) Whereas both authors work for the European Union Agency for Fundamental Rights (FRA) all views here expressed are private and cannot be attributed to FRA.
(2) For instance, the concepts of a lingua characteristica and a calculus ratiocinator were developed by Gottfried Wilhelm Leibniz back in 1666 in his Dissertatio de Arte Combinatoria.
(3) See the seminal paper by: Leo Breiman, 2001. "Statistical Modeling: The Two Cultures". Statistical Science, Vol. 16, No. 3 (Aug., 2001), pp. 199-215.
(4) Catherine Lalumière, opinion for the Committee on Constitutional Affairs on the drafting of a Charter of Fundamental Rights of the European Union (C5-0058/99 –1999/2064(COS)), 28.2.2000.
(5) Submission by Kathalijne Buitenweg and Johannes Voggenhuber, amendment 222.
(6) Submission by CAVE (La Confederación de Asociaciones de Vecinos Consumidores y Usuarios de España), CHARTE 4336/00 CONTRIB 200. See also the submission by Andrea Manzella, representative of the Italian Senate calling for a right to look for information, Observations reçues relatives au Document CHARTE 4422/00.
(7) Submission by M. Friedrich, CHARTE 4218/00 NC/cb 1, 21.03.2000.
(8) Oreste Pollicino, Forum AI and law, in Rivista di BioDiritto, n. 1/2010, 491-492, at 492.
(9) European Union Agency for Fundamental Rights, 2020. "Getting the Future Right – Artificial Intelligence and Fundamental Rights."
(10) See European Parliament resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law enforcement (2016/2225(INI)).
(11) Council of Europe, 2024. "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". Council of Europe Treaty Series - No. 225.
(12) European Parliament and Council of the European Union, 2024. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act)." Official Journal of the European Union, L 2024/1689, July 12, 2024.
(13) Art. 9 and Art. 27 AI Act.
(14) Art. 57 AI Act.
(15) To be established under Article 67 AI Act.
(16) Art. 86 AI Act.
(17) Art. 77 AI Act.
(18) Art. 79 AI Act.
(19) For those who are less familiar with the EU Charter it might be relevant to recall: the Charter binds the Member States only when they are implementing Union law (Article 51 of the Charter). This might come as a disappointing surprise, and given this limited field of application the Charter has also been referred to as an “illusionary giant”. See: Gabriel N. Toggenburg, "The Charter of Fundamental Rights: An Illusionary Giant? Seven brief points on the relevance of a still new EU instrument", in A. Crescenzi, R. Forastiero, G. Palmisano (eds.), "Asylum and the EU Charter of Fundamental Rights", Editoriale Scientifica, 2018, pp. 13-22.
Given that much of the AI Act is not so much about the actual decisions taken by AI but about the procedural aspects leading to the use of AI, the reach of the Charter’s applicability might not always go far enough to keep AI under a sufficient umbrella of Fundamental Rights control, which underlines the relevance of the ECHR. See: Melanie Fink, "The hidden reach of the EU AI Act", in Verfassungsblog, 20.1.2025. Do note, however, that the ECHR and Fundamental Rights under the national constitutions always apply.
(20) See Article 1 of the Charter.
See also: Gabriel N. Toggenburg, August 2024. "The 1st of all EUr rights: dignity and how the Charter contributes," first entry in the “All EU-r rights” Series, August 2024.
(21) Alessandro Pajno et al., 2019. "Intelligenza Artificiale: criticità emergenti e sfide per il giurista." Rivista di BioDiritto, n. 3/2019, pp. 205-235, at 230.
(22) Georg Ress, "Künstliche Intelligenz (KI) als Herausforderung für das Europarecht und Völkerrecht", in: Berliner Online-Beiträge zum Europarecht, Nr. 148, p. 18 (he refers here to the case of S.A.S. v. France, Application no. 43835/11, 1 July 2014, Strasbourg).
(23) This is to recall a relevant element in the history of AI: when Herbert Simon and Allen Newell presented their programme back in 1957 they called it “General Problem Solver” (GPS). GPS illustrated how structured algorithms could simulate human thinking processes.
See: Manuela Lenzen, 2023. "Künstliche Intelligenz." C.H.Beck, 2023, p. 21.
(24) Adriana Placani, 2024. "Anthropomorphism in AI: hype and fallacy". AI Ethics 4, 691–698 (2024).
(25) For a good discussion on the limits of legal personality, see: Simon Chesterman, 2020. "Artificial Intelligence and the limits of legal personality," International and Comparative Law Quarterly, Vol. 69 (4) pp. 819 – 844.
(26) Question posed to ChatGPT on 23 May 2025, whether AI and Human Rights are friends or foes.