
Why AI Must Be Trained to Uphold Human Rights

  • Writer: EU Agency for Fundamental Rights (FRA)
  • Jun 12
  • 7 min read

Author:

European Union Agency for Fundamental Rights (FRA), the EU Agency working to instil a Fundamental Rights culture and to make rights a reality for all within the European Union.


Abstract


In 2025, AI is everywhere, and algorithms shape how we live, work and exercise our Fundamental Rights. From data protection and non-discrimination to freedom of expression and fair elections, AI is becoming instrumental in how societies develop.


But as technology advances, we have the responsibility to ask: progress for whom and at what cost?


At the Fundamental Rights Agency (FRA), we have shown that AI systems are not neutral: they are only as fair as the data they are trained on, and they absorb the societal biases embedded in that data. While the need to adjust quickly and stay competitive is real, it must not come at the expense of Fundamental Rights. Without careful oversight and regulation, these systems risk exacerbating the very inequalities we aim to overcome.


AI and the Right to Non-Discrimination


Let’s consider the Dutch childcare benefits case. In 2018, it became clear that thousands of innocent families--many from immigrant backgrounds--had been flagged by a fraud-detection algorithm, falsely accused of fraud and forced to repay social benefits. The consequences were devastating: some parents lost access to social benefits, housing, or even the custody of their children. This was not just a technical glitch, but a systemic problem.


AI is also increasingly being used in predictive policing. Algorithms trained on biased crime data send the police to the same neighborhoods, creating a so-called feedback loop. More policing leads to more recorded crime, which then justifies even more policing.

Over time, this cycle deepens mistrust in communities that already face discrimination--for example, more than one in two Black people report experiencing racial profiling. While some areas are over-policed, others are left without support, even when they need it.
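
To make this mechanism concrete, here is a deliberately simple simulation of the cycle. The district names, numbers and allocation rule are invented for illustration and are not FRA data; the point is only that when patrols follow recorded crime, and crime is mainly recorded where patrols are present, an initial skew in the data keeps confirming itself even though the underlying crime rates are identical.

```python
# Toy sketch of the predictive-policing feedback loop described above.
# All names, numbers and rules are hypothetical assumptions for illustration.

TRUE_RATE = {"district_a": 0.05, "district_b": 0.05}   # identical real crime rates
records = {"district_a": 600.0, "district_b": 400.0}   # historically skewed records
POPULATION = 10_000
PATROL_UNITS = 100

for year in range(1, 6):
    total_records = sum(records.values())
    # Patrols follow the recorded data: more past records mean more patrols.
    patrols = {d: PATROL_UNITS * records[d] / total_records for d in records}
    for d in records:
        actual_crime = POPULATION * TRUE_RATE[d]
        # Only crime that patrols are present to observe ends up in the records.
        observed = actual_crime * (patrols[d] / PATROL_UNITS)
        records[d] += observed
    share_a = records["district_a"] / sum(records.values())
    print(f"year {year}: district_a holds {share_a:.0%} of recorded crime")
```

Even though both districts have the same real crime rate, district_a keeps holding 60% of the recorded crime year after year: the allocation produced by the biased records generates exactly the data needed to justify that allocation again.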


A biased algorithm does not make anyone safer; it just shifts the risk and reinforces the problem.


Freedom of Speech


While disinformation and hate are nothing new, their massive amplification through social media platforms presents a growing Human Rights issue. FRA’s analysis of social media posts and comments underlines how widespread online hate is.

Moreover, hateful posts easily slip through online content moderation tools, and many people--in particular women--face harassment, abuse and incitement to violence online.


At the same time, automated detection errs in both directions. AI models trained to detect hate speech have been shown to disproportionately flag statements like ‘I am Muslim’ or ‘I am Jewish’ as offensive. This stems from training datasets in which such identity terms often appear alongside genuinely hateful content. Conversely, adding the word ‘love’ to a hateful statement may trick the algorithm into believing it is not hateful at all.
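
A deliberately crude sketch illustrates how both failure modes can arise from co-occurrence statistics alone. The tiny corpus and the word-count scoring rule below are invented for this example; real moderation models are far more sophisticated, but the underlying dynamic is comparable.

```python
from collections import Counter

# Toy training corpus: identity terms happen to appear mostly in abusive posts,
# while 'love' appears only in benign ones. Everything here is made up for
# illustration -- it is not a real dataset or a real moderation system.
hateful = ["muslim people should leave", "jewish people are awful", "i hate muslim people"]
benign  = ["love this song", "love the new cafe", "love my dog", "so much love today"]

def counts(texts):
    return Counter(word for text in texts for word in text.split())

hate_counts, benign_counts = counts(hateful), counts(benign)

def score(post):
    # Per-word evidence: positive if a word is seen more often in hateful posts.
    return sum(hate_counts[w] - benign_counts[w] for w in post.split())

for post in ["i am muslim",                         # harmless identity statement
             "i am jewish",                         # harmless identity statement
             "jewish people are awful",             # genuinely hateful
             "jewish people are awful love love"]:  # same post padded with 'love'
    verdict = "flagged" if score(post) > 0 else "allowed"
    print(f"{post!r}: {verdict} (score {score(post)})")
```

The harmless identity statements are flagged because the words ‘muslim’ and ‘jewish’ only ever appeared in abusive training posts, while padding a genuinely hateful post with ‘love’ pushes its score below the threshold and lets it through.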


This only underlines that AI can speed up how we process content, but unsupervised speed can just as easily undermine its potential benefits.


Electoral Processes


The power of AI to influence public opinion is most visible during elections.


During the Brexit referendum, for example, social media campaigns were used to deliver targeted messaging that often escaped public scrutiny and accountability.

More recently, the first round of the 2024 Romanian presidential election had to be annulled after serious allegations of irregularities. The Romanian Constitutional Court ruled that voters had effectively been disinformed and manipulated by a candidate’s electoral campaign on social media.


This shows that protecting democracy in the digital age also means guaranteeing the integrity of our elections in the online space. If we fail, we risk the erosion of democracy itself. Our Fundamental Rights Report, FRA’s annual flagship report, focuses on the need for fair, transparent and safe elections--including their online dimension.


Conclusion


As Human Rights professionals, we cannot limit ourselves to highlighting the risks that AI can pose; we must also come forward with solutions. Our lives are already digital in many ways--and we will not be able to change that. What we need now is a clear path forward that protects everyone’s rights.


The good news? We already have some of the tools to get there.


The AI Act is the world’s first comprehensive law on AI. It sets clear standards, including transparency, Fundamental Rights impact assessments, and safeguards around the use of sensitive data. Similarly, the Digital Services Act (DSA) strengthens accountability for online platforms by requiring the removal of illegal content, greater algorithmic transparency, and stronger user protections.


While legislation is a significant step, we must also ensure that its implementation is robust, coherent and rights-based.


At FRA, we have developed a series of recommendations for anyone actively contributing to developments in this field, including:

  • considering the impact of AI on all Fundamental Rights;

  • testing algorithms for bias before and also during their use (one simple form such a test can take is sketched after this list);

  • providing guidance on the use of sensitive data, such as gender;

  • increasing access to the data and infrastructures behind AI and online platforms;

  • ensuring effective oversight.
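
To illustrate the bias-testing recommendation above, the sketch below shows one simple form such a test can take: comparing false positive rates across groups on an audit sample. The records, group labels and the 10-percentage-point threshold are entirely hypothetical assumptions.

```python
# Minimal sketch of a pre-deployment (and in-use) bias check: does the model
# wrongly flag one group more often than another? All records are hypothetical.

def false_positive_rate(records, group):
    """Share of non-fraudulent cases in `group` that the model wrongly flagged."""
    innocent = [r for r in records if r["group"] == group and not r["is_fraud"]]
    flagged = [r for r in innocent if r["model_flag"]]
    return len(flagged) / len(innocent) if innocent else 0.0

# Hypothetical audit sample: model decisions plus the eventually verified outcome.
audit_sample = [
    {"group": "group_a", "model_flag": False, "is_fraud": False},
    {"group": "group_a", "model_flag": True,  "is_fraud": True},
    {"group": "group_a", "model_flag": False, "is_fraud": False},
    {"group": "group_b", "model_flag": True,  "is_fraud": False},
    {"group": "group_b", "model_flag": True,  "is_fraud": False},
    {"group": "group_b", "model_flag": False, "is_fraud": False},
]

fpr_a = false_positive_rate(audit_sample, "group_a")
fpr_b = false_positive_rate(audit_sample, "group_b")
print(f"false positive rate: group_a {fpr_a:.0%}, group_b {fpr_b:.0%}")

# A large gap between groups is a red flag to investigate before the system is
# used -- and again at regular intervals while it is in use.
if abs(fpr_a - fpr_b) > 0.10:
    print("warning: disparate false positive rates across groups")
```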


With our ongoing research work, we are actively supporting the development of trustworthy AI: we develop ways to assess Fundamental Rights risks, we advise on the implementation of the AI Act, and we scrutinise high-risk AI applications. We also analyse how police forces use remote biometric identification, including facial recognition technology, and later this year, we will publish a report on the digitalisation of our justice systems and compliance with Fundamental Rights.


We believe that innovation and Human Rights go hand in hand. By minimising risks and respecting Human Rights, we can create better technology that provides all of us with the benefits we were once promised.



Bibliography


Dutch Parliamentary Committee, 2020. “Unknown injustice: Report of the Parliamentary Committee childcare benefits scandal”. The Hague: Dutch Parliamentary Committee.


European Parliament, 2024. “Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)”. Brussels: European Parliament.


Fundamental Rights Agency, 2022. “Bias in algorithms: Artificial Intelligence and Discrimination”. Vienna: Fundamental Rights Agency, p. 29.


Fundamental Rights Agency, 2023. “Being Black in the EU – Experiences of people of African descent”. Vienna: Fundamental Rights Agency, p. 20.


Fundamental Rights Agency, 2023. “Online Content Moderation: Current challenges in detecting hate speech”. Vienna: Fundamental Rights Agency.


Romanian Constitutional Court, 2024. “Decision no. 32 of December 6, 2024 on the annulment of the electoral process regarding the election of the President of Romania in 2024”. Bucharest: Romanian Constitutional Court.


European Union, 2022. “Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act)”. Brussels: European Union.


About FRA


The European Union Agency for Fundamental Rights (FRA) is an independent body established in 2007 to provide expert advice to EU institutions and Member States on Fundamental Rights issues. Headquartered in Vienna, FRA's mission is to ensure that the rights of individuals within the EU are protected and promoted, especially in areas where EU law is applied.


FRA conducts research, collects data, and provides analysis on a range of Human Rights topics, including: access to justice; discrimination and racism; data protection and privacy; migration and integration; rights of the child; victims' rights. By offering evidence-based insights, FRA aids in the development of policies and legislation that uphold Fundamental Rights across the EU.

Through its comprehensive reports and surveys, FRA influences EU policy-making by highlighting areas where rights are at risk; recommending measures to address systemic issues; supporting the implementation of EU law in Member States.


Recognizing the rapid advancement of technology, FRA examines how innovations like artificial intelligence and big data impact Fundamental Rights.

Its research focuses on ensuring that technological developments do not compromise privacy, equality, or other core individual rights. FRA's studies have identified potential risks associated with AI applications in public services, such as predictive policing leading to biased law enforcement practices; automated decision-making in social benefits potentially causing unfair exclusions; and the use of facial recognition technology raising privacy concerns.


FRA advocates for a Human Rights-based approach to AI by participating in expert meetings on AI governance, contributing to discussions on the EU's Artificial Intelligence Act and emphasizing the need for transparency, accountability, and non-discrimination in AI systems.

Its involvement ensures that AI regulations align with Fundamental Rights and the related principles governing EU law in the field. To amplify its impact, FRA collaborates with EU institutions and Member States, civil society organizations, and academic and research institutions. These partnerships facilitate the exchange of knowledge and promote the integration of Human Rights considerations in various sectors.


In a time of political tension, social inequality, and rapid technological change, protecting Fundamental Rights in the digital space is more urgent than ever. FRA works to ensure that human dignity, equality, and justice remain at the core of EU law and policy, whether addressing discrimination, upholding the rule of law, or guiding the ethical use of new technologies.


FRA’s mission is to make sure rights are real for everyone, every day--in physical reality and in the digital space.

 
 
 
