How International and Regional Actors Are Tackling AI Discrimination

  • Writer: Nafeesa Alshaala
  • May 22
  • 7 min read

Updated: Jun 4

Author:

Nafeesa Alshaala - Former Diplomat & Founder of Diplotech Solutions

Human Rights Researcher at National Institution for Human Rights (NIHR) Bahrain

Certified Trainer in developing communities from BITA


Abstract


Artificial Intelligence (AI) is transforming how we live, work, and govern. But as algorithms increasingly shape decisions in hiring, policing, healthcare, and migration, they risk reproducing and even amplifying existing inequalities. One of the most urgent challenges today is algorithmic bias: the ways in which AI systems may disproportionately harm marginalized groups based on race, gender, socioeconomic status, or geography.
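To make the idea of disproportionate harm concrete, here is a minimal illustrative sketch, using entirely hypothetical hiring data, of one common way such bias is quantified: the disparate impact ratio (the "four-fifths rule" drawn from US employment practice). The data, group labels, and threshold are assumptions for illustration only.

```python
# Illustrative sketch: quantifying one simple form of algorithmic bias
# via the disparate impact ratio. All data below is hypothetical.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'shortlisted') outcomes in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly flagged as potential disparate impact."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical screening decisions (1 = shortlisted, 0 = rejected)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 0]   # selection rate 0.25

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33
```

A ratio this far below the conventional 0.8 threshold would prompt scrutiny of the system; the metric itself is only one of many fairness measures, and a passing score does not establish that a system is unbiased.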


At the heart of this issue lies a critical Human Rights concern: how do we ensure that technological progress does not come at the expense of equality and dignity? In response, international organizations, regional blocs, and civil society groups are actively shaping frameworks, both legally binding and advisory, to confront bias in AI.


Below is a simplified overview of these efforts, including a comparative view between global and regional responses, and a real-world case study to ground the discussion.


Global Efforts: From Ethics to Action


The global response to AI bias has largely taken the form of soft law instruments: guidelines and principles that, while not legally binding, carry significant moral and political weight. UNESCO’s Recommendation on the Ethics of AI (2021) stands as a landmark global agreement, endorsed by over 190 countries. It calls for AI systems to be transparent, accountable, and non-discriminatory.

Notably, it links the governance of AI directly to Human Rights and the Sustainable Development Goals, emphasizing inclusion, gender equality, and data sovereignty.


Similarly, the OECD Principles on AI, adopted by over 40 countries, advocate for AI that is fair, transparent, and respectful of Human Rights. While these frameworks don’t enforce legal compliance, they set a shared ethical baseline, particularly valuable for countries without national AI legislation. However, these soft law efforts face challenges of enforcement. Without binding mechanisms, their success depends on political will, voluntary compliance, and robust monitoring, areas where civil society plays a crucial role.


Europe: Legislating Fairness


The European Union (EU), as a crucial legal actor on the global stage, is advancing a comprehensive hard law response to AI risks. The EU Artificial Intelligence Act, which entered into force in August 2024, is the world’s first legislation to regulate AI based on a risk hierarchy. Under the Act, high-risk AI systems, such as those used in biometric identification, employment, or border control, must meet strict transparency, safety, and anti-discrimination standards. Certain AI applications, like social scoring, are outright banned due to their inherent Human Rights risks.


The EU also builds on its existing General Data Protection Regulation (GDPR) and Charter of Fundamental Rights (CFREU), providing legal recourse for individuals affected by biased AI decisions. The European Court of Human Rights (ECtHR) further reinforces these protections in the European region, increasingly hearing cases where digital surveillance and algorithmic profiling intersect with civil liberties. This strong legal framework offers a potential blueprint for other regions, though it reflects Europe’s specific institutional and legal context.


Africa: Innovation Rooted in Inclusion


Across the African continent, a growing number of regional and national actors are shaping AI governance with a focus on equity and development. While hard-law instruments are still emerging, soft-law initiatives have taken the lead.


The African Union’s Draft Data Policy Framework encourages member states to adopt ethical AI practices rooted in cultural context, inclusion, and development goals. It also emphasizes the need to avoid importing biased technologies without local adaptation.

Several countries are leading with innovation: Kenya’s National AI Strategy focuses on fairness and digital inclusion; Rwanda is piloting AI in healthcare with attention to data ethics; and South Africa has convened multi-stakeholder forums on AI and equity. Crucially, Africa’s approach often includes linguistic and cultural diversity, recognizing that bias can stem from datasets and systems that marginalize local languages or social norms.


Case Study: Facial Recognition at the Border


Facial recognition technology is increasingly used to control migration, particularly in Europe. The EU’s iBorderCtrl system, an experimental AI-driven lie detector used at border crossings, analyzed travelers’ facial expressions to assess trustworthiness. Though eventually discontinued, it raised serious Human Rights concerns.


Similarly, the Eurodac biometric database stores fingerprints and facial data of asylum seekers. Civil society groups have documented concerns about racial profiling, lack of consent, and opaque decision making. Migrants from Africa and the Middle East, in particular, have reported discriminatory experiences linked to algorithmic assessments at EU borders.


These systems reflect a growing trend: AI is often deployed first on the most vulnerable, without sufficient safeguards. In these contexts, bias is not just a technical flaw; it is a structural issue that can lead to wrongful detentions, deportations, or denial of asylum.


Civil Society and the Push for Accountability


Civil society has played a crucial role in advancing responsible AI: human rights organizations, academics, and grassroots activists have all been instrumental in advocating for ethical AI practices.


Groups like Access Now, AlgorithmWatch, and Amnesty International have published investigative reports and lobbied governments to halt or revise biased AI deployments. Their advocacy has resulted in moratoriums on facial recognition in some European cities, greater scrutiny of tech exports to authoritarian regimes, and the inclusion of civil society in UN AI advisory bodies.


These efforts highlight the need for participatory governance, as the involvement of affected communities, especially from the Global South, is essential. Without their voices, AI regulations risk being top-down and disconnected from lived realities.


Challenges and the Way Forward


Despite the progress made so far, several challenges remain on the path to effective AI governance.


Soft law instruments often lack enforcement power, and many developing regions still face a digital divide that complicates meaningful oversight and participation in AI governance. Additionally, private companies frequently develop AI systems without adequate external oversight or accountability. Addressing these issues requires that legal frameworks evolve more rapidly and become globally inclusive. It is also essential to prioritize capacity-building in legal, technical, and regulatory fields. Furthermore, cross-border cooperation, particularly between regional systems such as EU-AU dialogue, can help create more harmonized and rights-based AI governance.


Conclusion


As AI continues to shape the systems that govern people’s lives, combating bias is not optional; it is essential to protecting Human Rights in every region of the world. International and regional actors are making strides, but success depends on turning ethical principles into enforceable policies, and ensuring that AI development remains inclusive and just.


From UNESCO to the AU, and from the EU to local grassroots movements, the message is clear: a fair digital future is possible, but only if we build it deliberately, transparently, and together.


Yet, while soft law instruments have played a crucial role in shaping norms and raising awareness, they are ultimately limited by their non-binding nature. In the face of rapid technological advancement and growing structural inequalities, voluntary compliance is no longer enough. Binding legal frameworks, such as the EU's Artificial Intelligence Act, are indispensable tools for holding both state and non-state actors accountable. They provide clarity, consistency, and legal recourse, especially for those most vulnerable to algorithmic harm.


Personally, I believe that the future of equitable AI governance hinges on our willingness to treat technological bias as a Human Rights issue, not just a technical flaw to be patched. Only binding law can truly guarantee that protections are not left to the discretion of powerful actors. It is through enforceable, rights-based regulation, developed inclusively and applied universally, that we can ensure AI technologies uplift rather than undermine human dignity.



Bibliography


Access Now, 2021. "Ban Biometric Surveillance". Brussels, Access Now.


African Union Commission, 2022. "African Union Data Policy Framework (Draft)". Addis Ababa, African Union.


European Parliament and Council of the European Union, 2024. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act)". Official Journal of the European Union, L 2024/1689, July 12, 2024.


Hagendorff, T., 2020. "The Ethics of AI Ethics: An Evaluation of Guidelines". Minds and Machines, 30(1), pp. 99–120.


OECD, 2019. "OECD Principles on Artificial Intelligence". Paris, Organisation for Economic Co-operation and Development.


UNESCO, 2021. "Recommendation on the Ethics of Artificial Intelligence". Paris, United Nations Educational, Scientific and Cultural Organization.


Biography of the Guest Expert


Nafeesa Alshaala is a former Bahraini diplomat, educator, and founder, with a multidisciplinary background in international relations, cultural diplomacy, and Human Rights. With over a decade of experience across government institutions, embassies, and educational leadership, she is committed to bridging global perspectives with local impact.


Nafeesa holds a Master’s degree in International Relations and Cultural Diplomacy carried out in Germany and a Bachelor’s degree in Education from the University of Bahrain. She was also awarded a Fulbright scholarship at Kenyon College in the United States, where she has been serving as a teaching assistant since 2018. In the US, she also served as a cultural ambassador, acting as a representative of Bahrain in the UN Youth Assembly. Until 2023, she served as a diplomat at the Embassy of the State of Kuwait in Berlin.


In 2024, Nafeesa founded Diplotech Solutions, a platform integrating AI with diplomacy. Through this project, she has led training programs, hosted workshops, and collaborated on international projects aimed at modernizing diplomatic practices. She has been a Human Rights researcher at the National Institution for Human Rights in Bahrain since 2019, analyzing national progress in relation to global development goals.


Nafeesa is also a published writer, having authored her first book Twentieth in Twenty, and remains active in youth engagement, cultural exchange, and international collaboration. Her background in education also includes teaching roles with children, such as serving as the Arabic Language Department Coordinator at Al Busaiteen Primary Girls School in Bahrain, where she led curriculum development, teacher training, and school improvement projects.

 
 
 
