
Algorithmic Bias: Human Rights in the Middle East

  • Writer: Patrick Zaki
  • Feb 13

Updated: Apr 10

Author:

Patrick Zaki - Human Rights Advocate & PhD Student at Scuola Normale Superiore (SNS) in Florence (Italy)


Abstract


“It is now increasingly recognized that machine systems built to be objective and unbiased do indeed discriminate along familiar human lines, reproducing or amplifying social differences and inequalities.” – Safiya Umoja Noble, Algorithms of Oppression


Artificial Intelligence (AI) is often perceived as neutral and objective, yet AI bias has emerged as a critical Human Rights issue. AI systems are built upon datasets collected by human actors, whose implicit biases, historical prejudices, and systemic inequalities are embedded into these technologies. As a result, rather than eliminating discrimination, AI can perpetuate and even amplify existing patterns of racism, gender bias, and socio-economic marginalization.


This article aims to provide a clear and accessible explanation of AI bias and how it intersects with Human Rights.


By examining how AI-driven decision-making can reinforce discrimination and restrict access to Fundamental Rights, it highlights the broader implications for marginalized communities.


Through the case study of Jordan’s Takaful cash assistance program, this article explores how AI bias manifests in real-world welfare systems, particularly in relation to gender-based discrimination and barriers to digital access. Ultimately, the goal is to underscore the need for a Human Rights-centered approach to AI governance, ensuring that technological advancements do not deepen existing inequalities.


AI and Data Bias


“When you start building algorithms for this particular purpose, for overseeing access, what always happens is that people who need help get excluded.” – Meredith Broussard, Professor at NYU


The Organisation for Economic Co-operation and Development (OECD) defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”


AI operates through machine learning, analyzing vast amounts of data at an unprecedented scale. These systems rely on algorithms, often deep-learning neural networks, that automatically examine large datasets to detect patterns and generalize relationships within the data, and those learned patterns then shape decision-making across a wide range of applications. However, AI systems are only as reliable as the datasets they are trained on: the core issue of AI bias originates from the data itself and the structural inequalities embedded within it.
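To make this mechanism concrete, the short sketch below uses entirely synthetic data: an ordinary classifier is trained on historical decisions that were biased against one group, and the trained model then reproduces that bias even though both groups are equally qualified. The variable names (group, score, hist_label) and the numbers are illustrative assumptions, not drawn from any real system.

```python
# Minimal sketch (synthetic data): a model trained on historically biased
# decisions reproduces that bias, even though both groups are equally qualified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # 0 = majority, 1 = marginalized group
score = rng.normal(0.0, 1.0, n)      # "qualification", same distribution for both groups

# Historical labels: past decision-makers approved group 1 less often
# at the same qualification level (the bias we want to expose).
logit = 1.5 * score - 1.2 * group
hist_label = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([score, group]), hist_label)

for g in (0, 1):
    mask = group == g
    preds = model.predict(np.column_stack([score[mask], np.full(mask.sum(), g)]))
    print(f"group {g}: predicted approval rate {preds.mean():.2f}")

# The learned model approves group 1 at a visibly lower rate: the historical
# bias is now encoded in the supposedly "objective" algorithm.
```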


While AI is often seen as an impartial tool, it does not inherently filter, verify, or contextualize the fairness of the data it processes. This is particularly concerning because institutions, corporations, and governments that develop and deploy AI technologies often have specific agendas, shaping how AI systems interpret and apply data.


A significant concern is that AI systems rely on historical data that reflects centuries of discrimination, colonialism, and patriarchy.


Data is not collected in a vacuum—it is shaped by who gathers it, the methods used, and the underlying societal biases of its time. Many AI models, particularly those used for decision-making in employment, finance, policing, and social services, inherit and perpetuate systemic discrimination against marginalized groups. For instance, AI tools have been shown to perpetuate housing discrimination, affecting tenant selection and mortgage qualifications, thereby hampering the economic security of marginalized communities (ACLU).


For example, AI-driven hiring tools have been shown to disadvantage women and people of color, as they are trained on historically biased employment data. Similarly, facial recognition technologies have been criticized for their racial bias, often misidentifying individuals from minority communities at disproportionately higher rates.


According to the Office of the United Nations High Commissioner for Human Rights (OHCHR), racial bias in AI systems has led to wrongful arrests, exclusion from employment opportunities, and further marginalization of vulnerable populations (OHCHR).


While data quality can be controlled to some extent, the reality is that biased data, once embedded into AI systems, can perpetuate and even exacerbate existing inequalities. Without greater transparency, accountability, and oversight, AI will continue to reinforce discriminatory outcomes rather than creating fairer and more equitable systems.


Case Study: AI Bias in Jordan’s Takaful Program


The World Bank, one of the largest development actors promoting the use of data-driven technologies, has been at the forefront of integrating AI and algorithmic decision-making into social welfare programs. A key example is Jordan’s Takaful program, a major cash transfer initiative designed to provide financial assistance to low-income families. The program employs advanced data analytics to identify and assist vulnerable households, aiming to enhance social protection and alleviate poverty.


Jordan is one of the eight (out of ten) World Bank borrowing countries in the Middle East and North Africa (MENA) region that have received loans to modernize and digitize their welfare programs. The Takaful project reflects the Bank’s broader strategy of leveraging technology to create more efficient, targeted, and inclusive social assistance systems.


Historically, the World Bank has promoted poverty-targeted cash transfer programs, identifying beneficiaries by estimating income and welfare levels. However, this approach has been widely criticized for undermining social security rights, particularly in the aftermath of the COVID-19 pandemic, which exposed flaws in global poverty alleviation systems. Critics argue that poverty-targeted programs frequently fail to reach those most in need due to errors, mismanagement, and corruption. In response, the Bank has increasingly invested in AI-driven decision-making systems, claiming they will improve accuracy, reliability, and efficiency.


However, this shift raises serious concerns about the potential for algorithmic bias and the impact on Human Rights in social welfare programs.


According to a Human Rights Watch (HRW) report, the Takaful program has exhibited significant bias, resulting in the violation of fundamental Human Rights.


This article aims to highlight two major areas of concern:


1. Gender-Based Discrimination


AI-driven welfare programs often reflect deep-rooted gender biases, and the Takaful program is no exception. Many Jordanian women reported that the system automatically assumed men to be the primary financial providers, prioritizing male heads of households when allocating financial assistance.


As a result, women-led households and single mothers faced exclusion or delayed support, reinforcing existing economic disparities.


Moreover, the system failed to account for non-Jordanian spouses, effectively excluding migrant families and mixed-nationality households from receiving aid. This algorithmic oversight further marginalized vulnerable populations and disregarded the complexities of Jordan’s diverse social fabric.
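To illustrate how such exclusions can arise, the sketch below is a purely hypothetical simplification along the lines HRW describes; the Takaful system’s actual code is not public, and the Household class, the eligibility_score function, and all numbers here are invented for illustration. It shows how a “male provider” assumption and a national-ID requirement for spouses can silently downrank or drop exactly the households discussed above.

```python
# Hypothetical illustration only: a simplified eligibility ranking that encodes
# the assumptions HRW criticizes. This is NOT the actual Takaful code.
from dataclasses import dataclass

@dataclass
class Household:
    head_is_male: bool
    spouse_has_national_id: bool   # non-Jordanian spouses often lack one
    estimated_income: float        # proxy-based welfare estimate

def eligibility_score(h: Household) -> float:
    if not h.spouse_has_national_id:
        return 0.0                       # mixed-nationality families drop out entirely
    score = max(0.0, 1.0 - h.estimated_income / 1000)
    if not h.head_is_male:
        score *= 0.8                     # "male provider" assumption quietly
    return score                         # downranks women-led households

applicants = [
    Household(head_is_male=True,  spouse_has_national_id=True,  estimated_income=400),
    Household(head_is_male=False, spouse_has_national_id=True,  estimated_income=400),
    Household(head_is_male=False, spouse_has_national_id=False, estimated_income=200),
]
print([round(eligibility_score(h), 2) for h in applicants])  # [0.6, 0.48, 0.0]
```

The point of the sketch is that none of these rules looks like explicit discrimination in the code, yet the ranking systematically disadvantages women-led and mixed-nationality households.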


2. Digital Accessibility and Exclusion


The Takaful system is designed for individuals with reliable internet access and digital literacy, creating barriers for low-income beneficiaries. The application and verification processes are entirely online, requiring access to a computer or smartphone. However, those in severe poverty—the very individuals the program aims to assist—are often the least likely to have stable internet access or technological resources. By failing to accommodate digitally excluded populations, the system inherently discriminates against those who lack access to technology, reinforcing existing socio-economic inequalities rather than alleviating them.


Despite the promise of AI to enhance efficiency, the implementation of the Takaful program demonstrates how algorithmic decision-making can reinforce systemic biases, disproportionately affecting women, migrants, and the digitally excluded. This case study underscores the need for greater oversight, transparency, and Human Rights-centered AI governance to ensure that automated welfare systems do not perpetuate discrimination under the guise of efficiency.


Conclusion


AI holds immense potential to transform decision-making across sectors, but its deployment must be firmly rooted in Human Rights principles.


Far from being neutral, AI systems often reflect and amplify the biases present in the data they are trained on, perpetuating societal inequalities. Without careful oversight, these technologies risk deepening discrimination against marginalized groups, including women, minorities, and those with limited digital access. To prevent AI from becoming a tool of exclusion and injustice, Human Rights must be central to its design, implementation, and regulation.


Accountability is critical.


Governments, institutions, and corporations developing or deploying AI in sensitive areas like welfare, employment, and law enforcement must recognize and address the risks of algorithmic bias. Transparent practices—such as disclosing how AI systems operate, where data is sourced, and how decisions are made—are essential to building trust and ensuring fairness. Regular bias audits, corrective measures, and robust safeguards must be implemented to prevent discrimination and protect vulnerable populations.
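As one hedged example of what a routine bias audit could look like in practice, the sketch below computes approval rates per group from hypothetical decision records and flags any group whose rate falls below the commonly used four-fifths (“80%”) disparate-impact threshold. The disparate_impact_audit function and the numbers are illustrative assumptions; real audits would also examine error-rate parity, intersectional groups, and statistical uncertainty.

```python
# A sketch of a simple bias audit on hypothetical decision records.
from collections import defaultdict

def disparate_impact_audit(records, threshold=0.8):
    """records: iterable of (group, approved) pairs, e.g. ("women-led", True)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Flag groups whose approval rate is below 80% of the best-treated group's rate.
    flags = {g: r / best < threshold for g, r in rates.items()}
    return rates, flags

# Example with made-up numbers:
records = [("male-headed", True)] * 700 + [("male-headed", False)] * 300 \
        + [("women-led", True)] * 450 + [("women-led", False)] * 550
rates, flags = disparate_impact_audit(records)
print(rates)   # {'male-headed': 0.7, 'women-led': 0.45}
print(flags)   # {'male-headed': False, 'women-led': True}  -> audit flag raised
```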


Strong regulatory frameworks are also needed to enforce fairness, accountability, and inclusivity in AI systems. Human oversight must be integrated into AI-driven processes to ensure that individuals affected by these technologies have access to appeal mechanisms and redress. Without such measures, AI risks entrenching historical biases rather than advancing equity and justice.


The path to responsible AI development lies in collaboration.


Policymakers, technologists, and Human Rights advocates must work together to ensure that AI serves as a force for social good. By prioritizing transparency, accountability, and the protection of Human Rights, we can harness AI’s potential to create a more just and equitable world, rather than allowing it to become a vehicle for systemic discrimination and exclusion.


References


MIT Technology Review, June 13, 2023. "An Algorithm Intended to Reduce Poverty in Jordan Disqualifies People in Need".


Human Rights Watch, June 13, 2023. "Automated Neglect: How the World Bank’s Push to Allocate Cash Assistance Using Algorithms Threatens Rights".


Zajko, M., 2022. "Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates". Sociology Compass, 16(3), e12962.


Noble, S. U., 2018. "Algorithms of Oppression: How Search Engines Reinforce Racism". New York University Press.


Organisation for Economic Co-operation and Development (OECD), 2019. "Recommendation of the Council on Artificial Intelligence". OECD Legal Instruments, May 22, 2019.



Biography of the Guest Expert


Patrick George Michel Zaki Soliman is an Egyptian Human Rights Advocate who has been actively involved in promoting and researching the international protection of Human Rights, particularly focusing on gender issues and minority rights in Egypt since 2017.


Alongside his academic pursuits, including two Master’s degrees in Gender and Minority Studies from the University of Bologna (Italy) and the University of Granada (Spain), Zaki has consistently dedicated himself to research. He began as a researcher for the Egyptian Initiative for Personal Rights (EIPR), where he investigated issues related to sexual and reproductive health and rights, particularly documenting violations against minorities. He is now pursuing a PhD at the Scuola Normale Superiore in Florence (Italy), where he researches the influence of AI in the Middle East.


Zaki’s work in Human Rights Advocacy led to his arrest by Egyptian authorities on February 7, 2020, and his detention lasted until December 8, 2021. He was held for nearly two years on charges including “disseminating false news” and “inciting protests”. His case garnered international attention, with numerous Human Rights organizations, academic institutions, and governments calling for his release.


Zaki’s case has become emblematic of the challenges faced by Human Rights defenders in Egypt and has sparked discussions about academic freedom and freedom of expression both in the country and across Europe.


In connection with his PhD-related studies, he has also been very active in advocating for the “Palestinian cause,” speaking out for the liberation of Palestinian territories occupied since 1967, where civilians continue to endure immense suffering due to the occupation.


He has now written a book titled “Sogni e Illusioni di Libertà. La mia storia” (EN: "Dreams and Illusions of Freedom. My Story"), where he shares his journey through the challenges that have shaped him into one of the leading figures in Human Rights Advocacy, inspiring fellow activists worldwide.

 
 
 
