AI as a Patriarchal Tool: How Inequality is Programmed into Our Lives
- Elif Selenay Baş Bilgiç, Associate Editor at iGlobal.Lawyer & Lawyer
- Apr 10
- 7 min read
Abstract
Artificial Intelligence (AI), as a sociocultural practice that mirrors societal concepts and prejudices, inevitably reflects the patriarchal and discriminatory nature of our current world. More importantly, owing to its rapid development and widespread adoption, it perpetuates and intensifies a vicious cycle, shaping gendered relations both online and offline.
One can easily witness the sexist bias in AI technologies simply by asking image generators like DALL-E or Midjourney to create images of various professions and observing their stereotypical responses. Grappling with the ethical challenges of such a fast-evolving domain requires a thorough understanding of how negative consequences arise, how they manifest, and what can be done to mitigate them.
Causes of Gender Bias in AI
Gender bias in AI is not an isolated issue but rather a systemic one. This is apparent in earlier design shortcomings, such as seatbelts engineered for male bodies, which compromised women’s safety. In the case of AI, several interconnected factors contribute to the gender biases we frequently encounter:
Incomplete or misleading datasets
Women are often underrepresented, misrepresented or excluded in the training datasets of AI models, leading to skewed and unfair outcomes.
Supervised machine learning
Many commercial AI systems rely extensively on training data labelled by humans. Human annotators tend to reflect their conscious and unconscious gender biases, ultimately encoding them into the AI models.
Algorithmic bias
AI systems can also amplify biases through the way their algorithms are designed. For example, an algorithm that prioritises income or education treats features already shaped by historical inequality as neutral proxies for merit, and may therefore end up reinforcing harmful prejudices and discrimination against marginalised groups, including women (a minimal illustration of this proxy effect follows at the end of this list).
Use of social media content
Through large-scale web scraping, biased and discriminatory content on social media is often used as cheap training data for many AI models, resulting in the reproduction and amplification of such harmful views.
Flaws in modelling techniques
The parameter inputs for machine learning models or the training methods themselves can also introduce bias. For instance, text-to-speech and speech-to-text technologies have long performed worse for women than for men because early models were built around male voice characteristics.
Underrepresentation of women and the concentration of power
Women comprise less than one third of employees in the IT industry and a mere 22% of AI professionals. Furthermore, AI is predominantly funded, developed and regulated by an elite group of wealthy, Western men. This lack of inclusivity reinforces existing power imbalances and inequalities on a global scale.
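To make the points about skewed data and algorithmic design more concrete, the toy Python sketch below illustrates how a scoring rule that never looks at gender can still rank women lower when it weights proxy features such as income. The data, weights and scoring function are entirely hypothetical and are not drawn from any real system.

```python
# Toy illustration: a "gender-blind" scoring rule can still encode bias
# when the features it weights (income, education) reflect historical
# inequality. All numbers are invented for illustration only.

candidates = [
    # (name, gender, income_k, years_education) -- synthetic records
    ("A", "man",   72, 16),
    ("B", "woman", 58, 17),   # best educated, lower pay (historical pay gap)
    ("C", "man",   65, 15),
    ("D", "woman", 51, 16),
]

def score(income_k: float, years_education: int) -> float:
    """A naive 'merit' score that never looks at gender directly."""
    return 0.8 * income_k + 0.2 * years_education * 10

ranked = sorted(candidates, key=lambda c: score(c[2], c[3]), reverse=True)
for name, gender, income_k, edu in ranked:
    print(f"{name} ({gender}): score = {score(income_k, edu):.1f}")

# The two men rank first even though candidate B is the best educated:
# the income feature acts as a proxy for gender because the pay gap is
# already baked into the data.
```

The bias here enters purely through the correlation between income and gender in the data; removing the gender column, as many real systems do, does nothing to remove it.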
Different Manifestations of Gender Bias
The implications of gender-biased AI are extensive, ranging from lower-quality services for women and the unfair distribution of resources, opportunities and information to the perpetuation of harmful stereotypes and prejudices. Increased risks and adverse effects on both physical and mental health are also of substantial concern.
Numerous instances have been documented, from facial recognition software performing less accurately for dark-skinned women to image generators perpetuating the ‘male gaze.’
Below, we will delve deeper into some of these examples.
Recruitment
The most notorious hiring case occurred at Amazon in 2015. The company’s recruiting system was trained on résumés submitted over the previous decade, a period in which men made up 60% of Amazon’s employees. Consequently, the AI tool taught itself that male candidates were preferable and penalised résumés containing the word ‘women’.
Similarly, researchers at the University of Washington uncovered striking gender and racial biases in AI-powered résumé screening tools. After submitting over 550 résumés to leading language models, they found that the systems overwhelmingly favoured male candidates: names associated with men were preferred in 52% of comparisons, while those associated with women were selected in only 11%. Even more disturbingly, white male-associated names were favoured over Black male-associated ones 100% of the time.
Other examples relate to finding opportunities through online job platforms and social media. Research has revealed that female users were shown fewer ads for high-paying jobs than male users, even when they were equally or better qualified. LinkedIn also discovered that its algorithm showed open positions to more men than women simply because men searched for new jobs more frequently.
Healthcare
A UCL study found that AI models used for liver disease screening were nearly twice as likely to miss cases in women as in men. Furthermore, in 2023, researchers revealed that AI tools used to diagnose bacterial vaginosis, a common infection affecting women, exhibited ethnic biases: the models performed more accurately for white women but showed significant errors when diagnosing Hispanic and Asian women.
Similar findings have surfaced in mental healthcare. A 2024 study indicated that AI systems may misinterpret speech variations across different demographic groups, leading to inaccurate assessments. The tools tended to underdiagnose depression in women compared to men, despite women exhibiting higher rates of depressive symptoms. Latino participants also reported greater anxiety levels, which the AI failed to detect accurately.
A recent article in the European Heart Journal offers a different perspective. When popular AI image generators were prompted with the term 'cardiac patient,' the resulting images predominantly depicted older white males. This is especially troubling, as cardiovascular disease (CVD) is the leading cause of death among women worldwide. Women with CVD continue to be under-diagnosed due to persistent misconceptions and a lack of awareness.
Moreover, women are more likely than men to die following a heart attack. Against this backdrop, the study highlights how such imbalanced portrayals in AI can influence the perceptions of medical professionals and the public, reinforcing harmful stereotypes and ultimately compromising the quality of care provided to women.
Stereotyping
AI-powered assistants, such as Siri and Alexa, are often designed with ‘feminine’ voices and personalities, reinforcing stereotypes that associate women with servitude and emotional labour. These systems are intentionally programmed to be polite, accommodating and subservient. In 2017, an investigation into virtual assistants revealed troubling responses to sexual harassment: Siri’s reply to "you’re a bitch" was "I’d blush if I could," while other bots responded flirtatiously to harassment or failed to set boundaries. Alarmingly, when questioned about sensitive topics like rape, some provided disturbing answers or directed users to harmful content.
Translation software has also demonstrated bias. When translating gender-neutral Hungarian sentences into English, Google Translate defaulted to male pronouns for terms such as “clever,” “reading,” “teaching,” “research,” and “making a lot of money,” while choosing female pronouns for “beautiful,” “sewing,” “cooking,” and “raising a child.”
Various reports have shown that stereotypical and sexist assumptions have crept into credit-scoring algorithms as well. A well-known case is that of a married couple who separately applied for a credit line increase with the Apple Card. Despite the wife having a higher credit score and stronger financial credentials, the husband was granted a credit limit 20 times higher. Apple co-founder Steve Wozniak experienced a similar discrepancy, receiving a credit limit ten times that of his wife, even though they shared assets and accounts.
What Can Be Done?
Mitigating gender bias in AI necessitates a multifaceted strategy, the first step being the use of diverse datasets that represent a broad range of factors such as gender, race, age, and sexuality. This goes hand in hand with the need for increased research, especially on marginalised groups like minority women, to ensure sufficient data is available for balanced outputs.
Furthermore, more women need to be recruited into every phase of AI development, as their perspective is crucial in identifying underlying biases that may otherwise go unnoticed. Rigorously assessing the performance of algorithms across demographic groups is also key to ensuring fairness and intersectionality (a minimal sketch of such an audit appears below). Throughout these efforts, transparency remains the most indispensable principle.
Developers should disclose the sources of their training data, explain their system's logic, and confront biases through mechanisms like "human-in-the-loop" oversight systems. In tackling gender bias, the integration of feminist methodologies, such as asking "the woman question," should also be embraced. This approach critically examines how technology may inadvertently harm or exclude women, especially by questioning whether AI models account for gendered disparities.
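As a minimal sketch of what assessing performance across demographic groups can look like in practice, the Python snippet below compares selection rates for two groups and flags a large gap for human review, in the spirit of the "human-in-the-loop" oversight mentioned above. The data, group labels and the 80% threshold are illustrative assumptions rather than a prescribed standard; real audits combine several metrics with qualitative review.

```python
# Minimal group-level audit sketch: compare how often a model selects
# people from each group and flag large gaps for human review.
# The decisions, groups and 80% threshold below are illustrative only.

from collections import defaultdict

# (group, model_decision) pairs, e.g. taken from a hiring model's test set
decisions = [
    ("women", 1), ("women", 0), ("women", 0), ("women", 0),
    ("men",   1), ("men",   1), ("men",   0), ("men",   1),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, decision in decisions:
    total[group] += 1
    selected[group] += decision

rates = {group: selected[group] / total[group] for group in total}
print("Selection rates:", rates)   # {'women': 0.25, 'men': 0.75}

# Crude heuristic: flag any group whose rate falls below 80% of the
# most favoured group's rate, then route those cases to a human reviewer.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Disparity flagged for '{group}': escalate to human review.")
```

Even a check this simple makes disparities visible and auditable, which is precisely the kind of transparency described above.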
Another example is the initiative of feminist AI, which challenges the male-dominated design of existing technologies and reimagines AI to revolve around fairness, agency, and collective participation. Feminist AI prioritises social justice and the needs of diverse communities, ensuring AI development is shaped by those it affects most.
Towards Gender-smart AI: Building an Equitable Future
Although the risks of gender bias in AI are manifold, the technology's potential to address some of the most pressing challenges of our time and pave the way for a brighter future is widely acknowledged. Positive outcomes of AI use concerning women have already been recorded across various sectors: tools like Glassdoor have exposed gender pay gaps, while platforms like Coursera and edX have highlighted enrolment imbalances and promoted more inclusive learning materials.
Moreover, AI’s role in improving women’s safety and combating different forms of violence has been a tremendously welcome development. Apps like bSafe, for example, provide safety alerts for women, while Chayn’s trauma-informed letter-writing tool empowers survivors to draft legal requests to law enforcement or tech companies.
However, even as AI offers innovative solutions, unchecked biases in these systems are further entrenching inequalities, undermining the broader struggle for gender equality and women’s empowerment. Given the mounting risks, establishing comprehensive and robust AI governance frameworks is more urgent than ever. As we work toward gender-smart AI, it is crucial for all stakeholders to consistently ask and build upon the question:
How can AI be designed to challenge and dismantle, rather than reinforce, gender inequality?
References
AIMultiple Research, March 22, 2025. “Bias in AI: Examples & 6 Ways to Fix it in 2025.”
Harvard Business Review, November 20, 2019. “4 Ways to Address Gender Bias in AI.”
Sideri M. & Gritzalis S., 2025. “Gender Mainstreaming Strategy and the Artificial Intelligence Act: Public Policies for Convergence,” Digit. Soc. 4(20).
Available at: https://doi.org/10.1007/s44206-025-00173-y
Smith G., & Rustagi I., 2021. “When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity,” Stanford Social Innovation Review.
Available at: https://doi.org/10.48558/A179-B138
The Catalyst, February 25, 2025. “Why we need feminist AI.”
UN Women, February 5, 2025. “How AI reinforces gender bias—and what we can do about it.”
Available at: https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it
UNESCO, OECD, IDB, 2022. “The Effects of AI on the Working Lives of Women.” Paris, France. p. 46.
Available at: https://unesdoc.unesco.org/ark:/48223/pf0000380861
Wellner G., 2022. “Some Policy Recommendations to Fight Gender and Racial Biases in AI,” The International Review of Information Ethics 32(1).
Available at: https://doi.org/10.29173/irie497