Article by: Olivia Carling, Open Dialogues International Foundation

Young women work together on a laptop during an African Girls Can Code Initiative coding bootcamp held at the GIZ Digital Transformation Center in Kigali, Rwanda, in April 2024. Photo: UN Women
Artificial intelligence is one of the most powerful inventions of the 21st century. It has the power to reshape our lives, but it is not immune to the biases and inequalities present in our societies, and it can actually deepen them. The 2024 study “Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality”, published in Technological Forecasting and Social Change, explored how AI systems can either reinforce existing divides or become powerful tools for inclusion. Which outcome prevails depends on how the systems are designed, trained and governed, with governance being a particularly controversial and difficult topic, since the outcomes of AI integration and use are so unpredictable.
Vulnerable communities are constantly at risk of discrimination, and AI exposes minorities, women and children to these dangers in new ways, developing so quickly that policymakers struggle to respond before their responses are already outdated.
For minorities, biased AI can damage every part of life
AI is usually sold to us as unbiased. In reality, the data it learns from often encodes systemic inequalities and prejudices, reflecting how they operate in the real world. Bad data is complicit in suppressing sections of society, and vulnerable groups such as women and minorities are often the ones targeted. In our last article, Carlotta explored how indigenous voices are essential and must be embedded in AI models to empower minority groups.
The same is true for other groups. AI built in a patriarchal context pushes the patriarchy further: an algorithm that learns that most people in a particular job are male may then favor male applicants when companies use it to sort through applications. Because AI draws on so much of the internet, its data is polluted by myths and outdated beliefs, all leading to discrimination based on gender, ethnicity and sexual identity.
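To make this mechanism concrete, here is a minimal sketch, using purely synthetic data and a hypothetical screening model (nothing here comes from any real hiring system), of how a classifier trained on gender-skewed hiring records reproduces the skew:

```python
# Minimal sketch: a screening model trained on gender-biased hiring labels.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)       # 0 = female, 1 = male
skill = rng.normal(0, 1, n)          # skill is distributed identically across genders
# Historical hiring favored men regardless of skill, so the labels are biased.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill who differ only in gender:
candidates = np.array([[0, 1.0], [1, 1.0]])
p_female, p_male = model.predict_proba(candidates)[:, 1]
print(f"P(hire | female, skill=1.0) = {p_female:.2f}")
print(f"P(hire | male,   skill=1.0) = {p_male:.2f}")   # noticeably higher
```

The model never sees an explicit rule favoring men; it simply learns the pattern from the biased labels, which is exactly how a system sold to us as unbiased can still discriminate.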
Much of human history has been shaped by efforts to establish a particular social and political order, one that extends privilege to men, and in Western societies specifically to white men; the effects are still visible today. There is strong reason to believe that the residues of racist and sexist discrimination feed into our technology just as they continue to shape modern life.
A concerning example of how racial bias transfers from our societies to our technologies is predictive policing. Predictive policing tools are already controversial, and with the implementation of AI they can become outright discriminatory, as they make assessments about who will commit future crimes and where those crimes may occur. They carry a huge risk of exacerbating the historical over-policing of disadvantaged communities along racial and ethnic lines.
"Because law enforcement officials have historically focused their attention on such neighbourhoods, members of communities in those neighbourhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question."
- Ashwini K.P., UN Special Rapporteur on contemporary forms of racism, speaking during her interactive dialogue at the Human Rights Council’s 56th session in Geneva.
According to her findings, location-based predictive policing algorithms draw on links between places, events and historical crime data, then predict when and where future crimes are likely to happen, which shapes how police forces plan their patrols. Some argue that the data simply reflects reality, that historical bias and discrimination have pushed minorities into systemic poverty cycles that result in crime. But this argument misses the point: in places like the UK and US, statistics show that ethnic minorities are overrepresented in arrest and incarceration rates because of bias itself, with black people, for example, over three times more likely than white people to be arrested or deemed suspicious. The AI is not trained to explain the historical reasons behind locations and arrest numbers. It spits out data: data shaped by discriminatory police and authority figures who are more lenient on white people. Police forces should absolutely not over-rely on predictive AI until these biases can be addressed and accounted for, and real change is seen. We cannot continue to over-punish minorities while letting privileged people off more easily, and the problem only worsens the more we depend on predictive AI trained on bad data.
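The feedback loop she describes can be illustrated with a toy simulation. Everything below is a hypothetical assumption made for illustration, including the district names, patrol counts and crime rate; both districts are given the same true crime rate, but the one that starts out over-policed generates more records and therefore keeps attracting patrols:

```python
# Toy simulation of the predictive-policing feedback loop.
# All numbers are hypothetical; both districts share the SAME true crime rate.
import random

random.seed(42)
TRUE_CRIME_RATE = 0.05                           # identical in both districts
patrols = {"district_A": 30, "district_B": 10}   # A starts out over-policed
recorded = {"district_A": 0, "district_B": 0}

for year in range(10):
    for district, n_patrols in patrols.items():
        # Crimes are only recorded where patrols are deployed, so the record
        # count tracks police presence rather than the true crime rate.
        recorded[district] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols * 100)
        )
    # "Prediction" step: a fixed budget of 40 patrols is reallocated in
    # proportion to historical records, reinforcing the initial imbalance.
    total = sum(recorded.values())
    patrols = {d: max(1, round(40 * recorded[d] / total)) for d in recorded}

print(recorded)   # district_A ends up with several times more recorded crime
print(patrols)    # ...and keeps receiving most of the patrol budget
```

Although the two districts are identical in reality, the records and the patrol allocation never even out, because the algorithm can only see crime where officers were sent to look for it.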
Artificial intelligence and gender inequality

Photo: Lara Jameson on Pexels
Just as our world has a racism problem, it has a gender inequality problem, and AI not only mirrors gender bias but already poses active risks to gender equality. AI endangers women, as well as minorities, primarily through the increase of gender-based violence and the dehumanization of women. The technology advanced quickly, and people have taken advantage of it. The terrifying reality is that anyone with a device can make a deepfake, compromising women’s safety and privacy, and the abuse often spills over from online spaces into real-life harm. AI is actively amplifying violence against women and showing younger generations a gender imbalance in who is harmfully targeted. Technology-facilitated violence against women and girls is growing, with an estimated 16-58% of women worldwide affected, and the scary thing is that lawmakers struggle to keep up with the ever-changing technology in order to address the issue properly. AI tools target women and enable abuse, blackmail, stalking and harassment with huge real-world consequences.
Consider this: the majority of deepfake tools, developed by male teams, are not even designed to work on images of a man’s body.
AI is widening the gender divide and making it dangerous for girls and women to exist in society. One example comes from Pennsylvania, where two teenagers used AI to create fake nudes of female classmates and then received probation. The boys were 14 at the time and created about 350 images depicting at least 59 girls under 18. Worse still, the judge said he had not heard the boys apologize a single time, despite giving them several opportunities to comment. Tools like these are dangerous and harmful, and they create serious safety concerns for young girls. In an age where social media pervades everyone’s daily life, it is impossible to remain anonymous and to remove every image of oneself from the internet.
Until these tools are banned or properly governed, billions of people remain under threat, and we will not be rid of this fear. Minorities, women and children, all targeted by dehumanizing and prejudiced people, see their abuse and unfair treatment facilitated by AI, and the technology is only getting better at it.
Artificial intelligence is an incredibly impressive technology and a testament to the rapid development of society and innovation. But with innovation and creation comes the responsibility to protect those who are still at a disadvantage because of colonialist and patriarchal mindsets.
Bibliography
Adib-Moghaddam, A. (2023). Biased AI Algorithms Can Damage Almost Every Part Of A Minority’s Life. [online] SOAS. Available at: https://www.soas.ac.uk/about/blogs/minorities-biased-ai-algorithms-can-damage-almost-every-part-life.
Bircan, T. and Özbilgin, M.F. (2024). Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality. Technological Forecasting and Social Change, [online] 211, p.123925. doi:https://doi.org/10.1016/j.techfore.2024.123925.
Capraro, V. (2024). The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making. PNAS Nexus, [online] 3(6). doi:https://doi.org/10.1093/pnasnexus/pgae191.
Jarvis, H. (2024). How AI Is Hardwiring Inequality — and How It Can Fix Itself. [online] Brunel University of London. Available at: https://www.brunel.ac.uk/news-and-events/news/articles/How-AI-is-hardwiring-inequality-%e2%80%94-and-how-it-can-fix-itself.
Mulvihill, G. (2026). Teens Who Used AI to Create Hundreds of Fake Nudes of Classmates Sentenced to Probation in Pennsylvania. [online] CBS News. Available at: https://www.cbsnews.com/pittsburgh/news/pennsylvania-teenagers-probation-ai-fake-nudes-classmates/.
Schellekens, P. and Skilling, D. (2024). Three Reasons Why AI May Widen Global Inequality. [online] Center for Global Development. Available at: https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality.
Scolforo, M. (2026). Pennsylvania Teens Get Probation after Using AI to Create Fake Nudes of Classmates. [online] ABC7 Los Angeles. Available at: https://abc7.com/post/lancaster-pennsylvania-teens-get-probation-using-ai-create-fake-nudes-classmates/18778852/.
UN Women (2024). Artificial Intelligence and Gender Equality. [online] UN Women. Available at: https://www.unwomen.org/en/articles/explainer/artificial-intelligence-and-gender-equality.
UN Women (2025). AI-powered Online abuse: How AI Is Amplifying Violence against Women and What Can Stop It. [online] UN Women. Available at: https://www.unwomen.org/en/articles/faqs/ai-powered-online-abuse-how-ai-is-amplifying-violence-against-women-and-what-can-stop-it.
United Nations (2024). Racism and AI: ‘Bias from the Past Leads to Bias in the Future’. [online] United Nations Human Rights (OHCHR). Available at: https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future.