
Article by: Olivia Carling, Open Dialogues International Foundation


Young women participants work together on a laptop during an African Girls Can Code Initiative coding bootcamp held at the GIZ Digital Transformation Center in Kigali, Rwanda, in April 2024. Photo: UN Women


Artificial Intelligence is one of the most powerful inventions of the 21st century. It has the power to reshape our lives, yet it is not immune to the biases and inequalities present in our societies, and it can actually deepen them. The 2024 study “Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality”, published in Technological Forecasting and Social Change, explored how AI systems have huge potential either to reinforce existing divides or to become powerful tools for inclusion. Which outcome we get depends on how these systems are designed, trained and then governed, the last of which remains a controversial and difficult topic because the outcomes of AI integration and use are so unpredictable.


Vulnerable communities are constantly at risk of discrimination, and AI creates new channels for these harms against minorities, women and children. The technology develops so quickly that policymakers struggle to address a situation before their response is already outdated.



For minorities, biased AI can damage every part of life


AI is usually sold to us as unbiased. However, the data it learns from often carries the systemic inequalities and prejudices that operate in the real world. Bad data plays a quiet role in suppressing sections of society, and vulnerable groups like women and minorities are often the ones targeted. In our last article, Carlotta explored how indigenous voices are essential and must be embedded in AI models to empower minority groups.


This is also true for other groups. AI built within a patriarchal context pushes the patriarchy further. For example, an algorithm that learns that most people in a particular job are male can then favor male applicants when companies use it to sort through applications. The problem with AI drawing on all the information of the internet is that our data is polluted by myths and outdated beliefs, all of which feed discrimination based on gender, ethnicity and sexual identity.
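To make this mechanism concrete, here is a minimal, purely hypothetical sketch in Python. It is not any real recruitment system: the proxy feature name and all the numbers are invented for illustration. It simply shows how a naive screener that scores applicants by their resemblance to past hires reproduces a historical gender imbalance.

```python
# Hypothetical illustration only: a toy "screener" that scores applicants by
# the hire rate of past applicants sharing a gender-correlated proxy feature.
# The feature name and all numbers are invented.

# Toy historical records as (proxy_feature, was_hired) pairs.
# Past hiring was skewed, so the proxy feature is rare among past hires.
history = (
    [("no_proxy", True)] * 80
    + [("no_proxy", False)] * 20
    + [("womens_college", True)] * 5
    + [("womens_college", False)] * 15
)

def score(records, feature):
    """Return the historical hire rate for applicants with this feature."""
    outcomes = [hired for f, hired in records if f == feature]
    return sum(outcomes) / len(outcomes)

# A naive model that ranks new applicants by this score simply replays
# the old imbalance: the proxy group is scored far lower.
for feature in ("no_proxy", "womens_college"):
    print(feature, round(score(history, feature), 2))
# Expected output:
#   no_proxy 0.8
#   womens_college 0.25
```

Real screening systems are far more complex, but the underlying dynamic is the same: a model that learns "who gets hired" from records that were themselves shaped by bias will carry that bias forward.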


Much of human history has been shaped by the drive to establish a particular social and political order that extends privilege to men; in Western societies, that privilege narrows further to white men, and its effects are still visible today. There is every reason to assume that the residues of racist and sexist discrimination feed into our technology just as they continue to shape modern life.


A concerning example of racial bias, and of how it transfers from our societal problems into technological advances, is predictive policing. Predictive policing tools are already controversial, but with the implementation of AI they can become downright discriminatory, as they make assessments about who will commit future crimes and where those crimes may occur. They carry a huge risk of exacerbating the historical over-policing of disadvantaged communities along racial and ethnic lines.


"Because law enforcement officials have historically focused their attention on such neighbourhoods, members of communities in those neighbourhoods are overrepresented in police records. This, in turn, has an impact on where algorithms predict that future crime will occur, leading to increased police deployment in the areas in question."


- Ashwini K.P., UN Special Rapporteur on contemporary forms of racism, speaking during her interactive dialogue at the Human Rights Council’s 56th session in Geneva.


According to her findings, location-based predictive policing algorithms draw on links between places, events and historical crime data. They then predict when and where future crimes are likely to happen, and this shapes how police forces plan their patrols. Some argue that historical bias and discrimination against minorities have pushed communities into systemic poverty cycles that result in crime. But arrest statistics in places like the UK and US show that ethnic minorities are overrepresented in arrest and incarceration rates partly because of bias itself; Black people, for instance, are over three times more likely than white people to be arrested or deemed suspicious. The AI, however, is not trained to explain the historical reasons behind locations and arrest numbers. It simply spits out data, data shaped by discriminatory policing and by authority figures who are more lenient on white people. Police forces should not over-rely on predictive AI until these biases can be addressed and accounted for, and real change is seen. We cannot continue to over-punish minorities while letting privileged people off more easily, and the problem only worsens the more we rely on predictive AI trained on bad data.
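The feedback loop described above can be illustrated with a toy simulation. The sketch below is purely hypothetical (it is not any real predictive policing system, and the rates are invented): two neighbourhoods have identical underlying crime, but patrols always go where recorded crime is highest, so only one neighbourhood keeps generating records and therefore keeps attracting patrols.

```python
# Hypothetical illustration only: two neighbourhoods with the SAME underlying
# crime rate, but crime is only *recorded* where officers patrol, and patrols
# go wherever past records are highest. All numbers are invented.

import random

random.seed(0)
TRUE_CRIME_RATE = 0.3            # identical in both neighbourhoods
recorded = {"A": 12, "B": 8}     # historical records already skewed towards A

for week in range(52):
    # "Predictive" allocation: patrol the area with the most recorded crime.
    patrolled = max(recorded, key=recorded.get)
    for area in ("A", "B"):
        crime_happened = random.random() < TRUE_CRIME_RATE
        if crime_happened and area == patrolled:
            recorded[area] += 1   # only patrolled crime enters the records

print(recorded)
# Typical result: something like {'A': 27, 'B': 8} -- A now looks far more
# "criminal" even though both areas had identical underlying crime.
```

The point is not the specific numbers but the dynamic: predictions trained on records of past enforcement end up predicting enforcement, not crime.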



Artificial Intelligence and gender inequality


Photo: Lara Jameson on Pexels 


Just as our world has a racism problem, it has a gender equality problem, and AI not only mirrors gender bias but is already posing active risks to gender equality. AI poses serious dangers to women, as well as to minorities, primarily through the rise of gender-based violence and the dehumanization of women. The technology advanced quickly and people have taken advantage of it: the terrifying reality is that anyone with a device can make a deepfake, undermining women’s safety and privacy. This abuse often spills over from online spaces into real-life harm. AI is actively amplifying violence against women and showing younger generations a stark imbalance in who gets targeted. Technology-facilitated violence against women and girls is growing, with an estimated 16-58% of women worldwide affected, and lawmakers struggle to keep up with ever-changing technological developments well enough to address the issue properly. AI tools are used to target women, enabling blackmail, stalking and harassment with huge real-world consequences.


Consider this: developed by male teams, the majority of deepfake tools are not even designed to work on images of a man’s body. 


AI is widening the gender divide and making it dangerous for girls and women simply to exist in society. One example comes from Pennsylvania, where two teens used AI to create fake nudes of female classmates and received probation. The boys were 14 at the time and created about 350 images depicting at least 59 girls under 18. Worse still, the judge said he had not heard the boys apologize a single time, despite giving them several opportunities to comment. Tools like these are dangerous and harmful, and they create serious safety concerns for young girls. In an age where social media is woven into everyone’s daily life, it is impossible to remain anonymous or to remove every image of oneself from the internet.


Until these tools are banned or properly governed, billions of people remain under threat, and we will not be rid of this fear. Minorities, women and children, all targeted by dehumanizing and prejudiced people, have their abuse and unfair treatment facilitated through AI, and the technology is only getting better.


Artificial intelligence is incredibly impressive as a technology and shows how rapidly society and innovation can develop. But with innovation and creation comes the responsibility to protect those who are still at a disadvantage because of colonialist and patriarchal mindsets.



Bibliography


Adib-Moghaddam, A. (2023). Biased AI Algorithms Can Damage Almost Every Part Of A Minority’s Life. [online] SOAS. Available at: https://www.soas.ac.uk/about/blogs/minorities-biased-ai-algorithms-can-damage-almost-every-part-life.


Bircan, T. and Özbilgin, M.F. (2024). Unmasking Inequalities of the Code: Disentangling the Nexus of AI and Inequality. Technological Forecasting and Social Change, [online] 211, p.123925. doi:https://doi.org/10.1016/j.techfore.2024.123925.


Capraro, V. (2024). The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making. PNAS Nexus, [online] 3(6). doi:https://doi.org/10.1093/pnasnexus/pgae191.


Jarvis, H. (2024). How AI Is Hardwiring Inequality — and How It Can Fix Itself. [online] Brunel University of London. Available at: https://www.brunel.ac.uk/news-and-events/news/articles/How-AI-is-hardwiring-inequality-%e2%80%94-and-how-it-can-fix-itself.


Mulvihill, G. (2026). Teens Who Used AI to Create Hundreds of Fake Nudes of Classmates Sentenced to Probation in Pennsylvania. [online] CBS News. Available at: https://www.cbsnews.com/pittsburgh/news/pennsylvania-teenagers-probation-ai-fake-nudes-classmates/.


Schellekens, P. and Skilling, D. (2024). Three Reasons Why AI May Widen Global Inequality. [online] Center for Global Development. Available at: https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality.


Scolforo, M. (2026). Pennsylvania Teens Get Probation after Using AI to Create Fake Nudes of Classmates. [online] ABC7 Los Angeles. Available at: https://abc7.com/post/lancaster-pennsylvania-teens-get-probation-using-ai-create-fake-nudes-classmates/18778852/.


UN Women (2024). Artificial Intelligence and Gender Equality. [online] UN Women. Available at: https://www.unwomen.org/en/articles/explainer/artificial-intelligence-and-gender-equality.


UN Women (2025). AI-powered Online abuse: How AI Is Amplifying Violence against Women and What Can Stop It. [online] UN Women. Available at: https://www.unwomen.org/en/articles/faqs/ai-powered-online-abuse-how-ai-is-amplifying-violence-against-women-and-what-can-stop-it.


United Nations (2024). Racism and AI: ‘Bias from the past Leads to Bias in the Future’. [online] United Nations Human Rights (OHCHR). Available at: https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future.




Article by: Carlotta de Carolis Villars, Open Dialogues International Foundation


Pandey, J. (2023) | Medium


In its 24th session, the UN Permanent Forum on Indigenous Issues proclaimed the necessity for Indigenous Peoples to actively participate in shaping the future of AI throughout its lifecycle, from training to governance. Both AI and the rights of indigenous communities are individually at the heart of multiple SDGs and are being incorporated in the UN Agenda, but as the technology evolves the two seem to be increasingly incompatible. 


The rise of AI and its integration in an exponentially increasing number of tasks, professions and mechanisms seems to be unstoppable, and fits right into SDG 9 (Industry, Innovation and Infrastructure). However, the current explosion of AI technologies and the water, energy and land-intensive data centres that support them are currently at the antipodes of most other SDGs, particularly environmental goals (SDGs 13, 14 and 15). Less evidently, poorly distributed advancements in AI technology and profits from AI are undermining SDG 10, Reduced Inequalities. Indigenous communities are among the most vulnerable groups in the Fourth Industrial Revolution: they are facing territory and resource takeover, environmental repercussions, increase in income and education inequality, and even the return of what scholars have flagged as colonial dynamics. 


This article firstly outlines the challenges specific to indigenous communities that AI technology represents throughout its life cycle, from the construction of specialised digital infrastructure to its legacy of income inequality and harmful cultural bias. Subsequently, it highlights the ways in which indigenous people are instrumentalising this tool to protect their identity and livelihoods. And finally, it summarises the ways in which indigenous cultures, knowledge systems and actorness must be embedded in AI models for them to not only minimise harm but empower vulnerable groups. 


By taking these steps, the article aims to provide a clear account of the interactive pathways of AI and indigenous communities at the level of the UN SDGs. 



Territory takeover


The first and most evident impact of AI expansion on indigenous communities is the territory takeover they have experienced in the past couple of years, especially in the United States. The rapid expansion threatens not only to chip away at the already diminished indigenous-occupied and indigenous-owned lands that remain, but to eat away at territories and water resources that are sacred to local cultures and central to local legends and rituals. Fragile traditional knowledge systems and treaty rights are also deeply connected to these areas, including some that are part of the UNESCO Intangible Cultural Heritage.


In addition to the threats associated with this expansion, the methods used to arrive at the sale of land for large-scale digital projects often sit somewhere between unfairness and unlawfulness: in Canada, the Sturgeon Lake Cree Nation opposed the proposal for the world’s largest planned industrial park after no consultation occurred between the municipality and the First Nation, a legal requirement given the plan’s overlap with their territory. The lawsuit was turned down and the sale continued.



Environmental repercussions


AI data centres are among the most energy-intensive technologies of this day and age, and because their expansion has yet to stimulate an equally fast growth in the green transition, they are also among the most polluting. Data centres also require incredibly high quantities of water for cooling, drawn from essential freshwater sources in the surrounding area. Because of this, indigenous communities face increasing challenges around water collection and management, as well as in the agricultural and ecological sectors.


While this strain on environmental resources is harmful for all population centres, it takes a higher toll on communities that rely closely on their local ecosystems, and on their natural functioning, for the resources they draw from them.



Increase in inequality: income and digital divide


AI is opening new strands of work in a myriad of sectors, providing creative opportunities to innovators and pioneers worldwide. However, only those with access to these technologies can even consider joining this professional elite, and because of geographical and discriminatory isolation, indigenous people are lagging behind. The World Summit on the Information Society has flagged the structural obstacles indigenous people face in accessing new technologies as well as tech careers, and this group has been shown to suffer from both barriers to entry and underrepresentation in the highest-paying spheres of tech-related jobs. Moreover, the creative-destruction paradox is truer in AI than ever, with a significant share of jobs expected to become automated in the next two decades. A WEF report finds that AI-led automation has an ethnic bias, discriminating against people of colour and especially indigenous workers.



Reinforcement of colonial dynamics


In addition to the more obvious subtraction of land and natural resources, there are two paradoxically opposing ways in which AI development perpetuates colonial dynamics: one occurs through the exclusion of indigenous voices, and the other through the theft of indigenous heritage and practices without the consent and guidance of appropriate representatives.


The former involves excluding indigenous representatives from AI model training, leading to a homogenisation of practices into Western frameworks “that do not necessarily reflect or serve Indigenous community needs”. The latter, identified as ‘data colonialism’ or ‘data extractivism’, involves using information extracted from Indigenous Knowledge Systems without proper oversight or an adapted system ensuring the accuracy of the content produced; so, while indigenous inputs are not considered, indigenous intellectual property is still instrumentalised. Because GenAI prioritises form over content or ethics, it often recycles material from indigenous communities into false or inaccurate content that spreads misinformation about those same communities.



Harmful cultural bias


Because the data in AI training models is dominated by Western frameworks and epistemologies that exclude or misrepresent indigenous knowledge, AI can reproduce and reinforce harmful cultural biases and exclusion. This is particularly dangerous in contexts where non-indigenous people have little prior knowledge of indigenous communities: these communities often appear esoteric and obscure because of forced isolation or strong community practices that tend towards the inward securitisation of knowledge, so replacing information that comes directly from indigenous people with AI-generated content is particularly detrimental and can lead to further ostracisation and othering.



AI opportunities for indigenous heritage


Despite these threats, could AI itself play an active role towards the protection of vulnerable communities? 


AI technologies do represent an incredible advancement for their users across an immense number of fields, and this value can be, and has been, harnessed by indigenous communities as well, including to strengthen indigenous identity itself by preserving intergenerational knowledge, increasingly unused language systems, and cultural practices.


Probably the most evident case of this use of AI comes from computer scientist Michael Running Wolf, at the Mila-Quebec Artificial Intelligence Institute, who is developing the First Languages AI Reality, a tool “designed to respect data sovereignty and linguistic self-determination” that includes speech recognition models for over 200 endangered languages. 

These projects already promise great advances in maintaining indigenous languages and cultures at risk of extinction, but for now their scale remains modest. Moreover, a very small proportion of computer and data scientists (at both the bachelor’s and doctorate levels) are indigenous. Some initiatives are tackling this challenge: IndiGenius, Rewriting the Code and the Wihanble S’a Center for Indigenous AI are training indigenous computer science students in the preservation of heritage through this technology.


Moreover, the ethical incorporation of indigenous heritage into AI training models would add value well beyond indigenous communities: technologies that a staggering number of people blindly use as reliable sources could do with the principles of reciprocity, sustainability and collective well-being carried by Indigenous Knowledge Systems.



Steps forward in light of the reconciliation of the SDGs


AI represents an incredible opportunity for indigenous individuals and communities worldwide through digital integration, job creation and most importantly the preservation of identity and cultural heritage. However, this must happen following crucial safeguards: 

  1. Data centre expansion and construction on indigenous territories, and any drawing on energy sources used by indigenous communities, must be based on and respect legal agreements with their representatives. 

  2. Data centres should be built following technical guidelines on safe energy and water consumption, especially from local sources. 

  3. Indigenous Knowledge Systems, epistemologies and academic information on indigenous peoples must be incorporated into AI training models to avoid cultural biases or the exclusion of indigenous narratives, but this must occur with respect for cultural secrecy practices and data privacy, and with the oversight of indigenous communities to ensure the accuracy of reported material and avoid intellectual and cultural theft. 

  4. Resources must be dedicated to scholarships and educational programmes for native students in STEM, with a focus on computer science. 


Without these guidelines, the uncontrolled expansion of AI risks disrupting already fragile livelihoods by depleting local and regional ecosystems, reinforcing biases and community-wide isolation patterns, and replicating colonial dynamics of land, resource and knowledge appropriation, all while excluding the voices of the oppressed, ultimately undermining the principles of the 2030 UN Agenda. 




Bibliography


Barba-Escoto, L., van Wijk, M.T. and López-Ridaura, S. (2020) ‘Non-linear interactions driving food security of Smallholder Farm households in the Western Highlands of Guatemala’, Frontiers in Sustainable Food Systems, 4. doi:10.3389/fsufs.2020.00051. 


Chagnon, C. and Hagolani-Albov, S. (2023) ‘Data Extractivism’, in The European Digital Economy, pp. 186–203. doi:10.4324/9781003450160-14.


Chan, K. (2019) AI-powered automation will have an ethnic bias, World Economic Forum. Available at: https://www.weforum.org/stories/2019/07/job-losses-ai-automation/ (Accessed: 23 April 2026). 


Chanza, N. and Musakwa, W. (2021) ‘Indigenous practices of ecosystem management in a changing climate: Prospects for ecosystem-based adaptation’, Environmental Science & Policy, 126, pp. 142–151. doi:10.1016/j.envsci.2021.10.005. 


Couldry, N. and Mejias, U.A. (2019) ‘Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject’, Television & New Media, 20(4), pp. 336–349.


Fawcett-Atkinson, M. (2026) Kevin O’Leary pins water licence for $70-Billion data centre project on a small Alberta Municipality | Canada’s National Observer: Climate News, Canada’s National Observer. Available at: https://www.nationalobserver.com/2026/04/20/news/kevin-oleary-pins-water-licence-70-billion-data-centre-project-small-alberta


Humphries, G.R., Bragg, C. and Overton, J. (2014) ‘Pattern recognition in long‐term sooty shearwater data: Applying machine learning to create a harvest index’, Ecological Applications, 24(8), pp. 2107–2121. doi:10.1890/13-2023.1. 


Kapor Foundation and AISES (2023) State of Diversity: The Native Tech Ecosystem. Available at: https://www.kaporcenter.org/wp-content/uploads/2023/10/NativeTechEcosystem_v5a.pdf


Kim, I. (2025) How Indigenous Engineers Are Using AI to Preserve Their Culture, NBCU Academy. Available at: https://nbcuacademy.com/how-indigenous-engineers-are-using-ai-to-preserve-their-culture/


Maracle, C. (2026) Be wary of AI-generated content on indigenous cultures, say experts, CBC News. Available at: https://www.cbc.ca/news/indigenous/ai-indigenous-language-culture-9.7126508


Mila (2026) First languages AI reality, Mila. Available at: https://mila.quebec/en/ai4humanity/applied-projects/first-languages-ai-reality


Ochungo, P., Khalaf, N. and Merlo, S. (2022) ‘Remote Sensing for biocultural heritage preservation in an African semi-arid region: A case study of Indigenous Wells in northern Kenya and southern Ethiopia’, Remote Sensing, 14(2). doi:10.3390/rs14020314. 


Pandey, J. (2023) Balancing nature and technology: Integrating Indigenous Knowledge in AI sustainability, Medium. Available at: https://medium.com/@juhipandey4/balancing-nature-and-technology-integrating-indigenous-knowledge-in-ai-sustainability-c082008578a9


Schertow, J.A. (2026) Indigenous peoples push back as data centers expand across North America, Intercontinental Cry. Available at: https://icmagazine.org/indigenous-peoples-push-back-as-data-centers-expand-across-north-america/


Tariq, R., Cetina-Quiñones, A.J. and Cardoso-Fernández, V. (2021) ‘Artificial Intelligence assisted technoeconomic optimization scenarios of hybrid energy systems for water management of an isolated community’, Sustainable Energy Technologies and Assessments, 48, p. 101561. doi:10.1016/j.seta.2021.101561. 


UN DESA (2015) World Summit on Information Society important for Indigenous Peoples, United Nations Commission for Social Development (CSocD) | Division for Inclusive Social Development (DISD). Available at: https://social.desa.un.org/csocd.


UNESCO (2026) Browse the Lists of Intangible Cultural Heritage and the Register of Good Safeguarding Practices, UNESCO Intangible Cultural Heritage. Available at: https://ich.unesco.org/en/lists


Vinson, N. (2025) The AI Landscape in Canada: A Critical Moment for First Nations, First Nations Technology Council. Available at: https://www.technologycouncil.ca/news/the-ai-landscape-in-canada-a-critical-moment-for-first-nations/






On 27th March, ODIF hosted our monthly DialogueON series, bringing together a diverse group of participants to explore sustainability within the textiles and fashion industries. While fast fashion often feels accessible and affordable, it continues to rely on systems that place significant strain on both the environment and the people producing our clothes. This session highlighted why rethinking our relationship with clothing is not only necessary, but a key part of our shared climate responsibility.


Antoinette on Durability


We were delighted to welcome our first speaker, Antoinette Garzon, whose work sits at the intersection of fashion, sustainability, human rights, and linguistics. Drawing on her experience as a tailor in Medellín, Colombia, and as a former lingerie shop owner, Antoinette offered a powerful Global South perspective on sustainability to challenge dominant narratives. She encouraged participants to reconsider what sustainability really means, emphasising that environmental concerns cannot be separated from social justice and economic realities.


Antoinette highlighted that durability is often overlooked in mainstream sustainability conversations. In many parts of South America, she explained, sustainability has historically been a matter of survival rather than innovation. Limited access to resources meant that clothing had to be made to last, often passed down through generations. This stands in contrast to fast fashion models, where durability is rarely prioritised. She also challenged assumptions around certifications, noting that while EU benchmarks and organic labels are important, they do not fully capture the realities of sustainable practices globally. She argued that makers and consumers need to “go back to basics,” prioritising natural fibres, long-lasting garments, and mindful consumption over corporate-driven narratives.


Harriet on the Sustainability Triangle


Our second speaker, Harriet Cleary, founder of Sew Me Sunshine, built on these ideas by offering insight into the complexities of fabric production and retail. As both a sewist and a business owner, Harriet spoke about the environmental and social impacts embedded within the textile industry, from the exponential growth of fibre production to significant water usage and carbon emissions. She reinforced the importance of viewing sustainability through a three-part triangle of environment, people, and economy, highlighting that all three must be balanced to create meaningful change.


Harriet also unpacked some of the terminology often used in the fashion industry. She discussed certifications such as the Better Cotton Initiative (BCI) and compared them to stricter standards like GOTS, encouraging participants to look beyond labels and understand what they truly represent. She touched on the complexities of recycled materials, such as Econyl, noting that while they offer innovative solutions, they can also involve intensive chemical processes and raise questions about durability and long-term impact. Similarly, she explored fabrics like Tencel and Ecovero, which aim to offer more sustainable alternatives, and stressed the importance of transparency through improved labelling and emerging concepts like textile passports. A particularly engaging part of Harriet’s discussion focused on “deadstock” materials. She encouraged participants to think critically about what qualifies as genuine deadstock versus overproduction, while also recognising the value of repurposing existing materials. While textiles may never be 100% sustainable, small, informed changes in how we source, make, and use clothing can have a significant impact.


The Dialogue


The discussion that followed was rich and thought-provoking. Participants reflected on what sustainability means in their own contexts, raising questions about buying less, mending clothes, and the balance between durability and biodegradability. Both speakers emphasised the importance of extending the life of garments, whether through repair, reuse, or passing items on.


There were also important contributions from participants across different regions, highlighting challenges such as access to sustainable materials, the role of local sourcing, and the need for stronger political will and education. Conversations touched on affordability and the potential of concepts like “re-wear” to make sustainability more accessible. Others raised questions about how profits from textile industries can better benefit local communities, underlining the importance of equity and knowledge-sharing.



The session concluded with a powerful reminder from both speakers. Antoinette urged us to look to the Global South for existing knowledge and practices, encouraging a return to simplicity, durability, and confidence in local solutions. Harriet reinforced the importance of education and awareness, noting that more conscious consumption and informed decision-making are essential steps forward.


We want to say a huge thank you to everyone who joined and contributed to this dialogue. It was an inspiring and insightful event that sparked meaningful conversations across borders and perspectives. Let’s continue to challenge overconsumption, deepen our understanding, and take collective responsibility for the impact of our clothing.


~Naomi Lea





