Missing Voices In AI

AI is steadily becoming a routine part of daily life, whether it is asking ChatGPT a question about your homework or using AI software to edit a video. However, for AI to be the most useful and beneficial tool it can possibly be, the question of whose voices are heard and whose are not must be addressed. AI is already a widely used source of information, but with voices from all communities and populations, it has the potential to be an even more potent force in our society. 

Consequences of Underrepresentation

The consequences of a lack of representation in AI are substantial, as AI is quickly becoming a primary source of information in our society. In her article for Medium, Nicole Cacal writes: “The divide in AI access often mirrors and amplifies existing socio-economic inequalities. For those with ample access, AI can be a gateway to enhanced personalization, efficiency, and opportunities. Yet, for underrepresented groups, the lack of access to AI technologies means not only missing out on these benefits but also potentially facing increased marginalization as AI becomes more ingrained in everyday life. This digital divide can lead to a cultural chasm where certain voices and experiences are underrepresented or misrepresented in AI-driven platforms and solutions.” (Cacal). Cacal describes the inequalities that come with the growing divide in AI access between privileged and underprivileged groups, a problem that only deepens as AI becomes more popular and widespread. As she explains, this digital divide separates those who use AI to assist their daily lives, even while that AI remains biased and unaware of their experiences, from those who have little or no access to AI at all. To close this chasm, AI creation and development must include voices from across the full spectrum of communities and populations. 

Exclusion in AI Policy

AI is on track to become a tool for assisting policymaking in the near future, but for that future to be one where equity thrives, the voices of all must first be heard when AI is built for policymaking. In his article for Politico.eu, Mark Scott describes how the Global North dominates AI development and policymaking: “Right now, these policy conversations — that are already dominated by corporate interests with little, if any, civil society buy-in — are almost exclusively the domain of the so-called Global North. Developing economies, despite their large populations and interest in AI, are getting left out.” (Scott). The exclusion of developing economies from AI development is a significant problem. It arises because the Global North, comprising wealthier and more developed nations, dominates the creation and training of AI technologies. By relying heavily on data and input from these regions, AI systems end up reflecting the perspectives, values, and biases of the Global North, which leads to inaccurate or unfair outcomes for people in other parts of the world.

Toward Inclusive News and Media

As AI develops into a resource for information and news, all voices must be involved in shaping it so that it can become as unbiased and accurate a news platform as possible, doing something humans struggle to do: report without bias. AI has the potential to make news outlets more effective, both by offering a more impartial way of informing the public and by generating articles and data based on recent news for people to read and use. But for this to come to fruition, AI development must first become a place where diversity and inclusion thrive. Without that inclusion, AI risks reinforcing existing biases rather than eliminating them. The goal should be for AI to function as a tool for fairness, presenting news and information in a way that reflects the full spectrum of human experiences and viewpoints.

The Path Forward

By prioritizing inclusivity in AI development, we can build a future where AI serves as a truly unbiased, efficient, and valuable resource, assisting in everything from policymaking to news dissemination. If AI development instead continues down a non-inclusive path, AI will become a source of biased and unreliable information that benefits only certain demographics. Prioritizing inclusivity is the first step developers can take toward making AI a great resource for all.


Closing Note: 

This blog is part of #OwnTheAlgorithm, iFp’s Emerging Innovators campaign to rethink how AI is built—and who it’s built for. We invite young people and the communities they’re part of to question AI systems, claim their role in its development, and build a future where AI reflects our values, not just profit.

Question it. Own it. Build it.

Works Cited

Cacal, Nicole. “Unequal Access to AI and Its Cultural Implications.” Medium, 24 January 2024, https://medium.com/the-modern-scientist/unequal-access-to-ai-and-its-cultural-implications-0948a8042c91. Accessed 4 March 2025.

Scott, Mark. “The Global South’s Missing Voice in AI.” Politico.eu, 31 August 2023, https://www.politico.eu/newsletter/digital-bridge/the-global-souths-missing-voice-in-ai/.
