Biases in AI - Through the Eyes of an Emerging Innovator

The term "artificial intelligence" was coined in 1955, and the field has evolved and advanced significantly since then. Many people who use AI perceive it as flawless and completely trustworthy. This false belief has already caused significant damage to society and will continue to harm people if we do not acknowledge AI's imperfections. The simple fact is that most AI systems work by detecting patterns in data, and those patterns often carry societal biases. We should try to identify these biases and find ways to combat them.

How Do Biases in AI Occur?

Biases in AI can occur for a multitude of reasons. First, the unfairness seen in AI often reflects the internal biases of its creators. When developing an AI product, people sometimes unintentionally train it to mimic their personal preferences. For example, when training a computer to recognize a shoe, you may teach it to identify only the specific type of shoe you picture when you think of a shoe, and the computer may then disregard all the other types of shoes that exist.

To combat this form of bias, people shifted their focus to machine learning, where computers learn how to respond by finding patterns in their training data and in past interactions with users. Although this may seem like a solution to bias, it is far from one, because people can never fully separate themselves from their biases. Biases are part of what makes us human.

Furthermore, the patterns that computers pick up during machine learning are often biased themselves. If 100 out of 100 people draw birds in the same way, the computer will learn that birds look only like that, completely ignoring the many kinds of birds the pattern does not represent.

Finally, biases can arise unintentionally from human error in developing algorithms. An algorithm is a set of instructions given to a computer to help it perform a task. If the data used to train an algorithm is not diverse and does not cover a wide range of possibilities, the computer may produce biased outputs. If we want the computer to output images of people, we must make sure we feed it a wide variety of different people so that it does not learn to see people in just one specific way.
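One simple way to catch this kind of problem early is to look at how the training data is distributed before any training happens. The Python sketch below uses made-up shoe labels (the categories and counts are invented purely for illustration) to show how lopsided data becomes obvious with a basic count.

```python
from collections import Counter

# Hypothetical training labels for an image dataset (invented for illustration).
training_labels = [
    "sneaker", "sneaker", "sneaker", "sneaker", "sneaker",
    "sneaker", "sneaker", "boot", "sandal",
]

counts = Counter(training_labels)
total = len(training_labels)

print("Share of each category in the training data:")
for category, count in counts.most_common():
    print(f"  {category:<8} {count / total:.0%}")

# If one category dominates (here, roughly 78% sneakers), a model trained on
# this data will tend to treat "shoe" as meaning "sneaker" and overlook the rest.
```

A check this simple will not remove bias on its own, but it makes the gaps in the data visible before the computer learns them.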

How Does AI Bias Affect Us?

AI bias affects people around the world in a multitude of ways, most often by reinforcing stereotypes and deepening systemic inequalities.

The harm caused by AI-driven stereotyping is evident in the real world, such as when facial recognition software fails to accurately identify people of color, or when a language system links a particular language with specific images of people, which can reinforce harmful stereotypes.

AI bias is present even in healthcare. Because women and other minority groups are underrepresented in the data behind predictive AI algorithms, healthcare software such as diagnostic systems produces less accurate results for those groups, for example for Black patients compared to White patients.

In advertising, studies show that Google's ad system has shown high-paying job ads to men much more frequently than to women. A recurring theme in these issues is that AI algorithms draw conclusions from previously collected data patterns, which leads them to target one group while ignoring others.

Police departments frequently use artificial intelligence to help decide which communities need more police presence to keep them safe. The AI identifies these areas from past crime data, which can lead it to develop its own biases and unfairly target certain neighborhoods over others. Because police then constantly monitor the same neighborhoods, more crime is recorded there, which the system reads as confirmation, while crime in other neighborhoods goes largely undetected and may rise. An AI system that relies solely on historical crime data will fail to recommend police presence in those areas when it is needed.
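To see how this feedback loop plays out, here is a small Python simulation under deliberately simplified assumptions: two neighborhoods with identical underlying crime, and patrols reallocated each week based only on recorded incidents. All numbers are invented for illustration.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying crime level (an assumption for
# illustration): 10 incidents per week each.
true_incidents = {"A": 10, "B": 10}

# Start with slightly more patrols in neighborhood A because of older records.
patrols = {"A": 6, "B": 4}
recorded = {"A": 0, "B": 0}

for week in range(20):
    for hood in true_incidents:
        # Police only record incidents they are present to observe:
        # more patrols means a larger share of incidents gets recorded.
        detection_rate = min(1.0, patrols[hood] / 10)
        recorded[hood] += sum(
            random.random() < detection_rate for _ in range(true_incidents[hood])
        )
    # "Predictive" step: next week's patrols follow the recorded totals.
    total_recorded = max(1, recorded["A"] + recorded["B"])  # avoid divide-by-zero
    patrols["A"] = round(10 * recorded["A"] / total_recorded)
    patrols["B"] = 10 - patrols["A"]

print("Recorded incidents after 20 weeks:", recorded)
print("Final patrol allocation:", patrols)
# Despite identical true crime, neighborhood A ends up with more recorded
# crime and more patrols, while B's incidents go largely undetected.
```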

How to Combat AI Bias:

Although bias and unfairness in AI are serious and complicated dilemmas, there are ways that people can combat and hopefully overcome them.

Firstly, we must understand that AI is nothing without human creators. We determine how AI acts and functions. This means that we are also the reason why AI has biases within it. The first step in eliminating bias within AI should be to teach people about the damage caused by unfair biases. If we make people less biased and fairer, then surely the AI created will also have less bias. 

Another approach is to address the flaws in machine learning. As discussed above, the primary problem is that machine learning relies on previously collected data. This could be mitigated by frequently updating the data an algorithm is trained on, so that the AI stays accurate and does not draw biased conclusions from outdated data.
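As a rough illustration of this idea, the toy Python sketch below keeps only a rolling window of the most recent examples and retrains on it. The "model" here is just a most-common-label predictor, an invented stand-in for a real system.

```python
from collections import Counter, deque

# A toy "model": it simply predicts the most common label it was trained on.
def train(examples):
    return Counter(examples).most_common(1)[0][0]

# Keep only the most recent examples (a rolling window), one simple way to
# keep the training data up to date.
WINDOW = 100
recent_examples = deque(maxlen=WINDOW)

def add_example_and_retrain(example):
    recent_examples.append(example)
    return train(recent_examples)

# Older data said "X"; newer data says "Y". Retraining on the rolling window
# lets the model follow the change instead of staying stuck on old patterns.
model = None
for label in ["X"] * 80 + ["Y"] * 120:
    model = add_example_and_retrain(label)

print("Current prediction:", model)  # prints "Y" once recent data dominates
```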

Finally, people should observe and monitor the real-life patterns that AI systems affect. For example, if a company notices that its algorithm recommends far more resumes from men than from women, it should investigate why the AI favors men and fix the issue.
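A minimal version of such a check might look like the Python sketch below, which compares recommendation rates by group using invented numbers and applies the widely used "four-fifths rule" as a red flag.

```python
# Hypothetical outcomes from a resume-screening algorithm (invented numbers).
outcomes = {
    "men":   {"recommended": 45, "total": 100},
    "women": {"recommended": 20, "total": 100},
}

rates = {group: d["recommended"] / d["total"] for group, d in outcomes.items()}
for group, rate in rates.items():
    print(f"Recommendation rate for {group}: {rate:.0%}")

# The "four-fifths rule": flag the system if any group's rate falls below
# 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Warning: {group} are recommended at {rate / highest:.0%} "
              f"of the top rate -- investigate the algorithm.")
```

A warning like this does not explain why the gap exists, but it tells the company exactly where to start looking.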

Conclusion:

Artificial intelligence is one of the most powerful tools at humanity’s disposal and will be used in almost everything in the future. Since it will be such an impactful component of everyone’s lives, we must ensure that we use it correctly and safely. 

As we experiment more with AI, we are bound to encounter problems. It is our responsibility as AI creators to ensure that we program it in a way that does not harm others and that every group is equally represented wherever AI is used.

Closing Note: 

This blog is part of #OwnTheAlgorithm, iFp’s Emerging Innovators campaign to rethink how AI is built—and who it’s built for. We invite young people and the communities they’re part of to question AI systems, claim their role in its development, and build a future where AI reflects our values, not just profit.

Question it. Own it. Build it.


Works Cited

“AI Bias - What Is It and How to Avoid It?” Levity.ai, https://levity.ai/blog/ai-bias-how-to-avoid. Accessed 12 March 2025.

“Bias in AI.” Chapman University, https://www.chapman.edu/ai/bias-in-ai.aspx. Accessed 12 March 2025.

Holdsworth, James. “What Is AI Bias?” IBM, 22 December 2023, https://www.ibm.com/think/topics/ai-bias. Accessed 12 March 2025.
