The student news site of Santa Clara High School

The Roar


OPINION: AI is biased

Amelia Tai
Because of the way AI is trained, it tends to be biased; however, with movements such as AI for Good, humans are working toward making it more just.

How do systems like ChatGPT, Snapchat's My AI and others receive information? The data these systems rely on often carries bias from the political and technological world. Because of that biased information, the voices of those in authority are uplifted, while minority groups are left out.

AI systems rely heavily on existing datasets, but those often reflect historical biases and inequalities. Biased data leads to discrimination against marginalized communities. According to an article from Chapman University, bias can occur at multiple stages, the most prominent being model training.

During this critical phase, if the training data is not balanced or the architecture is not designed to take in diverse information, the model can produce biased outputs. Because situations can be viewed from many perspectives, AI must be able to handle different types of information in order to produce unbiased results.

Implicit and explicit biases are also often integrated into AI systems. One example, reported by MIT News, came from an experiment with facial recognition. The program's error rates reached 20% and, for darker-skinned women specifically, as high as 34%. When researchers looked into the facial recognition system, the dataset used to train it was more than 77% male and more than 83% white.

Sampling bias, as in these facial recognition systems, is common, especially in the psychology and technology fields. One type is selection bias, which occurs when the data used to train an AI system is not representative of the reality it is meant to model. This can happen through biased sampling or other factors that lead to an unrepresentative dataset. Ensuring that a dataset is representative allows for fairness and justice in decision-making processes.
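The selection-bias effect described above can be illustrated with a toy sketch (a hypothetical, simplified stand-in for a real model, not how any actual facial recognition system works): if one group makes up 90% of the training data, the decision rule that minimizes overall error ends up tuned to that group, and the underrepresented group's error rate comes out higher.

```python
import random

random.seed(42)

def make_group(n_pos, n_neg, pos_mean, neg_mean):
    """Return (feature, label) pairs for one hypothetical demographic group."""
    pos = [(random.gauss(pos_mean, 0.5), 1) for _ in range(n_pos)]
    neg = [(random.gauss(neg_mean, 0.5), 0) for _ in range(n_neg)]
    return pos + neg

# Group A dominates the training data (selection bias);
# group B's feature distribution is shifted relative to A's.
group_a = make_group(450, 450, pos_mean=2.0, neg_mean=0.0)
group_b = make_group(50, 50, pos_mean=3.0, neg_mean=1.0)
train = group_a + group_b  # 90% group A, 10% group B

def error_rate(threshold, data):
    """Fraction of samples misclassified by the rule: predict 1 if x > threshold."""
    wrong = sum((x > threshold) != bool(y) for x, y in data)
    return wrong / len(data)

# "Training": pick the threshold that minimizes error on the skewed dataset.
candidates = [i / 10 for i in range(-10, 41)]
best = min(candidates, key=lambda t: error_rate(t, train))

print(f"chosen threshold: {best:.1f}")
print(f"error on group A: {error_rate(best, group_a):.1%}")
print(f"error on group B: {error_rate(best, group_b):.1%}")
```

Because the threshold is chosen to fit the 90%-majority group, group B's error rate ends up several times higher, echoing the disparity the facial recognition study found.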

Social media platforms oftentimes highlight content from influential users or mainstream sources, sidelining marginalized voices. KSL News Radio reported that Meta has recently limited political content on users' feeds. This caused an uproar, especially with the upcoming presidential election and other political issues, such as the conflict between Israel and Palestine. By restricting political information from reaching users' feeds, the media shows how those in power control it for their own agenda.

There must be an initiative for diversity and inclusivity in datasets and within algorithms. The AI for Good movement advocates for using technology to address human challenges and improve people's lives. Its goal is to measure and advance the UN's Sustainable Development Goals by bringing together nonprofits, governments and others to engender positive social change. Supporting organizations like AI for Good will enable society to strive toward a just AI system.
