The Student News Site of Monta Vista High School

El Estoque

the AIssue

Students must be equipped with skills to face AI’s inherent biases
Abha Dash (12) was photographed before and after using a Snapchat beauty filter. The filter whitened not only her clothes and surroundings but also her facial features.

TikTok has seen a surge in the use of AI filters, whether to modify facial features or to generate seemingly realistic photos. Many companies, not just TikTok, are investing in these tools, hoping to appeal to consumers.

However, some users have noticed that filters tend to “whitewash” their features and change their body shape to fit a more societally attractive look. In the Washington Post article “This is how AI image generators see the world,” the text-to-image artificial intelligence platform Stable Diffusion produced disturbing results: given the prompt “attractive people,” it generated images of white people with blue eyes. For “cleaning,” women were shown holding sprays and wipes, and when asked to produce images of “toys in Iraq,” the program depicted little plastic men and brown teddy bears holding guns. Furthermore, given the input “Latina,” Stable Diffusion generated pornographic images of women in sexy lingerie.

Despite claims from Stability AI, the company behind Stable Diffusion, that its mission is to be as unbiased as possible, the platform still displays racially and sexually stereotypical images. This is especially problematic for younger audiences, who are impressionable and may internalize such biases.

Fortunately, with the passage of Assembly Bill 873, California students in grades K-12 will receive lessons on media literacy. The education will go beyond helping students discern real images from AI-generated ones, ensuring students are also aware of the moral and ethical concerns surrounding social media. Literacy education should help people comprehend the underlying biases in AI tools, especially on social media platforms like TikTok, where filters are a click away.

To understand why AI has these inherent biases, we have to dig deeper into where AI-generated information comes from. AI models are trained to recognize human language patterns from books, movies and even social media posts. However, because our society holds preconceived notions surrounding topics such as race and gender, those notions are present in the text the AI sees. Thus, AI absorbs associations such as white people being the most attractive race and women being born to cook and clean, and reflects them in what it generates, as the short sketch below illustrates.
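To make this concrete, here is a minimal sketch in Python of how a statistical model picks up associations from its training text. The four-sentence “corpus” is invented purely for illustration, and a simple co-occurrence count stands in for the neural networks that real systems train on billions of sentences, but the principle is the same: the model links “woman” with cleaning and cooking only because the data it was given did.

```python
from collections import Counter

# Toy training text containing a stereotyped association.
# (Hypothetical data, invented purely for illustration.)
corpus = [
    "the woman cleaned the kitchen",
    "the woman cooked dinner",
    "the man fixed the car",
    "the man wrote the code",
]

def cooccurrences(target, corpus):
    """Count the words that appear in the same sentence as `target`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w not in (target, "the"))
    return counts

# The "model" now associates women with cleaning and cooking, not
# because that is true, but because its training data said so.
print(cooccurrences("woman", corpus).most_common(3))
print(cooccurrences("man", corpus).most_common(3))
```

A real image generator learns in a far more sophisticated way, but bias reaches it through the same route: whatever patterns dominate its training data, fair or not, become the patterns it reproduces.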

Yet even in the face of these pressing issues, some companies remain uninterested in investing money in inclusivity within AI. That inaction has led others to take on projects examining AI’s biases. In 2023, four professors from the Stanford School of Humanities and Sciences received a seed grant to use computational linguistics and psychology to examine the role of racial biases in media. Using natural language processing, the branch of AI that analyzes human language, the four could test how race was perceived by viewers and how viewers’ racial biases changed. However, such projects are still in the works and have not yet made their way to popular AI platforms.

When filters were first introduced to social media, they were seen as novelties to be used for fun. Who wouldn’t want to give themselves pink dog ears or flaming sunglasses at the press of a button? But some filters are not as innocent as they seem. In 2017, for example, FaceApp’s skin-lightening filter garnered attention as many users were outraged at its racial insensitivity to their differing skin tones.

Since then, beauty filters have come a long way. Social media platforms like TikTok utilize advanced AI to make one’s face appear flawless, shaping and resizing eyes, noses, chins and other features. The recent improvements in AI technology have led social media users around the globe to label beauty filters as terrifyingly realistic.

Many of us use TikTok and other social media platforms daily. Whether we scroll through TikToks or make content ourselves, it is important to recognize how these AI filters might be passing inherent biases on to users. Media literacy lessons that prepare students for scenarios where AI produces biased or inaccurate results would benefit our community.

As our society becomes increasingly dependent on AI, with potential impacts on graphic artists, screenwriters and even cashiers, it is important to recognize that AI, too, has flaws. It can distort our reality, and that distortion makes its use dangerous. Individuals who constantly use AI and filters may find themselves and those around them influenced by AI’s subtle racism, sexism and more. They may buy into Eurocentric beauty standards, which damages their mental health. Using AI for the occasional computation or for feedback on an essay paragraph may not cause a big problem, but at the end of the day, AI is not a human being. It cannot recognize its flaws or fix them. It is not unfAIling.

About the Contributors
Shannon Yu
Shannon Yu, Staff Writer
Shannon Yu is a sophomore and staff writer with El Estoque. She loves playing volleyball and piano. At night, she watches true crime psychological thrillers until she is unable to fall asleep. She should probably stop that habit.
Kalyani Puthenpurayil
Kalyani Puthenpurayil, Co-Editor-in-Chief
Kalyani is currently a senior and a co-editor-in-chief for El Estoque. She previously served as a sports editor and is a midfielder on the field hockey team at MVHS. In her free time, she likes to read, listen to music and spend time with her little brothers and friends.