Shereen Wu, a model based in Los Angeles, has accused American designer and former Project Runway contestant Michael Costello of digitally altering a runway photo by replacing her face with that of a white woman. The incident has ignited discussions about racism and the growing use of Artificial Intelligence (AI) in the fashion industry.
Wu, of Asian descent, asserted that when she confronted Costello about the alteration, he shifted blame onto the photographer, claiming the photo arrived in that edited state. However, when Wu spoke with the photographer, he confirmed that he did not edit the photo. Wu expressed her concerns in a viral TikTok video, where she also referenced past allegations of racist behaviour against Costello, including reported use of derogatory language. Subsequently, Costello deleted the modified photo and seemingly blocked Wu on Instagram.
Following these allegations, Costello began sharing Instagram stories featuring AI-generated art and AI models. Wu perceived this as implying that she had no reason to be upset, as the use of AI is already prevalent in the fashion industry. In response, Wu lamented, "In the end, I'm just another dissatisfied model."
How Can AI Reinforce Racism?
When we feed machines data tainted by our own biases, those machines inevitably replicate our prejudices. The problem becomes particularly troubling with AI tools that anyone can use directly, without specialist training, and that can perform a wide array of tasks on our behalf.
The case of Costello illustrates this issue vividly. He utilised AI not only to alter Wu's facial features but to conceal or entirely replace her ethnicity and race. This went beyond a mere pursuit of a 'more attractive' model; it amounted to actively erasing the presence of an Asian model.
Costello attempted to justify his actions by suggesting that AI's use in the fashion industry is far more pervasive than commonly acknowledged. However, these actions are fundamentally rooted in systemic racism and a flawed beauty standard that discourages anything outside the bounds of 'whiteness.'
Costello's use of AI, in this context, exemplifies how technology can be wielded to reinforce prejudiced beauty ideals and marginalise people from diverse racial backgrounds. By using AI to effectively erase Wu's ethnicity and race, he sent a harmful message that non-white aesthetics are somehow undesirable or less marketable. Such actions not only entrench systemic racism but also undermine the progress made towards diversity and inclusivity in the fashion industry.
Furthermore, the incident underscores the urgent need for diversity and inclusivity in the development and deployment of AI technologies, as well as a fuller understanding of the societal impact of such tools. It exposes deeper structural problems within the fashion industry, emphasising the pervasive influence of Eurocentric beauty standards and the exclusion of diverse identities.
The episode also demands a critical examination of how AI can be misused to perpetuate discrimination and bias, particularly in an industry with a long history of struggling with diversity and representation. It calls for a reevaluation of the ethical guidelines and regulations surrounding AI use, so that the technology does not become a tool for sustaining racial inequalities.
The fashion industry, like other sectors, must be vigilant in embracing technology, ensuring that it is harnessed to promote equity and inclusiveness rather than to perpetuate harmful stereotypes and biases.
Views expressed by the author are their own
Suggested Reading: How Algorithm Bias Affects Hiring Choices, Favouring Men Over Women: Study