In 2016, it took Tay, a chatbot developed by Microsoft and launched on Twitter, less than twenty-four hours to turn into a racist and sexist entity, reflecting the data it absorbed from users online (Vincent, 2016). Tay's predecessor, XiaoIce, released in China in 2014, served a very different purpose. Designed as an AI companion, XiaoIce focused on building long-term relationships and emotional connections with its users, whereas Tay was built to develop conversational understanding (Zhou et al., 2020). Tay was meant to become more casual and playful with each interaction on the internet's town square (Vincent, 2016). Eight years later, the central challenge of artificial intelligence remains the same: bias. With a range of image-generation tools available to anyone interested in 2024, generative AI arguably creates more problems than it solves. After analyzing 5,000 images, Nicoletti & Bass (2023) found that images produced with the generative AI tool Stable Diffusion amplified gender and racial stereotypes. Generative AI is prone to inaccurate and misleading outputs, and it fabricates data with the same confidence it displays when presenting accurate information (Nicoletti & Bass, 2023).

As women of color from the Global South, we are deeply concerned about the Anglo-American bias inherent in generative AI. This bias is evident in how AI is designed and in the contexts it is intended to serve, which are primarily Anglo-American. Drawing on the concept of jugaad, a practice with counterparts beyond the Anglo-American context, such as grassroots innovation, kanju, and gambiarra, we critically evaluate the risks of an AI solution that is not rooted in local contexts. We hypothesize that AI enables technological colonialism by imposing solutions and mindsets developed in the Global North onto the Global South rather than drawing on or facilitating local and regional practices and innovations. In this project, we therefore ask: How might an AI solution devoid of local context impose a monolithic approach to problem-solving, regardless of regional cultures, practices, and behaviors?

Jugaad is Hindi for an improvised solution born of ingenuity and cleverness. Butoliya (2022) describes jugaad as an act of freedom, a bottom-up response to the top-down oppression caused by the capitalist subjugation of markets and societies. For Indians like us, jugaad represents joy within constraints, where scarcity drives innovation. However, much of the imagery of jugaad on the internet fails to capture its ingenuity, resourcefulness, and humanity. This project therefore uses jugaad as a critical mode of reflection, tracing the gaps in learning models and recognizing the biases in AI-generated content.


References & Sources: 

Butoliya, D. (2022). Critical Jugaad ontologies: Practices for a Just Future. Diid — Disegno Industriale Industrial Design, (76), 14. https://doi.org/10.30682/diid7622d

Daub, A. (2020). What tech calls thinking: An inquiry into the intellectual bedrock of Silicon Valley. Farrar, Straus and Giroux.

Zhou, L., Gao, J., Li, D., & Shum, H.-Y. (2020). The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1), 53–93. https://doi.org/10.1162/coli_a_00368

MIT Sloan Teaching & Learning Technologies. (n.d.). When AI gets it wrong: Addressing AI hallucinations and bias. MIT Sloan. Retrieved August 20, 2024, from https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/

Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias

Vincent, J. (2016, March 24). Twitter taught Microsoft's AI chatbot to be a racist in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist


