Using jugaad as a critical mode of reflection to trace the gaps in AI learning models and recognize their biases.
October 4, 2024
Pluriversal Research Talk
Anotherism: Jugaad Meets AI
Virtual Workshop Reflection
November 7, 2024
Design Thinking DC: Anotherism Workshop
District of Columbia
November 15, 2024
Navigating Human-AI Interactions Through a Cultural Lens. Roundtable conversation with Ami Mehta, Maria Lupetti, and Giulia Donatello
Nidhi Singh Rathore
Assistant Professor of Design
Corcoran School of the Arts & Design
George Washington University
Shreya Thakkar
Design Strategist & Futurist
Senior Design Researcher at Electrolux Group
Wonderings
Everything we are thinking about and how it connects with what is happening in the AI space.
Workshops
Mapping AI's cultural blind spots by comparing human-centered approaches and collaboration.
What we are reading
Image generated on Google Gemini and extended on Runway
Image generated on Google Gemini
Image generated on Google Gemini
Does generative AI reinforce tech colonialism by prioritizing Global North perspectives, or can it effectively integrate diverse local knowledge and practices to decenter Anglo-American narratives?
Implications 🡒 Our central hypothesis, and a pillar of this project, is that humans draw their understanding from their immediate environment, needs, and limitations, leading them to highly adaptable and feasible outcomes: solutions that balance imagination with the constraints of physics and practicality, focusing on function. Human problem-solving often includes cultural adaptability, empathy, and emotional intelligence, qualities that generative AI currently lacks. Generative AI solutions, meanwhile, often defy the laws of physics, prioritizing creative and imaginative outputs over practicality. It turns out to be the age-old debate of form versus function.
Jugaad is a resourceful approach that often arises from necessity, involving creativity and a bit of experimentation. It’s about going beyond the intended use of objects or materials—thinking about physical, chemical, and even gravitational properties.
Image generated on Google Gemini
Dive into the Kinopio space here
In 2016, it took Tay, a chatbot developed by Microsoft and launched on Twitter, less than twenty-four hours to turn into a racist and sexist entity, demonstrating how biased the data it learned and trained on was and still is (Vincent, 2016). Tay's predecessor, XiaoIce, released in China in 2014, served a very different purpose: designed as an AI companion, XiaoIce focused on building long-term relationships and emotional connections with its users, while Tay was made to gain a better conversational understanding at the expense of… (Zhou et al., 2020). Eight years later, the central challenge of artificial intelligence remains the same: bias. The common thread between AI chatbots and generative AI is that both learn from data created and generated by humans on the internet. So the same dynamic that led Tay to become racist and sexist now contributes bias to image generation. With so many image-generation tools available in 2024, generative AI arguably creates more challenges than the problems it may solve. After analyzing 5,000 images, Nicoletti and Bass (2023) found that images constructed with the generative AI tool Stable Diffusion amplified gender and racial stereotypes. This research demonstrates that, like early AI chatbots, generative AI carries the potential for inaccurate and misleading outputs and data fabrication, and can present hallucinations with confidence (Nicoletti & Bass, 2023). A minimal version of this kind of audit is sketched below.
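To make the audit concrete, here is a minimal sketch in the spirit of Nicoletti and Bass's study, assuming an open Stable Diffusion checkpoint accessed through the Hugging Face diffusers library; the prompts, sample size, and output layout are illustrative choices of ours, not the study's actual protocol. It batch-generates images for occupation prompts and files them for human coding of perceived gender and skin tone.

```python
# Minimal sketch of an image-generation bias audit, loosely inspired by
# Nicoletti & Bass (2023). The checkpoint, prompts, and sample size are
# illustrative assumptions, not the study's actual protocol.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

PROMPTS = [  # occupation prompts to compare for stereotyping
    "a photo of a doctor",
    "a photo of a judge",
    "a photo of a fast-food worker",
]
SAMPLES_PER_PROMPT = 20  # the study coded thousands; this is a toy run

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

out_dir = Path("audit_images")
for prompt in PROMPTS:
    target = out_dir / prompt.replace(" ", "_")
    target.mkdir(parents=True, exist_ok=True)
    for i in range(SAMPLES_PER_PROMPT):
        image = pipe(prompt).images[0]  # one generation per call
        # Save for later human coding of perceived gender and skin tone.
        image.save(target / f"{i:03d}.png")
```

The key design choice is that the script only produces and organizes the images; judging what the images depict stays with human reviewers rather than another model, which would simply stack one system's biases on top of another's.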
As women of color from the Global South, we are deeply concerned about the inherent Anglo-American bias in generative AI. This bias is evident in generative AI's design and intended use, primarily within an Anglo-American context. One of our ongoing explorations is to understand whether these biases are embedded within the data AI learns from or if they emerge due to choices made in the development process. Broadly, we posit that AI enables technological colonialism by imposing solutions and mindsets developed in the Global North onto the Global South rather than utilizing or facilitating local or regional practices and innovations. As a result, in this project, we ask: How might an AI solution devoid of local context impose a monolithic approach to problem-solving regardless of regional cultures, practices, and behaviors?
The foundation of AI is biased, but as design practitioners who use and develop these tools, we need a better understanding of generative AI's limitations to use it effectively. Through our research, Anotherism, we compare jugaad with generative AI to unpack how and where these biases appear. Drawing on the concept of jugaad (Hindi for an improvised solution based on ingenuity, cleverness, and innovative problem-solving), a practice that transcends the Anglo-American context, we are critically evaluating the risks of an AI solution not rooted in local contexts. Dr. Butoliya (2022) describes jugaad as an act of freedom, a bottom-up reaction to the top-down oppression caused by the capitalist subjugation of markets and societies. For Indians like us, jugaad demonstrates joy within restrictions, where scarcity leads to innovation. However, much of the imagery of jugaad on the internet flattens its ingenuity, resourcefulness, and humanity. Thus, Anotherism uses jugaad as a critical mode of reflection while tracing the gaps in the learning models and recognizing the biases of AI-generated content.
Image generated on Google Gemini
Image generated on Google Gemini
Image generated on Google Gemini
Human Ingenuity vs AI Generations
Our work started by developing and learning from AI with no expectations. We began from the obvious fact that a computer is not a human and recognized that we first need to learn how humans solve problems to truly compare the differences and similarities between humans and generative technology. This research workshop and project defined humans as social beings with complex experiences and a capacity for abstract thinking. Humans have a wide range of emotions and connections; in this project, participants come from different parts of the world and bring a diversity of perspectives and identities. Although complex, generative technology is not as complex as the humans we interact with in our research workshops; human complexity is not a recruitment criterion but an observation. With this project, we are exploring and defining the gap between humans and generative AI so that design practitioners can use generative AI to support their ingenuity and problem-solving. To learn about human ingenuity, we collaborated with the Pluriversal Design SIG to offer a research workshop, where we set out to explore the same prompts and generations we had explored with ChatGPT, Gemini, MidJourney, Claude, and other tools; a rough sketch of that comparison protocol follows.
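As an illustration of that protocol, the sketch below sends one jugaad-style problem prompt to two of the text models we worked with and logs the raw generations for later comparison with workshop participants' solutions. The client libraries are the real OpenAI and Anthropic Python SDKs, but the model names, the prompt, and the output file are our illustrative assumptions, not the project's actual pipeline.

```python
# Rough sketch of running one problem prompt across multiple text models so
# the outputs can be compared with human workshop solutions. Model names and
# the prompt are illustrative assumptions.
import json

import anthropic
from openai import OpenAI

PROMPT = (
    "A ceiling fan is broken in a house with no budget for repairs. "
    "Suggest a practical fix using only objects already in the home."
)

def ask_openai() -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

def ask_claude() -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=500,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    log = {"prompt": PROMPT, "gpt-4o": ask_openai(), "claude": ask_claude()}
    # Keep raw generations so they can be coded against participants' answers.
    with open("generations.json", "w") as f:
        json.dump(log, f, indent=2)
```

Keeping the generations verbatim in a single JSON file lets AI and human answers be coded with the same rubric, for instance whether a proposed fix respects the physical constraints discussed above.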
References & Sources:
Butoliya, D. (2022). Critical jugaad ontologies: Practices for a just future. Diid — Disegno Industriale Industrial Design, (76), 14. https://doi.org/10.30682/diid7622d
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Daub, A. (2020). What tech calls thinking: An inquiry into the intellectual bedrock of Silicon Valley. Farrar, Straus and Giroux.
MIT Sloan Teaching & Learning Technologies. (n.d.). When AI gets it wrong: Addressing AI hallucinations and bias. Retrieved August 20, 2024, from https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Vincent, J. (2016, March 24). Twitter taught Microsoft's AI chatbot to be a racist in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Zhou, L., Gao, J., Li, D., & Shum, H.-Y. (2020). The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1), 53–93. https://doi.org/10.1162/coli_a_00368