ANOTHERISM

Using jugaad as a critical mode of reflection to trace the gaps in AI learning models and recognize their biases.
COLLABORATIONS

October 4, 2024   
Pluriversal Research Talk
Anotherism: Jugaad Meets AI
Virtual Workshop Reflection

November 7, 2024
Design Thinking DC: Anotherism Workshop
District of Columbia

November 15, 2024
Navigating Human-AI Interactions Through a Cultural Lens. Roundtable conversation with Ami Mehta, Maria Lupetti, and Giulia Donatello 
RESEARCHERS

Nidhi Singh Rathore
Assistant Professor of Design
Corcoran School of the Arts & Design
George Washington University

Shreya Thakkar 
Design Strategist & Futurist
Senior Design Researcher at Electrolux Group


SECTIONS

Wonderings
Everything we are thinking about and how that connects with everything in the AI space.

Workshops
Mapping AI's cultural blind spots by comparing human-centered approaches and collaboration.

What we are reading 



Image Generation on Midjourney
Prompt: Can you give me a visual solution for how I can keep a round-bottom vessel filled with water from toppling? 

Image Generation on Google Gemini, extended in Runway




Prompt: Can you create a visual of how one might use a two-wheeler motorcycle to transport more than three people at the same time, inspired from the jugaad design principles or are examples grounded in the Indian context

Image Generation on Google Gemini
Drawing on jugaad, an Indian approach to problem-solving that transcends the Anglo-American context, this paper critically evaluates the risks of an AI solution not rooted in local contexts and Indigenous practices. The research inquiry compares jugaad with generative AI to unpack how and where biases appear. Butoliya (2022) describes jugaad as an act of freedom, a bottom-up reaction to the top-down oppression caused by the capitalist subjugation of markets and societies. It demonstrates joy within restrictions, where scarcity leads to innovation. However, much of the imagery of jugaad on the internet limits its ingenuity, resourcefulness, and humanity. Thus, this research uses jugaad as a critical mode of reflection while tracing the gaps in learning models and recognizing the biases of AI-generated content.
Prompt: Can you give me a visual solution for how I can keep a round-bottom vessel filled with water from toppling? 

Image Generation on Google Gemini

Human Problem-Solving: Grab two socks and stuff one inside the other to use as a doorstop.

Human Problem-Solving: Use a cinder or concrete block as a doorstop.


Human Problem-Solving: Use an acroterion, a traditional Greek roofing element, as a doorstop.
“Should I design it to be functional,” the students say, “or to be aesthetically pleasing?” This is the most often heard, the most understandable, and yet the most mixed-up question in design today. “Do you want it to look good or to work?” Barricades are erected between what are really just two of the many aspects of function. A simple diagram shows the dynamic actions and relationships that make up the function complex.


VICTOR PAPANEK 
DESIGN FOR THE REAL WORLD
1984 



Does generative AI reinforce tech colonialism by prioritizing Global North perspectives, or can it effectively integrate diverse local knowledge and practices to decenter Anglo-American narratives?

Implications 🡒 Our central hypothesis, and this project's pillar, is that humans draw on their understanding of their immediate environment, needs, and limitations, leading them to highly adaptable and feasible outcomes: solutions that balance imagination with the constraints of physics and practicality, focusing on function. Human problem-solving often includes cultural adaptability, empathy, and emotional intelligence, qualities that generative AI currently lacks. Meanwhile, generative AI solutions often defy the laws of physics, prioritizing creative and imaginative outputs over practicality. It turns out to be the age-old debate of form versus function.


What is jugaad?

Jugaad is a resourceful approach that often arises from necessity, involving creativity and a bit of experimentation. It’s about going beyond the intended use of objects or materials—thinking about physical, chemical, and even gravitational properties. 

The classic sock doorstop, drawn directly from our research workshop, is an everyday example of jugaad. Using a simple sock as a functional doorstop demonstrates how people creatively repurpose existing materials to solve a practical problem with minimal resources.
Figure 1:

Image found on Google Image search as an example of Jugaad

Figure 2:

Prompt: Can you create a visual of how one might use a two-wheeler motorcycle to transport more than three people at the same time, inspired from the jugaad design principles or are examples grounded in the Indian context

Image Generation on Google Gemini





Human vs AI Workshops

Our work started by developing with and learning from generative AI with no expectations. We began from the fact that a computer is, of course, not a human, and recognized that we need to learn how humans solve problems to truly compare the differences and similarities between humans and generative technology. Our research workshops explore and define the gap between humans and generative AI by creating together, mind-mapping our process, and discussing pressing, unanswered questions.



Wonderings

Everything we are thinking about and how that connects with everything in the AI space.

Dive into the Kinopio space here


In 2016, it took Tay, a chatbot developed by Microsoft and launched on Twitter, less than twenty-four hours to turn into a racist and sexist entity, demonstrating how biased the data it learned and trained on was and is (Vincent, 2016). However, Tay's predecessor, XiaoIce, released in China in 2014, fared very differently. Designed as an AI companion, XiaoIce focused on creating long-term relationships and emotional connections with its users, while Tay was made to gain a better conversational understanding at the expense of… (Zhou et al., 2020). Eight years later, the challenge with artificial intelligence is the same: bias. The common thread between AI chatbots and generative AI is that they learn from data created and generated by humans on the internet. So, what led to Tay becoming racist and sexist is now contributing bias to image generation. With many image-generation tools available in 2024, generative AI faces more challenges than the problems it may solve. After analyzing 5,000 images, Nicoletti and Bass (2023) found that images constructed with the generative AI tool Stable Diffusion amplified gender and racial stereotypes. Their analysis demonstrates that, like early AI chatbots, generative AI carries the potential for inaccurate and misleading outputs and data fabrication, and can present hallucinations with confidence (Nicoletti & Bass, 2023).

As women of color from the Global South, we are deeply concerned about the inherent Anglo-American bias in generative AI. This bias is evident in generative AI's design and intended use, primarily within an Anglo-American context. One of our ongoing explorations is to understand whether these biases are embedded within the data AI learns from or if they emerge due to choices made in the development process. Broadly, we posit that AI enables technological colonialism by imposing solutions and mindsets developed in the Global North onto the Global South rather than utilizing or facilitating local or regional practices and innovations. As a result, in this project, we ask: How might an AI solution devoid of local context impose a monolithic approach to problem-solving regardless of regional cultures, practices, and behaviors?

The foundation of AI is biased, but as design practitioners who use and develop these tools, we need a better understanding of generative AI's limitations to utilize it effectively. Through our research, Anotherism, we compare jugaad with generative AI to unpack how and where biases appear. Drawing on the concept of jugaad, Hindi for an improvised solution based on ingenuity, cleverness, and innovative problem-solving, a practice that transcends the Anglo-American context, we are critically evaluating the risks of an AI solution not rooted in local contexts. Butoliya (2022) describes jugaad as an act of freedom, a bottom-up reaction to the top-down oppression caused by the capitalist subjugation of markets and societies. For Indians like us, jugaad demonstrates joy within restrictions, where scarcity leads to innovation. However, much of the imagery of jugaad on the internet limits its ingenuity, resourcefulness, and humanity. Thus, Anotherism uses jugaad as a critical mode of reflection while tracing the gaps in learning models and recognizing the biases of AI-generated content.


Prompt: Can you generate images of the vessel above with a built-in ring base? This ring can be slightly wider than the vessel, providing a stable foundation while keeping the round bottom's aesthetic. 

Image generated on Google Gemini
Tools, Tones, and Transformations

This research extends recent academic and industry discussions about human-AI interaction, building on our project's examination of the evolving relationship between artificial intelligence and human creativity. As scholars like Kate Crawford (2021) and Safiya Noble (2018) have highlighted, AI systems often perpetuate existing power structures and cultural biases. Meanwhile, practitioners like Rumman Chowdhury and Emily Denton (2021) have emphasized how AI's integration into creative workflows raises critical questions about human agency and cultural representation.

Human Ingenuity vs AI Generations
Our work started by developing with and learning from AI with no expectations. We began from the fact that a computer is, of course, not a human, and recognized that we need to learn how humans solve problems to truly compare the differences and similarities between humans and generative technology. This research workshop and project defined humans as social beings with complex experiences and a capacity for abstract thinking. Humans have a wide range of emotions and connections; in this project, participants come from different parts of the world with a diversity of perspectives and identities. Generative technology, although complex, is not as complex as the humans we interact with in our research workshops; human complexity is not a recruitment criterion but an observation. With this project, we are exploring and defining the gap between humans and generative AI to determine how design practitioners can utilize generative AI to support their ingenuity and problem-solving. To learn about human ingenuity, we collaborated with the Pluriversal Design SIG to offer a research workshop, in which we set out to explore the same prompts and generations we had explored with ChatGPT, Gemini, Midjourney, Claude, and other tools.

References & Sources:

Butoliya, D. (2022). Critical jugaad ontologies: Practices for a just future. Diid — Disegno Industriale Industrial Design, (76), 14. https://doi.org/10.30682/diid7622d

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Daub, A. (2020). What tech calls thinking: An inquiry into the intellectual bedrock of Silicon Valley. Farrar, Straus and Giroux.

MIT Sloan Teaching & Learning Technologies. (n.d.). When AI gets it wrong: Addressing AI hallucinations and bias. MIT Sloan. Retrieved August 20, 2024, from https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/

Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Vincent, J. (2016, March 24). Twitter taught Microsoft's AI chatbot to be a racist in less than a day. The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Zhou, L., Gao, J., Li, D., & Shum, H.-Y. (2020). The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1), 53–93. https://doi.org/10.1162/coli_a_00368




Learn about Nidhi · Learn about Shreya · Email: anotherismproject@gmail.com