When Should AI Step Aside? Understanding Cultural Values in AI Systems

NYU Center for Data Science
Nov 26, 2024


CDS Faculty Fellow Umang Bhatt leading a practical workshop on Responsible AI at Deep Learning Indaba 2023 in Accra

In Uganda’s banking sector, AI models used for credit scoring systematically disadvantage citizens by relying on traditional Western financial metrics that don’t reflect local economic realities. This mismatch between AI systems and cultural contexts drives the research of CDS Faculty Fellow Umang Bhatt, who studies when and how AI systems should be used — or, sometimes more importantly, when they shouldn’t be used.

Over the past year, Bhatt traveled across five continents to understand how different cultures interact with AI systems. At the 2023 Deep Learning Indaba in Accra, Ghana, he led practical workshops exploring how African perspectives on fairness differ from Western technical definitions. The following year, he returned to the continent to lead a responsible AI mentorship session at the 2024 Deep Learning Indaba in Dakar, Senegal.

This fieldwork informed Bhatt’s research on “algorithmic resignation”: the strategic withdrawal of AI systems in scenarios where human judgment better serves community values. His IEEE Computer paper “When Should Algorithms Resign?” introduced the concept and proposed frameworks for deciding when AI should step back from decision-making, work he presented at King Abdullah University of Science and Technology’s (KAUST) Rising Stars in AI Symposium.

Panelists at the 2024 ELSA workshop on generative AI and creative arts. Left to right: Mario Fritz (Professor of CS, CISPA), Umang Bhatt (CDS Faculty Fellow), Lord Tim Clement-Jones (House of Lords), Lilian Edwards (Professor of Law, Newcastle), Matt Rogerson (Head of Public Policy, The Guardian), Jeffrey Nachmanoff (Filmmaker)

The importance of cultural context in AI development was further highlighted in Bhatt’s recent Nature Human Behaviour paper, “Building machines that learn and think with people,” which argues that effective AI systems must understand users’ expectations and cultural backgrounds. The paper demonstrates that individuals can hold fundamentally different assumptions about what makes an AI system trustworthy or useful.

Bhatt’s work spans multiple domains where AI deployment requires careful consideration of local values. In a talk at the São Paulo Lawyers’ Institute (IASP) in Brazil, he discussed how promoting AI “non-use” could strengthen legal systems by preserving human judgment in culturally sensitive cases. Similar themes emerged in a conversation with Gustavo Petro, the President of Colombia, at a UN General Assembly roundtable, where Bhatt identified areas where AI deployment might conflict with local values.

The creative arts present another complex arena for these questions. At the European Lighthouse on Secure and Safe AI (ELSA) workshop on generative AI and creative arts in London, Bhatt moderated discussions among lawmakers, artists, and tech leaders about AI’s impact on creativity and copyright. Lord Tim Clement-Jones of the House of Lords emphasized the need for equitable agreements between creative industries and tech companies, while filmmaker Jeffrey Nachmanoff raised concerns about AI’s ability to mimic individual artistic styles.

Bhatt presents his work on algorithmic resignation at the Trustworthy Data Science and Security seminar at TU Dortmund, Germany, in July 2024

Bhatt’s research also extends to physical AI systems. At Stockholm’s Robotics, Perception and Learning (RPL) Summer School, he explored how algorithmic resignation applies to robotics and embodied AI. He continued these discussions at the European Laboratory for Learning and Intelligent Systems (ELLIS) Human-centric Machine Learning (HCML) Workshop in Helsinki and at major machine learning venues, including the 2024 International Conference on Machine Learning (ICML) in Vienna, the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Rio de Janeiro, and the 2024 ELSA General Assembly in Windermere.

“If we want to align AI systems successfully, we need to understand the cultural context of the people actually interacting with these tools,” Bhatt explained. “That means going beyond just training on datasets to really understanding people’s expectations and values.”

These insights could reshape how researchers and policymakers think about AI development. Rather than assuming AI solutions are universally applicable, Bhatt’s work argues for more nuanced approaches that respect cultural differences and recognize when to defer to human judgment.

By Stephen Thomas
