CDS Spotlight: How do our models learn bias?

One of the foremost concerns in the field of AI is potential bias in the models we create. Recently, CDS faculty Sam Bowman, along with CDS doctoral candidate Nikita Nangia, CDS postdoctoral researcher Clara Vania, and Tandon doctoral candidate Rasika Bhalerao, conducted a study on stereotypes and biases in AI models. The research appears in the Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing and has since been featured on the technology news site Tech Xplore.

In the Tech Xplore feature, Sam Bowman discussed the nature of the research:

“Our work identifies stereotypes about people that widely used AI language models pick up as they learn English. The models we’re looking at, and others like them for other languages, are the building blocks of most modern language technologies, from translation systems to question-answering personal assistants to industry tools for resume screening, highlighting the real danger posed by the use of these technologies in their current state.”

The NYU press office also issued a press release on the news coverage, which can be found here.