CDS’ Tim G. J. Rudner Joins National Effort to Set Standards for Safe AI

NYU Center for Data Science
Apr 2, 2024


Image generated with ChatGPT/DALL-E. Prompt: “Image of professionals around a conference table discussing AI safety in a high-tech room with city view.”

The promise and risks of artificial intelligence have never been greater. As CDS Fellow Tim G. J. Rudner spearheads the university’s involvement in the U.S. AI Safety Institute Consortium convened by the National Institute of Standards and Technology (NIST), he is keenly aware of how high the stakes are for getting AI safety right.

“The stakes are high. And it’s important to get this right,” said Rudner in a recent interview. “And I think it’s encouraging that NIST in particular is focusing on technical solutions in this effort. We urgently need them.”

Under an executive order from the Biden administration, NIST has established the U.S. AI Safety Institute (AISI) to develop guidelines, evaluate models, and pursue research to promote safe and trustworthy AI. The AISI Consortium unites over 200 organizations, including AI creators, users, academics, government researchers, industry players, and civil society groups.

Rudner, representing CDS, was selected to join the Consortium along with Julia Stoyanovich, a faculty member at CDS and at Tandon Computer Science & Engineering, who directs the NYU Tandon Center for Responsible AI.

Stakes are particularly high in domains where AI failures could bring immediate, concrete harms. As Rudner explains: “One example is medicine, where we might see immediate harm for patients if machine learning models fail to operate as intended.”

But even as AI systems are deployed more widely in lower-stakes applications, Rudner cautions that they could still “increase existing socioeconomic, racial, and other disparities, if they are not ensured to be fair.”

In his own research, Rudner is particularly focused on techniques to make AI systems more aware of their own limitations and uncertainties. The goal is to have models “know what they don’t know” so they can fail gracefully and avoid overconfident errors.

While the Consortium is still in its early stages, working groups are being established to tackle priorities like safety and security, capability evaluations, and risk management for generative AI. Rudner is hopeful the effort will bear fruit.

“It’s good to see that there is government buy-in in trying to ensure machine learning models are not just better in a general sense, with new capabilities,” he says, “but that we look at that last mile to make sure we get very close to reliably safe systems, or have safety mechanisms to ensure models don’t cause harm.”

The establishment of the U.S. AI Safety Institute, bolstered by contributions from a broad consortium, is a landmark in AI’s safe and ethical advancement. Rudner’s involvement is emblematic of CDS’ engagement in crucial conversations on AI safety and reflects the Center’s commitment to fostering a secure, equitable AI future.

By Stephen Thomas
