AI Isn’t Always Helpful — and Researchers Have Found a Way to Know When

NYU Center for Data Science
3 min read · Apr 18, 2025


Just because an AI system is good at something doesn’t mean you’ll always want its help. In a study recently presented at AAAI 2025, CDS Faculty Fellow Umang Bhatt, CDS visiting researcher Valerie Chen, and colleagues introduced Modiste, a system detailed in their paper “Learning Personalized Decision Support Policies,” that determines when a user truly benefits from AI assistance.

As AI tools become increasingly common in daily tasks, from answering trivia questions to making medical diagnoses, knowing when to use them remains critical. “We often assume AI support will improve our decisions, but the reality is more nuanced,” Bhatt said. Modiste, a tool designed to personalize access to decision support, learns over time when each individual benefits from AI input, based solely on their prior interactions.

Bhatt and his team’s approach differs significantly from other systems because it starts without assumptions about user expertise. Traditional AI systems rely on large datasets of past user behavior, but such datasets rarely exist, especially for interactions as personal as deciding when someone should receive assistance. Instead, Modiste learns a policy in real time, using an interactive framework built on classic algorithms of the kind previously used for targeted online advertising.
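The advertising-style algorithms the article alludes to treat each option as an “arm” whose payoff is learned from repeated interaction. As a rough illustration only, not the paper’s actual method, here is a minimal epsilon-greedy sketch of a per-user policy with two hypothetical arms, “work alone” versus “consult the AI,” where consulting the AI carries a small utility cost (the `ai_cost` and `epsilon` parameters are invented for this example):

```python
import random

class DecisionSupportPolicy:
    """Toy per-user policy: learn whether to offer AI support.

    Two arms: "human_alone" and "human_plus_ai". Each arm's estimated
    value is the running mean of observed reward, where reward is
    decision accuracy minus a fixed cost for invoking the AI.
    """

    def __init__(self, ai_cost=0.1, epsilon=0.2, seed=0):
        self.ai_cost = ai_cost      # utility penalty for consulting the AI
        self.epsilon = epsilon      # fraction of rounds spent exploring
        self.rng = random.Random(seed)
        self.counts = {"human_alone": 0, "human_plus_ai": 0}
        self.values = {"human_alone": 0.0, "human_plus_ai": 0.0}

    def choose(self):
        """Epsilon-greedy: usually pick the best-looking arm, sometimes explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, arm, correct):
        """Fold one observed outcome into the arm's running mean utility."""
        reward = float(correct) - (self.ai_cost if arm == "human_plus_ai" else 0.0)
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

For a user who is already more accurate alone than with the AI, the running means separate after enough rounds and the policy learns to withhold assistance for that individual, which is the personalization behavior the article describes.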

“Each person uses AI differently,” Bhatt explained, “and our system quickly identifies your unique pattern of needing or rejecting help, making decisions about when to offer AI assistance or withhold it.” This approach, which the researchers tested rigorously through simulations and human studies, demonstrated significant improvement over fixed, one-size-fits-all policies.

Their findings align closely with two recent studies by Bhatt and colleagues. One study, titled “When Should We Orchestrate Multiple Agents?”, explores the complexities and costs associated with using multiple AI agents simultaneously, emphasizing the importance of precisely choosing when each agent interacts with a user. Another, “Modulating Language Model Experiences through Frictions,” examines how adding deliberate obstacles — like requiring extra clicks — can effectively limit unnecessary AI interactions and encourage independent thinking.

Interestingly, the name Modiste was inspired by the modiste, a dressmaker in the popular show Bridgerton who expertly adjusts garments to fit perfectly. Similarly, Modiste “weaves AI into our lives in a reasonable, measured way,” Bhatt said.

Yet despite Modiste’s effectiveness, Bhatt cautions about potential unintended consequences. When users were told they had greater expertise than the AI system, they unexpectedly cut back on AI interactions even in areas where the AI could have helped. “Understanding how human behavior might generalize beyond intended areas is a critical next step,” Bhatt said.

As personalized AI assistance becomes more practical, Bhatt believes that the insights from this research could shape the future of human-AI interactions profoundly. “Ultimately, the real question isn’t whether AI can help — it’s figuring out exactly when,” he concluded.

By Stephen Thomas
