AI Assistants in Encrypted Messaging: Moving Too Fast in the Dark?

NYU Center for Data Science
3 min read · Mar 21, 2025


Major technology companies are rapidly integrating AI systems that can process encrypted message content, yet users and experts often don’t know exactly how these systems work or what happens to their private data. This lack of transparency concerns CDS-affiliated Assistant Professor of Computer Science and Adjunct Professor of Law, Sunoo Park, who, along with CDS Professor Kyunghyun Cho and an interdisciplinary team of researchers, has analyzed the growing tensions between AI features and end-to-end encrypted messaging.

Their paper, How To Think About End-To-End Encryption and AI: Training, Processing, Disclosure, and Consent, examines various ways AI assistants could be implemented in encrypted messaging apps and evaluates which approaches actually preserve the privacy guarantees of end-to-end encryption (E2EE).

“The promise of end-to-end encryption is that users’ data will remain accessible only to the users concerned, and will not be accessible by any platforms they may be communicating with,” Park said. “When integrating AI assistants with end-to-end encrypted applications, there are additional privacy concerns about whether the AI assistants will be processing any of that specially protected and encrypted data.”

The research comes at a critical time, as major platforms rush to add AI capabilities to their messaging apps. Since the paper’s initial release in December 2024, Samsung has launched new AI integrations that can process end-to-end encrypted content, joining Apple Intelligence as an example of this growing trend. WhatsApp users can access Meta AI as a standalone chat or summon it within conversations using “@MetaAI,” while Samsung’s Galaxy AI offers features that can process content from encrypted applications like Google Messages.

The team, which included Mallory Knodel, Daniella Ferrari, Jacob Leiken, Betty Li Hou, Derek Yen, and Sam de Alfaro, all at NYU, and Andrés Fábrega at Cornell, developed a framework for evaluating whether AI features can coexist with encryption’s privacy guarantees. Their analysis led to four key recommendations:

First, using encrypted content to train AI models that are shared between multiple users fundamentally breaks encryption’s promises. As Park explained, “When data goes into a model for training, the model itself becomes a function dependent on the private data that went in. That private data then influences outputs that could face the public or other users — which violates the guarantees of end-to-end encryption.”

Second, any processing of encrypted content should happen locally on users’ devices whenever possible. If processing must happen off-device, it should be done in a way that no third party can access the message content, since otherwise the encryption guarantee is broken, and the data should be used only to fulfill that specific user’s requests (a brief sketch of the on-device approach appears after these recommendations).

Third, messaging providers should not make broad claims about providing encryption if their default settings allow third parties to access encrypted content for AI features.

Finally, AI features in encrypted apps should be off by default and activated only with explicit user consent. The researchers emphasize that obtaining meaningful consent requires careful consideration of factors like opt-in and opt-out mechanisms and group chat dynamics.
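To make the second and fourth recommendations concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the paper or from any platform’s actual implementation, and every name in it (Message, LocalAssistant, maybe_summarize, and so on) is hypothetical. It shows one way a messaging client could keep an AI feature off until the user explicitly opts in, and hand decrypted content only to a model running on the device.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    ciphertext: bytes  # content as received over the end-to-end encrypted channel

def decrypt_on_device(msg: Message, key: bytes) -> str:
    """Stand-in for the app's existing E2EE decryption, which already runs locally.
    The real primitive (e.g., a double-ratchet session key) is not modeled here."""
    return msg.ciphertext.decode("utf-8")  # toy placeholder, not real cryptography

class LocalAssistant:
    """Placeholder for an on-device model; plaintext stays in device memory."""
    def summarize(self, text: str) -> str:
        return text if len(text) <= 80 else text[:77] + "..."

def maybe_summarize(msg: Message, key: bytes,
                    user_opted_in: bool = False,             # off by default
                    assistant: Optional[LocalAssistant] = None) -> Optional[str]:
    # Consent gate: without an explicit opt-in, the feature does nothing,
    # and no plaintext is ever handed to an AI component.
    if not user_opted_in or assistant is None:
        return None
    # On-device path: decryption and AI processing both happen locally;
    # the plaintext is never serialized or sent to a remote service.
    plaintext = decrypt_on_device(msg, key)
    return assistant.summarize(plaintext)

# Example: without opting in, no AI processing occurs at all.
msg = Message(ciphertext=b"dinner at 7?")
assert maybe_summarize(msg, key=b"k") is None
print(maybe_summarize(msg, key=b"k", user_opted_in=True, assistant=LocalAssistant()))
```

The sketch deliberately leaves out the harder case the researchers also address: if processing must leave the device, their recommendation would require techniques that keep the content inaccessible to any third party, which this toy example does not attempt to model.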

“A large part of our motivation was concern about the possible risks of integrating AI systems rapidly with end-to-end encrypted systems,” Park noted. “End-to-end encrypted messaging has become the default way that many communications happen, both mundane and sensitive. Those established protections could be undermined by certain configurations of AI assistants being integrated at large scale.”

The researchers aren’t claiming that platforms are currently misusing encrypted data in AI systems. Rather, they warn that the rapid integration of AI features, combined with limited public documentation about how these systems work, creates risks that need to be carefully examined before they potentially compromise the privacy protections that billions of users rely on.

The authors are also getting the word out about this work themselves. Knodel and Fábrega have written an article for Tech Policy Press, while Fábrega, Ferrari, Leiken, Hou, Yen, and de Alfaro have published a post on Park’s lab blog.

By Stephen Thomas
