Hackers can read private AI-assistant chats even though they’re encrypted

Ars Technica

Researchers have discovered a vulnerability that lets attackers decipher users' private conversations with AI assistants. By exploiting a side channel present in all major AI assistants except Google Gemini, then refining the raw results with large language models, attackers can infer the specific topic of 55% of captured responses and reconstruct responses with perfect word accuracy 29% of the time. The attack requires only a passive adversary-in-the-middle position, so it bypasses the encryption AI assistants use and can expose sensitive information. The side channel arises from the token-length sequence: assistants stream responses one token at a time, and because the encryption used preserves payload length, the size of each encrypted packet reveals the character length of the token inside it.
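To see why streaming leaks token lengths, here is a minimal sketch, assuming one token per packet and a fixed, hypothetical per-packet ciphertext overhead; real traffic analysis would need protocol-specific parsing, and every name and number below is illustrative rather than taken from the research:

    # Minimal sketch: recovering a token-length sequence from observed
    # encrypted packet sizes. OVERHEAD is a hypothetical constant for
    # framing/header bytes added to each token packet; real protocols vary.
    OVERHEAD = 21

    def token_length_sequence(packet_sizes):
        # Length-preserving encryption means that, with one token per
        # packet, payload size minus overhead equals the token's length.
        return [size - OVERHEAD for size in packet_sizes]

    # Hypothetical payload sizes captured by a passive adversary-in-the-middle:
    observed = [23, 26, 22, 29]
    print(token_length_sequence(observed))  # -> [2, 5, 1, 8]

In the reported attack, sequences like this are then fed to LLMs trained to reconstruct plausible assistant responses whose token lengths match, which is what produces the 55% and 29% accuracy figures above.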
