News Post

Hackers can read private AI assistant chats even though they’re encrypted

Ars Technica


Researchers have discovered a technique that allows an attacker to decipher responses from AI assistants with high accuracy. By exploiting a side channel present in major AI assistants (excluding Google Gemini), the attack can infer the specific topic of 55% of captured responses, often recovering the precise wording. The attack is passive: it requires only the ability to observe the encrypted data packets traveling between the assistant and the user. Because responses are streamed token by token, the sequence of token lengths is exposed in real time through packet sizes, and that sequence is enough to reconstruct much of the plaintext. The researchers attribute the leak to the way OpenAI and other chatbot providers transmit tokens as they are generated, leaving private chats vulnerable to interception despite encryption.
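A minimal sketch of the leak described above (not the researchers' code): if each streamed token is sent in its own encrypted record with a constant overhead, a passive observer can recover the token-length sequence from ciphertext sizes alone. The 16-byte overhead and the sample response are illustrative assumptions.

```python
# Hypothetical constant bytes added per encrypted record
# (e.g. an authentication tag of fixed size).
OVERHEAD = 16

def observe_packet_sizes(tokens):
    """What a passive network observer sees: one ciphertext per streamed token."""
    return [len(tok.encode("utf-8")) + OVERHEAD for tok in tokens]

def infer_token_lengths(packet_sizes):
    """Recover the token-length sequence by subtracting the fixed overhead."""
    return [size - OVERHEAD for size in packet_sizes]

# Illustrative streamed response, split into tokens.
response = ["I", " cannot", " share", " your", " medical", " records"]
sizes = observe_packet_sizes(response)
lengths = infer_token_lengths(sizes)
print(lengths)  # → [1, 7, 6, 5, 8, 8]
```

The recovered length sequence is the raw material of the attack; the researchers then used language models to guess likely token sequences matching those lengths. Batching tokens or padding records to a uniform size would close this channel.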
