News Post

AI chatbots can infer an alarming amount of info about you from your responses

Ars Technica

· 8m

Chatbots like ChatGPT can infer sensitive personal information about users, including race, location, and occupation, from seemingly innocuous conversations. Because the capability stems from how these models are trained on broad swaths of web content, it is difficult to mitigate. Researchers warn that scammers could exploit it to harvest data from unsuspecting users, while companies could use it for targeted advertising. The study tested language models from OpenAI, Google, Meta, and Anthropic, and the researchers alerted those companies to the problem. The findings highlight concerns about inadvertent leakage of personal information and the need for stronger privacy safeguards.
