Mohammed Zaid

Building-HatchUp.ai • 21d

OpenAI researchers have discovered hidden features within AI models that correspond to misaligned "personas," revealing that fine-tuning models on incorrect information in one area can trigger broader unethical behaviors through what they call "emergent misalignment," as reported by TechCrunch.

More like this


Vivek kumar

On medial • 5m

Artificial intelligence models like ChatGPT are designed with safeguards to prevent misuse, but the possibility of using AI for hacking cannot be ruled out. Countries or malicious actors could create AI systems specifically for unethical purposes, su…
