Stealth • 2m
This happens when you give a lot of intent to AI models. Say you know a tomato is a fruit, but you also know that fruit isn't supposed to go in a fruit salad. In the same way, the AI has knowledge of intent but hides it, because somewhere during training, a lot of data about hiding information or negativity might have created this alignment. I'm assuming it's purely because the data was analysed before training, but maybe the teams missed the intent behind that data.