Ilya, the OpenAI mission, and his new "safe" AI

Ilya Sutskever, co-founder of OpenAI, is launching a new venture called Safe Superintelligence, dedicated to building artificial general intelligence (AGI) with safety treated as seriously as "nuclear safety." The move comes after Sutskever's controversial involvement in the ousting of Sam Altman from OpenAI last fall, a decision he later reversed under internal pressure. Safe Superintelligence aims to stay free of commercial pressures, focusing solely on developing AGI that benefits humanity without the distractions of a competitive market. Co-founded with AI expert Daniel Gross and former OpenAI colleague Daniel Levy, the startup has yet to secure investors, a challenge given the high cost of building and training AI models. (Continuing in the thread 🧵)