🧠🍏 Apple Just Dropped a Bomb on AI Reasoning: "The Illusion of Thinking"

Apple's latest research paper is making waves, and not the usual "faster chip, better camera" kind. In The Illusion of Thinking, Apple's AI team basically said: "Yeah… large language models look smart, but when things get tough, they fall apart."

📉 What They Found

Apple tested AI models, including dedicated "reasoning" models, on logic-based puzzles like Tower of Hanoi and River Crossing. 🔎 At first, the models solved them like pros. 💥 But as complexity increased, even with enough tokens and time, performance crashed hard. Imagine writing a perfect essay… but failing basic math when the teacher changes the numbers.

🧠 The Illusion?

These models sound like they're reasoning. They "think out loud" with step-by-step logic. But Apple caught them bluffing: confidently walking in the wrong direction. They're not thinking. They're just really good at looking like they are.

😬 The Backlash

Another group hit back: "Hold up, Apple. Your puzzles weren't fair. Some were literally impossible, or too long for the AI to even finish answering." So now we have… 🧠 The Illusion of Thinking vs 🪞 The Illusion of the Illusion of Thinking.

⚠️ Why This Matters

Don't trust an AI just because it explains itself. More logic ≠ better answers. AI still has a long way to go before it truly reasons like a human. 🚨

🌚🌝 Apple just exposed a big hole in the hype. Most AI models aren't thinking; they're winging it. And they sound smarter than they are.
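Why does complexity matter so much here? Tower of Hanoi, one of the puzzles mentioned above, is a good illustration: the optimal solution for n disks takes 2^n − 1 moves, so the answer length doubles with every added disk, and even just writing out a correct solution quickly strains any fixed output budget. This little sketch (not from Apple's paper, just standard recursion) shows the blow-up:

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the optimal move list for n disks: move n-1 disks aside,
    move the largest disk, then move the n-1 disks on top of it."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

# Optimal move count is 2**n - 1, so it doubles with each extra disk.
for n in (3, 7, 15):
    print(n, len(hanoi_moves(n)))  # prints 3 7, then 7 127, then 15 32767
```

A 3-disk puzzle takes 7 moves; a 15-disk one takes 32,767. That exponential wall is exactly where both sides of the debate are arguing: is the crash a failure to reason, or just a failure to fit the answer?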