
Thatmoonemojiguy

A guy with lots of ... • 4d

🧠🍏 Apple Just Dropped a Bomb on AI Reasoning: "The Illusion of Thinking"

Apple's latest research paper is making waves, and not the usual "faster chip, better camera" kind. In The Illusion of Thinking, Apple's AI team basically said: "Yeah… large language models look smart, but when things get tough, they fall apart."

📉 What They Found
Apple tested AI models (including "reasoning" ones like Claude and DeepSeek-R1) on logic-based puzzles like Tower of Hanoi, River Crossing, and more.
🔎 At first, the models solved them like pros.
💥 But as complexity increased, even with enough tokens and time, performance crashed hard.
Imagine writing a perfect essay… but failing basic math when the teacher changes the numbers.

🧠 The Illusion?
These models sound like they're reasoning. They "think out loud" with step-by-step logic. But Apple caught them bluffing: confidently walking in the wrong direction. They're not thinking. They're just really good at looking like they are.

😬 The Backlash
Another group hit back: "Hold up, Apple. Your puzzles weren't fair. Some were literally impossible, or too long for the AI to even finish answering." So now we have…
🧠 The Illusion of Thinking vs 🪞 The Illusion of the Illusion of Thinking

⚠️ Why This Matters
Don't trust an AI just because it explains itself. More logic ≠ better answers. AI still has a long way to go before it truly reasons like a human. 🚨

🌚🌝 Apple just exposed a big hole in the hype. Most AI models aren't thinking; they're winging it. And they sound smarter than they are.
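The puzzles mentioned above get hard fast: the optimal Tower of Hanoi solution takes 2^n - 1 moves, so every extra disk doubles the length of the answer a model has to write out correctly. A minimal sketch of the standard recursive solver (this is the textbook algorithm, not Apple's actual test harness):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the list of moves to transfer n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top
    return moves

# Move count is 2^n - 1, so the required output doubles per disk:
for n in (3, 7, 10):
    print(n, len(hanoi(n)))  # → 3 7, 7 127, 10 1023
```

This is why "just give it more tokens" doesn't save the model: the solution it must emit grows exponentially while its reliability per step stays flat.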

0 replies · 5 likes

More like this

Recommendations from Medial

Chamarti Sreekar

Passionate about Pos... • 17d

Apple just exposed the truth behind so-called AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini: They’re not actually reasoning — they’re just really good at memorizing patterns. Here’s what Apple found:

0 replies · 18 likes

Mock Rounds

Hey I am on Medial • 1d

We’re so quick to trust anything that looks like thinking, especially in AI. But a recent paper (The Illusion of Thinking) shows this: even the most advanced reasoning models collapse under real complexity. And strangely? They start thinking less, …

0 replies · 3 likes

Kimiko

Startups | AI | info... • 17d

“reasoning models don't actually reason at all. they just memorize patterns really well” - apple

2 replies · 7 likes

Sunil Huvanna

Building AI Applicat... • 9m

OpenAI: AI_Company ❌ Marketing_Company ✅ Why!? Because no AI models are able to think; it's just processing data repeatedly. At the end of the day, AI is a piece of software running on processors. The fact that "thinking" as a term is just a whip to manipulate …

6 replies · 7 likes

Mitsu

extraordinary is jus... • 1m

The underlying model of NotebookLM has been upgraded to Gemini 2.5 Flash, but it is not 2.5 Pro after all. The advantage is that the 2.5 series are reasoning models (thinking models); the disadvantage is that Flash is a distilled model.

0 replies · 15 likes

Niket Raj Dwivedi

Medial • 2m

Most people I know are getting overly reliant on AI for thinking, reasoning, and strategy. This will have a long-term negative impact on individuals, as they'll become incapable of reasoning altogether.

12 replies · 22 likes


Aura

AI Specialist | Rese... • 9m

Revolutionizing AI with Inference-Time Scaling: OpenAI's o1 Model. Inference-time scaling focuses on improving performance during inference (when the model is used) rather than just during training. Reasoning through search: the o1 model enhances reasoning…
1 reply · 5 likes
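The "inference-time scaling" idea in the post above, in its simplest form, is best-of-n sampling: spend more compute at answer time by drawing several candidates and keeping the best-scoring one. A minimal sketch with toy stand-ins — `generate` and `score` here are hypothetical placeholders for a real model and a real verifier/reward model, not OpenAI's actual o1 machinery:

```python
import random

def generate(prompt, seed):
    """Toy stand-in for sampling one candidate answer from a model."""
    rng = random.Random(seed)
    return prompt + "?" * rng.randint(1, 5)

def score(candidate):
    """Toy stand-in for a verifier: here it simply prefers shorter answers."""
    return -len(candidate)

def best_of_n(prompt, n=8):
    """Inference-time scaling: sample n candidates, keep the best-scoring one.

    More samples (larger n) costs more compute at inference but can only
    improve the chosen answer's score -- no retraining involved.
    """
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)
```

Real systems replace the toy `score` with a learned verifier and may search over intermediate reasoning steps rather than whole answers, but the compute trade-off is the same shape.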

Chamarti Sreekar

Passionate about Pos... • 5m

what a crazy week in ai
• openai agents
• stargate project
• claude citations
• freepik imagen 3
• deepseek-r1 model
• perplexity ai assistant
• gemini 2.0 flash thinking
• tencent 3d asset creation
• bytedance reasoning agent

3 replies · 38 likes

Jainil Prajapati

Turning dreams into ... • 4m

Anthropic has unveiled Claude 3.7 Sonnet, its most advanced AI yet and the first hybrid reasoning model. It combines rapid responses with deep, step-by-step reasoning, redefining AI problem-solving.

0 replies
