3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯 🤯

LLM: Llama 3
Datasets: MATH-500 & AIME-2024

This comes from research on compute-optimal Test-Time Scaling (TTS).

Recently, OpenAI o1 showed that Test-Time Scaling (TTS) can enhance the reasoning capabilities of LLMs by allocating additional computation at inference time, which improves performance. In simple terms: "think slowly with a long Chain-of-Thought."

The basic idea: generate multiple outputs for each problem and use a reward model to pick the best one, which even lets a 0.5B LLM outperform GPT-4o. But it costs more computation. To make it efficient, they use search-based methods with reward-aware, compute-optimal TTS (see the sketch at the end of this post).

Paper: Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling
🔗 https://arxiv.org/abs/2502.06703

#Openai #LLM #GPT #GPTo1 #deepseek #llama3
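To make "generate many, pick the best" concrete, here is a minimal Best-of-N sketch of reward-guided test-time scaling. The `generate_candidates` and `score_with_prm` helpers are hypothetical placeholders standing in for a policy LLM sampler and a process reward model; this is an illustration of the idea, not the paper's actual implementation.

```python
# Minimal Best-of-N sketch of reward-guided Test-Time Scaling (TTS).
# The policy-model sampler and reward-model scorer are hypothetical stand-ins.
from typing import Callable, List


def best_of_n(
    prompt: str,
    generate_candidates: Callable[[str, int], List[str]],  # policy LLM sampler (hypothetical)
    score_with_prm: Callable[[str, str], float],           # reward-model scorer (hypothetical)
    n: int = 16,
) -> str:
    """Sample N chain-of-thought answers and return the highest-scoring one."""
    candidates = generate_candidates(prompt, n)             # spend extra compute at inference time
    scores = [score_with_prm(prompt, c) for c in candidates]
    return candidates[scores.index(max(scores))]            # keep only the best candidate


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    import random

    def toy_generate(prompt: str, n: int) -> List[str]:
        return [f"answer attempt {i}" for i in range(n)]

    def toy_score(prompt: str, answer: str) -> float:
        return random.random()

    print(best_of_n("Solve: 2 + 2 = ?", toy_generate, toy_score, n=4))
```

The paper goes further than plain Best-of-N, choosing among search strategies (e.g. beam search variants) per problem and per policy/reward model pair to stay compute-optimal, but the selection principle is the same as above.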