Hey I am on Medial • 9m
Another open-source model has arrived, and it's even better than DeepSeek-V3. The Allen Institute for AI just introduced Tülu 3 (405B), a post-trained fine-tune of Llama 3.1 405B that outperforms DeepSeek-V3.


Hey I am on Medial • 1y
Huge announcement from Meta. Welcome, Llama 3.1! This is all you need to know about it. The new models: the Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models
Python Developer ... • 8m
A 3B LLM outperforms a 405B LLM. Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1. LLM: Llama 3. Datasets: MATH-500 & AIME-2024. This was achieved in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 shows that Test-
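The post above only names compute-optimal Test-Time Scaling; a common concrete form of it is best-of-N sampling, where a small model spends extra inference compute drawing many candidate answers and a reward model picks the best one. Below is a minimal, hedged sketch of that idea; the generator and reward function are toy stand-ins invented for illustration, not the actual models or verifiers from the cited research.

```python
# Toy sketch of test-time scaling via best-of-N sampling.
# generate_candidates and reward are hypothetical stand-ins:
# a real setup would sample text from a small LLM and score it
# with a trained process/outcome reward model.
import random

def generate_candidates(prompt, n, seed=0):
    """Stand-in for sampling n answers from a small LLM."""
    rng = random.Random(seed)
    # Toy task: guess the number 42; a real model would decode text.
    return [rng.randint(0, 100) for _ in range(n)]

def reward(prompt, answer):
    """Stand-in for a reward model: closer to the true answer scores higher."""
    return -abs(answer - 42)

def best_of_n(prompt, n):
    """Spend more test-time compute (larger n) to pick a better answer."""
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda a: reward(prompt, a))

# With a fixed seed, the n=64 candidate pool is a superset of the n=4 pool,
# so the larger inference budget can only match or improve the chosen answer.
small_budget = best_of_n("What is 6*7?", 4)
large_budget = best_of_n("What is 6*7?", 64)
```

The design point is simply that quality scales with the sampling budget N, which is how a small model can close the gap to a much larger one on benchmarks like MATH-500.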
Think Different • 1y
Breaking News: Meta Unleashes Its Biggest Open-Source AI Model Yet - Llama 3.1 with 405 Billion Parameters! Meta has taken a giant leap forward in AI development by releasing Llama 3.1, their most powerful open-source AI model to date, boasting an
startups, technology... • 1y
Meta has introduced the Llama 3.1 series of large language models (LLMs), featuring a top-tier model with 405 billion parameters, as well as smaller variants with 70 billion and 8 billion parameters. Meta claims that Llama 3.1 matches the performance