Startups | AI | info... • 3m
Meta may delay the release of its Behemoth model because it hasn't been able to improve Llama enough to justify a public release. Is that surprising, given that Yann LeCun, Meta's Chief AI Scientist, doesn't believe in LLMs and is more focused on JEPA? Two possibilities: either Yann LeCun is right and LLMs are hitting a plateau, or Meta simply doesn't have the right talent to build the best LLMs. I guess we'll just have to see what OpenAI, Anthropic, Google, and DeepSeek release next.
Connecting Heart And... • 1y
In London, Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced its new LLM, Llama 3. Meta said that Llama 3 will be released next month, in May. Llama 3, a new LLM from Meta, will be an update
"Finding ... • 1y
AI's understanding of reality is superficial, says AI 'godfather' Yann LeCun: Meta Chief AI Scientist Yann LeCun, considered a godfather of AI, said AI's understanding of reality is "very superficial". "We're easily fooled into thinking [AI systems] are inte
Hey I am on Medial • 1y
Hello everyone, do you think Meta's plan to dominate the AI sector is a different kind of play: first make it available for free, and once it is fully trained, turn it into a business? 1. Meta doesn't make its things public easily. 2. Meta avails the
Never say die • 2m
Meta is offering significant compensation packages to AI researchers, with one reported offer exceeding $10 million annually. The company is aggressively pursuing top AI talent, facing challenges in recruiting from other major tech firms. Meta's acti
Hey I am on Medial • 1y
Back to Medial ♥️, after a tough 4-5 days. Let's discuss Meta's Llama 3 model today! Features: • Llama 3 models come in two sizes: 8 billion parameters and 70 billion parameters. • Each size has a base (pre-trained) and an instruction-tuned
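The base vs. instruction-tuned split is the practical detail here: the instruct variant expects a chat-formatted prompt, while the base model is a plain next-token predictor. Below is a minimal sketch, assuming the Hugging Face transformers library and the gated Hub checkpoint meta-llama/Meta-Llama-3-8B-Instruct (names not taken from the post above), of one way to load the 8B instruct model and apply its chat template.

# Minimal sketch (assumed checkpoint name; access is gated on the Hugging Face Hub)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the model across available devices; requires the accelerate package
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The instruction-tuned variant expects a chat-style prompt; the base (pre-trained)
# variant has no chat template and is used for plain text completion.
messages = [{"role": "user", "content": "Summarise Llama 3 in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))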
startups, technology... • 1y
Meta has introduced the Llama 3.1 series of large language models (LLMs), featuring a top-tier model with 405 billion parameters, as well as smaller variants with 70 billion and 8 billion parameters. Meta claims that Llama 3.1 matches the performance
Hey I am on Medial • 1y
Weekly AI News Roundup: 1. NumPy 2.0: A Major Milestone in Data Science and Scientific Computing. After 18 years, NumPy 2.0 is here, bringing significant enhancements and improvements! https://numpy.org/news/ 2. Abacus Partne
Hey I am on Medial • 1y
Huge announcement from Meta. Welcome, Llama 3.1! This is all you need to know about it: The new models: - The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models
Experimenting On lea... • 10d
Meta's big $14.3B deal with Scale AI is in trouble. Top Scale people are leaving, and Meta is already working with other companies for AI data. Why? Scale uses random crowd workers, but AI needs expert data to be world-class. Meta knows this and is s