Ride • 1y
https://arxiv.org/pdf/2404.07143.pdf Google has dropped possibly THE most important and future-defining AI paper, under 12 pages. Models can now have infinite context.
AI Deep Explorer | f... • 7m
Want to learn AI the right way in 2025? Don't just take courses. Don't just build toy projects. Look at what's actually being used in the real world. The most practical way to really learn AI today is to follow the models that are shaping the industry.
building hatchup.ai • 3m
DeepSeek has quietly dropped V3.1, a 685B-parameter open-source LLM, on Hugging Face: 128K-token context, hybrid reasoning/chat/coding, multi-precision support. Early benchmarks: 71.6% on Aider coding, on par with proprietary models and 68× cheaper.

Software Engineer • 3m
Recently learned how LLMs work and the maths behind attention layers and transformers, continuously trying to keep up with the rapid development in the AI space, getting so overwhelmed 🥲 Now Google came with "Mixture-of-Agents (MoA): A Breakthrough in LLM Performance"
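For anyone in the same boat: the maths behind an attention layer is smaller than it looks. Here is a minimal NumPy sketch of scaled dot-product attention — the core formula softmax(QKᵀ/√d_k)·V — not any particular model's implementation, just the textbook operation on toy data:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

A real transformer layer just adds learned projection matrices for Q, K, V, multiple heads in parallel, and a feed-forward block on top of this.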
Inquisitive • 2m
At-risk jobs by Microsoft, from "Working with AI: Measuring the Occupational Implications of Generative AI": 1. Interpreters and Translators 2. Historians 3. Pas
Founder Snippetz Lab... • 4m
I didn't think I'd enjoy reading 80+ pages on training AI models. But this one? I couldn't stop. Hugging Face dropped a playbook on how they train massive models across 512 GPUs, and it's insanely good. Not just technical stuff… it's like reading a
YouTube • 1y
Researchers at Google DeepMind introduced Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. The paper explores adapting image generative models to different datasets. Instead
Founder of Friday AI • 4m
Introducing the World's First Adaptive Context Window for Emotional AI. Built by the Friday AI Core Technologies Research Team. Most LLMs today work with static memory: 8K, 32K, maybe 128K tokens. But human conversations aren't static. Emotions evolve
Thatmoonemojiguy 🌙 • 6m
Apple 🍎 is planning to integrate AI-powered heart rate monitoring into AirPods 🎧 Apple's newest research suggests that AirPods could soon double as AI-powered heart monitors. In a study published on May 29, 2025, Apple's team tested six advanced AI
Hey I am on Medial • 8m
Retrieval-Augmented Generation (RAG) is a GenAI framework that enhances large language models (LLMs) by incorporating information from external knowledge bases, improving accuracy, relevance, and reliability of generated responses. Here's a more det
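The retrieve-then-generate loop the post describes fits in a few lines. A minimal sketch, assuming a toy in-memory document list and a bag-of-words retriever (real systems use embedding models and vector stores; the documents and helper names here are illustrative only):

```python
import math
import re
from collections import Counter

# Toy external knowledge base (hypothetical documents, for illustration)
DOCS = [
    "RAG retrieves documents from an external knowledge base.",
    "Retrieved passages are appended to the prompt before generation.",
    "Grounding answers in retrieved text reduces hallucinations.",
]

def bow(text):
    """Bag-of-words term counts, lowercased, punctuation stripped."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    """Prepend retrieved context so the LLM can ground its answer."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG reduce hallucinations?"))
```

The assembled prompt is what gets sent to the LLM: the retrieval step supplies up-to-date facts the model was never trained on, which is where the accuracy and reliability gains come from.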