DK

Ride • 1y

https://arxiv.org/pdf/2404.07143.pdf Google has dropped possibly THE most important and future defining AI paper under 12 pages. Models can now have infinite context.
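The linked paper is Google's Infini-attention work ("Leave No Context Behind"). Its core trick is a fixed-size compressive memory that is updated segment by segment and blended with ordinary local attention, so memory cost stays constant however long the stream gets. A toy numpy sketch of that idea, with simplified linear-attention update rules and shared Q/K/V projections (an illustration, not the paper's full method):

```python
import numpy as np

def elu_plus_one(x):
    # Positive nonlinearity used for the linear-attention memory terms
    return np.where(x > 0, x + 1.0, np.exp(x))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def infini_attention(segments, d, beta=0.5):
    """Process token segments with a fixed-size compressive memory.

    Memory stays O(d*d) no matter how many segments stream in,
    which is the sense in which context becomes 'infinite'.
    """
    M = np.zeros((d, d))   # compressive memory
    z = np.zeros((d, 1))   # normalization accumulator
    outputs = []
    for seg in segments:           # seg: (n, d) already-projected tokens
        Q = K = V = seg            # toy setup: shared projections
        sQ, sK = elu_plus_one(Q), elu_plus_one(K)
        # retrieve what earlier segments wrote into memory
        A_mem = (sQ @ M) / (sQ @ z + 1e-6)
        # standard scaled dot-product attention within the segment
        A_dot = softmax(Q @ K.T / np.sqrt(d)) @ V
        # a gate blends long-term memory with local attention
        outputs.append(beta * A_mem + (1 - beta) * A_dot)
        # write this segment into the compressive memory
        M = M + sK.T @ V
        z = z + sK.sum(axis=0, keepdims=True).T
    return np.concatenate(outputs)
```

The point of the sketch: the loop can run over arbitrarily many segments while `M` and `z` never grow, unlike a KV cache.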


Recommendations from Medial

AI Engineer

AI Deep Explorer | f... • 5m

Want to learn AI the right way in 2025? Don’t just take courses. Don’t just build toy projects. Look at what’s actually being used in the real world. The most practical way to really learn AI today is to follow the models that are shaping the industry…


Mohammed Zaid

Shitposter of Medial • 1m

DeepSeek has quietly dropped V3.1, a 685B-parameter open-source LLM, on Hugging Face: 128K-token context, hybrid reasoning/chat/coding, multi-precision support. Early benchmarks: 71.6% on Aider coding, on par with proprietary models and 68× cheaper.


Keval Rajpal

Software Engineer • 1m

Recently learned how LLMs work and the maths behind attention layers and transformers; continuously trying to keep up with the rapid developments in the AI space, and getting so overwhelmed 🥲 Now Google has come out with "Mixture-of-Agents (MoA): A Breakthrough in LLM Performance"…
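For anyone else working through the attention maths, the core computation is shorter than it looks: softmax(QKᵀ/√d_k)V. A minimal single-head numpy sketch, with no masking or multi-head projections (illustrative only):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                        # weighted average of value rows

# Each output row is a convex combination of the rows of V,
# so if every value row is all ones, every output row is all ones too.
out = scaled_dot_product_attention(np.random.randn(3, 4),
                                   np.random.randn(5, 4),
                                   np.ones((5, 4)))
```

Everything else in a transformer layer (multiple heads, projections, residuals) is wrapped around this one operation.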


Rishi Chavan

Inquisitive • 18d

AI-risk jobs by Microsoft, from "Working with AI: Measuring the Occupational Implications of Generative AI": 1. Interpreters and Translators 2. Historians 3. Pas…


Pulakit Bararia

Founder Snippetz Lab... • 2m

I didn’t think I’d enjoy reading 80+ pages on training AI models. But this one? I couldn’t stop. Hugging Face dropped a playbook on how they train massive models across 512 GPUs — and it’s insanely good. Not just technical stuff… it’s like reading a…


Shuvodip Ray

YouTube • 1y

Researchers at Google DeepMind introduced Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. The paper explores adapting image generative models to different datasets. Instead…


Swamy Gadila

Founder of Friday AI • 2m

Introducing the World's First Adaptive Context Window for Emotional AI Built by the Friday AI Core Technologies Research Team Most LLMs today work with static memory — 8K, 32K, maybe 128K tokens. But human conversations aren't static. Emotions evolve…
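The post doesn't share implementation details, but the general idea of an adaptive context window can be sketched: retain more history when some signal (here, a toy per-turn "emotional intensity" score) runs high, and less when it runs low. This is an entirely hypothetical heuristic, not Friday AI's actual method:

```python
def adaptive_window(messages, intensities, base=8, max_len=32):
    """Keep more conversation history when recent intensity is high.

    messages: list of conversation turns (oldest first).
    intensities: one score in [0, 1] per turn.
    Hypothetical illustration; not any shipped system's algorithm.
    """
    if not messages:
        return []
    recent = intensities[-3:]                     # look at the last few turns
    signal = sum(recent) / len(recent)            # average recent intensity
    size = base + int(signal * (max_len - base))  # interpolate window size
    return messages[-size:]                       # keep only that much history
```

A calm conversation keeps the `base` window; an emotionally charged one stretches toward `max_len`, trading token budget for continuity.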



Siddharth K Nair

Thatmoonemojiguy 🌝 • 4m

Apple 🍎 is planning to integrate AI-powered heart rate monitoring into AirPods 🎧 Apple's newest research suggests that AirPods could soon double as AI-powered heart monitors. In a study published on May 29, 2025, Apple's team tested six advanced A…

