Software Engineer • 6d
Recently learned how LLMs work and the math behind attention layers and transformers; continuously trying to keep up with the rapid development in the AI space and getting so overwhelmed 🥲 Now there's a new one: "Mixture-of-Agents (MoA): A Breakthrough in LLM Performance"
Link - https://www.marktechpost.com/2025/08/09/mixture-of-agents-moa-a-breakthrough-in-llm-performance/
Original research paper - https://arxiv.org/pdf/2406.04692
Happy Learning 👍
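The core MoA idea is simple: several "proposer" LLMs answer in parallel, each later layer sees the previous layer's answers as extra context, and one "aggregator" LLM synthesizes the final response. A minimal sketch of that flow, assuming a placeholder `call_llm` client (not the paper's actual implementation):

```python
# Minimal sketch of Mixture-of-Agents (MoA): layers of proposer LLMs answer in
# parallel, each layer conditioned on the previous layer's answers, and a final
# aggregator LLM synthesizes them. call_llm is a placeholder for any chat client.

def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def moa_answer(question: str, proposers: list[str], aggregator: str, num_layers: int = 2) -> str:
    responses: list[str] = []
    for _ in range(num_layers):
        prompt = question
        if responses:
            # Later layers see the previous layer's answers as auxiliary context.
            context = "\n\n".join(f"Response {i + 1}: {r}" for i, r in enumerate(responses))
            prompt = f"{question}\n\nPrevious responses:\n{context}"
        responses = [call_llm(m, prompt) for m in proposers]
    # Final step: one aggregator model synthesizes all proposals into one answer.
    synthesis_prompt = question + "\n\nSynthesize the best answer from these responses:\n" + "\n\n".join(responses)
    return call_llm(aggregator, synthesis_prompt)
```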
Python Developer 💻 ... • 5m
A 3B LLM outperforms a 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯 🤯
LLM: Llama 3
Datasets: MATH-500 & AIME-2024
This comes from research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 showed that Test-…
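The simplest intuition for test-time scaling is best-of-N sampling: spend extra inference compute by drawing several candidate solutions from the small model and keeping the one a verifier/reward model scores highest. A rough sketch, assuming placeholder `generate` and `score` helpers (the paper's compute-optimal strategy is more involved than this):

```python
# Sketch of best-of-N test-time scaling: sample N candidate solutions from a
# small policy model and let a verifier / reward model pick the best one.
# generate() and score() are placeholders, not any specific model's API.

def generate(problem: str, temperature: float = 0.8) -> str:
    raise NotImplementedError("sample one candidate solution from the policy LLM")

def score(problem: str, solution: str) -> float:
    raise NotImplementedError("score the solution with a reward / verifier model")

def best_of_n(problem: str, n: int = 16) -> str:
    candidates = [generate(problem) for _ in range(n)]
    # Larger n = more test-time compute; a compute-optimal strategy would also
    # adapt n and the search method (e.g. beam search over steps) per problem.
    return max(candidates, key=lambda sol: score(problem, sol))
```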
Building Snippetz la... • 18d
I'm building an AI-first architectural engine that converts natural language prompts into parametric, code-compliant floor plans. It's powered by a fine-tuned Mistral LLM orchestrating a multi-model architecture: spatial parsing, geometric reasoning…
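Purely as an illustration of what such an orchestration pipeline might look like (stage names and signatures below are my guesses, not the actual product's design):

```python
# Hypothetical sketch of a prompt-to-floor-plan pipeline: an LLM extracts
# structured spatial constraints, a parametric stage lays out geometry, and a
# rule engine checks code compliance. Illustrative only.

def parse_spatial_constraints(prompt: str) -> dict:
    raise NotImplementedError("LLM call: extract rooms, areas, adjacencies")

def generate_geometry(constraints: dict) -> dict:
    raise NotImplementedError("parametric solver: place walls and rooms")

def check_compliance(layout: dict) -> dict:
    raise NotImplementedError("rule engine: building-code checks, flag violations")

def prompt_to_floor_plan(prompt: str) -> dict:
    constraints = parse_spatial_constraints(prompt)
    layout = generate_geometry(constraints)
    return check_compliance(layout)
```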
AI Deep Explorer | f... • 3m
Top 10 AI Research Papers Since 2015 🧠
1. Attention Is All You Need (Vaswani et al., 2017)
Impact: Introduced the Transformer architecture, revolutionizing natural language processing (NLP).
Key contribution: the attention mechanism, enabling models…
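For a quick refresher, the paper's core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; a minimal NumPy sketch:

```python
# Minimal NumPy sketch of scaled dot-product attention from
# "Attention Is All You Need": softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of values

# Toy self-attention over 4 tokens with 8-dimensional embeddings
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```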
| Technologist | ML ... • 5m
In the ever-evolving AI landscape, a new player is making waves: DeepSeek. While OpenAI, Google DeepMind, and Meta AI have been dominant forces, DeepSeek is emerging as a formidable contender in the AI race. The recent buzz around DeepSeek stems from…