Medial • 4m
I spent 4+ hours rewatching Karpathy's YC keynote. And I realized: we've been looking at LLMs the wrong way. They're not just "AI models." They're a new kind of computer.
• LLM = CPU
• Context window = memory
• Orchestration of tools + compute = how an OS manages resources
The ecosystem already looks like the old OS battles:
• OpenAI, Anthropic, Gemini = Windows/macOS
• Llama, Qwen, DeepSeek = Linux
• Apps (Cursor, Notion AI, etc.) = cross-platform, able to run on any LLM backend
Switching between GPT, Claude, or Gemini? → literally just a dropdown.
But here's the key insight: we're basically back in the 1960s of computing.
• Compute is expensive → everything lives in the cloud
• Users are thin clients → just pinging infra over APIs
• No GUI yet → prompting feels like a terminal
• No universal shell → only apps, wrappers, hacks
And unlike electricity, computers, or the internet (which started in government and enterprise first), LLMs went straight to consumers. Billions got access before big corporations even figured them out. That's remarkable.
So no, LLMs aren't just tools or APIs. They're a foundational computing layer. A programmable OS for intelligence. We're still very early. The personal LLM moment hasn't happened yet… but it will.
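The "just a dropdown" claim can be sketched as a tiny abstraction layer: one interface, interchangeable model backends. The class names and stub replies below are hypothetical stand-ins, not real provider API clients.

```python
# Sketch of the "cross-platform app" idea: apps code against one interface,
# and the model provider is a swappable backend. The backends here are
# hypothetical stubs, not real API calls.

class LLMBackend:
    """Minimal common interface every provider implements."""
    name = "base"

    def complete(self, prompt: str) -> str:
        raise NotImplementedError


class GPTBackend(LLMBackend):
    name = "gpt"
    def complete(self, prompt: str) -> str:
        return f"[gpt] reply to: {prompt}"      # stub in place of a real API call


class ClaudeBackend(LLMBackend):
    name = "claude"
    def complete(self, prompt: str) -> str:
        return f"[claude] reply to: {prompt}"   # stub in place of a real API call


BACKENDS = {b.name: b for b in (GPTBackend(), ClaudeBackend())}

def ask(prompt: str, backend: str = "gpt") -> str:
    """Switching models is literally one parameter -- the 'dropdown'."""
    return BACKENDS[backend].complete(prompt)

print(ask("Explain context windows", backend="claude"))
```

Because every backend satisfies the same interface, the app layer never changes when the user flips the dropdown.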

Python Developer 💻 ... • 9m
3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯🤯
LLM: Llama 3
Datasets: MATH-500 & AIME-2024
This was shown in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 showed that Test-
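One common test-time scaling strategy is best-of-N sampling: spend extra inference compute drawing several candidate answers and keep the one a verifier scores highest. The toy model and verifier below are stand-ins to show the shape of the idea, not the recipe from the paper the post refers to.

```python
import random

# Toy best-of-N test-time scaling: more samples at inference time ->
# better chance the verifier can surface a correct answer.
# tiny_model and verifier_score are toy stand-ins, not real models.

random.seed(0)

def tiny_model(question: str) -> int:
    # Pretend small model: a noisy guess at the answer to 7 * 6.
    return 42 + random.choice([-2, -1, 0, 0, 1, 2])

def verifier_score(question: str, answer: int) -> float:
    # Pretend reward model: answers nearer the truth score higher.
    return -abs(answer - 42)

def best_of_n(question: str, n: int) -> int:
    """Sample n candidates, return the one the verifier likes best."""
    candidates = [tiny_model(question) for _ in range(n)]
    return max(candidates, key=lambda a: verifier_score(question, a))

# Raising n buys accuracy with compute, which is why a small model plus
# a good verifier can rival a much larger model on hard benchmarks.
print(best_of_n("What is 7 * 6?", n=16))
```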
Hey I am on Medial • 4m
Master LLMs without paying a rupee. LLMCourse is a complete learning track to go from zero to LLM expert, using just Google Colab. You'll learn:
✅ The core math, Python, and neural network concepts
✅ How to train your own LLMs
✅ How to build and
Startups | AI | info... • 7m
Google DeepMind just dropped AlphaEvolve‼️ A Gemini-powered coding agent that evolves algorithms and outperforms AlphaTensor. It delivers:
✅ 23% matrix speedup
✅ 32.5% GPU kernel boost
✅ 0.7% global compute recovery
Using Gemini Flash + Pro, it re
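AlphaEvolve's core loop is evolutionary: an LLM proposes mutations to a candidate program and an automated evaluator keeps the best one. The sketch below keeps that shape (mutate → evaluate → select) but evolves a single number instead of real code; it is an illustration of the loop, not DeepMind's system.

```python
import random

# Toy evolutionary loop: mutate a candidate, score it with an automated
# evaluator, keep it only if it improves. In AlphaEvolve the candidate is
# a program and the mutator is an LLM; here both are numeric stand-ins.

random.seed(1)

def evaluate(candidate: float) -> float:
    # Stand-in for the automated benchmark (e.g. kernel runtime):
    # higher is better, with the optimum at x = 3.
    return -(candidate - 3.0) ** 2

def mutate(candidate: float) -> float:
    # Stand-in for an LLM-proposed edit to the candidate.
    return candidate + random.uniform(-0.5, 0.5)

best = 0.0                    # initial candidate
for _ in range(200):          # evolution: propose, score, select
    child = mutate(best)
    if evaluate(child) > evaluate(best):
        best = child

print(round(best, 2))         # converges near the optimum at 3.0
```

The key design point is that the evaluator is fully automated, so the loop can run thousands of generations without a human in it.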
17 | Building Doodle... • 8m
For the first time in LLM history: introducing agentic compound LLMs in AnyLLM. What are agentic LLMs? Agentic LLMs have access to all 10+ LLMs in AnyLLM and know when to use any one of them to perform a specific task. They also have acces
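A "compound" system like the one described needs a router that maps a task to the best model in the pool. A minimal sketch of that idea, assuming naive keyword routing; the model names and keywords here are hypothetical stand-ins, not AnyLLM's actual mechanism.

```python
# Hypothetical model pool: each entry stands in for a model AnyLLM might
# route to. Real routers often use an LLM classifier instead of keywords.
MODEL_POOL = {
    "code":    "code-specialist-llm",
    "math":    "reasoning-llm",
    "general": "fast-cheap-llm",
}

def route(task: str) -> str:
    """Pick a model for the task via naive keyword matching."""
    t = task.lower()
    if any(k in t for k in ("bug", "function", "refactor", "code")):
        return MODEL_POOL["code"]
    if any(k in t for k in ("prove", "integral", "solve", "math")):
        return MODEL_POOL["math"]
    return MODEL_POOL["general"]

print(route("Refactor this function"))  # -> code-specialist-llm
print(route("Solve this integral"))     # -> reasoning-llm
```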