Business Coach • 2m
🔥 Government set to name ~8 Indian teams for foundational model incentives next week; second-round beneficiaries may include BharatGen. GPU access remains tight: only ~17,374 of the planned 34,333 GPUs are installed so far.
🤔 Why It Matters: More subsidised compute means faster India-tuned models, but the GPU crunch could slow training unless procurement accelerates or inference-efficient approaches are prioritised.
🚀 Action/Example: Founders should prepare grant docs and pivot to efficient training/inference (LoRA, distillation, 4-bit quantisation) to ride the incentive window despite supply constraints.
🎯 Who Benefits: AI researchers, Indic LLM builders, and startups focused on low-cost inference at scale.
Tap ❤️ if you like this post.
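For context on the "4-bit quant" route the post recommends, here is a minimal sketch of loading a model with 4-bit weights for cheaper inference, assuming the Hugging Face transformers + bitsandbytes stack; the model name, prompt, and generation settings are placeholders, not anything from the post.

```python
# Sketch: 4-bit quantized inference (one "inference-efficient approach").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights: ~4x less GPU memory than fp16
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute still runs in bf16
)

name = "meta-llama/Llama-2-7b-hf"           # placeholder base model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, quantization_config=bnb, device_map="auto"
)

inputs = tok("Namaste, India!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```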
Hey I am on Medial • 8m
"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -
Willing to contribut... • 1m
I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about:
1. Data quality > Data quantity
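For readers comparing the methods the post above mentions, a minimal sketch of the LoRA-vs-QLoRA distinction, assuming the Hugging Face peft + bitsandbytes stack: the adapter config is identical, QLoRA just loads the frozen base model in 4-bit. The model name and hyperparameters are illustrative, not the author's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Shared adapter config: small trainable low-rank matrices on attention projections
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "v_proj"],
                      lora_dropout=0.05, task_type="CAUSAL_LM")

# LoRA: half-precision base model + trainable adapters
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto")
lora_model = get_peft_model(base, lora_cfg)

# QLoRA: same adapters, but the frozen base weights are quantized to 4-bit
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
qbase = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto")
qlora_model = get_peft_model(qbase, lora_cfg)

qlora_model.print_trainable_parameters()  # typically <1% of weights train
```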
Technology, Business... • 1m
When AI changed the rules, cloud computing had to change too. And that's exactly where Oracle took the lead. Most cloud giants like AWS, Azure, and GCP still rely on virtualization, where resources like CPU, GPU, and memory are shared across users.
Founder | Agentic AI... • 8d
4 different ways of training LLMs. I've given a simple, detailed explanation below.
1.) Accurate Data Curation (Step-by-Step)
Prepares clean, consistent, and useful data so the model learns effectively.
1. Collect text
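The post is truncated before the curation steps finish, so here is a minimal sketch of the kind of cleaning it describes (normalize, filter, dedupe); the length threshold and hashing choice are assumptions for illustration, not the author's recipe.

```python
import hashlib

def curate(docs: list[str], min_chars: int = 50) -> list[str]:
    """Normalize whitespace, drop short fragments, remove exact duplicates."""
    seen, clean = set(), []
    for doc in docs:
        text = " ".join(doc.split())        # collapse messy whitespace
        if len(text) < min_chars:           # drop boilerplate fragments
            continue
        digest = hashlib.sha1(text.lower().encode()).hexdigest()
        if digest in seen:                  # case-insensitive exact dedupe
            continue
        seen.add(digest)
        clean.append(text)
    return clean

# corpus = curate(raw_documents)  # feed `corpus` to tokenization/training
```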
AI Deep Explorer | f... • 7m
LLM Post-Training: A Deep Dive into Reasoning LLMs
This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance...
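As one concrete instance of the post-training methods such surveys cover, here is a minimal sketch of the DPO (Direct Preference Optimization) loss in PyTorch; the inputs are summed log-probabilities of chosen/rejected responses under the policy and a frozen reference model, and beta=0.1 is just a common default, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Implicit rewards are log-prob ratios against the reference model
    chosen_ratio = pi_chosen - ref_chosen
    rejected_ratio = pi_rejected - ref_rejected
    # Push the policy to widen the margin between chosen and rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy numbers standing in for batch log-probs
loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.8]),
                torch.tensor([-13.0]), torch.tensor([-14.9]))
print(loss)  # scalar; backprop updates only the policy model
```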
Let's connect and bu... • 6m
Why Grok AI Outperformed ChatGPT & Gemini Without Spending Billions
In 2025, leading AI companies invested heavily in R&D:
ChatGPT: $75B
Gemini: $80B
Meta: $65B
Grok AI, developed by Elon Musk's xAI, raised just $10B yet topped global benchmarks...
Python Developer 💻 ... • 9m
3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯🤯
LLM: Llama 3
Datasets: MATH-500 & AIME-2024
This was shown in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 showed that Test-Time Scaling...
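A minimal sketch of one simple strategy in the test-time scaling family the post refers to (best-of-N sampling with an external scorer); generate_fn and score_fn are hypothetical placeholders for a small policy model and a reward model, not the paper's actual search method.

```python
import random
from typing import Callable

def best_of_n(prompt: str,
              generate_fn: Callable[[str], str],
              score_fn: Callable[[str, str], float],
              n: int = 16) -> str:
    """Spend extra inference compute: sample n candidates, keep the best-scored."""
    candidates = [generate_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score_fn(prompt, c))

# Toy demo with stand-ins for a small LLM and a reward model
answer = best_of_n("2+2=?",
                   generate_fn=lambda p: random.choice(["3", "4", "5"]),
                   score_fn=lambda p, c: 1.0 if c == "4" else 0.0)
print(answer)  # "4" with high probability as n grows
```

This is why a small model plus extra sampling can beat a much larger model on verifiable tasks: the scorer converts inference compute into accuracy.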
mysterious guy • 6m
💀 99% of AI Startups Will Die by 2026. Here's Why
1. LLM Wrappers ≠ Real Products
Most AI startups are just pretty UIs over OpenAI's API. No real backend. No IP. No moat. They're charging ₹4,000/month for workflows that cost ₹300 if done directly...
Founder of Friday AI • 5m
🚨 OpenAI is a Wrapper 😂🤯
Hot take, but let's break it down logically: OpenAI is not a full-stack AI company; it's a high-level wrapper over Azure and NVIDIA. Here's why that matters 👇
🔹 1. Infra Backbone = Microsoft Azure
Almost 90%+ of OpenAI's...