Business Coach • 2m
🔥 Government set to name ~8 Indian teams for foundational-model incentives next week; second-round beneficiaries may include BharatGen. GPU access remains tight, as only ~17,374 of the planned 34,333 GPUs are installed so far. 🤔 Why It Matters: More subsidised compute means faster India-tuned models, but the GPU crunch could slow training unless procurement accelerates or inference-efficient approaches are prioritised. 🚀 Action/Example: Founders should prepare grant docs and pivot to efficient training/inference (LoRA, distillation, 4-bit quantization) to ride the incentive window despite supply constraints. 🎯 Who Benefits: AI researchers, Indic LLM builders, and startups focused on low-cost inference at scale.
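To make the "4-bit quant" idea in the post concrete, here is a minimal, illustrative sketch of absmax 4-bit quantization in plain Python. This is a toy, not how production libraries do it (real implementations such as bitsandbytes use block-wise NF4 with bit-packing); all names here are made up for the example.

```python
# Toy absmax 4-bit quantization: map floats onto signed integers in [-7, 7],
# storing one shared scale per tensor. Dequantization multiplies back.

def quantize_4bit(weights):
    """Scale by the absolute maximum so values fit signed 4-bit range [-7, 7]."""
    scale = max(abs(w) for w in weights) / 7.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from 4-bit integers and the scale."""
    return [v * scale for v in q]

w = [0.12, -0.50, 0.33, 0.07]
q, s = quantize_4bit(w)        # q == [2, -7, 5, 1]
w_hat = dequantize_4bit(q, s)  # close to w, within half a quantization step
```

The memory win is the point: each weight drops from 32 bits to 4, at the cost of a small rounding error bounded by half the scale.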
Hey I am on Medial • 7m
"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -
Willing to contribut... • 10d
I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full fine-tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about: 1. Data quality > data quantity
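The "data quality > data quantity" point above is often applied as a cleaning pass before fine-tuning. A hypothetical minimal version, with made-up thresholds, might just dedupe and drop trivially short examples:

```python
# Toy data-cleaning pass: normalize whitespace/case, drop near-duplicates
# and examples too short to teach the model anything. Thresholds are
# invented for this sketch; real pipelines add perplexity and toxicity filters.

def clean_dataset(examples, min_words=4):
    seen = set()
    kept = []
    for text in examples:
        norm = " ".join(text.lower().split())   # case/whitespace-normalized key
        if norm in seen or len(norm.split()) < min_words:
            continue                             # skip duplicates and short junk
        seen.add(norm)
        kept.append(text)
    return kept

raw = [
    "Explain LoRA in one paragraph.",
    "explain lora in one paragraph.",   # near-duplicate, dropped
    "Hi",                               # too short, dropped
    "Compare full fine-tuning with QLoRA for a 7B model.",
]
print(clean_dataset(raw))  # keeps only the first and last examples
```

Even this crude filter often beats adding more raw data, because duplicated and trivial examples dominate scraped instruction sets.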
Technology, Business... • 23d
When AI changed the rules, cloud computing had to change too. And that's exactly where Oracle took the lead. Most cloud giants like AWS, Azure, and GCP still rely on virtualization, where resources like CPU, GPU, and memory are shared across users.
AI Deep Explorer | f... • 6m
LLM Post-Training: A Deep Dive into Reasoning LLMs. This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance
Let's connect and bu... • 5m
Why Grok AI Outperformed ChatGPT & Gemini Without Spending Billions. In 2025, leading AI companies invested heavily in R&D: ChatGPT: $75B; Gemini: $80B; Meta: $65B. Grok AI, developed by Elon Musk's xAI, raised just $10B yet topped global benchmarks



Python Developer 💻 ... • 8m
3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯🤯 LLM: Llama 3. Datasets: MATH-500 & AIME-2024. This was shown in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 showed that Test-
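The simplest test-time scaling recipe behind results like the one above is best-of-N sampling: spend extra inference compute by drawing several candidate answers and keeping the one a verifier scores highest. A minimal sketch, where the candidate list and verifier are made-up stand-ins rather than real models:

```python
# Best-of-N selection: more samples at inference time, pick the best-scored one.
# In real TTS pipelines the candidates come from a small LLM sampled at
# temperature > 0 and the verifier is a learned reward model or answer checker.

def best_of_n(candidates, verifier):
    """Return the candidate the verifier scores highest."""
    return max(candidates, key=verifier)

# Stand-in candidates, as if sampled three times from a small model.
candidates = ["x = 7", "x = 5", "cannot solve"]

def verifier(answer):
    # Stand-in scorer: in practice a reward model; here, a hard-coded check.
    return 1.0 if "7" in answer else 0.0

print(best_of_n(candidates, verifier))  # prints "x = 7"
```

This is how a well-verified small model can beat a much larger one on benchmarks: the extra compute moves from parameters to repeated sampling plus selection.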
mysterious guy • 5m
💀 99% of AI Startups Will Die by 2026. Here's Why: 1. LLM Wrappers ≠ Real Products. Most AI startups are just pretty UIs over OpenAI's API. No real backend. No IP. No moat. They're charging ₹4,000/month for workflows that cost ₹300 if done directly
Founder of Friday AI • 4m
🚨 OpenAI is a Wrapper 👇🤯 Hot take, but let's break it down logically: OpenAI is not a full-stack AI company; it's a high-level wrapper over Azure and NVIDIA. Here's why that matters 👇 🔹 1. Infra Backbone = Microsoft Azure. Almost 90%+ of Op
AI Deep Explorer | f... • 7m
"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1ļøā£ Fin