Software Developer • 6m
1. LoRA on a reasonably small open model (best balance for small compute): apply low-rank adapters (PEFT/LoRA). Requires less GPU memory and works well for 700–3000 rows. 2. Full fine-tune (costly/heavy): only if you have an A100-class GPU or paid cloud GPUs. Not recommended for an early MVP. 3. No-fine-tune alternative (fast & free): use retrieval + prompting (RAG) — keep the base LLM and add context from your 3k+ rows. Great when compute is limited.
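A back-of-envelope sketch of why option 1 suits small compute: LoRA freezes the base weight matrix and trains only two small low-rank factors per layer. The function and layer sizes below are illustrative, not from any specific library:

```python
# Sketch: LoRA freezes the base weight W (d_out x d_in) and learns a low-rank
# update delta_W = B @ A, where A is (r x d_in) and B is (d_out x r).

def trainable_params(d_out, d_in, rank=None):
    """Trainable parameters for one linear layer: full fine-tune if rank is None, else LoRA."""
    if rank is None:
        return d_out * d_in               # full fine-tune updates every weight
    return rank * d_in + d_out * rank     # LoRA updates only A and B

# Example: one 4096x4096 attention projection (LLaMA-2-7B sized), LoRA rank 8.
full = trainable_params(4096, 4096)            # 16,777,216 params
lora = trainable_params(4096, 4096, rank=8)    # 65,536 params
print(f"LoRA trains {lora / full:.2%} of the full parameters")  # -> 0.39%
```

At rank 8 the adapter is under half a percent of the layer's parameters, which is why the gradient and optimizer-state memory drops enough to fit modest GPUs.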
Business Coach • 6m
Government set to name ~8 Indian teams for foundational model incentives next week — second-round beneficiaries may include BharatGen; GPU access remains tight as only ~17,374 of the planned 34,333 GPUs are installed so far. Why It Matters —
Willing to contribut... • 4m
I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about: 1. Data quality > Data quan
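The full-tuning vs LoRA vs QLoRA comparison in the post above can be sketched with a rough rule-of-thumb memory estimate. The byte counts below are common approximations (fp16 weights and grads, fp32 Adam moments, 4-bit frozen base for QLoRA), not measured numbers:

```python
# Rough GPU memory estimate per method (assumptions, not benchmarks):
# - full fine-tune: every param carries fp16 weight (2B) + fp16 grad (2B)
#   + two fp32 Adam moments (8B) = 12 bytes
# - LoRA: base frozen in fp16 (2B/param), only the adapter is trainable
# - QLoRA: base frozen at 4 bits (0.5B/param), adapter trainable as in LoRA

def est_memory_gb(n_params_b, method, lora_frac=0.005):
    base = n_params_b * 1e9
    trainable = base * (1.0 if method == "full" else lora_frac)
    frozen_bytes = 0 if method == "full" else base * (2 if method == "lora" else 0.5)
    train_bytes = trainable * (2 + 2 + 8)   # weights + grads + Adam moments
    return (frozen_bytes + train_bytes) / 1e9

for m in ("full", "lora", "qlora"):
    print(f"{m:>5}: ~{est_memory_gb(7, m):.0f} GB for a 7B model")
```

Under these assumptions a 7B full fine-tune needs on the order of 84 GB, LoRA roughly 14 GB, and QLoRA about 4 GB — which matches why hobbyists reach for QLoRA on consumer cards.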
Introvert! • 29d
This is crazy! Mumbai-based AI startup Neysa has raised $1.2 billion in funding, around ₹10,897 crore ($600M equity + $600M debt), led by Blackstone at a valuation of $1.4 billion. The startup was founded in 2023 by Sharad Sanghi and Anindya Das.

AI Deep Explorer | f... • 11m
"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1ļøā£ Fin
Founder of Friday AI • 8m
OpenAI is a Wrapper. Hot take, but let's break it down logically: OpenAI is not a full-stack AI company — it's a high-level wrapper over Azure and NVIDIA. Here's why that matters: 1. Infra Backbone = Microsoft Azure. Almost 90%+ of Op
Founder | Agentic AI... • 3m
SLM vs LLM — which AI model is best for you? I've explained both in simple steps below. SLM (Small Language Model) (step-by-step): lightweight AI models built for speed, focus, and on-device execution. 1. Define
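The SLM-vs-LLM choice the post walks through can be compressed into a toy decision rule. The thresholds and task names here are illustrative assumptions, not from the post:

```python
# Hypothetical decision helper reflecting the SLM-vs-LLM tradeoffs:
# small models win on latency, cost, and on-device privacy for narrow tasks;
# large models win on broad reasoning, long context, and open-ended chat.

def pick_model(on_device, latency_ms_budget, task):
    narrow_tasks = {"classification", "extraction", "routing", "autocomplete"}
    if on_device or latency_ms_budget < 200 or task in narrow_tasks:
        return "SLM"  # small model: fast, cheap, runs locally
    return "LLM"      # large model: broad reasoning, tool use, long context

print(pick_model(on_device=True, latency_ms_budget=500, task="chat"))  # -> SLM
```

In practice teams often route per-request: an SLM handles the cheap narrow calls and escalates to an LLM only when the task falls outside its lane.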
Founder of Friday AI • 3m
From Emotional AI to Enterprise Infrastructure: The Story of Friday AI. I don't know whether to call it a phobia or an obsession, but when it comes to my work, I chase correctness relentlessly. I'm not perfect, but I'm consistent. Where It Started: Fri
AI Deep Explorer | f... • 11m
LLM Post-Training: A Deep Dive into Reasoning LLMs. This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance