Software Developer • 3m
1. LoRA on a reasonably small open model (best balance for small compute): apply low-rank adapters (PEFT/LoRA). Requires less GPU memory and works well for 700–3000 rows. A minimal sketch follows below.
2. Full fine-tune (costly / heavy): only if you have an A100-class GPU or a paid cloud GPU. Not recommended for an early MVP.
3. No-fine-tune alternative (fast & free): use retrieval + prompting (RAG); keep the base LLM and add context from your 3k+ rows. Great when compute is limited.
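A minimal LoRA sketch for option 1, assuming Hugging Face PEFT and Mistral-7B as an illustrative base; the rank, alpha, and target modules here are placeholder choices, not a tuned recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # illustrative; any small open causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Low-rank adapters: only a fraction of a percent of the weights get trained,
# which is why LoRA fits modest GPUs and works for 700-3000 rows of data.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                  # adapter rank; 8-64 is typical
    lora_alpha=32,                         # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (model-specific)
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # confirms the tiny trainable fraction
# Then train with transformers.Trainer or trl's SFTTrainer on the instruction rows.
```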
Business Coach • 3m
Government set to name ~8 Indian teams for foundational model incentives next week; second-round beneficiaries may include BharatGen. GPU access remains tight, as only ~17,374 of the planned 34,333 GPUs are installed so far. Why It Matters…
Willing to contribut... • 1m
I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about: 1. Data quality > data quan…
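For the QLoRA method the post compares, a minimal sketch, assuming bitsandbytes 4-bit quantization with PEFT adapters on top; the model name and hyperparameters are illustrative (Llama-2 weights are gated and need access approval):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # illustrative; gated weights
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables checkpointing
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
# The frozen base sits in 4-bit; only the small adapters receive gradients,
# which is what lets a 7B model fine-tune on a single ~16 GB GPU.
```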
Founder of Friday AI • 6m
OpenAI is a Wrapper. Hot take, but let's break it down logically: OpenAI is not a full-stack AI company; it's a high-level wrapper over Azure and NVIDIA. Here's why that matters: 1. Infra Backbone = Microsoft Azure. Almost 90%+ of Op…
AI Deep Explorer | f... • 9m
"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1๏ธโฃ Fin
See More
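As one concrete instance of the Alignment paradigm, a minimal direct preference optimization (DPO) sketch with the trl library; the tiny gpt2 stand-in and the one-row dataset are placeholders, and trl's exact trainer arguments vary by version:

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

name = "gpt2"  # tiny stand-in so the sketch runs anywhere
model = AutoModelForCausalLM.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token

# Preference data: each row pairs a preferred and a rejected completion.
prefs = Dataset.from_dict({
    "prompt":   ["Explain LoRA in one line."],
    "chosen":   ["LoRA trains small low-rank adapters instead of full weights."],
    "rejected": ["LoRA is a kind of database."],
})

trainer = DPOTrainer(
    model=model,  # a frozen reference copy is created internally
    args=DPOConfig(output_dir="dpo-out", per_device_train_batch_size=1),
    train_dataset=prefs,
    processing_class=tokenizer,  # older trl versions call this `tokenizer`
)
trainer.train()  # pushes the model toward "chosen" over "rejected"
```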
Founder | Agentic AI... • 29d
SLM vs LLM: which AI model is best for you? I've explained both in simple steps below.
SLM (Small Language Model) (step-by-step)
Lightweight AI models built for speed, focus, and on-device execution.
1. Define…
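To make the SLM side concrete, a minimal local-inference sketch; microsoft/phi-2 (~2.7B parameters) is just one example of a model small enough for a single GPU or laptop:

```python
from transformers import pipeline

# Small model, local inference; device_map="auto" needs the accelerate package.
generator = pipeline("text-generation", model="microsoft/phi-2", device_map="auto")

out = generator("Why do small models suit edge devices?", max_new_tokens=60)
print(out[0]["generated_text"])
# A 70B+ LLM would need multi-GPU serving; an SLM trades breadth of knowledge
# for latency, cost, and the option to run fully offline.
```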
Founder of Friday AI • 9d
From Emotional AI to Enterprise Infrastructure: The Story of Friday AI. I don't know whether to call it a phobia or obsession, but when it comes to my work, I chase correctness relentlessly. I'm not perfect, but I'm consistent. Where It Started: Fri…
AI Deep Explorer | f... • 8m
LLM Post-Training: A Deep Dive into Reasoning LLMs. This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance…
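One inference-time reasoning technique such surveys cover is best-of-N sampling; a minimal sketch below, where the length-based scorer is a toy placeholder for the learned reward or verifier model a real system would use:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # tiny stand-in model

prompt = ("Q: If a train travels 60 km in 1.5 hours, what is its speed?\n"
          "A: Let's think step by step.")
candidates = generator(
    prompt,
    max_new_tokens=48,
    do_sample=True,          # sampling gives N diverse reasoning paths
    temperature=0.8,
    num_return_sequences=4,  # N = 4 candidates
)

def score(text: str) -> float:
    # Toy verifier: prefer longer, more worked-out answers.
    return float(len(text))

best = max(candidates, key=lambda c: score(c["generated_text"]))
print(best["generated_text"])
```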
Founder | Agentic AI... • 1m
Steps to building real-world AI systems. I've given a simple detailed explanation below.
Step 1: Deployment & Compute Layer
• This is where all the heavy processing happens.
• It provides the har…
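A minimal sketch of such a deployment and compute layer, assuming a FastAPI wrapper around a stand-in model; the endpoint name and model are illustrative, and production setups usually use a dedicated server like vLLM or TGI:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # stand-in model

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: Prompt):
    # The heavy processing happens here, behind a thin HTTP interface.
    out = generator(req.text, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run (if saved as server.py): uvicorn server:app --host 0.0.0.0 --port 8000
```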