
Subhajit Mandal

Software Developer • 20d

1. LoRA on a reasonably small open model (best balance for limited compute): apply low-rank adapters (PEFT/LoRA). It needs far less GPU memory and works well for 700–3000 rows.
2. Full fine-tune (costly / heavy): only worth it if you have an A100-class GPU or paid cloud GPUs. Not recommended for an early MVP.
3. No-fine-tune alternative (fast & free): use retrieval + prompting (RAG) — keep the base LLM and add context from your 3k+ rows at query time. Great when compute is limited.
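The low-rank idea behind option 1 can be sketched in a few lines of plain Python/numpy: instead of updating the full weight matrix W, you train two small matrices A and B and add their scaled product to the frozen weight. The names r and alpha follow the common LoRA convention; all dimensions below are made-up toy values, and this is a minimal illustration of the math, not the PEFT library API.

```python
import numpy as np

# Minimal LoRA sketch (illustration only, not the PEFT library API).
# A frozen pretrained weight W is augmented with a trainable low-rank update:
#   W_eff = W + (alpha / r) * B @ A
d, k, r, alpha = 64, 32, 4, 8          # output dim, input dim, rank, scaling (toy values)

rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))            # frozen pretrained weight (not trained)
A = rng.normal(size=(r, k)) * 0.01     # trainable, small random init
B = np.zeros((d, r))                   # trainable, zero init -> no change at start

def lora_forward(x):
    """x: (k,) input vector -> (d,) output with the low-rank update applied."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(k,))
# With B initialized to zero, the adapted layer matches the frozen layer exactly,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: a full fine-tune trains d*k weights; LoRA trains only r*(d+k).
full_params = d * k        # 2048
lora_params = r * (d + k)  # 384
```

This is why LoRA fits on small GPUs: only A and B (here 384 numbers instead of 2048) receive gradients, while W stays frozen, and the ratio improves further at real model sizes.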


Recommendations from Medial


Sandeep Prasad

Business Coach • 23d

🔥 Government set to name ~8 Indian teams for foundational model incentives next week – second-round beneficiaries may include BharatGen; GPU access remains tight as only ~17,374 of planned 34,333 GPUs are installed so far. 🤔 Why It Matters – More

Swamy Gadila

Founder of Friday AI • 3m

🚨 OpenAI is a Wrapper 👀🤯 Hot take, but let’s break it down logically: OpenAI is not a full-stack AI company — it’s a high-level wrapper over Azure and NVIDIA. Here’s why that matters 👇 🔹 1. Infra Backbone = Microsoft Azure Almost 90%+ of Op


AI Engineer

AI Deep Explorer | f... • 5m

LLM Post-Training: A Deep Dive into Reasoning LLMs This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance


AI Engineer

AI Deep Explorer | f... • 6m

"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1️⃣ Fin


Soumya

Developer • 10m

💡An Idea to Change the Game for AI Startups: Making AI Processing Faster, Cheaper, and Effortless Running AI models like ChatGPT, DALL·E, or AlphaCode is a computing monster—they need massive power to function, which makes them expensive to operate


Animesh Kumar Singh

Hey I am on Medial • 2m

The "AI Personality Forge" product: "Cognate". Cognate is not an AI assistant. It's a platform on your phone or computer that lets you forge, train, and deploy dozens of tiny, specialized AI "personalities" for specific tasks in seconds. How it wo


Rahul Agarwal

Founder | Agentic AI... • 2d

Fine-tune vs Prompt vs Context Engineering. Simple step-by-step breakdown for each approach. 𝗙𝗶𝗻𝗲-𝗧𝘂𝗻𝗶𝗻𝗴 (𝗠𝗼𝗱𝗲𝗹-𝗟𝗲𝘃𝗲𝗹 𝗖𝘂𝘀𝘁𝗼𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻) 𝗙𝗹𝗼𝘄: 1. Collect Data → Gather domain-specific info (e.g., legal docs). 2. Sta


Venkatesh

Business Developmen... • 10m

Winning Against the Odds: How One Small Business Secured a Tender A Dream Worth Pursuing Ravi Kumar, the founder of BuildRight Constructions, had always dreamed of taking on larger projects. However, competing against industry giants felt impossible


Tarun Suthar

CA Inter | CS Execut... • 1m

🚀 "Artificial Intelligence Trends Report" by "Mary Meeker" 🚀 The AI Window Is Now: Where Founders Should Build in 2025–2030. The BOND AI Trends 2025 report is one of the most data-rich looks at how Artificial Intelligence is reshaping the global

