Narendra

Willing to contribut... • 9d

I spent 4 weeks interviewing 30 people who tried to fine-tune LLMs. Here's what I found:

70% of them spent 2-4 weeks just preparing data. 60% copied hyperparameters from blog posts without understanding why. 53% abandoned projects because costs spiraled to $2-3K unexpectedly.

The pattern was clear: fine-tuning isn't a technical problem anymore. It's a workflow problem.

Real quotes from my interviews:

A YC founder: "We spent 3 weeks getting data in the right format. Had to redo it twice. Then wondered if it was even better than prompt engineering. Dropped it."

An ML engineer at a unicorn startup: "I tried 50 different learning rates. Never knew if mine was good or just good enough. Shipped the least-bad version because we ran out of runway."

A technical co-founder: "Labeling cost us $8K. Still had consistency issues. Had to re-label 40% of examples."

The real insight: everyone's solving the same problems from scratch. Data formatting. Hyperparameter tuning. Cost estimation. But no one's built the workflow that just... works.

Cursor did this for coding: it took VS Code's power and made it approachable for everyone. I'm building the same thing for LLM fine-tuning.

What I'm building:
→ Upload your data. Get instant quality scoring + format fixes.
→ Describe what you want in plain English. Get auto-configured hyperparameters.
→ See cost estimates upfront. No $2K surprises.
→ Train in hours, not weeks.

Reality check: this could completely fail. My validation shows 73% would pay $400-500/month, but that's 30 people saying "maybe." I need to prove people will actually pay.

Looking for 5 people to pilot this (free access). Requirements:
- Attempted fine-tuning in the last 1-2 months
- Hit roadblocks with data prep, hyperparameters, or costs
- 2-3 hours available to test the MVP

Reply or DM if this sounds like your experience. Building this in public, and I'll share results good or bad.

#MachineLearning #LLM #BuildInPublic #AI #Startups
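The "cost estimates upfront" idea can be roughed out in a few lines. This is a hedged back-of-envelope sketch, not the product's actual logic: `estimate_cost` is a hypothetical helper, and the throughput and GPU price are assumed placeholder numbers.

```python
# Back-of-envelope fine-tuning cost estimator. The throughput and GPU
# hourly price below are assumptions for illustration, not measured values.

def estimate_cost(num_examples, avg_tokens, epochs, runs=1,
                  tokens_per_sec=400.0, gpu_usd_per_hour=2.50):
    """Return (gpu_hours, usd) for `runs` training runs over the dataset."""
    total_tokens = num_examples * avg_tokens * epochs * runs
    gpu_hours = total_tokens / tokens_per_sec / 3600
    return gpu_hours, gpu_hours * gpu_usd_per_hour

# A single run is cheap; a 40-run hyperparameter sweep is where the
# surprise four-figure bills tend to come from.
hours, usd = estimate_cost(10_000, 512, epochs=3, runs=40)
print(f"~{hours:.0f} GPU-hours, ~${usd:,.0f}")
```

Even with optimistic assumptions, multiplying one run's cost by a sweep's run count shows how budgets drift into the $1-2K range unnoticed.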


Recommendations from Medial


Dhruv Pithadia

A.I. Enthusiast • 8m

Working on a cool AI project that involves a vector DB and LLM fine-tuning


Comet

#freelancer • 11m

10 Best LLM Tools to Simplify Your Workflow
1️⃣ LangChain: LangChain's flexibility suits complex AI and multi-stage processing.
2️⃣ Cohere: Cohere offers integration, scalability, and customization.
3️⃣ Falcon: Falcon offers cost-effective, high-per


Narendra

Willing to contribut... • 8d

I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about:
1. Data quality > Data quan
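The gap between those methods can be made concrete with a quick parameter count. A minimal sketch, assuming approximate 7B-class dimensions (32 layers, d_model of 4096, rank-8 adapters on two attention matrices per layer); these are illustrative round numbers, not exact LLaMA-2 or Mistral figures.

```python
# Back-of-envelope comparison of trainable parameters: full fine-tuning
# vs. LoRA. Dimensions approximate a 7B-class transformer (assumption).

def lora_trainable_params(n_layers, d_model, rank, matrices_per_layer=2):
    # Each adapted (d_model x d_model) weight gets two low-rank factors:
    # A (d_model x rank) and B (rank x d_model) => 2 * d_model * rank params.
    return n_layers * matrices_per_layer * 2 * d_model * rank

full = 7_000_000_000                       # every weight is trainable
lora = lora_trainable_params(n_layers=32, d_model=4096, rank=8)
print(f"LoRA trains {lora:,} params ({lora / full:.3%} of full fine-tuning)")
```

That sub-0.1% trainable fraction is why LoRA (and QLoRA, which adds 4-bit quantization of the frozen base) fits on hardware where full tuning of a 7B model does not.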


Varun reddy


GITAM • 1y

Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It’s all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, li


Nikhil Raj Singh

Entrepreneur | Build... • 2m

Hiring AI/ML Engineer 🚀 Join us to shape the future of AI. Work hands-on with LLMs, transformers, and cutting-edge architectures. Drive breakthroughs in model training, fine-tuning, and deployment that directly influence product and research outcom


Comet

#freelancer • 3m

The “Entry-Level” AI Engineer Job Description of 2025 🚨 HR today: > “We’re looking for someone who knows Python, OOP, NumPy, Pandas, SQL, Advanced Excel, Power BI, ALL ML/DL models, every AWS service, LangChain, Docker, Kubernetes, LLM fine-tuning


Ayush Maurya

AI Pioneer • 9m

"Synthetic Data" is used in AI and LLM training !! • cheap • easy to produce • perfectly labelled data ~ derived from the real world data to replicate the properties and characteristics of the rela world data. It's used in training an LLM (LLMs


Aditya Karnam

Hey I am on Medial • 7m

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -

