
Recommendations from Medial


Nikhil Raj Singh

Entrepreneur | Build... • 1m

Hiring AI/ML Engineer 🚀 Join us to shape the future of AI. Work hands-on with LLMs, transformers, and cutting-edge architectures. Drive breakthroughs in model training, fine-tuning, and deployment that directly influence product and research outcomes…


Gigaversity

Gigaversity.in • 3m

Overfitting, underfitting, and fitting — these aren't just technical terms, but critical checkpoints in every machine learning workflow. Understanding these concepts is key to evaluating model behavior, improving generalization, and building solutions…

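The trade-off described in the post above is easy to see numerically. A minimal sketch (illustrative data, not from the post): fit polynomials of increasing degree to noisy samples of a smooth function and compare train vs. test error. A too-simple model underfits both sets; a too-complex one drives train error toward zero while test error climbs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function
f = lambda x: np.sin(np.pi * x)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = f(x_train) + rng.normal(0, 0.3, x_train.size)
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = f(x_test)

def mse(deg):
    """Fit a polynomial of the given degree on train; return (train_mse, test_mse)."""
    coeffs = np.polyfit(x_train, y_train, deg)
    tr = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    te = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return tr, te

for deg in (1, 5, 15):
    tr, te = mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}  test MSE {te:.3f}")
```

Degree 1 underfits (high error everywhere), degree 5 generalizes, and degree 15 memorizes the 20 noisy points, which is exactly the gap between train and test error that model evaluation is meant to catch.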

Varun reddy


GITAM • 1y

Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It’s all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, like…

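For readers curious what "honing a pre-trained genius" looks like mechanically, here is a toy sketch of the most common recipe: freeze a pretrained backbone and train only a small task head on top of its features. Everything below (the random stand-in "backbone", dimensions, learning rate) is made up for illustration, not any real model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: a fixed random projection we keep frozen.
W_frozen = rng.normal(size=(2, 16))

# Toy downstream task: classify which side of a line a point falls on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

Phi = np.tanh(X @ W_frozen)        # "pretrained" features; never updated

# Fine-tuning here = training only a small logistic-regression head.
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))   # sigmoid over head logits
    g = p - y                                   # gradient of log loss w.r.t. logits
    w -= lr * Phi.T @ g / len(X)
    b -= lr * g.mean()

acc = ((Phi @ w + b > 0) == (y > 0.5)).mean()
print(f"accuracy with frozen backbone + tuned head: {acc:.2f}")
```

Only `w` and `b` ever change; the "general knowledge" in the frozen weights is reused as-is, which is why fine-tuning needs far less data and compute than pre-training.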

Kimiko

Startups | AI | info... • 5m

X updates its developer agreement to ban third parties from using the X API or X Content for training or fine-tuning foundation or frontier AI models.


Aditya Karnam

Hey I am on Medial • 7m

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  …
```

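For context on why LoRA runs like the one above are so light: LoRA freezes the pretrained weight W0 and learns only a low-rank update BA, so the trainable parameter count collapses. A rough numpy sketch of the idea (dimensions, rank, and scaling are illustrative, not MLX's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r, alpha = 1024, 1024, 4, 16      # layer shape and LoRA rank (illustrative)
W0 = rng.normal(size=(d, k))            # pretrained weight: frozen, never touched
A = 0.01 * rng.normal(size=(r, k))      # trainable low-rank factor
B = np.zeros((d, r))                    # trainable; zero init => update starts at 0

def lora_forward(x):
    # Base path plus scaled low-rank update (alpha / r is the usual scaling).
    return x @ W0.T + (alpha / r) * ((x @ A.T) @ B.T)

x = rng.normal(size=(1, k))
# With B still zero, the LoRA model matches the base model exactly.
print(np.allclose(lora_forward(x), x @ W0.T))

full, lora = d * k, r * (d + k)
print(f"trainable params: {lora:,} vs {full:,} full ({lora / full:.2%})")
```

Training touches only A and B, which is why a 7B model can be adapted on a single laptop-class GPU or Apple-silicon machine.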

Narendra

Willing to contribut... • 2d

I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about: 1. Data quality > Data quantity…


AI Engineer

AI Deep Explorer | f... • 7m

"A Survey on Post-Training of Large Language Models". This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fine-Tuning…


Dhruv Pithadia

A.I. Enthusiast • 7m

Working on a cool AI project that involves a vector DB and LLM fine-tuning.


lakshya sharan

Do not try, just do ... • 1y

Random Thought: I was wondering why ChatGPT wasn't built on the Incremental Learning model... because it might destroy its algo... Let me explain... In the world of machine learning, training models can be approached in two main ways: Batch Learning…

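The worry in the post above, that naive incremental updates could wreck what the model already knows, is a real effect (often called catastrophic forgetting). A toy sketch with made-up data: a model fit in batch on phase-A data, then updated sample-by-sample on phase-B data whose rule has changed, ends up accurate on B but badly wrong on A.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two phases of data whose input -> output rule changes between them
w_true_a = np.array([1.0, 2.0, -1.0])
w_true_b = np.array([-2.0, 0.5, 3.0])
X_a = rng.normal(size=(200, 3)); y_a = X_a @ w_true_a
X_b = rng.normal(size=(200, 3)); y_b = X_b @ w_true_b

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Batch learning: one fit over all of phase A at once
w_batch = np.linalg.lstsq(X_a, y_a, rcond=None)[0]

# Incremental learning: keep taking SGD steps as phase-B samples stream in
w_inc, lr = w_batch.copy(), 0.05
for x, t in zip(X_b, y_b):
    w_inc -= lr * (x @ w_inc - t) * x     # one-sample gradient step

print("phase-A error, batch model:            ", round(mse(w_batch, X_a, y_a), 4))
print("phase-A error, after streaming phase B:", round(mse(w_inc, X_a, y_a), 4))
```

The streamed model has quietly overwritten the phase-A solution, which is one reason production LLMs are retrained in controlled batch runs rather than updated continuously on live traffic.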

Ayush Maurya

AI Pioneer • 10m

Are there any startups working on building datasets for large language model training?

