
Recommendations from Medial


Nikhil Raj Singh

Entrepreneur | Build... • 6m

Hiring AI/ML Engineer 🚀 Join us to shape the future of AI. Work hands-on with LLMs, transformers, and cutting-edge architectures. Drive breakthroughs in model training, fine-tuning, and deployment that directly influence product and research outcomes.

Gigaversity

Gigaversity.in • 8m

Overfitting, underfitting, and a good fit: these aren't just technical terms, but critical checkpoints in every machine learning workflow. Understanding these concepts is key to evaluating model behavior, improving generalization, and building solutions.
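A quick numpy sketch (illustrative, not from the post) makes those checkpoints concrete: fit polynomials of increasing degree to noisy quadratic data and compare error on the training points against error on held-out points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a quadratic function.
x = np.linspace(-1, 1, 30)
y = x**2 + rng.normal(scale=0.05, size=x.size)

# Held-out points from the same underlying function.
x_test = np.linspace(-0.95, 0.95, 20)
y_test = x_test**2

def fit_and_eval(degree):
    """Fit a polynomial of the given degree; return (train_mse, test_mse)."""
    coeffs = np.polyfit(x, y, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_mse, test_mse

underfit = fit_and_eval(1)   # too simple: large error on both splits
good_fit = fit_and_eval(2)   # matches the true function: low error on both
overfit = fit_and_eval(15)   # chases the noise: tiny train error, unstable elsewhere
```

The underfit model scores badly everywhere; the overfit one scores well only on data it has already seen, which is exactly the generalization gap the post warns about.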


Varun reddy

GITAM • 1y

Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It's all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, li…
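A minimal sketch of the "genius, then hone it" idea, using a hypothetical frozen backbone and a small trainable head in pure numpy (a stand-in, not a real pretrained model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained model: a frozen feature extractor whose
# weights are never updated during fine-tuning.
W_frozen = rng.normal(size=(4, 8))

def backbone(x):
    return np.tanh(x @ W_frozen)  # the frozen "general knowledge"

# A specific downstream task the model must be honed for.
X = rng.normal(size=(200, 4))
true_w = rng.normal(size=8)
y = backbone(X) @ true_w  # task labels

# Fine-tuning here = training only a small task head on top.
head = np.zeros(8)
lr = 0.1
for _ in range(500):
    feats = backbone(X)
    grad = feats.T @ (feats @ head - y) / len(X)  # MSE gradient w.r.t. head
    head -= lr * grad

task_mse = float(np.mean((backbone(X) @ head - y) ** 2))
```

The general knowledge (`W_frozen`) stays intact; only the small task-specific part moves, which is the essence of most fine-tuning recipes.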

See More
1 Reply
4

Kimiko

Startups | AI | info... • 9m

X updates its developer agreement to ban third parties from using the X API or X Content for training or fine-tuning foundation or frontier AI models.


Aditya Karnam

Hey I am on Medial • 1y

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  …
```
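For readers new to LoRA: the trick behind that command is a low-rank update added to a frozen weight matrix. A generic numpy sketch of the idea (not MLX-specific; the dimensions and rank here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 8                  # layer width and LoRA rank (r << d)
W = rng.normal(size=(d, d))   # frozen pretrained weight matrix

# LoRA trains only two small factors. B starts at zero, so before any
# training the adapted layer behaves exactly like the pretrained one.
A = rng.normal(scale=0.01, size=(r, d))
B = np.zeros((d, r))
alpha = 16.0

def adapted(x):
    """Forward pass with the low-rank update: W x + (alpha / r) * B (A x)."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
assert np.allclose(adapted(x), W @ x)  # identical until B is trained

full_params = d * d       # what full fine-tuning would update
lora_params = 2 * d * r   # what LoRA updates instead
```

Training touches only `A` and `B`, a 4x parameter saving even at this toy size; at 7B scale the saving is dramatic, which is why single-machine runs like the one above are feasible.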


Narendra

Willing to contribut... • 4m

I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about: 1. Data quality > Data quantity…
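On the "data quality > data quantity" point, a minimal example of the kind of cleaning pass that pays off before any fine-tuning run; the helper and thresholds here are illustrative, not from the post:

```python
def clean_dataset(examples, min_len=8):
    """Drop blank prompts, trivially short responses, and exact
    duplicates, keeping the first occurrence of each pair."""
    seen = set()
    cleaned = []
    for prompt, response in examples:
        prompt, response = prompt.strip(), response.strip()
        if not prompt or len(response) < min_len:
            continue
        if (prompt, response) in seen:
            continue
        seen.add((prompt, response))
        cleaned.append((prompt, response))
    return cleaned

raw = [
    ("What is LoRA?", "A low-rank adaptation method for fine-tuning."),
    ("What is LoRA?", "A low-rank adaptation method for fine-tuning."),  # duplicate
    ("Explain QLoRA", "ok"),                                             # too short
    ("", "An answer with no prompt."),                                   # blank prompt
]
cleaned = clean_dataset(raw)  # only the first pair survives
```

Three of the four raw pairs are noise; a small, clean set like this usually trains better than a large, dirty one.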


Bhoop Singh Gurjar

AI Deep Explorer | f... • 12m

"A Survey on Post-Training of Large Language Models"
This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fine-Tuning…


Dhruv Pithadia

A.I. Enthusiast • 1y

Working on a cool AI project that involves a vector DB and LLM fine-tuning.


lakshya sharan

Do not try, just do ... • 1y

Random Thought: I was wondering why ChatGPT wasn't built on the Incremental Learning model, because users like me might destroy its algorithm. Let me explain. In the world of machine learning, training models can be approached in two main ways: Batch Learning…
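The batch-vs-incremental distinction the post is heading toward can be shown with a running mean: the incremental version updates on one example at a time, yet lands on the same answer as the batch computation. (A toy sketch, not how ChatGPT is actually trained.)

```python
# Batch learning: collect all data first, then fit once.
# Incremental (online) learning: update the model as each example arrives.
data = [4.0, 7.0, 1.0, 9.0, 4.0]

# Batch: one computation over the stored dataset.
batch_mean = sum(data) / len(data)

# Incremental: a running estimate, no stored dataset. Every incoming
# example immediately changes the model, which is why letting arbitrary
# user input drive updates is risky, as the post suspects.
mean, n = 0.0, 0
for x in data:
    n += 1
    mean += (x - mean) / n  # standard incremental mean update
```

Both estimates agree here; the danger with incremental learning is that a stream of adversarial `x` values would drag the model wherever the sender wants.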


Ayush Maurya

AI Pioneer • 1y

Are there any startups working on building datasets for large language model training?

