Startups | AI | info... • 5m
X updates its developer agreement to ban third parties from using the X API or X Content for training or fine-tuning foundation or frontier AI models.

GITAM • 1y
Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It's all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. Fine-tuning takes that genius and hones its skills for a specific task, li…
Entrepreneur | Build... • 2m
Hiring AI/ML Engineer. Join us to shape the future of AI. Work hands-on with LLMs, transformers, and cutting-edge architectures. Drive breakthroughs in model training, fine-tuning, and deployment that directly influence product and research outcom…

AI Deep Explorer | f... • 7m
"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1๏ธโฃ Fin
Hey I am on Medial • 8m
"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -

Turning dreams into ... • 8m
India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross. Is this the right strategy for India to lead in AI innovation? Thoughts?
Gigaversity.in • 4m
Overfitting, underfitting, and fitting: these aren't just technical terms, but critical checkpoints in every machine learning workflow. Understanding these concepts is key to evaluating model behavior, improving generalization, and building solutio…
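The train-versus-validation gap these terms describe is easy to demonstrate: fit polynomials of increasing degree to noisy samples of a simple function and compare errors on held-out points. A small NumPy sketch; the data, noise level, and degrees are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of y = sin(x); validation points interleave the training grid.
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=x_train.shape)
x_val = np.linspace(0.15, 2.85, 10)
y_val = np.sin(x_val) + rng.normal(0, 0.1, size=x_val.shape)

def fit_and_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, val MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    return train_mse, val_mse

under_train, under_val = fit_and_mse(0)  # underfit: a constant model
good_train, good_val = fit_and_mse(3)    # reasonable capacity
over_train, over_val = fit_and_mse(9)    # overfit: degree ≈ number of points
```

The degree-9 model drives training error to essentially zero by memorizing the noise, while the constant model is poor on both sets; the middle-capacity model is the one that generalizes.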
Willing to contribut... • 15d
I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA). Here's what I learned that nobody talks about:
1. Data quality > data quan…
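One concrete reason LoRA and QLoRA are so much cheaper than full tuning: the adapter trains only two low-rank matrices, and with the up-projection initialized to zero the adapted model starts out identical to the base model. A minimal NumPy sketch; the dimensions, rank, and alpha are illustrative assumptions, not taken from any of the models above:

```python
import numpy as np

rng = np.random.default_rng(42)
d_in, d_out, rank = 64, 64, 4

# Frozen pretrained weight: never updated during LoRA fine-tuning.
W = rng.normal(0, 0.02, size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero, so the adapted
# model initially matches the base model exactly.
A = rng.normal(0, 0.02, size=(rank, d_in))
B = np.zeros((d_out, rank))
alpha = 8.0

def lora_forward(x):
    # Base path plus scaled low-rank update: (W + (alpha/rank)·BA)x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
base = W @ x
adapted = lora_forward(x)  # equals `base` before any training step

# Parameter budget: the adapter is a fraction of the full matrix.
full_params = W.size           # d_out * d_in = 4096
lora_params = A.size + B.size  # rank * (d_in + d_out) = 512
```

For a single 64×64 layer the adapter is already 8× smaller; at transformer scale the same ratio is what makes LoRA fit on a single GPU, and QLoRA adds 4-bit quantization of the frozen `W` on top.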
AI Deep Explorer | f... • 7m
LLM Post-Training: A Deep Dive into Reasoning LLMs This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs) focusing on improving reasoning capabilities. While LLMs achieve strong performance…

Hey I am on Medial • 11m
Hiring for Generative AI Engineer (Remote). AI-based SaaS company in the health sector seeks Generative AI Engineer. Requirements:
- 1-3 years of experience in Generative AI
- Expertise in LLMs and Diffusion Models
- Strong foundation in compu…