
Another open-source model has arrived, and it’s even better than DeepSeek-V3. The Allen Institute for AI just introduced Tülu 3 (405B) 🐫, a post-trained fine-tune of Llama 3.1 405B that outperforms DeepSeek V3.

Anonymous 3

Hey I am on Medial • 2m

But if people are fine-tuning their models on top of existing models, isn't that a bad thing? Like, imagine training on AI output... I thought that was recursion or something.

0 replies

More like this

Recommendations from Medial


Jainil Prajapati

Turning dreams into ... • 1m

India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross. Is this the right strategy for India to lead in AI innovation? Thoughts?

2 replies · 3 likes

Varun reddy

GITAM • 10m

Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It’s all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, li…

1 reply · 4 likes
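The "genius with general knowledge" framing in the post above can be sketched in a few lines: freeze the pre-trained backbone and train only a small task-specific head. Everything here is a toy stand-in (a random-feature "backbone", synthetic labels), purely to illustrate the idea that fine-tuning updates only a small slice of the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" backbone: a frozen feature extractor (stand-in for a real model).
W_backbone = rng.normal(size=(16, 8))  # never updated during fine-tuning

def features(x):
    return np.tanh(x @ W_backbone)     # general knowledge, frozen

# Task head: the only parameters fine-tuning touches.
w_head = np.zeros(8)

# Toy task: classify by the sign of the first input feature.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(float)

# A few steps of gradient descent on the head only (logistic regression).
for _ in range(500):
    p = 1 / (1 + np.exp(-features(X) @ w_head))
    grad = features(X).T @ (p - y) / len(y)
    w_head -= 0.5 * grad

preds = (1 / (1 + np.exp(-features(X) @ w_head))) > 0.5
acc = (preds == y.astype(bool)).mean()  # well above chance on the training data
```

Real fine-tuning (full or parameter-efficient) works on the same principle, just with an actual pre-trained network in place of the random projection.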

Bhoop singh Gurjar

AI Deep Explorer | f... • 2d

"A Survey on Post-Training of Large Language Models": this paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning, 2. Alignment, 3. Reasoning Enhancement, 4. Efficiency Optimization, 5. Integration & Adaptation. 1️⃣ Fin…

0 replies · 7 likes

Dhruv Pithadia

A.I. Enthusiast • 18d

Working on a cool AI project that involves a vector DB and LLM fine-tuning.

0 replies · 2 likes

Ayush Maurya

AI Pioneer • 3m

BREAKTHROUGH INSIGHT: Most people train AI models. Smart people fine-tune AI models. But the real secret? Learning to dance with AI's existing knowledge. Stop forcing. Start flowing.

0 replies · 3 likes

Yogesh Jamdade

..... • 9m

Hey everyone, I'm an engineering student geeking out over Generative AI. Loving LangChain, Hugging Face models, Crew.ai's chatbots, fine-tuning, and RAG. Plus, machine learning and data science are pretty cool too! Anyone else into this stuff? Looki…

0 replies · 3 likes

Soumya

Developer • 4m

💡 An Idea to Change the Game for AI Startups: Making AI Processing Faster, Cheaper, and Effortless. Running AI models like ChatGPT, DALL·E, or AlphaCode is a computing monster: they need massive power to function, which makes them expensive to operate…

2 replies · 4 likes

Zaki Aslam

Dope • 23d

Out of all the startups and professionals here, how many are actively working on developing their own AI models? And how many are building apps, tools, or services on top of existing AI models?

0 replies · 5 likes

Aditya Karnam

Hey I am on Medial • 11d

Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  -…
```
0 replies
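For anyone wondering what LoRA fine-tuning (as in the command above) actually changes: the base weight matrix stays frozen and only a pair of small low-rank matrices is trained, so the effective weight is W + (alpha/r)·B·A. Below is a minimal numpy sketch of that idea; the shapes and the alpha/r scaling follow the LoRA paper, but the names are illustrative and this is not MLX's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 1024, 1024, 8, 16     # rank r is much smaller than d

W = rng.normal(size=(d_out, d_in))            # pre-trained weight, frozen
A = rng.normal(size=(r, d_in)) * 0.01         # trainable, rank r
B = np.zeros((d_out, r))                      # trainable, zero-init so training starts from the base model

def lora_forward(x):
    # Frozen base path plus scaled low-rank path; only A and B would get gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0 the adapted model is exactly the frozen base model.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size              # 1,048,576 for this layer
lora_params = A.size + B.size     # 16,384: ~1.6% of the full matrix
```

That parameter ratio is why LoRA runs comfortably on a laptop: you train (and store) only the small A and B matrices per layer, not the full weights.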
