Exposed: The Truth Behind Volkai – Is It Truly India-Made AI?
So, I recently dug into the buzz around Volkai, the AI company gaining fame as India's very own innovation in the AI space. Everyone has been hyping it up for being completely built from…
Fine-Tuning: The Secret Sauce of AI Magic!
Ever wonder how AI gets so smart? It’s all about fine-tuning!
Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, like…
1 reply · 4 likes
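(Not from the post above, just a minimal sketch of what that "honing" step looks like in code, assuming the Hugging Face transformers/datasets stack; the DistilBERT checkpoint and IMDB dataset are stand-in choices.)

```
# Sketch: fine-tune a pre-trained "generalist" model for one specific task.
# Model and dataset below are illustrative assumptions, not from the post.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from the "genius with general knowledge": a pre-trained checkpoint.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hone it for a specific task: sentiment classification on movie reviews.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()  # pre-trained weights get nudged toward the new task
```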
Dhruv Pithadia
A.I. Enthusiast • 2m
Working on a cool AI project that involves a vector DB and LLM fine-tuning
0 replies · 2 likes
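(Hypothetical sketch, not from the post: the core idea behind the "vector DB" half of such a project is embedding storage plus nearest-neighbour search. A numpy-only toy version:)

```
import numpy as np

class TinyVectorDB:
    """Toy in-memory vector store: add embeddings, search by cosine similarity."""

    def __init__(self, dim):
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.docs = []

    def add(self, embedding, doc):
        # Normalise so a plain dot product equals cosine similarity.
        v = np.asarray(embedding, dtype=np.float32)
        self.vectors = np.vstack([self.vectors, v / np.linalg.norm(v)])
        self.docs.append(doc)

    def search(self, query, k=3):
        q = np.asarray(query, dtype=np.float32)
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q            # cosine similarity to every doc
        top = np.argsort(scores)[::-1][:k]   # indices of the best k matches
        return [(self.docs[i], float(scores[i])) for i in top]
```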
Kimiko
Startups | AI | info... • 16h
X updates its developer agreement to ban third parties from using the X API or X Content for training or fine-tuning foundation or frontier AI models.
0 replies · 9 likes
Venture Linkup
Where Businesses Con... • 1m
A fine is a tax for doing wrong. A tax is a fine for doing well.
I really want to work on AI projects, but I'm inexperienced with company work. I used to work as a research intern at a lab and was a data science intern at another place. I really want to get into working on HF models, LangChain, LangGraph, fine-tuning…
1 reply · 10 likes
Bhoop singh Gurjar
AI Deep Explorer | f... • 2m
"A Survey on Post-Training of Large Language Models"
This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fine-Tuning…
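(A hedged illustration of paradigm 1 from the survey post above, with a taste of paradigm 4: a LoRA adapter fine-tunes a frozen pre-trained weight W by learning a low-rank update W + (alpha/r)·BA, so only A and B are trained. This is a generic sketch in PyTorch, not code from the paper.)

```
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pre-trained linear layer with a trainable low-rank correction."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A starts small and random, B starts at zero, so the adapter
        # initially leaves the base model's behaviour unchanged.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Original frozen path plus the scaled low-rank update.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```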
Does anyone know how to publish an e-book directly? Or do I need publishers for e-books as well?
2 replies · 3 likes
Jainil Prajapati
Turning dreams into ... • 3m
India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross.
Is this the right strategy for India to lead in AI innovation? Thoughts?
2 replies · 3 likes
Aditya Karnam
Hey I am on Medial • 2m
"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:
```
# note: the post was cut off after --model; the remaining flags are
# reconstructed as typical mlx-examples lora.py options, not the poster's exact ones
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  --data ./data \
  --batch-size 4 \
  --lora-layers 16
```
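(For what it's worth, the same mlx-examples script can then generate with the trained adapter; the flags below are assumed from that repo's options and may differ by version, so treat this as a sketch rather than the poster's command.)

```
# Assumed usage of the script's generation mode with the saved adapter
python lora.py \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  --adapter-file adapters.npz \
  --prompt "Q: What does fine-tuning do?\nA:"
```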