Hi, is anybody using foundation models (LLMs) in development? If so, are you using closed-source ones like GPT or Gemini, or open-source models? If you're only using closed source, why? You could save money by using open-source models with low parameter counts.
Any developers here?
What's the best way to fine-tune an MVP text-based chatbot model?
I've got two datasets:
1. main data: 700 rows
2. fine-tuning data: more than 3000 rows
I'm having problems downloading the local model.
Are there other ways? I'm just getting started.
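Not sure about the "best" way, but before any training it helps to merge the two datasets and carve out an eval split. A minimal sketch below, assuming each row is a prompt/completion dict in JSONL style (the field names, split ratio, and function names are just illustrative, not from any specific library):

```python
import json
import random

def build_finetune_split(main_rows, extra_rows, eval_frac=0.1, seed=42):
    """Combine the two datasets and carve out a held-out eval split.

    main_rows:  the ~700-row primary dataset (list of dicts)
    extra_rows: the 3000+-row fine-tuning dataset (list of dicts)
    Returns (train, eval) lists ready to dump as JSONL.
    """
    rows = list(main_rows) + list(extra_rows)
    random.Random(seed).shuffle(rows)  # deterministic shuffle for reproducibility
    n_eval = max(1, int(len(rows) * eval_frac))
    return rows[n_eval:], rows[:n_eval]

def to_jsonl(rows, path):
    # Most fine-tuning tools accept one JSON object per line (JSONL).
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Toy rows mimicking the two datasets mentioned above:
main = [{"prompt": f"q{i}", "completion": f"a{i}"} for i in range(700)]
extra = [{"prompt": f"x{i}", "completion": f"y{i}"} for i in range(3000)]
train, evl = build_finetune_split(main, extra)
print(len(train), len(evl))  # 3330 370
```

With the JSONL files written out, you can point most trainers (or a hosted fine-tuning API) at them instead of downloading a full local model, which may sidestep the download problems.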
1 reply · 2 likes
Guru shankar
S&C coach, Digital m... • 11m
A community won't form if you roast in public and apologize in person.
2 replies · 5 likes
Ganesh Nayak
Nothing is everythin... • 10m
Most people won't start their startups
Most startups will fail
Most of them won't be unicorns
Only a few will know real success
And maybe one of those few will change the world for the better
4 replies · 6 likes
Ayush Maurya
AI Pioneer • 3m
BREAKTHROUGH INSIGHT:
Most people train AI models.
Smart people fine-tune AI models.
But the real secret?
Learning to dance with AI's existing knowledge.
Stop forcing. Start flowing.
0 replies · 3 likes
Suprodip Bhattacharya
Entrepreneur || Star... • 3m
Just a rough idea, not yet refined: what if there were an elder-care super app to meet all the needs of elderly people, from caregivers and companions to spend time with, to doctor checkups and instant doctors at home, including those whose children have left them in old-age homes. A pack
BURNING DESIRE
FAITH
DECISION
IMAGINATION
SPECIALIZED KNOWLEDGE
PERSISTENCE
ORGANIZED PLANNING
.....These are the steps to become rich; without any of them, you won't be rich.
Comment with what else should be added here besides these...?
Won't there be an October showcase??
Niket Raj Dwivedi, if yes, when will it start??
3 replies5 likes
Chamarti Sreekar
Passionate about Pos... • 2m
Another open-source model has arrived, and it’s even better than DeepSeek-V3.
The Allen Institute for AI just introduced Tülu 3 (405B) 🐫, a post-trained fine-tune of Llama 3.1 405B that outperforms DeepSeek V3.