"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:
```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  ...
```
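The command above is cut off in the original post. The `lora.py` script in Apple's mlx-examples repository also takes a `--data` flag pointing at a directory with `train.jsonl` and `valid.jsonl`, one JSON object per line with a `"text"` field. A minimal sketch of preparing such a directory, with purely illustrative example records (not the poster's data):

```python
import json
import os
import tempfile

# Illustrative records in the instruct-style prompt format; the actual
# training text depends entirely on your task and is assumed here.
records = [
    {"text": "[INST] What is MLX? [/INST] MLX is Apple's array framework for Apple silicon."},
    {"text": "[INST] What is LoRA? [/INST] LoRA fine-tunes a model by training small low-rank adapter matrices."},
]

# lora.py expects a data directory containing train.jsonl and valid.jsonl.
data_dir = tempfile.mkdtemp()
for split in ("train", "valid"):
    with open(os.path.join(data_dir, f"{split}.jsonl"), "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

print(data_dir)
```

You would then pass this directory to the training command as `--data <data_dir>`.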
One of the best articles I've read on DeepSeek and its effect on the Nvidia stock; it explains in detail how the model is trained and fine-tuned to achieve such strong logical reasoning.
0 replies · 9 likes
Kundan Karmakar
A Billionaire by 2044 · 17d
So glad to see you all — we just made it to 79%. We will launch the Consulting Program soon.
Stay tuned
0 replies · 2 likes
Shoeb Sheikh
Build the future of ... · 6m
Here is something new in AI technology that helps others make progress and upgrade their productivity. Inviting collaboration using base technologies: Python, pre-fine-tuned models, etc.
0 replies · 2 likes
Deepak Tiwari
Co-founder Gioch Pv... · 7m
Stay Tuned....
4 replies · 6 likes
Sanket Jadyar
Keep it up and do yo... · 11m
Stay tuned
0 replies · 7 likes
Sohaib
pills and pitches, ... · 10m
Just started reading 'Atomic Habits' and finished Chapter 1! Excited to share key insights after each chapter. Stay tuned for more!