Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'meta-llama/Llama-3.2-3B-Instruct' \
  --data '/path/to/data' \
  --batch-size 2 \
  --lora-layers 8 \
  --iters 1000
```

No cloud needed, just my Mac and MLX! Highly recommend for efficient local fine-tuning.

#MLX #LLaMA3.2 #LoRA #FineTuning #AppleSilicon
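For anyone trying this: the MLX LoRA example expects `--data` to point at a directory containing `train.jsonl` and `valid.jsonl`, where each line is a JSON object with a `"text"` field. Here's a minimal sketch for preparing that layout (the example strings and the `data/` path are placeholders; check the mlx-examples repo for the exact format your version expects):

```python
import json
import os

# Hypothetical toy examples -- replace with your own training text.
examples = [
    {"text": "Q: What is MLX? A: Apple's array framework for Apple silicon."},
    {"text": "Q: What is LoRA? A: Low-rank adaptation for efficient fine-tuning."},
]

# Write one JSON object per line to train.jsonl and valid.jsonl.
os.makedirs("data", exist_ok=True)
for split in ("train.jsonl", "valid.jsonl"):
    with open(os.path.join("data", split), "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
```

Then pass `--data data` (or the absolute path) to the training command above.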