Do not try, just do ... • 1y
Random thought: I was wondering why ChatGPT wasn't built on the incremental learning model, because someone might destroy its algorithm. Let me explain. In the world of machine learning, training models can be approached in two main ways: batch learning and incremental learning. Offline (batch) training involves training the model on a fixed dataset all at once, typically split into training and test sets. After the initial training phase, the model is used to make predictions on new data without further updates. Online (incremental) training, on the other hand, continuously updates the model as new data comes in, learning incrementally from each new data point or batch. This allows the model to adapt to new patterns and data changes over time.

If the latter approach were adopted, someone could easily manipulate the algorithm by feeding it wrong data, like 2 + 2 = 6 instead of 4.

These are my thoughts; what are yours? #MachineLearning #DataScience
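The contrast above can be sketched in a few lines of scikit-learn (my choice of library; the post names none): a batch model trained once stays fixed, while an online model kept open to new data can be dragged off course by poisoned labels, the "2 + 2 = 6" scenario.

```python
# Sketch: batch vs. incremental learning, and why the latter is
# exposed to data poisoning. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2 * X.ravel() + 2  # the clean rule: y = 2x + 2

# Batch (offline) learning: fit once on a fixed dataset, then freeze.
batch_model = SGDRegressor(max_iter=1000, tol=1e-3, random_state=0)
batch_model.fit(X, y)

# Incremental (online) learning: the model keeps updating forever.
online_model = SGDRegressor(random_state=0)
online_model.partial_fit(X, y)

# An attacker streams in systematically wrong labels (y = 3x + 10).
X_bad = rng.uniform(0, 10, size=(2000, 1))
y_bad = 3 * X_bad.ravel() + 10
for i in range(0, len(X_bad), 100):
    online_model.partial_fit(X_bad[i:i + 100], y_bad[i:i + 100])

# The frozen batch model is untouched; the online model has drifted
# upward toward the attacker's rule.
print(batch_model.predict([[2.0]]))   # stays near the clean 2*2 + 2 = 6
print(online_model.predict([[2.0]]))  # pulled toward the poisoned rule
```

Production systems that do learn online typically defend against exactly this with input validation and human review of the update stream.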
Learning, building,... • 1m
Day 8/60: Batch vs. Online Learning! Good morning everyone, my name is Anuj Tongse, and today is day 8/60 of the AI Machine Learning Challenge. The challenge continues! Today was a deep dive into how models digest data. "The Break
Learning, building,... • 1m
Day 7/60: "Machine has 7 letters." "Learning has 8." You guys better know the reason. My name is Anuj Tongse, and after the pivot from deep learning to machine learning, today's updates are here: The AI Machine Learning Challenge continues! Today
Hey I am on Medial • 1y
"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  -
```
Founder | Agentic AI... • 5m
4 different ways of training LLMs. I've given a simple, detailed explanation below.

1.) Accurate Data Curation (Step-by-Step)
Prepares clean, consistent, and useful data so the model learns effectively.
1. Collect text
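The curation step above can be illustrated with a tiny sketch (the function names and cleaning rules are my own, not from the post): normalize the text, strip markup remnants, collapse whitespace, and deduplicate before anything reaches training.

```python
# Minimal, illustrative text-curation helpers (not a production pipeline).
import re
import unicodedata

def clean_text(raw: str) -> str:
    """Normalize Unicode, drop stray HTML tags, collapse whitespace."""
    text = unicodedata.normalize("NFC", raw)
    text = re.sub(r"<[^>]+>", " ", text)      # remove markup remnants
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text

def deduplicate(docs: list[str]) -> list[str]:
    """Exact-match dedup keeps the corpus consistent."""
    seen, out = set(), []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out

corpus = ["<p>Hello   world</p>", "Hello world", "Fresh   sample"]
cleaned = deduplicate([clean_text(d) for d in corpus])
print(cleaned)  # ['Hello world', 'Fresh sample']
```

Real LLM pipelines add near-duplicate detection, quality filtering, and PII scrubbing on top of this, but the shape of the step is the same.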
Gigaversity.in • 9m
Overfitting, underfitting, and fitting: these aren't just technical terms, but critical checkpoints in every machine learning workflow. Understanding these concepts is key to evaluating model behavior, improving generalization, and building solutio
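A quick way to see all three checkpoints at once (my own toy setup, not from the post) is to fit polynomials of increasing degree to noisy data and compare train error against held-out error: a too-simple model misses both, a reasonable one tracks both, and a too-flexible one aces the train set while degrading on held-out data.

```python
# Under-, good, and overfitting via polynomial degree.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)

# Even/odd split into train and held-out sets.
x_tr, y_tr = x[::2], y[::2]
x_te, y_te = x[1::2], y[1::2]

results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse_tr = float(np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2))
    mse_te = float(np.mean((np.polyval(coeffs, x_te) - y_te) ** 2))
    results[degree] = (mse_tr, mse_te)
    print(f"degree {degree}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}")

# Degree 1 underfits (high error on both splits); degree 3 generalizes;
# degree 9 drives train error down by fitting the noise.
```

The gap between train and held-out error, not train error alone, is the generalization signal the post is pointing at.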
Gigaversity.in • 11m
One of our recent achievements showcases how optimizing code and parallelizing processes can drastically improve machine learning model training times. The Challenge: Long Training Times. Our model training process was initially taking 8 hours, slow
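The post is cut off before the fix is described, so here is only a generic sketch of the parallelization idea it alludes to (an assumption on my part, not Gigaversity's actual approach): when training contains independent units of work, such as cross-validation folds or per-shard jobs, running them across processes instead of sequentially cuts wall-clock time roughly in proportion to the worker count.

```python
# Generic sketch: parallelizing independent training jobs with processes.
from concurrent.futures import ProcessPoolExecutor
import time

def train_fold(fold_id: int) -> tuple[int, float]:
    """Stand-in for one expensive, independent training job."""
    time.sleep(0.05)          # simulate compute
    return fold_id, 0.9       # (fold id, dummy score)

def run_serial(folds):
    """Baseline: one fold after another."""
    return [train_fold(f) for f in folds]

def run_parallel(folds, workers=4):
    """Same folds, spread across worker processes; order is preserved."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(train_fold, folds))

if __name__ == "__main__":
    folds = range(4)
    t0 = time.perf_counter(); run_serial(folds)
    t1 = time.perf_counter(); run_parallel(folds)
    t2 = time.perf_counter()
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
```

Process startup has its own overhead, so this pays off only when each job is substantially more expensive than spawning a worker, which an 8-hour training run certainly is.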