
lakshya sharan

Do not try, just do ... • 11m

Random Thought: I was wondering why ChatGPT wasn't built on the incremental learning model, because then anyone could destroy its algorithm. Let me explain.

In the world of machine learning, training models can be approached in two main ways: batch learning and incremental learning. Offline (batch) training involves training the model on a fixed dataset all at once, typically split into training and test sets. After the initial training phase, the model is used to make predictions on new data without further updates. Online (incremental) training, on the other hand, continuously updates the model as new data comes in, learning from each new data point or batch. This allows the model to adapt to new patterns and data changes over time.

If the latter approach were adopted, someone could easily manipulate the algorithm by feeding it wrong data, like insisting that 2+2=6, not 4.

These are my thoughts, what are yours? #MachineLearning #DataScience
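To make the contrast concrete, here is a minimal sketch using scikit-learn's SGDClassifier as a stand-in model (my own illustration, not anything ChatGPT actually runs): batch learning calls fit once on a fixed dataset, while incremental learning keeps calling partial_fit as data arrives, and that open door is exactly where poisoned "2+2=6" updates could slip in.

```python
# Minimal sketch: batch vs. incremental learning (illustration only).
import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.random.randn(1000, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Batch (offline) learning: train once on a fixed dataset, then freeze it.
batch_model = SGDClassifier(random_state=0)
batch_model.fit(X, y)

# Incremental (online) learning: keep updating as new data arrives.
online_model = SGDClassifier(random_state=0)
online_model.partial_fit(X, y, classes=np.array([0, 1]))

# The risk the post describes: anyone can push bad updates into an online model.
X_bad = np.random.randn(200, 5)
y_bad = 1 - (X_bad[:, 0] + X_bad[:, 1] > 0).astype(int)  # deliberately wrong labels
online_model.partial_fit(X_bad, y_bad)  # the "2+2=6" effect: quality degrades over time
```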

2 replies · 5 likes
Replies (2)

More like this

Recommendations from Medial

Aditya Karnam

Hey I am on Medial • 2m

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -

0 replies
1

Srinive

Digital Marketing • 5m

Career Opportunities After AI Training in Pune | Skillfloor
AI training in Pune opens up various career opportunities in fields like data analysis, machine learning, and robotics. Graduates can work as AI engineers, data scientists, or AI consultant…

0 replies · 1 like

Inactive

AprameyaAI • 10m

Meta has released Llama 3.1, the first frontier-level open source AI model, with features such as expanded context length to 128K, support for eight languages, and the introduction of Llama 3.1 405B. The model offers flexibility and control, enabli

0 replies · 9 likes
2

Ayush Maurya

AI Pioneer • 4m

"Synthetic Data" is used in AI and LLM training !! • cheap • easy to produce • perfectly labelled data ~ derived from the real world data to replicate the properties and characteristics of the rela world data. It's used in training an LLM (LLMs

0 replies · 4 likes

Gigaversity

Gigaversity.in • 21d

One of our recent achievements showcases how optimizing code and parallelizing processes can drastically improve machine learning model training times.
The Challenge: Long Training Times
Our model training process was initially taking 8 hours—slow…
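The post doesn't share its code, but the general idea of parallelizing independent pieces of a training workload can be sketched like this (an assumed example using joblib and scikit-learn, not Gigaversity's actual pipeline):

```python
# Sketch: run independent cross-validation folds in parallel instead of sequentially.
from joblib import Parallel, delayed
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

def train_fold(train_idx, test_idx):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    return model.score(X[test_idx], y[test_idx])

folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X))

# n_jobs=-1 trains one fold per CPU core; wall-clock time drops roughly with core count.
scores = Parallel(n_jobs=-1)(delayed(train_fold)(tr, te) for tr, te in folds)
print(scores)
```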

1 reply · 16 likes
4

Aura

AI Specialist | Rese... • 8m

Revolutionizing AI with Inference-Time Scaling: OpenAI's o1 Model
Inference-time Scaling: Focuses on improving performance during inference (when the model is used) rather than just training.
Reasoning through Search: The o1 model enhances reasonin…
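One common, simple form of inference-time scaling is best-of-N sampling with a scorer: spend extra compute at answer time rather than training time. A sketch of that general idea only (not OpenAI's o1 method; generate() and score() are hypothetical stand-ins):

```python
# Sketch of best-of-N sampling, one simple flavour of inference-time scaling.
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for sampling one candidate answer from a model.
    return f"candidate {random.randint(0, 999)} for: {prompt}"

def score(answer: str) -> float:
    # Hypothetical stand-in for a verifier / reward model rating an answer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # More compute at inference (larger n) buys a better answer without retraining.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 17 * 24?", n=16))
```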

1 reply · 5 likes
1

Ayush Maurya

AI Pioneer • 5m

Are there any startups working on building datasets for Large Language Model training?

0 replies · 4 likes

Jayaragul _

Hey I am on Medial • 4m

The model is based on the training data set...

0 replies · 5 likes

Dr Bappa Dittya Saha

We're gonna extinct ... • 1y

Loved the new algo! It's more towards user-generated content, not that boring LinkedIn shit! Great work, Team Medial!

3 replies · 5 likes

Chamarti Sreekar

Passionate about Pos... • 1m

deepseek r2 leaks
• 97.3% cheaper than gpt-4o
• 1.2T parameters, 78B active, hybrid MoE
• 5.2PB training data, 89.7% on C-Eval 2.0
• better vision, 92.4% on COCO
• 82% utilization on Huawei Ascend 910B
new era 🔥

15 replies · 53 likes
28
