
lakshya sharan

Do not try, just do ... • 1y

Random thought: I was wondering why ChatGPT wasn't built on the incremental learning model, because then I might destroy its algo... Let me explain.

In the world of machine learning, training models can be approached in two main ways: batch learning and incremental learning. Batch (offline) training trains the model on a fixed dataset all at once, typically split into training and test sets; after the initial training phase, the model is used to make predictions on new data without further updates. Incremental (online) training, on the other hand, continuously updates the model as new data comes in, learning from each new data point or batch. This allows the model to adapt to new patterns and data changes over time.

If the latter approach were adopted, someone could easily manipulate the algo by feeding it wrong data, like 2+2=6 instead of 4.

Those are my thoughts, what are yours? #MachineLearning #DataScience
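
A minimal sketch of the risk described above (my own illustration, using scikit-learn's SGDRegressor as a stand-in; this is not ChatGPT's actual training stack): a model learns addition incrementally from clean examples, then an attacker keeps feeding it the poisoned example "2 + 2 = 6".

```
import numpy as np
from sklearn.linear_model import SGDRegressor

# Toy task: learn y = a + b ("addition") from examples.
X = np.array([[1, 1], [2, 2], [3, 1], [2, 3]], dtype=float)
y = X.sum(axis=1)  # correct labels: 2, 4, 4, 5

model = SGDRegressor(learning_rate="constant", eta0=0.05, random_state=0)

# Online training on clean data: the model updates on every call.
for _ in range(200):
    model.partial_fit(X, y)
print(model.predict([[2.0, 2.0]]))  # ~4.0, as expected

# An attacker repeatedly feeds the poisoned example "2 + 2 = 6".
for _ in range(200):
    model.partial_fit(np.array([[2.0, 2.0]]), np.array([6.0]))
print(model.predict([[2.0, 2.0]]))  # drifts toward 6: the model was manipulated
```

A batch-trained model avoids this failure mode because it stops updating after the initial fit; bad inputs at serving time affect only that one answer, not the model's weights.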

2 Replies


Recommendations from Medial

Aditya Karnam

Hey I am on Medial • 3m

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -


Srinive

Digital Marketing • 7m

Career Opportunities After AI Training in Pune | Skillfloor
AI training in Pune opens up various career opportunities in fields like data analysis, machine learning, and robotics. Graduates can work as AI engineers, data scientists, or AI consultants…


Inactive

AprameyaAI • 11m

Meta has released Llama 3.1, the first frontier-level open-source AI model, with features such as context length expanded to 128K, support for eight languages, and the introduction of Llama 3.1 405B. The model offers flexibility and control, enabling…


Ayush Maurya

AI Pioneer • 5m

"Synthetic Data" is used in AI and LLM training !! • cheap • easy to produce • perfectly labelled data ~ derived from the real world data to replicate the properties and characteristics of the rela world data. It's used in training an LLM (LLMs


Gigaversity

Gigaversity.in • 2m

One of our recent achievements showcases how optimizing code and parallelizing processes can drastically improve machine learning model training times.
The Challenge: Long Training Times
Our model training process was initially taking 8 hours, slow…
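
The post doesn't show its code, but here is a hedged sketch of the kind of parallelization it describes: spreading independent work (here, fitting the trees of a random forest) across all CPU cores via scikit-learn's n_jobs.

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=20_000, n_features=50, random_state=0)

# n_jobs=1  -> trees are fit one after another (the slow baseline)
# n_jobs=-1 -> trees are fit in parallel on every available core
clf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```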

1 Reply

Dr Bappa Dittya Saha

We're gonna extinct ... • 1y

Loved the new algo! It's geared more toward user-generated content, not that boring LinkedIn shit! Great work, Team Medial!

3 Replies

Abin

Orcus • 1m

Exciting AI Updates: ElevenLabs v3, Google Gemini 2.5 Pro, HeyGen 5 New AI Studio, OpenAI Data Connectors, and Google Phone App Local AI....

1 Reply

Aura

AI Specialist | Rese... • 10m

Revolutionizing AI with Inference-Time Scaling: OpenAI's o1 Model
• Inference-time scaling: focuses on improving performance during inference (when the model is used) rather than just training.
• Reasoning through search: the o1 model enhances reasoning…
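
OpenAI hasn't published o1's exact method, but a minimal sketch of one simple form of inference-time scaling is best-of-N sampling: spend extra compute at inference by generating several candidate answers and keeping the best-scoring one (generate, score, and best_of_n below are hypothetical stand-ins).

```
import random

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; returns one candidate answer."""
    return f"answer-{random.randint(0, 100)} to {prompt!r}"

def score(answer: str) -> float:
    """Stand-in for a verifier / reward model rating a candidate."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Larger n = more inference-time compute = better expected answer.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 2 + 2?", n=16))
```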

1 Reply

Anshuman Sharma

Turing Data into str... • 7d

🌦 Weather Prediction Using Machine Learning 📊
I'm excited to share one of my recent projects, a Weather Prediction Model built using machine learning! 🚀
📌 Project Overview: The model was trained to predict temperature based on historical weather…
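
The post is cut off before the details, but a minimal sketch of the described setup might look like this (the features and coefficients are hypothetical): linear regression predicting temperature from historical weather features.

```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical historical features: [humidity %, pressure hPa, wind km/h]
X = rng.uniform([20, 990, 0], [90, 1035, 40], size=(500, 3))
# Hypothetical temperatures driven by those features, plus noise.
y = (30 - 0.1 * X[:, 0] + 0.05 * (X[:, 1] - 1000) - 0.2 * X[:, 2]
     + rng.normal(0, 1, 500))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```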

