
lakshya sharan

Stealth • 5m

Random thought: I was wondering why ChatGPT wasn't built on the incremental learning model, because someone might destroy its algorithm. Let me explain.

In machine learning, training models can be approached in two main ways: batch learning and incremental learning. Batch (offline) training trains the model on a fixed dataset all at once, typically split into training and test sets. After the initial training phase, the model is used to make predictions on new data without further updates. Incremental (online) training, on the other hand, continuously updates the model as new data comes in, learning from each new data point or batch. This allows the model to adapt to new patterns and data changes over time.

If the latter approach were adopted, someone could easily manipulate the algorithm by feeding it wrong data, like teaching it that 2 + 2 = 6 instead of 4.

These are my thoughts. What are yours? #MachineLearning #DataScience
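The poisoning worry in the post can be sketched with a toy online learner. This is a hypothetical, pure-Python illustration (not how ChatGPT actually trains): the model keeps a single running estimate for "what is 2 + 2?" and nudges it toward every new label it sees, so a stream of bad labels eventually overwrites the truth. The `update` function and learning rate are assumptions made purely for illustration.

```python
# Toy incremental (online) learner: one running estimate,
# updated a little after every new labeled example.

def update(estimate, label, lr=0.5):
    """Move the estimate toward the latest label by a step of size lr."""
    return estimate + lr * (label - estimate)

estimate = 0.0

# Phase 1: honest users supply the correct label, 4.
for _ in range(20):
    estimate = update(estimate, 4.0)
print(round(estimate, 2))  # 4.0

# Phase 2: an attacker floods the stream with the wrong label, 6.
for _ in range(20):
    estimate = update(estimate, 6.0)
print(round(estimate, 2))  # 6.0 — the model now "believes" 2 + 2 = 6
```

A batch-trained model is frozen after training, so the same flood of bad labels at serving time would not change its weights; that is exactly the trade-off the post describes.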

2 replies · 5 likes

More like this

Recommendations from Medial

Srinive

Stealth • 19d

Career Opportunities After AI Training in Pune | Skillfloor

AI training in Pune opens up various career opportunities in fields like data analysis, machine learning, and robotics. Graduates can work as AI engineers, data scientists, or AI consultant

0 replies · 1 like

Ayush Maurya

Stealth • 12d

Are there any startups working on building datasets for large language model training?

0 replies4 likes

Inactive

Stealth • 5m

Meta has released Llama 3.1, the first frontier-level open source AI model, with features such as expanded context length to 128K, support for eight languages, and the introduction of Llama 3.1 405B. The model offers flexibility and control, enabli

0 replies · 9 likes

Aura

Stealth • 3m

Revolutionizing AI with Inference-Time Scaling: OpenAI's o1 Model

Inference-time scaling: focuses on improving performance during inference (when the model is used) rather than just training. Reasoning through search: the o1 model enhances reasonin

See More
1 reply · 5 likes
Anonymous

Hey founders, any new updates for the Medial app?

1 reply · 7 likes

Harshavardhan

Stealth • 9m

Started a new YouTube channel on AI news and updates. I will also start posting new AI tools, news, and updates here; just make sure to follow me.

2 replies · 8 likes

Bappa Dittya Saha

Stealth • 7m

Loved the new algo! It's geared more towards user-generated content, not that boring LinkedIn stuff. Great work, Team Medial!

3 replies · 4 likes

Aryan patil

Stealth • 9m

In traditional programming, the focus is on using rules and data to find answers, typically represented as rules + data = answers. In contrast, AI/ML takes a different approach: answers + data = rules. In AI/ML, we train models by providing

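The contrast in this post can be shown with a toy example. This is a hypothetical sketch (the temperature-conversion task is my own illustration): in traditional programming we write the rule by hand; in ML we recover the rule from (data, answer) pairs.

```python
# Traditional programming: rules + data -> answers.
def fahrenheit(celsius):          # hand-written rule
    return celsius * 9 / 5 + 32

print(fahrenheit(100))  # 212.0

# Machine learning: answers + data -> rules.
# Fit the rule (slope, intercept) from two (data, answer) examples.
data = [0, 100]
answers = [32, 212]
slope = (answers[1] - answers[0]) / (data[1] - data[0])
intercept = answers[0] - slope * data[0]
print(slope, intercept)  # 1.8 32.0 — the conversion rule, learned from examples
```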
0 replies · 4 likes

Param Suthar

Stealth • 6m

"Education is not the learning of facts, but it is a training of the mind to think" ~ Albert Einstein

0 replies · 5 likes

Ripudaman Singh

Stealth • 3m

Recently had an idea about training a model on memes and making a simple classification model. Memes can be dark, intelligent, puns, and all sorts of things. If a model is trained on such a labelled dataset of memes, it can better classify a future m

9 replies · 8 likes
