
Recently had an idea about training a model on memes and making a simple classification model. Memes can be dark, intelligent, pun-based, and all sorts of things. If a model is trained on such a labelled dataset of memes, it can better classify a future m…


Ripudaman Singh

Trying to build big,... • 8m

Yes, it’s entirely possible, right? That’s the whole point of training a model. Once it is trained, it can be fine-tuned to accommodate a person’s interests, opinions, and preferences.

0 replies
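For anyone who wants to try the idea above, here is a minimal sketch of a meme-caption classifier in plain Python: a toy nearest-centroid model over bag-of-words counts. The captions and labels are made up for illustration, standing in for a real labelled meme dataset.

```python
from collections import Counter
import math

# Hypothetical labelled meme captions (toy data, just to illustrate the idea).
TRAIN = [
    ("i told my wifi a joke, now it's not responding", "pun"),
    ("why did the chicken cross the road", "pun"),
    ("my will to live called in sick again", "dark"),
    ("nothing matters so laugh anyway", "dark"),
]

def tokens(text):
    return text.lower().split()

def train(samples):
    """Build one bag-of-words centroid per label."""
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(tokens(text))
    return centroids

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(centroids, text):
    """Return the label whose centroid is most similar to the caption."""
    bow = Counter(tokens(text))
    return max(centroids, key=lambda lbl: cosine(bow, centroids[lbl]))

model = train(TRAIN)
print(classify(model, "my will to live"))
```

A real version would fine-tune a pre-trained text model on the labelled memes instead of counting words, but the train/classify split has the same shape.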

More like this

Recommendations from Medial

Ayush


Medial • 3m

One of the best articles I've read on DeepSeek and its effect on the Nvidia stock; it explains in detail how the model is trained and fine-tuned to have such strong logical reasoning.

0 replies · 9 likes

Aditya Karnam

Hey I am on Medial • 1m

Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  …
```

See More
0 replies
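As a side note on what the LoRA in that command actually does: it trains a small low-rank correction B·A on top of a frozen weight matrix W, so only r·(d_in + d_out) parameters are updated instead of d_in·d_out. A minimal numpy sketch with hypothetical layer sizes (this is the math, not the MLX implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4                 # hypothetical layer sizes; r is the LoRA rank

W = rng.standard_normal((d_out, d_in))     # pre-trained weight, kept frozen
A = rng.standard_normal((r, d_in)) * 0.01  # trainable "down" projection
B = np.zeros((d_out, r))                   # trainable "up" projection, zero-initialised

def lora_forward(x):
    """y = (W + B A) x: the base output plus the low-rank correction."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialised, the adapted layer starts out identical to the base layer.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # parameters if we tuned W directly
lora_params = A.size + B.size   # trainable parameters under LoRA
print(full_params, lora_params)
```

With these sizes the adapter trains 512 parameters instead of 4096, which is why LoRA runs feel so light.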

Varun reddy


GITAM • 12m

Fine-Tuning: The Secret Sauce of AI Magic! Ever wonder how AI gets so smart? It’s all about fine-tuning! Imagine a pre-trained model as a genius with general knowledge. 🧠✨ Fine-tuning takes that genius and hones its skills for a specific task, li…

1 reply · 4 likes
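The "genius with general knowledge" framing can be made concrete: the simplest form of fine-tuning freezes the pre-trained layers and trains only a small task head. A minimal numpy sketch, with a random matrix standing in for a real pre-trained backbone and made-up task data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this matrix is a frozen pre-trained backbone mapping inputs to features.
W_backbone = rng.standard_normal((8, 4))   # frozen: never updated below

# Tiny labelled task data (hypothetical): label is the sign of the first input.
X = rng.standard_normal((32, 4))
y = (X[:, 0] > 0).astype(float)

w_head = np.zeros(8)                       # the only trainable parameters
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

feats = X @ W_backbone.T                   # backbone runs once; gradients never touch it
for _ in range(200):                       # plain logistic-regression updates on the head
    p = sigmoid(feats @ w_head)
    grad = feats.T @ (p - y) / len(y)
    w_head -= lr * grad

acc = ((sigmoid(feats @ w_head) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Real fine-tuning often also updates some backbone layers at a small learning rate, but the freeze-and-specialise idea is the same.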

Bhoop singh Gurjar

AI Deep Explorer | f... • 1m

"A Survey on Post-Training of Large Language Models"

This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation

1️⃣ Fin…

0 replies · 8 likes

Swami Gadila

Founder of FRIDAY • 35m

Introducing Friday — An Emotionally Intelligent AI Fine-Tuned to Feel. We didn’t build an LLM from scratch. We did something smarter. We took a state-of-the-art pre-trained model and fine-tuned it with 1 million emotionally rich, real-world tokens…

0 replies · 2 likes

Vishu Bheda


Medial • 5m

Jensen Huang, the CEO of NVIDIA, describes how AI is advancing in three key dimensions:
1. Pre-training: This is like getting a college degree. AI models are trained on massive datasets to develop broad, general knowledge about the world.
2. Post-…

5 replies · 13 likes

Chirotpal Das

Building an AI eco-s... • 4m

🤔 OpenAI o1 - is it bigger or just more fine-tuned? We're all excited about OpenAI's o1 model and many other such bigger models, but here's what keeps me up at night: are we witnessing a genuinely larger, more a…

4 replies · 13 likes

Ayush gandewar

Just go with the flo... • 1m

The secret to creating an error-free AI: the main science behind AI is that we produce an output and have an expected output, and in between sits a layer containing a neural network. We need to grow the neural network within the layers instead of add…

0 replies · 2 likes

Austin Johnson

Helping Artists Find... • 3m

Exposed: The Truth Behind Volkai – Is It Truly India-Made AI? So, I recently dug into the buzz around Volkai, the AI company gaining fame as India’s very own innovation in the AI space. Everyone's been hyping it up for being completely built from…

15 replies · 8 likes

Glory Jagjivan

In the process of Le... • 5m

An AI outfit builder: the user inputs the clothes they have (their whole wardrobe) along with their style preferences, body structure, weight, height, accessories, body shape, and a description of the events they are dressing for, and the application gives the us…

1 reply · 3 likes
