So many models are dropping every day, but Claude Sonnet 3.5 still remains devs' first choice.
Anthropic really did something right with Sonnet 3.5.
PRATHAM
Experimenting On lea... • 2m
o3, though, caters to a huge audience rather than a niche audience like devs, but the fine-tuned models may turn out well.
0 replies
Kundan Karmakar
A Billionair at 2044 • 25d
India is a huge money-printing machine in the mass market rather than the niche market.
2 replies · 17 likes
Shoeb Sheikh
Build the future of ... • 6m
Here is something new in AI technology that helps others make progress and upgrade their productivity. Inviting collaboration, using base technologies: Python, pre-fine-tuned models, etc.
0 replies · 2 likes
Jainil Prajapati
Turning dreams into ... • 1m
India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross.
Is this the right strategy for India to lead in AI innovation? Thoughts?
2 replies · 3 likes
Ayush
•
Medial • 2m
One of the best articles I've read on DeepSeek and its effect on the Nvidia stock; it explains in detail how the model is trained and fine-tuned to have such strong logical thinking.
Huge announcement from Meta. Welcome Llama 3.1🔥
This is all you need to know about it:
The new models:
- The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models
When was the last time you read a random article or blog on your first attempt? As we can see, there is a huge lack of attention span in people; that is why we have a huge audience on Instagram Reels and YouTube Shorts rather than long-form content
13 replies · 15 likes
Harsh Gupta
Digital Marketer (Me... • 11m
Hi Medial family 👋
Here are some key lessons for D2C brands!
At least look at this one; you couldn't even manage to look at that one 👀🤧
1) FIND YOUR NICHE: Don't be afraid to carve out a specific niche within your market. A well-defined target audience all
🤔 OpenAI o1: is it bigger or more fine-tuned?
We're all excited about OpenAI's o1 model and many other such bigger models, but here's what keeps me up at night: Are we witnessing a genuinely larger, more a
4 replies · 13 likes
Soumya
Developer • 4m
💡An Idea to Change the Game for AI Startups: Making AI Processing Faster, Cheaper, and Effortless
Running AI models like ChatGPT, DALL·E, or AlphaCode takes monstrous compute: they need massive power to function, which makes them expensive to operate
2 replies · 4 likes
Bhoop singh Gurjar
AI Deep Explorer | f... • 10d
"A Survey on Post-Training of Large Language Models"
This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fin
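The first paradigm the survey names, fine-tuning, can be sketched with a toy example: start from "pre-trained" weights and adapt them to new data with a few gradient steps. This is a minimal pure-Python illustration; the one-parameter linear model, the "pre-trained" starting weights, the task data, and the learning rate are all invented for the sketch, and real fine-tuning updates millions of parameters with a framework like PyTorch.

```python
def predict(w, b, x):
    """A one-parameter-per-weight 'model': y = w*x + b."""
    return w * x + b

def fine_tune(w, b, data, lr=0.01, steps=200):
    """Adapt (w, b) to `data` by gradient descent on mean squared error."""
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = predict(w, b, x) - y
            grad_w += 2 * err * x / len(data)
            grad_b += 2 * err / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pre-trained" weights (pretend these came from a base model).
w0, b0 = 1.0, 0.0

# New downstream task: points sampled from y = 3x + 1.
task = [(x, 3 * x + 1) for x in (-2, -1, 0, 1, 2)]

# Fine-tuning moves the weights toward the new task's optimum.
w, b = fine_tune(w0, b0, task)
print(round(w, 2), round(b, 2))  # approaches 3.0 and 1.0
```

The same shape (frozen starting point, small corrective updates on task data) is what the heavier paradigms in the list refine: alignment changes the loss, and efficiency optimization changes which parameters get updated.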