Agreed. You could spend millions, even billions, and still not match the performance of existing models. A better approach is to fine-tune existing models and spend on marketing.
0 replies
Ayush Maurya
AI Pioneer • 5m
BREAKTHROUGH INSIGHT:
Most people train AI models.
Smart people fine-tune AI models.
But the real secret?
Learning to dance with AI's existing knowledge.
Stop forcing. Start flowing.
0 replies · 3 likes
Jainil Prajapati
Turning dreams into ... • 3m
India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross.
Is this the right strategy for India to lead in AI innovation? Thoughts?
💡 5 Things You Need to Master for Integrating AI into Your Project
1️⃣ Retrieval-Augmented Generation (RAG): Combine search with AI for precise and context-aware outputs.
2️⃣ Vector Databases: Learn how to store and query embeddings for e
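The two items above can be sketched in a few lines. This is a toy illustration of the RAG pattern, not a real system: the "embeddings" are plain bag-of-words vectors, the in-memory `VectorStore` stands in for a real vector database, and the final "answer" just returns the retrieved context instead of calling an LLM. All names here are illustrative.

```python
# Toy RAG sketch: embed documents, retrieve the closest one for a
# query, then use it as grounding context for generation.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory vector database: store and query embeddings."""
    def __init__(self):
        self.items = []  # list of (embedding, original text)

    def add(self, text: str):
        self.items.append((embed(text), text))

    def query(self, question: str, k: int = 1):
        q = embed(question)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_answer(store: VectorStore, question: str) -> str:
    context = store.query(question, k=1)[0]  # retrieval step
    # A real system would prompt an LLM with this context; here we
    # just surface it so the retrieve-then-generate flow is visible.
    return f"Based on: {context}"

store = VectorStore()
store.add("Mixtral is an open-source mixture-of-experts model by Mistral.")
store.add("RAG combines retrieval with generation for grounded answers.")
print(rag_answer(store, "What is Mixtral?"))
```

Swapping in a real embedding model and vector database changes only `embed` and `VectorStore`; the retrieve-then-generate flow stays the same.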
Will AI software engineer Devin take our jobs?
It's a concern on everyone's mind.
But the real question is:
Will Devin enhance our jobs?
The answer is a resounding YES.
Devin autonomously builds complete apps with just a prompt.
Devin can learn
1 reply · 4 likes
Chirotpal Das
Building an AI eco-s... • 5m
🤔 OpenAI o1: is it bigger, or more fine-tuned?
We're all excited about OpenAI's o1 model and many other such bigger models, but here's what keeps me up at night: Are we witnessing a genuinely larger, more a
4 replies · 13 likes
Tirush V
Infrastructure engin... • 6m
So imagine you need a product where you upload any type of document and chat with it. People like students, teachers, lawyers, and so on would like and need these tools. What if there were a tool that performs much better, with accuracy?
Imagine existing
0 replies · 6 likes
Bhoop singh Gurjar
AI Deep Explorer | f... • 1m
Top 10 AI Research Papers Since 2015 🧠
1. Attention Is All You Need (Vaswani et al., 2017)
Impact: Introduced the Transformer architecture, revolutionizing natural language processing (NLP).
Key contribution: Attention mechanism, enabling models
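That attention mechanism boils down to one equation: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch, with illustrative shapes only (a real Transformer adds multiple heads, masking, and learned projections):

```python
# Scaled dot-product attention, the core operation of the Transformer.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights          # outputs are weighted mixes of values

# 3 tokens, d_k = 4: each output row blends the value rows.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out, w = attention(Q, K, V)
print(out.shape)  # each attention-weight row sums to 1
```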
"A Survey on Post-Training of Large Language Models"
This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fin
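The first paradigm, fine-tuning, can be shown at toy scale: keep a "pretrained" backbone frozen and train only a small task head. This sketch is my own illustration, not from the survey; the backbone here is just a fixed random projection and the task is synthetic.

```python
# Fine-tuning sketch: frozen backbone, trainable head.
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pretrained" backbone: a fixed projection standing in for
# learned representations. Its weights are never updated below.
W_backbone = rng.normal(size=(8, 16))
def backbone(x):
    return np.tanh(x @ W_backbone)

# Trainable task head: logistic regression on top of the features.
w_head = np.zeros(16)
b_head = 0.0

# Toy binary task: the label depends on the sign of input feature 0.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)

lr = 0.5
for _ in range(300):
    feats = backbone(X)                   # backbone stays fixed
    logits = feats @ w_head + b_head
    p = 1 / (1 + np.exp(-logits))         # sigmoid
    grad = (p - y) / len(y)               # gradient of log-loss w.r.t. logits
    w_head -= lr * feats.T @ grad         # only the head's parameters move
    b_head -= lr * grad.sum()

preds = (1 / (1 + np.exp(-(backbone(X) @ w_head + b_head)))) > 0.5
acc = (preds == y).mean()
print(f"train accuracy: {acc:.2f}")
```

Real LLM fine-tuning differs in scale and in tricks like LoRA, but the principle is the same: reuse the pretrained representation, update a small set of parameters.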
The Next AI Battleground? Open-Source LLMs Are Gaining Fast
GPT-4 may still lead the pack, but the real action is now in open-source LLMs, and the gap is closing faster than anyone expected.
In just 3 months:
- Mistral’s Mixtral matched GPT-3.5 on