Hey I am on Medial • 1d
I completely understand where you're coming from. However, this isn’t about quick money or repackaging open-source models. Our solution involves custom-trained models, domain-specific tuning, and has been tested over multiple days in real scenarios by another vendor to ensure reliability and performance. We’re transparent about the architecture and are open to demonstrating the code, running it on custom data, and discussing how it can be adapted to specific use cases. Happy to clarify further if you’re genuinely interested.
Software Engineer | ... • 7m
💡 5 Things You Need to Master for Integrating AI into Your Project
1️⃣ Retrieval-Augmented Generation (RAG): Combine search with AI for precise, context-aware outputs.
2️⃣ Vector Databases: Learn how to store and query embeddings for e
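The two points above (RAG plus a vector database) can be sketched as a toy in-memory store. This is a minimal illustration, not a real system: the `VectorStore` class and the hand-written 3-dimensional embeddings are hypothetical, and a production setup would use a real embedding model and a dedicated vector database.

```python
import math

class VectorStore:
    """Toy in-memory vector store ranking texts by cosine similarity."""

    def __init__(self):
        self.items = []  # list of (text, embedding) pairs

    def add(self, text, embedding):
        self.items.append((text, embedding))

    def query(self, embedding, top_k=1):
        # Rank every stored text by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        ranked = sorted(self.items, key=lambda it: cosine(embedding, it[1]), reverse=True)
        return [text for text, _ in ranked[:top_k]]


store = VectorStore()
store.add("Paris is the capital of France.", [0.9, 0.1, 0.0])
store.add("Python is a programming language.", [0.0, 0.2, 0.9])

# In a RAG pipeline you would embed the user's question, retrieve the
# best-matching text, and prepend it to the LLM prompt as context.
context = store.query([0.8, 0.2, 0.1], top_k=1)
```

The retrieved `context` is what gets injected into the prompt; that retrieval step is what makes the generation "context-aware."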
Turning dreams into ... • 4m
India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross. Is this the right strategy for India to lead in AI innovation? Thoughts?
AI Deep Explorer | f... • 3m
"A Survey on Post-Training of Large Language Models" — this paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fin
AI agent developer |... • 12d
OpenAI has released the o3-pro model, which is good enough to replace a senior software developer. To make things worse, it could be a foundational step towards AGI by OpenAI. First, for the newbies: we have two types of models
19yo ✨ #developer le... • 1y
Meta, formerly Facebook, has unveiled two new open-source AI models called Llama 3 8B and Llama 3 70B, with 8 billion and 70 billion parameters respectively. 🚀 These models outperform some rivals and spark debate over open versus closed source AI de