
Chirotpal Das

Building an AI eco-s... ‱ 3m

đŸ€” đŽđ©đžđ§đ€đˆ 𝐹𝟏 - 𝐱𝐬 𝐱𝐭 đŠđšđ«đž đ›đąđ đ đžđ« đšđ« đŠđšđ«đž 𝐟𝐱𝐧𝐞-𝐭𝐼𝐧𝐞𝐝? We're all excited about OpenAI's o1 model and many other such bigger models, but here's what keeps me up at night: Are we witnessing a genuinely larger, more advanced LLM, or is this the result of brilliant engineering and fine-tuning of existing architectures? 𝐓𝐡𝐞 đ«đžđšđ„ đȘ𝐼𝐞𝐬𝐭𝐱𝐹𝐧 𝐱𝐬: Can we, as users and developers, ever truly distinguish between a massive pre-trained model and an expertly fine-tuned one by ourselves? It's like trying to tell if a master chef created a new recipe or perfectly refined an existing one. The taste might be extraordinary either way. What do you think? 🧠

