đ€ đđ©đđ§đđ đšđ - đąđŹ đąđ đŠđšđ«đ đđąđ đ đđ« đšđ« đŠđšđ«đ đđąđ§đ-đđźđ§đđ?

We're all excited about OpenAI's o1 and other recent large models, but here's what keeps me up at night: are we witnessing a genuinely larger, more advanced LLM, or the result of brilliant engineering and fine-tuning of existing architectures?

đđĄđ đ«đđđ„ đȘđźđđŹđđąđšđ§ đąđŹ: can we, as users and developers, ever truly distinguish a massive pre-trained model from an expertly fine-tuned one on our own? It's like trying to tell whether a master chef invented a new recipe or perfectly refined an existing one. The taste might be extraordinary either way.

What do you think? đ§