🤔 𝐎𝐩𝐞𝐧𝐀𝐈 𝐨𝟏 - 𝐢𝐬 𝐢𝐭 𝐛𝐢𝐠𝐠𝐞𝐫 𝐨𝐫 𝐣𝐮𝐬𝐭 𝐛𝐞𝐭𝐭𝐞𝐫 𝐟𝐢𝐧𝐞-𝐭𝐮𝐧𝐞𝐝?

We're all excited about OpenAI's o1 and the wave of ever-larger models around it, but here's what keeps me up at night: are we seeing a genuinely larger, more advanced LLM, or the result of brilliant engineering and fine-tuning on top of existing architectures?

𝐓𝐡𝐞 𝐫𝐞𝐚𝐥 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧 𝐢𝐬: can we, as users and developers, ever truly tell a massive pre-trained model apart from an expertly fine-tuned one on our own? It's like trying to tell whether a master chef invented a new recipe or perfectly refined an existing one. The dish might taste extraordinary either way.

What do you think? 🧠