🚀 Associate Innovat... • 1d
A few weeks ago, I was running yet another AI task… and I froze. Which LLM should I use this time? GPT-4 was powerful but expensive. Claude was cheaper but hit-or-miss. Gemini was fast but sometimes unpredictable. I'd waste time testing each one: copy-pasting prompts, comparing results, watching the cost counter tick up. And honestly? It was starting to slow me down, big time.

So instead of repeating that loop, I built something for myself. It's called AutoLLM. It's a tool that takes any input (prompt, task, whatever) and automatically chooses the best LLM to handle it, based on intent, quality, and cost. One call in → smart routing → the best model out. No switching tabs. No manual testing. No API juggling.

Since I started using AutoLLM, my dev workflow has become faster, cheaper, and… kinda magical. I don't think about "which model to use" anymore; the system handles it for me.

Right now, it's just a working MVP I built for my own use, but I'm considering opening it up to others. Would this be helpful for your projects or team? Curious to hear your thoughts. Happy to share early access if anyone's interested. Just DM me.
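(For anyone curious what "intent, quality, and cost" routing could look like in practice, here is a minimal, purely illustrative Python sketch. It is not AutoLLM's actual code; the model names, prices, and the keyword-based classify_intent() heuristic are assumptions for the example.)

```python
# Illustrative sketch of prompt-to-model routing (not AutoLLM's real implementation).
# Model names, costs, and the intent heuristic below are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative numbers only
    strengths: set

MODELS = [
    Model("gpt-4",  0.030, {"reasoning", "code"}),
    Model("claude", 0.008, {"long-context", "writing"}),
    Model("gemini", 0.005, {"speed", "summarization"}),
]

def classify_intent(prompt: str) -> str:
    """Naive keyword-based intent detection; a real router would use a trained classifier."""
    p = prompt.lower()
    if any(k in p for k in ("bug", "function", "code")):
        return "code"
    if any(k in p for k in ("summarize", "tl;dr")):
        return "summarization"
    return "writing"

def route(prompt: str) -> Model:
    """Pick the cheapest model whose strengths cover the detected intent."""
    intent = classify_intent(prompt)
    candidates = [m for m in MODELS if intent in m.strengths] or MODELS
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("Summarize this article for me").name)  # -> gemini (cheapest match here)
```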
Python Developer 💻 ... • 4m
3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯🤯

LLM: Llama 3
Datasets: MATH-500 & AIME-2024

This was done in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 showed that Test-Time Scaling…
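(To make the idea concrete: test-time scaling spends extra inference compute per problem, e.g. sampling many candidate answers and keeping the best one. Below is a minimal best-of-N sketch in Python; the research's compute-optimal TTS is more involved, using a process reward model and adaptive compute budgets, and generate()/score() here are stand-ins, not real model calls.)

```python
# Minimal best-of-N test-time scaling sketch (illustrative only).
# generate() and score() are stand-ins for a small LLM and a reward model.
import random

def generate(prompt: str, seed: int) -> str:
    """Stand-in for sampling one answer from a small (e.g. 3B) model."""
    random.seed(seed)
    return f"candidate answer #{seed} (quality={random.random():.2f})"

def score(prompt: str, answer: str) -> float:
    """Stand-in for a reward model; here it just reads back the fake quality value."""
    return float(answer.split("quality=")[1].rstrip(")"))

def best_of_n(prompt: str, n: int = 16) -> str:
    """Spend more inference compute (n samples) and keep the highest-scoring answer."""
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Solve: 3x + 5 = 20"))
```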
stuck bw building my... • 6d
When was the last time you felt the market was stable for trading? For me it was last September, before the downfall. I hated trading again after that, but the experience till September was unmatchable. This used to be the happiest part of my day, and I hated weekends ba…