Founder of Styl... • 7m
For context, I am launching an AI SaaS that aims to lower LLM subscription fees by giving users access to 15+ LLM models with performance comparable to ChatGPT models. I am working on optimizing pricing. Which option is better? Please comment with your reasoning if possible, and follow me for more.
building hatchup.ai • 2m
DeepSeek has quietly dropped V3.1, a 685B-parameter open-source LLM, on Hugging Face: 128K token context, hybrid reasoning/chat/coding, multi-precision support. Early benchmarks: 71.6% on Aider coding, on par with proprietary models and 68× cheaper.

Founder of Friday AI • 21d
Big News: Friday AI's Adaptive API is Coming! We're launching Adaptive API, the world's first real-time context scaling framework for LLMs. Today, AI wastes massive tokens on static context: chat, code, or docs all use the same window. The result?
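The post is cut off before any implementation detail, so the following is only a rough sketch of what per-content-type context scaling could look like. The budgets, the token estimate, and every function name here are assumptions made for illustration, not Friday AI's actual API.

from collections import defaultdict
from dataclasses import dataclass

# Illustrative per-type budgets in tokens: chat history is trimmed harder than code or docs.
CONTEXT_BUDGETS = {"chat": 4_000, "code": 16_000, "docs": 32_000}

@dataclass
class ContextItem:
    kind: str   # "chat", "code", or "docs"
    text: str

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 1.3 tokens per whitespace-separated word.
    return int(len(text.split()) * 1.3)

def scale_context(items: list[ContextItem]) -> list[ContextItem]:
    # Walk newest-first and keep items of each kind until that kind's budget is spent,
    # so each content type gets its own window instead of sharing one static one.
    kept: list[ContextItem] = []
    spent: dict[str, int] = defaultdict(int)
    for item in reversed(items):
        cost = rough_token_count(item.text)
        if spent[item.kind] + cost <= CONTEXT_BUDGETS.get(item.kind, 4_000):
            spent[item.kind] += cost
            kept.append(item)
    return list(reversed(kept))  # restore chronological order for the prompt

Run on a mixed chat-plus-code transcript, this keeps roughly the last 4K tokens of chat but up to 16K tokens of code, which is the kind of behavior the announcement appears to be describing.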
Python Developer | E... • 9m
India's large language model (LLM) is expected to be ready within the next 10 months, said the Minister of Electronics and IT, Ashwini Vaishnaw, on Thursday. "We have created the framework, and it is being launched today. Our focus is on building AI m
Business Coach • 2m
🔥 Sarvam AI to Launch India's First Homegrown LLM 🔥 Sarvam AI is building India's first domestic LLM with 4,096 H100 GPUs. The Bengaluru startup secured ₹99 crore in GPU subsidies under the IndiaAI Mission, launching by early 2026. Why It Matters: Re
Founder of Friday AI • 3m
Introducing the World's First Adaptive Context Window for Emotional AI, built by the Friday AI Core Technologies Research Team. Most LLMs today work with static memory: 8K, 32K, maybe 128K tokens. But human conversations aren't static. Emotions evo
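The announcement is truncated before it explains how the window adapts, so the sketch below only illustrates the general idea it states: grow or shrink what is retained per turn instead of fixing the window at 8K, 32K, or 128K tokens. The salience score, the budget multiplier, and the helper names are invented for illustration and are not Friday AI's method.

from collections import namedtuple

# salience in [0, 1], e.g. from a separate emotion classifier (assumed, not specified in the post)
Turn = namedtuple("Turn", ["text", "salience"])

def rough_tokens(text: str) -> int:
    return int(len(text.split()) * 1.3)  # crude proxy for a real tokenizer

def adaptive_window(history: list[Turn], base_budget: int = 2_000) -> list[Turn]:
    # Scale the token budget with how emotionally charged the recent turns are,
    # then fill it newest-first; strongly salient older turns are kept past the cap.
    recent = history[-5:]
    avg_salience = sum(t.salience for t in recent) / max(len(recent), 1)
    budget = int(base_budget * (1 + 2 * avg_salience))  # up to 3x budget for charged conversations

    kept, used = [], 0
    for turn in reversed(history):
        cost = rough_tokens(turn.text)
        if used + cost <= budget or turn.salience > 0.8:  # soft cap: highly salient turns stay
            kept.append(turn)
            used += cost
    return list(reversed(kept))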
Hey I am on Medial • 1y
Huge announcement from Meta. Welcome Llama 3.1 🔥 This is all you need to know about it: The new models: - The Meta Llama 3.1 family of multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models
akirolabs • 1y
I am looking for a (Senior) Data Scientist / LLM Engineer / ML Engineer to join akirolabs in Berlin and help us build the leading LLM for procurement. Rest assured, we are fully prepared to provide visa sponsorship for relocation to Germany i
Dev | tester | automat... • 2m
Sharing my portfolio update! Past Goal: Built scalable applications that could handle growth efficiently. Current Goal: Diving deep into Large Language Models (LLMs), exploring how they can shape intelligent solutions. Future Goal: Contribu