
Sandeep Prasad

Business Coach • 2m

🔥 Government set to name ~8 Indian teams for foundational model incentives next week – second-round beneficiaries may include BharatGen; GPU access remains tight as only ~17,374 of planned 34,333 GPUs are installed so far.

🤔 Why It Matters – More subsidised compute means faster India-tuned models, but the GPU crunch could slow training unless procurement accelerates or inference-efficient approaches are prioritised.

🚀 Action/Example – Founders should prepare grant docs and pivot to efficient training/inference (LoRA, distillation, 4-bit quant) to ride the incentive window despite supply constraints.

🎯 Who Benefits – AI researchers, Indic LLM builders, and startups focused on low-cost inference at scale.

Tap ❤️ if you like this post.
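The LoRA route in the Action item can be made concrete with a toy parameter count – a minimal sketch with illustrative dimensions (a 4096-wide projection, rank 8), not figures for any specific Indic model:

```python
# Minimal sketch of why LoRA makes fine-tuning cheap: the big weight matrix
# W (d_in x d_out) is frozen, and only two small low-rank factors are trained.
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """LoRA trains A (d_in x rank) and B (rank x d_out) in place of W."""
    return d_in * rank + rank * d_out

full = 4096 * 4096                                # one frozen projection matrix
lora = lora_trainable_params(4096, 4096, rank=8)  # typical small adapter rank
print(full, lora, full // lora)                   # 16777216 65536 256
```

At rank 8 the adapter trains roughly 256x fewer weights than full fine-tuning of that matrix, which is why it suits a GPU-constrained window.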


More like this – Recommendations from Medial

Aditya Karnam

Hey I am on Medial • 8m

Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training:

```
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  -
```


Inactive

AprameyaAI • 1y

Meta has released Llama 3.1, the first frontier-level open source AI model, with features such as expanded context length to 128K, support for eight languages, and the introduction of Llama 3.1 405B. The model offers flexibility and control, enabli…


Narendra

Willing to contribut... • 1m

I fine-tuned 3 models this week to understand why people fail. Used LLaMA-2-7B, Mistral-7B, and Phi-2. Different datasets. Different methods (full tuning vs LoRA vs QLoRA).

Here's what I learned that nobody talks about:

1. Data quality > Data quan…
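The full-tuning vs LoRA vs QLoRA choice above is largely a GPU-memory story. A back-of-envelope sketch with illustrative constants (fp16 weights, ~12 bytes of Adam state per trainable parameter, activations ignored) rather than measured numbers:

```python
# Rough training-memory estimate for a 7B model under the three methods.
def train_mem_gb(params_b: float = 7.0, weight_bytes: float = 2,
                 trainable_frac: float = 1.0,
                 optim_bytes_per_trainable: float = 12) -> float:
    """Base weights plus gradient/optimizer state for the trainable subset."""
    weights = params_b * weight_bytes
    optim = params_b * trainable_frac * optim_bytes_per_trainable
    return weights + optim

full  = train_mem_gb()                                        # every weight trains
lora  = train_mem_gb(trainable_frac=0.01)                     # ~1% adapter weights
qlora = train_mem_gb(weight_bytes=0.5, trainable_frac=0.01)   # 4-bit frozen base
print(f"full={full:.0f}GB lora={lora:.1f}GB qlora={qlora:.1f}GB")
```

The ordering (roughly 98 vs 15 vs 4 GB here) is why QLoRA fits on a single consumer GPU while full tuning needs multi-GPU rigs; the exact numbers depend on sequence length and optimizer, which this sketch ignores.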


PraYash

Technology, Business... • 1m

When AI changed the rules, cloud computing had to change too. And that's exactly where Oracle took the lead. Most cloud giants like AWS, Azure, and GCP still rely on virtualization – where resources like CPU, GPU, and memory are shared across users.


Rahul Agarwal

Founder | Agentic AI... • 8d

4 different ways of training LLMs. I've given a simple detailed explanation below.

1.) Accurate Data Curation (step-by-step)
Prepares clean, consistent, and useful data so the model learns effectively.

1. Collect text…
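The curation step above can be sketched as a tiny cleaning pass – dropping too-short and exact-duplicate examples before training (the thresholds and normalization here are my own illustrative choices, not the author's):

```python
# Toy data-curation pass: whitespace-normalize, drop near-empty examples,
# and drop exact duplicates so the model doesn't over-weight repeated text.
def clean_dataset(examples: list[str], min_words: int = 5) -> list[str]:
    seen, kept = set(), []
    for text in examples:
        norm = " ".join(text.split()).lower()   # normalize for dedup only
        if len(norm.split()) < min_words or norm in seen:
            continue
        seen.add(norm)
        kept.append(text)                       # keep the original casing
    return kept

raw = ["Explain LoRA in simple terms for a beginner.",
       "explain  lora in simple terms for a beginner.",   # duplicate
       "ok thanks",                                        # too short
       "Compare full fine-tuning with QLoRA on a 7B model."]
print(clean_dataset(raw))  # two examples survive the filter
```

Real pipelines add near-duplicate detection (e.g. MinHash) and quality scoring on top, but the shape is the same: filter first, train second.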


AI Engineer

AI Deep Explorer | f... • 7m

LLM Post-Training: A Deep Dive into Reasoning LLMs

This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance…


Mada Dhivakar

Let's connect and bu... • 6m

Why Grok AI Outperformed ChatGPT & Gemini – Without Spending Billions

In 2025, leading AI companies invested heavily in R&D:
ChatGPT: $75B
Gemini: $80B
Meta: $65B

Grok AI, developed by Elon Musk's xAI, raised just $10B yet topped global benchmar…


Parampreet Singh

Python Developer 💻 ... • 9m

3B LLM outperforms 405B LLM 🤯 Similarly, a 7B LLM outperforms OpenAI o1 & DeepSeek-R1 🤯🤯

LLM: Llama 3
Datasets: MATH-500 & AIME-2024

This was done in research on compute-optimal Test-Time Scaling (TTS). Recently, OpenAI o1 shows that Test-…
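Test-time scaling boils down to spending extra inference compute per question. A toy best-of-N sketch of the idea – the generator and scorer below are made-up stand-ins for an LLM sampler and a reward model, not the paper's actual method:

```python
# Best-of-N test-time scaling: sample several candidate answers and keep
# the one a scorer prefers. More N = more inference compute per question.
import random

def generate(prompt: str, seed: int) -> str:
    random.seed(seed)                        # stand-in for LLM sampling
    return f"{prompt} -> answer {random.randint(0, 9)}"

def score(answer: str) -> float:
    return float(answer.split()[-1])         # stand-in for a reward model

def best_of_n(prompt: str, n: int) -> str:
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)        # keep the highest-scoring one

print(best_of_n("What is 2+2?", n=8))
```

With a real verifier or process reward model in place of `score`, small models can trade N samples for accuracy, which is the mechanism behind the 3B-beats-405B claim.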


mg

mysterious guy • 6m

💀 99% of AI Startups Will Die by 2026 – Here's Why

1. LLM Wrappers ≠ Real Products
Most AI startups are just pretty UIs over OpenAI's API. No real backend. No IP. No moat. They're charging ₹4,000/month for workflows that cost ₹300 if done directly…
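The ₹4,000-vs-₹300 point is simple margin arithmetic. A quick sketch using the post's own numbers (treating ₹300 as the wrapper's per-user upstream API cost, which is the post's framing, not verified data):

```python
# Unit economics of an "LLM wrapper": price charged vs upstream API cost.
def gross_margin(price: float, api_cost: float) -> float:
    """Fraction of revenue kept after paying the upstream API bill."""
    return (price - api_cost) / price

m = gross_margin(4000, 300)   # post's numbers: Rs 4,000/mo price, Rs 300 cost
print(f"{m:.1%} gross margin")
```

A margin that wide is exactly what evaporates the moment users (or the API provider itself) ship the same workflow directly, which is the post's moat argument.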


Swamy Gadila

Founder of Friday AI • 5m

🚨 OpenAI is a Wrapper 👀🤯

Hot take, but let's break it down logically: OpenAI is not a full-stack AI company – it's a high-level wrapper over Azure and NVIDIA. Here's why that matters 👇

🔹 1. Infra Backbone = Microsoft Azure
Almost 90%+ of Op…

See More
Reply
2
4
