Anonymous • Hey I am on Medial • 1y
VLMs might be more efficient in the long run, but training them requires a lot of resources. The paper mentions various approaches to training, but without adequate computational power, it's challenging for smaller teams to implement these models. Anyone here with experience on this?
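To make the resource gap concrete, here is a rough back-of-envelope memory estimate, comparing full fine-tuning against a parameter-efficient approach like LoRA. The numbers are assumptions for illustration (a 7B-parameter model, Adam optimizer, fp16 weights, ~1% trainable adapter parameters), not figures from the paper:

```python
# Rough GPU memory estimate for fine-tuning a large model.
# Assumptions (hypothetical, for illustration): fp16 weights, Adam optimizer,
# activations excluded. Not from the Meta paper.

def full_finetune_gb(params_billions: float) -> float:
    # fp16 weights (2 bytes/param) + fp32 master copy (4) +
    # Adam first/second moments (8) + fp16 gradients (2) = ~16 bytes/param.
    return params_billions * 16

def lora_finetune_gb(params_billions: float, trainable_frac: float = 0.01) -> float:
    # Frozen fp16 base weights (2 bytes/param), plus full optimizer state
    # only for the small trainable adapter fraction.
    return params_billions * 2 + params_billions * trainable_frac * 16

if __name__ == "__main__":
    print(f"Full fine-tune, 7B model: ~{full_finetune_gb(7):.0f} GB")
    print(f"LoRA fine-tune, 7B model: ~{lora_finetune_gb(7):.0f} GB")
```

Under these assumptions, full fine-tuning of a 7B model needs on the order of 112 GB of GPU memory before activations, while a LoRA-style setup needs roughly 15 GB, which is why smaller teams usually reach for parameter-efficient methods.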


More like this

Recommendations from Medial


Shuvodip Ray • YouTube • 1y

Researchers at Meta recently presented ‘An Introduction to Vision-Language Modeling’ to help people better understand the mechanics behind mapping vision to language. The paper covers how VLMs work, how to train them, and approaches…

Vedant SD • Finance Geek | Conte... • 1y

Day 72: The Art of Scaling: From Startup to Growth Stage

Scaling a startup from a small team to a large organization requires careful planning and execution. Here's how Bengaluru entrepreneurs can navigate this growth phase:

* Hire the Right People: …

Sandeep Prasad • Business Coach • 1m

🔥 Government set to name ~8 Indian teams for foundational model incentives next week – second-round beneficiaries may include BharatGen; GPU access remains tight, as only ~17,374 of the planned 34,333 GPUs are installed so far.

🤔 Why It Matters – More…

AI Engineer • AI Deep Explorer | f... • 6m

LLM Post-Training: A Deep Dive into Reasoning LLMs

This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs), focusing on improving reasoning capabilities. While LLMs achieve strong performance…

Farhan Raza • Founder And CEO Give... • 9m

Q: How are you going to implement it in Jhansi? And what about the plans of other cities?

Ans: We will implement GiveAt in Jhansi by onboarding local restaurants with eco-friendly packaging materials and providing training for seamless adoption. A d…

AI Engineer • AI Deep Explorer | f... • 6m

"A Survey on Post-Training of Large Language Models"

This paper systematically categorizes post-training into five major paradigms:

1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation

1️⃣ Fin…

Radhemohan Pal • Let's connect to wor... • 1y

Part 1

Indian startups often face several common challenges, which can hinder their growth and success:

1. **Funding Issues**: Many startups struggle to secure adequate funding, particularly in their early stages. Investors can be risk-averse, and…

Vikas Acharya • Medial • 1y

Disrupt or Be Disrupted: The Future of Startups

Startups are redefining industries by challenging traditional businesses through innovation, agility, and customer-centric approaches. With tech-driven solutions and cost-efficient operations, they’re…

Satyam Kumar • "Turning visions int... • 6m

Ant Group Uses Domestic Chips to Train AI Models and Cut Costs

Ant Group is relying on Chinese-made semiconductors to train artificial intelligence models, reducing costs and lessening dependence on restricted US technology, according to people familiar…
