
Pay ₹1,950 a month for a search engine that gets its data from Google. 😑

Anonymous

Anonymous 1

Hey I am on Medial • 6m

All the data out there is from Google anyway, and with their fine-tuning for "not being rude" I think these LLMs are too liberal.

Replies (1)

More like this

Recommendations from Medial

Dhruv Pithadia

A.I. Enthusiast • 2m

Working on a cool AI project that involves a vector DB and LLM fine-tuning.

0 replies · 2 likes

Yogesh Jamdade

..... • 11m

Hey everyone, I'm an engineering student geeking out over Generative AI. Loving LangChain, Hugging Face models, Crew.ai's chatbots, fine-tuning, and RAG. Plus, machine learning and data science are pretty cool too! Anyone else into this stuff? Looki…

0 replies · 3 likes

Ishan Mishra

Hey I am on Medial • 3m

I really want to work on AI projects, but I'm inexperienced with company work; I used to work as a research intern at a lab and as a data science intern at another place. I really want to get into working on HF models, LangChain, LangGraph, fine-tuni…

1 reply · 10 likes

Jainil Prajapati

Turning dreams into ... • 3m

India should focus on fine-tuning existing AI models and building applications rather than investing heavily in foundational models or AI chips, says Groq CEO Jonathan Ross. Is this the right strategy for India to lead in AI innovation? Thoughts?

2 replies · 3 likes

Bhoop singh Gurjar

AI Deep Explorer | f... • 2m

"A Survey on Post-Training of Large Language Models" This paper systematically categorizes post-training into five major paradigms: 1. Fine-Tuning 2. Alignment 3. Reasoning Enhancement 4. Efficiency Optimization 5. Integration & Adaptation 1️⃣ Fin

0 replies · 8 likes
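For readers who want a concrete picture of paradigm 1 from the list above, here is a minimal supervised fine-tuning sketch using the Hugging Face transformers/datasets stack. The model (gpt2) and dataset (wikitext) are placeholder assumptions chosen only to keep the example small and runnable; they are not taken from the survey or the post.

```python
# Minimal sketch of paradigm 1 (fine-tuning), assuming the Hugging Face stack.
# Model and dataset below are illustrative placeholders, not from the survey.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small base model so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny slice of wikitext stands in for a real instruction/domain dataset.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=raw.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-out",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    # mlm=False -> plain causal-LM (next-token prediction) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The other four paradigms (alignment, reasoning enhancement, efficiency optimization, integration and adaptation) build on top of a base or fine-tuned model like this one.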

Mukesh Jha

Mapping AI to Use-ca... • 1y

Are there any data scientists here? What new ways are you using large language models (LLMs) in your everyday tasks? Do you think we should include LLM topics in data science courses? If so, what should we focus on teaching? For example:
- The bas…

0 replies · 3 likes

Comet

#freelancer • 5m

10 Best LLM Tools to Simplify Your Workflow
1️⃣ LangChain
LangChain's flexibility suits complex AI and multi-stage processing.
2️⃣ Cohere
Cohere offers integration, scalability, and customization.
3️⃣ Falcon
Falcon offers cost-effective, high-per…

0 replies · 2 likes
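Since the post singles out LangChain for multi-stage processing, here is a minimal sketch of an LCEL-style pipeline (prompt, then chat model, then output parser). The prompt text and the gpt-4o-mini model choice are assumptions for illustration, and an OPENAI_API_KEY is assumed to be available.

```python
# Minimal LangChain (LCEL) pipeline sketch: prompt -> chat model -> string parser.
# The prompt and model name are illustrative assumptions, not from the post.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes OPENAI_API_KEY is set

prompt = ChatPromptTemplate.from_template(
    "Summarize the following support ticket in one sentence:\n\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The | operator composes the stages into a single runnable chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"ticket": "App crashes when uploading files larger than 10 MB."}))
```

The "multi-stage" flexibility the post refers to is that each stage is swappable: a retriever can be placed in front of the prompt, or a different model or parser substituted, without changing the rest of the chain.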
Image Description

Soumya

Developer • 6m

💡 An Idea to Change the Game for AI Startups: Making AI Processing Faster, Cheaper, and Effortless
Running AI models like ChatGPT, DALL·E, or AlphaCode is a computing monster: they need massive power to function, which makes them expensive to operate…

2 replies · 4 likes

Aditya Karnam

Hey I am on Medial • 2m

"Just fine-tuned LLaMA 3.2 using Apple's MLX framework and it was a breeze! The speed and simplicity were unmatched. Here's the LoRA command I used to kick off training: ``` python lora.py \ --train \ --model 'mistralai/Mistral-7B-Instruct-v0.2' \ -

0 replies
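The command in the post is cut off by the feed. For context, a fuller invocation of lora.py from Apple's mlx-examples repository looks roughly like the sketch below; the data path, batch size, LoRA layer count, iteration count, and adapter file name are illustrative assumptions, not the poster's actual values, and the exact flags can differ between mlx-examples versions.

```
# Hedged sketch of a fuller lora.py invocation (Apple mlx-examples).
# --data points at a folder containing train.jsonl / valid.jsonl; all values
# below are illustrative assumptions, not the poster's settings.
python lora.py \
  --train \
  --model 'mistralai/Mistral-7B-Instruct-v0.2' \
  --data ./data \
  --batch-size 4 \
  --lora-layers 16 \
  --iters 1000 \
  --adapter-file adapters.npz
```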
