
Rahul Agarwal

Founder | Agentic AI... • 1d

Prompt vs Context vs RAG. I've explained it in a simple way below.

1. 𝗣𝗿𝗼𝗺𝗽𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴
Prompt Engineering is about 𝗰𝗹𝗲𝗮𝗿 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀, not magic words.
• 𝗗𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝗳𝗶𝗻𝗮𝗹 𝗴𝗼𝗮𝗹: What exactly do you want the AI to produce?
• 𝗔𝘀𝘀𝗶𝗴𝗻 𝗮 𝗰𝗹𝗲𝗮𝗿 𝗿𝗼𝗹𝗲: Example: “Act as a legal expert”.
• 𝗚𝗶𝘃𝗲 𝗲𝘅𝗮𝗺𝗽𝗹𝗲𝘀: Show the AI what a good answer looks like
• 𝗨𝘀𝗲 𝗳𝗲𝘄-𝘀𝗵𝗼𝘁 𝘀𝗮𝗺𝗽𝗹𝗲𝘀: Provide multiple examples so the pattern is clear
• 𝗦𝗲𝘁 𝗰𝗼𝗻𝘀𝘁𝗿𝗮𝗶𝗻𝘁𝘀: Limit length, tone, or structure
• 𝗛𝗮𝗻𝗱𝗹𝗲 𝗲𝗱𝗴𝗲 𝗰𝗮𝘀𝗲𝘀: Tell the AI what to do if data is missing or unclear
• 𝗗𝗲𝗳𝗶𝗻𝗲 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗳𝗼𝗿𝗺𝗮𝘁: JSON, table, bullets, etc.
• 𝗔𝗽𝗽𝗹𝘆 𝗴𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀: Explicitly state what the AI should NOT do
• 𝗦𝘁𝗮𝘁𝗲 𝗮𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻𝘀: Clarify what the AI can assume
• 𝗧𝗲𝘀𝘁 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗽𝗿𝗼𝗺𝗽𝘁𝘀: Try variations to get the best output

_____________

2. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴
Context Engineering is about 𝗳𝗲𝗲𝗱𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗯𝗮𝗰𝗸𝗴𝗿𝗼𝘂𝗻𝗱 𝗱𝗮𝘁𝗮, not everything.
• 𝗗𝗲𝗳𝗶𝗻𝗲 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗻𝗲𝗲𝗱𝘀: Decide what information is required to answer correctly
• 𝗖𝗼𝗹𝗹𝗲𝗰𝘁 𝗱𝗮𝘁𝗮 𝘀𝗼𝘂𝗿𝗰𝗲𝘀: Docs, notes, chat history, APIs, user data
• 𝗦𝗲𝗹𝗲𝗰𝘁 𝘂𝘀𝗲𝗳𝘂𝗹 𝘀𝗶𝗴𝗻𝗮𝗹𝘀: Keep only what helps the task
• 𝗥𝗲𝗳𝗶𝗻𝗲 𝗿𝗲𝗹𝗲𝘃𝗮𝗻𝗰𝗲: Remove irrelevant information
• 𝗞𝗲𝗲𝗽 𝗸𝗲𝘆 𝗽𝗼𝗶𝗻𝘁𝘀: Focus on essential facts
• 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗲 𝗽𝗮𝘆𝗹𝗼𝗮𝗱: Structure context clearly
• 𝗦𝗵𝗿𝗶𝗻𝗸 𝗰𝗼𝗻𝘁𝗲𝘅𝘁: Compress data to fit model limits
• 𝗦𝘆𝘀𝘁𝗲𝗺-𝗹𝗲𝘃𝗲𝗹 𝗰𝗼𝗻𝘁𝗲𝘅𝘁: Global rules like tone, behavior, style
• 𝗧𝗮𝘀𝗸-𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗰𝗼𝗻𝘁𝗲𝘅𝘁: Information needed only for this task
• 𝗧𝗿𝗮𝗰𝗸 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗼𝘃𝗲𝗿 𝘁𝗶𝗺𝗲: Maintain memory across conversations

_____________

3. 𝗥𝗔𝗚
RAG is used to bring 𝗲𝘅𝘁𝗲𝗿𝗻𝗮𝗹 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 into AI responses.
• 𝗟𝗼𝗮𝗱 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝘀: PDFs, websites, internal docs, databases
• 𝗦𝗽𝗹𝗶𝘁 𝗶𝗻𝘁𝗼 𝗰𝗵𝘂𝗻𝗸𝘀: Break large text into small pieces
• 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗲𝗺𝗯𝗲𝗱𝗱𝗶𝗻𝗴𝘀: Convert chunks into vectors (meaning-based numbers)
• 𝗦𝗮𝘃𝗲 𝘃𝗲𝗰𝘁𝗼𝗿𝘀: Store them in a vector database
• 𝗕𝘂𝗶𝗹𝗱 𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝗻𝗱𝗲𝘅𝗲𝘀: Make data searchable by meaning
• 𝗥𝗲𝗰𝗲𝗶𝘃𝗲 𝘂𝘀𝗲𝗿 𝗾𝘂𝗲𝗿𝘆: User asks a question
• 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗲 𝗿𝗲𝗹𝗲𝘃𝗮𝗻𝘁 𝗽𝗮𝘀𝘀𝗮𝗴𝗲𝘀: Find the closest matching chunks
• 𝗔𝗽𝗽𝗹𝘆 𝗺𝗲𝘁𝗮𝗱𝗮𝘁𝗮 𝗳𝗶𝗹𝘁𝗲𝗿𝘀: Narrow results to relevant data
• 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲 𝗮𝗻𝘀𝘄𝗲𝗿: LLM uses retrieved context to respond
• 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗲 𝗾𝘂𝗮𝗹𝗶𝘁𝘆: Measure relevance and usefulness

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀?
<> 𝗣𝗿𝗼𝗺𝗽𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 controls how you ask
<> 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 controls what the AI knows
<> 𝗥𝗔𝗚 controls where the knowledge comes from

✅ Repost for others so they can understand these key differences.
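
To make the prompt-engineering checklist concrete, here is a minimal Python sketch. It only assembles a structured prompt (role, goal, one few-shot example, constraints, response format, guardrails, edge-case handling) in the common system/user/assistant chat-message convention and prints it; the legal-review task, the field names, and the build_messages helper are illustrative assumptions, not something from the post, and no specific LLM API is assumed.

```python
# Minimal sketch: turning the prompt-engineering checklist into a structured prompt.
# The legal-summary task and JSON field names are illustrative assumptions.

import json

def build_messages(contract_text: str) -> list[dict]:
    system = (
        "Act as a legal expert reviewing commercial contracts.\n"          # clear role
        "Goal: extract the key obligations from the contract.\n"           # final goal
        "Constraints: at most 5 bullet points, neutral tone.\n"            # constraints
        "Format: return JSON with keys 'obligations' and 'risks'.\n"       # response format
        "Guardrails: do NOT give legal advice or invent clauses.\n"        # guardrails
        "If a clause is missing or unclear, set its value to 'unknown'."   # edge cases
    )
    # Few-shot sample: one worked example so the expected pattern is explicit.
    example_in = "Clause 4: Supplier shall deliver goods within 30 days."
    example_out = json.dumps({"obligations": ["Deliver goods within 30 days"], "risks": []})
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": example_in},
        {"role": "assistant", "content": example_out},
        {"role": "user", "content": contract_text},
    ]

if __name__ == "__main__":
    messages = build_messages("Clause 7: Either party may terminate with 60 days notice.")
    print(json.dumps(messages, indent=2))
    # These messages could then be sent to any chat-completion style LLM API.
```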
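
Likewise, a rough sketch of the context-engineering flow: select useful snippets, drop irrelevant ones, shrink to a budget, and organize a payload with system-level plus task-specific context. The keyword-overlap scoring, the character budget, and the build_context helper are deliberately naive stand-ins I've assumed for illustration; real systems would use proper relevance signals and token counting.

```python
# Minimal sketch: context engineering as select -> refine -> shrink -> organize.
# Relevance scoring is a naive keyword overlap and the budget is in characters,
# not tokens; both are illustrative assumptions, not a production recipe.

SYSTEM_CONTEXT = "Tone: concise and professional. Always answer in English."  # global rules

def score(snippet: str, task: str) -> int:
    """Count how many task words appear in the snippet (crude relevance signal)."""
    task_words = set(task.lower().split())
    return sum(1 for w in snippet.lower().split() if w in task_words)

def build_context(task: str, candidates: list[str], budget_chars: int = 400) -> str:
    # Select useful signals and refine relevance: keep snippets that match, best first.
    ranked = sorted((s for s in candidates if score(s, task) > 0),
                    key=lambda s: score(s, task), reverse=True)
    # Shrink context: add snippets only while they fit the budget.
    kept, used = [], 0
    for s in ranked:
        if used + len(s) > budget_chars:
            break
        kept.append(s)
        used += len(s)
    # Organize payload: system-level rules, then task-specific facts, then the task itself.
    return "\n".join([
        f"[system] {SYSTEM_CONTEXT}",
        "[facts]",
        *[f"- {s}" for s in kept],
        f"[task] {task}",
    ])

if __name__ == "__main__":
    candidates = [
        "Refund policy: customers can return items within 30 days.",
        "Office party is scheduled for Friday.",              # no overlap, gets dropped
        "Refunds are processed to the original payment method.",
    ]
    print(build_context("Explain the refund policy to a customer", candidates))
```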
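
And a toy end-to-end RAG sketch covering the listed steps: chunk, embed, store, retrieve by similarity with a metadata filter, then build the augmented prompt. The word-count "embedding", the in-memory list acting as a vector database, and the file names are assumptions standing in for a real embedding model and vector store.

```python
# Toy RAG pipeline: chunk -> embed -> store -> retrieve -> augment prompt.
# The bag-of-words "embedding" and in-memory "vector store" are stand-ins for a
# real embedding model and vector database; document names are made up.

import math
from collections import Counter
from typing import Optional

def chunk(text: str, size: int = 80) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": each entry stores the chunk text, its vector, and metadata.
store = []
docs = [
    ("policy.txt", "Employees get 24 days of paid leave per year. Unused leave expires in March."),
    ("menu.txt", "The cafeteria serves lunch from 12 to 2 pm."),
]
for source, text in docs:
    for c in chunk(text):
        store.append({"text": c, "vector": embed(c), "meta": {"source": source}})

def retrieve(query: str, k: int = 2, source: Optional[str] = None) -> list[str]:
    # Metadata filter narrows the candidates before similarity ranking.
    candidates = [e for e in store if source is None or e["meta"]["source"] == source]
    ranked = sorted(candidates, key=lambda e: cosine(embed(query), e["vector"]), reverse=True)
    return [e["text"] for e in ranked[:k]]

if __name__ == "__main__":
    query = "How many days of paid leave do employees get?"
    context = retrieve(query, source="policy.txt")
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    print(prompt)  # this augmented prompt would be sent to the LLM to generate the answer
```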


Recommendations from Medial

Rahul Agarwal

Founder | Agentic AI... • 2m

2 ways AI systems today generate smarter answers. I’ve explained both in simple steps below. 𝗥𝗔𝗚 (𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹-𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗲𝗱 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻) (𝘴𝘵𝘦𝘱-𝘣𝘺-𝘴𝘵𝘦𝘱) RAG lets AI fetch and use real-time external information to ge…


Rahul Agarwal

Founder | Agentic AI... • 4m

Fine-tune vs Prompt vs Context Engineering. Simple step-by-step breakdown for each approach. 𝗙𝗶𝗻𝗲-𝗧𝘂𝗻𝗶𝗻𝗴 (𝗠𝗼𝗱𝗲𝗹-𝗟𝗲𝘃𝗲𝗹 𝗖𝘂𝘀𝘁𝗼𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻) 𝗙𝗹𝗼𝘄: 1. Collect Data → Gather domain-specific info (e.g., legal docs). 2. Sta…


Pranav padmanabhan

AI Data Scientist • 2m

AI browsers are evolving fast: local agents, workflow automation, context-aware search. But as they gain deeper access (tabs, files, cookies, workspace), the attack surface expands. Privacy isn’t optional anymore. We need stronger guardrails: • Local…


Varun Bhambhani

Medial • 16d

Google's Gemini app now includes Personal Intelligence, a feature that makes the assistant more proactive and personalized. With the user's permission, it securely accesses data from apps such as Gmail, Google Drive, Maps, and Photos, so Gemini can offer rel…


Ambar Bhusari

UX Designer for User... • 1y

Medial Idea: What if one of the premium features of Medial is an AI Idea Discussion Buddy, fine-tuned on open startup education data and regional context? Crude thoughts: there could be another one for legal policies, like a policy-buddy, and for marketing l…


Rahul Agarwal

Founder | Agentic AI... • 13d

Most people building modern AI systems miss these steps. I've explained each step in a simple way below. 1. 𝗠𝘂𝗹𝘁𝗶-𝗔𝗴𝗲𝗻𝘁 𝗜𝗻𝘁𝗲𝗿𝗼𝗽𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 How multiple AI agents work together as a system. Step-by-step: • 𝗨𝘀𝗲𝗿 𝗥𝗲𝗾𝘂𝗲𝘀𝘁…


Yogesh Jamdade

..... • 10m

How do you connect AI models with your own software or tools? The answer is MCP servers. The Model Context Protocol (MCP) is an open standard that enables AI models to securely access data and tools through standardized servers. MCP servers facilitate…


Sumit Singh

“Not here to be like... • 3m

💥 The Future of SEO Isn’t Just Search — It’s Smart Search. SEO is no longer just about keywords; it's about understanding machines, user behavior, and how AI connects both. We're in an era where AI defines visibility. At Geoseolab, we merge Artifi…

