
Rahul Agarwal

Founder | Agentic AI... • 25d

What exactly is Context Engineering in AI? A quick 2-minute breakdown for you.

First, how is it different from Prompt Engineering?
• Prompt Engineering = crafting a single clever input to guide the model.
• Context Engineering = designing the entire environment of information (memory, docs, tools, prompts) so the model always has the right context to work with.

These are the pieces:
1. Decide the goal (before anything else). Ask yourself: what do I want the model to produce? (e.g., a short email, a JSON object, a lesson plan).
2. Instructions / System Prompt. The model's rules and persona. Think of it as: "You are X, follow Y style and constraints."
3. User Prompt. The immediate request or question from the user. It tells the model the task.
4. Tools. External helpers the model can call (search, calculator, calendar, API).
5. Structured Output. A requested format for the result so it's machine-friendly (JSON, CSV, bullet list).
6. Long-Term Memory. Stable facts about the user, company, style preferences, or past interactions you want the model to remember over many sessions.
7. Retrieved Context. Documents or snippets pulled in just for this query (knowledge base, product docs, previous chat lines).
8. State (short-term memory). What's happening right now in the conversation: recent messages, variables, or temporary flags.

How these parts work together (flow):
1. The system prompt sets the rules first (how the model should behave).
2. Long-term memory and retrieved context provide background facts the model should use.
3. State supplies the immediate conversation history and live variables.
4. The user prompt gives the specific task.
5. The model can call tools if it needs real-time info or actions.
6. The model returns the answer, ideally in the structured output format you asked for.

✅ Repost for others who want to understand Context Engineering.
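The flow above can be sketched in code. This is a hypothetical, minimal sketch: the names (SYSTEM_RULES, build_messages, the example facts) are illustrative, not a real API, but the assembly order matches the six flow steps.

```python
# Hypothetical sketch: assembling every context source into one model request.
# All values below are made-up examples for illustration.

SYSTEM_RULES = "You are a support assistant. Answer in a formal, concise style."

long_term_memory = ["User's company: Acme Corp", "Preferred tone: concise"]
retrieved_context = ["Refund policy: refunds are allowed within 30 days."]
state = [  # short-term memory: the live conversation so far
    {"role": "user", "content": "Hi, I bought a laptop last week."},
    {"role": "assistant", "content": "Great, how can I help with it?"},
]
user_prompt = "Draft a short refund-request email for me."
output_format = "Return only a JSON object with keys 'subject' and 'body'."

def build_messages():
    """Combine the context parts in the order the flow describes:
    rules first, then background facts, then state, then the task."""
    background = "\n".join(long_term_memory + retrieved_context)
    return (
        [{"role": "system", "content": SYSTEM_RULES + "\n" + background}]
        + state
        + [{"role": "user", "content": user_prompt + "\n" + output_format}]
    )

messages = build_messages()
print(len(messages))  # 4: one system message, two state turns, one user task
```

In a real system this `messages` list would be sent to a chat-style model API; the point here is only the ordering and separation of the context sources.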


Rahul Agarwal

Founder | Agentic AI... • 1m

Fine-tune vs Prompt vs Context Engineering. Simple step-by-step breakdown for each approach. Fine-Tuning (Model-Level Customization). Flow: 1. Collect Data → Gather domain-specific info (e.g., legal docs). 2. Sta


Vishu Bheda


Medial • 3m

The 5 AI models that actually stuck with me right after launch (not "best," just unforgettable): • Claude 3.5 Sonnet – balanced, smooth, felt almost human. • o3 – search + s


AMBIT VITIN

Helping founders fix... • 3m

Most of us rely on prompting to get work done through ChatGPT. Even I used to follow this technique, until one of my friends taught me how to master Context Engineering. Here's how it works: Prompting vs Context Engineering. Prompting is about a


Rahul Agarwal

Founder | Agentic AI... • 7d

2 ways AI systems today generate smarter answers. I've explained both in simple steps below. RAG (Retrieval-Augmented Generation), step by step: RAG lets AI fetch and use real-time external information to ge


Kavin AI Explorer


Earney • 4m

Qwen3-Coder delivers the same performance as Claude Sonnet 4, right next to Kimi K2. It is simply one of the best coding models we've ever seen. → Still 100% open source → Up to 1M context window 🔥 → 35B active parameters. They're releasing a CLI tool as well ↓


Muttu Havalagi

🎥-🎵-🏏-⚽ "You'll N... • 1y

Here's a short roadmap for learning CSS: 1. Basic CSS: Start with understanding selectors, properties, and values to style HTML elements. 2. Box Model: Learn how the box model works, including margin, padding, border, and content. 3. Layout: Dive


Inactive

AprameyaAI • 1y

Meta has released Llama 3.1, the first frontier-level open source AI model, with features such as expanded context length to 128K, support for eight languages, and the introduction of Llama 3.1 405B. The model offers flexibility and control, enabli


Shuvodip Ray


YouTube • 1y

Researchers at Google DeepMind introduced Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. The paper explores adapting image generative models to different datasets. Instea


Gigaversity

Gigaversity.in • 6m

We developed a deep neural network for forecasting internal resource demand across systems, expecting it to outperform traditional models. But in deployment, a simple linear regression delivered better accuracy, stability, and faster inference times.


LIKHITH

"You never know" • 1y

Highlights of OpenAI's Spring Update. They are introducing GPT-4o, and the highlights are: ● Memory and Context: The model now includes a "Memory" feature that recalls previous interactions and context, resulting in a more consistent and tailored use

