Highlights of OpenAI's Spring Update.
They are introducing GPT-4o, and the highlights are:
● Memory and Context
The model now includes a "Memory" feature that recalls previous interactions and context, resulting in a more consistent and tailored experience.
Udyamee
Everything else okay? • 1y
Memory, context, and vision capabilities will greatly improve current GPT usability in day-to-day usage.
Btw great post. Crisp and to the point 💪
Hi mates,
I'm Srujan, and today I have an interesting topic about money.
Most youngsters in India are searching for an easy way to get rich, but meanwhile they are losing time, investing money, etc.
They don't know which platform is best and which i
The next billionaire
Unfiltered and real ... • 1m
Greg Isenberg just shared 23 MCP STARTUP IDEAS TO BUILD IN 2025 (AI agents/AI/MCP ideas), and it's amazing:
"1. PostMortemGuy – when your app breaks (bug, outage), an MCP agent traces every log, commit, and Slack message. Full incident report in seconds.
Name: Sow and Reap
Problem it's solving: the company is trying to reduce fertilizer usage in modern-day farming by constantly educating farmers; the carbon credits these practices generate are kept for sale, and farmers can earn additional re
Hawk
Product Ops Wizard a... • 2m
We’re excited to share that our team at J.B.Enterprises has successfully developed a powerful and efficient chat agent using a combination of n8n, OpenAI, and Airtable. This solution allows seamless communication with users directly on our client’s w
"A Survey on Post-Training of Large Language Models"
This paper systematically categorizes post-training into five major paradigms:
1. Fine-Tuning
2. Alignment
3. Reasoning Enhancement
4. Efficiency Optimization
5. Integration & Adaptation
1️⃣ Fine-Tuning
One of our recent achievements showcases how optimizing code and parallelizing processes can drastically improve machine learning model training times.
The Challenge: Long Training Times
Our model training process was initially taking 8 hours—slow
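The parallelization claim above can be sketched with a toy example. Note this is a minimal illustration, not the team's actual pipeline: the function name `fetch_and_decode` and the string transformation are hypothetical stand-ins for whatever per-sample work the real training job does. The idea is simply that independent steps fan out across a worker pool instead of running one at a time.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_and_decode(path: str) -> str:
    # Hypothetical per-sample step standing in for the post's
    # unspecified pipeline (e.g. reading and decoding one sample).
    return path.upper()

def run_serial(paths):
    # Baseline: process each sample one at a time.
    return [fetch_and_decode(p) for p in paths]

def run_parallel(paths, workers: int = 4):
    # Same work fanned out across a pool of workers. Threads are
    # shown for simplicity; for CPU-bound steps the analogous move
    # is a ProcessPoolExecutor to sidestep the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_and_decode, paths))
```

Because `pool.map` preserves input order, the parallel version returns exactly what the serial version does, which makes the swap safe to verify before measuring the speedup.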
LLM Post-Training: A Deep Dive into Reasoning LLMs
This survey paper provides an in-depth examination of post-training methodologies in Large Language Models (LLMs) focusing on improving reasoning capabilities. While LLMs achieve strong performance
Niket Raj Dwivedi
•
Medial • 24d
WhatsApp is slowly turning into the Facebook of messengers: bloated, cluttered, and directionless. As a product, it's regressing. Here's a breakdown of everything that's broken in WhatsApp today, from my lens:
Chats in WhatsApp feel like a noisy c
👉 Follow || 🔖 Bookmark || 💭 Comment down your thoughts
How Telegram Became a Billion-Dollar Company with Just 120 Employees? 📈🤯
Telegram's success story is incredible! Here are key lessons we can learn from them:
Focus on User P