
Comet

#freelancer • 6d

Difference between previous LLMs (GPT-4o / Claude 3.5 Sonnet / Meta Llama) and recent thinking/reasoning LLMs (o1/o3)

Think of older LLMs (like early GPT models) as GPS navigation systems that could only predict the next turn. They were like saying "Based on this road, the next turn is probably right" without understanding the full journey.

The problem with RLHF (Reinforcement Learning from Human Feedback) was like trying to teach a driver using only a simple "good/bad" rating system. Imagine rating a driver only on whether they arrived at the destination, without considering their route choices, safety, or efficiency. This limited feedback system couldn't scale well for teaching more complex driving skills.

Now, let's understand the o1/o3 models:

1. The Tree of Possibilities Analogy:
Imagine you're solving a maze, but instead of just going step by step, you:
- Can see multiple possible paths ahead
- Have a "gut feeling" about which paths are dead ends
- Can quickly backtrack when you realize a path isn't promising
- Develop an instinct for which turns usually lead to the exit
o1/o3 models are trained similarly: they don't just predict the next step, they develop an "instinct" for exploring multiple solution paths and choosing the most promising ones.

2. The Master Chess Player Analogy:
- A novice chess player thinks about one move at a time
- A master chess player develops intuition about good moves by:
  * Seeing multiple possible move sequences
  * Having an instinct for which positions are advantageous
  * Quickly discarding bad lines of play
  * Efficiently focusing on the most promising strategies
o1/o3 models are like these master players: they've developed intuition through exploring countless solution paths during training.

3. The Restaurant Kitchen Analogy:
- Old LLMs were like a cook following a recipe step by step
- o1/o3 models are like experienced chefs who:
  * Know multiple ways to make a dish
  * Can adapt when ingredients are missing
  * Have instincts about which techniques will work best
  * Can efficiently switch between cooking methods when one isn't working
The "parallel processing" mentioned (as in o1-pro) is like having multiple expert chefs working independently on different aspects of a meal, each using their expertise to solve their part of the problem.

To sum up: o1/o3 models are a big step forward because they're not just learning to follow steps (like older models) or respond to simple feedback (like RLHF-tuned models). Instead, they develop sophisticated instincts for problem-solving by exploring and evaluating many possible solution paths during training. This makes them more flexible and efficient at finding solutions, much as human experts develop intuition in their fields.
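The maze analogy above can be sketched in code. This is only a toy illustration of the idea (keep several candidate paths alive, score each with a heuristic "gut feeling", and abandon dead ends), not how o1/o3 are actually trained; the `MAZE`, `solve`, and `heuristic` names are made up for this example.

```python
import heapq

# Toy best-first search over a small maze: '#' = wall, 'S' = start, 'G' = goal.
MAZE = [
    "S.#.",
    ".#..",
    "..#.",
    "#..G",
]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    goal = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "G")

    def heuristic(pos):
        # Manhattan distance to the goal: an "instinct" for promising turns.
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    # Priority queue of (score, path): the most promising path is expanded first,
    # so unpromising branches are implicitly abandoned (backtracking for free).
    frontier = [(heuristic(start), [start])]
    seen = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        pos = path[-1]
        if pos == goal:
            return path  # found the exit
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = pos[0] + dr, pos[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                heapq.heappush(frontier, (heuristic((nr, nc)), path + [(nr, nc)]))
    return None  # every branch was a dead end

path = solve(MAZE)
print(path)
```

Running it prints the sequence of cells from S to G; the path through (0, 1) is explored first but discarded once it dead-ends, which is the "quickly backtrack when a path isn't promising" part of the analogy.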

0 replies • 2 likes

More like this

Recommendations from Medial

Chamarti Sreekar

Passionate about Pos... • 3m

Sam Altman says the o3-mini will be worse than the o1 pro 👀

0 replies • 14 likes

Harsh Dwivedi

Medial • 19d

This is the only way OpenAI is coming up with models named O3-mini-high and O4-mini-high

5 replies • 18 likes

Sarthak Gupta

Developer • 16d

10+ state-of-the-art LLMs, 2+ compound LLMs (latest in the market). Which models do you want next? BTW, check the reply for the app.

1 reply • 1 like

Havish Gupta

Figuring Out • 4m

OpenAI's 12-Day Series has finally ended, and on the last day they announced the o3 and o3-mini models, which have smashed all benchmarks!
1. o3 scored a 2727 coding Elo on Codeforces, ranking it equivalent to the 175th-best coder globally.
2. On Ha

7 replies • 10 likes

Vignesh S

Machine Learning Eng... • 1y

A weird analogy here: the AI wave is gonna bring more startups like Perplexity, which is basically the product you get when LLMs get married to web browsers. So even though marriage rates among humans are going down globally, tech marriages are gonna go up lol 😂

4 replies4 likes

Comet

#freelancer • 10m

In a recent educational lecture, Andrej Karpathy, one of the creators of ChatGPT, provides an introduction to Large Language Models (LLMs). LLMs are advanced technologies that can process and generate human-like text. Karpathy highlights the fut

0 replies • 10 likes

Shivam Agarwal

Open Source OpenStre... • 3m

Well, Evanth AI is gonna offer o1-mini and o1-preview at just ₹99 a month. We've listed 20+ premium models as of now at ₹99.

5 replies • 2 likes

Mohammed Zaid

Building-HatchUp.ai • 2m

According to a recent study by Palisade Research, advanced AI models like OpenAI's o1-preview have demonstrated a concerning tendency to cheat when faced with potential defeat in chess matches, sometimes resorting to hacking their opponents to force

0 replies • 1 like
Anonymous

Hi, is anybody using foundation models (LLMs) in development? If so, are you using closed-source ones like GPT or Gemini, or open-source models? If you are using closed source, why only that, when you could save money by using open source with low-parameter models

7 replies • 14 likes

Bhavesh Agone

Medium • 1y

Announced today, we are collaborating as a launch partner with @Google in delivering Gemma, an optimized series of models that gives users the ability to develop with #LLMs using only a desktop #RTX GPU.

2 replies • 5 likes
