TL;DR
This video explains how to get better responses from the updated ChatGPT model (GPT-5) by understanding its new architecture and using specific prompting techniques. The key point is that GPT-5 consolidates the previous models and uses a routing system to send each prompt to an appropriate model with specific reasoning and verbosity levels. To optimize results, the video suggests using trigger words, a prompt optimizer tool, specific language, structured prompts (e.g., XML), and self-reflection techniques within the AI.
- GPT-5 consolidates the previous models and uses a routing system.
- Effective prompting is crucial for directing the model to the right reasoning and verbosity levels.
- Techniques include trigger words, prompt optimizers, specific language, structured prompts, and self-reflection.
Intro [0:00]
The presenter notes that getting great responses from ChatGPT has become harder, even though the current model (GPT-5) is more powerful than its predecessors. This is because GPT-5 has a fundamentally different architecture. The video shares five simple tricks, derived from OpenAI's research team, to improve ChatGPT responses.
Why the change [0:32]
OpenAI has published blog posts detailing how to prompt the tool effectively; the video consolidates that guidance into five simple tricks. The value gap has widened: users who don't know how to prompt effectively lose potential value from the tool. GPT-5 consolidates eight legacy models into three (or two, depending on the user's access level). This consolidation relies on a routing system: OpenAI uses a router to analyze the prompt and its context, then directs it to a specific model (base, thinking, or pro). The router also sets a reasoning level (minimal, low, medium, or high) and a verbosity level (low, medium, or high). Users must learn to prompt the model so it is routed correctly, with appropriate reasoning and verbosity.
Trigger words [2:52]
Using trigger words can raise the reasoning level the model applies before responding. Examples include "think deeply," "double check your work," and "be extremely thorough." Adding one of these to the end of a prompt can route the query to a more capable model and increase reasoning, leading to more accurate responses.
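The technique above can be sketched as a small helper. This is purely illustrative: the trigger phrases come from the video, but the function and its names are hypothetical, not part of any OpenAI API.

```python
# Hypothetical helper: append a reasoning "trigger" phrase to a prompt.
# The phrases are from the video; the mapping keys are an assumption.
TRIGGERS = {
    "deep": "Think deeply about this before answering.",
    "check": "Double check your work.",
    "thorough": "Be extremely thorough.",
}

def with_trigger(prompt: str, trigger: str = "deep") -> str:
    """Return the prompt with the chosen trigger phrase appended on its own line."""
    return f"{prompt.rstrip()}\n\n{TRIGGERS[trigger]}"

print(with_trigger("Summarize the attached earnings report.", "check"))
```

Placing the trigger at the end, as the video recommends, keeps the original request intact while nudging the router toward a higher reasoning level.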
Prompt Optimizer [3:31]
OpenAI provides a prompt optimizer tool that improves prompts based on GPT-5 best practices. The tool refines prompts to eliminate vagueness, improve structure, and remove contradictions, and it explains why it made each change. For example, it might clarify vague terms by inferring their meaning from the prompt's context, or convert a loosely described process into a detailed checklist for the AI to follow. The tool is accessible through platform.openai.com.
Be Specific [5:42]
GPT-5 is very good at following instructions, so specificity is crucial. Contradictions and vague terms in prompts can confuse the AI, leading to overreasoning and incorrect responses. Providing specific details lets the AI solve the problem without heading in the wrong direction. For example, instead of asking it to "plan a nice party," specify the type of party, age group, budget, duration, and theme.
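The party example can be made concrete with a small template. The function and its parameter names are illustrative assumptions; the point is simply that every vague word ("nice") is replaced by a measurable constraint.

```python
def party_prompt(kind: str, age_group: str, budget: str,
                 duration: str, theme: str) -> str:
    """Replace a vague ask ("plan a nice party") with concrete constraints."""
    return (
        f"Plan a {kind} party for {age_group}. "
        f"Budget: {budget}. Duration: {duration}. Theme: {theme}. "
        "Include a schedule and a shopping list that stays within the budget."
    )

print(party_prompt("birthday", "ten 8-year-olds", "$300", "3 hours", "dinosaurs"))
```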
Structured prompts [6:51]
Structuring prompts is especially useful for custom GPTs or GPT projects. Using XML to tag different sections of the prompt (context, task, format) helps the AI understand the system instructions more effectively: wrap background information in <context> ... </context>, the goal in <task> ... </task>, and the desired output in <format> ... </format>. The AI knows that the context section provides background, the task section defines the goal, and the format section specifies the desired output. Users can ask the AI to convert their existing prompts into XML format.
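The three-section layout can be sketched as a small builder. The tag names (context, task, format) come from the video; the helper itself and the sample bakery prompt are illustrative assumptions.

```python
def xml_prompt(context: str, task: str, fmt: str) -> str:
    """Wrap each section in its XML tag so the model can tell the roles apart."""
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"<task>\n{task}\n</task>\n\n"
        f"<format>\n{fmt}\n</format>"
    )

print(xml_prompt(
    "You are an assistant for a small bakery.",
    "Draft a reply to a customer asking about gluten-free options.",
    "A short, friendly email of no more than 120 words.",
))
```

Because every opening tag has a matching close, the model can reliably separate background from instructions even in a long system prompt.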
Self-reflection [8:16]
GPT-5 can critique itself, which can be used to improve the quality of its responses. The recommended approach is to have the AI create a rubric based on the user's intent and then judge its own work against that rubric. For example, if the intent is to write a simple, clear piece at a fifth-grade reading level, the AI creates a rubric aligned with that intent, writes a first draft, evaluates it against the rubric, and iterates several times. The final response is typically higher quality because of this self-reflection loop. The AI performs these iterations internally, returning only the best final output.
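The rubric-then-iterate approach described above can be captured as a reusable prompt template. The exact wording here is an assumption for illustration, not OpenAI's recommended phrasing.

```python
# Illustrative self-reflection prompt template; the wording is an
# assumption, not an official OpenAI-recommended prompt.
SELF_REFLECTION = """\
First, create a rubric for judging this piece based on my intent:
simple, clear writing at a fifth-grade reading level.
Write a first draft, score it against the rubric, and revise
until every criterion is met. Do the drafting and scoring
internally and show me only the final version.

Task: {task}
"""

print(SELF_REFLECTION.format(task="Explain how vaccines work."))
```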
Outro [10:34]
The presenter summarizes the key takeaways: the foundational model has changed to a routing-based design, so users need to improve their prompting to ensure the AI is directed to the right model with the appropriate reasoning and verbosity. The techniques are trigger words, the prompt optimizer, specific language, structured prompts, and self-reflection.