Prompt Engineering
Prompt design is the process of crafting a prompt tailored to the specific task the system is being asked to perform.
Prompt engineering is the iterative process of developing and refining prompts to improve a model's performance on that task.
Prompting Principles
Principle 1: Write clear and specific instructions
Tactic 1: Use delimiters to clearly indicate distinct parts of the input
- Delimiters can be anything that clearly marks off a distinct piece of text, like: triple backticks (```), triple quotes ("""), angle brackets (< >), XML-style tags (<tag> </tag>), or a colon (:); see the sketch below
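A minimal sketch of this tactic using the openai Python client (v1+); the model name is illustrative and OPENAI_API_KEY is assumed to be set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

text = (
    "Clear and specific instructions guide a model toward the desired "
    "output and reduce the chance of irrelevant or incorrect responses."
)

# XML-style tags mark exactly which part of the prompt is data, so the
# model cannot confuse the instructions with the text to operate on.
prompt = f"Summarize the text inside <text> tags in one sentence.\n<text>{text}</text>"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```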
Tactic 2: Ask for a structured output
- e.g. JSON or HTML (see the sketch below)
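A sketch of asking for JSON so the reply can be parsed programmatically (same client setup as above; the schema named in the prompt is illustrative):

```python
import json

from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate a list of three made-up book titles with their authors and genres. "
    "Respond with only a JSON array of objects with keys: title, author, genre."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output makes parsing more reliable
)

# Works when the model follows the format; production code should also
# handle replies wrapped in code fences or other deviations.
books = json.loads(response.choices[0].message.content)
print(books[0]["title"])
```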
Tactic 3: Ask the model to check whether conditions are satisfied
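For example, the prompt can ask the model to test a condition first and fall back to a fixed answer when it fails; a sketch (prompt text only, sent with the same chat-completion call as above):

```python
text = (
    "Making tea is easy. First, boil some water. While that happens, grab a "
    "cup and put a tea bag in it. Once the water is hot enough, pour it over "
    "the tea bag and let it steep."
)

# The model checks the condition (does the text contain instructions?)
# before producing output, instead of assuming it always does.
prompt = f"""
You will be given text inside <text> tags.
If it contains a sequence of instructions, rewrite them as numbered steps.
If it does not contain a sequence of instructions, write exactly: No steps provided.
<text>{text}</text>
"""
print(prompt)
```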
Tactic 4: "Few-shot" prompting
Principle 2: Give the model time to "think"
Tactic 1: Specify the steps required to complete a task
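A sketch of spelling out the intermediate steps so the model works through them in order (the text and the steps are illustrative):

```python
text = "Jack and Jill set out from their quiet village to fetch water from a hilltop well."

# Numbering the steps forces the model through each stage in sequence
# rather than jumping straight to a final answer.
prompt = f"""
Perform the following actions on the text inside <text> tags:
1. Summarize the text in one sentence.
2. Translate the summary into French.
3. List each name that appears in the French summary.
4. Output a JSON object with the keys: french_summary, num_names.
<text>{text}</text>
"""
print(prompt)
```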
Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion
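A sketch of this tactic: the model is told to solve the problem itself before judging a student's (deliberately wrong) answer, which makes errors like the one below much more likely to be caught:

```python
prompt = """
Determine whether the student's solution is correct.
First work out your own solution to the problem, showing your steps.
Then compare your solution to the student's solution, and only after
that decide whether the student's solution is correct.

Problem: What is 17 * 24?
Student's solution: 17 * 24 = 398
"""
print(prompt)  # 17 * 24 = 408, so the student's answer should be flagged
```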
Imitating
- In the style of X, write about Y (for example: "In the style of Ernest Hemingway, write about a rainy Tuesday.")
Prompting Techniques
Chain-of-thought
Chain-of-thought (CoT) prompting is a technique that induces a large language model (LLM) to solve a problem as a series of intermediate steps before giving a final answer, mimicking a human train of thought. By drawing out explicit reasoning steps, it helps LLMs overcome difficulties with tasks that require logical thinking and multiple steps to solve, such as arithmetic or commonsense reasoning questions.
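A minimal zero-shot chain-of-thought sketch: appending a cue that elicits intermediate reasoning before the final answer (the question and cue are illustrative):

```python
question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# The trailing cue induces the model to lay out its reasoning
# (16 / 2 = 8 golf balls, 8 / 2 = 4 blue) before answering "4".
prompt = question + "\nLet's think step by step."
print(prompt)
```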
Other techniques
- Generated knowledge prompting
- Least-to-most prompting
- Self-consistency decoding
- Complexity-based prompting
- Self-refine
- Tree-of-thought
- Maieutic prompting
- Directional-stimulus prompting
Prompt engineering - Wikipedia
Parameters
Temperature
Controls the randomness of the model's output. A higher temperature makes the output more random, while a lower temperature makes it more deterministic.
Understanding OpenAI's Temperature Parameter | Colt Steele
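A sketch comparing temperatures with the openai Python client (model name illustrative; exact outputs will vary run to run):

```python
from openai import OpenAI

client = OpenAI()

prompt = "Suggest one name for a new coffee shop."

# Low temperature -> focused, near-deterministic completions;
# high temperature -> more random, varied completions.
for temperature in (0.0, 0.7, 1.5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(temperature, "->", response.choices[0].message.content)
```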
Other Topics
- Iterative
- Summarizing
- Inferring
- Transforming
- Expanding
- Chatbot
- Conclusion
ChatGPT Prompt Engineering for Developers - DeepLearning.AI
Assistant APIs
The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling.
At a high level, a typical integration of the Assistants API has the following flow (a code sketch follows the list):
- Create an Assistant in the API by defining its custom instructions and picking a model. If helpful, enable tools like Code Interpreter, Retrieval, and Function calling.
- Create a Thread when a user starts a conversation.
- Add Messages to the Thread as the user asks questions.
- Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.
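A minimal sketch of that flow using the beta Assistants endpoints in the openai Python client (v1.x); the assistant's name, instructions, and model are illustrative:

```python
import time

from openai import OpenAI

client = OpenAI()

# 1. Create an Assistant with instructions, a model, and optional tools.
assistant = client.beta.assistants.create(
    name="Math Tutor",  # illustrative
    instructions="You are a personal math tutor. Write and run code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-turbo",  # illustrative model name
)

# 2. Create a Thread when a user starts a conversation.
thread = client.beta.threads.create()

# 3. Add the user's Message to the Thread.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Solve 3x + 11 = 14 for x.",
)

# 4. Run the Assistant on the Thread and poll until it finishes;
#    tool calls (here, Code Interpreter) happen automatically.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # newest message first
```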
Create AI Assistants with OpenAI's Assistants API
Knowledge-based retrieval tool: platform.openai.com/docs/assistants/overview
Learning
- Large Language Models and Cybersecurity - What You Should Know
- Understanding Large Language Models - by Sebastian Raschka
- The Art of Prompt Design: Use Clear Syntax | by Scott Lundberg | May, 2023 | Towards Data Science
- Prompt Engineering - Google Slides
- Prompt Engineering Tutorial - Master ChatGPT and LLM Responses - YouTube
- Advanced Prompt Engineering for Content Creators - Full Handbook
- Prompt Engineering with Llama 2 - DeepLearning.AI