Prompt engineering is the skill of crafting effective prompts to get optimal results from Large Language Models (LLMs).

  • It allows users to communicate with LLMs using natural language, eliminating the need for technical expertise in machine learning, statistics, and data analysis. 

With prompt engineering, users can “program” LLMs in plain language, making the technology more accessible and user-friendly.

Prompt Engineering

  1. Clearly define the task: Specify the task or question you want the LLM to perform or answer.
  2. Provide context and input data: Give the LLM relevant information to work with.
  3. Give specific instructions: Tell the LLM what to do with the context and input data.
  4. Use varied examples: Help the LLM understand what you’re looking for by providing diverse examples.
  5. Set constraints: Limit the scope of the LLM’s output to avoid inaccuracies.
  6. Break down complex tasks: Divide difficult tasks into simpler prompts.
  7. Add self-evaluation instructions: Ask the LLM to review its response for accuracy before finalizing it.
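As an illustration, the checklist above can be sketched as a simple prompt builder. This is a minimal sketch, not any particular library's API; the `build_prompt` helper and its field names are hypothetical:

```python
def build_prompt(task, context, instructions, examples=None, constraints=None):
    """Assemble a prompt from the components listed above (hypothetical helper)."""
    parts = [f"Task: {task}", f"Context: {context}", f"Instructions: {instructions}"]
    if examples:  # step 4: varied examples
        parts.append("Examples:")
        parts.extend(f"- {ex}" for ex in examples)
    if constraints:  # step 5: limit the output's scope
        parts.append(f"Constraints: {constraints}")
    # Step 7: ask the model to self-check before answering
    parts.append("Before answering, review your response for accuracy.")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize a financial report in one sentence.",
    context="The company reported a significant increase in profits last quarter.",
    instructions="Summarize the context above.",
    constraints="Use only information from the context.",
)
```

Each component maps to one step of the checklist; a complex task (step 6) would become several such prompts instead of one.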

Be creative: Think outside the box and explore new possibilities with LLMs and prompt engineering.

Types of Prompt Engineering


1. Direct prompting (Zero-shot)

Direct prompting, also known as zero-shot prompting, is a technique in prompt engineering where you provide a Large Language Model (LLM) with a simple and direct instruction or question, without any additional context or examples. This approach relies on the LLM’s ability to understand and generate responses based on its pre-training and knowledge.

In zero-shot prompting, you:

  • Provide a single, concise prompt
  • Ask the LLM to generate a response without any additional context or guidance
  • Rely on the LLM’s pre-training and knowledge to produce an accurate response

Examples:

Simple questions:
Prompt: "What is the capital of France?"
Response: "Paris"

Definitions:
Prompt: "Define artificial intelligence"
Response: "Artificial intelligence (AI) refers to the development of ...."
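Mechanically, a zero-shot call sends the bare instruction and nothing else. The sketch below uses a stub in place of a real LLM client; the `zero_shot` wrapper and `fake_llm` stand-in are assumptions for the demo, not a real API:

```python
def zero_shot(question, llm):
    """Send a single direct prompt with no examples or extra context."""
    return llm(question)

# Stub standing in for a real LLM client (assumption for this demo):
fake_llm = lambda prompt: {"What is the capital of France?": "Paris"}.get(prompt, "")

answer = zero_shot("What is the capital of France?", fake_llm)
```

With a real client, `fake_llm` would be replaced by a call to whatever model you use; the prompt itself is unchanged.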

2. Prompting with examples (One-, few-, and multi-shot)

Prompting with examples is a technique in prompt engineering where you provide a Large Language Model (LLM) with one or more examples to help it understand the task or question you want it to perform or answer. This approach is useful when the task is complex or requires specific guidance.

There are three types of prompting with examples:

One-shot:

  • Provide a single example to help the LLM understand the task
  • The LLM generates a response based on this single example

Prompt (with the one example embedded): "Here's a summary of a similar text: 'The company saw a substantial rise in revenue last quarter.' Now summarize the following text: 'The company reported a significant increase in profits last quarter.'"
Response: "The company's profits increased significantly last quarter."
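In code, a one-shot prompt is just the task with a single worked example spliced in before the real input. This is a sketch with hypothetical names, not a library function:

```python
def one_shot_prompt(task, example_input, example_output, new_input):
    """Embed a single worked example before the real input (one-shot)."""
    return (
        f"{task}\n\n"
        f"Example input: {example_input}\n"
        f"Example output: {example_output}\n\n"
        f"Input: {new_input}\n"
        f"Output:"
    )

p = one_shot_prompt(
    "Summarize the following text in one sentence.",
    "The company saw a substantial rise in revenue last quarter.",
    "Revenue rose substantially last quarter.",
    "The company reported a significant increase in profits last quarter.",
)
```

Ending the prompt at `Output:` invites the model to complete the pattern the example established.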

Few-shot:

  • Provide a small set of examples (typically 2–5) to help the LLM understand the task
  • The LLM generates a response based on these few examples

Prompt: "Generate a product description for a new smartwatch."
Examples:
"Here's a description for a similar product: 'Introducing the X500 smartwatch, with advanced fitness tracking and notification features.'"
"Another example: 'The Y700 smartwatch offers stylish design and seamless integration with your smartphone.'"
Response: "Introducing the Z1000 smartwatch, with cutting-edge health monitoring and personalized alerts."

Multi-shot:

  • Provide a larger set of examples (typically 6 or more) to help the LLM understand the task
  • The LLM generates a response based on these multiple examples

Prompt: Predict up to 5 emojis as a response to a text chat message. The output should only include emojis.

input: The new visual design is blowing my mind 🤯
output: ➕,💘, ❤‍🔥

input: Well that looks great regardless
output: ❤️,🪄

input: Unfortunately this won't work
output: 💔,😔

input: sounds good, I'll look into that
output: 🙏,👍

input: 10hr cut of jeff goldblum laughing URL
output: 😂,💀,⚰️

input: Woo! Launch time!
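Few- and multi-shot prompts differ only in how many input/output pairs precede the new input, so one builder covers both. A sketch of the emoji task above; the `shots_prompt` helper is illustrative, not a real API:

```python
def shots_prompt(instruction, examples, new_input):
    """Prepend labelled input/output pairs (few- or multi-shot) to a new input."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"input: {inp}")
        lines.append(f"output: {out}")
        lines.append("")
    # The final input is left without an output so the model completes it
    lines.append(f"input: {new_input}")
    lines.append("output:")
    return "\n".join(lines)

examples = [
    ("The new visual design is blowing my mind 🤯", "➕,💘,❤‍🔥"),
    ("Unfortunately this won't work", "💔,😔"),
    ("sounds good, I'll look into that", "🙏,👍"),
]
p = shots_prompt(
    "Predict up to 5 emojis as a response to a text chat message. "
    "The output should only include emojis.",
    examples,
    "Woo! Launch time!",
)
```

With three pairs this is few-shot; passing six or more of the same pairs would make it multi-shot.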

3. Chain-of-thought prompting

Chain-of-thought prompting is a technique in prompt engineering where you provide a Large Language Model (LLM) with a series of interconnected prompts or questions that guide its thinking and generation process. This approach simulates a human-like thought process, where each step builds upon the previous one, allowing the LLM to produce more coherent and accurate responses.

In chain-of-thought prompting, you:

  • Break down a complex task or question into smaller, sequential steps
  • Provide each step as a separate prompt or question
  • Ask the LLM to respond to each step, building upon its previous responses

Example:
Prompt 1: "What is the main topic of the text: 'The company reported a significant increase in profits last quarter.'?"
Response: "The main topic is the company's financial performance."
Prompt 2: "What specific aspect of financial performance is mentioned in the text?"
Response: "The text mentions an increase in profits."
Prompt 3: "What could be a possible reason for this increase in profits?"
Response: "A possible reason could be effective cost management and strategic investments."
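The three-step exchange above can be driven by a loop that feeds each answer back into the next prompt as context. This is a minimal sketch; `echo_llm` is a stub standing in for a real LLM client:

```python
def chain_of_thought(steps, llm):
    """Run a sequence of prompts, appending each response to the running context."""
    context = ""
    answers = []
    for step in steps:
        prompt = f"{context}\n{step}".strip()
        answer = llm(prompt)
        answers.append(answer)
        # Carry the full exchange forward so the next step builds on it
        context = f"{prompt}\nAnswer: {answer}"
    return answers

# Stub that echoes the last question (stands in for a real model):
echo_llm = lambda prompt: f"[answer to: {prompt.splitlines()[-1]}]"

out = chain_of_thought(
    ["What is the main topic of the text?",
     "What specific aspect is mentioned?",
     "What could explain it?"],
    echo_llm,
)
```

Each prompt sees everything that came before, which is what lets the final answer build on the intermediate ones.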

Zero-shot CoT

This is a technique in prompt engineering that combines the concepts of zero-shot prompting and chain-of-thought prompting.

  • In zero-shot CoT, you give the LLM a single prompt, just as in zero-shot prompting, with no examples or demonstrations. 
  • However, you append a reasoning trigger such as “Let’s think step by step.” to the prompt. 
  • This nudges the LLM to generate its own intermediate reasoning steps before arriving at a final answer, relying only on its pre-training and knowledge.

Prompt:

I went to the market and bought 10 apples. I gave 2 apples to the neighbor and
2 to the repairman. I then went and bought 5 more apples and ate 1. How many
apples was I left with?

Let's think step by step.
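Mechanically, zero-shot CoT is just appending the trigger phrase to an ordinary question. The helper name below is hypothetical:

```python
def zero_shot_cot(question, trigger="Let's think step by step."):
    """Append a reasoning trigger to a plain question (zero-shot CoT)."""
    return f"{question}\n\n{trigger}"

p = zero_shot_cot(
    "I went to the market and bought 10 apples. I gave 2 apples to the neighbor "
    "and 2 to the repairman. I then went and bought 5 more apples and ate 1. "
    "How many apples was I left with?"
)
```

For reference, the correct answer to the apple question is 10 (10 − 2 − 2 + 5 − 1); the trigger phrase makes the model more likely to show that arithmetic rather than guess.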

Prompt iteration strategies


Here are a few ideas for refining prompts if you get stuck:

Note: These strategies may become less useful or necessary over time as models improve.

  1. Repeat keywords, phrases, or ideas
  2. Specify your desired output format (CSV, JSON, etc.)
  3. Use all caps to stress important points or instructions. For example: “Your explanation should be impossible to misinterpret. Every single word must ooze clarity!”
  4. Use synonyms or alternate phrasing (e.g., instead of “Summarize,” try appending “tldr” to some input text). Swap in different words or phrases and document which ones work better and which are worse.
  5. Try the sandwich technique with long prompts: Add the same statement in different places.
  6. Use a prompt library for inspiration. Prompt Gallery is a good place to start.
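Strategy 2 in practice: wrap the task with an explicit output-format instruction, then validate the reply. This is a sketch under assumed names (`json_format_prompt`, `parse_reply`, and the simulated reply are all illustrative):

```python
import json

def json_format_prompt(task):
    """Wrap a task with an explicit JSON output-format instruction (strategy 2)."""
    return (
        f"{task}\n\n"
        'Respond with JSON only, using the keys "summary" (string) and '
        '"sentiment" (one of "positive", "negative", "neutral").'
    )

def parse_reply(reply):
    """Validate that the model's reply matches the requested format."""
    data = json.loads(reply)
    assert {"summary", "sentiment"} <= data.keys()
    return data

# Simulated model reply for the demo (a real reply would come from an LLM):
reply = '{"summary": "Profits rose last quarter.", "sentiment": "positive"}'
result = parse_reply(reply)
```

Specifying the format up front makes the reply machine-checkable, which is what turns a vague prompt into one you can iterate on systematically.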
