Basic Prompts
You can achieve a lot with simple prompts, but the quality of the results depends on how much information you provide and how well crafted the prompt is. A prompt can contain information such as an instruction or a question that you pass to the model, and it can include other details like context, inputs, or examples. You can use these elements to instruct the model more effectively and, as a result, achieve better outcomes.
Let’s start by analyzing a basic example of a simple prompt:
Prompt:
The sky is
Output:
blue.
The sky is blue on a clear day. On a cloudy day, the sky can be gray or white.
As you can see, the language model generates a continuation of text that makes sense given the context "The sky is". The output can be unexpected or far from the task we want to accomplish.
This basic example also highlights the need to provide more context or instructions about what specifically we want to achieve.
Let’s try to improve it a bit:
Prompt:
Complete the sentence: The sky is
Output:
so beautiful.
Is this better? Well, we told the model to complete the sentence, so the result is much better as it follows exactly what we asked it to do ("complete the sentence"). This approach of designing ideal prompts to instruct the model to perform a task is called prompt engineering.
The example above is a basic illustration of what is possible with LLMs today. Modern LLMs can perform all kinds of advanced tasks, ranging from text summarization to mathematical reasoning and code generation.
Prompt Formatting
We tried a very simple prompt above. A standard prompt has the following format:
<Question>?
or
<Instruction>
This can be formatted in a question-and-answer (QA) format, which is standard in many QA datasets, as follows:
Q: <Question>?
A:
When prompting like the above, also known as zero-shot prompting, you are directly asking the model for a response without any examples or demonstrations of the task you want it to perform. Some large language models can perform tasks zero-shot, but this depends on the complexity of the task and the model's knowledge of it.
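If you are assembling prompts programmatically, a zero-shot prompt is just the bare question placed into the QA template, with no demonstrations. A minimal sketch in Python (the helper name `format_zero_shot` is our own, not from any library):

```python
def format_zero_shot(question: str) -> str:
    """Format a question as a zero-shot QA prompt: no examples,
    just the question followed by an empty answer slot."""
    return f"Q: {question}\nA:"

# The resulting string is what you would send to the model.
prompt = format_zero_shot("What is the capital of France?")
print(prompt)
# Q: What is the capital of France?
# A:
```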
Given the standard format above, a popular and effective prompting technique is few-shot prompting, where we provide examples (i.e., demonstrations) of the task. Few-shot prompts can be formatted as follows:
<Question>?
<Answer>

<Question>?
<Answer>

<Question>?
<Answer>

<Question>?
The QA format version would look like this:
Q: <Question>?
A: <Answer>

Q: <Question>?
A: <Answer>

Q: <Question>?
A: <Answer>

Q: <Question>?
A:
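The QA few-shot template is easy to generate from a list of demonstrations. A minimal sketch in Python (the helper name `format_few_shot` is our own invention for illustration):

```python
def format_few_shot(examples: list[tuple[str, str]], question: str) -> str:
    """Build a few-shot QA prompt: each (question, answer) pair becomes
    a demonstration block, and the final question is left unanswered."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\n\nQ: {question}\nA:"

prompt = format_few_shot(
    [("What is 2+2?", "4"), ("What is 3+3?", "6")],
    "What is 4+4?",
)
print(prompt)
```

The model sees the answered demonstrations first, then completes the trailing `A:` for the new question.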
Keep in mind that the QA format is not required. The prompt format depends on the task at hand. For example, you can perform a simple classification task and provide demonstrations of the task as follows:
Prompt:
This is amazing! // Positive
This is bad! // Negative
Wow, this movie was awesome! // Positive
What a horrible show! //
Output:
Negative
Few-shot prompts enable in-context learning, which is the ability of language models to learn tasks given only a few demonstrations.
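The classification prompt above follows the same pattern: labeled demonstrations followed by an unlabeled input for the model to complete. A minimal sketch in Python (the helper name `format_classification_prompt` is hypothetical, not a library function):

```python
def format_classification_prompt(examples: list[tuple[str, str]],
                                 new_input: str) -> str:
    """Build a few-shot classification prompt in the `text // Label`
    format: labeled demonstrations, then the new input with an
    empty label slot for the model to fill in."""
    lines = [f"{text} // {label}" for text, label in examples]
    lines.append(f"{new_input} //")
    return "\n".join(lines)

examples = [
    ("This is amazing!", "Positive"),
    ("This is bad!", "Negative"),
    ("Wow, this movie was awesome!", "Positive"),
]
print(format_classification_prompt(examples, "What a horrible show!"))
```

Running this reproduces the prompt shown above; the model is expected to continue the last line with the missing label.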