Prompts for LLM
Some interesting properties and suggestions in prompts
Any of these examples may produce different results at a given moment, because LLMs keep evolving. However, the main concepts remain valid across other examples.
Elements of a prompt
Generally speaking, and at a high level, a prompt can have any of the following:
Instructions
Question
Input data
Examples
In order to obtain a result, either instructions or a question must be present. Everything else is optional.
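As a sketch of how these elements fit together (the task and wording below are illustrative, not from the source), a prompt combining all four might look like:

```python
# Illustrative prompt combining the four elements (hypothetical task).
instructions = "Answer using only the input data below. Reply with one word."
question = "Which city has the larger population?"
input_data = "Madrid: 3.3M inhabitants. Lisbon: 0.5M inhabitants."
examples = "Q: Which city is larger, Paris (2.1M) or Porto (0.2M)?\nA: Paris"

# Assemble the final prompt: instructions, examples, data, then the question.
prompt = f"{instructions}\n\n{examples}\n\n{input_data}\n\nQ: {question}\nA:"
print(prompt)
```

Only the instructions or the question are strictly required; the other parts simply give the model more context to work with.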
Prompt engineering?
Prompt engineering is a recent but rapidly growing discipline whose goal is to design the optimal prompt given a generative model and a goal. It is growing so quickly that many believe it will replace other aspects of machine learning, such as feature engineering or architecture engineering for large neural networks.
Prompt engineering requires some domain understanding to incorporate the goal into the prompt (e.g. by determining what good and bad outcomes should look like). It also requires understanding of the model. Different models will respond differently to the same kind of prompting.
Generating prompts at some scale requires a programmatic approach. At the most basic level you want to generate prompt templates that can be programmatically modified according to some dataset or context.
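At that most basic level, a prompt template can be an ordinary string with placeholders that gets filled in from a dataset. A minimal sketch (the template and records here are hypothetical):

```python
# Hypothetical template, filled programmatically from a small dataset.
template = (
    "Summarize the following customer review in one sentence.\n"
    "Product: {product}\n"
    "Review: {review}\n"
    "Summary:"
)

dataset = [
    {"product": "Headphones", "review": "Great sound but the band broke in a week."},
    {"product": "Kettle", "review": "Boils fast and looks nice on the counter."},
]

# One prompt per record, ready to be sent to the model.
prompts = [template.format(**record) for record in dataset]
print(prompts[0])
```

The same idea scales up: the template stays fixed while the context (user data, retrieved documents, etc.) is substituted in programmatically.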
Finally, prompt engineering, like any engineering discipline, is iterative and requires some exploration to find the right solution. Although it is rarely discussed, prompt engineering will require many of the same processes as software engineering (e.g. version control and regression testing).
Instructions + Question
Beyond asking a simple question, possibly the next level of sophistication in a prompt is to include some instructions on how the model should answer the question.
Here I ask for advice on how to write a university essay, but also include instructions on the different aspects the answer should cover.
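A minimal sketch of such a prompt (the exact wording and the listed aspects are illustrative):

```python
# Question plus instructions constraining what the answer should cover.
prompt = (
    "I have to write my first university essay. "
    "Can you give me advice on how to approach it?\n"
    "In your answer, cover:\n"
    "- how to choose and narrow down a topic\n"
    "- how to structure the essay\n"
    "- how to cite sources correctly"
)
print(prompt)
```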
Instructions + Input data
Now let's see what happens when we provide some input data about someone along with instructions:
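A sketch of this pattern (the person and the task are hypothetical):

```python
# Instructions tell the model what to do; input data gives it the facts.
instructions = (
    "Using only the data below, write a short, friendly one-paragraph "
    "introduction of this person."
)
input_data = (
    "Name: John. Age: 35. Occupation: nurse. "
    "Hobbies: cycling, cooking. Lives in: Valencia."
)

prompt = f"{instructions}\n\n{input_data}"
print(prompt)
```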
Errors and different results
We must not forget we are working with a language model: predicting the next word is what it really does, and some of the capabilities it exhibits beyond that are considered emergent abilities.
Any of these errors may be fixed at any time, and new errors can appear.
Let's see an example:
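As an illustration (a hypothetical prompt; actual results vary over time and across models), multi-digit arithmetic embedded in text is a classic case where "predicting the next word" is not the same as computing:

```python
# A kind of prompt that has historically tripped up language models:
# the correct answer requires computation, not next-word prediction.
prompt = "What is 3456 * 2897? Answer with the number only."
print(prompt)
```

Whether a given model answers this correctly today says little about tomorrow, which is exactly the point of this section.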
Role prompting
Role prompting is very useful for adapting the answer to specific areas, jargon, roles, etc., and it sometimes also helps generate more accurate results.
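A minimal sketch of a role prompt (the role and question are illustrative):

```python
# The role line sets the persona; the question follows.
role = "You are an experienced cardiologist."
question = "Explain in plain language what atrial fibrillation is."

prompt = f"{role}\n\n{question}"
print(prompt)
```

Changing only the role line ("You are a primary school teacher", "You are a stand-up comedian", ...) can noticeably change the register and depth of the answer.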
Few shot learning
Language models are pre-trained in an unsupervised way, but they allow a subsequent process called 'fine-tuning', which generally means supervised training on a collection of examples.
We can also get much more accurate results on our tasks using few-shot learning, i.e., by providing examples before the question or task to solve:
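A sketch of a few-shot prompt for sentiment classification (the examples are hypothetical): a handful of labeled examples come first, then the new input the model must label.

```python
# Labeled examples establish the format and the task; the last line
# is the new input, left unlabeled for the model to complete.
examples = (
    "Text: 'I loved this movie!' Sentiment: positive\n"
    "Text: 'The food was cold and the service slow.' Sentiment: negative\n"
    "Text: 'It was fine, nothing special.' Sentiment: neutral\n"
)
new_input = "Text: 'The hotel was spotless and the staff were lovely.' Sentiment:"

prompt = examples + new_input
print(prompt)
```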
Chain of thought prompting
In chain of thought prompting, we explicitly encourage the model to be factual/correct by forcing it to follow a series of steps in its “reasoning”.
Example:
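A sketch of a chain-of-thought prompt (the arithmetic problems are illustrative): the worked example shows the intermediate steps, and the closing phrase nudges the model to reason the same way on the new problem.

```python
# One worked example with explicit reasoning steps, then the new question.
worked_example = (
    "Q: A pack has 3 pens and I buy 4 packs. How many pens do I have?\n"
    "A: Each pack has 3 pens. 4 packs contain 4 * 3 = 12 pens. The answer is 12.\n\n"
)
new_question = "Q: A box holds 6 eggs and I buy 5 boxes. How many eggs do I have?\n"

# "Let's think step by step" encourages the model to spell out its reasoning.
prompt = worked_example + new_question + "A: Let's think step by step."
print(prompt)
```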
Help the system
Use interaction to guide the LLM through the reasoning steps needed to find the answer:
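One way to sketch such an interaction is as a multi-turn conversation, where a follow-up message tells the model how to check its own answer (the conversation below is hypothetical, and the assistant turn is only a placeholder):

```python
# A hypothetical multi-turn exchange: the second user message forces the
# model to verify its first answer step by step.
conversation = [
    {"role": "user", "content": "Is 17077 a prime number?"},
    {"role": "assistant", "content": "(the model's first, possibly wrong, answer)"},
    {"role": "user", "content": (
        "Check it: divide 17077 by every prime up to its square root, "
        "one at a time, and show each step before giving a final answer."
    )},
]

for turn in conversation:
    print(f"{turn['role']}: {turn['content']}")
```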
References: https://learnprompting.org/