The advent of ChatGPT and LLMs (Large Language Models) has revolutionized the world. Almost every industry is adopting these powerful artificial intelligence tools to build new products and services. But how do we build the prompts, i.e., the instructions, that make these models generate what we are interested in? Prompt engineering is the discipline that addresses this very issue. Let's discover some tips for improving our prompts to artificial intelligence tools.


On November 30, 2022, the world underwent an extraordinary revolution: the first version of ChatGPT was released! Before then, artificial intelligence was largely confined to academic circles and the research and development departments of hi-tech companies. Some interesting results had already been achieved, but they had not encountered the success and popularity that ChatGPT achieved in a very short time. But why do we speak of a revolution? Applications of the large language models (LLMs) on which OpenAI's chatbot is based have spread into many areas, from generating multimedia content for the Web (images, video, text) to supporting industrial activities. On top of these models, additional software has been developed that exploits the potential of LLMs to provide even more advanced and customizable tools and services.

Of course, this massive use of artificial intelligence has also drawn criticism, raising concerns about its impact on employment and the ethics of its use. These issues are still a matter of debate today and most likely will be for a long time to come. In this article we do not want to address these very sensitive questions; instead, we want to introduce a new branch that has developed precisely with the advent of ChatGPT and its siblings: prompt engineering. We will analyze prompt engineering and some useful tips for constructing effective prompts to get the best results.

Prompt engineering

Prompt engineering is a new discipline that enables the development and optimization of prompts for efficient use of language models (LMs) in various applications and research areas. It helps to understand the capabilities and limitations of large language models (LLMs).

Researchers use prompt engineering to improve the capabilities of LLMs on a variety of simple and complex tasks, such as question answering and arithmetic reasoning. Software developers use it to design robust and effective prompting techniques that interface smoothly with LLMs and other tools.

Prompt engineering is not limited to designing and developing prompts; it encompasses a wider range of activities. It involves a range of skills and techniques for interacting with and building on LLMs, making it an essential skill for interfacing with, extending, and understanding the capabilities of LLMs. Prompt engineering can also improve the safety of LLMs and unlock new capabilities, such as augmenting LLMs with external tools and domain knowledge.

LLM Configuration

When working with prompts, you interact with the LLM either directly or through an API. You can configure various parameters to get different results from your prompts.

  • Temperature: a lower temperature yields more deterministic results, in which the most likely token is always selected. A higher temperature increases diversity and allows for more creative alternatives, because the weight of other potential tokens is raised. For fact-based tasks, lower temperature values are advisable for more concise and factual answers; for poem generation and other creative tasks, increasing the temperature can be advantageous.
  • top_p: controls how deterministic the model is when generating a response, through a sampling technique called nucleus sampling. If precise, factual responses are desired, a low value is recommended; if a wider range of responses is desired, increase the value.

It is recommended to change only one of these parameters, not both. Keep in mind that different versions of an LLM may produce different results.
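To see what the temperature parameter actually does, here is a minimal sketch in Python of temperature-scaled softmax, the mechanism behind temperature sampling; the logit values are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw token scores (logits) into probabilities.
    Dividing by the temperature reshapes the distribution:
    a low temperature concentrates mass on the top token,
    a high temperature flattens it, boosting less likely tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                        # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)    # near-deterministic
hot = softmax_with_temperature(logits, 2.0)     # more diverse
# The most likely token dominates at low temperature
# and gives up probability mass at high temperature.
```

Running this, `cold` assigns almost all probability to the first token, while `hot` spreads it much more evenly, which is exactly the determinism/diversity trade-off described above.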

Elements of a Prompt

A prompt can contain any of the following components:

  • Instruction: a specific task or directive for the model to execute
  • Context: external information or additional context that can steer the model toward better responses
  • Input Data: the input or question for which we want an answer
  • Output Indicator: the type or format of the desired output

Not all components are necessary for a prompt, and the format depends on the task at hand. Below we will look at some examples of how to use these elements.
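As a sketch, these four elements can be assembled programmatically. The helper below is hypothetical (the function name and layout are our own, not a standard) and simply concatenates whichever components are present:

```python
def build_prompt(instruction, context=None, input_data=None, output_indicator=None):
    """Assemble a prompt from the four optional elements:
    instruction, context, input data, and output indicator."""
    parts = [f"### Instruction ###\n{instruction}"]
    if context:
        parts.append(f"Context: {context}")
    if input_data:
        parts.append(f"Input: {input_data}")
    if output_indicator:
        parts.append(f"Output format: {output_indicator}")
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Classify the sentiment of the text.",
    input_data="The movie was wonderful!",
    output_indicator="One word: positive, negative, or neutral",
)
print(prompt)
```

Here the context element is omitted, illustrating that not every prompt needs all four components.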

General Tips for Prompt Design

Here are some tips to keep in mind when designing your prompts:

Simple start

When starting to design prompts, keep in mind that this is an iterative process that requires a lot of experimentation to achieve optimal results.

You can start with simple prompts and add more elements and context as you aim for better results; iterating on the prompt along the way is therefore critical. When you have a large task that involves many different subtasks, you can break it down into simpler subtasks and build them up as the results improve. This way you avoid adding too much complexity to the prompt design process at the beginning.

Instructions

You can design effective prompts for various simple tasks, using commands to tell the model what you want to achieve, such as “Write,” “Rank,” “Summarize,” “Translate,” “Sort,” etc.

Keep in mind that you also have to experiment a lot to figure out what works best. Try different instructions with different keywords, contexts, and data, and see what works best for your use case and business. Usually, the more specific and relevant the context is to the task you are trying to perform, the better. In future guides we will discuss the importance of sampling and adding more context.

Many practitioners recommend placing instructions at the beginning of the prompt and using a clear separator, such as "###", to divide the instruction from the context.

For example:

### Instruction ###
Translate the following text into Spanish
Text: Ciao!

Output:

¡Hola!

Specificity

Be very specific about the instruction and the task you want the model to perform. The more descriptive and detailed the request, the better the results will be. This is especially important when searching for a desired result or generation style. There are no specific tokens or keywords that lead to better results. It is more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective in getting the desired results in specific formats.

When designing prompts, one must also keep in mind the length of the prompt, as there are limits as to its length. You have to think about how specific and detailed you need to be. Including too much unnecessary detail is not necessarily a good approach. The details must be relevant and contribute to the task at hand. This is something you will have to experiment with a lot. We encourage experimentation and iteration to optimize prompts for your applications.
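Prompt length can be checked before anything is sent to the model. The sketch below uses a very rough rule of thumb (about four characters per token for English text; real tokenizers give exact counts), and the limit values are illustrative, not tied to any specific model:

```python
def rough_token_estimate(prompt):
    """Crude heuristic: roughly 4 characters per token in English text.
    A real tokenizer would give an exact count."""
    return max(1, len(prompt) // 4)

def fits_context(prompt, limit=4096, reserved_for_answer=512):
    """Check that the prompt leaves room for the model's reply
    inside an assumed context window of `limit` tokens."""
    return rough_token_estimate(prompt) <= limit - reserved_for_answer

print(fits_context("Explain the concept of prompt engineering."))  # → True
```

Reserving part of the window for the answer is the important idea: a prompt that technically fits but leaves no room for output is still too long.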

As an example, let's try a simple prompt to extract specific information from a text.

Extract the names of the places in the following text.
Desired format:
Place: <comma-separated list of place names>
Input: "Although these developments are encouraging for researchers, much still remains a mystery. "We often have a black box between the brain and the effect we see in the periphery," says Henrique Veiga-Fernandes, a neuroimmunologist at the Champalimaud Center for the Unknown in Lisbon. "If we want to use it in the therapeutic context, we have to actually understand the mechanism.""

Output:

Place: Champalimaud Center for the Unknown in Lisbon.

The input text is obtained from this Nature article.
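Because the prompt fixes the output format, the model's answer can be parsed mechanically. A minimal Python sketch, assuming the model actually respects the "Place:" format requested above:

```python
def parse_places(model_output):
    """Extract place names from a line of the form 'Place: a, b, c'."""
    for line in model_output.splitlines():
        if line.startswith("Place:"):
            items = line[len("Place:"):].split(",")
            # Strip surrounding spaces and trailing periods, drop empties.
            return [item.strip(" .") for item in items if item.strip(" .")]
    return []

places = parse_places("Place: Champalimaud Center for the Unknown in Lisbon.")
# → ['Champalimaud Center for the Unknown in Lisbon']
```

Pinning down the output format in the prompt is what makes this kind of downstream parsing reliable.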

Avoiding inaccuracy

It is always best to be specific and direct in providing instructions and information. The more direct the prompt, the more effective the output will be.

For example, you might be interested in learning about the concept of prompt engineering. You might try something like:

Explain the concept of prompt engineering. Keep the explanation short, just a few sentences, and don't be too descriptive.

Output:

Prompt engineering involves crafting specific instructions or queries to elicit desired responses from language models like me. It's a process of fine-tuning prompts to achieve more accurate and relevant outputs, often used to control the tone, style, or depth of information provided by the model. This technique is crucial for obtaining precise and tailored results in various applications.

As you can see, the output is not bad, although the prompt does not make clear how many sentences to use or what style to adopt. However, if we use a more direct statement, the output will be closer to what we want. For example, if we write:

Use 2-3 sentences to explain the concept of prompt engineering to a high school student.

The resulting output will be:

Prompt engineering is like giving a computer really clear instructions or questions to get the best answers. It's like finding the right way to ask a question so the computer understands exactly what you want. This helps get accurate and helpful responses from the computer.

Do or don't?

Another common tip when designing prompts is to avoid saying what not to do, but instead to say what to do. This increases specificity and helps the model focus on the details that lead to good responses.

Here is an example of a movie recommendation chatbot that does exactly what I don't want it to do, because the instruction focuses on what not to do.

An agent recommending films to a client is presented below. DO NOT ASK FOR INTERESTS. DO NOT ASK FOR PERSONAL INFORMATION.
Client: Please recommend a movie based on my interests.
Agent:

Output:

Sure! Can you tell me if you're in the mood for something adventurous or more on the romantic side?

Here, instead, is a more specific prompt.

The following is an agent recommending films to a client. The agent's task is to recommend a movie from among the most globally trending titles. It should refrain from asking users about their preferences and avoid asking for personal information. If the agent has no movie to recommend, it should respond, "Sorry, we could not find a movie to recommend today."
Client: Please recommend a movie based on my interests.
Agent:

The result is a clear and well-defined response that requires no further interaction.

I'd recommend checking out "Dune". It's currently one of the most popular movies globally. Enjoy!

You can find more examples in this article.

Conclusions

In this article we have analyzed how prompts should be structured to achieve the best results. The examples are for illustration only, and the outputs depend on the LLM used (in our case ChatGPT). Starting from these templates, we can construct prompts for our own use cases, with the understanding that only constant experimentation will lead to the right prompt for the application context of interest.
