AI: prompt engineering to reduce hallucinations [part 1]
![](https://flowygo.com/wp-content/uploads/2023/10/Prompt_Engineering-1024x575.webp)
Prompt engineering techniques allow us to improve the reasoning and responses of LLMs such as ChatGPT. But can we be sure the responses we receive are correct? In some cases, no! When this happens, the model is said to have hallucinated. Let's find out what hallucinations are and which techniques reduce the probability of getting them.