Mastering Prompt Engineering with ChatGPT: A Guide to Getting the Best Results

 ·  ☕ 10 min read  ·  ✍️ Iskander Samatov


ChatGPT has taken the Web by storm. It is very good at generating and analyzing information, and it continues to improve rapidly. It’s no wonder that nowadays it’s hard to find an industry that is not actively trying to integrate ChatGPT into its business.

But there’s more to interacting with ChatGPT than meets the eye. There’s a skill to crafting prompts for these AI models in a way that maximizes the accuracy and quality of the response, and it’s called prompt engineering. By mastering prompt engineering, you can reap many benefits in both your personal and professional life.

So in this post, I will cover what prompt engineering is, the fundamentals of successful prompt engineering, and a number of techniques you can use to get better quality responses.

Understanding ChatGPT

At the heart of ChatGPT lies an LLM, or large language model. These AI models focus on recognizing patterns within text and are trained on vast amounts of text data. LLMs are great for tasks like answering questions, generating text, translating languages, and more. What’s also cool about these models is that they understand conversational flow, so you can interact with them as if you were talking to a real person.

A sophisticated LLM, a vast amount of training data, and an iterative approach to improving the model through parameter tweaking and response rating are the ingredients that make ChatGPT so powerful.

Since ChatGPT is a generative AI, its main approach to generating content is pattern matching. It generates new content based on patterns and styles of the data it has seen previously.

As powerful as ChatGPT is, there are some limitations you should be aware of. The most important one is that its training data has a cutoff date. At the time of writing, the most up-to-date model is GPT-4, with a training-data cutoff of April 2023.

What is Prompt Engineering?

As I already mentioned, prompt engineering is the process of crafting inputs for AI models in a way that maximizes the quality, relevance, and accuracy of the responses. The general philosophy when it comes to crafting prompts is “garbage in, garbage out”, which promotes writing clear, concise, and focused prompts.

The opposite is writing prompts that are unfocused, contain too much information, and are ambiguous about their desired outcomes.

While ChatGPT is powerful on its own, it becomes far more effective when you follow best practices and know how to use it.

Crafting Effective Prompts

Now that you are aware of what prompt engineering is and the importance of crafting effective prompts, let’s dive into some practical strategies.

Key Characteristics

The key characteristics of an effective prompt are clarity, specificity, and open-endedness. Clarity ensures that the AI model understands exactly what is being asked, while specificity helps to narrow down the scope of the response. Open-endedness, on the other hand, allows for creative and comprehensive answers from the model.

By balancing these three elements, you can craft prompts that elicit high-quality and relevant responses from ChatGPT. For example, let’s compare these two prompts that ask for cooking directions:

"How do I cook meat?" vs "Explain how to cook a brisket for a beginner, including preparation, cooking time, and common mistakes to avoid.

The first example is vague and will result in a generic response that most likely will not address your specific needs. The second example, on the other hand, should result in a more thorough, tailored answer aimed at a beginner. It will cover all of the important points that you would like to see in your cooking directions.

SALT Framework

We are already off to a good start, but you can refine your prompts even further. To do that, you can apply the SALT framework. SALT stands for Style, Audience, Length, and Tone. To ensure that ChatGPT’s response aligns with your desired output, consider these elements when crafting prompts.

Going back to our cooking example, here is how you can apply the SALT framework:
"Write a 500-word article in a friendly and conversational tone, explaining how to cook a brisket for a beginner. The article should include step-by-step instructions with a focus on avoiding common mistakes. Assume the audience has little to no cooking experience.”

Now, let’s break down how the SALT framework works in this example:

  • Style: The prompt specifies a “friendly and conversational” style which will produce a post written in a casual and approachable manner.
  • Audience: It clarifies that the target audience is beginners with little cooking experience, making sure that the explanation will be easy to understand.
  • Length: The request for a “500-word article” provides a concrete word count to shape the depth and detail of the response. Note, however, that in some cases, ChatGPT might not follow this requirement exactly and exceed the limit.
  • Tone: Asking for a friendly tone tells ChatGPT to be warm and encouraging rather than dry and technical.

As a side note: Using audience control when prompting ChatGPT can also be extremely useful when learning complex ideas. For example, here’s a prompt I used recently:

"Explain the concept of distribution in statistics like I'm a 10-year-old.”

Formatting your prompt

The format of your prompt can often be just as important as its content. ChatGPT performs much better when the format of the prompt is clearly structured and properly emphasizes the important points.

A useful technique for formatting your prompt is to use Markdown. Markdown lets you emphasize important parts of your prompt with bold or italic text and structure it with headings, subheadings, and lists.
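Here is one way a Markdown-structured prompt could look in practice, with headings separating the task from its constraints (the exact layout is just one option, not a required format):

```python
# A prompt whose sections are marked with Markdown headings and
# whose key constraints are emphasized with **bold** text.
prompt = """
# Task
Explain how to cook a brisket for a **complete beginner**.

## Requirements
- Cover preparation, cooking time, and resting
- Call out **common mistakes** in a separate section

## Output format
Numbered steps, each with a one-sentence explanation.
""".strip()

print(prompt)
```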

Techniques in Prompt Engineering

Zero-shot. One-shot. Few-shot.

Now that we have covered the general principles of effective prompt engineering, let’s dive into specific techniques that we can apply to different scenarios.

Since ChatGPT is great at pattern matching based on existing data, providing examples can make a huge difference in the output.

With that in mind, there are three distinct techniques that can be used for different use cases: Zero-shot, One-shot and Few-shot learning.

Zero-shot learning is when you ask ChatGPT to generate a response or complete a task without providing any specific examples or prior data. With this approach, ChatGPT will rely on its broad, pre-trained knowledge. By not being constrained to specific examples, zero-shot learning encourages more open-endedness, allowing the model to interpret and respond to tasks creatively.

As you might have guessed at this point, one-shot learning is when you provide exactly one example to ChatGPT. One-shot learning is useful when you don’t have many examples, but you still want the AI model to follow a certain pattern in its response.

And finally, few-shot learning is when you provide three or more examples. This is especially useful if you want ChatGPT to emulate a certain style or format. A common use case is tailoring responses to match personal or organizational communication styles. It is also useful for providing the model with specific, reproducible decision-making frameworks.
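Few-shot prompting can be as simple as concatenating labeled examples ahead of the new input so the model can infer the pattern. A minimal sketch, using a made-up sentiment-labeling task (the example data and separators are my own):

```python
# Labeled examples that demonstrate the pattern we want the model to follow.
examples = [
    ("This brisket recipe was a disaster.", "negative"),
    ("Juiciest brisket I have ever made!", "positive"),
    ("The rub was fine, nothing special.", "neutral"),
]

def few_shot_prompt(examples: list, new_input: str) -> str:
    """Prepend labeled examples, then leave the final label blank for the model."""
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {new_input}\nSentiment:"

prompt = few_shot_prompt(examples, "Best barbecue guide I have read.")
print(prompt)
```

Ending the prompt at `Sentiment:` invites the model to complete the pattern rather than explain it.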

Chain of thought

Now, let’s talk about another handy technique: chain-of-thought prompting. Chain of thought improves the model’s reasoning by encouraging it to outline a step-by-step solution to a specific problem.

Going back to our cooking example, here’s how you can apply the chain of thought prompting:

"Explain how to cook a brisket step by step, starting from choosing the right cut of meat to slicing it for serving. Include preparation, seasoning, cooking techniques, and resting time, with an explanation for each step."

Using this technique to guide the model ensures that the details that are important to you are not overlooked, and that the model’s reasoning is sound.

Chaining Prompts and Breaking Down Complex Tasks

Prompt chaining

One of the features that makes ChatGPT so powerful is the awareness of the previous context.

This awareness allows us to interact with ChatGPT in a conversational style by asking questions, receiving answers, and refining the prompt. This is drastically different from how we’re traditionally used to interacting with software.

This process of asking questions and further refining your prompt based on the answers is called prompt chaining. It’s a different approach from trying to get everything right with a single prompt. Prompt chaining is perfect for breaking down complex tasks that require ChatGPT to be aware of a lot of context and nuance. The longer you sustain a focused conversation and the more context you provide, the better its responses will become. This again goes back to the idea that prompt engineering is a dynamic process.
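Under the hood, this context awareness is just the full conversation being passed back on every turn. If you use the API rather than the chat UI, you maintain that history yourself, typically as a list of role/content messages. A sketch of how each refinement chains onto the existing context (the message shape below follows the common chat-completion convention; the conversation content is invented):

```python
# Each refinement appends to the same history, so the model sees
# every prior turn when generating its next answer.
history = [
    {"role": "user", "content": "Explain how to smoke a brisket."},
    {"role": "assistant", "content": "1. Trim the fat cap... (full answer)"},
]

def refine(history: list, follow_up: str) -> list:
    """Chain a new prompt onto the existing conversation context."""
    return history + [{"role": "user", "content": follow_up}]

history = refine(history, "Now shorten that to five steps for a beginner.")
print(len(history))
```

Because the whole list is resent each turn, a long, unfocused conversation dilutes the context, which is why one-topic-per-chat works best.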

Breaking down instructions

Another useful approach for guiding and helping the model solve complex tasks is to break down the solution into manageable, step-by-step instructions:

Provide a step-by-step guide for smoking a brisket. Start with choosing the right cut, then explain trimming, seasoning, and marinating. Next, describe the smoking process, including temperature and timing. Finish with how to rest and slice the brisket.

Doing so helps ChatGPT yield more accurate responses that align better with your desired output because you are essentially providing it with the specific details to focus on while coming up with the answer.

While ChatGPT’s ability to keep track of the context is useful and can be further amplified by using the techniques we’ve covered so far, it’s important to keep in mind that ChatGPT has its limits. The model’s ability to remember context diminishes if the topic of the conversation shifts frequently. So a good rule of thumb is to keep one conversation per topic, and if you have a new problem to solve, it’s best to start a new chat.

Ensuring Accuracy and Fact-Checking

Hallucinations

Now that we have covered techniques for improving the quality of the prompt, let’s cover some techniques for ensuring accuracy and factual correctness.

When discussing accuracy, it is important to be aware of the concept of hallucinations in AI. Hallucinations are situations where the model confidently gives you wrong answers that often sound believable.

Hallucinations often happen when ChatGPT cannot provide correct answers because the information falls after its training cutoff date. If your prompt requires factual accuracy, you can add a sentence at the end of the prompt along the lines of: “If you don’t know the answer and believe this information is after your cutoff date, please specify that you don’t know.”
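If you apply this guard to many prompts, it is easy to automate as a small helper. The exact wording of the guard sentence below is just one option:

```python
# An instruction telling the model to admit uncertainty instead of guessing.
GUARD = (
    "If you don't know the answer or believe this information is "
    "after your training cutoff date, say that you don't know."
)

def with_hallucination_guard(prompt: str) -> str:
    """Append the uncertainty instruction to any prompt."""
    return f"{prompt.rstrip()}\n\n{GUARD}"

print(with_hallucination_guard("When was the brisket world record set?"))
```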

Another useful way to ensure that the response doesn’t contain hallucinations is to explicitly ask ChatGPT to cite the sources.

Here’s how you can use this technique to make absolutely sure that ChatGPT is giving you legit cooking directions:

"Describe the process of smoking a brisket and include any relevant expert tips from well-known barbecue pitmasters. Cite the sources where these tips come from if possible.”

Asking for factual correctness this way reduces the chance of hallucinations and other inconsistencies in the output.

Web browsing

The paid ChatGPT+ plan allows the model to browse the web for information. This feature can also be highly useful if you absolutely have to ensure factual correctness.

For example, you can explicitly instruct ChatGPT to only generate responses by browsing the internet and citing reliable sources:

"Browse the web to find expert opinions from professional pitmasters on the best wood for smoking brisket. Summarize their recommendations and cite the sources.”

Conclusion

And that’s it for this post!

In this post, we covered what prompt engineering is along with some of the key concepts for creating effective prompts. We looked at techniques and frameworks such as SALT, zero-, one-, and few-shot learning, chain-of-thought prompting, prompt chaining, and evaluating responses for accuracy.

Just like with anything else, practice makes perfect. So I suggest you experiment with applying these frameworks and techniques to your prompts to see what works best for you.

If you’d like to get more web development, React and TypeScript tips consider following me on Twitter, where I share things as I learn them.

Happy coding!
