10 Essential Prompt Engineering Criteria to Kickstart Your Success

Jollen Moyani - May 30 '23 - Dev Community

Prompt engineering is a vital aspect of working with large language models like ChatGPT. It involves creating effective prompts that lead to accurate and useful responses. Prompts are questions or instructions given to a large language model (LLM) to obtain a specific output.

Whether you are a software developer, data analyst, or business professional, understanding prompt engineering can help you get the most out of your LLM.

LLMs are powerful machine learning models trained on vast amounts of text data. They produce high-quality text responses that are often indistinguishable from human-written responses. These models can perform various tasks such as content generation, content classification, and question answering.

Why should prompts be effective?

Effective prompts are crucial to getting the best results from LLMs. The text input guides the model’s response in the right direction and allows us to specify the information we need. A well-crafted prompt with the right keywords and conditions can extract specific information from the model, such as the capital of a country.

Conversely, a poorly constructed prompt can lead to inaccurate or irrelevant information. Therefore, it is essential to structure the prompt thoughtfully, choosing the right words and phrases to direct the model’s response.

By doing so, we can ensure we receive the desired information from the LLM.

Criteria for good prompts

Let’s discuss the top 10 characteristics of a good prompt:

1. Relevancy

The prompt given to an LLM should be clear and specific, providing enough information for it to generate the desired output. Ambiguity in the prompt can result in inaccurate or irrelevant responses.

2. Context

The context in prompts refers to information included in the request that provides a better framework for producing the desired output. Context can be any relevant details, such as keywords, phrases, sample sentences, or even paragraphs, that inform the LLM. Providing context to your prompts allows the model to generate more accurate and relevant responses.

3. Tailored to the target audience

The prompt should be tailored to the target audience to ensure the generated text is appropriate and relevant. For instance, prompts designed to produce technical writing should differ from those designed for creative writing.

4. Designed for a specific use case

The prompt should be created with a specific use case in mind. Include details about where and why you are going to publish the material. These will influence the prompt’s tone, language, and style, thereby affecting the output.

5. Include high-quality training data

The quality of the generated output is directly related to the quality of any training data included in the prompt. High-quality training data can help ensure that the LLM generates accurate and relevant output based on the prompt.

6. Proper wording

The wording of the prompt plays a crucial role in determining the quality and accuracy of the generated output. It should be clear, specific, and unambiguous so that the generated text is relevant and appropriate.

Improving the wording

Let’s see some examples of improving the wording in prompts to get accurate results:

  • Vague wording: Generate a program that takes input and produces output.

This prompt is too vague: it provides no information about the expected input or output, which will lead to irrelevant or inaccurate responses from the model.

  • Clear wording: Write a C# program that calculates the average of three numbers and displays the result.

This prompt is clear and specific, giving the model a well-defined task. It includes relevant keywords, such as C# and average, to provide context.
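For reference, one possible response to this prompt is a short program along the following lines; the exact code ChatGPT returns will vary:

using System;

class Program
{
    static void Main()
    {
        // Three sample numbers; these could also come from user input.
        double a = 4, b = 7, c = 10;

        // The average is the sum divided by the count of numbers.
        double average = (a + b + c) / 3;

        Console.WriteLine($"The average of {a}, {b}, and {c} is {average}");
    }
}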

  • Technical wording: Develop a C# console application that implements a bubble sort algorithm on an array of integers.

This prompt is tailored for a technical audience and requires the model to know C# programming concepts and algorithms. It also includes technical terms such as console application and bubble sort algorithm.
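A response to this prompt might look roughly like the sketch below; ChatGPT’s actual output will differ in details such as the sample data:

using System;

class Program
{
    static void Main()
    {
        int[] numbers = { 5, 2, 9, 1, 7 };

        // Bubble sort: repeatedly swap adjacent elements that are out of order
        // until the array is sorted.
        for (int i = 0; i < numbers.Length - 1; i++)
        {
            for (int j = 0; j < numbers.Length - 1 - i; j++)
            {
                if (numbers[j] > numbers[j + 1])
                {
                    (numbers[j], numbers[j + 1]) = (numbers[j + 1], numbers[j]);
                }
            }
        }

        Console.WriteLine(string.Join(", ", numbers)); // 1, 2, 5, 7, 9
    }
}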

  • Creative wording: Create a C# program that tells a story using randomly generated words.

This prompt is designed for a creative writing use case and requires the model to generate imaginative and engaging text. It includes the keyword C# to provide a specific use case for the model.

  • Domain-specific wording: Write a C# program that calculates the financial data for a retail store, including revenue, expenses, and profit margins.

This prompt is tailored for a specific use case related to financial data analysis. It includes relevant domain-specific terms such as revenue, expenses, and profit margins to provide context for the model.
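One plausible shape for the generated program is sketched below; the figures and variable names are illustrative assumptions, not part of the original prompt:

using System;

class Program
{
    static void Main()
    {
        // Hypothetical figures for a retail store; in practice these would
        // come from sales and accounting records.
        double revenue = 120000;
        double expenses = 85000;

        double profit = revenue - expenses;
        double profitMargin = profit / revenue * 100; // as a percentage

        Console.WriteLine($"Revenue:       {revenue:C}");
        Console.WriteLine($"Expenses:      {expenses:C}");
        Console.WriteLine($"Profit:        {profit:C}");
        Console.WriteLine($"Profit margin: {profitMargin:F2}%");
    }
}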

7. Apply formatting

Formatting is significant in the prompt submitted to LLMs because it can impact the generated output. Formatting your prompts with bullet points, numbered lists, and delimiters makes each requirement easier for the model to parse. If the prompt is not properly formatted, it can result in unclear or incomplete information being fed into the model, leading to irrelevant or inaccurate responses. Formatting also improves the prompt’s readability and overall quality.

8. Provide conditions

Conditions in prompt engineering refer to adding additional information or constraints to the prompt to guide the generated response in a specific direction. They may be specific keywords, guidelines, or instructions for the model to follow in producing content. Overall, conditions allow for greater control over the generated output and can help improve the quality and relevance of the result.

9. Appropriate vocabulary

Using industry-specific terminology, jargon, or technical terms can help improve the relevance and accuracy of the generated text for a specific use case. For example, Maui generally means the Hawaiian island, but from a developer’s perspective, it refers to Microsoft’s cross-platform framework. So in the prompt, a developer would have to use .NET MAUI instead of Maui. However, balancing this with simplicity and clarity is important to ensure the target audience understands the generated text. Additionally, incorporating feedback from users and domain experts can help refine and improve the vocabulary used in LLM prompts.

Examples :

  • Technical writing prompt: “Write a detailed user manual for ……….”
  • Creative writing prompt: “Write a short story about ……….”
  • Academic writing prompt: “Write a research paper on ……….”
  • Marketing copy prompt: “Write persuasive ad copy for……….”

10. Provide a role

Assigning a role to the LLM in the prompt can help establish expectations for the generated text’s tone, language, and style, making it more consistent, professional, and ultimately useful.

Examples

Let’s see some examples that use the prompt criteria we’ve covered. These examples all use ChatGPT as the LLM.

Each example pairs a prompt that meets the criterion (with the criteria) with one that doesn’t (without the criteria).
Relevancy

With the criteria: Write C# code to find the maximum value in an array of integers. If there are multiple maximum values, return the index of the first occurrence.

This prompt provides relevant details about what type of array to use and how to handle cases with multiple maximum values. With this information, ChatGPT can generate more accurate code that fits the user’s request.
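For illustration, a response along these lines would satisfy the prompt; the method name IndexOfFirstMax is just an illustrative choice:

using System;

class Program
{
    // Returns the index of the first occurrence of the maximum value in the array.
    static int IndexOfFirstMax(int[] numbers)
    {
        int maxIndex = 0;
        for (int i = 1; i < numbers.Length; i++)
        {
            // A strictly greater value moves the index; ties keep the first occurrence.
            if (numbers[i] > numbers[maxIndex])
            {
                maxIndex = i;
            }
        }
        return maxIndex;
    }

    static void Main()
    {
        int[] values = { 3, 8, 8, 1 };
        int index = IndexOfFirstMax(values);
        Console.WriteLine($"Maximum value {values[index]} first occurs at index {index}");
    }
}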

Without the criteria: Write code to find the maximum value in an array.

This prompt is too general: it doesn’t specify what type of array to use or how to handle cases with multiple maximum values.

Context

With the criteria: Write a method to add two numbers together and return the result as a formatted string, where the first parameter is the dollar amount and the second parameter is the cents.

This prompt provides additional context that helps ChatGPT understand the problem better. By including the desired output format and specifying that the numbers represent dollars and cents, ChatGPT can generate more accurate and useful code.
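For example, the response could reasonably take the following shape; the method name AddMoney and the assumption that the cents parameter is converted into a fraction of a dollar are illustrative choices, since the prompt leaves them open:

using System;

class Program
{
    // Adds a dollar amount and a number of cents and returns the total
    // as a currency string formatted for the current culture, e.g. "$12.75".
    static string AddMoney(double dollars, double cents)
    {
        double total = dollars + cents / 100;
        return total.ToString("C");
    }

    static void Main()
    {
        Console.WriteLine(AddMoney(12, 75)); // $12.75
    }
}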

Without the criteria: Write a method to add two numbers together.

This prompt is too simple and doesn’t provide any context for the problem. ChatGPT can generate code to add two numbers together, but it won’t know the method’s purpose or how it will be used.

Tailored to a target audience

With the criteria: Write a beginner’s guide to using a software program for graphic designers.

This prompt targets a specific audience: graphic designers who are new to the software program. With this instruction, ChatGPT has a better understanding of the intended audience and can tailor the guide accordingly. The prompt can be further improved by including additional details about the audience, such as their skill level and specific needs, and by naming the software program the guide should cover.

Without the criteria: Write a guide on how to use a software program.

This prompt is too general and lacks information about the target audience. It’s difficult for ChatGPT to generate a useful guide without knowing who the guide is intended for.

Specific use case

With the criteria: Write an algorithm for a login page for a social media app that allows users to sign in with their email or phone number.

This prompt provides a specific use case for the login page: the entry point of a social media app. With this information, ChatGPT can generate a procedure that lists all the necessary functionality for a login page, such as a form for users to enter their email or phone number and password, as well as a way to verify the user’s credentials and let them log in to the app.

Without the criteria: Write an algorithm for a login page.

This prompt is too general and lacks information about the intended use case. ChatGPT could generate more useful output if it knew the specific use case or context for the login page.

Training data

With the criteria: Write code to calculate the area of a rectangle, but allow the user to input the length and width of the rectangle in either centimeters or inches. The expected code is:

public static double CalculateRectangleArea(double length, double width, string unit)
{
   double area = 0;
   // (unit handling and area calculation expected here)
   return area;
}

This prompt includes additional training data for ChatGPT to generate code that meets the specified requirements. The training data should include examples of how to convert centimeters to inches and vice versa, as well as how to calculate the area of a rectangle with the given length and width in either unit of measurement. This information can be provided as example code snippets or written instructions that ChatGPT can use to generate the desired code.
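For illustration, a completed version of that skeleton might look like the sketch below; the convention of always returning the area in square centimeters is an assumption, since the prompt leaves the output unit open:

public static double CalculateRectangleArea(double length, double width, string unit)
{
    double area = length * width;

    // Assumed convention: report the area in square centimeters,
    // converting from inches when necessary (1 inch = 2.54 cm).
    if (unit == "in")
    {
        area *= 2.54 * 2.54;
    }

    return area;
}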

Without the criteria: Write code to calculate the area of a rectangle.

This prompt is too simple and doesn’t include any specific training data. ChatGPT can easily generate code to calculate a rectangle’s area without additional training data, but it may not produce code in the format you were hoping for.

Proper wording

With the criteria: Write a C# program that calculates the financial data for a retail store, including revenue, expenses, and profit margins.

This prompt uses domain-specific wording related to financial data analysis. It includes relevant terms such as revenue, expenses, and profit margins to provide context for the model.

Without the criteria: Generate a program that takes input and produces output.

This prompt is too vague and does not provide any information about the expected input or output. It can lead to irrelevant or inaccurate responses from ChatGPT.

Formatting

With the criteria: Review the following aspects of the provided code:

  1. Should meet mobile device accessibility:
    • Screen text reader
    • Keyboard navigation
    • Color contrast
    • Automation properties
  2. Should have 100% code coverage
  3. Should use Material theme sizing and colors

This prompt breaks down the review aspects into three clear points with subpoints for the first, making it easier for ChatGPT to understand the requirements and focus on each aspect separately. The prompt also uses ordered lists and proper indentation, which makes it more visually appealing and easy to read. The formatting of the prompt helps ensure that ChatGPT gets all the important details and can efficiently complete the code review.

Without the criteria: Please review the following code and ensure it meets all standards like accessibility, UI, and test coverage.

This prompt lacks clarity and could benefit from more details about each standard. Providing these additional details in the form of a list gives ChatGPT a clear set of items to assess.

Conditions

With the criteria: Write a C# program that calculates the sum of two user-input numbers. The program should prompt the user for input, validate that the input is numeric, and output the result with two decimal places.

This prompt provides a set of conditions for the generated text. The model is guided to generate a response that meets the specified criteria, such as using a particular programming language (C#), validating the user’s input, and formatting the output to two decimal places.
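A response that satisfies all three conditions might look something like this sketch; the helper name ReadNumber is illustrative:

using System;

class Program
{
    static void Main()
    {
        double first = ReadNumber("Enter the first number: ");
        double second = ReadNumber("Enter the second number: ");

        // Output the result with two decimal places, as the prompt requires.
        Console.WriteLine($"Sum: {first + second:F2}");
    }

    // Keeps prompting until the user enters a valid numeric value.
    static double ReadNumber(string prompt)
    {
        while (true)
        {
            Console.Write(prompt);
            if (double.TryParse(Console.ReadLine(), out double value))
            {
                return value;
            }
            Console.WriteLine("Invalid input. Please enter a numeric value.");
        }
    }
}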

Without the criteria: Write a program that calculates the sum of two numbers.

This prompt provides no specific conditions for the generated text. The model is free to generate any response it deems appropriate based on its training data.

Vocabulary

With the criteria: Write C# code to add two numbers and return the result in double type.

This prompt uses technical language, such as C# code, add, result, and double type, which is specific and concise, making it appropriate for communicating a programming task to ChatGPT.

Without the criteria: Write an essay to add two numbers and return the result as a double type.

This prompt uses general language, such as essay, add, and result, which may not provide enough context or specificity for ChatGPT to generate the desired output accurately. Additionally, essay implies a different writing style and structure than the programming task implied by double type.

Role

With the criteria: You are a software developer. What is Maui?

This prompt produces output that explains the .NET MAUI cross-platform technology from the viewpoint of a software developer.

Without the criteria: What is Maui?

This prompt outputs an explanation of the geographic island because ChatGPT has no indication that you are asking about a software development framework.

Conclusion

Thanks for reading! I hope you found this blog helpful in understanding the importance of prompt engineering for LLMs like ChatGPT to get accurate results. Try out the tips listed in this blog post and leave your feedback in the comments section below.

Syncfusion provides the world’s best UI component suite for building robust web, desktop, and mobile apps. It offers over 1,800 components and frameworks for WinForms, WPF, WinUI, .NET MAUI, ASP.NET (Web Forms, MVC, Core), UWP, Xamarin, Flutter, JavaScript, Angular, Blazor, Vue, and React platforms.

Current customers can access the latest version of Essential Studio from the License and Downloads page, and Syncfusion-curious developers can use the 30-day free trial to explore its features.

You can contact us through our support forums, support portal, or feedback portal. We are always happy to assist you!
