August 15, 2024

LLM Prompting: How to Prompt LLMs for Best Results

Not sure what LLM prompting is or how to craft quality prompts that get great results? Learn what it takes to write effective LLM prompts in this post.

Getting the best results from your large language models involves thoughtfully crafting your prompts. Aiming for conciseness, structure, and context, and including the details the model actually needs, can make LLMs' outputs more accurate.

We’ll teach you how to improve your existing prompts so you can take advantage of LLMs’ full capabilities.

What Is LLM Prompting?

LLM prompting is creating specific inputs that help guide your large language model in generating the desired outcome. It can involve asking questions or giving detailed instructions.

Certain large language models heavily rely on the details and structure of prompts (input) to provide accurate and relevant responses. This means prompting has a direct impact on the quality of the output results.

Therefore, it’s best to think of LLM prompting as strategic communication to achieve optimal results.

Prompting is one of the areas where humans and AI need to work together to achieve the best results.

What Is an LLM Prompt?

An LLM prompt is simply the input or query you provide to a large language model.

The response depends on the quality of your inputs (prompts).


The most common LLM prompt types include:

  • Questions
  • Responses
  • Statements
  • Detailed instructions

Here is an example of a prompt:

“List the top five ethical concerns surrounding the development of AI using bullet points.”

You can ask questions, request explanations, create content, or even analyze data using prompts, so it’s important to learn how to craft good LLM prompts.
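Under the hood, a prompt is just a string handed to the model, which means you can assemble one programmatically. The `build_prompt` helper below is a hypothetical sketch for illustration, not part of any real library:

```python
def build_prompt(topic: str, n_items: int, fmt: str) -> str:
    """Assemble a prompt that names the task, the item count, and the output format."""
    return f"List the top {n_items} {topic} using {fmt}."

prompt = build_prompt("ethical concerns surrounding the development of AI", 5, "bullet points")
# prompt == "List the top 5 ethical concerns surrounding the development of AI using bullet points."
```

Building prompts from parts like this also makes it easy to vary the topic or format without rewriting the whole prompt by hand.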


Here are the do’s and don’ts of prompt engineering techniques:

Prompt Engineering Techniques: How to Write Good LLM Prompts

Writing good LLM prompts requires you to:

  • Be specific and clear
  • Structure the prompts
  • Provide context when possible
  • Ask open-ended questions
  • Ask for examples
  • Avoid ambiguity
  • Tailor prompts to model capabilities
  • Be concise and comprehensive

1. Be Clear and Specific

Being clear and specific helps the model understand your needs and provide more accurate responses.

A great example of clear and specific prompt engineering is:

“Explain the main differences between supervised and unsupervised learning in AI.”

Details help the model identify what you need. Being specific provides better guidance for the model and ensures outputs align with your needs.

Don’ts

Don’t use vague and overly broad prompts like: “Tell me about artificial intelligence.”

Vagueness can lead to a wide range of outputs, which might not be what you’re looking for. Instead, guide the AI model with more specific and clear instructions.

2. Structure the Prompts

Structuring the prompts helps increase clarity and focus in the model’s output. Organizing a prompt using bullet points, numbering, or headings helps the LLM understand each part of your input.

For example, if you want to know more about the advantages and disadvantages of AI in education, separating these two in a single prompt will ensure a more comprehensive response.

You can also ensure that the outputs are structured by asking the model to use bullet points or numbered lists, or splitting its output into headings:

“Tell me about the advantages and disadvantages of AI in education. Separate advantages and disadvantages in subheadings, and list them using bullet points.”
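One way to keep structured prompts consistent is to assemble them from their parts. The helper below is a minimal sketch under assumed names, not a prescribed pattern:

```python
def structured_prompt(topic: str, sections: list[str], style: str = "bullet points") -> str:
    """Build a prompt that requests each section under its own subheading."""
    lines = [f"Tell me about {topic}."]
    lines.append(f"Separate your answer into the following subheadings, listing points as {style}:")
    lines.extend(f"- {section}" for section in sections)
    return "\n".join(lines)

print(structured_prompt("AI in education", ["Advantages", "Disadvantages"]))
```

Each section name becomes an explicit instruction, so the model knows exactly how to split its response.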

Don’ts

Avoid adding multiple requests to a single, unstructured prompt. This can confuse the LLM and result in incomplete or flawed responses.

3. Provide Context When Possible

Relevant background information, or an explanation of the purpose of your request, can help the model generate a response aligned with your needs. This is very useful for complex and specific topics where the LLM might not grasp the input without context.

For example, if you want to learn more about AI’s ethical concerns for your presentation, your prompt can look like this:

“Can you outline the main ethical concerns of AI in bullet points for my presentation?”

If you want your LLM to provide better responses, familiarize it with your use case.
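A simple way to make context a habit is to separate it from the task when building the prompt. This is a sketch of my own, with assumed names, rather than a standard convention:

```python
def prompt_with_context(request: str, context: str) -> str:
    """Prepend background so the model knows the use case before it sees the task."""
    return f"Context: {context}\n\nTask: {request}"

p = prompt_with_context(
    "Outline the main ethical concerns of AI in bullet points.",
    "I am preparing a short presentation for a non-technical audience.",
)
print(p)
```

Labeling the two parts explicitly ("Context:" / "Task:") keeps the background from blurring into the instruction itself.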

Don’ts

Don’t be vague, exclude context, or believe the LLM will understand your needs with minimal details. Using out-of-context prompts like “Give me some ethical concerns.” will not result in a response tailored to your specific needs.

4. Ask Open-Ended Questions

Asking open-ended questions is a great way to craft prompts that will result in detailed and specific outputs. Such questions encourage the large language model to explore complex topics instead of providing a simple yes/no answer.

Instead of asking “Is AI important?”, you can expand the scope of your question like this:

“What are the potential impacts of advanced AI in the next decade?”

Such natural language prompts will push the model to explore deeper and come up with richer and more informative results. This will help you uncover deeper insights rather than just basing your knowledge on general information.

Asking open-ended questions is an excellent prompt engineering strategy to utilize an LLM’s capability to explore and analyze more data.
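A rough way to catch yes/no phrasing before sending a prompt is to check its opening word. This heuristic is an illustrative sketch, not a rule from any library, and it will miss plenty of cases:

```python
# Auxiliary verbs that usually open a closed (yes/no) question.
CLOSED_OPENERS = {"is", "are", "was", "were", "do", "does", "did",
                  "can", "could", "will", "would", "should", "has", "have"}

def looks_closed(prompt: str) -> bool:
    """Heuristic: a question opening with an auxiliary verb usually invites yes/no."""
    words = prompt.strip().split()
    return bool(words) and words[0].lower() in CLOSED_OPENERS

looks_closed("Is AI important?")  # True: better rephrased as an open-ended question
looks_closed("What are the potential impacts of advanced AI in the next decade?")  # False
```

If the check fires, rewriting the question to start with "what", "how", or "why" usually widens its scope.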

Don’ts

Don’t use yes or no prompts if you require a detailed response. Also, don’t ask short questions without giving much context or explanation.

5. Ask for Examples

Examples not only improve the clarity of the outputs but also lead to better results. Asking the LLM to explain certain things with examples can make complex topics easier to understand.

Encouraging the model to use examples will provide illustrations as outputs that clarify concepts, make information more accessible, and provide an engaging learning experience.

For example, instead of requesting the LLM to explain blockchain technology, ask it to illustrate how it works using examples related to a certain industry.

Such prompts could look like this:

“Explain blockchain technology using examples related to the banking industry.”
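Such example-requesting prompts can be templated so the concept and industry are easy to swap. The helper below is an assumed illustration, not an established API:

```python
def explain_with_examples(concept: str, industry: str) -> str:
    """Ask for an explanation grounded in industry-specific examples."""
    return (f"Explain {concept} using examples related to the {industry} industry. "
            "Keep each example to two or three sentences.")

print(explain_with_examples("blockchain technology", "banking"))
```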

Don’ts

When requesting the LLM to use examples, some of the things you shouldn’t do include:

  • Assuming familiarity
  • Using complex references
  • Relying on ambiguous language
  • Mixing analogies
  • Neglecting to clarify the purpose

6. Avoid Ambiguity

Avoiding ambiguity enhances the quality and relevance of the outputs.

It also reduces the chance of multiple interpretations of one request. Improving your prompts’ clarity will ensure your model understands your needs and reduces misunderstandings.

For example, instead of using a prompt like: “Talk about AI learning,” you could use the following LLM prompt:

“Describe how reinforcement learning differs from supervised learning in AI.”

Avoiding ambiguous language helps the LLM generate accurate outputs that align with your needs.

Don’ts

When avoiding ambiguous language, don’t mix up concepts or combine unrelated topics into one prompt.

Avoid pronouns that don’t specify the subject or object, as this can easily lead to misunderstanding. Instead of using a prompt like “What are its benefits?”, use a prompt like “What are the benefits of [object]”.

Lastly, never use jargon without explanation. Some models may not have relevant industry knowledge, so providing definitions or context for clarity will help improve the output’s accuracy.
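If you reuse prompts with bracketed placeholders like “[object]”, a small guard can make sure none of them slip through unfilled and reach the model as an ambiguous reference. This sketch and its names are assumptions for illustration:

```python
import re

def fill_placeholders(template: str, **values: str) -> str:
    """Replace [name] placeholders and fail loudly if any remain,
    so an ambiguous prompt never reaches the model."""
    result = template
    for name, value in values.items():
        result = result.replace(f"[{name}]", value)
    leftover = re.findall(r"\[(\w+)\]", result)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return result

fill_placeholders("What are the benefits of [object]?", object="reinforcement learning")
# -> "What are the benefits of reinforcement learning?"
```

Failing loudly on a leftover placeholder is the point: a vague “its” or an empty “[object]” is exactly the ambiguity this section warns against.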


7. Tailor Prompts to Model’s Capabilities

Understanding the strengths and weaknesses of your LLM allows you to use prompts that leverage its unique capabilities.

Many large language models excel at generating content, summarizing information, or providing explanations, so using prompts within these capabilities will help improve the quality and relevance of the model’s outputs.

Knowing what type of LLM you have and what it excels at will help you shape and craft prompts to play to these strengths. As a result, you’ll receive more relevant and engaging outputs.

Don’ts


Don’t expect real-time, up-to-the-minute information, as it may fall outside the LLM’s training data. Trying to get real-time information from an LLM that isn’t capable of providing it can lead to inaccurate and outdated results.

8. Be Concise and Comprehensive

It’s important to balance conciseness and thoroughness to help the model focus on key elements of your prompt without overwhelming it with information.

Finding the balance will help the model provide detailed yet accurate and focused responses on specific topics. For example, instead of asking your LLM to explain a few different and similar topics, try to streamline your request with a prompt like:

“Explain the process of how a neural network learns, focusing on backpropagation.”
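There is no universal cut-off for prompt length, but a rough word-count guard can flag prompts that have drifted from concise to rambling. The 60-word threshold below is an arbitrary assumption for illustration, not a model limit:

```python
def is_concise(prompt: str, max_words: int = 60) -> bool:
    """Rough proxy for conciseness: flag prompts that exceed a word budget."""
    return len(prompt.split()) <= max_words

is_concise("Explain the process of how a neural network learns, focusing on backpropagation.")  # True
```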

Don’ts

Don’t overload the LLM with long prompts and go into excessive detail. This can dilute the main question and won’t get you accurate responses.

How to Test LLM Prompts

Now that you’re ready to craft quality LLM prompts, it’s time to learn how to test these prompts to ensure you get quality outputs.

Testing LLM prompts helps you evaluate their effectiveness based on the quality of the outputs they produce.

The key metrics you can test your LLM prompts for include:

  • Grounding — Grounding is determined by comparing the LLM’s outputs against ground truths in a specific domain. This metric can show you how accurate your LLM is in specific domain knowledge.
  • Relevance — Relevance indicates if the LLM’s outputs fit your expectations.
  • Efficiency — Efficiency measures how quickly the model produces an output, which you can easily observe after entering your prompt.
  • Versatility — Versatility is related to how many different types of queries your LLM can handle without producing irrelevant outputs. A quality LLM will be able to accurately handle a wide range of queries.
  • Hallucinations and toxicity — Hallucination and toxicity checks determine whether an LLM’s output contains factually untrue information, inappropriate language, biases, or threats.

LLM developers can test models and prompts in-depth by measuring the quality of inputs and expected outputs against an established baseline in specific use cases.

However, as an LLM user, you can compare the outputs of your earlier prompts with those of your improved prompts and judge them against the key metrics above.
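As a user, one crude way to approximate the relevance check is keyword coverage: what fraction of the terms you expected does the output actually mention? The score below is an illustrative sketch, not a standard evaluation metric:

```python
def relevance_score(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords the output mentions (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)

score = relevance_score(
    "Bias, privacy, and job displacement are key ethical concerns of AI.",
    ["bias", "privacy", "job displacement", "transparency"],
)
# 3 of the 4 expected keywords appear -> score == 0.75
```

Scoring the same prompt before and after a rewrite gives you a quick, if rough, signal of whether the rewrite actually improved relevance.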

Knowing what to look for when testing your LLM with quality prompts will help you notice if it falls short in any area. Also, if you’re interested in advanced LLM prompt engineering, we recommend looking into few-shot prompting and zero-shot prompting techniques.

Make the Most From Your Workflow With Us

Do you need help automating your workflows and integrating AI solutions into your existing system? Please schedule a 30-minute call with our experts. We can discuss your needs, show you how our AI solutions work live, and show you how they can automate complex tasks within your workflow.
