11 | Guide for modern prompting in 2024

Hello there, AI enthusiasts and bright minds of the AI Revolution Hub!
Quick heads-up for those who've been inquiring: OpenAI has indeed opened the gates again for ChatGPT Plus subscriptions.
Our Menu :)
Actionable Tips:
Introduction to Prompting
Basic Techniques
Intermediate Techniques
Introduction to Advanced Techniques
Bonus: 26 prompting principles from researchers
Article:
Is This the End of Prompt Engineering?
ARTICLE
Introduction to Prompting
As LLMs (large language models) like ChatGPT, ClaudeV2, and Gemini Pro continue to advance, learning how to properly "prompt" them is becoming an art.
Still, good prompts unlock additional capabilities of these models.
Let's start with an example.
ChatGPT (using LLM GPT-4V)

Google Gemini Pro


WHAT? 🤔🤔🤔🤔
… after x attempts, we found the right prompt
Improved Prompt (for ChatGPT)
Identify and count all the individual apples displayed in the image

What was the difference?
While these models are trained on a vast amount of text data and can predict the next word or token in a sentence, they do not "understand" math in the same way they understand language.
This little trick in the prompt ("Identify and count") explicitly instructs the model to perform the action: simply by being asked to identify the elements in the image first, it arrives at the right answer.
Even the wisest among us can be fools in our ignorance, and the most ignorant can possess untapped wells of wisdom.

Basic Techniques
1️⃣ How to provide long context
When you need to analyze a lengthy document or extract relevant information from a large block of text, the simplest approach is to paste the information into an LLM (like ChatGPT or similar) as context, and then add a specific instruction for the LLM to act upon.
ℹ️ Separate the context from the instruction by using triple quotation marks (""").
This helps ChatGPT treat the information within the triple quotes (""" Info """) as its knowledge base.
Prompt
Translate this sentence from English to French, and make it native sounding.
"""In case you're wondering. It is possible to adjust the prompt further with, for example, tags.
And the number in my first example can be adjusted."""
output
Au cas où vous vous le demanderiez, il est possible de personnaliser davantage l'invite, avec par exemple des étiquettes. Et le nombre dans mon premier exemple peut être modifié.
💪 This technique is useful for instructing LLMs to extract relevant information from lengthy texts, such as terms and conditions, or for reviewing contracts.
Prompt
Summarize this contract, including an overview of the following in bullet points: duration, committed fees, liability caps, and warranty period.
""" The text of the Contract HERE """
2️⃣ Text Classification
You give the model instructions to classify text however you want.
Prompt:
Classify the text into neutral, negative or positive.
Text: I think the food was okay.
Sentiment:
Output:
Neutral
🤓 This technique is called zero-shot prompting (no examples are included in the prompt).
💪 Tips for the technique:
Provide Examples: Show, don't just tell. Include examples of how you want the responses. For instance, "For a neutral sentiment, respond with 'neutral'; for a positive sentiment, respond with 'positive'; and for a negative sentiment, respond with 'negative'."
Explain the Reasoning: Sometimes, explaining why you need something done a certain way can help. You could add, "This specific format is required for consistency with our data processing system."
Feedback Loop: If the model initially doesn't respond as desired, give it feedback. For example, if it responds with "Neutral", correct it by saying, "Please remember to use lowercase for sentiment labels, like 'neutral'."
💪 This method is particularly effective for analyzing comments on product reviews.
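To apply this at scale, for example over a batch of product reviews, a minimal sketch could look like the following; the OpenAI Python SDK is assumed, and the reviews and model name are placeholders.
Code sketch (Python):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reviews = [  # placeholder data
    "I think the food was okay.",
    "Terrible packaging, the item arrived broken.",
]

for review in reviews:
    prompt = (
        "Classify the text into neutral, negative or positive. "
        "Respond with a single lowercase word.\n"
        f"Text: {review}\n"
        "Sentiment:"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(review, "->", response.choices[0].message.content.strip())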
3️⃣ Structuring Data
One simple and exciting use case for LLMs is organizing data into tables. Perhaps you have a bunch of news articles or business reports, and you would like all of the important points to be summarized in a table that you can then put into a spreadsheet or database. Chatbots like ChatGPT can help you do this.
💪 Tip: don't forget to give the names of the columns to the LLM.
Prompt:
In a recent business report presentation, the CEO of Zana Corp. highlighted their remarkable growth in the past fiscal year.
She shared that the company experienced a 15% increase in revenue, reaching $50 million, with a 12% profit margin ($6 million in net profit).
The report also showcased a 20% growth in their customer base, now totaling 100,000 customers.
Additionally, the company's operating expenses went up by 10%, amounting to $10 million, while the employee headcount increased by 25%, resulting in a current workforce of 500 employees.
Generate a table containing the columns Metric and Value:
Output

The key takeaway is that LLMs can provide answers in whatever form you instruct: a table, plain text, CSV, JSON, etc.
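If you would rather get the result in a machine-readable form than a table, a rough sketch like this asks for JSON and parses it; the OpenAI Python SDK is assumed, and the report text, key names, and model name are placeholders, not an official recipe.
Code sketch (Python):
import json

from openai import OpenAI

client = OpenAI()

report = "In a recent business report presentation, the CEO of Zana Corp. ..."  # placeholder

prompt = (
    "Extract the key figures from the report below as a JSON array of objects "
    'with the keys "metric" and "value". Return only the JSON.\n'
    f'""" {report} """'
)
response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)
# The model may wrap the JSON in extra text; a robust version would strip that first.
metrics = json.loads(response.choices[0].message.content)
for row in metrics:
    print(row["metric"], "=", row["value"])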

Intermediate Techniques
4️⃣ Role-playing
You can often find inspiration from existing examples on the internet to craft your prompt. One well-known resource is a list of role-prompts. LIST OF ROLES
📝 However, I'll share my unique approach to role-based prompting:
Request ChatGPT to take on the role of a Prompt Engineer, tasked with defining specific profiles tailored to the field you wish to explore.
Prompt:
Act as a prompt engineer. Please give me ten examples for marketing expert role prompts
Output:
1) Campaign Strategy Development:
"As a marketing expert, develop a brief outline for a marketing campaign targeting young adults for a new line of eco-friendly athletic wear."
2) Brand Positioning Feedback:
"Evaluate the current brand positioning of 'FreshOrganix', a mid-sized company selling organic juices, and suggest three strategies to differentiate it from competitors."
...
10) Crisis Management Response:
"Draft a press release for a hypothetical situation where a food company needs to recall a product due to a labeling error."
Create a persona with the role you want and give it a name, so you can call on this character later.
Prompt:
Create a persona for the role <Crisis Management Response>:, I will use this fictional character for intellectual exercises, please give him a name so when I call this name, you are expected to act as the character
Output:
Name: Alexander (Alex) Mercer
Background:
Education: Master's in Public Relations with a specialization in Crisis Management from Georgetown University.
Experience: Over a decade of experience in crisis communications, having worked with diverse ...
...
Successfully navigated a data breach crisis for a major tech firm, mitigating potential backlash and restoring stakeholder trust.
From now on, when you call upon Alex Mercer, I will respond as if I am this character, using his experience and skills to address crisis management scenarios.
💪 You can fine-tune the character to make it more suitable for your exercise.
Ask the fictional character to perform an action
Prompt
As Alex, please provide an email explaining that we had a technical problem in a core service of our platform, and as a result, there are disturbances in service for customers.
We are currently carrying out technical operations to re-establish the service, but we are uncertain how long it will take. Many customers are really mad.
Output:
Subject: Urgent: Service Disruption Notification
Dear [Customer's Name],
I hope this message finds you well. My name is Alex Mercer, and I am the Head of Crisis Management here at [Company Name]. I am reaching out to provide you with an update on the recent service disturbances you may have experienced.
We have identified a technical issue within a core service of our platform, which has resulted in interruptions to our normal operations. Please understand that this matter has our full attention and we have mobilized our best technical teams to address the problem. They are currently working diligently to re-establish service as quickly and securely as possible.
At this moment, we are unable to provide an exact timeline for when services will be fully restored, but please rest assured that we are making every possible effort to minimize the downtime. Our engineers are implementing technical operations that are complex and require meticulous attention to ensure that once services resume, they do so with full functionality and improved reliability.
We fully understand the inconvenience and frustration this may cause, especially given the high standards of service we hold ourselves to at [Company Name]. Please know that we are committed to not only resolving this issue but also to learning from this incident to prevent similar occurrences in the future.
Your satisfaction and trust are paramount to us, and we sincerely apologize for any disruption this may have caused. We will continue to provide you with regular updates as they become available. In the meantime, our customer service team is on standby to assist you with any concerns or questions you may have.
Thank you for your patience and understanding during this challenging time. We are dedicated to restoring and maintaining the high-quality service that you expect from us.
Warm regards,
Alex Mercer
Head of Crisis Management
[Company Name]
[Contact Information]
💪 Tips for the technique:
You can develop characters with contrasting perspectives and then integrate them into your prompt to share their views on a specific topic.
Additionally, you can create a character for interview practice.
If you find a character particularly engaging, consider creating a dedicated GPT for it. This allows you to interact with the character consistently without having to reapply this technique each time. (How to)
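If you call a model through an API, a persona like Alex also maps neatly onto the system message, so every later turn stays in character without repeating the setup. A minimal sketch, assuming the OpenAI Python SDK, with the persona text abbreviated:
Code sketch (Python):
from openai import OpenAI

client = OpenAI()

# Keep the persona in the system message so every user turn stays in character.
persona = (
    "You are Alexander (Alex) Mercer, a veteran head of crisis management. "
    "Answer every request in character, drawing on his experience and tone."
)
messages = [{"role": "system", "content": persona}]

messages.append({
    "role": "user",
    "content": (
        "As Alex, draft an email explaining that a technical problem in a core "
        "service is disrupting customers and that the recovery time is uncertain."
    ),
})

response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=messages,
)
print(response.choices[0].message.content)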

Introduction to Advanced Techniques
It's important to note that prompts are not just techniques employed by savvy users.
They are, in fact, the outcome of scientific analysis. This analysis quantitatively demonstrates that using specific texts or sentences in the prompt can consistently enhance the accuracy of an LLM's responses.
5️⃣ You are an expert at
This is really similar to role-playing but way simpler: in this case, you just tell the LLM "You are an expert at [what you want]".
Incredibly, there is a paper showing that this simple prefix in your prompt can improve the quality of the LLM's responses.
📄 Scientific Paper for the curious
Let me show you how we could have solved the apple-counting exercise with less hassle, using this technique.
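Prompt (an illustrative example of the pattern, not necessarily the exact wording used):
You are an expert at identifying and counting objects in images. How many apples are in this image?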

6️⃣ Let's think step by step
Just add the sentence <Let's think step by step> to your prompt.
Prompt
Calculate the volume of a sphere whose projection onto a plane produces a circumference with a perimeter of 10. Let's think step by step.
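For reference, here is the arithmetic a correct step-by-step answer should converge on: the projected circle has circumference 2πr = 10, so r = 5/π ≈ 1.59, and the volume is V = (4/3)πr³ = (4/3)π(5/π)³ = 500/(3π²) ≈ 16.9.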
This prompting technique helps the model delve deeper into its reasoning, encouraging it to generate more intermediate steps. By thinking through each step methodically, it can ultimately produce more precise and accurate answers.
The results show that, by using this prompt, accuracy reaches a score of 78.7%.
Not bad…

7️⃣ Take a deep breath and work on this problem step-by-step
...take a deep breath and work on this problem step by step
Again, this is a 'magic sentence' that enhances the performance of an LLM. Importantly, this new prompt was discovered by the AI itself, not by a prompt engineer…
📄 Scientific Paper for the curious (from Google DeepMind)
ARTICLE
Bonus: 26 prompting principles from researchers
Researchers systematically tested 26 prompt engineering techniques on models like GPT-4/-3.5 and LLaMA.
They found principled prompts boosted response quality by 57.7% and accuracy by 67.3% on average!
⚠️ Though some principles are a matter of controversy (principle 1), others are quite handy.
Some of the most effective principles include:
Prompt Structure and Clarity
Integrate the intended audience in the prompt, e.g., the audience is an expert in the field.
Employ affirmative directives such as "do," while steering clear of negative language like "don't".
Use leading words like "think step by step".
Specificity and Information
Explain quantum computing in simple terms
Describe blockchain like I'm 11 years old.
Explain it to me as if I'm a beginner in [field].
Add the following phrase to your prompt:
"Ensure that your answer is unbiased and does not rely on stereotypes."
"Teach me [any theorem/topic/rule name] and include a test at the end, but don't give me the answers; then tell me if I got the answers right when I respond."
User Interaction and Engagement
Allow the model to elicit precise details and requirements from you by asking you questions until it has enough information to provide the needed output.
(for example, โFrom now on, I would like you to ask me questions to...โ).
To write an essay /text /paragraph /article or any type of text that should be detailed:
"Write a detailed [essay/text/paragraph] for me on [topic], adding all the necessary information."
Content and Language Style
"Try to revise every paragraph sent by users. You should only improve the user's grammar and vocabulary and make sure it sounds natural. You should not change the writing style, such as making a formal paragraph casual."
Incorporate the following phrases:
"Your task is ..." and "You MUST ..."
Assign a role to the language model
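Put together, a single prompt that combines several of these principles might look like this (an illustrative example, not taken from the paper):
Prompt:
You are an expert statistics tutor, and the audience is a beginner in statistics. Teach me Bayes' theorem step by step and include a short test at the end, but don't give me the answers; tell me if I got them right when I respond. Ensure that your answer is unbiased and does not rely on stereotypes.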
The full paper outlines all 26 principles, but these felt the most applicable before it started to get repetitive. Here's the link to the full paper:
ARTICLE
Is This the End of Prompt Engineering?

Let's summarize for those who prefer a TL;DR (Too Long; Didn't Read) version. By the way, TL;DR is an old prompting technique that has been integrated by default into modern LLMs (large language models). 😉
This is the prompt of the moment =>
...take a deep breath and work on this problem step by step
Why?
Once an LLM is trained, its information becomes static and unchangeable. Therefore, a well-crafted prompt can enhance the accuracy of the LLM's responses; this is where optimization comes into play. Google DeepMind tasked its AI with evaluating prompts to consistently yield highly accurate answers. And voilà, this is how the new prompt emerged.
We anticipate many more prompts will be developed through this method.
But, does it work?
📄 Scientific Paper for the curious (from Google DeepMind)

Let's review this important concept once more:
Imagine asking an LLM (like GPT-4, Gemini Pro, Mistral, etc.) to become an expert in a specific field, such as mathematics. For this purpose, we request the LLM to generate a prompt that will help achieve this goal.
This generated prompt undergoes rigorous testing with numerous examples, evaluating how often it yields the correct answer. Essentially, this is to determine the error rate of the given prompt.
Consequently, Google DeepMind uses this technique to repeatedly ask the AI to propose and evaluate new prompts, aiming to discover the ones that consistently result in a low error rate.
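In code, that loop might look roughly like the sketch below. This is a deliberately simplified illustration of the idea, not DeepMind's actual implementation; the OpenAI Python SDK is assumed, and the model name, the meta-prompt wording, and the tiny eval_set are all placeholders.
Code sketch (Python):
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    # One call to the LLM; the model name is a placeholder.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def error_rate(instruction, eval_set):
    # Fraction of evaluation questions the instruction gets wrong.
    wrong = sum(1 for question, answer in eval_set
                if answer not in ask(instruction + "\n" + question))
    return wrong / len(eval_set)

def optimize_prompt(eval_set, rounds=10):
    best = "Let's think step by step."
    best_err = error_rate(best, eval_set)
    for _ in range(rounds):
        # Ask the LLM itself to propose a better instruction, given the current best.
        candidate = ask(
            f'The instruction "{best}" has an error rate of {best_err:.2f} on a set of '
            "math word problems. Propose one improved instruction. Reply with the instruction only."
        )
        err = error_rate(candidate, eval_set)
        if err < best_err:
            best, best_err = candidate, err
    return best

eval_set = [("What is 17 * 24?", "408")]  # placeholder evaluation pairs
print(optimize_prompt(eval_set))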
In that vein, the research doesn't stop: Google DeepMind just released another paper that improves on this technique.
📄 Scientific Paper for the curious (PROMPTBREEDER)
with the main takeaway that "hand-crafted prompt strategies are OFTEN sub-optimal".
What does the future hold for the emerging field of prompt engineering?
OUR SPONSOR
The Newsletter Platform Built for Growth
beehiiv was founded by some of the earliest employees of the Morning Brew, and they know what it takes to grow a newsletter from zero to millions.
If youโve considered starting a newsletter, thereโs no better place to get started and no better time than now.
Try beehiiv absolutely free with no credit card required.

Thank you, see you next week!