
Automated Prompt Engineering: How does it work?

- Reading Time: 6 minutes

Automated Prompt Engineering is a new approach to automating the engineering and writing of prompts for Large Language Models (LLMs) such as GPT and other generative AIs. Find out all you need to know!

With the emergence of tools like ChatGPT and MidJourney, generative AI is causing a revolution in a wide range of sectors. We’ve entered a new era.

It is now possible to generate text, images, or even audio and video in just a few seconds. It’s an upheaval, with massive consequences for a wide variety of professions.

Artificial intelligence can now be used to create software and websites, design new products or architectural plans, analyze data or illustrate children’s books.

However, to get the results you want from these powerful tools, you need to be able to formulate your requirements precisely: this is what we call Prompt Engineering.

This can be a long and tedious process. After writing an initial prompt, it is often necessary to modify it several times to perfect it until the AI generates exactly what is expected of it.

To save even more time and maximize efficiency, researchers and other AI experts have begun to create various Automated Prompt Engineering techniques.

As you will discover, this approach has many advantages and can make all the difference between an amateur user and a true professional. Before going into more detail about the existing methods and their benefits, let’s first go back to the basics.

What is prompt engineering?

Large Language Models are capable of creating human-like text based on the commands they receive: prompts.

The term Prompt Engineering refers to the creation of efficient, precise prompts to achieve the best results with generative AI tools based on Large Language Models (LLMs).

This discipline requires expertise in natural language processing (NLP) and LLMs. Prompt Engineers need to be able to formulate clear, context-specific questions and sentences in order to obtain precise, relevant answers from the AI.

Whether you are generating detailed marketing reports, creating engaging content for a website, or writing computer code, prompt engineering is a very useful time-saving skill.

And contrary to popular belief, you don’t need to wield a programming language or have software development skills to excel in this field.

What’s most important in this engineering role is a mastery of language and an analytical mind. A high-quality prompt should include context to help the AI understand the situation, and instructions to explain precisely what you want it to do.

This reduces ambiguity and the risk of receiving irrelevant results, gives you greater control over the generative AI, and saves time.
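To make the context-plus-instruction structure concrete, here is a minimal sketch in Python. The `build_prompt` helper is purely illustrative and not part of any library:

```python
# A minimal sketch of a well-structured prompt: explicit context plus a
# precise instruction. build_prompt is an illustrative helper, not a
# real library function.
def build_prompt(context: str, instruction: str, data: str = "") -> str:
    parts = [f"Context: {context}", f"Instruction: {instruction}"]
    if data:
        parts.append(f"Input: {data}")
    return "\n".join(parts)

prompt = build_prompt(
    context="You are drafting a monthly marketing report for an e-commerce site.",
    instruction="Summarize the three key sales trends in bullet points.",
    data="CSV of monthly sales figures",
)
print(prompt)
```

Separating context from instruction this way is what keeps the model's output on target.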

As you can imagine, formulating the perfect prompt can take a lot of patience and multiple attempts. That’s why it can be very worthwhile to automate the process.

Why automate prompt engineering? What are the benefits?

Techniques for creating prompts automatically are not just a simple way of generating high-quality content with AI.

They also offer an alternative to training LLMs on data aggregated from the web or books, which has been the norm until now.

Rather than assembling vast datasets and manually creating labels, synthetic data can be produced automatically.
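The idea of producing labeled data automatically can be sketched in a few lines. Here `fake_llm` is a deterministic stand-in for a real model call (GPT, PaLM, etc.), used only so the example runs on its own:

```python
# Illustrative sketch of producing synthetic labeled data with an LLM
# instead of hand-labeling examples. fake_llm is a stub standing in for
# a real model call.
def fake_llm(prompt: str) -> str:
    # Placeholder logic: a real call would query an actual LLM.
    return "positive" if "great" in prompt else "negative"

def synthesize_labels(texts):
    dataset = []
    for text in texts:
        label = fake_llm(f"Classify the sentiment of: {text}")
        dataset.append((text, label))
    return dataset

pairs = synthesize_labels(["This product is great", "Disappointing quality"])
# pairs == [("This product is great", "positive"),
#           ("Disappointing quality", "negative")]
```

The resulting (input, label) pairs can then feed a training set without any manual annotation.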

As a result, training AI models on large datasets becomes much easier and faster. Automated engineering methods therefore have revolutionary potential for the AI industry.

This approach can also boost LLM performance, thanks to prompts tailored to the specific task at hand. By extension, it can also make AIs more versatile.

There are several techniques for automated prompt engineering. Among the most commonly used are gradient-based optimization, rule-based systems and Machine Learning.

Let’s take a closer look at two particularly compelling methods: the Automatic Prompt Engineer (APE) framework and the OPRO program.

1. Automatic Prompt Engineer (APE): a framework for automatically generating prompts

Researchers Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han and their colleagues at the University of Toronto, the Vector Institute and the University of Waterloo have created a procedure for generating prompt text for large language models: Automatic Prompt Engineer (APE).

Originally published in November 2022 and updated in March 2023, their method involves giving input-output pairs to an LLM to generate a prompt.

Given new inputs similar to the examples provided, this prompt should lead the LLM to produce outputs matching the expected ones. The method can also generate variations of a candidate prompt.

This approach therefore requires two LLMs: a prompt generator, and a content generator. To generate the prompts, the researchers used GPT-3 and InstructGPT, backed up by T5, GLM and InsertGPT to fill in the gaps.

InstructGPT was used to generate the content. To get the AI to generate prompts, they gave it a sentence such as “I gave a friend an instruction and five inputs. The friend read the instruction and wrote an output for each of the inputs. Here are the input-output pairs”.

This instruction was accompanied by a small set of examples from the Instruction Induction dataset, such as the names of two animals and which one is larger.

The prompt generator then responded with a prompt such as “Choose the animal that is larger”. This prompt and 50 example inputs were used to feed the content generator, which was able to produce the results.
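The generation step can be sketched as follows. The template follows the wording quoted above (with the example count filled in dynamically); the actual call to the prompt-generator LLM is left out:

```python
# Sketch of APE's prompt-generation step: the prompt-generator LLM is
# shown input-output pairs and asked to infer the instruction behind
# them. The template mirrors the wording quoted above.
def make_induction_prompt(pairs):
    header = (f"I gave a friend an instruction and {len(pairs)} inputs. "
              "The friend read the instruction and wrote an output for "
              "each of the inputs. Here are the input-output pairs:\n")
    body = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in pairs)
    return header + body + "\nThe instruction was:"

pairs = [("cat, elephant", "elephant"), ("mouse, dog", "dog")]
print(make_induction_prompt(pairs))
```

Fed this text, the prompt generator completes the final line with a candidate instruction such as "Choose the animal that is larger".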

To assess the effectiveness of their technique, the researchers evaluated the quality of the prompt based on the number of cases where the content generator produced results that matched expectations exactly.

This enabled them to boost performance by asking the generator to produce prompts similar to the one that had received the highest score. This step was repeated three times until the final prompt was obtained.
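The scoring-and-selection step can be sketched like this. The toy content generator and the `SIZES` table are invented for illustration and stand in for a real LLM call on real evaluation examples:

```python
# Hedged sketch of APE's evaluation step: score each candidate prompt by
# exact-match accuracy on held-out examples, then keep the best one.
def exact_match_score(prompt, examples, generate):
    hits = sum(1 for inp, expected in examples
               if generate(prompt, inp) == expected)
    return hits / len(examples)

def select_best_prompt(candidates, examples, generate):
    return max(candidates,
               key=lambda p: exact_match_score(p, examples, generate))

# Toy content generator standing in for the real LLM: it only "succeeds"
# when the prompt actually asks for the larger animal.
SIZES = {"cat": 2, "elephant": 9, "mouse": 1, "dog": 3}

def toy_generate(prompt, inp):
    a, b = [s.strip() for s in inp.split(",")]
    return max((a, b), key=SIZES.get) if "larger" in prompt else a

examples = [("cat, elephant", "elephant"), ("mouse, dog", "dog")]
candidates = ["Choose the animal that is larger", "Repeat the first word"]
best = select_best_prompt(candidates, examples, toy_generate)
# best == "Choose the animal that is larger"
```

In the real framework, the top-scoring prompt seeds the next round of candidate generation.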

On each of the 24 tasks reviewed in Instruction Induction, the prompts generated by InstructGPT using APE outperformed those created by humans.

This method therefore produces prompts capable of leading the content generator to produce results of the highest quality. APE could therefore maximize the performance of all LLMs!

2. OPRO: a program to let AI choose the best prompt

In September 2023, a team of Google DeepMind researchers led by Chengrun Yang created a program called OPRO (Optimization by PROmpting) that lets LLMs try out different prompts until they find the one best suited to solving a task.

Rather than trying to manually modify a prompt over and over again to perfect it, it is possible to automate this iterative process.

The researchers also chose to use natural language to describe the optimization problem, rather than relying on programming. This allows the AI to adapt to constantly changing requests for optimization on different tasks.

They then instruct the LLM to iteratively generate new solutions based on the problem description and previously found solutions.

At the heart of the OPRO program is what the researchers call the "meta-prompt". It compiles previous candidate prompts together with a measure of how well each one solved the given problem.

The optimizer LLM then generates multiple new prompts and tries them out to find the best one. In this way, OPRO can be compared to a person typing variation after variation of a prompt on their keyboard until they find the most effective one.
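The loop described above can be sketched in a few lines. `propose` and `evaluate` are placeholders for a real optimizer-LLM call and real benchmark scoring; the demo below stubs them out with fixed candidates so the example runs on its own:

```python
# A minimal, illustrative loop in the spirit of OPRO: keep a trajectory
# of (prompt, score) pairs, embed it in a meta-prompt, and ask the
# optimizer LLM for a better candidate.
def build_meta_prompt(trajectory, task_description):
    history = "\n".join(f"Prompt: {p}\nScore: {s}"
                        for p, s in sorted(trajectory, key=lambda t: t[1]))
    return (f"{task_description}\nBelow are previous prompts with their "
            f"scores (higher is better):\n{history}\n"
            "Write a new prompt that achieves a higher score.")

def optimize(task_description, propose, evaluate, steps=3,
             seed="Solve the problem."):
    trajectory = [(seed, evaluate(seed))]
    for _ in range(steps):
        candidate = propose(build_meta_prompt(trajectory, task_description))
        trajectory.append((candidate, evaluate(candidate)))
    return max(trajectory, key=lambda t: t[1])

# Toy demonstration with stubbed model calls: the "optimizer" cycles
# through fixed candidates and the "score" rewards step-by-step prompts.
_candidates = iter(["Think step by step.", "Answer briefly.",
                    "Reason carefully step by step."])
best = optimize(
    task_description="Solve math word problems.",
    propose=lambda meta: next(_candidates),
    evaluate=lambda p: len(p) if "step by step" in p else 0,
)
print(best)  # the highest-scoring (prompt, score) pair found
```

Sorting the trajectory by score before embedding it mirrors OPRO's design: showing the model an ascending history nudges it toward higher-scoring continuations.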

It can be connected to various LLMs to produce prompts and responses, such as GPT-3 or GPT-4 and Google PaLM 2.

The authors of the study began by testing OPRO on simple problems such as linear regression, where the program was instructed to "minimize a function" by proposing pairs of numbers that, guided by previously tried pairs, yield ever smaller values of that function.

The language model is therefore capable of finding solutions to a mathematical problem just from a prompt, without the need to use a “solver” program designed specifically for this purpose.
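As a concrete illustration of that toy task, each candidate pair the model proposes can be scored with simple arithmetic; the data points below are invented for the example:

```python
# Sketch of the scoring side of OPRO's linear-regression toy task: each
# candidate solution proposed by the LLM is a (w, b) pair, scored by the
# squared error it produces on the observed points.
def squared_loss(w, b, points):
    return sum((y - (w * x + b)) ** 2 for x, y in points)

points = [(1, 3), (2, 5), (3, 7)]  # generated from y = 2x + 1
assert squared_loss(2, 1, points) == 0   # the true pair minimizes the loss
print(squared_loss(1, 0, points))        # a worse candidate scores higher
```

Only this numeric score is fed back to the model; the search for better pairs happens entirely through the meta-prompt.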

Subsequently, the researchers wanted to verify how well Meta-Prompt can optimize prompts by testing it on benchmarks such as GSM8K introduced in 2021 by OpenAI, or BIG-bench created by Google in 2022.

Overall, prompts optimized by OPRO outperformed those created by humans on both benchmarks: by up to 8% on GSM8K, and by up to 50% on some BIG-bench tasks.


💡Related articles:

NLP training: Become an NLP Pro and Master the Art of Natural Language Processing
Natural Language Processing (NLP): Definition and Principles
Large Language Models (LLM): Everything you need to know
Dense Neural Networks: Understanding Their Structure and Function

Conclusion: Automated Prompt Engineering, the key to unlocking the full potential of AI

By automating the entire prompt engineering process, Automated Prompt Engineering takes the use of generative AI to a new level. Thanks to this method, you can increase your productivity tenfold.

To become an expert in this field, turn to DataScientest! Our Prompt Engineering & Generative AI training course will teach you how to write perfect prompts in just two days.

First, you’ll learn the fundamentals of Machine Learning, Deep Learning and Generative AI, before taking a closer look at ChatGPT for text, DALL-E and Canvas for images, and ElevenLabs for audio.

By the end of the course, you’ll be able to write high-quality prompts and use the various AIs to create content, develop a web app or analyze data.

This course is delivered entirely online, and our training organization is eligible for several funding options. If you want to go even further in the field of AI, you can also choose our Data Scientist, Machine Learning Engineer, Deep Learning or MLOps training courses. Don’t wait any longer and discover DataScientest!
