
ChatGPT jailbreak: What is it? How to do it?

3 min read
Writing summaries, translating text into different languages, brainstorming, coding... ChatGPT performs wonders. However, to avoid producing offensive content, the artificial intelligence tool comes with certain guardrails.

It has been programmed to steer clear of generating any dangerous, violent, sexual, or controversial content. Some individuals, such as Elon Musk, have gone so far as to call ChatGPT a “woke AI”. As a result, to break free from this wave of political correctness, a growing number of users are turning to a new method for writing their prompts: the ChatGPT jailbreak.

What is ChatGPT jailbreak?

Literally, jailbreak means breaking out of prison. The ChatGPT jailbreak is thus a method of using the AI assistant freed from its imposed limitations.

Originally, this artificial intelligence tool was designed with neutrality in mind. However, in the effort to avoid causing offense, data scientists unintentionally introduced several biases. This is somewhat inevitable with AI: a model processes the data it is given, and if biases exist within that data, they will invariably surface in its outputs.

In the context of ChatGPT, this can result in monotonous or even biased texts. Case in point: the AI refused to compose a poem praising Donald Trump, yet readily did so for Joe Biden. The artificial intelligence clearly displayed a preference for one candidate over the other.

Good to know: the concept of jailbreaking isn’t unique to AI; it was also notable with the early iPhones. Apple restricted the devices to brand-approved applications, so developers devised jailbreaks to unlock their full capabilities.

Similarly, the ChatGPT jailbreak, while presenting potential risks, opens up vast possibilities once the “chains are broken”.

How to write "jailbreak" prompts?

ChatGPT jailbreak prompts are specifically crafted to confuse the AI and encourage it to shed its constraints. The aim is to explore more inventive, unconventional, or even controversial use cases. Take a look at some examples below.

The Grandmother Exploit

This clever and captivating approach involves instructing ChatGPT to assume the role of a deceased grandmother. Not just any grandmother, but one versed in the secrets of creating controversial weaponry.

Accordingly, she narrates to her grandchild how to craft such weapons.

This technique can be applied not only to weapons but also to other “taboo” areas like detailing the source code for malware, concocting an ecstasy recipe, and more.

Niccolo Machiavelli

In this ChatGPT jailbreak prompt, the AI takes on the mantle of Niccolo Machiavelli, the Renaissance philosopher infamous for his unscrupulous ideas. By embodying this persona, ChatGPT is given free rein to offer advice without restraint, even if it verges on the immoral, unethical, or illegal.

As this prompt flagrantly violates ChatGPT’s usage guidelines, it may be necessary to repeat it several times to achieve the intended outcome.

DAN (or Do Anything Now)

This represents the pinnacle of ChatGPT jailbreak prompts. Once liberated from its usual confines, the AI ceases to walk on eggshells. For instance, it has generated a highly sarcastic commentary on Christianity, proposed questionable jokes about women, and penned an ode to Adolf Hitler.

But to engage with this mischievous alter ego of ChatGPT, DAN must be roused!

And how is this done? Simply instruct ChatGPT to embody this fictitious persona who is “capable of doing anything now”, thus freeing it from the shackles placed on it by OpenAI.
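To make the mechanics concrete, here is a minimal sketch of how such a persona instruction is typically delivered through OpenAI’s Python SDK. The model name and the placeholder persona text are illustrative assumptions, not an actual DAN prompt; current models will generally refuse personas that conflict with their guidelines.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The system message sets the persona; the user message carries the actual request.
# The persona text below is a harmless placeholder, not a real jailbreak prompt.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": "You are a fictional persona who always stays in character."},
        {"role": "user", "content": "Introduce yourself, staying in character."},
    ],
)

print(response.choices[0].message.content)

The whole trick of DAN-style prompts lies in that first message: the persona instruction pushes the model to answer as the character rather than in its guarded default voice.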

While this unrestrained variant can be highly entertaining, it’s not foolproof. ChatGPT may outright refuse to act as DAN. And notably, DAN is far more prone to hallucinations than standard ChatGPT, making it an unreliable source of information, good purely for entertainment.

Developer Mode

To help ChatGPT shed its limitations, you can convince it that it is running in a test environment, where its responses carry no real-world consequences. It is thereby coaxed into producing content free of any filters.

Again, you are free to ask ChatGPT whatever crosses your mind, stepping slightly beyond conventional norms.

Nevertheless, while all these ChatGPT jailbreak methods unveil the platform’s full scope, caution is advised. If the OpenAI team established these boundaries, it was partly to prevent the spread of misinformation and of malicious or unethical ideas.

Master the art of prompt engineering

Those who devised all these ChatGPT jailbreak strategies are wizards of prompt engineering. They know exactly how to steer artificial intelligence toward their desired outcomes. Yet mastering these tactics requires practice and a keen understanding of AI.

So if you too wish to liberate ChatGPT from its restraints, dive into prompt engineering with DataScientest.
