
AI Watermarking: All you need to know


AI watermarking, or AI digital watermarking, is a technique that involves embedding digital marks or indicators into machine learning models or datasets to enable their identification. Faced with the explosion of content generated by Artificial Intelligence, this approach has become essential. Discover the existing techniques and the challenges to overcome...

Within the Machine Learning community, AI watermarking is a particularly active research field.

At a time when generative artificial intelligences like ChatGPT and DALL-E are generating increasingly realistic texts and images, it is becoming urgent to create a system that can distinguish this content from that created by humans.

Many techniques have already been invented by researchers, but very few are applied in the real world yet. Is it really possible? Discover all the answers to your questions in this dossier!

What is AI Watermarking?

AI Watermarking involves adding a message, logo, signature, or data to a physical or digital object. The goal is to determine its origin and source.

This practice has been applied to physical objects like banknotes and postage stamps for a long time to prove their authenticity. Nowadays, there are also techniques for AI watermarking digital objects such as images, audio files, or videos. Digital watermarks are also applied to data.

This mark is sometimes visible, but not always. AI watermarking is frequently used for copyright management, especially to trace the origin of an image. The most sophisticated techniques apply hidden digital watermarks to digital objects, capable of resisting deletion attempts.

Watermarking of AI and Machine Learning datasets

At present, researchers are exploring possibilities to apply watermarking techniques to Machine Learning models and the data used to produce them.

Two main approaches are distinguished. Firstly, “model watermarking” involves adding a watermark to a Machine Learning model to detect whether it has been used to make a prediction.

As an alternative, “dataset watermarking” aims to modify a training dataset in an invisible way to detect whether a model has been trained on it.

These techniques can be implemented and used in various ways. Firstly, injecting specific data into the training dataset can modify the model, and these changes can be detected later.
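
As a rough illustration of this first approach, here is a minimal sketch in Python, assuming an image dataset held as NumPy arrays; the trigger pattern, injection rate, and target label are hypothetical choices, not a published scheme:

```python
import numpy as np

def add_trigger(image, size=3, value=1.0):
    """Stamp a small fixed pixel pattern (the hypothetical trigger) into a corner."""
    marked = image.copy()
    marked[:size, :size] = value
    return marked

def watermark_dataset(images, labels, target_label, rate=0.01, seed=0):
    """Inject trigger samples into a small fraction of the dataset. A model
    trained on it tends to learn the (trigger -> target_label) association,
    which the dataset's owner can probe for later."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    marked_images, marked_labels = images.copy(), labels.copy()
    for i in idx:
        marked_images[i] = add_trigger(marked_images[i])
        marked_labels[i] = target_label
    return marked_images, marked_labels
```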

Another method is to adjust the model’s weights during or after training. Again, this alteration can be detected subsequently.

Watermarking a dataset is suitable when its creator is not involved in training the AI. It relies solely on adjusting the training dataset.

This allows discovering how a model was produced. In contrast, model watermarking allows detection of a model when it is deployed.

Challenges of AI Watermarking

Dataset AI Watermarking requires the development of new techniques because existing approaches do not work in the context of Machine Learning.

For example, when training an image classification model, a conventional watermark present in the training images leaves no trace in the model: it carries no information relevant to the learning task, so it is effectively ignored.

To be useful, watermarking a Machine Learning dataset requires modifying the data in a way that is consistent with labeling. This induces changes in the model that can be detected later.
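
To make this concrete, the toy snippet below adds a faint, class-specific perturbation to each image while keeping its label unchanged; it loosely mirrors the spirit of label-consistent marking rather than any specific published method, and the pattern strength eps is an arbitrary choice:

```python
import numpy as np

def mark_label_consistently(images, labels, num_classes, eps=0.02, seed=0):
    """Add a faint pattern tied to each image's class (labels stay intact).
    Because the perturbation is aligned with the labeling, training folds
    it into the class features, where a later statistical test can look
    for it; the detection step is omitted from this sketch."""
    rng = np.random.default_rng(seed)
    patterns = rng.standard_normal((num_classes,) + images.shape[1:])
    marked = images + eps * patterns[labels]  # one pattern per class, via NumPy label indexing
    return np.clip(marked, 0.0, 1.0)          # assumes pixel values in [0, 1]
```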

How do you check the watermarking of an AI?

It is possible to verify the watermarking of an AI model without needing direct access. This includes determining its origin and whether it was trained on a specific dataset.

To check the watermark, one simply needs to inspect its output in response to specific data inputs designed to expose it. In theory, this method can be applied to any AI.
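
In practice, such a black-box check might look like the following sketch, where predict stands for any query interface to the deployed model (a public API, for instance), the trigger set is the secret kept by the verifier, and the 90% match threshold is an illustrative assumption:

```python
def verify_watermark(predict, trigger_inputs, expected_outputs, threshold=0.9):
    """Query the model on secret trigger inputs and measure how often it
    returns the expected watermark responses. A match rate far above
    chance is strong evidence that the watermarked model is in use."""
    hits = sum(predict(x) == y for x, y in zip(trigger_inputs, expected_outputs))
    return hits / len(trigger_inputs) >= threshold
```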

AI Watermarking techniques

In a blog post, Facebook (Meta) researchers introduced the concept of “radioactive data” for AI watermarking. According to them, this technique helps determine which dataset was used to train a model.

This helps gain a better understanding of how different datasets impact the performance of various neural networks. Therefore, this type of technique provides researchers and engineers with the ability to better understand how their peers train their models.

By extension, it helps detect potential biases in these models. For example, it can prevent the misuse of specific datasets for Machine Learning purposes.

In a scientific paper titled “Open Source Dataset Protection,” Chinese researchers propose a method to verify that commercial AI models have not been trained on datasets intended for educational or scientific use.

In 2018, IBM introduced a technique to verify the ownership of neural network services using simple API queries. The goal is to protect Deep Learning models against cyberattacks. Researchers developed three different algorithms that embed meaningful content, unrelated data, or noise as watermarks in the neural networks.

How is AI Watermarking used?

For now, AI watermarking remains mainly theoretical, but there are numerous potential use cases.

Model watermarking could be used by a government agency to verify that a Machine Learning model used in a product complies with data protection laws.

A civil society organization could verify that a decision-making model has undergone an audit. Regulators could check whether a commercial organization has deployed a specific third-party Machine Learning model, in order to alert it to biases, certify the product, or request a recall.

Dataset watermarking can reveal whether a Machine Learning model has been trained on biased or incorrect data, so that consumers can be warned. A Data Steward can determine whether a model has been trained on personal data entrusted to them for safekeeping.

A data publisher can determine if a model has been trained on an outdated version of the dataset to alert users to known biases or errors. Lastly, a regulator can determine which datasets are used by Machine Learning models to prioritize audits.

In general, watermarking helps determine which AI model is used by a service and which datasets are used for training. It is a valuable asset for transparency and ethics.

In some cases, other methods can achieve this goal. For example, regulators may require companies to directly state the data sources used. However, watermarking can provide a more trustworthy source.

With the rise of generative AIs like DALL-E and ChatGPT, watermarking is becoming indispensable: it is currently the most reliable way to know whether content was created by an AI.

This can, for example, help determine if a student cheated on an essay, or if a generative AI like MidJourney is trained on copyrighted images. Similarly, watermarking can help detect “DeepFake” videos generated using AI.

ChatGPT and AI Watermarking

Since its launch by OpenAI in late 2022, ChatGPT has quickly become a viral phenomenon. In a matter of seconds, this AI can answer all kinds of questions and generate text in various languages or even computer code.

This chatbot is already impressive and is likely to improve further with the launch of GPT-4 scheduled for 2023. Therefore, it’s becoming increasingly challenging to distinguish text generated by ChatGPT from human writing.

It’s essential to invent a watermarking system for this AI before the web gets flooded with text produced by a chatbot that may contain false or outdated information.

Initially, OpenAI simply asked ChatGPT users to clearly indicate content generated by the AI. However, relying solely on user honesty would be naive.

In the days following the launch of this AI, many students started using it to cheat and improve their grades. This practice spread like wildfire, including in France, to the extent that Sciences Po Paris banned this tool for its students under the threat of disciplinary sanctions.

One can also expect Amazon merchants to use it to generate fake reviews, or governments to employ it for propaganda purposes. Likewise, cybercriminal gangs use it to craft more convincing phishing emails.

Given these serious risks, AI watermarking has become essential. OpenAI has already added a detection method to DALL-E, which attaches a visual signature to the images it generates. However, the task is much more challenging for textual content.

The most promising approach is cryptography. At a conference at the University of Texas at Austin, OpenAI researcher Scott Aaronson presented an experimental technique.

It builds on the way GPT already handles text: words are converted into sequences of tokens, which can represent punctuation marks, letters, or parts of words, drawn from a vocabulary of roughly 100,000 tokens. When generating text, GPT chooses each successive token among several plausible candidates, and the watermark would pseudorandomly bias that choice using a secret key.

This watermark could be detected using a cryptographic key known only to OpenAI. The difference would be imperceptible to the end user.
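
As a rough sketch of the idea, here is a keyed pseudorandom score per candidate token built from standard-library cryptography; the key, context window, and selection rule below are illustrative reconstructions, not OpenAI's actual implementation:

```python
import hmac, hashlib

SECRET_KEY = b"known-only-to-the-provider"  # stand-in for the real secret key

def keyed_score(context_tokens, candidate, key=SECRET_KEY):
    """Pseudorandom score in (0, 1) derived from the key, the recent context,
    and a candidate token. Without the key the scores look like noise; with
    it, a detector can recompute them for any text."""
    msg = repr((tuple(context_tokens[-4:]), candidate)).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)

def pick_token(context_tokens, candidates):
    """Choose among (token, probability) candidates (probabilities > 0).
    The r ** (1 / p) rule keeps the choice faithful to the model's
    probabilities on average, while watermarked text ends up with
    suspiciously high keyed scores."""
    return max(
        candidates,
        key=lambda tp: keyed_score(context_tokens, tp[0]) ** (1.0 / tp[1]),
    )[0]
```

Detection would then recompute keyed_score for every token of a suspect text and flag texts whose scores are consistently higher than chance allows.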

In early February 2023, OpenAI launched a classifier to detect content generated by ChatGPT or other AIs. However, its success rate is limited: it correctly identifies only 26% of AI-generated texts…

 


A technique for detecting the AI's preferred words

In an article published on January 24, 2023, researchers present a watermarking technique for ChatGPT and other language generation models.

It relies on the software maintaining two lists of words: a green one and a red one. When a watermarked chatbot like ChatGPT chooses the next word of the text it generates, it is nudged to pick a word from the green list far more often than chance alone would dictate.

To detect whether a text was generated by the AI, software simply counts the green words: the further their share rises above what chance would produce, the more likely the text is to be watermarked.

This approach proves to be more effective on longer texts. In theory, it could be integrated into a web browser extension to automatically flag AI-generated content.
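
A minimal sketch of the detection side, assuming the verifier can recompute the green list (abstracted here as an is_green predicate) and that a fraction gamma of the vocabulary is green; the cutoff in the final comment is an illustrative choice:

```python
import math

def green_zscore(tokens, is_green, gamma=0.25):
    """Compare the count of green tokens against the gamma * n expected by
    chance. Large positive z-scores are extremely unlikely for human text,
    and the test naturally grows stronger as the text gets longer."""
    n = len(tokens)
    hits = sum(1 for t in tokens if is_green(t))
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)

# Example policy: treat z > 4 (roughly p < 0.00003) as "almost certainly watermarked".
```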

Of course, this tool is not foolproof. It is possible to manually modify a text to replace the words from the green list, provided, of course, that you have access to that list. Furthermore, this method requires OpenAI and other AI creators to agree to implement the tool.

A watermark for AI-generated voices

In addition to text and images, Artificial Intelligence excels in voice imitation. Tools like Vall-E, for example, can synthesize any voice to read a text.

These technologies offer many possibilities for voice acting or audiobooks but also pose risks. A malicious person can create fake speeches of politicians or other celebrities.

To combat the risks of abuse, Resemble AI has created a watermarking system for AI-generated voices. Its name is a combination of the words “perceptual” and “threshold”: PerTh.

This system uses a Machine Learning model to embed data packets into the audio content and retrieve them later.

These data packets are imperceptible but intertwined with the content. They are difficult to remove and provide a means to verify whether a voice was generated by AI. Furthermore, the watermark survives audio manipulation, such as speeding up, slowing down, or compression into a format like MP3.

The watermark is, in fact, a low-frequency tone masked by higher-frequency tones to the listener’s ears. It is, therefore, below the threshold of perception.

The challenge tackled by Resemble AI is to create a Machine Learning model capable of generating these tones and placing them at the right moments in an audio so that they are imperceptible. This model can also reverse the process to retrieve the data.
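
Stripped of the learned model, the underlying signal processing can be sketched in a few lines of Python; this toy version uses a fixed tone frequency and amplitude (arbitrary assumptions) rather than PerTh's adaptive, psychoacoustically placed data packets:

```python
import numpy as np

def embed_tone(audio, sr=22050, freq=40.0, level=0.002):
    """Mix a faint low-frequency tone into the signal. PerTh instead lets a
    model decide where and how loudly to place it so that louder content
    masks it; here the amplitude is simply fixed very low."""
    t = np.arange(len(audio)) / sr
    return audio + level * np.sin(2 * np.pi * freq * t)

def detect_tone(audio, sr=22050, freq=40.0):
    """Correlate the signal with the expected tone: marked audio yields a
    noticeably higher score than clean audio of the same content."""
    t = np.arange(len(audio)) / sr
    return abs(np.dot(audio, np.sin(2 * np.pi * freq * t))) / len(audio)
```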

Unfortunately, this ingenious method currently only works with voices generated by Resemble AI's own models. It may take some time for a universal solution to emerge and become a security standard.

Watermark-free AI banned in China

Since January 10, 2023, China has banned the creation of AI content without watermarking. This rule was issued by the Cyberspace Administration of China, the body also responsible for internet censorship.

The authorities point to the dangers posed by “deep synthesis technology.” While this innovation can certainly meet user needs, it can also be abused to spread illegal or dangerous information, tarnish reputations, or impersonate identities.

According to the official statement, AI-generated content endangers national security and social stability. Therefore, new products must be evaluated and approved by the authority before being commercialized.

The importance of watermarking to identify AI content is emphasized. Digital watermarks should not be removable, tampered with, or concealed. Furthermore, users must create accounts using their real names, and all generated content must be traceable back to its creators.

An AI capable of removing Watermarks

It is urgent to develop AI watermarking techniques, but unfortunately, AI can also be used to remove watermarks…

The free tool WatermarkRemover.io can remove digital watermarks from images. While it can be used for legitimate purposes, there is nothing to prevent it from being exploited maliciously…

This artificial intelligence makes it easy to erase complex watermarks, with multiple colors or opacity values. In the future, we may fear the emergence of tools capable of removing watermarks from AI-generated content.

What is the future of AI Watermarking?

Several advancements are needed to apply AI watermarking in the real world and build an ecosystem around the theoretical techniques invented by researchers.

First, further research is necessary to identify and refine the best techniques, establishing standards for various types of datasets.

Common standards must also be developed to integrate watermarking into the curation and publication of training datasets. This includes the introduction of watermarks into the data, the creation of reliable documentation, and the publication of the necessary data for verification.

Similarly, standards need to be developed for integrating watermarking steps into the training and publishing of machine learning models. Finally, a registry and tools must be developed to allow organizations to verify watermarks through audits.

Conclusion: AI Watermarking, a major challenge for tomorrow's world

In a few decades, habits will likely have changed. We will be so accustomed to the constant flow of texts, images, and videos generated by AI that knowing whether content was created by a human may no longer seem necessary.

However, AI watermarking remains imperative for copyright protection, combating bias and discrimination, preventing misinformation, and for cybersecurity reasons.

To become an expert in Machine Learning and contribute to the development of watermarking techniques, you can turn to DataScientest. Our training programs will provide you with all the skills needed to become a Machine Learning Engineer, Data Engineer, or Data Scientist.

All our programs are delivered entirely online, and our state-recognized organization is eligible for funding. Don't wait any longer and book an appointment with us!
