
Artificial intelligence: DeepFake detectors have become indispensable

- Reading Time: 2 minutes
Discover why DeepFake detectors have become indispensable in the landscape of artificial intelligence, safeguarding against the proliferation of synthetic media and ensuring the integrity of digital content. Explore their significance, challenges, and impact on the evolving AI landscape.

DeepFake detection tools are already essential in the face of the proliferation of videos falsified using artificial intelligence. Find out how far research in this area has progressed, and which methods existing tools rely on.

Thanks to artificial intelligence, it is now possible to modify a video or create one from scratch. The results are known as “DeepFakes”: deceptive videos created using Deep Learning. And this technology can be exploited for malicious purposes…

Discrediting a celebrity by pasting her face onto the body of an X-rated actress, manipulating crowds by putting words in a politician’s mouth… DeepFakes open up a wide range of possibilities for disinformation.

In the past, doctoring a video required the expertise of professionals and budgets of several million euros. Now, DeepFakes are within reach of any amateur: a convincing fake video can be produced in a matter of weeks.

In fact, and this is perhaps the most serious consequence, it is no longer possible to “believe what you see”. Every video on the internet is now potentially fake. In the age of DeepFake, every image we see has to be questioned, and doubt is de rigueur. And this phenomenon will only intensify over the coming years, as technology develops.

So, just as fact-checkers such as Le Monde’s “Décodeurs” verify the veracity of news articles on the internet, we need to deploy tools that detect DeepFakes and flag them as such.

For around three years, a field of research has been developing around the detection of DeepFakes. There are two main approaches.

How do DeepFake detectors work?

The first involves spotting suspicious behaviour by a person in a video. An AI can be fed a large number of authentic videos of a celebrity, so that it learns to immediately detect any anomalies in their gestures or speech.
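This first approach can be sketched as a one-class anomaly detector: fit a model on features extracted from authentic footage of one person, then flag clips whose behaviour falls outside that norm. The sketch below is a minimal illustration, not any specific tool’s method; the 8-dimensional “behavioural features” (head pose, blink rate, speech cadence, and so on) are a hypothetical stand-in, generated here as random numbers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-clip behavioural features (head pose, blink rate, ...),
# simulated here as draws around the person's genuine "baseline" behaviour.
rng = np.random.default_rng(42)
authentic = rng.normal(0.0, 1.0, size=(500, 8))

# Fit a one-class detector on authentic footage only.
detector = IsolationForest(random_state=0).fit(authentic)

# 1 = consistent with the authentic footage, -1 = flagged as anomalous.
typical_clip = np.zeros((1, 8))       # behaviour close to the baseline
strange_clip = np.full((1, 8), 8.0)   # gestures far outside the norm
print(detector.predict(typical_clip), detector.predict(strange_clip))
```

In practice the hard part is the feature extraction from raw video, which this sketch deliberately skips.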

The second, more general technique involves identifying the differences between DeepFakes and real videos.

For example, since DeepFakes are often created by compositing frames, successive frames can be compared to spot inconsistencies in the video. The same method can also be used to identify fraudulent audio content.
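The frame-comparison idea above can be illustrated with a crude sketch: score each frame transition by how much the pixels change, and flag transitions whose change is abnormally large, a rough proxy for a spliced-in frame. The threshold and the synthetic clip are illustrative assumptions, not values from any real detector.

```python
import numpy as np

def temporal_inconsistency(frames, threshold=10.0):
    """Return indices of frame transitions whose mean absolute pixel
    change exceeds `threshold` -- a crude proxy for splicing artefacts."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return np.where(diffs > threshold)[0]

# Synthetic clip: brightness drifts smoothly, except one pasted-in frame.
clip = [np.full((64, 64), float(i)) for i in range(20)]
clip[10] += 100.0  # simulate a composited frame
suspects = temporal_inconsistency(clip)
print(suspects)  # transitions into and out of the pasted frame
```

Real detectors look at far subtler cues (blending boundaries, lighting, compression fingerprints), but the principle, comparing what should be consistent across frames, is the same.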

Detection tools based on these two approaches already exist, but they are not yet very effective. Researchers will therefore need to improve and strengthen this technology before it can be made available to everyone to counter the coming wave of DeepFakes.

But time is running out. The tools to create DeepFakes are already available to anyone. It is therefore urgent to enable those on the front line to deal with this threat.

For example, researchers John Sohrawardi and Matthew Wright from the Rochester Institute of Technology have decided to work with journalists to help them combat disinformation. Their tool for detecting DeepFakes complements existing techniques for verifying information, such as cross-checking sources.

At the same time, technology giants such as Facebook and Microsoft are investing in technologies to understand and detect DeepFakes. These investments will accelerate research and buy precious time in the race against DeepFakes.

It will also be necessary to educate and warn the general public on a massive scale about this new danger. In the age of artificial intelligence, the fight against disinformation will be harder than ever.

Now you know what DeepFakes detectors are. For more information, check out our dossier on DeepFakes and the different ways in which hackers exploit AI.
