
How to recognize and stop AI hallucinations

2 min read

AI models have always “hallucinated” answers; it is part of what lets them be creative and invent stories. But now that generative AI is widely used as a source of information, experts are looking for ways to reduce, or even stop, these hallucinations.

What is the problem with AI hallucinations?

AI models have always had a tendency to “hallucinate” answers: they are built to produce a response rather than to say “I don’t know”.

Yet since these tools became available to the general public, many people have been using them daily as a quick, streamlined way to access information.

Now that these tools are used in high-stakes fields such as medicine and law, the problem of hallucination has become critical.

According to various research centres, the origin of hallucinations lies in the very design of large language models (LLMs). Trained on enormous volumes of text, they are unable to distinguish between what is factual and what is not: their core function is simply to predict which word comes next.
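To make this concrete, here is a minimal illustration of next-word prediction using a made-up probability table (the prompt and probabilities are invented for this sketch, not taken from any real model). The model only knows which continuation is likely, not which is true, which is exactly why hallucinations arise:

```python
import random

random.seed(0)

# Invented probabilities for the word following a prompt (a toy stand-in
# for what an LLM learns from its training data)
next_word_probs = {
    "The capital of Australia is": {"Canberra": 0.6, "Sydney": 0.35, "Melbourne": 0.05},
}

def predict_next_word(prompt: str) -> str:
    """Sample the next word from the model's learned distribution."""
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

# Often right, but sometimes confidently wrong: "Sydney" looks plausible
# from the training data's perspective, even though it is not factual
print([predict_next_word("The capital of Australia is") for _ in range(5)])
```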

How can we stop AI hallucinations?

A number of experiments have already been conducted to curb this tendency. The most recent involved MIT researchers creating a “society of minds” between AIs: the same question is put to several AIs, which then debate until a single answer prevails.
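As a rough illustration of how such a debate loop could be wired up, here is a hedged sketch. `ask_model` is a placeholder stub (no real LLM is called), and the convergence rule is a simplification: each toy model just leans toward its peers’ majority answer, so the group settles on whichever answer initially dominates.

```python
from collections import Counter
import random

random.seed(1)

def ask_model(model_id: int, question: str, peer_answers: list[str]) -> str:
    """Placeholder for a real LLM API call (none is made here). Each toy
    'model' simply leans toward the majority answer among its peers."""
    if peer_answers:
        return Counter(peer_answers).most_common(1)[0][0]
    return random.choice(["Paris", "Paris", "Lyon"])  # toy initial disagreement

def debate(question: str, n_models: int = 3, rounds: int = 3) -> str:
    # Round 0: each model answers independently
    answers = [ask_model(i, question, []) for i in range(n_models)]
    for _ in range(rounds):
        # Each model revises its answer after reading the others' answers
        answers = [ask_model(i, question, answers[:i] + answers[i + 1:])
                   for i in range(n_models)]
    return Counter(answers).most_common(1)[0][0]  # the answer that prevailed

print(debate("What is the capital of France?"))
```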

Other companies, such as Google and Microsoft, are also trying to make these generative AIs more “intelligent”.

A technique called reinforcement learning from human feedback (RLHF), in which human testers rate or improve a bot’s responses and feed those judgements back into the system, is widely credited with making ChatGPT much better than the chatbots that came before it.
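At the heart of this technique is a reward model trained on human preferences. Below is a minimal, hypothetical sketch of that preference-learning step in PyTorch, using random toy features instead of real response embeddings; a full RLHF pipeline would then fine-tune the chatbot itself against this reward model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for embeddings of (human-preferred, human-rejected) replies
dim = 8
preferred = torch.randn(100, dim) + 0.5
rejected = torch.randn(100, dim) - 0.5

# The reward model learns to score the preferred reply higher
reward_model = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))
optim = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for epoch in range(50):
    r_good = reward_model(preferred)
    r_bad = reward_model(rejected)
    # Pairwise (Bradley-Terry style) loss: maximise the probability that
    # the human-preferred reply receives the higher reward
    loss = -torch.nn.functional.logsigmoid(r_good - r_bad).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()

print(f"final pairwise loss: {loss.item():.3f}")
```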

Another popular approach is to connect chatbots to databases of factual or more reliable information, such as Wikipedia, Google Search, collections of academic articles, or bespoke business documents.
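Here is a minimal retrieval-augmented sketch of that idea, assuming a toy in-memory document store and TF-IDF retrieval; a production system would query Wikipedia, a search API, or a vector database instead. The point is that the chatbot is asked to answer from retrieved passages rather than from memory alone:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy in-memory document store
documents = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Python is a programming language created by Guido van Rossum.",
    "Mount Everest is the highest mountain above sea level.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(documents)
    scores = cosine_similarity(vectorizer.transform([question]),
                               vectorizer.transform(documents))[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

question = "How tall is the Eiffel Tower?"
context = "\n".join(retrieve(question))
# The grounded prompt that would be sent to the chatbot:
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```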

Another solution is to make AI less creative, as large companies are already doing. When Google generates search results using its chatbot technology, it also runs a normal search in parallel and compares the bot’s response with the traditional search results.

If they don’t match, the AI response doesn’t appear at all. The company has also modified its bot to be less creative: it is no longer very good at writing poems or holding interesting conversations, but it is less likely to lie.

By restricting its chatbot to corroborating existing search results, the company has been able to reduce the number of AI hallucinations and inaccuracies.
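The snippet below sketches that corroboration filter with a simple word-overlap score; the metric and the (deliberately strict) threshold are illustrative choices for this toy demo, not Google’s actual method:

```python
def token_overlap(answer: str, reference: str) -> float:
    """Fraction of the answer's words that also appear in the reference text."""
    answer_words = set(answer.lower().split())
    reference_words = set(reference.lower().split())
    return len(answer_words & reference_words) / max(len(answer_words), 1)

def corroborated_answer(bot_answer: str, search_snippets: list[str],
                        threshold: float = 0.9) -> str | None:
    """Show the AI answer only if at least one search snippet backs it up."""
    if any(token_overlap(bot_answer, s) >= threshold for s in search_snippets):
        return bot_answer
    return None  # disagreement: the AI response doesn't appear at all

snippets = ["The Eiffel Tower is 330 metres (1,083 ft) tall."]
print(corroborated_answer("The Eiffel Tower is 330 metres tall.", snippets))  # shown
print(corroborated_answer("The Eiffel Tower is 500 metres tall.", snippets))  # None
```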

However, hallucinations are not purely a flaw: they are also what allows AIs to be creative and imagine stories or scenarios. Are we reaching a turning point in generative AI, with sub-types beginning to emerge? Some would specialise in creating stories or scenarios, while others would serve as research and information tools.

If you enjoyed this article and are considering a career in Data Science, don’t hesitate to check out our articles or our training offers on DataScientest.

Source: washingtonpost.com
