
Exploring the Boundaries of Artificial Intelligence

3 min read · AI

Have you heard of Spot? It’s a robot deployed in Singapore’s parks to enforce social distancing. The news gave ammunition to those alarmed by the unchecked use of AI, and it’s true that the image of this robot weaving between residents to remind them of the rules could have come straight out of an episode of Black Mirror.

So let DataScientest walk you through the limits of artificial intelligence compared with human intelligence, to help you see things more clearly.

Why is artificial intelligence so worrying?

Having gradually worked its way into our daily lives through autonomous cars, connected objects and intelligent assistants such as Siri, Alexa or “Ok Google”, artificial intelligence now makes machines capable of driving us from point A to point B, and even of assisting with decision-making in the medical field.

So, in the midst of a pandemic, some fear that machines may gain too much influence over human life. One of the big questions, then, is where this progress will stop, and whether machines will overthrow us as in the worst Hollywood scenarios. Rest assured, we’re a long way from that!

Many people point to the mechanical autonomy machines have acquired in recent years as proof of a shift towards moral autonomy, what is known as strong AI. However, if we ourselves are incapable of explaining human feelings such as love, compassion or sadness, it’s unlikely that a machine will ever be able to do so.

The human brain is endowed with billions of neural connections, making it capable of adapting to and drawing on its environment, its own experience and that of the people around it.

Brain plasticity is a perfect example: a deaf person can repurpose their auditory cortex for a new task, whereas a machine, locked into its function, would be helpless and useless.

Technological and cognitive limits prevent machines from achieving complex reasoning outside the tasks for which they are designed. For the time being, they can neither acquire subjective experience nor truly communicate with the world around them.

To date, no machine has convincingly passed the Turing test, an experiment designed to determine whether a machine can pass itself off as a human being.

The key point: a lack of initiative

No artificial intelligence will be proactive and go beyond the limits of its program.

If you passed a domestic robot in the street on its way to fetch milk, it would be a mistake to think of it as human, because it would be incapable of understanding an assault if it witnessed one.

And even if it had been trained to recognize such situations, unless its program told it to intervene, it would continue on its way, unable to assess the seriousness of the situation, the victim’s distress or the danger.

We’re a long way from Skynet sending Terminators to threaten us, and there’s little risk of your smartphone rebelling other than by refusing to send a text message.

Every decision is the result of training to accomplish the task at hand, not of taking the initiative.

One of the best examples is that of autonomous cars, which in the event of danger will have to make crucial choices, even sacrificing some people to preserve as many lives as possible. Once again, we can see the limits of such technology: the program will take into account the number of lives saved rather than their age, whereas the latter may instinctively come into play for a human driver.

At present, machines do not interact with one another and are therefore incapable of making choices that challenge their limited basic functions. What’s more, these functions may run counter to the moral and social rules that define our society, which are the product of exchanges, organizations and developments that remain incomprehensible to machines.

It is therefore impossible to imagine leaving an artificial intelligence in charge of deciding who should or should not be treated in a crisis, whether a threatening person should be executed, which person should be given priority in a fire…

Indeed, the machine’s choices would be based on optimization, and would not take into account the complexity of such situations.

Nevertheless, according to an IFOP survey, most French people are unable to define artificial intelligence, which is to be expected given that it’s a new and specialized field. But the stakes are enormous!

We need to steer this technology in the right direction to serve the common good, while at the same time reducing fears and ignorance about its use and capabilities.

That’s why DataScientest has created an awareness-raising training course for managers in this field, with modules on privacy, data management…
