
AGI, or Artificial General Intelligence: What Is It?

- Reading Time: 3 minutes

For better or worse, the advent of autonomous superintelligence might sooner or later come to pass. The consequences for our civilization could surpass imagination...

This is a topic that regularly resurfaces in the spotlight: with the progress of new AI models such as GPT-4 Turbo or Llama 3, are we on the brink of the AGI revolution?

AGI is defined as an intelligence that is not specialized in any particular task. The term describes systems that could teach themselves to perform any task that humans might undertake and even outperform them. Their intelligence would span across any domain, without the need for prior human intervention. Is this a fantasy, or a potential reality?

For now, the topic of AGI is taken very seriously at the highest levels. OpenAI defined it in their 2018 charter as “highly autonomous systems that outperform humans at economically valuable work — for the benefit of all humanity”. However, OpenAI’s CEO, Sam Altman, has more recently softened the concept, speaking of “AI systems generally smarter than humans”, a seemingly easier milestone to achieve.

Difference Between AGI and AI

AGI is usually contrasted with narrow or specialized AI, which is designed to perform specific tasks or solve particular problems. Most of today’s AI is focused on a specific problem and can often solve it better than humans. IBM’s supercomputer Watson, applications such as ChatGPT or Midjourney, bank loan assessment systems, and those dedicated to diagnosing diseases are examples of narrow AI.

Let’s remember that a narrow AI of this kind defeated Garry Kasparov at chess more than 20 years ago. But it couldn’t mow the lawn, follow a recipe, or do anything else a human can do. An AGI would be able to carry out all of these tasks, which is why it is also referred to as strong artificial intelligence.

Will AGI occur in our lifetime?

Experts differ on when AGI might arrive. Turing Award winner Geoffrey Hinton believes that AGI could be less than 20 years away and would not pose an existential threat.

The CEO of Anthropic (Claude), Dario Amodei, has even stated that the arrival of AGI is a matter of a few years. Google DeepMind co-founder Shane Legg predicts that there is a 50% chance that AGI will arrive by 2028.

Futurist Ray Kurzweil has estimated that computers will reach human levels of intelligence by 2029 and then improve at an exponential rate, eventually operating at levels beyond human understanding and control. Kurzweil calls this point of superintelligence the singularity.

However, Turing Award winner Yoshua Bengio believes it could take unforeseen decades to achieve AGI. Google Brain co-founder Andrew Ng asserts that the industry is still “very far” from realizing systems intelligent enough to qualify as AGI.

Should we fear AGI?

While many experts remain skeptical that AGI is achievable at all, others are asking a different question: is it even desirable?

There is a lot of debate surrounding the potential risk of AGIs. Some believe that AGI systems will be inherently dangerous because they could invent their own plans and objectives. Others believe that the emergence of AGI will be a gradual and iterative process, and we will have time to build safeguards at each step.

If there’s one aspect of AGI that tends to worry us, it is its potential for total independence. The superintelligent systems of the future might operate without the supervision of a human operator and even work together towards goals they set for themselves. If AGI is applied to autonomous cars – which currently require a human presence to manage decision-making in ambiguous situations – who would be held responsible if things don’t go as planned? This question and many others are already on the agenda today.

Cosmologist Stephen Hawking warned of the dangers of AGI as early as 2014 in a BBC interview. “The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, limited by slow biological evolution, could not compete and would be superseded.”

More pragmatically, being capable of performing generalized tasks implies that AGI will impact the labor market much more than current AIs. An AGI that could read an X-ray, take into consideration a patient’s history, write a suitable recommendation, and kindly explain it to the patient, would be able to replace our doctors. The potential consequences for our civilization are immense.

Add to this the ability of AGIs to produce new AGIs, and we enter unpredictable territory that calls for immediate, serious, and potentially pre-emptive reflection.
