The history of artificial intelligence can be divided into two major phases. One of these is symbolic AI. What does this mysterious term encompass? What are the strengths and limitations of this approach?
Since ChatGPT burst onto the scene in late November 2022, generative AI has been in the spotlight. However, this type of service relies on a relatively recent approach to artificial intelligence whose training is particularly demanding in computing power. AI began with another model, symbolic AI, which, after a period of discredit, may very well regain some credibility…
What is symbolic AI?
The first attempts to simulate intelligence with a machine stemmed from the work of Alan Turing and take us back to the late 1950s. In reality, we must go back to the British philosopher Thomas Hobbes (1588 – 1679) to find the origins of such work. Hobbes believed that “thinking is manipulating symbols, and reasoning is calculating”. The French philosopher Descartes (1596 – 1650) went in the same direction, holding that the physical world could be described in the language of mathematics. According to him, all reality is mathematical.
Building on such theories, it became possible to envisage machines that manipulate symbols to emulate human thought. This discipline became known as symbolic AI. It effectively began in 1959, when researchers Herbert Simon, Allen Newell, and Cliff Shaw attempted to build a computer program, the General Problem Solver, capable of solving problems in a manner similar to humans.
In practice, symbolic AI strives to represent knowledge as rules applied to symbols that stand for objects or concepts in our world. The result is a form of logic-based programming.
A commonly given example is that of a medical diagnosis: IF a patient has frequent sneezing AND itchy eyes, THEN it is probably a seasonal allergy; OTHERWISE, move on to the next rule.
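To make this concrete, here is a minimal sketch of such a rule base in Python. The symptom names and the second rule are purely illustrative assumptions, not taken from any real expert system:

```python
# Minimal rule-based diagnosis sketch (illustrative; not a real expert system).
# Knowledge is encoded as IF/THEN rules over symbols describing the patient.

RULES = [
    (("frequent_sneezing", "itchy_eyes"), "seasonal allergy"),
    (("fever", "cough"), "flu"),  # hypothetical follow-up rule
]

def diagnose(symptoms):
    """Fire the first rule whose conditions all hold; otherwise fall through."""
    for conditions, conclusion in RULES:
        if all(symptoms.get(c, False) for c in conditions):
            return conclusion
    return "no matching rule"

print(diagnose({"frequent_sneezing": True, "itchy_eyes": True}))
# -> seasonal allergy
```

Everything the system "knows" sits in that explicit rule list, which is exactly why its conclusions are easy to trace, and exactly why it breaks down when a case matches no rule.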
This approach gave birth to expert systems and decision support systems.
What are the main applications of symbolic AI?
Symbolic AI has been utilized in many fields:
- Natural Language Processing (NLP) with assistants like Siri or Alexa,
- Medical diagnosis,
- Autonomous vehicles,
- Robots capable of avoiding obstacles and interacting with humans.
What led to the decline of symbolic AI?
Until the late 1980s, symbolic AI was a major path for research and application development. However, this approach gradually revealed its limitations.
- Since symbolic AI operates on programmed rules, its learning capacity is low: the system struggles to adapt when it encounters unforeseen situations.
- Symbolic AI requires an exhaustive knowledge base to function correctly. If that base is incomplete, its effectiveness suffers.
- Symbolic AI relies on precise knowledge representations and is easily thrown off by uncertain or ambiguous data.
- Accurate pattern recognition, and therefore applications such as biometrics, is out of reach for symbolic AI.
- Generating content that is both original and relevant is hard to conceive of with this approach.
What brought about the rise of machine learning?
Starting in the 1990s, another approach gained prominence: machine learning, most notably in the form of neural networks and, later, deep learning. In this approach, the system analyzes vast quantities of data and, through a long and partly empirical training process, tries to establish mathematical relationships between what is perceived and what is expected. Significant breakthroughs from 2010 onwards enabled the emergence of highly popular applications such as ChatGPT and Midjourney.
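To contrast this with the hand-written rules above, here is a toy sketch of that idea in plain Python, using made-up example data: the program is shown inputs and expected outputs and adjusts numeric parameters until a mathematical relationship emerges, without anyone writing it as a rule.

```python
# Toy illustration of "learning" a relationship from examples instead of
# coding it as a rule: fit y = w * x + b by gradient descent (made-up data).

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (input, expected)
w, b = 0.0, 0.0          # parameters start out knowing nothing
lr = 0.01                # learning rate

for _ in range(5000):    # the long, partly empirical training process
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x   # nudge parameters to reduce the error
        b -= lr * error

print(f"learned relationship: y = {w:.2f} * x + {b:.2f}")
# roughly y = 2 * x + 1, inferred from the data rather than written as a rule
```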
The catch is that these systems, built on massive data processing, can sometimes be baffling and prone to errors. You may have had to try several times before getting the desired visual from an application like Stable Diffusion, and occasionally such a generative AI can seem not to understand your requests at all. Indeed, neural AIs often operate in ways that lack transparent logic: it is the triumph of data quantity over pure reasoning.
Is there a possible return of symbolic AI?
However, symbolic AI may not have said its final word. Some players, including IBM, now aim to combine both approaches in what is called neuro-symbolic AI. Simply put, neural AI would handle pattern recognition, for example extracting information from raw data, and would be complemented by a symbolic AI that applies predictable logic to the analyzed data.
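A minimal sketch of that division of labor might look as follows; the `neural_classifier` function and its output format are stand-in assumptions, since in practice this role would be played by a trained neural network:

```python
# Sketch of the neuro-symbolic idea: a neural component recognizes patterns,
# and a symbolic component applies explicit, auditable rules to its output.

def neural_classifier(image_bytes):
    """Stand-in for a neural network that labels what it perceives."""
    return {"label": "stop_sign", "confidence": 0.97}  # hypothetical output

def symbolic_policy(perception):
    """Explicit rules applied to the neural output: predictable and traceable."""
    if perception["confidence"] < 0.9:
        return "slow down and request human confirmation"
    if perception["label"] == "stop_sign":
        return "brake to a full stop"
    return "continue"

print(symbolic_policy(neural_classifier(b"...")))
# -> brake to a full stop
```

The appeal is that the fuzzy perception step and the decision step are separated: if the system makes a bad decision, the rule that produced it can be read and amended directly.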
It turns out that symbolic AI has real strengths. Since its reasoning is explicitly coded, it is easy for us to understand how it reached a conclusion and, if necessary, to amend it. Moreover, it is far less resource-intensive and therefore more eco-friendly: it consumes on average 143 times less energy than a machine learning model. Symbolic AI is perfectly suited to certain specific applications, such as email filtering.
Thus, after nearly two decades of discredit, symbolic AI seems poised to make a grand comeback.