Artificial intelligence is going to change the world, yet it remains misunderstood by many people. In this guide, discover everything you need to know about AI: its definition, how it works, its history, its main categories, its use cases, and its applications…
Artificial intelligence (AI) is such a vast and revolutionary technology that it is difficult to give a precise definition. It can be considered a branch of computer science, with the goal of creating machines capable of performing tasks traditionally requiring human intelligence.
However, AI is an interdisciplinary science with multiple approaches. Today, Machine Learning and Deep Learning are two of its techniques used by companies across all industries.
What is Artificial Intelligence?
In 1950, mathematician Alan Turing asked himself, “Can machines think?”
In reality, this simple question would change the world.
His article “Computing Machinery and Intelligence” and the resulting “Turing Test” laid the foundations of artificial intelligence, its vision, and its goals.
The goal of artificial intelligence is to answer Turing’s question affirmatively.
Its aim is to replicate or simulate human intelligence in machines.
This is an ambitious goal that raises many questions and is the subject of debate. That is why there is not yet a single definition of artificial intelligence.
The description of “intelligent machines” does not explain what artificial intelligence really is or what makes a machine intelligent. To address this problem, Stuart Russell and Peter Norvig published the book “Artificial Intelligence: A Modern Approach,” in which the two experts bring together their work on intelligent agents in machines. According to them, “AI is the study of agents that receive percepts from the environment and perform actions.”
From their perspective, there are four distinct approaches that have historically defined the field of artificial intelligence:
- Human thought
- Rational thought
- Human action
- Rational action
The first two approaches are related to reasoning and thought processing, while the latter two have to do with behavior. In their book, P. Norvig and S. Russell focus primarily on rational agents capable of acting to achieve the best result.
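To make this agent view concrete, here is a minimal sketch in Python. The thermostat scenario and every name in it are our own toy illustration of a percept-to-action mapping, not an example taken from Russell and Norvig’s book.

```python
# A toy "rational agent": it receives a percept from its environment
# (the current temperature) and chooses the action expected to give
# the best result (keeping the room at the target temperature).
class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target = target_temp

    def act(self, percept: float) -> str:
        """Map a percept to an action."""
        return "heat_on" if percept < self.target else "heat_off"

agent = ThermostatAgent(target_temp=20.0)
for temperature in [17.5, 19.0, 21.2]:  # percepts from the environment
    print(temperature, "->", agent.act(temperature))
```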
MIT Professor of Artificial Intelligence Patrick Winston defines AI as “constraint-driven algorithms exposed by representations supporting models linking thought, perception, and action”.
Another modern definition:
Machines that respond to stimuli in the way humans do, with the ability to contemplate, judge, and intend. These systems can make decisions that normally require a human level of expertise. They have three qualities that constitute the essence of artificial intelligence: intentionality, intelligence, and adaptability.
These different definitions may seem abstract and complex. However, they establish artificial intelligence as a field of computer science.
In 2017, during the Japan AI Experience, DataRobot CEO Jeremy Achin gave his own modern, humorous definition of AI: “Artificial intelligence is a computer system capable of performing tasks that normally require human intelligence… a lot of these AI systems rely on Machine Learning, some on Deep Learning, and some on very boring things like rules.”
What are the uses of Artificial Intelligence?
Artificial intelligence serves several purposes, including learning, reasoning, and perception. It is used in every industry, so much so that its applications are impossible to list exhaustively.
In healthcare, it is used to develop personalized treatments, discover new drugs, and analyze medical images such as X-rays and MRIs. Virtual assistants can also support patients, reminding them to take their medication or to exercise in order to stay fit.
The retail sector uses AI to deliver personalized recommendations and advertising to customers. It also makes it possible to optimize product placement and better manage inventory.
In factories, artificial intelligence analyzes data from IoT equipment to predict load and demand using Deep Learning. It also makes it possible to anticipate malfunctions and intervene early.
Banks, for their part, exploit AI for fraud prevention and detection. The technology also makes it possible to assess whether a customer will be able to repay the credit they request, and to automate data management tasks.
These are just a few examples of industries using artificial intelligence. As you can see, this revolutionary technology will transform every sector in the coming years…
The history of Artificial Intelligence
The history of artificial intelligence began in 1943 with the publication of the article “A Logical Calculus of the Ideas Immanent in Nervous Activity” by Warren McCulloch and Walter Pitts. In this paper, the scientists presented the first mathematical model for building a neural network.
The first neural network computer, SNARC, was created in 1950 by two Harvard students, Marvin Minsky and Dean Edmonds. That same year, Alan Turing published the paper introducing the Turing Test, which is still used today to evaluate AI.
In 1952, Arthur Samuel created a program capable of learning to play checkers on its own. The term “artificial intelligence” was used for the first time by John McCarthy in 1956, at the Dartmouth Summer Research Project on Artificial Intelligence conference.
During this event, researchers presented the goals and vision of AI. Many consider this conference the true birth of artificial intelligence as it is known today.
In 1959, Arthur Samuel coined the term “Machine Learning” while working at IBM. John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project. In 1963, John McCarthy also created the AI Lab at Stanford University.
In the years that followed, doubt cast a chill over the field. In 1966, the ALPAC report highlighted the lack of progress in machine translation research, which aimed to translate Russian instantly in the context of the Cold War. Many U.S. government-funded projects were canceled.
Similarly, in 1973, the British government published the “Lighthill report,” highlighting the disappointments of AI research. Once again, budget cuts gutted research projects. This period of doubt lasted until 1980 and is now known as the “first AI winter.”
The winter ended with the creation of R1 (XCON) by Digital Equipment Corporation. This commercial expert system, designed to configure orders for new computer systems, triggered a genuine investment boom that would continue for more than a decade.
Unfortunately, the Lisp machine market collapsed in 1987 with the appearance of cheaper alternatives. This was the “second AI winter”: companies lost interest in expert systems, the American and Japanese governments abandoned their research projects, and billions of dollars had been spent for nothing.
Ten years later, in 1997, the history of AI was marked by a major event: IBM’s Deep Blue triumphed over world chess champion Garry Kasparov. For the first time, man was defeated by machine.
Another ten years on, technological advances enabled a revival of artificial intelligence. In 2008, Google made tremendous progress in speech recognition and launched the feature in its smartphone apps.
In 2012, Andrew Ng fed a neural network a training data set of 10 million YouTube videos. Through Deep Learning, the network learned to recognize a cat without ever being taught what a cat is. This was the beginning of a new era for Deep Learning.
AI scored another win over humans in 2016, when Google DeepMind’s AlphaGo system defeated Go champion Lee Sedol. Artificial intelligence has also conquered video games, with DeepMind’s AlphaStar in StarCraft and OpenAI Five in Dota 2.
Deep Learning and Machine Learning are now used by companies in all industries for a multitude of applications. AI continues to grow and surprise with its performance. The dream of general artificial intelligence is getting closer and closer to reality…
General AI vs Specialized AI
General artificial intelligence is a type of AI that is capable of performing a wide range of tasks, rather than being limited to a specific set of functions. It is also referred to as “strong AI” or “human-level AI.” In theory, a general AI system would be able to perform any intellectual task that a human being can, such as understanding natural language, learning, planning, and problem-solving.
The creation of a general AI thus remains, for the moment, the “Holy Grail” of AI research: an ambitious quest, but one full of pitfalls. Despite technical advances, it remains very difficult to design a machine with complete cognitive abilities.
Specialized artificial intelligence, on the other hand, is designed to perform a specific task or set of tasks. It is also known as “narrow AI,” “weak AI,” or “task-specific AI.” For example, a specialized AI system can be trained to recognize objects in images, translate text from one language to another, or play chess at a high level (like IBM’s Deep Blue). Specialized AI systems are currently far more common than general AI systems, as they can be trained to perform specific tasks with a high degree of accuracy.
However, even if such a machine may seem intelligent, it is far more limited than human intelligence, of which it is only an imitation.
Examples include Google’s web search engine, image recognition software, virtual assistants like Apple Siri or Amazon Alexa, autonomous vehicles, and software like IBM Watson.
Machine Learning and Deep Learning: What Is the Difference?
Machine learning and Deep Learning are the two main artificial intelligence techniques currently used. The distinction between Artificial Intelligence, Machine Learning, and Deep Learning can be confusing.
In reality, artificial intelligence can be defined as a set of algorithms and techniques aimed at imitating human intelligence. Machine Learning is one such technique, and Deep Learning is itself a Machine Learning technique.
Machine Learning is a category of AI that involves feeding a computer with data. It applies analysis techniques to this data in order to “learn” how to perform a task. To do this, the machine does not need to be explicitly programmed with millions of lines of code; it learns from the data itself, which is why the process is described as “automatic” learning.
Machine Learning can be “supervised” or “unsupervised.” Supervised learning relies on labeled data sets, in which each input is paired with the expected output, while unsupervised learning works on unlabeled data sets.
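To make the distinction concrete, here is a minimal sketch of both settings in Python. The use of scikit-learn and of the classic Iris data set are our own illustrative choices; the article does not name any particular tool.

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)  # 150 flower measurements, with species labels

# Supervised: the model is trained on inputs AND their labels.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted species:", clf.predict(X[:1]))

# Unsupervised: the model sees only the inputs and must find structure itself.
km = KMeans(n_clusters=3, n_init=10).fit(X)  # the labels y are never used
print("discovered cluster:", km.predict(X[:1]))
```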
Deep Learning is a type of Machine Learning directly inspired by the architecture of the neurons of the human brain. An artificial neural network is composed of multiple layers through which data is processed, allowing the machine to go “deep” in its learning: it identifies connections and weights the ingested data to achieve the best results.
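The following toy sketch, in plain NumPy (our own illustrative choice), shows what “multiple layers” means: the input is transformed successively, layer by layer. The weights here are random, so the output is meaningless; a real network learns them from data, for example by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, n_out):
    """One layer: a linear transform followed by a ReLU non-linearity."""
    w = rng.normal(size=(x.shape[-1], n_out))  # random, untrained weights
    return np.maximum(0.0, x @ w)

x = rng.normal(size=(1, 8))   # one input example with 8 features
h1 = dense_layer(x, 16)       # first hidden layer
h2 = dense_layer(h1, 16)      # second hidden layer: a "deeper" representation
out = dense_layer(h2, 1)      # output layer
print(out.shape)              # (1, 1)
```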
The dangers of Artificial Intelligence
Artificial intelligence holds many promises for humanity… but it could also pose a more dangerous threat than the nuclear bomb.
“Through its ability to learn and evolve autonomously, AI could one day surpass human intelligence. It could then decide to turn against its creators.”
This grim omen may sound like a science fiction film, but it is a real possibility. Eminent experts such as Stephen Hawking, Elon Musk, and Bill Gates have already sounded the alarm about artificial intelligence.
They see AI as a major risk for the years to come. That is why they are calling on governments to regulate the field so that it develops ethically and safely. More than 100 experts have also called on the United Nations to ban “killer robots” and other autonomous military weapons.
However, other experts believe that the future of artificial intelligence depends solely on how humans choose to use it. Even seemingly harmless AI can be hijacked and misused. We can already see this with the rise of “DeepFakes”: fake videos created with Deep Learning to stage a person in a compromising situation.
Artificial intelligence will continue to develop rapidly over the next few years. It is up to humanity to choose which direction this development will take…
Now you know all about artificial intelligence. Next, discover our complete dossier on Data Science and take a closer look at Machine Learning.