Python is one of the most widely used programming languages, and its ecosystem includes many frameworks, several of them developed specifically for Data Science. In this article, we will look at one of them in detail: PyTorch.
In the last few years, the popularity of Data Science has been growing steadily, and this has led to an explosion in the resources available to programmers: it is no longer necessary to code every model by hand. Programming environments such as PyTorch, known as “frameworks”, make it possible to build and use complex models in just a few lines of code.
What’s the origin of PyTorch?
Frameworks exist to make programming easier. They are usually developed as “open source”, meaning the code can be read and modified by anyone, which promotes reliability, transparency and continuous maintenance.
PyTorch is no exception to this rule. Built on the earlier Torch library, PyTorch was officially launched in 2016 by a team from Facebook’s research lab and has been developed as open source ever since. The goal of this framework is to make it simple and efficient to implement and train Deep Learning models. Its merger in 2018 with Caffe2 (another Python framework) further improved its performance.
PyTorch is now used by 17% of Python developers (Python Foundation 2020 study) and at many companies, including Tesla and Uber.
Why use PyTorch?
PyTorch is a relatively new machine learning library, but it already offers extensive documentation and tutorials full of examples. It also has a community that is growing by leaps and bounds.
PyTorch provides a very simple interface for creating neural networks. Even though you work directly with tensors, there is no need for a higher-level library such as Keras (which sits on top of Theano or TensorFlow).
Unlike other machine learning tools such as TensorFlow, PyTorch works with dynamic rather than static computational graphs. This means the network can be modified at runtime, and the computation of gradients adapts accordingly. By contrast, in TensorFlow’s original graph mode, one must first define the computational graph and then run a session to compute tensor values, which makes debugging the code more difficult and implementation more tedious.
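This dynamic behavior can be seen in a short sketch: an ordinary Python `if` inside a function decides, at each call, which operations are recorded in the graph, and gradients follow the branch actually taken. (The function and values below are illustrative, not from the article.)

```python
import torch

def forward(x):
    # The graph is built on the fly: this Python `if` decides, at each
    # call, which operations become part of the computation graph.
    if x.sum() > 0:
        y = x * 2        # branch recorded when the input sums positive
    else:
        y = x ** 2       # alternative branch for non-positive inputs
    return y.sum()

x = torch.tensor([1.0, 2.0], requires_grad=True)
loss = forward(x)        # x.sum() == 3 > 0, so the x * 2 branch is taken
loss.backward()          # gradients flow through the branch actually run
print(x.grad)            # tensor([2., 2.])
```

Running the same function on a tensor with a negative sum would record the other branch and produce different gradients, with no graph redefinition needed.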
PyTorch is compatible with graphics cards (GPUs). Internally, it relies on CUDA, the parallel computing platform and API developed by NVIDIA for running computations on the GPU.
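In practice, moving work to the GPU takes one line: you pick a device (falling back to the CPU when no GPU is present) and send tensors to it with `.to()`. A minimal sketch:

```python
import torch

# Select the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors (and models) are moved to the device with .to(); operations
# between tensors on the same device are then executed there.
x = torch.randn(3, 3).to(device)
y = torch.randn(3, 3).to(device)
z = x @ y  # matrix product, run on the GPU if one was found
print(z.device)
```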
Advantages of PyTorch
PyTorch has many advantages, here are the main ones:
1. PyTorch and Python
Most of the work in Machine Learning and Artificial Intelligence is done in Python. Since PyTorch is deeply tied to Python and follows its idioms, Python developers should feel more comfortable coding with PyTorch than with other Deep Learning frameworks.
2. Easy to learn
Like the Python language itself, PyTorch is considered relatively easy to learn compared to other frameworks. The main reason is its simple and intuitive syntax.
3. Strong community
Although PyTorch is a relatively new framework, it has developed a dedicated community of developers very quickly. Moreover, PyTorch documentation is very organized and useful for beginners.
4. Easy debugging
PyTorch is deeply integrated with Python, so much so that many Python debugging tools can be easily used with it.
PyTorch vs Keras vs TensorFlow: what are the differences?
It seems difficult to introduce PyTorch without mentioning the other alternatives, all created within a few years of each other with the same goal but different methods.
Keras was developed in March 2015 by François Chollet, a researcher at Google. Keras quickly gained popularity thanks to its easy-to-use API, which is heavily inspired by scikit-learn, the standard Machine Learning library in Python.
A few months later, in November 2015, Google released the first version of TensorFlow, which quickly became the reference framework for Deep Learning, notably because it could serve as a backend for Keras. TensorFlow also developed a number of low-level Deep Learning features that researchers needed to build complex neural networks.
Keras was therefore very simple to use but lacked some of the “low-level” features and customizations needed for state-of-the-art models, while TensorFlow did not follow the usual Python style and had documentation that beginners found very complicated.
PyTorch solved these problems by creating an API that is both accessible and easy to customize, allowing the creation of new types of networks, optimizers and architectures. However, recent developments in these frameworks have brought the way they work much closer together.
PyTorch and Deep Learning: do they work well together?
We have talked about the complexity of models and networks without mentioning the execution speed of algorithms. Indeed, PyTorch is designed to minimize this time and to take full advantage of the underlying hardware.
PyTorch represents data as multidimensional arrays called “tensors”, similar to NumPy arrays. Tensors store the inputs of the neural network, the parameters of the hidden layers and the outputs. From these tensors, PyTorch performs, efficiently and behind the scenes, four steps to train the network:
- Assemble a graph from the tensors of the neural network. This structure is dynamic: the network (number of nodes, connections between them…) can be modified during training.
- Perform the predictions of the network (the forward pass).
- Compute the loss, i.e. the error of the predictions.
- Traverse the network in the opposite direction (“backpropagation”) and adjust the tensors so that the network makes more accurate predictions based on the computed loss.
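The four steps above map almost line for line onto a standard PyTorch training loop. A minimal sketch, using a hypothetical toy network and random data purely for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: a tiny network and random data, just to show
# the four steps (graph assembly, forward pass, loss, backpropagation).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(16, 4)
targets = torch.randn(16, 1)

for epoch in range(5):
    optimizer.zero_grad()                 # reset gradients from the last step
    predictions = model(inputs)           # forward pass (graph built here)
    loss = loss_fn(predictions, targets)  # compute the loss / error
    loss.backward()                       # backpropagation: compute gradients
    optimizer.step()                      # adjust the parameters (tensors)
```

Note that the graph is rebuilt on every forward pass, which is exactly what makes the dynamic behavior described earlier possible.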
This PyTorch component, called “Autograd”, is highly optimized and supports GPUs and data parallelism, which speeds up computation considerably. Moreover, it can be used on any cloud, whereas TensorFlow is optimized mainly for Google Cloud and its TPUs (Tensor Processing Units).
Recently, in partnership with AWS (Amazon Web Services), PyTorch released two new tools. The first, TorchServe, manages the deployment of already trained neural networks. The second, TorchElastic, allows PyTorch to run on Kubernetes clusters while remaining fault-tolerant.
These three frameworks each have their own strengths; PyTorch in particular is well suited to complex and deep neural networks. To learn how to use the framework, we offer Data Scientist training with modules dedicated to Deep Learning with PyTorch.