Apache Spark is a unified, ultra-fast analytics engine for large-scale data processing. It enables large-scale analysis by distributing work across a cluster of machines, and it is mainly dedicated to Big Data and Machine Learning.
What is Apache Spark?
For the curious, let’s go back to the creation of Apache Spark!
It all started in 2009. Spark was designed by Matei Zaharia, a Canadian computer scientist, during his PhD at the University of California, Berkeley. Initially, it was developed as a way to accelerate processing on Hadoop systems.
Today it is a project of the Apache Software Foundation. Since 2009, more than 1,200 developers have contributed to the project, some of them from well-known companies like Intel, Facebook, IBM and Netflix.
In 2014, Spark officially set a new record in large-scale sorting. It won the Daytona GraySort competition by sorting 100 TB of data in just 23 minutes. The previous world record was 72 minutes, set by Yahoo using a 2,100-node Hadoop MapReduce cluster, while Spark used only 206 nodes. In other words, it sorted the same data three times faster with ten times fewer machines.
Furthermore, while there is no official petabyte sorting competition, Spark went even further by sorting 1 PB of data (10 trillion 100-byte records) on 190 machines in less than four hours.
This was one of the first petabyte-scale sorts ever run in a public cloud. The benchmark marked a significant milestone for the Spark project, proving that Spark delivers on its promise: a faster, more scalable engine for processing data of all sizes, from gigabytes to terabytes to petabytes.
Apache Spark: the largest open source Big Data project
Originally developed at UC Berkeley in 2009, Apache Spark is a unified analytics engine for Big Data and Machine Learning. The tool is distinguished by its impressive speed and ease of use.
Since its launch, Apache Spark has been adopted by many companies in a wide variety of industries. Internet giants like Netflix, Yahoo and eBay have deployed Spark and are processing multiple petabytes of data on clusters of over 8,000 nodes.
In just a few years, Apache Spark has quickly become the largest open source Big Data project. It has over 1000 contributors from more than 250 organizations.
This 100% open source project is hosted by the Apache Software Foundation. Note, however, that "Apache Spark", "Spark" and the Spark logo are trademarks of the ASF.
As a non-profit organization, the ASF must take precautions about how its trademarks are used by organizations. In particular, it must ensure that its software products are clearly distinguishable from all potential third-party products.
Companies wishing to provide Apache Spark-based software, services, events, and other products should refer to the foundation’s trademark policy and FAQ.
Commercial or open source software products are not allowed to use Spark in their name, except as “powered by Apache Spark” or “for Apache Spark”. Strict rules must be followed.
Names derived from “Spark” such as “Sparkly” are also not allowed, and company names may not include “Spark”. Package identifiers may contain the word “spark”, but the full name used for the software package must follow the rules.
Written material must refer to the project as "Apache Spark" at first mention, and logos derived from Spark's logo are not allowed. Finally, domain names containing "Spark" may not be used without written permission from the Apache Spark PMC.
What are the benefits of Spark?
As you might have guessed, the main advantage of Spark is its speed. Spark was designed from the ground up with performance in mind, relying on in-memory computing and other optimizations.
It is often cited as being up to 100 times faster than Hadoop MapReduce for certain workloads, while using fewer resources and offering a simpler programming model.
Developers mainly highlight how quickly it executes tasks compared to MapReduce.
Spark is also known for its ease of use and sophisticated analytics, offering easy-to-use APIs for working on large datasets.
In addition, Spark is versatile: it includes a stream processing module and a graph processing system, lets you develop applications in Java, Scala, Python and R in a simplified way, and supports SQL queries.
The engine includes numerous high-level libraries that support SQL queries, streaming data, machine learning and graph processing. These standard libraries make developers more productive and can easily be combined in the same application to create complex workflows.
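To make this concrete, here is a minimal PySpark sketch that mixes the DataFrame API and plain SQL in the same application. The sales.csv file and its product and amount columns are illustrative, not part of any real dataset:

```python
from pyspark.sql import SparkSession

# Entry point for the DataFrame and SQL APIs; runs locally for the demo.
spark = SparkSession.builder.appName("demo").master("local[*]").getOrCreate()

# Hypothetical input file with "product" and "amount" columns.
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

# The same question answered with the DataFrame API...
top_products = (sales.groupBy("product")
                     .sum("amount")
                     .orderBy("sum(amount)", ascending=False))

# ...and with plain SQL, combined freely in one application.
sales.createOrReplaceTempView("sales")
top_products_sql = spark.sql(
    "SELECT product, SUM(amount) AS total "
    "FROM sales GROUP BY product ORDER BY total DESC")

top_products_sql.show()
spark.stop()
```

Both queries go through the same engine, so the choice between the two styles is purely a matter of taste.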
Finally, Spark achieves high performance for both batch and streaming data thanks to a DAG scheduler, a query optimizer and a physical execution engine.
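You can observe this machinery yourself: calling explain() on a DataFrame prints the physical plan chosen by the optimizer before anything runs. This is a minimal sketch with an arbitrary query:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.master("local[*]").getOrCreate()

# A trivial query over a generated one-column DataFrame of ids.
df = spark.range(1_000_000)
filtered = df.filter(col("id") % 2 == 0)

# Prints the optimized physical plan; nothing executes until an
# action such as count() or show() is called.
filtered.explain()

spark.stop()
```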
The differences between Spark and MapReduce
Let’s quickly define what MapReduce is:
It is a programming model introduced by Google. MapReduce makes it possible to manipulate large amounts of data by distributing the processing across a cluster of machines.
MapReduce is very popular with companies that have large data processing centers, such as Amazon or Facebook. Various frameworks have been created to implement it; the best known is Hadoop, developed by the Apache Software Foundation.
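To make the model concrete, here is the classic word-count example, written here with Spark's RDD API, whose flatMap/map/reduceByKey operations mirror the map and reduce phases. It is a minimal local sketch with made-up input:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")

# Made-up input lines standing in for a large text dataset.
lines = sc.parallelize(["to be or not to be", "to do is to be"])

counts = (lines.flatMap(lambda line: line.split())   # split lines into words
               .map(lambda word: (word, 1))          # map phase: emit (word, 1)
               .reduceByKey(lambda a, b: a + b))     # reduce phase: sum per word

print(counts.collect())   # e.g. [('to', 4), ('be', 3), ('or', 1), ...]
sc.stop()
```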
Moreover, with MapReduce, specifying each iteration remains the programmer's responsibility, and the mechanisms dedicated to failure recovery, which write intermediate results to disk, hurt performance. Spark uses a very different method: it places datasets in RAM and avoids the penalty of repeated disk writes.
Thus, Spark supports in-memory processing, which boosts the performance of Big Data analytics applications and therefore their speed. It performs data analysis operations in memory and relies on disk only when memory is insufficient. In contrast, Hadoop writes to disk after each operation and works in successive stages.
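Here is a minimal sketch of how an application opts into this behavior, assuming a hypothetical events.parquet file with a level column: calling cache() keeps the dataset in memory after it is first computed, so later actions read from RAM rather than disk.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Hypothetical dataset; the path and "level" column are illustrative.
events = spark.read.parquet("events.parquet")

# Ask Spark to keep the DataFrame in memory once computed; if it does
# not fit, Spark transparently spills the remainder to disk.
events.cache()

# Both actions below reuse the cached data instead of re-reading the file.
errors = events.filter(events.level == "ERROR").count()
warnings = events.filter(events.level == "WARN").count()

spark.stop()
```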
Who uses Spark?
Since its release, the unified analytics engine has seen rapid adoption by companies in various industries. Internet stalwarts such as Netflix, Yahoo and eBay have deployed Spark on a massive scale.
Currently, Spark has more than 1,200 contributors, including engineers from companies such as Intel, Facebook and IBM, making it one of the largest open source communities in the world of Big Data.
It makes it possible to unify Big Data applications on a single engine. Spark is also well suited to real-time marketing campaigns, online product recommendations and cybersecurity.
What are the different tools in Spark?
- Spark SQL allows users to execute SQL queries to query and transform structured data.
- Spark Streaming provides stream processing, letting applications work with real-time data.
- Spark GraphX processes graph data.
- Spark MLlib is a machine learning library containing the classical learning algorithms and utilities, such as classification, regression, clustering, collaborative filtering and dimensionality reduction (see the sketch just after this list).
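As an illustration of MLlib, here is a minimal sketch that trains a logistic-regression classifier on a tiny, invented dataset (the feature vectors and labels are made up for the example):

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Tiny invented training set: (feature vector, label).
train = spark.createDataFrame(
    [(Vectors.dense([0.0, 1.1]), 0.0),
     (Vectors.dense([2.0, 1.0]), 1.0),
     (Vectors.dense([2.0, 1.3]), 1.0),
     (Vectors.dense([0.0, 1.2]), 0.0)],
    ["features", "label"])

# Fit the classifier and apply it back to the training data.
model = LogisticRegression(maxIter=10).fit(train)
model.transform(train).select("features", "prediction").show()

spark.stop()
```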
The Apache Spark project is still alive and kicking! Many companies worldwide use it on a daily basis, and it has become an essential tool in the fields of Big Data and Data Science!
If you are interested in this field, do not hesitate to contact our experts to learn more about our training courses in Data Science and Big Data!