
Introducing Apache Spark
Hadoop and MapReduce (MR) have been around for a decade and have proven to be a high-performance solution for processing massive amounts of data. However, MR lacks performance in iterative computing, where the output of one MR job has to be written to HDFS before the next job can consume it. Even a single MR job suffers from the inherent drawbacks of the MR framework, which are discussed in the MapReduce issues section below.
Let's take a look at the history of computing trends to understand how computing paradigms have changed over the last two decades.
The trend has been to Reference the URI when the network was cheaper (in the 1990s), to Replicate when storage became cheaper (in the 2000s), and to Recompute when memory became cheaper (in the 2010s), as shown in Figure 2.5:

Figure 2.5: Trends of computing
Note
So, what really changed over a period of time?
Tape is dead, disk has become the new tape, and SSD has almost become the new disk. Caching data in RAM is the current trend.
Let's understand why memory-based computing is important and how it provides significant performance benefits.
Figure 2.6 indicates the data transfer rates from various media to the CPU: disk to CPU is 100 MB/s, SSD to CPU is 600 MB/s, and network to CPU ranges from 1 MB/s to 1 GB/s. RAM-to-CPU transfer, however, is astonishingly fast at 10 GB/s. So, the idea is to cache all or part of the data in memory so that higher performance can be achieved:

Figure 2.6: Why memory?
Spark history
Spark started in 2009 as a research project in the UC Berkeley RAD Lab, which later became the AMPLab. The researchers in the lab had previously been working on Hadoop MapReduce and observed that MR was inefficient for iterative and interactive computing jobs. Thus, from the beginning, Spark was designed to be fast for interactive queries and iterative algorithms, bringing in ideas such as support for in-memory storage and efficient fault recovery.
In 2011, the AMPLab started to develop higher-level components on Spark such as Shark and Spark Streaming. These components are sometimes referred to as the Berkeley Data Analytics Stack (BDAS).
Spark was first open sourced in March 2010 and transferred to the Apache Software Foundation in June 2013.
In February 2014, it became a top-level project at the Apache Software Foundation. Spark has since grown into one of the largest open source communities in Big Data, with over 250 contributors from more than 50 organizations contributing to its development. The user base has grown tremendously, from small companies to Fortune 500 companies. Figure 2.7 shows the history of Apache Spark:

Figure 2.7: The history of Apache Spark
What is Apache Spark?
Let's understand what Apache Spark is and what makes it a force to reckon with in Big Data analytics:
- Apache Spark is a fast enterprise-grade large-scale data processing engine, which is interoperable with Apache Hadoop.
- It is written in Scala, which is both an object-oriented and functional programming language that runs in a JVM.
- Spark enables applications to distribute data reliably in-memory during processing. This is the key to Spark's performance, as it allows applications to avoid expensive disk access and to perform computations at memory speed.
- It is suitable for iterative algorithms, because every iteration can access the data from memory instead of re-reading it from disk (a short caching sketch follows this list).
- Spark programs run up to 100 times faster than MR in-memory, or 10 times faster on disk (http://spark.apache.org/).
- It provides native support for Java, Scala, Python, and R languages with interactive shells for Scala, Python, and R. Applications can be developed easily, and often 2 to 10 times less code is needed.
- Spark powers a stack of libraries, including Spark SQL and DataFrames for interactive analytics, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for real-time analytics. You can combine these features seamlessly in the same application.
- Spark runs on Hadoop (YARN), Apache Mesos, or its own standalone cluster manager, either on-premises hardware or in the cloud.
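To make the in-memory and iterative points above concrete, here is a minimal Scala sketch. It is not taken from the text: the file name events.log, the ERROR filter, and the iteration count are illustrative assumptions. The dataset is cached once and every subsequent iteration reads it from memory:

```scala
import org.apache.spark.sql.SparkSession

object CachingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("caching-sketch")
      .master("local[*]")                 // local mode, for illustration only
      .getOrCreate()

    // Hypothetical input file; any Hadoop-supported path would work here.
    val lines = spark.read.textFile("events.log").cache()

    // The first action materializes the cache; later iterations read from memory.
    for (i <- 1 to 5) {
      val errors = lines.filter(_.contains("ERROR")).count()
      println(s"Iteration $i: $errors error lines")
    }

    spark.stop()
  }
}
```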
What Apache Spark is not
Hadoop provides HDFS for storage and MR for compute. However, Spark does not provide any specific storage medium. Spark is mainly a compute engine; during processing, data can be held in memory or stored on Tachyon.
Spark has the ability to create distributed datasets from any file stored in the HDFS or other storage systems supported by Hadoop APIs (including your local filesystem, Amazon S3, Cassandra, Hive, HBase, Elasticsearch, and others).
It's important to note that Spark is not Hadoop and does not require Hadoop to run. It simply supports storage systems that implement the Hadoop APIs. Spark supports text files, sequence files, Avro, Parquet, and any other Hadoop InputFormat.
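As a brief illustration of this storage flexibility, the following Scala sketch reads from three different storage systems through the same API. The paths and bucket name are placeholders, not real endpoints:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("storage-sketch").getOrCreate()

// The same reader API works against any Hadoop-supported storage system.
val localText = spark.read.textFile("file:///tmp/sample.txt")    // local filesystem
val hdfsText  = spark.read.textFile("hdfs:///data/sample.txt")   // HDFS
val s3Events  = spark.read.parquet("s3a://my-bucket/events/")    // Amazon S3 via the s3a connector
```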
Note
Does Spark replace Hadoop?
Spark is designed to interoperate with Hadoop. It's not a replacement for Hadoop, but it is a replacement for the MR framework on Hadoop. Most Hadoop processing frameworks that use MR as an engine (such as Sqoop, Hive, Pig, Mahout, Cascading, and Crunch) now support Spark as an additional processing engine.
MapReduce issues
MR developers faced challenges with performance and with converting every business problem into an MR problem. Let's understand the issues related to MR and how they are addressed in Apache Spark:
- MR creates separate JVMs for every Mapper and Reducer. Launching JVMs takes a considerable amount of time.
- MR code requires a significant amount of boilerplate. The programmer needs to think about and design every business problem in terms of Map and Reduce, which makes programs very difficult to write. One MR job can rarely do a full computation; you need multiple MR jobs to finish the complete task, and you need to design and keep track of optimizations at all levels. Hive and Pig solve part of this problem, but they are not suitable for all use cases (a short word count sketch follows this list).
- Every MR job writes its output to disk, and the next job has to read it back, which makes MR unsuitable for iterative processing.
- Higher-level abstractions, such as Cascading and Scalding, make MR jobs easier to program, but they do not provide any additional performance benefit.
- MR does not provide great APIs either; they are low-level and verbose.
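To illustrate the boilerplate point above: a complete word count in Spark fits in a few lines of Scala, whereas the equivalent MR program typically needs separate Mapper, Reducer, and driver classes. The input and output paths below are placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("wordcount-sketch").getOrCreate()
val sc = spark.sparkContext

// The entire word count: read, split into words, count, and save.
sc.textFile("hdfs:///data/input.txt")            // placeholder input path
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .saveAsTextFile("hdfs:///data/wordcount-out")  // placeholder output path
```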
MR is slow because every job in an MR job flow stores its data on disk. Multiple queries on the same dataset each read the data from disk separately, creating high disk I/O, as shown in Figure 2.8:

Figure 2.8: MapReduce versus Apache Spark
Spark takes the concept of MR to the next level by storing intermediate data in memory and reusing it multiple times, as needed. This provides high performance at memory speed, as shown in Figure 2.8.
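The following Scala sketch shows this reuse pattern. The dataset is read from disk once and cached; every subsequent query is served from memory. The file path and column names are illustrative assumptions:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("reuse-sketch").getOrCreate()

// Read once and keep the data in memory for reuse.
val sales = spark.read.parquet("hdfs:///data/sales").cache()

// Query 1: the first action materializes the cache.
sales.groupBy("region").sum("amount").show()

// Query 2: reuses the cached data instead of re-reading it from disk.
sales.groupBy("date").count().show()
```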
Note
If I have only one MR job, does it perform the same as Spark?
No, a Spark job performs better than an MR job because of in-memory computation and its shuffle improvements. Spark outperforms MR even when the in-memory cache is disabled. A new shuffle implementation (sort-based shuffle instead of hash-based shuffle), a new network module (based on Netty instead of sending shuffle data through the block manager), and a new external shuffle service enabled Spark to set records for both the terabyte sort and the petabyte sort (the latter on 190 nodes with 46 TB of RAM). Spark sorted 100 TB of data in 23 minutes using 206 EC2 i2.8xlarge machines. The previous world record of 72 minutes was set by a Hadoop MR cluster of 2,100 nodes, which means Spark sorted the same data 3 times faster using 10 times fewer machines. All the sorting took place on disk (HDFS) without using Spark's in-memory cache (https://databricks.com/blog/2014/10/10/spark-petabyte-sort.html).
To summarize, here are the differences between MR and Spark:

Spark's stack
Spark's stack components are Spark Core, Spark SQL, Datasets and DataFrames, Spark Streaming, Structured Streaming, MLlib, GraphX, and SparkR, as shown in Figure 2.9:

Figure 2.9: The Apache Spark ecosystem
Here is a comparison of Spark components with Hadoop Ecosystem components:

To understand the Spark framework at a higher level, let's take a look at these core components of Spark and their integrations:

The Spark ecosystem is a unified stack that provides you with the power of combining SQL, streaming, and machine learning in one program. The advantages of unification are as follows:
- No need to copy or ETL data between systems
- Combines processing types in one program
- Code reuse
- One system to learn
- One system to maintain
An example of unification is shown in Figure 2.10:

Figure 2.10: The unification of the Apache Spark ecosystem
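To make the unification idea concrete, here is a minimal Scala sketch that combines SQL-style analytics and machine learning (MLlib) in one program, with no data copied between systems. The file name and the column names (cpu, memory) are assumptions for illustration, not from the text:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.clustering.KMeans

val spark = SparkSession.builder().appName("unification-sketch").getOrCreate()

// Step 1: interactive analytics with Spark SQL and DataFrames.
val metrics = spark.read.json("hdfs:///data/metrics.json")   // placeholder path
metrics.createOrReplaceTempView("metrics")
val features = spark.sql(
  "SELECT cpu, memory FROM metrics WHERE cpu IS NOT NULL AND memory IS NOT NULL")

// Step 2: machine learning with MLlib on the same data, in the same program.
val assembled = new VectorAssembler()
  .setInputCols(Array("cpu", "memory"))
  .setOutputCol("features")
  .transform(features)

val model = new KMeans().setK(3).setFeaturesCol("features").fit(assembled)
model.clusterCenters.foreach(println)
```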