Apache Spark is a powerful alternative to Hadoop MapReduce, offering rich functionality such as machine learning, real-time stream processing, and graph computation. It is an open-source distributed cluster-computing framework designed to cover a wide range of workloads: batch applications, iterative algorithms, interactive queries, and streaming. By supporting all of these workloads in a single system, it reduces the management burden of maintaining separate tools.

Spark Core Component
Spark Core achieves high speed through its in-memory computation capability. At its heart is a special data structure called the RDD, which stands for Resilient Distributed Dataset. In distributed computing systems like Hadoop MapReduce, sharing or reusing data between jobs requires writing it to intermediate stores such as Amazon S3 or HDFS. This slows down overall computation because of the replication, I/O operations, and serialization involved in writing to these stable intermediate stores. Resilient Distributed Datasets overcome this drawback of Hadoop MapReduce by allowing fault-tolerant, in-memory computation.
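To make the idea concrete, here is a minimal sketch that caches an RDD in memory so that two later computations reuse it instead of rereading from stable storage. It assumes an existing SparkContext named sc, and the input file events.txt is hypothetical:

    // Minimal sketch, assuming an existing SparkContext `sc`;
    // the input file "events.txt" is hypothetical.
    val events = sc.textFile("events.txt")            // RDD[String], one element per line
    val errors = events.filter(_.contains("ERROR"))
    errors.cache()                                    // keep the filtered RDD in memory

    // Both computations below reuse the cached, in-memory data instead of
    // re-reading and re-filtering the file from HDFS or S3.
    val errorCount = errors.count()
    val firstTen   = errors.take(10)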
Operations on RDDs
i) Transformations
Coarse-grained operations such as map, filter, union, or join, which produce a new RDD holding the result of the operation on an existing RDD, are referred to as transformations. All transformations in Spark are lazy: Spark does not execute them immediately. Instead, it builds a lineage that tracks all the transformations to be applied to an RDD.
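A short sketch of this laziness, again assuming an existing SparkContext sc. The transformations below only record lineage; nothing runs until an action is called:

    // Lazy transformations: declared here, executed only when an action runs.
    val numbers = sc.parallelize(1 to 100)        // base RDD
    val evens   = numbers.filter(_ % 2 == 0)      // transformation, not run yet
    val squared = evens.map(n => n * n)           // transformation, not run yet
    // So far Spark has only recorded the lineage numbers -> evens -> squared.
    println(squared.count())                      // the action triggers execution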
ii) Actions
Actions are functions that return the end result of an RDD computation. When an action is invoked, Spark uses the lineage graph to load data into the RDD and apply the pending transformations in order. Once all of the transformations are complete, the action returns the final result to the Spark driver. In other words, actions are operations that produce non-RDD values.
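Continuing the sketch from the previous section, here are a few common actions and the non-RDD values they hand back to the driver:

    // Actions return plain values to the driver, not RDDs.
    val total   = squared.reduce(_ + _)   // a single Int
    val asArray = squared.collect()       // an Array[Int] on the driver
    val howMany = squared.count()         // a Long
    // Each action walks the lineage graph, applies the pending
    // transformations in order, and delivers the final result.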
Spark SQL Component
At the core of Spark SQL is an extensible, cost-based optimizer known as Catalyst. It helps developers improve their productivity and increases the performance of the queries they write. Spark SQL is fully compatible with Hive data. Hive is an open-source data warehouse system built on top of Hadoop that helps in querying and analyzing large datasets stored in Hadoop files.
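As a hedged sketch of querying Hive-compatible data: the table name sales and its columns are hypothetical, and enableHiveSupport() assumes a deployment where Hive is configured.

    import org.apache.spark.sql.SparkSession

    // Sketch only: `sales` is a hypothetical Hive table.
    val spark = SparkSession.builder()
      .appName("SparkSqlSketch")
      .enableHiveSupport()   // assumes Hive is configured for this cluster
      .getOrCreate()

    val topRegions = spark.sql(
      "SELECT region, SUM(amount) AS revenue FROM sales GROUP BY region")
    topRegions.show()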
Spark lets you work with this data using SQL, as sketched above. DataFrames are the equivalent of relational tables; they can be constructed from external databases, structured files, or existing RDDs. DataFrames share the key properties of RDDs, being immutable, resilient, and in-memory, with the added advantage of being structured and easy to work with. The DataFrame API is available in Scala, Python, R, and Java.
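A brief sketch of both construction routes mentioned above, reusing the SparkSession spark from the previous example; the file people.json and the column names are made up:

    // Sketch: two ways to build a DataFrame.
    import spark.implicits._

    val fromFile = spark.read.json("people.json")     // from a structured file
    val fromRdd  = spark.sparkContext
      .parallelize(Seq(("Ana", 34), ("Ben", 28)))
      .toDF("name", "age")                            // from an existing RDD

    fromFile.printSchema()
    fromRdd.filter($"age" > 30).show()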
Apache Spark Streaming
Spark Streaming is a lightweight API that allows developers to perform batch processing and real-time streaming of data with ease. It provides secure, reliable, and fast processing of live data streams. It is one of the unique features that has enabled Spark to potentially take over the role of Apache Storm. Spark Streaming lets you build analytical and interactive applications over live streaming data, which makes it very useful for online advertising, campaigns, finance, supply chain management, and similar domains.
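The classic word-count example sketches the API. The host and port here are assumptions; you can feed it text locally with, say, nc -lk 9999:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Sketch: word counts over a live socket stream, in 5-second micro-batches.
    val conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))

    val words  = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" "))
    val counts = words.map(word => (word, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()             // begin receiving and processing data
    ssc.awaitTermination()  // block until the stream is stopped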

Apache Spark MLlib
MLlib is a distributed machine learning framework built on top of Spark. Thanks to Spark's distributed, memory-based architecture, MLlib is simple to use and scalable. It is accessible from multiple programming languages and integrates easily with other tools, making the development and deployment of scalable machine learning pipelines much easier.

The motive behind MLlib is to make machine learning scalable and easy. It contains implementations of a variety of machine learning algorithms, such as clustering, regression, classification, and collaborative filtering. Lower-level machine learning primitives, like a generic gradient descent optimization algorithm, are also present in MLlib.
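As an illustrative sketch, here is one of those algorithms, k-means clustering, run through MLlib's DataFrame-based API. The SparkSession spark is assumed to exist, and the feature vectors are made-up data:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.linalg.Vectors

    // Sketch: cluster four made-up 2-D points into two groups.
    val data = spark.createDataFrame(Seq(
      Vectors.dense(0.0, 0.1), Vectors.dense(0.2, 0.0),
      Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 8.9)
    ).map(Tuple1.apply)).toDF("features")

    val model = new KMeans().setK(2).setSeed(1L).fit(data)
    model.clusterCenters.foreach(println)   // one center per cluster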
Apache Spark GraphX

GraphX is a distributed graph-processing framework built on top of Spark. It provides an API for expressing graph computation that can model user-defined graphs using the Pregel abstraction, along with an optimized runtime for that abstraction. GraphX enables users to build, transform, and reason about graph-structured data at scale, and it already ships with a library of common graph algorithms. Its API also lets the same data be manipulated across both the graph world and the collection world.
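A small sketch of building a property graph and running one of the bundled algorithms, PageRank. It assumes an existing SparkContext sc, and the users and edges are invented for illustration:

    import org.apache.spark.graphx.{Edge, Graph}

    // Sketch: a tiny "follows" graph over three invented users.
    val users = sc.parallelize(Seq(
      (1L, "alice"), (2L, "bob"), (3L, "carol")))
    val follows = sc.parallelize(Seq(
      Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"), Edge(3L, 1L, "follows")))

    val graph = Graph(users, follows)
    // PageRank is one of the common algorithms shipped with GraphX.
    val ranks = graph.pageRank(0.0001).vertices
    ranks.join(users).collect().foreach(println)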
Conclusion
In this blog, we have seen that it is the components of the Apache Spark ecosystem that make it popular. Because each component is easy to use, Spark has, as a matter of fact, become a common platform for all types of data processing.