Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.
Spark has an advanced DAG execution engine that supports acyclic data flow and in-memory computing.
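As a rough illustration of what in-memory computing buys you, the sketch below caches a parsed dataset once and then makes repeated passes over it without rereading from disk. It is written for the interactive Scala shell (spark-shell), where the SparkContext sc is predefined; the HDFS path is a placeholder.

    // Parse the input once and keep the result in memory across iterations.
    // The path is a placeholder; sc is the SparkContext provided by spark-shell.
    val points = sc.textFile("hdfs://namenode/data/points.txt")
      .map(line => line.split(",").map(_.toDouble))
      .cache()

    // Each pass reuses the cached records instead of rescanning the file,
    // which is where the speedup for iterative algorithms comes from.
    for (i <- 1 to 10) {
      val total = points.map(_.sum).reduce(_ + _)
      println("iteration " + i + ": total = " + total)
    }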
Write applications quickly in Java, Scala, or Python.
Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala and Python shells.
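For example, the classic word count needs only a few of those operators. The snippet below is a sketch for spark-shell; the input path is a placeholder.

    // Count word occurrences with three high-level operators.
    val lines = sc.textFile("hdfs://namenode/data/readme.txt")   // placeholder path
    val counts = lines.flatMap(line => line.split(" "))          // split lines into words
      .map(word => (word, 1))                                    // pair each word with a count of 1
      .reduceByKey(_ + _)                                        // sum the counts per word
    counts.take(10).foreach(println)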
Combine SQL, streaming, and complex analytics.
Spark powers a stack of high-level tools including Spark SQL, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
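To make that concrete, here is a rough sketch that mixes ordinary RDD code with a Spark SQL query in one program. It assumes spark-shell and the Spark 1.x SQLContext API; the input path and the Person schema are illustrative assumptions.

    // Combine plain RDD transformations with Spark SQL in the same program.
    import org.apache.spark.sql.SQLContext

    case class Person(name: String, age: Int)          // illustrative schema

    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val people = sc.textFile("hdfs://namenode/data/people.txt")   // placeholder path, lines like "Alice,30"
      .map(_.split(","))
      .map(fields => Person(fields(0), fields(1).trim.toInt))
      .toDF()
    people.registerTempTable("people")

    // SQL results come back as a DataFrame, so the usual functional operators
    // apply to them just like any other distributed collection.
    val teenagers = sqlContext.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19")
    teenagers.map(row => "Name: " + row(0)).collect().foreach(println)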
Spark runs on Hadoop, Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3.
You can run Spark using its standalone cluster mode, on EC2, or on Hadoop YARN or Apache Mesos. It can read from HDFS, HBase, Cassandra, and any Hadoop data source.
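That portability comes from the fact that application code does not change across cluster managers: only the master URL (or spark-submit's --master flag) and the storage URI differ. Below is a minimal standalone-mode sketch, with hostnames and paths as placeholders.

    import org.apache.spark.{SparkConf, SparkContext}

    object PortableApp {
      def main(args: Array[String]): Unit = {
        // Only the master URL changes between cluster managers; in practice it
        // is usually supplied via spark-submit --master rather than hard-coded.
        val conf = new SparkConf()
          .setAppName("PortableApp")
          .setMaster("spark://master-host:7077")   // placeholder standalone master

        val sc = new SparkContext(conf)

        // Any Hadoop-supported storage is reachable through its URI scheme.
        val logs = sc.textFile("hdfs://namenode:9000/logs/access.log")   // placeholder path
        println("lines: " + logs.count())

        sc.stop()
      }
    }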
Spark is used at a wide range of organizations to process large datasets. You can find example use cases at the Spark Summit conference, or on the Powered By page.
There are many ways to reach the community:
Apache Spark is built by a wide set of developers from over 50 companies. Since the project started in 2009, more than 400 developers have contributed to Spark!
The project's committers come from 16 organizations.
If you'd like to participate in Spark, or contribute to the libraries on top of it, learn how to contribute.
Learning Spark is easy whether you come from a Java or Python background: