This page tracks external software projects that supplement Apache Spark and add to its ecosystem.
To add a project, open a pull request against the spark-website
repository: add an entry to this markdown file, then run
`jekyll build` to regenerate the HTML, and include both the markdown
and HTML changes in your pull request. See the README in that repository for more information.
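As a rough sketch of that workflow (the file name `third-party-projects.md` and the build command are assumptions; check the spark-website README for the authoritative steps):

```shell
# Hypothetical sketch of the contribution workflow described above.
# The markdown file name is an assumption, not confirmed by this page.

# Add a new entry to the markdown file:
cat >> third-party-projects.md <<'EOF'
- My Project - short description of how it supplements Apache Spark
EOF

# Regenerate the static HTML so both files can go into the pull request
# (requires a local Jekyll install):
#   bundle exec jekyll build

# Commit the markdown change together with the regenerated HTML and
# open a pull request against the spark-website repository.
```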
Note that all project and product names should follow trademark guidelines.
spark-packages.org is an external,
community-managed list of third-party libraries, add-ons, and applications that work with
Apache Spark. You can add a package as long as you have a GitHub repository.
- REST Job Server for Apache Spark - REST interface for managing and submitting Spark jobs on the same cluster (see blog post)
- MLbase - Machine Learning research project on top of Spark
- Apache Mesos - Cluster management system that supports running Spark
- Alluxio (née Tachyon) - Memory-speed virtual distributed storage system that supports running Spark
- FiloDB - a Spark-integrated analytical/columnar database with an in-memory option capable of sub-second concurrent queries
- Zeppelin - Multi-purpose notebook which supports 20+ language backends,
including Apache Spark
- EclairJS - enables Node.js developers to code against Apache Spark
- Mist - Serverless proxy for Spark clusters (Spark middleware)
- K8S Operator for Apache Spark - Kubernetes operator for specifying and managing the lifecycle of Apache Spark applications on Kubernetes.
- IBM Spectrum Conductor - Cluster management software that integrates with Spark and modern computing frameworks.
- Delta Lake - Storage layer that provides ACID transactions and scalable metadata handling for Apache Spark workloads.
- MLflow - Open source platform to manage the machine learning lifecycle, including deploying models from diverse machine learning libraries on Apache Spark.
- Koalas - Data frame API on Apache Spark that more closely follows Python’s pandas.
- Apache DataFu - A collection of utilities and user-defined functions for working with large-scale data in Apache Spark, as well as making Scala-Python interoperability easier.
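To illustrate what the K8S Operator entry above manages, here is a hedged sketch of a `SparkApplication` manifest; field names follow the operator's commonly documented v1beta2 CRD, and the image, main class, and jar path are illustrative placeholders, not values from this page:

```yaml
# Hypothetical SparkApplication manifest; names and versions are
# illustrative assumptions.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: apache/spark:3.5.0   # placeholder image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: 512m
  executor:
    instances: 2
    cores: 1
    memory: 512m
```

Applying such a manifest with `kubectl apply` lets the operator create and track the Spark driver and executor pods declaratively.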
Applications Using Spark
- Apache Mahout - Previously built on Hadoop MapReduce, Mahout has switched to Spark as its backend
- Apache MRQL - A query processing and optimization
system for large-scale, distributed data analysis, built on top of Apache Hadoop, Hama, and Spark
- BlinkDB - a massively parallel, approximate query engine built
on top of Shark and Spark
- Spindle - Spark/Parquet-based web
analytics query engine
- Thunderain - a framework for combining stream processing with historical data, in the style of the Lambda architecture
- DF from Ayasdi - a Pandas-like data frame
implementation for Spark
- Oryx - Lambda architecture built on Apache Spark and Apache Kafka for real-time, large-scale machine learning
- ADAM - A framework and CLI for loading,
transforming, and analyzing genomic data using Apache Spark
- TransmogrifAI - AutoML library for building modular, reusable, strongly typed machine learning workflows on Spark with minimal hand tuning
- Natural Language Processing for Apache Spark - A library to provide simple, performant, and accurate NLP annotations for machine learning pipelines
- Rumble for Apache Spark - A JSONiq engine to query, with a functional language, large, nested, and heterogeneous JSON datasets that do not fit in dataframes.
Performance, Monitoring, and Debugging Tools for Spark
Additional Language Bindings
C# / .NET
- Mobius: C# and F# language binding and extensions to Apache Spark