Monitoring and Instrumentation

There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.

Web Interfaces

Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:

- A list of scheduler stages and tasks
- A summary of RDD sizes and memory usage
- Environmental information
- Information about the running executors

You can access this interface by simply opening http://<driver-node>:4040 in a web browser. If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc.).
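If you want the UI on a fixed, known port (for example, when several applications share a host), you can set spark.ui.port when constructing the SparkContext. A minimal sketch, in which the application name, master URL, and port are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    // Pin the web UI to a fixed port instead of the default 4040.
    // The application name, master URL, and port are placeholders for this sketch.
    val conf = new SparkConf()
      .setAppName("ui-port-example")
      .setMaster("local[*]")
      .set("spark.ui.port", "4050")

    val sc = new SparkContext(conf)
    // While sc is active, the UI is served at http://<driver-node>:4050.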

Spark’s standalone mode cluster manager also has its own web UI.

Note that in both of these UIs, the tables are sortable by clicking their headers, making it easy to identify slow tasks, data skew, etc.

Metrics

Spark has a configurable metrics system based on the Coda Hale Metrics Library. This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV files. The metrics system is configured via a configuration file that Spark expects to be present at $SPARK_HOME/conf/metrics.conf. A custom file location can be specified via the spark.metrics.conf Java system property.

Spark’s metrics are decoupled into different instances corresponding to Spark components. Within each instance, you can configure a set of sinks to which metrics are reported. The following instances are currently supported:

- master: The Spark standalone master process.
- applications: A component within the master which reports on various applications.
- worker: A Spark standalone worker process.
- executor: A Spark executor.
- driver: The Spark driver process (the process in which your SparkContext is created).
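For example, to point an application at a custom metrics file location, the spark.metrics.conf Java system property mentioned above can be set in the driver JVM before the SparkContext is created. A minimal sketch, with a placeholder file path and application name:

    import org.apache.spark.{SparkConf, SparkContext}

    // Load metrics configuration from a custom location instead of
    // $SPARK_HOME/conf/metrics.conf. The path below is a placeholder.
    System.setProperty("spark.metrics.conf", "/etc/spark/custom-metrics.conf")

    val sc = new SparkContext(
      new SparkConf().setAppName("metrics-example").setMaster("local[*]"))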

Each instance can report to zero or more sinks. Sinks are contained in the org.apache.spark.metrics.sink package:

- ConsoleSink: Logs metrics information to the console.
- CSVSink: Exports metrics data to CSV files at regular intervals.
- JmxSink: Registers metrics for viewing in a JMX console.
- MetricsServlet: Adds a servlet within the existing Spark web UI to serve metrics data as JSON.
- GraphiteSink: Sends metrics to a Graphite node.
- GangliaSink: Sends metrics to a Ganglia node or multicast group.

The syntax of the metrics configuration file is defined in an example configuration file, $SPARK_HOME/conf/metrics.conf.template.
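As an illustration of that syntax, a minimal metrics.conf that enables the console sink for every instance and the CSV sink for the driver only might look like the following; the period, unit, and directory values are placeholders, and the authoritative list of options is in the template file:

    # Enable ConsoleSink for all instances ("*" applies to every instance).
    *.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
    *.sink.console.period=10
    *.sink.console.unit=seconds

    # Enable CsvSink for the driver instance only.
    driver.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
    driver.sink.csv.period=1
    driver.sink.csv.unit=minutes
    driver.sink.csv.directory=/tmp/spark-driver-metrics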

Advanced Instrumentation

Several external tools can be used to help profile the performance of Spark jobs:

- Cluster-wide monitoring tools, such as Ganglia, can provide insight into overall cluster utilization and resource bottlenecks. For instance, a Ganglia dashboard can quickly reveal whether a particular workload is disk bound, network bound, or CPU bound.
- OS profiling tools such as dstat, iostat, and iotop can provide fine-grained profiling on individual nodes.
- JVM utilities such as jstack for providing stack traces, jmap for creating heap dumps, jstat for reporting time-series statistics, and jconsole for visually exploring various JVM properties are useful for those comfortable with JVM internals.