Class JavaSparkContext

Object
org.apache.spark.api.java.JavaSparkContext
All Implemented Interfaces:
Closeable, AutoCloseable

public class JavaSparkContext extends Object implements Closeable
A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.

Note:
Only one SparkContext should be active per JVM. You must stop() the active SparkContext before creating a new one.
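
A minimal driver sketch, assuming local mode; "MyApp" and "data.txt" are placeholders:

   // Configure and create the context (only one may be active per JVM).
   SparkConf conf = new SparkConf().setAppName("MyApp").setMaster("local[4]");
   JavaSparkContext sc = new JavaSparkContext(conf);

   // Use the context, then stop it so another one can be created later.
   JavaRDD<String> lines = sc.textFile("data.txt");
   long numLines = lines.count();
   sc.stop();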
  • Constructor Details

    • JavaSparkContext

      public JavaSparkContext(SparkContext sc)
    • JavaSparkContext

      public JavaSparkContext()
      Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
    • JavaSparkContext

      public JavaSparkContext(SparkConf conf)
      Parameters:
      conf - a SparkConf object specifying Spark parameters
    • JavaSparkContext

      public JavaSparkContext(String master, String appName)
      Parameters:
      master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
      appName - A name for your application, to display on the cluster web UI
    • JavaSparkContext

      public JavaSparkContext(String master, String appName, SparkConf conf)
      Parameters:
      master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
      appName - A name for your application, to display on the cluster web UI
      conf - a SparkConf object specifying other Spark parameters
    • JavaSparkContext

      public JavaSparkContext(String master, String appName, String sparkHome, String jarFile)
      Parameters:
      master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
      appName - A name for your application, to display on the cluster web UI
      sparkHome - The SPARK_HOME directory on the worker nodes
      jarFile - JAR file to send to the cluster. This can be a path on the local file system or an HDFS, HTTP, HTTPS, or FTP URL.
    • JavaSparkContext

      public JavaSparkContext(String master, String appName, String sparkHome, String[] jars)
      Parameters:
      master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
      appName - A name for your application, to display on the cluster web UI
      sparkHome - The SPARK_HOME directory on the worker nodes
      jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
    • JavaSparkContext

      public JavaSparkContext(String master, String appName, String sparkHome, String[] jars, Map<String,String> environment)
      Parameters:
      master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
      appName - A name for your application, to display on the cluster web UI
      sparkHome - The SPARK_HOME directory on the worker nodes
      jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
      environment - Environment variables to set on worker nodes
  • Method Details

    • fromSparkContext

      public static JavaSparkContext fromSparkContext(SparkContext sc)
    • toSparkContext

      public static SparkContext toSparkContext(JavaSparkContext jsc)
    • jarOfClass

      public static String[] jarOfClass(Class<?> cls)
      Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
      Parameters:
      cls - (undocumented)
      Returns:
      (undocumented)
    • jarOfObject

      public static String[] jarOfObject(Object obj)
      Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext. In most cases you can call jarOfObject(this) in your driver program.
      Parameters:
      obj - (undocumented)
      Returns:
      (undocumented)
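
      For example, a driver can pass its own JAR to the cluster via SparkConf.setJars; a sketch, where MyApp is a hypothetical driver class:

       // jarOfClass returns the JAR (if any) from which MyApp.class was loaded.
       SparkConf conf = new SparkConf()
           .setAppName("MyApp")
           .setJars(JavaSparkContext.jarOfClass(MyApp.class));
       JavaSparkContext sc = new JavaSparkContext(conf);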
    • union

      public <T> JavaRDD<T> union(JavaRDD<T>... rdds)
      Build the union of JavaRDDs.
    • union

      public <K, V> JavaPairRDD<K,V> union(JavaPairRDD<K,V>... rdds)
      Build the union of JavaPairRDDs.
    • union

      public JavaDoubleRDD union(JavaDoubleRDD... rdds)
      Build the union of JavaDoubleRDDs.
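
      A sketch of the varargs form, assuming an existing JavaSparkContext sc:

       JavaRDD<Integer> first = sc.parallelize(Arrays.asList(1, 2, 3));
       JavaRDD<Integer> second = sc.parallelize(Arrays.asList(4, 5, 6));
       // Combines both inputs into a single RDD without removing duplicates.
       JavaRDD<Integer> combined = sc.union(first, second);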
    • sc

      public SparkContext sc()
    • statusTracker

      public JavaSparkStatusTracker statusTracker()
    • isLocal

      public Boolean isLocal()
    • sparkUser

      public String sparkUser()
    • master

      public String master()
    • appName

      public String appName()
    • resources

      public Map<String,ResourceInformation> resources()
    • jars

      public List<String> jars()
    • startTime

      public Long startTime()
    • version

      public String version()
      The version of Spark on which this application is running.
    • defaultParallelism

      public Integer defaultParallelism()
      Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
    • defaultMinPartitions

      public Integer defaultMinPartitions()
      Default minimum number of partitions for Hadoop RDDs when not given by the user.
    • parallelize

      public <T> JavaRDD<T> parallelize(List<T> list, int numSlices)
      Distribute a local Java collection to form an RDD.
    • emptyRDD

      public <T> JavaRDD<T> emptyRDD()
      Get an RDD that has no partitions or elements.
    • parallelize

      public <T> JavaRDD<T> parallelize(List<T> list)
      Distribute a local Java collection to form an RDD.
    • parallelizePairs

      public <K, V> JavaPairRDD<K,V> parallelizePairs(List<scala.Tuple2<K,V>> list, int numSlices)
      Distribute a local Java collection to form an RDD.
    • parallelizePairs

      public <K, V> JavaPairRDD<K,V> parallelizePairs(List<scala.Tuple2<K,V>> list)
      Distribute a local Java collection to form an RDD.
    • parallelizeDoubles

      public JavaDoubleRDD parallelizeDoubles(List<Double> list, int numSlices)
      Distribute a local Java collection to form an RDD.
    • parallelizeDoubles

      public JavaDoubleRDD parallelizeDoubles(List<Double> list)
      Distribute a local Java collection to form an RDD.
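
      A sketch of the three variants, assuming an existing JavaSparkContext sc (java.util.Arrays and scala.Tuple2 are used for brevity):

       // Plain values -> JavaRDD<Integer>, here split into 2 partitions.
       JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4), 2);

       // Key-value pairs -> JavaPairRDD<String, Integer>.
       JavaPairRDD<String, Integer> pairs =
           sc.parallelizePairs(Arrays.asList(new Tuple2<>("a", 1), new Tuple2<>("b", 2)));

       // Doubles -> JavaDoubleRDD, which adds numeric helpers such as mean().
       JavaDoubleRDD doubles = sc.parallelizeDoubles(Arrays.asList(1.0, 2.0, 3.0));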
    • textFile

      public JavaRDD<String> textFile(String path)
      Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. The text files must be encoded as UTF-8.
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
    • textFile

      public JavaRDD<String> textFile(String path, int minPartitions)
      Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. The text files must be encoded as UTF-8.
      Parameters:
      path - (undocumented)
      minPartitions - (undocumented)
      Returns:
      (undocumented)
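
      A usage sketch, assuming an existing JavaSparkContext sc and an illustrative HDFS path:

       // Each element of the resulting RDD is one line of the input.
       JavaRDD<String> lines = sc.textFile("hdfs://namenode/logs/app.log", 8);
       long errors = lines.filter(line -> line.contains("ERROR")).count();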
    • wholeTextFiles

      public JavaPairRDD<String,String> wholeTextFiles(String path, int minPartitions)
      Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file. The text files must be encoded as UTF-8.

      For example, if you have the following files:

      
         hdfs://a-hdfs-path/part-00000
         hdfs://a-hdfs-path/part-00001
         ...
         hdfs://a-hdfs-path/part-nnnnn
       

      Do

      
         JavaPairRDD<String, String> rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path");
       

      then rdd contains

      
         (a-hdfs-path/part-00000, its content)
         (a-hdfs-path/part-00001, its content)
         ...
         (a-hdfs-path/part-nnnnn, its content)
       

      Parameters:
      path - (undocumented)
      minPartitions - A suggested minimum number of partitions for the input data.
      Returns:
      (undocumented)
      Note:
      Small files are preferred; large files are also allowed, but they may cause poor performance.

    • wholeTextFiles

      public JavaPairRDD<String,String> wholeTextFiles(String path)
      Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file. The text files must be encoded as UTF-8.

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
      See Also:
      • wholeTextFiles(path: String, minPartitions: Int).
    • binaryFiles

      public JavaPairRDD<String,PortableDataStream> binaryFiles(String path, int minPartitions)
      Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

      For example, if you have the following files:

      
         hdfs://a-hdfs-path/part-00000
         hdfs://a-hdfs-path/part-00001
         ...
         hdfs://a-hdfs-path/part-nnnnn
       

      Do

      
         JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path");
       

      then rdd contains

      
         (a-hdfs-path/part-00000, its content)
         (a-hdfs-path/part-00001, its content)
         ...
         (a-hdfs-path/part-nnnnn, its content)
       

      Parameters:
      path - (undocumented)
      minPartitions - A suggested minimum number of partitions for the input data.
      Returns:
      (undocumented)
      Note:
      Small files are preferred; very large files may cause poor performance.

    • binaryFiles

      public JavaPairRDD<String,PortableDataStream> binaryFiles(String path)
      Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

      For example, if you have the following files:

      
         hdfs://a-hdfs-path/part-00000
         hdfs://a-hdfs-path/part-00001
         ...
         hdfs://a-hdfs-path/part-nnnnn
       

      Do

      
         JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path");
       

      then rdd contains

      
         (a-hdfs-path/part-00000, its content)
         (a-hdfs-path/part-00001, its content)
         ...
         (a-hdfs-path/part-nnnnn, its content)
       

      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
      Note:
      Small files are preferred; very large files may cause poor performance.
    • binaryRecords

      public JavaRDD<byte[]> binaryRecords(String path, int recordLength)
      Load data from a flat binary file, assuming the length of each record is constant.

      Parameters:
      path - Directory to the input data files
      recordLength - (undocumented)
      Returns:
      An RDD of data with values, represented as byte arrays
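
      For example, if every record is a fixed 16-byte payload (the path and record length are illustrative), assuming an existing JavaSparkContext sc:

       // Each element is one 16-byte record, returned as a byte[].
       JavaRDD<byte[]> records = sc.binaryRecords("hdfs://namenode/data/fixed-width/", 16);
       long numRecords = records.count();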
    • sequenceFile

      public <K, V> JavaPairRDD<K,V> sequenceFile(String path, Class<K> keyClass, Class<V> valueClass, int minPartitions)
      Get an RDD for a Hadoop SequenceFile with given key and value types.

      Parameters:
      path - (undocumented)
      keyClass - (undocumented)
      valueClass - (undocumented)
      minPartitions - (undocumented)
      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
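
      A sketch of the copy-before-cache pattern described in the note above, assuming an existing JavaSparkContext sc, a SequenceFile of Text keys and IntWritable values, and the usual imports (org.apache.hadoop.io.*, scala.Tuple2):

       JavaPairRDD<Text, IntWritable> raw =
           sc.sequenceFile("hdfs://namenode/data/counts", Text.class, IntWritable.class, 4);
       // Copy each record into fresh objects before caching, because the
       // RecordReader reuses the same Writable instances.
       JavaPairRDD<String, Integer> safe = raw
           .mapToPair(t -> new Tuple2<>(t._1().toString(), t._2().get()))
           .cache();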
    • sequenceFile

      public <K, V> JavaPairRDD<K,V> sequenceFile(String path, Class<K> keyClass, Class<V> valueClass)
      Get an RDD for a Hadoop SequenceFile.

      Parameters:
      path - (undocumented)
      keyClass - (undocumented)
      valueClass - (undocumented)
      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
    • objectFile

      public <T> JavaRDD<T> objectFile(String path, int minPartitions)
      Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
      Parameters:
      path - (undocumented)
      minPartitions - (undocumented)
      Returns:
      (undocumented)
    • objectFile

      public <T> JavaRDD<T> objectFile(String path)
      Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
      Parameters:
      path - (undocumented)
      Returns:
      (undocumented)
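
      A round-trip sketch with saveAsObjectFile, assuming an existing JavaSparkContext sc and an illustrative path:

       // Write an RDD of serialized Java objects...
       sc.parallelize(Arrays.asList("a", "b", "c")).saveAsObjectFile("hdfs://namenode/tmp/strings");
       // ...and read it back later. The element type is not checked at load time.
       JavaRDD<String> restored = sc.objectFile("hdfs://namenode/tmp/strings");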
    • hadoopRDD

      public <K, V, F extends org.apache.hadoop.mapred.InputFormat<K, V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
      Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).

      Parameters:
      conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
      inputFormatClass - Class of the InputFormat
      keyClass - Class of the keys
      valueClass - Class of the values
      minPartitions - Minimum number of Hadoop Splits to generate.

      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
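
      A sketch using the old-API TextInputFormat, assuming an existing JavaSparkContext sc and an illustrative path (Hadoop classes are from org.apache.hadoop.mapred, org.apache.hadoop.io and org.apache.hadoop.fs):

       JobConf jobConf = new JobConf();
       FileInputFormat.setInputPaths(jobConf, new Path("hdfs://namenode/logs"));
       // Do not modify jobConf afterwards; it is broadcast to the executors.
       JavaPairRDD<LongWritable, Text> records =
           sc.hadoopRDD(jobConf, TextInputFormat.class, LongWritable.class, Text.class, 4);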
    • hadoopRDD

      public <K, V, F extends org.apache.hadoop.mapred.InputFormat<K, V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
      Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc.).

      Parameters:
      conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
      inputFormatClass - Class of the InputFormat
      keyClass - Class of the keys
      valueClass - Class of the values

      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
    • hadoopFile

      public <K, V, F extends org.apache.hadoop.mapred.InputFormat<K, V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
      Get an RDD for a Hadoop file with an arbitrary InputFormat.

      Parameters:
      path - (undocumented)
      inputFormatClass - (undocumented)
      keyClass - (undocumented)
      valueClass - (undocumented)
      minPartitions - (undocumented)
      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
    • hadoopFile

      public <K, V, F extends org.apache.hadoop.mapred.InputFormat<K, V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
      Get an RDD for a Hadoop file with an arbitrary InputFormat.

      Parameters:
      path - (undocumented)
      inputFormatClass - (undocumented)
      keyClass - (undocumented)
      valueClass - (undocumented)
      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
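
      A sketch reading a text file through the old Hadoop API, assuming an existing JavaSparkContext sc and an illustrative path (org.apache.hadoop.mapred.TextInputFormat):

       // Keys are byte offsets within the file, values are lines.
       JavaPairRDD<LongWritable, Text> lines =
           sc.hadoopFile("hdfs://namenode/logs/app.log",
                         TextInputFormat.class, LongWritable.class, Text.class);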
    • newAPIHadoopFile

      public <K, V, F extends org.apache.hadoop.mapreduce.InputFormat<K, V>> JavaPairRDD<K,V> newAPIHadoopFile(String path, Class<F> fClass, Class<K> kClass, Class<V> vClass, org.apache.hadoop.conf.Configuration conf)
      Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.

      Parameters:
      path - (undocumented)
      fClass - (undocumented)
      kClass - (undocumented)
      vClass - (undocumented)
      conf - (undocumented)
      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
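
      A sketch with the new-API TextInputFormat (org.apache.hadoop.mapreduce.lib.input), assuming an existing JavaSparkContext sc and an illustrative path:

       Configuration hadoopConf = new Configuration();
       // Keys are byte offsets within the file, values are lines.
       JavaPairRDD<LongWritable, Text> lines =
           sc.newAPIHadoopFile("hdfs://namenode/logs/app.log",
                               TextInputFormat.class, LongWritable.class, Text.class, hadoopConf);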
    • newAPIHadoopRDD

      public <K, V, F extends org.apache.hadoop.mapreduce.InputFormat<K, V>> JavaPairRDD<K,V> newAPIHadoopRDD(org.apache.hadoop.conf.Configuration conf, Class<F> fClass, Class<K> kClass, Class<V> vClass)
      Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.

      Parameters:
      conf - Configuration for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
      fClass - Class of the InputFormat
      kClass - Class of the keys
      vClass - Class of the values

      Returns:
      (undocumented)
      Note:
      Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
    • union

      public <T> JavaRDD<T> union(scala.collection.Seq<JavaRDD<T>> rdds)
      Build the union of JavaRDDs.
    • union

      public <K, V> JavaPairRDD<K,V> union(scala.collection.Seq<JavaPairRDD<K,V>> rdds)
      Build the union of JavaPairRDDs.
    • union

      public JavaDoubleRDD union(scala.collection.Seq<JavaDoubleRDD> rdds)
      Build the union of JavaDoubleRDDs.
    • broadcast

      public <T> Broadcast<T> broadcast(T value)
      Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each executor only once.
      Parameters:
      value - (undocumented)
      Returns:
      (undocumented)
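
      A small sketch, assuming an existing JavaSparkContext sc and an existing JavaRDD<String> named words:

       // The lookup list is shipped to each executor once, not with every task.
       Broadcast<List<String>> stopWords = sc.broadcast(Arrays.asList("the", "a", "of"));
       JavaRDD<String> filtered = words.filter(w -> !stopWords.value().contains(w));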
    • stop

      public void stop()
      Shut down the SparkContext.
    • close

      public void close()
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
    • getSparkHome

      public Optional<String> getSparkHome()
      Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference). If none of these is set, an empty Optional is returned.
      Returns:
      (undocumented)
    • addFile

      public void addFile(String path)
      Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(fileName) to find its download location.

      Parameters:
      path - (undocumented)
      Note:
      A path can be added only once. Subsequent additions of the same path are ignored.
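
      A sketch, assuming an existing JavaSparkContext sc, an existing JavaRDD<String> named lines, and an illustrative lookup file; SparkFiles.get resolves the local download path on each node:

       sc.addFile("hdfs://namenode/config/lookup.txt");
       JavaRDD<String> enriched = lines.map(line -> {
           // Inside a task, resolve where the file was downloaded on this node.
           String localPath = SparkFiles.get("lookup.txt");
           return line + " (lookup at " + localPath + ")";
       });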
    • addFile

      public void addFile(String path, boolean recursive)
      Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(fileName) to find its download location.

      A directory can be given if the recursive option is set to true. Currently directories are only supported for Hadoop-supported filesystems.

      Parameters:
      path - (undocumented)
      recursive - (undocumented)
      Note:
      A path can be added only once. Subsequent additions of the same path are ignored.
    • addJar

      public void addJar(String path)
      Adds a JAR dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.

      Parameters:
      path - (undocumented)
      Note:
      A path can be added only once. Subsequent additions of the same path are ignored.
    • hadoopConfiguration

      public org.apache.hadoop.conf.Configuration hadoopConfiguration()
      Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.

      Returns:
      (undocumented)
      Note:
      As it will be reused in all Hadoop RDDs, it's better not to modify it unless you plan to set some global configurations for all Hadoop RDDs.
    • setCheckpointDir

      public void setCheckpointDir(String dir)
      Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster.
      Parameters:
      dir - (undocumented)
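
      A sketch, assuming an existing JavaSparkContext sc running on a cluster and illustrative paths:

       sc.setCheckpointDir("hdfs://namenode/spark/checkpoints");
       JavaRDD<String> rdd = sc.textFile("hdfs://namenode/logs/app.log");
       // Mark the RDD for checkpointing; the data is written on the next action.
       rdd.checkpoint();
       rdd.count();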
    • getCheckpointDir

      public Optional<String> getCheckpointDir()
    • getConf

      public SparkConf getConf()
      Return a copy of this JavaSparkContext's configuration. The configuration cannot be changed at runtime.
      Returns:
      (undocumented)
    • setCallSite

      public void setCallSite(String site)
      Pass-through to SparkContext.setCallSite. For API support only.
      Parameters:
      site - (undocumented)
    • clearCallSite

      public void clearCallSite()
      Pass-through to SparkContext.clearCallSite. For API support only.
    • setLocalProperty

      public void setLocalProperty(String key, String value)
      Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.

      These properties are inherited by child threads spawned from this thread. This may have unexpected consequences when working with thread pools. The standard Java implementation of thread pools has worker threads spawn other worker threads. As a result, local properties may propagate unpredictably.

      Parameters:
      key - (undocumented)
      value - (undocumented)
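
      For example, jobs submitted from the current thread can be routed to a fair-scheduler pool (the pool name is illustrative), assuming an existing JavaSparkContext sc:

       // Applies to this thread and to threads it spawns afterwards.
       sc.setLocalProperty("spark.scheduler.pool", "production");
       // Setting the value to null removes the property again.
       sc.setLocalProperty("spark.scheduler.pool", null);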
    • getLocalProperty

      public String getLocalProperty(String key)
      Get a local property set in this thread, or null if it is missing. See org.apache.spark.api.java.JavaSparkContext.setLocalProperty.
      Parameters:
      key - (undocumented)
      Returns:
      (undocumented)
    • setJobDescription

      public void setJobDescription(String value)
      Set a human readable description of the current job.
      Parameters:
      value - (undocumented)
      Since:
      2.3.0
    • setLogLevel

      public void setLogLevel(String logLevel)
      Control the log level. This overrides any user-defined log settings.
      Parameters:
      logLevel - The desired log level as a string. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
    • setJobGroup

      public void setJobGroup(String groupId, String description, boolean interruptOnCancel)
      Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

      Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.

      The application can also use org.apache.spark.api.java.JavaSparkContext.cancelJobGroup to cancel all running jobs in this group. For example,

      
       // In the main thread:
       sc.setJobGroup("some_job_to_cancel", "some job description");
       rdd.map(...).count();
      
       // In a separate thread:
       sc.cancelJobGroup("some_job_to_cancel");
       

      If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.

      Parameters:
      groupId - (undocumented)
      description - (undocumented)
      interruptOnCancel - (undocumented)
    • setJobGroup

      public void setJobGroup(String groupId, String description)
      Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.

      Parameters:
      groupId - (undocumented)
      description - (undocumented)
      See Also:
      • setJobGroup(groupId: String, description: String, interruptThread: Boolean). This method sets interruptOnCancel to false.
    • clearJobGroup

      public void clearJobGroup()
      Clear the current thread's job group ID and its description.
    • setInterruptOnCancel

      public void setInterruptOnCancel(boolean interruptOnCancel)
      Set the behavior of job cancellation from jobs started in this thread.

      Parameters:
      interruptOnCancel - If true, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.

      Since:
      3.5.0
    • addJobTag

      public void addJobTag(String tag)
      Add a tag to be assigned to all the jobs started by this thread.

      Parameters:
      tag - The tag to be added. Cannot contain ',' (comma) character.

      Since:
      3.5.0
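
      Tags can be used together with cancelJobsWithTag (documented below); a sketch, assuming an existing JavaSparkContext sc and an existing RDD named rdd:

       // In the thread running the job:
       sc.addJobTag("nightly-report");
       rdd.count();

       // In a separate thread, cancel everything carrying that tag:
       sc.cancelJobsWithTag("nightly-report");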
    • removeJobTag

      public void removeJobTag(String tag)
      Remove a tag previously added to be assigned to all the jobs started by this thread. This is a no-op if such a tag was not added earlier.

      Parameters:
      tag - The tag to be removed. Cannot contain ',' (comma) character.

      Since:
      3.5.0
    • getJobTags

      public Set<String> getJobTags()
      Get the tags that are currently set to be assigned to all the jobs started by this thread.

      Returns:
      (undocumented)
      Since:
      3.5.0
    • clearJobTags

      public void clearJobTags()
      Clear the current thread's job tags.

      Since:
      3.5.0
    • cancelJobGroup

      public void cancelJobGroup(String groupId)
      Cancel active jobs for the specified group. See org.apache.spark.api.java.JavaSparkContext.setJobGroup for more information.
      Parameters:
      groupId - (undocumented)
    • cancelJobsWithTag

      public void cancelJobsWithTag(String tag)
      Cancel active jobs that have the specified tag. See org.apache.spark.SparkContext.addJobTag.

      Parameters:
      tag - The tag to be cancelled. Cannot contain ',' (comma) character.

      Since:
      3.5.0
    • cancelAllJobs

      public void cancelAllJobs()
      Cancel all jobs that have been scheduled or are running.
    • getPersistentRDDs

      public Map<Integer,JavaRDD<?>> getPersistentRDDs()
      Returns a Java map of JavaRDDs that have marked themselves as persistent via a cache() call.

      Returns:
      (undocumented)
      Note:
      This does not necessarily mean the caching or computation was successful.