Class JavaSparkContext
- All Implemented Interfaces:
Closeable, AutoCloseable

A Java-friendly version of SparkContext that returns JavaRDDs and works with Java collections instead of Scala ones.
- Note:
- Only one SparkContext should be active per JVM. You must stop() the active SparkContext before creating a new one.
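As a minimal sketch of that lifecycle (the app name, master URL, and data below are placeholder values; a real application would normally receive its settings from spark-submit):
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class Example {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("example-app").setMaster("local[2]");
    JavaSparkContext jsc = new JavaSparkContext(conf);
    try {
      long n = jsc.parallelize(Arrays.asList(1, 2, 3)).count();
      System.out.println("count = " + n);
    } finally {
      jsc.stop(); // release the one active SparkContext before creating another
    }
  }
}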
-
Constructor Summary
- JavaSparkContext(): Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit).
- JavaSparkContext(SparkConf conf)
- JavaSparkContext(String master, String appName)
- JavaSparkContext(String master, String appName, SparkConf conf)
- JavaSparkContext(String master, String appName, String sparkHome, String jarFile)
- JavaSparkContext(String master, String appName, String sparkHome, String[] jars)
- JavaSparkContext(String master, String appName, String sparkHome, String[] jars, Map<String, String> environment)
-
Method Summary
- void addFile(String path): Add a file to be downloaded with this Spark job on every node.
- void addFile(String path, boolean recursive): Add a file to be downloaded with this Spark job on every node.
- void addJar(String path): Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
- void addJobTag(String tag): Add a tag to be assigned to all the jobs started by this thread.
- String appName()
- JavaPairRDD<String,PortableDataStream> binaryFiles(String path): Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
- JavaPairRDD<String,PortableDataStream> binaryFiles(String path, int minPartitions): Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array.
- JavaRDD<byte[]> binaryRecords(String path, int recordLength): Load data from a flat binary file, assuming the length of each record is constant.
- <T> Broadcast<T> broadcast(T value): Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions.
- void cancelAllJobs(): Cancel all jobs that have been scheduled or are running.
- void cancelJobGroup(String groupId): Cancel active jobs for the specified group.
- void cancelJobsWithTag(String tag): Cancel active jobs that have the specified tag.
- void clearCallSite(): Pass-through to SparkContext.setCallSite.
- void clearJobGroup(): Clear the current thread's job group ID and its description.
- void clearJobTags(): Clear the current thread's job tags.
- void close()
- Integer defaultMinPartitions(): Default min number of partitions for Hadoop RDDs when not given by user.
- Integer defaultParallelism(): Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD).
- <T> JavaRDD<T> emptyRDD(): Get an RDD that has no partitions or elements.
- static JavaSparkContext fromSparkContext(SparkContext sc)
- Optional<String> getCheckpointDir()
- SparkConf getConf(): Return a copy of this JavaSparkContext's configuration.
- Set<String> getJobTags(): Get the tags that are currently set to be assigned to all the jobs started by this thread.
- String getLocalProperty(String key): Get a local property set in this thread, or null if it is missing.
- Map<Integer,JavaRDD<?>> getPersistentRDDs(): Returns a Java map of JavaRDDs that have marked themselves as persistent via cache() call.
- Optional<String> getSparkHome(): Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference).
- org.apache.hadoop.conf.Configuration hadoopConfiguration(): Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.
- <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass): Get an RDD for a Hadoop file with an arbitrary InputFormat.
- <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions): Get an RDD for a Hadoop file with an arbitrary InputFormat.
- <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass): Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
- <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions): Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
- Boolean isLocal()
- static String[] jarOfClass(Class<?> cls): Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
- static String[] jarOfObject(Object obj): Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext.
- List<String> jars()
- String master()
- <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopFile(String path, Class<F> fClass, Class<K> kClass, Class<V> vClass, org.apache.hadoop.conf.Configuration conf): Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopRDD(org.apache.hadoop.conf.Configuration conf, Class<F> fClass, Class<K> kClass, Class<V> vClass): Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- <T> JavaRDD<T> objectFile(String path): Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
- <T> JavaRDD<T> objectFile(String path, int minPartitions): Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition.
- <T> JavaRDD<T> parallelize(List<T> list): Distribute a local Scala collection to form an RDD.
- <T> JavaRDD<T> parallelize(List<T> list, int numSlices): Distribute a local Scala collection to form an RDD.
- JavaDoubleRDD parallelizeDoubles(List<Double> list): Distribute a local Scala collection to form an RDD.
- JavaDoubleRDD parallelizeDoubles(List<Double> list, int numSlices): Distribute a local Scala collection to form an RDD.
- <K,V> JavaPairRDD<K,V> parallelizePairs(List<scala.Tuple2<K,V>> list): Distribute a local Scala collection to form an RDD.
- <K,V> JavaPairRDD<K,V> parallelizePairs(List<scala.Tuple2<K,V>> list, int numSlices): Distribute a local Scala collection to form an RDD.
- void removeJobTag(String tag): Remove a tag previously added to be assigned to all the jobs started by this thread.
- Map<String,ResourceInformation> resources()
- SparkContext sc()
- <K,V> JavaPairRDD<K,V> sequenceFile(String path, Class<K> keyClass, Class<V> valueClass): Get an RDD for a Hadoop SequenceFile.
- <K,V> JavaPairRDD<K,V> sequenceFile(String path, Class<K> keyClass, Class<V> valueClass, int minPartitions): Get an RDD for a Hadoop SequenceFile with given key and value types.
- void setCallSite(String site): Pass-through to SparkContext.setCallSite.
- void setCheckpointDir(String dir): Set the directory under which RDDs are going to be checkpointed.
- void setInterruptOnCancel(boolean interruptOnCancel): Set the behavior of job cancellation from jobs started in this thread.
- void setJobDescription(String value): Set a human-readable description of the current job.
- void setJobGroup(String groupId, String description): Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
- void setJobGroup(String groupId, String description, boolean interruptOnCancel): Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
- void setLocalProperty(String key, String value): Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.
- void setLogLevel(String logLevel): Control our logLevel.
- String sparkUser()
- Long startTime()
- JavaSparkStatusTracker statusTracker()
- void stop(): Shut down the SparkContext.
- JavaRDD<String> textFile(String path): Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
- JavaRDD<String> textFile(String path, int minPartitions): Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
- static SparkContext toSparkContext(JavaSparkContext jsc)
- JavaDoubleRDD union(JavaDoubleRDD... rdds): Build the union of JavaDoubleRDDs.
- <K,V> JavaPairRDD<K,V> union(JavaPairRDD<K,V>... rdds): Build the union of JavaPairRDDs.
- <T> JavaRDD<T> union(JavaRDD<T>... rdds): Build the union of JavaRDDs.
- JavaDoubleRDD union(scala.collection.Seq<JavaDoubleRDD> rdds): Build the union of JavaDoubleRDDs.
- <K,V> JavaPairRDD<K,V> union(scala.collection.Seq<JavaPairRDD<K,V>> rdds): Build the union of JavaPairRDDs.
- <T> JavaRDD<T> union(scala.collection.Seq<JavaRDD<T>> rdds): Build the union of JavaRDDs.
- String version(): The version of Spark on which this application is running.
- JavaPairRDD<String,String> wholeTextFiles(String path): Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
- JavaPairRDD<String,String> wholeTextFiles(String path, int minPartitions): Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
-
Constructor Details
-
JavaSparkContext
public JavaSparkContext()
Create a JavaSparkContext that loads settings from system properties (for instance, when launching with ./bin/spark-submit). -
JavaSparkContext
public JavaSparkContext(SparkConf conf)
- Parameters:
conf - a SparkConf object specifying Spark parameters
-
JavaSparkContext
public JavaSparkContext(String master, String appName)
- Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
-
JavaSparkContext
public JavaSparkContext(String master, String appName, SparkConf conf)
- Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
conf - a SparkConf object specifying other Spark parameters
-
JavaSparkContext
public JavaSparkContext(String master, String appName, String sparkHome, String jarFile)
- Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jarFile - JAR file to send to the cluster. This can be a path on the local file system or an HDFS, HTTP, HTTPS, or FTP URL.
-
JavaSparkContext
public JavaSparkContext(String master, String appName, String sparkHome, String[] jars)
- Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
-
JavaSparkContext
public JavaSparkContext(String master, String appName, String sparkHome, String[] jars, Map<String, String> environment)
- Parameters:
master - Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
appName - A name for your application, to display on the cluster web UI
sparkHome - The SPARK_HOME directory on the worker nodes
jars - Collection of JARs to send to the cluster. These can be paths on the local file system or HDFS, HTTP, HTTPS, or FTP URLs.
environment - Environment variables to set on worker nodes
-
-
Method Details
-
fromSparkContext
-
toSparkContext
-
jarOfClass
Find the JAR from which a given class was loaded, to make it easy for users to pass their JARs to SparkContext.
- Parameters:
cls - (undocumented)
- Returns:
- (undocumented)
-
jarOfObject
Find the JAR that contains the class of a particular object, to make it easy for users to pass their JARs to SparkContext. In most cases you can call jarOfObject(this) in your driver program.
- Parameters:
obj - (undocumented)
- Returns:
- (undocumented)
-
union
Build the union of JavaRDDs. -
union
Build the union of JavaPairRDDs. -
union
Build the union of JavaDoubleRDDs. -
sc
-
statusTracker
-
isLocal
-
sparkUser
-
master
-
appName
-
resources
-
jars
-
startTime
-
version
The version of Spark on which this application is running. -
defaultParallelism
Default level of parallelism to use when not given by user (e.g. parallelize and makeRDD). -
defaultMinPartitions
Default min number of partitions for Hadoop RDDs when not given by user -
parallelize
Distribute a local Scala collection to form an RDD. -
emptyRDD
Get an RDD that has no partitions or elements. -
parallelize
Distribute a local Scala collection to form an RDD. -
parallelizePairs
Distribute a local Scala collection to form an RDD. -
parallelizePairs
Distribute a local Scala collection to form an RDD. -
parallelizeDoubles
Distribute a local Scala collection to form an RDD. -
parallelizeDoubles
Distribute a local Scala collection to form an RDD. -
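An illustrative sketch of the three parallelize variants (assuming an existing JavaSparkContext named jsc; the data is arbitrary):
import java.util.Arrays;
import scala.Tuple2;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;

JavaRDD<Integer> nums = jsc.parallelize(Arrays.asList(1, 2, 3, 4), 2); // 2 partitions
JavaPairRDD<String, Integer> pairs =
    jsc.parallelizePairs(Arrays.asList(new Tuple2<>("a", 1), new Tuple2<>("b", 2)));
JavaDoubleRDD doubles = jsc.parallelizeDoubles(Arrays.asList(1.0, 2.5, 4.0));
System.out.println(nums.count() + " / " + pairs.countByKey() + " / " + doubles.sum());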
textFile
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. The text files must be encoded as UTF-8.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
-
textFile
Read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. The text files must be encoded as UTF-8.
- Parameters:
path - (undocumented)
minPartitions - (undocumented)
- Returns:
- (undocumented)
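For example, a small sketch that counts lines and characters (the path is hypothetical and jsc is an existing JavaSparkContext):
import org.apache.spark.api.java.JavaRDD;

JavaRDD<String> lines = jsc.textFile("hdfs://a-hdfs-path/input.txt", 4); // 4 is only a minimum-partition hint
long lineCount = lines.count();
long charCount = lines.map(String::length).reduce(Integer::sum);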
-
wholeTextFiles
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file and the value is the content of each file. The text files must be encoded as UTF-8.
For example, if you have the following files:
hdfs://a-hdfs-path/part-00000 hdfs://a-hdfs-path/part-00001 ... hdfs://a-hdfs-path/part-nnnnn
Do
JavaPairRDD<String, String> rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path")
then rdd contains
(a-hdfs-path/part-00000, its content) (a-hdfs-path/part-00001, its content) ... (a-hdfs-path/part-nnnnn, its content)
- Parameters:
path - (undocumented)
minPartitions - A suggestion value of the minimal splitting number for input data.
- Returns:
- (undocumented)
- Note:
- Small files are preferred; large files are also allowed, but may cause bad performance.
-
wholeTextFiles
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file and the value is the content of each file. The text files must be encoded as UTF-8.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- See Also:
- wholeTextFiles(path: String, minPartitions: Int)
-
binaryFiles
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file and the value is the content of each file.
For example, if you have the following files:
hdfs://a-hdfs-path/part-00000 hdfs://a-hdfs-path/part-00001 ... hdfs://a-hdfs-path/part-nnnnn
Do
JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path")
then rdd contains
(a-hdfs-path/part-00000, its content) (a-hdfs-path/part-00001, its content) ... (a-hdfs-path/part-nnnnn, its content)
- Parameters:
path - (undocumented)
minPartitions - A suggestion value of the minimal splitting number for input data.
- Returns:
- (undocumented)
- Note:
- Small files are preferred; very large files may cause bad performance.
-
binaryFiles
Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file and the value is the content of each file.
For example, if you have the following files:
hdfs://a-hdfs-path/part-00000 hdfs://a-hdfs-path/part-00001 ... hdfs://a-hdfs-path/part-nnnnn
Do
JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path")
then rdd contains
(a-hdfs-path/part-00000, its content) (a-hdfs-path/part-00001, its content) ... (a-hdfs-path/part-nnnnn, its content)
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Note:
- Small files are preferred; very large files may cause bad performance.
-
binaryRecords
Load data from a flat binary file, assuming the length of each record is constant.
- Parameters:
path - Directory to the input data files
recordLength - (undocumented)
- Returns:
- An RDD of data with values, represented as byte arrays
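For instance, a sketch that reads fixed-width 8-byte records and decodes each as a big-endian long (the path is hypothetical; jsc is an existing JavaSparkContext):
import java.nio.ByteBuffer;
import org.apache.spark.api.java.JavaRDD;

JavaRDD<byte[]> records = jsc.binaryRecords("hdfs://a-hdfs-path/fixed-width.bin", 8);
JavaRDD<Long> values = records.map(bytes -> ByteBuffer.wrap(bytes).getLong());
System.out.println("records: " + values.count());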
-
sequenceFile
public <K,V> JavaPairRDD<K,V> sequenceFile(String path, Class<K> keyClass, Class<V> valueClass, int minPartitions)
Get an RDD for a Hadoop SequenceFile with given key and value types.
- Parameters:
path - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
minPartitions - (undocumented)
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
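A sketch of the copy-before-cache pattern from the note above, assuming a SequenceFile of Text keys and IntWritable values at a hypothetical path and an existing JavaSparkContext jsc:
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import scala.Tuple2;
import org.apache.spark.api.java.JavaPairRDD;

JavaPairRDD<Text, IntWritable> raw =
    jsc.sequenceFile("hdfs://a-hdfs-path/data.seq", Text.class, IntWritable.class, 4);
// Copy the reused Writable objects into plain Java values before caching.
JavaPairRDD<String, Integer> safe =
    raw.mapToPair(t -> new Tuple2<>(t._1().toString(), t._2().get()));
safe.cache();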
-
sequenceFile
Get an RDD for a Hadoop SequenceFile.
- Parameters:
path - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
objectFile
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
- Parameters:
path - (undocumented)
minPartitions - (undocumented)
- Returns:
- (undocumented)
-
objectFile
Load an RDD saved as a SequenceFile containing serialized objects, with NullWritable keys and BytesWritable values that contain a serialized partition. This is still an experimental storage format and may not be supported exactly as is in future Spark releases. It will also be pretty slow if you use the default serializer (Java serialization), though the nice thing about it is that there's very little effort required to save arbitrary objects.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
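A round-trip sketch (the output path is a placeholder; jsc is an existing JavaSparkContext):
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;

JavaRDD<String> words = jsc.parallelize(Arrays.asList("spark", "java", "rdd"));
words.saveAsObjectFile("hdfs://a-hdfs-path/words-objects"); // SequenceFile of serialized partitions
JavaRDD<String> reloaded = jsc.objectFile("hdfs://a-hdfs-path/words-objects");
System.out.println(reloaded.count());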
-
hadoopRDD
public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
- Parameters:
conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
inputFormatClass - Class of the InputFormat
keyClass - Class of the keys
valueClass - Class of the values
minPartitions - Minimum number of Hadoop Splits to generate.
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
hadoopRDD
public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopRDD(org.apache.hadoop.mapred.JobConf conf, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
Get an RDD for a Hadoop-readable dataset from a Hadoop JobConf giving its InputFormat and any other necessary info (e.g. file name for a filesystem-based dataset, table name for HyperTable, etc).
- Parameters:
conf - JobConf for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
inputFormatClass - Class of the InputFormat
keyClass - Class of the keys
valueClass - Class of the values
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
hadoopFile
public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
Get an RDD for a Hadoop file with an arbitrary InputFormat.
- Parameters:
path - (undocumented)
inputFormatClass - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
minPartitions - (undocumented)
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
hadoopFile
public <K,V,F extends org.apache.hadoop.mapred.InputFormat<K,V>> JavaPairRDD<K,V> hadoopFile(String path, Class<F> inputFormatClass, Class<K> keyClass, Class<V> valueClass)
Get an RDD for a Hadoop file with an arbitrary InputFormat.
- Parameters:
path - (undocumented)
inputFormatClass - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
newAPIHadoopFile
public <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopFile(String path, Class<F> fClass, Class<K> kClass, Class<V> vClass, org.apache.hadoop.conf.Configuration conf)
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- Parameters:
path - (undocumented)
fClass - (undocumented)
kClass - (undocumented)
vClass - (undocumented)
conf - (undocumented)
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
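An illustrative sketch that reads line-oriented text through the new-API TextInputFormat (the path is hypothetical; jsc is an existing JavaSparkContext):
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.api.java.JavaPairRDD;

JavaPairRDD<LongWritable, Text> lines = jsc.newAPIHadoopFile(
    "hdfs://a-hdfs-path/input.txt",
    TextInputFormat.class, LongWritable.class, Text.class,
    jsc.hadoopConfiguration());
// Copy out of the reused Writables before caching, as the note above advises.
long nonEmpty = lines.map(t -> t._2().toString()).filter(s -> !s.isEmpty()).count();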
-
newAPIHadoopRDD
public <K,V,F extends org.apache.hadoop.mapreduce.InputFormat<K,V>> JavaPairRDD<K,V> newAPIHadoopRDD(org.apache.hadoop.conf.Configuration conf, Class<F> fClass, Class<K> kClass, Class<V> vClass)
Get an RDD for a given Hadoop file with an arbitrary new API InputFormat and extra configuration options to pass to the input format.
- Parameters:
conf - Configuration for setting up the dataset. Note: This will be put into a Broadcast. Therefore if you plan to reuse this conf to create multiple RDDs, you need to make sure you won't modify the conf. A safe approach is always creating a new conf for a new RDD.
fClass - Class of the InputFormat
kClass - Class of the keys
vClass - Class of the values
- Returns:
- (undocumented)
- Note:
- Because Hadoop's RecordReader class re-uses the same Writable object for each record, directly caching the returned RDD will create many references to the same object. If you plan to directly cache Hadoop writable objects, you should first copy them using a map function.
-
union
Build the union of JavaRDDs. -
union
Build the union of JavaPairRDDs. -
union
Build the union of JavaDoubleRDDs. -
broadcast
Broadcast a read-only variable to the cluster, returning a Broadcast object for reading it in distributed functions. The variable will be sent to each cluster only once.
- Parameters:
value - (undocumented)
- Returns:
- (undocumented)
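For example, a sketch that shares a small lookup table with tasks instead of capturing it in every closure (jsc is an existing JavaSparkContext; the data is arbitrary):
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.broadcast.Broadcast;

Map<String, Integer> lookup = new HashMap<>();
lookup.put("a", 1);
lookup.put("b", 2);
Broadcast<Map<String, Integer>> bc = jsc.broadcast(lookup);

JavaRDD<String> keys = jsc.parallelize(Arrays.asList("a", "b", "a"));
// Tasks read the broadcast value; the map itself is not re-sent with every task closure.
long total = keys.map(k -> bc.value().getOrDefault(k, 0)).reduce(Integer::sum);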
-
stop
public void stop()
Shut down the SparkContext. -
close
public void close()
- Specified by:
- close in interface AutoCloseable
- Specified by:
- close in interface Closeable
-
getSparkHome
Get Spark's home location from either a value set through the constructor, or the spark.home Java property, or the SPARK_HOME environment variable (in that order of preference). If none of these is set, return None.
- Returns:
- (undocumented)
-
addFile
Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(fileName) to find its download location.
- Parameters:
path - (undocumented)
- Note:
- A path can be added only once. Subsequent additions of the same path are ignored.
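For example, a sketch in which an executor resolves the distributed copy by file name with SparkFiles.get (the file name and path are hypothetical; jsc is an existing JavaSparkContext):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import org.apache.spark.SparkFiles;

jsc.addFile("hdfs://a-hdfs-path/lookup.txt");
long size = jsc.parallelize(Arrays.asList(1)).map(x -> {
  // On an executor, look up the local download location by file name only.
  String local = SparkFiles.get("lookup.txt");
  return Files.size(Paths.get(local));
}).first();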
-
addFile
Add a file to be downloaded with this Spark job on every node. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI. To access the file in Spark jobs, use SparkFiles.get(fileName) to find its download location.
A directory can be given if the recursive option is set to true. Currently directories are only supported for Hadoop-supported filesystems.
- Parameters:
path - (undocumented)
recursive - (undocumented)
- Note:
- A path can be added only once. Subsequent additions of the same path are ignored.
-
addJar
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI.
- Parameters:
path - (undocumented)
- Note:
- A path can be added only once. Subsequent additions of the same path are ignored.
-
hadoopConfiguration
public org.apache.hadoop.conf.Configuration hadoopConfiguration()
Returns the Hadoop configuration used for the Hadoop code (e.g. file systems) we reuse.
- Returns:
- (undocumented)
- Note:
- As it will be reused in all Hadoop RDDs, it's better not to modify it unless you plan to set some global configurations for all Hadoop RDDs.
-
setCheckpointDir
Set the directory under which RDDs are going to be checkpointed. The directory must be an HDFS path if running on a cluster.
- Parameters:
dir - (undocumented)
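A sketch of checkpointing an RDD (the directory is a placeholder and would need to be an HDFS path on a cluster; jsc is an existing JavaSparkContext):
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;

jsc.setCheckpointDir("/tmp/spark-checkpoints");
JavaRDD<Integer> squares = jsc.parallelize(Arrays.asList(1, 2, 3)).map(x -> x * x);
squares.checkpoint();  // marked now, materialized by the next action
squares.count();
System.out.println(squares.isCheckpointed());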
-
getCheckpointDir
-
getConf
Return a copy of this JavaSparkContext's configuration. The configuration cannot be changed at runtime.
- Returns:
- (undocumented)
-
setCallSite
Pass-through to SparkContext.setCallSite. For API support only.
- Parameters:
site - (undocumented)
-
clearCallSite
public void clearCallSite()
Pass-through to SparkContext.setCallSite. For API support only. -
setLocalProperty
Set a local property that affects jobs submitted from this thread, and all child threads, such as the Spark fair scheduler pool.
These properties are inherited by child threads spawned from this thread. This may have unexpected consequences when working with thread pools. The standard Java implementation of thread pools has worker threads spawn other worker threads. As a result, local properties may propagate unpredictably.
- Parameters:
key - (undocumented)
value - (undocumented)
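For instance, a sketch that routes this thread's jobs to a fair-scheduler pool (the pool name is a placeholder and must be defined in your scheduler configuration; jsc is an existing JavaSparkContext):
// Jobs submitted from this thread (and its children) go to the named pool.
jsc.setLocalProperty("spark.scheduler.pool", "reporting");
jsc.parallelize(java.util.Arrays.asList(1, 2, 3)).count();
// Setting the value to null clears the property for later jobs from this thread.
jsc.setLocalProperty("spark.scheduler.pool", null);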
-
getLocalProperty
Get a local property set in this thread, or null if it is missing. See org.apache.spark.api.java.JavaSparkContext.setLocalProperty.
- Parameters:
key - (undocumented)
- Returns:
- (undocumented)
-
setJobDescription
Set a human-readable description of the current job.
- Parameters:
value - (undocumented)
- Since:
- 2.3.0
-
setLogLevel
Control our logLevel. This overrides any user-defined log settings.
- Parameters:
logLevel - The desired log level as a string. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
-
setJobGroup
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
Often, a unit of execution in an application consists of multiple Spark actions or jobs. Application programmers can use this method to group all those jobs together and give a group description. Once set, the Spark web UI will associate such jobs with this group.
The application can also use org.apache.spark.api.java.JavaSparkContext.cancelJobGroup to cancel all running jobs in this group. For example:
// In the main thread:
sc.setJobGroup("some_job_to_cancel", "some job description");
rdd.map(...).count();
// In a separate thread:
sc.cancelJobGroup("some_job_to_cancel");
If interruptOnCancel is set to true for the job group, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.
- Parameters:
groupId - (undocumented)
description - (undocumented)
interruptOnCancel - (undocumented)
-
setJobGroup
Assigns a group ID to all the jobs started by this thread until the group ID is set to a different value or cleared.
- Parameters:
groupId - (undocumented)
description - (undocumented)
- See Also:
- setJobGroup(groupId: String, description: String, interruptThread: Boolean). This method sets interruptOnCancel to false.
-
clearJobGroup
public void clearJobGroup()
Clear the current thread's job group ID and its description. -
setInterruptOnCancel
public void setInterruptOnCancel(boolean interruptOnCancel)
Set the behavior of job cancellation from jobs started in this thread.
- Parameters:
interruptOnCancel - If true, then job cancellation will result in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208, where HDFS may respond to Thread.interrupt() by marking nodes as dead.
- Since:
- 3.5.0
-
addJobTag
Add a tag to be assigned to all the jobs started by this thread.
- Parameters:
tag - The tag to be added. Cannot contain ',' (comma) character.
- Since:
- 3.5.0
-
removeJobTag
Remove a tag previously added to be assigned to all the jobs started by this thread. Noop if such a tag was not added earlier.
- Parameters:
tag - The tag to be removed. Cannot contain ',' (comma) character.
- Since:
- 3.5.0
-
getJobTags
Get the tags that are currently set to be assigned to all the jobs started by this thread.
- Returns:
- (undocumented)
- Since:
- 3.5.0
-
clearJobTags
public void clearJobTags()
Clear the current thread's job tags.
- Since:
- 3.5.0
-
cancelJobGroup
Cancel active jobs for the specified group. See org.apache.spark.api.java.JavaSparkContext.setJobGroup for more information.
- Parameters:
groupId - (undocumented)
-
cancelJobsWithTag
Cancel active jobs that have the specified tag. See org.apache.spark.SparkContext.addJobTag.
- Parameters:
tag - The tag to be cancelled. Cannot contain ',' (comma) character.
- Since:
- 3.5.0
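A sketch of tagging work in one thread and cancelling it from another (requires Spark 3.5+; the tag name is arbitrary and jsc is an existing JavaSparkContext):
// In the thread that submits the work:
jsc.addJobTag("nightly-report");
// ... run actions; every job started by this thread now carries the tag ...

// From any other thread, cancel everything carrying that tag:
jsc.cancelJobsWithTag("nightly-report");

// When the submitting thread no longer wants new jobs tagged:
jsc.removeJobTag("nightly-report");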
-
cancelAllJobs
public void cancelAllJobs()
Cancel all jobs that have been scheduled or are running. -
getPersistentRDDs
Returns a Java map of JavaRDDs that have marked themselves as persistent via cache() call.
- Returns:
- (undocumented)
- Note:
- This does not necessarily mean the caching or computation was successful.
-