Class HadoopRDD<K,V>
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, scala.Serializable
An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the older MapReduce API (org.apache.hadoop.mapred).
param: sc The SparkContext to associate the RDD with.
param: broadcastedConf A general Hadoop Configuration, or a subclass of it. If the enclosed variable references an instance of JobConf, then that JobConf will be used for the Hadoop job. Otherwise, a new JobConf will be created on each executor using the enclosed Configuration.
param: initLocalJobConfFuncOpt Optional closure used to initialize any JobConf that HadoopRDD creates.
param: inputFormatClass Storage format of the data to be read.
param: keyClass Class of the key associated with the inputFormatClass.
param: valueClass Class of the value associated with the inputFormatClass.
param: minPartitions Minimum number of HadoopRDD partitions (Hadoop Splits) to generate.
- Note:
Instantiating this class directly is not recommended; please use org.apache.spark.SparkContext.hadoopRDD() instead, as sketched below.
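A minimal sketch of the recommended route, assuming a local master and a hypothetical input path:

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapred.{FileInputFormat, JobConf, TextInputFormat}
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(
      new SparkConf().setAppName("hadoop-rdd-sketch").setMaster("local[*]"))

    // Configure the job with the old (org.apache.hadoop.mapred) API.
    val jobConf = new JobConf()
    FileInputFormat.setInputPaths(jobConf, "hdfs:///tmp/input") // hypothetical path

    // Returns an RDD[(LongWritable, Text)] backed by a HadoopRDD.
    val lines = sc.hadoopRDD(
      jobConf,
      classOf[TextInputFormat],
      classOf[LongWritable],
      classOf[Text],
      2 /* minPartitions */)

    // Hadoop's RecordReader reuses Writable objects, so copy values out
    // before collecting or caching.
    lines.map { case (_, text) => text.toString }.take(5).foreach(println)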
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
HadoopRDD(SparkContext sc, org.apache.hadoop.mapred.JobConf conf, Class<? extends org.apache.hadoop.mapred.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
HadoopRDD(SparkContext sc, Broadcast<SerializableConfiguration> broadcastedConf, scala.Option<scala.Function1<org.apache.hadoop.mapred.JobConf,scala.runtime.BoxedUnit>> initLocalJobConfFuncOpt, Class<? extends org.apache.hadoop.mapred.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
-
Method Summary
static void addLocalConfiguration(String jobTrackerId, int jobId, int splitId, int attemptId, org.apache.hadoop.mapred.JobConf conf)
Add Hadoop configuration specific to a single partition and attempt.
void checkpoint()
Mark this RDD for checkpointing.
InterruptibleIterator<scala.Tuple2<K,V>> compute(Partition theSplit, TaskContext context)
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
static Object CONFIGURATION_INSTANTIATION_LOCK()
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456).
static Object getCachedMetadata(String key)
The three methods below are helpers for accessing the local map, a property of the SparkEnv of the local process.
org.apache.hadoop.conf.Configuration getConf()
Partition[] getPartitions()
scala.collection.Seq<String> getPreferredLocations(Partition split)
<U> RDD<U> mapPartitionsWithInputSplit(scala.Function2<org.apache.hadoop.mapred.InputSplit,scala.collection.Iterator<scala.Tuple2<K,V>>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$1)
Maps over a partition, providing the InputSplit that was used as the base of the partition.
static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
HadoopRDD<K,V> persist(StorageLevel storageLevel)
Set this RDD's storage level to persist its values across operations after the first time it is computed.
Methods inherited from class org.apache.spark.rdd.RDD
aggregate, barrier, cache, cartesian, cleanShuffleDependencies, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getNumPartitions, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, unpersist, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq
-
Constructor Details
-
HadoopRDD
public HadoopRDD(SparkContext sc, Broadcast<SerializableConfiguration> broadcastedConf, scala.Option<scala.Function1<org.apache.hadoop.mapred.JobConf,scala.runtime.BoxedUnit>> initLocalJobConfFuncOpt, Class<? extends org.apache.hadoop.mapred.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
-
HadoopRDD
public HadoopRDD(SparkContext sc, org.apache.hadoop.mapred.JobConf conf, Class<? extends org.apache.hadoop.mapred.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, int minPartitions)
-
Method Details
-
CONFIGURATION_INSTANTIATION_LOCK
public static Object CONFIGURATION_INSTANTIATION_LOCK()
Configuration's constructor is not threadsafe (see SPARK-1097 and HADOOP-10456). Therefore, we synchronize on this lock before calling new JobConf() or new Configuration().
- Returns:
- (undocumented)
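As an illustrative sketch of the locking pattern (the Javadoc lists the lock as public static, though from user Scala code the HadoopRDD companion object may be Spark-internal):

    import org.apache.hadoop.mapred.JobConf
    import org.apache.spark.rdd.HadoopRDD

    // Serialize JobConf/Configuration construction across threads to avoid
    // the non-thread-safe constructor (SPARK-1097, HADOOP-10456).
    val jobConf = HadoopRDD.CONFIGURATION_INSTANTIATION_LOCK.synchronized {
      new JobConf()
    }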
-
getCachedMetadata
public static Object getCachedMetadata(String key)
The three methods below are helpers for accessing the local map, a property of the SparkEnv of the local process.
- Parameters:
key - (undocumented)
- Returns:
- (undocumented)
-
addLocalConfiguration
public static void addLocalConfiguration(String jobTrackerId, int jobId, int splitId, int attemptId, org.apache.hadoop.mapred.JobConf conf)
Add Hadoop configuration specific to a single partition and attempt.
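A hedged sketch of a call; the timestamp-style jobTrackerId and the job/split/attempt ids are assumptions for illustration, and the same visibility caveat as above applies:

    import java.text.SimpleDateFormat
    import java.util.{Date, Locale}
    import org.apache.hadoop.mapred.JobConf
    import org.apache.spark.rdd.HadoopRDD

    val localConf = new JobConf() // per-task copy of the job configuration

    // Stamp the conf with identifiers for split 3, attempt 0 of job 0.
    val jobTrackerId =
      new SimpleDateFormat("yyyyMMddHHmmss", Locale.US).format(new Date())
    HadoopRDD.addLocalConfiguration(jobTrackerId, 0, 3, 0, localConf)
-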
org$apache$spark$internal$Logging$$log_
public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
-
org$apache$spark$internal$Logging$$log__$eq
public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
-
getPartitions
public Partition[] getPartitions()
-
compute
public InterruptibleIterator<scala.Tuple2<K,V>> compute(Partition theSplit, TaskContext context)
Description copied from class: RDD
:: DeveloperApi :: Implemented by subclasses to compute a given partition.
-
mapPartitionsWithInputSplit
public <U> RDD<U> mapPartitionsWithInputSplit(scala.Function2<org.apache.hadoop.mapred.InputSplit,scala.collection.Iterator<scala.Tuple2<K,V>>,scala.collection.Iterator<U>> f, boolean preservesPartitioning, scala.reflect.ClassTag<U> evidence$1)
Maps over a partition, providing the InputSplit that was used as the base of the partition.
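A sketch of recovering the input file behind each partition, assuming sc and jobConf from the construction sketch above and file-based splits (so the FileSplit cast holds):

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapred.{FileSplit, InputSplit, TextInputFormat}
    import org.apache.spark.rdd.{HadoopRDD, RDD}

    // sc.hadoopRDD returns a plain RDD; cast to reach the HadoopRDD-only API.
    val hadoopRdd = sc.hadoopRDD(jobConf, classOf[TextInputFormat],
        classOf[LongWritable], classOf[Text], 2)
      .asInstanceOf[HadoopRDD[LongWritable, Text]]

    val linesWithFile: RDD[(String, String)] = hadoopRdd.mapPartitionsWithInputSplit(
      (split: InputSplit, iter: Iterator[(LongWritable, Text)]) => {
        val file = split.asInstanceOf[FileSplit].getPath.toString
        iter.map { case (_, line) => (file, line.toString) }
      },
      preservesPartitioning = true)
-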
getPreferredLocations
public scala.collection.Seq<String> getPreferredLocations(Partition split)
-
checkpoint
public void checkpoint()
Description copied from class: RDD
Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext#setCheckpointDir and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
- Overrides:
checkpoint in class RDD<scala.Tuple2<K,V>>
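A minimal sketch of the documented contract, reusing lines from the construction sketch above; the checkpoint directory is a hypothetical path, and persisting first avoids recomputing the Hadoop read:

    import org.apache.spark.storage.StorageLevel

    sc.setCheckpointDir("hdfs:///tmp/checkpoints") // hypothetical directory

    // Copy out of Hadoop's reused Writables before persisting.
    val texts = lines.map { case (_, t) => t.toString }
    texts.persist(StorageLevel.MEMORY_ONLY)
    texts.checkpoint() // must run before the first job on this RDD
    texts.count()      // first action materializes, caches, and checkpoints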
-
persist
public HadoopRDD<K,V> persist(StorageLevel storageLevel)
Description copied from class: RDD
Set this RDD's storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet. Local checkpointing is an exception.
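A sketch of the one-time assignment rule; the semantics are inherited from RDD and shown here on the string-mapped texts RDD from the checkpoint sketch:

    import org.apache.spark.storage.StorageLevel

    val cached = texts.persist(StorageLevel.MEMORY_ONLY) // first assignment: allowed
    // cached.persist(StorageLevel.DISK_ONLY) // a different level now would throw
    cached.unpersist() // after unpersist(), a new storage level may be assigned
-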
getConf
public org.apache.hadoop.conf.Configuration getConf()
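A small sketch reading a stock Hadoop key back from the RDD's effective configuration; the key and default are for illustration only, and hadoopRdd is the cast RDD from the mapPartitionsWithInputSplit sketch:

    val hconf = hadoopRdd.getConf
    // "io.file.buffer.size" is a standard Hadoop key, used here as an example.
    println(hconf.get("io.file.buffer.size", "4096"))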
-