public abstract class BaseRRDD<T,U> extends RDD<U> implements org.apache.spark.internal.Logging
| Constructor and Description |
|---|
| BaseRRDD(RDD<T> parent, int numPartitions, byte[] func, String deserializer, String serializer, byte[] packageNames, Broadcast<Object>[] broadcastVars, scala.reflect.ClassTag<T> evidence$1, scala.reflect.ClassTag<U> evidence$2) |
| Modifier and Type | Method and Description |
|---|---|
| scala.collection.Iterator<U> | compute(Partition partition, TaskContext context) :: DeveloperApi :: Implemented by subclasses to compute a given partition. |
| Partition[] | getPartitions(): Implemented by subclasses to return the set of partitions in this RDD. |
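The two methods above are the standard RDD extension points that BaseRRDD fills in for SparkR. As a minimal sketch of that contract (hypothetical classes for illustration only; ConstantRDD and SimplePartition are not part of Spark, and BaseRRDD's actual compute runs a serialized R function over each parent partition), a custom RDD only needs to describe its partitions and how to compute each one:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// A Partition is just a serializable marker that knows its own index.
case class SimplePartition(override val index: Int) extends Partition

// Hypothetical RDD that yields `value` once per partition.
class ConstantRDD(sc: SparkContext, numSlices: Int, value: Int)
  extends RDD[Int](sc, Nil) {

  // One Partition per slice; each partition's index equals its array
  // position, as required by the invariant noted under getPartitions below.
  override protected def getPartitions: Array[Partition] =
    Array.tabulate[Partition](numSlices)(i => SimplePartition(i))

  // Invoked once per task to produce the elements of one partition.
  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    Iterator.single(value)
}
```

With a live SparkContext sc, new ConstantRDD(sc, 4, 42).collect() would return Array(42, 42, 42, 42), one element per partition.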
Methods inherited from class org.apache.spark.rdd.RDD:

aggregate, barrier, cache, cartesian, checkpoint, cleanShuffleDependencies, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, dependencies, distinct, distinct, doubleRDDToDoubleRDDFunctions, filter, first, flatMap, fold, foreach, foreachPartition, getCheckpointFile, getNumPartitions, getResourceProfile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, localCheckpoint, map, mapPartitions, mapPartitionsWithEvaluator, mapPartitionsWithIndex, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeAggregate, treeReduce, union, unpersist, withResources, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitionsWithEvaluator, zipWithIndex, zipWithUniqueId

Methods inherited from interface org.apache.spark.internal.Logging:

$init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize

public scala.collection.Iterator<U> compute(Partition partition, TaskContext context)

:: DeveloperApi :: Implemented by subclasses to compute a given partition.
Specified by: compute in class RDD<U>

public Partition[] getPartitions()

Implemented by subclasses to return the set of partitions in this RDD.

Specified by: getPartitions in class RDD<U>
 The partitions in this array must satisfy the following property:
   rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }
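To make the property concrete, the check below (a sketch; partitionsIndexedCorrectly is an illustrative helper, not a Spark API) applies the quoted expression to any RDD:

```scala
import org.apache.spark.rdd.RDD

// True when every Partition's index matches its position in the
// partitions array, i.e. the property quoted above holds.
def partitionsIndexedCorrectly(rdd: RDD[_]): Boolean =
  rdd.partitions.zipWithIndex.forall { case (partition, index) =>
    partition.index == index
  }
```

Building the array with Array.tabulate, as in the ConstantRDD sketch above, satisfies this by construction: each partition is handed its own array position as its index.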