spark

RDD

abstract class RDD[T] extends Serializable with Logging

A Resilient Distributed Dataset (RDD), the basic abstraction in Spark. Represents an immutable, partitioned collection of elements that can be operated on in parallel. This class contains the basic operations available on all RDDs, such as map, filter, and persist. In addition, PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; DoubleRDDFunctions contains operations available only on RDDs of Doubles; and SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions when you import spark.SparkContext._.
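
For example, a brief sketch (assuming an existing SparkContext named sc) of how that import makes the pair operations available:

  import spark.SparkContext._  // brings the implicit conversions into scope

  val pairs = sc.parallelize(Seq((1, "a"), (1, "b"), (2, "c")))
  // groupByKey lives on PairRDDFunctions, not on RDD itself; the import
  // above makes it callable directly on this RDD[(Int, String)]
  val grouped = pairs.groupByKey()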

Internally, each RDD is characterized by five main properties:

  - A list of partitions
  - A function for computing each split
  - A list of dependencies on other RDDs
  - Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
  - Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)

All of the scheduling and execution in Spark is done based on these methods, allowing each RDD to implement its own way of computing itself. Indeed, users can implement custom RDDs (e.g. for reading data from a new storage system) by overriding these functions. Please refer to the Spark paper for more details on RDD internals.
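
As a rough illustration, a minimal custom RDD might look like the following sketch. LocalSeqRDD and SeqPartition are hypothetical names; the override points correspond to the abstract members and the (sc, deps) constructor documented below:

  import spark.{Partition, RDD, SparkContext, TaskContext}

  // Hypothetical RDD serving a fixed in-memory sequence from one partition.
  class LocalSeqRDD[T: ClassManifest](sc: SparkContext, data: Seq[T])
      extends RDD[T](sc, Nil) {

    private case class SeqPartition(index: Int) extends Partition

    // Called only once; this RDD has a single partition.
    override def getPartitions: Array[Partition] = Array(SeqPartition(0))

    // Compute a partition by simply iterating over the local data.
    override def compute(split: Partition, context: TaskContext): Iterator[T] =
      data.iterator
  }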

Linear Supertypes
Logging, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new RDD(oneParent: spark.RDD[_])(implicit arg0: ClassManifest[T])

    Construct an RDD with just a one-to-one dependency on one parent.

  2. new RDD(sc: SparkContext, deps: Seq[spark.Dependency[_]])(implicit arg0: ClassManifest[T])

Abstract Value Members

  1. abstract def compute(split: Partition, context: TaskContext): Iterator[T]

    Implemented by subclasses to compute a given partition.

  2. abstract def getPartitions: Array[Partition]

    Implemented by subclasses to return the set of partitions in this RDD. This method will only be called once, so it is safe to implement a time-consuming computation in it.

    Attributes
    protected

Concrete Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. def ++(other: RDD[T]): RDD[T]

    Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).

  5. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  6. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  7. def aggregate[U](zeroValue: U)(seqOp: (U, T) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassManifest[U]): U

    Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into a U and one operation for merging two U's, as in scala.TraversableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.
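
    For example (a sketch assuming a SparkContext named sc), computing a sum and a count in one pass:

      val nums = sc.parallelize(1 to 100)
      val (sum, count) = nums.aggregate((0, 0))(
        (acc, n) => (acc._1 + n, acc._2 + 1),    // merge a T (Int) into a U
        (a, b)   => (a._1 + b._1, a._2 + b._2))  // merge two U's
      // sum = 5050, count = 100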

  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def cache(): RDD[T]

    Persist this RDD with the default storage level (MEMORY_ONLY).

  10. def cartesian[U](other: RDD[U])(implicit arg0: ClassManifest[U]): RDD[(T, U)]

    Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.

  11. def checkpoint(): Unit

    Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext.setCheckpointDir() and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it to a file will require recomputation.
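
    A sketch of the intended call order (the checkpoint directory path is hypothetical):

      sc.setCheckpointDir("/tmp/spark-checkpoints")
      val data = sc.parallelize(1 to 1000).map(_ * 2)
      data.cache()       // recommended: persist in memory first
      data.checkpoint()  // must precede any job on this RDD
      data.count()       // the first action writes the checkpoint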

  12. def clearDependencies(): Unit

    Clears the dependencies of this RDD. This method must ensure that all references to the original parent RDDs are removed to enable the parent RDDs to be garbage collected. Subclasses of RDD may override this method for implementing their own cleaning logic. See UnionRDD for an example.

    Attributes
    protected
  13. def clone(): AnyRef

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  14. def coalesce(numPartitions: Int, shuffle: Boolean = false): RDD[T]

    Return a new RDD that is reduced into numPartitions partitions.
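
    For instance, shrinking a sparsely populated RDD after a heavy filter (a sketch assuming a SparkContext named sc):

      val filtered = sc.parallelize(1 to 1000000, 100).filter(_ % 1000 == 0)
      val compact  = filtered.coalesce(4)  // 4 partitions, no shuffle
      // pass shuffle = true to rebalance data across the new partitions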

  15. def collect[U](f: PartialFunction[T, U])(implicit arg0: ClassManifest[U]): RDD[U]

    Return an RDD that contains all matching values by applying f.

  16. def collect(): Array[T]

    Return an array that contains all of the elements in this RDD.

  17. def context: SparkContext

    The SparkContext that this RDD was created on.

  18. def count(): Long

    Return the number of elements in the RDD.

  19. def countApprox(timeout: Long, confidence: Double = 0.95): PartialResult[BoundedDouble]

    (Experimental) Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.

  20. def countByValue(): Map[T, Long]

    Return the count of each unique value in this RDD as a map of (value, count) pairs. The final combine step happens locally on the master, equivalent to running a single reduce task.

  21. def countByValueApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[T, BoundedDouble]]

    (Experimental) Approximate version of countByValue().

  22. final def dependencies: Seq[spark.Dependency[_]]

    Get the list of dependencies of this RDD, taking into account whether the RDD is checkpointed or not.

  23. def distinct(): RDD[T]

  24. def distinct(numPartitions: Int): RDD[T]

    Return a new RDD containing the distinct elements in this RDD.

  25. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  26. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  27. def filter(f: (T) ⇒ Boolean): RDD[T]

    Return a new RDD containing only the elements that satisfy a predicate.

  28. def filterWith[A](constructA: (Int) ⇒ A)(p: (T, A) ⇒ Boolean)(implicit arg0: ClassManifest[A]): RDD[T]

    Filters this RDD with p, where p takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

  29. def finalize(): Unit

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  30. def first(): T

    Return the first element in this RDD.

  31. def firstParent[U](implicit arg0: ClassManifest[U]): RDD[U]

    Returns the first parent RDD.

    Attributes
    protected[spark]
  32. def flatMap[U](f: (T) ⇒ TraversableOnce[U])(implicit arg0: ClassManifest[U]): RDD[U]

    Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
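
    For example, splitting lines into words (a sketch assuming a SparkContext named sc):

      val lines = sc.parallelize(Seq("to be", "or not"))
      val words = lines.flatMap(_.split(" "))  // "to", "be", "or", "not"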

  33. def flatMapWith[A, U](constructA: (Int) ⇒ A, preservesPartitioning: Boolean)(f: (T, A) ⇒ Seq[U])(implicit arg0: ClassManifest[A], arg1: ClassManifest[U]): RDD[U]

    FlatMaps f over this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

  34. def fold(zeroValue: T)(op: (T, T) ⇒ T): T

    Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.
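
    For example (assuming a SparkContext named sc), summing with 0 as the neutral zero value:

      val total = sc.parallelize(Seq(3, 1, 4, 1, 5)).fold(0)(_ + _)  // 14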

  35. def foreach(f: (T) ⇒ Unit): Unit

    Applies a function f to all elements of this RDD.

  36. def foreachPartition(f: (Iterator[T]) ⇒ Unit): Unit

    Applies a function f to each partition of this RDD.

  37. def foreachWith[A](constructA: (Int) ⇒ A)(f: (T, A) ⇒ Unit)(implicit arg0: ClassManifest[A]): Unit

    Applies f to each element of this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

  38. def getCheckpointFile: Option[String]

    Gets the name of the file to which this RDD was checkpointed.

  39. final def getClass(): java.lang.Class[_]

    Definition Classes
    AnyRef → Any
  40. def getDependencies: Seq[spark.Dependency[_]]

    Implemented by subclasses to return how this RDD depends on parent RDDs. This method will only be called once, so it is safe to implement a time-consuming computation in it.

    Attributes
    protected
  41. def getPreferredLocations(split: Partition): Seq[String]

    Optionally overridden by subclasses to specify placement preferences.

    Attributes
    protected
  42. def getStorageLevel: StorageLevel

    Get the RDD's current storage level, or StorageLevel.NONE if none is set.

  43. def glom(): RDD[Array[T]]

    Return an RDD created by coalescing all elements within each partition into an array.
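
    For example, a per-partition maximum followed by a global maximum (a sketch assuming a SparkContext named sc):

      val nums  = sc.parallelize(1 to 100, 4)
      val maxes = nums.glom().map(_.max)            // one Array[Int] per partition
      val max   = maxes.reduce((a, b) => math.max(a, b))  // 100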

  44. def groupBy[K](f: (T) ⇒ K, p: Partitioner)(implicit arg0: ClassManifest[K]): RDD[(K, Seq[T])]

    Return an RDD of grouped items.

  45. def groupBy[K](f: (T) ⇒ K, numPartitions: Int)(implicit arg0: ClassManifest[K]): RDD[(K, Seq[T])]

    Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key.

  46. def groupBy[K](f: (T) ⇒ K)(implicit arg0: ClassManifest[K]): RDD[(K, Seq[T])]

    Return an RDD of grouped items.
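
    For example, grouping integers by parity (a sketch assuming a SparkContext named sc):

      val nums     = sc.parallelize(1 to 10)
      val byParity = nums.groupBy(n => if (n % 2 == 0) "even" else "odd")
      // RDD[(String, Seq[Int])] with keys "even" and "odd"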

  47. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  48. val id: Int

    A unique ID for this RDD (within its SparkContext).

  49. def initLogging(): Unit

    Attributes
    protected
    Definition Classes
    Logging
  50. def isCheckpointed: Boolean

    Return whether this RDD has been checkpointed or not.

  51. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  52. final def iterator(split: Partition, context: TaskContext): Iterator[T]

    Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should not be called by users directly, but is available for implementors of custom subclasses of RDD.

  53. def keyBy[K](f: (T) ⇒ K): RDD[(K, T)]

    Creates tuples of the elements in this RDD by applying f.

  54. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  55. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  56. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  57. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  58. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  59. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  60. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  61. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  62. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  63. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  64. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  65. def map[U](f: (T) ⇒ U)(implicit arg0: ClassManifest[U]): RDD[U]

    Return a new RDD by applying a function to all elements of this RDD.

  66. def mapPartitions[U](f: (Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassManifest[U]): RDD[U]

    Return a new RDD by applying a function to each partition of this RDD.

  67. def mapPartitionsWithIndex[U](f: (Int, Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassManifest[U]): RDD[U]

    Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
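
    For example, tagging each element with its partition index to inspect skew (a sketch assuming a SparkContext named sc):

      val tagged = sc.parallelize(1 to 8, 4).mapPartitionsWithIndex(
        (idx, iter) => iter.map(n => (idx, n)),
        false)  // keys change, so partitioning is not preserved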

  68. def mapWith[A, U](constructA: (Int) ⇒ A, preservesPartitioning: Boolean)(f: (T, A) ⇒ U)(implicit arg0: ClassManifest[A], arg1: ClassManifest[U]): RDD[U]

    Maps f over this RDD, where f takes an additional parameter of type A. This additional parameter is produced by constructA, which is called in each partition with the index of that partition.

  69. var name: String

    A friendly name for this RDD.

  70. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  71. final def notify(): Unit

    Definition Classes
    AnyRef
  72. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  73. val partitioner: Option[Partitioner]

    Optionally overridden by subclasses to specify how they are partitioned.

  74. final def partitions: Array[Partition]

    Get the array of partitions of this RDD, taking into account whether the RDD is checkpointed or not.

  75. def persist(): RDD[T]

    Persist this RDD with the default storage level (MEMORY_ONLY).

  76. def persist(newLevel: StorageLevel): RDD[T]

    Set this RDD's storage level to persist its values across operations after the first time it is computed. Can only be called once on each RDD.
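
    For example (a sketch; the input path is hypothetical and the constants are assumed to come from spark.storage.StorageLevel):

      import spark.storage.StorageLevel

      val parsed = sc.textFile("hdfs://...").map(_.split("\t"))
      parsed.persist(StorageLevel.MEMORY_AND_DISK)  // spill to disk if needed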

  77. def pipe(command: Seq[String], env: Map[String, String]): RDD[String]

    Return an RDD created by piping elements to a forked external process.

  78. def pipe(command: Seq[String]): RDD[String]

    Return an RDD created by piping elements to a forked external process.

  79. def pipe(command: String): RDD[String]

    Return an RDD created by piping elements to a forked external process.
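
    For example, upper-casing elements through a shell command (a sketch; assumes tr is available on each worker):

      // Each element is written to the process's stdin as one line, and each
      // line of its stdout becomes one element of the result.
      val upper = sc.parallelize(Seq("a", "b", "c")).pipe("tr a-z A-Z")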

  80. final def preferredLocations(split: Partition): Seq[String]

    Get the preferred location of a split, taking into account whether the RDD is checkpointed or not.

  81. def reduce(f: (T, T) ⇒ T): T

    Reduces the elements of this RDD using the specified commutative and associative binary operator.
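
    For example (assuming a SparkContext named sc):

      val total = sc.parallelize(1 to 100).reduce(_ + _)  // 5050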

  82. def sample(withReplacement: Boolean, fraction: Double, seed: Int): RDD[T]

    Return a sampled subset of this RDD.

  83. def saveAsObjectFile(path: String): Unit

    Save this RDD as a SequenceFile of serialized objects.

  84. def saveAsTextFile(path: String): Unit

    Save this RDD as a text file, using string representations of elements.

  85. def setName(_name: String): RDD[T]

    Assign a name to this RDD.

  86. def subtract(other: RDD[T], p: Partitioner): RDD[T]

    Return an RDD with the elements from this that are not in other.

  87. def subtract(other: RDD[T], numPartitions: Int): RDD[T]

    Return an RDD with the elements from this that are not in other.

  88. def subtract(other: RDD[T]): RDD[T]

    Return an RDD with the elements from this that are not in other.

    Uses this RDD's partitioner and partition count, because even if other is huge, the resulting RDD will be no larger than this one.

  89. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  90. def take(num: Int): Array[T]

    Take the first num elements of the RDD. This currently scans the partitions *one by one*, so it will be slow if a lot of partitions are required. In that case, use collect() to get the whole RDD instead.

  91. def takeSample(withReplacement: Boolean, num: Int, seed: Int): Array[T]

  92. def toArray(): Array[T]

    Return an array that contains all of the elements in this RDD.

  93. def toDebugString: String

    A description of this RDD and its recursive dependencies for debugging.

  94. def toString(): String

    Definition Classes
    RDD → AnyRef → Any
  95. def union(other: RDD[T]): RDD[T]

    Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
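
    For example (a sketch assuming a SparkContext named sc):

      val a = sc.parallelize(Seq(1, 2, 3))
      val b = sc.parallelize(Seq(3, 4))
      a.union(b).count()             // 5 -- the duplicate 3 is kept
      a.union(b).distinct().count()  // 4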

  96. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  97. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  98. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  99. def zip[U](other: RDD[U])(implicit arg0: ClassManifest[U]): RDD[(T, U)]

    Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the *same number of partitions* and the *same number of elements in each partition* (e.g. one was made through a map on the other).
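
    For example, zipping an RDD with a mapped copy of itself, which guarantees matching partitioning (a sketch assuming a SparkContext named sc):

      val xs = sc.parallelize(1 to 3, 2)
      val ys = xs.map(_.toString)  // same partitions, same counts per partition
      xs.zip(ys).collect()         // Array((1,"1"), (2,"2"), (3,"3"))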

Deprecated Value Members

  1. def mapPartitionsWithSplit[U](f: (Int, Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassManifest[U]): RDD[U]

    Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.

    Annotations
    @deprecated
    Deprecated

    (Since version 0.7.0) use mapPartitionsWithIndex

Inherited from Logging

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any