spark.streaming

PairDStreamFunctions

class PairDStreamFunctions[K, V] extends Serializable

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new PairDStreamFunctions(self: DStream[(K, V)])(implicit arg0: ClassManifest[K], arg1: ClassManifest[V])

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  8. def cogroup[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassManifest[W]): DStream[(K, (Seq[V], Seq[W]))]

    Cogroup this DStream with other DStream using a partitioner.

    Cogroup this DStream with other DStream using a partitioner. For each key k in corresponding RDDs of this or other DStreams, the generated RDD will contain a tuple with the list of values for that key in both RDDs. The supplied partitioner is used to partition each generated RDD.

  9. def cogroup[W](other: DStream[(K, W)])(implicit arg0: ClassManifest[W]): DStream[(K, (Seq[V], Seq[W]))]

    Cogroup this DStream with other DStream.

    Cogroup this DStream with other DStream. For each key k in corresponding RDDs of this or other DStreams, the generated RDD will contain a tuple with the list of values for that key in both RDDs. HashPartitioner is used to partition each generated RDD into the default number of partitions.
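
    Example (a minimal, hypothetical sketch: the context setup, hosts, and ports are placeholders; the `clicks` and `views` streams defined here are reused by the examples further down this page):

      import spark.streaming.{DStream, Seconds, StreamingContext}
      import spark.streaming.StreamingContext._  // implicit conversion to PairDStreamFunctions

      // Two keyed streams over a 1-second batch interval (all endpoints hypothetical).
      val ssc = new StreamingContext("local[2]", "PairOpsExample", Seconds(1))
      val clicks: DStream[(String, Int)] =
        ssc.socketTextStream("localhost", 9999).map(url => (url, 1))
      val views: DStream[(String, Int)] =
        ssc.socketTextStream("localhost", 9998).map(url => (url, 1))

      // For each key, collect the values seen in each stream within a batch.
      val cogrouped: DStream[(String, (Seq[Int], Seq[Int]))] = clicks.cogroup(views)
      cogrouped.print()
      ssc.start()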

  10. def combineByKey[C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiner: (C, C) ⇒ C, partitioner: Partitioner)(implicit arg0: ClassManifest[C]): DStream[(K, C)]

    Combine elements of each key in DStream's RDDs using custom functions.

    Combine elements of each key in DStream's RDDs using custom functions. This is similar to the combineByKey for RDDs. Please refer to combineByKey in spark.PairRDDFunctions for more information.
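
    Example (a minimal sketch, reusing the hypothetical `clicks` stream from the cogroup example above; it keeps a per-key (sum, count) pair per batch, from which a per-key mean can be derived):

      import spark.HashPartitioner

      val sumAndCount: DStream[(String, (Int, Int))] = clicks.combineByKey(
        (v: Int) => (v, 1),                                           // createCombiner: first value seen for a key
        (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue: fold one more value in
        (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2), // mergeCombiner: merge partial results
        new HashPartitioner(2))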

  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  13. def finalize(): Unit

    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws()
  14. def flatMapValues[U](flatMapValuesFunc: (V) ⇒ TraversableOnce[U])(implicit arg0: ClassManifest[U]): DStream[(K, U)]

    Return a new DStream by applying a flatmap function to the value of each key-value pair in this DStream without changing the key.

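    Example (a minimal sketch; `tags` is a hypothetical stream of comma-separated values built from the `clicks` stream above):

      val tags: DStream[(String, String)] = clicks.mapValues(_ => "news,sports")  // placeholder values
      // One output pair per comma-separated token, all sharing the original key.
      val expanded: DStream[(String, String)] = tags.flatMapValues(_.split(","))
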
  15. final def getClass(): java.lang.Class[_]

    Definition Classes
    AnyRef → Any
  16. def groupByKey(partitioner: Partitioner): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. The supplied spark.Partitioner is used to control the partitioning of each RDD.

  17. def groupByKey(numPartitions: Int): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  18. def groupByKey(): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
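
    Example (a minimal sketch, reusing the hypothetical `clicks` stream from the cogroup example above):

      // Collect all values per key within each batch interval.
      val groupedPerBatch: DStream[(String, Seq[Int])] = clicks.groupByKey()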

  19. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey over a sliding window on this DStream.

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  20. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey over a sliding window on this DStream.

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream; if not specified then Spark's default number of partitions will be used

  21. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  22. def groupByKeyAndWindow(windowDuration: Duration): DStream[(K, Seq[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
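
    Example (a minimal sketch, reusing the hypothetical `clicks` stream; both durations are multiples of the 1-second batch interval):

      // Values per key over the last 30 seconds, recomputed every 10 seconds.
      val windowedGroups: DStream[(String, Seq[Int])] =
        clicks.groupByKeyAndWindow(Seconds(30), Seconds(10))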

  23. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  24. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  25. def join[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassManifest[W]): DStream[(K, (V, W))]

    Join this DStream with other DStream, that is, each RDD of the new DStream will be generated by joining RDDs from this and other DStream.

    Join this DStream with other DStream, that is, each RDD of the new DStream will be generated by joining RDDs from this and other DStream. Uses the given Partitioner to partition each generated RDD.

  26. def join[W](other: DStream[(K, W)])(implicit arg0: ClassManifest[W]): DStream[(K, (V, W))]

    Join this DStream with other DStream.

    Join this DStream with other DStream. HashPartitioner is used to partition each generated RDD into the default number of partitions.
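
    Example (a minimal sketch, reusing the hypothetical `clicks` and `views` streams from the cogroup example above):

      // Each batch pairs the values that share a key in both streams.
      val joined: DStream[(String, (Int, Int))] = clicks.join(views)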

  27. def mapValues[U](mapValuesFunc: (V) ⇒ U)(implicit arg0: ClassManifest[U]): DStream[(K, U)]

    Return a new DStream by applying a map function to the value of each key-value pair in this DStream without changing the key.

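    Example (a minimal sketch, reusing the hypothetical `clicks` stream):

      // Transform each value; the key is left untouched.
      val doubled: DStream[(String, Int)] = clicks.mapValues(_ * 2)
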
  28. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  29. final def notify(): Unit

    Definition Classes
    AnyRef
  30. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  31. def reduceByKey(reduceFunc: (V, V) ⇒ V, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. spark.Partitioner is used to control the partitioning of each RDD.

  32. def reduceByKey(reduceFunc: (V, V) ⇒ V, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  33. def reduceByKey(reduceFunc: (V, V) ⇒ V): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
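
    Example (a minimal sketch, reusing the hypothetical `clicks` stream; the classic per-batch count):

      val counts: DStream[(String, Int)] = clicks.reduceByKey(_ + _)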

  34. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner, filterFunc: ((K, V)) ⇒ Boolean): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value of a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without the "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained

  35. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration = self.slideDuration, numPartitions: Int = ssc.sc.defaultParallelism, filterFunc: ((K, V)) ⇒ Boolean = null): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value of a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without the "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
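
    Example (a minimal sketch, reusing the hypothetical `clicks` stream and `ssc` context from the cogroup example; stateful window operations like this one generally require a checkpoint directory):

      ssc.checkpoint("/tmp/streaming-checkpoint")  // hypothetical directory
      val addFunc = (a: Int, b: Int) => a + b      // reduce values entering the window
      val subFunc = (a: Int, b: Int) => a - b      // "inverse reduce" values leaving it
      val windowedCounts: DStream[(String, Int)] =
        clicks.reduceByKeyAndWindow(addFunc, subFunc, Seconds(30), Seconds(10))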

  36. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  37. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream.

  38. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  39. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window on this DStream.

    Return a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
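
    Example (a minimal sketch, reusing the hypothetical `clicks` stream; unlike the incremental variant above, the reduction is recomputed over the whole window on every slide):

      val windowedCounts: DStream[(String, Int)] =
        clicks.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))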

  40. def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapred.OutputFormat[_, _]], conf: JobConf = new JobConf): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"

  41. def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassManifest[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"
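
    Example (a minimal sketch, reusing the `counts` stream from the reduceByKey example above; the output prefix is a placeholder):

      import org.apache.hadoop.mapred.TextOutputFormat

      // Each batch is written as "counts-TIME_IN_MS.txt" under the given prefix.
      counts.saveAsHadoopFiles[TextOutputFormat[String, Int]](
        "hdfs://namenode:8020/streaming/counts", "txt")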

  42. def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: org.apache.hadoop.mapreduce.OutputFormat[_, _]], conf: Configuration = new Configuration): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  43. def saveAsNewAPIHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassManifest[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  44. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  45. def toString(): String

    Definition Classes
    AnyRef → Any
  46. def updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean)(implicit arg0: ClassManifest[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. spark.Paxrtitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated. Note that this function may generate a tuple with a different key than the input key. It is up to the developer to decide whether to remember the partitioner despite the key being changed.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

    rememberPartitioner

    Whether to remember the partitioner object in the generated RDDs.

  47. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner)(implicit arg0: ClassManifest[S]): DStream[(K, S)]

    Create a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.

    Create a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  48. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], numPartitions: Int)(implicit arg0: ClassManifest[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.

    numPartitions

    Number of partitions of each RDD in the new DStream.

  49. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S])(implicit arg0: ClassManifest[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
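
    Example (a minimal sketch, reusing the hypothetical `clicks` stream and `ssc` context; stateful operations generally require a checkpoint directory):

      ssc.checkpoint("/tmp/streaming-checkpoint")  // hypothetical directory
      val updateTotal = (newValues: Seq[Int], state: Option[Int]) =>
        Some(newValues.sum + state.getOrElse(0))   // returning None would drop the key's state
      val runningCounts: DStream[(String, Int)] = clicks.updateStateByKey[Int](updateTotal)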

  50. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  51. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()
  52. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws()

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any