org.apache.spark.streaming.dstream

PairDStreamFunctions

class PairDStreamFunctions[K, V] extends Serializable

Extra functions available on DStream of (key, value) pairs through an implicit conversion.

Source
PairDStreamFunctions.scala
Linear Supertypes
Serializable, Serializable, AnyRef, Any
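
For orientation, a minimal, hedged sketch of how these functions become available: any DStream of two-element tuples picks them up through an implicit conversion, so the methods below can be called on it directly. Names such as ssc, lines and pairs are placeholders, not part of this API:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("PairOpsExample").setMaster("local[2]")
val ssc  = new StreamingContext(conf, Seconds(1))

val lines = ssc.socketTextStream("localhost", 9999)       // DStream[String]
val pairs = lines.flatMap(_.split(" ")).map(w => (w, 1))   // DStream[(String, Int)]

// Because pairs is a DStream of (key, value) tuples, the PairDStreamFunctions
// methods documented below (reduceByKey, join, updateStateByKey, ...) apply to it.
val counts = pairs.reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()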

Instance Constructors

  1. new PairDStreamFunctions(self: DStream[(K, V)])(implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @IntrinsicCandidate()
  6. def cogroup[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to partition the generated RDDs.

  7. def cogroup[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  8. def cogroup[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
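
    A brief, hedged sketch of cogroup; clicks and impressions are assumed DStream[(String, Int)] inputs, not part of this API:

    import org.apache.spark.streaming.dstream.DStream

    // Per batch, collect all click and impression values that share a key.
    // A key present in only one stream gets an empty Iterable on the other side.
    val grouped: DStream[(String, (Iterable[Int], Iterable[Int]))] =
      clicks.cogroup(impressions)

    grouped.mapValues { case (cs, is) => (cs.sum, is.sum) }.print()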

  9. def combineByKey[C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiner: (C, C) ⇒ C, partitioner: Partitioner, mapSideCombine: Boolean = true)(implicit arg0: ClassTag[C]): DStream[(K, C)]

    Combine elements of each key in DStream's RDDs using custom functions.

    Combine elements of each key in DStream's RDDs using custom functions. This is similar to the combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
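
    A hedged sketch of combineByKey computing a per-key mean; events is an assumed DStream[(String, Double)] and the HashPartitioner(4) choice is illustrative:

    import org.apache.spark.HashPartitioner

    // Build (sum, count) combiners per key in each RDD, then merge them.
    val sumAndCount = events.combineByKey[(Double, Long)](
      v => (v, 1L),                          // createCombiner
      (acc, v) => (acc._1 + v, acc._2 + 1),  // mergeValue
      (a, b) => (a._1 + b._1, a._2 + b._2),  // mergeCombiner
      new HashPartitioner(4)
    )

    val means = sumAndCount.mapValues { case (sum, n) => sum / n }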

  10. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  11. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  12. def flatMapValues[U](flatMapValuesFunc: (V) ⇒ TraversableOnce[U])(implicit arg0: ClassTag[U]): DStream[(K, U)]

    Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
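
    A hedged sketch of flatMapValues; userTags is an assumed DStream[(String, String)] whose values are comma-separated tags:

    // Expand each comma-separated value into several (key, tag) pairs,
    // keeping the original key.
    val tagged = userTags.flatMapValues(v => v.split(",").map(_.trim).filter(_.nonEmpty))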

  13. def fullOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  14. def fullOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  15. def fullOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
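
    A hedged sketch of fullOuterJoin; orders and payments are assumed DStream[(String, Double)] inputs keyed by the same IDs:

    // Keep every key seen in either stream during the batch; the missing side is None.
    val reconciled = orders.fullOuterJoin(payments)

    // For example, surface keys that appeared on only one side in this batch.
    reconciled.filter { case (_, (o, p)) => o.isEmpty || p.isEmpty }.print()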

  16. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  17. def groupByKey(partitioner: Partitioner): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey on each RDD.

    Return a new DStream by applying groupByKey on each RDD. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  18. def groupByKey(numPartitions: Int): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  19. def groupByKey(): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey to each RDD.

    Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
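
    A hedged sketch of groupByKey; requests is an assumed DStream[(String, Long)] of (endpoint, latencyMs) pairs:

    // All latencies observed for an endpoint in the current batch, grouped together.
    val byEndpoint = requests.groupByKey()

    byEndpoint.mapValues(latencies => latencies.max).print()   // worst latency per endpoint per batch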

  20. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, Iterable[V])]

    Create a new DStream by applying groupByKey over a sliding window on this DStream.

    Create a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  21. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window on this DStream.

    Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream; if not specified then Spark's default number of partitions will be used

  22. def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  23. def groupByKeyAndWindow(windowDuration: Duration): DStream[(K, Iterable[V])]

    Return a new DStream by applying groupByKey over a sliding window.

    Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
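
    A hedged sketch of groupByKeyAndWindow; requests is again an assumed DStream[(String, Long)], and the 30-second window with a 10-second slide is an illustrative choice of multiples of the batch interval:

    import org.apache.spark.streaming.Seconds

    // Every 10 seconds, group the values of the last 30 seconds per key.
    val windowed = requests.groupByKeyAndWindow(Seconds(30), Seconds(10))

    windowed.mapValues(_.size).print()   // events per key in the sliding window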

  24. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @IntrinsicCandidate()
  25. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  26. def join[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  27. def join[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  28. def join[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
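
    A hedged sketch of the inner join; views and clicks are assumed DStream[(String, Long)] inputs keyed by the same IDs:

    // Only keys present in both streams within the batch survive the join.
    val joined = views.join(clicks)   // DStream[(String, (Long, Long))]
    joined.print()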

  29. def leftOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  30. def leftOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  31. def leftOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
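
    A hedged sketch of leftOuterJoin used for enrichment; events and profileUpdates are assumed DStream[(String, String)] inputs:

    // Every event is kept; the right side is Some(...) only when the key
    // also appeared in profileUpdates during the same batch.
    val enriched = events.leftOuterJoin(profileUpdates)   // DStream[(String, (String, Option[String]))]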

  32. def mapValues[U](mapValuesFunc: (V) ⇒ U)(implicit arg0: ClassTag[U]): DStream[(K, U)]

    Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
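
    A hedged sketch of mapValues; temps is an assumed DStream[(String, Double)] of Celsius readings:

    // Transform only the value; the key (and any partitioning) is preserved.
    val fahrenheit = temps.mapValues(c => c * 9.0 / 5.0 + 32.0)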

  33. def mapWithState[StateType, MappedType](spec: StateSpec[K, V, StateType, MappedType])(implicit arg0: ClassTag[StateType], arg1: ClassTag[MappedType]): MapWithStateDStream[K, V, StateType, MappedType]

    Return a MapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.

    Return a MapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key. The mapping function and other specifications of this transformation (e.g. partitioners, timeouts, initial state data, etc.) can be specified using the StateSpec class. The state data is accessible as a parameter of type State in the mapping function.

    Example of using mapWithState:

    // A mapping function that maintains an integer state and returns a String
    def mappingFunction(key: String, value: Option[Int], state: State[Int]): Option[String] = {
      // Use state.exists(), state.get(), state.update() and state.remove()
      // to manage state, and return the necessary string.
      // (Illustrative body, assumed here: keep a running sum per key.)
      val sum = state.getOption.getOrElse(0) + value.getOrElse(0)
      state.update(sum)
      Some(s"$key: $sum")
    }
    
    val spec = StateSpec.function(mappingFunction).numPartitions(10)
    
    val mapWithStateDStream = keyValueDStream.mapWithState[StateType, MappedType](spec)

    StateType

    Class type of the state data

    MappedType

    Class type of the mapped data

    spec

    Specification of this transformation

  34. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  35. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  36. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @IntrinsicCandidate()
  37. def reduceByKey(reduceFunc: (V, V) ⇒ V, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  38. def reduceByKey(reduceFunc: (V, V) ⇒ V, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  39. def reduceByKey(reduceFunc: (V, V) ⇒ V): DStream[(K, V)]

    Return a new DStream by applying reduceByKey to each RDD.

    Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative and commutative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
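
    A minimal word-count sketch with reduceByKey; words is an assumed DStream[String]:

    // Sum the 1s per word within each batch using an associative, commutative function.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
    counts.print()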

  40. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner, filterFunc: ((K, V)) ⇒ Boolean): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".

    reduceFunc

    associative and commutative reduce function

    invReduceFunc

    inverse reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained

  41. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration = self.slideDuration, numPartitions: Int = ssc.sc.defaultParallelism, filterFunc: ((K, V)) ⇒ Boolean = null): DStream[(K, V)]

    Return a new DStream by applying incremental reduceByKey over a sliding window.

    Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:

    1. reduce the new values that entered the window (e.g., adding new counts)

    2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)

    This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative and commutative reduce function

    invReduceFunc

    inverse reduce function; such that for all y, invertible x: invReduceFunc(reduceFunc(x, y), x) = y

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    filterFunc

    Optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
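
    A hedged sketch of the incremental (invertible) form; pairs is an assumed DStream[(String, Int)], the durations are illustrative, and a checkpoint directory must be set for this variant:

    import org.apache.spark.streaming.Seconds

    ssc.checkpoint("/tmp/checkpoints")   // placeholder path; required for the invertible form

    // Add counts entering the window, subtract counts leaving it.
    val windowedCounts = pairs.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,   // reduceFunc
      (a: Int, b: Int) => a - b,   // invReduceFunc: invReduceFunc(reduceFunc(x, y), x) = y
      Seconds(60),                 // windowDuration
      Seconds(10)                  // slideDuration
    )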

  42. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.

    reduceFunc

    associative and commutative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    partitioner

    partitioner for controlling the partitioning of each RDD in the new DStream.

  43. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    reduceFunc

    associative and commutative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

    numPartitions

    number of partitions of each RDD in the new DStream.

  44. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window.

    Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative and commutative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval

    slideDuration

    sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval

  45. def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration): DStream[(K, V)]

    Return a new DStream by applying reduceByKey over a sliding window on this DStream.

    Return a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    reduceFunc

    associative and commutative reduce function

    windowDuration

    width of the window; must be a multiple of this DStream's batching interval
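
    A hedged sketch of the non-incremental form, which recomputes the whole window on each slide; pairs is an assumed DStream[(String, Int)]:

    import org.apache.spark.streaming.Seconds

    // Every 10 seconds, re-reduce all values from the last 60 seconds per key.
    val windowedCounts =
      pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(60), Seconds(10))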

  46. def rightOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.

  47. def rightOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.

  48. def rightOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.

    Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

  49. def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: JobConf = ...): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"

  50. def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix"

  51. def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: Configuration = ...): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".

  52. def saveAsNewAPIHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit

    Save each RDD in this DStream as a Hadoop file.

    Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
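
    A hedged sketch of saving each batch as text files through the new Hadoop API; counts is an assumed DStream[(String, Int)] and the prefix/suffix are placeholders:

    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

    // Each batch interval produces output named "<prefix>-<TIME_IN_MS>.<suffix>".
    counts.saveAsNewAPIHadoopFiles[TextOutputFormat[String, Int]](
      "hdfs:///out/wordcounts", "txt")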

  53. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  54. def toString(): String
    Definition Classes
    AnyRef → Any
  55. def updateStateByKey[S](updateFunc: (Time, K, Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner, rememberPartitioner: Boolean, initialRDD: Option[RDD[(K, S)]] = None)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  56. def updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean, initialRDD: RDD[(K, S)])(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. Note that this function may generate a different tuple with a different key than the input key. Therefore, keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream

    rememberPartitioner

    Whether to remember the partitioner object in the generated RDDs.

    initialRDD

    initial state value of each key.

  57. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner, initialRDD: RDD[(K, S)])(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

    initialRDD

    initial state value of each key.

  58. def updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. Note that this function may generate a different tuple with a different key than the input key. Therefore, keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream

    rememberPartitioner

    Whether to remember the partitioner object in the generated RDDs.

  59. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.

    S

    State type

    updateFunc

    State update function. If this function returns None, then corresponding state key-value pair will be eliminated.

    partitioner

    Partitioner for controlling the partitioning of each RDD in the new DStream.

  60. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], numPartitions: Int)(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with numPartitions partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then corresponding state key-value pair will be eliminated.

    numPartitions

    Number of partitions of each RDD in the new DStream.

  61. def updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S])(implicit arg0: ClassTag[S]): DStream[(K, S)]

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.

    Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.

    S

    State type

    updateFunc

    State update function. If this function returns None, then corresponding state key-value pair will be eliminated.
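
    A hedged sketch of running totals with the simplest updateStateByKey overload; pairs is an assumed DStream[(String, Int)], and a checkpoint directory must be set for stateful transformations:

    ssc.checkpoint("/tmp/checkpoints")   // placeholder path; state requires checkpointing

    // Add this batch's values to the previous running total; returning None would drop the key.
    def updateRunningTotal(newValues: Seq[Int], current: Option[Int]): Option[Int] =
      Some(current.getOrElse(0) + newValues.sum)

    val totals = pairs.updateStateByKey[Int](updateRunningTotal _)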

  62. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  63. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  64. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
