class PairDStreamFunctions[K, V] extends Serializable
Extra functions available on DStream of (key, value) pairs through an implicit conversion.
Inheritance: PairDStreamFunctions → Serializable → AnyRef → Any
 
Value Members
- final def !=(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def ##(): Int
  Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  Definition Classes: Any
- def clone(): AnyRef
  Attributes: protected[lang]
  Definition Classes: AnyRef
  Annotations: @throws( ... ) @native() @IntrinsicCandidate()
- def cogroup[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]
  Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to partition the generated RDDs.
- def cogroup[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]
  Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def cogroup[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Iterable[V], Iterable[W]))]
  Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
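Each of these operations applies the corresponding RDD operation to every batch. The per-batch semantics of cogroup can be sketched with plain Scala collections (cogroupBatch is a hypothetical helper for illustration, not part of the Spark API):

```scala
// Hypothetical sketch of cogroup's per-batch semantics: for every key present
// on either side, collect all of that key's values from both sides.
def cogroupBatch[K, V, W](left: Seq[(K, V)], right: Seq[(K, W)]): Map[K, (Iterable[V], Iterable[W])] = {
  val l = left.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  val r = right.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  (l.keySet ++ r.keySet)
    .map(k => k -> ((l.getOrElse(k, Nil): Iterable[V]), (r.getOrElse(k, Nil): Iterable[W])))
    .toMap
}
```

Note that a key appearing on only one side still produces an output entry, with an empty collection for the absent side.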
- def combineByKey[C](createCombiner: (V) ⇒ C, mergeValue: (C, V) ⇒ C, mergeCombiner: (C, C) ⇒ C, partitioner: Partitioner, mapSideCombine: Boolean = true)(implicit arg0: ClassTag[C]): DStream[(K, C)]
  Combine elements of each key in the DStream's RDDs using custom functions. This is similar to combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
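As with the RDD version, createCombiner seeds a per-key accumulator, mergeValue folds values into it within a partition, and mergeCombiner merges accumulators across partitions. A plain-Scala sketch of those semantics (combineByKeyBatch and its partitioned input are illustrative, not Spark API):

```scala
// Hypothetical sketch: combine within each partition using createCombiner and
// mergeValue, then merge the per-partition results using mergeCombiner.
def combineByKeyBatch[K, V, C](partitions: Seq[Seq[(K, V)]],
    createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiner: (C, C) => C): Map[K, C] = {
  def onePartition(p: Seq[(K, V)]): Map[K, C] =
    p.foldLeft(Map.empty[K, C]) { case (acc, (k, v)) =>
      acc.updated(k, acc.get(k).map(mergeValue(_, v)).getOrElse(createCombiner(v)))
    }
  partitions.map(onePartition).foldLeft(Map.empty[K, C]) { (m1, m2) =>
    m2.foldLeft(m1) { case (acc, (k, c)) =>
      acc.updated(k, acc.get(k).map(mergeCombiner(_, c)).getOrElse(c))
    }
  }
}
```

For example, computing a per-key (sum, count) pair, the usual first step toward a per-key average, uses `v => (v, 1)` as the combiner seed.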
- final def eq(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  Definition Classes: AnyRef → Any
- def flatMapValues[U](flatMapValuesFunc: (V) ⇒ TraversableOnce[U])(implicit arg0: ClassTag[U]): DStream[(K, U)]
  Return a new DStream by applying a flatMap function to the value of each key-value pair in 'this' DStream without changing the key.
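The per-batch semantics can be sketched with plain collections (flatMapValuesBatch is a hypothetical helper; the real API accepts any TraversableOnce, simplified here to Seq):

```scala
// Hypothetical sketch of flatMapValues' per-batch semantics: expand each value
// into zero or more values, keeping the original key for each one.
def flatMapValuesBatch[K, V, U](kvs: Seq[(K, V)], f: V => Seq[U]): Seq[(K, U)] =
  kvs.flatMap { case (k, v) => f(v).map(k -> _) }
```

Because keys are untouched, any existing partitioning by key remains valid.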
- def fullOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]
  Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def fullOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]
  Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def fullOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], Option[W]))]
  Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
- final def getClass(): Class[_]
  Definition Classes: AnyRef → Any
  Annotations: @native() @IntrinsicCandidate()
- def groupByKey(partitioner: Partitioner): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey on each RDD. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def groupByKey(numPartitions: Int): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def groupByKey(): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
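The per-batch semantics of groupByKey can be sketched with plain collections (groupByKeyBatch is a hypothetical helper, not Spark API):

```scala
// Hypothetical sketch of groupByKey's per-batch semantics: gather all values
// observed for each key into a single collection.
def groupByKeyBatch[K, V](kvs: Seq[(K, V)]): Map[K, Iterable[V]] =
  kvs.groupBy(_._1).map { case (k, pairs) => k -> pairs.map(_._2) }
```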
- def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, Iterable[V])]
  Create a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - partitioner: partitioner for controlling the partitioning of each RDD in the new DStream
- def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - numPartitions: number of partitions of each RDD in the new DStream; if not specified then Spark's default number of partitions will be used
- def groupByKeyAndWindow(windowDuration: Duration, slideDuration: Duration): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
- def groupByKeyAndWindow(windowDuration: Duration): DStream[(K, Iterable[V])]
  Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
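Because both durations must be multiples of the batching interval, a window corresponds to a whole number of consecutive batches. A plain-Scala sketch of the windowed grouping (groupByKeyAndWindowBatches is illustrative, not Spark API; window and slide sizes are expressed in batch counts):

```scala
// Hypothetical sketch: each output is a groupByKey over the union of
// windowSize consecutive batches, stepping forward slideSize batches at a time.
def groupByKeyAndWindowBatches[K, V](batches: Seq[Seq[(K, V)]],
    windowSize: Int, slideSize: Int): Seq[Map[K, Iterable[V]]] =
  batches.sliding(windowSize, slideSize).map { window =>
    window.flatten.groupBy(_._1).map { case (k, pairs) => k -> (pairs.map(_._2): Iterable[V]) }
  }.toSeq
```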
- def hashCode(): Int
  Definition Classes: AnyRef → Any
  Annotations: @native() @IntrinsicCandidate()
- final def isInstanceOf[T0]: Boolean
  Definition Classes: Any
- def join[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]
  Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def join[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]
  Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def join[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, W))]
  Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
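The per-batch semantics of the inner join can be sketched with plain collections (joinBatch is a hypothetical helper, not Spark API):

```scala
// Hypothetical sketch of join's per-batch semantics: emit one output pair for
// every matching (left value, right value) combination under the same key.
def joinBatch[K, V, W](left: Seq[(K, V)], right: Seq[(K, W)]): Seq[(K, (V, W))] =
  for {
    (k, v)  <- left
    (k2, w) <- right
    if k == k2
  } yield (k, (v, w))
```

Keys present on only one side produce no output; the outer-join variants below differ precisely in how they handle such keys.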
- def leftOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]
  Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def leftOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]
  Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def leftOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (V, Option[W]))]
  Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
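The left outer join's per-batch semantics can be sketched the same way (leftOuterJoinBatch is a hypothetical helper, not Spark API; rightOuterJoin and fullOuterJoin are the symmetric and two-sided analogues):

```scala
// Hypothetical sketch of leftOuterJoin's per-batch semantics: every left pair
// is kept; the right side is Some(w) per match, or None when the key is absent.
def leftOuterJoinBatch[K, V, W](left: Seq[(K, V)], right: Seq[(K, W)]): Seq[(K, (V, Option[W]))] = {
  val byKey = right.groupBy(_._1).map { case (k, kvs) => k -> kvs.map(_._2) }
  left.flatMap { case (k, v) =>
    byKey.get(k) match {
      case Some(ws) => ws.map(w => k -> (v, Option(w)))
      case None     => Seq(k -> (v, Option.empty[W]))
    }
  }
}
```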
- def mapValues[U](mapValuesFunc: (V) ⇒ U)(implicit arg0: ClassTag[U]): DStream[(K, U)]
  Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
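A minimal plain-Scala sketch of the per-batch semantics (mapValuesBatch is a hypothetical helper, not Spark API):

```scala
// Hypothetical sketch of mapValues' per-batch semantics: transform each value,
// leaving each key untouched (so partitioning by key is preserved).
def mapValuesBatch[K, V, U](kvs: Seq[(K, V)], f: V => U): Seq[(K, U)] =
  kvs.map { case (k, v) => k -> f(v) }
```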
- def mapWithState[StateType, MappedType](spec: StateSpec[K, V, StateType, MappedType])(implicit arg0: ClassTag[StateType], arg1: ClassTag[MappedType]): MapWithStateDStream[K, V, StateType, MappedType]
  Return a MapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key. The mapping function and other specification (e.g. partitioners, timeouts, initial state data) of this transformation can be specified using the StateSpec class. The state data is accessible as a parameter of type State in the mapping function.
  Example of using mapWithState:

    // A mapping function that maintains an integer state and returns a String
    def mappingFunction(key: String, value: Option[Int], state: State[Int]): Option[String] = {
      // Use state.exists(), state.get(), state.update() and state.remove()
      // to manage state, and return the necessary string
    }

    val spec = StateSpec.function(mappingFunction).numPartitions(10)

    val mapWithStateDStream = keyValueDStream.mapWithState[StateType, MappedType](spec)

  - StateType: class type of the state data
  - MappedType: class type of the mapped data
  - spec: specification of this transformation
- final def ne(arg0: AnyRef): Boolean
  Definition Classes: AnyRef
- final def notify(): Unit
  Definition Classes: AnyRef
  Annotations: @native() @IntrinsicCandidate()
- final def notifyAll(): Unit
  Definition Classes: AnyRef
  Annotations: @native() @IntrinsicCandidate()
- def reduceByKey(reduceFunc: (V, V) ⇒ V, partitioner: Partitioner): DStream[(K, V)]
  Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def reduceByKey(reduceFunc: (V, V) ⇒ V, numPartitions: Int): DStream[(K, V)]
  Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def reduceByKey(reduceFunc: (V, V) ⇒ V): DStream[(K, V)]
  Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative and commutative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
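The per-batch semantics can be sketched with plain collections (reduceByKeyBatch is a hypothetical helper, not Spark API); per-key word counting is the classic use:

```scala
// Hypothetical sketch of reduceByKey's per-batch semantics: merge all values
// of a key with the (associative, commutative) reduce function.
def reduceByKeyBatch[K, V](kvs: Seq[(K, V)], reduceFunc: (V, V) => V): Map[K, V] =
  kvs.groupBy(_._1).map { case (k, pairs) => k -> pairs.map(_._2).reduce(reduceFunc) }
```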
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner, filterFunc: ((K, V)) ⇒ Boolean): DStream[(K, V)]
  Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated from the old window's reduced value by:
  1. reducing the new values that entered the window (e.g., adding new counts)
  2. "inverse reducing" the old values that left the window (e.g., subtracting old counts)
  This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".
  - reduceFunc: associative and commutative reduce function
  - invReduceFunc: inverse reduce function
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - partitioner: partitioner for controlling the partitioning of each RDD in the new DStream
  - filterFunc: optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, invReduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration = self.slideDuration, numPartitions: Int = ssc.sc.defaultParallelism, filterFunc: ((K, V)) ⇒ Boolean = null): DStream[(K, V)]
  Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value for a new window is calculated from the old window's reduced value by:
  1. reducing the new values that entered the window (e.g., adding new counts)
  2. "inverse reducing" the old values that left the window (e.g., subtracting old counts)
  This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
  - reduceFunc: associative and commutative reduce function
  - invReduceFunc: inverse reduce function; such that for all y, invertible x: invReduceFunc(reduceFunc(x, y), x) = y
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - filterFunc: optional function to filter expired key-value pairs; only pairs that satisfy the function are retained
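The incremental update can be sketched with plain Scala (slideWindow, integer counts, and the zero-dropping filter are illustrative assumptions, not Spark API):

```scala
// Hypothetical sketch: newWindow = oldWindow (+ values entering) (- values leaving).
// Subtraction plays the role of invReduceFunc here, which is why the reduce
// function must be invertible for this optimization to apply.
def slideWindow[K](oldWindow: Map[K, Int],
    entering: Seq[(K, Int)], leaving: Seq[(K, Int)]): Map[K, Int] = {
  val added = entering.foldLeft(oldWindow) { case (acc, (k, v)) =>
    acc.updated(k, acc.getOrElse(k, 0) + v)          // reduce new values in
  }
  leaving.foldLeft(added) { case (acc, (k, v)) =>
    val remaining = acc(k) - v                       // inverse-reduce old values out
    if (remaining == 0) acc - k                      // the kind of pair filterFunc can expire
    else acc.updated(k, remaining)
  }
}
```

The work per slide is proportional to the data entering and leaving the window, not to the whole window's contents.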
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, partitioner: Partitioner): DStream[(K, V)]
  Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.
  - reduceFunc: associative and commutative reduce function
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - partitioner: partitioner for controlling the partitioning of each RDD in the new DStream
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration, numPartitions: Int): DStream[(K, V)]
  Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
  - reduceFunc: associative and commutative reduce function
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
  - numPartitions: number of partitions of each RDD in the new DStream
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration, slideDuration: Duration): DStream[(K, V)]
  Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
  - reduceFunc: associative and commutative reduce function
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
  - slideDuration: sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
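Unlike the incremental variants above, these overloads recompute each window from scratch. A plain-Scala sketch (reduceByKeyAndWindowBatches is illustrative, not Spark API; window and slide sizes are in batch counts):

```scala
// Hypothetical sketch of the non-incremental windowed reduce: union the
// window's batches, then reduce per key.
def reduceByKeyAndWindowBatches[K, V](batches: Seq[Seq[(K, V)]],
    windowSize: Int, slideSize: Int, reduceFunc: (V, V) => V): Seq[Map[K, V]] =
  batches.sliding(windowSize, slideSize).map { window =>
    window.flatten.groupBy(_._1).map { case (k, pairs) => k -> pairs.map(_._2).reduce(reduceFunc) }
  }.toSeq
```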
- def reduceByKeyAndWindow(reduceFunc: (V, V) ⇒ V, windowDuration: Duration): DStream[(K, V)]
  Return a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
  - reduceFunc: associative and commutative reduce function
  - windowDuration: width of the window; must be a multiple of this DStream's batching interval
- def rightOuterJoin[W](other: DStream[(K, W)], partitioner: Partitioner)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]
  Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- def rightOuterJoin[W](other: DStream[(K, W)], numPartitions: Int)(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]
  Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- def rightOuterJoin[W](other: DStream[(K, W)])(implicit arg0: ClassTag[W]): DStream[(K, (Option[V], W))]
  Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
- def saveAsHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: JobConf = ...): Unit
  Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- def saveAsHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit
  Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- def saveAsNewAPIHadoopFiles(prefix: String, suffix: String, keyClass: Class[_], valueClass: Class[_], outputFormatClass: Class[_ <: OutputFormat[_, _]], conf: Configuration = ...): Unit
  Save each RDD in this DStream as a Hadoop file, using the new Hadoop API. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- def saveAsNewAPIHadoopFiles[F <: OutputFormat[K, V]](prefix: String, suffix: String)(implicit fm: ClassTag[F]): Unit
  Save each RDD in this DStream as a Hadoop file, using the new Hadoop API. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
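The per-batch file naming can be sketched as a pure function (hadoopFileName is illustrative; Spark builds the name internally from the batch time, and the handling of an empty suffix shown here is an assumption of this sketch):

```scala
// Hypothetical sketch of the per-batch file name: "prefix-TIME_IN_MS.suffix",
// where TIME_IN_MS is the batch time in milliseconds.
def hadoopFileName(prefix: String, timeInMs: Long, suffix: String): String =
  if (suffix.isEmpty) s"$prefix-$timeInMs" else s"$prefix-$timeInMs.$suffix"
```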
- final def synchronized[T0](arg0: ⇒ T0): T0
  Definition Classes: AnyRef
- def toString(): String
  Definition Classes: AnyRef → Any
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Time, K, Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner, rememberPartitioner: Boolean, initialRDD: Option[RDD[(K, S)]] = None)(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- S
 State type
- updateFunc
 State update function. If
thisfunction returns None, then corresponding state key-value pair will be eliminated.- partitioner
 Partitioner for controlling the partitioning of each RDD in the new DStream.
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean, initialRDD: RDD[(K, S)])(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- S
 State type
- updateFunc
 State update function. Note, that this function may generate a different tuple with a different key than the input key. Therefore keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.
- partitioner
 Partitioner for controlling the partitioning of each RDD in the new DStream
- rememberPartitioner
 Whether to remember the partitioner object in the generated RDDs.
- initialRDD
 initial state value of each key.
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner, initialRDD: RDD[(K, S)])(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- S
 State type
- updateFunc
 State update function. If
thisfunction returns None, then corresponding state key-value pair will be eliminated.- partitioner
 Partitioner for controlling the partitioning of each RDD in the new DStream.
- initialRDD
 initial state value of each key.
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Iterator[(K, Seq[V], Option[S])]) ⇒ Iterator[(K, S)], partitioner: Partitioner, rememberPartitioner: Boolean)(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- S
 State type
- updateFunc
 State update function. Note, that this function may generate a different tuple with a different key than the input key. Therefore keys may be removed or added in this way. It is up to the developer to decide whether to remember the partitioner despite the key being changed.
- partitioner
 Partitioner for controlling the partitioning of each RDD in the new DStream.
- rememberPartitioner
 Whether to remember the partitioner object in the generated RDDs.
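A hedged sketch of the iterator-based overload, again assuming a hypothetical DStream[(String, Int)] named `pairs`. Because the update function here may emit a tuple under a different key (the keys are lowercased for illustration), the partitioner is not remembered:

```scala
import org.apache.spark.HashPartitioner

// Sums values per key; the emitted key may differ from the input key.
val iterFunc = (it: Iterator[(String, Seq[Int], Option[Int])]) =>
  it.map { case (key, newValues, state) =>
    (key.toLowerCase, newValues.sum + state.getOrElse(0))
  }

val counts = pairs.updateStateByKey[Int](
  iterFunc,
  new HashPartitioner(4),
  rememberPartitioner = false)
```

Unlike the (Seq[V], Option[S]) ⇒ Option[S] overloads, this form drops a key's state by simply not emitting a tuple for it, rather than by returning None.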
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], partitioner: Partitioner)(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. In every batch the updateFunc will be called for each state even if there are no new values. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- S
 State type
- updateFunc
 State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- partitioner
 Partitioner for controlling the partitioning of each RDD in the new DStream.
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S], numPartitions: Int)(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- S
 State type
- updateFunc
 State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
- numPartitions
 Number of partitions of each RDD in the new DStream.
 - 
      
      
      
        
      
    
      
        
        def
      
      
        updateStateByKey[S](updateFunc: (Seq[V], Option[S]) ⇒ Option[S])(implicit arg0: ClassTag[S]): DStream[(K, S)]
      
      
      
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. In every batch the updateFunc will be called for each state even if there are no new values. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
- S
 State type
- updateFunc
 State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
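A minimal sketch of the simplest overload, assuming a hypothetical DStream[(String, Int)] named `pairs` of (word, 1) records, maintaining a running count per word with Spark's default partitioning:

```scala
val runningCounts = pairs.updateStateByKey[Int] {
  (newValues: Seq[Int], state: Option[Int]) =>
    val sum = newValues.sum + state.getOrElse(0)
    if (sum > 0) Some(sum) else None  // returning None eliminates the key's state
}
```

The update function is called once per key per batch, including for keys with no new values (newValues is then empty), which is what allows state to be expired by returning None.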
 - 
      
      
      
        
      
    
      
        final 
        def
      
      
        wait(arg0: Long, arg1: Int): Unit
      
      
      
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... )
 
 - 
      
      
      
        
      
    
      
        final 
        def
      
      
        wait(arg0: Long): Unit
      
      
      
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... ) @native()
 
 - 
      
      
      
        
      
    
      
        final 
        def
      
      
        wait(): Unit
      
      
      
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... )
 
 
Deprecated Value Members
- 
      
      
      
        
      
    
      
        
        def
      
      
        finalize(): Unit
      
      
      
- Attributes
 - protected[lang]
 - Definition Classes
 - AnyRef
 - Annotations
 - @throws( classOf[java.lang.Throwable] ) @Deprecated
 - Deprecated