org.apache.spark.sql

GroupedDataset

class GroupedDataset[K, V] extends Serializable

:: Experimental :: A Dataset that has been logically grouped by a user-specified grouping key. Users should not construct a GroupedDataset directly; instead, call groupBy on an existing Dataset.

COMPATIBILITY NOTE: Long term we plan to make GroupedDataset extend GroupedData. However, making this change to the class hierarchy would break some function signatures. As such, this class should be considered a preview of the final API. Changes will be made to the interface after Spark 1.6.
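
For example (a minimal sketch; the SparkContext sc, the Event case class, and the sample data are illustrative, not part of this API):

  import org.apache.spark.sql.{Dataset, GroupedDataset, SQLContext}

  val sqlContext = new SQLContext(sc) // assumes an existing SparkContext named sc
  import sqlContext.implicits._

  case class Event(user: String, value: Long)

  val ds: Dataset[Event] =
    Seq(Event("a", 1L), Event("a", 2L), Event("b", 3L)).toDS()

  // groupBy on an existing Dataset is the supported way to obtain a GroupedDataset.
  val grouped: GroupedDataset[String, Event] = ds.groupBy(_.user)

Later examples on this page reuse the ds and grouped values from this sketch.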

Annotations
@Experimental()
Source
GroupedDataset.scala
Since

1.6.0

Linear Supertypes
Serializable, Serializable, AnyRef, Any

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def agg[U1, U2, U3, U4](col1: TypedColumn[V, U1], col2: TypedColumn[V, U2], col3: TypedColumn[V, U3], col4: TypedColumn[V, U4]): Dataset[(K, U1, U2, U3, U4)]

    Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.

    Since

    1.6.0

  7. def agg[U1, U2, U3](col1: TypedColumn[V, U1], col2: TypedColumn[V, U2], col3: TypedColumn[V, U3]): Dataset[(K, U1, U2, U3)]

    Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.

    Since

    1.6.0

  8. def agg[U1, U2](col1: TypedColumn[V, U1], col2: TypedColumn[V, U2]): Dataset[(K, U1, U2)]

    Computes the given aggregations, returning a Dataset of tuples for each unique key and the result of computing these aggregations over all elements in the group.

    Since

    1.6.0

  9. def agg[U1](col1: TypedColumn[V, U1]): Dataset[(K, U1)]

    Computes the given aggregation, returning a Dataset of tuples for each unique key and the result of computing this aggregation over all elements in the group.
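
    Example (a minimal sketch, reusing the grouped value from the class-level sketch; as[Long] converts the untyped aggregate column into the TypedColumn this method expects):

      import org.apache.spark.sql.functions.sum

      // One (key, aggregate) tuple per distinct user.
      val totals: Dataset[(String, Long)] = grouped.agg(sum("value").as[Long])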

    Since

    1.6.0

  10. def aggUntyped(columns: TypedColumn[_, _]*): Dataset[_]

    Internal helper function for building typed aggregations that return tuples. For simplicity and code reuse, we do this without the help of the type system and then use helper functions that cast appropriately for the user-facing interface. TODO: does not handle aggregations that return non-flat results.

    Attributes
    protected
  11. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  12. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  13. def cogroup[U, R](other: GroupedDataset[K, U], f: CoGroupFunction[K, V, U, R], encoder: Encoder[R]): Dataset[R]

    Applies the given function to each cogroup. For each unique key, the function will be passed the grouping key and two iterators containing all elements in the group from this Dataset and from other, respectively. The function can return an iterator containing elements of an arbitrary type, which will be returned as a new Dataset.

    Since

    1.6.0

  14. def cogroup[U, R](other: GroupedDataset[K, U])(f: (K, Iterator[V], Iterator[U]) ⇒ TraversableOnce[R])(implicit arg0: Encoder[R]): Dataset[R]

    Applies the given function to each cogroup. For each unique key, the function will be passed the grouping key and two iterators containing all elements in the group from this Dataset and from other, respectively. The function can return an iterator containing elements of an arbitrary type, which will be returned as a new Dataset.
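
    Example (a hedged sketch; the Click class and its data are illustrative, and grouped comes from the class-level sketch):

      case class Click(user: String, page: String)

      val clicks: GroupedDataset[String, Click] =
        Seq(Click("a", "home"), Click("b", "search")).toDS().groupBy(_.user)

      // For each user seen on either side, summarize both groups.
      val summary: Dataset[String] = grouped.cogroup(clicks) { (user, events, cs) =>
        Iterator(s"$user: ${events.size} events, ${cs.size} clicks")
      }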

    Since

    1.6.0

  15. def count(): Dataset[(K, Long)]

    Returns a Dataset that contains a tuple with each key and the number of items present for that key.
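
    Example (continuing the class-level sketch):

      // One (user, count) pair per distinct key, e.g. ("a", 2) and ("b", 1).
      val counts: Dataset[(String, Long)] = grouped.count()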

    Since

    1.6.0

  16. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  17. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  18. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  19. def flatMapGroups[U](f: FlatMapGroupsFunction[K, V, U], encoder: Encoder[U]): Dataset[U]

    Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an iterator containing elements of an arbitrary type which will be returned as a new Dataset.

    This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an Aggregator.

    Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.

    Since

    1.6.0

  20. def flatMapGroups[U](f: (K, Iterator[V]) ⇒ TraversableOnce[U])(implicit arg0: Encoder[U]): Dataset[U]

    Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an iterator containing elements of an arbitrary type which will be returned as a new Dataset.

    This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an Aggregator.

    Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.
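
    Example (a minimal sketch, reusing grouped from the class-level sketch; note that toSeq materializes the whole group, the pattern the caveat above warns about):

      // Emit every value strictly above its group's mean (illustrative logic).
      val aboveMean: Dataset[Long] = grouped.flatMapGroups { (user, events) =>
        val values = events.map(_.value).toSeq // materializes the group in memory
        val mean = values.sum.toDouble / values.size
        values.filter(_ > mean)
      }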

    Since

    1.6.0

  21. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  22. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  23. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  24. def keyAs[L](implicit arg0: Encoder[L]): GroupedDataset[L, V]

    Returns a new GroupedDataset where the type of the key has been mapped to the specified type. The mapping of key columns to the new type follows the same rules as the as method on Dataset.
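
    Example (a hedged sketch, reusing ds from the class-level sketch: grouping by a Column yields a Row key, which keyAs can re-view as a concrete type):

      import org.apache.spark.sql.Row

      val byColumn: GroupedDataset[Row, Event] = ds.groupBy($"user")
      val byName: GroupedDataset[String, Event] = byColumn.keyAs[String]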

    Since

    1.6.0

  25. def keys: Dataset[K]

    Returns a Dataset that contains each unique key.
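
    Example (continuing the class-level sketch):

      // The distinct grouping keys, here "a" and "b".
      val users: Dataset[String] = grouped.keys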

    Since

    1.6.0

  26. def mapGroups[U](f: MapGroupsFunction[K, V, U], encoder: Encoder[U]): Dataset[U]

    Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an element of arbitrary type which will be returned as a new Dataset.

    This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an Aggregator.

    Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.

    Since

    1.6.0

  27. def mapGroups[U](f: (K, Iterator[V]) ⇒ U)(implicit arg0: Encoder[U]): Dataset[U]

    Applies the given function to each group of data. For each unique group, the function will be passed the group key and an iterator that contains all of the elements in the group. The function can return an element of arbitrary type which will be returned as a new Dataset.

    This function does not support partial aggregation, and as a result requires shuffling all the data in the Dataset. If an application intends to perform an aggregation over each key, it is best to use the reduce function or an Aggregator.

    Internally, the implementation will spill to disk if any given group is too large to fit into memory. However, users must take care to avoid materializing the whole iterator for a group (for example, by calling toList) unless they are sure that this is possible given the memory constraints of their cluster.
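
    Example (a minimal sketch, reusing grouped from the class-level sketch; exactly one output element per group):

      // The largest value per user.
      val maxPerUser: Dataset[(String, Long)] = grouped.mapGroups { (user, events) =>
        (user, events.map(_.value).max)
      }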

    Since

    1.6.0

  28. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  29. final def notify(): Unit

    Definition Classes
    AnyRef
  30. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  31. val queryExecution: QueryExecution

  32. def reduce(f: ReduceFunction[V]): Dataset[(K, V)]

    Reduces the elements of each group of data using the specified binary function. The given function must be commutative and associative or the result may be non-deterministic.

    Since

    1.6.0

  33. def reduce(f: (V, V) ⇒ V): Dataset[(K, V)]

    Reduces the elements of each group of data using the specified binary function. The given function must be commutative and associative or the result may be non-deterministic.
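
    Example (a minimal sketch, reusing grouped from the class-level sketch; the function is commutative and associative, so it can be applied partially within each partition):

      // Keep the Event with the largest value for each user.
      val maxEvent: Dataset[(String, Event)] =
        grouped.reduce((a, b) => if (a.value >= b.value) a else b)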

    Since

    1.6.0

  34. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  35. def toString(): String

    Definition Classes
    AnyRef → Any
  36. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
