pyspark.RDD.groupWith

RDD.groupWith(other: pyspark.rdd.RDD[Tuple[Any, Any]], *others: pyspark.rdd.RDD[Tuple[Any, Any]]) → pyspark.rdd.RDD[Tuple[Any, Tuple[pyspark.resultiterable.ResultIterable[Any], …]]]

Alias for cogroup but with support for multiple RDDs.

New in version 0.7.0.

Parameters
other : RDD

another RDD

others : RDD

other RDDs

Returns
RDD

an RDD containing the keys and cogrouped values

Examples

>>> rdd1 = sc.parallelize([("a", 5), ("b", 6)])
>>> rdd2 = sc.parallelize([("a", 1), ("b", 4)])
>>> rdd3 = sc.parallelize([("a", 2)])
>>> rdd4 = sc.parallelize([("b", 42)])
>>> [(x, tuple(map(list, y))) for x, y in
...     sorted(list(rdd1.groupWith(rdd2, rdd3, rdd4).collect()))]
[('a', ([5], [1], [2], [])), ('b', ([6], [4], [], [42]))]
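The grouping semantics above can be sketched in plain Python, without a SparkContext: for every key that appears in any input, collect one list of values per input, padding with an empty list where a key is absent. The `group_with` helper below is a hypothetical illustration of those semantics, not part of the PySpark API.

```python
def group_with(first, *others):
    """Sketch of groupWith/cogroup semantics over plain lists of
    (key, value) pairs: one tuple of value-lists per key, in input order."""
    inputs = (first,) + others
    # Every key seen in any input contributes one output row.
    keys = {k for pairs in inputs for k, _ in pairs}
    return [
        (key, tuple([v for k, v in pairs if k == key] for pairs in inputs))
        for key in sorted(keys)
    ]

result = group_with(
    [("a", 5), ("b", 6)],
    [("a", 1), ("b", 4)],
    [("a", 2)],
    [("b", 42)],
)
# result == [('a', ([5], [1], [2], [])), ('b', ([6], [4], [], [42]))]
```

This mirrors the RDD example above: key "a" is missing from the fourth input and key "b" from the third, so those positions hold empty lists.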