pyspark.pandas.groupby.GroupBy.first

GroupBy.first(numeric_only: Optional[bool] = False, min_count: int = -1) → FrameLike

Compute first of group values.

New in version 3.3.0.

Parameters
numeric_only : bool, default False

Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data.

New in version 3.4.0.

min_count : int, default -1

The required number of valid values to perform the operation. If fewer than min_count non-NA values are present, the result will be NA.

New in version 3.4.0.

Examples

>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({"A": [1, 2, 1, 2], "B": [True, False, False, True],
...                    "C": [3, 3, 4, 4], "D": ["a", "b", "a", "a"]})
>>> df
   A      B  C  D
0  1   True  3  a
1  2  False  3  b
2  1  False  4  a
3  2   True  4  a
>>> df.groupby("A").first().sort_index()
       B  C  D
A
1   True  3  a
2  False  3  b

Include only float, int, and boolean columns when numeric_only is set to True.

>>> df.groupby("A").first(numeric_only=True).sort_index()
       B  C
A
1   True  3
2  False  3
>>> df.groupby("D").first().sort_index()
   A      B  C
D
a  1   True  3
b  2  False  3
>>> df.groupby("D").first(min_count=3).sort_index()
     A     B    C
D
a  1.0  True  3.0
b  NaN  None  NaN
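
As a quick sketch, first can also be applied to a single selected column (a SeriesGroupBy); the output below assumes the same df as above and mirrors pandas semantics.

>>> df.groupby("A")["C"].first().sort_index()
A
1    3
2    3
Name: C, dtype: int64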