pyspark.pandas.groupby.GroupBy.std

GroupBy.std(ddof: int = 1) → FrameLike

Compute standard deviation of groups, excluding missing values.

New in version 3.3.0.

Parameters
ddof : int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

Changed in version 3.4.0: Supports arbitrary integers for ddof.
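
As a plain-Python sketch of the divisor (for illustration only, not part of the pyspark API), the sample standard deviation of two values uses N - ddof = 2 - 1 = 1, which matches the 0.707107 shown in the Examples below:

>>> import math
>>> values = [1.0, 0.0]                      # e.g. a boolean column cast to 0/1
>>> n, ddof = len(values), 1
>>> mean = sum(values) / n
>>> round(math.sqrt(sum((v - mean) ** 2 for v in values) / (n - ddof)), 6)
0.707107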

Examples

>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({"A": [1, 2, 1, 2], "B": [True, False, False, True],
...                    "C": [3, 4, 3, 4], "D": ["a", "b", "b", "a"]})
>>> df.groupby("A").std()
          B    C
A
1  0.707107  0.0
2  0.707107  0.0
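
Passing ddof=0 switches to the population divisor N. The continuation below is a sketch assuming the same df as above; the output is shown for illustration:

>>> df.groupby("A").std(ddof=0)
     B    C
A
1  0.5  0.0
2  0.5  0.0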