pyspark.pandas.Series.sem

Series.sem(axis: Union[int, str, None] = None, skipna: bool = True, ddof: int = 1, numeric_only: bool = None) → Union[int, float, bool, str, bytes, decimal.Decimal, datetime.date, datetime.datetime, None, Series]

Return unbiased standard error of the mean over requested axis.

New in version 3.3.0.

Parameters
axis: {index (0), columns (1)}

Axis for the function to be applied on.

skipna: bool, default True

Exclude NA/null values when computing the result.

Changed in version 3.4.0: Added support for including NA/null values.

ddof: int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.

Changed in version 3.4.0: Added support for arbitrary integers.

numeric_only: bool, default None

Include only float, int, boolean columns. False is not supported. This parameter is mainly for pandas compatibility.

Returns
scalar (for Series) or Series (for DataFrame)

Examples

>>> import pyspark.pandas as ps
>>> psdf = ps.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
>>> psdf
   a  b
0  1  4
1  2  5
2  3  6
>>> psdf.sem()
a    0.57735
b    0.57735
dtype: float64
>>> psdf.sem(ddof=0)
a    0.471405
b    0.471405
dtype: float64
>>> psdf.sem(ddof=2)
a    0.816497
b    0.816497
dtype: float64
>>> psdf.sem(axis=1)
0    1.5
1    1.5
2    1.5
dtype: float64
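
The axis=1 values are consistent with computing, per row, the standard deviation (ddof=1 by default) divided by the square root of the row length. As a rough cross-check for the first row [1, 4], using plain NumPy (NumPy is used here only for illustration and is not required by sem):

>>> import numpy as np
>>> row = np.array([1, 4])
>>> round(float(row.std(ddof=1) / np.sqrt(len(row))), 6)  # std(ddof=1) / sqrt(N)
1.5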

Support for Series

>>> psser = psdf.a
>>> psser
0    1
1    2
2    3
Name: a, dtype: int64
>>> psser.sem()
0.5773502691896258
>>> psser.sem(ddof=0)
0.47140452079103173
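
These scalars agree with the relationship suggested by the ddof parameter: the standard deviation computed with the given ddof, divided by the square root of the number of elements. A minimal NumPy cross-check of the ddof=1 and ddof=0 cases (again, NumPy is used only for illustration, not by sem itself):

>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> round(float(a.std(ddof=1) / np.sqrt(len(a))), 6)  # std(ddof=1) / sqrt(N)
0.57735
>>> round(float(a.std(ddof=0) / np.sqrt(len(a))), 6)  # std(ddof=0) / sqrt(N)
0.471405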