pyspark.pandas.DataFrame.iterrows

DataFrame.iterrows() → Iterator[Tuple[Union[Any, Tuple[Any, …]], pandas.core.series.Series]]

Iterate over DataFrame rows as (index, Series) pairs.

Yields
index : label or tuple of label

The index of the row. A tuple for a MultiIndex.

data : pandas.Series

The data of the row as a Series.

it : generator

A generator that iterates over the rows of the frame.
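
For example, the pairs can be consumed with a plain for loop (a minimal sketch; the small DataFrame below is illustrative and ps is assumed to be pyspark.pandas):

    >>> df = ps.DataFrame([[1, 1.5], [2, 2.5]], columns=['int', 'float'])
    >>> for index, row in df.iterrows():
    ...     print(index, row['float'])
    0 1.5
    1 2.5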

Notes

  1. Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). For example,

    >>> df = ps.DataFrame([[1, 1.5]], columns=['int', 'float'])
    >>> row = next(df.iterrows())[1]
    >>> row
    int      1.0
    float    1.5
    Name: 0, dtype: float64
    >>> print(row['int'].dtype)
    float64
    >>> print(df['int'].dtype)
    int64
    

    To preserve dtypes while iterating over the rows, it is better to use itertuples(), which returns namedtuples of the values and is generally faster than iterrows (see the first sketch after this list).

  2. You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect (see the second sketch after this list).
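
As a sketch of the itertuples() alternative from note 1, the integer column is not upcast to float because each row comes back as a namedtuple rather than a Series (the name='Row' argument is passed here only to give the tuples a stable class name):

    >>> df = ps.DataFrame([[1, 1.5]], columns=['int', 'float'])
    >>> row = next(df.itertuples(index=False, name='Row'))
    >>> row
    Row(int=1, float=1.5)
    >>> isinstance(row.int, float)
    False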
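
A small illustration of note 2: writing to the yielded Series modifies only the copied row, so the original frame stays unchanged (a sketch; prefer column-level assignments on the DataFrame itself for updates):

    >>> df = ps.DataFrame([[1, 1.5]], columns=['int', 'float'])
    >>> for _, row in df.iterrows():
    ...     row['int'] = 99  # changes only the local pandas copy of the row
    >>> df
       int  float
    0    1    1.5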