pyspark.sql.SparkSession.range

SparkSession.range(start: int, end: Optional[int] = None, step: int = 1, numPartitions: Optional[int] = None) → pyspark.sql.dataframe.DataFrame

Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.

New in version 2.0.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
start : int

the start value

end : int, optional

the end value (exclusive)

step : int, optional

the incremental step (default: 1)

numPartitions : int, optional

the number of partitions of the DataFrame

Returns
DataFrame

Examples

>>> spark.range(1, 7, 2).show()
+---+
| id|
+---+
|  1|
|  3|
|  5|
+---+

If only one argument is specified, it will be used as the end value.

>>> spark.range(3).show()
+---+
| id|
+---+
|  0|
|  1|
|  2|
+---+
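
The numPartitions argument controls how many partitions the resulting DataFrame is split into. A minimal sketch, assuming an active SparkSession bound to the name spark as in the examples above:

>>> df = spark.range(0, 10, numPartitions=2)
>>> df.rdd.getNumPartitions()
2

When numPartitions is omitted, the session's default parallelism determines the number of partitions.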