pyspark.sql.SparkSession.range

SparkSession.range(start, end=None, step=1, numPartitions=None)[source]

Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range from start to end (exclusive) with step value step.

Parameters
  • start – the start value

  • end – the end value (exclusive)

  • step – the incremental step (default: 1)

  • numPartitions – the number of partitions of the DataFrame (if not set, the default parallelism of the SparkContext is used)

Returns

DataFrame

>>> spark.range(1, 7, 2).collect()
[Row(id=1), Row(id=3), Row(id=5)]

If only one argument is specified, it is used as the end value, and start defaults to 0.

>>> spark.range(3).collect()
[Row(id=0), Row(id=1), Row(id=2)]

New in version 2.0.