pyspark.sql.streaming.DataStreamReader.parquet

DataStreamReader.parquet(path, mergeSchema=None, pathGlobFilter=None, recursiveFileLookup=None)[source]

Loads a Parquet file stream, returning the result as a DataFrame.

Note

Evolving.

Parameters
  • path – string path, or directory of Parquet files, to stream from.

  • mergeSchema – sets whether we should merge schemas collected from all Parquet part-files. If set, this overrides the spark.sql.parquet.mergeSchema configuration; otherwise the value of spark.sql.parquet.mergeSchema is used as the default.

  • pathGlobFilter – an optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.

  • recursiveFileLookup – whether to recursively scan a directory for files. Using this option disables partition discovery.

>>> parquet_sdf = spark.readStream.schema(sdf_schema).parquet(tempfile.mkdtemp())
>>> parquet_sdf.isStreaming
True
>>> parquet_sdf.schema == sdf_schema
True

New in version 2.0.