DataStreamReader.orc
Loads an ORC file stream, returning the result as a DataFrame.
Note: Evolving.
mergeSchema – sets whether to merge schemas collected from all ORC part-files. If set, this overrides spark.sql.orc.mergeSchema; otherwise the default value is taken from spark.sql.orc.mergeSchema.
pathGlobFilter – an optional glob pattern to only include files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of `partition discovery`_.
recursiveFileLookup – recursively scan a directory for files. Using this option disables `partition discovery`_.
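The pathGlobFilter option above matches file paths against a shell-style glob. A minimal illustration in plain Python, assuming fnmatch's wildcard rules (* and ?) approximate org.apache.hadoop.fs.GlobFilter for simple patterns; the file names below are made up:

```python
from fnmatch import fnmatch

# Hypothetical directory listing for an ORC sink.
files = ["part-0000.orc", "part-0001.orc", "_SUCCESS", "part-0002.json"]

# Keep only files whose names match the glob, roughly what
# pathGlobFilter="*.orc" would select for reading.
matched = [f for f in files if fnmatch(f, "*.orc")]
print(matched)  # ['part-0000.orc', 'part-0001.orc']
```

Note that the real GlobFilter is applied by Hadoop on full paths during file listing; this sketch only shows the pattern semantics.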
>>> orc_sdf = spark.readStream.schema(sdf_schema).orc(tempfile.mkdtemp())
>>> orc_sdf.isStreaming
True
>>> orc_sdf.schema == sdf_schema
True
New in version 2.3.