pyspark.sql.DataFrameReader.orc

DataFrameReader.orc(path, mergeSchema=None, pathGlobFilter=None, recursiveFileLookup=None)

Loads ORC files, returning the result as a DataFrame.

Parameters
  • path – the path(s) of the ORC file(s) to load, as a string or a list of strings.

  • mergeSchema – sets whether we should merge schemas collected from all ORC part-files. If set, this overrides the spark.sql.orc.mergeSchema configuration; otherwise the value of that configuration is used (false by default).

  • pathGlobFilter – an optional glob pattern used to include only files whose paths match the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.

  • recursiveFileLookup – recursively scan a directory for files. Using this option disables partition discovery.

>>> df = spark.read.orc('python/test_support/sql/orc_partitioned')
>>> df.dtypes
[('a', 'bigint'), ('b', 'int'), ('c', 'int')]

New in version 1.5.