DataFrameReader.text(paths, wholetext=False, lineSep=None, pathGlobFilter=None, recursiveFileLookup=None)
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by the partitioned columns if there are any. The text files must be encoded as UTF-8.

By default, each line in the text file is a new row in the resulting DataFrame.

Returns: DataFrame
Parameters:

paths – string, or list of strings, for input path(s).

wholetext – if True, read each file from the input path(s) as a single row.

lineSep – defines the line separator that should be used for parsing. If None is set, it covers all of \r, \r\n and \n.

pathGlobFilter – an optional glob pattern to include only files with paths matching the pattern. The syntax follows org.apache.hadoop.fs.GlobFilter. It does not change the behavior of partition discovery.

recursiveFileLookup – recursively scan a directory for files. Using this option disables partition discovery.
>>> df = spark.read.text('python/test_support/sql/text-test.txt')
>>> df.collect()
[Row(value='hello'), Row(value='this')]
>>> df = spark.read.text('python/test_support/sql/text-test.txt', wholetext=True)
>>> df.collect()
[Row(value='hello\nthis')]
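To make the lineSep semantics concrete, here is a plain-Python sketch (not Spark itself) of how file content maps to rows: with lineSep=None, any of \r\n, \r or \n ends a row, while a non-None lineSep splits on that exact string only. The function name and the trailing-separator handling are illustrative assumptions, not Spark internals.

```python
import re
from typing import Optional


def split_rows(data: str, line_sep: Optional[str] = None) -> list:
    """Sketch of how text-source content is split into rows.

    line_sep=None mimics the default: \r\n, \r and \n all terminate a row.
    A non-None line_sep splits on that exact string only.
    """
    if line_sep is None:
        # Match \r\n first so it is not counted as two separators.
        rows = re.split(r"\r\n|\r|\n", data)
    else:
        rows = data.split(line_sep)
    # Assumption in this sketch: a trailing separator does not
    # produce an extra empty row.
    if rows and rows[-1] == "":
        rows.pop()
    return rows


print(split_rows("hello\nthis"))           # ['hello', 'this']
print(split_rows("a\r\nb\rc\n"))           # ['a', 'b', 'c']
print(split_rows("x,y,z", line_sep=","))   # ['x', 'y', 'z']
```

This mirrors why the doctest above yields two rows for the default read but a single row when wholetext=True, where no splitting happens at all.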
New in version 1.6.