pyspark.sql.DataFrameReader.table

DataFrameReader.table(tableName)

Returns the specified table as a DataFrame.

Parameters

tableName – string, name of the table or view to read.

>>> df = spark.read.parquet('python/test_support/sql/parquet_partitioned')
>>> df.createOrReplaceTempView('tmpTable')
>>> spark.read.table('tmpTable').dtypes
[('name', 'string'), ('year', 'int'), ('month', 'int'), ('day', 'int')]

New in version 1.4.