pyspark.sql.DataFrameWriter.saveAsTable

DataFrameWriter.saveAsTable(name, format=None, mode=None, partitionBy=None, **options)[source]

Saves the content of the DataFrame as the specified table.

If the table already exists, the behavior of this function depends on the save mode, specified by the mode function (the default is to throw an exception). When the mode is overwrite, the schema of the DataFrame does not need to match that of the existing table. The modes behave as follows (see the sketch after this list):

  • append: Append contents of this DataFrame to existing data.

  • overwrite: Overwrite existing data.

  • error or errorifexists: Throw an exception if data already exists.

  • ignore: Silently ignore this operation if data already exists.
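A minimal sketch of how the save modes behave, assuming a running SparkSession bound to spark; the table name people and the DataFrame contents are illustrative only:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])

# The first call creates the table.
df.write.saveAsTable("people")

# "append" adds the rows of this DataFrame to the existing table.
df.write.mode("append").saveAsTable("people")

# "overwrite" replaces the table; the new schema need not match the old one.
df.write.mode("overwrite").saveAsTable("people")

# "ignore" is a no-op here because the table already exists.
df.write.mode("ignore").saveAsTable("people")

# "error" / "errorifexists" (the default) would raise an AnalysisException here.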

Parameters
  • name – the table name

  • format – the data source format used to save the table (for example, parquet or orc)

  • mode – one of append, overwrite, error, errorifexists, ignore (default: error)

  • partitionBy – names of partitioning columns

  • options – all other string options

New in version 1.4.
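A hedged sketch combining the remaining parameters, assuming a DataFrame named events_df that contains year and month columns; the table name, format, and the compression option are illustrative choices, not requirements:

# Write events_df as a partitioned Parquet table, replacing any existing table.
events_df.write.saveAsTable(
    "events",                        # table name
    format="parquet",                # data source format
    mode="overwrite",                # save mode
    partitionBy=["year", "month"],   # partitioning columns
    compression="snappy",            # extra string option passed through **options
)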