Class DataFrameWriterV2<T>
- All Implemented Interfaces:
CreateTableWriter<T>, WriteConfigMethods<CreateTableWriter<T>>
Interface used to write a Dataset to external storage using the v2 API.
- Since:
- 3.0.0
-
Constructor Summary
Constructors
Method Summary
Modifier and Type | Method | Description
abstract void | append() | Append the contents of the data frame to the output table.
DataFrameWriterV2<T> | clusterBy(String colName, String... colNames) | Clusters the output by the given columns on the storage.
abstract DataFrameWriterV2<T> | clusterBy(String colName, scala.collection.immutable.Seq<String> colNames) | Clusters the output by the given columns on the storage.
DataFrameWriterV2<T> | option(String key, boolean value) | Add a boolean output option.
DataFrameWriterV2<T> | option(String key, double value) | Add a double output option.
DataFrameWriterV2<T> | option(String key, long value) | Add a long output option.
abstract DataFrameWriterV2<T> | option(String key, String value) | Add a write option.
abstract DataFrameWriterV2<T> | options(java.util.Map<String,String> options) | Add write options from a Java Map.
abstract DataFrameWriterV2<T> | options(scala.collection.Map<String,String> options) | Add write options from a Scala Map.
abstract void | overwrite(Column condition) | Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
abstract void | overwritePartitions() | Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.
DataFrameWriterV2<T> | partitionedBy(Column column, Column... columns) | Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
abstract DataFrameWriterV2<T> | partitionedBy(Column column, scala.collection.immutable.Seq<Column> columns) | Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
abstract DataFrameWriterV2<T> | tableProperty(String property, String value) | Add a table property.
abstract DataFrameWriterV2<T> | using(String provider) | Specifies a provider for the underlying output data source.
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.sql.CreateTableWriter
create, createOrReplace, replace
-
Constructor Details
-
DataFrameWriterV2
public DataFrameWriterV2()
-
-
Method Details
-
append
public abstract void append() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Append the contents of the data frame to the output table.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException- If the table does not exist
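A one-line sketch, assuming the `df` from the class-level example and a hypothetical existing table "db.events":

// Appends df's rows; throws NoSuchTableException if "db.events" was never created.
df.writeTo("db.events").append()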
-
clusterBy
public DataFrameWriterV2<T> clusterBy(String colName, String... colNames)
Description copied from interface: CreateTableWriter
Clusters the output by the given columns on the storage. The rows with matching values in the specified clustering columns will be consolidated within the same group.
For instance, if you cluster a dataset by date, the data sharing the same date will be stored together in a file. This arrangement improves query efficiency when you apply selective filters to these clustering columns, thanks to data skipping.
- Specified by:
clusterBy in interface CreateTableWriter<T>
- Parameters:
colName - (undocumented)
colNames - (undocumented)
- Returns:
- (undocumented)
-
clusterBy
public abstract DataFrameWriterV2<T> clusterBy(String colName, scala.collection.immutable.Seq<String> colNames)
Description copied from interface: CreateTableWriter
Clusters the output by the given columns on the storage. The rows with matching values in the specified clustering columns will be consolidated within the same group.
For instance, if you cluster a dataset by date, the data sharing the same date will be stored together in a file. This arrangement improves query efficiency when you apply selective filters to these clustering columns, thanks to data skipping.
- Specified by:
clusterBy in interface CreateTableWriter<T>
- Parameters:
colName - (undocumented)
colNames - (undocumented)
- Returns:
- (undocumented)
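A sketch combining clusterBy with create, assuming a Spark version that provides clusterBy, that `df` has a `date` column, and that the hypothetical table "db.events_clustered" does not exist yet:

// Creates a table whose storage is clustered by `date`, so rows sharing a
// date are co-located and selective filters on that column can skip data.
df.writeTo("db.events_clustered")
  .clusterBy("date")
  .create()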
-
option
public DataFrameWriterV2<T> option(String key, boolean value)
Description copied from interface: WriteConfigMethods
Add a boolean output option.
- Specified by:
option in interface WriteConfigMethods<T>
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
-
option
public DataFrameWriterV2<T> option(String key, long value)
Description copied from interface: WriteConfigMethods
Add a long output option.
- Specified by:
option in interface WriteConfigMethods<T>
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
-
option
public DataFrameWriterV2<T> option(String key, double value)
Description copied from interface: WriteConfigMethods
Add a double output option.
- Specified by:
option in interface WriteConfigMethods<T>
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
-
option
public abstract DataFrameWriterV2<T> option(String key, String value)
Description copied from interface: WriteConfigMethods
Add a write option.
- Specified by:
option in interface WriteConfigMethods<T>
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
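The typed overloads store the value in its string form. A sketch with hypothetical option keys (valid keys depend on the underlying data source):

// Each option call returns the writer, so calls chain fluently.
df.writeTo("db.target")
  .option("compression", "zstd") // String overload
  .option("mergeSchema", true)   // boolean overload, stored as "true"
  .append()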
-
options
public abstract DataFrameWriterV2<T> options(scala.collection.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Scala Map.
- Specified by:
options in interface WriteConfigMethods<T>
- Parameters:
options - (undocumented)
- Returns:
- (undocumented)
-
options
public abstract DataFrameWriterV2<T> options(java.util.Map<String,String> options)
Description copied from interface: WriteConfigMethods
Add write options from a Java Map.
- Specified by:
options in interface WriteConfigMethods<T>
- Parameters:
options - (undocumented)
- Returns:
- (undocumented)
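The bulk variants accept a whole map at once. A sketch using the Scala overload with hypothetical keys (a java.util.Map works the same way through the Java overload):

// Supplies several write options in one call; all values are strings.
df.writeTo("db.target")
  .options(Map("compression" -> "zstd", "maxRecordsPerFile" -> "500000"))
  .append()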
-
overwrite
public abstract void overwrite(Column condition) throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite rows matching the given filter condition with the contents of the data frame in the output table.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Parameters:
condition - (undocumented)
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException- If the table does not exist
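A sketch, assuming `df` and a hypothetical existing table "db.events" with a `date` column:

import org.apache.spark.sql.functions.col

// Replaces only the rows matching the condition with df's contents;
// rows outside the condition are left untouched.
df.writeTo("db.events")
  .overwrite(col("date") === "2019-06-01")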
-
overwritePartitions
public abstract void overwritePartitions() throws org.apache.spark.sql.catalyst.analysis.NoSuchTableException
Overwrite all partitions for which the data frame contains at least one row with the contents of the data frame in the output table.
This operation is equivalent to Hive's INSERT OVERWRITE ... PARTITION, which replaces partitions dynamically depending on the contents of the data frame.
If the output table does not exist, this operation will fail with NoSuchTableException. The data frame will be validated to ensure it is compatible with the existing table.
- Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException- If the table does not exist
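A sketch, assuming `df` and a hypothetical existing partitioned table "db.events":

// Replaces exactly those partitions for which df holds at least one row,
// mirroring Hive's dynamic INSERT OVERWRITE ... PARTITION semantics.
df.writeTo("db.events").overwritePartitions()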
-
partitionedBy
public DataFrameWriterV2<T> partitionedBy(Column column, Column... columns)
Description copied from interface: CreateTableWriter
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
When specified, the table data will be stored by these values for efficient reads.
For example, when a table is partitioned by day, it may be stored in a directory layout like:
table/day=2019-06-01/
table/day=2019-06-02/
Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
- Specified by:
partitionedBy in interface CreateTableWriter<T>
- Parameters:
column - (undocumented)
columns - (undocumented)
- Returns:
- (undocumented)
-
partitionedBy
public abstract DataFrameWriterV2<T> partitionedBy(Column column, scala.collection.immutable.Seq<Column> columns)
Description copied from interface: CreateTableWriter
Partition the output table created by create, createOrReplace, or replace using the given columns or transforms.
When specified, the table data will be stored by these values for efficient reads.
For example, when a table is partitioned by day, it may be stored in a directory layout like:
table/day=2019-06-01/
table/day=2019-06-02/
Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. In order for partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.
- Specified by:
partitionedBy in interface CreateTableWriter<T>
- Parameters:
column - (undocumented)
columns - (undocumented)
- Returns:
- (undocumented)
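Partition transforms such as days and bucket live in org.apache.spark.sql.functions. A sketch, assuming `df` has a timestamp column `ts` and a column `id`, and the hypothetical table "db.events_partitioned" does not exist yet:

import org.apache.spark.sql.functions.{bucket, col, days}

// Creates a table partitioned by the day of `ts` and a 16-way bucket of `id`.
df.writeTo("db.events_partitioned")
  .partitionedBy(days(col("ts")), bucket(16, col("id")))
  .create()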
-
tableProperty
public abstract DataFrameWriterV2<T> tableProperty(String property, String value)
Description copied from interface: CreateTableWriter
Add a table property.
- Specified by:
tableProperty in interface CreateTableWriter<T>
- Parameters:
property - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
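A sketch attaching hypothetical metadata properties during table creation:

// Properties are stored with the table definition in the catalog.
df.writeTo("db.target")
  .tableProperty("owner", "data-eng")
  .tableProperty("retention.days", "30")
  .create()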
-
using
public abstract DataFrameWriterV2<T> using(String provider)
Description copied from interface: CreateTableWriter
Specifies a provider for the underlying output data source. Spark's default catalog supports "parquet", "json", etc.
- Specified by:
using in interface CreateTableWriter<T>
- Parameters:
provider - (undocumented)
- Returns:
- (undocumented)
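A sketch asking the catalog to back a new table with the built-in parquet source:

// The provider name is resolved by the catalog that owns "db.target".
df.writeTo("db.target")
  .using("parquet")
  .create()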
-