@Experimental public interface SupportsAtomicPartitionManagement extends SupportsPartitionManagement
An atomic partition interface of Table to operate multiple partitions atomically.
These APIs are used to modify table partitions or partition metadata; they will change the table data as well.
createPartitions(org.apache.spark.sql.catalyst.InternalRow[], java.util.Map<java.lang.String, java.lang.String>[]): add an array of partitions and any data they contain to the table
dropPartitions(org.apache.spark.sql.catalyst.InternalRow[]): remove an array of partitions and any data they contain from the table
purgePartitions(org.apache.spark.sql.catalyst.InternalRow[]): remove an array of partitions and any data they contain from the table by skipping a trash even if it is supported
truncatePartitions(org.apache.spark.sql.catalyst.InternalRow[]): truncate an array of partitions by removing partitions data
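For orientation, here is a minimal caller-side sketch of checking for and using the atomic variant. The class name, the `addTwoPartitions` helper, and the empty property maps are illustrative assumptions, not part of the API; only the interface and its `createPartitions` method come from this page.

```java
import java.util.Collections;
import java.util.Map;

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException;
import org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement;
import org.apache.spark.sql.connector.catalog.Table;

public class AtomicPartitionUsage {
  // `table`, `p1`, and `p2` are assumed to come from the surrounding catalog code.
  @SuppressWarnings("unchecked")
  static void addTwoPartitions(Table table, InternalRow p1, InternalRow p2)
      throws PartitionsAlreadyExistException {
    if (!(table instanceof SupportsAtomicPartitionManagement)) {
      throw new UnsupportedOperationException("Table does not support atomic partition management");
    }
    SupportsAtomicPartitionManagement atomic = (SupportsAtomicPartitionManagement) table;

    // Either both partitions are created, or the call is rolled back and neither exists.
    atomic.createPartitions(
        new InternalRow[]{p1, p2},
        new Map[]{Collections.emptyMap(), Collections.emptyMap()});
  }
}
```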
Modifier and Type | Method and Description |
---|---|
default void | createPartition(org.apache.spark.sql.catalyst.InternalRow ident, java.util.Map<String,String> properties) Create a partition in table. |
void | createPartitions(org.apache.spark.sql.catalyst.InternalRow[] idents, java.util.Map<String,String>[] properties) Create an array of partitions atomically in table. |
default boolean | dropPartition(org.apache.spark.sql.catalyst.InternalRow ident) Drop a partition from table. |
boolean | dropPartitions(org.apache.spark.sql.catalyst.InternalRow[] idents) Drop an array of partitions atomically from table. |
default boolean | purgePartitions(org.apache.spark.sql.catalyst.InternalRow[] idents) Drop an array of partitions atomically from table, and completely remove partitions data by skipping a trash even if it is supported. |
default boolean | truncatePartitions(org.apache.spark.sql.catalyst.InternalRow[] idents) Truncate an array of partitions atomically from table, and completely remove partitions data. |
Methods inherited from interface org.apache.spark.sql.connector.catalog.SupportsPartitionManagement: listPartitionIdentifiers, loadPartitionMetadata, partitionExists, partitionSchema, purgePartition, renamePartition, replacePartitionMetadata, truncatePartition
Methods inherited from interface org.apache.spark.sql.connector.catalog.Table: capabilities, columns, name, partitioning, properties, schema
default void createPartition(org.apache.spark.sql.catalyst.InternalRow ident, java.util.Map<String,String> properties) throws org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException, UnsupportedOperationException
Description copied from interface: SupportsPartitionManagement
Create a partition in table.
Specified by:
createPartition in interface SupportsPartitionManagement
Parameters:
ident - a new partition identifier
properties - the metadata of a partition
Throws:
org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException - If a partition already exists for the identifier
UnsupportedOperationException - If partition property is not supported
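One plausible shape for this default, shown only as a sketch (the wrapper interface name is hypothetical and the shipped default body may differ): wrap the single identifier and its properties in one-element arrays and delegate to the atomic createPartitions.

```java
import java.util.Map;

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException;
import org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement;

// Sketch only; not the Spark-provided default body.
interface SingleToAtomicCreateSketch extends SupportsAtomicPartitionManagement {
  @Override
  @SuppressWarnings("unchecked")
  default void createPartition(InternalRow ident, Map<String, String> properties)
      throws PartitionsAlreadyExistException, UnsupportedOperationException {
    // Delegate the single partition to the atomic, multi-partition variant.
    createPartitions(new InternalRow[]{ident}, new Map[]{properties});
  }
}
```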
default boolean dropPartition(org.apache.spark.sql.catalyst.InternalRow ident)
Description copied from interface: SupportsPartitionManagement
Drop a partition from table.
Specified by:
dropPartition in interface SupportsPartitionManagement
Parameters:
ident - a partition identifier
void createPartitions(org.apache.spark.sql.catalyst.InternalRow[] idents, java.util.Map<String,String>[] properties) throws org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException, UnsupportedOperationException
Create an array of partitions atomically in table.
If any partition already exists, the createPartitions operation needs to be safely rolled back.
Parameters:
idents - an array of new partition identifiers
properties - the metadata of the partitions
Throws:
org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException - If any partition already exists for the identifier
UnsupportedOperationException - If partition property is not supported
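A minimal sketch of the rollback contract described above, assuming a hypothetical in-memory connector with `addPartitionEntry` and `removePartitionEntry` helpers (neither helper is part of the Spark API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.analysis.PartitionsAlreadyExistException;
import org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement;

// Sketch of the "safely rolled back" requirement; not the Spark-provided implementation.
abstract class InMemoryAtomicPartitionTable implements SupportsAtomicPartitionManagement {

  // Hypothetical single-partition helpers supplied by the connector.
  protected abstract void addPartitionEntry(InternalRow ident, Map<String, String> properties)
      throws PartitionsAlreadyExistException;

  protected abstract void removePartitionEntry(InternalRow ident);

  @Override
  public void createPartitions(InternalRow[] idents, Map<String, String>[] properties)
      throws PartitionsAlreadyExistException, UnsupportedOperationException {
    List<InternalRow> created = new ArrayList<>();
    try {
      for (int i = 0; i < idents.length; i++) {
        addPartitionEntry(idents[i], properties[i]);
        created.add(idents[i]);
      }
    } catch (PartitionsAlreadyExistException | RuntimeException e) {
      // Roll back: drop whatever was created before the failure, then rethrow.
      for (InternalRow ident : created) {
        removePartitionEntry(ident);
      }
      throw e;
    }
  }
}
```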
boolean dropPartitions(org.apache.spark.sql.catalyst.InternalRow[] idents)
Drop an array of partitions atomically from table.
If any partition doesn't exist, the dropPartitions operation needs to be safely rolled back.
Parameters:
idents - an array of partition identifiers
default boolean purgePartitions(org.apache.spark.sql.catalyst.InternalRow[] idents) throws org.apache.spark.sql.catalyst.analysis.NoSuchPartitionException, UnsupportedOperationException
Drop an array of partitions atomically from table, and completely remove partitions data by skipping a trash even if it is supported.
If any partition doesn't exist, the purgePartitions operation needs to be safely rolled back.
Parameters:
idents - an array of partition identifiers
Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchPartitionException - If any partition identifier to alter doesn't exist
UnsupportedOperationException - If partition purging is not supported
default boolean truncatePartitions(org.apache.spark.sql.catalyst.InternalRow[] idents) throws org.apache.spark.sql.catalyst.analysis.NoSuchPartitionException, UnsupportedOperationException
Truncate an array of partitions atomically from table, and completely remove partitions data.
If any partition doesn't exist, the truncatePartitions operation needs to be safely rolled back.
Parameters:
idents - an array of partition identifiers
Throws:
org.apache.spark.sql.catalyst.analysis.NoSuchPartitionException - If any partition identifier to truncate doesn't exist
UnsupportedOperationException - If partition truncation is not supported
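Because a connector may legitimately not support purging, a caller can attempt purgePartitions and fall back to a plain atomic drop. A hedged caller-side sketch follows; the class and method names are illustrative, not part of the API.

```java
import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.catalyst.analysis.NoSuchPartitionException;
import org.apache.spark.sql.connector.catalog.SupportsAtomicPartitionManagement;

public class DropOrPurgeExample {
  // Prefer purgePartitions (skips the trash); fall back to dropPartitions when the
  // connector reports that purging is not supported.
  static boolean dropPreferringPurge(SupportsAtomicPartitionManagement table, InternalRow[] idents)
      throws NoSuchPartitionException {
    try {
      return table.purgePartitions(idents);
    } catch (UnsupportedOperationException unsupported) {
      return table.dropPartitions(idents);
    }
  }
}
```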