trait TableProvider extends AnyRef

The base interface for v2 data sources which don't have a real catalog. Implementations must have a public, 0-arg constructor.

Note that TableProvider can only apply data operations to existing tables, such as read, append, delete, and overwrite. It does not support operations that require metadata changes, such as creating or dropping tables.

The major responsibility of this interface is to return a Table for read/write.
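
For illustration, a hedged sketch of how such a source is typically driven from the DataFrame API. The class name com.example.ExampleProvider and the path option are hypothetical; an implementation sketch appears under the abstract value members below.

  import org.apache.spark.sql.SparkSession

  object ExampleProviderUsage {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .master("local[*]")
        .appName("table-provider-demo")
        .getOrCreate()

      // Spark instantiates the provider via its public, 0-arg constructor, asks it to
      // infer (or accept) schema/partitioning, and finally calls getTable to obtain
      // the Table to read from.
      val df = spark.read
        .format("com.example.ExampleProvider") // hypothetical TableProvider implementation
        .option("path", "/tmp/example")        // options arrive as a CaseInsensitiveStringMap
        .load()

      df.printSchema()
      spark.stop()
    }
  }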

Annotations
@Evolving()
Source
TableProvider.java
Since
3.0.0

Linear Supertypes
AnyRef, Any

Abstract Value Members

  1. abstract def getTable(schema: StructType, partitioning: Array[Transform], properties: Map[String, String]): Table

    Return a Table instance with the specified table schema, partitioning and properties to do read/write. The returned table should report the same schema and partitioning as the specified ones, or Spark may fail the operation. A minimal implementation sketch covering both abstract members follows the inferSchema entry below.

    schema

    The specified table schema.

    partitioning

    The specified table partitioning.

    properties

    The specified table properties. The map is case-preserving (it contains exactly what users specified), and implementations are free to interpret it case-sensitively or case-insensitively. It should be able to identify a table, e.g. a file path, Kafka topic name, etc.

  2. abstract def inferSchema(options: CaseInsensitiveStringMap): StructType

    Infer the schema of the table identified by the given options.

    options

    an immutable case-insensitive string-to-string map that can identify a table, e.g. file path, Kafka topic name, etc.
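
A minimal sketch of how the two abstract members fit together, assuming hypothetical ExampleProvider and ExampleTable classes and a hard-coded single-column schema. The returned table simply echoes the schema and partitioning it was given; a real source would additionally mix in SupportsRead/SupportsWrite on the table and declare the matching capabilities.

  import java.util

  import org.apache.spark.sql.connector.catalog.{Table, TableCapability, TableProvider}
  import org.apache.spark.sql.connector.expressions.Transform
  import org.apache.spark.sql.types.{StringType, StructType}
  import org.apache.spark.sql.util.CaseInsensitiveStringMap

  // Hypothetical provider with the required public, 0-arg constructor.
  class ExampleProvider extends TableProvider {

    // Derive the schema from the options that identify the table (hard-coded here).
    override def inferSchema(options: CaseInsensitiveStringMap): StructType =
      new StructType().add("value", StringType)

    // Hand back a Table built from the specified schema, partitioning and properties.
    override def getTable(
        schema: StructType,
        partitioning: Array[Transform],
        properties: util.Map[String, String]): Table =
      new ExampleTable(schema, partitioning)
  }

  // Hypothetical table that reports exactly the schema and partitioning it was created
  // with, as the getTable contract requires.
  class ExampleTable(
      tableSchema: StructType,
      tablePartitioning: Array[Transform]) extends Table {
    override def name(): String = "example"
    override def schema(): StructType = tableSchema
    override def partitioning(): Array[Transform] = tablePartitioning
    // Empty here; a real source would declare e.g. TableCapability.BATCH_READ.
    override def capabilities(): util.Set[TableCapability] =
      util.Collections.emptySet[TableCapability]()
  }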

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
  6. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  7. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  8. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  9. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @IntrinsicCandidate() @native()
  10. def inferPartitioning(options: CaseInsensitiveStringMap): Array[Transform]

    Infer the partitioning of the table identified by the given options.

    By default this method returns empty partitioning; please override it if this source supports partitioning.

    options

    an immutable case-insensitive string-to-string map that can identify a table, e.g. file path, Kafka topic name, etc.
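
    As a hedged illustration (the partition column name is hypothetical, and ExampleProvider refers to the sketch above, assumed to be in scope), a source laid out by an identity partition on a date column could report:

      import org.apache.spark.sql.connector.expressions.{Expressions, Transform}
      import org.apache.spark.sql.util.CaseInsensitiveStringMap

      // Hypothetical: report identity partitioning on a "date" column instead of the
      // default empty partitioning.
      class PartitionedExampleProvider extends ExampleProvider {
        override def inferPartitioning(options: CaseInsensitiveStringMap): Array[Transform] =
          Array(Expressions.identity("date"))
      }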

  11. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  12. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  13. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  14. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @IntrinsicCandidate() @native()
  15. def supportsExternalMetadata(): Boolean

    Returns true if the source can accept external table metadata when getting tables. The external table metadata includes:

    • For table reader: the user-specified schema from DataFrameReader/DataStreamReader and the schema/partitioning stored in the Spark catalog.
    • For table writer: the schema of the input DataFrame of DataFrameWriter/DataStreamWriter.

    By default this method returns false, which means the schema and partitioning passed to getTable(StructType, Transform[], Map) come from the infer methods. Please override it if this source has expensive schema/partitioning inference and wants to use external table metadata to avoid inference.
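
    For instance, a source with expensive inference could opt in as follows (a sketch building on the hypothetical ExampleProvider above, assumed to be in scope):

      // Hypothetical: accept user-specified or catalog-stored schema/partitioning so that
      // Spark can skip the potentially expensive infer methods when metadata is available.
      class ExternalMetadataExampleProvider extends ExampleProvider {
        override def supportsExternalMetadata(): Boolean = true
      }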

  16. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  17. def toString(): String
    Definition Classes
    AnyRef → Any
  18. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  19. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  20. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable]) @Deprecated
    Deprecated

    (Since version 9)
