trait SessionConfigSupport extends TableProvider
A mix-in interface for TableProvider. Data sources can implement this interface to
propagate session configs with the specified key-prefix to all data source operations in this
session.
- Annotations
- @Evolving()
- Source
- SessionConfigSupport.java
- Since
3.0.0
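A minimal sketch of a provider mixing in this trait, assuming a hypothetical MySourceProvider and a hypothetical MySourceTable (sketched under getTable below); only keyPrefix() is specific to this interface, the rest comes from TableProvider:

```scala
import java.util
import org.apache.spark.sql.connector.catalog.{SessionConfigSupport, Table, TableProvider}
import org.apache.spark.sql.connector.expressions.Transform
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.util.CaseInsensitiveStringMap

// Hypothetical provider: with keyPrefix() == "mysource", every session config
// "spark.datasource.mysource.xxx" is delivered to reads/writes as option "xxx".
class MySourceProvider extends TableProvider with SessionConfigSupport {

  // The only member added by SessionConfigSupport.
  override def keyPrefix(): String = "mysource"

  override def inferSchema(options: CaseInsensitiveStringMap): StructType =
    new StructType().add("id", "long").add("value", "string")

  override def getTable(
      schema: StructType,
      partitioning: Array[Transform],
      properties: util.Map[String, String]): Table =
    new MySourceTable(schema, partitioning, properties) // sketched under getTable below
}
```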
Linear Supertypes
- TableProvider
- AnyRef
- Any
Abstract Value Members
- abstract def getTable(schema: StructType, partitioning: Array[Transform], properties: Map[String, String]): Table
Return a Table instance with the specified table schema, partitioning and properties to do read/write. The returned table should report the same schema and partitioning as the specified ones, or Spark may fail the operation.
- schema
The specified table schema.
- partitioning
The specified table partitioning.
- properties
The specified table properties. It's case preserving (contains exactly what users specified) and implementations are free to use it case sensitively or insensitively. It should be able to identify a table, e.g. file path, Kafka topic name, etc.
- Definition Classes
- TableProvider
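A sketch of a Table honoring the contract above: it reports back exactly the schema and partitioning it was constructed with. The class name and the "path" property are assumptions for illustration:

```scala
import java.util
import org.apache.spark.sql.connector.catalog.{Table, TableCapability}
import org.apache.spark.sql.connector.expressions.Transform
import org.apache.spark.sql.types.StructType

// Hypothetical Table: echoes the schema/partitioning passed to getTable, as the
// contract requires, and identifies itself via the "path" property.
class MySourceTable(
    tableSchema: StructType,
    tablePartitioning: Array[Transform],
    properties: util.Map[String, String]) extends Table {

  override def name(): String = properties.get("path")

  // Must match what getTable received, or Spark may fail the operation.
  override def schema(): StructType = tableSchema
  override def partitioning(): Array[Transform] = tablePartitioning

  // A real source would also mix in SupportsRead/SupportsWrite to serve scans.
  override def capabilities(): util.Set[TableCapability] =
    util.EnumSet.of(TableCapability.BATCH_READ)
}
```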
- abstract def inferSchema(options: CaseInsensitiveStringMap): StructType
Infer the schema of the table identified by the given options.
- options
an immutable case-insensitive string-to-string map that can identify a table, e.g. file path, Kafka topic name, etc.
- Definition Classes
- TableProvider
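Continuing the MySourceProvider sketch above, an inferSchema override might look as follows; the "path" option and the fixed result schema are assumptions standing in for real (possibly expensive) inference:

```scala
// The map is case-insensitive: options.get("PATH") and options.get("path")
// return the same value.
override def inferSchema(options: CaseInsensitiveStringMap): StructType = {
  val path = options.get("path")
  require(path != null, "'path' option is required to identify the table")
  // Real inference (sampling files, contacting a broker, ...) would go here.
  new StructType().add("id", "long").add("value", "string")
}
```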
- abstract def keyPrefix(): String
Key prefix of the session configs to propagate, which is usually the data source name. Spark will extract all session configs that start with spark.datasource.$keyPrefix, turn spark.datasource.$keyPrefix.xxx -> yyy into xxx -> yyy, and propagate them to all data source operations in this session.
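Continuing the MySourceProvider sketch above (keyPrefix() == "mysource"), the rewriting looks like this from the user side; spark is assumed to be an active SparkSession and the format class name is hypothetical:

```scala
// Session config: spark.datasource.mysource.user -> alice
spark.conf.set("spark.datasource.mysource.user", "alice")

// Spark strips the "spark.datasource.mysource." prefix and adds
// "user" -> "alice" to the options of every operation on this source,
// so inferSchema/getTable will see options.get("user") == "alice".
val df = spark.read.format("com.example.MySourceProvider").load()
```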
Concrete Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def inferPartitioning(options: CaseInsensitiveStringMap): Array[Transform]
Infer the partitioning of the table identified by the given options.
By default this method returns empty partitioning; please override it if this source supports partitioning.
- options
an immutable case-insensitive string-to-string map that can identify a table, e.g. file path, Kafka topic name, etc.
- Definition Classes
- TableProvider
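A sketch of overriding the default, assuming the source partitions by a "date" column (an illustrative name); Expressions.identity builds the identity transform:

```scala
import org.apache.spark.sql.connector.expressions.{Expressions, Transform}
import org.apache.spark.sql.util.CaseInsensitiveStringMap

// Report identity partitioning on an assumed "date" column instead of the
// default empty array. getTable must then report the same partitioning.
override def inferPartitioning(options: CaseInsensitiveStringMap): Array[Transform] =
  Array(Expressions.identity("date"))
```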
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def supportsExternalMetadata(): Boolean
Returns true if the source has the ability of accepting external table metadata when getting tables. The external table metadata includes:
- For table reader: user-specified schema from DataFrameReader/DataStreamReader and schema/partitioning stored in Spark catalog.
- For table writer: the schema of the input DataFrame of DataFrameWriter/DataStreamWriter.
By default this method returns false, which means the schema and partitioning passed to getTable(StructType, Transform[], Map) are from the infer methods. Please override it if this source has expensive schema/partitioning inference and wants external table metadata to avoid inference.
- Definition Classes
- TableProvider
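A sketch of opting in, for a source whose inference is expensive; with this override a user-supplied schema flows straight into getTable:

```scala
// Accept schema/partitioning from the user or the Spark catalog instead of
// always running the infer methods.
override def supportsExternalMetadata(): Boolean = true
```

From the user side, assuming the hypothetical provider above, a specified schema is then passed to getTable without invoking inferSchema:

```scala
spark.read
  .schema("id LONG, value STRING")
  .format("com.example.MySourceProvider")
  .load()
```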
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)