trait PartitionReaderFactory extends Serializable
A factory used to create PartitionReader instances.
If Spark fails to execute any method in an implementation of this interface, or in the returned
PartitionReader
(by throwing an exception), the corresponding Spark task fails and is
retried until the maximum number of retries is reached.
- Annotations
- @Evolving()
- Source
- PartitionReaderFactory.java
- Since
3.0.0
- Inheritance
- PartitionReaderFactory
- Serializable
- AnyRef
- Any
Abstract Value Members
- abstract def createReader(partition: InputPartition): PartitionReader[InternalRow]
Returns a row-based partition reader to read data from the given
InputPartition
. Implementations will probably need to cast the input partition to the concrete
InputPartition
class defined for the data source.
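As a sketch of the cast-then-read pattern described above, the following self-contained example uses simplified stand-in traits (the real `InputPartition` and `PartitionReader` live in `org.apache.spark.sql.connector.read`, and the real reader yields `InternalRow`, not `Int`); `RangePartition` and `RangeReaderFactory` are hypothetical names invented here for illustration.

```scala
// Simplified stand-ins for the Spark connector interfaces (sketch only).
trait InputPartition extends Serializable
trait PartitionReader[T] extends AutoCloseable {
  def next(): Boolean
  def get(): T
}

// Hypothetical concrete partition class defined by the data source.
case class RangePartition(start: Int, end: Int) extends InputPartition

class RangeReaderFactory extends Serializable {
  def createReader(partition: InputPartition): PartitionReader[Int] = {
    // Cast the generic InputPartition to the data source's concrete class,
    // as the documentation says implementations will probably need to do.
    val p = partition.asInstanceOf[RangePartition]
    new PartitionReader[Int] {
      private var current = p.start - 1
      override def next(): Boolean = { current += 1; current < p.end }
      override def get(): Int = current
      override def close(): Unit = ()
    }
  }
}

// Usage: drain the reader with the next()/get() iterator protocol.
val reader = new RangeReaderFactory().createReader(RangePartition(0, 3))
val values = scala.collection.mutable.ArrayBuffer[Int]()
while (reader.next()) values += reader.get()
reader.close()
```

Note the `next()`/`get()` protocol: `next()` advances and reports whether a record is available, and `get()` returns the current record, so `get()` must only be called after `next()` returns true.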
Concrete Value Members
- def createColumnarReader(partition: InputPartition): PartitionReader[ColumnarBatch]
Returns a columnar partition reader to read data from the given
InputPartition
. Implementations will probably need to cast the input partition to the concrete
InputPartition
class defined for the data source.
- def supportColumnarReads(partition: InputPartition): Boolean
Returns true if the given
InputPartition
should be read by Spark in a columnar way. This means implementations must also implement
createColumnarReader(InputPartition)
for the input partitions for which this method returns true.

As of Spark 2.4, Spark can only read all input partitions in a columnar way, or none of them. A data source can't mix columnar and row-based partitions. This may be relaxed in future versions.
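The contract above — columnar reads for all partitions or none, with `createColumnarReader` implemented whenever `supportColumnarReads` answers true — can be sketched as follows. This again uses simplified stand-in traits rather than the real Spark types (the real columnar reader returns `ColumnarBatch`, not an `Array[Int]`), and `ColumnBatchPartition` and `ColumnarCapableFactory` are hypothetical names.

```scala
// Simplified stand-ins for the Spark connector interfaces (sketch only).
trait InputPartition extends Serializable
trait PartitionReader[T] extends AutoCloseable {
  def next(): Boolean
  def get(): T
}

// Hypothetical concrete partition holding its data inline.
case class ColumnBatchPartition(values: Array[Int]) extends InputPartition

class ColumnarCapableFactory extends Serializable {
  // The row-based path is always required.
  def createReader(partition: InputPartition): PartitionReader[Int] = {
    val p = partition.asInstanceOf[ColumnBatchPartition]
    new PartitionReader[Int] {
      private var i = -1
      override def next(): Boolean = { i += 1; i < p.values.length }
      override def get(): Int = p.values(i)
      override def close(): Unit = ()
    }
  }

  // Columnar path: this sketch exposes the whole partition as one batch.
  def createColumnarReader(partition: InputPartition): PartitionReader[Array[Int]] = {
    val p = partition.asInstanceOf[ColumnBatchPartition]
    new PartitionReader[Array[Int]] {
      private var consumed = false
      override def next(): Boolean = { val more = !consumed; consumed = true; more }
      override def get(): Array[Int] = p.values
      override def close(): Unit = ()
    }
  }

  // Because Spark (as of 2.4) reads all partitions columnar or none,
  // this must answer the same way for every partition of a scan.
  def supportColumnarReads(partition: InputPartition): Boolean = true
}

val factory = new ColumnarCapableFactory
val part = ColumnBatchPartition(Array(1, 2, 3))
val columnar = factory.supportColumnarReads(part)
val batchReader = factory.createColumnarReader(part)
```

Answering true from `supportColumnarReads` without a working `createColumnarReader` would break the scan, which is why the two members are tied together in the contract.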