Interface StreamingDataWriterFactory

All Superinterfaces:
Serializable

@Evolving public interface StreamingDataWriterFactory extends Serializable
A factory of DataWriter returned by StreamingWrite.createStreamingWriterFactory(PhysicalWriteInfo), which is responsible for creating and initializing the actual data writer on the executor side.

Note that the writer factory will be serialized and sent to executors, where the data writer is created and does the actual writing. So this interface must be serializable, while DataWriter doesn't need to be.
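For illustration, here is a minimal sketch of a factory implementation, assuming a hypothetical HTTP sink. The names HttpStreamingDataWriterFactory and HttpDataWriter, and the endpoint URL, are assumptions for this example, not part of the Spark API (HttpDataWriter itself is sketched under createWriter below). The factory holds only a serializable String field, so it can be shipped to executors safely, while non-serializable resources are deferred to createWriter:

import org.apache.spark.sql.catalyst.InternalRow;
import org.apache.spark.sql.connector.write.DataWriter;
import org.apache.spark.sql.connector.write.streaming.StreamingDataWriterFactory;

// Hypothetical factory that writes each partition's rows to an HTTP endpoint.
// A plain String field keeps the factory serializable, so it can be sent
// from the driver to executors.
public class HttpStreamingDataWriterFactory implements StreamingDataWriterFactory {
  private final String endpointUrl;

  public HttpStreamingDataWriterFactory(String endpointUrl) {
    this.endpointUrl = endpointUrl;
  }

  @Override
  public DataWriter<InternalRow> createWriter(int partitionId, long taskId, long epochId) {
    // Runs on an executor: open non-serializable resources (HTTP clients,
    // connections, file handles) here rather than in the factory itself.
    return new HttpDataWriter(endpointUrl, partitionId, taskId, epochId);
  }
}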

Since:
3.0.0
  • Method Summary

    Modifier and Type
    Method
    Description
    DataWriter<org.apache.spark.sql.catalyst.InternalRow>
    createWriter(int partitionId, long taskId, long epochId)
    Returns a data writer to do the actual writing work.
  • Method Details

    • createWriter

      DataWriter<org.apache.spark.sql.catalyst.InternalRow> createWriter(int partitionId, long taskId, long epochId)
      Returns a data writer to do the actual writing work. Note that Spark will reuse the same data object instance when sending data to the data writer, for better performance. Data writers are responsible for making defensive copies if necessary, e.g. copying the data before buffering it in a list (see the sketch after the parameter list below).

      If this method fails (by throwing an exception), the corresponding Spark write task will fail and be retried until the maximum number of retries is reached.

      Parameters:
      partitionId - A unique id of the RDD partition that the returned writer will process. Usually Spark processes many RDD partitions at the same time; implementations should use the partition id to distinguish writers for different partitions.
      taskId - The task id returned by TaskContext.taskAttemptId(). Spark may run multiple tasks for the same partition (due to speculation or task failures, for example).
      epochId - A monotonically increasing id for streaming queries that are split into discrete periods of execution.
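
      As a companion sketch (again hypothetical, not part of Spark itself): the writer below pairs with the factory shown earlier, buffering rows for the assumed HTTP sink. Because Spark reuses the incoming InternalRow instance across calls, write(...) copies each row before buffering it, as the note above requires.

      import java.io.IOException;
      import java.util.ArrayList;
      import java.util.List;

      import org.apache.spark.sql.catalyst.InternalRow;
      import org.apache.spark.sql.connector.write.DataWriter;
      import org.apache.spark.sql.connector.write.WriterCommitMessage;

      // Hypothetical writer created on the executor by the factory sketch above.
      public class HttpDataWriter implements DataWriter<InternalRow> {
        private final String endpointUrl;
        private final int partitionId;
        private final long taskId;
        private final long epochId;
        private final List<InternalRow> buffer = new ArrayList<>();

        public HttpDataWriter(String endpointUrl, int partitionId, long taskId, long epochId) {
          this.endpointUrl = endpointUrl;
          this.partitionId = partitionId;
          this.taskId = taskId;
          this.epochId = epochId;
        }

        @Override
        public void write(InternalRow record) {
          // Defensive copy: Spark reuses the same InternalRow instance between
          // calls, so the row must be copied before it is buffered.
          buffer.add(record.copy());
        }

        @Override
        public WriterCommitMessage commit() throws IOException {
          // Hypothetical: flush the buffered rows to endpointUrl here, then
          // report what this task wrote.
          return new HttpCommitMessage(partitionId, taskId, epochId, buffer.size());
        }

        @Override
        public void abort() {
          buffer.clear();  // discard buffered rows on task failure
        }

        @Override
        public void close() {
          // Release any resources opened in createWriter (e.g. the HTTP client).
        }

        // Hypothetical commit message; WriterCommitMessage is a serializable marker.
        private static class HttpCommitMessage implements WriterCommitMessage {
          final int partitionId;
          final long taskId;
          final long epochId;
          final int rowCount;

          HttpCommitMessage(int partitionId, long taskId, long epochId, int rowCount) {
            this.partitionId = partitionId;
            this.taskId = taskId;
            this.epochId = epochId;
            this.rowCount = rowCount;
          }
        }
      }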