Class DataStreamReader
- All Implemented Interfaces:
org.apache.spark.internal.Logging
Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores, etc). Use SparkSession.readStream to access this.
- Since:
- 2.0.0
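For example, a minimal end-to-end sketch (the socket host/port and console sink are illustrative choices, not part of this class's contract):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("DataStreamReaderExample")
  .getOrCreate()

// SparkSession.readStream returns a DataStreamReader.
val lines = spark.readStream
  .format("socket")            // built-in test source
  .option("host", "localhost") // illustrative host
  .option("port", 9999)        // illustrative port
  .load()

// Start a query so the stream actually runs.
val query = lines.writeStream
  .format("console")
  .start()
query.awaitTermination()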
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Method Summary
Modifier and Type | Method | Description
Dataset<Row> | csv(String path) | Loads a CSV file stream and returns the result as a DataFrame.
DataStreamReader | format(String source) | Specifies the input data source format.
Dataset<Row> | json(String path) | Loads a JSON file stream and returns the results as a DataFrame.
Dataset<Row> | load() | Loads input data stream in as a DataFrame, for data streams that don't require a path (e.g. external key-value stores).
Dataset<Row> | load(String path) | Loads input in as a DataFrame, for data streams that read from some path.
DataStreamReader | option(String key, boolean value) | Adds an input option for the underlying data source.
DataStreamReader | option(String key, double value) | Adds an input option for the underlying data source.
DataStreamReader | option(String key, long value) | Adds an input option for the underlying data source.
DataStreamReader | option(String key, String value) | Adds an input option for the underlying data source.
DataStreamReader | options(java.util.Map<String,String> options) | (Java-specific) Adds input options for the underlying data source.
DataStreamReader | options(scala.collection.Map<String,String> options) | (Scala-specific) Adds input options for the underlying data source.
Dataset<Row> | orc(String path) | Loads an ORC file stream, returning the result as a DataFrame.
Dataset<Row> | parquet(String path) | Loads a Parquet file stream, returning the result as a DataFrame.
DataStreamReader | schema(String schemaString) | Specifies the schema by using the input DDL-formatted string.
DataStreamReader | schema(StructType schema) | Specifies the input schema.
Dataset<Row> | table(String tableName) | Define a Streaming DataFrame on a Table.
Dataset<Row> | text(String path) | Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any.
Dataset<String> | textFile(String path) | Loads text file(s) and returns a Dataset of String.
Dataset<Row> | xml(String path) | Loads an XML file stream and returns the result as a DataFrame.

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext
-
Method Details
-
csv
Loads a CSV file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the CSV-specific options for reading CSV file streams in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
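For example, a minimal sketch that supplies the schema up front so no inference pass is needed (assumes an active SparkSession named spark; the schema, options, and directory are illustrative):

import org.apache.spark.sql.types.{LongType, StringType, StructType}

val csvSchema = new StructType()
  .add("id", LongType)
  .add("name", StringType)

val csvStream = spark.readStream
  .schema(csvSchema)               // explicit schema: no inference pass
  .option("header", "true")
  .option("maxFilesPerTrigger", 1) // at most one new file per trigger
  .csv("/data/incoming/csv")       // illustrative input directory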
-
format
Specifies the input data source format.
- Parameters:
source - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
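For example, a sketch using the built-in rate source (the option value is illustrative; assumes an active SparkSession named spark):

// format() names the source; source-specific options follow,
// and load() materializes the streaming DataFrame.
val rateStream = spark.readStream
  .format("rate")
  .option("rowsPerSecond", 5)
  .load()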
-
json
Loads a JSON file stream and returns the results as a DataFrame.
JSON Lines (newline-delimited JSON) is supported by default. For JSON (one record per file), set the multiLine option to true.
This function goes through the input once to determine the input schema. If you know the schema in advance, use the version that specifies the schema to avoid the extra scan.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the JSON-specific options for reading JSON file streams in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
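For example, a sketch that provides the schema in advance to skip the extra scan (assumes an active SparkSession named spark; the schema and path are illustrative):

val jsonStream = spark.readStream
  .schema("id LONG, event STRING") // DDL-formatted schema
  .option("multiLine", "false")    // default: JSON Lines, one record per line
  .json("/data/incoming/json")     // illustrative directory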
-
load
Loads input data stream in as a DataFrame, for data streams that don't require a path (e.g. external key-value stores).
- Returns:
- (undocumented)
- Since:
- 2.0.0
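For example, Kafka is a pathless source configured entirely through options (a sketch; requires the spark-sql-kafka connector on the classpath, and the server and topic names are illustrative):

val kafkaStream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load() // no path: the source reads from Kafka, not a file system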
-
load
Loads input in as a DataFrame, for data streams that read from some path.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
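For example (a sketch; assumes an active SparkSession named spark, and the schema and directory are illustrative):

// Equivalent to calling .parquet(path) directly after format("parquet").
val fileStream = spark.readStream
  .format("parquet")
  .schema("id LONG, amount DOUBLE") // streaming file sources generally require a schema
  .load("/data/incoming/parquet")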
-
option
Adds an input option for the underlying data source.
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
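The typed overloads are conveniences; each value is ultimately passed to the source in string form. For example (a sketch; assumes an active SparkSession named spark):

val reader = spark.readStream
  .format("csv")
  .option("header", true)            // boolean overload
  .option("maxFilesPerTrigger", 10L) // long overload
  .option("sep", ";")                // String overload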
-
option
Adds an input option for the underlying data source.
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
option
Adds an input option for the underlying data source.
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
option
Adds an input option for the underlying data source.
- Parameters:
key - (undocumented)
value - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
options
(Scala-specific) Adds input options for the underlying data source.
- Parameters:
options - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
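For example, setting several options at once with the Scala-specific overload (a sketch; assumes an active SparkSession named spark; the option values are illustrative):

val reader = spark.readStream
  .format("csv")
  .options(Map(
    "header"   -> "true",
    "sep"      -> ";",
    "encoding" -> "UTF-8"))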
-
options
(Java-specific) Adds input options for the underlying data source.
- Parameters:
options - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
orc
Loads an ORC file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
ORC-specific option(s) for reading ORC file streams can be found in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.3.0
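For example (a sketch; assumes an active SparkSession named spark; the schema and path are illustrative):

val orcStream = spark.readStream
  .schema("id LONG, payload STRING") // file streams need a schema up front
  .option("maxFilesPerTrigger", 5)
  .orc("/data/incoming/orc")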
-
parquet
Loads a Parquet file stream, returning the result as a DataFrame.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
Parquet-specific option(s) for reading Parquet file streams can be found in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
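For example (a sketch; assumes an active SparkSession named spark; the schema and path are illustrative):

val parquetStream = spark.readStream
  .schema("id LONG, amount DOUBLE")
  .parquet("/data/incoming/parquet")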
-
schema
Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Parameters:
schema - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
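For example (a sketch; assumes an active SparkSession named spark; the field names and path are illustrative):

import org.apache.spark.sql.types.{LongType, StringType, StructType}

val eventSchema = new StructType()
  .add("id", LongType)
  .add("event", StringType)

// With the schema given explicitly, the JSON source skips inference.
val events = spark.readStream
  .schema(eventSchema)
  .json("/data/events")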
-
schema
Specifies the schema by using the input DDL-formatted string. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
- Parameters:
schemaString - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.3.0
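For example, the same schema as the StructType overload, expressed as a DDL string (a sketch; assumes an active SparkSession named spark):

val events = spark.readStream
  .schema("id LONG, event STRING, ts TIMESTAMP")
  .json("/data/events") // illustrative path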
-
table
Define a Streaming DataFrame on a Table. The DataSource corresponding to the table should support streaming mode.
- Parameters:
tableName - The name of the table
- Returns:
- (undocumented)
- Since:
- 3.1.0
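For example (a sketch; the table name is illustrative, and the table's data source must support streaming reads):

val tableStream = spark.readStream
  .table("events")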
-
text
Loads text files and returns a DataFrame whose schema starts with a string column named "value", followed by partitioned columns if there are any. The text files must be encoded as UTF-8.
By default, each line in the text files is a new row in the resulting DataFrame. For example:

// Scala:
spark.readStream.text("/path/to/directory/")

// Java:
spark.readStream().text("/path/to/directory/")

You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the text-specific options for reading text files in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 2.0.0
-
textFile
Loads text file(s) and returns a Dataset of String. The underlying schema of the Dataset contains a single string column named "value". The text files must be encoded as UTF-8.
If the directory structure of the text files contains partitioning information, those are ignored in the resulting Dataset. To include partitioning information as columns, use text.
By default, each line in the text file is a new element in the resulting Dataset. For example:

// Scala:
spark.readStream.textFile("/path/to/spark/README.md")

// Java:
spark.readStream().textFile("/path/to/spark/README.md")

You can set the text-specific options as specified in DataStreamReader.text.
- Parameters:
path - input path
- Returns:
- (undocumented)
- Since:
- 2.1.0
-
xml
Loads an XML file stream and returns the result as a DataFrame.
This function will go through the input once to determine the input schema if inferSchema is enabled. To avoid going through the entire data once, disable the inferSchema option or specify the schema explicitly using schema.
You can set the following option(s):
- maxFilesPerTrigger (default: no max limit): sets the maximum number of new files to be considered in every trigger.
- maxBytesPerTrigger (default: no max limit): sets the maximum total size of new files to be considered in every trigger.
You can find the XML-specific options for reading XML file streams in Data Source Option in the version you use.
- Parameters:
path - (undocumented)
- Returns:
- (undocumented)
- Since:
- 4.0.0
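For example (a sketch; assumes an active SparkSession named spark; the rowTag value, schema, and path are illustrative):

val xmlStream = spark.readStream
  .schema("id LONG, title STRING")
  .option("rowTag", "book")  // the XML element treated as one row
  .xml("/data/incoming/xml")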
-