class SQLContext extends Logging with Serializable
The entry point for working with structured data (rows and columns) in Spark 1.x.
As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
- Self Type
 - SQLContext
 - Annotations
 - @Stable()
 - Source
 - SQLContext.scala
 - Since
 1.0.0
- Linear Supertypes
- SQLContext
 - Serializable
 - Serializable
 - Logging
 - AnyRef
 - Any
 
Instance Constructors
- new SQLContext(sparkContext: JavaSparkContext)
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.0.0) Use SparkSession.builder instead
- new SQLContext(sc: SparkContext)
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.0.0) Use SparkSession.builder instead
 
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
- final def ##(): Int
- Definition Classes
 - AnyRef → Any
 
- final def ==(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
- final def asInstanceOf[T0]: T0
- Definition Classes
 - Any
 
- def baseRelationToDataFrame(baseRelation: BaseRelation): DataFrame
Convert a BaseRelation created for external data sources into a DataFrame.
- Since
 1.3.0
- def cacheTable(tableName: String): Unit
Caches the specified table in-memory.
- Since
 1.3.0
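For illustration, a minimal caching round trip (a sketch; it assumes a temporary view named "people" has already been registered):
sqlContext.cacheTable("people")       // marks the table for in-memory caching; materialized on first use
assert(sqlContext.isCached("people")) // true while the table remains cached
sqlContext.uncacheTable("people")     // releases the cached data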
- def clearCache(): Unit
Removes all cached tables from the in-memory cache.
- Since
 1.3.0
- def clone(): AnyRef
- Attributes
 - protected[lang]
 - Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... ) @native()
 
- def createDataFrame(data: List[_], beanClass: Class[_]): DataFrame
Applies a schema to a List of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Since
 1.6.0
- def createDataFrame(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Since
 1.3.0
- def createDataFrame(rdd: RDD[_], beanClass: Class[_]): DataFrame
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
- Since
 1.3.0
- def createDataFrame(rows: List[Row], schema: StructType): DataFrame
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema.
- def createDataFrame(rowRDD: JavaRDD[Row], schema: StructType): DataFrame
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.
- def createDataFrame(rowRDD: RDD[Row], schema: StructType): DataFrame
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, there will be a runtime exception. Example:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val schema = StructType(
  StructField("name", StringType, false) ::
  StructField("age", IntegerType, true) :: Nil)

val people = sc.textFile("examples/src/main/resources/people.txt").map(
  _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
val dataFrame = sqlContext.createDataFrame(people, schema)
dataFrame.printSchema
// root
// |-- name: string (nullable = false)
// |-- age: integer (nullable = true)

dataFrame.createOrReplaceTempView("people")
sqlContext.sql("select name from people").collect.foreach(println)
- Annotations
 - @DeveloperApi()
 - Since
 1.3.0
- def createDataFrame[A <: Product](data: Seq[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame
Creates a DataFrame from a local Seq of Product.
- Since
 1.3.0
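A minimal sketch of this overload (the Record case class is an assumption for the example):
case class Record(key: Int, value: String)
val df = sqlContext.createDataFrame(Seq(Record(1, "a"), Record(2, "b")))
df.show()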
- def createDataFrame[A <: Product](rdd: RDD[A])(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): DataFrame
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
- Since
 1.3.0
- def createDataset[T](data: List[T])(implicit arg0: Encoder[T]): Dataset[T]
Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Java Example
List<String> data = Arrays.asList("hello", "world");
Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
- Since
 2.0.0
- def createDataset[T](data: RDD[T])(implicit arg0: Encoder[T]): Dataset[T]
Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
- Since
 2.0.0
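A minimal sketch, assuming the implicit encoders from sqlContext.implicits are in scope:
import sqlContext.implicits._
val rdd = sc.parallelize(Seq(1L, 2L, 3L))
val ds = sqlContext.createDataset(rdd) // Dataset[Long]; Encoder[Long] resolved implicitly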
- def createDataset[T](data: Seq[T])(implicit arg0: Encoder[T]): Dataset[T]
Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
Example
import spark.implicits._
case class Person(name: String, age: Long)
val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
val ds = spark.createDataset(data)

ds.show()
// +-------+---+
// |   name|age|
// +-------+---+
// |Michael| 29|
// |   Andy| 30|
// | Justin| 19|
// +-------+---+
- Since
 2.0.0
- def dropTempTable(tableName: String): Unit
Drops the temporary table with the given table name in the catalog. If the table has been cached/persisted before, it's also unpersisted.
- tableName
 the name of the table to be unregistered.
- Since
 1.3.0
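A minimal sketch of registering and then dropping a temporary table (the view name "nums" is arbitrary):
val df = sqlContext.range(10)
df.createOrReplaceTempView("nums") // register the DataFrame as a temporary view
sqlContext.dropTempTable("nums")   // unregister it (and unpersist it, if cached)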
- def emptyDataFrame: DataFrame
Returns a DataFrame with no rows or columns.
- Since
 1.3.0
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
- def equals(arg0: Any): Boolean
- Definition Classes
 - AnyRef → Any
 
- def experimental: ExperimentalMethods
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
- Annotations
 - @Experimental() @transient() @Unstable()
 - Since
 1.3.0
- def finalize(): Unit
- Attributes
 - protected[lang]
 - Definition Classes
 - AnyRef
 - Annotations
 - @throws( classOf[java.lang.Throwable] )
 
- def getAllConfs: Map[String, String]
Return all the configuration properties that have been set (i.e. not the default). This creates a new copy of the config properties in the form of a Map.
- Since
 1.0.0
- final def getClass(): Class[_]
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @native()
 
- def getConf(key: String, defaultValue: String): String
Return the value of Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.
- Since
 1.0.0
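For example (a sketch; spark.sql.shuffle.partitions is a standard Spark SQL key whose default is "200"):
val partitions = sqlContext.getConf("spark.sql.shuffle.partitions", "200")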
- def getConf(key: String): String
Return the value of Spark SQL configuration property for the given key.
- Since
 1.0.0
- def hashCode(): Int
- Definition Classes
 - AnyRef → Any
 - Annotations
 - @native()
 
- def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def isCached(tableName: String): Boolean
Returns true if the table is currently cached in-memory.
- Since
 1.3.0
- final def isInstanceOf[T0]: Boolean
- Definition Classes
 - Any
 
- def isTraceEnabled(): Boolean
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def listenerManager: ExecutionListenerManager
An interface to register custom org.apache.spark.sql.util.QueryExecutionListeners that listen for execution metrics.
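A sketch of registering a listener; the callback signatures below follow the Spark 2.x QueryExecutionListener trait:
import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener
sqlContext.listenerManager.register(new QueryExecutionListener {
  def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
    println(s"$funcName succeeded in $durationNs ns")
  def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
    println(s"$funcName failed: $exception")
})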
- def log: Logger
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logDebug(msg: ⇒ String): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logError(msg: ⇒ String): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logInfo(msg: ⇒ String): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logName: String
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logTrace(msg: ⇒ String): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- def logWarning(msg: ⇒ String): Unit
- Attributes
 - protected
 - Definition Classes
 - Logging
 
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
 - AnyRef
 
- def newSession(): SQLContext
Returns a SQLContext as a new session, with separated SQL configurations, temporary tables, registered functions, but sharing the same SparkContext, cached data and other things.
- Since
 1.6.0
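A sketch of the isolation this provides (the view name "t" is arbitrary):
val other = sqlContext.newSession()
sqlContext.range(5).createOrReplaceTempView("t")
other.tableNames().contains("t")      // false: temp tables are per-session
sqlContext.tableNames().contains("t") // true in the originating session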
- final def notify(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @native()
 
- final def notifyAll(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @native()
 
- def range(start: Long, end: Long, step: Long, numPartitions: Int): DataFrame
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the number of partitions specified.
- Since
 1.4.0
- def range(start: Long, end: Long, step: Long): DataFrame
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
- Since
 2.0.0
- def range(start: Long, end: Long): DataFrame
Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
- Since
 1.4.0
- def range(end: Long): DataFrame
Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
- Since
 1.4.1
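For example, a sketch using the four-argument overload:
val df = sqlContext.range(0, 10, 2, 2) // ids 0, 2, 4, 6, 8 spread across 2 partitions
df.show()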
- def read: DataFrameReader
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
sqlContext.read.parquet("/path/to/file.parquet")
sqlContext.read.schema(schema).json("/path/to/file.json")
- Since
 1.4.0
- def readStream: DataStreamReader
Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
- Since
 2.0.0
- def setConf(key: String, value: String): Unit
Set the given Spark SQL configuration property.
- Since
 1.0.0
- def setConf(props: Properties): Unit
Set Spark SQL configuration properties.
- Since
 1.0.0
- def sparkContext: SparkContext
- val sparkSession: SparkSession
- def sql(sqlText: String): DataFrame
Executes a SQL query using Spark, returning the result as a DataFrame. This API eagerly runs DDL/DML commands, but not for SELECT queries.
- Since
 1.3.0
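A minimal sketch (it assumes a temporary view named "people" has been registered):
val adults = sqlContext.sql("SELECT name FROM people WHERE age >= 21")
adults.collect().foreach(println)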
- def streams: StreamingQueryManager
Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.
- Since
 2.0.0
- final def synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
 - AnyRef
 
- def table(tableName: String): DataFrame
Returns the specified table as a DataFrame.
- Since
 1.3.0
- def tableNames(databaseName: String): Array[String]
Returns the names of tables in the given database as an array.
- Since
 1.3.0
- def tableNames(): Array[String]
Returns the names of tables in the current database as an array.
- Since
 1.3.0
- def tables(databaseName: String): DataFrame
Returns a DataFrame containing names of existing tables in the given database. The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
- Since
 1.3.0
- def tables(): DataFrame
Returns a DataFrame containing names of existing tables in the current database. The returned DataFrame has three columns, database, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
- Since
 1.3.0
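A sketch combining the catalog inspection methods above (the view name "t" is arbitrary):
sqlContext.range(3).createOrReplaceTempView("t")
sqlContext.tables().show() // columns: database, tableName, isTemporary
val names: Array[String] = sqlContext.tableNames() // names in the current database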
- def toString(): String
- Definition Classes
 - AnyRef → Any
 
- def udf: UDFRegistration
A collection of methods for registering user-defined functions (UDF).
The following example registers a Scala closure as UDF:
sqlContext.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
The following example registers a UDF in Java:
sqlContext.udf().register("myUDF", (Integer arg1, String arg2) -> arg2 + arg1, DataTypes.StringType);
- Since
 1.3.0
- Note
 The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.
- def uncacheTable(tableName: String): Unit
Removes the specified table from the in-memory cache.
- Since
 1.3.0
- final def wait(): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... )
 
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... )
 
- final def wait(arg0: Long): Unit
- Definition Classes
 - AnyRef
 - Annotations
 - @throws( ... ) @native()
 
- object implicits extends SQLImplicits with Serializable
(Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
- Since
 1.3.0
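A sketch of the conversions this import enables (column names are arbitrary):
import sqlContext.implicits._
val df = Seq((1, "a"), (2, "b")).toDF("id", "label") // Seq of tuples to DataFrame
val ds = Seq(1, 2, 3).toDS()                         // Seq to Dataset[Int]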
 
Deprecated Value Members
- def applySchema(rdd: JavaRDD[_], beanClass: Class[_]): DataFrame
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.3.0) Use createDataFrame instead.
- def applySchema(rdd: RDD[_], beanClass: Class[_]): DataFrame
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.3.0) Use createDataFrame instead.
- def applySchema(rowRDD: JavaRDD[Row], schema: StructType): DataFrame
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.3.0) Use createDataFrame instead.
- def applySchema(rowRDD: RDD[Row], schema: StructType): DataFrame
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.3.0) Use createDataFrame instead.
- def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame
(Scala-specific) Create an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def createExternalTable(tableName: String, source: String, schema: StructType, options: Map[String, String]): DataFrame
Create an external table from the given path based on a data source, a schema and a set of options. Then, returns the corresponding DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame
(Scala-specific) Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def createExternalTable(tableName: String, source: String, options: Map[String, String]): DataFrame
Creates an external table from the given path based on a data source and a set of options. Then, returns the corresponding DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def createExternalTable(tableName: String, path: String, source: String): DataFrame
Creates an external table from the given path based on a data source and returns the corresponding DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def createExternalTable(tableName: String, path: String): DataFrame
Creates an external table from the given path and returns the corresponding DataFrame. It will use the default data source configured by spark.sql.sources.default.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 2.2.0) use sparkSession.catalog.createTable instead.
- Since
 1.3.0
- def jdbc(url: String, table: String, theParts: Array[String]): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table. The theParts parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.jdbc() instead.
- def jdbc(url: String, table: String, columnName: String, lowerBound: Long, upperBound: Long, numPartitions: Int): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
- columnName
 the name of a column of integral type that will be used for partitioning.
- lowerBound
 the minimum value of columnName used to decide partition stride
- upperBound
 the maximum value of columnName used to decide partition stride
- numPartitions
 the number of partitions. the range minValue-maxValue will be split evenly into this many partitions
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.jdbc() instead.
- def jdbc(url: String, table: String): DataFrame
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.jdbc() instead.
- def jsonFile(path: String, samplingRatio: Double): DataFrame
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonFile(path: String, schema: StructType): DataFrame
Loads a JSON file (one object per line) and applies the given schema, returning the result as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonFile(path: String): DataFrame
Loads a JSON file (one object per line), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: JavaRDD[String], samplingRatio: Double): DataFrame
Loads a JavaRDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: RDD[String], samplingRatio: Double): DataFrame
Loads an RDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: JavaRDD[String], schema: StructType): DataFrame
Loads a JavaRDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: RDD[String], schema: StructType): DataFrame
Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: JavaRDD[String]): DataFrame
Loads a JavaRDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def jsonRDD(json: RDD[String]): DataFrame
Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.json() instead.
- def load(source: String, schema: StructType, options: Map[String, String]): DataFrame
(Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load() instead.
- def load(source: String, schema: StructType, options: Map[String, String]): DataFrame
(Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.format(source).schema(schema).options(options).load() instead.
- def load(source: String, options: Map[String, String]): DataFrame
(Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.format(source).options(options).load() instead.
- def load(source: String, options: Map[String, String]): DataFrame
(Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.format(source).options(options).load() instead.
- def load(path: String, source: String): DataFrame
Returns the dataset stored at path as a DataFrame, using the given data source.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.format(source).load(path) instead.
- def load(path: String): DataFrame
Returns the dataset stored at path as a DataFrame, using the default data source configured by spark.sql.sources.default.
- Annotations
 - @deprecated
 - Deprecated
 (Since version 1.4.0) Use read.load(path) instead.
- def parquetFile(paths: String*): DataFrame
Loads a Parquet file, returning the result as a DataFrame. This function returns an empty DataFrame if no paths are passed in.
- Annotations
 - @deprecated @varargs()
 - Deprecated
 (Since version 1.4.0) Use read.parquet() instead.